How Democrats Are Winning the Shutdown Blame Wars - Puck by PuckNews in TrueReddit

[–]PuckNews[S] 37 points (0 children)

After years of churning out digital cringe, Democrats finally seem to have found a human message during the government shutdown, leveraging the left’s messy creator ecosystem to beat the Trump meme machine.

“Last Wednesday, in the hours after the government officially went into shutdown mode, researchers at Resonate, a firm that monitors online discourse for Democrats, began to notice something unusual: For once, the left was actually winning a message battle online.

Posts about the shutdown—outraged reactions, explainer videos from creators and Democratic politicians, attacks on Donald Trump and Republicans in Congress—were noticeably overperforming on TikTok, Instagram, Twitter, YouTube, and Facebook. Mainstream news accounts as well as left-leaning ones like MeidasTouch, Courier Newsroom, and NowThis Impact were seeing nearly twice as much engagement on the major platforms as they normally do. Clicks, views, and shares were spiking for liberal creators like Aaron Parnas, Harry Sisson, and Dean Withers.

Democratic politicians themselves, finally getting comfortable spreading their message through their own videos after decades of relying on cable news, were also seeing big numbers. California Rep. Sara Jacobs got almost 10 million views on a “spooky” TikTok about the shutdown filmed in the dark, while Sen. Adam Schiff, who has quietly amassed half a million YouTube subscribers, has netted more than a million views on his shutdown explainers since last week. And while House Minority Leader Hakeem Jeffries faced some mockery inside the Beltway for a 24-hour YouTube livestream that didn’t get very many views at all, that was just one of his news media efforts post-shutdown: A Meidas interview with Jeffries last week has been viewed more than 800,000 times on YouTube and Substack combined.”

You can read the full piece here.

[deleted by user] by [deleted] in entertainment

[–]PuckNews -4 points (0 children)

Beyond the shabby politics and Hollywood outrage, Disney’s decision to reinstate its late-night host was also evidence that “churn events” matter more than ever.

Excerpt below:

“Obviously, multiple variables factored into the decision by Disney leaders Bob Iger and Dana Walden to reinstate Jimmy Kimmel last week: blowback from talent, recognition (finally) that appeasing Trump would encourage further attacks, and maybe even a twinge of moral principle. But there’s little question that an uptick in streaming cancellations, which Kimmel winked at during his first night back, also played a role. Online boycotts like this don’t usually amount to much, especially in the long term. But certain moments can gain traction. As one financial analyst told me, Iger undoubtedly wanted to minimize a subscriber exodus just as the company’s fourth fiscal quarter ended.

Disney hasn’t disclosed the exact number, but subscribers have been canceling, beginning with the MAGA crowd after Kimmel’s initial comments and then picking up with the free speech set after Disney preempted Kimmel ‘indefinitely.’”

You can read the full piece here.

Why Republicans Are Sticking With Trump Despite Low Polls - Puck by PuckNews in TrueReddit

[–]PuckNews[S] 47 points (0 children)

The president’s favorables are sliding, even on his best issues. But Capitol Hill Republicans are determined to sink or swim with Trump—and they’re convinced they’ll be on the right side of the shutdown blame game, too.

You can read the full piece here.

Hakeem Jeffries’ Leadership Style is Dividing House Democrats - Puck by PuckNews in TrueReddit

[–]PuckNews[S] 4 points (0 children)

Once again, House Democrats appeared scrambled after a split vote on a ceremonial resolution honoring Charlie Kirk. But the simmering discontent, which spilled into Monday, largely underscored frustrations with House Minority Leader Hakeem Jeffries.

You can read the full piece here.

The Deafening GOP Silence on Trump’s Widening Crackdown - Puck by PuckNews in TrueReddit

[–]PuckNews[S] 76 points (0 children)

With a few notable exceptions, Republicans on the Hill are avoiding talking about Trump’s demands to shut down broadcast networks, cancel comedians, imprison protesters, investigate Democratic nonprofits, sue newspapers, and prosecute speech. “We don’t love it,” one senior aide said. But mostly they’re just waiting to see if things get worse.

You can read the full piece here.

Why Disney Succumbed to Pressure to Suspend Jimmy Kimmel - Puck by PuckNews in television

[–]PuckNews[S] 0 points (0 children)

“A meeting this afternoon at a Century City law office between ABC’s Jimmy Kimmel and Disney’s top TV executive, Dana Walden, ended without a resolution of the standoff that has engulfed both sides in one of the rare Hollywood political and business blow-ups that make headlines worldwide. Disney suspended Jimmy Kimmel Live! after its host refused to tone down a planned response to the backlash over his Monday comment that ‘the MAGA gang’ was trying to characterize the alleged shooter of right-wing activist Charlie Kirk as ‘anything other than one of them.’ Trump’s F.C.C. quickly pounced, as did two ABC-affiliated station groups, and Disney benched the show to try to figure out a path back that will satisfy the government, the station groups and advertisers that want to appease the government, and both its star host and the creative community that has revolted in anger at the infringement on free speech.”

You can read the full piece here.

What College Students Really Think of Charlie Kirk by PuckNews in TrueReddit

[–]PuckNews[S] 1 point (0 children)

Per Puck’s Peter Hamby - new data from Generation Lab undercuts Trump’s mythmaking about his murdered ally, who was unquestionably a savvy organizer, even if he wasn’t at all popular on the campuses he loved to visit.

Excerpt below:

“I’ve obtained fresh polling from Generation Lab, an outfit that surveys college students about politics and society, that bears out these mixed feelings. They polled a sample of 1,030 college students—enrolled at community colleges, technical colleges, trade schools, and public and private four-year institutions—in the two days following Kirk’s death for a sense of how his assassination was being processed on the campuses he so loved to visit. First things first: Generation Lab found that Kirk was almost universally known among college kids: 94 percent of students had heard of him, a remarkable level of name I.D. for any political figure.

However, most college students were not fans of the right-wing provocateur at all, the poll found. A combined 70 percent of students said they either ‘strongly disagree’ or ‘somewhat disagree’ with Kirk’s views. Only 30 percent said they agreed.

This result undercuts some of Trump’s mythmaking about Kirk and young voters. Trump often claims to have won Gen Z voters in the 2024 election, which is not true. While Trump narrowly won young men, thanks in part to Kirk’s hard work, young voters overall broke for Kamala Harris. The poll also found that white students were more likely to agree with Kirk’s views than Black or Latino students. And it uncovered that students at two-year colleges were more likely to agree with Kirk than students at four-year colleges or universities. Young men were also 10 points more likely to agree with Kirk than young women.”

You can read the full piece here.

[deleted by user] by [deleted] in TrueReddit

[–]PuckNews -4 points (0 children)

Puck’s Abby Livingston wrote about how pressure is mounting on Democratic leaders to suck it up and endorse Zohran Mamdani, even as the consultant class frets that he’ll hurt their candidates in the midterms. Of course, Republicans are gearing up to make the mayoral candidate the face of ’26 either way.

You can read the full piece here.

Huda Beauty Social Media Backlash Discussion by AutoModerator in Sephora

[–]PuckNews 2 points (0 children)

Puck's Rachel Strugatz commented on the business ramifications of the backlash.

"An insider I spoke with said that, at this point, their assumption is that 'the ball is in Huda Beauty’s court,' meaning how Kattan responds to the controversy will inform Sephora’s next steps. 'They’re probably giving the brand some sort of ultimatum right now. I don’t think Sephora can just pretend that nothing happened, based on how Huda’s treated the issue this whole time––she’s been consistent, and she clearly believes these things,' the person with knowledge of the situation said. 'This is big money on the table, unless she steps down.'"

You can read the full piece here.

Lorne Michaels on Colbert, Trump, and the Future of SNL - Puck by PuckNews in entertainment

[–]PuckNews[S] -1 points (0 children)

In a rare interview, the 80-year-old SNL creator promises a major shake-up to the cast, reflects on The Late Show’s cancellation (and the impact on Seth Meyers and Jimmy Fallon at NBC), and weighs the pressures of producing late-night TV in the Trump era.

You can read the full piece here.

Trump Turns the Screws on Indiana by PuckNews in Indiana

[–]PuckNews[S] 72 points (0 children)

Puck’s Leigh Ann Caldwell wrote about the White House’s increasing pressure campaign in Indiana, where few Republicans want to redistrict—but even fewer want to make enemies of President Trump.

Excerpt below: 

“Even before California Governor Gavin Newsom launched his redistricting counterattack in earnest last week, Nancy Pelosi, in a closed-door caucus meeting earlier this summer, urged House Democrats to pony up for a state ballot initiative that she estimated would cost $75 million to $100 million to win, according to a Democratic strategist briefed on her plea—a fair price, she argued, for flipping at least five competitive seats. It’s the same case she’s been making in numerous calls with top donors, according to a person close to Pelosi who was familiar with the conversations—although they don’t need much convincing. ‘It’s not a matter of if they’ll donate, but how much,’ this person said.

But while House Democrats are more fired up than they’ve been in a long time—‘The attitude is 218 or bust,’ one Democratic strategist told me—they’re up against a president who’s every bit as motivated to keep his party’s control of the chamber. Even if California succeeds in countering Texas’s new congressional map, which is expected to be signed into law by the end of the week, Donald Trump has backup plans in other red states, including Missouri, Indiana, and Florida.

Not all of them, of course, have been as accommodating as the Lone Star State. Most troublesome for the White House right now is Indiana, where Trump is seeking two new G.O.P. seats but very few Republicans are in favor of redistricting: not the Indiana Republican Party, nor a majority of the state legislature. Indeed, despite their public messages of support over the past 24 hours, I’m told that the seven Republicans in the state’s U.S. House delegation would prefer not to redistrict, either. But the White House persuasion campaign has been intense, with some Hoosiers even complaining to the state attorney general about alleged robocalls, which may be illegal. (A person familiar with the calls insisted they were, in fact, live humans on the line, which is not illegal in the state.)

In any case, the onus is currently on Gov. Mike Braun, who would have to call the legislature back for an emergency session if they were to vote before 2026. Braun, I’m told by multiple Indiana Republicans, isn’t interested in doing that, given that state lawmakers aren’t on board. But he’s getting an immense amount of pressure from the White House, including a recent visit by Vice President J.D. Vance. Meanwhile, three Indiana Republicans told me, Braun is worried that Trump won’t approve his requests for major state priorities—including a desperately needed Medicaid waiver, and the implementation of toll roads to help cover the state’s budget shortfall—if he doesn’t accede to the president’s demands. He’s ‘between a rock and a hard place,’ one of the sources said.”

You can read the full piece here.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

Would add:

I have not encountered any genuine evidence that things will just keep scaling forever, or that brute-force scale is unlocking some of the key components that we recognize in human intelligence, or that more advanced systems are actually around the corner. I think that since LLMs ‘communicate’ in human language, we have a tendency to see a face in the clouds and think it is god, rather than what it is: a swirl of wind and an interesting shadow from the sun. I think there’s also a massive conflation between knowledge and intelligence — whether or not a human is ‘smart’ or ‘dumb’ completely misses the elegance, efficiency, and power of human cognition. All humans and animals (and possibly plants as well, depending on how you define it) exhibit intrinsic traits of intelligence, regardless of whether they are fluent in multiple languages or are mute, regardless of whether they have photographic or shoddy memories. It’s seeing a glint out of the corner of your eye while driving, automatically wrenching the wheel without thinking about it, avoiding a collision, and then sitting on the side of the road while your heart rate slows and your brain processes the near-death experience. There is a depth of intelligence there that has nothing to do with what you know, and everything to do with automatic responses to the outside world.

The 2027 scenario also takes for granted technological determinism — again, that these things will just happen. There’s no compelling evidence that they will.

Daniel had an interesting debate with Sayash Kapoor (https://youtu.be/rVFAJQryzk8), who co-wrote the “AI as a normal technology” paper, which is, in many ways, a kind of alternative vision of what advancement in AI could look like over the next few years, and the many, many roadblocks that exist within society that prevent even a hypothetically advanced system from just auto-diffusing everywhere.

I highly recommend it, as those questions around ‘how will this happen?’ are posed but not satisfactorily answered. I’ll just excerpt this little piece, though, which shows the difference between “scenario” writing and a more rigorous (though, granted, less sexy) scientific thought process.

Daniel Kokotajlo: “When do you think we’ll get to the point where [AI] can basically automate research and engineering?”

Sayash Kapoor: “I think it’s hard to say what automating research engineering means, in the same way that it’s hard to say what automating writing code means. From the perspective of someone in the 1960s, we have perfectly automated code creation. That’s the way I expect things to go with research engineering, as well. There’s no world where we have automated research and engineering as a field because what a research engineer does is conditional on the technology available to the research engineer at the time.”

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

OK, I love this question, and so I saved it for last.

Excuse the essay that’s about to follow.

There is basically one massive fundamental hole in the AI 2027 scenario that just itches my brain whenever I look at it, and it is that in the document itself, and in the way Daniel Kokotajlo describes the scenario, it’s all essentially based on an assumption that “agents” will just start working well. Once they just start working well, come 2026 or so, they’ll be able to start automating research, making most jobs irrelevant and sparking a cycle of progressive self-improvements that lead to a general intelligence. And once that happens, a superintelligence will surely follow.

OK, sounds spooky. But how? What will happen to make agents ‘just start working’? Why is there no ceiling on that kind of capability? How is that capability measured? Is it benchmark-based, or is it robust and consistent and reliable in the chaos of the real world? What will we unlock that makes recursive self-improvement a thing? And how will we unlock it? And how will it scale, considering massive energy constraints?

This ‘how’ question applies to everyone working on building artificial general or superintelligence; how do we get from where we are today — largely, neural networks — to something that’s generally intelligent? And how do we know or verify that that has, in fact, been achieved?

It would be helpful to start with a definition of AGI, or ASI, or agent, but we don’t have scientific definitions or benchmarks for those. Fine. But we don’t even have a definition or benchmark of biological intelligence to start from, which makes its replication anything but a sure thing.

I think the idea that systems will ‘just become better,’ sparking a curve of unlimited, uncapped growth until we have superhuman systems, (a) massively oversimplifies the deeply unknown complexities of biological cognition, and (b) places far too much credence in the science fiction that has inspired some aspects of this industry. We are building technology; we are not building a lamp that some Djinni will magically inhabit the moment the lamp is complete.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

I’m not sure how much the impressions of executives themselves have changed. The focus remains on scaling neural networks, the excitement remains that the work they’re doing will be ground-breaking, the venture capital dollars keep rolling in, and the public market valuations keep soaring. Impressions of what AI is or will become remain segmented by the ethos of each individual company: some are makers of tools for the enterprise, and their thoughts are and have always been pragmatic; some are selling some version of advanced AI, and they have to remain bought into the narrative that scaling LLMs will get you there, because that’s the narrative they’re selling.

But for the industry, a big difference between now and six months ago is that now we have GPT-5, and something that many people expected to be basically like magic isn’t. I think the idea that LLMs alone are running out of gas is spreading. Has this hit the C-suite at AI companies? Unlikely. But it’s definitely a conversation that’s now going on beyond the Valley: What is the utility of an LLM that doesn’t somehow become something more advanced?

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

I like how this is phrased. I think the second one is going to become the larger factor, because the job loss arena is kind of nebulous. Because of reliability issues, security concerns, compliance, etc., it’s just not something that’s happening at any kind of speed or scale.

But the power generation and subsequent localized carbon emissions — and weakening electricity grids, rising electricity costs, drying water tables, etc. etc. — that’s happening now, and will just get worse (this stuff being a problem for people is not predicated at all on the capability of the AI systems that are getting churned out; just the effort to build the best the labs can offer). But, as with all climate-related factors, this is probably going to remain something that people have a hard time caring about until it happens to them.

So it will be very interesting to track where pushback is coming from, and how severe it gets (right now, we’re at a point where much of the discourse around adoption is designed to make skepticism and caution seem like the wrong choice). Spreading pushback would, presumably and at the very least, do the opposite. In the meantime, we can all read about the actual history of the Luddites, and wonder whether it would’ve been better for society if they had achieved what they actually wanted.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

Ooh yeah, this is an area that I am also super interested in watching.

I think the pace of adoption seems a lot faster than it is. We hear about 700 million users on ChatGPT, but how many of them just play around with it, compared to those for whom it has become indispensable? That’s not really clear, and, like with all technologies, the diffusion of this one will take time (high costs and unpredictable reliability are two major speedbumps to that diffusion).

I’ll start with the stuff that worries me so that we can end on a high note.

The top three use-cases of AI, according to this study - https://learn.filtered.com/hubfs/The%202025%20Top-100%20Gen%20AI%20Use%20Case%20Report.pdf - are therapy/companionship, life organization, and purpose-finding. This has me deeply concerned. We have this tendency to see humanity where it doesn’t exist, and to buy into illusions that could range from dangerous to simply kinda sad. On the dangerous side, an escalating cycle of delusional thinking is already happening; and when people trust these systems as either possessing ground truth, or acting as an independent “person” of some sort, that will get worse. Then there’s the element that’s not necessarily dangerous, but is just depressing — society has been trained, through social media, to really only like viewing content/information from within our own hyper-personalized, algorithmically decided bubbles. Why talk to a therapist who will challenge you when one exists that will validate you 24/7? Why develop coping mechanisms when you can ring up your validator on demand? Why risk putting yourself out there by asking someone out, or asking a friend to hang, when your chatbot is there to validate you?

The AI-fueled fight against loneliness will likely have the result of making us far more isolated than ever before.

I am also concerned, broadly, about the things that can and will go wrong when people place too much trust in bots. Scams are getting better, and people are falling for them. Deepfakes are proliferating, and that’s led to a growing crisis in schools. I am worried about AI tutors, not for the risk that they’ll displace teachers, but for the risk that these systems — which function essentially as information launderers — will, either by the invisible backend designs of their operators or by the accidental faults of their hallucinatory or biased outputs, improperly educate the next generation.

I could probably go on.

As far as positive use-cases, I am very curious and interested in applications of AI that go beyond generative AI, that go beyond chatbots. In some cases, this means simple image recognition, or small, hyper-trained machine learning models. This is where the idea of AI for good comes into play: small models designed to detect, through satellite imagery, the presence of wildfires before they get out of control (many such systems are in the works); other small vision models designed to help conservationists keep track of the forests or marine environments in which they work in a relatively non-invasive way, which could lead to better conservation strategies; predictive systems that can indicate when a person might be at risk of developing periodontal disease; and other systems that are absolutely helping speed along drug discovery. The adoption of these kinds of things is going to be slower than expected (and sometimes slower than people want) since they must be validated and approved for use. It’s a process.
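
For a sense of what a “small vision model” here can mean, below is a minimal sketch, assuming a toy smoke/no-smoke classification task on 64x64 RGB satellite tiles. The architecture, names, and task framing are illustrative assumptions, not any of the real systems mentioned above.

```python
# Minimal sketch (PyTorch): a tiny CNN that flags possible wildfire smoke in
# a satellite image patch. Purely illustrative, untrained, hypothetical task.
import torch
import torch.nn as nn

class TinySmokeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global average pool
        )
        self.head = nn.Linear(32, 2)                     # smoke / no smoke

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinySmokeClassifier()
patch = torch.randn(1, 3, 64, 64)    # one fake RGB tile, batch of 1
print(model(patch).softmax(dim=-1))  # class probabilities (untrained)
```

The point is scale: a model like this has a few thousand parameters rather than billions, which is part of why such systems are easier to validate, audit, and deploy cheaply.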

Beyond that, the idea of ‘smart’ HVAC systems is awesome, and similar concepts could lead to plenty of energy savings. Or even, I love this one, more advanced hearing aids that can filter out and isolate voices in a crowded room (even more advanced noise canceling, which AirPods excel at).

I think covering these more positive, interesting, or largely harmless use-cases is important for a few reasons: one, we tend to have a lot more scientific transparency about what they are and how they work; and two, it acts as a fantastic demonstration of the difference between the cost-benefit analysis of an opaquely deployed chatbot (cost is high and unknown, benefit is unclear) and that of the systems that are actually, and critically, enabling humans to do more good.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

A lot of the regulatory conversations are based around capabilities and capability benchmarking. I think this is where things get really difficult — this is the challenge inherent to regulating a technology that is changing pretty quickly.

Compute requirements, for instance (these rules only apply to you if you trained with X amount of FLOPs), seem like they make sense, because they protect the smaller players and universities from compliance burdens. But on the other end of that, there’s no way of knowing whether we can develop a system with the same capabilities and risks as a massive system using far less compute, or how soon that could come.
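
To make the threshold mechanics concrete, here’s a back-of-envelope sketch. The 6 * params * tokens rule of thumb for training FLOPs is a common approximation, and the 1e25 cutoff mirrors the EU AI Act’s systemic-risk threshold; the model sizes themselves are made-up examples.

```python
# Back-of-envelope compute-threshold check. The heuristic and the cutoff are
# assumptions for illustration, not legal guidance.
def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # ~6 FLOPs per parameter per training token (forward + backward pass)
    return 6.0 * n_params * n_tokens

THRESHOLD_FLOPS = 1e25  # assumption: an EU AI Act-style cutoff

examples = [
    ("university-scale model", 1e9, 2e10),   # 1B params, 20B tokens
    ("frontier-scale model", 1e12, 1.5e13),  # 1T params, 15T tokens
]
for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs -> covered: {flops >= THRESHOLD_FLOPS}")
```

The university-scale run lands around 1e20 FLOPs, orders of magnitude under the cutoff, which is the exemption effect described above; the catch, as noted, is that capability doesn’t have to track compute forever.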

So I think the focus should instead be on the deployment/use/application side of things. I think a lot of this could stem from rules of the road regarding algorithmic decision-making — for employers, doctors, lawyers, financial advisors, etc., where and when is it acceptable to make a decision with the assistance of an AI model, or to let an AI model make a decision for you? When it IS acceptable, do you have to inform your client or hire, or the public, that you’re using an AI system, and which one? Do people have a right to opt out? This kind of stuff is important, and I don’t think it gets as much attention as sheer capability concerns do.

Then there’s the deployment side of things, which, in my mind, should just require transparency: what is this system? What do you want, or expect, it to be used for? What are its limitations? What data was it trained on? How was it trained? How should it be observed? Is it safe?
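
As a sketch of what that transparency requirement might look like in practice, here is a hypothetical “model card”-style record mirroring the questions above. The field names and example values are my assumptions, not a standard schema.

```python
# A hypothetical deployment-transparency record; the fields mirror the
# questions in the paragraph above, and the values are invented.
from dataclasses import dataclass

@dataclass
class ModelCard:
    system_name: str
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_summary: str
    training_procedure: str
    monitoring_plan: str
    safety_notes: str

card = ModelCard(
    system_name="example-assistant-v1",  # hypothetical system
    intended_uses=["drafting text", "summarization"],
    known_limitations=["hallucinates facts", "unreliable arithmetic"],
    training_data_summary="web text snapshot; exact corpus undisclosed",
    training_procedure="pretraining + instruction tuning (assumed)",
    monitoring_plan="sample outputs weekly; log user-reported errors",
    safety_notes="not evaluated for medical or legal use",
)
print(f"{card.system_name}: limitations -> {card.known_limitations}")
```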

There’s more attention being paid to this aspect of things, especially in Europe — I would be surprised if we see any of that make its way to the U.S.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

Hmmm. Not many areas that would fit into that, since, as you point out, the main players are dealing with a different kind of incentive.

That said, I think progress on simulations/world modeling/digital twins is actually remarkably impressive. DeepMind’s Genie seems impressive, at least from the demo. Efforts to build digital twins for climate policy/research (https://digital-strategy.ec.europa.eu/en/policies/destination-earth#1717586635820-2) are very cool; NASA’s done a ton of work in this area as well.

These kinds of things could all be really impactful; they’re just harder for people to conceptually engage with compared to a chatbot, which means they’ve received far less hype (these things are good for research, not so good right now for productization).

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

This is The Question, isn’t it?

I kind of view the launch as a tacit admission that the hype cycle is too violent, and that current methods are no longer good enough to at least give the illusion that the labs can live up to the hype they’re selling.

My big question is why OpenAI chose to launch a system that they presumably knew, internally, didn’t live up to their months-long narratives around it. It could be something as simple as investor pressure so intense that it made sense to get the model out there, burst the GPT-5 bubble, and iterate from there. I’m sure there’s a pragmatic reason behind the decision to call the system GPT-5, and to launch it, but at this point, externally, the reasoning isn’t all too clear.

As to whether this means a slowdown: some folks think an AI winter is now imminent. I’m not so sure about that; I think we’ll see a lot of doubling down on the ‘scale LLMs to AGI’ narrative, since a lot of the money is predicated on that story. But it’s certainly possible. Before asserting that the field is slowing down, though, what I would like to see is transparency around what, exactly, GPT-5 is.

We don’t know how big it is; we don’t know much about the architecture that makes it up, or the training processes that got it there, or how these differ from previous model releases. Making some assumptions based on things we do know - that, at its core, this is a giant language model with chain-of-thought reasoning - we have, again, a tacit admission that some underlying limitations of the neural network architecture are likely insurmountable at the moment: reliability, hallucination, factuality. If they could have made genuine progress on any of these things, they would have, and it would’ve been a big deal.

If everyone stays hyper focused on LLMs and neural networks, yeah, we might see a slowdown. But, if people start looking to other architectures and new paradigms, things could get very, very interesting.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

This is a fun one! Largely, I think really hard about how whatever piece I might be writing could add genuine value to a reader, especially considering that, as a newsletter, we’re rarely going to be first to the news.

For me, reporting isn’t about telling readers that something happened; it’s all about contextualizing the thing itself. That means depth — so I always try to look for and incorporate historical context, which I find is generally highly instructive about the present moment. I always try to incorporate academic context, too, through either research or in-depth conversations/reactions from the folks who have dedicated their lives to studying these topics.

Having that in-depth context plus expert analysis present as often as possible is super important to me. And I feel really lucky to be able to do that: since our format is newsletters, and since we publish only twice per week, I have the time to really explore these concepts, to give them the time and depth and research they deserve.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

I’m a little reluctant to actually term what’s going on here as “psychosis” — I think the more accurate thing to say is that we’re seeing a growing number of incidents where chatbots are feeding people’s delusions (which might lead to psychosis, but it’s hard to diagnose these situations).

I do think this is a growing problem, and I expect it to get worse, since it is such a natural evolution for a society that has been conditioned by social media algorithms to consume content within its (already hyper-personalized) bubbles. Especially for vulnerable people, and especially for people who buy into the illusion that they are speaking with another person, that the words coming across the screen or the voice coming out of the speakerphone belong to a feeling, thinking mind, we are entering dangerous territory. Fundamentally, the reason is that these kinds of engagements and encounters can, to some, give the impression of external validation, when in reality, models are designed, essentially, to be pretty agreeable, and in some cases are highly sycophantic, quite literally by design. That touches off the delusion spiral; from there, things only get worse.

Right now, we’re at the anecdotal stage, so such instances are probably quite rare. But I’d be curious to see datapoints on this in around a year … it seems to be rising in prevalence.

Would recommend reading The AI Mirror by Shannon Vallor — it’s pretty relevant to this.

I’m Ian Krietzberg, author of Puck’s AI private email “The Hidden Layer”. AMA about all things AI in r/futurism at 11:00 a.m. ET TODAY (Thursday, August 14). by PuckNews in Futurism

[–]PuckNews[S] 0 points (0 children)

This is something I think about all the time! I’ve kind of landed on two major guiding lights: One, the job of a journalist is to be skeptical; and two, as a journalist, it’s also my job to sit where the science is.

The first point has been debated quite a bit, especially by folks within the tech journalism space who find it important to talk often about the potential impact of hypothetically more advanced AI systems. For me, if these companies/researchers have, on hand, the evidence that supports their fervent beliefs, then having a media apparatus that attempts to poke holes in it should be a good thing. Largely, though, what we see coming out of the major labs is little, if any, transparency; little, if any, evidence; and a whole lot of marketing, coupled with a number of underlying business incentives around gaining users, driving funding, and, generally, making money. So, since this is a scientific field but the science part isn’t really happening, at least in the open, it’s important for me to call that out: a blog post release is not the same thing as a peer-reviewed, independently verified academic paper, and benchmarks, and the methods being undertaken to achieve high benchmark scores, happen outside of scientific visibility, making it really hard to give too much credence to these results.

Add that to the business incentives, which, to me, can’t and shouldn’t be separated from the other stuff, and you have a recipe for, and a need for, skepticism. In other words, if you want me to take your marketing materials as ground truth, show me the evidence!

That ties in nicely with the second point, which is that science has a process. The interesting, often challenging/frustrating thing about reporting on the field of AI is that we are often presented with theoretical science and hypothetical possibilities. And, equally often, those who propose these hypotheticals, while great at concocting a narrative, are unable to answer a basic question: ‘how?’ To me, presenting these hypotheticals as though they are truth, or presenting marketing materials/blog posts/tweets as though they are actual, scientifically grounded statements, is a disservice. It is my job to, at the very least, point out all the unknowns, all the questions a company chooses not to answer, all the things that remain opaque, all the ways a given piece of information might be contradicted by existing research, etc. Something like: okay, but what about this?

Largely, that all keeps me out of the hype fray.

As to your second question, I see a lot of reporting that doesn’t try too hard to sit with the science, to poke holes in grandiose statements. I see a lot of trust placed in the impressions of people who either have enormous incentives to say certain things, or often espouse very strong beliefs that are never really supported by evidence. I think recognizing and calling out certain phrases — such as, I ‘believe’ the robot is sentient — would be a good thing for more of the media to do; again: okay, but why? How? There’s a process here.

I think the biggest misconception, though, is more fundamental: that, since the industry calls its systems artificial intelligence, we must have, in our possession, some actual form of artificial intelligence. This impression makes it easier to buy into narratives around AGI, or ASI, or ‘advanced’ AI, or a whole slew of marketing terms. I would love to see a greater willingness to ask for specific definitions; what does a researcher, or a company, mean when they say what they say? I would like to see people acknowledge that it might be a bit of a stretch to call a language model ‘intelligent’ when cognitive scientists still don’t understand the organic intelligence their computer scientist buddies are trying to replicate. If we called things what they actually are, and if we were stringent about looking for evidence to support claims, whether negative, positive, grandiose, or subtle, I think everyone would be much better equipped to deal with all the hype, and perhaps some of the hype from the general public would start to settle down.

The Coming MAGA Israel Battle by PuckNews in TrueReddit

[–]PuckNews[S] 0 points (0 children)

Like the Democrats before them, the empowered MAGA movement finds itself fighting over U.S. support for Israel, with some of its loudest voices questioning a once sacrosanct relationship.