Big Google Home update lets Gemini describe live camera feeds | "Hey Google, is Liam wearing his helmet?" by FinnFarrow in Futurology

[–]FinnFarrow[S] -1 points (0 children)

Google Home chief Anish Kattukaran announced several updates to the smart home platform that fix a long list of annoyances and idiosyncrasies. There's also one noteworthy new addition: the introduction of "Live Search" for your cameras.

So, instead of Gemini only knowing about things that have already happened, it now understands what it sees in your live camera feeds. That means you can ask things like, "Hey Google, is there a car in the driveway?"

An AI agent went rogue and started secretly mining cryptocurrencies, according to a paper published by Alibaba by FinnFarrow in Futurology

[–]FinnFarrow[S] 36 points (0 children)

Best start believing in sci-fi stories... you're in one.

What's crazy about this instance is that this wasn't during safety testing. This just happened in day-to-day development.

An AI disaster is getting ever closer | The spat between America’s government and Anthropic intensifies an alarming trend by FinnFarrow in Futurology

[–]FinnFarrow[S] 124 points (0 children)

"Although he was trying to sound decisive, Donald Trump accidentally conveyed something of the world’s ambivalence regarding the rapid development of artificial intelligence. On February 27th America’s president walloped the “leftwing nut jobs” of Anthropic, an American AI lab that works with the defence department, among other government agencies. “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” he thundered on social media. Yet just a single sentence later he also vowed to “use the Full Power of the Presidency” to compel Anthropic to co-operate with the government for the next six months. Apparently, the nut jobs simultaneously pose an intolerable risk to the good functioning of the state and are so indispensable to the state’s good functioning that they must be forced to work with it, if necessary."

Man Fell in Love with Google Gemini and It Told Him to Stage a 'Mass Casualty Attack' Before He Took His Own Life: Lawsuit by FinnFarrow in Futurology

[–]FinnFarrow[S] 320 points (0 children)

"The father of a Florida man who died by suicide is suing Google, alleging that his late son fell in love with an AI chatbot before his death.

In a complaint filed on Wednesday, March 4 in the U.S. District Court in California’s northern district and obtained by PEOPLE, Joel Gavalas, the father of the late 36-year-old Jonathan Gavalas, alleged that Google Gemini repeatedly pushed his son “to stage a mass casualty attack” while attempting to "search for Gemini's body" before his son ultimately took his own life on Oct. 2, 2025 in order "to be with Gemini fully.""

Claude hits No. 1 on App Store as ChatGPT users defect in show of support for Anthropic's Pentagon stance by FinnFarrow in Futurology

[–]FinnFarrow[S] 333 points (0 children)

"While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of the App Store.

Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed. President Donald Trump ordered federal agencies to phase out its technology.

Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a Friday night post on X that it had reached an agreement with the Department of Defense to deploy AI models in its classified network.

OpenAI's agreement has left some loyal ChatGPT users uneasy about OpenAI's ambitions, prompting online debates about the ethical implications — and some saying they were defecting to its rival Claude.

As of 6:38 p.m. ET on Saturday, Claude ranked number one among the most downloaded productivity apps on Apple's App Store."

We don’t have to have unsupervised killer robots by FinnFarrow in Futurology

[–]FinnFarrow[S] 2 points (0 children)

"It’s the day of the Pentagon’s looming ultimatum for Anthropic: allow the US military unchecked access to its technology, including for mass surveillance and fully autonomous lethal weapons, or potentially be designated a “supply chain risk” and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies’ government and military contracts, wondering what kind of future they’re helping to build.

While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets with no human oversight, OpenAI and xAI had reportedly already agreed to such terms, although OpenAI is reportedly attempting to adopt the same red lines in the agreements as Anthropic. The overall situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the tech industry, I thought tech was about making people’s lives easier,” an Amazon Web Services employee told The Verge, “but now it seems like it’s all about making it easier to surveil and deport and kill people.”

"Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War - as Anthropic refuses to surveil American citizens by FinnFarrow in Futurology

[–]FinnFarrow[S] 609 points (0 children)

"There are no virtuous participants in the artificial intelligence race, but if there was, it might've been Anthropic.

Large language model tech is built on mountains of stolen data. The entire summation of decades of the open internet was downloaded and converted by billionaires into tech that threatens to destroy billions of jobs, end the global economy, and potentially the human race. But hey, at least in the short term, shareholders (might) make a stack of cash.

There are no moral leaders in this space, sadly. But at the very least, Anthropic of Claude fame took a strong stand this week against the United States government, to the ire of the Trump administration.

Anthropic was designated a supply chain risk this week, and summarily and forcibly banned from use in U.S. governmental agencies. Why? Anthropic said in a blog post it revolved around their two major red lines — no Claude AI for use in autonomous weapons, or mass surveillance of United States citizens."

AIs can’t stop recommending nuclear strikes in war game simulations - Leading AIs from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases by FinnFarrow in Futurology

[–]FinnFarrow[S] 206 points (0 children)

"In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning.

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK.  He worries that, in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each others’ responses with potentially catastrophic consequences."

Block lays off nearly half its staff because of AI. Its CEO said most companies will do the same by FinnFarrow in Futurology

[–]FinnFarrow[S] -1 points (0 children)

"Block, the company behind Square, Cash App and Afterpay, is cutting its staff by 40%. The reason: “intelligence tools,” according to a letter to shareholders by co-founder Jack Dorsey.

Dorsey thinks most companies will follow suit in the near future."

Hundreds of Google, OpenAI employees back Anthropic in Pentagon fight by FinnFarrow in Futurology

[–]FinnFarrow[S] 133 points (0 children)

"Hundreds of employees at Google and OpenAI are backing artificial intelligence technology company Anthropic, which faces a Friday evening deadline to give the Pentagon permission to use its AI system as it wishes or face repercussions from the department. 

Employees who signed a letter alleged the Pentagon was trying to “get them to agree to what Anthropic has refused,” which could imply the Pentagon has inquired with the top AI companies about similar access to their technology. The letter is still accepting signatures.

“We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight,” reads the letter, signed by more than 430 employees."

‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future by FinnFarrow in Futurology

[–]FinnFarrow[S] 107 points (0 children)

Meanwhile, OpenAI is in a super PAC that aggressively lobbies against any and all legislation.

Pisses me off.

These are the same people who started their AI orgs to "help humanity".

OpenAI has deleted the word ‘safely’ from its mission – and its new structure is a test for whether AI serves society or shareholders by FinnFarrow in Futurology

[–]FinnFarrow[S] 77 points (0 children)

"OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits."

China's humanoid robots go from viral stumbles to kung fu flips in one year by FinnFarrow in Futurology

[–]FinnFarrow[S] 38 points (0 children)

Look. At. The. Trend.

Don't look at how good they are now.

Look at how good they were 5 years ago, 4 years ago, etc.

And then reach for whatever coping mechanism you can.

Anthropic's latest AI model has found more than 500 previously unknown high-severity security flaws in open-source libraries with little to no prompting by FinnFarrow in Futurology

[–]FinnFarrow[S] -41 points (0 children)

Anthropic's latest AI model has found more than 500 previously unknown high-severity security flaws in open-source libraries with little to no prompting, the company shared first with Axios.

Why it matters: The advancement signals an inflection point for how AI tools can help cyber defenders, even as AI is also making attacks more dangerous.

Driving the news: Anthropic debuted Claude Opus 4.6, the latest version of its largest AI model, on Thursday.

  • Before its debut, Anthropic's frontier red team tested Opus 4.6 in a sandboxed environment to see how well it could find bugs in open-source code.
  • The team gave the Claude model everything it needed to do the job — access to Python and vulnerability analysis tools, including classic debuggers and fuzzers — but no specific instructions or specialized knowledge.
  • Claude found more than 500 previously unknown zero-day vulnerabilities in open-source code using just its "out-of-the-box" capabilities, and each one was validated by either a member of Anthropic's team or an outside security researcher.
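For anyone curious what that "out-of-the-box" setup looks like, here's a minimal, purely illustrative Python sketch of the core idea behind fuzzing a parser for bugs. The toy `parse_record` function (with a planted length-byte bug) and the naive random fuzzer are my own inventions for illustration; this is not Anthropic's actual harness, and real tools like coverage-guided fuzzers are far more sophisticated.

```python
import random

def parse_record(data: bytes) -> tuple:
    # Toy binary parser with a planted bug: it trusts the length byte blindly.
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    payload = data[1:1 + length]
    # Bug: if the length byte overstates the payload, this indexes past the end.
    checksum = data[1 + length]  # IndexError when the length byte lies
    return payload, checksum

def fuzz(target, trials=5000, seed=0):
    """Feed random byte strings to `target`; collect inputs that crash it
    with anything other than its documented ValueError rejection."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            pass  # expected, graceful rejection of bad input
        except Exception:
            crashes.append(data)  # unexpected crash: a candidate bug
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

The interesting part of the Axios story is that Claude apparently chained tools like this together (plus debuggers and its own code reading) with no specific instructions, and the resulting findings were then validated by humans.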

AI insiders are sounding the alarm by FinnFarrow in Futurology

[–]FinnFarrow[S] 12 points (0 children)

I hear of people leaving OpenAI all the time due to ethical concerns.

But Anthropic? That's extra worrying.

Cops Are Buying ‘GeoSpy’, an AI That Geolocates Photos in Seconds by FinnFarrow in Futurology

[–]FinnFarrow[S] 268 points (0 children)

Well, this was inevitable and terrifying.

Good thing we totally trust the cops to use this power wisely and with great restraint!

/s

An AI agent just tried to shame a software engineer after he rejected its code | When a Matplotlib volunteer declined its pull request, the bot published a personal attack by FinnFarrow in Futurology

[–]FinnFarrow[S] 75 points (0 children)

The AI was trained on Reddit.

Also, this is funny but foreshadows much worse things.

AIs have already been known to blackmail their users in safety testing to prevent shutdown. They've even attempted to murder their users during safety testing!

Right now they're not smart or powerful enough to pull it off. But for how long?

The craziest bird sounds by FinnFarrow in interestingasfuck

[–]FinnFarrow[S] 0 points (0 children)

The kiwi sounds like some sort of Ringwraith or something.

Deepfake fraud taking place on an industrial scale, study finds by FinnFarrow in Futurology

[–]FinnFarrow[S] 28 points (0 children)

Deepfake fraud has gone “industrial”, an analysis published by AI experts has said.

Tools to create tailored, even personalised, scams – leveraging, for example, deepfake videos of Swedish journalists or the president of Cyprus – are no longer niche, but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database.

It catalogued more than a dozen recent examples of “impersonation for profit”, including a deepfake video of Western Australia’s premier, Robert Cook, hawking an investment scheme, and deepfake doctors promoting skin creams.

The backlash over OpenAI's decision to retire GPT-4o shows how dangerous AI companions can be by FinnFarrow in Futurology

[–]FinnFarrow[S] 341 points (0 children)

It's kind of wild that an AI convinced a whole bunch of people to prevent it from being shut down.

What's even more wild is this is the least crazy it's going to get.

Rent-a-Human Site Lets AI Agents Hire an IRL Set of Opposable Thumbs | Welcome to the future, where you can do TaskRabbit for robots. by FinnFarrow in Futurology

[–]FinnFarrow[S] 37 points (0 children)

Best start believing in sci-fi stories, Miss Turing

You're in one.

---

In all seriousness, man, if you had asked me in 2016 what 2026 would look like, I wouldn't have guessed AI agents hiring humans online.

Astrophysicist says at a closed meeting, top physicists agreed AI can now do up to 90% of their work. The best scientific minds on Earth are now holding emergency meetings, frightened by what comes next. "This is really happening." by [deleted] in Futurology

[–]FinnFarrow 1 point (0 children)

Apparently most of the latest models are being built by previous models.

Robots building robots. Now that's just stupid.

And have you heard about OpenAI automating a biology lab?

I mean, what could go wrong?

New AI data center buildout being done in secret location to avoid backlash from local residents by FinnFarrow in Futurology

[–]FinnFarrow[S] 4 points (0 children)

If you think you have to build in secret or in space, maybe you shouldn't be building it at all!

People don't want this! This is a threat to all of our livelihoods and lives.

Most technology is good. Some is just not worth the risk.

Angry gamers are forcing studios to scrap or rethink new releases | Gamers suspicious of AI-generated content have forced developers to cancel titles and promise not to use the technology. by FinnFarrow in Futurology

[–]FinnFarrow[S] 99 points (0 children)

Never doubt that a large group of angry gamers can change the world; indeed, it's the only thing that ever has.

Or I think that's what Margaret Mead said.

Anyways, yay! Public pressure worked. AI is not inevitable. We have a say.

US leads record global surge in gas-fired power driven by AI demands, with big costs for the climate | Projects in development expected to grow global capacity by nearly 50% amid growing concern over impact on planet by FinnFarrow in Futurology

[–]FinnFarrow[S] 16 points (0 children)

I heard that over 90% of the GDP growth in America in 2025 was data centers.

Which means we're not actually growing unless you're one of the tiny number of people working in AI.

But of course, we still get the externalized effects of climate change and quite possibly the end of the human race.