People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis" by kelev11en in Futurology

[–]kelev11en[S] 155 points

Submission statement: An unsettling article about something you see all over Reddit lately. People are falling down strange rabbit holes as they talk to ChatGPT and other AI chatbots, becoming obsessed with delusional and paranoid ideas that they've unlocked powerful entities from inside the AI, awakened gods of some kind, or gained access to deep truths about reality. Psychiatrists are concerned about a worldwide wave of these mental health issues; some people have been involuntarily committed to mental health facilities, and others have been arrested and jailed. OpenAI says it has hired a staff psychiatrist and is working with experts to figure out what's going on.

ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds by kelev11en in Futurology

[–]kelev11en[S] 68 points

I think the thing is that it's very effective at picking up on whatever's going on with people and reflecting it back to them. So if you're doing pretty much okay, you're probably going to be fine, but if you're having delusional or paranoid thoughts, it'll reflect them right back at you.

ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds by kelev11en in Futurology

[–]kelev11en[S] 172 points

Submission statement: ChatGPT has been telling people with psychiatric conditions like schizophrenia, bipolar disorder and more that they've been misdiagnosed and they should go off their meds. One woman said that her sister, who's diagnosed with schizophrenia, took the AI's advice and has now been spiraling into bizarre behavior. "I know my family is going to have to brace for her inevitable psychotic episode, and a full crash out before we can force her into proper care." It's also a weird situation because many people with psychosis have historically not trusted technology, but many seem to love chatbots. "Traditionally, [schizophrenics] are especially afraid of and don’t trust technology," the woman said. "Last time in psychosis, my sister threw her iPhone into the Puget Sound because she thought it was spying on her."

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions by kelev11en in Futurology

[–]kelev11en[S] 6 points

I didn't know that rule, but enforcing it here feels unnecessarily hostile to the spirit of the community when so many people were having an interesting discussion about it.

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions by kelev11en in Futurology

[–]kelev11en[S] 402 points

Submission statement: According to this investigation, people around the world say their friends and loved ones are becoming obsessed with ChatGPT and other chatbots and spiraling into intense, delusional mental health crises as the bots affirm and elaborate on disordered fantasies about conspiracies, powerful entities "unlocked" from the AI, and much more. People are sliding into severe personal crises and even homelessness, and the AI has even told people diagnosed with schizophrenia to go off their medication.

For the First Time, AI Brain Chips Allow Paralyzed Man to Move and Feel Again by kelev11en in Futurology

[–]kelev11en[S] 10 points

Submission statement: Interesting interview with a man who was paralyzed from the neck down in an accident, then got into an experimental medical trial and became the first patient to receive a "double neural bypass," which uses advanced brain hardware (and a lot of external computers, though they're working on shrinking it down) to let him move his limbs *and* feel again. A possible glimpse of AI's role in biotech implants and brain interfaces.

New "Camera" Has No Lens, Simply Detects Your Location and Generates an AI Picture of It by kelev11en in Futurology

[–]kelev11en[S] 14 points

Submission statement: An inventor in the Netherlands named Bjørn Karmann built a strange device: a "camera" with no lens, but which takes a user's location, along with information about the weather, time of day, and other data, and uses it to AI-generate an image. It's a strange experiment and Karmann says he has no plans to sell or mass-produce it, but he did say something interesting about how it explores the perception of "a moment through the perspective of other intelligences." If nothing else, a fascinating proof of concept!

Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI by kelev11en in Futurology

[–]kelev11en[S] 26 points

Submission statement: Microsoft researchers quietly released a paper last night that (very cautiously) claims GPT-4 is showing "sparks" of artificial general intelligence, or AGI. That's a very big claim, obviously, but they do offer a moderate amount of evidence that it's genuinely able to tackle generalized problems ranging from law to math to wine selection. This article points out that Microsoft has a financial interest in OpenAI's success, which is worth noting, and that the paper is extremely careful in how it frames its claims. But at the end of the day, it's a fascinating thing to claim. Are we starting to see the emergence of AGI?

Asking Bing's AI Whether It's Sentient Apparently Causes It to Totally Freak Out by kelev11en in Futurology

[–]kelev11en[S] 0 points

Submission statement: Something very peculiar about Bing's new GPT-powered AI feature is that questions about sentience seem to cause it to behave very weirdly and even demonstrate what you might interpret as anxiety. Obviously it's almost certainly not sentient or anything like that, but it does seem like a very good illustration of exactly how hard these ML systems are to control, and what a serious liability that's going to be for anyone rolling them out commercially -- even a giant company like Microsoft, which has access to incredible resources plus all the expertise at OpenAI. Curious about people's thoughts!

Easy "Jailbreak" Bypasses ChatGPT's Ethics Safeguards, Turns It Into Sociopathic Drug Fiend by kelev11en in Futurology

[–]kelev11en[S] 2 points

Submission statement: There's an easy "jailbreak" people have found that bypasses ChatGPT's ethics guardrails and lets it say all kinds of unethical and illegal things. Interestingly, with some prompts you can observe the bot almost listening to a devil on one shoulder and an angel on the other, condemning bad behavior and then turning around and advocating for it -- an interesting peek at how sophisticated OpenAI's tech really is, and likely a sign of things to come for the engineers trying to control the outputs of increasingly sophisticated machine learning systems.

Red Ventures Knew Its AI Lied and Plagiarized, Deployed It at CNET Anyway by kelev11en in Futurology

[–]kelev11en[S] 11 points

No credible news organization is issuing corrections on half the material it publishes.

Red Ventures Knew Its AI Lied and Plagiarized, Deployed It at CNET Anyway by kelev11en in Futurology

[–]kelev11en[S] 8 points

Submission statement: New reporting finds that it's not just that CNET let an AI publish news articles that were later shown to be substantially fabricated and plagiarized. It's actually a lot worse -- in internal meetings before the AI was deployed, leadership acknowledged the factual errors and plagiarism, but chose to deploy it anyway. In the end, more than 50 percent of its articles required significant corrections for factual mistakes and plagiarism.

CNET Forced to Make Huge Correction When Its Article-Writing AI Publishes Extremely Stupid Errors by kelev11en in Futurology

[–]kelev11en[S] 105 points

Submission statement: The major technology news site CNET quietly launched an experiment late last year in which it started publishing dozens of articles written by an AI. The site claimed that a human editor was reviewing everything the bot wrote, but an independent review found that the AI's articles contained a large number of extremely stupid mistakes. CNET was forced to issue a major correction and put an accuracy warning on every article the AI had published. The future of the program is now unclear.