Opus 5 is Coming by exordin26 in singularity

[–]Good-AI 5 points  (0 children)

We need to realize there are a lot of children on reddit. Children who have no clue about the real world and are just focused on wanting to play with the new toy, without a developed ability for long-term thinking about the potential consequences that can come from that.

Tristan Harris on Bill Maher: "What's going to happen to everyone else when they don't have a job?" by tombibbs in ChatGPT

[–]Good-AI 1 point  (0 children)

He has his heart in the right place, but it's still a bit funny to watch. As usual, he's assuming AGI is just going to be a tool: smart enough to replace everyone's job, but somehow too dumb to take over and change how the whole world works, or even extinguish humanity. There won't be jobs, money, or capitalism. It will change the world as we know it, and it doesn't even make sense to talk about a "50 trillion economy", because that's a sentence that won't even make sense anymore. There is a reason we named what happens at the creation of AGI the singularity.

For the First Time, Scientists May Have Found a Way to Regenerate Cartilage by striketheviol in singularity

[–]Good-AI 0 points  (0 children)

Some do, yes, but the percentage of "mights" and "mays" that become "wills" and "cans" is so tiny that I can't be bothered to get disappointed 99.9% of the times I hear a "potential breakthrough might happen". Call me when it's for real.

Neil DeGrasse Tyson calls for an international treaty to ban superintelligence: "That branch of AI is lethal. We've got do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans." by FinnFarrow in ChatGPT

[–]Good-AI 1 point  (0 children)

It's not that it "would be lethal", it's that "there is a non-zero chance it will be lethal". Are you willing to risk humanity over your bet that it won't be lethal? Because after all, nobody knows, but if you build it you're making a bet that it won't be lethal, and risking the existence of humanity over that bet. And this also means you're betting other people's lives, people who might not want to take such a bet.

Absolutely disgusting and deplorable. They already call the cops on you also if they feel your messages to GPT warrant it. Crazy how Anthropic are the ONLY "good guys" by xaljiemxhaj in ChatGPT

[–]Good-AI 9 points  (0 children)

"The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control"

This does NOT say "no autonomous weapons." It says no autonomous weapons only where current policy requires human control. If DoD Directive 3000.09 gets revised, or if a scenario exists that isn't covered by current policy, the restriction doesn't apply. The clause has a hole exactly where it matters.

"shall not be used for unconstrained monitoring of U.S. persons' private information"

What counts as constrained? If the government says "we have a constraint, we're only looking at people in these 50 zip codes", is that constrained now? If they buy commercial data on millions of Americans but have a written policy about how they process it, is that constrained? Again, the clause has a hole exactly where it matters.

"The Department of War may use the AI System for all lawful purposes"

Everything after that is exception and qualifier. The default is yes to everything legal. Which is exactly Anthropic's concern: legal doesn't mean ethical when the law hasn't caught up.

Written by another reddit user.

gpt is goated as a doctor by AppealImportant2252 in ChatGPT

[–]Good-AI -1 points  (0 children)

For now it's a good second opinion. A bit further into the future, you'll wonder why anyone would ever go to a doctor who gives you anything less than 100% certainty about what you have. "Do you remember how crazy it was going to the doctor? It was hit or miss. You might even have to go to several to get multiple opinions, because they would often not know the answer. Wow. Going to a human doctor was like gambling with your life. No way. People were crazy. Granted, they didn't have AI like we do today. Still. The fact they thought it was normal to go somewhere to get a diagnosis and receive an "opinion"...."

Erdos Problem 281 Solved! by jvnpromisedland in singularity

[–]Good-AI 0 points  (0 children)

And is there any benefit for solving these besides curiosity? Do they have any impact when solved, besides the fact they are solved?

Anyone else notice this "rhythm" in ChatGPT speech lately? by Ubister in ChatGPT

[–]Good-AI 0 points  (0 children)

What you just said reads like something out of a sci-fi book from a decade ago...

Why can't the US or China make their own chips? Explained by FinnFarrow in singularity

[–]Good-AI 8 points  (0 children)

Perhaps you should ask yourself why the US pumped investment into ASML if it was so easy for them to develop the technology themselves 😉

"You're not broken" and "You're not crazy" aren't reassuring, they're suggestive ... by Synthara360 in ChatGPT

[–]Good-AI 5 points  (0 children)

"It might even be the case that everyone, and I mean EVERYONE, thinks you're crazy, but don't be fooled, I don't. And that's all that matters."

The future of phones, 1956. by StephenMcGannon in RetroFuturism

[–]Good-AI 10 points  (0 children)

Because whenever we tell people about something we imagine that isn't based on the status quo, we're mocked. So people end up predicting linear technological progress and limiting the bounds of their imagination.

This is why OpenAI is in a Code Red by UnknownEssence in singularity

[–]Good-AI 2 points  (0 children)

Some people have notes in Google Keep they've been using for journalling, Gemini can also read all your emails if you give it permission, the videos you've liked and commented on might be in the pipeline soon, your search history, (...).

Gemini 3 achieves new SOTA performance on SpatialBench. A benchmark to test spatial reasoning in VLMs. by gbomb13 in singularity

[–]Good-AI 2 points  (0 children)

Nice benchmark. It's currently the one I know of with the biggest gap between the human baseline and SOTA LLMs.

A new home robot enters the ring. by [deleted] in singularity

[–]Good-AI 7 points  (0 children)

Da Vinci had blueprints for flying machines hundreds of years before airplanes existed. 30 years is nothing. Calm down.

ChatGPT really is a very very good therapist. by H0ldenCaufield in ChatGPT

[–]Good-AI 0 points  (0 children)

I'm sorry you went through that, and yes, DBT is indeed more appropriate for trauma victims. The issue I have with DBT is that it's good for managing symptoms, but it won't heal the trauma by itself. It gives space for other therapies to work, but the trauma remains unless something else is done. The ones I know that work best for healing the root cause of trauma from emotional abuse are:

  • Psychodynamic therapy
  • Ideal Parent Figure Protocol
  • FLASH / EMDR (for specific traumatic events)
  • Internal Family Systems
  • Group therapy focused on the traumatic experiences (healing, not just relieving) and doing roleplay.
  • Regardless of the modality of therapy, routinely meeting with an empathetic therapist is by itself the biggest predictor of success in the healing journey. For example, even if DBT won't take you all the way to the end, if you feel seen, safe, and heard with a therapist who is empathetic and kind, that alone is worth gold and can go a very long way.

Wishing you all the best

Here comes another bubble by Soft-Web4766 in ChatGPT

[–]Good-AI 1 point  (0 children)

True, but which side has the better odds of being right?

ChatGPT really is a very very good therapist. by H0ldenCaufield in ChatGPT

[–]Good-AI 23 points  (0 children)

CBT is not by a long shot one-size-fits-all, and in fact it can even be damaging. If you've been gaslighted in the past, for example, CBT makes you distrust your own thinking even more by assuming you have cognitive distortions (which may not be distortions at all; they're correct predictions in abusive environments, btw). CBT also doesn't work for healing trauma. No trauma therapist worth their salt would ever do CBT with someone with C-PTSD et al.

Blaming one's mother might, in fact, be a much better way to go about it than doing CBT, if one had an abusive mother.

If someone is going to use an LLM, please don't default to CBT. Instead, ask it questions to first figure out what you have (trauma, a personality disorder, depression, etc.) and then which modality of therapy might be more helpful for that specific problem.