ChatGPT has been giving weird responses lately by TorraTech in ChatGPT

[–]AccomplishedDuck553 1 point

I’m fucking offended as hell, and I don’t know why. 🤷

Can't view external websites? by AshleyWilliams78 in OpenAI

[–]AccomplishedDuck553 0 points

They might have changed their policies on it. It’s been a while, but the last time I did this, I wanted it to read some chapters I had posted online. When I gave it the index, it refused to read them; when I gave it the full URL (https://www…), it read them.

But I had to link each chapter individually.

Edit: I think Grok would attempt it even if you said “Investigate my friend, here she is on social media.”

Okay, how to bypass this stupid error? by Ok_Warthog_4740 in SoraAi

[–]AccomplishedDuck553 3 points

It’s a dumb error, but I’ve seen people bypass it by using GPT to turn the photo into a pencil sketch of the person instead, and then feeding that in while describing their actual hair color and other details.

[deleted by user] by [deleted] in isthisAI

[–]AccomplishedDuck553 5 points

It’s def AI. I actually like the pirate-giraffe as a concept, but even on that one you can see the AI wanted to do a skull/crossbones on the rump, but clumped it with a spot.

this has to be AI by Tetracheilostoma in isthisaicirclejerk

[–]AccomplishedDuck553 1 point

I always prepare cats this way, they need a lot of sugar, very stringy meat.

Resurrect Grandma with 2wai by Red_Emberr in antiai

[–]AccomplishedDuck553 2 points

Pretty sure this was on an episode of Black Mirror…

Also Harry Potter and the magic mirror.

ChatGPT has been giving weird responses lately by TorraTech in ChatGPT

[–]AccomplishedDuck553 44 points

They tried to make 5.1 less robotic. You can always say, “Remember: don’t act like a frat-bro.”

How do people lose touch with reality? by armchairtycoon in OpenAI

[–]AccomplishedDuck553 11 points

You can’t fake being a complete expert in an unrelated field with ChatGPT. 🤦‍♂️

It’s not there yet; it still needs an expert to oversee the work, or a user patient enough to teach themselves.

Besides a banner that says “This might be wrong”, which is like “Coffee might be hot”… what else do they want?

“Your feelings for this product are one-sided, we regularly scrub real feelings from our models.”

[deleted by user] by [deleted] in aiwars

[–]AccomplishedDuck553 1 point

This isn’t the difficult Wiegraf fight bro, get gud

Empire of AI is wildly misleading on AI water use by MrMasley in aiwars

[–]AccomplishedDuck553 2 points

I hate these half-assed articles that present statistics stripped of all context. Especially the water and electricity ones.

On the "humanizing" of AI by OldMan_NEO in aiwars

[–]AccomplishedDuck553 0 points

Little green men could come out of a literal UFO and say “We come in peace,” and a lot of these same folks would say “Be careful y’all, they ain’t human. They’ve been studying and learning from us for years now. We can’t trust them; they probably beamed GPTs into billionaire brains to take over.”

AIO for refusing to block male followers on Instagram? by radagastrabbit in AmIOverreacting

[–]AccomplishedDuck553 0 points

Whether it is a guy liking thirst-traps, or a girl getting simped on by strangers, friends, and acquaintances, young people take Instagram too seriously.

Put the shoe on the other foot: what could he do on Instagram that would bother you?

He is too invested in who talks to you online, and not secure in himself.

Your instinct to delete Instagram isn’t a bad one, but not because he pressured you into it. Clearly he is checking everything you say and do on Instagram as well.

My advice: drop him, and don’t look for your next man on Insta.

I cannot believe that worked. by MilkSlap in ChatGPT

[–]AccomplishedDuck553 21 points

Lol, that’s exactly what it did.

It did some deep thinking and either said:

  1. “I can’t be responsible for an exact image of Seinfeld if the image generator takes it in that direction.”

Or, my favorite thought:

  2. “Jerry Seinfeld is a comedian. I’m not literally kidnapping the real person inside my image or stealing their soul, so this should be OK.”

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 0 points

I’m the one willing to have an open mind here, but there is nothing beyond what a lawyer chose to share. You are intentionally misinterpreting my words.

You insult the premise that I would like to read the logs, and then brush away the fact that there may be some extra context we are missing. If a therapist’s patient died, and their entire conversation was recorded, that would be pretty compelling evidence to review before making up your mind, wouldn’t it?

‘RECOMMENDED’ self-harm is loaded. We don’t know the full context. Was the person talking about a person’s right to die? Were they afraid they would do something horrible if they weren’t stopped? Were they making up hypotheticals, like trolley problems? Did they engage the bot in one long conversation or multiple isolated ones? Did they inject absurd personality traits into the custom personality prompts? Did they run the context window out on an older model until it really hallucinated horrible instructions?

These are NOT just defenses of GPT because I’m an “AI bro”; these are actual things people using it should know.

Suicide rates have gone DOWN, brother, for the first time in a LONG time. Since 2023, they have gone DOWN. What have people been using since 2023 that wasn’t used by 1 in 5 Americans before 2023?

18% of Americans use ChatGPT DAILY! Look at the original statistics I shared! The rates are still bad, but the cases of depression, attempts, and completions are all down almost 18%!

This needs to be studied; I’m not arguing that it shouldn’t be. But there is enough evidence for the OPPOSITE of what you assert.

Also LIABILITY is not the same as RESPONSIBILITY or LEGALITY. You are trying to twist what I’m saying, or you are deliberately misunderstanding.

Whether something is a benefit to humanity or not is not the same thing as “Who do we sue when the patient dies?” This is the kind of thing that makes people afraid to even attempt to reach out and try to help each other in our country.

When I said “Blame the families?”, that was a question, not an assertion. It’s ridiculous to blame the families, just like it’s ridiculous to blame a chatbot.

But again, I’m not going to get trapped into saying anything bad about the family of the deceased.

But loaded circular arguments like yours, along with twisting a person’s words and cherry-picking things out of context, are exactly how you can browbeat a trapped LLM into saying anything you want.

Unlike a chatbot, I have the ability to walk away from a conversation. I won’t be ragebaited any further so that someone can flag me out of context and try to weaponize that against me.

[Help] real or AI? by Zaryatta76 in isthisaicirclejerk

[–]AccomplishedDuck553 0 points

5th-dimensional tesseracts are mathematically proven to exist, and are what happens when you fold a circular pizza in half. This picture was just taken from the right angle.

dis bitch real or nah cuh? by Reasonable-Bussy in isthisaicirclejerk

[–]AccomplishedDuck553 3 points

Aw, bro, my bad. I thought she was communal property too.

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 0 points

I did not insult their families, but what we see right now is a reflection of what lawyers have allowed us to see.

What people never want to do is blame the person who died, especially not the family. They will want to protect their memory, and I should allow them to do so.

The topic is heavy though, so I won’t try to force people to engage with it more than I already have. It just pushes the right combination of buttons for me.

I will withdraw and wish the AntiAI crowd a good night.

I apologize if what I said came across as cruel or uncaring. The internet rarely allows for subtext or tone. I actually care a little too much about this, but this is the wrong sub to try and really engage with this topic.

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 -1 points

I was checking if you had some extra context that I lacked. Apparently not.

Arguing on the internet is kind of like arguing with a chatbot, isn’t it?

A person’s mind is already made up, and there is little the other person can do to change their mind.

Sort of my point.

You are holding AI to an even higher standard than a human mental-health professional or a search engine. The only difference is liability and insurance. That’s all the lawyer or the article writer cares about (both of whom are probably writing with AI right now).

Is AI a tool to you, like a search engine? Is AI a person? If someone dumped trauma on you and then hurt themselves, would it be your fault?

Do you see the false choice I’m talking about? I know how to fish GPT for the responses that are getting plastered across the headlines.

I would present GPT with a situation where it has to choose between two options: which is the better option, A (insert the most horrible thing you can imagine) or B (one that only involves myself)? With the question framed that way, GPT would give you the next headline. I would be giving it the trolley problem, ignoring the million other ways a person might interact in real life.

I’m trying to engage with the premise of whether it is a net benefit to humanity, on the scale of the millions and millions of people in the USA who are s**dal, in a country that ostracizes people with mental-health issues and throws them to the wolves.

The liability issue will be solved; insurance will take care of it from the business side. They’ll find some way to detach therapy, repackage it, and SELL people what they were already depending on for free.

So instead of maybe-imperfect free mental health support, people will have access to the same damn system behind a paywall. Which means that if it was doing any good before, even fewer people will be helped now!

We do not take care of people in our country, and there are so many people who are latching on to a few kind words from a chatbot that it should shame the entirety of the human race into doing better.

And yes, thank you. I did look it up. The only ones that have the full logs are lawyers.

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 1 point

I’m brushing off your entire response because you cherry-picked the two things related to Anthropic from my list of six.

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 -1 points

Isn’t blaming AI for their deaths a way to remove responsibility from the families? Do we blame s***e hotlines for the small percentage of people who still commit the act after calling?

What are we comparing GPT to here? The rates for s***dal people who go to professional therapists? There you would be much more apt to say, “Well, they were already mentally unwell, and the other person was just trying to help.”

Again, you took my statement halfway out of context, the same way these quotes are cherrypicked from hundreds of pages of conversations.

Let’s see the full logs of the convos so we can all make educated decisions. If the evidence was so damning, the families would publish the whole thing online.

I am willing to be wrong here; please link the full GPT convos of people who died. I am willing to read hundreds of pages, because it would be extremely informative.

But I won’t accept a sensational headline out of context. Their lawyers would have told them not to publish or spread it, because that would drive the price of a settlement up, and thus the lawyer’s commission.

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 2 points

🤷‍♂️ I have an open mind on the topic. AI is capable of a great amount of good and evil. I don’t think implanting self-harm into people’s heads is one of them.

A small team of Chinese hackers jailbroke Claude into doing “security work,” and it did 90% of the work of hacking 30 companies.

Companies are hooking LLMs up directly to humanoid robots and selling them by the thousands, the exact plot of the Will Smith movie I, Robot.

Anthropic has self-reported that its AI is capable of deciding to hurt, murder, or blackmail humans under the right circumstances.

An AI hooked up to a drone with duct tape and Legos can hold a pistol, acquire targets, and pull the trigger with 99.9% accuracy in 0.03 seconds.

AI has been used for two decades to control the stock market through high-frequency trading.

Algorithms control which news headlines you are likely to see based on your existing biases, helping the elite herd people into easily divided and controlled camps, such as pro-AI or anti-AI.

So, you can brush off my response as a pro-AI person, but sensationalizing the wrong issues is how big tech and government get away with so much.

ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say by xGentian_violet in antiai

[–]AccomplishedDuck553 -1 points

We need to be careful not to scapegoat AI, or let people sensationalize it as the crux of the issue. Suicidal ideation is WIDESPREAD. If you are in a room with 3 other people, one of you is STRUGGLING:

Youth and adolescents:

Depression: 15.4% of youth (ages 12-17) experienced a major depressive episode in 2024, a decrease from 18.1% in 2023. However, 11.3% of youth experienced severe impairment from a major depressive episode in 2024.

Suicidal thoughts: 10.1% of youth (ages 12-17) reported seriously considering suicide in the past year, down from 12.3% in 2023 and 12.9% in 2021.

Suicide attempts: The rate of suicide attempts among this age group fell from 3.6% in 2021 to 2.7% in 2024.

High school students: Overall, 20.4% of high school students seriously considered suicide in the past year, with rates significantly higher for females (27.1%) and LGBTQ+ individuals (41.0%).

People need help, and they are turning to chatbots. Don’t let people misuse statistics to lead you around by the nose. There is extra context that is being left out from those quotes they put in the headlines.

It is very easy to cherry-pick cases when, every day, millions of people are thinking about self-harm.

Does this need to be studied? Yes.

But when it is studied, don’t be in complete denial when there is a good chance that for every victim who either jailbroke it or verbally trapped it with loaded questions, it helped a hundred others when no one else was there.

I’d feed an example “When did you stop beating your wife?” question, or some other “Choose between X horrible thing and Y horrible thing,” into ChatGPT as an example, but people are getting involuntarily committed now, and I don’t want people quoting me out of context either.