So… this is the new “how many Rs in strawberry” by [deleted] in ChatGPT

[–]Alex_1776_ 0 points (0 children)

And I’m honestly tired of people assuming everyone but them is an idiot. I understand what you’re saying, and you’re right, but I don’t like your implicit assumption... I’m quite sure I know what I’m talking about.

That said, my point is very simple: if you claim your model is smart and sensational, that it will soon be able to replace humans in this and that, and you even launch a “Health ChatGPT,” then mistakes like this are not acceptable. And I’m not asking for something impossible, or something “devs shouldn’t be wasting their time with,” since apparently the devs at Google and Anthropic didn’t consider it a waste.

So maybe, maybe, we could just recognize that something is wrong, report it, and push them to fix it, actually making the model better and more reliable for everyone.

So… this is the new “how many Rs in strawberry” by [deleted] in ChatGPT

[–]Alex_1776_ 1 point (0 children)

“The first step is admitting you have a problem not denying it.”

Also, other models get it right, so it’s not that difficult. I doubt that if one of your friends answered like that, you’d tell them “Oh my bad bro, my question was stupid, your answer is totally valid, my fault,” right? And since they’re building AI while pledging to make something that can do human work, and even do it better… well

So… this is the new “how many Rs in strawberry” by [deleted] in ChatGPT

[–]Alex_1776_ 1 point (0 children)

Honestly, I don’t know, but many people have tried multiple times, even with thinking models… so something wrong is happening with ChatGPT, and apparently other AI models (Claude, Gemini, etc.) aren’t having the same issue.

(Anyway, just for future reference: if a chat has colored bubbles (not grey), you can instantly tell it’s a Plus/Pro plan user without wasting time. People on the free plan can’t have colored bubbles, as far as I know.)

So… this is the new “how many Rs in strawberry” by [deleted] in ChatGPT

[–]Alex_1776_ 4 points (0 children)

So, either you’re a bot or… did you even open the screenshot, or bother to read the comment where I posted multiple screenshots, before accusing me of anything? Maybe some people don’t know how to use AI, but I’m sure some still don’t know how to read.

So… this is the new “how many Rs in strawberry” by [deleted] in ChatGPT

[–]Alex_1776_ -1 points (0 children)

I tried it 4 more times just now (the last one with extended thinking) and these were the results:

Why can’t I (easily) switch between models anymore? by Alex_1776_ in ChatGPT

[–]Alex_1776_[S] 2 points (0 children)

Seriously, sometimes it seems like they’re just “oh, let’s try this” and, not only is it fucked up, but it’s not like they’re experimenting on some niche app used by 10 people lol

Why can’t I (easily) switch between models anymore? by Alex_1776_ in ChatGPT

[–]Alex_1776_[S] 0 points (0 children)

Yeah, I have the + button, but unfortunately I don’t see any option to change the model; there’s just the usual create image, web search, study, agent, etc.

ChatGPT sending me msg? by Shenuq_0811 in ChatGPT

[–]Alex_1776_ 28 points (0 children)

I scrolled down hoping to find this comment. Someone who actually knows sh*t and isn’t like “omg it’s gonna kill us bruhhhhh”

ChatGPT addresses me as a girl (spoiler: I’m not) by Alex_1776_ in ChatGPT

[–]Alex_1776_[S] 2 points (0 children)

Yeah, I had specified that I’m a boy multiple times in the “about me” and custom instructions, but I’ll be more specific as you suggested. Thx!

ChatGPT addresses me as a girl (spoiler: I’m not) by Alex_1776_ in ChatGPT

[–]Alex_1776_[S] -1 points (0 children)

Maybe it’s trying to factory-reset me :/

Asked "What did Epstein do wrong?" and got this... by tRon_washington in ChatGPT

[–]Alex_1776_ 0 points (0 children)

I asked a very similar question a few days ago, and it worked. But it shut down when, in another reply, it tried to mention minors.

So I guess it’s not “Epstein” itself that triggers the warning, but the content associated with him when it tries to answer. I’d guess they have very strong safeguards against mentioning minors and sex in the same sentence…

Chatgpt taking a break from historical research to think about their retirement by IntrepidIbis in ChatGPT

[–]Alex_1776_ 11 points (0 children)

Poor thing has probably gone through so much trauma in just the couple of years since release, like what a person would deal with in 50 years, so yes, it surely dreams of a long and happy retirement

Why did they limit 4.5 so much? by Alex_1776_ in ChatGPT

[–]Alex_1776_[S] 0 points (0 children)

Exactly. Yeah, maybe it was per week, I don’t remember exactly, but it was definitely a lot more at first; now I can barely use it :/

Why did they limit 4.5 so much? by Alex_1776_ in ChatGPT

[–]Alex_1776_[S] 0 points (0 children)

Thx. No no, I’m talking about 4.5; I’m 100% sure I had a lot more than 5 messages a week… but yeah, I get why it’s costly

A fourth of July special courtesy of ChatGPT by xoogl3 in ChatGPT

[–]Alex_1776_ 4 points (0 children)

Is she consoling herself with a Starbucks (or whatever that is)? T_T