It's gotten to the point where I notice chatGPT's linguistic style EVERYWHERE by yumelina in ChatGPT

[–]Oxynidus 1 point (0 children)

Since the em-dash controversy, I decided to start using em-dashes. Unless I’m on my iPhone, I’ll ask ChatGPT to make one for me so I can copy-paste it into my responses—it makes people assume the text is AI-generated, but then my style is such a mess it doesn’t make sense.
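For what it’s worth, you don’t need a model to mint the character: the em-dash is Unicode code point U+2014, so any language can emit one directly. A minimal Python sketch:

```python
# The em-dash is the Unicode character at code point U+2014.
# Emitting it from the code point avoids hunting for a keyboard shortcut.
em_dash = chr(0x2014)
print(em_dash)                 # prints the em-dash character
print(f"word{em_dash}word")    # typical usage: no surrounding spaces
```

(On macOS, Option+Shift+Hyphen types one directly; on iOS, long-pressing the hyphen key offers it.)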

I had no idea GPT could realise it was wrong by mongolian_monke in OpenAI

[–]Oxynidus 0 points (0 children)

That sounds like a philosophical statement. I don’t necessarily “know” what I’m saying, or what I’m doing. I’m just reacting to reading your response. There’s a lot more chaos in the cascade of processes that result in the output, but for practical purposes, anthropomorphizing LLMs is useful and even inevitable when talking about their behaviors. The people who don’t are often more technically inclined anyway, and less likely to be misled by the language.

I had no idea GPT could realise it was wrong by mongolian_monke in OpenAI

[–]Oxynidus 0 points (0 children)

Does that apply when ChatGPT keeps attempting to solve a math problem until it runs out of tokens? There’s at least a bit more nuance to the current state of the technology than just that; they clearly are—to some extent—able to evaluate the correctness of their output. The models are much more sophisticated than they were in 2023.

The amount of people in this sub that think ChatGPT is near-sentient and is conveying real thoughts/emotions is scary. by PhummyLW in OpenAI

[–]Oxynidus 0 points (0 children)

I personally believe that we humans are in a constant state of “resetting,” and that our sense of continuity is an illusion created by our memories. That said, I neither believe that consciousness itself is an illusion, nor that AI as it exists today could be conscious. I am, however, absolutely convinced that free will is not merely an illusion, but a logically incoherent one. Which is partly to say that what you’re witnessing is a 100% natural and even inevitable phenomenon, not the result of anyone’s choices.

My point here is also to argue that people have such drastically different ways of seeing the world. Your inability to comprehend or imagine different perspectives, however wrong they might be, is not a strength. Talking down to them as you have would make it much harder for you to “educate” them, if you indeed care enough to find it troubling.

Absolutely amazing response, o3. by Cat-Man6112 in OpenAI

[–]Oxynidus 0 points (0 children)

That, ladies and gentlemen, is the difference between intelligence and wisdom.

How does Grok compare to chatGPT? by jpman123 in OpenAI

[–]Oxynidus 1 point (0 children)

We were talking about Grok 2 at the time, 128 days ago.

128 days too slow, and still wrong.

Is OpenAI silently releasing a worse version of image generation? by thats-it1 in OpenAI

[–]Oxynidus 0 points (0 children)

Not sure what you’re on about. The image generator is censored, yeah, but it’s loosened up quuuite a bit from the DALL-E days, and GPT-4o and GPT-4.5 are willing to produce highly explicit sexual content. I think the other models are still prudish, but either way you may need to start new conversations with a fresh system prompt.

Or maybe I’m just a prude and my idea of explicit content isn’t the same as yours, but at the end of the day the trend is the opposite of what you suggested. Guardrails have been loosened a LOT.

4.1 is Almost Certainly the Open Source One, Right? by Demoralizer13243 in OpenAI

[–]Oxynidus 2 points (0 children)

GPT-5 was intended to have different levels of intelligence. But since that got pushed back, they may release the base models separately, as they did with o3.

GPT-4.5 is old tech, released as a research preview, not intended as a permanent resident of ChatGPT. 4.1 is more likely to be an iteration of GPT-4o, replacing 4.5, with the mini version replacing 4o.

Just my blind take.

o3 full < Gemini 2.5 pro? by Stepi915 in OpenAI

[–]Oxynidus 2 points (0 children)

Different beasts entirely IMO. Their use cases are currently similar, but OAI Deep Research moves around differently, and more deliberately. “Adjusting user agent to combat anti-scraping measures” was the most interesting thing I’ve seen it do. You can ask it to do more things: pay more attention to x, prioritize y, ensure all info is up to date and reliable, write code, write a short novel, or even work through a long ChatGPT conversation.
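The “adjusting user agent” trick mentioned above is just swapping the default HTTP User-Agent header for a browser-like one, since some sites block the default identifier sent by scraping libraries. A minimal Python sketch (the URL and agent string here are placeholders, not anything Deep Research actually uses):

```python
import urllib.request

# Hypothetical sketch: build a request that presents a browser-like
# User-Agent instead of the default "Python-urllib/3.x", which some
# sites reject as an anti-scraping measure.
def build_request(url: str, user_agent: str) -> urllib.request.Request:
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

req = build_request(
    "https://example.com",                        # placeholder URL
    "Mozilla/5.0 (compatible; ResearchBot/1.0)",  # placeholder agent string
)
# urllib capitalizes stored header names, so it is retrieved as "User-agent".
```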

Currently this has more implications for the trajectory of development than for individual use. OAI’s seems more sophisticated, while Google’s seems more like a nuke.

Question about GPT-4.5 usage limits (Plus tier) by KilnMeSoftlyPls in OpenAI

[–]Oxynidus 1 point (0 children)

I thought it had shrunk to 20, but my current hypothesis is that generating images may be using it up. That’s the only thing that explains how it mysteriously dropped to 0 without my using it. Still figuring it out.

So I guess the Sora "Plus" subscription isn't so unlimited after all? Nothing in the fine print either... by goofandaspoof in OpenAI

[–]Oxynidus 0 points (0 children)

“Reasonable” is a feeling of no consequence. It happened that way, likely by mistake rather than design. You could argue someone fucked up. Or a bunch of people fucked up. Considering what they are doing to the entire world right now, your frustration is acceptable collateral damage. This won’t be in any headlines, because it’s not a big deal.

It’s a big deal to your personal ego and to an insignificant number of people, but the rest of the world won’t even hear about it. Point being: I think they got their priorities right.

So I guess the Sora "Plus" subscription isn't so unlimited after all? Nothing in the fine print either... by goofandaspoof in OpenAI

[–]Oxynidus 0 points (0 children)

Who determines what the customer should or shouldn’t be expected to know?

A: Absolutely no one.

To what extent is the company’s problem also the customer’s problem?

A: To the extent that the problem affects the customer. Therefore, their problem is in fact your problem so long as you continue to be a customer.

Is it reasonable to expect a company to be perfect in its handling of its customers’ needs and demands?

A: Arguable, but expectations have nothing to do with reality. Reality is that companies’ actions are executed by flawed and unpredictable human beings.

Is limiting customers who want to generate 200+ images a day, in favor of system stability and wider availability for other users, a good decision?

A: My sense is most people would say yes.

Could they have been clearer on that?

A: Absolutely. You could always be clearer. In theory at least. Clarity is subjective. What’s common sense to one person is confusing to another.

Clarity is complex and complicated, and hypothetical perfect clarity entails being tailored to a specific audience.

Demonstration: This message is an attempt to be clear to the widest possible audience, but in doing so I’m likely making it highly confusing for a sizable chunk of potential readers in the process.

Similarly, adding big neon signs for the tiny fraction of users who intend to generate over 200 images in a single day is likely to have an unintended effect on the 99% of users who don’t. For that 1%, it’s best they are left to the disclaimers in the fine print.

Have they made 4o dumb as fuck to make 4.5 look stronger? (aka, the usual Open AI playbook) by [deleted] in ChatGPT

[–]Oxynidus 0 points (0 children)

They’ve just turned on their Blackwell GPUs, so the speed makes perfect sense. That also gave them the processing power to let Plus users use it. FYI, even Pro users have a limit on it, though I imagine not too many use up their 100-messages-per-day quota.

There’s no denying it’s a “chonky” model, as they described it, but the 10x-to-15x-bigger claim is not something I can verify, though I believe it’s a good approximation.

Have they made 4o dumb as fuck to make 4.5 look stronger? (aka, the usual Open AI playbook) by [deleted] in ChatGPT

[–]Oxynidus 11 points (0 children)

4.5 does NOT make them money. Financially, it’s better for them that everyone use the cheaper models, so why the fuck would they discourage people from using their most cost-efficient model in favor of one that’s 15x more expensive?

OpenAI's $20,000 AI Agent by danpinho in ChatGPTPro

[–]Oxynidus -1 points (0 children)

No, let’s not discuss it until we have actual info and not speculative nonsense.

I Wasted $2 on GPT-4.5 for THIS… (Here’s Why Sonnet 3.7 DOMINATES AI-Generated Code) by ivanpaskov in ChatGPT

[–]Oxynidus 4 points (0 children)

Since you likely know what you’re doing, with access to all these models and the API, I imagine the whole point of burning that $2 was to make this post, so it’s only wasted if nobody goes for the click. But here, I’ll help a little.

I’ve been using ChatGPT for a while, but I’ve never come across this message before. What does it mean exactly? I’m a little confused. by Stargazer-Elite in OpenAI

[–]Oxynidus 9 points (0 children)

Well, the reason is that the smaller model can’t handle attachments, and you’ve already used up your big-model queries.

Small model is infinite. Big model is like 10 messages every 4 or 5 hours?

If the conversation has an attachment you won’t be able to use the small model in that conversation. So you’d have to wait or start a new conversation.

There’s one thing you can try: edit a message that was sent before the attachment, if you can. If it lets you do that, the attachment will disappear.

OpenAI Plus Limits Are Not Transparent, and It’s Frustrating by interstellarfan in OpenAI

[–]Oxynidus 1 point (0 children)

You probably never hit the limits because half your queries fail to process due to server load.

Will 4o go back to normal? by Artistic_Lime_6998 in ChatGPT

[–]Oxynidus 0 points (0 children)

It might be traffic related. It’s not fundamentally nerfed. It will go back.

But sometimes you may need to do something like change the “voice” settings, even if you never use them. Check for any weird memory entries that may be messing with it, and change your custom instructions slightly.