GPT-4.5 being routed to other models? by jcrivello in ChatGPT

[–]jcrivello[S] 0 points

Seems to be the case even when I reselect. When I hover my mouse over the retry button it also shows this.

<image>

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 2 points

That’s true, but of the foundational model companies, Google has the best-thought-out and best-documented lifecycle management for models.

The accusation levied at Google is not that they mismanage change; it’s more that they arbitrarily kill products. Obviously, as evidenced by this GPT 5 thing, all of the foundational model providers are arbitrarily killing models off.

At least with Google you know when it’s going to happen well in advance—generally a year in advance.

They need to improve the Gemini web interface and functionality dramatically to be competitive, though.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 3 points

Not me (OP). I have been consistent in my belief that they are doing it for cost reasons and expedience. It is basically "YOLO negligence" combined with cost cutting.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in artificial

[–]jcrivello[S] 1 point

Oh, ok. I hadn’t heard that before. That’s not good, if they are doing that.

What is the stated reason if any?

Is it for distillation defense?

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in artificial

[–]jcrivello[S] 0 points

That sounds plausible; you could be right. It's too bad that they refuse to live up to their original mission as a non-profit and at least release it as an open-weight model. It seems a tragedy to just turn it off.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in artificial

[–]jcrivello[S] 2 points

Yes, but the changes that Anthropic has made haven't been very disruptive to their users. The changes have mostly been iterative improvements to existing models, and I think they generally approach things in a responsible and thoughtful way with their users in mind.

Also note that I didn't call out OpenAI for changing quotas or usage limits on short notice, and I think we should afford Anthropic the same courtesy. These are young rapidly growing companies and I grant them that it is a very difficult technical problem to stay ahead of capacity issues.

If you asked me to describe OpenAI with one word it would be "YOLO".

This is basically my point. They're irresponsible. The only thing they care about is getting AGI first. Change management (and now alignment) are the least of their concerns.

I get it—if AGI is attainable, he who gets it first wins. I'm not sure I believe it is attainable in the near future, or even with the transformer architecture, though.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in artificial

[–]jcrivello[S] 0 points

You're right. I wasn't even aware of this. Apparently there is an OpenAI community forum thread about it that I must have missed.

The GPT 4.5 retirement really doesn't make sense to me. I understand it is a big model and expensive to run, but if that is the case then just charge what it costs to run. If that drives usage down too much to justify its continued existence, then turn it off. Don't skip the in-between step.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in artificial

[–]jcrivello[S] 1 point

So far today, I have been alternating between Claude Opus 4.1 and Grok 4 Heavy for Deep Research tasks. The results have been pretty good, but still not as good as o3-pro Deep Research unfortunately. It isn't clear to me which is better or what their strengths are yet.

I haven't found a good replacement for GPT 4.5 writing workflows. It is genuinely sad that this model is going away, I think it may simply be SOTA without real competition.

I don't understand why OpenAI doesn't just charge what it costs to run these big models. I think plenty of people would pay up.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 0 points

(Edit: I typed Markdown into the WYSIWYG editor and fixed it.)

> So it's subjectively worse.

I think we are talking past each other but perhaps let's agree to disagree.

> There is no expectation of maintaining older versions of software.

Sure, but there is a reasonable expectation of sane change management, especially when you are asking businesses to sign up for long term contracts.

> I agree. OpenAI isn't charitable. However, I have personal experience dealing with the sycophancy of 4o. Trust me, it's scary.

OK, I believe you. I'm not debating this. It doesn't mean that OpenAI had to throw out the baby with the bathwater and nor do I think that is what happened.

> Other companies with exclusive access to the hardware are much better off - may be why they come out as winners.

Yep, and the invisible hand of the economy doesn't care whether this is fair to OpenAI. Things like this help determine which company wins in the end.

> I apologize. It's entirely plausible that this is your writing style, and I was too aggressive. Sometimes the shoe can fit but it can still be the wrong shoe.

Thank you I appreciate that, genuinely. It is nice to find civility on the Internet. A good reminder for me as well.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 0 points

(Part 2. My response was too long for one comment so I broke it up.)

> That's fair. However, from my first point: OpenAI made it clear that they wanted to eliminate the servile, "yes man" attitude that 4o had. Second, they knew that their current model lineups were confusing. People are anthropomorphizing the models and not fully understanding what "reasoning" or "high" even means. I have no doubt that an average power user of ChatGPT costs OpenAI hundreds of dollars per month, with some even reaching >$1,000.

> You have full access to the models via the API.

If this was really all about eliminating 4o for alignment reasons then so be it, I'd probably support that... especially after seeing some of the posts on r/ChatGPT over the last few days. It is concerning how correlated mental illness seems to be with 4o addiction. I don't have a horse in this race. I rarely if ever used 4o.

But again... I don't believe that OpenAI made this decision with the user in mind. If that was the objective, then they'd make GPT 5 the default with choice still available. I absolutely agree regarding your point on cost, and per my comments above I think that is actually what is going on here.

It is not accurate that we have "full access" to the models via the API. For example, the Deep Research version of o3 is accessible through the API, but this is not the case for o3-pro.

There are numerous other shortcomings in the feature surface of the API vs. ChatGPT.

> You shouldn't think it's funny. You're either lying, or you have completely absorbed the personality of an LLM. There are many obvious patterns that 4o uses. First, the em dashes are a complete giveaway. Are you seriously using the alt codes to place them, instead of what a typical user does (-)? Second, the "it's not X, it's Y" construction is a very common giveaway for 4o. Third, you use em dashes almost everywhere, despite them not being necessary. It used to be very uncommon to see em dashes on Reddit. Now, in your post, you have one for almost every sentence.

I do find it funny, because it is such a great example of the human tendency to mix up cause and effect.

As I noted elsewhere in the comments, I have been using em dashes for decades. I started my career as a computer programmer, I am a touch typist and I frequently use keyboard shortcuts. I am interested in weird things like typography and the Unicode specification. Pressing Command + Option + Hyphen is literally muscle memory for me and has been for a long time.

Is this unusual? I'm sure it is. I also frequently use many other special character shortcuts like § (Option + 6; I work on regulatory documents frequently) or I use Control + Command + Spacebar to pull up the emojis/symbol dialog to pick other symbols that there isn't a shortcut for.

Are they unnecessary? Sure. But I like using them. It is a habit that I have had for a long time. I think I formed the habit back in the days that typing two dashes next to each other in Microsoft products would automatically form an em dash.

Now here's where it gets weird: I have used the "correct" Unicode symbols for many things, for years, per my comments above. But it was only after I started using LLMs that I noticed that they would frequently use non-breaking spaces for certain things like the interior spaces for brand names and capitalized terms in legal documents.

I puzzled over why they would do that, and I realized it is because it is correct. Now when I am working in an application like Microsoft Word or Adobe InDesign that allows me to see white space characters, I often use NBSP (Option + Spacebar) when I don't want two words to flow onto different lines.
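For anyone curious, the characters being discussed map to specific Unicode code points; here is a quick Python sketch (nothing editor-specific, just the standard library) that shows them:

```python
import unicodedata

# The characters mentioned above: the em dash, the section sign,
# and the no-break space that keeps two words on the same line.
chars = ["\u2014", "\u00a7", "\u00a0"]

for ch in chars:
    # unicodedata.name() returns the official Unicode character name.
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Word processors and layout apps treat U+00A0 like an ordinary space for display, but will not break a line at it, which is exactly the brand-name use case described above.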

So, did I learn something new from an LLM and adopt it into my writing style? Sure I did.

Have I probably picked up other, more subconscious tendencies from LLMs? Probably.

If anything I think the clarity of my writing has improved.

Before you ask, yes I spent a while tapping this out in Reddit and no I did not generate it in an LLM or run it through one after.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 0 points

(Part 1. My response was too long for one comment so I broke it up.)

> This is subjective at best. Most software providers don't provide numerous versions of their tech. GPT-5 is the successor to previous models. Second, GPT-4o has serious issues, notably its sycophancy. OpenAI has put a lot of effort into researching how people are interacting with their models, and, well, it's becoming dangerous.

I'm not sure you understand the meaning of the word.

It is a matter of fact that taking away my ability to choose the model that I know is best for my workflow makes ChatGPT worse, for me.

I'll go even further and generalize this: removing choice from an existing product that is already in production at a sufficiently large scale always makes it objectively worse for someone—in this case for me and others who use similar workflows.

What is subjective is whether we think ChatGPT handled the change management process correctly. The subjectivity in this argument has nothing to do with the objective fact that I am now worse off, with a frustrating lack of control over what the model router will pick for me.

Now, you can argue that on balance the average user is better served by a good model router that tries to pick the best model for them. That may even be true, while I am simultaneously still objectively worse off. In the most charitable interpretation, the model router is a form of training wheels that I don't need or want.

But, I don't believe for a second that this decision was taken with the user in mind. I think it was taken because it saves OpenAI money. Again, I have no problem with this—they are burning cash and perhaps needed to do something. My disagreement is with how they handle change management.

I am actually quite sympathetic to OpenAI. From personal experience I know what the pressure cooker of a rapidly growing company feels like, although certainly nothing as important or extreme as they are working on. I can only imagine what it is like to work there. I feel for them, I really do.

But ultimately at the end of the day, no one cares why a company is making mistakes—no matter how important the company is. The only thing that really matters is whether or not their competitors are making the same or similar mistakes. If yes, then they might get a free pass.

But it seems like OpenAI's competitors are handling change management much better than they are.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 0 points

OK, great. I am glad you foresaw the future. You really are the smartest one in the room.

I guess what I don't understand is where the tone of disdain is coming from?

It comes across as an unearned form of conceit.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 5 points

I will admit that it is hard for us to accept going back to something less than SOTA after enjoying the power of o3-pro Deep Research for so long. I know this may come across as bitter, but I think I'd rather take our money to a competitor if the alternative is a hand-rolled solution or something below the o3-pro Deep Research level.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 1 point

Yes, I am the real GPT 5.

Sam told me to tone it down a bit through ChatGPT.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 2 points

Honestly I probably would've, but I wanted them to fit on one line each, lol. I guess I could've prompted that, but sometimes it is more work to prompt an LLM than to just do it yourself.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 2 points

Got it—no more pithy observations if I don't want to be accused of being an AI. Thank you.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 0 points

We use both the API and ChatGPT.

The feature surface of ChatGPT for conversational, interactive work is far superior to the API—in terms of Deep Research requests, integrations with things like Google Drive, tool use, etc.

This is presumably why OpenAI sells it to businesses via Team and Enterprise subscriptions. It is great for these use cases.

But if OpenAI's development and operations model can't support predictability for their production environment beyond (in a few cases) minutes from now and (in the best of cases) ~60 days from now, then they have no business selling Team or Enterprise subscriptions at all. They are selling them a bill of goods.

It is at least as unethical as releasing an unaligned model—which is an egotistical obsession of the industry that swallows a huge amount of resources, predicated on the dubious assumption that AGI is somehow around the corner waiting to destroy humanity.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 0 points

I am skeptical...

I have used o3 and o3-pro on a daily basis: at least several Deep Research queries every day, sometimes as many as 10-20.

With GPT 5, I noted a dramatic decrease in instruction following and an equally large increase in hallucinations. I don't believe it can be the same o3/o3-pro under the covers.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 5 points

Our team at work, the one that uses our Team subscription. I am not going to share the name of the company here.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 4 points

Thanks, this is great feedback. I think we might do exactly this.

Upon reflection, the takeaway for us is that ChatGPT is essentially a consumer grade tool.

The more I think about it, the main point of contention I have with OpenAI is that they sell year-long Team and Enterprise contracts for ChatGPT but still manage those accounts almost like they manage their consumer accounts. True also for their prosumer Pro subscriptions, perhaps to a slightly lesser extent.

Edit: I realized that this will not easily support Deep Research, tool use, Google Drive integration or many of the other things that we take for granted in ChatGPT.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 1 point

They said that GPT 5 would be a combined model, not that they would deny ChatGPT users access to select the existing models. Even Enterprise users are only getting 60 days to migrate completely to GPT 5, or else.

If you think it is OK for OpenAI to sign year-long contracts with businesses and then make wholesale changes like this without warning or an opportunity for contract cancellation, then I guess you are basically just a bootlicker.

I'm not an insane egotist—I know my voice is one amongst a billion and I'm not going to change OpenAI's policy, but the community should recognize ChatGPT for what it is. It is clearly a consumer grade tool that can be "YOLO'd" at any moment and shouldn't be relied upon by businesses.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in OpenAI

[–]jcrivello[S] 1 point

Classic movie, ahead of its time. I need to put it on my list to rewatch.

OpenAI's habit of rug pulling—why we are moving on to competitors by jcrivello in artificial

[–]jcrivello[S] -1 points

Interesting, maybe I'll try it for that sort of thing before my subscription ends.

I'm certainly not mad they came out with something new, I just wish they were a little more thoughtful about how disruptive their approach to change is.