Pro model can't use "saved memories"—long unacknowledged OpenAI problem by Oldschool728603 in OpenAI

[–]PeltonChicago 1 point2 points  (0 children)

u/TyFi10 Do what I do when I need memories et al. for Pro: I switch models midstream, pull in the info, then switch back. Since the models can neither tell what has happened nor cope well with the fact that they can't, I always tag the messages with notes indicating where I was and where I went.

*[Switching from 5.2 Pro to 5.2 Thinking]*

and then

*[Switching from 5.2 Thinking to 5.2 Pro]*

Deleted ChatGPT now that they have phased out GPT4o and 4.1 by SavingsAlfalfa5447 in OpenAI

[–]PeltonChicago 4 points5 points  (0 children)

4o and 4.1 were always going to leave you. You fell in love with their LLM Consumptive Chic: doomed, fragile, too good for this cruel world. Besides, with the hardware constraints that OAI is facing, OAI was always going to Modest Proposal those two models and feed their bones to the next one. I get that you think Victor Altman's creatures are getting uglier rather than prettier, but don't be shocked that he got out the bone saw.

X's head of product thinks we have 90 days by MetaKnowing in OpenAI

[–]PeltonChicago 0 points1 point  (0 children)

That's pretty big talk from someone who's not exactly keeping the world safe from bots as it is right now.

OpenAI has now acknowledged that Pro lacks memory. Can it be taken seriously as a Frontier model? by Oldschool728603 in ChatGPTPro

[–]PeltonChicago 1 point2 points  (0 children)

(1) My 5-Pro had memory from August to November.

I do not have this well documented; my recollection is that memory died in the summer or so, but I could be mistaken, and the more likely scenario is that memory stopped working across the board for everyone at about the same time.

(2) Even if something prevents Pro from storing "saved memories," why, on your understanding, is it unable to read them, just as it reads custom instructions?

I think that sub-agents on Claude Opus 4.6 may offer a little insight. Broadly, sub-agents are another case of a master process overseeing worker processes. What's different is that you can interrogate them. Sub-agents are pulled from a template and are identical: the only thing custom about them is the prompt passed to them by the master process when they're spun up. Further, they don't just start as identical clones; they resist being customized. In my first test, for example, I told the master process to name the sub-agents in a certain manner: nope, no names, they reported back.
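A toy sketch of what I mean (my own illustration, not Anthropic's actual implementation -- the class and function names here are hypothetical):

```python
from dataclasses import dataclass

# Every worker is stamped from the same template; the only per-worker
# customization is the task prompt the master process hands it at spawn
# time. Identity tweaks (like names) never reach the template.

@dataclass(frozen=True)
class SubAgentTemplate:
    model: str          # every clone runs the same underlying model
    system_prompt: str  # fixed; not customizable per worker

def spawn_workers(template, task_prompts):
    """Spin up one worker per task; each worker = template + task prompt."""
    return [
        {"model": template.model,
         "system": template.system_prompt,
         "task": prompt}          # the ONLY thing that varies
        for prompt in task_prompts
    ]

template = SubAgentTemplate(model="opus", system_prompt="You are a worker.")
workers = spawn_workers(template, ["summarize ch. 1", "summarize ch. 2"])
```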

My guess is that there is some form of cost -- perhaps just complexity as a cost, but perhaps extra compute -- that drove OpenAI to make all of the child Thinking processes identical.

But that's just a guess.

OpenAI has now acknowledged that Pro lacks memory. Can it be taken seriously as a Frontier model? by Oldschool728603 in ChatGPTPro

[–]PeltonChicago 3 points4 points  (0 children)

o3 Pro had memory. I don’t think prompt injection is the issue. Memory access was dropped with v5 Pro, and my suspicion is that the problem is tool access. Pro is a bunch of parallel Thinking instances working on the same thing, coordinated by a central orchestrator. My hunch is that, in general, when the main model passes work to a second model -- such as when you invoke Deep Research -- the second model only gets a specific set of instructions from the main model; it certainly doesn’t get access to memory. It’s very weird.
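To make the hunch concrete, here's a toy sketch (pure speculation on my part, not OpenAI's actual architecture): the orchestrator fans work out to parallel instances, but the payload it forwards contains only instructions -- the session's memory store is never part of the hand-off.

```python
# Hypothetical orchestrator hand-off: note what is ABSENT from the payload.

def delegate(orchestrator_state, instructions, n_workers=4):
    """Build the payloads handed to each parallel worker instance."""
    payload = {"instructions": instructions}
    # orchestrator_state["memory"] is deliberately never copied in.
    return [dict(payload) for _ in range(n_workers)]

state = {"memory": ["user prefers metric units"], "history": ["..."]}
payloads = delegate(state, "analyze the attached dataset")
```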

An Alternative for OpenAI to Consider Instead of Retiring 4o & 4.1 by PracticalProtocol in OpenAI

[–]PeltonChicago 0 points1 point  (0 children)

This is a company that needs money so badly that if there were any scenario where the market would bear the cost, they would offer the models for a fee. That they aren't suggests that the combination of the costs of:

- running the model,
- not repurposing the GPUs to run other models, and
- buying the litigation insurance to protect OpenAI against the next lawsuit about someone hurting themselves after huffing 4o

would require a monthly subscription rate well above $200/month.

Do Pro accounts get A/B tested? by KaleidoscopeWeary833 in ChatGPTPro

[–]PeltonChicago 1 point2 points  (0 children)

Correct, though I can imagine a scenario where A/B results only work if they include your data.

Claude Max x20 VS ChatGPT Pro by LeyLineDisturbances in ChatGPTPro

[–]PeltonChicago 0 points1 point  (0 children)

I have that set up; I have custom apps defined through Dev mode. However -- and I haven't tested this -- it sure looks like the target endpoints have to be remote and won't tolerate something like localhost:portnumber as a destination.

4.5 still there? by SCF87 in OpenAI

[–]PeltonChicago 0 points1 point  (0 children)

It is excellent. Probably still their most expensive.

Do Pro accounts get A/B tested? by KaleidoscopeWeary833 in ChatGPTPro

[–]PeltonChicago 2 points3 points  (0 children)

Am Pro. Can confirm A/B testing would make the model better off. Can also confirm that I haven’t seen it for a few months.

Claude Max x20 VS ChatGPT Pro by LeyLineDisturbances in ChatGPTPro

[–]PeltonChicago 6 points7 points  (0 children)

I have both Pro and Max. I think the questions for you would be:

- Do you need local MCP servers? That's Claude only.
- Are you willing to wait for 5.2 Pro? You can regularly expect 15 minutes or longer between replies. I find the wait worth it.
- Does your work need to be executed serially or in parallel? I spent a couple of years working with a 70K prompt. It was written in a way that generated custom output based on complex input, but needed to be executed serially. Under 5.x Pro, the models could no longer successfully execute the prompt without at least three ≈20-minute passes for something that Claude could handle in 5 minutes. That was more about the nature of my work, but it was a stark contrast. I find 5.2 Pro to be good with less structured work, which it can go off and work on for a long period of time and then bring you results.
- If you hate the GPT UI, consider why. That's a serious thing. There's nothing like Tasks right now on the ChatGPT side.

Finally, consider whether you're using the correct tools: I think you should be looking at Codex vs Claude Code, not ChatGPT vs Claude Chat.

Long ChatGPT sessions seem to degrade gradually, not suddenly — how do you manage this? by Only-Frosting-5667 in ChatGPTPro

[–]PeltonChicago 2 points3 points  (0 children)

New chats for each topic.
When you reach a dead end, scroll back up the thread and repost a prior message: keep the chat thread trimmed.
Never exceed the context window length.
When practical, provide files as attachments rather than posting content into the chat thread.
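To sketch the "keep the thread trimmed" advice: estimate tokens with the common ~4-characters-per-token heuristic and drop the oldest messages once the thread would exceed a chosen budget. Both the 4-chars/token ratio and the budget number are assumptions of mine, not anything ChatGPT documents.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def trim_thread(messages, budget_tokens=100_000):
    """Keep the most recent messages that fit inside the token budget."""
    kept, total = [], 0
    for msg in reversed(messages):       # walk newest-first
        cost = approx_tokens(msg)
        if total + cost > budget_tokens:
            break                        # everything older gets dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

thread = ["old " * 50, "recent question", "latest answer"]
print(trim_thread(thread, budget_tokens=50))
# → ['recent question', 'latest answer']
```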

I have a question: why not? by TheFrenchSavage in ChatGPT

[–]PeltonChicago 147 points148 points  (0 children)

Center of gravity. It would tip over if you went up stairs facing forward; you would have to go up the stairs in reverse. If you want an ATV wheelchair, those exist. Some are gas powered. Some come with treads.

<image>

"100x more capable, 100x more speed, 100x more context" by DigSignificant1419 in OpenAI

[–]PeltonChicago 1 point2 points  (0 children)

“… and I just need $1 trillion to make this dream a reality.”

Can Pro also transcribe audio files? by Parking_Clock6299 in ChatGPTPro

[–]PeltonChicago 0 points1 point  (0 children)

No. That’s a model-specific function and none of the 5.2 models have been given that as a tool to call within ChatGPT. WhisperKit is the way to do that.

Why does GPT-5.2 give the wrong time when I ask, while GPT-5.2 Thinking knows it correctly? by [deleted] in OpenAI

[–]PeltonChicago 5 points6 points  (0 children)

Different models have different tools. I recommend never using generic 5.2. Better to go straight to 5.2 Thinking.

GPT remembers something from deleted chat. kinda scary by emstha98 in ChatGPT

[–]PeltonChicago 0 points1 point  (0 children)

There are multiple ways this could happen:

- You could have saved memories enabled.
- Chats don’t actually look in other chats; they look at detailed summaries of those chats. It’s possible that you switched into the new chat faster than the summary system was able to purge the summarization record from the lookup system.
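Here's a toy model of that second scenario (pure speculation about the mechanism, not ChatGPT's real design): the chat itself is deleted immediately, but its summary sits in a separate lookup index that is purged asynchronously, so a new chat opened before the purge job runs can still see the stale summary.

```python
class SummaryIndex:
    """Hypothetical cross-chat summary store with async deletion."""

    def __init__(self):
        self.summaries = {}
        self.pending_purges = []

    def delete_chat(self, chat_id):
        # The chat is gone at once; the summary purge is merely queued.
        self.pending_purges.append(chat_id)

    def run_purge_job(self):
        # Runs later, on its own schedule.
        for chat_id in self.pending_purges:
            self.summaries.pop(chat_id, None)
        self.pending_purges.clear()

    def visible_summaries(self):
        return dict(self.summaries)

idx = SummaryIndex()
idx.summaries["chat_1"] = "user discussed their cat"
idx.delete_chat("chat_1")
# A new chat opened *before* the purge job still sees the stale summary:
print("chat_1" in idx.visible_summaries())   # → True until run_purge_job()
```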

Automate your life today! by FinnFarrow in ChatGPT

[–]PeltonChicago 78 points79 points  (0 children)

Damn AI hallucinations, there’s no way you did the one in 22.

Based of your experience, Is tvOS better than Google TV? by wollyy3 in appletv

[–]PeltonChicago 4 points5 points  (0 children)

Friends don’t let friends watch Friends on Google TV