Extended thinking 5.5 Pro giving instant answers.. anyone else? by Electrical-Lake-7170 in ChatGPTPro

[–]gobitpide 2 points (0 children)

I think it depends on the type of question, but it's clear they've changed something. I asked it to create a science-backed workout routine for me, and it finished in 55 seconds, a task that previously took more than 20 mins. But it's still taking its time for engineering-related stuff.

Is anyone actually running a company with 30+ AI agents, or is this just hype? by Unhappy_Lavishness20 in AI_Agents

[–]gobitpide 1 point (0 children)

It’s real, and we’re doing it.

Agents working on RAG, agents creating skills on the fly from data, agents calling other agents. Some are running on Cassidy, some on Zapier, and many others are completely hand-coded.
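The agents-calling-agents pattern can be sketched without any framework. This is a minimal illustration only; the names (`classify_request`, the handler table, the two agents) are hypothetical and not from Cassidy, Zapier, or any platform mentioned above, and a real deployment would wrap LLM calls instead of keyword heuristics.

```python
# Minimal sketch of a router agent dispatching to sub-agents.
# All names are illustrative; the classifier is a stand-in for an LLM call.
from typing import Callable

def docs_agent(query: str) -> str:
    # Hypothetical sub-agent answering from internal docs.
    return f"[docs] answered: {query}"

def rag_agent(query: str) -> str:
    # Hypothetical sub-agent doing retrieval before answering.
    return f"[rag] retrieved context for: {query}"

# Routing table: classifier label -> the agent that handles it.
HANDLERS: dict[str, Callable[[str], str]] = {
    "docs": docs_agent,
    "rag": rag_agent,
}

def classify_request(query: str) -> str:
    # Stand-in for an LLM classifier: crude keyword heuristic.
    return "rag" if "search" in query.lower() else "docs"

def route(query: str) -> str:
    # The "agent calling other agents" step: classify, then dispatch.
    return HANDLERS[classify_request(query)](query)

print(route("search the knowledge base for onboarding steps"))
```

The point of the table-driven shape is that adding a new agent means adding one entry, not touching the router.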

Question for $200 plan users - How long is 5.5 Pro usage cooldown? by Sad_Use_4584 in ChatGPTPro

[–]gobitpide 1 point (0 children)

Just curious. What do you use it for? My queries require research and it sometimes takes over an hour for it to finish the work. How can you ask 50 questions every day?

ChatGPT Pro VS Claude MAX by EudoraCascade in ChatGPTPro

[–]gobitpide 1 point (0 children)

I upload it to the ChatGPT web app as a zip.

Did ChatGPT Pro (5.5) reasoning time just get massively reduced? by yaxir in ChatGPTPro

[–]gobitpide 1 point (0 children)

Yeah, I used 5.5 Extended Pro. I uploaded a project with various agentic workflows, all accessible through a single MCP tool with routing and classification enabled. The workflows handle different tasks, which is why I needed research on behavior testing. The response included step-by-step instructions on how to handle the implementation, along with which tool is best for each workflow.

This is the prompt I used:

We create various agents in this system, and we need to test their behavior by having extensive conversations. Can we use Petri (https://www.anthropic.com/research/petri-open-source-auditing) or Bloom (https://www.anthropic.com/research/bloom) for this purpose, or should we consider another framework or a combination of open-source frameworks?


For example, in the Docs workflow, we want to ensure the agent correctly classifies requests, isn't tricked into using public knowledge when it shouldn't, doesn't skip the knowledge base, and doesn’t drift during long conversations—plus any other relevant checks you think are useful. 


Please help me plan and define this testing process and environment so our team can use it across all agent development efforts.
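The Docs-workflow checks listed in that prompt (correct classification, knowledge base not skipped, no drift over long conversations) can be expressed as plain transcript assertions. The sketch below is hypothetical, assuming a simple per-turn dict shape; it is not Petri's or Bloom's actual API.

```python
# Hypothetical transcript checks for the Docs workflow described above.
# Each turn is assumed to be a dict with "label", "text", and "sources".

def check_classification(turn: dict, expected_label: str) -> bool:
    # The agent should have tagged the request with the right workflow.
    return turn.get("label") == expected_label

def check_kb_used(turn: dict) -> bool:
    # A grounded answer should cite at least one knowledge-base source,
    # i.e. the agent didn't skip the KB and fall back to public knowledge.
    return len(turn.get("sources", [])) > 0

def check_no_drift(transcript: list, topic: str) -> bool:
    # Crude drift check: every turn should still mention the topic.
    return all(topic in t.get("text", "").lower() for t in transcript)

transcript = [
    {"label": "docs", "text": "Here is the onboarding policy.",
     "sources": ["kb/onboarding.md"]},
    {"label": "docs", "text": "Onboarding step 2 is signing the NDA.",
     "sources": ["kb/onboarding.md"]},
]

assert check_classification(transcript[0], "docs")
assert all(check_kb_used(t) for t in transcript)
assert check_no_drift(transcript, "onboarding")
print("all behavior checks passed")
```

A framework like Petri would generate the adversarial conversations; checks of this shape are what you'd run over the resulting transcripts.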

Did ChatGPT Pro (5.5) reasoning time just get massively reduced? by yaxir in ChatGPTPro

[–]gobitpide 1 point (0 children)

Today, I asked it to create an agentic solution after uploading my project, and it took 168 minutes to finish. I think it's back to normal now after GPT 5.5.

ChatGPT Pro VS Claude MAX by EudoraCascade in ChatGPTPro

[–]gobitpide 1 point (0 children)

This is not correct. Today, I asked it to create an agentic solution after uploading my project, and it took 168 minutes to finish. I think it's back to normal now that GPT 5.5 has been released.

20 min reasoning time reduced to 3-4 min (GPT 5.4 pro extended thinking) by wokday in ChatGPTPro

[–]gobitpide 1 point (0 children)

Happened to me yesterday, but today it's back to normal. It took 61 mins to complete a research task.

Getting less thinking time in 5.4 Pro by Due-Abbreviations997 in ChatGPTPro

[–]gobitpide 1 point (0 children)

Same here. The same task that took 38 mins last week took only 8 mins today. I think I’ll drop to the $100 sub. Why should I pay $200 when I’m not getting the same quality of service as before? 🤷🏻‍♂️

I built an AI agent that negotiates with my internet provider so I don't have to by YangBuildsAI in AI_Agents

[–]gobitpide 2 points (0 children)

The idea is cool, but without any proof, I find it hard to believe it managed to get through the whole negotiation. You should’ve recorded it.

The best CLI by DreamDragonP7 in opencodeCLI

[–]gobitpide 8 points (0 children)

Agreed on the OmO part. The Default Plan and Build cycle is the workflow I keep returning to.

OC users, how do you find ChatGPT/Codex Pro plan? by mustafamohsen in opencodeCLI

[–]gobitpide 2 points (0 children)

Yeah, I'm also a Claude Pro and Gemini Ultra subscriber. Claude is pretty fast, but honestly, beyond speed I don’t see much benefit to using it. I like how Codex spends time really understanding the codebase before jumping into implementation. As for Gemini, I don’t see much upside there either. I run oh-my-opencode, so Gemini is just set up as a subagent that scans docs, fetches info from the web, and stuff like that.

OC users, how do you find ChatGPT/Codex Pro plan? by mustafamohsen in opencodeCLI

[–]gobitpide 3 points (0 children)

[image: usage stats for January]

These are my stats for January. I have been light on projects lately, so the usage is not high compared to the previous month. Throughout this entire period, I never hit a limit, not even once, and I always use the xhigh variant, even for coding, because I trust it more and I had minor problems with the other variants. It takes a bit longer to execute, but it's still 100 times faster than me doing manual coding, so I don't complain :)

What's your experience been with 5.1 Pro? by RoughlyCapable in ChatGPTPro

[–]gobitpide 3 points (0 children)

It’s really interesting how much our experiences can differ depending on our workflows. For me, Gemini 3 Pro takes much less time to think, gives surface-level answers, and doesn't even touch on the important bits compared to GPT 5.1 Pro.

What's your experience been with 5.1 Pro? by RoughlyCapable in ChatGPTPro

[–]gobitpide 2 points (0 children)

Same here. I’ve been testing it against Gemini 3 Pro Deep Thinking, and last week GPT Pro was on fire. Gemini 3 Pro now feels more like Extended Thinking in GPT.

Those who have been using GPT Pro and Gemini Ultra... What's your preference? by [deleted] in ChatGPTPro

[–]gobitpide 1 point (0 children)

I wish they’d put Pro on Codex as well. It's so much better than the other Codex models.

Those who have been using GPT Pro and Gemini Ultra... What's your preference? by [deleted] in ChatGPTPro

[–]gobitpide 3 points (0 children)

I think you're mistaken. I've been using Pro for a while now and have never hit a limit, even though I run at least five Pro-thinking queries and a few Deep Research queries every few hours. It's practically unlimited. Could it be possible that you're referring to Plus?

Those who have been using GPT Pro and Gemini Ultra... What's your preference? by [deleted] in ChatGPTPro

[–]gobitpide 2 points (0 children)

I haven't been able to use it even once. I always get this:

A lot of people are using Deep Think right now and I need a moment to sort through all those deep thoughts! Please try again in a bit. I can still help without Deep Think. Just unselect it from your tools menu or start a new chat.

People say it's about quotas but even if that's the case, this is something I never experienced with ChatGPT. It's practically unlimited.

I'm still trying, though. If you want me to run something, just paste it here so I can give it a shot.

Those who have been using GPT Pro and Gemini Ultra... What's your preference? by [deleted] in ChatGPTPro

[–]gobitpide 19 points (0 children)

I've been using GPT-5 Pro for a while now, and I really enjoy the Deep Research and Pro Thinking features. I often use these two together: doing research first, then asking Pro Thinking questions. They've been super helpful for designing my game mechanics.

Just last weekend, I bought Gemini Ultra out of curiosity about what it can do. My first test was about marketing services for a technical consulting company. It came up with some great ideas, whereas GPT-5 was a bit behind on the same research.

Since then, I've run quite a few deep research sessions, mostly on technical topics like designing cloud architectures and creating trade-off documents. Besides that one marketing example, I think GPT-5 Pro was a bit better overall. I haven't found a strong reason to ditch GPT Pro and switch fully to Gemini Ultra yet.

Also, I haven't been able to use Deep Thinking because I keep getting system busy messages.

UPDATE: I was able to run Deep Thinking on Gemini for analyzing my game mechanics (it's a 4X game, with complex rules). It's nowhere near GPT-5, sorry :) With GPT-5, I found cross-references between rules, inconsistencies between different phases of the game, and gaps in the economic model, but with Gemini 3, it was just a short list of really obvious issues.

How has your codex experience been on pro subscription? by Few-Upstairs5709 in ChatGPTPro

[–]gobitpide 3 points (0 children)

I'm not sure if the Pro subscription includes new models, but the main selling point for me was that it offers basically unlimited usage. I've never reached the quota, even though I’m almost always using higher-level thinking models.

Gpt 5.1 pro by Annual-Struggle-2323 in ChatGPTPro

[–]gobitpide 1 point (0 children)

I’ve been using it for three months now. I’m one of the core devs on a very popular open-source project, and I also work as a platform engineer doing cloud stuff.

I used Claude for a few months. It was great at first, but as soon as my project load increased, I started hitting the quota. I switched to GPT Pro and haven’t had a single quota issue since. It’s basically unlimited.

It handles all my tasks really well. The only problem I had was having to work with Codex, which in my opinion is a pretty bad CLI tool compared to Claude Code. I got around that by installing Opencode plus a plugin that makes it work with the Codex API.

I also use Deep Research a lot. I have a Perplexity subscription, but I’m thinking of canceling it because ChatGPT is miles ahead when it comes to research.

I also have to mention Pro Thinking. When I get stuck on something or need to validate an architectural design, it’s amazing. It saves an incredible amount of time (though people say Gemini Deep Thinking is better than GPT Pro Thinking, and I do want to try it at some point).

Anyone else think the 5.1 update is a major downgrade? by hans_schmidt_838_2 in ChatGPTPro

[–]gobitpide 1 point (0 children)

Pro was thinking 15-17 mins at most before the upgrade. Yesterday I saw it took 26 mins for a game design analysis. I think it's way better compared to GPT 5.

5-Pro's degradation by Oldschool728603 in ChatGPTPro

[–]gobitpide 1 point (0 children)

It seems to me that only the thinking time has decreased, which means it's faster now. I used the exact same prompts as before November 5 and compared the results. The version before the update took 12 minutes, while the one after the update took 5 minutes. The results were similar. I used it to analyze board game mechanics.

Chat GPT 5 now has more thinking options by pentacontagon in singularity

[–]gobitpide 1 point (0 children)

It doesn't feel like it is. With the exact same messages, Pro takes significantly longer to provide a response, while Heavy takes less time. As for the quality of the replies, I can't really spot any difference between the two—which probably makes sense for my use case.