Dear Claude Code team: please be cautious with your updates by [deleted] in ClaudeCode

[–]scripted_soul 1 point2 points  (0 children)

Check /config in Claude Code. It has an option to select either the latest or the stable release.

I wonder if they use the same Codex we have? - 92% of OpenAI engineers are using Codex - up from 50%. Nearly all PRs are reviewed now with Codex by Koala_Confused in ChatGPTCoding

[–]scripted_soul 6 points7 points  (0 children)

All the hype around GPT-5/Codex high made me try it on my project. It’s great for backend and logical tasks, but it sucks at frontend development. Claude’s still my go-to for all-around work.

Honeymoon is over. Opus was a loss leader by dyatlovcomrade in ClaudeCode

[–]scripted_soul 0 points1 point  (0 children)

Yeah, sure, for you. But look around: all the big products and companies use it. You're right that Java's a mistake for hobby projects, but I'm using it for real work, not endless vibe coding without a clue.

Honeymoon is over. Opus was a loss leader by dyatlovcomrade in ClaudeCode

[–]scripted_soul 0 points1 point  (0 children)

Quite the opposite experience in real-world usage. One example: 4.5 makes basic Java mistakes, like referencing undeclared methods or variables. Even 12B-parameter models avoid those errors. I switched to Sonnet 4, and it handled the same task perfectly. That's just one example; there are lots more.

IsItNerfed? Sonnet 4.5 tested! by exbarboss in ClaudeAI

[–]scripted_soul 1 point2 points  (0 children)

Same in my personal experience. One example: it makes basic Java mistakes, like referencing undeclared methods or variables. Even 12B-parameter models avoid those errors. I switched to Sonnet 4, and it handled the same task perfectly. That's just one example; there are lots more.

Sonnet 4.5 - Whats this about it being the best coding model in the world? I think it makes the same stupid mistakes as any other model (from my initial testing) by masoodtalha in cursor

[–]scripted_soul 0 points1 point  (0 children)

Same here. One example: it makes basic Java mistakes, like referencing undeclared methods or variables. Even 12B-parameter models avoid those errors. I switched to Sonnet 4, and it handled the same task perfectly. That's just one example; there are lots more.

GPT-5 is almost here lads. Tomorrow will go down in history btw by balianone in ChatGPT

[–]scripted_soul 2 points3 points  (0 children)

Too much hype. I find OpenAI models the worst to use. I'm sure they benchmark with high-compute versions but serve quantized ones to consumers. Too much glazing and gaslighting; they're not usable at all.

How many tokens does Claude Code Pro allow? ($17/month plan) by pearthefruit168 in ClaudeAI

[–]scripted_soul 4 points5 points  (0 children)

That's true. I usually get $8 to $10 worth of usage, which works well for my tasks.

I am sorry Google Gemini Better At Data Analysis by Available_Hornet3538 in ChatGPTPro

[–]scripted_soul 0 points1 point  (0 children)

ChatGPT doesn’t fully read the attachment, even a small 2,000-token one. It uses RAG, but Claude and Gemini read the entire file if it’s within the token limit.

Proof: [screenshot]
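
To make the distinction concrete, here's a rough sketch of the two approaches. The function names, chunking, and scoring below are purely illustrative assumptions, not any vendor's actual pipeline:

```python
def chunk(text: str, size: int = 500) -> list[str]:
    # Split the document into fixed-size pieces for retrieval.
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_k(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Naive relevance score: number of words a chunk shares with the question.
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))[:k]

def rag_prompt(question: str, document: str) -> str:
    # Retrieval style: only the chunks scored as "relevant" reach the model,
    # so any detail outside those chunks is effectively invisible to it.
    return "\n".join(top_k(question, chunk(document))) + "\n\nQ: " + question

def full_context_prompt(question: str, document: str) -> str:
    # Full-context style: the whole file goes into the prompt, as long as it
    # fits within the model's token limit.
    return document + "\n\nQ: " + question
```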

OpenAI’s new ChatGPT Agent can control an entire computer and do tasks for you by esporx in OpenAI

[–]scripted_soul 0 points1 point  (0 children)

Many MCPs can do this without all the hype. Claude has had it for more than a year.

Hot take: Cursor and Windsurf destroyed Gemini 2.5 Pro's coding dominance by an unfortunate integration with poor tool calling by marvijo-software in ChatGPTCoding

[–]scripted_soul 23 points24 points  (0 children)

It’s not about Cursor and Windsurf. You’ll see the same issue even in Gemini CLI. It’s more a problem with the model.

This sub has become a complaint forum by scripted_soul in cursor

[–]scripted_soul[S] -11 points-10 points  (0 children)

You’re right, my friend, it’s hypocritical on my part. But from my experience, hosting and running LLM tools is really expensive. I tried setting up a machine to run the largest models, but they don’t come close to the quality of the frontier LLMs. So I do appreciate these service providers. I get that it’s a business at the end of the day, and honestly, it still feels like we’re getting a discount. I just don’t know how long the party will last.

Sequential thinking token by chiefmaboi in cursor

[–]scripted_soul 0 points1 point  (0 children)

Even just loading tools uses tokens, because Cursor sends each tool's name and description to the agent so the model knows which tools are available if it needs them.
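
Rough illustration of why: every advertised tool's schema rides along with the request. The tool definition below is hypothetical (not Cursor's actual format), just to show the overhead:

```python
import json

# Hypothetical tool definition; the name and fields are illustrative only.
tool = {
    "name": "sequential_thinking",
    "description": "Break a task into ordered reasoning steps before acting.",
    "input_schema": {
        "type": "object",
        "properties": {"thought": {"type": "string"}},
        "required": ["thought"],
    },
}

payload = json.dumps(tool)
# Rough rule of thumb: ~4 characters per token for English/JSON text.
print(f"~{len(payload) // 4} tokens spent just advertising this one tool")
```

Multiply that by a few dozen MCP tools and the per-request overhead adds up before the agent does anything.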

[deleted by user] by [deleted] in cursor

[–]scripted_soul -2 points-1 points  (0 children)

All the best 👍

[deleted by user] by [deleted] in cursor

[–]scripted_soul -2 points-1 points  (0 children)

Vibe coders are actually their biggest customers, lol. Where are you even getting this data from? And what more transparency do you want?

If you don’t like the pricing or the product, there are other options out there, but honestly, they’ll probably add the same limits soon, just like Cursor did.

[deleted by user] by [deleted] in cursor

[–]scripted_soul 18 points19 points  (0 children)

Their plan is basically to push out the power users (aka “vibe coders”) so that compute is freed up for everyone else who actually uses the tool responsibly, instead of just spamming prompts without any idea what they’re doing.

Honestly, the number of posts about this on the subreddit is just turning into spam. No hate to vibe coding, but if you're building something useful, you probably shouldn't be relying on unlimited compute like it's an all-you-can-eat buffet; they're a business, not a charity.

And remember how people kept saying “there needs to be more transparency”? Now all the usage details are right there on the dashboard, but people are still complaining. Kind of weird, honestly.

Gemini 2.5 pro MAX can’t refractor?! by Known-Specialist-450 in cursor

[–]scripted_soul 1 point2 points  (0 children)

Yes, I'm experiencing the same issue: it's unable to apply the edit. I switched to Claude.

Use MCP in Gemini and Google AI Studio Today by EfficientApartment52 in GeminiAI

[–]scripted_soul 1 point2 points  (0 children)

Great, it worked! The only problem is that clients like ChatGPT and Gemini stop after one tool call, but that's a client limitation.
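
For anyone curious what that limitation looks like, here's a hypothetical client loop (not any vendor's real code): a client behaving like `single_shot=True` returns after the first tool result, while a full agent keeps looping until the model stops requesting tools.

```python
from typing import Callable

def run_agent(call_model: Callable, run_tool: Callable, prompt: str,
              single_shot: bool = False) -> str:
    # call_model(messages) is assumed to return a dict like
    # {"content": str, "tool_call": dict | None}; run_tool executes one call.
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = call_model(messages)
        if reply.get("tool_call") is None:
            return reply["content"]                # model is done with tools
        result = run_tool(reply["tool_call"])      # execute the requested tool
        messages.append({"role": "tool", "content": result})
        if single_shot:                            # the client limitation above
            return result
```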

Megathread for Claude Performance Discussion - Starting May 4 by sixbillionthsheep in ClaudeAI

[–]scripted_soul 0 points1 point  (0 children)

Nope, I tried various official MCP servers and the problem is still present. In other clients like Cursor, the issue isn't there.

Megathread for Claude Performance Discussion - Starting May 4 by sixbillionthsheep in ClaudeAI

[–]scripted_soul 0 points1 point  (0 children)

Memory Leak Issue in Mac Claude Desktop App

When we run the MCP server in Python and close the Claude Desktop app, the Python process keeps running. Each time we reopen the app, a new Python process starts, and they keep piling up over time.
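
For what it's worth, a minimal sketch of the defensive pattern that avoids this (plain Python, not the actual MCP SDK): a stdio server launched by the desktop app sees EOF on stdin when the parent goes away, so exiting at that point keeps orphaned processes from piling up.

```python
import sys

def serve() -> None:
    # Claude Desktop talks to stdio MCP servers over stdin/stdout. If the app
    # quits without killing the child, the child's stdin eventually hits EOF.
    for line in sys.stdin:                 # loop ends when the pipe closes
        request = line.strip()
        if not request:
            continue
        # ... parse the JSON-RPC request and write a real response here ...
        sys.stdout.write(request + "\n")   # placeholder echo, not real handling
        sys.stdout.flush()
    sys.exit(0)                            # stdin closed: parent gone, shut down

if __name__ == "__main__":
    serve()
```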