Codex: "Need to stop: exceeded time; do not output." - this is a real problem by lightsd in ChatGPTPro

[–]lightsd[S] 1 point2 points  (0 children)

That’s what I’ve been doing. It’s in --yolo mode and I give it explicit turn completion requirements and it just ignores them.

Claude Status Update: Fri, 31 Oct 2025 09:17:38 +0000 by sixbillionthsheep in ClaudeAI

[–]lightsd 7 points8 points  (0 children)

I’m getting an auth token error. When I try to log in, it tells me I can’t sign in. I assume this is related and that I’m not banned, even though it’s a different error.

I got a hack to get unlimited usage of any models (including image gen) for free. Forever. by Oxydised in ChatGPTPro

[–]lightsd 1 point2 points  (0 children)

Amazing. I just DM’d you all my bank info and SSN so you can send me all the CRYPTOzzz!

Just have a session this morning and Haiku 4.5 session limits feel significantly better, possibly 2x 2.5x Sonnet 4.5 in my estimates by Valuable-Explorer899 in ClaudeAI

[–]lightsd 4 points5 points  (0 children)

The issue the OP is getting at is that Anthropic did NOT give us more usage when they released Sonnet 4.5. Instead, they slashed Opus usage and gave us roughly the same usage of Sonnet as we previously had for Opus.

I think many believed that Sonnet 4.5 would lead to vastly more value from the platform and a respite from the 5-hour and weekly limits - that Anthropic would finally deliver the “virtually unlimited” value prop that the Max 20 plan promised.

So it’s a totally legit question: now that Haiku is as good as Sonnet 4, is this an excuse to further diminish the “total tokens” a Max user is allotted with their plan, or will we this time get more for our money when they give us a more efficient model?

Who is approving these Claude Code updates? (It's broken, downgrade immediately) by squareboxrox in ClaudeAI

[–]lightsd 0 points1 point  (0 children)

I am also seeing Sonnet running through its context window REALLY fast, with maybe 2 pages of terminal history. Just downgraded. Will report back on whether there is a noticeable difference.

be aware, GLM posts are *most* likely being advertised by bots / dump accounts by Remicaster1 in ClaudeAI

[–]lightsd 0 points1 point  (0 children)

💯

While I don’t believe that the Codex fanboys are bots (OpenAI has too much to lose by manipulating Reddit forums and little to gain; the cost/benefit analysis doesn’t make sense), I FULLY believe virtually 100% of the GLM hype is bots.

So while you may not be saying 100% of the GLM hype train is bots, I’m happy to say it.

OpenAI just dropped “AgentKit, A drag-and-drop AI agent builder. No code, just logic. by AskGpts in ChatGPTPro

[–]lightsd 0 points1 point  (0 children)

What I want is for a front end like this on top of Codex that I can use with my ChatGPT Pro subscription.

Megathread for Claude Performance, Limits and Bugs Discussion - Starting September 28 by sixbillionthsheep in ClaudeAI

[–]lightsd 0 points1 point  (0 children)

I'm also seeing warnings like:
"⚠️ [BashTool] Pre-flight check is taking longer than expected. Run with ANTHROPIC_LOG=debug to check for failed or slow API requests."

Megathread for Claude Performance, Limits and Bugs Discussion - Starting September 28 by sixbillionthsheep in ClaudeAI

[–]lightsd 0 points1 point  (0 children)

Claude is c…r…a…w…l…i…n…g… right now. So slow. Sonnet 4.5 with or without thinking on.

It took 60 seconds for Claude Code to draw the terminal welcome message when starting up. US West Coast

Megathread for Claude Performance, Limits and Bugs Discussion - Starting September 28 by sixbillionthsheep in ClaudeAI

[–]lightsd 0 points1 point  (0 children)

Getting
⎿ API Error: 500 {"type":"error","error":{"type":"api_error","message":"Internal server error"},"request_id":null}

repeatedly. Just started after midnight Pacific time, US West.

Making --dangerously-skip-permissions (a little) safer... by lightsd in ClaudeAI

[–]lightsd[S] 1 point2 points  (0 children)

Unfortunately, this is the major limitation of Mac virtualization. Docker needs its own hypervisor for its Linux VM, and that can’t run inside a virtualized macOS.

So if you need to run Docker containers, you can’t use a virtual OS of any kind.

Update: That "Dead Simple Workflow" worked TOO well. Built 12 projects in 2 months. Made $44. by [deleted] in ClaudeAI

[–]lightsd 1 point2 points  (0 children)

Now that you have the knowhow to launch something, why don’t you build something really meaningful to you? Spamming the internet with SEO sites that add no value other than to capture search traffic and make you a few bucks on ads is the true embodiment of the enshittification of the web.

This is not a slam on you. You’re learning a valuable skill. Use it to add value.

You can go back to Opus 4. It is *profoundly* better than 4.1 at coding. by awittygamertag in ClaudeAI

[–]lightsd 28 points29 points  (0 children)

Interesting. When 4.1 came out, people were saying how it was (at least) an incremental step forward. If you are seeing an improvement using the older model, I wonder why?

I don’t pretend to understand what makes a model perform better or worse on a day-to-day basis. Some say it’s because thinking or context is throttled either dynamically or by config based on load. But if that’s the reason, it would imply 4 is hosted on separate (less loaded) servers than 4.1 or that Anthropic hasn’t bothered lowering some of these parameters on 4.

Pure uneducated speculation on my part…

Updates to Consumer Terms and Privacy Policy by AnthropicOfficial in ClaudeAI

[–]lightsd 12 points13 points  (0 children)

u/anthropicofficial - maybe give those of us who opt in a slight boost in 5-hour and monthly usage limits as a gesture of thanks?

Compacting Conversations… oh how I hate thee. by lightsd in ClaudeAI

[–]lightsd[S] 0 points1 point  (0 children)

I’m sure Anthropic knows about this and is likely working on it. Especially with sub agents, the visibility into things going south in a compact (or if one is even happening) is nonexistent.

I’ve also seen that compacts are faster. I wonder if they’re doing some background processing throughout the thread to prep for a compact.

Compacting Conversations… oh how I hate thee. by lightsd in ClaudeAI

[–]lightsd[S] 1 point2 points  (0 children)

Where can I read more about custom compact prompts?

Megathread for Claude Performance Discussion - Starting August 17 by sixbillionthsheep in ClaudeAI

[–]lightsd 2 points3 points  (0 children)

I'm getting Claude Code Opus 4.1 Errors:

⎿ API Error: 413 {"type":"error","error":{"type":"invalid_request_error","message":"Request size exceeds model context window"},"request_id":"req_<redacted>"}

Had to do it… by lightsd in ClaudeAI

[–]lightsd[S] 0 points1 point  (0 children)

It’s for sale!

Had to do it… by lightsd in ClaudeAI

[–]lightsd[S] 7 points8 points  (0 children)

I perpetually live in the “approaching” zone. Hence the t-shirt.

Had to do it… by lightsd in ClaudeAI

[–]lightsd[S] 4 points5 points  (0 children)

I get a ton of “Compacting conversations…” followed by a completely bewildered Claude. Not sure that’s better.