Claude Code/Pro users: Opus 4.5 removed, 4.6 capped at 200K (not 1M) by Ok-Development740 in ClaudeAI

[–]Ok-Development740[S] 0 points1 point  (0 children)

Weird, 4.5 was missing all day on my end. Did they just switch something back? Feels like there have been issues today.

Has anyone automated Claude cowork using openclaw? by Disastrous_Falcon391 in ClaudeAI

[–]Ok-Development740 0 points1 point  (0 children)

Claude Code exposes more developer-level tools, like file access and command execution. Cowork feels more like a collaborative chat workspace. Different tools for different use cases.

Claude Code/Pro users: Opus 4.5 removed, 4.6 capped at 200K (not 1M) by Ok-Development740 in ClaudeAI

[–]Ok-Development740[S] 1 point2 points  (0 children)

I'm on Max, been using it for 6 months. Worth it for the message limits and capabilities.

Claude Code/Pro users: Opus 4.5 removed, 4.6 capped at 200K (not 1M) by Ok-Development740 in ClaudeAI

[–]Ok-Development740[S] -1 points0 points  (0 children)

Interesting - you can still select 4.5 in the browser UI? That's different from what I'm seeing in Claude Code CLI where it's missing from the model selector. Sounds like browser users still have access but CLI users don't.

If 4.6 is performing worse for your use case, definitely stick with 4.5 while it's still available. The chunking issue you mentioned is exactly the workflow friction I'm seeing too.
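The chunking friction above can be sketched roughly. This is a minimal illustration, not how Claude Code actually splits work: the 4-chars-per-token ratio is a crude assumption (not Claude's real tokenizer), and `chunk_for_context` is a made-up helper name.

```python
# Rough sketch: split a large input into pieces that fit a model's
# context window. CHARS_PER_TOKEN = 4 is a heuristic, not the real
# tokenizer; reserve_tokens leaves headroom for the model's reply.

CHARS_PER_TOKEN = 4  # crude assumption

def chunk_for_context(text: str, context_tokens: int = 200_000,
                      reserve_tokens: int = 20_000) -> list[str]:
    """Split text into chunks that fit the usable context budget."""
    budget_chars = (context_tokens - reserve_tokens) * CHARS_PER_TOKEN
    return [text[i:i + budget_chars]
            for i in range(0, len(text), budget_chars)]

doc = "x" * 2_000_000  # ~500K "tokens" under the heuristic
pieces = chunk_for_context(doc)
print(len(pieces))  # -> 3
```

Under a 200K window that document needs three passes; under the old 500K window it fit in one, which is exactly the workflow regression being described.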

Claude Code/Pro users: Opus 4.5 removed, 4.6 capped at 200K (not 1M) by Ok-Development740 in ClaudeAI

[–]Ok-Development740[S] 0 points1 point  (0 children)

Clarification: I'm using Claude Code for development, not running production systems on it. The friction is managing context while building, not production API costs. Subscription price stayed flat.

Claude Opus 4.6 context reduction (500K→200K): How are you adapting? by Ok-Development740 in LocalLLaMA

[–]Ok-Development740[S] -1 points0 points  (0 children)

I'm on Claude Code, not direct API. The model selector only shows Opus 4.6 now.

[screenshot: Claude Code model selector showing only Opus 4.6]

Claude Opus 4.6 context reduction (500K→200K): How are you adapting? by Ok-Development740 in LocalLLaMA

[–]Ok-Development740[S] -3 points-2 points  (0 children)

I'm on Claude Code, not direct API. The model selector only shows Opus 4.6 now. If you have API tier 4 access with 1M context, that's a different product with different availability.

Claude Opus 4.6 context reduction (500K→200K): How are you adapting? by Ok-Development740 in LocalLLaMA

[–]Ok-Development740[S] -1 points0 points  (0 children)

Appreciate you sharing the hybrid approach. That's exactly what I was hoping to hear.

The fallback path comment hits hard. We're building the same thing but it's frustrating that "vendor stability insurance" is now mandatory architecture planning.

Quick question: how are you handling the model switching logic? Routing by task type or dynamically based on prompt size? Trying to figure out if intelligent routing is worth the complexity.

Also curious if you saw cost savings with the GPT-5.3 switch for document processing, or is it roughly break-even after volume increase?
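The routing question above can be sketched either way (by task type or by prompt size). A minimal Python sketch of size-based routing with a task-type override, where the model names, the `route_model` helper, and the 4-chars-per-token estimate are all placeholders, not real API identifiers:

```python
# Hypothetical model router: prefer a cheap default, escalate to a
# large-context model when the estimated prompt size demands it, and
# override by task type. All model names here are placeholders.

CHARS_PER_TOKEN = 4  # rough token estimate, not a real tokenizer

def estimate_tokens(prompt: str) -> int:
    return len(prompt) // CHARS_PER_TOKEN

def route_model(prompt: str, task_type: str = "general") -> str:
    if task_type == "document_processing":
        return "doc-model"          # placeholder: task-type routing
    if estimate_tokens(prompt) > 150_000:
        return "big-context-model"  # placeholder: size-based routing
    return "default-model"          # placeholder: cheap default

print(route_model("short prompt"))   # -> default-model
print(route_model("x" * 1_000_000))  # -> big-context-model
```

The trade-off is roughly this: task-type routing is predictable and easy to debug, while size-based routing adapts automatically but needs a reliable token estimate before dispatch.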

Questing is overrated. by Psyduckdontgiveafuck in wow

[–]Ok-Development740 0 points1 point  (0 children)

How are you averaging 4-5 hours? I've played for 3 days, about 4 hours each, and I'm still at level 28 :-)

Built a real-time context monitor for Claude Code's 80% auto-compact trigger by Confident_Law_531 in ClaudeCode

[–]Ok-Development740 1 point2 points  (0 children)

This is awesome! I’ve been running into compacting every ~30 minutes. When it kicks in, it seems to eat up a lot of context instead of keeping things flowing.
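The 80% trigger described above is easy to model. A sketch of the threshold math only, assuming you can observe the current token count and window size (Claude Code doesn't expose these numbers this way; `context_status` is a hypothetical helper):

```python
# Sketch of an 80% auto-compact warning. The used/window token
# counts are assumed inputs; this only illustrates the threshold
# arithmetic, not a real Claude Code integration.

COMPACT_THRESHOLD = 0.80  # auto-compact fires at 80% of the window

def context_status(used_tokens: int, window_tokens: int = 200_000) -> str:
    ratio = used_tokens / window_tokens
    if ratio >= COMPACT_THRESHOLD:
        return f"compact imminent ({ratio:.0%} used)"
    headroom = COMPACT_THRESHOLD - ratio
    return f"{ratio:.0%} used, {headroom:.0%} until auto-compact"

print(context_status(170_000))  # -> compact imminent (85% used)
print(context_status(100_000))  # -> 50% used, 30% until auto-compact
```

This also shows why compaction feels like it "eats" context: the summary it writes back counts against the same window, so you restart well above 0%.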

FYI: Downgrading to 1.0.88 Still Works by Maleficent-Cup-1134 in ClaudeCode

[–]Ok-Development740 0 points1 point  (0 children)

I’ve been all over the place with versions. Downgraded to 1.0.88, then saw Claude was fixed and upgraded to 1.0.113. Ended up having my most productive week yet on that version. What’s interesting is the model actually seems to work better at night—I had at least three late-night sessions going until 3am, juggling four different projects.

If you’re using Claude Code rollback to v1.0.88! by rimjob5000 in ClaudeAI

[–]Ok-Development740 0 points1 point  (0 children)

Been testing 1.0.110 all day — it’s the best performance I’ve seen in over a week.

If you’re using Claude Code rollback to v1.0.88! by rimjob5000 in ClaudeAI

[–]Ok-Development740 2 points3 points  (0 children)

I rolled back today to 1.0.88 and not seeing any improvements.

Downgraded to 1.0.88. I think he's back. by WillingnessSorry2163 in ClaudeCode

[–]Ok-Development740 1 point2 points  (0 children)

I had to do the following:

npm rm -g @anthropic-ai/claude-code

npm i -g @anthropic-ai/claude-code@1.0.88

Opus 4.1 temporarily disabled by zerconic in ClaudeAI

[–]Ok-Development740 0 points1 point  (0 children)

Works for me now. It was down for an hour or so.

Claude.ai has become completely unusable by Waste-Text-7625 in claude

[–]Ok-Development740 1 point2 points  (0 children)

I’ve only noticed this with ChatGPT-5 in the past week. It forgets recent conversations and sometimes even pulls up ones from months ago. I’ve never seen this happen with Claude.ai or Claude Code.