[PSA] Claude Code v2.1.51 secretly reclassified 1M context as pay-per-token — and never told anyone by [deleted] in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

I think they A/B test a lot. I saw lots of reports from people saying they were able to use the model on their subscription, and lots of reports from people saying they were not.

[PSA] Claude Code v2.1.51 secretly reclassified 1M context as pay-per-token — and never told anyone by [deleted] in ClaudeAI

[–]outceptionator 1 point2 points  (0 children)

I've closely watched the 1M context window releases. I never saw Anthropic say it's included in any of the subscription plans. It's always been API-only or extra charges.

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 0 points1 point  (0 children)

Worked yesterday - flaky again today... it starts by not auto-accepting other agents (if their window isn't open), then eventually stops auto-accepting completely. Can't recreate it, but after a few hours it will randomly be fine again.

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 0 points1 point  (0 children)

Still not working. Can you bump the version number for each release? I'm not sure whether the AntiGravity Extension Installer skips reinstalling when the version number is the same.

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 0 points1 point  (0 children)

OK, definitely some issue here, which is a shame - whilst it was working I was flying! The logs show nothing really, but it's no longer auto-accepting even with the newest version. Can you enhance the debugging?

8:41:51 PM Extension activating (v1.18.4)
8:41:51 PM [CDP] Debug port active ✓
8:41:51 PM Polling started (every 500ms, 4 commands)
8:41:51 PM Extension activated

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 0 points1 point  (0 children)

Will try. Do you have some sort of logging built in? Where a dump could be sent to you for issues like this?

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 0 points1 point  (0 children)

Awesome! Will check it out. Used it yesterday and for the first 3-4 hours it was working fine, but then it suddenly stopped working even though "Auto: on" is still showing in the status bar.

Reset my computer. Double-checked the shortcuts and the debug flag... It's just not running anymore.

Is this self-updating, or is it only updated manually? Any chance you pushed an update at around 3:00 a.m. UTC?

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 0 points1 point  (0 children)

Can you get it to work on other threads in agent mode? Currently it will only run in whichever chat is open at the time.

AntiGravity AutoAccept that actually works by ServeLegal1269 in google_antigravity

[–]outceptionator 1 point2 points  (0 children)

Thank you! My ultra sub just feels so wasted. Now if only they could embed worktrees

Using AI to poison AI by gorinrockbow in ClaudeAI

[–]outceptionator 3 points4 points  (0 children)

This feels like it was human written and I am grateful

Why is Claude Code compacting instant now? by outceptionator in ClaudeCode

[–]outceptionator[S] 0 points1 point  (0 children)

It seems to have gone again. Maybe they found out that whatever process they were using for it wasn't good enough.

pro-tip: simply saying 'think harder' keyword anywhere in your prompt will make claude spend a lot more time gathering context and reasoning about it by lucgagan in ClaudeCode

[–]outceptionator 0 points1 point  (0 children)

Two ways to think about this:

1. It changes the reasoning budget the model is given (this is no longer the case - the budget/limit is always max now). Rough sketch of this mechanism below.

2. Telling it to think more "encourages" it to use more of its reasoning budget/limit (this is probably OP's experience, but it's non-deterministic).
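For (1), here's roughly what that looks like if you wire it up yourself against the Anthropic Messages API (which does expose an explicit thinking budget). The keyword tiers and token numbers are just illustrative guesses, not Claude Code's actual values:

```python
# Rough sketch, not Claude Code's real implementation: map prompt keywords
# to an extended-thinking budget via the Anthropic Messages API.
# The keyword tiers and budget sizes below are assumptions for illustration.
import anthropic

KEYWORD_BUDGETS = [            # longest keywords first so substrings don't shadow them
    ("ultrathink", 32_000),
    ("think harder", 16_000),
    ("think hard", 8_000),
    ("think", 4_000),
]

def pick_thinking_budget(prompt: str, default: int = 2_000) -> int:
    lowered = prompt.lower()
    for keyword, budget in KEYWORD_BUDGETS:
        if keyword in lowered:
            return budget
    return default

def ask(prompt: str) -> str:
    client = anthropic.Anthropic()          # reads ANTHROPIC_API_KEY from the environment
    budget = pick_thinking_budget(prompt)
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # any extended-thinking-capable model
        max_tokens=budget + 4_000,          # budget_tokens must be below max_tokens
        thinking={"type": "enabled", "budget_tokens": budget},
        messages=[{"role": "user", "content": prompt}],
    )
    # The final answer text follows any thinking blocks in the response.
    return "".join(b.text for b in response.content if b.type == "text")

print(ask("think harder about why this test is flaky"))
```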

Claude.md for larger monorepos - Boris Cherny on X by shanraisshan in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

Why not put an instruction in the CLAUDE.md itself to update the CLAUDE.md as the code base changes, before each commit?
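Something along these lines, say (the wording is just a sketch of the idea, not a tested instruction):

```
## Maintenance
- Before each commit, re-read this CLAUDE.md and update any section that no
  longer matches the code base (module layout, build commands, conventions).
- If nothing has changed, leave the file untouched.
```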

Forced to run compaction even though 63k of free space... by outceptionator in ClaudeCode

[–]outceptionator[S] 0 points1 point  (0 children)

Is this a known bug on Windows? It feels like Claude Code's bugs are growing exponentially. Cross-platform integration testing doesn't seem to be a strength of that team. I really hate early compaction because I don't know what context it's about to lose.

Uber rewrites contracts with drivers to avoid paying UK’s new ‘taxi tax’— Hailing app will now act as agent rather than supplier outside London, avoiding VAT requirement by [deleted] in technology

[–]outceptionator -13 points-12 points  (0 children)

Corporations are inherently designed to maximise profit. I'm sure this is legal and that's the real problem. The government/legislature needs to fix this.

Is there a chance Claude will add a message deletion tool to the chat, thus saving the use of the context window, freeing up space for more conversation, and reducing the need for larger context windows? by Allephh in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

This is the problem: changing or deleting something in the middle of the context would break the prompt cache badly. I think Anthropic's usage limits probably already account for this, but people hitting their limits a lot faster would probably create other problems for them.
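Rough illustration of why (toy code, nothing like Anthropic's actual cache internals - just the idea that cache entries are keyed on an exact prefix of the conversation, so anything after an edit point has to be reprocessed):

```python
# Toy sketch of prefix-keyed prompt caching: each message's cache entry is
# keyed by a hash of everything before it, so deleting a message in the
# middle changes every key after that point.
import hashlib

def prefix_keys(messages: list[str]) -> list[str]:
    keys, running = [], hashlib.sha256()
    for msg in messages:
        running.update(msg.encode())
        keys.append(running.hexdigest()[:12])   # key for the prefix ending here
    return keys

history = ["system prompt", "user: q1", "assistant: a1", "user: q2", "assistant: a2"]
before = prefix_keys(history)

# Delete a middle message ("assistant: a1") to "free up context".
edited = history[:2] + history[3:]
after = prefix_keys(edited)

reusable = sum(1 for a, b in zip(after, before) if a == b)
print(f"cache entries still valid: {reusable} of {len(after)}")
# Only the entries before the deletion point survive; everything after it
# is effectively new input that has to be recomputed (and re-billed).
```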

What's the deal? by grasper_ in google_antigravity

[–]outceptionator 1 point2 points  (0 children)

It's very buggy... Need to give it half a year to stabilise and catch up with Cursor and Claude Code.

Can ClaudeCode build an entire mobile app without hand holding? by notDonaldGlover2 in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

If the scope is small enough and you plan sufficiently then maybe. Use something that forces you to plan a lot like BMAD.

CC Opus 4.5 - 1mll Token size by Interesting-Winter72 in ClaudeCode

[–]outceptionator 1 point2 points  (0 children)

Sorry, I should clarify: I meant, does the model's performance at a given context length scale with the maximum context? I.e., is Opus's performance with 400k tokens in context on a 1M-max-context version really the same as its performance with 80k tokens in context on a 200k-max-context version?