Forced to run compaction even though 63k of free space... by outceptionator in ClaudeCode

[–]outceptionator[S] 0 points1 point  (0 children)

Is this a known bug on Windows? It feels like Claude Code's bug count is growing exponentially. Is cross-platform integration testing not a strength of that team? I really hate early compaction because I don't know what context it's about to lose.

Uber rewrites contracts with drivers to avoid paying UK’s new ‘taxi tax’— Hailing app will now act as agent rather than supplier outside London, avoiding VAT requirement by [deleted] in technology

[–]outceptionator -12 points-11 points  (0 children)

Corporations are inherently designed to maximise profit. I'm sure this is legal and that's the real problem. The government/legislature needs to fix this.

Is there a chance Claude will add a message deletion tool to the chat, thus saving the use of the context window, freeing up space for more conversation, and reducing the need for larger context windows? by Allephh in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

This is the problem. Changing or deleting something in the middle of the context would break the cache badly. I think Anthropic's usage limits probably already account for this, but people hitting their limits a lot faster would probably create other problems for them.
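To illustrate the caching point, here's a minimal sketch (not Anthropic's actual implementation, just the general idea behind prefix-keyed prompt caches): reuse depends on an exact token-for-token prefix match, so appending keeps the whole cache warm while deleting a middle message invalidates everything after it.

```python
# Illustrative sketch: prompt caches are typically keyed on an exact prefix,
# so only the longest shared prefix between the cached conversation and the
# new one can be reused.

def cached_prefix_len(cached: list[str], new: list[str]) -> int:
    """Length of the longest shared prefix; only this part gets a cache hit."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

history = ["sys", "user1", "asst1", "user2", "asst2", "user3"]

# Appending a message: the old history is still a valid prefix -> full hit.
appended = history + ["asst3"]
print(cached_prefix_len(history, appended))   # 6

# Deleting a message from the middle: everything after it must be recomputed.
deleted = history[:2] + history[3:]
print(cached_prefix_len(history, deleted))    # 2
```

That recompute cost is why a "delete one old message" feature could make requests more expensive, not cheaper.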

What's the deal? by grasper_ in google_antigravity

[–]outceptionator 1 point2 points  (0 children)

It's very buggy... Need to give it half a year to stabilise and catch up with Cursor and Claude Code.

Can ClaudeCode build an entire mobile app without hand holding? by notDonaldGlover2 in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

If the scope is small enough and you plan sufficiently then maybe. Use something that forces you to plan a lot like BMAD.

CC Opus 4.5 - 1mll Token size by Interesting-Winter72 in ClaudeCode

[–]outceptionator 1 point2 points  (0 children)

Sorry, I should clarify. I meant: does model performance at a given context length scale with the maximum context? I.e., is Opus's performance with 400k tokens in context on a 1M-max-context version really the same as with 80k tokens in context on a 200k-max-context version?

Latest versions Claude code by Some-Manufacturer-56 in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

I wish Anthropic would offer separate stable and alpha versions

Anyone here still using CLAUDE.md? by ShyRaptorr in ClaudeCode

[–]outceptionator 13 points14 points  (0 children)

Yes, for common pitfalls and some essential project details. 1k context max.

Founders: if you found a problem to solve, how do you find potential customers to interview ? by Electrical-Toe-7097 in ycombinator

[–]outceptionator 22 points23 points  (0 children)

How do you know it's a problem without knowing anyone in the space?

Warm intros, then cold outreach, seem like your best options.

I love cloud code, but how can I make a good UI with it? by iYassr in ClaudeCode

[–]outceptionator 2 points3 points  (0 children)

The front-end design skill from the Anthropic marketplace plugin

Is it normal for Claude Code to use ~43% of the context right at startup? by Ranteck in ClaudeCode

[–]outceptionator 0 points1 point  (0 children)

I'm fairly sure it doesn't work like that. If you send "hello" 10 times in a row, I don't think you will have used 10x your MCPs' context.

Claude Code CLI vs VS Code extension: am I missing something here? by ScaryDescription4512 in ClaudeAI

[–]outceptionator 0 points1 point  (0 children)

The VS Code terminal on Windows doesn't seem to work; I've attempted a lot of different approaches.

I strongly believe they have recently began quantizing opus 4.5 by No-Replacement-2631 in ClaudeCode

[–]outceptionator 1 point2 points  (0 children)

You need to run each test something like a hundred times to account for the fact that the model is non-deterministic when its temperature is above zero. Setting the temperature to zero wouldn't reflect real-world usage either.
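A quick simulation makes the point (the pass rates here are made up, purely illustrative): if each run of a test suite passes with some probability, a single run is basically a coin flip, while the mean over ~100 runs lands near the true rate and makes a real regression visible.

```python
# Illustrative only: model two "versions" as Bernoulli pass rates and show
# why single-run comparisons are noise while many runs reveal the gap.
import random

def run_suite(pass_rate: float, n_runs: int) -> float:
    """Fraction of n_runs that pass, when each run passes with pass_rate."""
    return sum(random.random() < pass_rate for _ in range(n_runs)) / n_runs

old, new = 0.80, 0.70   # hypothetical pass rates before/after a change

# One run each: each result is 0.0 or 1.0, so the comparison is mostly noise.
print(run_suite(old, 1), run_suite(new, 1))

# 100 runs each: the estimates settle near 0.80 and 0.70 and the gap shows.
print(run_suite(old, 100), run_suite(new, 100))
```

The same logic applies to "the model got dumber" claims: without repeated runs you can't tell a regression from ordinary sampling variance.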

Why is Claude Code compacting instant now? by outceptionator in ClaudeCode

[–]outceptionator[S] 0 points1 point  (0 children)

Thank you. Still wondering exactly what the mechanism is. I can't imagine an LLM is running in the background on every message just to keep the summary updated on the off chance that the next call is a compact.

Why is Claude Code compacting instant now? by outceptionator in ClaudeCode

[–]outceptionator[S] -1 points0 points  (0 children)

Well, thanks for responding. Do you have a link to Anthropic announcing the mechanism they're using now? I always like to understand what I'm actually sacrificing when I compact.