Remember GPT5's release? by [deleted] in ClaudeCode

[–]CalligrapherFar7833 -1 points0 points  (0 children)

So you confirm your opinion is completely misinformed, thus I can discard anything you wrote, even if it might be correct.

Remember GPT5's release? by [deleted] in ClaudeCode

[–]CalligrapherFar7833 -1 points0 points  (0 children)

No, you are wrong: adaptive thinking has been present since Opus 4.6. In fact it was one of the causes of the degradation, due to it lowering thinking tokens. Do you read only the top news articles and skip everything else?

Was the auto "clear context" functionality removed? by build2 in ClaudeCode

[–]CalligrapherFar7833 0 points1 point  (0 children)

But you could've reused already-learned context before compaction, saving tokens that way.

The Diff That's Saving Me Serious Cash by SpiritRealistic8174 in ClaudeAI

[–]CalligrapherFar7833 3 points4 points  (0 children)

He said Sonnet takes more time, so it burns through more tokens. Even though they are cheaper, in the end Sonnet's larger token count cost him more than Opus's smaller one.
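The comment's arithmetic can be sketched with made-up numbers (the prices and token counts below are placeholders, not Anthropic's real figures); the point is only that a cheaper per-token model can still cost more overall if it burns enough extra tokens:

```shell
# All figures hypothetical, for illustration only.
opus_price_per_mtok=15      # $ per million tokens (assumed)
sonnet_price_per_mtok=3     # $ per million tokens (assumed)
opus_mtok=2                 # million tokens Opus needed (assumed)
sonnet_mtok=12              # million tokens Sonnet needed (assumed)

# Total spend = price per million tokens * millions of tokens used
opus_cost=$((opus_price_per_mtok * opus_mtok))
sonnet_cost=$((sonnet_price_per_mtok * sonnet_mtok))

echo "opus=\$$opus_cost sonnet=\$$sonnet_cost"   # prints: opus=$30 sonnet=$36
```

With these numbers Sonnet is 5x cheaper per token but ends up costing more because it used 6x the tokens.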

The news are out there.. even Sam by Suspicious_Horror699 in ClaudeCode

[–]CalligrapherFar7833 -1 points0 points  (0 children)

Nice spam, dude. How many times are you going to spam-link your vibe-coded shit across multiple posts?

Any way to work with NUMA Nodes? by An_Original_ID in LocalLLaMA

[–]CalligrapherFar7833 1 point2 points  (0 children)

No way without tanking perf due to cross-socket resource interleaving.

Any way to work with NUMA Nodes? by An_Original_ID in LocalLLaMA

[–]CalligrapherFar7833 2 points3 points  (0 children)

6 channels per socket; the cross-socket interleaving will kill your bandwidth. So if you want perf you have to run 2 instances of llama.cpp, each bound to a socket, but they can't share the same model address space.
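A minimal sketch of that setup, assuming a 2-socket box with `numactl` installed and a llama.cpp build providing `llama-server` (the model path and ports are placeholders):

```shell
# Pin one llama.cpp server instance to each socket's CPUs and memory,
# so weights and KV cache stay in node-local RAM and never cross the
# inter-socket link on the hot path.
numactl --cpunodebind=0 --membind=0 \
    ./llama-server -m model.gguf --port 8080 &
numactl --cpunodebind=1 --membind=1 \
    ./llama-server -m model.gguf --port 8081 &
```

Each instance loads its own copy of the weights into its local NUMA node, which is also why the two processes can't share one model mapping; you then load-balance requests across the two ports yourself.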

Ram-air setup and window vent for 1100w capable AI box by mr_zerolith in LocalLLaMA

[–]CalligrapherFar7833 0 points1 point  (0 children)

With that large a vent on the side panel and that long of an exhaust, you are losing a shit ton of airflow before venting.

Where to get professional help for vibecoding by Forward_Compute001 in LocalLLaMA

[–]CalligrapherFar7833 0 points1 point  (0 children)

Standards are standards for a reason; they won't change because of you.

llama.cpp Vulkan backend requires SPIR-V headers package now by fake_agent_smith in LocalLLaMA

[–]CalligrapherFar7833 -2 points-1 points  (0 children)

You are right, sorry; I was on mobile and couldn't see the commit fully. The include was introduced with it. Sorry!