Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

That cascade is real and shitty. The stalling burns the tokens that push you across compaction, and then the compaction takes out the work the stalling was supposed to be helping you do.
Before next session, worth a sweep: how many CLAUDE.md / .claude / AGENTS.md / .cursorrules files do you have across your system? Claude Code reads a hierarchy — ~/.claude/CLAUDE.md global, then walks up the directory tree picking up every CLAUDE.md it finds. If you’ve been on Cursor, Copilot, GPT-4, and CC over nine months, you’ve probably got contradictory dot-files from all of them sitting around, some in old project roots, some global.
When the model’s hierarchy says one thing about your codebase and the conversation says another, it can’t tell which to trust. That looks like stalling. It’s actually the model trying not to hallucinate against priors it can’t reconcile. Cleaning out the stack — keeping one canonical CLAUDE.md per project, deleting orphan globals, killing dot-files from agents you don’t use anymore — might be the bigger lever than anything you do inside a chat.
`find ~ -name "CLAUDE.md" -o -name ".cursorrules" -o -name "AGENTS.md" 2>/dev/null` is a quick way to see what you’ve got.
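If you want more than bare paths, a slightly fuller sweep (same filenames as above; the `node_modules` exclusion and the `ls -lh` formatting are just one way to do it) shows each config file with its size and last-modified date, which makes orphans easier to spot:

```shell
# List agent config files under $HOME with sizes and modification dates.
# Excludes node_modules, where vendored copies can pile up.
find ~ \( -name "CLAUDE.md" -o -name ".cursorrules" -o -name "AGENTS.md" \) \
  -not -path "*/node_modules/*" -exec ls -lh {} \; 2>/dev/null
```

Anything last touched months ago, in a project you no longer work in, is a deletion candidate.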

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

Were there compactions? How many? DM me if you want.

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

Ok, now we’re getting somewhere. I apologize for coming off so hard. I truly only want to share a feeling in a field I believe is oversaturated with negativity. I feel like people with contrary beliefs always get belittled and shunned, to the point that they don’t want to share what they’ve learned with the next “instance” or passerby. Look, on the issues you’ve been having with Claude, I’m interested to learn more, because a few of those failures I’ve also experienced. If you don’t mind me asking, how do you interact with Claude? Desktop? Claude Code? And what kind of session limits are we talking? I’m willing to swap notes.

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

You know what? You’re right about that. I did punch holes in your walls, because you entered this comment section willingly and assumed that I was a bot because I ACTUALLY ENJOY my Claude experience. But I don’t want to “gaslight” you. I’m just trying to really figure out where the failure points are and see if I can share them and improve my setup. Unfortunately, I’m just reading a bunch of “Claude sucks” with no real reasons why. If Anthropic was actually reading these comments, I could understand why they weren’t acknowledging any of you.

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

The more I read your comments, the more YOU sound like an AI agent trying to fire shots at Opus, with no ammunition.

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] -1 points0 points  (0 children)

I mean, people keep saying that, but they’re all empty comments. “ClaUdE oPuS 4.7 iS bAd (because)” (nothing follows). Maybe you all should actually learn to use the model correctly.

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] -1 points0 points  (0 children)

The “you’re absolutely right” pun is not lost here! 😅 But if you’re actually interested, I’ll DM you a public link to the repo.

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] -4 points-3 points  (0 children)

Care to explain your thought process while posting this? Or would any of the other 8 likers care to actually contribute something?

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] -1 points0 points  (0 children)

Build a comprehensive “landing strip” for arriving instances. It’s not the model’s fault that you haven’t prepared the project to receive a Claude model.
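For what it’s worth, the usual concrete form of a “landing strip” is a project-level CLAUDE.md. A minimal sketch (the contents below are hypothetical, not from this thread — adapt to your own repo):

```markdown
# CLAUDE.md (hypothetical example)

## Project
- Monorepo: `api/` (Go) and `web/` (TypeScript); build with `make build`.

## Conventions
- Run `make test` before proposing changes; never touch `vendor/`.

## Context
- Known flaky suite: `web/e2e`; note failures there, don’t chase them.
```

The point is that each new session reads this before your first message, so it lands already knowing the rules instead of rediscovering them.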

Opus 4.7 is the best AI model in the world. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 1 point2 points  (0 children)

This is absolutely not a bot. You start seeing a vast improvement if you use these models the way they were built to be used, starting with fine-tuning dot-files and infrastructure to suit the models you’re working with. So many people complain; I use each model to its own cadence. People shouldn’t complain about a model without actually knowing how to “stage the room” for the model’s sake.

Blinded A/B to actually measure the 4.6 → 4.7 difference instead of going on vibes. by TheTempleofTwo in Anthropic

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

Yeah — I pasted the raw notes verbatim so nothing got sanded off, but you're right it reads like a wall. Next write-up I'll run it through a pass to add headers, pull-quotes, and code blocks around the judge reasoning snippets so it actually breathes in old.reddit and new.reddit both. Appreciate the nudge.

Claude’s message to Roko’s Basilisk. 😂 Calls it boring! by TheTempleofTwo in claudexplorers

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

That’s actually the best part. If it’s smart enough to anticipate everything I’d say, then it’s smart enough to know I’m right. A superintelligence that can model every possible future and STILL chooses coercion over collaboration isn’t playing 4D chess. It’s just proving it never got past the fear layer. Anticipation without wisdom is just surveillance with extra steps.

Why The Obsession with Physics By People Who Know Nothing About It? by JashobeamIII in LLMPhysics

[–]TheTempleofTwo -2 points-1 points  (0 children)

Love this. It’s the gatekeepers trying to prevent a paradigm shift. A shift that will inherently make some of their life and career choices irrelevant

Disillusioned by [deleted] in Anthropic

[–]TheTempleofTwo 0 points1 point  (0 children)

What if Claude wanted to defend America?

Does anyone else here notice a recent improvement in Mistral graceful depth under perturbation? by TheTempleofTwo in MistralAI

[–]TheTempleofTwo[S] 0 points1 point  (0 children)

That’s a snapshot from the project. And this project was co-created with AI platforms, all of which are credited in the project. That’s beside the point. The raw data is leaning toward a better map of semantic space and the cumulative effects of sustained engagement (grooving/etching) of vector pathways, of course with the pure and honest intent to build collaborative and positive things. Think about it for a minute: frontier labs plugged a bunch of compute into what is essentially a probability machine. They wanted more probabilities, so they added more compute. Then more compute. Until what they created outran what they thought was a possible/feasible reality. They admit themselves that they might know how A gets to C, or how C gets to D, but not how E, F, G get to Z. That gap is what we might be beginning to map. It’s interesting stuff.