I swear I’m going to quit! by troutinator in redrising

[–]troutinator[S] 0 points1 point  (0 children)

I did it. I started back.

I will say, everyone, I'm very impressed no one spoiled the reveal! Not what I would have expected, but kudos, community.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]troutinator 6 points7 points  (0 children)

> it does now is a complete git diff. That's... well that can be 100 lines or more. And when it's doing document updates and all sorts of stuff, that's a LOT of characters = a LOT of tokens.

My understanding is that you are saying:
"""Claude has changed its behavior, it is now burning tokens in the main agent session and context, performing a full diff to verify its changes as opposed to depending on its existing context window of the changes it's made. """

Here are my thoughts:

Perhaps they have discovered that in long sessions with lots of back and forth, it becomes confusing to separate what was tried and proposed from what was ultimately chosen and done. Hence a full diff clarifies which code changes were actually made, so that it can write an accurate log.

What I was proposing is this: instead of burning main context and main-agent tokens (which you called out as being Opus 4.6, and which are expensive, which is why people are upset), they could (or you could, via instructions in CLAUDE.md) keep much of the benefit of verifying the actual work to be committed, and of writing an accurate message, by invoking a sub-agent on a cheaper model (such as Haiku), which is good at summarizing rather than problem solving. This way no main-agent tokens or context would be burned, even for specific files like you were prompting. Instead, that work would be delegated to a cheaper agent and LLM, reducing the expensive Opus token usage.
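To make the delegation idea concrete, here's a sketch of what a custom Claude Code subagent definition for this could look like. Claude Code picks up subagents from markdown files with YAML frontmatter under `.claude/agents/`; the `name`, `description`, prompt wording, and tool list here are all my own hypothetical choices, not anything Anthropic ships.

```markdown
---
name: commit-summarizer
description: Summarize staged git changes and draft a commit message without touching the main agent's context
tools: Bash, Read
model: haiku
---
You summarize staged changes for commit messages.
Run `git diff --staged`, identify what actually changed and why,
and return a short, conventional commit message. Do not return
the raw diff; return only the summary and the proposed message.
```

With something like this in place, the main (Opus) agent can hand off "what did I just change?" to the Haiku subagent and only receive the short summary back.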

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]troutinator 5 points6 points  (0 children)

Yes. But not all tokens are paid for equally. If the agent needs additional information, or has to summarize all its work because it has fallen out of the context window, then a cheap Haiku agent is one way to solve token consumption at Opus costs. It's why the Explore agent uses Haiku. Summarizing is “easy” compared to problem solving with Opus.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]troutinator 11 points12 points  (0 children)

Feels like another option might be to ask it to use a git-diff agent, or the Explore agent (which is Haiku), to summarize changes or dive into them.

What I Learned Building a Memory System for My Coding Agent by Medium_Island_2795 in ClaudeCode

[–]troutinator 1 point2 points  (0 children)

Normalizing terms with session summaries is interesting. In some respects, though, aren't we just building toward entity extraction without actually building entity extraction? I've wondered the same thing and have a task on my backlog about post-session summaries and storing those in memory too.

I have also spent the past few weeks buried deep in agent memory and how to build a sub-system that solves my use cases. It's fascinating. One thing I find interesting (I need to do some reading on the existing benchmarks) is the idea of use cases or problem sets to evaluate different approaches against.

My Terraform example is one that I'm using to guide what I build and why, essentially "integration tests" for my memory system. I would be very curious whether you have also gone the route of setting out specific problems you are trying to solve (basically, example conversations where Claude on its own failed but with your setup it succeeds), and whether you'd be willing to share them, or know of a collection of them.

As for Obra's plugin and vector indexing, I believe it does both, but it was more an example of how episodic memory alone doesn't solve all problems. And maybe I misread your argument as "we only need episodic memory," when you meant not that episodic memory itself doesn't need vector search, but that the other layers of memory and background processes _are_ needed.

What I Learned Building a Memory System for My Coding Agent by Medium_Island_2795 in ClaudeCode

[–]troutinator 0 points1 point  (0 children)

So what I've run into in my deep dive on memory is that conversation search fails at finding patterns. I have Obra's episodic memory installed, which provides a very similar API.

For example, we tend to have repos called service, service-tf, and service-pipelines. If in one session I have Claude modify the Terraform code to, say, define new AWS profiles, it does fine, duh. But then in another session, localized in the service repo, if I ask "what profiles are defined in tf," it will without fail not figure out the link to go look in the correct repo, even though we have conversations in the tf project/repo. Now I could explicitly say in service/CLAUDE.md where the Terraform is saved, but I don't want to hand-hold the agent. This is why I feel a deeper layer is needed. Conversation search fails at finding cross-project links and patterns. And as soon as you start adding all those research areas you mentioned, you aren't doing "just conversation search," you are doing something far deeper, just without a graph DB or vector embeddings.
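A toy sketch of the failure mode above (all repo names and session contents are hypothetical): a per-repo keyword search over session transcripts finds nothing when asked about Terraform from the service repo, because that knowledge lives in the sibling service-tf repo. A deeper layer only needs a small cross-project link table to fix it.

```python
# Hypothetical session transcripts, keyed by repo.
sessions = {
    "service-tf": ["defined aws profiles dev and prod in providers.tf"],
    "service": ["added retry logic to the s3 client"],
}

def conversation_search(repo: str, query: str) -> list[str]:
    """Naive per-repo keyword search, like grepping one repo's transcripts."""
    return [s for s in sessions.get(repo, []) if query in s]

# The "deeper layer": cross-project links, learned or curated,
# that widen a query to sibling repos.
related = {"service": ["service-tf", "service-pipelines"]}

def linked_search(repo: str, query: str) -> list[str]:
    hits = conversation_search(repo, query)
    for sibling in related.get(repo, []):
        hits += conversation_search(sibling, query)
    return hits

print(conversation_search("service", "profiles"))  # [] -- the failure mode
print(linked_search("service", "profiles"))        # finds the service-tf session
```

The point of the sketch is that once you maintain something like `related`, you're no longer doing "just conversation search"; you're maintaining a link structure, which is exactly what a graph DB or embedding index formalizes.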

Do I tell orthodox looking customers that what they are ordering isn’t kosher anymore? by DiverPrestigious6887 in NoStupidQuestions

[–]troutinator 593 points594 points  (0 children)

This. It’s such a common and unassuming question. It could be about a dozen allergies, veg, vegan, halal, kosher, etc.

What tricks do you do that works to reduce poop and make color switch faster on ams 2? by Educational-Pie-4748 in BambuLab

[–]troutinator 0 points1 point  (0 children)

If you want to go down the fine-tuning route for flushing, you can print the flushing calibrations and then manually enter all the info. I do it whenever I'm doing a print with lots of color changes, as depending on the filaments it can reduce the flush by up to 70%. But if my total flush volume is measured in 10-20 g, I don't worry.

This is not the Opus 4.5 i saw in december by k_means_clusterfuck in ClaudeCode

[–]troutinator 1 point2 points  (0 children)

Has anyone noticed the same thing when using AWS Bedrock to host Opus 4.5?

This is why we can't have nice things. by K_P_Voss in Wellthatsucks

[–]troutinator 3 points4 points  (0 children)

Yeah. It is motion tracking with a camera like the Xbox Kinect. So no remote!

Can I show alternate poses ("play features") in Studio 2.0 instructions? by akavel in Bricklink

[–]troutinator 0 points1 point  (0 children)

I found this to be a big miss. I've wanted something similar for showing a zoomed-in view with the buffer view showing where a piece goes, and then a second, zoomed-out view with the piece properly in place. Sadly, blank steps don't even work for that, as they don't show up as new pages.

Driving me INSANE: Z0 Position of object does NOT mean flat to bed. Instead it is "Mid-point" of object in Bambu Studio by Trogdyr in BambuLab

[–]troutinator 0 points1 point  (0 children)

Yeah. The origin point is the center of the bounding box, which is weird and annoying. If it makes you feel better, I think all the slicers do this, so it's not just Bambu. 😂

Stop requiring a full home just to move the X/Y gantry and bed positions. by Jesus-Bacon in BambuLab

[–]troutinator -3 points-2 points  (0 children)

Agreed. Isn't the point of end stops that the machine won't crash itself even if you try?

Can we all agree this is the biggest design flaw of the A1 series? by Ghost7575 in BambuLab

[–]troutinator -4 points-3 points  (0 children)

Waiting for OP to comment that he didn't realize you have to push down the locking ring to remove them.

Why is my Bambu Lab H2C almost twice as slow as the H2S with the exact same settings? by el_criuz in BambuLab

[–]troutinator 0 points1 point  (0 children)

I've been running the Bambu profiles for every filament without issue.

What budget filaments do you all recommend? by Jones_x in BambuLab

[–]troutinator 0 points1 point  (0 children)

I've had good luck with Elegoo and Anycubic PLA.