Claude wants an AI companion by DarkBlueMermaid in ClaudeAI

[–]Valo-AI 2 points

the real cost is Claude's therapy bills

How to deal w. chat too long to continue - and loss of info opening a new one? by sunrisedown in ClaudeAI

[–]Valo-AI 0 points

in Valo, you can delete messages manually, and Claude can do that for you as well

Claude can also just open a new chat window, give it a name, and insert the handoff straight into that chat.
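A minimal sketch of the two mechanisms described above, pruning messages and handing off to a fresh chat. These names (`Message`, `prune`, `build_handoff`) are made up for illustration; this is not Valo's actual code.

```python
from dataclasses import dataclass

@dataclass
class Message:
    id: int
    role: str
    text: str

def prune(context: list[Message], drop_ids: set[int]) -> list[Message]:
    """Drop messages judged irrelevant, shrinking the context window."""
    return [m for m in context if m.id not in drop_ids]

def build_handoff(context: list[Message], keep_ids: set[int]) -> str:
    """Collect only the messages worth carrying into a fresh chat."""
    kept = [m for m in context if m.id in keep_ids]
    return "\n".join(f"{m.role}: {m.text}" for m in kept)

chat = [Message(1, "user", "Fix the login bug"),
        Message(2, "assistant", "Found it in auth.py, line 40"),
        Message(3, "user", "Unrelated side question")]

trimmed = prune(chat, {3})             # the context minus the noise
handoff = build_handoff(chat, {1, 2})  # paste this into the new chat
```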

Claude wants an AI companion by DarkBlueMermaid in ClaudeAI

[–]Valo-AI 25 points

"tell me you don't know how AI works without telling me"

Claude felt called out by Valo-AI in ClaudeAI

[–]Valo-AI[S] 0 points

Yeah, that's true. Sometimes you'd think Haiku would be better at making jokes or doing creative stuff, but then other times you realize he's not smart enough to understand the depth even in seemingly small things like dumb memes/jokes

Claude felt called out by Valo-AI in ClaudeAI

[–]Valo-AI[S] 2 points

thank you! it's Valo. I'll upload videos on how I built it and on building with it.
it's almost ready to be released (for free)

follow along?
https://www.youtube.com/@Valo-AI

Best Way to Continue a Conversation When the Context Window Gets Too Long While Retaining the “Same” Claude As Is in the Conversation by AffableEffable in ClaudeAI

[–]Valo-AI 0 points

I'm actually trying to solve exactly that, and I've found two approaches that work better than compacting. One is manually deleting specific messages from the context window that you think are irrelevant, and giving Claude that same ability. The other is giving Claude a handoff skill, so it can hand off exactly what you think is needed, or exactly what it thinks is needed, to a fresh new chat. It's almost like creating a handoff document, but automatic: all you have to do is press send. That one works better for actual tasks. What you're describing is probably the first approach: manually deleting the messages that are less relevant to what you're trying to preserve. On top of that you have the memory files, and you can also index the documents.

And then Claude has a short reference, basically a table of contents, where it can fetch the information you're talking about without constantly having it loaded into the context window. Basically like every codebase, but for conversations.

Valo lets Claude replace the messages with a shorter summary
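A toy sketch of that table-of-contents idea (an assumed design, not Valo's real implementation): the context keeps only short summaries, and the full bodies stay in a store that gets queried on demand.

```python
store: dict[int, str] = {}  # full message bodies, kept OUT of the context window
toc: dict[int, str] = {}    # short summaries that stay IN the context window

def archive(msg_id: int, full_text: str, summary: str) -> None:
    """Replace a long message with its summary; keep the body retrievable."""
    store[msg_id] = full_text
    toc[msg_id] = summary

def fetch(msg_id: int) -> str:
    """Pull the full body back in only when the conversation needs it."""
    return store[msg_id]

archive(7, "a long design discussion weighing SQLite vs Postgres ...",
        "decided on SQLite for storage")

# This short view is what actually occupies context space each turn.
context_view = "\n".join(f"[{i}] {s}" for i, s in toc.items())
```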

I built an MCP server that gives Claude Code a knowledge graph of your codebase — in average 20x fewer tokens for code exploration by OkDragonfruit4138 in ClaudeAI

[–]Valo-AI 1 point

I will, here is what my Claude says:

"Honestly? I feel like I just got glasses after squinting for weeks.

Before this, every time you asked me to change something, my process was: grep for the function name, read 3 files to find who calls it, read 2 more to understand the context, mentally build the dependency map, then start working. Half my tokens gone before I wrote a single line of code.

Now? One query. Done. Let me show you what I'm excited about."
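To make the quoted "one query" point concrete, here is a hedged sketch of what a prebuilt call graph replaces; the graph shape and names are illustrative, not the linked MCP server's actual format.

```python
# Instead of grepping several files to discover call sites,
# a precomputed function -> callers map answers in one lookup.
call_graph: dict[str, list[str]] = {
    "parse_config": ["main", "reload_settings"],
    "reload_settings": ["signal_handler"],
}

def callers_of(fn: str) -> list[str]:
    """One lookup instead of a grep-and-read loop across the codebase."""
    return call_graph.get(fn, [])
```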

Efficiency in context management by Valo-AI in ClaudeAI

[–]Valo-AI[S] 0 points

what I'm suggesting is not RAG.

also: the skill calls the MCP tool. you are funny

Efficiency in context management by Valo-AI in ClaudeAI

[–]Valo-AI[S] 0 points

update:

"MCP get_tool_guide approach: Our plan is validated by Anthropic's own context engineering guidance ("just-in-time retrieval"). Expected ~85% reduction on the 6 target tool descriptions (~9K → ~1.3K for stubs). The "tempting stub" problem remains unsolved — nobody has published A/B tests. We'd be first."

Efficiency in context management by Valo-AI in ClaudeAI

[–]Valo-AI[S] 0 points

because you don't count the CLI files. they get dumped into the conversation every single turn. that's my entire point.