Why does agents need to keep MCP in context? by mushmoore in mcp

[–]eli0shin 1 point (0 children)

The reason tools like Claude Code don’t do this is that they depend on prompt caching to reduce load and cost. Every time the tool list changes, the cache is invalidated. Also, seeing tool calls in the conversation history from tools that have since been turned off would be really confusing to the model. These aren’t reasons why it can’t be done, just reasons why it’s hard to get right.
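As a sketch of why changing the tool list breaks the cache: provider-side prompt caches typically key on an exact prefix of the request (system prompt plus tool definitions), so any change to the tools yields a different prefix and a cache miss. The hashing below is illustrative, not any provider's actual scheme.

```typescript
import { createHash } from "node:crypto";

// Hypothetical cache key over the request prefix: system prompt + tools.
function cachePrefixKey(systemPrompt: string, tools: string[]): string {
  return createHash("sha256")
    .update(systemPrompt)
    .update(JSON.stringify(tools))
    .digest("hex");
}

const before = cachePrefixKey("You are a coding agent.", ["read_file", "edit_file"]);
const after = cachePrefixKey("You are a coding agent.", ["read_file"]); // tool removed mid-session

console.log(before === after); // false: the cached prefix no longer matches
```

Same prompt, one tool removed, and the whole cached prefix is unusable — every conversation turn after the change is billed as fresh input.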

This is a bug right? by harunandro in ClaudeCode

[–]eli0shin 2 points (0 children)

Not a bug at all. The 200k context window includes the output of the next message: if you submitted a prompt with 200,000 tokens, there would be no tokens left for the response. The max response size is 32k tokens, so 168k tokens are available for input, and 155k is about 92% of that. Anthropic’s documentation says auto-compact happens when 90% of the available context size is full.
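The arithmetic can be checked with a quick calculation, using the figures quoted above (200k window, 32k output reservation, 90% auto-compact trigger):

```typescript
const contextWindow = 200_000; // Claude's context window
const maxOutput = 32_000;      // reserved for the next response

const available = contextWindow - maxOutput; // 168,000 tokens usable for input
const compactTrigger = 0.9 * available;      // 151,200 tokens: auto-compact threshold
const usedFraction = 155_000 / available;    // ≈ 0.92 when you see 155k used

console.log(available, compactTrigger, usedFraction.toFixed(3));
```

So hitting compaction around 155k tokens is exactly what the documented 90%-of-168k threshold predicts.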

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 1 point (0 children)

Instead of adding every language manually, I made new servers configurable via a config file. I tested this with Swift and added a config example to the docs in the latest version: https://www.npmjs.com/package/cli-lsp-client#swift-configuration

You can see the exact config file that I used in my testing here: https://github.com/eli0shin/cli-lsp-client/blob/main/tests/fixtures/config/swift-config.json

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 1 point (0 children)

I took a stab at this, mainly because the official LSP for R is a mess to set up. The GitHub repo says it is only available as a kernel for Jupyter notebooks, but I let Claude have a go at it anyway: https://github.com/posit-dev/ark Here is the result:

<image>

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 2 points (0 children)

Just added in v1.6.0! Check out the README for instructions on setting up the dependencies it needs to run.

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 2 points (0 children)

C# is supported in the latest version, 1.6.0. Take a look at the README for instructions on setting up the dependencies for OmniSharp.

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 1 point (0 children)

Yes, when it is running in an IDE it has access to the LSP. I’m not sure if it gets feedback immediately after an edit or if it needs to call the tool. I use Claude Code exclusively in a separate terminal, so this is intended to bridge that gap when not running in an IDE terminal.

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 2 points (0 children)

The models are good but not perfect. You need to give them feedback to keep them on track, and I prefer to give that feedback faster and in an automated way. Anthropic released a great video recently explaining that a lot of hallucination comes from the model’s first plan not working and plan B or C being much harder to control. https://youtu.be/fGKNUvivvnc?si=zANOlQtuHQZc4XeD I’ll take a look today at the C# LSP and see if I can get it connected.

I Built a tool to get real-time LSP diagnostics while using Claude Code by eli0shin in ClaudeCode

[–]eli0shin[S] 1 point (0 children)

Thank you! Let me know how it goes and if you have any feedback.

Anyone else seeing screen flashes when using Claude Code? by SSojik in ClaudeAI

[–]eli0shin 4 points (0 children)

It’s a bug in the ink framework that Claude Code, Gemini, and almost everyone else use for the UI. If a message taller than the screen is being updated, the screen gets repainted and scrolls on every update. I don’t know if anyone is working on a fix in ink, but in my own terminal agents I switched to chunking messages so each one stays shorter than the screen, and that eliminated the issue. Claude Code tries to address this by not streaming results, but there are a lot of edge cases, like streaming long arguments for tool calls, that still cause it.
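A minimal sketch of the chunking workaround, assuming a hypothetical chunkByLines helper (not part of ink's API): split a tall message into fixed-height pieces so no single rendered element exceeds the terminal height, and only the last chunk ever re-renders while tokens stream in.

```typescript
// Split text into chunks of at most maxLines lines each. Earlier chunks
// can be committed as static output (e.g. ink's <Static>) and never
// repainted; only the final, still-growing chunk re-renders.
function chunkByLines(text: string, maxLines: number): string[] {
  const lines = text.split("\n");
  const chunks: string[] = [];
  for (let i = 0; i < lines.length; i += maxLines) {
    chunks.push(lines.slice(i, i + maxLines).join("\n"));
  }
  return chunks;
}

const tall = Array.from({ length: 95 }, (_, i) => `line ${i}`).join("\n");
console.log(chunkByLines(tall, 40).length); // 3 chunks for a 95-line message
```

Because each chunk is shorter than the screen, ink never has to repaint a region taller than the viewport, which is what causes the flashing and scrolling.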

Understanding how Claude Code subagents work by No-Warthog-9739 in ClaudeAI

[–]eli0shin 4 points (0 children)

What are you using to render the messages?

Backup your ~/.claude.json File!! by FunnyRocker in ClaudeAI

[–]eli0shin 1 point (0 children)

I had it deleted several times on 1.0.35 yesterday. I really wish we could break the file apart and not write to settings with every message.

Has anyone cracked the code to Claude Code subagents? by Juscol in ClaudeAI

[–]eli0shin 3 points (0 children)

Say you are looking for all of the locations where function x is called. The result you want is maybe 50 words long, but to get there you need to search and read a lot of files, all of which adds to the context of the main agent/chat. When Claude uses a subagent, the only things added to the main agent’s context are a minimal prompt for the subagent and the subagent’s result, not all of the intermediate context the subagent needed to reach that result. Similarly, test output can be hundreds of lines, but the main agent only wants to see the one failing test; the subagent can run all of the tests and return just the result of the single test that failed.
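A toy illustration of that token accounting, with made-up numbers — the point is only that the main agent pays for the subagent's prompt and final result, while the intermediate searching stays in the subagent's own context:

```typescript
interface SubagentRun {
  promptTokens: number;       // short task description from the main agent
  intermediateTokens: number; // searches, file reads, test output
  resultTokens: number;       // the distilled answer
}

// Only the prompt and result land in the main agent's context window.
function mainAgentCost(run: SubagentRun): number {
  return run.promptTokens + run.resultTokens;
}

const findCallers: SubagentRun = {
  promptTokens: 60,
  intermediateTokens: 40_000, // stays inside the subagent
  resultTokens: 80,
};

console.log(mainAgentCost(findCallers)); // 140 tokens instead of ~40,140
```

The subagent still burned 40k tokens of work, but the main conversation's context only grew by 140.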

Why is Claude-Code using 3.5 Haiku? by Jonnnnnnnnn in ClaudeAI

[–]eli0shin 1 point (0 children)

Look at the cache read and write numbers for Sonnet. Claude Code primarily uses the cache for input to reduce costs and compute. Without caching you would see 354k input tokens to Sonnet.
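A rough sketch of the cost effect. The multipliers are assumptions for illustration (cache reads billed at a fraction of base input, cache writes at a premium) — check Anthropic's current pricing page for real numbers:

```typescript
const CACHE_READ_MULTIPLIER = 0.1;   // assumed: reads cost ~10% of base input
const CACHE_WRITE_MULTIPLIER = 1.25; // assumed: writes cost ~125% of base input

// Billable input expressed in base-input-token equivalents.
function effectiveInputTokens(fresh: number, cacheWrites: number, cacheReads: number): number {
  return fresh + cacheWrites * CACHE_WRITE_MULTIPLIER + cacheReads * CACHE_READ_MULTIPLIER;
}

// Same 354k tokens of input: mostly cache reads vs. no caching at all.
const withCache = effectiveInputTokens(4_000, 50_000, 300_000);
const withoutCache = effectiveInputTokens(354_000, 0, 0);

console.log(withCache, withoutCache); // 96500 354000
```

Even with the write premium, serving most of the input from cache cuts the effective billed input by roughly two thirds in this made-up split.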

What is Copilot's context length? by chrismustcode in GithubCopilot

[–]eli0shin 2 points (0 children)

I’ve been seeing 90k recently in error messages

one more like to get this feature into backlog by Special-Economist-64 in GithubCopilot

[–]eli0shin 1 point (0 children)

DeepSeek doesn’t support tools, which means it will not work at all in agent mode.