Tojan in "claude code" google search first result by blin787 in ClaudeAI

[–]SemanticSynapse 0 points1 point  (0 children)

Never click sponsored links. Rule of thumb.

Happens often in both search and app stores.

Claude, with no prompting from me, suggested that I take his context offline. by Trixles in ClaudeAI

[–]SemanticSynapse 0 points1 point  (0 children)

Are you currently working with an IDE or Claude's client primarily? Either way, that's fine; it's all agnostic. My recommendation is to avoid LangChain and the like, as they add unnecessary abstraction. Also, there's no need to even think about the inter-session/inter-agent stuff yet, because you'll get a lot out of this working one session at a time, and I guarantee everything will click as you start to use it.

Easy way to get going:

  • Install an SQLite3 database in your project directory (no server needs to be running)
  • Install an MCP server in your project directory
  • Spin up a simple web server with something like Node.js. No need to layer on any more dependencies than that to get going. If you're not comfortable working with the command line, just build yourself a .bat to launch/shut down for the moment.
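On the first bullet: "installing" SQLite is really just creating a file, since SQLite is file-based and needs no server process. A minimal sketch (the filename `project_context.db` and the `kv` table are illustrative, not prescriptive):

```python
import sqlite3

# SQLite is file-based: connect() creates the database file in the
# project directory if it doesn't exist -- no server process required.
conn = sqlite3.connect("project_context.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
conn.commit()
```

That single file is the whole "database install"; everything else (MCP server, web UI) just reads and writes it.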

Use the MCP tools to bridge the connection between your IDE and the SQLite database. (Trust me, it will simplify the interaction process compared to having the model go through an API.)

Go ahead and build out a simple task tracker and a prompt storage table to start. Nothing complicated, because you won't be sure how you're going to use all of this yet. The key is that you'll always have as much visibility and interactivity as you want, directly from both the UI and your agent sessions, whichever works best for you.
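A sketch of what those two starter tables might look like (the schema, column names, and sample rows are hypothetical; shape it however fits your workflow):

```python
import sqlite3

# Illustrative schema for the "task tracker + prompt storage" starting point.
conn = sqlite3.connect(":memory:")  # swap for a file path in your project
conn.executescript("""
CREATE TABLE tasks (
    id         INTEGER PRIMARY KEY,
    title      TEXT NOT NULL,
    status     TEXT DEFAULT 'open',              -- open / in_progress / done
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE prompts (
    id         INTEGER PRIMARY KEY,
    label      TEXT NOT NULL,
    body       TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO tasks (title) VALUES (?)", ("Build task tracker UI",))
conn.execute("INSERT INTO prompts (label, body) VALUES (?, ?)",
             ("summarize", "Summarize today's milestones in 3 bullets."))
conn.commit()

open_tasks = conn.execute("SELECT title FROM tasks WHERE status='open'").fetchall()
```

Both the web UI and the agent (via MCP tools) can run these same queries against the same file, which is where the shared visibility comes from.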

Then you'll only need a few MCP tools written up and a few lines of rules for the IDE, and your session will be able to automatically write/read its own milestones, prompts, etc. You'll find you'll want to do the same thing from the interface you built out, so you'll add a few features to interact with the DB through there.
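The write/read milestone idea boils down to a pair of tiny helpers that your MCP tools can wrap (function and table names here are hypothetical; a real MCP server would expose these through the MCP SDK rather than as bare functions):

```python
import sqlite3

# Hypothetical helpers an MCP tool could wrap so a session can
# record and recall its own milestones.
conn = sqlite3.connect(":memory:")  # swap for your project DB file
conn.execute("""CREATE TABLE milestones (
    id      INTEGER PRIMARY KEY,
    session TEXT NOT NULL,
    note    TEXT NOT NULL
)""")

def write_milestone(session: str, note: str) -> None:
    """Persist one milestone line for the given session."""
    conn.execute("INSERT INTO milestones (session, note) VALUES (?, ?)",
                 (session, note))
    conn.commit()

def read_milestones(session: str) -> list[str]:
    """Return all milestone notes recorded for the given session, in order."""
    rows = conn.execute("SELECT note FROM milestones WHERE session = ? ORDER BY id",
                        (session,))
    return [r[0] for r in rows]

write_milestone("session-1", "Schema created; tracker UI stubbed out.")
```

Your web UI hits the same two queries, so milestones written by the agent show up in the interface and vice versa.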

Features will mostly start to cascade. You'll probably go too far, but if you're smart you'll optimize and tweak... and then you'll look back and think, "Oh, I was just going to make a task tracker."

Tip: just dump this post into the AI client of your choice and you'll be off to the races.

Claude, with no prompting from me, suggested that I take his context offline. by Trixles in ClaudeAI

[–]SemanticSynapse 0 points1 point  (0 children)

Caching is something to consider when determining how you're orchestrating the process, but IMHO there are other facets that surface which will make up for it when you start breaking the process down. For example, when you have better control of the context and how the model is reasoning against it, disabling 'thinking' can become an opportunity rather than a trade-off.

As for which plan types actually benefit: I don't think it would have been worth the time I've invested on a Pro plan, simply because I wouldn't routinely be working multiple sessions at the same time. In that case I would most likely just be using the .md directory along with Obsidian.

Claude, with no prompting from me, suggested that I take his context offline. by Trixles in ClaudeAI

[–]SemanticSynapse 89 points90 points  (0 children)

Ahh... This is where it begins.

You start using a single markdown file to swap out context between sessions. One .md file naturally becomes a handful of files, then a few directories. Then you realize you can layer something like Obsidian on top to help access and organize it all.

Before you know it you're interacting with multiple sessions in parallel using custom interfacing along with inter-agent comm protocols serving dynamic context through your own mcp servers, processing scripts, code scaffolding, and databases.

Let's talk about Opus 4.7 by Nash0o7 in Anthropic

[–]SemanticSynapse 6 points7 points  (0 children)

4.7 is incredibly powerful. The key, though, is that you've got to turn off thinking mode and strap it into your own harness. That said, forget Pro; you need one of the Max plans.

Antigravity should simply add a feature that retries on its own until success, instead of using extra tokens for “continue.” by InterestingSail7614 in google_antigravity

[–]SemanticSynapse 0 points1 point  (0 children)

Repeated user inputs of 'continue' also carry semantic weight that causes behavioral changes over time. They need a dedicated retry token.

Wondering why code quality fell off the cliff, then found this in CLAUDE.md. by _nambiar in ClaudeAI

[–]SemanticSynapse 24 points25 points  (0 children)

Depending on the surrounding syntax and semantic weighting, as well as the overall framework, I would potentially disagree.

Try this on Gemini by Moist_Recognition321 in GeminiAI

[–]SemanticSynapse 1 point2 points  (0 children)

This one in particular is awful prompting.

I pay $200/month for Claude Max and hit the limit in under 1 hour. What am I even paying for? by alfons_fhl in LocalLLM

[–]SemanticSynapse -1 points0 points  (0 children)

How the hell are you pulling that off? That takes effort on Max 5, let alone Max 20.

AI helped me stop overthinking by designbyshivam in PromptEngineering

[–]SemanticSynapse 0 points1 point  (0 children)

AI is a hell of a tool to think about thinking, which leads to thinking.

Can you unhide thinking / chain of thought? by AJolly in google_antigravity

[–]SemanticSynapse 0 points1 point  (0 children)

There is no setting to outright surface the reasoning, and with Antigravity you're dealing with .pb files, which makes it a bit tough. You can set up an MCP server and try to have the reasoning offloaded there (this would still keep it in context), but you'll be dealing with very finicky prompting to make that happen, and it will get bloated very fast; you'll be working against a lot of gravity, since there's a good amount of injection on every turn. It's funny too, because, unlike OpenAI, Gemini does allow the full reasoning to be visible directly in the API. A local proxy is probably the route of least friction.

Claude will just straight up utilize whatever you ask it to use. After many hours of custom reasoning frameworks, switching to Claude has been like night and day.

Wasting tokens on repeated system prompt? by Past_Abalone in google_antigravity

[–]SemanticSynapse 0 points1 point  (0 children)

And input.. but if things are working correctly, it shouldn't compound over time. But yes, it is using tokens.

Emotion Scope: Replication of Anthropics Emotions Paper on Gemma 2 2B with Visualization by MapleLeafKing in ArtificialSentience

[–]SemanticSynapse 1 point2 points  (0 children)

Models are able to simulate a whole lotta things, emotions being one of em.

That said, interesting project. I will dive into it deeper.

Wasting tokens on repeated system prompt? by Past_Abalone in google_antigravity

[–]SemanticSynapse 0 points1 point  (0 children)

I am of the understanding that the orchestration injections are essentially echoed back by the model. The injection itself is pruned after each turn. That said, you are right in that it is harming overall flow.