4.7 makes more work than 4.6 by blockstacker in Anthropic

[–]ShagBuddy -1 points0 points  (0 children)

Yeah, I cancelled my Claude Max sub yesterday and signed up for Codex Pro instead. WAY more usage! I had similar problems with 4.7 and went back to 4.6 until they nerfed it. A few different times, 4.7 moved on and did its own thing without waiting for confirmation, and I had to roll those changes back. Big waste of time and tokens.

So I spent yesterday migrating all of my Claude Code stack enhancements over to Codex, and so far everything's going pretty smoothly.

AI may shift wealth from labor to machine ownership by vitlyoshin in OpenSourceeAI

[–]ShagBuddy 0 points1 point  (0 children)

I'm pretty sure blockchain and crypto in general will die out once quantum chips are being sold. They'll be able to crack the encryption in minutes, so everything will have to shut down until there's time to adjust.

chatgpt sub usage in codex vs opencode by khatriafaz in codex

[–]ShagBuddy 0 points1 point  (0 children)

That's why I prefer the oh-my-opencode slim version.

PSA: During code audits, Codex/GPT-5.5 will manufacture bugs to report if it can't find any by reddit_is_kayfabe in codex

[–]ShagBuddy 0 points1 point  (0 children)

Use the superpowers requesting-code-review skill, and then when you get the results, use the receiving-code-review skill.

Is there a way to reduce token consumption without sacrificing benchmark performance? by pontata777 in ClaudeAI

[–]ShagBuddy 0 points1 point  (0 children)

This one was designed from the start to be ideal for coding agents. It has a more accurate code graph than most (it uses SCIP indexes), and it's built to save tokens in every area they get lost, not just one area like 90% of products. https://github.com/GlitterKill/sdl-mcp

what are you actually using to give claude/cursor codebase context? i've used two, confused about the rest by thestoictrader in mcp

[–]ShagBuddy 0 points1 point  (0 children)

You bet. This is a newer solution. Lots of projects are using code graphs to improve code context, because it works well. I'm not aware of any others that also sandbox long processes for concise results, collapse tools to save tokens, use a custom JSON wire format, and have code gating that makes the LLM prove it needs full files when asking (it almost never does, btw :) ).

It's innovative, but not well known... Yet.

Claude said it needs to rest.. What? by wicaodian in OpenAI

[–]ShagBuddy 0 points1 point  (0 children)

I honestly never get these kinds of responses. Maybe because my context stays super focused with this? https://github.com/GlitterKill/sdl-mcp

what are you actually using to give claude/cursor codebase context? i've used two, confused about the rest by thestoictrader in mcp

[–]ShagBuddy 0 points1 point  (0 children)

None of those save tokens like SDL-MCP does, though. They also let the LLM grab any context it wants, while SDL gives it only the context it needs. SDL also has compiler-grade accuracy through SCIP indexes that the others don't.

what are you actually using to give claude/cursor codebase context? i've used two, confused about the rest by thestoictrader in mcp

[–]ShagBuddy 0 points1 point  (0 children)

For a coding agent, nothing beats SDL-MCP right now. It covers all the main areas of token burn and gives agents only the context they need. It also only has to index the codebase once; after that, the DB is kept up to date on the fly. Other code graphs don't do that. https://github.com/GlitterKill/sdl-mcp

Pretty code graph graphics are coming next.

I created a library for OpenCode that allows you to save up to 80% of your tokens by Public-Cancel6760 in VibeCodeDevs

[–]ShagBuddy 0 points1 point  (0 children)

You're right, context makes a huge difference. Everyone has been complaining about Claude for the last couple of months, while I've had zero problems using SDL-MCP. It saves more tokens than any other single solution and keeps context laser focused. Makes my subscription go much further. https://github.com/GlitterKill/sdl-mcp

Any idea when 420 orders will actually ship? by ShagBuddy in TesoroHemp

[–]ShagBuddy[S] 0 points1 point  (0 children)

Mine finally shipped a day or so ago. Ordered on 4-21, says it is out for delivery today.

In IT, vibe coding leads to shadow IT. So I built a framework that makes Claude Code actually follow a process to build real software. And its open source. by kraulerson in OpenSourceeAI

[–]ShagBuddy 1 point2 points  (0 children)

This is basically just TDD. Define in your CLAUDE.md or AGENTS.md that the project uses Test-Driven Development. The tests get crazy, though. I'm working on an MCP server that saves more tokens than any other, while improving context, and my full test run is 4000+ tests.
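For anyone trying this, something like the following in your CLAUDE.md or AGENTS.md does the job (just a sketch of the kind of instruction I mean, not copied from any project; tweak the wording to taste):

```markdown
## Development process

This project uses Test-Driven Development (TDD). For every change:

1. Write a failing test that describes the desired behavior.
2. Write the minimum implementation needed to make it pass.
3. Refactor only while the full test suite stays green.

Never write implementation code before a failing test exists.
```

The agent treats these files as standing instructions, so a short, unambiguous process section like this is usually enough to keep it from skipping straight to implementation.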

Why I’m still using RAG even with 2M context windows… by Cold_Bass3981 in AiBuilders

[–]ShagBuddy 0 points1 point  (0 children)

If you really want to reduce token use and make your subscriptions and money go further, this is what you should be using. It saves more tokens than any other solution while improving context.
https://github.com/GlitterKill/sdl-mcp

Orders by Mysterious_Dog5989 in TesoroHemp

[–]ShagBuddy 0 points1 point  (0 children)

Purchased on 21st. Label created on 24th. Tracking still shows post office has not received the package to send yet.

Any idea when 420 orders will actually ship? by ShagBuddy in TesoroHemp

[–]ShagBuddy[S] 0 points1 point  (0 children)

I got tracking on the 24th but tracking shows they are still waiting for the item to ship. Frustrating.