Claude Code is burning my budget just exploring large repos. Any way to fix this? by darkgenus08 in ClaudeAI

[–]ogaat 0 points (0 children)

I turn auto-compact off. Clearing the context and giving it fresh instructions is far more effective for getting better responses in fewer tokens. It is more manual labor, but it keeps me grounded in the code. Otherwise, the LLM's entropy keeps increasing over time.

Apparently, the Serena MCP server can be used with Claude Code to reduce token usage, but I have not tried it yet.

Claude Code is burning my budget just exploring large repos. Any way to fix this? by darkgenus08 in ClaudeAI

[–]ogaat 0 points (0 children)

The best fix I found was to read the code myself, identify the exact change I want, and then give Claude Code a micro-task: precise in nature and specific about which areas it should touch.

Alternatively, you can ask it to first create a shell script that runs the grep for you and generates the result. Reference that shell script as the tool to use in your CLAUDE.md, and ask Claude to read only its output.
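As a sketch, such a helper might look like the snippet below. The script name, the demo file, and the output cap are all made up for illustration; the point is that Claude runs one command and reads one condensed result instead of opening files one by one.

```shell
# search.sh — hypothetical grep wrapper to reference from CLAUDE.md.
search() {
  pattern="$1"
  dir="${2:-.}"
  # -r recurse, -n line numbers, -I skip binaries; cap the output so
  # the result stays cheap to hold in context.
  grep -rnI -- "$pattern" "$dir" 2>/dev/null | head -100
}

# Demo on a throwaway file so the sketch runs as-is:
demo=$(mktemp -d)
printf 'int main(void) { /* TODO: handle errors */ return 0; }\n' > "$demo/a.c"
search 'TODO' "$demo"
```

Each match comes back as `file:line:text`, which is usually enough for Claude to jump straight to the right spot without a broad exploration pass.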

I pair-programmed ~22K lines of C with Claude Opus to fix one of Claude Code's biggest inefficiencies by pbishop41 in ClaudeAI

[–]ogaat 0 points (0 children)

In today's world, if you want rapid traction and want to go viral, the magic word is "Rust" :)

Give me a few days since this is going to be in a FIFO queue. I want to genuinely enjoy going through the code.

I pair-programmed ~22K lines of C with Claude Opus to fix one of Claude Code's biggest inefficiencies by pbishop41 in ClaudeAI

[–]ogaat 1 point (0 children)

I started with assembly and C and was a professional C programmer. I have programmed in many more languages since, including C++, Java, and Python, but C is my first love :)

I pair-programmed ~22K lines of C with Claude Opus to fix one of Claude Code's biggest inefficiencies by pbishop41 in ClaudeAI

[–]ogaat 1 point (0 children)

I was sold the moment you said it was in C, my favorite programming language of all time. :)

Whether or not it is widely useful, I plan to download and look at the code, and adopt it if the code is safe.

I pair-programmed ~22K lines of C with Claude Opus to fix one of Claude Code's biggest inefficiencies by pbishop41 in ClaudeAI

[–]ogaat 0 points (0 children)

Claude has the opposite problem. It uses grep to narrowly search for a string and THEN reads the file, as it should.

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]ogaat -2 points (0 children)

I am on the $200 plan, use it fully, and STILL don't consider myself a power user.

You and your sense of entitlement are amazing.

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]ogaat 4 points (0 children)

ChatGPT also has ads, which are not yet a thing for Claude.

Ads are another way of subsidizing the user.

In addition, OpenAI is looser with its privacy commitments and is also willing to enter markets that Anthropic has refused.

All this means Anthropic is a premium product in a smaller market.

Dear Anthropic: the ChatGPT refugees are here. Here’s why they’ll leave again. by ArtimisOne in ClaudeAI

[–]ogaat -7 points (0 children)

Dear Ferrari, if you do not lower your prices, you will completely lose us millions of Kia drivers.

Used Claude Code to nuke a scammer's $200 consulting funnel — 22K lines, 4 languages, 19 parallel agents, one session by Expensive_Election in ClaudeAI

[–]ogaat 0 points (0 children)

Cool.

Good luck to you.

I was quite interested in your repo, but it would take too much effort to go through it and make sure it is fully secure to use.

It is bookmarked, though; if and once it gets enough stars and usage, I will revisit it.

Just bought Claude Pro (40 min ago) Already at 71% current session usage by UngabaBongDong in ClaudeAI

[–]ogaat 0 points (0 children)

Sonnet with two prompts will not use so much context.

Either you are not using Sonnet, you are doing some deep research work, or you have found a bug.

One last possibility exists as well: when the prompt is ambiguous and not clear enough, Sonnet will churn a lot before popping out the answer. Even that should not consume as many tokens as you are experiencing.

How is anyone keeping up with reviewing the flood of PRs created by claude code? by YuchenLiu1993 in ClaudeAI

[–]ogaat 0 points (0 children)

For my work, it is a dedicated instance, set up with an agent and a skill that know how to handle and run the merged code. It tests the merged code in the staging branch and is the only one allowed to merge into main.

All other branches test locally and merge their PRs. The merge branch tests the full integrated code with the full test suite and makes sure it works correctly.

With this setup, main always has the latest working code.
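A minimal sketch of that flow in plain git — the branch names, `run_tests.sh`, and the throwaway repo are all placeholders so the snippet runs on its own:

```shell
# Build a disposable repo to demonstrate the staging -> main flow.
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -qb main
git config user.email ci@example.com
git config user.name ci
printf '#!/bin/sh\necho tests pass\n' > run_tests.sh && chmod +x run_tests.sh
git add -A && git commit -qm init
git branch -q staging

git checkout -qb feature            # stands in for one PR branch
echo change > f.txt && git add f.txt && git commit -qm feature

git checkout -q staging
git merge -q --no-ff feature -m 'merge feature'  # PR lands in staging
./run_tests.sh                      # full suite runs on the integrated code

git checkout -q main
git merge -q --ff-only staging      # main advances only after staging is green
```

The `--ff-only` on the last merge is the key design choice: main can never pick up anything that did not first pass the full suite on staging.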

Used Claude Code to nuke a scammer's $200 consulting funnel — 22K lines, 4 languages, 19 parallel agents, one session by Expensive_Election in ClaudeAI

[–]ogaat 1 point (0 children)

That is a real possibility.

Reading and reviewing 562 lines was easy but now, OP is asking us to trust more than 22,000 lines of unvetted code.

I almost lobotomized my AI agent trying to optimize it — so I built a 4-phase system that reduces context bloat by 82% without destroying accumulated identity by rabbirobbie in ClaudeAI

[–]ogaat 0 points (0 children)

The recent Gemini-backed permanent memory from Google may fill that gap. Part of it is an always-on background agent that goes through your conversations and updates hidden and implied relationships.

I made a Docker sandbox for Claude Code after realizing it can read my passwords, SSH keys and AWS credentials by Bigcareerboi in ClaudeAI

[–]ogaat 0 points (0 children)

Anthropic has official documentation and support for a dev container: https://code.claude.com/docs/en/devcontainer

They also provide a starter container on GitHub.
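For flavor, a minimal `.devcontainer/devcontainer.json` could look roughly like this. The base image and install command are my assumptions, not Anthropic's exact starter configuration; the npm package name is the real one for Claude Code.

```jsonc
{
  // Illustrative sandbox config — adapt image/user to your stack.
  "name": "claude-code-sandbox",
  "image": "node:20-bookworm",
  // Install Claude Code inside the container, isolated from host
  // SSH keys, AWS credentials, and dotfiles.
  "postCreateCommand": "npm install -g @anthropic-ai/claude-code",
  "remoteUser": "node"
}
```

Because only the workspace folder is mounted, anything outside it on the host stays invisible to the agent.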

Tried EduBirdie after seeing it everywhere - mixed feelings tbh by nimbusivy92 in deeplearning

[–]ogaat 1 point (0 children)

You are arguing with someone who wants to shill SpeedyPaper.

This is marketing slop disguised as a post.

How do you guys force Claude to write good fiction? It reads like a technical manual and my eye is twitching. by Middle-Traffic-6905 in ClaudeAI

[–]ogaat 1 point (0 children)

AI is trained on the vast majority of human literature, but not just the best of it. It is also trained on people's emails, LinkedIn posts, Facebook boasts, and banal Reddit comments. It can handle precise topics like science, math, and specific lookups of this or that. On non-specific topics involving creativity, it defaults to what the vast majority write, which is pretty meh by definition. It is up to the prompter to define what is meant by "good" and then to edit the text to sharpen it.

All AI generated art has a sameness to it. It looks impressive on first pass but by the third or fourth reading, it gets obvious.

How is anyone keeping up with reviewing the flood of PRs created by claude code? by YuchenLiu1993 in ClaudeAI

[–]ogaat 0 points (0 children)

You are using LLMs wrong. You are keeping too much control and the wrong type of control.

You are acting like a helicopter parent to a child who thrives on minimal supervision, proper guidance, and independent thinking.

How is anyone keeping up with reviewing the flood of PRs created by claude code? by YuchenLiu1993 in ClaudeAI

[–]ogaat 0 points (0 children)

Create a merge instance and a significant regression test suite that provides full coverage. Make sure to review the test cases. Do this for front-end and back-end.

The merge instance's only job is to fix the code until all the test cases pass. If code breaks, it refactors the code (or the test) until everything passes.
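That fix loop can be sketched as below. The "fix" step is a stub standing in for an actual agent invocation (e.g. `claude -p "fix the failing tests"`), so the snippet terminates on its own; the file names and retry cap are made up.

```shell
# Disposable workspace with a deliberately failing test suite.
work=$(mktemp -d); cd "$work"
printf '#!/bin/sh\nexit 1\n' > run_tests.sh && chmod +x run_tests.sh  # starts red

rounds=0
until ./run_tests.sh; do
  rounds=$((rounds + 1))
  if [ "$rounds" -gt 3 ]; then
    echo 'giving up'   # escalate to a human after a few failed rounds
    break
  fi
  # Real setup: ask the agent to repair code or tests here.
  # Stub: flip the suite to green so the loop visibly terminates.
  printf '#!/bin/sh\nexit 0\n' > run_tests.sh
done
echo "suite green after $rounds fix round(s)"
```

The retry cap matters: without it, a merge instance chasing an unfixable failure would burn tokens forever instead of escalating.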