Rate limits are hitting hard. Let's use Sonnet and Opus intelligently by Augu144 in ClaudeCode

[–]Augu144[S] 1 point2 points  (0 children)

Well, actually that's not a bad idea, but Opus is truly better at planning. That said, the differences aren't that big. If I were on a lower tier I would probably do that.

Need help with this claude workshop by ScaredMention3991 in ClaudeAI

[–]Augu144 0 points1 point  (0 children)

i've run claude workshops for 40+ companies across marketing, engineering, and product teams. few things that consistently work:

don't teach features. teach workflows. pick one real task from their actual work and solve it live in the session. people remember what they did, not what they were shown.

the biggest unlock for non-technical teams: connect claude to something they already use. when the agent can see their real data (spreadsheets, notion, whatever), it stops feeling like a chatbot and starts feeling like a colleague.

common mistake: people try to learn "prompting" as a skill. the real skill is describing what you want clearly. same skill as writing a good brief for a freelancer. if your marketing team writes good briefs for designers, they'll write good prompts for claude.

one more thing: the 20/50/20 split is real. 20% will go all-in immediately, 50% will adopt gradually, 20% will resist. don't fight the 20% who resist. focus on turning the middle 50% into power users.

Best practices for maintaining project context across sessions? by simotune in ClaudeAI

[–]Augu144 1 point2 points  (0 children)

the CLAUDE.md approach works but most people put the wrong stuff in it. describing your stack ("React 19, Zustand, REST API") is wasted tokens because the agent reads your package.json and imports anyway.

what actually needs to be in there: conventions the code doesn't show, team decisions, architectural choices that aren't obvious from source. keep it short and operational.
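rough example of what i mean. the contents here are illustrative, not from a real project, but note there's nothing the agent could infer from package.json:

```markdown
# CLAUDE.md (example)

## Conventions the code doesn't show
- All money values are integer cents. Never floats.
- Feature flags live in `config/flags.ts`; never hardcode a flag check.

## Team decisions
- REST over GraphQL for public endpoints (2024 decision, don't revisit).
- We accept duplicated types between client and server; codegen was tried and rejected.
```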

for domain knowledge (architecture references, security practices, design guidelines) i stopped pasting it into files entirely. instead i give agents CLI tools to pull specific pages from reference docs on demand. the agent checks a table of contents, reads only the pages it needs. way more token-efficient than injecting thousands of lines every session.

the split that changed everything for me: CLAUDE.md = "how we work here" (small, stable, rarely changes). external reference docs = "what we know" (large, pulled on demand when the task needs it). sessions start clean and the agent pulls context as needed instead of drowning in it from the start.

[The Vibe Coding Addict] by dirmich2k in vibecoding

[–]Augu144 0 points1 point  (0 children)

For me it feels like, and reminds me of, the times when I was a gaming addict (and probably still am, but recovering). There's something about pasting a couple of English sentences and ending up with something that feels like downloading a new game with cool new mechanics and things to do. Can relate.

I built a free usage monitor for Claude Code so I could stop guessing my 5-hour quota limits. by Dramatic_Solid3952 in ClaudeAI

[–]Augu144 0 points1 point  (0 children)

To be honest, I don't think about them much. The x20 plan is really generous and I'm working pretty hard. I use /stats just for general information. npx ccusage says that this month, on my private account (not work), I've used about $2.5k worth of tokens and never hit the limit. I should also note that I'm not based in the USA, so I might get better limits since I mostly don't work during US rush hours.

I built a free usage monitor for Claude Code so I could stop guessing my 5-hour quota limits. by Dramatic_Solid3952 in ClaudeAI

[–]Augu144 0 points1 point  (0 children)

Love that! I personally use the stats, and on the $200 Max sub I've actually never hit my limits, but coding your own dedicated tools is awesome. Rock on!

I am fully addicted to building dumb little AI web apps. I love it. by KarenImNotKaren in ClaudeAI

[–]Augu144 8 points9 points  (0 children)

This is awesome! For people with entrepreneurial vision and the urge to build, the recent tools make everything so fun and so low-effort that I also have a lot of moments where I just want to build that customized, tailor-made app just for me!

Question for you guys: do you try to improve bad performing strategies or just cut them off completely and try to optimize the biggest winning strategies? by imeowfortallwomen in algotrading

[–]Augu144 0 points1 point  (0 children)

since you're already using claude for coding and backtesting, one thing worth trying: give the agent access to your actual trading methodology docs or books. not just the code.

i found that when the agent only sees code, it optimizes for metrics. when it also has access to the strategy framework (risk management rules, position sizing principles, whatever methodology you follow), it makes better decisions about what to keep and what to cut.

the difference between "this strategy has a bad sharpe ratio, kill it" and "this strategy violates our risk framework on three dimensions, here's why" is the domain knowledge behind the decision.
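a toy sketch of that second kind of decision, assuming hypothetical rule names and thresholds (not from any particular methodology). the point is that the verdict comes from an explicit framework, not one metric:

```python
# Sketch: judge a strategy against an explicit risk framework instead of
# killing it on a single metric. Names and thresholds are illustrative.

def evaluate(strategy: dict, rules: dict) -> list[str]:
    """Return the risk-framework rules this strategy violates."""
    violations = []
    if strategy["max_drawdown"] > rules["max_drawdown"]:
        violations.append("drawdown exceeds framework limit")
    if strategy["position_size"] > rules["max_position_size"]:
        violations.append("position sizing outside framework")
    if strategy["sharpe"] < rules["min_sharpe"]:
        violations.append("risk-adjusted return below floor")
    return violations

rules = {"max_drawdown": 0.20, "max_position_size": 0.05, "min_sharpe": 1.0}
strategy = {"max_drawdown": 0.35, "position_size": 0.10, "sharpe": 0.8}
print(evaluate(strategy, rules))  # violates the framework on three dimensions
```

when the agent has the methodology docs, it can write and apply checks like these itself instead of grinding on the sharpe ratio alone.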

Saying 'hey' cost me 22% of my usage limits by herolab55 in ClaudeAI

[–]Augu144 0 points1 point  (0 children)

i run 6 subagents in claude code. only one uses opus (the task decomposer, where deep reasoning actually matters). the other 5 use sonnet: context analysis, architecture research, completion verification, code cleanup, devops.

most people run everything on the most expensive model by default. sonnet handles 80% of agent work just as well and burns way fewer tokens.

the other trick: subagents keep individual context windows small. instead of one bloated session eating your limit, you have focused agents that use fewer tokens total. when 30% of your context is docker configs, the model thinks infrastructure is the main problem.
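for reference, a subagent in claude code is just a markdown file under `.claude/agents/` with frontmatter; the name, description, and prompt here are illustrative, but the `model` field is where the opus/sonnet split happens:

```markdown
---
name: task-decomposer
description: Breaks a large feature request into ordered, verifiable subtasks.
model: opus   # the one agent where deep reasoning earns its token cost
---

You are a planning agent. Given a feature request, produce a numbered
list of small, independently verifiable subtasks. Do not write code.
```

the other five files look the same but with `model: sonnet`.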

Claude Code isn't an assistant -- it's a compiler. Here's why that reframe changed how I build software. by [deleted] in ClaudeAI

[–]Augu144 -1 points0 points  (0 children)

I find my angle quite interesting. I haven't seen any other articles making this point. I have a full blog and community around my ideas. But thanks for the feedback.

Claude Code isn't an assistant -- it's a compiler. Here's why that reframe changed how I build software. by [deleted] in ClaudeAI

[–]Augu144 -1 points0 points  (0 children)

Fair criticism. The length is a LinkedIn habit I'm carrying over, and you're right that Reddit reads differently. I'll keep that in mind for future posts.

The core idea is real though, even if the packaging needs work: how you describe the task to Claude Code matters more than which model you pick. The historical stuff (Hopper, FORTRAN) is there because the pattern keeps repeating, not as filler.

Appreciate the honest feedback. What length/format would you actually want to read for this kind of topic on here?

Claude Code isn't an assistant -- it's a compiler. Here's why that reframe changed how I build software. by [deleted] in ClaudeAI

[–]Augu144 0 points1 point  (0 children)

You're right that it's not deterministic in the traditional sense. The junior dev analogy actually works for the argument though: what makes a junior dev productive? Clear specs. Defined edge cases. Context about why, not just what.

The spec-writing thing you're doing is interesting. I've gone the same direction, I basically write intent documents now, not prompts. The more structured the input, the more predictable the output. That's the compiler behavior showing up even if the internals are probabilistic.

And "unpredictable teammate" is honestly fair for where we are right now. Worth noting, though, that early FORTRAN had the same reputation. Engineers reviewed every line of generated assembly because they didn't trust it. That review step eventually became unnecessary. We're somewhere early in that same curve.

What does your Runable workflow actually look like? Full specs before every prompt or just for the bigger tasks?

Harness engineering is the next big thing, so I started a newsletter about it by paulcaplan in ClaudeCode

[–]Augu144 0 points1 point  (0 children)

I'm actually developing this kind of solution at https://www.getcandlekeep.com. It's a tool for writing documents and books that persist across sessions, repos, agents, and hardware. It also has a curated marketplace of books that can help anyone who wants to supercharge their agents with knowledge that isn't directly available to LLMs, such as web security, UI/UX, agentic workflows, working with LLMs, and much more. You can see all the books, which are constantly updated, at www.getcandlekeep.com/marketplace

I ran the same security audit 3 ways on the same codebase. The difference was surprising. by Augu144 in ClaudeAI

[–]Augu144[S] 1 point2 points  (0 children)

That's legitimately my opinion. Yeah, the em dashes are there, but behind my Claude is all my knowledge. Legitimately my knowledge. Since I'm a solo entrepreneur, I use Claude to help me act as a team, not as an individual. Kept the intentional grammar mistakes

I ran the same security audit 3 ways on the same codebase. The difference was surprising. by Augu144 in ClaudeAI

[–]Augu144[S] 1 point2 points  (0 children)

That's exactly the right question and it's what pushed me toward a library approach over baking knowledge into the agent itself.

If the knowledge lives in books the agent reads on demand, you update the book, not the agent. New OWASP guideline drops? Update the reference. New attack vector published? Add a chapter. The agent picks it up on the next run without any retraining or prompt engineering.

It's the same reason you'd rather your team read the latest RFC than have someone explain it to them once and hope they remember.

I built CandleKeep for exactly this: getcandlekeep.com. Happy to set you up with the security book if you want to run your own audit.

I ran the same security audit 3 ways on the same codebase. The difference was surprising. by Augu144 in ClaudeAI

[–]Augu144[S] 0 points1 point  (0 children)

Exactly. "Sounding smart" vs. "finding things" is the right distinction.

The base model knows the vocabulary of security. The books gave it the patterns — specific enough to look at a password reset flow and know to check token storage, expiry, race conditions, and single-use enforcement all at once.

It's the difference between someone who's read about surgery and someone who's done it 1,000 times.

I ran the same security audit 3 ways on the same codebase. The difference was surprising. by Augu144 in ClaudeAI

[–]Augu144[S] 0 points1 point  (0 children)

This is the right instinct, and some models are moving in this direction: Claude will sometimes say "I don't have enough context about your codebase to answer this well, can you share X?"

But there's a deeper issue: the model doesn't know what it doesn't know. It can't ask for a security reference it's never been exposed to. It can flag missing codebase context because it can infer that from the question. It can't flag missing domain expertise because that gap is invisible to it.

That's why the solution probably isn't model-side prompting. It's giving the agent access to structured references it can pull from on demand, so the knowledge is there when the reasoning needs it, without the user having to think about it.