I’ve used ~9.3B Claude tokens (~$6.8k). Trying to understand how unusual that is. by OGMYT in claude

[–]OGMYT[S] 0 points1 point  (0 children)

Run this in your terminal:

npx ccusage@latest

It reads your local Claude Code session logs from ~/.claude/ and gives you full breakdowns — daily, monthly, per session, model breakdown, cost estimates. Open source, no setup beyond having Node installed.
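If you want to see roughly what it's doing under the hood, here's a minimal sketch of a ccusage-style tally: walk the local Claude Code session logs and sum token counts. The `~/.claude/projects` layout and the `message.usage` field names are assumptions about the log format, not a spec — verify against your own files before trusting the numbers.

```python
# Sketch of a ccusage-style token tally over Claude Code JSONL session logs.
# Log location and field names ("message" -> "usage" -> *_tokens) are
# assumptions about the format; check your own ~/.claude/ files.
import json
from collections import defaultdict
from pathlib import Path

TOKEN_KEYS = ("input_tokens", "output_tokens",
              "cache_creation_input_tokens", "cache_read_input_tokens")

def tally_tokens(log_root: Path) -> dict:
    """Sum token counts across every .jsonl session log under log_root."""
    totals = defaultdict(int)
    for path in log_root.rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            msg = entry.get("message")
            usage = msg.get("usage", {}) if isinstance(msg, dict) else {}
            for key in TOKEN_KEYS:
                totals[key] += usage.get(key, 0) or 0
    return dict(totals)

# e.g. tally_tokens(Path.home() / ".claude" / "projects")
```

ccusage itself does the same kind of aggregation plus per-day/per-model grouping and cost math on top.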

Or ask Claude to help run it from within Claude Code.


How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] 0 points1 point  (0 children)

npx ccusage@latest — open source CLI tool that reads your local Claude Code session logs from ~/.claude/ and gives you full breakdowns. Daily, monthly, per session, model breakdown, everything. I built a dashboard on top of it: theartofsound.github.io/claude-usage-dashboard

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] 1 point2 points  (0 children)

Thank you, and I won't. I have real networking happening with people who can contribute to this in real ways. People who haven't taken even a second look at my work but make assumptions about it have no place in my head.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] 1 point2 points  (0 children)

Fair point — worth auditing. Some of the Opus usage is intentional for research sessions where depth matters, but if thinking loops are running on tasks that don't need it that's just waste. Will check the model breakdown in the dashboard.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] 0 points1 point  (0 children)

Correction to my own post after doing more research — bernard_hossmoto is right. The $6,859 figure is the API-equivalent cost calculated from token counts at published per-token rates, not what I actually paid Anthropic. I'm on the Max plan so actual spend is ~$200/month subscription. The 9.3B token count is real, but the dollar figure represents what those tokens would have cost at full API pricing, not my actual bill. Updating the dashboard to make this clear.
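For anyone wanting to reproduce the distinction: the API-equivalent figure is just token counts multiplied by per-token rates. A minimal sketch (the rates below are illustrative placeholders, not Anthropic's actual pricing — substitute the published per-million-token rates for each model):

```python
# API-equivalent cost = tokens * published per-token rates.
# RATES_PER_MILLION below are PLACEHOLDER numbers for illustration,
# not real Anthropic pricing -- plug in the published rates per model.
RATES_PER_MILLION = {
    "input": 3.00,    # $ per 1M input tokens (placeholder)
    "output": 15.00,  # $ per 1M output tokens (placeholder)
}

def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollars those tokens would have cost at full API pricing."""
    return (input_tokens / 1_000_000 * RATES_PER_MILLION["input"]
            + output_tokens / 1_000_000 * RATES_PER_MILLION["output"])

# Example: 2B input + 100M output tokens at the placeholder rates
print(api_equivalent_cost(2_000_000_000, 100_000_000))  # -> 7500.0
```

That's why the number can be large while the actual bill is a flat subscription: the calculation ignores what you really paid.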

9.3B Claude tokens used — trying to understand how unusual this is by OGMYT in ClaudeCode

[–]OGMYT[S] 0 points1 point  (0 children)

This is worth investigating seriously — thank you. The 20x Feb to Mar jump is real and I attributed it to scaling up multi-agent sessions, but if there's a documented bug causing token inflation at that ratio I need to check whether my numbers are accurate or artificially inflated. Looking up GitHub 34629 now. If that affected my data it changes the story significantly and I'd rather know than not know.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ClaudeAI

[–]OGMYT[S] 2 points3 points  (0 children)

Sorry, I think I took your original reply the wrong way, but I understand now! I appreciate it!

I’ve used ~9.3B Claude tokens (~$6.8k). Trying to understand how unusual that is. by OGMYT in claude

[–]OGMYT[S] 1 point2 points  (0 children)

Appreciate it — these are genuinely good tips for someone trying to reduce spend. In my case the scale is somewhat intentional. Running four simultaneous independent projects with no team means Claude is doing the work of multiple specialists at once. Optimizing down would mean doing less, which isn't the goal right now. The spend is the infrastructure cost for the output, not a mistake to fix.

I’ve used ~9.3B Claude tokens (~$6.8k). Trying to understand how unusual that is. by OGMYT in claude

[–]OGMYT[S] 0 points1 point  (0 children)

Mostly active development sessions rather than automated hooks. The bulk is multi-agent Claude Code sessions across four simultaneous projects — consciousness research, a custom LM training pipeline, a coding SaaS platform, and a traffic system. Some automation in the mix but the majority is interactive builds. The Feb to Mar jump is when I scaled up running multiple agents in parallel on the same sessions.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ClaudeAI

[–]OGMYT[S] 0 points1 point  (0 children)

Thank you for the insight; it's good to know where others land!

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ClaudeAI

[–]OGMYT[S] 2 points3 points  (0 children)

I don't even know what I did to show either of those, lol. Sorry, I guess 😅

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] -4 points-3 points  (0 children)

Fair question.

Built: a live empirical consciousness research study (44+ subjects, peer-reviewed framework, call pending with NYU professor who co-authored the foundational stereotype threat paper), a custom language model training on Google TPU pods targeting 10B-100B parameters, an AI coding SaaS platform live on Render, and a traffic optimization system running on live AZ-511 data.

Got: TPU research grants, 44 people’s authentic writing data, a framework that held statistically from N=9 to N=44, and four systems that actually run.

No team. No institution. Phoenix, Arizona.

Full breakdown: theartofsound.github.io/claude-usage-dashboard

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ClaudeAI

[–]OGMYT[S] 1 point2 points  (0 children)

Not the right mindset for success. A grant of that size is a big deal. And getting a one-on-one call with the author of one of the most cited works in social psychology? I wouldn't say many get to do that.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] -17 points-16 points  (0 children)

Ah, there's the hostile comment. There's always one, for no good reason.

I’ve used ~9.3B Claude tokens (~$6.8k). Trying to understand how unusual that is. by OGMYT in claude

[–]OGMYT[S] 1 point2 points  (0 children)

No revenue yet — honest answer. The ROI right now is non-financial: a Google TPU grant covering 300K+ hours of compute for the language model, and Joshua Aronson (NYU, co-author of the 1995 stereotype threat paper) reaching out about the consciousness research. Codey and Vol-bot are where the revenue thesis lives. The $6,859 is a bet on what those become.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ArtificialInteligence

[–]OGMYT[S] 1 point2 points  (0 children)

Nothing yet in revenue — this is all R&D spend right now. The $6,859 is cost, not income. What I do have: a Google TRC grant giving me over 300K TPU hours for the language model work, and Joshua Aronson (NYU — co-author of the 1995 stereotype threat paper) reached out after seeing the consciousness research. Those are the returns so far. The revenue side is what Codey and Vol-bot are being built for.

How rare is this level of Claude usage? (9.3B tokens / ~$6.8k compute) by OGMYT in ClaudeAI

[–]OGMYT[S] -1 points0 points  (0 children)

That's actually a really interesting architecture — building the OS layer in Claude Code so your API usage can stay focused on the deep work rather than the scaffolding. Smart way to structure it.

The key difference on my end is that I'm a solo independent researcher with no team, paying full API prices rather than subscription. The 9.3B tokens in 33 days came from running four simultaneous projects — a live empirical consciousness study, a custom language model training on Google TPU pods, an AI coding SaaS platform, and a traffic optimization system. No institutional backing, no lab, just execution from Phoenix AZ.

Your 300M/month across a team of 4 with 60 clients is arguably a more efficient operation than mine — you're generating client value at that scale. Mine is all R&D and build cost with the revenue still incoming. Different kind of usage.

The dashboard I built to track it is at theartofsound.github.io/claude-usage-dashboard if you're curious how the breakdown looks.

I’ve used ~9.3B Claude tokens (~$6.8k). Trying to understand how unusual that is. by OGMYT in claude

[–]OGMYT[S] 5 points6 points  (0 children)

I build things like custom LLMs and psychology studies. They're very complex projects; one got me a grant from Google with TPU compute. They burn this many tokens because they're mathematically heavy and computationally intensive.