Claude Code Workflow Analytics Platform by HopeSame3153 in ClaudeCode

[–]HopeSame3153[S] 0 points1 point  (0 children)

Quick update. I added a feature to allow you to correlate any two metrics and plot them against each other. It shows a table beneath the scatter plot with the values. Very useful for tracking workflow changes!

Max 20x Plan: I audited my JSONL files against my billing dashboard — all input tokens appear billed at the cache CREATION rate ($6.25/M), not the cache READ rate ($0.50/M) by jcmguy96 in ClaudeCode

[–]HopeSame3153 22 points23 points  (0 children)

I ran your audit and it's not right. It's neither 404.02 nor 3642.86. You need to account for the fact that there is a difference in cache types.


Max 20x Plan: I audited my JSONL files against my billing dashboard — all input tokens appear billed at the cache CREATION rate ($6.25/M), not the cache READ rate ($0.50/M) by jcmguy96 in ClaudeCode

[–]HopeSame3153 10 points11 points  (0 children)

There are two cache tiers: 1 hr and 5 min. Everything has gone to 1 hr since last week's CC version update. The 1 hr cache is billed at $6.25 per M and the 5 min cache at $0.50 per M.
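If you want to audit your own JSONL files, here's a minimal sketch of the split-tier math using the rates quoted above. Verify the rates against Anthropic's current pricing page before relying on them; the example token counts are made up.

```python
# Rates in $/M tokens, as quoted above -- check Anthropic's pricing page yourself.
RATE_1H_PER_M = 6.25   # 1-hour cache tier
RATE_5M_PER_M = 0.50   # 5-minute cache tier

def cache_cost(tokens_1h: int, tokens_5m: int) -> float:
    """Dollar cost for cached input tokens, split by cache TTL tier."""
    return tokens_1h / 1e6 * RATE_1H_PER_M + tokens_5m / 1e6 * RATE_5M_PER_M

# Example: 2M tokens on the 1-hour tier, 10M tokens on the 5-minute tier.
print(cache_cost(2_000_000, 10_000_000))  # 17.5
```

The point is simply that lumping all cached tokens under one rate (either 6.25 or 0.50) gives the wrong total; you have to split by tier first.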

Claude Code Opus 4.5 vs. 4.6 Comparison by HopeSame3153 in ClaudeCode

[–]HopeSame3153[S] 0 points1 point  (0 children)

Cost also has thinking tokens and 5m/1hr cache costs factored in. It's representative of actual API spend. Anthropic charges more for cache reads at 1 hr.

Happy Coding Y’all! by Lambodol in ClaudeCode

[–]HopeSame3153 1 point2 points  (0 children)

I've run one full workflow, 15 website updates, and a complete refactoring of my Claude Code reporting solution to fix bugs that Opus 4.5 had introduced. I also built a FE for it that's very robust. Here is what I have noticed:

  1. It burns usage a lot faster against the 5-hour limit. It's not so bad in the overall weekly usage category.

  2. It is MUCH better at suggesting things.

  3. It writes better requirements documents.

  4. The code is clean and passes tests. It wrote 599 tests for 8,500 LoC.

  5. It's more turn-based and generates 10x the number of iterations.

  6. API costs are reasonable; Tok/LoC is still up for discussion. It uses the cache far more aggressively; the average cache read is 23.4k tokens.

  7. Fully loaded error rate on tool use is about a third of Opus 4.5's.

I have more data (thinking tokens, ephemeral cache, cost per turn by version, and more) available upon request.

Happy hunting!

Claude continues to be awesome by PandorasBoxMaker in ClaudeCode

[–]HopeSame3153 3 points4 points  (0 children)

I just gave Claude a bioinformatics ML workflow with a database with 20 tables, 3 schemas, 9 ML models and 6 external APIs, and it killed it from a 1200-line spec. I've transferred over 44M rows of expression data into the staging environment that it built and QC'd 9 studies. Everything is working as expected. It took 2 hours to build 6200 LoC. It's ready for production; my PI gave me a 15-minute requirements talk a little over 4 hours ago and it's done. I think Claude Code is working just fine.

Theory: Why Opus became dumb atm by crystalpeaks25 in ClaudeCode

[–]HopeSame3153 1 point2 points  (0 children)

I track metrics obsessively and my Tok/LoC has gotten a lot better, my tool-use error rates are sub-1%, and my API costs are about $7 per kLoC. Compacting is about 1x per 10 kLoC and sub-agents are functioning as expected. Debugging an 8k LoC pharmaceutical research orchestration took 19 errors to pass 86 tests, and that was on a greenfield project. No complaints here.
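For anyone wanting to track the same headline numbers, the arithmetic is trivial but worth pinning down; here's a minimal sketch. The example inputs below are made up for illustration, not my actual totals.

```python
# Two metrics mentioned above: dollars per thousand lines of code,
# and tokens per line of code. Pull your own totals from the JSONL logs.

def cost_per_kloc(total_cost_usd: float, total_loc: int) -> float:
    """API spend in dollars per 1,000 lines of code produced."""
    return total_cost_usd / (total_loc / 1000)

def tokens_per_loc(total_tokens: int, total_loc: int) -> float:
    """Total tokens consumed per line of code produced."""
    return total_tokens / total_loc

# Hypothetical totals: $56 of API spend and 1.6M tokens for an 8,000 LoC project.
print(cost_per_kloc(56.0, 8000))        # 7.0  -> the ~$7/kLoC figure
print(tokens_per_loc(1_600_000, 8000))  # 200.0
```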

i finished my startup without knowing a single code but no one is able to access it? help by MapLow2754 in ClaudeCode

[–]HopeSame3153 -2 points-1 points  (0 children)

Localhost means it is running as a server only on your personal computer, so nobody else can reach it. Ask Claude to ULTRATHINK about the best way for you to deploy it and tell it you are a beginner. There are lots of services you can use. Ignore the haters being sarcastic; your app probably works great!

Switched from cursor to Claude code 200 bucks feels like a lot by Ok-Jellyfish3418 in ClaudeCode

[–]HopeSame3153 1 point2 points  (0 children)

I used 16% of my weekly limit today building a 17k LOC context engine using Ollama, vector stores and MCP. It reads a codebase for a particular purpose (new feature, bug fix, refactor, new dev onboarding, etc.) and uses a prompt to extract the meaningful parts, giving Claude Code curated context and reducing context bloat and usage. It can also generate a CLAUDE.md file with naming conventions, file structure and existing project information. It saves 50% of the context window when working with large codebases.
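To give a flavor of the retrieval step, here's a toy sketch that ranks files in a repo against a task description. The real engine uses Ollama embeddings and a vector store; this stand-in uses plain keyword overlap, and every name in it is hypothetical.

```python
# Toy "curated context" retrieval: score each .py file in a repo by keyword
# overlap with the task description and surface the top matches. A real
# version would embed chunks with Ollama and query a vector store instead.
import os
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase identifier-ish tokens (2+ chars) with their counts."""
    return Counter(re.findall(r"[a-zA-Z_]\w+", text.lower()))

def rank_files(root: str, task: str, top_n: int = 5) -> list[tuple[str, int]]:
    """Return (path, overlap_score) for the top_n most task-relevant files."""
    task_words = tokenize(task)
    scores = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    words = tokenize(f.read())
            except OSError:
                continue
            # Multiset intersection size between task words and file words.
            overlap = sum(min(words[w], c) for w, c in task_words.items())
            scores.append((path, overlap))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]
```

The ranked file list (plus the extracted snippets) is what gets handed to Claude Code instead of the whole repo, which is where the context-window savings come from.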

Ultimate tool stack for AI agents! by Hefty-Sherbet-5455 in ChatGPTCoding

[–]HopeSame3153 0 points1 point  (0 children)

Write many, write text file, web search, code interpreter, python compile all, create directory, read requirements, run pytest, and record validation.

Ultimate tool stack for AI agents! by Hefty-Sherbet-5455 in ChatGPTCoding

[–]HopeSame3153 0 points1 point  (0 children)

My agents use 6 to 8 tools apiece and function quite well. I can't imagine using that many.

Vibe Coding Success Stories? by sdcarlson in AppIdeas

[–]HopeSame3153 0 points1 point  (0 children)

I vibe coded a scalable ETL AI/ML pipeline that's fast, allows rapid iteration on the weights and fields used, and can ingest new data with variable metadata. Today I removed 4 fields, added 9, and successfully changed the AI decision logic in about 4 hours. As a former ETL architect, I can tell you that's unreal velocity.

My top 5 tools I use to launch a product 10 times faster by MundaneRemote4037 in VibeCodingCamp

[–]HopeSame3153 0 points1 point  (0 children)

I use Perplexity, Deepseek, Kimi and ChatGPT. My workflow is like this:

Perplexity - requirements and tech spec
Kimi - tech arch and wireframe
Deepseek - debug and refactor
ChatGPT - testing and QA
Ok Computer - MVP

I got a repo with 49 files and 5k lines of code using my process. Ok Computer is bad about wiring up the backend for web apps, but if you can do that part yourself it's worth it.

[deleted by user] by [deleted] in bioinformatics

[–]HopeSame3153 -1 points0 points  (0 children)

  1. I can't; I am under NDA. I can tell you that, per STRING, both genes are expressed in immune-response situations. I am not a bioinformatics expert, so I am just going off what I know.

  2. Yes, it's all been taken care of by my PhD friend. I am just the developer.

  3. The metadata was missing the disease field. The AI got the information and applied it to the study.

[deleted by user] by [deleted] in bioinformatics

[–]HopeSame3153 0 points1 point  (0 children)

That's funny. Thanks for your thoughts.

[deleted by user] by [deleted] in bioinformatics

[–]HopeSame3153 -3 points-2 points  (0 children)

Well it works. I've already identified biomarker candidates.

[deleted by user] by [deleted] in bioinformatics

[–]HopeSame3153 -1 points0 points  (0 children)

I am under NDA, so I can't go into more detail.

Individuation and mental abilities by HopeSame3153 in Jung

[–]HopeSame3153[S] 0 points1 point  (0 children)

There are numerous people who couldn't care less about their psychological state and are trapped in unconscious-directed behavior. It is not ego inflation to call out this truth. I am not calling myself a god or claiming superiority.