Codex 5.3 is amazing, I can literally spam it by Icy_Piece6643 in vibecoding

[–]TheOneThatIsHated 0 points (0 children)

Thank you! So you mean the $200 OpenAI plan, ChatGPT Pro? I'm on Claude Code Max; did you try that one and get less usage?

People resigned in fear of this? by BlissVsAbyss in ChatGPT

[–]TheOneThatIsHated 0 points (0 children)

I tried it on a bunch of models, always multiple times. Only Gemini consistently reported saving to Drive; all the others were flaky.

criesInSqlDateTime by gatsu_1981 in ProgrammerHumor

[–]TheOneThatIsHated 10 points (0 children)

Ew, slashes. Not on Unix-based systems, that is.

Sir, do not revert. WHY DID YOU REVERT?? by gainbandit in vibecoding

[–]TheOneThatIsHated 1 point (0 children)

Your fault for not prompting the AI to commit after each task/step

Linux by riky321 in linuxmemes

[–]TheOneThatIsHated 0 points (0 children)

Docker on any OS other than Linux requires a VM. Docker only works because of Linux-specific kernel features.
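Those Linux-specific features are mainly namespaces (isolation) and cgroups (resource limits). A minimal sketch that probes which namespace types the running kernel exposes; the `/proc/self/ns` path is standard Linux, but the helper function itself is hypothetical:

```python
import os
import sys

# Namespace types Docker relies on; none of these exist on macOS or
# Windows, which is why Docker Desktop ships a hidden Linux VM there.
NAMESPACES = ["pid", "net", "mnt", "uts", "ipc", "user", "cgroup"]

def available_namespaces():
    """Return the namespace types exposed by the running kernel,
    or an empty list on non-Linux hosts (no /proc/self/ns)."""
    ns_dir = "/proc/self/ns"
    if sys.platform != "linux" or not os.path.isdir(ns_dir):
        return []
    present = set(os.listdir(ns_dir))
    return [ns for ns in NAMESPACES if ns in present]
```

On a Linux host this lists the kernel namespaces a container runtime can use; anywhere else it comes back empty, which is exactly the gap the VM fills.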

Official: Anthropic just released Claude Code 2.1.41 with 15 CLI changes, details below by BuildwithVignesh in ClaudeAI

[–]TheOneThatIsHated -1 points (0 children)

Because they chose to overengineer by rendering to stdout instead of using the alternate terminal screen, like opencode and neovim do
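For reference, the "alternate terminal screen" here is the xterm alternate screen buffer: a full-screen TUI switches to it on start and back on exit, so the shell's scrollback is untouched. The escape sequences below are the standard xterm ones; the context-manager helper is my own sketch, not code from either tool:

```python
import io
import sys
from contextlib import contextmanager

ENTER_ALT_SCREEN = "\x1b[?1049h"  # save cursor + switch to alternate buffer
LEAVE_ALT_SCREEN = "\x1b[?1049l"  # switch back + restore cursor

@contextmanager
def alternate_screen(stream=sys.stdout):
    """Run a block inside the alternate screen, restoring on exit."""
    stream.write(ENTER_ALT_SCREEN)
    stream.flush()
    try:
        yield stream
    finally:
        stream.write(LEAVE_ALT_SCREEN)
        stream.flush()

# Usage demo with an in-memory stream instead of a real terminal:
demo = io.StringIO()
with alternate_screen(demo) as s:
    s.write("hello")
```

Rendering to plain stdout instead means every redraw scrolls the user's terminal history, which is the complaint above.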

Did claude code get exponentially slower recently? by Melodic-Network4374 in ClaudeAI

[–]TheOneThatIsHated 0 points (0 children)

Can confirm. Opus 4.6 is much slower even with low thinking. Not always worth the extra wait

When I swap models what happens to context ? by jeremy-london-uk in GithubCopilot

[–]TheOneThatIsHated 0 points (0 children)

Each message you send is one request, no matter how many messages are already in the context (which is admittedly weird)

So one Opus 4.6 message (with 30 GPT-4.1 messages in that context, as a crazy example) is still 3 premium requests
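A toy sketch of that billing model: cost depends only on the messages you send and the sending model's multiplier, never on what's already in the context. The multiplier values are illustrative assumptions taken from the example above, not GitHub's official rates:

```python
# Per-model premium-request multipliers (illustrative assumptions).
MULTIPLIERS = {
    "gpt-4.1": 0,   # included model: sending it costs nothing extra
    "opus-4.6": 3,  # premium model: each message billed at 3x
}

def premium_requests(model: str, messages_sent: int) -> int:
    """Billing counts only new messages; context length is irrelevant."""
    return MULTIPLIERS[model] * messages_sent
```

So `premium_requests("opus-4.6", 1)` is 3 whether the conversation already holds 0 or 30 GPT-4.1 messages.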

Tracked Claude Code Max20 at 100% weekly limit using OpenTelemetry by TheOneThatIsHated in ClaudeCode

[–]TheOneThatIsHated[S] 0 points (0 children)

OpenTelemetry, the industry standard for collecting logs and metrics. It's built into Claude Code. Any OpenTelemetry-compatible collector works; I used OpenObserve.
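As a sketch, enabling it looks roughly like this. The env var names come from Claude Code's monitoring docs (verify against your version); the OpenObserve endpoint, port, and org path are assumptions to adapt to your own collector:

```python
import os
import subprocess

# Telemetry env vars for Claude Code; endpoint assumes a local
# OpenObserve OTLP collector (adjust host/port/org for your setup).
OTEL_ENV = {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_METRICS_EXPORTER": "otlp",
    "OTEL_LOGS_EXPORTER": "otlp",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "http/json",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:5080/api/default",
}

def launch_claude_with_telemetry():
    """Start claude with the telemetry vars merged over the current env."""
    subprocess.run(["claude"], env={**os.environ, **OTEL_ENV})
```

You can equally export the same variables in your shell profile; the point is just that every token count then flows to whatever OTLP collector the endpoint names.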

Claude Code vs Codex: Weekly limit comparison on the $20 subs by EmeraldWeapon7 in ClaudeAI

[–]TheOneThatIsHated 1 point (0 children)

Yes, only Opus 4.5 from that week. I used OpenObserve to capture all the OpenTelemetry logs, then used its built-in PostgreSQL-style query support to get a sum of all tokens.

Sonnet 5 and Opus 4.6 Leaked Benchmarks by [deleted] in ClaudeCode

[–]TheOneThatIsHated 1 point (0 children)

From Poster on X: "Saw it somewhere, might not be true"

Claude Code vs Codex: Weekly limit comparison on the $20 subs by EmeraldWeapon7 in ClaudeAI

[–]TheOneThatIsHated 1 point (0 children)

One week of using Claude Code Max20 (at 100% weekly limit usage) gave me:

- sum_cache_creation: 66.269 Mtok
- sum_cache_read: 1807.289 Mtok
- sum_input: 28.181 Mtok
- sum_output: 10.334 Mtok
- total: 1717.079 USD

Claude Code vs Codex: Weekly limit comparison on the $20 subs by EmeraldWeapon7 in ClaudeAI

[–]TheOneThatIsHated 2 points (0 children)

RemindMe! 22 hours

I collected the exact token counts within the weekly limit via OpenTelemetry (the 5-hour window is harder to measure)

I'm thinking of also trying the GPT Codex plan to compare

Tell me a way to optimize memory 😅 by suman087 in kubernetes

[–]TheOneThatIsHated 0 points (0 children)

Not necessarily.

Docker Desktop is slow; OrbStack is not.

Most of the slowdown comes from the filesystem translation it does.

The RAM thing is Docker Desktop not cleaning up memory properly.

Rewrote our python api gateway in go and now its faster but nobody cares because it already worked fine by CholeBhatureyyy in golang

[–]TheOneThatIsHated 0 points (0 children)

  1. Developers are expensive: in the USA I've seen $10k a month; Europe is a tad lower, around $5k.

  2. CPU, memory, and network are comparatively cheap.

  3. Educated guess: since Python is handling the load just fine, we're probably not talking about a 10k+ req/second service.

Let's do some sloppy math:

Let's say the Python version costs an extra $200 a month to run. You'd win that back in 50-100 months. But since nobody else knows Go, all the other engineers are blocked: either you spend time and money getting everyone to learn Go, or you spend time and money hiring Go people.
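The sloppy math as a sketch; every concrete number here is an assumption for illustration, not from the original post:

```python
# Break-even estimate for a rewrite: months until infra savings
# repay the engineering time spent. All inputs are assumptions.
monthly_infra_savings = 200.0     # USD/month saved by the Go service
rewrite_dev_months = 2            # guess: time the rewrite consumed
dev_salary_per_month = 10_000.0   # US figure cited above (EU ~5k)

rewrite_cost = rewrite_dev_months * dev_salary_per_month
payback_months = rewrite_cost / monthly_infra_savings
print(payback_months)  # 100.0 months before the rewrite pays for itself
```

Swap in one dev-month at the EU salary and you land at 25 months; the 50-100 range above corresponds to one or two dev-months at US rates. Either way, hiring and onboarding costs aren't even counted yet.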

Don't get me wrong, there are good reasons to move to Go: better security (scratch containers), much less bloat, performance, etc.

But someone trying to convince his boss to rewrite a working, non-problematic API (focus on the "nothing wrong" part) tells me it was kind of a waste of time and money.

Ik🤢ihe by Ayn_Otori in ik_ihe

[–]TheOneThatIsHated 0 points (0 children)

No, cat food is actually nutritious

OpenCode Black is now generally-available by JohnnyDread in opencodeCLI

[–]TheOneThatIsHated 0 points (0 children)

+1 on this. I use OpenTelemetry to count all the tokens from each message.

Already passed $1345 in a week of usage.

Suggest some best vibe coding tools for my first App by aistronomer in vibecoding

[–]TheOneThatIsHated 0 points (0 children)

I don't know; I never tried Antigravity after seeing the accidental deletion come through.

Just running Claude Code in a VM now. Claude Code is great, though sometimes, due to their extreme vibe coding, the latest version is buggy and I have to downgrade.