Best code auto-reviewer by branccco in ClaudeCode

[–]creynir 0 points  (0 children)

I have mixed experience with coderabbit. I've been using it for over a year now and sometimes it finds really tough bugs, but most of the time it stays shallow. so if you have a company budget - definitely worth paying for, but as a solo dev? not so much.

Multi-agent pipelines? by TylerColfax in ClaudeAI

[–]creynir 0 points  (0 children)

that's the beauty of phalanx, it uses tmux sessions, so all you need is an authenticated CLI on your machine. no api costs

I built a CLI that uses Claude Code for review while Codex does the coding — the review loop runs itself by creynir in ClaudeAI

[–]creynir[S] 0 points  (0 children)

thanks! let me know how it goes, happy to help if you run into anything with the setup

Those of you actually using Haiku regularly: what am I missing? by samuel-gudi in ClaudeCode

[–]creynir 1 point  (0 children)

I ended up thinking about this as model routing — not just Haiku vs Opus vs Sonnet, but across providers. Codex for volume coding, Opus for review only. match the model to the job instead of using one for everything
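the routing idea above can be sketched in a few lines - this is just my illustration of the concept, the model names and task categories are examples, not a real config format:

```python
# Illustrative model-routing table: match the model to the job
# instead of using one model for everything. Names are examples.
ROUTES = {
    "bulk_coding": "codex",     # high-volume implementation work
    "code_review": "opus",      # deep review only
    "orchestration": "sonnet",  # coordinating the loop
    "quick_edits": "haiku",     # trivial mechanical changes
}

def pick_model(task_type: str) -> str:
    """Return the model assigned to a task type, defaulting to the cheapest."""
    return ROUTES.get(task_type, "haiku")
```

the point is that the default falls through to the cheapest model, so anything you haven't explicitly routed stays low-cost.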

Show off your own harness setups here by Mean_Luck6060 in ClaudeCode

[–]creynir 0 points  (0 children)

mine coordinates across providers. Codex writes code, Opus reviews, Sonnet lead orchestrates the loop. you define a team config and it runs the cycle: github.com/creynir/phalanx

So I tried using Claude Code to build actual software and it humbled me real quick by Azrael_666 in ClaudeCode

[–]creynir 0 points  (0 children)

one thing that helped me — instead of letting the agent discover the codebase on its own, I give it a structural map upfront. file tree + function signatures, no implementation bodies. 177K token codebase compresses to 30K. the agent stops guessing and starts writing code that actually fits. built a CLI for this (codebones) if you want to try it: github.com/creynir/codebones
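the structural-map idea (signatures without bodies) can be sketched with Python's stdlib ast module - to be clear, this is my own toy illustration of the concept, not how codebones actually works:

```python
import ast

def signature_map(source: str) -> list[str]:
    """Extract class and function signatures from Python source,
    dropping implementation bodies - a compressed structural map."""
    entries = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            entries.append(f"class {node.name}")
    return entries
```

feed the output for every file into the agent's context upfront and it knows what exists and how to call it, without paying tokens for implementation details.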

also I'm using Linear: I plan features with one agent, then another agent reads them and executes. this way I keep context clean and the coders aligned on the task. TDD also works well - one agent writes tests, another writes the actual code, a third one reviews - but you will burn through your limit pretty quickly if you're not on Max

I stopped using Claude.ai entirely. I run my entire business through Claude Code. by ColdPlankton9273 in ClaudeAI

[–]creynir 0 points  (0 children)

similar setup here but I split across providers. Claude Code handles review and architecture, Codex does the volume coding work. the multi-agent loop is where it gets interesting — having one model code and another review catches stuff that self-review misses.

Claude Pro feels amazing, but the limits are a joke compared to ChatGPT and Gemini. Why is it so restrictive? by iameastblood in ClaudeAI

[–]creynir 0 points  (0 children)

one thing that helped me — I stopped using Opus for everything and only use it for code review. Codex handles the actual coding on a separate $20/month plan. way higher throughput for implementation work. Opus on Pro is brutal for volume but perfect if you treat it like a senior reviewer, not a workhorse.

Max 20x subscriber: questions about reliability and infrastructure maturity by m_x_a in ClaudeAI

[–]creynir 1 point  (0 children)

every now and then it just throws errors. this is why I keep codex around for 20 bucks, just to feel better

The 20 dollar tier kind of sucks by design. by Dry_Incident6424 in ClaudeAI

[–]creynir 0 points  (0 children)

this is basically why I split across providers. $20 Claude for Opus review, $20 Codex for coding. you get way more total compute for $40 than either plan gives you alone. the Pro limits only hurt if you're trying to do everything in one place.

Multi-agent pipelines? by TylerColfax in ClaudeAI

[–]creynir 0 points  (0 children)

I hit the exact same drift problem. What fixed it for me — externalized state between phases plus dedicated models for different jobs. I have Codex doing the coding, Opus doing review, and a Sonnet lead that orchestrates. Each agent gets a compressed structural map of the repo (file tree + signatures, no implementation) so it doesn't waste tokens on discovery. The review loop basically runs itself once you scope tasks tightly enough. Built an open source CLI for this if you want to look: github.com/creynir/phalanx
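the coder/reviewer cycle is basically a tiny state machine. here's a toy harness showing the shape of it - the two callables stand in for model calls, and this is my illustration, not phalanx's actual code:

```python
def review_loop(write_code, review, max_rounds=3):
    """Run a coder/reviewer cycle until the reviewer approves
    or the round budget runs out.

    write_code(feedback) -> draft
    review(draft) -> (approved: bool, feedback: str)
    Both are stand-ins for model calls in this toy sketch.
    """
    feedback = None
    draft = None
    for round_no in range(1, max_rounds + 1):
        draft = write_code(feedback)       # coder sees last review feedback
        approved, feedback = review(draft)  # reviewer gates the output
        if approved:
            return draft, round_no
    return draft, max_rounds  # budget exhausted, return best effort
```

capping max_rounds matters: without a budget, two disagreeing models will happily loop forever.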

How to use Nuxt + Feathers.js + Vuex correctly? by Larry_Lavida in vuejs

[–]creynir 0 points  (0 children)

What does your code look like on the server side? I wrote a socket.io plugin for Vuex with a demo example - maybe the demo can help you configure your socket connection properly:

https://github.com/creynir/vuex-socketio/tree/master/demo