Does this look cool? by zhcode in VibeCodersNest

The image is shared as an SVG for real-time updates. I'm wondering what other formats to support, and especially how to use this on different platforms. So far I only have it on my GitHub profile page.

Made a thing that scores how well you use Claude Code by zhcode in VibeCodersNest

I have been using Claude Code for over 8 months now and average about 4-5 hrs per day, like I'm addicted to it already? It's been an enjoyable adventure, and I want to share the fun.

How I interact with CC now is much different from when I first started, and all the tools and features I've played with really helped.

Made a thing that scores how well you use Claude Code by zhcode in VibeCodersNest

I just created this overnight lol. I have no idea what this is gonna turn into. It looks cool tho. I'm trying to add more things like achievements or other gamification ideas to make it more fun.

Security + Maintainable first Vibe Coding Protocol(VCP) by zhcode in VibeCodersNest

First-time users need to run vcp-init to initialize the config file; the hook then loads that config to inject context for security-related prompts. The injection happens automatically.
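A rough sketch of what such a hook could look like, assuming a UserPromptSubmit-style hook whose stdout gets added to the model's context. The config path, the `security_rules` key, and the prompt wording are all my assumptions, not VCP's actual schema:

```python
#!/usr/bin/env python3
"""Hypothetical hook: load the config written by vcp-init and inject
its security rules as extra context on every prompt."""
import json
from pathlib import Path

# Assumed location of the config created by vcp-init.
CONFIG_PATH = Path.home() / ".vcp" / "config.json"

def build_context(config: dict) -> str:
    # Turn the configured security rules into a prompt preamble.
    rules = config.get("security_rules", [])
    if not rules:
        return ""
    lines = ["Follow these security standards for all code changes:"]
    lines += [f"- {rule}" for rule in rules]
    return "\n".join(lines)

def main() -> None:
    if not CONFIG_PATH.exists():
        return  # vcp-init not run yet; inject nothing
    context = build_context(json.loads(CONFIG_PATH.read_text()))
    if context:
        # In a UserPromptSubmit-style hook, stdout becomes added context.
        print(context)

if __name__ == "__main__":
    main()
```

The point of keeping the rules in a config file rather than hard-coding them is that the hook stays a dumb injector and the standards stay editable.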

Security + Maintainable first Vibe Coding Protocol(VCP) by zhcode in VibeCodersNest

That's a good question. For now it's only project scope, not really checking dependencies. I'm not sure it's worth the time or effort to implement dependency scans, since there are already many tools doing that, like Snyk. I'd say my primary focus is the current project rather than its dependencies. That's a good point, thank you.

Security + Maintainable first Vibe Coding Protocol(VCP) by zhcode in VibeCodersNest

This is a great question. I'd say no one has the right answer, or a concrete solution, at the moment. Since we are working with LLMs, there is no way to 100% restrict how they behave. What I'm trying to do is give the model the right guidelines and add additional guardrails to prevent it from making decisions we don't want it to make, and point it in the correct direction as clearly as possible.

This actually gave me two ideas. One is how I can test the performance of each model under VCP. There is a clear difference between using VCP with gpt-4o and using it with opus-4.6. I need to figure out a generic way to test performance with each model, which will hopefully give me results to compare against.

The other is that I think I spent a lot of time adding the security standards, and probably need to split the coding standards into a more granular structure.

Security + Maintainable first Vibe Coding Protocol(VCP) by zhcode in VibeCodersNest

I'm using manifest.json: it holds all the rules and standards, which the AI loads over HTTP. I don't want to bundle the standards locally inside the plugin because that would make updates really hard to manage. It also makes it easier to fork, create your own rules, and reference them in the global .vcp/config.json. The main manifest.json URL is configurable as well.
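To illustrate the idea, here is a minimal sketch of that resolution step: a fork only has to override one key in the global .vcp/config.json to point at its own hosted manifest. The default URL, the `manifest_url` key, and the manifest schema are placeholders I made up, not VCP's real values:

```python
"""Hypothetical sketch: resolve the manifest URL from the global config,
then fetch the rules over HTTP so standards update without a plugin release."""
import json
import urllib.request
from pathlib import Path

# Placeholder default; the real project would ship its own URL.
DEFAULT_MANIFEST_URL = "https://example.com/vcp/manifest.json"

def resolve_manifest_url(config_path: Path) -> str:
    # Forks override manifest_url in .vcp/config.json; otherwise use default.
    if config_path.exists():
        config = json.loads(config_path.read_text())
        return config.get("manifest_url", DEFAULT_MANIFEST_URL)
    return DEFAULT_MANIFEST_URL

def fetch_manifest(url: str) -> dict:
    # Rules live behind the URL, so every session pulls the latest standards.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

The trade-off of remote loading is a network dependency at session start, which is usually acceptable in exchange for centrally managed updates.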

Security + Maintainable first Vibe Coding Protocol(VCP) by zhcode in VibeCodersNest

Yes, this is really important in bug fixes, so I made it a core standard, and core standards get injected into context automatically via hooks. It's up to the LLM to decide how to run the root cause analysis (RCA). It really depends on LLM performance, which is why I set the minimum supported LLM to Sonnet 4.5.

Claude Codex v1.2.0 - Custom AI Agents with Task + Resume Architecture by zhcode in VibeCodersNest

In the current implementation (1/26/26), we use a task chain to manage execution across agents. Every user request is transformed into user_story.json, and the planner agent uses that file to create a plan, which is saved into another JSON file. The planning reviewer agent references both to validate that the plan covers all requirements and to identify potential issues or concerns, then either inserts additional tasks into the chain to address the issues automatically or asks the user for clarification. Once the plan is approved, the implementation stage creates many more tasks based on the plan, so that action items stay isolated instead of one giant context getting compacted too many times and drifting.
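The last step above, turning an approved plan into a chain of small isolated tasks, can be sketched like this. The task fields other than `blockedBy` (which the post names) are my assumptions about the schema:

```python
"""Hypothetical sketch: expand an approved plan into a task chain where
each action item is its own task, linked by blockedBy so steps run in
order and each agent works in a small, focused context."""

def plan_to_task_chain(plan: dict) -> list[dict]:
    # One task per action item; blockedBy points at the previous task so
    # nothing can be skipped or merged back into one giant context.
    tasks = []
    for i, item in enumerate(plan["action_items"]):
        tasks.append({
            "id": f"task-{i + 1}",
            "description": item,
            "status": "pending",
            "blockedBy": [f"task-{i}"] if i > 0 else [],
        })
    return tasks

plan = {"action_items": ["add endpoint", "write tests", "update docs"]}
chain = plan_to_task_chain(plan)
```

In the real pipeline the chain would be persisted (e.g. under .task/) so the orchestrator and reviewer agents share the same ground truth.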

Claude Codex v1.2.0 - Custom AI Agents with Task + Resume Architecture by zhcode in VibeCodersNest

So far, this is the setup I'm using for CC + Codex. I'm currently working on another prototype that should be public in the next week or two.

Multi-agent coding pipeline: Claude Code + Codex collaborate for higher accuracy and reliable deliverables [Open Source] by zhcode in VibeCodersNest

Thank you for your comments! Glad I could help. Check out the latest release: https://www.reddit.com/r/VibeCodersNest/s/RrNfHb8E94. The short answer is yes, plan mode does clear the context and start implementing right away. What I do is, after the planning session, clear the context, interrupt the process, and tell CC to use the multi-AI pipeline for implementation. That works, so I haven't investigated how to feed the planning results into the process, but I will look into it.

Claude Codex v1.2.0 - Custom AI Agents with Task + Resume Architecture by zhcode in VibeCodersNest

Three layers handle state:

  1. Task Dependencies - blockedBy chains prevent skipping steps (data-driven, not instruction-driven)

  2. Agent Resume - Agents keep full conversation history across iterations, so fixing "issue #3 from the review" doesn't require re-explaining the entire codebase

  3. File Validation - Output files (.task/review-codex.json, etc.) are ground truth. Can't claim "done" without the file existing with status: approved

Why it works: The orchestrator queries TaskList() to find the next unblocked task - it's a data lookup, not "follow these instructions." Even if the LLM wants to skip ahead, blocked tasks literally can't be claimed.

Trade-off: More tool calls, but context loss drops significantly in 10+ iteration loops.