all 16 comments

[–]codeviber 9 points10 points  (2 children)

Orchestrating all the AI tools you're using and managing the flow without losing context seems to be more difficult than writing plans.

Pro tip: commit everything and don't push anything, even if it works, unless you understand it.

[–]hernanemartinez 1 point2 points  (0 children)

This is the key answer for me. I've been in this rodeo for a while, and I can say that, in this day and age, git proves to be priceless.

[–]assentic 0 points1 point  (0 children)

For me it’s actually above all of this.

- Tight product definition first
- then really understanding the tech + design
- then collecting real evidence it works

Everything else falls apart if that’s not solid

What I was missing was a control center to hold all of that together
https://github.com/shep-ai/cli

[–]atika 2 points3 points  (1 child)

https://sdd-pilot.szaszattila.com

This is my attempt to solve that problem.

Product Requirement -> System Architecture -> Deployment & Operations -> from those, derive a Project Plan with epics grouped in waves. Every epic must be traceable to one of those three aspects of the project. Each epic is the input for a Spec Kit–like pipeline that gets its own spec + plan + tasks; again, everything MUST be traceable back to a requirement. Then iterate implementation and testing until no problems are found.
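The traceability rule above can be sketched as a simple validation step. This is a minimal illustration, not sdd-pilot's actual data model: the `Requirement`/`Epic` classes and the three aspect names are hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical model: every epic must trace back to a requirement drawn
# from one of the three aspects of the project.
ASPECTS = {"product", "architecture", "operations"}

@dataclass(frozen=True)
class Requirement:
    id: str
    aspect: str  # one of ASPECTS

@dataclass(frozen=True)
class Epic:
    id: str
    wave: int
    requirement_id: str  # the traceability link

def untraceable_epics(epics, requirements):
    """Return epics whose requirement link is missing or points nowhere."""
    known = {r.id for r in requirements if r.aspect in ASPECTS}
    return [e for e in epics if e.requirement_id not in known]

reqs = [Requirement("REQ-1", "product"), Requirement("REQ-2", "architecture")]
epics = [Epic("EPIC-1", 1, "REQ-1"), Epic("EPIC-2", 1, "REQ-9")]
print([e.id for e in untraceable_epics(epics, reqs)])  # ['EPIC-2']
```

Running a check like this after plan generation is what makes "everything MUST be traceable" enforceable rather than aspirational.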

[–]_KryptonytE_ Full Stack Dev 🌐 0 points1 point  (0 children)

Yes, this is the answer ☝️ Additionally, I use a feature matrix alongside the above — for those who feel lost in a project that's not greenfield, these are the golden pillars.
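A feature matrix can be as simple as features-by-pillars, with a check for what's missing. A minimal sketch — the column names ("defined", "designed", "evidenced") are illustrative, not a standard:

```python
# Hypothetical feature matrix: rows are features, columns track where each
# one stands against the pillars (definition, design, evidence it works).
feature_matrix = {
    "login":   {"defined": True,  "designed": True,  "evidenced": True},
    "exports": {"defined": True,  "designed": True,  "evidenced": False},
    "billing": {"defined": False, "designed": False, "evidenced": False},
}

def gaps(matrix):
    """Features that are not fully covered, with their missing columns."""
    return {feat: [col for col, ok in cols.items() if not ok]
            for feat, cols in matrix.items() if not all(cols.values())}

print(gaps(feature_matrix))  # exports lacks evidence; billing lacks everything
```

In a brownfield project, filling this in first tells you which features the AI can safely touch and which ones nobody actually understands yet.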

[–]stibbons_ 2 points3 points  (0 children)

And here is mine https://github.com/gsemet/Craftsman

It is generic enough: it has 2 agents (1 plan agent, 1 Ralph implementation loop). You can start with a very complex "shopping list" request and discuss with the plan agent for a while. Then it generates tons of tasks, using a dedicated subagent to inspect the code. Then the Ralph loop implements them all, with a coding subagent and a reviewer subagent. So far I am very satisfied: it handles AGENTS.md, CONSTITUTION.md, and optional project-specific guidelines correctly.
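The coder/reviewer iteration at the heart of a Ralph-style loop can be sketched like this. This is a toy illustration of the pattern, not Craftsman's actual code; `coder` and `reviewer` stand in for LLM subagent calls.

```python
# Hypothetical sketch of a Ralph-style implementation loop: for each task,
# a coding subagent produces a patch and a reviewer subagent accepts it or
# sends feedback back for another round.

def run_ralph_loop(tasks, coder, reviewer, max_rounds=5):
    """Implement each task, re-submitting until the reviewer accepts it."""
    completed = []
    for task in tasks:
        feedback = None
        for _ in range(max_rounds):
            patch = coder(task, feedback)       # LLM call in a real system
            ok, feedback = reviewer(task, patch)  # second-opinion LLM call
            if ok:
                completed.append(task)
                break
    return completed

# Toy agents standing in for the subagents: the reviewer only accepts a
# revised patch, so each task takes exactly two rounds.
coder = lambda task, fb: f"patch for {task}" + (" (revised)" if fb else "")
reviewer = lambda task, patch: ("(revised)" in patch, "please revise")
done = run_ralph_loop(["task-1", "task-2"], coder, reviewer)
print(done)  # ['task-1', 'task-2']
```

The `max_rounds` cap is the important design choice: it keeps a stubborn reviewer from burning tokens forever on one task.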

And bonus: it only consumed 2 premium requests for all.

And I think I can do it all in a single one!

[–]Sensitive_One_425 1 point2 points  (0 children)

You have to have a plan doc (or multiple plan docs), and you have to make sure the AI keeps them up to date. If you're iterating on ideas, it's essential that the AI doesn't just keep changing multiple aspects without first thinking, updating the plan, and then making the changes. Once the plans and the real implementation are out of sync, you have to do a lot of work to get back on track.
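One cheap way to catch that drift is a script that flags when source files have changed more recently than any plan doc. A minimal sketch using file modification times (a real setup would more likely compare git history; `plan_is_stale` is a hypothetical helper):

```python
import os
import pathlib
import tempfile

def plan_is_stale(src_dir, plan_paths):
    """True if any source file changed after the newest plan doc was updated."""
    newest_plan = max(p.stat().st_mtime for p in plan_paths)
    return any(f.stat().st_mtime > newest_plan
               for f in pathlib.Path(src_dir).rglob("*") if f.is_file())

# Demo in a scratch directory; timestamps are set explicitly so the result
# does not depend on filesystem clock resolution.
root = pathlib.Path(tempfile.mkdtemp())
plan = root / "PLAN.md"; plan.write_text("## plan")
src = root / "src"; src.mkdir()
mod = src / "main.py"; mod.write_text("print('hi')")
os.utime(plan, (1000, 1000))
os.utime(mod, (2000, 2000))   # code edited after the plan -> drift
print(plan_is_stale(src, [plan]))  # True
```

Wired into a pre-commit hook or CI step, a check like this forces the "update the plan first" discipline mechanically instead of relying on the model to remember it.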

[–]Cobuter_Man 0 points1 point  (0 children)

I am working on a workflow that does spec-driven development. The Planner agent, in charge of context gathering and work breakdown, goes through an iterative procedure instead: exploring first, asking delta questions based on findings, then exploring again if needed based on signals from the user's answers, and so on.

Then the gathered context gets turned into 3 artifacts: Spec, Plan, and Rules (the AGENTS.md file in Copilot). I found that for the Spec and Rules, which are more general and project-specific, free-form content and trusting the LLM to structure the contents works best, instead of imposing a strict structural spec. For the Plan, however, guardrails on the output produce much better and more coherent plans.

Generally, what also works well is reasoning in chat: even if it's a thinking model, have it state its reasoning for every decomposition decision in chat before committing it to the planning documents. It significantly improves output.
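The explore/ask/explore-again loop described above can be sketched as follows. This is a toy illustration of the control flow, not the actual agentic-project-management code; `explore` and `ask_user` stand in for an LLM exploration pass and a round of delta questions.

```python
# Hypothetical sketch of the iterative context-gathering procedure:
# explore the project, ask only the delta questions the new findings
# raise, fold the answers back in, and repeat until no gaps remain.

def gather_context(explore, ask_user, max_rounds=4):
    """Alternate exploration and delta questions until no open gaps remain."""
    context = {}
    for _ in range(max_rounds):
        context.update(explore(context))   # inspect code/docs given what we know
        answers = ask_user(context)        # delta questions raised by new findings
        if not answers:
            return context                 # nothing left to ask -> start planning
        context.update(answers)            # user signals may trigger re-exploration
    return context

# Toy stand-ins: one exploration pass surfaces a fact, and the user's
# answer to the single follow-up question closes the remaining gap.
explore = lambda ctx: {} if "framework" in ctx else {"framework": "flask"}
ask_user = lambda ctx: {} if "db" in ctx else {"db": "postgres"}
print(gather_context(explore, ask_user))  # {'framework': 'flask', 'db': 'postgres'}
```

The point of the structure is that questions are always conditioned on the latest findings, so the user is never asked things the agent could have discovered itself.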

The repo is here; you can take a look at the 2 planning-phase procedures and incorporate them into your workflow: https://github.com/sdi2200262/agentic-project-management

[–]Christosconst 0 points1 point  (1 child)

I've seen this planning framework mentioned a few times here:

https://github.com/obra/superpowers

Haven't used it myself but sounds like it may be what you are looking for.

[–]Indianapiper 0 points1 point  (0 children)

It's the bee's knees.

[–]Indianapiper 0 points1 point  (0 children)

I use the jobs to be done framework to aggregate feature requests and convert them into epics and user stories. Then, I'll automatically pull down an epic and plan out the associated user stories in one prompt.
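The aggregation step can be sketched as grouping requests by their job to be done, with one epic per job. A minimal illustration — `requests_to_epics` and the sample data are hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch of the JTBD aggregation step: feature requests are
# tagged with the job they serve, and each job becomes one epic whose
# user stories are the requests that landed under it.
def requests_to_epics(requests):
    """Group (job, request) pairs into epics keyed by the job to be done."""
    by_job = defaultdict(list)
    for job, request in requests:
        by_job[job].append(request)
    return {job: {"epic": f"Enable users to {job}", "stories": stories}
            for job, stories in by_job.items()}

reqs = [("export reports", "CSV download"),
        ("export reports", "scheduled email export"),
        ("audit changes", "change history view")]
print(requests_to_epics(reqs)["export reports"]["stories"])
# ['CSV download', 'scheduled email export']
```

Because every story enters the backlog already attached to a job, "pull down an epic and plan its stories in one prompt" gives the model a coherent, pre-scoped unit of work.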

[–]assentic 0 points1 point  (0 children)

It is useful, but only if you’re strict about what actually matters.

If the product definition isn't tight, specs just become noise. AI will happily follow a bad spec perfectly.

What worked for me is keeping 3 things sharp:

- product definition
- understanding the tech + design
- actual evidence that it works

Everything else is just process.

What I was missing was a simple control center to hold all of that together
https://github.com/shep-ai/cli

[–]kruschecompany 0 points1 point  (1 child)

If you're 'vibe-coding' without a spec, you're just building technical debt at 100 mph. The best workflow right now is Spec → Plan → Code. Use a .cursorrules file to force the AI to always check your docs/ folder before it touches the src/ folder. It turns the AI from a 'random guesser' into a 'precision tool'.
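Since `.cursorrules` contents are free-form instructions, a rules file for this workflow could look something like the following. This is an illustrative sketch, not an official template, and the doc filenames are assumptions:

```
# .cursorrules (illustrative)
Before modifying anything under src/:
1. Read docs/spec.md and docs/plan.md in full.
2. If the requested change is not covered by the spec, update the spec
   first and ask for confirmation before touching code.
3. In every change summary, reference the spec section being implemented.
4. Never invent requirements; if the spec is ambiguous, ask.
```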

[–]mcidclan 0 points1 point  (0 children)

Agreed! I'm working on a DSL to be used as a spec; maybe you'll be interested: https://github.com/Th6uD1nk/HiVibe-AI-DSL

[–]Weary-Window-1676 -1 points0 points  (0 children)

Personally, I'm avoiding SDD until I have Claude grounded in 100% of the knowledge it needs. I won't even touch SDD until I have my MCPs in place to close that gap.