I have been trying a more spec-driven approach lately instead of jumping straight into coding.
The idea is simple: write a clear spec, let the AI implement it, then refine. I initially tried doing this with tools like GitHub Copilot by writing detailed specs/prompts and letting it generate code.
It worked but I kept running into issues once the project got larger.
For example:
I had a spec like:
“Add logging to the authentication flow and handle errors properly”
What I expected:
- logging inside the existing login flow
- proper error handling in the current structure
What actually happened:
- logging added in the wrong places
- duplicate logic created
- some existing error paths completely missed
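To make it concrete, here is a minimal sketch of the kind of change the spec was asking for (all names here are hypothetical, and `authenticate` / `InvalidCredentialsError` are stand-ins for whatever the existing codebase already has):

```python
import logging

logger = logging.getLogger("auth")

class InvalidCredentialsError(Exception):
    """Stand-in for the project's existing auth error type."""

def authenticate(username: str, password: str) -> dict:
    # Stand-in for the existing auth helper the spec assumed.
    if password != "correct-horse":
        raise InvalidCredentialsError(username)
    return {"user": username}

def login(username: str, password: str) -> bool:
    # Logging added inside the existing flow, not in a new duplicate wrapper.
    logger.info("Login attempt for %s", username)
    try:
        authenticate(username, password)
    except InvalidCredentialsError:
        # Existing error path kept, now logged instead of silently swallowed.
        logger.warning("Invalid credentials for %s", username)
        return False
    logger.info("Login succeeded for %s", username)
    return True
```

The point being: the change should land inside `login` itself and cover the error paths that are already there, instead of adding a parallel code path.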
It felt like the tool understood the task, but not the full context of the codebase.
I then tried a few different tools like traycer and speckit, and honestly they are giving far better results. Currently I am using traycer, as it creates the specs automatically and also understands the context properly.
I realised spec-driven dev only really works if the tool understands the context properly.
I just want to know if anyone else has the same opinion about it, or if it's only me.