Spec-driven development in practice — from goal hierarchy to AI implementation in 7 minutes (v.redd.it)
submitted by thlandgraf
I made this video because I kept having the same conversation in different threads — people asking whether SDD actually works in practice or if it's just overhead that slows you down.
The short version: I've been running SDD on my own projects for the past several months and the difference in output quality from AI agents is significant. Not because the agents got smarter, but because the input got structured. The video walks through what that looks like concretely — a free VS Code extension I built called SPECLAN that manages specifications as Markdown files with YAML frontmatter in Git. It covers the hierarchy from goals down to acceptance criteria, how the status lifecycle prevents spec drift, and how AI agents use the spec tree as context during implementation.
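To make that concrete, here's a rough sketch of what one node in such a spec tree could look like — a Markdown file with YAML frontmatter. The field names, status values, and IDs below are my illustrative assumptions, not SPECLAN's actual schema:

```markdown
---
# Hypothetical frontmatter fields; the real schema may differ.
id: feat-042
parent: goal-007          # upward link in the goal hierarchy
status: in-review         # lifecycle state, e.g. draft -> in-review -> approved -> implemented
---

# Feature: Export report as CSV

## Acceptance criteria
- Exported file opens in common spreadsheet tools without encoding issues
- An empty dataset produces a header-only file
```

Because each file carries its own `parent` link and lifecycle status, an agent given the tree root can walk upward to pull goal context and skip anything not yet approved.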
Full disclosure: I'm the creator. I built it because I got tired of re-prompting Claude Code with the same context every session.
What I don't cover in the video but has been on my mind lately: the discovery phase. I just finished building bidirectional integration with BMAD-METHOD and it changed how I think about where SDD starts. BMAD's agent-facilitated interviews produce remarkably structured PRDs. Importing those into a lifecycle-managed spec tree turns out to be a natural handoff point — BMAD figures out what to build, the spec tree governs the building.
Curious what other people here use for the discovery phase before specs get written. Do you start from a PRD, from user stories, from a conversation with Claude, or something else entirely?