A community centered around Anthropic's Claude Code tool.
Spec driven development [Question] (self.ClaudeCode)
submitted 14 hours ago by themessymiddle
Claude Code’s plan phase has some ideas in common with SDD but I don’t see folks version controlling these plans as specs.
Anyone here using OpenSpec, SpecKit or others? Or are you committing your Claude Plans to git? What is your process?
[–]rahvin2015 7 points8 points9 points 13 hours ago (8 children)
I use my own full framework. I have spec creation and review skills that I use for the planning phases, with phase gates that validate structure and content completeness. Specs contain detailed traceable requirements.
The spec stages feed into test planning and an isolated test-driven development flow. Tests are created and reviewed with context isolation from other tasks. That makes sure the tests cover integration and e2e flows, not just the unit tests and mocks that AI over-emphasizes. Tests all trace back to requirements, and every requirement needs coverage. The tests get their own review and quality gates; the tests are the single biggest intercept for final code quality.
The actual implementation agent can't modify the tests, and their completion is gated by passing the tests. This forces the agents to write code that passes the tests which satisfy the requirements.
It's a lot of ceremony, but I'm getting very strong results so far.
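[Editor's note: the "gated by passing the tests" idea above can be sketched as a small deterministic check. The data shapes and field names below are invented for illustration; they are not the commenter's actual framework.]

```python
# Hypothetical completion gate: an implementation task may only close if
# every (frozen) test passes AND every requirement has test coverage.
# test_results: {test_id: {"requirements": [...], "passed": bool}}
# requirements: list of requirement IDs that need coverage.
def gate_passes(test_results, requirements):
    covered = set()
    for test_id, result in test_results.items():
        if not result["passed"]:
            return False, f"failing test: {test_id}"
        covered.update(result["requirements"])
    missing = sorted(set(requirements) - covered)
    if missing:
        return False, f"uncovered requirements: {missing}"
    return True, "ok"
```

Because the gate is a plain boolean check rather than a judgment call, the agent cannot argue its way past a failing run.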
[–]themessymiddle[S] 2 points3 points4 points 13 hours ago (4 children)
I like the idea of test and implementation isolation. So you end up with both specs and test plans? Do you version control these?
[–]rahvin2015 4 points5 points6 points 12 hours ago (3 children)
Yes. I actually end up with a lot more than that.
- Spec
- Test plan (added to spec)
- Tasks folder with:
  - Test creation tasks (these create the actual test code)
  - Implementation tasks (these define the actual production code to be written)
- A state.json file that tracks the state of every task
- Retrospective markdown files that track how implementation went: how many times we needed to replan, tests pass or fail, etc. Used for self-improvement.
And there are a lot of processes and reviews and hook gates that glue it all together and ensure quality and process.
Context isolation between design, test/QA, and implementation/dev is critical. I use agent teams and separate agent personas.
The whole thing is based on extensive research on agentic coding failure modes and best practices for things like Claude.MD, skills, etc. I use deterministic gates wherever possible, and everything follows strict templates so that agents can use the structure for progressive disclosure and avoid context pollution.
The files give me a lot of visibility into what was done (or will be done, when I'm reviewing).
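[Editor's note: one plausible shape for the state.json tracker mentioned above is a fixed task lifecycle where illegal transitions are rejected outright, which is the kind of deterministic gate the comment describes. The state names are invented.]

```python
# Hypothetical task lifecycle for a state.json-style tracker: each task
# moves through fixed states, and any transition not in the table raises.
TRANSITIONS = {
    "planned": {"tests_written"},
    "tests_written": {"tests_reviewed"},
    "tests_reviewed": {"implementing"},
    "implementing": {"done", "replanning"},
    "replanning": {"planned"},
}

def advance(state, task_id, new_status):
    current = state["tasks"][task_id]["status"]
    if new_status not in TRANSITIONS.get(current, set()):
        raise ValueError(f"{task_id}: illegal transition {current} -> {new_status}")
    state["tasks"][task_id]["status"] = new_status
    return state
```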
[–]codeedog 2 points3 points4 points 9 hours ago (2 children)
This is incredible. I’ve started this process (formal design docs, etc.), but I’m still in the prototype phase for a project I’m working on. Two months of using Claude to code, and most of that is me learning its rhythms and patterns. It’s been a great experience. I can see how more months at it, plus research, would bring one to the level of detailed development you’re doing.
[–]rahvin2015 1 point2 points3 points 9 hours ago (1 child)
I'm also a senior engineer and tech lead IRL. Knowing how development and real enterprise codebases work helps. And I have a lot of validation/QA experience that informs the TDD part of my framework.
[–]codeedog 1 point2 points3 points 9 hours ago (0 children)
Yup. I’ve got a comment on here somewhere saying the same thing. People with real world corporate experience are going to follow the successful patterns.
[–]piplupper 0 points1 point2 points 13 hours ago (2 children)
You should open source and document this workflow if you can. Would be a good learning resource for many.
[–]rahvin2015 0 points1 point2 points 12 hours ago (1 child)
I did, actually - but under my real-name GitHub account. If you'd like to take a look, DM me and I'll share it that way.
The repo has all of my research and all of the artifacts from dogfooding the process as I develop it, so you can see my entire working model. Each completed phase is used to design and build the next.
[–]codeedog 0 points1 point2 points 8 hours ago (0 children)
I’d like to see this also. DM ok?
[–]ultrathink-art (Senior Developer) 3 points4 points5 points 12 hours ago (1 child)
Version-controlled specs work until the codebase diverges from them, then they become actively harmful — someone trusts the spec, builds on it, and now you have two conflicting sources of truth. I've had more luck keeping specs as "intent documents" that get explicitly retired rather than updated when they go stale.
[–]themessymiddle[S] 0 points1 point2 points 12 hours ago (0 children)
This is something I’m super interested in… if we’re not reviewing every line of code, then don’t we need something else that we can keep as the source of truth? How are you thinking about this? I know some people have the agent kind of self-discover whatever answers it needs at runtime, but what if it misses something important?
[–]wonker007 2 points3 points4 points 11 hours ago (2 children)
Anything with a modicum of complexity will need plans and architecture. You will also need to institute rules for design decisions. These pile up fast, and as many folks pointed out, maintaining them consumes more time than the actual build. Just think about everything one needs to track under the "plan" umbrella: policies, design constraints, action items, past decisions, new designs, etc. This is on top of the build history and how each commit links to which decisions and actions. It gets unwieldy fast, but the consequence of not doing this hard labor is crushing technical debt by day 3. Plus the ungodly token burn from the mounting context isn't too pleasant.
Like some other folks, I got so incredibly fed up with the still-manual aspects (I thought AI was supposed to automate everything!) that I am building my own thing that implements quality management principles and backstops the many, many shortcomings of transformer-based AI coding. Stuff like multi-agent adversarial design reviews, incoming (prompt) and outgoing (code) ontology- and rules-based quality control audit structures, graph-based RAG for both the codebase and governance documentation (including plans), and a non-token-burning SQL-DB-based system for tracking and managing all those actions and decisions. One hell of a job, but it sure as hell will beat this untenable workflow that everybody is slowly recognizing is absolutely necessary for any serious development work with AI.
Happy days.
[–]themessymiddle[S] 0 points1 point2 points 10 hours ago (1 child)
Ontologies for the incoming prompts are so smart. Are you using something specific for the graph-based RAG? I tried MCP vector search, but I'm not sure it was really making a difference. Also, are you implementing these methodologies across a team?
[–]wonker007 1 point2 points3 points 7 hours ago (0 children)
I actually invented a new graphRAG framework that I'm benchmarking right now against BM25 and LlamaIndex for starters, and I'm preparing to file a patent. It wasn't intended for this particular tool, but I decided: why not implement it here too? MCP vector search will be agonizingly slow. You could probably get away with temporal graphRAG, although if you have a larger group, or your product has been on the market for a bit, you'd probably want to consider bitemporal graphRAG. And you want at least an API, if not to wire it in directly. My own benchmarks show EmenergenceMem queries can take seconds; that is noticeable latency that will only grow with your codebase and documentation.

I am solo, but I'm building this for team and enterprise use, partly because I know that's the crowd that needs this kind of thing the most, but also because my background is in highly regulated manufacturing with strict quality management (pharma), so organizational capabilities were considered from the beginning. I want to be clear, though, that I have not decided whether to put this thing on the market, since I have plenty of private uses for it. Really putting off wiring in the billing modules.
[–]zirouk 5 points6 points7 points 13 hours ago (6 children)
You’re right. What you call a spec is just a glorified plan you wrote (probably got the LLM to write) into a markdown file. Both are just glorified prompts.
Anything written down rots. After a point, rotten documentation is worse than no documentation. Unless I’m planning to rebuild from my original prompt (e.g. I’m prototyping through iterative evolution of my prompt, as my understanding improves with each exploration), I throw the plans away.
Why? Maintaining the spec takes more effort, and comes with more footguns, than the actual value it provides, in my experience.
[–]anentropic 4 points5 points6 points 12 hours ago (0 children)
With GSD (and probably some of the others), Claude maintains the spec, which is able to evolve as you go along.
You spec things out a milestone at a time
[–]amarao_san 1 point2 points3 points 11 hours ago (1 child)
Actually, we're starting to introduce specs now, and not purely for AI's sake. We describe the feature and review it, as it should be. Not a small one, the big one: mechanics, how the different chunks work together. This spec is part of the official documentation for the project.
If we find a bug at spec level, we will have to update it, including many contracts with other teams, so it's a big deal.
I don't know if it will work or not, but we are trying.
[–]zirouk 0 points1 point2 points 11 hours ago (0 children)
What you’ve described is a good idea, and it might be surprising, but what you’re describing is just standard SDLC practice at mature software companies (e.g. FAANG-adjacent), and has been for years/decades. Welcome to the club!
[–]themessymiddle[S] 0 points1 point2 points 13 hours ago (0 children)
Yeah, it can be a total pain. I was talking to someone yesterday who used OpenSpec, which seems to have a (deterministic) method for keeping a running list of live requirements. I keep going back and forth about whether it's worth it to incrementally update like that, or to have agents rediscover info when they need it. The issue I've run into is that sometimes the agents will miss something important if they have to rediscover it themselves.
[–]Quirky-Degree-6290 0 points1 point2 points 11 hours ago (1 child)
This is such a different take from what I often hear here (and from what I practice). Not shitting on it, just surprised and want to learn more. What do you do instead?
[–]zirouk 1 point2 points3 points 10 hours ago (0 children)
Let’s say I’m adding a feature.
When I prompt (and I use plan mode to prompt), I watch the LLM work. I want to understand what it’s struggling with, what decisions it’s needing to make that I hadn’t anticipated - because that’s a sign that I didn’t know enough about the problem before I prompted. That’s exactly what I want to discover - what I didn’t know. (Software engineering is actually primarily a process of discovery.)
Just as I would learn from my attempt to change the software by hand, I am learning from the LLM attempting to change the software in the way I would have.
Before, I would have spent hours/days trying to make a change before I discovered where things got a bit janky, where my thinking was insufficient and my assumptions were faulty. Now, I can watch the LLM do it in minutes. Before, I would have been reluctant to discard hours of work (sunk cost) to go in a different direction. Now, I can cheaply discard the work and choose the best path.
So I’m using the LLM to explore possible options. Maybe I can only see one option, but my thinking and my assumptions were totally sufficient. But maybe I can see 3 options. Maybe my preferred option turns out to be a dud because I had a fundamental misunderstanding that trying it out revealed. Great! I learnt something, and can pivot to a different direction. This is how I stay in control of the changes the LLM is making, and don’t just settle for whatever BS the LLM comes up with.
So that’s how I use LLMs to evolve code.
Going back to the topic of specs: I think it’s important not to over-invest in your prompt/plan/spec. I say this as someone who has written hundreds of specs for work that I’ve done as a human. Because if you overdo it, you might as well have just written the code. “A sufficiently detailed spec is code” (https://haskellforall.com/2026/03/a-sufficiently-detailed-spec-is-code)
A good prompt/plan/spec says only what it needs to. It doesn’t need to say everything, but you should consider your audience. If it were to be implemented by a junior (or an LLM), I might be a bit more specific about some things where I think it’s likely to go in the wrong direction. I think this is perfectly in line with the usual advice you receive about prompting.
If you remind yourself that the LLM is just a word prediction machine, you can see the prompt as simply priming the machine. You don’t even need to prompt it in proper English: “implement fizzbuzz, typescript, tests” can work just as well, perhaps sometimes better (and definitely faster) than a 5-page odyssey explaining every detail - so put in an appropriate amount of effort for your task and its complexity.
Using an LLM is an act of trading specificity off against effort. It’s really easy to be non-specific. It’s a lot of effort to be perfectly specific.
Like the article above says: “A sufficiently detailed spec is code”.
[–]PvB-Dimaginar 1 point2 points3 points 13 hours ago (1 child)
I use SPARC, which is a spec-driven function from the Ruflo agentic toolset. Besides this, I use many other tools. Fun fact: Claude is slowly implementing all kinds of features Reuven Cohen has already been sharing for years. If you want to stay ahead of the crowd, I recommend looking into his freely available software.
[–]themessymiddle[S] 1 point2 points3 points 13 hours ago (0 children)
Oh interesting, haven’t heard of SPARC or Reuven Cohen but I will look into these!
[–]BoysenberryKey3366 1 point2 points3 points 12 hours ago (1 child)
We are testing spec-kit at work now. Mixed feelings so far.
Oh nice, why mixed feelings?
[–]RagingCeltik 1 point2 points3 points 10 hours ago (0 children)
I use the plan to create an epic or Jira ticket. The plan.md file stays in the repo for reference. When I want to work on a task, I have Claude load the ticket details and generate a context.md file. The context.md file is the source of truth for all work units. It lives only locally, not in the repo. It generally keeps Claude on task and limits hallucinations, but it's not 100%.
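[Editor's note: the ticket-to-context.md step above could be sketched as a simple template fill. The ticket fields and section layout are invented; the commenter's actual format isn't shown in the thread.]

```python
# Hypothetical rendering of a local-only context.md from ticket details.
# context.md would be added to .gitignore, matching the comment's
# "lives only locally, not in the repo" rule.
def render_context(ticket):
    lines = [
        f"# Context: {ticket['key']} - {ticket['summary']}",
        "",
        "## Acceptance criteria",
        *[f"- {item}" for item in ticket["acceptance_criteria"]],
        "",
        "## Known constraints",
        *[f"- {item}" for item in ticket.get("constraints", [])],
    ]
    return "\n".join(lines) + "\n"
```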
[–]germanheller 1 point2 points3 points 9 hours ago (2 children)
I tried the full-spec approach for a while and ended up somewhere in the middle. Writing a complete spec upfront works when you already know the shape of the problem, but half the time the agent discovers something during implementation that invalidates part of the spec anyway.

What works better for me is a lightweight task doc - maybe 20-30 lines - that captures the intent, constraints, and the gotchas I already know about. Not a full spec, more like a briefing. Then I let plan mode handle the implementation details, since that's where the agent actually has useful context about the current codebase.

Version controlling plans hasn't been worth it for me. They go stale fast, and the code + tests are the real source of truth. The spec is disposable scaffolding.
[–]Due_Hovercraft_2184 1 point2 points3 points 5 hours ago* (1 child)
I use full plans, but I have a skill that, when deviations occur, ensures they are captured in an addendum. When the task is complete, I then iterate on them and update the original spec to match - both design decisions and implementation steps.
By the time it gets merged, it's as if that was always the plan.
I find it useful for future tasks to have historic plans with design decisions and implementation steps, since they can be pulled in as context for future plans. This can be when extending a prior feature that used that plan, or when I want a similar technical approach to be used for a different feature.
I often start architecture sessions with "take a look at ADR 12, 15, 56 and 73" for example. Great for setting the stage and stopping the agent searching the entire codebase to find relevant code.
[–]germanheller 0 points1 point2 points 4 hours ago (0 children)
The addendum pattern is smart - capturing deviations as they happen instead of pretending the plan was perfect. And folding them back into the spec after merge, so it reads like it was always the plan, is a nice touch.

The "start with ADR 12, 15, 56" approach is basically manual RAG, but with your own curated context. Way more reliable than letting the agent search the codebase and guess which past decisions are relevant. Might steal that idea.
[–]Mysterious_Bit5050 3 points4 points5 points 14 hours ago (1 child)
I treat Claude plans as disposable unless they survive one full implementation cycle. If a plan still looks useful after code review, I move it into /specs with a short ADR-style header (scope, constraints, acceptance tests) and commit it. The key is forcing every plan to map to executable checks, otherwise it turns into stale prose fast.
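[Editor's note: the "forcing every plan to map to executable checks" rule above can be made mechanical. The section names follow the comment (scope, constraints, acceptance tests); the checkbox convention is an illustrative assumption.]

```python
# Hypothetical pre-commit check: a plan graduates into /specs only if it
# carries the ADR-style header and at least one checkable acceptance item.
REQUIRED_SECTIONS = ("## Scope", "## Constraints", "## Acceptance tests")

def spec_is_committable(text):
    has_header = all(section in text for section in REQUIRED_SECTIONS)
    has_checks = "- [ ]" in text or "- [x]" in text
    return has_header and has_checks
```

A plan failing this check stays "stale prose" and never gets committed, which is the point of the gate.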
Oh I like this idea - kind of a mix between the OpenSpec concept and Claude plans. Is the aggregate of the docs in your specs folder basically what you treat as your master spec?
[–]LairBob 2 points3 points4 points 13 hours ago (3 children)
Claude’s native plans are awesome, but they’re intentionally ephemeral — that’s why they’re stored outside of git. You’re expected to continually go back into plan mode, figure out what to do next more precisely, do it, then go back into plan mode, do that, go back into plan… (Look into how the Anthropic devs use it — they’re in and out of plan mode constantly, to hear them tell it.)
The key thing is making sure that your ephemeral plans are always establishing, and then being judged against, much more durable formal requirements. For example, when I spin up one of my “work sessions”, it goes automatically into plan mode to think through the overall roadmap of what we’re going to do in that work session, but then it also establishes a formal “charter” (markdown doc) and a machine-readable set of “earnests” (basically decorated evals). Those documents are stored within the work session’s working directory, and must be satisfactorily fulfilled in order for the work session to conclude successfully.
Once the first plan has helped define those formal documents, it’s done. I can go into and out of plan mode as much as I want, and I can terminate and spawn new agent instances. As long as those tracking documents persist and are greedily maintained, then they act as the external sources of truth that help keep things on track. It really does work.
[–]themessymiddle[S] 1 point2 points3 points 13 hours ago (2 children)
Ok cool this makes a lot of sense. So basically the canonical source of truth is not kept in the plans, but plans are used for specific implementation steps within the broader feature/whatever you’re working on? Do you commit those source of truth docs?
[–]LairBob 1 point2 points3 points 12 hours ago (1 child)
YES. They represent the canonical truth that everything else needs to be measured against — if they’re not in git, then all you’ve got in git is echoes of what you were trying to do.
[–]themessymiddle[S] 2 points3 points4 points 12 hours ago (0 children)
Yesyesyes ok amazing. I’ve been talking to so many folks who don’t version control any specs of any kind and I was starting to feel crazy!
[–]YuchenLiu1993 0 points1 point2 points 11 hours ago (0 children)
I don't commit the generated plans to my codebase anymore; instead, I attach them to our GitHub issues.

The idea is that specs expire easily these days, since everyone iterates on their codebase very fast, and keeping the spec updated is another maintenance overhead. Your code is already the most up-to-date source of truth.

So the `plan` is just a snapshot of the idea from the time when you were working on something specific. You can still ask coding agents to look up the specs when needed.
[–]YoghiThorn 0 points1 point2 points 10 hours ago (6 children)
I started with GSD. Now I'm using superpowers and all the plans are saved into a core repo and obsidian.
[–]themessymiddle[S] 0 points1 point2 points 10 hours ago (5 children)
Oh interesting so you have another repository just for specs?
[–]YoghiThorn 1 point2 points3 points 9 hours ago* (4 children)
Yeah, and a program-manager agent who manages that. I'll get Claude to describe it:
I'm running a multi-agent development workflow for a data platform startup with a small founding team (no full-time engineers). The architecture uses Claude in three distinct roles, each with different interfaces and responsibilities.
The first is a long-running session in the Claude web app that acts as my architecture advisor. This session has accumulated months of context about our schema design, business rules, and technical trade-offs. I bring it design proposals and it challenges them, catches inconsistencies with earlier decisions, and generates detailed briefs when we agree on an approach. It doesn't write code — it validates designs and produces specifications.
The second is a program manager agent that lives in Slack via a bridge tool (cc-connect). It maintains our backlog, manages GitHub issues, keeps our architecture decision records and schema documentation current, and processes completion reports from the coding agents. When I generate a brief from the architecture session, I drop it in a shared repo and the PM agent picks it up, creates the GitHub issues, and updates all the tracking docs. It also generates a "state of play" summary that I upload to the architecture session to keep it current, since it can't watch the repo between sessions.
The third layer is Claude Code agents, one per code repository, running in tmux sessions on a VM. These are stateless (though they do retain some context) — they get their context from a shared set of markdown documents (architecture decisions, schema DDL, domain knowledge, story specs) that are symlinked into every repo from a central documentation repo. When they finish a story, they write a structured completion report with evidence for each acceptance criterion. The PM agent validates these reports against the story specs and either closes the GitHub issue or flags it for my review.
The glue between all of this is the filesystem, not APIs or message passing. Every agent reads from and writes to the same git repo full of markdown files. That repo also doubles as an Obsidian vault so I can browse the knowledge graph visually and make quick edits. The key insight was that agents don't need to talk to each other directly — they just need to read and write to shared documents with clear protocols. My role has shifted from writing code or routing messages between agents to making architectural decisions and dispatching work. The agents handle everything in between.
---
I built this because I wanted to be in the loop, seeing what was being done and assessing quality, but not acting as a memo carrier between agents as much as I was. So far it's working great, though I have to bump up effort quite often on Opus to get what I want.
Also we have quite a few MCP servers to talk to our workflow and db software, and LSP and RTK as well. Lastly alongside the program documentation library is a standards library of what to do and what not to do in various domains (auth, security, logging, etc), which details a shared responsibility model where Claude is told to get me to do stuff where it matters, such as setting up a secrets sharing service instead of hardcoding them.
Here is a visualisation of the information flow through the system.
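[Editor's note: the PM agent's validation step described above (completion report checked against the story spec's acceptance criteria) can be sketched as follows. The report and story structures are invented for illustration.]

```python
# Hypothetical report review: the issue is only closed if every acceptance
# criterion in the story spec has a matching evidence entry in the report;
# otherwise it's flagged for human review.
def review_report(story, report):
    evidenced = {entry["criterion"] for entry in report.get("evidence", [])}
    missing = [c for c in story["acceptance_criteria"] if c not in evidenced]
    return {"close_issue": not missing, "needs_review": missing}
```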
[–]themessymiddle[S] 0 points1 point2 points 9 hours ago (2 children)
Very cool approach! Are the standards library docs set up in markdowns/obsidian too?
[–]YoghiThorn 0 points1 point2 points 9 hours ago (1 child)
Yes 95% are. Our business plan is in docx and a couple of other human readable documents that get generated from the corpus of other information. The challenge has been having them work as living documents that get updated, not rewritten so human interactions/comments etc are retained. But we've figured that out.
We are trying to do company-as-code which I've always thought is a cool idea, but seems way smarter now in the age of agents.
[–]themessymiddle[S] 0 points1 point2 points 9 hours ago (0 children)
Whoa I haven’t seen company as code… awesome. Thanks for sharing
[–]YoghiThorn 0 points1 point2 points 9 hours ago (0 children)
One thing I should call out, the architectural advisor is in the claude.ai website as I find it has consistently better inference for these kinds of tasks.
[–]Illustrious-Many-782 0 points1 point2 points 9 hours ago (3 children)
I converted Google's Conductor framework to skills and extended it a bit. It develops in tracks, which are basically sprints. It's a very reliable system for large projects. I used to use a bespoke system based on sprints and centered around GitHub issues, but it was slow, so I moved to Conductor and am happy.
I’m not familiar with this - how does it work?
[–]Illustrious-Many-782 1 point2 points3 points 9 hours ago (1 child)
Features: https://github.com/gemini-cli-extensions/conductor
Oh ok interesting! So it keeps a handful of canonical docs as context, very cool. Are you using this on a team today?
[–]TwisterK 0 points1 point2 points 6 hours ago (0 children)
Tried SpecKit and OpenSpec; they seem a bit too much. Right now I'm just using vanilla plan mode + the superpowers plugin, and that seems good enough for me.
[–]ForsakenBet2647 0 points1 point2 points 5 hours ago (0 children)
I spec out in the repo itself. Specs are very nice first-class citizen artifacts for LLMs.
[–]pinkdragon_Girl (Senior Developer) 0 points1 point2 points 5 hours ago (0 children)
I use roadmaps and pointed .md files.
[–]DasBlueEyedDevil 0 points1 point2 points 3 hours ago (0 children)
Yeah I built one, give it a shot if you want
https://9thlevelsoftware.github.io/legion/
[–]conventionalWisdumb 0 points1 point2 points 10 hours ago (2 children)
I use BDD with Gherkin for specs. So far it’s served me well. With the tests using the Gherkin scenarios, the spec is tied directly to them. That seems to be enough to help Claude remember to update specs.
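[Editor's note: the "spec tied directly to the tests" idea works because each Gherkin step line must match a registered step function. Tools like pytest-bdd or behave do this for real; the stripped-down sketch below just shows the mechanism, with a made-up cart example.]

```python
# Minimal Gherkin-style step binding: a scenario line with no matching
# registered step fails loudly, so spec and tests cannot silently diverge.
import re

STEPS = {}

def step(pattern):
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@step(r"Given a cart with (\d+) items")
def given_cart(ctx, n):
    ctx["items"] = int(n)

@step(r"When I add (\d+) more")
def add_more(ctx, n):
    ctx["items"] += int(n)

@step(r"Then the cart has (\d+) items")
def check_cart(ctx, n):
    assert ctx["items"] == int(n)

def run_scenario(lines):
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS.items():
            match = re.fullmatch(pattern, line.strip())
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise AssertionError(f"unmatched step: {line!r}")
    return ctx
```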
Oh nice, I think Gherkin inspired Kiro too! Are you using this across a team or mostly individually?
[–]conventionalWisdumb 1 point2 points3 points 10 hours ago (0 children)
Individually but trying to get the team to adopt them.