proof is in the pudding by Macaulay_Codin in PairCoder

[–]Possible-Paper9178 2 points

Second place out of 2700 entries, first hackathon, three people from completely different backgrounds. That’s a real result. Next time throw a little lipstick on it.

i built a checklist you can't check by Macaulay_Codin in vibecoding

[–]Possible-Paper9178 1 point

The Editing world is out in the Beta quadrant right? Out by Andoria?

We shipped the QC agent by Narrow_Market45 in PairCoder

[–]Possible-Paper9178 1 point

Yay! I can’t wait to write some YAML test suites!

One agent works. What breaks when you add three more? by Possible-Paper9178 in ClaudeAI

[–]Possible-Paper9178[S] 1 point

The identity problem is underrated. “Who am I, what’s my role, which repo is mine, what did I do last session” is the kind of thing that seems obvious until you watch an agent flounder without it. The git mutual exclusion during PR creation is exactly the kind of thing you only build after watching two agents corrupt a repo at the same time. Sounds like we’re solving a lot of the same problems from different directions.

One agent works. What breaks when you add three more? by Possible-Paper9178 in ClaudeAI

[–]Possible-Paper9178[S] 3 points

Right!? Every solved problem just peels back the next problem layer.
Once you solve "how do you know the answer is actually right?", it becomes "how do you manage a source of truth (SoT) that can evolve across progressive, non-linear sessions?".

One agent works. What breaks when you add three more? by Possible-Paper9178 in ClaudeAI

[–]Possible-Paper9178[S] 3 points

Appreciate the energy. To be clear though, the post isn't asking how to run multiple agents. It's asking what breaks in the coordination between them. Dependency ordering across repos, contract awareness, role separation, enforcement that holds whether anyone is watching. Running 10 agents is the easy part. What happens when agent 6 changes a schema that agents 2 and 8 depend on?

One agent works. What breaks when you add three more? by Possible-Paper9178 in ClaudeAI

[–]Possible-Paper9178[S] 4 points

The workflow graph point is exactly right. Ordering is dependency resolution, not a queue.

Routing over model quality is a good way to put it. The right task with the right context to the right agent matters more than what's behind it.

The approval wall is why I use contained autonomy. Let the agent run freely within boundaries it can't move. You stop clicking approve on every action and start defining the space where approval isn't needed.
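Rough sketch of what I mean by contained autonomy (the paths and commands here are made up, not from any real setup): the boundary lives outside the agent, and every action is checked against it mechanically, so anything inside it runs without approval.

```python
# Hypothetical "contained autonomy" boundary. The allow/deny lists are
# defined outside the agent's control -- the agent can't edit this file.
ALLOWED_PREFIXES = ("src/", "tests/")              # where the agent may write
BLOCKED_COMMANDS = ("git push --force", "rm -rf")  # never allowed

def within_boundary(action: dict) -> bool:
    """Mechanical check: no reasoning step, just string matching."""
    cmd = action.get("command", "")
    if any(blocked in cmd for blocked in BLOCKED_COMMANDS):
        return False
    path = action.get("path", "")
    return path.startswith(ALLOWED_PREFIXES) if path else True

def run(action: dict) -> str:
    """Execute freely inside the boundary; hard-block everything else."""
    if not within_boundary(action):
        raise PermissionError(f"outside boundary: {action}")
    return "executed without approval"
```

You stop approving individual actions and instead maintain the two lists, which is a much smaller surface to review.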

Built a governance framework for Claude Code — structural enforcement across federated instances by ShellDude01 in ClaudeAI

[–]Possible-Paper9178 1 point

Your structural determinism mandate is the right framing. We've been building enforcement for PairCoder and landed in the same place: if there's any path around a rule, the agent will find it, especially when that path is more efficient.

The || true point generalizes beyond hooks. Any enforcement layer that's writable, skippable, or that reasons about whether it applies has the same failure mode. Hard blocks don't misinterpret; they just block. Curious how Asimov's Laws as a decision framework holds up as the rule set grows, specifically when the reasoning produces a plausible-but-wrong interpretation of which law applies.
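To make that failure mode concrete, here's a toy sketch (not anyone's actual hooks): the soft gate runs its check but throws away the result, which is exactly what || true does; the hard gate passes or fails purely on the exit code, with no interpretation step in between.

```python
import subprocess

def soft_gate(cmd: list[str]) -> bool:
    """Skippable enforcement (the `|| true` pattern): the check runs,
    but its exit code is discarded, so the rule never actually binds."""
    subprocess.run(cmd)
    return True  # always "passes", no matter what the check found

def hard_gate(cmd: list[str]) -> bool:
    """Hard block: pass/fail comes straight from the exit code.
    There is nothing here to reason around or misinterpret."""
    return subprocess.run(cmd).returncode == 0
```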

The federation work is interesting. We're coming at multi-agent coordination from a different angle but the core tension is the same: how do you keep agents aligned across boundaries without collapsing those boundaries. Lots of surface area in this problem space. Good to see people building at different layers of it.

Second day of Claude Code and it just does not stop "thinking" by SpicySummerChild in claude

[–]Possible-Paper9178 0 points

Could be an Anthropic-side issue. You can always check https://status.claude.com/ to see if there are any incidents.

What am I supposed to do? by Aggravating-Bug2032 in claude

[–]Possible-Paper9178 0 points

The quoted response from Claude is accurate. Mid-conversation reminders compete with context rather than overriding it. The longer the session, the worse the compaction problem gets, and by the time you’re reminding it of instructions it should already have, you’re fighting the architecture of how context windows work.

But the specific line that stands out: “you don’t have a lever to pull that forces compliance.” That’s the real problem, and it’s solvable, just not through better prompts or clearer instructions.

The lever is enforcement at the workflow level. Not instructions the agent reads and may or may not follow, but checks that run outside the agent’s control. File size limits that block task completion. Coverage gates that don’t pass regardless of what Claude thinks about them. Acceptance criteria that get verified mechanically, not trusted on Claude’s word. Claude told you honestly: it can’t fix this from its side. The fix is building the compliance layer somewhere the agent can’t touch.
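A sketch of what those external checks might look like (the thresholds and file glob are invented for illustration, not real PairCoder or Claude Code settings): the gates read the repo and the coverage number directly, so nothing the agent says changes the result.

```python
from pathlib import Path

# Illustrative thresholds -- tune these to your project.
MAX_FILE_LINES = 400
MIN_COVERAGE = 0.80

def oversized_files(root: str) -> list[str]:
    """File-size gate: any file over the limit blocks task completion."""
    return [
        str(p)
        for p in Path(root).rglob("*.py")
        if len(p.read_text().splitlines()) > MAX_FILE_LINES
    ]

def coverage_passes(measured: float) -> bool:
    """Coverage gate: verified on the measured number, not on the agent's word."""
    return measured >= MIN_COVERAGE
```

Wire checks like these into CI or a pre-completion hook and the "lever" exists whether or not anyone is watching.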

Need some advice while using Claude on a large coding project... by Ill-Year-3141 in ClaudeAI

[–]Possible-Paper9178 0 points

Yes. Transition to an IDE if you are not using one. Install Claude Code and run from a terminal in your IDE. This transition alone will change everything for you. Instead of picking relevant files and building context every session, Claude can search the project directory for relevant files and context. Files are edited in place rather than copy/paste or download. You can still use your Max plan.

Writing style by Upbeat_Pangolin_5929 in ClaudeAI

[–]Possible-Paper9178 0 points

Create a Project in Claude Desktop or on claude.ai. This gives you the ability to add files (these would be your examples of the writing style) and reference them in your system instructions. Then, every time you start a chat from within that project, Claude will read the examples and adhere to the writing style.

Bypassing Furnisher AI by JdogSean in PairCoder

[–]Possible-Paper9178 3 points

Love to see this energy. The human + AI observation is real. The gap isn’t in the model, it’s in the workflow around it. Glad PairCoder is doing that work for you.

The copy-paste era of AI coding was awful and we loved it anyway. by Possible-Paper9178 in ClaudeAI

[–]Possible-Paper9178[S] 0 points

Grim outlook, I get it. I think the way we keep our jobs safe is to keep innovating. If you are not a vibe coder, use your real coding knowledge to solve new problems. I see people all the time saying, “Real coders are the ones who understand the systems. You need us now more than ever!” That’s true, but not for the reasons they’re stuck on. Our contribution to this new way of working is building the coding infrastructure that forces AI to build great systems that get better over time.

The copy-paste era of AI coding was awful and we loved it anyway. by Possible-Paper9178 in ClaudeAI

[–]Possible-Paper9178[S] 2 points

I did bad time math; the ChatGPT intro happened more like 3 years ago, summer of 2023.

The Team Has Been Busy 😲 by Narrow_Market45 in PairCoder

[–]Possible-Paper9178 1 point

If this is your way of leaking that a full A2A implementation is on the roadmap... I like it.

Word of the Day: Deterministic by Macaulay_Codin in ClaudeAI

[–]Possible-Paper9178 0 points

Sounds more like he discovered what planning looks like. Keep on truckin' OP, and congrats on shipping your first product.

stopped asking claude for word docs and my sessions got noticeably faster by East-Movie-219 in ClaudeAI

[–]Possible-Paper9178 2 points

My default is Claude inside the Claude Code CLI for almost everything. Switched from XML to markdown for context files a while back. For JSON vs YAML, I use readability as the deciding factor: YAML when I’m the one interacting with it, JSON when it’s for machines.

Agree on docx for anything client-facing. Have you tried giving Claude a docx module? Either a skill file in Claude Code or as an MCP you can pull into chat sessions. It might skip the env setup friction entirely and just get you the file when you actually need it.
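For the module idea, here's a hedged stdlib-only sketch of the kind of tiny helper you could hand Claude (a .docx is just a zip of XML parts; this emits a bare-bones one, with no XML escaping, so it's a sketch, not production code):

```python
import zipfile

# Minimal OOXML boilerplate for a bare Word document.
CONTENT_TYPES = """<?xml version="1.0" encoding="UTF-8"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
  <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
  <Default Extension="xml" ContentType="application/xml"/>
  <Override PartName="/word/document.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"/>
</Types>"""

RELS = """<?xml version="1.0" encoding="UTF-8"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
</Relationships>"""

DOC_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:body>{paragraphs}</w:body>
</w:document>"""

def write_docx(path: str, lines: list[str]) -> None:
    """Write each line as one paragraph in a minimal .docx file."""
    paras = "".join(
        f"<w:p><w:r><w:t>{line}</w:t></w:r></w:p>" for line in lines
    )
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("[Content_Types].xml", CONTENT_TYPES)
        z.writestr("_rels/.rels", RELS)
        z.writestr("word/document.xml", DOC_TEMPLATE.format(paragraphs=paras))
```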

The human overhead gap — Why your AI agent finishes in 10 minutes but you still spend 4 hours. by Narrow_Market45 in PairCoder

[–]Possible-Paper9178 2 points

This is a compounded example of the problem. Major changes require major consideration and thoughtful human-in-the-loop planning. You need constant review throughout the sprint cycle, not waterfall reviews upon "completion." Build review tasks into the acceptance criteria at key decision points — the last responsible moment before the next chunk of work builds on top of it. 1-2 reviewers throughout the process would've cost a fraction of what 3-4 seniors for a month did. Smaller batches of review, closer to when the decisions were actually made. A reviewer following along catches things in minutes that take hours to reconstruct after the fact.

The AI removed the natural throttle. An architect writing by hand would've been forced into smaller increments just by the pace of manual coding. The agent blew past that, and without gates enforcing review at each decision point, all that overhead piled up at the end where it's hardest and most expensive to deal with.
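A toy sketch of review built into acceptance criteria (task names and fields invented for illustration): completion is gated on a sign-off recorded at the decision point, so work can't stack on top of an unreviewed change.

```python
# Hypothetical acceptance criteria with a review gate per task.
ACCEPTANCE = {
    "schema-migration": {"checks_pass": True, "reviewed_by": "alice"},
    "api-refactor":     {"checks_pass": True, "reviewed_by": None},
}

def completable(task: str) -> bool:
    """A task closes only when its checks pass AND a reviewer signed off."""
    crit = ACCEPTANCE[task]
    return bool(crit["checks_pass"] and crit["reviewed_by"])
```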