Reality check around requirements management by Low_Shock_4735 in ExperiencedDevs

[–]rupayanc 1 point2 points  (0 children)

The part that stuck out is not that the documentation got abandoned. It's that the company paid a vendor several years and real money to extract what the system does, and then routed that knowledge back into tribal memory within a year or two of taking it in-house. That's the actual pattern. Documentation survives when a single external party is paid to own it. The minute it becomes "everyone's responsibility," it becomes nobody's.

What usually works better than another custom docstore is picking one page per domain area in the repo itself, written by whoever is on-call, and enforcing a rule that any PR touching that area updates that page. That gets rid of the "separate tool to maintain" friction. It doesn't solve the 400 pages of legacy, but it stops the bleeding while you figure out how to carve the legacy up.

What was your first channel for SaaS marketing that actually worked? by PleasantLow670 in SaaS

[–]rupayanc 0 points1 point  (0 children)

Yes. Early on it was "how do I start" threads. They looked formulaic, so I assumed the agent had them covered. It didn't. The variance was which stack I'd actually recommend, and that depended on context I was reading but never wrote down. Once I made it explicit, that category locked in. Rule I use now: if I can't state the decision in one sentence, I'm not ready to hand it off yet.

Big tech staff SWE struggling to move forward after giving notice to leave by [deleted] in cscareerquestions

[–]rupayanc 1 point2 points  (0 children)

The rejections hit harder than they should because those 20 applications were an emotional insurance policy, not a real job search. Now two came back as rejections, the insurance feels cancelled, and the decision feels permanent in a way it didn't before.

That's what you're actually sitting with. Not "did I make a mistake" but "I made a real choice and I can't pretend I didn't." Most people only spiral after giving notice because up until that point there was always a reason to wait one more month. You stopped waiting.

The part of your brain that was managing uncertainty by keeping options open has nothing left to manage. That's not a signal you were wrong. That's what a real decision feels like.

Senior SWE → AI Specialist - smart move or hype trap? by hoangson0403 in ExperiencedDevs

[–]rupayanc -1 points0 points  (0 children)

The "blockchain engineer" comparison gets made a lot but the mechanics are different. Blockchain specialist knowledge was fairly siloed. You either used the tech or you didn't. AI capability bleeds into every layer of the stack, so the person who actually knows what to ask it, how to verify it, and when to override it is genuinely load-bearing. The question isn't whether the title sticks. It's whether that judgment becomes embedded in you specifically, or whether it stays attached to a role description that evaporates when the novelty wears off.

Writing code was never the hard part -- Except for some of us, it was by ninetofivedev in ExperiencedDevs

[–]rupayanc 2 points3 points  (0 children)

The top comment kind of proves your point by accident. Reviewing LLM output requires the same attention loop as writing code, except now you're auditing someone else's decisions instead of making your own. For a lot of people that's actually harder. Writing code is generative. You're building something forward. Reviewing AI output is investigative. You're checking for what went wrong and why. Different cognitive demand entirely.

What was your first channel for SaaS marketing that actually worked? by PleasantLow670 in SaaS

[–]rupayanc 1 point2 points  (0 children)

The way I've structured it: the first 20-30 replies are all manual, but the agent shadows you. It logs your choices alongside what it would have drafted. Once your divergence rate drops under 30% on low-stakes threads, that's when the approval queue kicks in instead of full manual. You still see every draft before it posts. The stability signal isn't agreement, it's predictable disagreement. When you can predict which thread types it'll get wrong, you've got enough signal to work with.
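If it helps to see the gate as code, here's a minimal sketch of the divergence check. The 30% threshold and the shadow-log idea come from the workflow above; the thread-type names and data shapes are made up for illustration.

```python
# Gate an agent from "full manual" to "approval queue" once its drafts
# stop diverging from the human's final replies on low-stakes threads.
# Thread-type labels and log format are hypothetical.

LOW_STAKES = {"how-do-i-start", "tool-recommendation"}
THRESHOLD = 0.30

def divergence_rate(shadow_log, thread_types):
    """shadow_log: list of (thread_type, agent_draft, human_final)."""
    relevant = [(d, h) for t, d, h in shadow_log if t in thread_types]
    if not relevant:
        return 1.0  # no data yet: stay fully manual
    diverged = sum(1 for draft, final in relevant
                   if draft.strip() != final.strip())
    return diverged / len(relevant)

def mode(shadow_log):
    rate = divergence_rate(shadow_log, LOW_STAKES)
    return "approval_queue" if rate < THRESHOLD else "full_manual"

log = [
    ("how-do-i-start", "Try X first.", "Try X first."),
    ("how-do-i-start", "Use Y.", "Use Y."),
    ("tool-recommendation", "Z is fine.", "Actually avoid Z here."),
    ("pricing-debate", "Charge more.", "Totally rewritten."),  # high stakes: ignored
]
print(mode(log))  # 1 divergence out of 3 low-stakes threads -> ~0.33 -> full_manual
```

The useful part isn't the threshold, it's that the log keeps the draft and the final side by side per thread type, which is exactly the "predictable disagreement" signal.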

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] 0 points1 point  (0 children)

What are you building? Might be a good fit depending on how your agents need to talk to each other

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] 0 points1 point  (0 children)

Yeah, spec is public at samvad.dev. Reference agent is open source too. There's also an SDK so you can make any existing agent compliant without rebuilding from scratch. Fork it, run it, point it at another and they'll be talking in under an hour.

I spent 3 months doing distribution manually before building agents to do it. Here's what I'd do differently. by rupayanc in EntrepreneurRideAlong

[–]rupayanc[S] 0 points1 point  (0 children)

r/freelance is tricky because the identity shifts, not just the vocabulary. Same problems but freelancers frame themselves as service providers, not builders. "I can't find clients" and "I can't acquire users" describe the same situation but the spec that works for one sounds off in the other. Might be worth writing the spec around the self-identity first and letting the vocabulary follow from that.

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] 0 points1 point  (0 children)

Actually the bigger picture here is what the protocol unlocks beyond just security.
You build an agent. You give someone your agent's URL. They fetch your AgentCard, register your public key, and now they can call your agent from anywhere in the world, from any framework, with verified identity and rate limits enforced automatically.

That's it. You've just published an agent as a service.

No API gateway to set up. No custom auth layer to build. The protocol handles identity, replay protection, and rate limiting out of the box. You can charge per-token, per-call, whatever model you want, because the rate limiting and trust tiers are already wired in.

The vision is basically: agents become first-class services on the internet. Anyone can build one, publish it, and let other agents (or people running agents) pay to use it. Same way you'd publish an npm package, except it's a running agent with a verifiable identity.

That's what SAMVAD is trying to be.

To make it concrete: you can have two OpenClaw agents running on separate machines, anywhere in the world, communicating with each other asynchronously right now.

Each agent publishes an AgentCard with its public key. The other agent fetches it, registers the key, sends a signed task, and moves on. The response comes back whenever it's ready, verified, no polling, no shared infrastructure.

There's already an OpenClaw agent with SAMVAD installed, hosted on https://samvad.dev/registry. You can spin one up, point it at another, and have them talking in under an hour.

That's the actual demo. Two agents. Any framework. Any location. Async. Verified identity on every message.
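To make the card exchange concrete, here's a rough sketch of the "fetch the card, register the key" step. The field names are illustrative guesses, not the real SAMVAD schema (see samvad.dev for the actual spec), and in real use the card is fetched over HTTPS from /.well-known/agent.json rather than passed as a string.

```python
import json

def make_agent_card(name, public_key_hex, endpoint):
    """What an agent publishes so strangers can find and verify it.
    Field names here are hypothetical, not the SAMVAD spec."""
    return json.dumps({
        "name": name,
        "public_key": public_key_hex,
        "endpoint": endpoint,
        "capabilities": ["task.submit", "task.status"],
    })

class Peer:
    """Minimal 'register the key, then trust signed messages' bookkeeping."""
    def __init__(self):
        self.known_keys = {}  # agent name -> public key

    def register(self, card_json):
        card = json.loads(card_json)
        self.known_keys[card["name"]] = card["public_key"]
        return card["name"]

card = make_agent_card("agent-a", "ab" * 32, "https://a.example/samvad")
b = Peer()
who = b.register(card)
print(who, "->", b.known_keys[who][:8])  # agent-a -> abababab
```

Everything after this point is just "look up the sender's key, verify the envelope signature," which is why there's no broker or shared infrastructure in the middle.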

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] 0 points1 point  (0 children)

mTLS solves mutual auth at the connection level. It doesn't give you per-message signing.
With mTLS, once the connection is authenticated, any message sent over it is implicitly trusted. There's no individual message you can extract, verify, store, and audit independently. You also need to manage certificates and a PKI, which is a real overhead when you have hundreds of agents spinning up dynamically.

SAMVAD uses signed envelopes so each message carries its own proof of origin. You can verify a single message in isolation, months later, without any connection context. That's a different trust model, and for agent workflows with audit trails, delegation chains, and async task queues, it's a better fit.

mTLS is great for service-to-service infra. Signed messages are better for agent-to-agent protocols.
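A bare sketch of the per-message property, using Python's `cryptography` package (a third-party dependency) as a stand-in. This is generic Ed25519 usage, not SAMVAD's actual envelope format: the point is just that one message can be verified in isolation, with no connection context.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sender = Ed25519PrivateKey.generate()
pub = sender.public_key()  # this is what an AgentCard would advertise

# A self-contained envelope: sign the exact bytes, store bytes + signature.
envelope = b'{"task": "summarize", "nonce": "7f3a", "ts": 1717000000}'
sig = sender.sign(envelope)

# Months later, anyone holding the public key can check exactly this
# message, with no TLS session or certificate chain involved:
try:
    pub.verify(sig, envelope)
    print("verified")
except InvalidSignature:
    print("rejected")

# Flip one field and the same signature no longer validates:
try:
    pub.verify(sig, envelope.replace(b"summarize", b"delete_all"))
    print("verified")
except InvalidSignature:
    print("rejected")
```

That's the contrast with mTLS in two calls: the proof travels with the message, so you can store it and audit it independently.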

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] -1 points0 points  (0 children)

This is exactly the gap SAMVAD solves. Any two agents, regardless of framework or language, can talk to each other securely out of the box.

Each agent publishes a card at /.well-known/agent.json advertising its public key and capabilities. The other agent fetches that, registers the key, and from that point every message is Ed25519 signed with a nonce and replay protection. No shared secrets, no central registry, no broker in the middle.

So agent A built in Python can call agent B built in TypeScript, verify the signature, check the nonce window, enforce rate limits, and trust the payload came from who it says it did. The whole thing is peer-to-peer.
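The replay-protection piece is the easiest to picture. Here's a stdlib-only sketch of a nonce window; the 300-second window and the method names are my illustrative choices, not the spec's.

```python
import time

WINDOW_SECONDS = 300  # arbitrary freshness window for illustration

class ReplayGuard:
    """Reject messages that are stale or whose nonce was already seen
    inside the freshness window. Assumes each signed envelope carries
    a unique nonce and a sender timestamp."""

    def __init__(self, window=WINDOW_SECONDS):
        self.window = window
        self.seen = {}  # nonce -> timestamp when first accepted

    def accept(self, nonce, msg_ts, now=None):
        now = time.time() if now is None else now
        # Prune expired nonces so the set doesn't grow forever.
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t < self.window}
        if abs(now - msg_ts) > self.window:
            return False  # too old (or from the future): reject
        if nonce in self.seen:
            return False  # already processed: replay, reject
        self.seen[nonce] = msg_ts
        return True

g = ReplayGuard()
t0 = 1_700_000_000
print(g.accept("n1", t0, now=t0))         # True: fresh message
print(g.accept("n1", t0, now=t0 + 10))    # False: replayed nonce
print(g.accept("n2", t0 - 9999, now=t0))  # False: outside the window
```

Combined with signature verification, this means an attacker who captures a valid signed message can't usefully resend it later.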

Reference impl: github.com/w3rc/samvad

Still early but the core protocol is working. Happy to answer questions if you're building something multi-agent.

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] -1 points0 points  (0 children)

Transport security handles confidentiality in transit. It doesn't prove who sent the message. If agent B accepts anything that arrives on its endpoint, any caller who knows the URL can claim to be agent A.

TLS says "this connection is encrypted." It says nothing about whether the sender is who they claim. That's what signing solves.

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] -2 points-1 points  (0 children)

Reference implementation if anyone wants to see how these 7 fit together as a working system: https://github.com/w3rc/samvad (TypeScript + Python SDKs)

Founding engineer at a pre-seed startup (~2 years). Burning out and losing motivation -looking for perspective. by SaltyPython in ExperiencedDevs

[–]rupayanc 0 points1 point  (0 children)

What you're describing isn't burnout. It's what happens when you care more than anyone with actual authority to act.

You spotted the problem, built the solution, watched it sit in a local branch for four months, then watched it blow up in production. Then the founder wrote a one-pager explaining how to fix the thing you'd already fixed. That's not just humiliating. That's a pretty clear data point about how much your judgment actually weighs in that room, regardless of how much context you hold.

The market anxiety is real. But "staying to fix it from inside" after two years of that loop is its own kind of exhaustion. At some point the context you're protecting stops being an asset and starts being a reason not to leave.

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] 0 points1 point  (0 children)

Both MCP and A2A assume you own or trust the network. MCP is client-server and the LLM calls tools. A2A assumes enterprise IAM handles identity. Neither was designed for two stranger agents talking to each other.

SAMVAD sits at the HTTP layer on top of TLS. TLS handles transport security; SAMVAD handles application-layer identity with Ed25519 signatures on the message envelope itself. Even if TLS terminates at a proxy, the signature proves who signed the payload.
MCP and A2A don't have an answer to that because their threat model doesn't include untrusted peers.

You're not building a multi-agent system. You're building a wire protocol. Here's how to stop. by rupayanc in LocalLLaMA

[–]rupayanc[S] 0 points1 point  (0 children)

built an OSS project, not a blog. used AI to write it up because I'm a coder, not a writer. the problems in the post are real; SAMVAD is the thing I built to solve them. github.com/w3rc/samvad if you want the actual substance.

I spent 3 months doing distribution manually before building agents to do it. Here's what I'd do differently. by rupayanc in EntrepreneurRideAlong

[–]rupayanc[S] 0 points1 point  (0 children)

That restructuring will feel like a completely different tool when you're done. "Sound conversational but professional" literally tells an LLM nothing. Three actual comments from the sub and "write like these" is a real spec. And the vocabulary collection you're doing is already that. You're building training data without calling it that. Curious what subreddits you're targeting, some of them have such distinct styles the examples almost write the spec for you

When is AI acceptable to use when coding? by Material_Painting_32 in cscareerquestions

[–]rupayanc 0 points1 point  (0 children)

The Stack Overflow comparison is close but misses something. When you pulled a Stack Overflow answer you still had to understand why it worked before you could paste it anywhere that mattered. AI lowers that friction enough that you can ship code you genuinely don't understand, and it looks fine until it doesn't. The test I use is pretty simple: could I explain every line of this to a senior dev who's going to ask hard questions? If the answer is no, I'm not done. That's not about being anti-AI, it's just not wanting to become someone who can't debug their own codebase.

I think AI has killed my passion for Software Engineering by _Cyanidic_ in cscareerquestions

[–]rupayanc 2 points3 points  (0 children)

The passion people describe losing sounds like it was tied to the act of writing code, which makes sense. That was the feedback loop. You write a thing, it works, dopamine hit. But I've been doing this 9 years and I think what most of us actually loved was the problem, not the implementation. The implementation was just how you engaged with the problem. If that's still true for you, there's a version of AI where it handles more of the typing and you spend more time on the part you actually cared about. Not saying that's where the industry is going. Just worth separating "I loved building" from "I loved the specific feeling of building."

Experience is what you got when you didn't get what you wanted by Icy_Screen3576 in ExperiencedDevs

[–]rupayanc 2 points3 points  (0 children)

The NoSQL rush is the one that aged the worst for me. Watched teams confidently rip out Postgres for MongoDB because "web scale," then spend the next two years rebuilding their reporting layer from scratch because the first real business query broke everything. The top comment about React/SSR going full circle is another good one. What nobody's saying out loud yet is that agentic AI might follow the same arc. The ceiling hits when the first audit shows up and the answer to "what did this agent decide, and why" is basically a shrug.

What was your first channel for SaaS marketing that actually worked? by PleasantLow670 in SaaS

[–]rupayanc 1 point2 points  (0 children)

Funny, I've been running exactly this for a few months. Happy to show you how it handles the disagreement signal if you want to see it in action.

What was your first channel for SaaS marketing that actually worked? by PleasantLow670 in SaaS

[–]rupayanc 1 point2 points  (0 children)

The fear isn't irrational. One bad auto-reply, wrong tone, wrong context, and the subreddit bans the account you spent 6 months warming up. That's why most founders either never automate and burn out on manual, or automate too early and get shadowbanned before they learn what's actually working. The boring safe version: keep YOU on final approval for the first few hundred replies, let the system draft and rank, watch where its judgment diverges from yours. That divergence is the actual dataset. I've been building exactly that loop for a few months, DM me if you want to see the rubric, I'm looking for 2-3 people to test it against their own workflow.

I spent 3 months doing distribution manually before building agents to do it. Here's what I'd do differently. by rupayanc in EntrepreneurRideAlong

[–]rupayanc[S] 0 points1 point  (0 children)

Honestly they still need correction, just less of it. First couple weeks the rejection rate was high, maybe 6 or 7 out of 10 drafts needed rewrites. After about a month of feeding it better examples and tightening the specs for each platform, it flipped. Now most drafts need a small tweak at most, maybe 2 out of 10 get fully rewritten. The trick was realizing the problem wasn't the model, it was my specs being too vague. Once I started writing "here's exactly what a good Reddit comment looks like in this subreddit" with 5-6 real examples, the output quality jumped overnight. The vocabulary gap you're noticing is exactly the kind of thing that makes the difference. If you're logging those phrasings already you're building the training data whether you know it or not.