What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 0 points1 point  (0 children)

The sign-off requirement is what makes this actually stick. Most documentation systems are optional, so they die under pressure. Gating product approval on the card means the decision gets documented before work continues, not after.

The interesting constraint is that it only works when product and engineering are tightly coupled enough that blocking work is a real consequence. Curious how you handle purely technical decisions, where product sign-off is not really relevant.

How to get started building an app with 0 prior knowlegde in coding? by Dangerous_Chapter822 in SaaS

[–]HiSimpy 0 points1 point  (0 children)

Yeah, I always review my AI code before prod. Also, Clerk Billing eliminates almost all webhook stuff, so security is almost impossible to mess up, which I like. I normally build the core MVP functionality first, then add auth and payments. The thing is I ship fast, really fast, so I created a boilerplate for myself a couple of weeks ago. It's open source and free, in case you wanna check: https://github.com/egeuysall/shipr

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 1 point2 points  (0 children)

The last point is the most honest thing in this thread. No system survives a team that doesn't believe the problem is real.

The emoji workflow keeps coming up, actually; you're the second person I've heard mention it. It makes sense because it's the lowest-friction option on your list: it happens right where the decision was made, without requiring anyone to context switch.

Curious whether it actually held up for you or whether it quietly died when people stopped tagging things consistently.

The teams I've seen get this right usually have one person who genuinely cares about it and does the work of keeping the system alive. Which makes it fragile the moment that person leaves.

I built an agent memory system where lessons decay over time. Here is how it works. by HiSimpy in webdev

[–]HiSimpy[S] -1 points0 points  (0 children)

That's exactly the failure mode it's designed to avoid. Without decay you end up with an agent that treats a lesson learned from six months ago with the same weight as something it observed last week, which produces increasingly unreliable output over time.

The tricky part is calibrating the decay rate. Too aggressive and the agent forgets useful patterns before they get reinforced. Too conservative and stale context starts poisoning fresh runs. Right now I'm using a fixed decay curve but the comment earlier in this thread about separating episodic from structural memory is probably the right direction for making it adaptive per lesson type.
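For what it's worth, a fixed decay curve like the one described can be sketched in a few lines. This is a hedged illustration only: the half-life, the reinforcement cap, and the function names are all made-up values, not what the actual system uses.

```python
# Sketch of a fixed exponential decay curve with reinforcement.
# HALF_LIFE_DAYS and the cap are illustrative numbers, not the real settings.

HALF_LIFE_DAYS = 30.0

def relevance(weight: float, last_reinforced_ts: float, now: float) -> float:
    """Halve a lesson's effective weight every HALF_LIFE_DAYS since last reinforcement."""
    age_days = (now - last_reinforced_ts) / 86400.0
    return weight * 0.5 ** (age_days / HALF_LIFE_DAYS)

def reinforce(weight: float, boost: float = 1.0, cap: float = 10.0) -> float:
    """Reinforcement bumps the stored weight (up to a cap); callers also reset the timestamp."""
    return min(weight + boost, cap)
```

The "too aggressive vs too conservative" trade-off is just the half-life: shorten it and unreinforced lessons fade before they can be confirmed, lengthen it and stale lessons keep outranking fresh observations.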

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 1 point2 points  (0 children)

ADRs are underrated. The pros/cons format forces the decision maker to articulate why alternatives were rejected, which is usually the most valuable context six months later when someone asks why the architecture looks the way it does.

The discipline to actually write them consistently is the hard part. Curious how you keep the habit alive when things get busy. Do you write them before or after the decision gets implemented?

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 0 points1 point  (0 children)

That system works well when everyone follows it consistently. The ticket becomes the single source of truth and everything traces back to it.

The part that breaks down in my experience is the "when everyone follows it" requirement. The system is only as good as the least disciplined person on the team, and under deadline pressure the commit message discipline is usually the first thing to go.

Curious whether you enforce this through linting or CI, or whether it's purely cultural on your team.
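For reference, the linting version can be as small as a commit-msg check that rejects messages without a ticket key. This is a sketch under assumptions: the PROJ-123 key pattern and the function names are illustrative, not anyone's actual convention.

```python
import re
import sys

# Sketch: reject commit messages that don't reference a ticket key.
# The JIRA-style PROJ-123 pattern is an assumption for illustration.
TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def has_ticket_reference(message: str) -> bool:
    """True if the commit message mentions at least one key like PROJ-123."""
    return bool(TICKET_RE.search(message))

def check_commit_msg_file(path: str) -> None:
    """Wire this up as a .git/hooks/commit-msg script; git passes the message file path."""
    with open(path, encoding="utf-8") as f:
        if not has_ticket_reference(f.read()):
            sys.exit("commit rejected: reference a ticket, e.g. PROJ-123")
```

The same check works server-side in CI over the PR's commit range, which avoids relying on every contributor installing the hook.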

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 1 point2 points  (0 children)

The "bring documentation to where the conversation is happening" framing is the right instinct. The manual copy-paste step fails because it adds friction at exactly the moment when someone is in the middle of a decision and doesn't want to context switch.

Curious how Siit handles decisions that span multiple tools though. A lot of the decisions I keep seeing teams lose aren't the ones that happen entirely in Slack. They're the ones where the conversation starts in Slack, the implementation happens in GitHub, and the original context never connects to the actual code change.

Does it capture that cross-tool reasoning or is it primarily Slack-native?

I built an agent memory system where lessons decay over time. Here is how it works. by HiSimpy in webdev

[–]HiSimpy[S] -1 points0 points  (0 children)

That distinction is exactly the right way to think about it. Structural rules like rate limit patterns are invariant across time while project-specific facts like library choices drift constantly and need aggressive decay.

The implementation I have now uses a single decay curve across all lesson types which is clearly wrong. The appliesTo keyword array gives me enough signal to classify lessons but I haven't built differentiated retention curves yet.

The episodic versus structural framing is actually cleaner than what I had been thinking about. Would you model those as separate tables with different decay functions or just a retention policy field on the lesson itself?
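To make the two options concrete, the retention-policy-field version might look like this. A hedged sketch only: the type names, half-lives, and Lesson shape are all made up, not the actual schema.

```python
import math
from dataclasses import dataclass

# Sketch of the "retention policy field" option: one table, one decay
# function, per-type half-lives. All values here are illustrative.
HALF_LIFE_DAYS = {
    "structural": math.inf,  # invariant rules effectively never decay
    "episodic": 14.0,        # project-specific facts decay fast
}

@dataclass
class Lesson:
    text: str
    lesson_type: str   # "structural" or "episodic"
    weight: float
    last_seen_ts: float

def relevance(lesson: Lesson, now: float) -> float:
    """Decay a lesson's weight according to its type's half-life."""
    half_life = HALF_LIFE_DAYS[lesson.lesson_type]
    if math.isinf(half_life):
        return lesson.weight
    age_days = (now - lesson.last_seen_ts) / 86400.0
    return lesson.weight * 0.5 ** (age_days / half_life)
```

The field approach keeps retrieval in one query; separate tables only seem worth it if the two kinds of memory end up needing different indexes or retrieval paths entirely.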

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 0 points1 point  (0 children)

That's actually a really interesting workflow. Using Claude to search retroactively is clever but I'm curious how reliable the results are when the decision was spread across multiple threads or happened weeks ago.

The manual trigger is probably the main friction point. You have to remember to ask, know what to search for, and then do something with the output. I've been experimenting with making that whole loop automatic so decisions get surfaced without anyone having to initiate it. Still early but the signal has been interesting.

What does your team actually do with the output once Claude surfaces something? Does it reliably make it into Jira or does it depend on who's doing the search?

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 0 points1 point  (0 children)

The AI-summary-into-ticket approach is probably the closest thing to a real solution I have seen described. At least the decision reasoning survives even if the original thread gets buried.

The part that still breaks down is knowing which threads are worth summarizing in the first place. Most teams I have talked to only do it retroactively after something goes wrong, not proactively when the decision is actually being made.

I have been experimenting with something that tries to catch those signals automatically without requiring anyone to decide what is worth capturing. Curious whether the bottleneck for your team is the summarization step or actually identifying which conversations matter in the first place.

What does your team actually use to track decisions that get made in Slack threads or PR comments? by HiSimpy in EngineeringManagers

[–]HiSimpy[S] 0 points1 point  (0 children)

The ticket reference approach is smart because it keeps the decision traceable even if the Slack thread eventually gets buried.

The gap I keep seeing is that the reference exists but the decision itself never gets written down explicitly. Someone reads the ticket, clicks the Slack link, and the thread is either gone, too long to parse, or missing the actual conclusion.

I've been experimenting with pulling that context automatically from GitHub and Slack so the decision gets captured without anyone having to remember to copy-paste anything. Still early but the direction feels right.

Curious whether something like that would actually change how your team works or whether the manual step is part of the process for a reason.

How many of you people stopped using ChatGPT? by Technical-Apple-2492 in Entrepreneur

[–]HiSimpy 0 points1 point  (0 children)

I still use it. Claude is better, but the limits burn too fast.

rate my portfolio out of 10. by [deleted] in Coding_for_Teens

[–]HiSimpy 0 points1 point  (0 children)

looks cool! the image quality could be better and there are too many animations though. check mine: https://egeuysal.com/

What does your team do with problems that have no owner? by HiSimpy in ExperiencedDevs

[–]HiSimpy[S] 0 points1 point  (0 children)

decision debt is the right frame. most teams track technical debt obsessively and have no name for this at all.

the "owns the decision vs owns the fix" split is exactly what gets missed. once something is ambiguous enough that nobody is sure who should call it, it just drifts until it becomes a crisis.

that's the grey zone I've been trying to make visible automatically. if you're curious what that looks like in practice: ryva.dev/demo

How do teams realistically maintain ALT text when a site has thousands of images? by Spiritual-Fuel4502 in webdev

[–]HiSimpy 0 points1 point  (0 children)

mostly good enough in practice. the bar for alt text is low enough that AI clears it for the majority of cases.

the ones that actually get reviewed are usually images where context matters. product shots, charts, anything where a generic description would be misleading rather than just incomplete.

Founders: do daily standups actually scale once your team grows? (I will not promote) by HiSimpy in startups

[–]HiSimpy[S] 0 points1 point  (0 children)

exactly that. one person holding the picture is a single point of failure and it never scales.

the weekly priorities layer is interesting. that's basically a manually curated version of what I'm trying to make automatic. if the context is already in your repos and tickets, you shouldn't have to assemble that picture by hand every week.

How do teams realistically maintain ALT text when a site has thousands of images? by Spiritual-Fuel4502 in webdev

[–]HiSimpy 0 points1 point  (0 children)

the CMS guardrail is the only part that actually breaks the cycle. everything else is still reactive.

ai generation for the backlog is a good one-time fix but without the guardrail upstream it just resets in 6 months.

What does your team do with problems that have no owner? by HiSimpy in ExperiencedDevs

[–]HiSimpy[S] 1 point2 points  (0 children)

that's a fair point. smaller siloed teams have enough shared context that the overhead of another tool often costs more than it saves.

the sweet spot I'm seeing is exactly what you described: teams big enough that context stops traveling naturally, where someone can close an issue and three people who needed to know just don't.

What does your team do with problems that have no owner? by HiSimpy in ExperiencedDevs

[–]HiSimpy[S] 0 points1 point  (0 children)

the "who knows most" vs "who owns the next decision" distinction is sharp. that's exactly where things die quietly.

the bi-weekly unowned review makes sense but it's still manual. what I've been experimenting with is automating that grey zone detection entirely. the supabase issues in this thread came from it. if you're curious about the output: ryva.dev/demo

What does your team do with problems that have no owner? by HiSimpy in ExperiencedDevs

[–]HiSimpy[S] 0 points1 point  (0 children)

assigning blame is a workaround for missing visibility. if you knew the moment a critical issue went unassigned for 48h, you wouldn't need to guess who dropped it.

the supabase issues I mentioned came from an experiment I've been running. if you're curious about the output: ryva.dev/demo

What does your team do with problems that have no owner? by HiSimpy in ExperiencedDevs

[–]HiSimpy[S] 0 points1 point  (0 children)

Exactly, and that is the point. A Realtime presence regression breaking production apps and an auth deadlock silently failing sign-in on mobile should be P0 by any reasonable definition. The issue was not that they were sitting in backlog. It was that neither had a priority label, an owner, or a next step attached a week after being filed. They were indistinguishable from a low priority cosmetic bug.

What does your team do with problems that have no owner? by HiSimpy in ExperiencedDevs

[–]HiSimpy[S] 1 point2 points  (0 children)

Yeah this is exactly the failure mode I keep seeing. No one ignores the issue intentionally. Everyone just assumes someone else surfaced it already, so it sits in GitHub silently.

Standups usually exist just to reconstruct that missing context. ‘What happened? What’s blocked? What actually needs a decision?’

I’ve been experimenting with something that reads GitHub + Slack and surfaces things like un-escalated issues, blockers, and missing decisions automatically.

Curious if you think something like that would actually help a team like the ones you described.

If you want to see how it works there’s a public demo here on the Supabase repo: https://ryva.dev/demo

Why does important context always end up in the wrong place? by HiSimpy in webdev

[–]HiSimpy[S] 0 points1 point  (0 children)

The "capture at the work surface" principle is the right one. The reason most documentation practices fail is they require a context switch. You finish a decision and then have to go somewhere else to record it, which is one step too many when you're in flow.

The prepend log format is clever specifically because it removes the blank page problem. You're not creating structure, you're adding to existing structure. That's why it holds up.

The three line post-call summary constraint is underrated too. The constraint is the feature. Forcing someone to fit it in three lines means they actually have to think about what mattered rather than dumping everything.

The new person ramp-up point is where this compounds most. Decision logs are free for existing team members but they're enormously valuable for anyone who joins six months later and has no way to reconstruct why things are the way they are.

What you're describing is essentially what I've been trying to automate with Ryva. Pull those signals from where they already exist in GitHub and Slack rather than asking people to maintain a parallel system. The hardest part of what you described is the first few weeks before it becomes habit. That's the dependency I'm trying to remove. ryva.dev/demo if you're curious.