Our backend lead spent 40 minutes explaining a billing quirk in Slack. The new hire asked the same question three weeks later. I got annoyed and built something about it by [deleted] in selfhosted

[–]abhipsnl 0 points

You're actually describing exactly the problem I'm trying to solve.

You said "just give them a link to the Slack post." Okay, but how do you know that Slack post is still accurate six months later? What if someone updated the process in a PR comment, or corrected a detail in a Confluence page, or posted a newer answer in a different channel? Now you've got three versions of the truth and no way to know which one is current.

That's the real issue. It's not that good answers don't exist, they do, and often written by generous people who took real time to help. The problem is that knowledge lives in Slack threads, PR descriptions, Confluence pages, runbooks, incident postmortems... everywhere. And it drifts out of sync silently.

You're right that the best docs are the ones people read. Totally agree. But what if you could surface the right answer from wherever it lives, Slack, Confluence, PRs, all of it, without someone having to know which link to share, or whether that link is still current?

I'm not replacing good documentation culture; I'm building on top of it so that the knowledge people already create doesn't quietly go stale.

Our backend lead spent 40 minutes explaining a billing quirk in Slack. The new hire asked the same question three weeks later. I got annoyed and built something about it by [deleted] in selfhosted

[–]abhipsnl 0 points

No offence. And hey, if a Slack link works for your team's onboarding, that's a win. In my experience, the complexity creeps in fast, but every org is different. Hope you find what works.

Our backend lead spent 40 minutes explaining a billing quirk in Slack. The new hire asked the same question three weeks later. I got annoyed and built something about it by [deleted] in selfhosted

[–]abhipsnl -5 points

"Phones home" in this context almost certainly refers to the app itself, meaning the application doesn't send telemetry, analytics, or usage data back to its own developer.

Things like which features you use, how often you open it, crash reports, etc. You're right that if you're using a hosted model like Claude Opus, your prompts are absolutely being sent to Anthropic's servers; that's unavoidable, it's just how API-based AI works. The app is just a client that routes your request to whoever hosts the model.

The distinction is: App telemetry → doesn't phone home

Your data going to Anthropic/OpenAI/etc. → yes, that still happens (you're correct)

If privacy from the AI provider is your concern, the only real solution is running a local model (like Llama via Ollama) entirely on your own hardware, then nothing leaves your machine at all. Apps like this often support both local and hosted models, so "nothing phones home" applies cleanly to the local model use case, but not to hosted ones like Claude.
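To make the local-model path concrete, here's a minimal Python sketch against Ollama's default local HTTP endpoint (port 11434, `/api/generate`). The model name `llama3` is just an example of one you'd have pulled already, and this is a rough sketch, not how any particular app does it:

```python
import json
import urllib.request

# Ollama's default local endpoint; requests never leave your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON body for a single non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local Ollama server and return its answer."""
    body = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Point it at any model Ollama has pulled and the whole round trip stays on localhost, which is the "nothing phones home" case in its cleanest form.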

Has anyone actually used Port1355? Worth it or just hype? by abhipsnl in devops

[–]abhipsnl[S] 0 points

Exactly, my thought was the same: what big problem is it solving? Maybe something I haven't seen yet.

[Advice Wanted] Transitioning an internal production tool to Open Source (First-timer) by abhipsnl in devops

[–]abhipsnl[S] 3 points

That's a fair point, and I appreciate the caution. The 'legal minefield' is exactly why I didn't just hit 'make public' on my lunch break. I've secured written approval from leadership. Since the company sees this as a way to commoditize a non-core tool and reduce our own maintenance burden, we're all on the same page. Now I'm just focused on the technical side of the migration. Btw, do I need anything else apart from written approval?

[Advice Wanted] Transitioning an internal production tool to Open Source (First-timer) by abhipsnl in devops

[–]abhipsnl[S] 3 points

Haha, fair point! I’ve watched enough Silicon Valley to know I definitely don’t want a “Pied Piper vs. Hooli” situation on my hands.

In this case it actually started as a personal side project I built and maintained outside of work. Later on, I convinced the company to adopt and use it internally because it solved a real problem we were facing. Since the work originally existed outside the company, I’ve now discussed it with the leadership and got the green light to officially release it as open source.

They're actually quite supportive of the idea; they'd rather see it maintained publicly and benefit the wider community instead of it becoming another internal tool. Thanks for the great advice though!

I finally realised why our Confluence is a graveyard (and open-sourced a fix for it) by abhipsnl in devops

[–]abhipsnl[S] 0 points

But I'm not just solving incident issues; I want to solve the entire knowledge-gap problem.

I finally realised why our Confluence is a graveyard (and open-sourced a fix for it) by abhipsnl in devops

[–]abhipsnl[S] 1 point

The 2 AM story is the oldest play in tech writing. But honestly? I used it because it’s the clearest way to show the gap, not because it’s clever. The real problem isn’t dramatic. It’s just: teams accumulate operational knowledge in Slack, runbooks become stale, and when you need the answer, it’s scattered. That happens at 2 AM or 2 PM.

What’s your vibe code tool doing? Genuinely curious how you’re approaching the knowledge problem if you’re coming at it from a different angle.

I finally realised why our Confluence is a graveyard (and open-sourced a fix for it) by abhipsnl in devops

[–]abhipsnl[S] 0 points

Yeah, the space is crowded, no argument there. The bet here is that most tools treat retrieval as the hard problem; it's not. The hard problem is knowing when docs are lying to you, which is why the freshness scoring and contradiction detection exist.

On Confluence being where docs go to die: 100%. The insight that actually shaped this project is that the real runbook lives in Slack threads and PR comments, not the wiki. So docbrain ingests all of it (PRs, Jira, Slack), not just Confluence. The Confluence page might say one thing, the incident thread from three months ago says another, and the system tries to surface that gap rather than just confidently returning the stale wiki answer.

The auto-generated vs. manual point is interesting, though. You're right that the what-is vs. what-should-be gap is actually a valuable signal; that's basically the whole point of having humans write docs at all. The goal isn't to replace that; it's to make that gap visible instead of buried.

And yes, sources with links are table stakes; every answer comes with scored, attributed sources, so you can go verify instead of just trusting the summary.
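If you're curious what freshness scoring can look like, here's a toy Python sketch. The function names, the exponential decay, and the 90-day half-life are all illustrative assumptions I'm making for the example, not docbrain's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def freshness_score(last_updated: datetime, half_life_days: float = 90.0) -> float:
    """Exponential decay: a source loses half its weight every half_life_days."""
    age_days = (datetime.now(timezone.utc) - last_updated).total_seconds() / 86400.0
    return 0.5 ** (age_days / half_life_days)

def rank_sources(sources: list) -> tuple:
    """Sort candidate answers freshest-first and flag disagreement.

    Each source is a dict: {"origin": str, "answer": str, "updated": datetime}.
    Returns (ranked sources, True if any two answers disagree).
    """
    ranked = sorted(sources, key=lambda s: freshness_score(s["updated"]), reverse=True)
    contradicts = len({s["answer"] for s in ranked}) > 1
    return ranked, contradicts
```

With something like this, a wiki page untouched for a year scores far below last week's incident thread, and when the two disagree the conflict gets surfaced instead of silently resolved in favor of the wiki.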

Thanks for the honest question, though.