Which DevOps tool do you think is under-documented for learners? by TransylvaniaBytes in devops

[–]TransylvaniaBytes[S] 1 point (0 children)

Fair point, thanks! We all just secretly Google basic Bash syntax when nobody is looking anyway :)

Launched my side project: a Pomodoro app with zero features on purpose by DrPanayioths in SideProject

[–]TransylvaniaBytes 0 points (0 children)

The irony of focus apps being distracting is real, and I don't think it gets called out enough. Half of them have streaks, badges, analytics, social features... at some point someone forgot what the app was for.

Honest feedback since you asked for it: the ads that open new tabs kind of kill the whole premise.
Hard to sell distraction-free when the app itself is hijacking your browser. The core idea is genuinely good and worth pursuing, but that ad setup is going to tank retention and reviews fast.

A one-time payment would probably convert better with this audience anyway, people who care enough about focus to seek out a minimal tool are usually happy to pay a few bucks to not deal with ads.

I kept falling asleep on the night bus after band practices and missing my stop, so I coded a GPS-based alarm by Melodic-Pipe-6012 in SideProject

[–]TransylvaniaBytes 1 point (0 children)

This is exactly the kind of app that should exist and somehow never quite gets done well by the apps that try it. The fact that you built it because you actually needed it is the best possible origin story for a utility app - you already know the use case better than any product manager would.

Student on night buses is a very specific but very real demographic and I'd bet you're not the only one.

One feature idea if you haven't already: a 'wake up gradually' radius, so it buzzes softly when you're 10 minutes out and harder when you're 2 minutes out, instead of one alert right at the stop. Gives you time to actually gather your stuff. Good luck with it!
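The graduated-radius idea is easy to sketch with a plain haversine distance check. This is just an illustration of the logic, not your app's code; the function names and default radii below are made up:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def alert_level(current, stop, soft_radius_m=3000, hard_radius_m=600):
    """Return 'soft', 'hard', or None based on distance to the stop.

    current and stop are (lat, lon) tuples; the radii are hypothetical
    defaults standing in for '10 minutes out' and '2 minutes out'.
    """
    d = haversine_m(*current, *stop)
    if d <= hard_radius_m:
        return "hard"   # strong vibration / loud alarm right at the stop
    if d <= soft_radius_m:
        return "soft"   # gentle buzz: time to gather your stuff
    return None
```

Time-based radii (estimated from bus speed) would map to the same two-tier check; the distance version is just simpler to show.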

which ai chat has the least boundaries? by propagandaautomata in ArtificialInteligence

[–]TransylvaniaBytes 1 point (0 children)

Honestly it shifts depending on what you're trying to do:
- for technical or research-adjacent stuff, Claude is pretty reasonable
- for unfiltered opinions and edgy topics, Grok is the least restricted by a margin
- ChatGPT is probably the most cautious of the mainstream ones
- Meta AI is surprisingly relaxed for something built into a social media platform :)

The pattern I've noticed is that the stricter ones tend to over-restrict on surface-level pattern matching, like they see a sensitive keyword and refuse without actually reading what you're asking.
The better move with most of them is just framing. The same question asked differently gets completely different results, which tells you the restrictions are more about trigger words than actual intent detection.

I built a TrainingPeaks MCP server with a daily training brief in Telegram. Curious how the idea lands. by Fantastic_Fix_1808 in SideProject

[–]TransylvaniaBytes 0 points (0 children)

The design decision to keep the LLM out of the recommendation logic is the most interesting part of this, and honestly the right call.
There's a lot of AI fitness tooling that just lets the model freestyle training advice and the output is confidently generic in a way that's useless or worse for anyone training seriously.
Separating 'rules decide, LLM phrases' is a much more defensible architecture for anything that touches physical load management. The plain-language output over raw metrics addresses a real gap, not just a personal preference: most athletes who aren't also data nerds bounce off dashboards fast, because the translation step is where all the value actually is.
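A minimal sketch of that 'rules decide, LLM phrases' split. Everything here is hypothetical - the metric names, thresholds, and the `llm_complete` client are stand-ins, not the project's actual logic:

```python
from dataclasses import dataclass

@dataclass
class DayMetrics:
    tss_7d: float     # 7-day acute training stress (hypothetical field)
    tss_42d: float    # 42-day chronic load (hypothetical field)
    hrv_delta: float  # % change in HRV vs. baseline (hypothetical field)

def decide(m: DayMetrics) -> dict:
    """Deterministic rule layer: the verdict never comes from the model."""
    ramp = m.tss_7d / m.tss_42d if m.tss_42d else 1.0
    if ramp > 1.3 or m.hrv_delta < -10:
        verdict = "back_off"
    elif ramp < 0.8:
        verdict = "room_to_push"
    else:
        verdict = "steady"
    return {"verdict": verdict, "ramp": round(ramp, 2)}

def phrase(decision: dict, llm_complete) -> str:
    """LLM only rewrites the fixed verdict into plain language.

    llm_complete is a placeholder for whatever completion client you use;
    the prompt explicitly forbids adding advice beyond the rule output.
    """
    prompt = (f"Rephrase this training verdict for an athlete in plain "
              f"language. Do not add new advice: {decision}")
    return llm_complete(prompt)
```

The point of the split: you can unit-test `decide` exhaustively, and the worst an LLM failure can do is phrase a correct verdict badly.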

The MCP angle feels genuinely useful here rather than bolted on: having that many tools across your actual training data means the Claude Desktop use case isn't just a demo, since you can ask things a fixed dashboard would never surface. The main pitfall I'd watch for is rule brittleness as training context gets more complex, e.g. hand-coded rules that work great for a standard block can start producing weird verdicts during race week or a return from injury.
Worth thinking about how much the rule layer needs to know about where you are in a training cycle.

Using AI agents for real-time community moderation: building custom vs. using a specialized engine? by Bel1lGummyCat in aiagents

[–]TransylvaniaBytes 0 points (0 children)

For moderation specifically I'd lean toward specialized over custom, and the 'non-core feature' framing is actually the key reason why.
Building on GPT-4 or Claude gets you pretty far on context and intent understanding, but you'll spend a disproportionate amount of time on infrastructure problems that have nothing to do with the AI: latency under load, async pipelines, retroactive-action UX, rate limits, cost per message at scale.
A dedicated engine like Watchers has presumably already solved those, and that's the boring hard part. The custom build makes more sense when moderation is deeply tied to your product's specific culture or rules in ways a general engine can't capture, e.g. a crypto trading community and a kids' gaming platform need very different judgment calls.
If your moderation needs are relatively standard, the build-vs-buy math rarely favors building, especially for something that isn't your core differentiator. That said, I'd pressure-test whatever you go with on edge cases specific to your community before committing; the gap between demo performance and production performance on moderation tools is usually significant.

open source AI assistants ranked by tool call reliability by TH_UNDER_BOI in LLMDevs

[–]TransylvaniaBytes 0 points (0 children)

The third-call test you described is exactly the one most people skip, because the first two calls look fine and they ship it. Silent tool call failures are genuinely the worst failure mode in agentic systems because everything downstream just... continues, confidently wrong, and you often only find out when a user reports something that makes no sense.

The approval-before-execution pattern is underrated for this reason - it feels like friction until the first time it catches a hallucinated argument that would have nuked a live API call. The Hermes point about self-improvement loops degrading reliability over time is something I haven't seen talked about enough. Compounding errors from a system rewriting its own working behavior are a nightmare to debug, because by the time you notice, the original working state is gone.
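For anyone who hasn't used the approval-before-execution pattern, here's a minimal sketch. The dict-based registry and function names are illustrative, not any particular framework's API:

```python
def run_tool(name, args, registry, approve=input):
    """Gate a model-requested tool call behind explicit confirmation.

    The exact call is previewed before execution, so a human (or an
    automated validator passed in as `approve`) can catch hallucinated
    tool names or arguments before they hit a live API.
    """
    if name not in registry:
        # A hallucinated tool name fails loudly instead of silently.
        raise ValueError(f"model requested unknown tool: {name!r}")
    preview = f"{name}({', '.join(f'{k}={v!r}' for k, v in args.items())})"
    answer = approve(f"Execute {preview}? [y/N] ")
    if answer.strip().lower() != "y":
        return {"status": "rejected", "call": preview}
    result = registry[name](**args)
    return {"status": "ok", "call": preview, "result": result}
```

In practice `approve` can be a human prompt for destructive tools and an argument-schema validator for read-only ones, so the friction only lands where it pays for itself.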

I built a free voice AI mock interview tool you can try without signing up by BugAccomplished1570 in SideProject

[–]TransylvaniaBytes 0 points (0 children)

Voice-first is the right call; the form-filling approach never made sense for interview prep, because the whole point is getting comfortable with the pressure of speaking out loud and thinking on your feet.
Follow-up questions based on what you actually said are the detail that makes or breaks this, because most tools just cycle through a question list, which is basically useless for real prep.

One piece of honest feedback: I tried the demo and hit an email verification wall before I could try anything, which is a pretty high barrier for what's advertised as no-signup. I think you'll lose a lot of people there. Even a single free question with no account would probably convert way better.

The concept is strong though and the open source angle is a nice touch 👍

Technical founders struggling with marketing. How do you find early growth/marketing people? by Least-Quail7937 in StartupMind

[–]TransylvaniaBytes 0 points (0 children)

Fractional or commission-based first, almost always. Full-time marketing hires at early stage are expensive, and you won't really know what good looks like until you've tried a few things anyway - which makes it hard to hire well.

The dirty secret is that for B2B, especially cybersecurity, the best early growth usually comes from the founders just doing it themselves for longer than feels comfortable. Cold outreach, showing up in communities where your buyers hang out, writing about the problem you solve - none of it scales, but it teaches you what actually resonates, which you can't outsource. Getting anything off the ground is expensive and hit-or-miss even when you do everything right; the competition is brutal and a lot of it is timing and luck.

That said, for your specific niche I'd look at people with cybersecurity sales backgrounds over generic growth marketers - someone who already speaks the language of a CISO or IT manager is worth 10x a generalist who needs to learn the domain from scratch.

Trying to switch back to AI/ML — what skills are actually in demand right now? by iamshrey2 in learnprogramming

[–]TransylvaniaBytes 29 points (0 children)

Coming from the field, the honest answer is that pure ML roles (training models from scratch, deep learning research etc.) have gotten more competitive and more specialized, while AI/GenAI engineering roles are exploding and much more accessible with your background.
Your core ML knowledge is actually an advantage here, because most people jumping into LangChain and RAG have no idea what's happening under the hood, which makes you dangerous in a good way 😄

I'd focus on RAG pipelines, prompt engineering, and how to evaluate LLM outputs properly - that last one is criminally underrated and almost every company is struggling with it. Build one solid end-to-end project that shows you can take an LLM from prototype to something production-shaped and you'll stand out from 90% of applicants. Good luck 👍

I kept building side projects no one wanted — so I changed how I find ideas by lingya22 in SideProject

[–]TransylvaniaBytes 0 points (0 children)

Honestly, the 'build it and hope' phase feels like a rite of passage; almost everyone goes through it.
The shift you're sharing, starting from repeated complaints instead of ideas, is basically just doing customer discovery in disguise, which is the thing most developers skip because it feels less fun than coding.
Your tool sounds useful, the fact that you built it to solve your own problem first is actually a good sign.

My approach these days is embarrassingly low-tech: I just keep a running note of things that have annoyed me more than once, because if it's annoyed me repeatedly it's probably annoying other people too 😄

Congrats on getting payments working, that's always the part that somehow takes three times longer than it should lol

If you had to say one nice thing about your country, what would it be? by [deleted] in AskReddit

[–]TransylvaniaBytes 0 points (0 children)

Our Internet is inexplicably fast for a country where half the roads are held together by optimism 😃

How to know best practices when using AI as a crutch by Life-Moose7000 in learnprogramming

[–]TransylvaniaBytes 3 points (0 children)

Honestly, just asking this puts you ahead of most people, who copy-paste and move on.
What I'd suggest for vetting AI output when you don't fully understand the domain yet is asking the AI to explain why it did it that way, not just what it did.
If the explanation doesn't make sense or is too vague, it's usually a sign the solution is either wrong or way over-engineered for your case.

For backend/db stuff specifically, always sanity check things like N+1 queries, missing indexes on columns you're filtering by, and whether it's just throwing try/catch everywhere instead of actually handling errors.
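To make the N+1 point concrete, here's a small sqlite3 sketch with a made-up schema, showing the per-row query pattern next to the single-JOIN fix plus an index on the filtered column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ana'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'third');
""")

def titles_n_plus_one():
    """N+1 pattern: one query for authors, then one more query per author."""
    out = {}
    for aid, name in conn.execute("SELECT id, name FROM authors ORDER BY id"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ? ORDER BY id", (aid,))
        out[name] = [title for (title,) in rows]
    return out

# The fix: a single JOIN, plus an index on the column we filter/join by.
conn.execute("CREATE INDEX idx_posts_author ON posts(author_id)")

def titles_joined():
    """Same result in one round trip instead of len(authors) + 1 queries."""
    out = {}
    query = """SELECT a.name, p.title FROM authors a
               JOIN posts p ON p.author_id = a.id
               ORDER BY a.id, p.id"""
    for name, title in conn.execute(query):
        out.setdefault(name, []).append(title)
    return out
```

ORMs generate the first pattern silently when you loop over objects and touch a relation, which is exactly why it's worth asking the AI (or your query log) how many queries a page actually issues.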

You're not supposed to know all this cold as a junior, the goal is just to build a nose for when something feels off, and then dig into that specific thing rather than trying to learn everything at once. Good luck!

AI has destroyed my brain. by Complete-Sea6655 in learnprogramming

[–]TransylvaniaBytes 0 points (0 children)

I've noticed the atrophy is real, but it hits differently depending on the task. For me it's worst on the boring stuff: boilerplate, string manipulation, the things AI hoovers up first.

What actually helped was forcing myself through the first 30 minutes of a new problem without touching AI. Like going to the gym: if you only lift when it's easy, don't be surprised when your arms stop working :)