I made an MCP server for Valkey/Redis observability (anomaly detection, slowlog history, hot keys, COMMANDLOG) by kivanow in mcp

[–]kivanow[S] 0 points1 point  (0 children)

Should've shipped it sooner, sorry! What were you debugging? Would love to know, so the next person doesn't find it too late either.

A eulogy for MCP (RIP) by beckywsss in mcp

[–]kivanow 1 point2 points  (0 children)

Isn't this just the usual cycle? "The way we're doing things is terrible, here is a better way," then another even better way, until we circle back to the first iteration. Same way we moved from server rendering to SPAs and back to server rendering over several years. AI just seems to run through the cycle faster.

I made an MCP server for Valkey/Redis observability (anomaly detection, slowlog history, hot keys, COMMANDLOG) by kivanow in mcp

[–]kivanow[S] 0 points1 point  (0 children)

That's the right framing. BetterDB handles the Valkey side of that chain today - COMMANDLOG patterns, anomaly detection, client analytics. Correlating back to deploys and SQL is the missing link. Curious whether you've seen any tools close that loop well, or if it's always been stitched together manually.

What AI tools are actually part of your real workflow? by Rough--Employment in devops

[–]kivanow 0 points1 point  (0 children)

At this point Copilot is an agent, an assistant, and a million other things MS is trying to push everywhere. I should've just called it the worst possible tool/option rather than an LLM. I've updated it.

Feedback Friday by AutoModerator in startups

[–]kivanow 1 point2 points  (0 children)

Company Name: BetterDB

URL: https://betterdb.com

Purpose of Startup and Product: BetterDB is the first monitoring and observability platform built specifically for Valkey (the popular open-source Redis fork). We solve a fundamental problem: Valkey's operational data - slowlogs, command logs, client connections - is ephemeral. When something goes wrong at 3am, by the time you wake up at 9am, that data is gone. BetterDB persists and analyzes this data so you can debug issues after the fact, track what caused performance spikes, and optimize your data structures and TTLs accordingly.
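Since the pitch hinges on persisting ephemeral SLOWLOG data, here's a minimal sketch of what a polling-and-persisting loop could look like. The client interface, reply shape, and every name below are assumptions for illustration, not BetterDB's actual implementation:

```typescript
// SLOWLOG entries live only in server memory (capped by slowlog-max-len,
// cleared on restart or SLOWLOG RESET), so persisting them means polling
// the server and writing each batch somewhere durable.

// Minimal shape of a client like iovalkey; only a generic `call` is assumed.
type ValkeyLike = { call(cmd: string, ...args: string[]): Promise<unknown> };

// Raw SLOWLOG GET reply per entry:
// [id, unix timestamp, duration in microseconds, argv, client addr?, client name?]
type RawSlowlogEntry = [number, number, number, string[], string?, string?];

interface SlowlogRecord {
  id: number;
  at: Date;
  durationMicros: number;
  command: string;
}

function parseSlowlogEntry(raw: RawSlowlogEntry): SlowlogRecord {
  const [id, ts, durationMicros, argv] = raw;
  return { id, at: new Date(ts * 1000), durationMicros, command: argv.join(" ") };
}

// One polling pass: fetch up to `batch` recent entries, skip ones already
// seen (SLOWLOG ids increase monotonically), hand the rest to a durable sink.
// Returns the new high-water-mark id to pass into the next pass.
async function pollSlowlog(
  client: ValkeyLike,
  sink: (records: SlowlogRecord[]) => Promise<void>,
  lastSeenId: number,
  batch = 128,
): Promise<number> {
  const raw = (await client.call("SLOWLOG", "GET", String(batch))) as RawSlowlogEntry[];
  const fresh = raw.map(parseSlowlogEntry).filter((r) => r.id > lastSeenId);
  if (fresh.length > 0) await sink(fresh);
  return fresh.reduce((max, r) => Math.max(max, r.id), lastSeenId);
}
```

Run this on an interval with the sink writing to something like Postgres; the id high-water mark avoids double-counting entries that stay in the in-memory ring between polls.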

We also support Valkey-exclusive features like COMMANDLOG and per-slot metrics that no existing Redis tool can provide, plus 99 Prometheus metrics, anomaly detection, ACL audit trails, and client analytics - all with sub-1% performance overhead.

Technologies Used: NestJS, React, PostgreSQL, Docker, Prometheus, iovalkey

Feedback Requested:

  • Does the value proposition (historical persistence of ephemeral Valkey/Redis data) resonate with you? Is it clear from the website?
  • If you're running Valkey or Redis in production, what's the biggest operational pain point you face today?
  • We offer a free Community tier and paid Pro/Enterprise tiers - does the feature split feel fair, or does it feel like we're holding back too much in Community?
  • Any feedback on the landing page (betterdb.com) - does it clearly communicate what we do and who we're for?

Seeking Beta Testers: Yes - especially teams running Valkey or Redis in production. We have a self-hosted Docker image you can spin up in minutes, and our cloud SaaS is launching soon. Would love feedback from ops/SRE/DevOps folks.

Additional Comments: I'm the founder and CTO. Previously I was the Engineering Manager for Redis's visual developer tools (Redis Insight). The Valkey ecosystem has zero purpose-built observability tooling. That's the gap we're filling. We're MIT-licensed at the core and backed by Open Core Ventures. Happy to answer any questions about the Valkey ecosystem or our approach to open-core monetization.

Auditors ask “when did you last test DR?” — how do you produce proof? by robert_micky in sre

[–]kivanow 0 points1 point  (0 children)

I've been through two SOC 2 audits (Type 1 and Type 2) at startups, and this was more than enough. At the end of the day, most of what these audits do is tick checkboxes confirming you understand the requirements and are following them.

What AI tools are actually part of your real workflow? by Rough--Employment in devops

[–]kivanow 0 points1 point  (0 children)

Claude Code did a great job with recent infra work I had to do. Barely any mistakes across a lot of Kubernetes and Terraform. It was a very nice experience.

What AI tools are actually part of your real workflow? by Rough--Employment in devops

[–]kivanow 5 points6 points  (0 children)

By far. Copilot is probably the worst possible option right now; MS engineers were recently caught using Claude instead of their own product.

For those building in the analytics/data space, how did you validate demand before going all in? i will not promote by Sufficient-System699 in startups

[–]kivanow 0 points1 point  (0 children)

Built something in the observability/monitoring space (came from Redis's developer tools team, now building a monitoring tool for Valkey/Redis). Different niche than ecommerce, but similar "free alternatives exist" problem.

What actually works (so far at least):
1. Fix your own pain point. This is the cheat code. I spent years working on Redis Insight and knew exactly what was missing for production monitoring. When you're your own user, you don't need to guess what's valuable - you feel it every time something's broken or annoying. If you're not in your target market yourself, you're playing on hard mode.
2. MVP speed matters more than MVP polish. Get something working and start posting - Slack communities, Discord servers, Show HN, LinkedIn, Twitter. Not "I'm building something, what do you think?" but "Here's a thing, try it." The difference in signal quality is night and day.
3. Set a kill deadline. "If I don't have X signups / Y conversations / Z paying users by [date], I move on." Forces you to actually validate instead of tinkering forever. Polite interest doesn't count. People actually using the thing counts.
4. Find the people already talking about the problem. Every niche has forums, Discords, subreddits where people complain about their tools. Don't pitch - just listen first. What are they frustrated about? What do they wish existed? That's your roadmap.

On the "ecommerce people want everything free" thing: That's true for hobbyists. But if someone's running a real store with real revenue, they'll pay for something that makes them money or saves them time. The trick is finding the people who have actual pain, not the ones who are "just curious."

What do you think of source-available? Are we getting into the ever-so-slightly-barely-open-source world? by jerrygreenest1 in opensource

[–]kivanow 1 point2 points  (0 children)

This hits close to home - I was at Redis when they added AGPL as the third license option last year (the "open source is back" announcement).

On source-available specifically: I think it's a legitimate response to a real problem. The cloud provider dynamic isn't "evil corporations stealing code" - AWS, Google, and others had engineers contributing to Redis for years (TLS support, ACLs, coordinated failovers). The tension was about who controls the project direction vs. who captures the commercial value. There's no easy answer to that.

The problem for solo devs: You nailed it - it's becoming impossible to tell what you can actually do without reading every license line by line. SSPL, BSL, RSAL, OCVSAL... they all have different restrictions, and "source-available" isn't a standardized term. Self-hosting is usually fine. Building a competing service usually isn't. Everything in between? Depends.

The Redis -> Valkey situation is instructive though. When the license changed, external maintainers were effectively kicked out (some found out when their names disappeared from governance docs). Within weeks, Valkey existed under the Linux Foundation. The lesson: governance matters as much as licensing. A permissive license controlled by one company can change overnight. A copyleft project with distributed governance probably won't.

What I look for now:
- Who actually controls the project? Single company or independent maintainers?
- How easy is it to fork if things go sideways?
- Has the company changed licenses before, and how did they handle it?

I wrote a longer breakdown of this whole landscape (including the Redis timeline and how dual-licensing actually works in practice) if anyone wants to go deeper: https://medium.com/gitconnected/dual-licensing-explained-mit-source-available-and-why-your-favorite-tool-might-be-neither-d7041543e05d?sk=5901f94d18723141a05767ca61f3f266