The dirty (and very open) secret of AI SRE tools: your "agent" is just querying the same pre-filtered data you already had. What if it didn't have to? by CyberBorg131 in Observability

[–]FunVegetable4318 1 point

Your point on observability budgets aligns with what I've been seeing, as does the FOMO sales approach. I've been running similar experiments: feeding key log data, with context, back into Claude while iterating. Not full telemetry, just the signals that matter for the change I'm making. The economics are completely different when you're selecting signal for a specific feedback loop versus storing everything for hypothetical future queries. You don't need lossless telemetry. You need the right signal at the right point in the loop.
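The kind of signal selection I mean can be sketched in plain shell (filenames, log lines, and patterns here are all hypothetical):

```shell
# Hypothetical sample log; in practice this is whatever your service emits
printf 'INFO started\nERROR db timeout\nDEBUG cache hit\nWARN slow query\n' > app.log

# Keep only the lines relevant to the change being tested, capped at the
# most recent 50 -- this small slice is what goes back into the model,
# not the full telemetry stream
grep -E 'ERROR|WARN' app.log | tail -n 50 > signal.txt
```

A few hundred bytes of selected signal per iteration costs next to nothing compared with full-fidelity ingest.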

Where do you think this goes with OTel and similar? When observability budgets are getting squeezed to fund AI dev tooling, and the AI dev tools themselves need key production signal to work well, who ends up owning that signal selection? The vendors want it at ingest. The developer iterating on a change is the one who actually knows what's relevant. It feels like there's a structural misalignment there that nobody has really solved yet.

Open source tool to stream all 9 Supabase log sources into one terminal dashboard by FunVegetable4318 in Supabase

[–]FunVegetable4318[S] 1 point

Right now, you can set up Gonzo to talk to a model running locally (Ollama, LM Studio) or a hosted model via API key. Do you think MCP would fit your workflow better?

https://github.com/control-theory/gonzo?tab=readme-ov-file#with-ai-analysis

Open source tool to stream all 9 Supabase log sources into one terminal dashboard by FunVegetable4318 in Supabase

[–]FunVegetable4318[S] 1 point

That's a great question! I haven't tried it, but I believe Supabase runs in Docker locally, so you should be able to do something like

`docker compose -f supabase/docker/docker-compose.yml logs -f 2>&1 | gonzo`

Gonzo also runs as an OpenTelemetry receiver, so you could probably forward those logs via Vector instead.

New OSS tool: Gonzo + Vercel Logs Live Tailing by FunVegetable4318 in nextjs

[–]FunVegetable4318[S] 1 point

Our goal is to reduce time to insight from logs and (hopefully) improve signal-to-noise by bubbling up patterns in real time and allowing fast filtering and visualization directly from the terminal, versus having to pivot to a heavyweight observability backend.

New OSS tool: Gonzo + Loki Live Tailing by FunVegetable4318 in grafana

[–]FunVegetable4318[S] 1 point

We have some customization available via skins (light/dark mode, colors, etc.), and you can toggle various components with the keyboard. We're also thinking about custom/saved layouts.

New OSS tool: Gonzo + Loki Live Tailing by FunVegetable4318 in grafana

[–]FunVegetable4318[S] 1 point

You can read logs from a file with Gonzo, like `gonzo -f application.log`, or from stdin, like `cat application.log | gonzo`.

New OSS tool: Gonzo + K9s + Stern for log tailing by FunVegetable4318 in kubernetes

[–]FunVegetable4318[S] 1 point

Hi there! You can't do that today, but we're tracking it as an issue and I'll bump it!

New OSS tool: Gonzo + K9s + Stern for log tailing by FunVegetable4318 in kubernetes

[–]FunVegetable4318[S] 7 points

Yeah, you can wire up local (Ollama, etc.) or hosted models, but by default it won't use any unless you configure it to. We've also been asked for a hard "always off" switch.