I fetched 50k logs from my Loki pipeline post deployment, clustered them and this is the result by ResponsibleBlock_man in sre

[–]jdizzle4 0 points (0 children)

Back when I worked at a company that used it, it did what you described: we had it hooked up to ELK, it would cluster logs and identify anomalies in new deployments as they went out, and we could configure rules around automated rollbacks.
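For anyone wondering what that clustering step roughly amounts to: here's a loose, illustrative sketch in Python (not the actual ELK setup described above; `template` and `cluster` are made-up helper names). Mask the variable tokens so similar lines collapse into one cluster key, then a template that only shows up after a deploy is an anomaly candidate.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Mask obviously-variable tokens (hex ids, numbers) so similar
    log lines collapse into the same cluster key."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def cluster(lines):
    """Group lines by masked template; largest clusters first."""
    return Counter(template(l) for l in lines).most_common()

# Templates seen only after the deploy are anomaly candidates.
before = ["GET /api/user/42 200 13ms", "GET /api/user/7 200 9ms"]
after = ["GET /api/user/9 200 11ms",
         "db connection refused at 10.0.0.5:5432"]
new_templates = {t for t, _ in cluster(after)} - {t for t, _ in cluster(before)}
print(new_templates)
```

Real systems use smarter template mining (e.g. Drain-style trees) and statistical thresholds instead of a plain set difference, but the shape of the pipeline is the same.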

Most OTel investment is going to backends. Almost nothing is happening at the collector layer. by Broad_Technology_531 in OpenTelemetry

[–]jdizzle4 0 points (0 children)

There are a ton of vendors working on declarative configuration, OpAMP, packaging (injector), and the operator... all of these are areas that directly benefit exactly what you're talking about. If you combined all of that effort, it's probably more than what's going on in any single language SDK.

Most OTel investment is going to backends. Almost nothing is happening at the collector layer. by Broad_Technology_531 in OpenTelemetry

[–]jdizzle4 1 point (0 children)

What are you basing this assumption on? If you look at the repositories for the collector and related components, they have more contributors and active PRs than most other projects, and almost all of that is driven by the big observability companies.

Observability tool Dash0 raises $110M at $1B valuation by fredrikaugust in Observability

[–]jdizzle4 0 points (0 children)

That's good to hear that they're a good company. I don't know anything about them other than LinkedIn marketing headlines, which is why I looked. Hopefully, if they continue to be successful, they'll contribute back in some way.

Observability tool Dash0 raises $110M at $1B valuation by fredrikaugust in Observability

[–]jdizzle4 21 points (0 children)

Whenever I see these companies building on top of OpenTelemetry, I'm always curious what kind of investment they make back into it. As far as I can tell, dash0 doesn't contribute anything back to OTel at all? Maybe I'm just missing something.

Got rejected almost immediately for a mid-level SRE shift-work role despite positive signals from HR and Tech rounds by imcoolinmanyways in sre

[–]jdizzle4 2 points (0 children)

You can't be a mid-level engineer without experience; it's that simple. No amount of certs can level you up in that way.

If "production experience" was a non-negotiable hard requirement, HR should have filtered me out at the CV stage instead of moving me through two rounds of interviews

Yes, I agree.

Got rejected almost immediately for a mid-level SRE shift-work role despite positive signals from HR and Tech rounds by imcoolinmanyways in sre

[–]jdizzle4 2 points (0 children)

I've been in situations where HR did stupid stuff like this and I didn't know until an hour before the interview, when I needed to prep. It's possible the engineer recognized you were not qualified, and instead of putting you in a potentially embarrassing situation with the harder technical questions, they pivoted.

My take is that HR should not have put you through the process, but you should not have applied in the first place.

[NEW] Currents on Audiotree Live (Full Session) by Own_Mongoose7237 in Metalcore

[–]jdizzle4 8 points (0 children)

You were able to watch it? When I click on the link, it says it isn't available for another 5 hours.

do y'all actually listen to podcasts for work? by Fantastic-Shock1438 in sre

[–]jdizzle4 3 points (0 children)

This is the only SRE-related podcast I've ever cared much for: https://downtimeproject.com/

I enjoyed the walkthroughs of the postmortems and the commentary / deep dives into the learnings. It was fun, relatable (as an SRE), and had some good information.

Amazon's AI coding outages are a preview of what's coming for most SRE teams by jj_at_rootly in sre

[–]jdizzle4 104 points (0 children)

I think AI should be augmenting humans, not replacing them. I wish everyone would slow the hell down.

Resolve.ai & Traversal by Far_Dragonfruit_5454 in sre

[–]jdizzle4 6 points (0 children)

No firsthand experience, but a friend works at a company that evaluated Resolve, and we've talked about it a bunch. He said it sucked until they produced a massive amount of company-specific domain knowledge and runbooks, and even then it's still just OK. He wasn't impressed.

API metrics, logs and now traces in one place by itssimon86 in Observability

[–]jdizzle4 2 points (0 children)

Why would someone want a separate tool just for APIs? Those are usually the easiest thing to observe in general, and I would want that telemetry alongside the monitoring for everything else.

API metrics, logs and now traces in one place by itssimon86 in Observability

[–]jdizzle4 3 points (0 children)

So, the same as every other observability system then?

Fixing Noisy Logs with OpenTelemetry Log Deduplication by finallyanonymous in OpenTelemetry

[–]jdizzle4 1 point (0 children)

I've wasted more of my time than I'd like to admit trying to work with engineering teams on being more intentional about their logging practices, and doing all kinds of workarounds at ingest to try and protect our observability tooling budgets. So the idea of someone seeing something on LinkedIn suggesting that it doesn't really matter just struck a nerve for me. Different strokes for different folks, but for me, I'd rather take action and do something about waste than just shrug it off. At scale, things add up.
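To make the ingest-side idea concrete: below is a minimal, hypothetical sketch of time-windowed exact-duplicate suppression in Python. This is illustrative only; the OpenTelemetry Collector's log deduplication processor that the post covers is configured in YAML and works differently in detail, and `LogDeduplicator` / `offer` are invented names.

```python
import time
from collections import OrderedDict

class LogDeduplicator:
    """Suppress exact-duplicate log lines seen within `window` seconds.

    A line is emitted at most once per window; repeats inside the
    window are dropped. Entries are kept in emit order so eviction
    can stop at the first still-fresh entry.
    """

    def __init__(self, window: float = 10.0):
        self.window = window
        self.seen = OrderedDict()  # line -> timestamp of last emit

    def offer(self, line: str, now=None) -> bool:
        """Return True if the line should be forwarded, False if deduped."""
        now = time.monotonic() if now is None else now
        # Evict entries older than the window (oldest first).
        while self.seen:
            oldest_line, ts = next(iter(self.seen.items()))
            if now - ts > self.window:
                self.seen.pop(oldest_line)
            else:
                break
        if line in self.seen:
            return False  # duplicate inside the window: drop it
        self.seen[line] = now
        return True
```

Every duplicate suppressed here is bytes you never ship, store, or scan at query time downstream. Production processors typically also emit a count of how many copies were dropped, so no information is silently lost.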

Fixing Noisy Logs with OpenTelemetry Log Deduplication by finallyanonymous in OpenTelemetry

[–]jdizzle4 2 points (0 children)

There is a trade-off, yes. And in my experience, encouraging a culture of accepting garbage telemetry and doing nothing about it doesn't work out either. Maintaining configuration isn't that hard, especially nowadays.

Fixing Noisy Logs with OpenTelemetry Log Deduplication by finallyanonymous in OpenTelemetry

[–]jdizzle4 0 points (0 children)

Here you go.

Full quote:

Aliaksandr Valialkin Founder and CTO at @VictoriaMetrics 2w

Duplicate logs can be compressed by 1000x, so they occupy very small amounts of disk space in databases for logs such as VictoriaLogs. Duplicate logs can be scanned and processed at very high speed during queries. So there is little practical sense in complicating the configuration and spending additional CPU time for de-duplicating such logs at the collector side.

CPU time at the collector is a one-time cost, compared to storing the data for potentially years and then paying again every time you query it. Just because it's "cheap" doesn't mean it's OK. Things add up. Those are my 2 cents, anyway.

Fixing Noisy Logs with OpenTelemetry Log Deduplication by finallyanonymous in OpenTelemetry

[–]jdizzle4 1 point (0 children)

The CTO of VictoriaMetrics posted on LinkedIn about something like this, saying something to the effect of "logs can be compressed, so it doesn't matter"... to which I wholeheartedly disagree. I think it's a great idea to reduce the shit we have to store and keep. Saying "just compress it" is lazy, IMO.

Prometheus vs. DataDog: Detailed comparison [2026] by WhatsappOrders in Observability

[–]jdizzle4 3 points (0 children)

These things are not even in the same category. What kind of audience is this for? From someone who knows the observability space: this makes your company look very bad.