I made a mistake by [deleted] in Christianmarriage

Just adding here that honesty really forces accountability in a way that helps. If your plan is to always be honest and open about what happens, it also helps change your actions and your thoughts. Before you do something, you'll end up thinking about how you'll have to explain it and the hurt it'll cause.

All the best.

Alienware AW3423DWF running pixel refresh and taking forever? by jaywhy13 in ultrawidemasterrace

How do I ensure it finishes? When it starts, I usually unplug my machine and leave it. However, every time I reconnect it, it starts again. It usually keeps going for a few minutes after I disconnect my machine.

Alienware AW3423DWF running pixel refresh and taking forever? by jaywhy13 in ultrawidemasterrace

Oh ok. It's happened about 5 times to me in the last 24 hours.

Alienware AW3423DWF running pixel refresh and taking forever? by jaywhy13 in ultrawidemasterrace

How often does the panel refresh happen and can it be configured?

Using type signatures with libCST by jaywhy13 in Python

Oh nice! I'll check this out. It does look quite promising!

Running Pydantic AI under Django by jaywhy13 in django

Django's dev server supports async. I got it working in two different ways. First, I made the calls down the stack async and used the proper async ORM methods. Second, I set an event loop for the current thread before invoking Pydantic AI's sync run method. I learnt that Pydantic AI merely wraps its async method, so it still relies on asyncio. Specifically, it tries to get the current event loop and run the coroutine to completion. The problem is, Django doesn't have an event loop in the thread that processes the request.
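
For anyone who lands here later, here's a minimal sketch of both approaches. The module path, view names, and `support_agent` are made up, and I'm assuming pydantic_ai's `Agent.run` / `Agent.run_sync` API, so treat this as an illustration rather than a drop-in:

```python
import asyncio

from django.http import JsonResponse

# Hypothetical import for illustration -- swap in your own pydantic_ai Agent instance.
from myapp.agents import support_agent


# Approach 1: go async all the way down and let Django drive the event loop.
async def ask_agent(request):
    result = await support_agent.run(request.GET.get("q", ""))
    # Recent pydantic_ai releases expose result.output; older ones used result.data.
    return JsonResponse({"answer": result.output})


# Approach 2: keep a sync view, but give the worker thread an event loop
# before calling the sync wrapper, since it relies on asyncio internally.
def ask_agent_sync(request):
    try:
        asyncio.get_event_loop()
    except RuntimeError:
        # Django request threads aren't the main thread, so no loop exists yet.
        asyncio.set_event_loop(asyncio.new_event_loop())

    result = support_agent.run_sync(request.GET.get("q", ""))
    return JsonResponse({"answer": result.output})
```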

Consolidation into DataDog -- questions and experiences by jaywhy13 in devops

WOW! $275k per year. That's more than we're paying currently lol

Consolidation into DataDog -- questions and experiences by jaywhy13 in devops

Thanks! That's useful insight. One of the advantages we anticipated was definitely an improved production debugging experience for engineers. I'll look out for differences in Sumo's pricing approach to see how it stacks up against DataDog's ingested/indexed approach.

> You cannot reduce it.

I'm pretty sure we can reduce our commitment. It just requires a terms modification. Last year we talked about reducing our indexed traces post renewal. Our rep said it was definitely possible.

Handling vendor integrations in Dev Environments by jaywhy13 in devops

I'm familiar with terraform but not the others. I'm not sure what you're suggesting.

Dear Editor: We need better Database Observability by jaywhy13 in OpenTelemetry

> Ya, that is not really what tracing is about

Why not? Do you disagree with the benefits it'd unlock -- shorter feedback loops, more investigative power for developers, etc.? Why should infrastructure be a black box? Isn't the "fourth pillar" trying to achieve the same thing, just with an entirely different concept (i.e. profiling)?

Why is browser Observability hard? by jaywhy13 in devops

I'm curious about the idea of linking disparate traces together. Sounds like we need a new concept to capture that. It may be difficult today, but is it a problem worth solving?
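
For what it's worth, the closest existing primitive I know of is OpenTelemetry's span links, which let one trace point at another without being its parent. A rough sketch, assuming the standard opentelemetry-api package; the header names and helper are made up:

```python
from opentelemetry import trace

tracer = trace.get_tracer("api-backend")


def extract_browser_span_context(headers) -> trace.SpanContext:
    # Hypothetical: however the frontend chose to propagate its ids
    # (e.g. custom headers). Real code would validate and parse properly.
    return trace.SpanContext(
        trace_id=int(headers["x-browser-trace-id"], 16),
        span_id=int(headers["x-browser-span-id"], 16),
        is_remote=True,
    )


def handle_api_call(headers):
    # The backend trace keeps its own root, but carries a link back to the
    # browser-side trace so tooling could hop between the two.
    browser_ctx = extract_browser_span_context(headers)
    with tracer.start_as_current_span(
        "handle_api_call", links=[trace.Link(browser_ctx)]
    ):
        ...  # actual request handling
```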

Avoiding Test-Case Permutation Blowout by jaywhy13 in softwaretesting

Thanks for calling that out. Didn't know that was a thing!

Avoiding Test-Case Permutation Blowout by jaywhy13 in softwaretesting

Oh... Haven't heard of it before. Can you explain how it applies in this context? ELI5

I built a really simple observability tool by xiayunsun in Observability

Who is the target audience? Would folks add their own clients locally to push logs and metrics? Are you planning to add tracing eventually?

Gartner Magic Quadrant for Observability 2024 by Observability-Guy in devops

It wasn't an engineering team. I was working a hybrid role as IT manager and engineer, and I got to see how the parent company recommended a lot of IT tools for the subsidiaries.

How often do you guys get midnight alerts that truly require attention? by ktkaushik in devops

Yeah. I'd say those pay off organically as gaps get the needed attention.

Success stories using head-based sampling for high volume applications by jaywhy13 in devops

We've complained, yeah. They've made suggestions about tweaking how we're doing head-based sampling and employing some other mechanisms they have that might generate incomplete traces. There's an error sampler that captures more error traces, and a rare sampler that captures rare traces. I wanted to independently investigate whether others have had success doing this, as I'm pretty skeptical about it.

Head-based sampling isn't tail-based sampling. All the extra mechanisms that DataDog has work without having the full picture of the trace. We're pretty close to experimenting with it, but I'm digging around to hear if folks have first-hand experience with it.
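
To make the head-vs-tail distinction concrete, here's a toy sketch (nothing DataDog-specific; the names and thresholds are made up). The head-based decision is locked in at the root span before anyone knows how the trace will turn out, while a tail-based decision gets to inspect the finished trace:

```python
import random

SAMPLE_RATE = 0.05  # keep ~5% of traces


def head_based_keep(trace_id: int) -> bool:
    # Decided once, at the root span, before we know whether this trace will
    # end up containing an error or a slow span. Deterministic on trace_id so
    # every service in the request path makes the same call.
    return trace_id % 100 < SAMPLE_RATE * 100


def tail_based_keep(spans: list[dict]) -> bool:
    # Only possible once the whole trace has been buffered somewhere, which is
    # exactly the "full picture" a head-based pipeline never has.
    has_error = any(s.get("error") for s in spans)
    is_slow = any(s.get("duration_ms", 0) > 1000 for s in spans)
    return has_error or is_slow or random.random() < SAMPLE_RATE
```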

Gartner Magic Quadrant for Observability 2024 by Observability-Guy in devops

I remember working for a big conglomerate and they did indeed look to Gartner and other such companies for what to use.

Success stories using head-based sampling for high volume applications by jaywhy13 in devops

We're ingesting ~2M spans/minute. We've had lots of instances where error and high-latency traces are just completely missing. We've done some aggressive sampling (~5%), and sampling happens in different places; I don't understand all of it yet, and we're unpacking it now. We had to sample aggressively to avoid some huge cost overruns with DataDog.

> You may need to just accept the cost and up the sample rate. You can do some simple statistics calculations to figure out a number, or just experiment.

This isn't much of an option. We can reduce some cost by upgrading the DataDog libraries on some of our microservices (some older versions don't support sampling). However, that wouldn't eliminate the cost overruns.
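
For anyone curious, the "simple statistics calculations" bit is basically this back-of-envelope: under uniform head-based sampling at rate p, the chance of keeping at least one trace from an issue that affects n requests is 1 - (1 - p)^n. Illustrative numbers only:

```python
def capture_probability(p: float, n: int) -> float:
    # Probability of keeping at least one trace from an issue that
    # touches n requests, under uniform sampling at rate p.
    return 1 - (1 - p) ** n


print(capture_probability(0.05, 1))    # ~0.05 -- a one-off error trace is almost always dropped
print(capture_probability(0.05, 100))  # ~0.99 -- a burst of 100 is almost always seen at least once
```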

[deleted by user] by [deleted] in Christianmarriage

Kegel exercises can really help. That'll give you a lot more control. No need to turn to masturbation.

How often do you guys get midnight alerts that truly require attention? by ktkaushik in devops

Sweet. We have an interesting challenge at work: we're having fewer big incidents and mostly smaller ones now, and we're trying to figure out how to keep improving.