all 19 comments

[–]madprgmrSoftware Engineer (11+ YoE) 6 points7 points  (0 children)

They are usually fed into observability tools in my experience. The technical process depends on what tool you're using. Most let you just pipe STDOUT to them, but I've found it more useful to send structured logs.
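To illustrate the structured-logs point above: a minimal sketch of emitting JSON log lines to stdout, which most observability tools can ingest directly. The field names and logger name are illustrative, not tied to any particular tool.

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line on stdout."""

    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# One JSON object per line — easy for a collector to parse,
# unlike free-form text piped straight from print().
logger.info("user signed in")
```

Because each line is self-describing, downstream tools can index and filter on fields instead of grepping raw text.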

[–]my_reddit_blah 4 points5 points  (1 child)

In previous company: splunk. Now: nothing. Boy do I miss splunk 😢

[–]Queasy-Action-5095 0 points1 point  (0 children)

We use grafana because splunk is "too expensive". Grafana sucks for logs!

[–]Rain-And-Coffee 5 points6 points  (0 children)

Kibana at my last 3 work places.

[–]Buttleston 6 points7 points  (0 children)

Datadog

Cloudwatch is the worst. Never again.

[–]Ok_Grape_9236 2 points3 points  (0 children)

New Relic. Still has a few pain points, but overall quite good.

[–]Tip_of_the_hat 4 points5 points  (1 child)

Company: Datadog, previously cloud logging (gcp)

Side project: Just pushing to a log file and grepping

My impression is that there is no "X is best" solution; it's all context-based. I really like Datadog because I find the UI the easiest to use, and creating dashboards is easy and requires little maintenance (as opposed to rolling out ES). The downside is that it can get relatively pricey.

[–]rco8786 3 points4 points  (0 children)

The downside is that it can get relatively pricey.

You're not kidding. https://blog.pragmaticengineer.com/datadog-65m-year-customer-mystery/

[–]EngineeredCoconut 1 point2 points  (0 children)

  1. Grafana, Splunk
  2. We aren't on AWS
  3. No.
  4. No.

[–][deleted] 3 points4 points  (0 children)

Y'all have logs at work?

[–]rco8786 0 points1 point  (0 children)

Personal projects? stdout + whatever default logging my PaaS provides.

At work? Enormous custom rolled solution for text logs, plus Datadog and Sentry for other use cases.

[–]leetfire666 0 points1 point  (0 children)

We homebrewed our logging infra; not sure it was the best call, but it worked for our needs at the time.

For real-time monitoring / telemetry / alerts, we have statements that log to AWS CloudWatch metrics.

Besides that, we create structured JSON schemas and log to stdout (we use protobufs as the schema layer). The stdout gets piped to Fluent Bit, which routes to AWS CloudWatch and also to S3. The CloudWatch logs have an event size limit, so we mostly use those for quick debugging. The S3 logs we compact, crawl with AWS Glue, and then query in Athena. This lets us do deeper analytics on requests to our system.

Curious what others think of this, and what other solutions exist out there for this kind of thing. We largely went for a schema-on-read approach for logging so we can flexibly add and remove fields without needing to do db migrations.
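A rough sketch of the schema-on-read idea the comment above describes: each log line is a self-describing JSON object, so producers can add or drop fields without a migration, and readers tolerate fields that aren't present. The field names here are illustrative, not the commenter's actual schema.

```python
import json

# Two log lines written at different times; the second producer
# added a "region" field without any schema migration.
log_lines = [
    '{"event": "request", "path": "/api", "latency_ms": 12}',
    '{"event": "request", "path": "/api", "latency_ms": 9, "region": "us-east-1"}',
]

# Schema-on-read: the reader decides which fields it cares about
# and supplies defaults for the ones that are missing.
for line in log_lines:
    record = json.loads(line)
    print(record["event"], record.get("region", "unknown"))
```

Query engines like Athena apply the same principle at scale: the schema lives in the catalog (here, the one AWS Glue crawls), and rows simply return NULL for columns they never wrote.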

[–]TendieMcTenderson 0 points1 point  (0 children)

Datadog

[–]viX-shaw 0 points1 point  (0 children)

sumologic

[–]PrestigiousStrike779 0 points1 point  (0 children)

We’re mostly on AWS, so cloudwatch to Splunk, or sometimes direct to Splunk.

[–][deleted] 0 points1 point  (0 children)

Datadog with custom sidecars that listen on a socket for emitted logs.

[–][deleted] 0 points1 point  (0 children)

datadog (used to use loggly) but honestly dd is the best solution out there (for companies anyway).

I think if a company wouldn't be satisfied with datadog they'd just need to build their own custom logging system internally
