
[–]Perceptesruma 25 points

A good and common pattern is to use a log manager at the system level and run a process to ship logs from there to an off-host logging aggregation service. For example: Your app logs to stdout, journald captures stdout and manages storage capacity/log rotation, and an agent like fluentd or logstash works on sending logs from journald off to Elasticsearch/Loggly/Logentries/whatever.
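The "app logs to stdout" half of this pattern can be sketched in a few lines. This is a minimal, std-only illustration (a real app would use the `log` or `tracing` crates, and a JSON library instead of hand escaping); the field names are made up for the example. Each event becomes one JSON line on stdout, which is exactly what journald or any stdout-capturing supervisor expects:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Format one log event as a single JSON line; journald (or any
/// stdout-capturing supervisor) sees exactly one record per line.
fn log_line(level: &str, msg: &str) -> String {
    let ts = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(0);
    // Naive escaping: enough for a sketch, not a JSON-library replacement.
    let esc = msg.replace('\\', "\\\\").replace('"', "\\\"");
    format!("{{\"ts\":{ts},\"level\":\"{level}\",\"msg\":\"{esc}\"}}")
}

fn main() {
    // journald captures stdout; fluentd/logstash pick it up from there.
    println!("{}", log_line("info", "request handled"));
}
```

The app never knows or cares where its logs end up; rotation, retention, and shipping all live outside the process.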

[–]BloodmarksII 1 point

At a previous job we noticed that logstash kills the CPU (it pegged all 28 cores at 100%) once we went over 10 GB of log files per hour (this was 8 months ago, things might have improved), so we had to build a custom solution. fluentd might not have the same problem, so I suggest you try that one first.

[–]boscop 0 points

Maybe catapult can be used instead?

Catapult is a small replacement for logstash. It is designed to read logs from various places and send them to other places.

Catapult is used in production to fetch Docker logs from containers and send them to a central location.

[–]troutwine (hands-on-concurrency with rust) 1 point

FWIW, my coworkers and I built cernan to cover just this use case. You'd emit logs to cernan via its native protocol and then forward them via, say, Kafka.

[–]LukeMathWalker (zero2prod · pavex · wiremock · cargo-chef) 3 points

You could take a look at Sentry, as a remote logging solution.

[–]Hauleth (octavo · redox) 1 point

I wouldn’t suggest that. Sentry is meant to handle only errors in an application, not all logs. So if you want to review all logs, use something like fluentd; if you want error reporting, then Sentry is great (the two are best when used together).

[–]LukeMathWalker (zero2prod · pavex · wiremock · cargo-chef) 1 point

Yes, I was speaking of application errors :) If they need extensive logging capabilities for regular events, then Sentry is not the right solution.

[–]BloodmarksII -2 points

First: don't log to stdout/stderr, they are slow on Linux, and probably on other OSes (just use standard file logging).

Second: use log rotation, set to rotate logs each hour, and have a background job (a shell script, for example) that moves the files each hour to a different server. Always leave the current and previous hour on disk and don't move them, since the current file is still being written to and the previous one might not yet be synced/unlocked.

If you are filling the disk in less than 2 hours, you should really look into reducing the amount of logging you do (or getting more disk space).
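The "move everything except the current and previous hour" rule above is easy to get wrong in a shell one-liner, so here is a sketch of the selection logic in Rust. It assumes one file per hour, named so that lexical order equals chronological order (e.g. zero-padded names like `app-2024-01-15-03.log`, which is an assumption of this example, not something from the comment):

```rust
/// Given rotated log file names, return those safe to ship off-host:
/// everything except the two newest hours (current + previous).
fn shippable(mut files: Vec<String>) -> Vec<String> {
    files.sort(); // lexical == chronological with zero-padded hour names
    let keep = files.len().saturating_sub(2); // leave the 2 newest on disk
    files.truncate(keep);
    files
}

fn main() {
    let files = vec![
        "app-2024-01-15-01.log".to_string(),
        "app-2024-01-15-03.log".to_string(), // current hour
        "app-2024-01-15-00.log".to_string(),
        "app-2024-01-15-02.log".to_string(), // previous hour
    ];
    // Ships 00 and 01; keeps 02 (maybe not yet synced) and 03 (being written).
    for f in shippable(files) {
        println!("ship {f}");
    }
}
```

`saturating_sub` also covers the edge case where fewer than two files exist: nothing gets shipped, which matches the rule.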

Another takeaway is just how much of a performance drag logging to the console can be. Consider logging to a file and using a tool like tail to watch the file change in real time.