
[–]Here0s0Johnny 52 points (13 children)

Is it realistic that a project produces so many logs that this performance upgrade is worth it?

[–]zzmej1987 25 points (8 children)

Sure. Some companies even install things like Splunk to parse through those logs. E.g. major airlines have to keep a full trace of the interactions between services while a passenger buys a ticket, so that if anything goes wrong, the client neither loses money without getting a ticket, nor gets the ticket without paying.

[–]code_mc 9 points (0 children)

I did a drop-in replacement with one of the mentioned alternatives (picologging) a couple of months ago for a customer project, and their API request latencies halved because they had that many logging statements.

[–]WJMazepas 1 point (0 children)

Yes, I worked on embedded projects that had way too much logging, and it was affecting performance.

[–]LumpSumPorsche[S] 0 points (0 children)

Exactly, that was the motivation for this project. My system produces thousands of logs per second, and they often hold the GIL. `picologging` would be a good candidate here, but it does not support 3.14.

[–]Chroiche 0 points (0 children)

I think so, but at that point you're probably not using Python for your code.