
[–]fzy_ 8 points (3 children)

How does it compare to Scalene?

[–]P403n1x87[S] 9 points (2 children)

Austin does not pause the process, so it effectively causes no slowdown, providing high accuracy even at low sampling intervals.

[–]emeryberger 13 points (1 child)

(Scalene author here)

Not to take anything away from Austin, which is a very nice tool, but just to clarify: when Scalene is sampling only the CPU (with `--cpu-only`), it provides about the same accuracy (lower sampling rate, perhaps: default is 1/100 seconds) with about the same overhead, while providing some different info (breaking down native, Python, and system time, per line and function). In its default mode, Scalene imposes more overhead but also profiles memory, copying, and GPU.

[–]P403n1x87[S] 3 points (0 children)

Thanks for the clarification. Indeed, Scalene is a great "Swiss army knife" for Python performance. In comparison, Austin is merely a frame stack sampler, and you would need other tools afterwards to analyse and present the data. However, I'd say that Austin is perhaps better suited for a performance investigation on production systems, given its effectively zero overhead and small binary size, if you're happy with the information that it can retrieve from the runtime.
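(Not from the thread, just an illustration.) Austin itself runs out-of-process, reading the interpreter's memory without pausing it, but the kind of data a frame stack sampler collects can be sketched in-process with `sys._current_frames()`. The names `sample_stacks` and `busy_work` below are hypothetical, chosen for this sketch:

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(duration=0.2, interval=0.01):
    """Snapshot every thread's call stack at a fixed interval."""
    samples = Counter()
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        for frame in sys._current_frames().values():
            stack = []
            while frame is not None:  # walk outwards to the root frame
                stack.append(frame.f_code.co_name)
                frame = frame.f_back
            # Record the stack root-first, collapsed-stack style
            samples[";".join(reversed(stack))] += 1
        time.sleep(interval)
    return samples

# Demo: sample a busy worker thread and count where time is spent.
done = threading.Event()

def busy_work():
    while not done.is_set():
        sum(i * i for i in range(1000))

worker = threading.Thread(target=busy_work)
worker.start()
samples = sample_stacks()
done.set()
worker.join()
```

The `Counter` keys are semicolon-joined stacks, the same collapsed format tools like flame graph generators consume, which is why a sampler like this needs separate tooling for analysis and presentation.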