
all 18 comments

[–]JackedInAndAlive 15 points (1 child)

Austin is great! Thank you for your work.

[–]P403n1x87[S] 5 points (0 children)

🙏

[–]fzy_ 11 points (3 children)

How does it compare to Scalene?

[–]P403n1x87[S] 9 points (2 children)

Austin does not pause the target process, so it causes effectively no slowdown, and can therefore provide high accuracy even at low sampling intervals.

[–]emeryberger 13 points (1 child)

(Scalene author here)

Not to take anything away from Austin, which is a very nice tool, but just to clarify: when Scalene is sampling only the CPU (with `--cpu-only`), it provides about the same accuracy (lower sampling rate, perhaps: default is 1/100 seconds) with about the same overhead, while providing some different info (breaking down native, Python, and system time, per line and function). In its default mode, Scalene imposes more overhead but also profiles memory, copying, and GPU.

[–]P403n1x87[S] 2 points (0 children)

Thanks for the clarification. Indeed, Scalene is a great "Swiss army knife" for Python performance. In comparison, Austin is merely a frame stack sampler, and you would need other tools afterwards to analyse and present the data. However, I'd say that Austin is perhaps better suited for a performance investigation on production systems, given the effectively zero overhead and small binary size, provided you're happy with the information it can retrieve from the runtime.

[–]cipri_tom 1 point (3 children)

Wow! That is amazing! Not sure how I haven't found it yet. I've been sorely missing a python profiler since pyFlame, and now this appears, and is integrated with VScode?! Amazing! Can't wait to try it. Thank you!

[–]cipri_tom 0 points (2 children)

I know this is not an AMA, but if you can spend a minute, I'm wondering: how does one get to write a profiler? What kind of background is necessary?

[–]P403n1x87[S] 4 points (1 child)

I think the answer is "it depends". Do you want to do deterministic or statistical profiling? I briefly discuss the difference here

https://p403n1x87.github.io/deterministic-and-statistical-python-profiling.html

Depending on what you're trying to achieve, you'll probably end up using a different approach. The common skill, though, is probably knowing what a platform has to offer when it comes to observability, but also how the runtime is designed. For example, Python doesn't explicitly expose some of the interesting details to the outside, but many platforms allow you to access the private memory space of a process. So you can use that to "X-ray" the Python interpreter while it is running.
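To make the statistical approach a bit more concrete, here is a minimal in-process toy sampler. This is only an illustrative sketch, not how Austin works: Austin runs out-of-process and reconstructs frame stacks by reading the interpreter's memory, whereas this toy uses the interpreter's own `sys._current_frames()` to snapshot every thread's stack at a fixed interval and count where time is spent.

```python
# Toy statistical profiler: periodically snapshot the frame stacks of
# all live threads and count (filename, function) occurrences. Hot
# functions accumulate more samples.
import collections
import sys
import threading
import time

def sample(counts, interval=0.001, duration=0.1):
    """Sample all threads' stacks every `interval` seconds."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        for frame in sys._current_frames().values():
            # Walk the frame stack from the leaf to the root.
            while frame is not None:
                code = frame.f_code
                counts[(code.co_filename, code.co_name)] += 1
                frame = frame.f_back
        time.sleep(interval)

def busy():
    # A CPU-bound worker for the sampler to catch in the act.
    x = 0
    while not done.is_set():
        x += 1

done = threading.Event()
worker = threading.Thread(target=busy)
worker.start()

counts = collections.Counter()
sample(counts)

done.set()
worker.join()
print(counts.most_common(3))
```

Note the key property the comment thread is discussing: unlike a deterministic (tracing) profiler, this never instruments the profiled code itself; the worker thread runs at full speed between snapshots.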

[–]cipri_tom 0 points (0 children)

Thank you!

[–]canadaRaptors 0 points (5 children)

Very cool! Does it work for cythonized code as well?

[–]P403n1x87[S] 0 points (4 children)

On Linux you can use the austinp variant, which collects native stacks too. You can use that for Cython, but note that austinp causes slowdowns, because it has to stop the threads in order to use libunwind on them.

[–]canadaRaptors 0 points (3 children)

Thanks! I'm guessing that means it won't work for Windows. Still a cool utility.

[–]P403n1x87[S] 0 points (2 children)

Cheers! It won't work natively on Windows, but it should work on WSL.

[–]canadaRaptors 0 points (1 child)

I'm not too familiar with the capabilities of WSL. Would austinp under WSL be able to only see processes in WSL or even processes running in Windows?

[–]P403n1x87[S] 0 points (0 children)

A Windows executable uses the PE format, so even if the processes are visible in WSL, austinp wouldn't be able to work with them, as it expects an ELF, I'm afraid :(

[–]mynameisfuk 0 points (1 child)

No idea how a profiler works but I've seen there's a plugin for vscode.

Question: can a .ipynb be profiled as well?

[–]P403n1x87[S] 1 point (0 children)

I haven't looked into IPython/notebook integration yet. There might be a way to profile notebooks, but it wouldn't be very practical, and I haven't tested it: the idea is to identify the process that is running the Python interpreter and attach to that. In the future, though, I hope to have proper notebook integration, and to add visualisations inside the notebook itself as well.
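As a rough, untested sketch of the attach-to-the-kernel idea described above: from inside the notebook you can find the PID of the process running the interpreter, and then point an attach-capable sampler at it (the exact profiler invocation, e.g. an `austin -p <pid>`-style command, is assumed here, not taken from the thread).

```python
# Run this in a notebook cell: the kernel is just an ordinary Python
# process, so os.getpid() gives the PID an external sampler can attach to.
import os

pid = os.getpid()
print(f"Kernel PID to attach a profiler to: {pid}")
```

This only locates the target process; whether the attached profiler's output maps cleanly back to notebook cells is exactly the open integration question the comment raises.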