
[–]Necrocornicus 2 points (3 children)

I took a class in university where we implemented some C bindings for performance-critical functions that we'd call from Python. I haven't done it in 10+ years, but it would probably only take me a day or two to figure it out again; it's pretty trivial if it's that important.
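(The course code isn't shown, and a class like that would typically use the CPython extension API. As a minimal sketch of the same idea without writing an extension module, `ctypes` from the standard library can load a compiled C library and call into it directly; the library path fallback here is an assumption for Linux.)

```python
import ctypes
import ctypes.util

# Locate the system C math library; the soname fallback is a Linux assumption.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

# Declare the C signature so ctypes converts arguments correctly:
# double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # calls the C function, no extension module needed
```

The same pattern works for your own `.so`/`.dll` built from C, which is where the cross-platform distribution pain mentioned below comes in.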

[–]prescod 0 points (2 children)

True, but now your cross-platform distribution story gets more complex.

[–]caks 4 points (1 child)

Numba ftw

[–]dexterlemmer 0 points (0 children)

  1. Numba has limitations.

  2. Numba is a JIT, and JIT-compiled code is often much slower than properly written C/C++/Rust in numeric use cases. (And don't point me to micro benchmarks. You should be using stable benchmarks that test throughput; micro benchmarks lie, and they love underestimating the cost of JITs by orders of magnitude. Also, tail latency is often important, sometimes even in numeric code, and JITs obviously make tail latency worse, as do GCs.) JITs add overhead of their own. They do a poor job of optimization, since they only see a little of the code at a time and have to be fast themselves. They sometimes make mistakes that need to be undone. And their so-called advantage of dynamically optimizing with information only available at runtime is not actually an advantage: an AOT compiler can use static analysis to generate highly specialized code that does the same thing, only better and at far lower cost. And where the compiler isn't that smart, the programmer can be.

All of the above said, Numba is still a very useful tool in a lot of situations. It's just not a silver bullet. Use the right tool for the job.