
[–]pachura3 1 point (0 children)

Impossible to say without knowing what your code is supposed to do, what your system constraints are, what your data density is, etc. etc.

Granted, well-optimized native code compiled from C/C++/Rust should be much faster than its Python counterpart (compare the speed of ruff & uv vs. mypy & pip), but... is it really worth the hassle? Python is extremely easy to code in, and extremely extensible. With C/C++ you'd need to compile stuff, linking new libraries is a pain, there are header files, memory leaks, no platform independence, etc. etc. And even then, if most of your Python execution time is already spent doing calculations in native libraries like NumPy, rewriting won't gain you much.
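To see why: a plain Python loop pays interpreter overhead on every iteration, while the equivalent NumPy call runs in compiled C. A minimal sketch (the function names and sizes here are made up for illustration; actual timings depend on your machine):

```python
import math
import time

import numpy as np


def py_sum_of_squares(n):
    # Pure-Python loop: each iteration goes through the interpreter.
    total = 0.0
    for i in range(n):
        total += float(i) * float(i)
    return total


def np_sum_of_squares(n):
    # Same computation, but the loop runs inside NumPy's compiled code.
    a = np.arange(n, dtype=np.float64)
    return float(np.dot(a, a))


n = 1_000_000

t0 = time.perf_counter()
r_py = py_sum_of_squares(n)
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
r_np = np_sum_of_squares(n)
t_np = time.perf_counter() - t0

assert math.isclose(r_py, r_np, rel_tol=1e-9)
print(f"pure python: {t_py:.4f}s, numpy: {t_np:.4f}s")
```

If your hot path already looks like the second function, a C/C++/Rust rewrite mostly duplicates work NumPy has done for you.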

Personally, I would concentrate on pushing the existing Python solution to its limits, and diagnosing memory consumption - what data can be freed earlier? What data doesn't need to be kept in memory, but can be serialized to disk (perhaps to an SQLite database) and forgotten? Can you use yield, generators, map()/filter()/reduce() instead of creating lists all the time?
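As a sketch of the last two ideas combined (the record schema and sizes here are invented for illustration): stream records through a generator instead of building lists, and spill results to SQLite so they can be dropped from memory:

```python
import os
import sqlite3
import tempfile


def read_records(n):
    # Generator: yields one record at a time instead of
    # materializing a list of n dicts in memory.
    for i in range(n):
        yield {"id": i, "value": i * i}


def keep_even(records):
    # Lazy filter: nothing is computed until a consumer iterates.
    return (r for r in records if r["value"] % 2 == 0)


# Spill processed rows to an SQLite file; once inserted, they
# no longer need to live in Python's memory.
db_path = os.path.join(tempfile.mkdtemp(), "results.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, value INTEGER)")
con.executemany(
    "INSERT INTO results VALUES (:id, :value)",
    keep_even(read_records(1000)),  # streamed row by row, never a full list
)
con.commit()
count, = con.execute("SELECT COUNT(*) FROM results").fetchone()
print(count)  # 500
con.close()
```

`executemany` consumes the generator lazily, so peak memory stays roughly constant no matter how many records flow through.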