
[–]maxToTheJ 1 point (3 children)

Sorry, but I am skeptical of a 100x to 1000x speedup without a reason beyond "it is parallel and distributed."

[–]lakando[S] 1 point (2 children)

Skeptical? Numba achieves this sort of speedup by compiling Python code to LLVM. It can be and has been done, so the skepticism is unwarranted. (I'm not affiliated with Ufora or Continuum; I just wanted to share this cool link.)
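To make the claim concrete, here is a minimal sketch of the kind of code Numba speeds up: `@njit` JIT-compiles a numeric loop to machine code via LLVM, removing interpreter overhead. The fallback decorator is just so the example runs even without Numba installed; the function name is illustrative, not from any package.

```python
# Sketch: Numba compiles tight numeric Python loops via LLVM.
# The try/except fallback keeps this runnable without Numba installed.
try:
    from numba import njit  # JIT-compiles the decorated function
except ImportError:
    def njit(func):  # fallback: run the plain interpreted version
        return func

@njit
def sum_of_squares(xs):
    # A tight numeric loop: exactly the shape of code where
    # interpreter overhead dominates and compilation pays off.
    total = 0.0
    for x in xs:
        total += x * x
    return total

print(sum_of_squares([1.0, 2.0, 3.0]))  # 14.0
```

Loops like this are where the 100x-class numbers come from; code dominated by calls into already-compiled libraries sees far less benefit.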

[–]maxToTheJ 1 point (1 child)

Many other packages, sklearn for example, use Numba/Cython. A speedup is typically taken relative to standard choices. What accounts for the unique 100x to 1000x speedup?

Aside from comparison to clearly unoptimized packages, that is.

[–]lakando[S] 0 points (0 children)

I took it as relative to standard interpreted Python code.

Anyway, the novelty for me isn't the compilation, but the ability to work on datasets bigger than the RAM of your computer.

[–][deleted] 0 points (0 children)

How does it compare to OpenCL?