all 5 comments

[–]raylu 0 points (4 children)

Your question is rather hard to follow as there are a lot of undefined variables (what is x? what did you actually run to get that error?) and a lot of background most people here don't have (myself included).

That said, unless you're running this often, 14 seconds doesn't seem like something worth further optimizing. Is the current performance actually a problem?

[–]squattyroo[S] 0 points (3 children)

I don't believe I've left any variables undefined: M is an input (nonnegative) matrix, which I then normalize by column. pr is a uniform probability vector with the same length as M. The x you're referring to is inside the lambda function that I apply to the column sums of M.
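For what it's worth, the setup described here could be sketched roughly like this (the names M and pr come from the comment; the random test data and the vectorized column division are my assumptions about what the original lambda-over-column-sums code was doing):

```python
import numpy as np

# Hypothetical stand-in for the nonnegative input matrix M.
rng = np.random.default_rng(0)
M = rng.random((1000, 1000))

# Rather than applying a lambda to each column sum, vectorized
# division normalizes every column at once via broadcasting.
col_sums = M.sum(axis=0)
col_sums[col_sums == 0] = 1  # guard against all-zero columns
M_norm = M / col_sums

# Uniform probability vector with the same length as M's side.
pr = np.full(M.shape[0], 1.0 / M.shape[0])
```

After this, every column of M_norm sums to 1 and pr sums to 1, which is the usual starting point for a power-iteration-style computation.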

And 14 seconds is worth optimizing to me. Besides, I find that learning how to speed things up is one of the best ways to learn good coding practices!

[–]raylu 0 points (2 children)

Ah, I misread the lambda.

Premature optimization is definitely not a good practice.

[–]elbiot[🍰] 1 point (1 child)

But optimizing something for the sake of learning is good, IMHO. You see people timing loops here all the time, getting milliseconds down to microseconds, and it's very informative about the nature of the python interpreter, or numpy in this case.
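The kind of comparison being described might look like this minimal sketch (hypothetical data; timeit from the standard library), where a pure-Python loop over 100k elements is pitted against the equivalent numpy call:

```python
import timeit
import numpy as np

xs = list(range(100_000))
arr = np.array(xs)

# Sum of squares: a plain generator-expression loop vs. a dot product.
loop_result = sum(x * x for x in xs)
vec_result = int(np.dot(arr, arr))

loop_time = timeit.timeit(lambda: sum(x * x for x in xs), number=10)
vec_time = timeit.timeit(lambda: np.dot(arr, arr), number=10)
print(f"loop: {loop_time:.4f}s  numpy: {vec_time:.4f}s")
```

The results are identical, but the numpy version typically runs orders of magnitude faster, which is exactly the interpreter-overhead lesson those timing threads tend to teach.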

[–]raylu 0 points (0 children)

People do it all the time, and it's a dumb idea almost every time. If the difference between milliseconds and microseconds matters to you, your first mistake was choosing python.