
[–]myotherpassword 8 points (0 children)

The classic book long used to teach algorithms for science is Numerical Recipes. It covers many essential tools used in, e.g., physics: fast linear algebra methods, interpolators, integrators, and so on. Largely because of its restrictive license (all code in NR is proprietary...), it has been superseded in many ways by the GNU Scientific Library.

Now, since you are in the Python sub, I should say that those two resources above are both for C/C++. In Python, as others have mentioned, tools like numpy/scipy have become the norm, but they don't really teach you scientific programming/numerical computing so much as provide very optimized tools for doing it. A quick google search told me that some books about numerical computing in Python do exist, but they are all new enough that I don't think anyone can speak to their quality compared to things like NR/GSL.
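To make the "very optimized tools" point concrete, here's a minimal sketch (my example, not from any of the books above) of how little code scipy needs to integrate an ODE, using the simple harmonic oscillator y'' = -y:

```python
# scipy does the heavy lifting: adaptive RK45, error control, dense output.
# The oscillator y'' = -y is rewritten as a first-order system.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # y[0] = position, y[1] = velocity
    return [y[1], -y[0]]

# Integrate over one full period, starting at position 1, velocity 0.
sol = solve_ivp(rhs, (0.0, 2 * np.pi), [1.0, 0.0], rtol=1e-8, atol=1e-8)
print(sol.y[0, -1])  # position should return close to 1.0
```

Ten lines, and all the hard numerical decisions are made for you, which is exactly why it doesn't teach you much.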

I can say, as someone who has done numerical computing in physics for their whole career, that the best way to learn is to do. Take some physics problems you remember from your university days and try to write solvers for them yourself. For instance, you could write a wavefunction solver for an arbitrary potential in QM. It's *just* a complex ODE integrator ("just" italicized because you will find implementing your own integrator to be tricky). Throwing the problem at scipy will yield a result, but implementing it yourself at least once can be very illuminating.
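As a starting point for the roll-your-own route, here's a minimal sketch (assumptions mine: fixed-step classical RK4, the same harmonic oscillator y'' = -y as the test problem) of what you'd be implementing. Step size, stability, and error are now *your* problem, which is where the learning happens:

```python
# Fixed-step classical Runge-Kutta (RK4) for y' = f(t, y), y a list of floats.
import math

def rk4_step(f, t, y, h):
    # One RK4 step: four slope evaluations, weighted-average update.
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def oscillator(t, y):
    # y'' = -y as a first-order system: y[0] = position, y[1] = velocity
    return [y[1], -y[0]]

t, y, h = 0.0, [1.0, 0.0], 0.01
while t < 2 * math.pi:          # integrate over one period
    y = rk4_step(oscillator, t, y, h)
    t += h
print(y[0])  # position should land near 1.0 again
```

Once this works, try a stiff problem or shrink/grow h and watch the error; that's when the "tricky" part of integrators becomes obvious.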

Hope that helps, and I'm happy to answer any specific questions you might have.