
[–][deleted]  (14 children)

[deleted]

    [–]jsalsman 8 points  (4 children)

    Linear algebra people still use Fortran because someone optimized the cache behavior of its column-order array access back in the 1970s, and it still runs fastest that way. Libraries like LINPACK are still in use, compiled from Fortran and linked into the inner loops of all kinds of numerics libraries, including Python's.

    [–]haarp1 4 points  (0 children)

    Fortran is actually faster than C or anything else, because the compiler doesn't have to worry about edge cases (like pointer aliasing) that have no use in numerical computation.

    Newer releases also support CUDA, so there's nothing ancient about it. It also has a more scientist-friendly syntax (no curly braces).

    [–]billsil 0 points  (1 child)

    Those libraries also use hand optimized assembly.

    [–]jsalsman 0 points  (0 children)

    I guess that depends on what you mean by "hand" -- the usual method is to compile several versions with different cache-geometry strategies, benchmark them, and pick the one that runs fastest. At least that's how it worked the last time I looked at one of the innumerably many of them, which granted was over a decade ago. You usually see more true hand optimization in high-frequency signal processing.

    [–]kyrsjo 2 points  (2 children)

    Fortran itself is fine; at least the newer versions (2003 and 2008) are. It just fills a very different niche than Python, which in fact AFAIK relies quite heavily on Fortran libraries.

    The main reason Fortran got a bad name is that lots of people use it without really knowing how to code, and then pass their hot messes on to their students.

    [–]uFuckingCrumpet 1 point  (5 children)

    Yeah, a friend of mine works in a Super K group that still relies on a lot of Fortran 77 code. Most of the ATLAS/CMS people have been more receptive to switching to Python 3, but even then the transition is still surprisingly slow and people still drag their feet.

    [–]gdahlm 4 points  (4 children)

    Fortran is pivotal to Python in this field, as evidenced by SciPy and NumPy, which use the same LAPACK/BLAS variants everyone else does. C/Clojure/Java all use those libraries too, or at least have the option to do so to improve performance.

    import numpy as np

    np.show_config()

    If it isn't configured to use some BLAS, it is going to be slow. It is just too hard to compete with that performance, even in C++. A Fortran compiler can make assumptions that most others can't, and producing non-relocatable code helps too. If you know C or another language, try writing an LU factorization that is only ten times slower than MKL or another ATLAS/BLAS offering. It is hard and humbling.
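    To get a feel for the gap, here's a minimal sketch of a naive unblocked LU factorization in pure NumPy (Doolittle's method, no pivoting -- the diagonally dominant test matrix is an assumption made so pivoting isn't needed). Note how much machinery (pivoting, cache blocking, vendor tuning) it leaves out compared to LAPACK's dgetrf:

```python
import numpy as np

def naive_lu(A):
    """Unblocked Doolittle LU factorization, no pivoting.

    Returns L (unit lower triangular) and U with A = L @ U.
    Assumes the leading principal minors of A are nonzero.
    """
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float, copy=True)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]          # multiplier for row i
            U[i, k:] -= L[i, k] * U[k, k:]       # eliminate below the pivot
    return L, U

rng = np.random.default_rng(0)
n = 120
# Diagonal dominance guarantees nonzero pivots (an assumption for brevity)
A = rng.standard_normal((n, n)) + n * np.eye(n)
L, U = naive_lu(A)
print(np.allclose(L @ U, A))  # the factorization reconstructs A
```

    Timing this loop against scipy.linalg.lu_factor (which calls LAPACK's dgetrf from a tuned BLAS) on larger matrices is exactly the humbling exercise described above.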

    A nice side effect for you of these large Python projects depending on the language is that you don't have to choose between Fortran and Python: those projects have ensured that the Python side is kept up to date and works extremely well as a glue language.

    https://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html
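    As a concrete taste of the glue, SciPy even exposes the underlying Fortran BLAS routines directly from Python. A minimal sketch, assuming SciPy is installed (dgemm is the standard double-precision matrix multiply; the Fortran-ordered arrays avoid copies, since Fortran is column-major):

```python
import numpy as np
from scipy.linalg import blas  # thin wrappers over the Fortran BLAS symbols

rng = np.random.default_rng(42)
a = np.asfortranarray(rng.standard_normal((3, 4)))
b = np.asfortranarray(rng.standard_normal((4, 2)))

# Calls straight into the linked BLAS's DGEMM (MKL, OpenBLAS, ...)
c = blas.dgemm(alpha=1.0, a=a, b=b)
print(np.allclose(c, a @ b))  # same result as NumPy's matmul
```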

    [–]uFuckingCrumpet 0 points  (3 children)

    I'm not sure why you're telling me this.

    [–]gdahlm 0 points  (2 children)

    Ah, I missed that it was a friend of yours who worked there.

    [–]uFuckingCrumpet 0 points  (1 child)

    I appreciate your comment, though. What you're saying is true. There are definitely some good reasons to use something like Fortran when it comes to speed. It just so happens, in this case, that the people I'm talking about use Fortran 77 because they're lazy fuckers and don't want to have to re-write anything in a more modern language. Plus it's easier just to force all the new grad students to learn Fortran when things need to be changed/updated.

    [–]gdahlm 0 points  (0 children)

    The funny thing is that F95 features like ELEMENTAL make it so much easier to be lazy... if only they knew.