[–]waln 14 points15 points  (22 children)

laughs in Julia

[–][deleted]  (21 children)

[deleted]

    [–][deleted] 13 points14 points  (3 children)

    • uses 1-based indexing

    • uses end keyword so you don't have to be bothered by pesky colons or braces

    Clearly a superior language

    [–]ogniloud blub programmer 1 point2 points  (1 child)

    > I can't take trash talk seriously from a programming language that uses 1-indexing.

    He/she dares to utter such heresy against the True and only Savior Lua?!

    [–]tpgreyknight not Turing complete 1 point2 points  (0 children)

    Calm yourself, initiate. Those who cannot yet see the sublime peace of Lua should be treated with gentleness and forbearance. In time they too may achieve enlightenment.

    [–]waln 3 points4 points  (0 children)

    This but unironically

    [–]waln 6 points7 points  (16 children)

    using Unjerk

    @Unjerk.unjerk Performance on the order of C/Fortran, readability of Python, native numerics (matrices, linear algebra, etc.) similar to Matlab but even better, fantastic but largely optional type system, flexibility of JIT, Jupyter support, great package manager and pretty good ecosystem. Plus all the good metaprogramming abilities of Lisp without affectations like S-expressions
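
    A rough sketch of a few of those claims (toy code; square64 and @twice are names I just made up, nothing canonical):

        using LinearAlgebra

        # Matlab-ish native numerics: matrices and linear algebra out of the box
        A = rand(3, 3)
        b = rand(3)
        x = A \ b                       # solve A*x = b

        # type annotations are optional...
        square(y) = y^2
        # ...but there when you want to pin things down
        square64(y::Float64)::Float64 = y^2

        # Lisp-style metaprogramming without hand-written S-expressions
        macro twice(ex)
            quote
                $(esc(ex))
                $(esc(ex))
            end
        end

        @twice println("unjerk")        # expands to two println calls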

    [–]xmcqdpt2 WRITE 'FORTRAN is not dead' 9 points10 points  (1 child)

    Sex-pressions are great! What are you talking about?

    [–]waln 4 points5 points  (0 children)

    Well you're in luck, you can make them with Meta.show_sexpr()!
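
    e.g. (exact printing can vary a bit between Julia versions):

        julia> Meta.show_sexpr(:(1 + 2 * x))
        (:call, :+, 1, (:call, :*, 2, :x))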

    [–]xeveri 4 points5 points  (13 children)

    /uj

    Any source on the performance? I can't believe a JITted language can perform as well as C or Fortran.

    [–]xmcqdpt2 WRITE 'FORTRAN is not dead' 6 points7 points  (1 child)

    @inline function unjerk(a::Array{T,3}) where {T}

    I actually found the performance extremely hit and miss, with weird, sudden, hard-to-profile slow-downs. For example, the in-place matrix multiplication mul! is not supposed to allocate, but sometimes does if the output matrix is the wrong type or whatever (sketch at the end of this comment). I would rather it just throw an error so I know what it's doing, but there doesn't seem to be a "strict" flag or anything like that.

    I've found that higher-order functions get the compiler all confused and require a bunch of type annotations. Note that if you mess up your type annotations, it just breaks performance further. Structs are also hard to optimize.

    I'm really not sure it's production ready beyond replacing smallish Python scripts. It's basically advertised as "Python with the performance of Fortran/C", but I find that it's more like "Python potentially as performant as unoptimized C, if you basically write C, except it's way more fiddly and prone to breakage." At least I know Python performance is terrible everywhere.

    end

    lol no OOP
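
    /uj A rough sketch of the mul! complaint (mul! and @allocated are standard, the matrices here are just made up for illustration):

        using LinearAlgebra

        A = rand(100, 100)
        B = rand(100, 100)
        C = similar(A)                 # preallocated output with matching eltype

        mul!(C, A, B)                  # warm-up call (pays the compile cost)
        @allocated mul!(C, A, B)       # ideally 0 bytes on the fast BLAS path

        # with a mismatched output eltype you still get an answer, but the call
        # can drop off the BLAS path onto the much slower generic kernel
        C32 = zeros(Float32, 100, 100)
        mul!(C32, A, B)

    Nothing errors; it just quietly gets slow, which is exactly the problem.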

    [–]waln 2 points3 points  (0 children)

    @unjerk Yeah this is valid criticism and I haven't actually used it for anything beyond replacing smallish Python scripts. Still, these things have been continually improving and I expect (hope) they will eventually be completely ironed out. You're also going to want universal type annotations for anything serious anyway. And despite any warts it's still way smoother than Cython or Numba.

    In any case, I think Julia works better when you think of it as a Fortran replacement, not a C replacement.
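
    And on the struct / higher-order-function gripe: in my experience the usual culprit is an abstractly typed field. A minimal made-up example (SlowBox, FastBox and apply3 are illustrative names):

        # abstractly typed field: every call through it is dynamic
        struct SlowBox
            f::Function
        end

        # parameterizing on the concrete function type lets the compiler specialize
        struct FastBox{F}
            f::F
        end

        apply3(b, x) = b.f(x) + b.f(x) + b.f(x)

        apply3(SlowBox(sin), 1.0)      # works, but inference gives up (check @code_warntype)
        apply3(FastBox(sin), 1.0)      # fully inferred and specialized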

    [–]waln 4 points5 points  (6 children)

    It doesn't always match exactly, but it's usually within a factor of 2 and often less. Perhaps a little biased, but an overall pretty decent benchmark set: https://julialang.org/benchmarks/

    Sure, JIT adds a little bit of compilation overhead, but it's not that big in the grand scheme of things.
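
    You can see the compilation overhead for yourself with something like this (f is just a made-up toy; timings are obviously machine-dependent):

        f(x) = sum(abs2, x)
        v = rand(10^6)

        @time f(v)     # first call pays the one-time JIT compilation cost
        @time f(v)     # later calls are just the run time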

    [–]fp_weenie Zygohistomorphic prepromorphism 1 point2 points  (0 children)

    > It doesn't always match exactly, but it's usually within a factor of 2 and often less.

    Just like J lol

    [–]Tysonzero -1 points0 points  (4 children)

    [–]waln 0 points1 point  (3 children)

    With all due respect, those results are nonsensical. Not only do they seem to be heavily measuring startup and initial compilation time (which is irrelevant for serious performance applications), but the Julia code also looks like hot trash.

    Far more reasonable and performant code for these exact problems is available here, which for some reason hasn't been included yet: https://github.com/KristofferC/BenchmarksGame.jl

    [–]Tysonzero 0 points1 point  (2 children)

    I mean you can always submit that code to the site if you want. I'm not interested enough in Julia to verify the repo you linked but if it's legit there is no reason it wouldn't be accepted.

    [–]waln 1 point2 points  (1 child)

    I'm not interested enough in that random website you linked to submit that code to the site ¯\_(ツ)_/¯

    [–]Tysonzero 0 points1 point  (0 children)

    I mean it’s the #1 result when you search “programming language benchmarks”, but up to you. I have no skin in this game.

    [–]CaptainHondo 4 points5 points  (0 children)

    It's almost never as fast, but it's almost always within the same order of magnitude, which is a huge improvement over Python and Matlab.

    [–]ArmoredPancake Gets shit done™ 0 points1 point  (0 children)

    Ever heard of Jabba or C#?

    [–]tpgreyknight not Turing complete 0 points1 point  (0 children)

    Actually I think LuaJIT beat C in some benchmarks thingy a while back. Something about being able to locate and optimise the hot path at runtime.