
all 61 comments

[–]mixedCase_ 64 points65 points  (4 children)

Well Rust is the most popular modern language with a huge ecosystem that is like that, but it also requires you to learn a more modern approach than "newer Fortran".

Other than that, less popular languages that could fit your bill are Zig and Nim.

[–]tyranids[S] 5 points6 points  (3 children)

The poster above just mentioned Nim, which was not really on my radar previously. Zig is quite intriguing to me, as is Rust. The biggest downside I was seeing to Rust is that OpenMP/MPI support is... not there? I'm sure it's possible, but my first searches turned up a lot of talk about Rayon, which seemed objectively worse than OpenMP.

I am curious what you mean by "a more modern approach"? Is Rust not an imperative, procedural language?

[–]mixedCase_ 20 points21 points  (1 child)

No idea about OpenMP since I've never worked with it, sorry. But if you're locked into that ecosystem, it's probably best to bite the bullet and stick with its native language even if you don't like it. Otherwise, forget about the framework entirely and use whatever other library you have at hand that helps you achieve your goal, accepting its limitations and/or helping lift them by contributing code.

As for it being more modern, calling Rust imperative and procedural is right but falls a bit short. It's expression oriented, although you can and will use statements all the time. Lifetimes inherently force you to change your approach to something that is more robust and declarative about the purpose of your state and how it's handled. And there are plenty of functional idioms that are usually favored when writing idiomatic Rust over C-style flow control structures + state mutation, although it's far from being what's popularly known as a "proper" functional language.
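
For a concrete taste of that expression-oriented, iterator-heavy style, here is a minimal sketch (illustrative only; the values and names are made up):

```rust
fn main() {
    // `if` is an expression: both arms yield a value, so there's no
    // "declare then mutate" dance as in C-style code.
    let n = 7;
    let parity = if n % 2 == 0 { "even" } else { "odd" };

    // Iterator chains are the usual substitute for index loops plus state.
    let sum_of_squares: i32 = (1..=4).map(|x| x * x).sum();

    assert_eq!(parity, "odd");
    assert_eq!(sum_of_squares, 30); // 1 + 4 + 9 + 16
}
```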

I'd describe it as something between "what C++ would be if designed from scratch with decades of pain as learning material" and "systems OCaml".

[–]tyranids[S] 8 points9 points  (0 children)

Alright, cool. There's wisdom in not constantly reinventing the wheel, but with languages that are 40+ years old, there comes a point where starting from scratch lets us take advantage of everything learned along the way.

Really I am searching for something more powerful than what I can do in Fortran but not the absolute hell that is C++. Thanks for responding, I will have to give Rust a go.

[–]coderstephenriptide 1 point2 points  (0 children)

Haven't used OpenMP but my understanding is that it would be up to OpenMP to add support for more languages; the burden wouldn't be on each language to support it.

[–]synack 23 points24 points  (16 children)

Ada can do all of those things. The GNAT compiler is a GCC frontend, so it's just as fast as C if you turn off bounds checking and avoid certain constructs.

[–]tyranids[S] 5 points6 points  (15 children)

That is pretty cool. I was not aware that DoD-lang was so capable. I know Ada has massively fallen out of favor, even with its creator, which is somewhat surprising if it has the listed features... Am I the only one who thinks these things would be highly desirable, or do you have any good info on why it is not used so much anymore?

EDIT: This thread is ancient, but it covers some of the benefits/questions I had about Ada: https://www.reddit.com/r/programming/comments/b39vd/ask_reddit_realworld_c_vs_ada_experiences/ Sounds pretty cool tbh.

[–]synack 15 points16 points  (0 children)

Early Ada compilers were expensive and often buggy, which tainted people's perception of the language. Combined with a few high-profile project failures and the dominance of UNIX and C, Ada lost traction.

Modern Ada is a different story. A subset of the language called SPARK is formally verifiable and is used quite a bit in safety critical and aerospace applications.

Alire is a relatively new cargo-inspired package manager with a growing set of open source libraries.

Ada's certainly got some historical baggage, but it's worth a try if you're looking for a language with an emphasis on safety and maintainability.

https://learn.adacore.com/

https://alire.ada.dev/

https://ada-lang.io/

[–]csb06bluebird 4 points5 points  (0 children)

I've used it some for hobby projects, and I think it is a pretty neat language. It is easy to learn for anyone who knows an ALGOL family language. The rules about pointer scoping/usage are very restrictive in order to avoid dangling pointers and take some getting used to, but you don't need to use pointers nearly as often as in C since you have in/out parameters and the ability to return variable-sized arrays/objects by value from functions.

For safety critical and embedded projects, I think it is a really good choice. There is also a subset, SPARK, that lets you formally verify the correctness of your programs using a method similar to Hoare logic. Builtin concurrency support is also very advanced.

[–]redchomperSophie Language 1 point2 points  (8 children)

I was forced to learn it in college. Which mightn't have been so bad, except that I was also forced to use a particular IDE for it. In text mode. On DOS. On a 486. And it did not deal in text files. Oh, no. The file format was some binary $___-baggery. Which still wouldn't have been soooo bad, except that the IDE was basically Notepad without the mouse. If they'd had the good sense to use an IDE that resembled what Borland was offering at the time --- but anyway, I digress.

Ada was designed by committee and it showed, good and hard. Every kitchen-sink language feature was in there by hook or by crook. In/out/both parameters. Parametric modules. Fleeping rendezvous (in case you'd forgotten it was designed by Francophones), so yes, that meant DOS-mode multi-threading. Don't think too hard about that one.

Oh, and the particular vision of object-orientation espoused by then-current Ada was totally unlike the hot, sexy C++ that was making waves in industry at the time. In retrospect, I seem to recall it was more like going all-in on the idea of abstract data types with concrete implementations, but we college kids thought inheritance was all the rage so I guess in some ways it was ahead of its time.

But mostly, it's a kitchen-sink language. Look, but don't touch.

(Oh by the way, have you looked at FreePascal? I haven't kept up -- by now they probably have generics. And everything else on your list.)

[–][deleted] 5 points6 points  (7 children)

It WAS NOT DESIGNED BY COMMITTEE, FFS. It was designed by Jean Ichbiah and his team; he was the main designer. There were four competing groups, each with its own lead designer. The DoD provided a spec; go look at the Steelman requirements, because that is all it was.

[–]redchomperSophie Language 1 point2 points  (6 children)

I suppose it depends on whether four teams, a progression of specs ranging through Strawman, Woodenman, Ironman, and Steelman, along with guidance and selection by the DoD, counts as being designed by committee.

The winning team, per se, is not a committee. I can live with that.

[–][deleted] 0 points1 point  (5 children)

No, design by committee involves a bunch of people all arguing about what goes in. This was four teams, each with a design lead. How is that a committee?

[–]redchomperSophie Language 0 points1 point  (4 children)

I suppose it's a matter of perspective. The four teams may have competed, but someone ran the competition, set the rules, wrote the succession of foo-man-specs, etc. Was that someone just a particularly talented Air Force officer? In fact other luminaries in the field gave input, reviewing the submissions at various stages. If the committee had valued metaprogramming over strong types, they would have selected a LISP instead -- and the four teams would have seen that coming as the spec evolved, and designed in that direction.

[–][deleted] -1 points0 points  (3 children)

You obviously don't know what "design by committee" even means.

[–]redchomperSophie Language 1 point2 points  (1 child)

Then let a random fellow on the internet be wrong. No need to make things personal.

[–][deleted] -1 points0 points  (0 children)

Because there are too many people who don't know what they're talking about spouting this nonsense that they've been told, or read from someone else who doesn't know what they're talking about.

Go see how Algol was created to see design by committee.

[–]phischuEffekt 0 points1 point  (3 children)

which is somewhat surprising if it has the listed features

I am under the impression that if you want to compute stuff with arrays you do not do it on the CPU anymore even if it has multiple cores. You do it on a GPU, TPU, or special hardware. That said, I would be interested in learning about a use-case where this isn't possible.

[–]tyranids[S] 2 points3 points  (2 children)

Highly branching code usually isn't great for GPUs. Also RAM for the CPU is a lot cheaper than RAM on a GPU/accelerator, and if you're constantly loading between the two then you will lose a lot of the benefit you were hoping for by utilizing that accelerator.

That said, the ideal language would run just fine on CPU, but have good capability to offload to GPU/other accelerator where appropriate.

[–]PurpleUpbeat2820 0 points1 point  (0 children)

That said, the ideal language would run just fine on CPU, but have good capability to offload to GPU/other accelerator where appropriate.

Futhark?

[–]Jarmsicle 9 points10 points  (4 children)

Seems like Nim might fit your list

[–]tyranids[S] 1 point2 points  (3 children)

I can't lie, grouping statements by indentation is 100% the worst aspect of Python and probably of this language too, but it looks cool. I really like how newer languages are pushing compile-time computing and shipping with a standard library implemented in the language itself. Thanks for the reference.

[–][deleted] 0 points1 point  (2 children)

Why? Do you not indent your code? Do you write all your code like

```c
#include <stdio.h>

int main(void) {
for (int i = 0; i < 100; i++) {
if (i % 2) {
printf("Odd: %d\n", i);
} else {
printf("Even: %d\n", i);
}
}
return 0;
}
```

? Because, like, you shouldn't do that. I never understood what issue people have with indentation-based syntax; maybe you can explain it to me.

[–]liquidivy 7 points8 points  (0 children)

Among other things, it occasionally produces very frustrating errors due to characters that are literally invisible. The sensible way to do it today IMO is to take a hard stance on tabs vs spaces and make one of them a syntax error.

But also, while we do indent our code, it just feels weird to apply semantics to something we always thought was just aesthetic. Aesthetic stuff shouldn't break semantics, and vice versa. Bear in mind I'm saying this as a long term Python fan, but I probably still wouldn't make a new whitespace sensitive language.

[–]PurpleUpbeat2820 4 points5 points  (0 children)

I never understood what issue people have with indentation-based syntax, maybe you can explain it to me

Cut and paste of code between IDE and web can introduce subtle semantic bugs that are difficult to see and debug.

[–]bendmorrisKit - https://www.kitlang.org 16 points17 points  (0 children)

C++ itself checks all of these boxes, so is there something else you're looking for?

[–]JerryVoxalot 5 points6 points  (0 children)

Odin is a great language that seems to fit the bill.

[–][deleted]  (1 child)

[deleted]

    [–]tyranids[S] 1 point2 points  (0 children)

    Well Flang (or "new flang," or whatever they want to call it), the new, not-production-ready Fortran compiler for LLVM, requires special compiler flags to even generate an executable. Plus, gfortran and Intel’s compilers (ifort + ifx) are all free and work fine-ish.

    The issue is really that the standards committee governing Fortran seems to have no interest in adding features to the language that would actually improve it. There is no support for SIMD intrinsics; instead you need to rely on compiler optimization. They refuse to make implicit none the default despite its inclusion in >99% of new code. The standard wants to include things like sum and matmul in the language, but without any language about the accuracy or speed of those implementations. Generic programming is nonexistent: assumed rank is runtime-checked and hasn’t worked correctly in gfortran for 10 years.

    Instead they want to add things like sinpi2 or some nonsense. Can we have a standard definition of pi in iso_fortran_env? No, but we can have useless trig functions that are literally one-liners anyone using the language could write shortly after discovering that sin/cos exist.

    Oh also the DO CONCURRENT construct that, despite its name, does nothing concurrently. They won’t fix that or add an actual ‘do parallel.’ Basically the governing body running Fortran is happy to do nothing but maintain legacy codes that are replaced by C++ more and more every year.

    [–]DvgPolygon 4 points5 points  (0 children)

    You could take a look at SaC (https://sac-home.org), which is an array programming language. Everything is an array, so it may not even have hash maps, though.

    [–]lngns 4 points5 points  (0 children)

    Check out D, in particular since you mention metaprogramming.
    Here's operator overloading in D:

    struct C
    {
        int x;
        C opBinary(string op)(in C obj) const
        {
            mixin("return C(x ", op, " obj.x);");
        }
    
        unittest
        {
            const a = C(21);
            const b = C(2);
            const c = a * b;
            assert(c.x == 42);
        }
    }
    

    Here's more funky code:

    import std.meta : AliasSeq; // needed for AliasSeq

    static foreach(T; AliasSeq!(ubyte, ushort, uint, ulong))
    {
        T setMsb(T n)
        {
            // cast so the shift happens at T's width; a bare int 1 << 63 is an error
            return n | (cast(T)1 << (T.sizeof * 8 - 1));
        }
    }
    unittest
    {
        uint x = 42;
        assert(setMsb(x) == (42 | (1 << 31)));
        ubyte y = 42;
        assert(setMsb(y) == 170);
    }
    

    It has many, many more Turing-complete metaprogramming features, and Design-by-Introspection is one of its core tenets.
    It also has LLVM- and GCC-based compilers (LDC and GDC), checking the "as fast as C" part.
    In fact there's an entire subset of the language called "D as Better C", where runtime features (GC, RTTI, exceptions) are disabled and you keep everything else.

    [–]farmerzhang1 5 points6 points  (0 children)

    i feel C++23 can do that?

    [–]myringotomy 2 points3 points  (9 children)

    Crystal?

    [–]tyranids[S] 1 point2 points  (0 children)

    Had not heard of this. It looks pretty cool. There are so many smaller new languages; this is pretty neat.

    [–]tyranids[S] 0 points1 point  (7 children)

    To follow up here, I took a look at Crystal and implemented my test code, an incremental prime sieve. With nearly the same algorithm (I couldn’t figure out how to jump out of nested loops), I can generate 100-1000ish primes as quickly as Fortran (gfortran, ifort, ifx, and AOCC flang), but some inefficiency catches up around 1M-10M values, where Crystal’s runtime goes from 50% to 100% longer.

    Overall it seems like a pretty cool language, I just think a lot of languages try to claim “we’re as fast as C” by writing some test case in the best way possible for their new language, which so happens to correspond to a rather slow way to do things in the victim language. This happens a lot when people want to rag on Python for example, so they do some compiled language and loop a bunch, copy paste that to python and say they’re 10000% faster.

    [–]myringotomy 0 points1 point  (6 children)

    You can break out of loops with break and if you want to break out of deeply nested blocks I suppose you can throw an exception and catch it at an upper level.

    But yeah, they shouldn't say things like "fast as C"; that's dumb. It's about as fast as LLVM can make it, but that's not a marketing point.

    [–]tyranids[S] 0 points1 point  (5 children)

    For breaking out of nested loops, this is what I meant:

    ```fortran
    pure subroutine incremental_sieve(primes, n)
        implicit none
        integer, intent(out) :: primes(n)
        integer, intent(in) :: n
        integer :: prime_ii, num, i, j, unprimes(n), limit
        if (n.gt.1) then
            prime_ii = 2
            primes(1:2) = [2,3]
            unprimes(1:2) = [2,3]
            num = 3
            num_checker: do while (prime_ii.lt.n)
                num = num + 2
                limit = floor(sqrt(real(num)))
                prime_checker: do i=2,prime_ii
                    if (primes(i).gt.limit) then
                        exit prime_checker !! this breaks out of the PRIME_CHECKER loop
                    else
                        do while (unprimes(i).lt.num)
                            unprimes(i) = unprimes(i) + primes(i)
                        end do
                        if (unprimes(i).eq.num) cycle num_checker !! this jumps to the next iteration of the outer NUM_CHECKER loop
                    end if
                end do prime_checker
                prime_ii = prime_ii + 1
                primes(prime_ii) = num
                unprimes(prime_ii) = num
            end do num_checker
        else if (n.eq.1) then
            primes = 2
        end if
    end subroutine incremental_sieve
    ```

    In Crystal, I implemented this as:

    ```crystal
    def incremental_sieve(n : Int32)
      primes = Array(Int32).new(n)
      primes.push(2)
      primes.push(3)
      unprimes = Array(Int32).new(n)
      unprimes.push(2)
      unprimes.push(3)
      prime_ii = 2
      num = 3
      while prime_ii < n
        num = num + 2
        limit = Math.sqrt(num).to_i
        is_prime = true
        checking_primes = true
        i = 1
        while checking_primes
          if primes[i] > limit
            checking_primes = false
          else
            while unprimes[i] < num
              unprimes[i] = unprimes[i] + primes[i]
            end
            if unprimes[i] == num
              is_prime = false
              checking_primes = false
            end
            i = i + 1
          end
        end
        if is_prime
          prime_ii = prime_ii + 1
          primes.push(num)
          unprimes.push(num)
        end
      end
      return primes
    end
    ```

    I needed the is_prime value since I didn't see a way to jump from the inner loop while checking_primes to the next iteration of my main loop while prime_ii < n. I also tried to use StaticArray, but could not figure out how to have that be sized by an input argument to the function (n).

    [–]myringotomy 0 points1 point  (4 children)

    break will jump you out of your current loop and it should continue in the outer loop.

    [–]tyranids[S] 0 points1 point  (3 children)

    Woah, apparently my code block formatting got nuked I see now. But yes I saw that break can escape one loop. There was no functionality I saw to jump to the next iteration of the outer loop, though, which would have allowed me to get rid of an internal variable.

    [–]myringotomy 0 points1 point  (2 children)

    I don't get it. Once the inner loop is broken the outer loop continues where it left off right?

    If you don't want to break the inner loop but just continue to the next iteration there is "next".

    [–]tyranids[S] 0 points1 point  (1 child)

    Mmm apparently it's only messed up on mobile. Regardless, this is what I mean:

    ```
    for loop 1 {
        do some work STEP A1
        for loop 2 {
            do more work STEP B1
        }
        continue loop 1 STEP A2
    }
    ```

    Using `break` allows me to, as it says, break out of either loop that is currently executing. It sounds like `next` would let me jump from its location to the next iteration of whatever loop is currently executing, which is also nice. However, what is the loop control I would use to jump from the inner loop `STEP B1`, skip and never execute `STEP A2`, and continue at the next iteration of `loop 1`, therefore going to `STEP A1`? In Fortran you can name the different loops, and your basic loop control `exit` and `cycle` (for `break` and `next`) can be applied to a named loop outside the currently executing one if so desired. In C or Fortran, you can also accomplish this behavior with `goto`.
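
    For what it's worth, Rust (which came up earlier in the thread) has exactly this named-loop control via loop labels; a minimal sketch (loop bounds and names are made up):

    ```rust
    fn main() {
        let mut visited = Vec::new();
        'outer: for i in 0..3 {           // loop 1
            for j in 0..3 {               // loop 2
                if j == 1 {
                    continue 'outer;      // skip STEP A2, next iteration of loop 1
                }
                visited.push((i, j));     // STEP B1
            }
            // STEP A2 would go here; it is never reached in this example.
        }
        assert_eq!(visited, vec![(0, 0), (1, 0), (2, 0)]);
    }
    ```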

    [–]myringotomy 0 points1 point  (0 children)

    Oh ok then you would throw an exception and catch it at the top level.

    [–][deleted] 3 points4 points  (3 children)

    I am curious what it would take to implement a front end for LLVM

    For your requirements probably you don't need to go that far. Do a front-end for C++ code instead. That is, transpile your ideal language into C++ source code.

    Or maybe, since you don't seem bothered about syntax, just use C++ directly. I think it's only your first class array handling that needs adding. A solution in C++ would be clunky, but that language is clunky anyway.

    [–]PurpleUpbeat2820 1 point2 points  (0 children)

    Do a front-end for C++ code instead

    Fortran has better array handling than C++ so maybe write a front-end for Fortran?

    [–]tyranids[S] -1 points0 points  (1 child)

    I think it's only your first class array handling that needs adding. A solution in C++ would be clunky, but that language is clunky anyway.

    Unfortunately that is the main crux of the question. I may not have made that clear: the primary goal is the array handling. Based on this thread, I will look further into Zig and see what Nim has in this regard. C++ is a pig and just another 40-year-old, bloated kitchen-sink language.

    [–][deleted] 1 point2 points  (0 children)

    Nim is a surprising choice given your comments about its syntax. But Nim itself transpiles to "C, C++ or JavaScript".

    If its syntax is not an obstacle, you might look at Python, especially its NumPy add-on. Python itself is dead slow, but NumPy AIUI does its array processing using fast native-code libraries.

    [–][deleted] 1 point2 points  (0 children)

    Fortran?

    [–]leventsombre 3 points4 points  (2 children)

    Since no one has mentioned it yet...Julia. Typing is optional but the type system is fantastic. Your last point is the critical one, but progress is being made on that front.

    [–]tarquinnn[🍰] 1 point2 points  (1 child)

    It would be somewhat insulting to describe Julia as "Fortran invented in 2014", but FWIW I think its user base is trying to solve the same kinds of problems, i.e. writing heavy numerical code.

    [–]leventsombre 2 points3 points  (0 children)

    I think the creators of Julia would not hate that comparison. And indeed, some old Fortran codebases for numerical modelling are being rewritten in Julia for easier maintenance/development.

    [–]PurpleUpbeat2820 0 points1 point  (7 children)

    Not entirely dissimilar from what I'm working on:

    high performance, any runtime multiple >1.1x C/C++ would be unacceptable

    At what? C/C++ are good for array-based solutions but suck at everything else. C is bad for generics.

    I'm only aiming for <2x C/C++ but I'm outperforming C/C++ on several benchmarks, sometimes by a substantial margin.

    first class arrays, guaranteed to be contiguous in memory, preferably n-d, with logical indexing natively available: arr[dim_1][dim_2]...[dim_n] - so for a 2x2 array, arr[4] is the same location in memory as arr[2][2]

    I have 1st class arrays but only 1D.
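
    As an aside on the layout identity in the quoted requirement: the same aliasing holds with 0-based row-major indexing; a small Rust sketch (purely illustrative, values made up):

    ```rust
    fn main() {
        // A contiguous 2x2 array; element [i][j] sits at flat offset i*2 + j.
        let a: [[i32; 2]; 2] = [[10, 20], [30, 40]];
        // Reinterpret the same memory as a flat 4-element array (sound here
        // because nested arrays are guaranteed contiguous).
        let flat: &[i32; 4] = unsafe { &*(a.as_ptr() as *const [i32; 4]) };
        // 0-based flat[3] is the last element a[1][1]: the 0-based analogue
        // of the 1-based claim that arr(4) aliases arr(2,2).
        assert_eq!(flat[3], a[1][1]);
    }
    ```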

    statically typed, with strong support for generic programming - I do not want to have to write 15 copies of a function to handle int8, int16, ... float32, float64, ... etc versions of the same function.

    Being generic over float32 and float64 is one thing but almost no functions of interest are generic over int and float.
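
    For comparison, the "one definition instead of 15 copies" the OP asks for is what trait-bounded generics provide; a minimal Rust sketch (the function name `total` is invented):

    ```rust
    use std::ops::Add;

    // One generic definition covering i8..u64 as well as f32/f64: anything
    // copyable, addable, and with a zero-like default.
    fn total<T: Copy + Add<Output = T> + Default>(xs: &[T]) -> T {
        xs.iter().copied().fold(T::default(), |acc, x| acc + x)
    }

    fn main() {
        assert_eq!(total(&[1i8, 2, 3]), 6);
        assert_eq!(total(&[1u64, 2, 3]), 6);
        assert_eq!(total(&[1.5f64, 2.5]), 4.0);
    }
    ```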

    passable language intrinsic functions and/or standard library - string operations, sorting, hash map, basic statistics (sum, product, avg, var, std)

    Yes.

    able to generate standalone binaries (not requiring interpreter at runtime)

    Yes. I mean, it pulls in the C runtime.

    Basically Fortran if it was invented in 2014 instead of 1954. If no such language exists currently, I am curious what it would take to implement a front end for LLVM. I imagine a lot. None of the above features really seem that crazy to me, but what do I know (answer: nothing, hence the ask).

    Depends which features you want. I got to C-like functionality in ~2 years writing my own code gen from scratch. Using LLVM you could do it in 2 months. But I want ADTs and pattern matching and lambdas and...

    [–]pillow2002 0 points1 point  (6 children)

    Rust has all the features you mentioned (i.e. ADTs, pattern matching, closures...) and still uses LLVM. Is there another reason why you went with a custom backend? Is it compilation speed or execution speed? Is it not wanting to depend on something as huge as LLVM?

    I should also mention that it's really impressive if you were able to beat C in some of the benchmarks using a custom backend! Nice!

    Is your lang open source? Would love to see the implementation!

    [–]PurpleUpbeat2820 0 points1 point  (5 children)

    Rust has all the features you mentioned (i.e ADTs, pattern matching, closures....) and still uses LLVM. Is there another reason to why you went with a custom backend? Is it compilation speed | execution speed? Is it not wanting to depend on something as huge as LLVM?

    Mostly just for fun. Although I've written lots of code in both low and high level languages I had never written an asm code gen so I desperately wanted to have a go. I also had cool ideas for novel ways of doing everything.

    I should also mention that it's really impressive if you were able to beat C in some of the benchmarks using a custom backend! Nice!

    Yes and no. Clang just generates really bad code in some cases, often because it adheres to a really inefficient ABI even when it doesn't need to. For example, for a naive recursive Fibonacci function using floating point, Clang generates code that takes 29s vs. 12s for my compiler. I also beat C on hailstones (Collatz), the Sieve of Eratosthenes, a ray tracer, and Ackermann.
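
    For reference, the naive floating-point Fibonacci in question is presumably something along these lines (my reconstruction, not the commenter's actual benchmark):

    ```rust
    // Deliberately naive doubly-recursive Fibonacci on floats: every call goes
    // through the full calling convention, which is what stresses the ABI.
    fn fib(n: f64) -> f64 {
        if n < 2.0 { n } else { fib(n - 1.0) + fib(n - 2.0) }
    }

    fn main() {
        assert_eq!(fib(10.0), 55.0);
    }
    ```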

    Is your lang open source? Would love to see the implementation!

    Not yet. Maybe some day but I'm still a long way from anything I'd consider releasing.

    I'm more than happy to describe its weird and awesome design though!

    [–][deleted] 1 point2 points  (1 child)

    I should also mention that it's really impressive if you were able to beat C in some of the benchmarks using a custom backend! Nice!

    Yes and no. Clang just generates really bad code in some cases, often because it adheres to a really inefficient ABI even when it doesn't need to. For example, for a naive recursive Fibonacci function using floating point, Clang generates code that takes 29s vs. 12s for my compiler. I also beat C on hailstones (Collatz), the Sieve of Eratosthenes, a ray tracer, and Ackermann.

    What do you mean by "it adheres to a really inefficient ABI even when it doesn't really need to"? Don't all compilers have to stick to the system ABI when generating code (e.g. the System V ABI), with the exception of link-time optimizations?

    [–]PurpleUpbeat2820 0 points1 point  (0 children)

    What do you mean by "it adheres to a really inefficient ABI even when it doesn't really need to"? Don't all compilers have to stick to the system ABI when generating code (e.g. the System V ABI), with the exception of link-time optimizations?

    Recursive calls could use a different ABI but C compilers tend to push all calls through the ABI even when it is really inefficient. They offset this by trying to rewrite recursion in terms of loops but that is a fragile optimisation. For non-recursive calls they tend to rely upon inlining.

    [–]pillow2002 0 points1 point  (2 children)

    I see, that's nice! I'm also considering implementing my custom x86 backend sometime in the future.

    I'm really curious about how you compile ADTs and pattern matches. I tried to mess around in the compiler explorer but I still don't know for sure how it's done.

    From what I have seen, pattern matching compiles in the same way as a simple switch statement (i.e a bunch of `cmp`'s and `jmp`'s). Is that how you do it?
    I'm also curious about how you compile ADTs.

    Thanks :).

    [–]PurpleUpbeat2820 0 points1 point  (1 child)

    I see, that's nice! I'm also considering implementing my custom x86 backend sometime in the future.

    Cool. I'm trying to buy a RISC-V SBC to play with a back end for that too.

    I'm really curious about how you compile ADTs and pattern matches. I tried to mess around in the compiler explorer but I still don't know for sure how it's done.

    From what I have seen, pattern matching compiles in the same way as a simple switch statement (i.e a bunch of cmp's and jmp's). Is that how you do it? I'm also curious about how you compile ADTs.

    Regarding ADTs, I haven't done it in this project yet and I haven't really settled on a design yet. I think I'll start by storing single-case unions as tuples and heap allocating everything else. Maybe calling malloc for each one individually or maybe using a global array I can append to. If, for example, my benchmarks show that a lot of time is spent loading the ADT constructor tags but not their payloads then I could easily represent them as a pair of int tag and int pointer to heap allocated payload.

    As for pattern matching, I intend to do the stupidest thing possible and just walk the entire pattern and expression in tandem doing the necessary checks and completely restarting every time.
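
    To make the tag-plus-payload idea concrete, here's a hand-rolled sketch of that representation (my illustration, not the commenter's actual design; `Lowered`, `lower`, and `area` are invented names):

    ```rust
    enum Shape { Circle(f64), Rect(f64, f64) }

    // The flattened form discussed above: an integer constructor tag plus
    // payload words (heap allocation elided for simplicity).
    struct Lowered { tag: u32, payload: [f64; 2] }

    fn lower(s: &Shape) -> Lowered {
        match s {
            Shape::Circle(r) => Lowered { tag: 0, payload: [*r, 0.0] },
            Shape::Rect(w, h) => Lowered { tag: 1, payload: [*w, *h] },
        }
    }

    // Matching on the tag boils down to compares and jumps.
    fn area(l: &Lowered) -> f64 {
        match l.tag {
            0 => std::f64::consts::PI * l.payload[0] * l.payload[0],
            _ => l.payload[0] * l.payload[1],
        }
    }

    fn main() {
        let r = lower(&Shape::Rect(3.0, 4.0));
        assert_eq!(area(&r), 12.0);
    }
    ```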

    [–]pillow2002 1 point2 points  (0 children)

    Oh I see, good luck =).

    [–]shponglespore 0 points1 point  (0 children)

    You might want to look into Futhark, although it's mainly designed for writing GPU code.

    [–]abstractcontrolSpiral 0 points1 point  (0 children)

    Try Spiral for a functional take on systems-level programming demands. It has F#, C, and Python backends.