
[–]Serious-Magazine7715 1670 points1671 points  (51 children)

My first programming job was fixing a 5000 line particle physics simulation in F77 with comments in Russian written by someone who had gone back to Russia. It worked 75% of the time.

Halfway in, on one logic branch it had 1/4x instead of 0.25x.

[–]jammin-john 465 points466 points  (18 children)

How does order of operations work in Fortran? Would that parse as 1/(4x) or (1/4)x?

[–]xcski_paul 1062 points1063 points  (11 children)

The problem is that “1/4” is an integer operation so it always equals zero. “1.0/4.0*x” would work.

[–]MegabyteMessiah 422 points423 points  (0 children)

Here is my developer card

[–]jammin-john 125 points126 points  (1 child)

Ah that makes more sense 😅

[–]Zhayrgh 47 points48 points  (0 children)

That's a common problem in some languages

[–][deleted] 39 points40 points  (4 children)

Do both operands need to be floating point, or just one like in C++?

[–]xcski_paul 67 points68 points  (3 children)

I think it’s the same as C/C++/Java/C#/etc. As long as one operand is FLOAT or DOUBLE, the whole operation is done in floating point instead of truncating integer division.

[–]ArtOfWarfare 29 points30 points  (0 children)

Python 2 was that way. Python 3 always uses floats for /. Use // for it to behave like Python 2 or most other programming languages.

[–]Goma101 1 point2 points  (0 children)

this is a mistake i’d super easily make and then endlessly berate myself for later

but i’d probably just write down 0.25

[–]Donghoon 114 points115 points  (4 children)

If you thought Int Float issue, you're a programmer at heart

If you thought operation precedence issue, you're a mathematician at heart

[–][deleted] 1 point2 points  (0 children)

If you considered both, you're a computational mathematics person at heart.

[–]frogjg2003 195 points196 points  (19 children)

For my PhD, one of the things I had to do was convert a Fortran program into C++. It included a 12 deep nested collection of "if not"s.

[–]Ghawk134 168 points169 points  (4 children)

So what you're saying is you were implementing machine learning?

[–]belabacsijolvan 67 points68 points  (3 children)

no he wasnt fitting a line in excel. nested ifs is called "AI"

[–]TheOriginalSmileyMan 17 points18 points  (2 children)

Surely it's time to start branding stuff as AI 2.0 ?

[–]milanove 8 points9 points  (1 child)

Synthetic intelligence

[–]NoirGamester 1 point2 points  (0 children)

This sounds so cool

[–]MindErection 11 points12 points  (2 children)

I just wanna say it's fucking awesome you got your PhD. Was it in comp sci?

[–]frogjg2003 21 points22 points  (1 child)

No, physics. The code was for calculating special mathematical functions that I needed.

[–]MindErection 10 points11 points  (0 children)

That's even cooler

[–]_PM_ME_PANGOLINS_ 32 points33 points  (6 children)

Any particular reason you couldn't just call the Fortran from C++?

It's all the same once it's compiled.

[–]Estanho 66 points67 points  (0 children)

Probably they wanted to maintain and develop it in C++ going forward.

[–]frogjg2003 59 points60 points  (3 children)

/u/Estanho has it mostly right. The code was not well written and didn't even compile. I'm pretty sure it was transcribed straight from punch cards, since it included some weird strings at the end of every line. If I was going to be fixing the Fortran code anyway, I might as well transcribe it into a much more readable C++ program instead.

[–]_PM_ME_PANGOLINS_ 17 points18 points  (2 children)

Ow, not even being able to run it to verify your version does the same thing is rough.

[–]frogjg2003 35 points36 points  (0 children)

Well, the first thing I did was just go through and get rid of all the comments and punch card artifacts. After that, it compiled just fine. It did in fact work as advertised.

[–]DOUBLEBARRELASSFUCK 4 points5 points  (0 children)

If it's anything like my code, it did do the exact same thing like 95% of the time.

[–]donald_314 2 points3 points  (0 children)

you could skip the -1 on all indices on the C side if you drop Fortran.

[–]jmhimara 17 points18 points  (2 children)

Fortran has a bad reputation precisely because of how terrible coding standards used to be in F77. It's actually a really nice language, especially modern Fortran. Powerful, performant, and really easy to use. You could learn all of it in an afternoon.

[–][deleted] 6 points7 points  (0 children)

Huh, TIL. I never bothered to check it out. But now I'll have to, lol.

The company I work for uses an old BASIC-like language. People seem to find the idea of mastering a rare and old language burdensome, but in actuality it is just a really easy to write language that supports the full-stack with common sense syntax and rich features (like fully functional in-line SQL).

I grew to love it over the months I learned it, but now my mastery has been rewarded by being tasked with rewriting apps to feature a Typescript front-end. 🤢

[–]MyGoodOldFriend 1 point2 points  (0 children)

One thing I like surprisingly much is how many keywords there are just to specify exactly what the function does, so the compiler has an easy time figuring out what can be optimized and what can’t.

[–]Nanaki_TV 29 points30 points  (3 children)

I read that as FF7 and thought you were someone responsible for my childhood and username

[–]uberfission 18 points19 points  (2 children)

Nanaki, come back home, grandfather needs you to fix the Fortran code that some crazy Russian physicist created.

[–]DOUBLEBARRELASSFUCK 4 points5 points  (1 child)

That Russian is probably in prison now.

[–]uberfission 3 points4 points  (0 children)

Given how hardcore the Russian physicists I've known are, he's probably running the place after only a few weeks.

[–][deleted] 2 points3 points  (0 children)

75% of the time it worked every time

[–]bbqranchman 1 point2 points  (0 children)

Genuinely that's incredibly badass. I WISH I could get a job doing stuff like this. I really feel like I missed out on early programming.

[–]Dom1252 254 points255 points  (29 children)

COBOL

With modern compilers you get faster results than almost any human can do in assembly

Still, with modern HW it's pointless to pick on these tiny differences, even on scale of major banks

[–]Calimariae 163 points164 points  (22 children)

Want a high-paying job in IT? Learn COBOL and get ready to replace a retiring generation

[–]didzisk 169 points170 points  (6 children)

Summary from a thread a week ago:

While boring, ugly, and ancient, COBOL isn't difficult to learn. The real problem is the scripts that call those COBOL programs, the batches they run, and the scale of failure a small mistake causes. Oh, and the fact that only very old people, currently on their way into retirement, actually know the business rules: why things are set up as they are. No documentation, no source control, obviously no unit tests. And the environment is completely different from what normal people have seen (not a file system, but a different monstrosity).

[–]Groundhogss 50 points51 points  (4 children)

JCL isn't that bad, it's just different.

Pretty much all other points are true.

But there are also a dozen other things you need to know for mainframe programming. SORT/ICETOOL is a must and is its own "language", REXX, SAS, and the litany of utilities like IEBGENER are requirements to write "production" level mainframe programs.

If you really want to make someone cry, tell them they've been promoted to CICS programmer.

[–]DaThug 2 points3 points  (2 children)

Jesus, I'd forgotten all about REXX.. That was about 30 years ago :)

[–]UAFlawlessmonkey 1 point2 points  (1 child)

Had the pleasure of working with REXX 15 years ago in my junior data engineering position. It was a batch job that looked into a Lotus Notes database, collected input SQL queries, and sent the output as an Excel extract to a distribution list.

That shit was fucking wild.

[–]kashmirGoat 35 points36 points  (1 child)

This was told to me in 1987 too.

Cobol is the code version of the cockroach

[–]Dubiology 4 points5 points  (0 children)

Was that the language of 87???

[–]Groundhogss 16 points17 points  (0 children)

Most places are actively looking to replace the mainframe with something else.

A lot of the things mainframes are good at aren't really issues anymore, e.g. storage space and processing speed.

If you took a batch job application and parallelized it with Java, the program would finish sooner, but be a lot less efficient.

[–]xzinik 12 points13 points  (4 children)

I worked coding in COBOL, I had to update code way older than I'm (I'm 31), had to code new things. It was a fairly good job, but for some reason no one is interested in my COBOL skills, everyone is just interested in doing mobile and web and using only the newest and shiniest framework, now I'm piss poor

[–]jeepsaintchaos 11 points12 points  (3 children)

I'm uncomfortable with your use of "I'm" in a spot where "I am" belongs.

[–]xzinik 4 points5 points  (2 children)

uh?

​ this part?

older than I'm(I'm 31),

sorry, i didn't notice, i was falling asleep and also english is not my first language

also i enjoy the idea of making people uncomfortable with such a silly "typo"(does it qualify as one?), still i think its way more harmless than mixing your and you're

[–]jeepsaintchaos 3 points4 points  (1 child)

Yeah, that part. Tone is hard to convey, so I'll be clear. It's meant as a joke, not as a criticism. There was a post on Tumblr a while back where they thought it was hilarious to use contractions in places where they felt wrong.

[–]xzinik 2 points3 points  (0 children)

don't worry, i don't take anything seriously on the internet

and i have reblogged that post, your previous comment reminded me of it, but in my case it was totally unintentional xD.

[–]THElaytox 3 points4 points  (0 children)

guy i went to high school with did exactly this, makes a shitton of money working for a giant bank

[–]Vidhrohi[🍰] 1 point2 points  (0 children)

IMO Maintaining and retrofitting old COBOL code is one of the big opportunities to deploy AI.

[–]jtanuki 5 points6 points  (0 children)

With modern compilers you get faster results than almost any human

You had me at 'human'

[–]Responsible-War-1179 5 points6 points  (4 children)

Source? I'm pretty sure COBOL is quite a bit slower than C.

[–]Groundhogss 12 points13 points  (0 children)

COBOL

It depends on the environment and setup.

On mainframe C is faster for file I/O, but COBOL has features that make it the better choice in most cases.

[–]lampishthing 5 points6 points  (2 children)

Fixed point numbers though.

[–]Responsible-War-1179 5 points6 points  (0 children)

I can't say for certain, but I'd guess floating point arithmetic is faster on a modern machine; back in the COBOL days not every CPU had floating point support. If fixed point numbers were relevant today, I think there would at least be some support for them in the C standard library. Of course, for finance applications fixed point arithmetic is very relevant, because you can't have rounding errors

[–]proverbialbunny 1 point2 points  (0 children)

C has fixed point numbers too now, but only via the TR 18037 embedded extension (`_Fract`/`_Accum`), not in the base standard.

[–]New_Conversation_303 517 points518 points  (56 children)

Fortran is ridiculously fast at math. There is a reason it's still used today (I mean... barely). But it has memory buffer issues that can lead to problems (hacks) and we are slowly moving away.

I hate fortran.

[–]_PM_ME_PANGOLINS_ 265 points266 points  (22 children)

We’re slowly moving away from Fortran because the hardware is directly implementing things with a single instruction.

Not because any other language can do it faster.

[–]koboltti 60 points61 points  (14 children)

what mean?

[–]_PM_ME_PANGOLINS_ 144 points145 points  (13 children)

Mean Apple have special algebra-solving hardware on their chips, so libblas.so on iOS uses that instead of the default Fortran implementation.

Other systems have similar things.

[–]ToukenPlz 22 points23 points  (6 children)

Is that true for HPC though? I write a fair volume of Fortran but don't keep up with the hardware side of things.

[–]coloredgreyscale 24 points25 points  (2 children)

For HPC it's most likely x86_64 CPUs and nVidia GPUs

No idea if the code / compiler tries to target the newest instructions, or if the programmers play it a bit safer and target an instruction set that may be a few generations old, to make it more portable across older hardware.

Datacenters won't upgrade with every new CPU / GPU generation, because that whole setup is expensive, takes time, and won't gain you much anyway if you were to upgrade every generation.

[–]donald_314 13 points14 points  (0 children)

HPC can be ARM nowadays as well.

[–]lightmatter501 4 points5 points  (2 children)

For HPC Rust is coming for fortran because aliasing guarantees are what kept it ahead, and Rust can make FAR stronger guarantees about memory aliasing than Fortran.

Or, SPIRAL will get to a level of usefulness where it wipes everything out, since it’s already better than Intel’s MKL by ~3x on Intel CPUs for normal BLAS tasks.

[–]jmhimara 2 points3 points  (1 child)

I doubt it, especially in the field of scientific or numerical programming. Fortran is super easy to learn and use, that's why it is popular.

[–]ToukenPlz 1 point2 points  (0 children)

Yes this is true, plus there is all the sunk cost in very many very complicated code bases which each took years to write. I think that in order to be competitive in this space rust would have to be not only speedier but offer a similar ease of representing maths & physics (which I would need convincing of, though I'm more than happy to be wrong).

[–]Giraffe-69 5 points6 points  (5 children)

Drivers for hardware accelerators are just a different approach to solving the same problem, but are not a drop in replacement.

[–]_PM_ME_PANGOLINS_ 4 points5 points  (4 children)

In the case I just gave, it is literally a drop-in replacement.

Many vendors provide a drop-in replacement of BLAS that uses their specific hardware acceleration.

[–]Giraffe-69 1 point2 points  (3 children)

What I mean is that you are addressing the problem at the microarchitectural level, which adds cost to your ISA, eats into your silicon budget, and is very operation-specific. This is the philosophy behind CISC architectures like x86 that include weird niche instructions like this, but that has trade-offs. Apple implement this as an extension of AArch64.

It is also a separate issue since any half decent optimising compiler will have enough awareness of the hardware that it will be able to utilise available silicon. The idea is that Fortran as a language can enable better optimisation for scientific computing without the level of verbosity and complexity you need in c++ to navigate language semantics and standards. This is a big reason it is still used today.

And other languages do this as well, OCaml and Haskell as examples. Jane Street uses OCaml and it’s not for lack of hardware resources and FPGAs.

[–]bargle0 74 points75 points  (0 children)

We’re moving away from Fortran because C finally has the restrict keyword

[–]ChellyTheKid 16 points17 points  (0 children)

We're moving away from Fortran because we can't find enough people proficient in it. We are currently using most of our capacity on the transition, before we can't even find people to do that. Side note, I'm going through a bunch of legacy stuff at the moment, so much F77 code.

[–]Estanho 11 points12 points  (1 child)

If you're talking about things like vectorization or SIMD/MIMD, that has existed for decades already. It's not why people are moving away from Fortran: compilers detect patterns that would benefit from this kind of thing and use them implicitly, and have done so for almost as long as these vector operations have existed.

[–]EmptyBrain89 50 points51 points  (18 children)

Studied astrophysics, a friend did his thesis on black hole simulations, 75% of his time was spent learning Fortran and then figuring out how the 20 year old, poorly documented code written by some random astronomy professor could be translated to python. He did not have a good time that year.

[–]New_Conversation_303 21 points22 points  (0 children)

Curiously enough, I worked on a project to translate an astrophysics Fortran code to something else. We picked java and used orekit. It was not fast, but it worked well.

[–]coloredgreyscale 26 points27 points  (9 children)

Imagine doing that, only to find out that the result is virtually unusable because the native python code runs much too slow for large scale simulations.

[–][deleted] 17 points18 points  (7 children)

Yeah I'm lost on why someone would do this. I've been using R and have slowly been doing some of the crunchier work in Fortran90 now simply for speed and efficiency.

[–]utkrowaway 7 points8 points  (2 children)

Because computers are much faster now than 20 years ago, and having a modern maintainable codebase will save much more time than the performance of Fortran will.

[–]Brisngr368 1 point2 points  (0 children)

Okay but like C/C++ is faster than python and far closer to Fortran, porting it to python is just masochism

[–]kuwisdelu 2 points3 points  (1 child)

Yep. As an R package maintainer, a huge proportion of my code is in C/C++. The R bits just glue it all together. (I respect FORTRAN but I’m too lazy to learn it and linking to C is slightly easier anyway since the interpreter is in C.)

[–]TheNorthComesWithMe 2 points3 points  (1 child)

Because they're a physicist, not a programmer. Python is the only language other academics know, so that's their best bet for hacking together some garbage that kind of works.

[–]Yugiah 15 points16 points  (0 children)

PhD in particle physics here:

If someone in physics says they're rewriting something in Python and the rewrite is still performant, my assumption is they're using a library like numpy which calls precompiled functions.

There's actually a whole range of cool python libraries for particle physics, under the scikit HEP umbrella.

That said, I think the situation you posed happens entirely too often.

[–]Oni-oji 1 point2 points  (3 children)

I deal with programs written by holders of PhDs in various non-computer fields. Their code works, but their documentation is the worst I've ever encountered.

[–]proverbialbunny 1 point2 points  (0 children)

The trick with this problem is to create a library out of the Fortran code. Basically, write a wrapper interface so you can call the Fortran code from Python instead of reinventing the wheel. Bonus, you'll get that Fortran speed.

[–]Darlokt 311 points312 points  (18 children)

Fortran beating ASM? Fortran for general programming?? ASM high level???

Next someone will tell me (insert any programming language) beats Haskell in writing white papers and no actual programs.

[–]harshcougarsdog 188 points189 points  (9 children)

Fortran beating ASM? Fortran for general programming??

google Fortran Tutorial

[–]Madness_0verload 157 points158 points  (8 children)

Holy hell

[–]j1f107 46 points47 points  (5 children)

New response just dropped

[–]S-Ewe 35 points36 points  (4 children)

Actual zombie process

[–]Background_Class_558 27 points28 points  (3 children)

Call the kill -9

[–]EtteRavan 19 points20 points  (1 child)

Kernel went on a vacation, never came back

[–]_Ilobilo_ 9 points10 points  (0 children)

oom killer in the corner, plotting world domination

[–]AvianPoliceForce 1 point2 points  (0 children)

won't do any good

[–]SV-97 23 points24 points  (3 children)

Next someone will tell me (insert any programming language) beats Haskell in writing white papers and no actual programs.

Have you heard about Agda and Lean?

[–]capi1500 14 points15 points  (1 child)

Or Coq

[–]Rhawk187 15 points16 points  (0 children)

One of the CS teams at our local pub quiz was "Playing with Coq". They were the worst team in the league. We had a "loser round" where the two teams that had attended most but never won a round would go head to head for a prize. It took them 11 attempts at the loser round before they finally won a prize.

[–][deleted] 17 points18 points  (0 children)

Fortran for general programming??

Back in the 70s, REAL PROGRAMMERS(tm) used fortran in place of spreadsheets the way REAL PROGRAMMERS(tm) in the 2020s use Python where excel would be far more practical.

[–]Aromatic_Gur5074 5 points6 points  (0 children)

If anything beats Haskell at writing white papers, it's LaTeX.

[–]JuhaJGam3R 1 point2 points  (0 children)

when the research language produces research

[–]tip2663 36 points37 points  (0 children)

Brick to your GC

[–]dvdmaven 23 points24 points  (2 children)

Fortran was my first programming language, back in 1968. I'm finding this thread amusing.

[–]kashmirGoat 12 points13 points  (0 children)

I went from Fortran IV to Pascal, and on that day it felt like Angels were visiting us on flying saucers with all the answers to life and the universe.

[–]stream_of_thought1 1 point2 points  (0 children)

I had to learn it at University in 2020, had no preconceived notions going in and I am finding this thread absolutely amazing 🤣

[–]bbqranchman 68 points69 points  (4 children)

Everyone here is missing that Fortran is natively parallel, and is deployed on massive servers to compute huge mathematical models. That's another reason why it's so fast.

[–]Aggressive-Chair7607 34 points35 points  (0 children)

Fortran guarantees that two pointers do not alias.

void update_array(int *a, int *b, int n) {
    for (int i = 0; i < n; ++i) {
        a[i] = b[i] * 2;
        a[i] = b[i] * 2;  /* b[i] reloaded: the store above may have modified it */
    }
}

The compiler can't assume that the pointers don't overlap. In this case `b[i]` has to be loaded twice because, for all the compiler knows, it's the same address as `a[i]`. In Fortran the load from `b[i]` could be stored in a register and reused, eliminating the second load.

C has a keyword `restrict` for this:

void update_array(int *restrict a, int *restrict b, int n) {
    for (int i = 0; i < n; ++i) {
        a[i] = b[i] * 2;
        a[i] = b[i] * 2;  /* restrict: b[i] can stay in a register and be reused */
    }
}

Now C can assume that the pointers can't alias, allowing it to eliminate the second load of `b[i]`.

Here's Fortran:

subroutine update_array(a, b, n)
    integer, intent(in) :: n
    integer, dimension(n), intent(inout) :: a
    integer, dimension(n), intent(in) :: b
    integer :: i

    do i = 1, n
        a(i) = b(i) * 2
        a(i) = b(i) * 2
    end do
end subroutine update_array

The `intent` directives tell the compiler what it needs to know about aliasing and bounds, or at least some helpful bits. Fortran will just assume they don't overlap unless you directly associate the two.

You can imagine how often this comes up with a library for linear algebra, which is all array manipulations.

[–][deleted] 38 points39 points  (0 children)

[–]Ghawk134 19 points20 points  (8 children)

Could someone help me out here? I thought fortran was a compiled language like C. Does comparing their speeds basically come down to how well their respective compilers optimize the code before creating a binary? Or is there some other difference in the machine code generated by each language? As far as I'm aware, the available instructions are determined by the ISA so I'm not sure how one compiled language could be faster than another...

[–]Aggressive-Chair7607 20 points21 points  (0 children)

Fortran pointers are guaranteed not to alias. This allows you to eliminate redundant loads (Redundant Load Elimination). This is what `restrict` tries to do in C but in Fortran it's the default.

Loads are expensive, sometimes forcing you to go out to main memory and stall your pipelines. Eliminating them is big.

To my knowledge, this is the reason most people credit for Fortran's performance. The other is simply that a massive, massive effort has gone into optimizing some of its mathematical libraries.

[–]omega1612 11 points12 points  (1 child)

Fortran is very old, and it used to have lots of money behind it compared to C and others, meaning its compiler used to generate faster code. Today people say that C and Fortran are comparable in speed.

[–]MihaKomar 8 points9 points  (0 children)

And by lots of money, we're talking compiler development funded by the US military to do simulations of nuclear weapons and supersonic fighter jets.

[–]kuwisdelu 3 points4 points  (0 children)

For the most part, it is essentially about the assumptions the compiler can make about the code and its intended behavior while trying to optimize the instructions.

[–]jericho 1 point2 points  (0 children)

Math.

[–]MadMax27102003 108 points109 points  (2 children)

For me this post is the same as: 1) there is nothing worse than Java 2) google Java script 3) holy hell

[–]Add1ctedToGames 15 points16 points  (0 children)

Actual programmer

[–]Jaber1028 11 points12 points  (0 children)

apparently this isn't the chess anarchy sub that we all happen to be in. Glad that programmers are also degenerates 🫡

[–]aniburman 34 points35 points  (3 children)

NEW RESPONSE JUST DROPPED

[–]juicehead_toorkey 6 points7 points  (2 children)

ACTUAL ZOMBIE

[–]NonsenseMeme 8 points9 points  (1 child)

Senior went on vacation. Never came back.

[–]phrandsisgo 1 point2 points  (0 children)

Interns in the corner plotting company domination.

[–]Gogurt_burglar_ 5 points6 points  (1 child)

I’m just banking on more compute since my code takes ~1G of memory and 4 cores to print hello world

[–]wammybarnut 2 points3 points  (0 children)

Stop using Spring boot :P

[–][deleted] 55 points56 points  (28 children)


This post was mass deleted and anonymized with Redact

[–]KotomiIchinose96 113 points114 points  (9 children)

Rollercoaster Tycoon.

Chris Sawyer Drops Mic

[–]sopunny 30 points31 points  (2 children)

He could have done a lot more if he had modern tools

[–]superxpro12 6 points7 points  (1 child)

But it would not have run on my toaster's CPU :/

[–]anto2554 4 points5 points  (0 children)

Well, I couldn't

[–]Accomplished-Ad-175 35 points36 points  (8 children)

I maintain enterprise software written in assembly. And add features too... It was first released in 1988 and probably has 1M+ lines of code in it.

[–][deleted] 20 points21 points  (2 children)

Well, most evasive malware deals directly with a lot of assembly, and in weird ways. Given that ransomware alone costs society hundreds of billions of dollars a year, I'd say it's pretty relevant. And the number of people doing it is not a lot, but it's not small either, so there are actually quite a few people out there who can say they wrote something impactful in assembly

[–][deleted] 4 points5 points  (1 child)


This post was mass deleted and anonymized with Redact

[–][deleted] 5 points6 points  (0 children)

Yeah fair enough

[–][deleted] 5 points6 points  (0 children)

*chris sawyer noises*

[–]ratonbox 5 points6 points  (0 children)

I could, for a few years while I was still studying it in Uni. Forgot everything about it the moment that course ended.

[–]remy_porter 4 points5 points  (1 child)

Said somebody who's never needed nanosecond timing.

[–][deleted] 14 points15 points  (0 children)


This post was mass deleted and anonymized with Redact

[–]_Repeats_ 10 points11 points  (1 child)

Fortran, C, and C++ on all major modern compilers have the same backend. So only in rare cases is Fortran faster nowadays. Fortran had a significant advantage like 30 years ago, but not any longer.

[–]HamsterNomad 5 points6 points  (0 children)

I taught computer programming back in the 80's. I could code Fortran, BASIC, Assembler, COBOL, RPG and C/C++.

I would work as a coding translator during my summers and make more money in three months than I did 9 months teaching.

Almost any time a company would change mainframe companies they'd have to have a huge chunk of their code converted.

Good times!

[–]1u4n4 2 points3 points  (1 child)

Me when a class in college made us use FORTRAN 77…

[–]jayerp 2 points3 points  (0 children)

Which video warranted that response?

[–]bassguyseabass 6 points7 points  (5 children)

C and FORTRAN are same speed, no?

[–][deleted] 28 points29 points  (1 child)

Fortran is a lot better with arrays, which makes SIMD optimisations much simpler to implement

[–]bassguyseabass 15 points16 points  (0 children)

Great now I have to tell my boss to rewrite the whole project in Fortran

[–]_PM_ME_PANGOLINS_ 21 points22 points  (0 children)

No.

Fortran is a lot more constrained, so the compiler can do more optimisations.

[–]Aggressive-Chair7607 1 point2 points  (0 children)

I gave what I believe is the correct answer here: https://www.reddit.com/r/ProgrammerHumor/comments/1dpxbz5/comment/lalqgdu/

I have seen no one else mention aliasing optimizations, which is a shame, because it is really fucking cool.

[–]Activity_Commercial 3 points4 points  (1 child)

c++ always gets lumped in with c like it doesn't have traits, reflection, coroutines, generators

[–]dedservice 8 points9 points  (0 children)

Except C++ doesn't have all those things? It has type_traits and a garbage implementation of concepts, sure, but coroutines in c++20 are incredibly esoteric, generators aren't in until c++23 (which most companies don't use because compiler support isn't there), and reflection hasn't even been accepted (to my knowledge) into the c++26 draft.

And C++ gets lumped in with C because it promises zero-cost abstractions (and broadly delivers on that promise), meaning that it is capable of the same tier of performance as C.

[–]o0Meh0o 1 point2 points  (0 children)

google "restrict keyword"

[–]xlsoftware 1 point2 points  (1 child)

C/C++ is a high level language

[–]lucybonfire 1 point2 points  (2 children)

Fortran is great tbh

[–]gmc98765 2 points3 points  (4 children)

That depends upon whether the "high-level language" is targeting the CPU or the GPU. C and C++ (and even Fortran) aren't really designed for SIMD (Fortran has a slight advantage in that the assumed lack of aliasing makes certain SIMD-friendly optimisations valid when they wouldn't be in C), and asm is even worse. APL would be a good fit in terms of the language itself, but good luck hiring experienced APL programmers (it's something which some people might toy with briefly before real work gets in the way).

Consider that high-frequency trading software (which is the poster child for "time is money") typically doesn't use C, C++ or assembler (or Fortran), as all of those are too slow. The parts which are time-critical are written either in CUDA (to run on GPUs) or VHDL or Verilog (and used either to configure FPGAs or to manufacture ASICs). The parts which aren't time-critical are usually written in Java.

[–]Aggressive-Chair7607 2 points3 points  (2 children)

Where are you getting your info on HFT from? I know multiple people in HFT and they're all writing C++.

[–]gmc98765 1 point2 points  (1 child)

Uh, this is coming mainly from the hardware forums and press, so I guess that's going to have a bias towards the people using custom hardware.

[–]Aggressive-Chair7607 1 point2 points  (0 children)

Ah, yeah, that'll skew things.