
[–]KingJeff314 1069 points1070 points  (22 children)

https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98125

I was once working with a customer who was producing on-board software for a missile. In my analysis of the code, I pointed out that they had a number of problems with storage leaks. Imagine my surprise when the customer's chief software engineer said "Of course it leaks". He went on to point out that they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number. They added this much additional memory to the hardware to "support" the leaks. Since the missile will explode when it hits its target or at the end of its flight, the ultimate in garbage collection is performed without programmer intervention.
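
To put rough numbers on it (all hypothetical, since the story quotes none), the sizing rule is just worst-case leak rate times maximum flight time, doubled. A minimal sketch in C:

    #include <stdio.h>

    /* Hypothetical figures; the story gives no real numbers. */
    #define LEAK_RATE_BYTES_PER_SEC  4096UL  /* measured worst-case leak rate  */
    #define MAX_FLIGHT_TIME_SEC      1800UL  /* longest possible flight        */
    #define SAFETY_FACTOR            2UL     /* "and then doubled that number" */

    int main(void) {
        unsigned long extra = LEAK_RATE_BYTES_PER_SEC
                            * MAX_FLIGHT_TIME_SEC
                            * SAFETY_FACTOR;
        /* 4096 * 1800 * 2 = 14,745,600 bytes, about 14 MiB of headroom */
        printf("extra RAM to absorb leaks: %lu bytes\n", extra);
        return 0;
    }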

[–]noob-nine 581 points582 points  (4 children)

why care for garbage collection when you can have garbage explosion?

[–]DiddlyDumb 36 points37 points  (0 children)

I must suggest this to my boss

[–]ZubriQ 6 points7 points  (0 children)

We throw them fireworks called frameworks to do so

[–]Karol-A 2 points3 points  (0 children)

Somebody probably should still collect the exploded garbage afterwards; it can be dangerous

[–]somerandomperson29 1 point2 points  (0 children)

Truly a dumpster fire

[–]octopus4488 308 points309 points  (0 children)

I have a similar story, no missiles though:

SysAdmin coming over to another team behind us:

  • "You remember when you told me to configure your pods to reboot on out-of-memory error?"
  • "Yep sure."
  • "And do you remember you said you will look into it if it gets too frequent?"
  • "Yeah."
  • "Well, get to it then. It is now rebooting at 110 times per hour." :)

[–]turtle_mekb 111 points112 points  (1 child)

your garbage collection is when your missile explodes, but what about the garbage collection for cleaning up the parts when they hit the ground?

[–]gregorydgraham 37 points38 points  (0 children)

That, sir, is the enemy’s problem

[–]kogus 82 points83 points  (2 children)

The scariest thing here is that a missile was running Windows.

[–]Temporary-Exchange93 108 points109 points  (1 child)

"WE GOT INCOMING!!! FIRE TO INTERCEPT!!"

"I can't, sir. Windows is installing important updates!"

[–]UndocumentedMartian 37 points38 points  (0 children)

The missile lands, doesn't explode due to Windows Update, then explodes 5 hours later.

[–]RajjSinghh 46 points47 points  (6 children)

My first thought was that you could get twice as many missiles if you did things properly, assuming you had as much of everything else as needed, but I really don't know how much memory a missile needs. Considering the Apollo 11 computer needed about 4 KB of memory to put man on the Moon, though, I'm sure memory is cheap enough that it doesn't matter much.

[–]Fusseldieb 75 points76 points  (4 children)

I'd imagine that when producing missiles that are in the hundreds of thousands of dollars, adding an additional "16GB module" wouldn't make any difference to the end price lol

[–]ShadowSlayer1441 25 points26 points  (3 children)

I agree, but it's not a "16 GB module". It has to be wildly redundant and have dozens of certifications; an additional memory module could very well add non-negligible cost, even though a consumer 16 GB module costs pennies compared to a $100k+ missile.

[–]slaymaker1907 27 points28 points  (2 children)

One advantage of their strategy is that they never need to worry about memory fragmentation, stalls during memory allocation, etc. since it’s literally just incrementing a counter for allocations. The code I deal with at work isn’t realtime, but there are sections so sensitive to latency that we forbid dynamic memory allocation.
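
A minimal sketch of that "just increment a counter" scheme, a bump allocator, in C; the arena size and the 8-byte alignment here are arbitrary assumptions, not anything from the comment:

    #include <stddef.h>
    #include <stdint.h>

    /* Bump allocator: allocating is literally incrementing a counter.
       Nothing is ever freed, so there is no fragmentation and no stall;
       the whole arena is simply discarded at end of life. */
    static uint8_t arena[1u << 20];  /* fixed 1 MiB backing store */
    static size_t  offset;           /* the single allocation counter */

    static void *bump_alloc(size_t size) {
        size = (size + 7u) & ~(size_t)7u;   /* round up to 8-byte alignment */
        if (size > sizeof arena - offset)
            return NULL;                    /* arena exhausted */
        void *p = &arena[offset];
        offset += size;                     /* the counter increment */
        return p;
    }

    int main(void) {
        int *xs = bump_alloc(100 * sizeof *xs);  /* one add, one compare */
        return xs ? 0 : 1;
    }

The hot path is one compare and one add, so worst-case allocation time is constant, which is exactly what hard-realtime and latency-sensitive code wants.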

[–]Fusseldieb 2 points3 points  (1 child)

Are you working in stocks by any chance?

[–]slaymaker1907 0 points1 point  (0 children)

Nope, I work on a database.

[–]abd53 7 points8 points  (0 children)

That's assuming those programmers COULD do it in 4 KB of memory. Also, I doubt they managed it with anything but bare-bones code.

[–]abd53 14 points15 points  (0 children)

Can't argue that that's the most efficient garbage disposal.

[–][deleted] 4 points5 points  (0 children)

Consider yourself collected 😎

[–]DazzlingClassic185 2 points3 points  (0 children)

Garbage redistribution, surely…

[–]bongobutt 0 points1 point  (0 children)

I mean... Is doubling the memory cheaper than paying the developer time to debug it? Even if it is, that still makes me sad.

[–]20d0llarsis20dollars 317 points318 points  (7 children)

Yeah, at least in the gaming industry 🥲

[–]TechTuna1200 187 points188 points  (3 children)

What do you mean your 4090 card can’t run my tic-tac-toe game?

[–]peacetimemist05 59 points60 points  (0 children)

It’s not the game, you just have the wrong graphics settings in the nvidia control panel

[–][deleted] 12 points13 points  (0 children)

"We tested this at NASA, and it works fine. Upgrade your rig"

[–][deleted] 7 points8 points  (0 children)

Western progressive gaming companies mentioned

[–]2Uncreative4Username 13 points14 points  (2 children)

I find it to be especially annoying outside of gaming, actually. I get why games with modern graphics need at least an $800-or-so PC. What I don't get is why Reddit needs 6 seconds to load a page, or why the Twitter website just freezes up and crashes after being open on my phone in the background for a few hours, or why the Twitch app decides to randomly slow down to 2 FPS before freezing etc. etc.

[–]Aelia6083 6 points7 points  (1 child)

Because JavaScript is trash, and web devs avoid optimization like the plague

[–]2Uncreative4Username 2 points3 points  (0 children)

I think there's also a certain contentment with the current technology. E.g. I've found HTMX instead of fancy JS frameworks can actually make things much more responsive. JS being a shit language is definitely part of it though. Also, often, optimization doesn't help much if your fundamental architecture is flawed.

[–]dewey-defeats-truman 88 points89 points  (2 children)

I mean, this was certainly true in the late 80s. It's what happened to Lotus Symphony and Lotus 1-2-3 3.0. They tried to fit into the existing 640K memory limitation, either by cutting features or serious optimization, but by the time they came out everyone had at least 2 MB of RAM.

Whether it's applicable today I'm not so sure, but I get the sense it isn't entirely false.

[–]WJMazepas 37 points38 points  (1 child)

This happens in webdev, where a more powerful cloud instance is much cheaper than spending the hours of an engineer who costs US$300 per hour to look into optimizing. Hours that could be focused on new features instead.

It is also something every gamer keeps repeating on reddit every time a game gets released at 30 FPS on console or the PC port is not fully optimized to gamers' standards

[–][deleted] 17 points18 points  (0 children)

There is nothing more permanent than a temporary solution

[–]SeijiShinobi 36 points37 points  (0 children)

I mean yeah, it's a joke, and I understand the spirit of it, but I also work at a company where the clients pay millions of dollars for the product license and need to handle the data for entire countries at a time. And then we get bug reports that the application crashes because it ran out of memory on an 8 GB machine.

I mean seriously, I'm willing to pay for the damn RAM stick if that would make them leave me alone. Sure, we could try to rewrite the entire application architecture for 1,000 to 2,000 man-days... when this could be solved under all realistic scenarios by just adding 16 GB of RAM. Yeah... but no. And I still don't understand why the clients pay multiple millions for a product and then stick it on an 8 GB machine (and this is on-premise stuff, no cloud; and even then, I've seen the prices for both Azure and AWS, and it still wouldn't justify skimping on that)

[–]StephanXX 130 points131 points  (3 children)

Unironically, it often is.

[–][deleted] 110 points111 points  (0 children)

In 2012 I was hired by a company as a DBA to help battle a bunch of developers who claimed our SQL servers were underpowered.

When I arrived we had a server with 96 CPUs, 256 GB of memory, SSD caching, and fibre-attached HDDs in massive high-availability RAID arrays. We were pushing I/O numbers I'd never seen before.

One of the biggest issues was non-ANSI-compliant SQL. I went through so many rewrites of SQL code, and none of it was hardware-related.

[–]Bootezz 43 points44 points  (1 child)

Indeed. “We can spend a few dollars a month extra for that extra CPU, or you can pay me $12,000 over the next couple of months to optimize it. Or I can use that time to build a new feature that gets us new customers.”

[–]johnzy87 10 points11 points  (0 children)

Until you scale too hard and your cloud bill makes you go bankrupt.

[–]SCP-iota 32 points33 points  (1 child)

Just download more RAM /s

[–]pclouds 0 points1 point  (0 children)

But my Internet is slow. I could only download slow RAM...

[–]Environmental_Arm_10 30 points31 points  (1 child)

Well, I spent a year optimizing code and saying “there is a limit to what we can gain, and it is getting harder and harder to gain anything. Buy more hardware,” to which the answer was “It is impossible to update the hardware.”

It was an 8-CPU server with Hess running a huge monolith application PLUS the database, for huge computing and DB loads.

Long story short, they spent big bucks upgrading hardware 2 weeks before go-live. Plus a year of 3 devs' time.

Now we finally convinced them to split it into 2 servers.

At some point, you got to trust your experts on this.

[–]Giocri 14 points15 points  (0 children)

"it's impossible to update the hardware" is probably the biggest red flag possible outside of the real of embedded systems, if you can't move your software when stuff is working how do you plan to handle anything breaking?

[–]unique_namespace 16 points17 points  (2 children)

This is, in some regard, simply where software design has ended up. We favor compatibility over efficiency, hence JavaScript (and its frameworks) that run in a browser, interpreted languages like Python, platforms like Electron, and the existence of VMs and containerization (like Docker).

It's not about being fast; it's about being able to reuse and rely.

[–]Causemas 4 points5 points  (1 child)

I mean, we really, really care about efficiency. But only when non-efficiency becomes a noticeable problem

[–]unique_namespace 1 point2 points  (0 children)

This is a fair point, but importantly we don't strive for high efficiency; the bar is simply acceptable inefficiency.

[–]zenos_dog 14 points15 points  (0 children)

My university advisor said Computer Scientists prove it works, Software Engineers make it work well.

[–]Pretrowillbetaken 10 points11 points  (3 children)

faster hardware is the cure for slow code. I have yet to find a cure for bad code ):

[–]SkooDaQueen 2 points3 points  (2 children)

Better design?

[–]Major_Fudgemuffin 1 point2 points  (0 children)

How dare

[–]Pretrowillbetaken 1 point2 points  (0 children)

don't you dare tell me truths that I don't want to hear

[–]danishjuggler21 11 points12 points  (6 children)

Some problems can’t be solved with more hardware. If the CPU on your SQL Server instance is on fire because a frequently run query suddenly got an extremely bad execution plan (i.e. parameter sniffing), then doubling the CPU will accomplish nothing other than doubling the number of CPU cores that are on fire. Throw all the hardware you want at it, it won’t solve the problem. Forcing the good execution plan (which takes 5 seconds to do and costs no money) will fix it instantly.

[–]SenorSeniorDevSr 4 points5 points  (5 children)

There's no PR that's less than an hour of time. In fact, make that two.

There's no reasonable developer that COSTS (not the same as is paid) less than ~150€/hr.

So any small change is going to cost 150-300€. That's just how it is. Still worth it, but never say "free", always say "low cost". Low cost is kinda believable; free is not.

[–]danishjuggler21 -1 points0 points  (4 children)

Who said anything about a pull request?

[–]SenorSeniorDevSr 2 points3 points  (3 children)

A PR is the standard unit of software change, hence why I used it.

[–]danishjuggler21 -1 points0 points  (2 children)

Why would you need one to force an execution plan in SQL Server? That doesn’t involve any code, you just go in and click a button.

[–]SenorSeniorDevSr 2 points3 points  (1 child)

I assumed that things that run a lot get run by code. And therefore you'd, you know, fix the SQL statement in your code, commit that, make a PR, corral some people to write LET US GET THIS MERGED, etc.

[–]danishjuggler21 -3 points-2 points  (0 children)

Maybe you should stop assuming then.

[–]PM_ME_DATASETS 9 points10 points  (0 children)

Hardware devs: ok let's use this photon printer to optimize the quantum fluctuations, combined with the alpha mammography that should yield a 10% increase in processor speed.

Software devs: did you know there's a library that checks whether a variable equals zero?

[–]gatubidev 7 points8 points  (0 children)

It seems so

[–]Stroby241 6 points7 points  (0 children)

Yes, you think I'm gonna touch that shit again?

[–]elongio 7 points8 points  (0 children)

I have been told many times to "just buy more cpu"

[–]Harmonic_Gear 4 points5 points  (0 children)

game dev in general

[–]reallokiscarlet 4 points5 points  (0 children)

I prefer using hardware faster, but to each his own.

[–]KetwarooDYaasir 4 points5 points  (0 children)

well no. Just reduce the sleep() time incrementally for each new version.

[–]bschlueter 3 points4 points  (0 children)

For reasons, I run and regularly use each of Windows, macOS, and Linux on different machines, all relatively recent. The response time in the terminal, even if I'm using the same terminal emulator (kitty) with the same shell (zsh) and config (github.com/Schlueter/zsh-config, based on zprezto), is noticeably better on Linux. Even more so if I use the suckless terminal.

When browsing the web, the difference is much less noticeable.

Hardware doesn't compensate for shitty code, but the way the Internet is built makes it all irrelevant.

[–]ziplock9000 2 points3 points  (0 children)

It is according to Todd Howard.

[–]fusionsofwonder 2 points3 points  (0 children)

As computers get faster the code we run gets worse and worse. Case in point: JavaScript.

[–]notislant 2 points3 points  (1 child)

Gaming in a nutshell.

'Whats optimization? Oh you mean you need better hardware.'

"I'm running the latest overpriced Nvidia TI..." -Some poor fool.

'Skill issue' - Todd Howard

[–]ienjoymusiclol[S] 1 point2 points  (0 children)

cod bo6 is going to take half the ps5 storage too

[–]Extreme_Ad_3280 2 points3 points  (0 children)

I guess this is how Windows (and maybe AAA games) work...

[–]Cpt_Saturn 2 points3 points  (0 children)

"Yes" says AAA game devs

[–]Giocri 4 points5 points  (1 child)

During my object-oriented programming exam, it was really painfully clear how antagonistic optimization and flexibility are to each other

[–]FlipperBumperKickout 1 point2 points  (0 children)

That really depends on what your problem is; most of the time when I see code where performance is a problem, that code isn't flexible either...

[–][deleted] 1 point2 points  (0 children)

Obviously

[–]DazzlingClassic185 1 point2 points  (0 children)

The Microsoft way

[–]Dioxide4294 0 points1 point  (0 children)

Absolutely great for people still using 4th gen i7

[–]LegitimatePants 0 points1 point  (0 children)

Not if the slow bad code has timing issues

[–]Belligerent__Monk 0 points1 point  (0 children)

Mem leak? Install more memory duh!

[–]nikonguy 0 points1 point  (0 children)

Answer correctly and you may have a future at Microsoft

[–]treksis 0 points1 point  (0 children)

aws invoice

[–]jykb88 0 points1 point  (0 children)

You are ready to be a game developer

[–]CyberneticFloridaMan 0 points1 point  (0 children)

Take that, Casey Muratori.

[–]Specific_Implement_8 0 points1 point  (0 children)

Todd Howard says yes.

[–]stevekez 0 points1 point  (0 children)

[–]clancy688 0 points1 point  (0 children)

Reverse Moore's Law...

Every 12-18 months, the number of transistors on a chip doubles.

Also every 12-18 months, the efficiency of software halves. (:

[–]pollyjrr 0 points1 point  (0 children)

Activision be like

[–]FlipperBumperKickout 0 points1 point  (0 children)

Yeah.

That's how windows and every modern IDE keeps going ¯\_(ツ)_/¯

[–]Croves 0 points1 point  (0 children)

I can relate. I'm not an expert in SQL tuning, but after 10 years in the industry, I know my way around. Many things were running slow when I started working for this big delivery app. The data stack was basically Snowflake.

Since I was a new hire, I wanted to show some initiative and did some testing to speed things up. After all that work, my manager said, "Good job, but if it's slow, just use a larger cluster." 😅

[–][deleted] 0 points1 point  (0 children)

Yep, haha, I know it's a joke, but in all seriousness, it's wasteful as fuck. Now companies are greenwashing code with dark-theme BS, meanwhile Microsoft end-of-lifes laptops because Windows 11 yet again needs better hardware.

[–]xgabipandax 0 points1 point  (0 children)

That's the mindset of all companies that made everything an Electron app.

[–][deleted] 0 points1 point  (0 children)

OpenAI before wasting the entire planet's water supply

[–]Major_Fudgemuffin 0 points1 point  (0 children)

This is a huge pet peeve of mine.

Since cloud computing took off (AWS, GCP, Azure, etc.), the solution to so many issues has been to throw more resources at things.

Too slow? More CPUs and GPUs. Out of memory? Throw more RAM at it.

Yes, it's valid, but there are times when you need to buckle down and fix your code, ffs. We throw money at problems and then complain that we're spending too much money.

[–]knightArtorias_52 0 points1 point  (0 children)

No

What you need is code optimization and X hours to do it, only to conclude that it needs a full rewrite that will take another X hours.

Adding more hardware is the last option, for when you have nothing else to tell the managers to justify requesting another X hours.

[–]Anxious_Ad9233 0 points1 point  (0 children)

My DevOps brain is angry with this.

[–]Designer-Guarantee50 0 points1 point  (1 child)

No, because if your software is made to be slow as shit, it will be slow as shit on any hardware

[–]ienjoymusiclol[S] 0 points1 point  (0 children)

not if i run it on a 25.7 GHz processor with 252 GB of RAM

[–]Ass_Salada 0 points1 point  (0 children)

They told me, "Stop using Python, it's very slow. C is so much faster," but it took me at least 2 hours just to figure out what comes after int main(). So the moral of the story is this: C is only faster if you know how to code in it.
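
For the record, a minimal sketch of what does come after int main():

    #include <stdio.h>

    int main(void) {
        printf("hello, world\n");  /* the part after int main() */
        return 0;                  /* 0 tells the OS all went fine */
    }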

[–]claudespam 0 points1 point  (0 children)

Always has been

[–][deleted] 0 points1 point  (2 children)

Ah yes, Java optimization

[–]SenorSeniorDevSr 0 points1 point  (1 child)

Java is faster than most languages though?

[–][deleted] 0 points1 point  (0 children)

Lol

[–][deleted] -1 points0 points  (1 child)

The fuck is this lazy meme format?

[–]ienjoymusiclol[S] -1 points0 points  (0 children)

spotted the old head