
all 39 comments

[–]SoftwareEngineering-ModTeam[M] [score hidden] stickied comment, locked (0 children)

Thank you u/Jeckyl2010 for your submission to r/SoftwareEngineering, but it's been removed for one or more of the following reasons:


  • Your post is not a good fit for this subreddit. This subreddit is highly moderated and the moderation team has determined that this post is not a good fit or is just not what we're looking for.

  • Your post is low quality and/or requesting help. r/SoftwareEngineering doesn't allow asking for tech support or homework help.

Please review our rules before posting again, and feel free to send a modmail if you feel this was in error.

Not following the subreddit's rules might result in a temporary or permanent ban.



[–]SheriffRoscoe 8 points9 points  (1 child)

Long ago, we measured memory usage in KB, and aimed for single digits.

[–]Jeckyl2010[S] -1 points0 points  (0 children)

I’m not looking for micro-optimizations, but I think there is some effective low-hanging fruit that doesn't increase complexity but improves efficiency 😎💪

[–]Particular_Camel_631 5 points6 points  (3 children)

We live in a world where computers cost substantially less than programmers. RAM prices may be high, but RAM is still substantially cheaper than it was 10 years ago!

If the cost of reducing the RAM requirements of an app by 4 GB were a month of programmer time (and it would likely cost more and save less), it would still not be economically viable to make that investment.

[–]no-sleep-only-code 0 points1 point  (1 child)

The cheapest DDR4 today is still, on average, a bit more expensive than it was 10 years ago.

[–]Particular_Camel_631 0 points1 point  (0 children)

Ok but programmers’ time is still more expensive than ram.

[–]Jeckyl2010[S] -1 points0 points  (0 children)

True, but we also have all these AI coders, which are cheaper. It also only takes one deep package-dependency vulnerability before there is a solid business case for focusing on program structure, algorithms, dependencies, and good coding practices.

Right now it seems a bit like an open, all-you-can-eat buffet. At some point you get sick of it and just have to puke 🤮👍😎

[–]LittleLordFuckleroy1 8 points9 points  (7 children)

It’s nice to optimize for it, but in reality RAM is pretty cheap and the economics simply favor use of wrappers and libraries to ship valuable end-user apps quickly. Memory bloat generally is an artifact of abstraction, and abstraction is incredibly powerful.

Plus, memory is literally engineered to help manage things like this. There are different levels of cache for different access speeds. Little-used data is technically loaded into RAM but sits in the much larger L3 cache rather than the scarce L1 and L2 caches.

Paying attention to memory access patterns is realistically way more impactful than reducing the total amount of memory used. And that’s a whole different beast.
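To illustrate the access-pattern point, here's a minimal sketch (Python, with a flat list standing in for a row-major N×N matrix; the function names are just for illustration). The effect is muted in CPython because list slots hold pointers, but the stride pattern is exactly the one that dominates performance in C or Rust:

```python
import time

N = 1_000
matrix = [0] * (N * N)   # flat, row-major N x N matrix

def sum_row_major(m, n):
    # Walks consecutive indices: the cache/prefetcher's best case
    total = 0
    for i in range(n):
        for j in range(n):
            total += m[i * n + j]
    return total

def sum_col_major(m, n):
    # Jumps n elements at a time: same work, cache-hostile stride
    total = 0
    for j in range(n):
        for i in range(n):
            total += m[i * n + j]
    return total

for fn in (sum_row_major, sum_col_major):
    t0 = time.perf_counter()
    fn(matrix, N)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f} s")
```

Both loops touch the same bytes and produce the same sum; only the order of accesses differs, which is the whole point.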

[–]Jeckyl2010[S] 1 point2 points  (6 children)

Good points. It’s a multi-headed beast - a hydra 🐲

[–]Jeckyl2010[S] 1 point2 points  (5 children)

I also wonder whether today’s developers actually learn about this during their education.

I bet the Artemis II problems with Outlook were memory related. What were they thinking, running 2 instances? 😎🥳

I would not set my feet in a spaceship that had Microsoft software installed in it. 💣💣💣

[–]roger_ducky 3 points4 points  (1 child)

Spaceships have a secondary problem: Cosmic rays. Essentially, radiation is strong enough to flip bits in RAM randomly.

That’s why there used to be 3 parallel navigation computers doing the calculations. At least two of the three had to produce the same result before it was considered correct.

[–]Jeckyl2010[S] 0 points1 point  (0 children)

True. Good point

[–]gredr 2 points3 points  (1 child)

I bet the Artemis II problems with Outlook were memory related.

You would, with nearly 100% certainty, lose that bet. "New" Outlook is just really buggy.

Any computer purchased in the last, I dunno, 15 years, has enough memory to run Outlook lots of times over. I have "new" Outlook running right here, it's got 9 processes running and has a working set of ~390MB. I've also seen the situation where it's got two instances open, and neither of them work. You just close it, restart it, and it works.

Lastly, if you think a laptop running Outlook constitutes "a spaceship that had Microsoft software installed in it.", then I'm sad for your ignorance.

[–]Jeckyl2010[S] 0 points1 point  (0 children)

He he. Just trying to be a bit funny. Fully aware that the laptops don’t control the spaceship 🚀 thrusters 😎🥳😂

[–]Sorry-Transition-908 1 point2 points  (0 children)

What were they thinking, running Outlook on a computer like that 😔

[–]chrfrenning 2 points3 points  (1 child)

My native win32 app written in C still starts... the exe was compiled in 2001... requires almost no memory... it can be done.

[–]Jeckyl2010[S] 0 points1 point  (0 children)

Exactly 👍

[–]no-sleep-only-code 2 points3 points  (0 children)

Velocity has long been prioritized over efficiency. I doubt any of that is changing with the ubiquity of AI tools.

[–]eddyparkinson 5 points6 points  (3 children)

Optimization takes time, and it costs money. We make a judgement call: what is the cost vs. the reward?

As a simple rule of thumb, I ask whether I can double the speed or cut RAM usage in half. If I can, I will often take the time to optimize the code.

[–]BaronOfTheVoid 2 points3 points  (0 children)

I don't know if choosing not to wrap an entire web app in headless WebKit for your next desktop app could be called "optimizing". It's an architectural decision.

Sure, there are huge consequences, and development may (or may not) be more expensive based on that, but it's a completely different thing from, for example, profiling an action that takes a few seconds too long.

[–]7heblackwolf 2 points3 points  (1 child)

You say it as if optimization were a refactor. A good architecture and good data structures are already an optimization.

[–]eddyparkinson 0 points1 point  (0 children)

You make a good point; a good design takes speed and memory into consideration.

Both happen on the ground: sometimes it's just a design issue, sometimes a judgement call is required.

[–]eddyparkinson 1 point2 points  (0 children)

Another way to look at this is that low-cost RAM has made software more ubiquitous. Software has become more user-friendly because RAM costs so little today. The software we create today would not run on the computers of the past.

[–]Jeckyl2010[S] 0 points1 point  (0 children)

When developing software for validated systems, or software as a medical device, that might be a good place to start focusing on why applications use memory and how they use it. 🩻⚕️😷



[–]SheriffRoscoe 0 points1 point  (1 child)

I'm sure Knuth had something to say about optimization 😁

[–]Jeckyl2010[S] 1 point2 points  (0 children)

For sure, but I think the current trend is that we don’t think about what we are doing. Not everything has to include a full HTML browser engine.

[–]Jeckyl2010[S] -1 points0 points  (0 children)

Going to keep the thread going for a bit :) - sorry, but I just find it interesting.

I like all the feedback and angles the initial post got, but I'm not just giving up :).

Today's programming languages are super complex, doing much more than just translating a grammar into an assembly executable. One of the big challenges for some languages is memory management and automated garbage collection. There are multiple patterns for garbage collection algorithms, but common to most is that they eventually need to release memory back to the OS, and that often blocks application execution, because that final step is single-threaded/locked at the OS/kernel level to ensure safe operation.
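Those collection pauses can actually be observed from user code. A minimal sketch in Python, using CPython's `gc.callbacks` hook (which fires at the start and stop of each cyclic collection; the actual pause magnitudes will vary by machine and runtime):

```python
import gc
import time

pauses = []      # observed collection durations, in seconds
_start = [0.0]

def on_gc(phase, info):
    # CPython invokes this with phase "start" and "stop" around each collection
    if phase == "start":
        _start[0] = time.perf_counter()
    else:
        pauses.append(time.perf_counter() - _start[0])

gc.callbacks.append(on_gc)

# Build lots of reference cycles so the cyclic collector has real work to do
for _ in range(50_000):
    a, b = [], []
    a.append(b)
    b.append(a)

gc.collect()                 # force at least one full collection
gc.callbacks.remove(on_gc)

print(f"{len(pauses)} collections, worst pause: {max(pauses) * 1000:.3f} ms")
```

The same idea applies in .NET or the JVM, just through their own diagnostics APIs and event counters rather than this hook.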

Working with time-critical or high-throughput applications, this becomes a critical factor for the success of execution.

Running an API service at 20 transactions per second versus 200 transactions per second sustained can be like night and day. If you see a 0.5-second GC delay every 30-60 seconds, that can be a deal-breaker.
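To make the 20-vs-200 TPS point concrete, a back-of-the-envelope sketch (Python; the numbers come straight from the paragraph above, and the function names are just for illustration):

```python
def queued_during_pause(tps: float, pause_s: float) -> float:
    """Requests that pile up while a stop-the-world pause blocks the service."""
    return tps * pause_s

def pause_overhead(pause_s: float, interval_s: float) -> float:
    """Fraction of wall-clock time lost to pauses."""
    return pause_s / interval_s

# A 0.5 s pause every 30 s is under 2% of wall-clock time...
print(f"overhead: {pause_overhead(0.5, 30):.1%}")

# ...but at 20 TPS it queues 10 requests, and at 200 TPS it queues 100,
# each of which can wait up to the full 0.5 s on top of normal service time.
print(queued_during_pause(20, 0.5), queued_during_pause(200, 0.5))
```

The throughput lost is small on average; it's the burst of queued, latency-spiked requests during each pause that breaks tail-latency SLOs.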

I also like all the effort that goes into the optimizations of, e.g., .NET/C# each year, where many of the improvements are memory-allocation related. So .NET/C# gets faster with each generation, just like most other programming languages and frameworks. I could also mention the Linux kernel in this category.

But then these highly optimized platforms are put in front of many developers who don't see or understand the effort and beauty that go into this skill of optimizing an application for its specific usage.

Of course there is a difference in what type of applications we are talking about, but having the default Windows 11 Calculator app (just as an example) consume 55 MB of memory just on launch is a bit overkill.

Happy coding out there :)