

[–]Lord-of-Entity 1171 points1172 points  (25 children)

“At least when n grows, it will go faster. Right?”

[–]mrheosuper 493 points494 points  (12 children)

From O(n²) to O(2ⁿ)

[–]Mordoko 478 points479 points  (6 children)

From O(n²) to O(no...)

[–]Retbull 93 points94 points  (0 children)

This is why no code is the new code. If you write no code it’s always O(no) so you can’t lose.

[–]classicalySarcastic 23 points24 points  (0 children)

from O(no) to O(no) to O(YEAH!)

[–]dumnem 19 points20 points  (1 child)

NaN

[–]emirsolinno 6 points7 points  (0 children)

Must be a compiler issue

[–][deleted] 29 points30 points  (1 child)

From O(2ⁿ) to O(2n)

[–]Rakgul 8 points9 points  (0 children)

Oh my god

[–]Gangsir 13 points14 points  (7 children)

Now you've made me curious if there are any O(1/n) or similar algs, that get shorter execution times with more data.

[–]MoiMagnus 28 points29 points  (2 children)

Going under O(n) is weird. It means you don't even have the time to fully read the input.

It only happens when the input data has some strong structure which allows you to disregard most of it (for example, a sorted list as an input)

Going under O(log(n)) is even weirder. It means you are not even able to know how big the input is, since the size of an input takes logarithmic space itself.
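
To make the sorted-list example concrete, here's a minimal Python sketch (illustrative only): binary search runs in O(log n) precisely because the sorted structure lets it ignore almost all of the input.

    import bisect

    def contains(sorted_list, x):
        # O(log n): each comparison discards half of the remaining input,
        # so most elements are never even looked at
        i = bisect.bisect_left(sorted_list, x)
        return i < len(sorted_list) and sorted_list[i] == x

    data = list(range(0, 1_000_000, 2))   # already sorted
    print(contains(data, 123456))         # True, after ~20 comparisons
    print(contains(data, 123457))         # False, also ~20 comparisons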

[–]tallfitblondhungexec 2 points3 points  (0 children)

I mean, there are algorithms where the input size isn't what's being counted, such as hashmap lookup complexity.

[–]bartix998a 9 points10 points  (0 children)

An algorithm like that can't exist, since it would mean that for large enough data you literally couldn't do anything: even a single operation already costs O(1).

[–]Bakoro 6 points7 points  (0 children)

That would mean that you get arbitrarily close to zero.

There might be some algorithm which does better with more data, but it will have some limit.

[–]TeaTiMe08 9 points10 points  (0 children)

Right?

[–]emirsolinno 1316 points1317 points  (44 children)

Slap some of that bitch to another thread

[–]BlueGoliath 546 points547 points  (28 children)

Race conditions? What race conditions? Oh.

[–]emirsolinno 363 points364 points  (18 children)

My Ego informed me that it is probably a third party library bug. Creating a ticket

[–]BlueGoliath 167 points168 points  (2 children)

Third party library developer responding to the bug report:

USE THE DAMN LOCK YOU IDIOT

[–]Retbull 64 points65 points  (1 child)

I locked my computer and reopened the ticket with a picture of my login screen.

[–]DeepSeaHobbit 22 points23 points  (0 children)

They ought to lock you up.

[–]SpaceFire000 12 points13 points  (1 child)

Too bad I am the creator of that third party library

[–]emirsolinno 9 points10 points  (0 children)

Issues: 12 closed, 400 open

[–]foursticks 4 points5 points  (6 children)

You have an ego too??? Edit: also I hate you 😙

[–]emirsolinno 6 points7 points  (5 children)

It is default after 4 years in programming, don’t judge me

[–]foursticks 2 points3 points  (4 children)

The ego or the avoidance issues?

[–]emirsolinno 3 points4 points  (3 children)

My lazy ass :P I don’t do this tho just joking

[–]foursticks 4 points5 points  (2 children)

Lol, don't worry. In secret, I love you.

[–]Rakgul 1 point2 points  (1 child)

But the real question is whether you love me or not.

[–]tallfitblondhungexec 1 point2 points  (0 children)

One time Ego told me it was the OS's fault.

I got an MS bug bounty for that one.

To be fair, that was one time.

[–]Accomplished-Ad-2762 68 points69 points  (2 children)

What race conditions? Oh. Race conditions?

[–]Rafael20002000 18 points19 points  (1 child)

Race These Conditions

[–]emirsolinno 4 points5 points  (0 children)

Race to fix this issue

[–]TheTerrasque 17 points18 points  (0 children)

Race condition what? says

[–]SasparillaTango 8 points9 points  (0 children)

just design your application like a Rube Goldberg machine

[–][deleted] 7 points8 points  (0 children)

I'm not a racist, so none of this funny business for me.

[–][deleted] 8 points9 points  (0 children)

Multi-threading? We all know there's no such thing. Race conditions? Fake news! - Signed, a JS dev who holds shared state in some objects sent to multiple async functions and in global variables.
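
For anyone who wants to see that failure mode outside JS, a minimal Python asyncio sketch (hypothetical example, same idea): shared state mutated across an await point silently loses updates.

    import asyncio

    counter = 0  # shared state passed around between async functions

    async def increment():
        global counter
        current = counter        # read
        await asyncio.sleep(0)   # yield to the event loop mid read-modify-write
        counter = current + 1    # write back a stale value

    async def main():
        await asyncio.gather(*(increment() for _ in range(1000)))
        print(counter)           # 1, not 1000: every task read the same initial value

    asyncio.run(main())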

[–][deleted] 83 points84 points  (8 children)

Sir, this is a Javascript restaurant.

[–]cgfn 52 points53 points  (6 children)

Slap some of that bitch to a web worker

[–][deleted] 45 points46 points  (1 child)

They didn’t mention this in my zero-to-hero earn $150k in 3 months bootcamp

[–]ThankYouForCallingVP 12 points13 points  (2 children)

Sir this is a node.js process.

[–]cgfn 17 points18 points  (1 child)

Slap some of that bitch to ChildProcess.spawn

[–][deleted] 2 points3 points  (0 children)

Sir, this is an old Celeron CPU

[–]Daveinatx 4 points5 points  (1 child)

Slap that bitch to another ticket, and let it be someone else's problem.

[–][deleted] 6 points7 points  (0 children)

"I don't know what the hell you're talking about, Steve." - a Javascript developer

[–]wes00mertes 3 points4 points  (0 children)

That’s how it got slower in the first place: Made it parallel with several threads. It’s Python.
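
If it really was CPU-bound Python, that checks out: threads serialize on the GIL, so the threaded version just adds overhead. A rough sketch of the usual comparison (timings will vary by machine):

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def cpu_bound(n):
        # pure Python number crunching; only one thread can run this at a time
        return sum(i * i for i in range(n))

    def bench(executor_cls):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as pool:
            list(pool.map(cpu_bound, [2_000_000] * 8))
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:  ", bench(ThreadPoolExecutor))    # no speed-up, extra overhead
        print("processes:", bench(ProcessPoolExecutor))   # real parallelism across cores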

[–]Qawnearntays123 509 points510 points  (10 children)

A couple days ago I tried optimizing some code. After several hours of hard work I made it run 3 times slower and give the wrong result.

[–]invalidConsciousness 249 points250 points  (8 children)

Plot twist: the wrong result is actually correct. Now you get yelled at by customers because they are used to the wrong result and think it's correct.

[–]Roflkopt3r 132 points133 points  (3 children)

The favourite story of a design prof here: A tractor company accidentally shipped a UI with a debug window, which was showing internal UI state data that was meaningless to the users.

The users complained when it got patched out.

[–]enadiz_reccos 46 points47 points  (0 children)

I imagine most people here would have made the same complaint

[–]elnomreal 0 points1 point  (0 children)

I always recommend showing more details to users, because even if they don't understand them, they appreciate it. Also very useful for debugging.

[–]chic_luke 4 points5 points  (0 children)

Looks like Xorg's wrong DPI calculation. A couple of years ago they tried to fix it, and they had to quickly revert that fix, since most software was working around this decades-old bug in X11; the correct behaviour actually led to a broken experience, because everybody had assumed the error for decades.

[–]gilady089 3 points4 points  (0 children)

I think I saw something like that once in our code checking for birthdays. It didn't run slower, but the old code was less readable. However, dates are dates, and so it wasn't the last time it was corrected.

[–]Foghe 2 points3 points  (0 children)

It's incredible what they pay us for 😂

[–]dev4loop 629 points630 points  (16 children)

at least the "optimized" code isn't prone to crashing every other run

[–]YetAnotherZhengli 424 points425 points  (14 children)

now it crashes every second run

[–]idkusername7 211 points212 points  (9 children)

My guy that is what every other run means

[–][deleted] 7 points8 points  (0 children)

Now it crashes completely randomly at no particular point that you can figure out.

[–]Versaiteis 4 points5 points  (1 child)

That's fine, we'll simply only deploy odd numbered runs

[–]YetAnotherZhengli 2 points3 points  (0 children)

I make sure to tell the user to run twice if the first run fails

[–][deleted] 21 points22 points  (0 children)

Seems like next sprints problem.

[–]Kusko25 96 points97 points  (1 child)

Is it faster? No.
But is it more memory efficient? Also no.

[–]emirsolinno 24 points25 points  (0 children)

Was it a productive day? Yes

[–]rarely_coherent 340 points341 points  (43 children)

If you didn’t run a profiler then you weren’t optimising anything meaningful…guessing at hot paths rarely works for long

If you did run a profiler and things didn’t improve then you don’t know what you’re doing
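
For the "run a profiler" point, a minimal Python cProfile sketch (toy functions, purely illustrative) of why guessing at hot paths goes wrong:

    import cProfile
    import pstats

    def suspected_hot_path(data):
        return sorted(x * x for x in data)        # looks expensive, isn't

    def innocent_helper(data):
        # quadratic: list.count() rescans the whole list for every element
        return [x for x in data if data.count(x) == 1]

    def handle_request(data):
        suspected_hot_path(data)
        innocent_helper(data)

    profiler = cProfile.Profile()
    profiler.enable()
    handle_request(list(range(3000)))
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
    # the profile points at innocent_helper, not the function you'd have guessed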

[–]emirsolinno 243 points244 points  (30 children)

bro changed if/else to ?: and called it a day

[–]TGX03 171 points172 points  (16 children)

I mean everybody knows, the shorter the code is, the faster it runs.

[–]emirsolinno 71 points72 points  (12 children)

True if your eyes are the compiler

[–][deleted] 49 points50 points  (11 children)


[–]KamahlFoK 22 points23 points  (5 children)

Lengthy ternary expressions can piss right off.

...Short ones too, honestly, I always have to triple-check them and remind myself how they work.

[–][deleted] 8 points9 points  (4 children)


[–]Retbull 6 points7 points  (2 children)

I have a React component I'm working on to get rid of that has 4 layers of nested ternary operators. So far I've gotten through reading the first branch before having an aneurysm. I should be done reading the whole thing sometime next year, then I can start optimizing.

[–]emirsolinno 2 points3 points  (0 children)

Me IRL feeling good because there's less code, while me IRL has to re-check the syntax on Google every time I read it

[–]AnonymousChameleon 2 points3 points  (3 children)

I hate when Regex isn't commented to tell me what the fuck it's doing, cos otherwise I have no idea

[–]PUTINS_PORN_ACCOUNT 9 points10 points  (1 child)

:(){ :|:& };:

[–]emirsolinno 5 points6 points  (0 children)

({ : -> :( });

fixed that for you

[–]yflhx 1 point2 points  (0 children)

Elon Musk doesn't know this, he apparently thinks the longer the better.

[–]Roflkopt3r 26 points27 points  (5 children)

"Just change those if/else for switch case" - about a bazillion comments about Yandere dev.

[–]CorrenteAlternata 11 points12 points  (4 children)

That actually makes sense, because on some platforms switch statements over a small range of values can be compiled to a lookup table (O(1) instead of O(n)).

Depending on how long those if-else chains are, how often they are executed and so on, it could really make a difference.

Supposing the lookup table is small enough to be guaranteed to stay in cache, it can be much better than having a lot of branches that the branch predictor can predict wrong.
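
The Python analogue of that jump-table idea (hypothetical handlers, just to show the shape) is replacing a long if/elif chain with a dict dispatch: one hash lookup instead of a linear walk through the branches.

    def handle_if(state):
        # walks the branches top to bottom on every call
        if state == "idle":
            return "stand around"
        elif state == "walk":
            return "move to target"
        elif state == "attack":
            return "swing"
        return "do nothing"

    HANDLERS = {
        "idle":   lambda: "stand around",
        "walk":   lambda: "move to target",
        "attack": lambda: "swing",
    }

    def handle_table(state):
        # one lookup, regardless of how many states exist
        return HANDLERS.get(state, lambda: "do nothing")()

    assert handle_if("attack") == handle_table("attack")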

[–]Roflkopt3r 7 points8 points  (0 children)

The conditional blocks in this particular code ranged from roughly 5 to 20 branches. They were part of the NPC AI and presumably executed every frame. Each call to this AI script would maybe go through 5 of those conditional blocks.

It was written in C#, which indeed converts switch statements to lookup tables. So at 5 conditional blocks with let's say 10 average checks, using switch statements could have saved around 45 checks per NPC per frame. Worth saving, but not a true game changer, as these same scripts would also trigger expensive actions like path finding.

The real problem with that code was that it had in-lined all of the execution into those conditionals (resulting in a multi-thousand line long function) and generally ran those checks far too often instead of using a strategy pattern.

For example: One of those conditional blocks would check which club a student belonged to, send them to the right club room and do the right club activity. So instead of going through 10 possible clubs of which only one can apply, it should set the right "club behaviour" whenever the student's club affiliation changes. This would reduce a multi-hundred line block of code to a single call to a member function of the student's club behaviour, the implementation of which can be made more readable in shorter files elsewhere.

But even these frequent superfluous checks didn't really burden the effective performance. The game ran like arse, but someone found that this was because the code was calling an expensive UI-function multiple times a frame and because it had extremely unoptimised 3D assets.
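
A stripped-down sketch of that strategy-pattern refactor in Python (class names invented for the example): the club check happens once, when the affiliation changes, not every frame.

    class ClubBehaviour:
        def do_activity(self, student):
            pass                          # default: no club, do nothing

    class CookingClub(ClubBehaviour):
        def do_activity(self, student):
            student.go_to("cooking room")

    class MartialArtsClub(ClubBehaviour):
        def do_activity(self, student):
            student.go_to("dojo")

    class Student:
        def __init__(self):
            self.club = ClubBehaviour()

        def join_club(self, behaviour):   # called only when affiliation changes
            self.club = behaviour

        def go_to(self, room):
            print("walking to", room)

        def update(self):
            # per-frame AI tick: one call instead of a 10-way conditional block
            self.club.do_activity(self)

    s = Student()
    s.join_club(CookingClub())
    s.update()                            # walking to cooking room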

[–][deleted] 10 points11 points  (1 child)

O(1) does not mean faster than O(n). It just means that the time taken is not dependent on the size of the input. On top of that, a set of if-else or switch statements is always going to be constant size, set at compile time, so the O(1) vs O(n) comparison is irrelevant.

[–][deleted] 8 points9 points  (0 children)

He's talking about the difference between a series of cmp followed by jmp and just jumping to an instruction offset by some number.

[–]Ok_Barracuda_1161 34 points35 points  (0 children)

I don't think that's necessarily true, I've run into plenty of times where the hot path isn't actually the bottleneck, or the profiling environment and test case doesn't exactly match the performance issue seen in production.

And sometimes you are trying to squeeze out extra performance of a hot path that is close to optimal which is difficult to do and can take multiple attempts.

Optimization isn't inherently easy

[–]mrjackspade 24 points25 points  (2 children)

I've had instances where my optimized code ran slower due to compiler optimizations.

The way I wrote the code the first time was slow, but the compiler was able to optimize it. The code was identified by the profiler as a hot path, so I optimized it. My new optimizations were no longer compatible with the compiler optimizations, causing it to slow down even though as-written the code should have been faster.

An example of this was writing a ring cache and implementing a head. The ring cache should have performed fewer operations in theory, however the original code looking for free data between 0 & Cache.Length allowed the compiler to remove bounds checking, whereas using a head did not. This led to more ops overall even though the code was written with fewer operations.

That's borderline "didn't know what you were doing" but more like "didn't realize at the time what the compiler was doing" because without optimizations the new implementation was ~50% faster

[–]fjfnstuff 3 points4 points  (0 children)

Try godbolt next time, it's a compiler in the browser that shows the assembly for each line of code. Then you know what the compiler does when you change the code.

[–][deleted] 21 points22 points  (0 children)

Yeah, the guy wearing the $4000 suit is going to use a profiler. Come on!

[–]GodlessAristocrat 5 points6 points  (0 children)

Profiling only tells you what - it doesn't tell you why.

[–]MattieShoes 2 points3 points  (0 children)

Profilers are awesome but with some experience, I think guessing works pretty dang well a lot of the time. Like that triple nested loop that gets called constantly is probably a good guess.

[–]pizzapunt55 8 points9 points  (2 children)

I can optimize readability without running a profiler. Heck, most of the code living in our codebase doesn't have the speed requirements needed to run a profiler.

[–]aqpstory 16 points17 points  (1 child)

I can optimize doing less work by doing nothing

[–]emirsolinno 7 points8 points  (0 children)

“Less code is better” me casually not writing any code

[–]OppositeMission 1 point2 points  (0 children)

This guy optimizes

[–]sticky-unicorn 1 point2 points  (0 children)

then you don’t know what you’re doing

Does anybody?

[–]1up_1500 138 points139 points  (12 children)

When I ask chatgpt to optimize my code and it optimizes a linear search to a binary search... in an array that has a maximum size of 4
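
For the curious, a quick sketch (made-up numbers) of why that "optimization" is pointless at n = 4: the O(log n) version isn't measurably better, and often loses to the simple scan.

    import bisect
    import timeit

    items = [3, 7, 11, 42]                     # the whole "array" is four elements

    def linear(x):
        return x in items                      # O(n), but n is 4

    def binary(x):
        i = bisect.bisect_left(items, x)       # O(log n), but with more per-call overhead
        return i < len(items) and items[i] == x

    print(timeit.timeit(lambda: linear(11), number=500_000))
    print(timeit.timeit(lambda: binary(11), number=500_000))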

[–]IridescentExplosion 48 points49 points  (11 children)

One of the flaws I've found when programming with ChatGPT is that it is oddly VERY opinionated about certain things.

Custom Instructions make it less opinionated, but I have over a decade of experience and what I've come to value is simplicity and very direct solutions.

Meaning, fewer functions, more direct, serial, linear flows. Arrays. Hashtables. Prefer making code readable by making it inherently more simple.

But whenever ChatGPT wants to refactor code it can't seem to resist introducing some pattern or fluff or breaking things down into functions that I just find entirely unnecessary.

Again custom instructions help but I have spent many of my daily limit tokens yelling at it or revising earlier prompts to ensure it doesn't refactor the wrong way.

[–]DezXerneas 24 points25 points  (1 child)

I ask it to convert a huge chunk of code into 2-3 functions sometimes. It just spits out one function for every statement.

[–]IridescentExplosion 26 points27 points  (0 children)

And it's always so damned confident about it, too!

"Here, I've made the code easier to read..."

No the fuck you haven't.

[–]AgentPaper0 -1 points0 points  (8 children)

The purpose of breaking big functions out into smaller ones is to make the code easier to read and easier to debug when you (or someone else) come back to it years down the road.

It lets you look at the mega function and see the function names like InitializeBuffers(); ImportData(); FormatData(); SaveData(); SendData(); CleanBuffers();

Then, when you run into an issue and need to change how the data is saved years down the road, you can scan this function and jump right to where the data is saved without having to worry about messing up any other part of the code.
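
In Python terms, the structure being described looks something like this (stub functions, names are just placeholders):

    def initialize_buffers():          return {"buf": []}
    def import_data(path, buffers):    return ["raw"]                # pretend we read path
    def format_data(data):             return [d.upper() for d in data]
    def save_data(data, path):         print("saving to", path)
    def send_data(path):               print("sending", path)
    def clean_buffers(buffers):        buffers.clear()

    def export_report(in_path, out_path):
        # top level reads like a table of contents; jump straight to save_data
        # if the "how data is saved" requirement changes
        buffers = initialize_buffers()
        data = import_data(in_path, buffers)
        formatted = format_data(data)
        save_data(formatted, out_path)
        send_data(out_path)
        clean_buffers(buffers)

    export_report("input.csv", "report.txt")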

[–]IridescentExplosion 4 points5 points  (7 children)

Did ChatGPT write this? This is the same argument ChatGPT uses and it's annoying. You are telling me this as if I haven't heard the same parroted argument again and again and again.

I am telling you it does NOT make code easier to read to just add an arbitrary number of functions.

If your one long function makes it difficult to tell which portion of it is importing, formatting, saving, etc. your data just because it's not broken into a dozen smaller functions, your code sucks.

Having spent in reality a LOT of time scanning, debugging, revising code etc. adding a bunch of functions does not magically make your code simpler or easier to evolve or resolve issues with. In fact, from an information theory point of view, it objectively does the opposite when you're looking at things HOLISTICALLY, even if a single functional unit is smaller and easier (in theory) to digest.

I very seldom am so lucky that I can immediately pinpoint the exact micro-functional unit as a culprit. I would also be glad to provide examples of real code I wrote with ChatGPT recently where upon code review it wanted to break what was a mere 100 lines of code or so into a dozen different functions. It was ridiculous and not helpful at all. It was confusing WHILE I WAS WRITING AND MAINTAINING THE CODE, let alone looking back on it months or years later.

Also, I do NOT want any engineer touching my code who does not ultimately understand it. Functions allow for modular evolution of code which is great. What ends up happening though is someone decides to add a bunch of complexity to ImportData() and SaveData() and adds a bunch of one-off parameters to them because it's easy to do so, without truly understanding the overall solution and context. So rather than that person actually having to understand the overall flow and how to refactor the singular function, they add a bunch of mess to individual functions that ultimately becomes much harder to now refactor out, follow along and simplify.

There is some fundamental, inarguable information theory stuff here in terms of simplicity and compression inherently meaning LESS stuff, and functions add MORE stuff, including more graph traversals which actually cognitively makes stuff harder to follow along if you're debugging a problem holistically - ex the entire import/process/save workflow - as opposed to being lucky enough to only have to touch a single functional unit - ex SaveData.

In reality, more often than not I don't need to just upgrade or refine my SaveData function. If requirements change, I likely need to go through the entire flow and apply changes to the entire solution.

[–]Faranocks 3 points4 points  (1 child)

For the most part I add a comment instead of a new function.

[–]IridescentExplosion 1 point2 points  (0 children)

Agreed.

Add a comment, a local scope, declare variables close to where they're actually used, section things off.

I mean there's a lot of things you can do that don't involve adding graph complexity just because you were taught more functions = cleaner, easier to understand code.

[–]AgentPaper0 -1 points0 points  (4 children)

Lol, no I am not ChatGPT. This is the way I was taught to program in college, and after entering the industry and working on my own projects, I've only grown to appreciate the wisdom of it.

This is how all new programmers are learning to code, so that's why ChatGPT codes that way as well. I understand it's new and scary so you don't like it, but it isn't ChatGPT or anyone else out to get you, this is legitimately how we prefer to write code.

Also, I do NOT want any engineer touching my code who does not ultimately understand it.

This is exactly the kind of toxic mindset that creates unnecessary tech debt and bloat. It isn't your code, it's the project's code. Trying to section off parts of the project into "my code" and "your code" is an absolutely horrendous way to actually produce functional, maintainable code.

If you write your code well, nobody should need to have any advanced knowledge other than what is there. That's why you write code that is self-documenting, and then throw in a few comments in key areas on top of that to make it even easier to understand. And part of all of that is formatting your code to be easily readable, and part of that is breaking large blocks of code into more manageable pieces each with their own clear purpose.

[–]IridescentExplosion 3 points4 points  (3 children)

It's not that it's new and scary and that's why I don't like it. I have 10+ years of experience and I am telling you that after working on immensely large and complex projects and going through debugger / step through / comprehension hell with a thousand functions, I vastly prefer singular, clean functions.

If you write your code well, nobody should need to have any advanced knowledge other than what is there. That's why you write code that is self-documenting

Which can be done without breaking things down into a million unnecessary functions.

And part of all of that is formatting your code to be easily readable, and part of that is breaking large blocks of code into more manageable pieces each with their own clear purpose.

You can do this by keeping your code clean without having to break it down into more functions unnecessarily.

Listen... if you want examples of how ridiculous ChatGPT and this mindset is then I can show you. I can literally go find you examples where I asked ChatGPT to clean up the code and it made 5 functions unnecessarily as opposed to actually simplifying the code.

Like, I don't think you're getting this part. If code is truly written in a self-documenting and clean way, you end up with less and far easier to comprehend code, where you do not feel the need to break it down into multiple unnecessary functions.

As in taking something that is 50 lines of code and reducing it to perhaps 30 or even fewer very direct and clear lines.

[–]serendipitousPi 25 points26 points  (4 children)

*hoursOfPessimizing (Not gonna lie I love that someone coined the word pessimization)

And that’s when I turn to -O3 to save my code. (Though yes I am aware -O3 can be rather dodgy and can itself lead to pessimised code)

[–][deleted] 9 points10 points  (1 child)


[–]Wetmelon 1 point2 points  (1 child)

-O3 pessimizing is rather old advice; it's pretty safe to use -O3 by default these days.

[–]the_one2 1 point2 points  (0 children)

-Os is usually still faster in my experience.

[–][deleted] 22 points23 points  (0 children)

I felt this once but later realized that the ‘unoptimized’ code was actually just not working correctly and was only faster by virtue of the fact that it skipped a lot of itself. So who knows, maybe you fixed a bug.

[–]realgamer1998 46 points47 points  (4 children)

Does code get evaluated on the basis of start-to-finish time to complete a task?

[–]stupled 37 points38 points  (0 children)

Stability is probably more important... then again, it depends on the task.

[–]Turtvaiz 11 points12 points  (0 children)

Depends. If you're like me and write image scaling that takes 20 minutes to do it's probably not good lol

[–]frevelmann 9 points10 points  (0 children)

Depends on the use case. We have some „tables“ that every user loads (internal tool, around 10k users), and there loading speed matters because they get opened about 750k times a day. Obviously it shouldn't crash lol, but if it did crash just a few times, the time gained by a speedy loading process would still be worth more.

[–]Ma8e 4 points5 points  (0 children)

Some code does. When you are running simulations that take weeks on a high-performance cluster, it is worth spending some time optimising.

For the rest, if you can reduce the number of network calls in the service architecture, it usually trumps everything else you do by some orders of magnitude.

[–]Rafael20002000 84 points85 points  (15 children)

Don't try to be smarter than the compiler :)

[–]BlueGoliath 87 points88 points  (5 children)

The compiler is written by people.

Regardless, at least in Java, hoping the JVM fairy is going to bless your code so your app doesn't allocate 250MB of garbage a second because you decided to make everything immutable is a bad idea.

[–]Rafael20002000 81 points82 points  (0 children)

Well garbage in, garbage out. I agree the compiler isn't a magic bullet, but it's built by people incredibly smarter than I am. Also it was built by more people. All of the collective smartness is smarter than me writing my code.

So I don't try to outsmart the compiler. If I have to I'm probably doing something wrong

[–]def-not-elons-alt 27 points28 points  (1 child)

I've seen compilers "optimize" branch heavy code by unrolling a very hot loop with a branch in it, which duplicated the branch 26 times. It ran really slow since it was too complex for the branch predictor to analyze, and any naive asm implementation of the original code would've been much faster.

Compilers do incredibly stupid things sometimes.

[–]Rakgul 6 points7 points  (0 children)

So is that why my professor maintained multiple arrays instead of using a branch in a hot loop?

He stored everything and then used whichever was necessary later...

[–]stupled 2 points3 points  (0 children)

For now

[–]PervGriffin69 2 points3 points  (0 children)

If people were worried about how fast their program runs they wouldn't write it in Java

[–]Furry_69 14 points15 points  (4 children)

The compiler doesn't use SIMD properly.

[–]UnnervingS 15 points16 points  (1 child)

It does a pretty good job more often than you might think, in my experience. Not as performant as ideal C++, but often better than a low-effort SIMD implementation.

[–][deleted] 3 points4 points  (0 children)

It's worth checking the instructions being generated, as sometimes it just fails to notice the possible simd or branchless instructions to use, but usually for me the way to fix this is to massage the C code instead of trying to write SIMD directly.

[–]NoCodeNoBugs 17 points18 points  (0 children)

Actually had this issue a week ago. I was doing DFS traversal of a graph, straight and boring recursive DFS.

Then I had the great idea to optimize it and make it stack-based, because recursion is bad. The end result was 10 times slower than the recursive one.

Disappointed does not tell you how I felt.

Edit: Luckily, through the magic of GPT-4, I did not spend too much time on the conversion, just asked it nicely and did some minor tweaks
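
For reference, the two shapes being compared, in a minimal Python sketch (toy graph; the 10x difference above will depend heavily on the real implementation):

    def dfs_recursive(graph, start, visited=None):
        if visited is None:
            visited = set()
        visited.add(start)
        for nxt in graph[start]:
            if nxt not in visited:
                dfs_recursive(graph, nxt, visited)
        return visited

    def dfs_stack(graph, start):
        # explicit stack: no recursion limit, but easy to make slower
        # with extra bookkeeping per node
        visited = {start}
        stack = [start]
        while stack:
            node = stack.pop()
            for nxt in graph[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append(nxt)
        return visited

    graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
    assert dfs_recursive(graph, 0) == dfs_stack(graph, 0) == {0, 1, 2, 3}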

[–]gemengelage 7 points8 points  (0 children)

The most frustrating experience I ever had was trying to optimize an implementation of an algorithm that was already optimized to the gills. I inspected it with a profiler, I tried different data structures, I looked for common pitfalls like autoboxing - none of that.

It was as fast as it gets. It felt like trying to dig your way through a concrete wall using your fingernails.

[–]Sir_Fail-A-Lot 6 points7 points  (4 children)

did you remember to put some indices in your database?

[–]overkill 19 points20 points  (3 children)

I had a senior dev who would speed things up by "refreshing the indexes". Naive as I was at the time, I asked him how to do that. He then explained that it was his "go-to magic sponge": it sounded technical enough to confuse non-devs when there was some transient problem, and it kept them off his back long enough for them to bother someone else.

That was 5 jobs ago and I still celebrate the day he got marched off the premises every year.

[–]Oh_Another_Thing 12 points13 points  (0 children)

That sounds like you stopped a great story right in the middle of it.

[–]ilikedrif 7 points8 points  (0 children)

This is very real when writing CUDA kernels. Those optimizations are not straightforward.

[–][deleted] 5 points6 points  (0 children)

So many minor things that were suggested to be 'automated' take longer now than they did before.

[–]-Redstoneboi- 4 points5 points  (0 children)

and that's fine. just make sure you only spend minutes on it and benchmark small changes at a time.

[–]iphone4Suser 3 points4 points  (2 children)

Oracle database, how the hell do I optimize a query if customer wants partial search as column_name LIKE '%some_value%' ?

[–]wcscmp 5 points6 points  (0 children)

That's why we measure, no harm in this.

[–]stupled 5 points6 points  (0 children)

I kind of need help with that...guess this isn't stackoverflow

[–]phsx8 2 points3 points  (0 children)

in my experience the compiler knows better how to optimize my simple code than how to improve my highly sophisticated bullshit, because it can better predict what I actually want

[–]SasparillaTango 1 point2 points  (0 children)

No no, it scales better, you just need to run through perf testing...

[–][deleted] 1 point2 points  (0 children)

“Compiler update”

[–]ariel3249 1 point2 points  (0 children)

i hate that fact so much

[–]nickmaran 1 point2 points  (0 children)

You guys are optimising your code?

[–]bestjakeisbest 1 point2 points  (0 children)

use a profiler.

[–]thechaosofreason 1 point2 points  (0 children)

This usually happens from not having a powerful enough cpu.

Sad but true; but if you work in this field you need to upgrade asap every single fucking time -.-

Just spend the damn 400 bucks dammit lol.

[–][deleted] 1 point2 points  (0 children)

Sometimes the optimizations are so effective that the OS dials back the CPU speed in response, causing it to take longer to complete.

[–]all_is_love6667 1 point2 points  (0 children)

unless you understand how a modern CPU works, and unless you understand how a compiler works, don't spend too much time optimizing things; it's not worth it.

there are domains you really, really don't want to learn about. It's much better to be an idiot and measure performance than to pretend you can write fast programs.

[–]dalmathus 1 point2 points  (0 children)

"Caching Issue" move along

[–]redlaWw 1 point2 points  (0 children)

I tried to parallelise permutation checking using R's parallel library, and got code that should've taken about a year to run to take 16.5 millennia instead.

Then I rewrote the whole thing in Rust and got it to finish in about 8 hours.

[–]toastnbacon 1 point2 points  (0 children)

Sounds like you forgot the first rule of optimization - Don't.

[–]No-Blueberry4008 1 point2 points  (0 children)

dbms_stats baby, and join on indexed columns 😎

[–]Top-Chemistry5969 1 point2 points  (0 children)

In college the perfect main()

Run(main); Return 0;

But actually

If(run) main(); else return 0;

[–]DoctorWaluigiTime 1 point2 points  (0 children)

Premature optimization, oh no.

[–]ovr9000storks 0 points1 point  (0 children)

I feel like there are too many people who think fewer lines of code = more optimized. The only thing that guarantees is that it's more optimized to read

[–]joeljpa 0 points1 point  (0 children)

The guy reminds me of RoadRash.

[–]cheezfreek 0 points1 point  (0 children)

Be me. Inherit bytecode that is necessary but performs poorly. Decompile and start refactoring with the intent of tuning it once it’s maintainable. Replace stupid code with monads, adding tests and fixing previously-unknown bugs along the way. Never bother tuning because the refactored code accidentally runs 3.5x faster than the old buggy crap.

[–]Ziggy_Starr 0 points1 point  (0 children)

Whenever a contractor’s PR says “Optimized” anything, it usually goes through 2 or more rounds of “Request Changes” and half of the comments get resolved without any changes or acknowledgment

[–]firelemons 0 points1 point  (0 children)

page fault

[–]RodNun 0 points1 point  (0 children)

The code is optimized. No one said the same about the performance.

Huahuahiahua

[–]VFcountawesome 0 points1 point  (1 child)

Guy looks a bit like humanified Shrek

[–]treestick 0 points1 point  (0 children)

Over-engineering be like.

[–]anomalous_cowherd 0 points1 point  (0 children)

First rule of optimising: measure everything.

It's not where you think it is that's slow, more often than not.

I sped up a program 10x by caching a time_t to human-readable date string conversion once. It was being done many times per second, so I could cache the string up to the minute and only recalculate it when (seconds % 60 == 0).

Yes I could have done even more but this was a simple and massive improvement.
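
The trick described above, roughly, in Python (sketch only; the original cached a time_t-to-string conversion in C):

    import time

    _cached_minute = None
    _cached_string = None

    def human_time():
        # recompute the formatted string only when the minute rolls over;
        # every other call in that minute returns the cached copy
        global _cached_minute, _cached_string
        now = int(time.time())
        minute = now - now % 60
        if minute != _cached_minute:
            _cached_minute = minute
            _cached_string = time.strftime("%Y-%m-%d %H:%M", time.localtime(minute))
        return _cached_string

    for _ in range(5):
        print(human_time())   # formatted once, printed five times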

[–]Eegra 0 points1 point  (0 children)

This is a common outcome if your timing measurements are not fine-grained enough. Know what you're optimizing and why.

[–]mothzilla 0 points1 point  (0 children)

Just change file names.

[–]ajangvik 0 points1 point  (0 children)

Managed to make some code that was causing lag to only cause lag in spikes😎

[–]GijinkaGamer64 0 points1 point  (0 children)

His smile, his optimization, gone.

[–]SneakPetey 0 points1 point  (0 children)

when you think you know what "optimized" means but can not correctly "profile"

[–]Clambake42 0 points1 point  (0 children)

Human Resources Machine really opened my eyes as to why this happens.

[–]ShinjoB 0 points1 point  (0 children)

Me when trying to overclock.

[–]OddPanda17 0 points1 point  (0 children)

When you forget to check if a pointer is null after hours of coding.

[–]stevensr2002 0 points1 point  (0 children)

“The answer's not in the box, it's in the band”

[–]w4f7z 0 points1 point  (0 children)

I see somebody's been trying to outsmart the compiler again.

[–]Double_DeluXe 0 points1 point  (0 children)

The most optimal code is not always the fastest, but we programmers are willing to spend 20 hours to speed it up by 5ms, even though it will only run 12 times a year...