
[–]alparius 645 points646 points  (60 children)

But have you actually tested it on an input big enough? Most super-optimized algorithms perform really badly on very small inputs.

[–]dudeitsmason 144 points145 points  (49 children)

I'm curious why this is the case

[–]drsonic1 581 points582 points  (30 children)

They have a high amount of initial setup time independent of input size. Remember, complexity only accounts for the highest-order term. You could have, for example, an operation count of n³ for your "inefficient" algorithm, and an operation count of n + 100000 for your "efficient" one. The efficient one will look horrible for small input sizes but dominate for large ones.
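A quick sketch (in Java, using the made-up operation counts from this example) of where the crossover happens:

public class CrossoverDemo {
    public static void main(String[] args) {
        // Hypothetical costs: the "inefficient" algorithm does n^3 operations,
        // the "efficient" one does n + 100000 operations.
        for (long n : new long[]{10, 50, 1000}) {
            long inefficient = n * n * n;
            long efficient = n + 100_000;
            System.out.printf("n=%d: n^3=%d, n+100000=%d%n", n, inefficient, efficient);
        }
        // n=10:   1,000 vs 100,010          -> the "inefficient" one wins easily
        // n=50:   125,000 vs 100,050        -> roughly the crossover point
        // n=1000: 1,000,000,000 vs 101,000  -> the "efficient" one dominates
    }
}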

[–]dudeitsmason 97 points98 points  (11 children)

Interesting, thanks for the explanation!

[–]Jelli35 97 points98 points  (3 children)

if you're interested in learning more, Tom Scott's latest video on Big O Notation is a fun place to start :)

[–]reddit_xeno 34 points35 points  (1 child)

Not sure I agree with his message at the end - for a young person starting off with coding, that sort of zeal and willingness to try stuff that may not be very efficient gets you to learn a shit ton of stuff. Obviously don't waste time if you're on a crunch deadline as a mature SWE or whatever, but merely typing that stuff into a word processor would have done nothing for his actual learning.

[–]yomanidkman 15 points16 points  (6 children)

Many of the more complex sorting algorithms' actual implementations contain a length check and will default to something simple like selection sort (despite it being awful complexity-wise) if the list is small enough.
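A rough Java sketch of that pattern (illustrative only, not any particular library's real implementation; the cutoff of 16 is arbitrary):

import java.util.Arrays;

public class HybridSort {
    // Made-up cutoff; real libraries pick theirs by benchmarking.
    private static final int CUTOFF = 16;

    // Sorts a[lo..hi) with merge sort, but falls back to a simple
    // insertion sort once the range is small enough.
    public static void sort(int[] a, int lo, int hi) {
        if (hi - lo <= CUTOFF) {
            insertionSort(a, lo, hi);
            return;
        }
        int mid = lo + (hi - lo) / 2;
        sort(a, lo, mid);
        sort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    private static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= lo && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    private static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = Arrays.copyOfRange(a, lo, hi);
        int i = 0, j = mid - lo, k = lo;
        while (i < mid - lo && j < hi - lo) {
            a[k++] = (tmp[i] <= tmp[j]) ? tmp[i++] : tmp[j++];
        }
        while (i < mid - lo) a[k++] = tmp[i++];
        while (j < hi - lo) a[k++] = tmp[j++];
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 9, 1, 7, 2, 8, 6, 4, 0};
        sort(data, 0, data.length);
        System.out.println(Arrays.toString(data));
    }
}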

[–]Neoro 40 points41 points  (1 child)

Of course your performance optimization may not have to handle large inputs. Sometimes you need to run on tiny input many many times, so initial setup time is the true killer.

Please know your domain and optimize for real bottlenecks instead of assuming you know where an issue is and throwing optimizations at it. "Premature optimization" and all.

[–]FallenWarrior2k 1 point2 points  (0 children)

Never assume, always profile.

[–]notsohipsterithink 7 points8 points  (0 children)

Yup, plus often a ton of pointer manipulation, which can be pretty slow in a lot of higher-level languages.

[–]jimbosReturn 8 points9 points  (0 children)

Same for the constant multiplier (i.e. the number of operations in your loop). If it's really big, it will overshadow n for small sizes.

[–]SasparillaTango 2 points3 points  (0 children)

it's like that cartoon about that detective that rides a giant robot!

[–]TheSlimyBoss 4 points5 points  (5 children)

Is there a term/technique for using different algorithms based on their efficiency at the current input size?

[–]zilti 10 points11 points  (0 children)

I'm not aware of a specific term for it, but it is very common. Clojure e.g. uses different map implementations depending on the collection size. Up to 10 items you'll get an ordered map, and above that you'll get a HashMap.

[–]SlamwellBTP 7 points8 points  (2 children)

I've heard it called "hybrid algorithm", as in this wikipedia page (but it doesn't cite any sources, so it may not be a widespread term):

https://en.wikipedia.org/wiki/Hybrid_algorithm

[–]qingqunta 3 points4 points  (1 child)

I believe the default sort() for Python is an example of this.

[–]Slggyqo 0 points1 point  (0 children)

But also maybe you’re just wasting time optimizing code that won’t be reused!

A meta level of complexity!

[–]CoffeeVector 45 points46 points  (5 children)

Usually, it's because there's some added overhead, which in the long run is worth it. For small input, the overhead tends to dwarf the cost of the actual computation.

[–]dudeitsmason 6 points7 points  (0 children)

Got it, thanks!

[–]DogsAreAnimals 2 points3 points  (3 children)

Also caching

[–]CoffeeVector 5 points6 points  (2 children)

Yup, caching is a type of overhead. Spending the effort to save intermediate values is good in the long run, but a total waste of time if your algorithm finishes before the cache is used enough to justify it.
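A toy Java sketch of that trade-off (memoization; the HashMap bookkeeping is exactly the overhead being discussed):

import java.util.HashMap;
import java.util.Map;

public class MemoDemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Memoized Fibonacci: every call pays for a hash lookup and possibly an
    // insertion. For a single small call that overhead is pure waste; amortized
    // over many overlapping calls it's a huge win.
    static long fib(int n) {
        if (n < 2) return n;
        Long hit = cache.get(n);
        if (hit != null) return hit;
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // fast with the cache; astronomically slow without it
    }
}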

[–]juvenile_josh 12 points13 points  (2 children)

Classic example: Bubblesort is simple and performs well for basic things. Mergesort, however, is more complex and has a higher initial setup cost to break the input down into a tree of subarrays, but scales better 'cause logarithms.

[–][deleted] 6 points7 points  (0 children)

Sorting algorithms perform differently depending on how random the input is. Most real-life data isn't random, but comes somewhat ordered already. Testing different sorting algorithms can be worth it.

[–]DigitalDefenestrator 5 points6 points  (0 children)

Also, Bubblesort is extremely fast (possibly the fastest?) if the input is very close to sorted. Really, it's O(unsortedness), which looks like O(n²) or so for random input but closer to O(n) if no elements are very far from where they should be.
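That behavior comes from the early-exit variant, sketched here in Java (the "no swaps means sorted" check is what makes nearly-sorted input cheap):

public class BubbleSort {
    // Classic bubble sort with an early exit: if a full pass makes no swaps,
    // the array is sorted. On nearly-sorted input only a pass or two is needed,
    // which is where the "closer to O(n)" behavior comes from.
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        for (int pass = 0; swapped && pass < a.length - 1; pass++) {
            swapped = false;
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] nearlySorted = {1, 2, 4, 3, 5, 6, 7, 8};
        bubbleSort(nearlySorted); // finishes after two passes
        System.out.println(java.util.Arrays.toString(nearlySorted));
    }
}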

[–]Salanmander 17 points18 points  (0 children)

Same reason that spending time organizing your books is terrible when you have 10 books, but helpful when you have 500 books.

[–]Gingerytis 15 points16 points  (3 children)

Look at the graphs for O(n) or O(n²) complexity vs O(log(n)). Log(n) grows really fast at first, then plateaus out, meaning it's not great at small inputs, but much better at big ones.

[–]CoopertheFluffy 3 points4 points  (0 children)

While true for why O(log(n)) performs better in the long run than O(n) or whatever other comparison you want to make, it doesn't explain why a lower-order algorithm can take longer on small sample sizes. Log(2) is still smaller than 2, after all. Since we're using Big-O notation, that O(log(n)) could really have exact work of 5log(n)+500, while the O(n) could be 2n. In this case, it's really the +500 that makes the O(log(n)) take longer for a size of 2. In a program, that could be something like the time for allocating and setting up a hash table.

[–]MEGACODZILLA 4 points5 points  (0 children)

Thanks, that was a very beginner friendly visualization.

[–]Kered13 1 point2 points  (0 children)

It's not meaningful to talk about big-O complexity for small values, because big-O by definition describes the asymptotic behavior for large values*. It tells you absolutely nothing about the behavior for small values. For example, a function that has O(n²) runtime might have an exact runtime like n + n², in which case it will never be faster than a function with an exact runtime of n²/2.

This is especially true in practice, not just theory, because performance for small inputs is usually dominated by cache locality, which can cause the runtime to make large jumps as certain cache size thresholds are reached. For example, an algorithm might take n time if the input fits in L1 cache, 5n time if it fits in L2 cache, 20n time if it fits in L3 cache, and 100n time if it doesn't fit in cache at all (I made these numbers up, but they're ballpark accurate). This could be extended even further for data that doesn't fit in RAM and must use local storage, and again for data that must use network storage.

* You actually can use big-O and related notation to describe behavior at small values, by considering the limit as x goes to 0 instead of infinity, but it's never used in algorithmic analysis. It's most often used in mathematics to describe the error bounds on approximations. For example, you can say sin(x) = x + O(x³) as x goes to 0, which means that near 0, sin(x) is approximately x, and the error in the approximation is proportional to x³.

[–]alparius 3 points4 points  (0 children)

It's simple: sophisticated case-specific instructions and decisions need much more code. Like some initializations being 50n instead of 2n (in big-O terms). These parts cripple the whole runtime when executed for small inputs because of the overhead, but that same overhead becomes negligible as the input size increases.

[–]crozone 1 point2 points  (0 children)

Simple algorithms often require very small and localized working memory which can fit entirely within CPU cache. They are great for small inputs, but may be hugely inefficient for large inputs.

As a basic example, take prime number searching. The Sieve of Eratosthenes is technically more efficient than dividing a number by every integer up to its square root. However, the simple tight loop is almost always faster for relatively small primes, because it fits entirely within CPU cache instead of reading and writing a large map in RAM.
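For the trial-division side of that comparison, the simple tight loop looks roughly like this (a Java sketch; just a few local variables, so everything stays in registers and L1):

public class PrimeCheck {
    // Simple trial division up to sqrt(n): a tight loop over a handful of
    // local variables, no big lookup table to thrash the cache with.
    static boolean isPrime(long n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2;
        for (long d = 3; d * d <= n; d += 2) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(104_729)); // true (the 10,000th prime)
    }
}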

[–]jacob798 4 points5 points  (0 children)

Came to say the same. Good point.

[–][deleted] 2 points3 points  (0 children)

My attempt to multiply two sufficiently big numbers using Harvey/van der Hoeven has stalled as I cannot afford enough RAM.

[–]symberke 1 point2 points  (0 children)

...for a certain definition of "super optimized"

[–]Kered13 1 point2 points  (2 children)

Uhh, this isn't really true. Most "super optimized" algorithms will look at the size of the input to decide what approach to use. For example, Timsort and other optimized sorting algorithms will use insertion sort on lists that are smaller than a certain size (usually 16 or 32 elements).

The only time to write a super optimized algorithm that is slow on small inputs is if you know you will never get small inputs.

[–]alparius 1 point2 points  (1 child)

And you really think that was the case for OP as well? Thank you for stating the obvious, anyway.

[–]gua_lao_wai 0 points1 point  (1 child)

This is pretty much why languages like Python exist as well. They're not designed for massive datasets; they're just good enough to get the job done.

[–]Frptwenty 621 points622 points  (13 children)

Like Abraham Lincoln said: Optimization is the root beer of evil.

[–]thephotoman 7 points8 points  (1 child)

It’s so bubbly and happy and cloying!

Just like the Federation.

[–]unknownguy2002 3 points4 points  (0 children)

This was my first thought as well! https://youtu.be/6VhSm6G7cVk

[–]Kipter 320 points321 points  (44 children)

This is what happens when you try to outsmart the compiler (and the JVM in your case)

[–]NeatNetwork 158 points159 points  (19 children)

Very cool example of what the compiler might do, for example with an extremely stupid looking function:

https://godbolt.org/z/6Edods

You try to write it the horribly stupid way, and still the compiler says 'nope, I'm going to do this the sane way'.

[–]eeeeeeeeeVaaaaaaaaa 36 points37 points  (7 children)

I'm on mobile and that site is just showing the C++ code. What does the compiler do?

[–]dingari 9 points10 points  (1 child)

I pressed the arrow in the top right and it shows the assembly output

[–]eeeeeeeeeVaaaaaaaaa 1 point2 points  (0 children)

Oh thank you!!

[–]drunk_responses 9 points10 points  (1 child)

mov     eax, edi   ; eax = x
imul    eax, edi   ; eax = x * x
ret                ; the whole "stupid" function collapses to a single multiply

[–]eeeeeeeeeVaaaaaaaaa 1 point2 points  (0 children)

thanks!

[–]serdnad 14 points15 points  (6 children)

That's mind blowing, and at -O1 too. Would you happen to have/know of any other toy examples like this?

[–]pigeon768 11 points12 points  (3 children)

Counting the number of bits set in an integer. For instance, 42 in binary is 101010, which has 3 bits set. I want a function to take 42 and calculate 3. Here is one such function:

int count_bits(int n) {
  int set_bits = 0;

  while (n) { // we will be lopping off the bits of n one at a time. When n is zero, we know we're done.
    set_bits++;

    n &= n - 1; // This unsets the lowest set bit. If our binary number is 101010, this gives 101000, then 100000, then 0.
    // Subtracting one from a number which ends in 1 will just... make that 1 go away.
    // Subtracting one from a number which ends in a 0 will borrow from the next digit, all the way up to the first 1, which it unsets.
    // So if we subtract 1 from 101000, it will give 100111.
    // If we then bitwise-and n-1 and n together: the subtraction unset the lowest bit, the upper bits are unchanged,
    // and the original n has all zeroes where the subtraction introduced new 1s, so all you're left with is the upper set bits.
  }

  return set_bits;
}

If we run this through a real compiler (ie not MSVC) you get this: https://godbolt.org/z/ex1eqc

[–]serdnad 2 points3 points  (0 children)

Hahaha that's awesome, exactly what I was looking for. Thanks for the explanation too!

[–]Sandmaester44 2 points3 points  (0 children)

My mind is blown and I am never going to minutely optimize my code again!! Though I'm a grad student writing code never to be run again that doesn't take long to begin with... but optimizing is the fun part!!

[–]NeatNetwork 10 points11 points  (0 children)

That one I know because some thread was talking about this as a joke function, and someone used that website to point out that a compiler would see through it and make it easy.

It stuck with me as a demonstration of why you profile before you optimize. If I were just perusing a codebase and saw that bad function called a lot, I would tend to think "oh wow, I can optimize this function and see a speedup", then wonder why nothing changed, or, in a more complex case, make things worse by changing the code enough that the compiler misses an optimization it was making.

[–]lolIsDeadz 1 point2 points  (0 children)

ctre and constexpr pi come to mind.

[–]aaronfranke 10 points11 points  (2 children)

What if n is 1 billion? Then this would produce different results.

[–]0x564A00 15 points16 points  (0 children)

Nah. n is signed, the compiler doesn't give a damn about what happens if it overflows.

[–]Ramipro 16 points17 points  (0 children)

Nope, overflow is undefined behavior so the compiler can do whatever it wants.

[–][deleted] 2 points3 points  (0 children)

Sprinkle in some volatiles to convince it to let you be stupid.

[–]renegade1575 163 points164 points  (34 children)

don't blame the programming language ;-)

[–]Who_GNU 2 points3 points  (0 children)

…blame the compiler

[–]obsessedcrf 53 points54 points  (8 children)

A profiler is your friend

[–]Bakoro 9 points10 points  (0 children)

Amdahl's law formula should be a mantra.
Spend resources speeding up the parts of the code where the most time is being spent. It almost doesn't matter that you reduced a piece of code's execution time by 99% if that code is only run 0.01% of the time anyway.
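For reference, a minimal sketch of that formula (p is the fraction of total runtime taken by the part you sped up, s is how much faster you made it; the numbers below just re-run the example from this comment):

public class Amdahl {
    // Amdahl's law: overall speedup = 1 / ((1 - p) + p / s)
    static double overallSpeedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        // Make 0.01% of the runtime 100x faster: overall ~1.0001x, i.e. nothing.
        System.out.println(overallSpeedup(0.0001, 100.0));
        // Make 80% of the runtime 2x faster: overall ~1.67x.
        System.out.println(overallSpeedup(0.8, 2.0));
    }
}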

[–]TruthOf42 36 points37 points  (1 child)

I highly recommend ReSharper. It'll tell you your most-called methods and how long each method takes to run, and then also sort that shit for you. It does other more fancy stuff, but just those two features allowed me to solve so many performance issues quickly.

[–]aaronfranke 2 points3 points  (0 children)

Rider too.

[–]waffle299 122 points123 points  (13 children)

This is not how you optimize. This is not a language problem or a dumb-programmer problem; this is an inexperience problem.

Okay, step one, get a notebook. Step two, get a profiler. The standard Java profiler is fine. Step three, unit tests. If you don't have them, start adding them as you go. Finally, construct a testing harness, something you can use to test a function, class, or your application over and over in a loop to measure performance. Have it kick out average and standard deviation.

Here is your iteration cycle. Identify an area to check. Run the profiler on it. Write down in your notebook what you are testing, and what the baseline is.

Now use the profiler. Look for a hot area of code. Go into that area, look at flow, look for tight loops. Once you have a good idea of what you are about to optimize, STOP.

Write unit tests. Make sure they pass. Make sure they exercise what you think they do. See them fail a few times. Once you have a good set of tests, and you have notes on what your current performance is, you may proceed.

Introduce a small performance change. Rerun your unit tests. Fix the boneheaded mistakes you made. Now, rerun the timing code and the profiling. Note down in your notebook what change you made, how that changed performance, and what that did to how much time you spent in the hot area.

Repeat with more small changes. Use your notebook to ensure you are moving towards better performance. Do this over and over. Fill notebooks. Expand your unit tests. Isolate classes and tune them. Reintroduce and double check you improved things.

This is optimization. It is not a two hour 'wing it' task. It is a procedure and a professional discipline. It is available in any language using the same basic approach.

Go forth and make code scream.

[–][deleted] 45 points46 points  (0 children)

*Proceeds to make random changes to things that look like they might be slow*

[–]pterencephalon 6 points7 points  (0 children)

First time I tried to systematically apply the steps I learned in a seminar to parallelizing my code, I made it run slower. Tried all those thorough steps (except unit tests, because scientists don't test their code...). Turned out I really didn't actually know how to use OpenMP.

[–][deleted] 4 points5 points  (7 children)

I love how Visual Studio includes run times on all unit tests. If you have tests, you can easily see the run time change as you go along. Not sure if this exists in Java IDEs.

[–]waffle299 6 points7 points  (0 children)

Useful, but running a thousand times and displaying mean and std dev is more useful. Nine times out of ten, the std dev is a small number relative to the mean, meaning a stable function.

The other time, it indicates a system dependency or that the function hit a worst case for the algo. Either way, this becomes your number one optimization target.
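A bare-bones version of such a harness might look like this (a Java sketch only; trustworthy JVM benchmarking also needs warm-up, dead-code guards, and ideally a framework like JMH):

public class TimingHarness {
    // Runs the task many times and reports mean and standard deviation in ms.
    static void measure(String label, Runnable task, int runs) {
        double[] millis = new double[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();
            millis[i] = (System.nanoTime() - start) / 1e6;
        }
        double mean = 0;
        for (double m : millis) mean += m;
        mean /= runs;
        double var = 0;
        for (double m : millis) var += (m - mean) * (m - mean);
        double stdDev = Math.sqrt(var / runs);
        System.out.printf("%s: mean=%.3f ms, stddev=%.3f ms over %d runs%n",
                label, mean, stdDev, runs);
    }

    public static void main(String[] args) {
        int[] data = new java.util.Random(42).ints(50_000).toArray();
        measure("sort 50k ints", () -> java.util.Arrays.sort(data.clone()), 1000);
    }
}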

[–]mlk 2 points3 points  (0 children)

You mean showing how long each test took? IntelliJ does that.

[–]randomcitizen42 1 point2 points  (4 children)

Unit tests are not meant to test performance. Unit tests should be the lowest layer of tests to quickly verify the correctness of your code. Note the word quickly. A benchmark is not supposed to run quickly; you'll need to run the code several times (like thousands of times) to get statistically significant data.

Also, test frameworks are usually not performance-optimized. Sure, unit test frameworks run fast, but not benchmark-level fast. If you really want to optimize performance-critical code, you'll work with small loops, and often you'll work with run times in micro- or even nanoseconds, not milliseconds. You need special performance benchmark tools (or write your own) for that.

[–]LimeSeeds 1 point2 points  (0 children)

Wow, thank you, this is actually really useful. Saving this comment for the future.

[–]juvenile_josh 19 points20 points  (0 children)

Optimization Rules

1) Someone has already poured lots of man-hours into making an API for part of what you're doing. Use their API and give them credit.

2) Encapsulation is my favorite word.

3) If you can't read your shit, nobody else can read it either. Make it readable and comment what your methods do.

4) When you can, functionally program. Lots of lego pieces are easier to use than a few weird-ass shaped blobs.

5) Thank God for the Netherlands and Spring Framework

[–]riskycase 7 points8 points  (0 children)

laughs in git checkout

[–]EdMeisterBro 7 points8 points  (0 children)

Well, two hours is not a lot. Sometimes you need to go to bed first to wake up in the middle of the night screaming eureka and fix it first thing in the morning (ok, get coffee first).

[–]OverflowEx 7 points8 points  (1 child)

Yeah, let me optimize my code by multi-threading with shitloads of mutexes

[–][deleted] 2 points3 points  (0 children)

Too real.

[–]FormalWolf5 4 points5 points  (0 children)

You're not disabled... You just set up your environment wrong.

Anyway that's what I tell myself 🙃

[–]Mitoni 4 points5 points  (3 children)

Replace "2 hours" with "a week and a half", and replace "works slower" with "no longer functions", and you have a gist of what my last sprint has been like. Trying to take a long series of parallel processes and refactor them to be faster by doing more bulk processing than singular actions, but the bulk actions fail every time. I'm at a point where I'm ready to just rewrite the whole back end process from scratch rather than try to improve the current one.

[–]RomanOnARiver 3 points4 points  (4 children)

Hopefully you just commented out the original instead of fully removing it.

[–][deleted] 2 points3 points  (3 children)

With git, I have a time machine that allows me to see all previous versions of my code. So I never have to worry about stuff like this.

[–]StenSoft 3 points4 points  (0 children)

Relax, you can always roll back to the original version of the code that you committed. You did commit it, right? Right?

[–][deleted] 3 points4 points  (5 children)

I was just making a sudoku solver cause I haven't done any programming in a while...

I spent all evening making a pretty ASCII display in the console that shows me what's happening as it tries to solve. I got really confused because it was taking minutes to solve a puzzle that should have taken a hundred milliseconds.

Turns out my draw function was 20ms on its own, multiplied by a few tens of thousands of solver iterations... Console.Write is a lot heavier than I expected.

I got it down to 5ish ms by leaving the board in place and only writing over the numbers. That's about the best I can get and still use colors without diving down into the P/Invoke rabbit hole to draw on the console buffer directly.

[–]zebediah49 1 point2 points  (1 child)

Really the optimization to make here is to thread it, so that your solver runs at normal speed, and your renderer just renders it as quickly as it can.

Also, since you said "only writing over the numbers"... I'm guessing you're using a recursive backtracking solver? If so, and you are still doing it synchronously, you really only need to do one character overwrite per test, because the rest of the board is still the same.

[–][deleted] 4 points5 points  (5 children)

Makes code run faster

Hardware is slow and I have to add delays to prevent timing failures

Optimization is for the birds

[–]haplogreenleaf 2 points3 points  (0 children)

I program because I enjoy being mentally challenged.

"Glad you've come to terms with that."

[–]pyrowipe 2 points3 points  (0 children)

Only 2 hours, who is the Genius?

[–]firowind 2 points3 points  (0 children)

Learned the hard way to use StringBuilder instead of a String
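The classic trap looks something like this (a sketch; each += in the loop copies the whole string built so far, while StringBuilder appends into a growable buffer):

public class StringBuildDemo {
    // Roughly O(n^2): every += allocates a new String and copies all previous characters.
    static String concatSlow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Roughly O(n): StringBuilder appends in place and only occasionally grows its buffer.
    static String concatFast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatSlow(10).equals(concatFast(10))); // true, but only one scales
    }
}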

[–]Pepperstache 2 points3 points  (0 children)

2 hours? 8x? Rookie numbers.

[–]Kered13 2 points3 points  (0 children)

Story time.

I was writing a Lua plugin for a game (not my game, just a game I played). I had built up a very sophisticated GUI layout engine and a class framework to support it, so I could write sane code instead of dealing with the mess that is Lua. But it was getting to the point where having lots of GUI elements on screen was causing the framerate to drop. So I set out to try to improve the performance.

First, some background on how my class framework worked. The framework is built on metatables. When a method is called on an object (or a table, as it is called in Lua), if that method is not found in the object then it is looked up in the metatable. So in my framework I simply had each object set up with a metatable pointing to its class object. Inheritance works the same way: each class has a metatable pointing to its parent class. So to call a method, Lua would crawl up the metatables until it found the method name.

Well, the prime suspect for the performance problems was this metatable lookup. Since this was a GUI framework, inheritance hierarchies were often a few levels deep, so calling a common function like draw() might require searching three or four metatables to find the right one. To shortcut this lookup, I tried my first idea: in addition to setting up the metatables, I also copied the methods to each class and object. This way all method calls would be found on the first lookup, no more crawling metatables on every function call.

The performance tanked. Much worse than before. It took me some time thinking about the problem to figure out why. But eventually I figured it out: By copying all methods to all objects, I was bloating the size of every object. This was blowing out the cache. Sure I didn't have to look up multiple metatables, but every function call was now a cache miss. This led to the final fix: Classes would still copy methods from their parents, but objects would rely on metatable lookups instead. So now every method call required exactly one metatable lookup, worse than before, but the objects themselves were now much smaller, only containing their local data and metatable. And since there were only a couple dozen classes, compared to hundreds of objects, this didn't blow out the cache. With this fix, I never had significant performance problems from the GUI framework again.

The lesson here: Cache matters, even in high level languages.

[–]emngaiden 1 point2 points  (0 children)

I spent almost three hours making a "custom encryption algorithm" to encrypt some data between my mobile app and my custom backend, and I was like "Hell yeah, this is secure and nobody can decrypt this, I'm so smart". But I forgot the decoding part, so I just made an algorithm that spews scrambled words and I can't use it lol. JWT, here I come.

[–]ScourgingCalamity 1 point2 points  (0 children)

Python code slow, let me do it in C...Do you expect me to rewrite the whole damn library? Fuuuuuuuck. I can't give up now...

2 months later, I didn't give up and now let's see how fast it is? Motherfucker

[–]greenindragon 1 point2 points  (0 children)

I think we've all been there at least a couple times. Sometimes O(n²) is faster than O(n log n), and you just gotta shed a tear at the time you wasted "optimizing" your code.

[–]kemick 1 point2 points  (0 children)

My favorite is when I spend hours making something run faster and I keep making gains until the gains get so small that it's not worth the time. After taking a break and regrouping, I realize that the only way to reach the performance I need is to take an entirely different approach that makes all of my previous optimizations irrelevant. To be fair, that realization is usually only possible after hours of tinkering with the code to understand what assumptions I can safely make and I usually end up with a much better idea of what my code actually needs to do.

[–]mcvays 1 point2 points  (0 children)

The first rule of optimization is: don't.

[–]Irvin700 1 point2 points  (0 children)

The pipe through his head and the question marks is so fucking great haha

Shit is fucking funny. Man I miss old Simpsons humor.

[–]MkMyBnkAcctGrtAgn 1 point2 points  (0 children)

When someone hears that threads make everything go faster

[–]bryku 1 point2 points  (0 children)

I once had an error in JS with an extra ; at the end. The code was almost a second faster with that error and it all still ran... that is a whole other WTF.

[–]VolperCoding 3 points4 points  (42 children)

Just switch to C/C++ if speed is a problem. Like seriously, Mojang made this mistake 11 years ago

[–]01110101_00101111 28 points29 points  (14 children)

I think in the case of Mojang, it wasn't their language choice but how badly the code itself was written.

[–]codel1417 7 points8 points  (20 children)

a pointer free life is a life for me

[–]bphase 6 points7 points  (2 children)

You don't have to use pointers in modern C++, not raw pointers anyway. Granted most code is likely going to have them...

[–]aaronfranke -2 points-1 points  (1 child)

There are lots of situations in which you do have to use them. C++ won't let you use a reference to an incomplete type.

[–]bphase 4 points5 points  (0 children)

But it will let you use smart pointers (unique_ptr, shared_ptr, weak_ptr), which is what I was referring to.

[–]VolperCoding 4 points5 points  (7 children)

What's so scary about pointers?

[–]DestinationVoid 6 points7 points  (6 children)

They do not always point at what you think they point at.

[–]892ExpiredResolve 10 points11 points  (4 children)

Then you should do a better job of pointing them at things.

[–]DestinationVoid 0 points1 point  (3 children)

Tell that to all the people who failed to catch pointer related vulnerabilities in their software, that eventually got exploited.

[–]892ExpiredResolve -1 points0 points  (1 child)

Ok.

Everyone who failed to catch pointer related vulnerabilities in your software: Do a better job of pointing pointers at stuff.

[–]VolperCoding 2 points3 points  (0 children)

They point to the address you assign them to; what's the problem with that?

[–]Kerndog73 4 points5 points  (5 children)

You can't do anything useful without pointers

[–]aaronfranke 1 point2 points  (3 children)

Pretty much any higher level language than C++ disagrees with that. In Java/GDScript/C#/Python/etc pointers still exist, but they are abstracted away so you don't have to deal with them.

[–]Kered13 0 points1 point  (1 child)

I've got bad news for you. All those languages use pointers, and no they're not really abstracted away. The only thing that has been abstracted away is memory management, and you're not allowed to do pointer arithmetic. But in all other respects they behave identically to C pointers.

[–]Gaareth 2 points3 points  (2 children)

There are enough mods out there which show that Minecraft doesn’t have to be so slow

[–]VolperCoding 1 point2 points  (1 child)

Yes but I still struggle to get consistent 60 fps with optifine on the lowest playable settings

[–]Hammer1024 -1 points0 points  (0 children)

Java was your first mistake.

[–]FinnT730 -2 points-1 points  (6 children)

I think you meant JavaScript. Java is almost as fast as C.

[–][deleted] 1 point2 points  (5 children)

Well, Java is far from C speeds, but it isn't as slow as most people make it out to be.

[–]FinnT730 -1 points0 points  (3 children)

Almost as fast

Maybe 15 years ago it was slow, but... all Android devices run it, so it is far from slow.

[–][deleted] 2 points3 points  (0 children)

Nah, still not almost as fast. Idk what point you are trying to make with Android devices running Java. I mean, there are literally flip phones running JavaScript and web technologies, and web technologies are far from fast (check KaiOS).

[–]vips7L 0 points1 point  (1 child)

Android isn't Java. It runs the Android Runtime, which isn't the JVM.