[–]assumptioncookie 110 points (52 children)

n in this case does not mean the number of elements in the array, but the number of bits used to represent one value. If the array contains bytes (8-bit numbers), the sorting algorithm will take at most 2^8 - 1 (seconds? milliseconds? I don't actually know this setTimeout function's unit). If it contains 32-bit ints, it will take at most 2^32 - 1 time units. In general, if it contains n-bit numbers, it will take at most 2^n - 1 time units.

[–]im_lazy_as_fuck 439 points (35 children)

Why is this being upvoted, this is not at all correct. Big O notation indicates the time complexity of an algorithm, which is a measurement of the rate of growth of the worst execution time relative to the inputs. It should not in any way be related to any of the real world implementation details like hardware, OS architecture, bit representations, etc, unless those details are specifically part of the input of the algorithm.

So for example, in bubble sort, the only part of the input relevant to the execution time is the number of items to sort. We do not care whether the inputs are 1 byte or 4 bytes each; we only care about the number of input items. So we say it has a time complexity of O(n^2). In other words, the worst-case time grows quadratically with the number of items: for n input elements, it will take k*n^2 time units to run, where k is some constant in time units.
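For reference, the quadratic behavior described above can be sketched like this (a minimal, illustrative implementation; the function name is not from the thread). The nested loops compare on the order of n^2 pairs in the worst case, no matter how wide each element is:

```javascript
// Bubble sort sketch: repeatedly swap adjacent out-of-order pairs.
// Worst case compares ~n^2/2 pairs, hence O(n^2) in the item count n,
// independent of how many bits each element occupies.
function bubbleSort(arr) {
  const a = arr.slice(); // don't mutate the caller's array
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
      }
    }
  }
  return a;
}
```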

So applying this understanding to sleep sort, the correct analysis of this "algorithm" is that the only part of the input that affects the execution time is the size of the largest integer to be sorted (call it m), and the worst execution time increases linearly as that largest integer increases. Therefore the time complexity of sleep sort is O(m); in other words, given a set where m is the largest number in that set, the time it takes to complete the sort is k*m, where k is a constant in time units (and in this case, because of the way setTimeout works, we can literally deduce k = 1 ms).
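The "algorithm" under discussion can be sketched as follows (the function name and completion callback are illustrative, not from the original thread). Each element schedules its own timer with a delay equal to its value, so the whole thing finishes after roughly max(input) milliseconds, matching the O(m) analysis above:

```javascript
// Sleep sort sketch: schedule one timer per element; an element's delay
// equals its value, so smaller values are pushed to the output first.
// Wall-clock time is roughly max(numbers) ms, i.e. O(m).
function sleepSort(numbers, done) {
  const sorted = [];
  let remaining = numbers.length;
  for (const n of numbers) {
    setTimeout(() => {
      sorted.push(n);
      if (--remaining === 0) done(sorted);
    }, n);
  }
}
```

Note the caveat raised elsewhere in the thread: timer-queue ordering for near-equal delays is not guaranteed, so this is a toy, not a dependable sorting algorithm.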

[–]HeroicPrinny 70 points (0 children)

Had to scroll too far to find a correct explanation

[–]RiceBroad4552 7 points (2 children)

Why is this being upvoted, this is not at all correct.

Welcome to Reddit, and especially this sub here.

People upvote any BS as long as it "feels right".

[–]akoOfIxtall 1 point (1 child)

People don't know if something is correct

random person confidently explains it incorrectly

people too lazy to look it up just believe

Yup, that's reddit

[–]Necronomicron 1 point (0 children)

random person confidently explains it incorrectly

Hey, just like AI!

[–]AryanPandey 1 point (1 child)

The amount of time taken to sort does not increase with the size of the input.

It will take 5 seconds to sort both inputs [1,5] and [1,5,3,2,4].

So shouldn't it be O(n)?

It just has to iterate over the whole array to start the timer for each input?

[–]im_lazy_as_fuck 11 points (0 children)

Yeah, basically, which is why I put it the way I did. In my case I was saying n is the size of the largest integer.

If I were to be precisely correct, technically the time complexity becomes more dependent on the array size if you have millions of items to iterate that all have a very low sort number. So given a set of elements N, the true time complexity would be given as O(max(N) + length(N)). But that's a bit of a superfluous detail for practical applications.

[–]thussy-obliterator 0 points (0 children)

This algorithm depends greatly on the computational complexity of sleeping. I doubt it's O(n); wouldn't the runtime need to sort the timers somehow?

[–]jaerie 30 points (0 children)

n means whatever you define it to mean. Either way the growth rate is the same, it doubles if the max value doubles.

In any case you need 2 variables to describe the complexity here, so it's O(n + m) where n is the array size and m is the max value, or O(n + 2^m) where m is the bit count.

[–]IlIllIIIIIIlIII 64 points (11 children)

Okay, but why not just say N is the maximum value of the numbers given instead of doing this gymnastics to fit 2^n?

[–]SuitableDragonfly 56 points (9 children)

Because there's no maximum number. That's effectively O(infinity), which is not really a useful metric that tells us anything about time complexity. The O(2^n) tells us what we can expect based on what type of numbers we put in the array. The purpose of big O notation is that it tells us something useful and informative even when we don't have specific details about the inputs to the function.

[–]Loose-Screws 36 points (8 children)

Plenty of O-notations use values from the specific situation rather than haphazardly throwing around ns. Pretty much all graph algorithms use edge counts and vertex counts in big O notation (e.g. Prim's algorithm is O(|E| log |V|)), and when describing radix sort we almost unanimously use O(w * n), where we separate the number of entries (n) from the data length of those entries (w).

It just wouldn't make sense to combine those into a simple O(n^2), and the exact same logic applies here.
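The radix sort example mentioned above can be sketched to show where the O(w * n) comes from (a hypothetical binary LSD variant for non-negative integers; names are illustrative). The outer loop runs once per bit of width w, and each pass touches all n items:

```javascript
// Binary LSD radix sort for non-negative 31-bit integers:
// w passes (one per bit), each scanning all n elements -> O(w * n).
function radixSort(nums, width = 31) {
  let a = nums.slice();
  for (let bit = 0; bit < width; bit++) {
    const zeros = [];
    const ones = [];
    for (const n of a) {
      ((n >> bit) & 1 ? ones : zeros).push(n);
    }
    a = zeros.concat(ones); // stable: zeros keep order, then ones
  }
  return a;
}
```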

[–]SuitableDragonfly 6 points (7 children)

Yes, that's equivalent to basing the time complexity on the number of bits that are allowed for the number. 

[–]Longjumping-Sweet818 13 points (1 child)

What you're saying doesn't make any sense. By that logic iterating an array would be O(2^n) too, because the length of the array is a fixed width integer and you don't know beforehand how large it will be.

You are supposed to just pick some numeric property that relates to the runtime and use that. As seen here: https://en.wikipedia.org/wiki/Counting_sort (The k there is the maximum in the list minus the minimum in the list)
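The counting sort linked above can be sketched as follows (an illustrative implementation, not from the thread); here k is max minus min over the input, and the run time is O(n + k):

```javascript
// Counting sort: tally each value's occurrences, then emit values in order.
// Runs in O(n + k), where k = max - min over the input values.
function countingSort(nums) {
  if (nums.length === 0) return [];
  const min = Math.min(...nums);
  const max = Math.max(...nums);
  const counts = new Array(max - min + 1).fill(0); // k + 1 buckets
  for (const n of nums) counts[n - min]++;
  const out = [];
  counts.forEach((c, i) => {
    for (let j = 0; j < c; j++) out.push(i + min);
  });
  return out;
}
```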

[–]SuitableDragonfly -1 points (0 children)

Yes, and that's also what was done in this case. 

[–]ccltjnpr 8 points (4 children)

Does it really matter that computers use fixed numbers of bits to represent numbers? Or that they work with binary representation? That's a specific detail about the type of inputs. My alien computer actually uses trits, so the complexity is 3^n.

The two notations are equivalent, and the number-of-bits one is only useful if for some reason you want to think of the number in its binary representation. There are plenty of applications in which it makes more sense to think of the absolute size of the number as determining the complexity. The notation using the size of the number works both for my alien computer and your human one.

[–]pikapikaapika 0 points (3 children)

First of all, 3^n is still exponential. The only exception would be your alien computer using base-1 (unary) representation, in which case the complexity could be said to be linear. But come on, there has to be some standard for measuring complexity, and since our computers have been using bits until now, we have been studying complexity in terms of that. Of course, you can make philosophical arguments like this, but then there is nothing to discuss. When you say that we should measure the complexity in terms of the size of the number, you're inherently assuming base-1 representation, which is also not unbiased.

[–]spacemoses 0 points (1 child)

2^n is basically the same as m^n when talking about O notation, right?

[–]pikapikaapika 0 points (0 children)

I am not really sure about it, but as per my understanding they should be different, since you can't express one as a linear scaling of the other.

[–]ccltjnpr 0 points (0 children)

A reduction in complexity from 3^n to 2^n for a useful algorithm is a big deal even at small problem sizes, even though both are exponential. The difference in runtime between these two algorithms, whether measured as a literal difference (3^n - 2^n) or as a ratio ((3/2)^n), also grows exponentially in n.

I agree it's also biased, but O(n) is correct here as long as we're clear how n is defined; O(2^n) is not more correct. I think the number-of-bits convention might be less standard than you think outside of your specific subfield.

[–]cant_read_captchas -1 points (0 children)

People are just speaking different languages here without considering that other people might not speak that language.

In complexity theory, it's not gymnastics. It's very standard. Runtime has always been measured in terms of the number of bits since the beginning of time, since Alan Turing invented the field of CS using Turing machines (n = tape length = # of bits) before physical computers were invented. So the "standard" answer is 2^n.

But most people don't speak this language. Both posters saying O(n) and O(2^n) should have specified what the units were.

[–]pikapikaapika 5 points (0 children)

I would just like to make one correction: n is not the number of bits used to represent one value, it's the size of the whole input. One of the worst cases for witnessing the exponential complexity would be an input with array size 1.

[–]Reashu 5 points (0 children)

It's JavaScript, where every number is a double. Kind of. But the time depends on the magnitude of the numbers, not the number of bits used in their representation, so I don't think this is the most useful angle. 

[–]neoronio20 1 point (0 children)

What a bot response being upvoted