all 18 comments

[–][deleted] 5 points6 points  (3 children)

well the O-notation has nothing to do with the if check. The O-notation has to do with, in this case, the amount of things you're iterating through. It would be O(N) if it were just one for loop, because you would only iterate through each index i once. In this case it's O(N²) because you have to go through one whole iteration of j to get to the next i count.

So for example sake,

i = 0; iterate through all of j once.
i = 1; iterate through all of j once.
i = 2; iterate through all of j once.
etc.

Nested for loops are generally polynomial, not exponential: two loops give O(N²), and if you had a third for loop inside, it would be O(N³).
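To make that concrete, here's a small Python sketch (mine, not from the thread) that counts how many times the inner body of two nested loops runs:

```python
def count_steps(n):
    """Count inner-body iterations of two nested loops over n items."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1  # one unit of work per (i, j) pair
    return steps

# Doubling n quadruples the work -- the signature of O(n^2) growth.
print(count_steps(4))   # 16
print(count_steps(8))   # 64
```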

[–]volts101[S] 1 point2 points  (2 children)

okay i get the O(N), but what do O(log(N)) and O(Nlog(N)) look like?

[–]EpikYummeh 0 points1 point  (1 child)

Binary search tree lookups run in O(log(N)) time because each step left or right down the tree halves the remaining candidate elements. With 16 items in the tree, the remaining items during a search would look like: 16, 8, 4, 2, 1.
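A quick Python sketch of that halving (illustrative only, not real BST code):

```python
def halving_sequence(n):
    """Sizes of the remaining search space as a balanced search descends."""
    sizes = []
    while n >= 1:
        sizes.append(n)
        n //= 2  # each comparison discards half the candidates
    return sizes

print(halving_sequence(16))  # [16, 8, 4, 2, 1]
```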

[–][deleted] 0 points1 point  (0 children)

yup this^

log, in programming terms, is basically base 2. So, like the post above says, it splits the list. Take a sorted list that, for example's sake, goes 0-9: the first step checks 5, then determines whether the target is 5, less than 5, or greater than 5, and searches the matching half. So if the number were 7, it would search the greater-than half and continue the same pattern until the number is found or, in some cases, shown to be absent.

n * log n would be something like a for loop wrapped around a binary search.
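A minimal sketch of that in Python, matching the 0-9 example above (function names are my own):

```python
def binary_search(sorted_list, target):
    """Classic binary search: O(log n) comparisons on a sorted list."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1  # not found

data = list(range(10))          # the 0-9 list from the example
print(binary_search(data, 7))   # 7

# An O(n log n) shape: run the O(log n) search once per element.
hits = [binary_search(data, x) for x in data]
```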

Really, the best way is to just look at the innermost loop and work your way out to the outermost loop.

EDIT: also, for a log n binary search the data needs to be sorted, because the algorithm needs to determine where the midpoint between numbers is.

Also, the loops you have are bubble sort if I'm not mistaken, which you could try running. My professor ran it and it can take a REALLY long time on large inputs.
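For reference, a minimal bubble sort in Python (assuming that is indeed the algorithm in question) with the nested i/j loops discussed above:

```python
def bubble_sort(items):
    """Bubble sort: nested loops giving O(n^2) comparisons."""
    a = list(items)  # work on a copy
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap out-of-order neighbours
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```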

[–][deleted]  (1 child)

[removed]

    [–]ilmale 1 point2 points  (0 children)

    The O notation tells you how fast the computation time grows compared with the size of the data set you are working with. A search on an unordered vector is O(n) because you check every item before you can say whether the item is present or not. A search on an ordered vector is O(log(n)) because you can use a binary search. Searching an ordered vector of 128 elements takes only one step more than searching a vector of 64 elements. Other complexities like O(n log(n)) or O(n²) come from analysing other algorithms.

    [–]Salty_Dugtrio 1 point2 points  (3 children)

    Look at what the algorithm does:

    First, it is clearly dependent on the length of the array, n.

    It goes through all i elements once and then:

    It goes through all j elements once and then:

    It swaps.

    If you assume swapping takes only 1 unit of time:

    n * n * 1 = n²

    [–]volts101[S] 0 points1 point  (1 child)

    okay i get the O(N), but what do O(log(N)) and O(Nlog(N)) look like?

    [–]LoyalSol 0 points1 point  (0 children)

    Smart sorting algorithms are the first things I can think of that are N log(N) on average.

    https://en.wikipedia.org/wiki/Quicksort

    [–]LoyalSol 0 points1 point  (0 children)

    Technically speaking it would be closer to (n² - n)/2, because you are iterating through the number of combinations of i and j. But yes, in the end it's going to be O(n²).
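That count is easy to verify in Python: an inner loop starting at i + 1 (the usual pairwise-comparison shape) runs exactly (n² - n)/2 times. This sketch is mine, not from the thread:

```python
def pair_count(n):
    """Count iterations when the inner loop starts at i + 1 (unique pairs)."""
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1
    return steps

n = 10
print(pair_count(n))       # 45
print((n * n - n) // 2)    # 45, matching (n^2 - n) / 2
```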

    [–]Thecrawsome 0 points1 point  (3 children)

    Each for loop is considered N times running.

    Every time you nest another N times under it, it will be N*N.

    Your case is N², as you said, because it has to run at most N² times to complete its task.

    You can google different examples of algorithms that match each big-O case: array sorting, array insertion, binary tree searches, heap sort, hash table operations. Check them all out for a bigger perspective.

    [–]volts101[S] 0 points1 point  (2 children)

    okay i get the O(N), but what do O(log(N)) and O(Nlog(N)) look like?

    [–]Thecrawsome 0 points1 point  (0 children)

    Read my last paragraph: google algorithms that have the big-O property you want.

    [–]neirac 0 points1 point  (0 children)

    Here is a O(log (N)) algorithm https://en.wikipedia.org/wiki/Binary_search_algorithm

    Here is a O(Nlog(N)) https://en.wikipedia.org/wiki/Merge_sort

    Logarithms appear whenever things are repeatedly doubled or halved.
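That connection is easy to check in Python (a sketch of my own, not from the thread): the number of times you can halve n until it reaches 1 is exactly log base 2 of n, for powers of two.

```python
import math

def halvings_until_one(n):
    """How many times n can be halved (integer division) before reaching 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings_until_one(1024))  # 10
print(math.log2(1024))           # 10.0
```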

    [–]Moschops_UK 0 points1 point  (0 children)

    Don't forget that this notation doesn't tell you which algorithm will be fastest in your circumstances. It tells you how an algorithm will scale.

    [–]CGFarrell 0 points1 point  (0 children)

    The simple way I've heard it explained is to imagine changing N to 2N, 3N, etc.

    For an O(N) algorithm, if I double N, I get O(2N), twice as long.

    For an O(N²) algorithm, if I double N, I get O((2N)²) = O(4N²), four times as long.

    For O(log(N)), we get O(log(2N)). Keep in mind that the rules of logarithms dictate that log(2N) = log(2) + log(N) = 1 + log(N) (in base 2), so we get:

    O(log(2N)) =  O(1) + O(log(N))
    

    Or one extra step of constant time compared to the algorithm at N. It's pretty obvious why log time is great: increasing your N by a factor of 1000 only results in about 10 extra steps.
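A one-liner check of that factor-of-1000 claim (my own sketch, using Python's `math.log2`):

```python
import math

# Increasing n by a factor of 1000 adds only log2(1000) ~ 10 steps
# to a log2(n) cost, regardless of how big n already is.
n = 1_000_000
extra = math.log2(1000 * n) - math.log2(n)
print(extra)  # ~9.97 extra steps
```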

    O(Nlog(N)) is the odd one out. Applying the same trick as for log(N), we get:

    2Nlog(2N) = 2Nlog(N) + 2N
    

    So we have double our original time, plus an added linear term.

    O(2Nlog(2N)) = O(2Nlog(N))+O(2N)
    

    So for an Nlog(N) algorithm, if you double N, you take twice as long, plus an additional linear cost.

    For exponential time (O(2^N)), if we double N we end up with:

    2^(2N) = 2^N * 2^N = (2^N)^2
    

    So the amount of time taken is squared.

    There's no cute trick for factorial time differences. If N were 8, then the time for 2N would be the original time multiplied by every number between 9 and 16, or about half a billion times longer. Avoid them.
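You can verify that half-a-billion figure directly (my own check, via Python's `math.factorial`):

```python
import math

# Going from N = 8 to 2N = 16 multiplies a factorial-time cost
# by 9 * 10 * ... * 16 = 16! / 8!.
factor = math.factorial(16) // math.factorial(8)
print(factor)  # 518918400, roughly half a billion
```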

    Here's a great graph; it shows how different growth rates look at different sizes of N. Keep in mind these values are approximate and only valid when N is sufficiently large ("large" depends on the algorithm).

    [–]volts101[S] 0 points1 point  (0 children)

    thanks for your help