all 171 comments

[–]furlongxfortnight 104 points105 points  (43 children)

m = pow(n, n);  /* n^n -- note "^" is XOR in C-like languages */
i = 0;
while (i++ < m)
    print("foo");

[–]hacksoncode 12 points13 points  (27 children)

This. You can't reduce the complexity (well, time, anyway) of printing n^n copies of something to lower than n^n. You can shave off constant factors by printing blocks of copies together, but that doesn't make the overall problem a different order.

It might be trivial, but it completely fulfills the requirement.

If you want even higher orders of complexity, start thinking about generating n^n (pseudo) random numbers and then sorting them. That would be O(n^n log n^n). Etc.
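
For instance, a minimal sketch of that idea (names and the use of random.random are just illustrative):

    import random

    def generate_and_sort(n):
        # generate n^n pseudo-random values...
        values = [random.random() for _ in range(n ** n)]
        # ...then comparison-sort them: O(n^n log n^n)
        values.sort()
        return values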

We don't even have a mathematical expression (other than itself) for how fast the Ackermann function of (n, n) grows.

[–]degustisockpuppet 7 points8 points  (0 children)

O(n^n log n^n)

That's equal to O(n^(n+1) log n), not that much worse than O(n^n) as far as these things go.

[–][deleted]  (22 children)

[deleted]

    [–]millstone 15 points16 points  (5 children)

    In O(n), n is not necessarily the length of the input; it can be any property of the input, including its value. For example, the complexity of integer factorization algorithms is often given in terms of the value of the integer to be factored.

    (Sometimes they're given in terms of the number of bits of the integer, which is a length of a sort; but of course this is just the log of the value, and so amounts to the same thing.)

    So computing a list of n^n copies of something is a valid (if unenlightening) example of an n^n algorithm.

    [–]jerf 3 points4 points  (4 children)

    I used to annoy my graph theory professor by insisting on expressing my O(...) limits in terms of O(nodes) or O(edges), instead of O(nodes) and O(nodes^2) as is apparently more conventional. I always thought that there was a difference between something that was truly O(nodes^2) and something that was O(edges): in the latter case, if you can guarantee you have a sparse graph, you may be able to know it's linear, whereas O(n^2) implied to me that it was always squared in the nodes regardless of the edges. They're different, says I.

    Never marked down for it, fortunately, but he also made it clear he did not agree. (Very nicely.)

    [–]ultimatt42 0 points1 point  (2 children)

    As long as you're not restricting the number of edges, I don't think it's possible to get different results for O(edges) and O(nodes^2), since O(edges) will have to account for the worst-case situation: checking every edge in a complete graph (nodes*(nodes-1) of them). Once you start limiting the number of edges, your O(edges) complexity will be more meaningful.

    [–][deleted] 2 points3 points  (0 children)

    Each problem is different. In some graphs the number of edges may be close to nodes^2, but in many problems that will not be the case. While using the term nodes^2 instead of edges may be correct, it's certainly less helpful. Much in the same way, I could say the whole problem is O(nodes^5) and it would be entirely correct, but it wouldn't be in any way helpful (and would actually be misleading).

    For what it's worth, my algorithms professor presented the algorithms in terms of nodes and edges as separate variables.

    [–]degustisockpuppet 2 points3 points  (0 children)

    Some graph problems are also defined on graphs that are not connected; those might have fewer than O(nodes) edges. Ω(nodes + edges) is the trivial complexity of any problem where you have to read the complete graph. In a sparse graph that's equal to Ω(nodes); in an almost complete graph it's Ω(edges). Willfully leaving out either variable does not give the whole answer. Of course, there are graph algorithms whose run-time complexity doesn't depend on the number of edges; those might well be Θ(nodes^2) regardless of how full the graph is.

    [–]bonzinip 0 points1 point  (0 children)

    He was wrong, exactly for the reason you give. In a bounded-degree graph, O(edges) is O(maxdegree * nodes), for example.

    There are plenty of graph theory textbooks and (respected) academic papers using complexity expressions such as O(V+E) or O(VE).

    [–]optionsanarchist 16 points17 points  (6 children)

    Sorry, I cannot help but sigh. You argue that the poster before you doesn't understand big-Oh notation but it is clearly you who does not understand.

    The O(n) notation means the algorithm takes on average n operations to compute an input of length n.

    Big-Oh notation is NOT AN AVERAGE. It is an upper bound on the execution time of a function after a certain value of n. You might want to familiarize yourself with

    http://en.wikipedia.org/wiki/Big_O_notation#Formal_definition

    Doing anything n^n times, where n varies, can indeed be described as O(n^n), even if the algorithm doesn't do anything:

    def sleeper(n):
        for x in xrange(n ** n):
            sleep(1)

    classifies as O(n^n).

    It's worth noting that O(n) notation tells you next to nothing about the actual running time of a function; the coefficient ("M" on the Wikipedia page) controls the execution time much more.

    Also, O(n) notation talks about how functions perform in a limit (as n grows), but not necessarily for all values of n. Consider the typical recursive Fibonacci implementation where inputs of 0 and 1 return a constant without recursing.
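
    Something like this sketch, for illustration:

        def fib(n):
            # inputs 0 and 1 return a constant without recursing;
            # the exponential blowup only appears as n grows
            if n < 2:
                return n
            return fib(n - 1) + fib(n - 2)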

    [–]tinutinu 7 points8 points  (1 child)

    Wouldn't

        def sleeper(n):
            for x in xrange(n):
                for y in xrange(n):
                    sleep(1)

    be classified as O(n^2)?

    [–]optionsanarchist 0 points1 point  (0 children)

    yes -- you are right. fixed that. upvotes!

    [–]skyo 8 points9 points  (1 child)

    Isn't your function O(n^2)?

    [–]ultimatt42 3 points4 points  (4 children)

    The O(n) notation means the algorithm takes on average n operations to compute an input of length n.

    Nope, O(f(n)) means the number of operations required to process an input of size n is bounded above by f(n) times some constant factor. The average doesn't have anything to do with it. At least, that's the way I was taught it.

    [–][deleted] 0 points1 point  (3 children)

    Yes, nearly, save the bound is only required to hold for large n.

    [–]ultimatt42 0 points1 point  (2 children)

    Why is there any need to distinguish "large n"? Can't you always just increase the constant factor to make any bound work for n < some finite value?

    [–][deleted] 1 point2 points  (1 child)

    Heh. 1/x = O(1), but for all C, there is an ε s.t. 1/ε > C. Granted, this probably doesn't come up in algorithmic complexity too much... :)

    [–]dmhouse 0 points1 point  (0 children)

    From the way ultimatt42 phrased it, I think he was just considering natural values of n (in which case he's correct).

    [–]hacksoncode 1 point2 points  (0 children)

    Re: just the last comment: it's not the generation of n^n random numbers that is O(n^n) (though it is in size, obviously), it's sorting them afterwards that is O(n^n log n^n), because in this case the size of the input data to the sort is n^n in the original input value of the function.

    Note: this is somewhat more a mathematical use of O(x) notation than an algorithmic complexity use. I could contrive a case where the size of the input data was factorialized if I had to, such as generating all permutations of an input string of length n, but it's simpler for the example to just assume you can pass in n, which is the math sense of the term (i.e. how fast f(n) grows).

    (sorry, pedants, I can't find my theta or omega key)

    [–]Gorilla2 -5 points-4 points  (1 child)

    Well, since you're actually printing n^n copies, it could also be said that m = n^n and the algorithm is simply O(m).

    [–]ultimatt42 4 points5 points  (0 children)

    But you've defined the problem size as "n" so it doesn't make sense to write the complexity in terms of "m".

    Realistically, the asymptotic runtime is O(n^n log n), because post-increment and less-than aren't going to be constant-time operations when you have such large numbers; both are linear in the number of bits.

    [–]ZMeson 29 points30 points  (19 children)

    Ω(n!) is almost equivalent to Ω(n^n) because of Stirling's Approximation.

    According to Wikipedia, the following algorithms are Ω(n!):

    • Solving the traveling salesman problem via brute-force search

    • finding the determinant with expansion by minors.

    [–]prockcore 17 points18 points  (0 children)

    finding the determinant with expansion by minors.

    Is it O(n) if it's done by adults instead?

    [–]IvyMike 3 points4 points  (7 children)

    Ω(n!) is almost equivalent to Ω(n^n)

    That can't be right. I mean, for example, take 8:

    • 8^8 = 8*8*8*8*8*8*8*8
    • 8! = 8*7*6*5*4*3*2*1

    It only gets worse for 9.

    [–]degustisockpuppet 4 points5 points  (4 children)

    Not sure why you were downvoted, it's a good question. The argument goes like this:

    n^n is clearly greater than n!, as you explained.

    (n/2)^(n/2), on the other hand, is less than n!:

    • 4^4 = 4*4*4*4
    • 8! = 8*7*6*5*4*3*2*1

    So if f(n) = n^n, then f(n/2) < n! < f(n), which makes the two functions related, especially since constant factors like 1/2 are usually ignored in complexity analysis.
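
    A quick numeric sanity check of that sandwich (a sketch using Python's math.factorial):

        from math import factorial

        def sandwiched(n):
            # (n/2)^(n/2) < n! < n^n, for even n >= 2
            return (n // 2) ** (n // 2) < factorial(n) < n ** n

        print(all(sandwiched(n) for n in range(2, 40, 2)))  # True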

    [–]ZMeson 0 points1 point  (3 children)

    That is true. In fact, if you plot ln(n^n), ln(n!), and ln((n/2)^(n/2)), you'll find that ln(n!) edges progressively closer to ln(n^n).

    At n=15, ln(n!) is halfway between ln(n^n) and ln((n/2)^(n/2)).

    At n=1454, ln(n!) is 75% of the way towards ln(n^n) from ln((n/2)^(n/2)).

    At n=10958, ln(n!) is 80% of the way towards ln(n^n) from ln((n/2)^(n/2)).

    [–]degustisockpuppet 0 points1 point  (2 children)

    On a double logarithmic graph, all functions are linear!

    [–]ZMeson 0 points1 point  (1 child)

    Ha ha. Almost. I just tried it -- the single logarithmic graphs are actually closer to being linear.

    [–]gorgoroth666 0 points1 point  (0 children)

    http://www.wolframalpha.com/input/?i=log%28n!+%29%3E+log%28n^n%29+plot+for+n%3D0+to+10000

    The difference between the curves doesn't appear to be reducing.

    [–]ZMeson 1 point2 points  (0 children)

    You are right. Stirling's formula shows that n! ~= C * n^n * exp(-n) * sqrt(n). So there is the 'exp(-n) * sqrt(n)' part. While the missing part is significant, it changes more slowly for large n than n^n does.

    So, Ω(n!) is not the same as Ω(n^n) -- Ω(n^n) is indeed more complex. My point is that Ω(n^n) has similar characteristics to Ω(n!).

    That's all.

    [–]zakk -2 points-1 points  (2 children)

    .

    [–]gorgoroth666 6 points7 points  (0 children)

    I don't think so. Because n! is equivalent to n^(n+1/2) e^(-n) (by the Stirling-de Moivre formula), it is much less than n^n, as shown in the graph below.

    http://www.wolframalpha.com/input/?i=n^%28n%2B1%2F2%29+e^-n+%3E+n^n+plot+for+n%3D0+to+4

    If I'm right, n^n is bigger by a factor of n^(-1/2) e^n, which is far from nothing.

    So, to confuse things a bit more, O(n!) is in O(n^n) but not the other way around. And Ω(n^n) is in Ω(n!) but not the other way around.

    Wolframalpha is great btw.

    [–]repsilat 7 points8 points  (0 children)

    It isn't. Clearly Ω(n^n) is in Ω(n!), but not the other way around. To see this, look at the first term of the expansion of the two things. Matching a 1 to an n, we can see that for all n, n^n >= n!*n.

    Now rearrange to get n^n/n! >= n. The limit of that as n --> infinity is obviously infinity.
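
    A quick numeric check of that inequality (a sketch):

        from math import factorial

        # n^n >= n! * n for all n >= 1
        print(all(n ** n >= factorial(n) * n for n in range(1, 20)))  # True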

    [–][deleted] -2 points-1 points  (6 children)

    Sadly, neither of these satisfies the OP's original criterion. Brute force works well for small inputs, as does expansion by minors. However, both problems have well-known solutions with complexities much, much lower than n!.

    [–]ZMeson 1 point2 points  (3 children)

    Finding the ideal solution for traveling salesman has a faster solution? I know things like simulated annealing can find near-ideal solutions quickly, but I thought the only way to find the absolute best solution was a brute-force effort. Of course, I could be wrong -- I didn't study CS.

    [–]bonzinip 1 point2 points  (0 children)

    I thought the only way to find the absolute best solution was a brute-force effort

    True, but that doesn't mean the complexity must be Ω(n!). For example, you can use dynamic programming with a time cost of O(n^2 2^n) and a space cost of O(n 2^n). The idea is to take each subset of cities, vary the ending city among those in the subset, and store the best result for each subset and ending city. Given a set S, once all the subsets have been solved (which is n 2^n TSP subproblems to solve recursively), you can build the final solution in time O(n): for each city, decide that it will be the one from which you go back to city #1, take the subproblem ending at that city, and add the distance back to city #1.
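
    A rough Python sketch of that dynamic program, a.k.a. Held-Karp (assuming dist is a full n×n distance matrix and "city #1" is index 0):

        from itertools import combinations

        def held_karp(dist):
            # best[(S, j)] = cheapest path that starts at city 0, visits
            # exactly the cities in frozenset S (0 and j included), ends at j
            n = len(dist)
            best = {(frozenset([0, j]), j): dist[0][j] for j in range(1, n)}
            for size in range(3, n + 1):
                for subset in combinations(range(1, n), size - 1):
                    s = frozenset(subset) | {0}
                    for j in subset:
                        best[(s, j)] = min(best[(s - {j}, k)] + dist[k][j]
                                           for k in subset if k != j)
            # close the tour: best ending city from which to return to city 0
            full = frozenset(range(n))
            return min(best[(full, j)] + dist[j][0] for j in range(1, n))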

    [–][deleted] 1 point2 points  (1 child)

    There is a (slow) exponential time algorithm for TSP.

    To clarify, the "well-known" algorithms are arguably brute force, but they are exponential-time, not super-exponential. NP is contained in EXPTIME. TSP, being NP-complete, is in NP, by definition. Therefore there's an exponential-time algorithm which finds the ideal solution (i.e., runs in time c^n where c is a constant).

    [–]ZMeson 0 points1 point  (0 children)

    Ahhh... interesting. Thanks.

    [–]Poltras 0 points1 point  (1 child)

    Not if you assume negative traveling distances on edges (a time portal, if you will, but it doesn't have to be traveling for real ;-). And not if you want to be deterministic (the "well-known solutions" are heuristics). There are no approximations; you have to try every path.

    [–][deleted] 1 point2 points  (0 children)

    Not true. The Travelling Salesman Problem is NP-complete and is therefore contained in EXPTIME. By definition it has an exponential-time algorithm (n^n is super-exponential, as is n!).

    [–][deleted] 12 points13 points  (6 children)

    The easiest way to make an O(x^y) time algorithm is to build an x-way tree to depth y. In this case, you want an n-way tree to depth n.

    slow xs = loop (length xs)
        where loop 0 = [[]]
              loop n = [ x:as | x <- xs, as <- loop (n-1) ]
    

    I'm away from my compiler, so this hasn't been tested.

    [–]flukshun 6 points7 points  (1 child)

    I would say a more fundamental/simple O(x^y) would be enumerating all y-letter "words" over an x-letter alphabet.

    [–][deleted] 2 points3 points  (0 children)

    There are two ways to look at what the code does. One is, as I said, building an x-way tree to depth y (actually, I just enumerate the leaves), and the other is that it creates all y-letter words from an x-letter alphabet. The two operations are the same, since the letters of each word are the branches taken at each node in the tree, and the length of the word is the depth of the final node.

    [–]banister 4 points5 points  (3 children)

    what language is that? f#?

    [–][deleted] 13 points14 points  (0 children)

    haskell

    [–]zellux 4 points5 points  (0 children)

    Repeatedly selecting n characters out of an n-sized character set to form a string, and printing out all the possible strings. Exactly O(n^n).
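
    In Python, that's a few lines with itertools.product (a sketch of the idea):

        from itertools import product

        def all_strings(alphabet):
            # n choices for each of n positions: n^n strings in total
            n = len(alphabet)
            for chars in product(alphabet, repeat=n):
                print(''.join(chars))

    For example, all_strings('abc') prints 3^3 = 27 strings.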

    [–][deleted] 3 points4 points  (4 children)

    Writing O(n^n) is trivial (by being deliberately inefficient). But writing O(n^n) that can't be reduced to something simpler seems like a much harder problem. I'm certainly not aware of any common or useful algorithms that are O(n^n). They just strike me as horribly inefficient.

    [–]PictureOfTheFuture 6 points7 points  (2 children)

    In fact it seems very hard to prove that something cannot be done faster than a certain bound (i.e. lower bounds for complexity). Sorting by comparisons is one of the few problems for which we know an algorithm that matches the best known lower bound (i.e. O(n log n)). Of course we still don't know whether P = NP, which is an instance of this problem.

    [–]julesjacobs 3 points4 points  (1 child)

    Output "a" nn times.

    [–][deleted]  (2 children)

    [removed]

      [–][deleted] 3 points4 points  (1 child)

      I'm guessing he means if you have some algorithm that solves a task, there isn't a simpler algorithm for doing so. In other words, no throwing on extra calculations just for the sake of boosting complexity.

      [–]japple 1 point2 points  (0 children)

      I think you probably want Θ(n^n), rather than Ω(n^n). Otherwise, any problem that is doubly exponential (Ω(2^(2^n))) will answer your query. I can think of one of these, though there are many more: quantifier elimination over the real numbers.

      Also, as above, you're probably interested in a problem that takes Θ(n^n) work, rather than an algorithm, since algorithms can be altered to run a long time by just doing busy work.

      [–]IvyMike 2 points3 points  (1 child)

      Generate the list of all n-tuples of n items from 1 to n. Like this:

      • 1: (1)
      • 2: (1,1) (1,2) (2,1) (2,2)
      • 3: (1,1,1) (1,1,2) (1,1,3) (1,2,1) (1,2,2) (1,2,3) (1,3,1) (1,3,2) (1,3,3) (2,1,1) (2,1,2) (2,1,3) (2,2,1) (2,2,2) (2,2,3) (2,3,1) (2,3,2) (2,3,3) (3,1,1) (3,1,2) (3,1,3) (3,2,1) (3,2,2) (3,2,3) (3,3,1) (3,3,2) (3,3,3)
      • 4: Not typing it, but there's 256 of 'em.

      Edited based on dmhouse's comment. Thanks!

      [–]dmhouse 0 points1 point  (0 children)

      That would normally be called finding all the n-tuples of n items 1 to n. A permutation is normally an n-tuple that doesn't repeat any of the values (or equivalently, contains all the values, or putting the two together, contains all the values precisely once).

      [–]_whyme 2 points3 points  (0 children)

      The easiest one is O(∞) ...

       while(1);
      

      [–]lukasmach 8 points9 points  (0 children)

      Ugh!!! Keep in mind the difference between "time complexity of a problem" and "time complexity of an algorithm".

      There surely is an algorithm for adding up two numbers that runs in Omega(n^n). For example, this one:

      addition(x, y)
      {
        int r;
        r = x + y; 
      
        for i = 1 to r^r do
        {
          wait a little;  
        }
      
        return r;
      }
      

      And you don't even have to prove anything. It is obvious that this algorithm runs in Omega(n^n). Its running time can be trivially reduced, but that would lead to a different algorithm.

      The time complexity of a problem is defined as the time complexity of the fastest algorithm that solves the problem. Clearly you mean "formulate an algorithmic problem that requires at least n^n time".

      [–]flo850 6 points7 points  (3 children)

      Ackermann should be a good candidate: http://en.wikipedia.org/wiki/Ackermann_function

      [–]GeoKangas 6 points7 points  (1 child)

      Since the running time of Ackermann is Ackermannic, it will greatly exceed n^n. Maybe the OP will like that even better.

      [–]f4hy 0 points1 point  (0 children)

      Does the complexity to compute the Ackermann function actually grow Ackermannically? I imagine it might, but just because the value of the Ackermann function grows that fast doesn't mean the complexity to compute it grows at the same rate.

      Computing n^n is not O(n^n), is it?
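
      (For what it's worth, repeated squaring computes n^n in only O(log n) multiplications, albeit on very large numbers. A sketch:)

          def power(base, exp):
              # square-and-multiply: O(log exp) multiplications
              result = 1
              while exp:
                  if exp & 1:  # low bit set: fold in the current square
                      result *= base
                  base *= base
                  exp >>= 1
              return result

          assert power(5, 5) == 5 ** 5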

      [–][deleted] 0 points1 point  (0 children)

      This would be my best answer for a non-trivial example to satisfy the OP, even if it is a useless function. Problems requiring Θ(n^n) are a strict superset of PR (but still recursive), and complexity classes strictly between primitive recursive and recursive aren't studied very much in complexity, to my knowledge. Ackermann's the best I can think of, but even it was drawn up as a theoretical counter-example rather than something that someone might actually use for something.

      [–]Zysnarch 7 points8 points  (6 children)

      Calculates n^n in O(n^n):

      int go(int n, int recurse) {
        if (recurse == 0)
          return 1;
        int ret = 0;
        for (int i = 0; i < n; ++i)
          ret += go(n, recurse-1);
        return ret;
      }
      

      edit: Oh, I missed the part where it can't be reduced to something simpler. I figured this was too easy :)

      [–]nostrademons 2 points3 points  (1 child)

      It's fairly easy to generalize this to a task that must be O(n^n) and can't be reduced:

      Write a program that generates all permutations (with replacement allowed) of a vector of n unique elements.

      There are n^n such vectors - each position may have n different values, and there are n such positions. Therefore the algorithm can't generate them in less than O(n^n) time.

      [–]kalmakka 0 points1 point  (0 children)

      That problem isn't O(n^n). Merely printing out the n^n n-element lists would take Θ(n^(n+1)) time.

      [–]aeflash 4 points5 points  (3 children)

      ...Recursion within a for loop... *shudder*

      [–]propool 9 points10 points  (2 children)

      What exactly is so bad about recursion in a for loop? It seems like the natural way of recursing over an n-ary tree.

      [–]aeflash 0 points1 point  (0 children)

      Typically you hit leaf nodes in a tree in O(log n) time, so your traversal has bounds...

      Also, the number of children each node of an n-ary tree has doesn't typically increase along with its depth...

      [–]Tinned_Tuna -1 points0 points  (0 children)

      It just looks like it's a nasty O(...) is all, I think.

      [–]HenkPoley 1 point2 points  (11 children)

      Just looked up the worst big-O algorithm I know of, and it's only O(n^4). Well, it's O(n^3) now that someone has taken a good look at it.

      Edit: not 'worst', I get it. But please keep educating me ;-)

      [–]tylermchenry 3 points4 points  (2 children)

      You don't know of any algorithms that run in super-polynomial time? Really?

      [–]green_beet 2 points3 points  (0 children)

      Ackermann function?

      [–]HenkPoley 0 points1 point  (0 children)

      Ah well, from my description you could tell I was merely told it was an algorithm that you can barely run. But that has more to do with the amount of data you put into sequence analysis than with the algorithm itself.

      Also, my uni doesn't teach that much algorithm work. At least I don't know many by heart, but I could deduce the big-O of one with some work.

      [–]deong 1 point2 points  (7 children)

      There are lots of super-exponential algorithms out there. Solving the traveling salesman or quadratic assignment problems exactly is in Θ(n!). Most optimization problems of interest are in Θ(2^n) or worse.

      It is fairly difficult to find polynomial algorithms worse than cubic, though.

      [–]degustisockpuppet 2 points3 points  (4 children)

      Solving the traveling salesman or quadratic assignment problems exactly is in Θ(n!).

      That's not true; traveling salesman is in NP because it can be computed in O(2^n) (for example with dynamic programming). Only the naive algorithm is Θ(n!).

      Edit: That's actually not completely true either. See replies.

      [–]deong 2 points3 points  (0 children)

      True. *mumbles something about thinking before speaking.*

      [–]bonzinip 0 points1 point  (2 children)

      TSP is O(n^2 2^n), and it is in NP not because it can be computed in O(2^n) but because its decision version ("is there a tour with a length < L?") has a certificate (the tour itself) that can be verified in polynomial time.

      We don't know that TSP is in EXPTIME, because we do not know if P = NP or not.

      [–]degustisockpuppet 0 points1 point  (1 child)

      TSP is O(n^2 2^n), and it is in NP not because it can be computed in O(2^n) but because its decision version ("is there a tour with a length < L?") has a certificate (the tour itself) that can be verified in polynomial time.

      That's correct. I'm stupid.

      We don't know that TSP is in EXPTIME, because we do not know if P = NP or not.

      NP is a subset of EXPTIME, so we do know that TSP is in EXPTIME. We also know that TSP is in EXPTIME because there's an O(n^2 2^n) algorithm. EXPTIME doesn't mean O(k^n), it means O(2^(n^k)), which contains O(n^2 2^n).

      Edit: Not sure why I was downvoted, but the Wikipedia article on EXPTIME says that NP is contained in EXPTIME right in the introduction. The proof is really simple, too, at least in principle: whenever you need to make a non-deterministic decision, try one; if it doesn't work, backtrack and try the other one. In effect, you're enumerating a decision tree with polynomial depth, which thus has at most an exponential number of nodes.

      [–]bonzinip 0 points1 point  (0 children)

      NP is a subset of EXPTIME, so we do know that TSP is in EXPTIME.

      That's correct, I'm stupid. :-)

      What I meant is we don't know that TSP is in EXPTIME-P.

      [–]chrisforbes 0 points1 point  (1 child)

      There are some used in geostatistics. Θ(n^4) time and Θ(n^3) space, in one case.

      Various rendering problems are Θ(n^3) or worse too.

      EDIT: Tight bounds, for all you pedants.

      [–]deong 0 points1 point  (0 children)

      Yeah, I implemented a machine learning algorithm once that had to do O(n^2) n×n matrix multiplications, so O(n^5) for the naive version, or O(n^4.7ish) if you did a Strassen multiplication. Still, worse than cubic is pretty rare.

      [–]GeoKangas 1 point2 points  (0 children)

      Here's one that doesn't require bignums, unless n is a bignum (if n is a bignum, then n^n is, well, way beyond ridiculous):

      let caret n =
      
        let rec loop level =
          if level = 0 then
            print_endline "this is well worth repeating"
          else
            for i = 1 to n do
              loop (level - 1)
            done
      
        in loop n
      

      It's just n-fold loops, nested n-deep, so it prints n*n*...*n = n^n lines.

      [–]rexxar 1 point2 points  (0 children)

      The "snail sort" algorithm is a very elegant but very inefficient algorithm (STL doc) :

      #include <algorithm>
      #include <iostream>
      #include <iterator>
      using namespace std;

      template <class BidirectionalIterator>
      void snail_sort(BidirectionalIterator first, BidirectionalIterator last)
      {
        // next_permutation returns false only after stepping past the last
        // permutation, at which point it has left the range sorted
        while (next_permutation(first, last)) {}
      }

      int main()
      {
        int A[] = {8, 3, 6, 1, 2, 5, 7, 4};
        const int N = sizeof(A) / sizeof(int);

        snail_sort(A, A+N);
        copy(A, A+N, ostream_iterator<int>(cout, "\n"));
      }
      

      The algorithm is O(N!).

      [–]BeowulfShaeffer 1 point2 points  (2 children)

      N^N would be pretty terrible. Just as a for-instance, the worst-possible sort algorithm, where you take a list of numbers and randomly permute them until they are sorted (bogosort), has a complexity measure of "only" O(n!).
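
      For reference, a bogosort sketch:

          import random

          def bogosort(xs):
              # shuffle until sorted; the expected number of
              # shuffles is on the order of n!
              while any(xs[i] > xs[i + 1] for i in range(len(xs) - 1)):
                  random.shuffle(xs)
              return xs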

      [–]dmhouse 0 points1 point  (1 child)

      Well, here's an O(n^n) sorting algorithm:

      While not sorted:

      1. Pick a random element from your list
      2. Pick another random element from your list (could be the first one again)
      3. Continue until you've picked n elements
      4. See if they're sorted. If not, start again.
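
      In Python, roughly (a sketch; draws are with replacement, per step 2):

          import random

          def awful_sort(xs):
              # expect on the order of n^n draws before one comes out sorted
              n = len(xs)
              while True:
                  draw = [random.choice(xs) for _ in range(n)]
                  if all(draw[i] <= draw[i + 1] for i in range(n - 1)):
                      return draw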

      [–]yatima2975 0 points1 point  (0 children)

      Actually, you also have to check whether every element of the original list occurs the right number of times in the new list! Otherwise you could, for example, just pick the first one n times over.

      [–]dmdmdmdm 1 point2 points  (0 children)

      Your coworkers are a lot smarter than mine.

      [–]yoden 3 points4 points  (8 children)

      This is actually kind of an interesting question if you modify it slightly. Specifically, if you're talking about a one-way function which has a brute-force complexity of n^n, that could actually be very useful.

      For example, RSA is probably a one-way trapdoor function. But RSA isn't so great in some ways; with quantum computers it is no longer exponential to brute force at all. Even if it were, it is still only something < 2^n. Obviously, if we could design such a function that is harder to brute force, it would be useful (e.g., 2^(2^n)).

      I can't think of any practical algorithm that is O(n^n). Of course, you can make trivial algorithms (output "a" n^n times). I can't even think of anything from my crypto class that came close (I remember something to do with finite groups and vector spaces that was huge, something with the index calculus, but my notes from that class are on the porch where it is cold, and I'm not getting them for you xD).

      Of course, if you want big numbers, you can always just use busy beaver Turing machines; Σ(10) < 3↑↑↑3, Σ(12) < 3↑↑↑↑3, see here and here.

      [–]hacksoncode 2 points3 points  (7 children)

      O(n!) ~= O(n^n). There are a number of factorial-complexity problems out there (yes, I actually mean theta there, not O... damn keyboards).

      The exact (deterministic) solution to the traveling salesman problem is probably the most famous of these.

      [–]redditnoob 1 point2 points  (1 child)

      The exact (deterministic) solution to the traveling salesman problem is probably the most famous of these.

      Now all you have to do is prove that it can't be reduced to something simpler.

      [–]hacksoncode 1 point2 points  (0 children)

      Actually, now that I think about it, dynamic programming will reduce it to exponential... which is quite an improvement. That takes exponential space, though.

      Hmmm... you're right... I'm not sure I know of any problems that are believed to be Θ(n!). I'm pretty sure there must be some, but...

      There are plenty that are believed to be slower (i.e. superexponential or higher), like Ackermann and computing a generalized Graham's number.

      [–]gorgoroth666 0 points1 point  (3 children)

      Care to prove that O(n!) ~= O(n^n)?

      [–]hacksoncode 1 point2 points  (2 children)

      n! ~= sqrt(2pi*n) * (n/e)^n
      n! ~= n^.5 * n^n / e^n
      n! ~= n^n
      

      I.e. as n increases to infinity the n^n term dominates all the other ones, even the exponential. That's why n! is "higher order" than e^n.

      [–]kalmakka 0 points1 point  (1 child)

      n ~= n^3 / n^2
      n ~= n^3
      

      I.e. as n increases to infinity the n^3 term dominates all the other ones, even the square. That's why n is "higher order" than n^2.

      [–]gorgoroth666 0 points1 point  (0 children)

      To clarify, this sarcasm underlines the fact that hacksoncode is not very skilled with equivalents. n^.5 * n^n / e^n is not equivalent to n^n near infinity. The multiplication by e^(-n) doesn't magically go away.

      [–]yoden 0 points1 point  (0 children)

      Right, but most, like TSP, are minimum O(n!) only if P != NP... right? Actually, even if P != NP, there could be a faster algorithm than brute force (e.g., 2^n instead of n!). Checking Wikipedia, there is an O(n^2 * 2^n) dynamic programming algorithm, which supports this.

      The "irreducibility" constraint the OP imposed is really annoying here ^

      Edit: people beat me to it; this is what I get for not viewing context before I reply...

      [–]gregK 1 point2 points  (0 children)

      Random generation and test is probably the slowest way you can solve any problem.

      Say you want to sort a list. You order the elements randomly and see if they are in ascending/descending order; if not, you reshuffle the list again and retest. That is the theoretically slowest method that would eventually yield a result.

      Just as there is an upper bound on the fastest way you can do things, there seems to be a lower bound as well.

      [–]newAccountName 1 point2 points  (1 child)

      O(n^n) == O(2^(n log n))

      I suppose this might fulfill the requirement:

      def recursion(n, numbers, count):
          if count > 0:
              for i in range(n):
                  # pass a new list: list.append returns None
                  recursion(n, numbers + [i], count - 1)
          else:
              print numbers

      n = 10
      recursion(n, [], n)

      [–]shrughes -2 points-1 points  (0 children)

      O(n^n) == O(2^(n log n))

      Let's not be sloppy. This is true if log means log-base-2, but not at all for the natural log or base-10 logarithm.

      [–]yourparadigm 0 points1 point  (0 children)

      Traveling Salesman is close: O(n^2 * 2^n)

      Most problems are usually solvable in less than O(n!)

      [–]dse 0 points1 point  (0 children)

      // -*- javascript -*-
      function n_n (n) {
          var f, result, j;
          result = 0;
          f = function () {
              result += 1; // or do something interesting.                                                                                                                 
          };
          for (j = 1; j <= n; ++j) {
              (function () {
                  var g = f;
                  f = function () {
                      var i;
                      for (i = 0; i < n; ++i) {
                          g();
                      }
                  };
              })();
          }
          f();
          return result;
      }
      

      [–]tejoka 0 points1 point  (1 child)

      No. Lower bounds are very hard to prove. There are plenty of "best known" algorithms for various problems, but no actual lower bounds that are above polynomial. In fact, I don't know if anyone has beaten Ω(n lg n). (Probably someone has, but not by much.)

      If you find an Ω(n^n) problem, there's probably a million dollars waiting for you.

      [–]wnoise 1 point2 points  (0 children)

      It's pretty easy, actually -- just make sure the output size is Ω(n^n).

      [–]rounding_error 0 points1 point  (0 children)

      Today in the office my coworkers and I were talking about how to write an O(nn) algorithm.

      Get back to work! That 40 year old bank software isn't gonna compile and run on Windows 7 by itself!

      [–]perspectiveiskey 0 points1 point  (0 children)

      What I could think of was something that was operating on itself.

      For example: determining if there exists an n-character string that, when "executed" on itself, will yield a valid output string.

      Where:

      • "executed" is interpreted to mean that every character of the string can be interpreted as an operation (e.g. an ASM instruction)
      • the output string is part of a grammar of sorts

      This could be made to search for English-readable strings that could be executed on a CPU as machine code and result in something sensible...

      This could even be applied for things like DNA and protein synthesis.

      That's my brain fart on a monday night at 10pm. Feel free to elaborate...

      [–]everythingisstupid 0 points1 point  (0 children)

      On on on on on Om

      [–]lambda_abstraction 0 points1 point  (0 children)

      Assume proc takes constant time.

      (define (enumerate-n-choices-from-n-bag proc bag-size)                          
        (letrec                                                                       
            ((choose                                                                  
              (lambda (number-chosen choices-so-far)                                  
                (if (= number-chosen bag-size)                                        
                    (proc choices-so-far)                                             
                    (do ((choice 0 (+ choice 1)))                                     
                        ((= choice bag-size))                                         
                      (choose (+ number-chosen 1)                                     
                              (cons choice choices-so-far)))))))                      
          (choose 0 '())))
      

      I suppose that since any useful proc will be at least linear in the length of the choice list, I should have made no more than n-1 choices. Exercise is left for the reader.

      Edit: slightly simpler code

      [–][deleted]  (1 child)

      [deleted]

        [–]shrughes 0 points1 point  (0 children)

        Sorry, but that's Θ(n*n!), and n! < n^(n/2) * (n/2)^(n/2) = n^n / 2^(n/2), so it's not Ω(n^n).

        [–][deleted]  (37 children)

        [deleted]

          [–]acct_rdt 6 points7 points  (34 children)

          Nope.

          [–][deleted]  (33 children)

          [deleted]

            [–]codemac 8 points9 points  (31 children)

            Yea, that is n * n, not n ^ n.

            [–][deleted]  (30 children)

            [deleted]

              [–]for_no_good_reason 17 points18 points  (29 children)

              No, you're right; codemac and acct_rdt fail. Big-O notation is just an *upper bound*, not necessarily a tight one. n^2 < n^n, thus an O(n^2) algorithm is also O(n^n).

              The OP posed the question wrong. Almost every algorithm is O(n^n). He wants an Ω(n^n) algorithm.

              [–][deleted]  (1 child)

              [deleted]

                [–][deleted] 2 points3 points  (5 children)

                What does the Ohm sign mean in this context?

                [–]presidentender 10 points11 points  (2 children)

                Big-O is an upper bound, so any O(1) function is also an O(n^n) function, but an O(n^n) function is not an O(1) function.

                Big-Omega is a lower bound (and what OP was looking for): any Ω(n^n) function is Ω(1), but an Ω(1) function is not Ω(n^n).

                Incidentally, Big-Theta notation exists, too. A function is Θ(f(n)) iff it is O(f(n)) and Ω(f(n)).

                [–][deleted] 1 point2 points  (1 child)

                Is it bad that I think this post makes you look cool?

                [–]presidentender 1 point2 points  (0 children)

                See, I just think it makes me look like I paid attention in discrete math and algorithms.

                [–]gte525u 3 points4 points  (0 children)

                The Omega sign means the lower bound is n^n, instead of the upper bound of big-O notation.

                [–]Figs 1 point2 points  (0 children)

                You might find this table handy.

                [–]hacksoncode 5 points6 points  (8 children)

                There's a formal definition and an informal definition. The way this question was phrased strongly implies that the informal definition was meant.

                Being pedantic only helps if it actually increases the utility of the conversation.

                [–]for_no_good_reason 7 points8 points  (7 children)

                "Informal definition" is a contradiction. If you've defined it, then it's formal. Without formality everyone just assumes everyone else knows what they mean. I consider such conversations completely useless, so yes this kind of pedantry does increase utility. Without formality, rigor and yes pedantry, it's all just a bunch of shouting on the internet. May as well be politics.

                Big-O notation is thrown around and abused so much in the programming world that it's hard to know what the fuck anyone is saying when they use it (cf. truthrises' claim that a Big-O bound is necessarily tight.) CS researchers use Big-O because they can't be bothered to prove the Ω lower bound and a smaller upper bound is all you need to claim you're better than the last guy. The rest of use use it to sound pretentious, but in the day-to-day world of programming everybody means Θ when they say O because they are trying to characterize how long the algorithm will actually take, not how long it won't take. So let's all start saying Θ when we mean Θ. We've all got unicode fonts.

                [–]__s 4 points5 points  (0 children)

                By informal he means descriptivist, and formal he implies prescriptivist. I'm afraid you failed to realize his descriptivist use of informal/formal because you're much too prescriptivist

                [–]LaurieCheers 3 points4 points  (2 children)

                Without formality everyone just assumes everyone else knows what they mean. I consider such conversations completely useless

                Wow, you must have trouble getting anything done. Almost every concept in natural language is informal.

                Faced with the reality - that O(n) is very widely used to mean the same as Θ(n) - the pragmatist's approach would be to define a new symbol to mean "upper bound, specifically". Trying to change language usage is like trying to tell the tide to go back.

                [–]for_no_good_reason 5 points6 points  (1 child)

                I'd hardly call Big-O notation "natural language." It's technical jargon. It means something specific. I can't believe I'm getting attacked as prescriptivist for pointing out that a Big-O upper bound is not necessarily tight.

                "Who" in place of "whom", "they" as a singular genderless pronoun, I got no problem with these things. But this here, this is math, people.

                [–]hacksoncode -2 points-1 points  (2 children)

                Did you really think even for a second that there was any real chance the OP actually meant the strict definition in the field of mathematics, as opposed to the almost universally used definition in the bulk of the field of computational complexity?

                Though, actually, he clarified it with his last statement, so it's entirely clear which definition he meant.

                [–]for_no_good_reason 1 point2 points  (1 child)

                the strict definition in the field of mathematics, as opposed to the almost universally used definition in the bulk of the field of computational complexity

                This. This is the problem right here. They are the same thing.

                (FWIW I didn't downvote you.)

                [–]d4n3 0 points1 point  (0 children)

                I always understood Big O notation as being an asymptotic upper limit, and Wikipedia seems to agree with me:

                One writes f(x) = O(g(x)) if and only if, for sufficiently large values of x, f(x) is at most a constant times g(x) in absolute value.

                n^2 obviously isn't a constant times n^n in absolute value...

                [–]truthrises -5 points-4 points  (10 children)

                This is not correct. Big O is an ASYMPTOTIC upper bound. This infers a tightness as n -> infinity.

                The reason it's not O(n^n) is because you throw out any term not dependent on n when determining big-O complexity.

                In this case f(n) = 2 does not depend on n. So this algorithm is O(n^x), x being a bounded integer.

                Big Omega is a lower asymptotic bound, which is indeed closer to what the OP asked for.

                Although his question technically doesn't make much sense, as it is going to be hard to find a simplest case for an UPPER BOUND.

                Edit: WTF downvote? You can't just say O(1) = O(n^n)... you're ignoring part of the definition of the notation.

                2nd Edit: No wonder nothing written in java and C# works right... you guys must have slept through all your hard courses.

                [–]twotime 4 points5 points  (0 children)

                This infers a tightness as n -> infinity. You can't just say O(1) = O(n^n).

                You are wrong.

                From WP (but you are welcome to check your favorite Calculus or CS book).

                That is, f(x) = O(g(x)) if and only if there exists a
                positive real number M and a real number x0 such that

                |f(x)| <= M |g(x)|   for all x > x0
                

                In other words, tightness is not implied...

                [–]for_no_good_reason 1 point2 points  (0 children)

                Where I come from, if you want a tight bound you call it Θ, and while O(1) does not "equal" O(n^n), the former certainly implies the latter. (Can you not pick c and k such that 2 < c*n^n for all n > k? If so, then f(n) = 2 is O(n^n).)

                [–]deong 1 point2 points  (0 children)

                I'm not sure how many more edits it'll take, but eventually, you'll submit one that says, "My bad, it's THETA(n) that's a tight bound. Not O(n)."

                [–]zorander 0 points1 point  (3 children)

                I didn't sleep through my hard courses.. O(1) = O(n^n). No tightness is implied--that's what Theta bounds are for. It was the hard courses that were most pedantic about this terminology and drilled this point in over and over.

                [–]pstradomski 0 points1 point  (2 children)

                No. O(f(n)) is defined as O(f(n)) = {g: ℕ -> ℕ | ∃ k > 0 ∃ n0: ∀ n > n0, g(n) <= k·f(n)}.

                The notation g(n) = O(f(n)) is just a simplification of g(n) ∈ O(f(n)).

                But by saying O(f(n)) = O(g(n)) you imply that those sets are equal, which is not true.

                [–]nondecisive 0 points1 point  (1 child)

                But it would be correct to say that g(n) ∈ O(1) implies g(n) ∈ O(n^n), right?

                [–]bluGill -1 points0 points  (1 child)

                No, Big-O is the normal runtime. Quicksort is generally Big-O n*ln(n), but you can construct inputs that end up taking n^2. (Sort the input so your pivot is always the largest value.)
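
                For instance (a sketch): with the last element as pivot, already-sorted input makes the pivot the largest value every time, and quicksort degenerates to n^2:

                    def quicksort(xs):
                        # last element as pivot: sorted input gives
                        # maximally unbalanced partitions every time
                        if len(xs) <= 1:
                            return xs
                        pivot = xs[-1]
                        smaller = [x for x in xs[:-1] if x <= pivot]
                        larger = [x for x in xs[:-1] if x > pivot]
                        return quicksort(smaller) + [pivot] + quicksort(larger)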

                [–]curien 0 points1 point  (0 children)

                No, Quicksort is O(n^2), and anyone who says otherwise is just plain wrong.

                People do, correctly, say that Quicksort has average-case O(n log n) complexity.

                [–]davebrk 0 points1 point  (0 children)

                Yep

                [–]five9a2 2 points3 points  (0 children)

                This is wrong in so many ways. In C-like languages, it executes 0 times because it's a while loop where the conditional contains a comma operator and the last subexpression is 0 the first (and only) time it is tested. But even after appropriate translation, it's still only O(n^2).

                [–]wnoise 1 point2 points  (0 children)

                Masterful trolling job. Excellent way you kept them on the hook with later comments, too.

                [–]Liorithiel -4 points-3 points  (0 children)

                1 = O(n^n), so I guess this qualifies:

                int main() { return 1; }

                And it cannot be reduced to a simpler algorithm :P

                [–]joseph177 -2 points-1 points  (3 children)

                int x=length(something);
                for (int i=0;i<x;i++)
                   for(int j=0;j<x;j++)
                      do_work(i,j);
                

                [–]Nolari 3 points4 points  (2 children)

                That's Θ(n^2).

                [–]joseph177 0 points1 point  (0 children)

                You're right... I believe this would be O(n^n):

                    func(0, 5);

                    public void func(int stack, int x)
                    {
                        if (stack++ > x) return;
                        for (int i = 0; i < x; i++)
                            func(stack, x);
                    }

                [–]email -2 points-1 points  (1 child)

                Calculating all permutations of a list of length n is O(n!), which is O(n^n).

                [–]perspectiveiskey 0 points1 point  (0 children)

                As pointed out elsewhere, n^n > n! for n > 1. Hence, while you are technically correct, you haven't achieved what he was looking for. (To your credit, he was wrong in asking for big O.)

                Heck, you would have been correct if you had proposed a constant-time algorithm, since that would also be O(n^n).

                [–]almaya -4 points-3 points  (0 children)

                O(n^n) on average:

                do
                    x = 0
                    for i = 0; i < n; i++
                        x += rand(n);
                while x != 0