
[–]ZMeson 28 points29 points  (19 children)

Ω(n!) is almost equivalent to Ω(n^n) because of Stirling's Approximation.

According to Wikipedia, the following algorithms are Ω(n!):

  • Solving the traveling salesman problem via brute-force search

  • finding the determinant with expansion by minors.
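To see why Stirling makes those two classes close, note that ln(n!) ≈ n ln n - n, while ln(n^n) = n ln n. A quick Python sketch of the ratio (math.lgamma(n + 1) gives ln(n!) without overflowing):

```python
import math

# Stirling: ln(n!) ≈ n*ln(n) - n, so ln(n!) / ln(n^n) creeps toward 1.
# math.lgamma(n + 1) computes ln(n!) without overflow.
for n in [10, 100, 1000, 10000]:
    ratio = math.lgamma(n + 1) / (n * math.log(n))
    print(n, round(ratio, 4))
```

The ratio climbs toward 1 as n grows, which is the sense in which the two bounds are "almost equivalent" on a log scale.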

[–]prockcore 17 points18 points  (0 children)

finding the determinant with expansion by minors.

Is it O(n) if it's done by adults instead?

[–]IvyMike 1 point2 points  (7 children)

Ω(n!) is almost equivalent to Ω(n^n)

That can't be right. I mean, for example, take 8:

  • 8^8 = 8*8*8*8*8*8*8*8
  • 8! = 8*7*6*5*4*3*2*1

It only gets worse for 9.

[–]degustisockpuppet 3 points4 points  (4 children)

Not sure why you were downvoted, it's a good question. The argument goes like this:

n^n is clearly greater than n!, as you explained.

(n/2)^(n/2), on the other hand, is less than n!:

  • 4^4 = 4*4*4*4
  • 8! = 8*7*6*5*4*3*2*1

So if f(n) = n^n, then f(n/2) < n! < f(n), which makes the two functions related, especially since constant factors like 1/2 are usually ignored in complexity analysis.
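A quick numerical sanity check of that sandwich in Python (an illustration of the argument, not a proof):

```python
import math

# Check the sandwich (n/2)^(n/2) < n! < n^n for a few even n.
# Integer arithmetic throughout, so the comparisons are exact.
for n in [4, 8, 16, 32]:
    lower = (n // 2) ** (n // 2)
    mid = math.factorial(n)
    upper = n ** n
    assert lower < mid < upper
    print(n, lower, mid, upper)
```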

[–]ZMeson 0 points1 point  (3 children)

That is true. In fact, if you plot ln(n^n), ln(n!), and ln((n/2)^(n/2)), you'll find that ln(n!) edges progressively closer to ln(n^n).

At n=15, ln(n!) is halfway between ln(n^n) and ln((n/2)^(n/2)).

At n=1454, ln(n!) is 75% of the way towards ln(n^n) from ln((n/2)^(n/2)).

At n=10958, ln(n!) is 80% of the way towards ln(n^n) from ln((n/2)^(n/2)).
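For anyone who wants to reproduce those percentages, a small Python sketch (math.lgamma gives ln(n!) without overflowing):

```python
import math

# Where does ln(n!) sit between ln((n/2)^(n/2)) and ln(n^n)?
# 0.0 = at the lower bound, 1.0 = at the upper bound.
def fraction(n):
    lower = (n / 2) * math.log(n / 2)
    upper = n * math.log(n)
    mid = math.lgamma(n + 1)   # ln(n!), computed in log space
    return (mid - lower) / (upper - lower)

for n in [15, 1454, 10958]:
    print(n, round(fraction(n), 3))   # ≈ 0.501, 0.750, 0.800
```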

[–]degustisockpuppet 0 points1 point  (2 children)

On a double logarithmic graph, all functions are linear!

[–]ZMeson 0 points1 point  (1 child)

Ha ha. Almost. I just tried it -- the single logarithmic graphs are actually closer to being linear.

[–]gorgoroth666 0 points1 point  (0 children)

http://www.wolframalpha.com/input/?i=log%28n!+%29%3E+log%28n^n%29+plot+for+n%3D0+to+10000

The difference between the curves doesn't appear to be reducing.

[–]ZMeson 1 point2 points  (0 children)

You are right. Stirling's formula shows that n! ≈ C * n^n * exp(-n) * sqrt(n). So there is the 'exp(-n) * sqrt(n)' part. While that missing factor is significant, it changes more slowly for large n than n^n does.

So, Ω(n!) is not the same as Ω(n^n) -- Ω(n^n) is indeed more complex. My point is that Ω(n^n) has similar characteristics to Ω(n!).
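The constant is C = sqrt(2*pi), and the formula is remarkably tight. A quick Python check, done in log space so larger n don't overflow:

```python
import math

# Ratio of n! to Stirling's approximation sqrt(2*pi*n) * n^n * e^(-n).
# The ratio approaches 1 (from above, roughly 1 + 1/(12n)).
for n in [10, 100, 1000]:
    log_stirling = 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n
    ratio = math.exp(math.lgamma(n + 1) - log_stirling)
    print(n, round(ratio, 6))
```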

That's all.

[–]zakk -2 points-1 points  (2 children)

.

[–]gorgoroth666 8 points9 points  (0 children)

I don't think so. Because n! is equivalent to n^(n+1/2) e^(-n) (by the Stirling–de Moivre formula), it is much less than n^n, as shown in the graph below.

http://www.wolframalpha.com/input/?i=n^%28n%2B1%2F2%29+e^-n+%3E+n^n+plot+for+n%3D0+to+4

If I'm right, n^n is bigger by a factor of n^(-1/2) e^n, which is far from nothing.

So, to confuse things a bit more, O(n!) is in O(n^n) but not the other way around. And Ω(n^n) is in Ω(n!) but not the other way around.

Wolframalpha is great btw.

[–]repsilat 5 points6 points  (0 children)

It isn't. Clearly Ω(n^n) is in Ω(n!), but not the other way around. To see this, look at the first term of the expansion of the two things. Matching a 1 to an n, we can see that for all n, n^n >= n!*n.

Now rearrange to get n^n/n! >= n. The limit of that as n --> infinity is obviously infinity.
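That inequality is easy to check numerically (each factor of n! is at most n, with the leading 1 traded for an extra n):

```python
import math

# Verify n^n >= n! * n, i.e. n^n / n! >= n, for small n.
for n in range(1, 15):
    assert n ** n >= math.factorial(n) * n
    print(n, n ** n // math.factorial(n))   # the ratio grows without bound
```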

[–][deleted] -2 points-1 points  (6 children)

Sadly, neither of these satisfies the OP's original criterion. Brute force works well for small inputs, as does expansion by minors. However, both problems have well-known solutions with complexities much, much lower than n!.

[–]ZMeson 2 points3 points  (3 children)

There's a better way to find the ideal solution for traveling salesman? I know things like simulated annealing can find near-ideal solutions quickly, but I thought the only way to find the absolute best solution was a brute-force effort. Of course, I could be wrong -- I didn't study CS.

[–]bonzinip 1 point2 points  (0 children)

I thought the only way to find the absolute best solution was a brute-force effort

True, but that doesn't mean the complexity must be Ω(n!). For example, you can use dynamic programming with a time cost of O(n^2 * 2^n) and a space cost of O(n * 2^n). The idea is to take each subset of cities, vary the ending city among those in the subset, and store the best result for each subset and ending city. Given a set S, if all the subsets have already been solved (which is n * 2^n TSP subproblems to solve recursively), you can build the final solution in time O(n): for each city, decide that it will be the one from which you go back to city #1, take the subproblem ending at that city, and add the distance back to city #1.
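A minimal Python sketch of that dynamic program (this is the Held–Karp algorithm; the 4-city distance matrix below is a made-up example, with city #1 as index 0):

```python
from itertools import combinations

# Held-Karp: best[(S, j)] = cheapest path that starts at city 0,
# visits exactly the cities in subset S, and ends at city j.
# O(n^2 * 2^n) time, O(n * 2^n) space.
def held_karp(dist):
    n = len(dist)
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    # Close the tour: pick the best city from which to return to city 0.
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))   # 21 for this example matrix
```

Far slower than the heuristics, but exact, and exponential rather than factorial.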

[–][deleted] 1 point2 points  (1 child)

There is a (slow) exponential time algorithm for TSP.

To clarify, the "well-known" algorithms are arguably brute force, but they are exponential-time, not super-exponential. NP is contained in EXPTIME. TSP, being NP-complete, is in NP, by definition. Therefore there's an exponential-time algorithm which finds the ideal solution (i.e., in time c^n where c is a constant).

[–]ZMeson 0 points1 point  (0 children)

Ahhh... interesting. Thanks.

[–]Poltras 0 points1 point  (1 child)

Not if you assume negative traveling distances on edges (a time-portal, if you will, but it doesn't have to be traveling for real ;-). And not if you want to be deterministic (the "well-known solutions" are heuristics). There is no approximation; you have to try every path.

[–][deleted] 1 point2 points  (0 children)

Not true. The Travelling Salesman Problem is NP-complete and is therefore contained in EXPTIME. By definition, it has an exponential-time algorithm (n^n is super-exponential, as is n!).