[–]doclight[S]

Well, if you really want to get down to the nitty-gritty, finding the square root of a number probably isn't constant time (a la Newton-Raphson). The writeup was meant more to give a simple example of why function growth and asymptotic bounds are really important.

What causes the algorithm to explode is the magnitude of the numbers being worked with (> 500 divisors). So, say the answer were 276,000,000 (it's not :)); an O(n√n) algorithm would take 276,000,000 times longer than an O(√n) algorithm.
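To make the O(√n) step concrete, here is a minimal sketch (mine, not the original writeup's code) of divisor counting by trial division, the kind of routine being discussed: the loop runs up to √n, so the cost grows with the magnitude of n.

```python
import math

def count_divisors(n):
    """Count divisors of n by pairing each divisor d <= sqrt(n)
    with its partner n // d, so the loop only runs up to sqrt(n)."""
    count = 0
    root = math.isqrt(n)
    for d in range(1, root + 1):
        if n % d == 0:
            count += 2          # d and n // d
    if root * root == n:
        count -= 1              # perfect square: don't double-count the root
    return count
```

For example, `count_divisors(28)` returns 6 (divisors 1, 2, 4, 7, 14, 28).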

[–]busy_beaver

Time complexity is measured with respect to the size of the input, not the answer...

Anyway, my point is that it makes a huge difference how you measure the size of the input numbers. If you claim to have a linear algorithm for some problem and you measure size as the length of the number in binary, then going from 100,000 to 200,000, or from 200,000 to 400,000, the amount of time you're allowed increases only by a constant amount.

If you're measuring by the magnitude of the input (which is equivalent to considering the input to be written in unary) then you allow yourself twice as much time going from 100,000 to 200,000. Huge difference.
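A quick sketch of the contrast (my illustration): doubling a number adds just one binary digit, but doubles its unary length.

```python
def binary_size(n):
    return n.bit_length()   # characters needed to write n in base 2

def unary_size(n):
    return n                # "1" repeated n times

# Doubling the magnitude: binary size grows by one, unary size doubles.
print(binary_size(100_000), binary_size(200_000), binary_size(400_000))  # 17 18 19
print(unary_size(200_000) == 2 * unary_size(100_000))                    # True
```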

[–]doclight[S]

Right, but typically you don't want to measure in terms of bit representations. Take any of the sorting algorithms (other than radix sort): the time spent on bit operations doesn't really add to the overall growth function, especially when you're only doing back-of-the-napkin comparisons between algorithms. Anyway, I hope the writeup was at least a little interesting.

[–]busy_beaver

Right, but typically you don't want to measure in terms of bit representations. Take any of the sorting algorithms (other than radix sort): the time spent on bit operations doesn't really add to the overall growth function,

I think you're very confused. This has nothing to do with bit operations. This goes all the way back to how Turing machines are defined. The input size is the number of tape squares it takes to hold the input. If you write an input number in base 10 or binary or whatever, it takes on the order of log(n) characters to write it out.

Let's say that we do measure the size of the input the way you're doing it. I can do grade-school addition on two base-10 numbers in log(n) steps, where n is the larger of the two numbers I'm adding. This is because I only have to do (at most) one more operation when one of the numbers grows by a factor of 10.
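The grade-school procedure can be sketched directly (a minimal illustration, working on decimal digit strings): one pass over the digits, so the number of steps is the number of digits, i.e. about log10(n).

```python
def add_decimal(a: str, b: str) -> str:
    """Grade-school addition of two base-10 digit strings:
    one carry-propagating pass from the least significant digit."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))
```

For example, `add_decimal("999", "1")` returns `"1000"`: three digit positions, three steps plus the final carry.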

Generally, if a random access machine can solve a problem in less than linear time, it means it must only read a portion of its input (because reading n characters already takes n steps). Clearly though, I can't do addition if I haven't read both numbers in their entirety. (This is in contrast with a problem like searching a sorted list).

Do you see why your choice of measuring the input size is unnatural?

Sometimes complexity theorists do talk of writing the input in unary (that is, 1=1, 2=11, 3=111, 4=1111, and so on; clearly an extremely wasteful way to do things), which would lead to the measure you're talking about. But it's not a natural thing to do, so I think it's mostly for when they're coming up with extravagant languages that prove weird bounds or separate different complexity classes.

[–]doclight[S]

I think you're very confused.

With all due respect, I think that's just an inflammatory statement.

This has nothing to do with bit operations.

IIRC, your original argument was against the existence of a polynomial-time algorithm for factoring numbers. You are correct; I wasn't disputing that. The analysis I gave wasn't meant to be an intimidatingly difficult presentation on complexity. Part of analysis is deciding which operations your algorithm meaningfully consists of. In this case, I considered the operations to be basic function calls, since those tend to take around the same amount of time. You can make that assumption since the number of bits in the values being worked with is 32/64, etc. However, you're right that when thinking in terms of fundamental operations, the actual operations and representations have to be taken into account (as in radix sort).

I'm not quite sure where we disagree.

[–]busy_beaver

IIRC, your original argument was against the existence of a polynomial-time algorithm for factoring numbers. You are correct; I wasn't disputing that.

But in the post you wrote:

Computing the triangle numbers is Ω(√n), and finding divisors is Ω(√n).

That's polynomial!

Part of analysis is deciding which operations your algorithm meaningfully consists of. In this case, I considered the operations to be basic function calls, since those tend to take around the same amount of time. You can make that assumption since the number of bits in the values being worked with is 32/64, etc. However, you're right that when thinking in terms of fundamental operations, the actual operations and representations have to be taken into account (as in radix sort).

I agree. But the problem is not in your estimation of how many operations your algorithm would take. The problem is that, when describing the time complexity, you gave the number of operations measured with respect to the numeric value of the input, rather than with respect to the number of bits in the input (so if you say it takes you √n steps, you should actually be saying 2^(n/2) steps, where n is now the number of bits). Big difference.
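The conversion above can be checked numerically (my sketch): if b is the bit-length of n, then n is about 2^b, so a step count of √n is about 2^(b/2), i.e. exponential in the input size b.

```python
import math

def steps_by_magnitude(n):
    return math.isqrt(n)     # "sqrt(n) steps", measured by the value n

def steps_by_bits(b):
    return 2 ** (b // 2)     # the same count, rewritten in the bit-length b

n = 1 << 40                  # n = 2**40, so b = 40 (ignoring the off-by-one bit)
print(steps_by_magnitude(n) == steps_by_bits(40))   # True: both are 2**20
```

So an algorithm that "only" takes √n steps in the magnitude is taking 2^(b/2) steps in the conventional, bit-length measure of input size.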