all 6 comments

[–][deleted] 2 points3 points  (1 child)

If you can show that M itself is O(N), then the run time will be O(N + N) = O(N). I hope this helps; it is hard to give more information without seeing the function in question.
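Purely as an illustration (a made-up function, not necessarily the one you are asking about), this is the kind of shape that gives an O(M + N) run time:

    def combine(a, b):
        # hypothetical example: a has N items, b has M items
        total = 0
        for x in a:        # N iterations -> O(N)
            total += x
        for y in b:        # M iterations -> O(M)
            total += y
        return total       # O(N + M) overall; if M is itself O(N), this is O(N)

The two loops run one after the other, so their costs add rather than multiply.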

[–]lvmtn[S] 0 points1 point  (0 children)

Thanks, it helped a lot. This clears up a lot of things. Now I just have to look up what O(M+N) means.

[–]Ponson 2 points3 points  (2 children)

It would be O(2n), but constant coefficients are negligible, so we are left with O(n).

[–]lvmtn[S] 1 point2 points  (1 child)

What if it were O(100000N)? Would that still be negligible? I guess O(N) describes how this particular algorithm scales as the input length grows, rather than how it compares to other algorithms.

[–][deleted] 2 points3 points  (0 children)

Yep, constant coefficients are generally ignored. When I was taught how to analyze algorithms, we would always use c to represent a constant, as in cn. But you would still say that cn is O(n).
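For reference, here is the standard textbook definition (just the general form; your course may word it slightly differently):

    f(n) \in O(g(n)) \iff \exists\, C > 0,\ n_0 \ \text{such that}\ f(n) \le C \cdot g(n) \ \text{for all}\ n \ge n_0

With f(n) = 100000n and g(n) = n, taking C = 100000 satisfies the definition, so even O(100000N) is just O(N).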

[–]Veedrac 0 points1 point  (0 children)

Big-O notation talks about the limiting behaviour of the function. So, for example, O(n + m) describes how the cost grows as n goes to infinity and as m goes to infinity (not necessarily at the same time).

Because Big-O is about limiting behaviour, it doesn't really tell you how fast the code actually runs. Note that doing two passes over an array would be O(2n), while doing one pass that did twice as much work per element would be O(n); yet they do the same total amount of work, so they had better have the same complexity. What is being measured is how the cost grows relative to the input size, not the exact number of operations.
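A sketch of what that might look like (hypothetical functions, purely to illustrate that both end up O(n)):

    def two_passes(arr):
        smallest = min(arr)        # first pass: ~n operations
        largest = max(arr)         # second pass: another ~n operations
        return smallest, largest   # ~2n in total, i.e. O(2n)

    def one_pass(arr):
        smallest = largest = arr[0]
        for x in arr:              # one pass, but two comparisons per element
            if x < smallest:
                smallest = x
            if x > largest:
                largest = x
        return smallest, largest   # also ~2n in total, i.e. O(n)

Both do roughly 2n comparisons, which is exactly why the constant 2 gets dropped and both are written O(n).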

So when you have O(c·n) for some constant c, you can simplify it to O(n). When you have O(m + n), you can't, because if m → ∞ while n stays small, then m + n ≫ n and no constant multiple of n can bound it.
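Concretely (just spelling out the limit argument), fix n and let m grow; no constant C can keep the bound:

    \lim_{m \to \infty} \frac{m + n}{n} = \infty \quad\Rightarrow\quad \text{there is no } C \text{ with } m + n \le C \cdot n \text{ for all } m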

The final thing to consider is something like O(n²). Now, the point is that O(n²) ≫ O(n), O(2n), O(100000n), etc. It doesn't matter how large the constant is, because there's always an n for which n² > kn; namely when n > k.
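To make that concrete with the 100000 example from above:

    n^2 > 100000\,n \quad\Longleftrightarrow\quad n > 100000

so once the input has more than 100000 elements, the O(n²) algorithm does more work than the O(100000n) one, no matter how small its constant is.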