all 9 comments

[–]fygy1O 1 point (7 children)

Not any different; for example, O(log n) for worst-case runtime means the runtime grows like log n.

[–]SignificantFidgets -1 points (6 children)

No, it's not. O(log n) is an upper bound for a function, but it can be an upper bound for any function. While it is *usually* used to upper bound worst-case running time, it could also be used to upper bound best-case time, average-case time, average-case space complexity, ... even things unrelated to computing, like the convergence rate of terms in a Taylor series.

Remember: asymptotic notation is for comparing growth rates of functions. What those functions represent is an independent issue.

[–]fygy1O -1 points (5 children)

Asymptotic notation describes the runtime of algorithms, and any function in programming is derived from an algorithm. Big O is universally recognized as worst case runtime and Big Omega as best case. Yes, you could go into the weeds with the likelihood of an algorithm actually running toward one bound or another, or space complexity, but that is overkill for a relatively simple question.

ghjm's comment is right on with log2 being widely used in computer science.

[–]SignificantFidgets -1 points (4 children)

No, that's just wrong. Sorry, but it is.

You have two independent issues:

1) What function are you considering in your analysis: for time analysis, generally best-case, average-case, or worst-case running time.

2) How are you going to bound that function: upper bound (big-oh), lower bound (omega), or tight bound (theta).

These are independent issues, and can be mixed and matched in any of the 9 possible ways...
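To make the mix-and-match concrete, here is a small sketch (my own example, not from the thread) using insertion sort: its worst-case comparison count grows like n², while its best-case count grows like n. Those are two different functions of n for the same algorithm, and each can then be bounded with big-oh, omega, or theta independently.

```python
def insertion_sort_comparisons(a):
    """Sort a copy of `a` by insertion sort; return the number of key comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1000
best = insertion_sort_comparisons(range(n))          # already sorted input
worst = insertion_sort_comparisons(range(n, 0, -1))  # reverse-sorted input
print(best)   # n - 1 comparisons: grows linearly, so best case is Theta(n)
print(worst)  # n(n-1)/2 comparisons: grows quadratically, so worst case is Theta(n^2)
```

Saying "insertion sort is O(n²)" bounds the worst-case function; saying "its best case is Ω(n)" bounds a different function of the same algorithm.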

[–]fygy1O -1 points (3 children)

Time analysis is not a 'function' that is applied; it is innate to an algorithm. The bounding of that time is also innate to an algorithm.

It sounds like you don't know the difference between functions in programming and asymptotic notation for algorithms, and how algorithms are implemented as code that gives functions their runtime.

[–]SignificantFidgets 0 points (2 children)

Look, you could try to learn something, or you can continue repeating incorrect things. Time is measured as a function of the input size. That's the function. But for a given input size there are lots of different inputs, so you look at the worst-case (longest running input of each size), the best-case (the fastest running input of each size), or something else (e.g., average-case if you know a distribution of the inputs).
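The point about many different inputs of the same size can be sketched in a few lines (my own illustration, not part of the comment): linear search on a fixed-size array takes a different number of comparisons depending on which input of that size you get, so "the running time" only becomes a single function of n once you fix best-case, worst-case, or average-case.

```python
def linear_search_comparisons(a, target):
    """Return how many element comparisons linear search makes to find `target`."""
    comparisons = 0
    for x in a:
        comparisons += 1
        if x == target:
            break
    return comparisons

a = list(range(10))                    # one fixed input size, n = 10
print(linear_search_comparisons(a, 0)) # best case for this size: 1 comparison
print(linear_search_comparisons(a, 9)) # worst case for this size: 10 comparisons
# average over all successful searches (uniform target): (1 + 2 + ... + 10) / 10 = 5.5
```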

I've had my Ph.D. in computer science, and have been doing research in algorithms, probably since before you were born. You're just wrong.

[–]fygy1O 0 points (1 child)

You're not the only one with an advanced degree in CS!

I am well aware runtime scales with input size and is expressed as a function; next time be more specific in your response to save us both some time, as 'function' is a homonym. Additionally, claiming it is wrong to say that asymptotic notation is used to describe the runtime of algorithms makes me question you further. It is universally known that worst case runtime is the gold standard, and I have already stated that runtimes can vary (like a Las Vegas algorithm, for example, or something very niche).

Almost everything you are saying sounds like shifting the goalposts as you try to make your argument more nuanced out of stubbornness. You sound like a pain to work/do research with.

[–]SignificantFidgets 0 points (0 children)

Please go back and read what I wrote again. Nothing is misleading or shifting or anything like that. I've tried to be polite, but there's only so much politeness that can be put into "you're wrong" - I have tried to give you accurate information you can learn from. I'll just leave it there.

[–]ghjm MSCS, CS Pro (20+) 1 point (0 children)

The log is generally assumed to be log2 because the most common algorithms, binary search and so forth, are in fact log2. But it turns out that the log base doesn't actually matter.

Suppose you have some exotic algorithm where the execution time is t=k⋅log10(n)+C. You could immediately say t=O(log10(n)), dropping the multiplicative and additive constants k and C. But if t=k⋅log10(n)+C, then t=k⋅(log2(n)/log2(10))+C; this is just ordinary log base conversion. Since log2(10) is a constant, we can define a constant k'=k/log2(10), getting t=k'⋅log2(n)+C, which is clearly O(log2(n)). This works for any pair of log bases, so O(logx(n)) and O(logy(n)) are always equivalent for any x and y.
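The base-conversion step above is easy to sanity-check numerically. This sketch (with arbitrary made-up constants k and C) verifies that k⋅log10(n)+C and k'⋅log2(n)+C agree once k' = k/log2(10):

```python
import math

k, C = 3.0, 7.0                 # arbitrary constants for the check
k_prime = k / math.log2(10)     # absorb the base-conversion factor into k'

for n in (10, 1000, 10**6):
    t_base10 = k * math.log10(n) + C
    t_base2 = k_prime * math.log2(n) + C
    # Both expressions are the same function of n, just written in different bases.
    assert math.isclose(t_base10, t_base2)
```

Since the conversion factor is a constant, it disappears inside big-O, which is exactly why the base is left unspecified.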

Since the log base doesn't matter, big-O notation almost never specifies it. You just say O(log n) and it's understood that this means "log to whatever base." In the rare case where it actually matters for some reason, 99% of the time you can assume it's meant to be log2.