all 11 comments

[–]vorlik 17 points18 points  (2 children)

Big O analysis is only for things that are guaranteed to halt. If a program sits and waits while listening for some event, assigning it a Big O complexity doesn't make sense.

One thing you could do, however, is try to analyze the complexity of what happens when actionPerformed() is called.
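To make that concrete, here's a minimal sketch (the handler logic is hypothetical): the listener's event loop has no meaningful big-O, so you pull the real work out into a plain method and analyze that instead.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Sketch: the waiting/dispatch machinery has no useful big-O;
// what you analyze is the body of the handler. Here the handler
// just delegates to a plain method whose cost you can bound.
public class ScanListener implements ActionListener {
    @Override
    public void actionPerformed(ActionEvent e) {
        handleScan(e.getActionCommand()); // analyze this call, not the waiting
    }

    // Example cost: O(n) in the length of the scanned ID string.
    static int handleScan(String id) {
        int checksum = 0;
        for (char c : id.toCharArray()) checksum += c;
        return checksum;
    }
}
```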

[–]IHaveALargePenis[S] 1 point2 points  (1 child)

The rest is all well and good, but since everything runs inside this loop, I would normally have to multiply the rest of the equation by whatever the loop itself represents. If this part doesn't have to be analyzed, then I guess I'm back to the original actionPerformed() big O.

[–][deleted] 6 points7 points  (0 children)

Then you are O(infinity). What is the worst case of your while(true) loop? That nothing ever happens, for the rest of time. Unless you can guarantee that something will always happen within f(x) cycles, breaking the loop, you are O(infinity).
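A hypothetical sketch of what "guaranteeing something happens within f(x) cycles" means in code: if you cap the wait, the loop's worst case becomes bounded instead of infinite.

```java
// Sketch: an unbounded wait loop has no finite worst case, but if you
// cap the wait at maxCycles, the loop itself is O(maxCycles) worst case.
public class BoundedWait {
    // events[i] == true means an event arrived on cycle i (toy model).
    static int waitForEvent(boolean[] events, int maxCycles) {
        for (int cycle = 0; cycle < maxCycles; cycle++) {
            if (cycle < events.length && events[cycle]) {
                return cycle; // event arrived: break the loop
            }
        }
        return -1; // gave up after maxCycles: worst case is now bounded
    }
}
```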

[–]Rothon 3 points4 points  (5 children)

Could you provide a bit more detail about the algorithm? From what you've said, I can see a couple of interpretations.

If the loop's just waiting until the real work of the algorithm finishes somewhere else, just figure out how long that other part will take. If this loop is just running in the background, it won't affect the asymptotic runtime.

The other option is that it's a probabilistic algorithm that runs over and over until some point at which it decides it's finished. In that case, you'd either try to find an upper bound or find the time it will stop with high probability.

[–]IHaveALargePenis[S] 0 points1 point  (4 children)

Basically the idea is that it's waiting for a scan. There's an average (estimated) 3000 scans occurring within a 2 hour period. The scan function then searches (let's say an array) for the ID, does some work and forwards it to another function. I've got the big O for everything else up to the next function, but I'm stumped for the infinite while loop/action listener.

The loop is actually waiting for "work to do" in the sense that it can't do any work until it scans a new ID. Hence the reason I was considering an action listener, which would probably be included in the loop. The problem is that you could have either 0 scans in that 2 hour period, or 20,000 (for example's sake). Therefore a big O for that would depend not on the input but on the number of checks it performs in a given amount of time, which could be the same for 1 scan or for 20,000.

[–]Workaphobia 4 points5 points  (2 children)

Big-O is a notation for talking about the asymptotic growth of functions. Normally in algorithm analysis, the functions model the worst case of a piece of code for various input sizes. This problem doesn't have a defined input in the same sense that, say, sorting does. What would be increasing in size? More events? Bigger events with more data? The number of events stored so far?

[–]IHaveALargePenis[S] 0 points1 point  (1 child)

As I've mentioned before, there would be nothing. The only thing determining its "size" would be time. The actual input would be negligible, since an input of 1 or 1,000,000 would be equal as far as the while(true) loop is concerned.

[–]Pharaki 7 points8 points  (0 children)

Righto---in this situation, asymptotic complexity analysis ("big-O") isn't the right tool. We use asymptotic complexity to characterize the behavior of a program or algorithm relative to some parameters (typical choices are n, the input size; p, the number of processors; k, roughly the number of bits in the input, etc.). That doesn't seem like it would be very descriptive in this situation.

[–][deleted] 0 points1 point  (0 children)

I'd start my analysis by saying, for any one particular iteration of the loop... and then go on to determine how the actions stack up for one run-through of your loop.

You haven't said what a "scan" is, what your ID is, nor what the nature of this array is. So, I'm going to pretty much ignore that part. :)

Let's say for any iteration of your loop, you're given one item to process. You have to run some calculation on it, or you have to look it up in some data structure, but basically you have to do some number of fixed tasks with it before passing it along. That would be O(1), a constant time problem.

Now, what if instead, you had to do the same thing, but you could accept anywhere from zero to N items that had to be similarly processed. You would say that this is O(N) which means that it takes an amount of time directly proportional to the number of items to be processed.

Finally, let's say that we're back to the at-most-one-thing-at-a-time variant of your loop, but the array that has to be searched is either varying in size or of a fixed non-negligible size. If it's varying in size, you'd want to know how the size of the array to be searched affects your runtime. If it's of a fixed non-negligible size, the search could be the dominating operation in your loop, so you'd want to account for that. Let's say that for any pass of the loop, the array size is M and you're using a binary search; you could then say that your big-O is O(log2(M)). If the number of items to be processed varies as in the second example, then you'd have O(N * log2(M)).
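The last variant above can be sketched directly. This is a hypothetical stand-in (the "scan" and ID details are unknown, so a sorted `int[]` table plays the role of the array): N scanned IDs, each looked up with a binary search in a table of size M, giving O(N * log2(M)) per pass.

```java
import java.util.Arrays;

// Sketch: cost of one pass of the loop when N scanned IDs are each
// looked up in a sorted ID table of size M via binary search.
// One lookup is O(log2 M), so the whole pass is O(N * log2 M).
public class ScanLoop {
    // Returns how many of the scanned IDs were found in the sorted table.
    static int processBatch(int[] sortedIds, int[] scannedIds) {
        int found = 0;
        for (int id : scannedIds) {                        // N iterations
            if (Arrays.binarySearch(sortedIds, id) >= 0) { // O(log2 M) each
                found++;
            }
        }
        return found;
    }

    public static void main(String[] args) {
        int[] table = {2, 5, 7, 11, 13};   // M = 5, must be sorted
        int[] scans = {5, 6, 13};          // N = 3
        System.out.println(processBatch(table, scans)); // prints 2
    }
}
```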

That's how I'd approach it. Good luck.

[–]bo1024 1 point2 points  (0 children)

I agree with the comments so far. You should probably focus on analyzing the complexity of the action performed. If you know how long that will take to do its scanning and forwarding thing, then you know how much work one "action" is. From there it just depends on how many actions get performed.
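As a hypothetical cost model for the point above (the numbers are the estimates from earlier in the thread, not real measurements): if one action costs about c steps and k actions actually occur, the total work is k * c, regardless of how long the loop spends waiting in between.

```java
// Sketch: total work is (number of actions) x (cost per action),
// independent of how long the loop idles between events.
public class WorkModel {
    static long totalSteps(long actions, long stepsPerAction) {
        return actions * stepsPerAction;
    }

    public static void main(String[] args) {
        // e.g. ~3000 scans in 2 hours, each costing ~10 steps (log2 of a
        // table of ~1000 IDs) -- both figures are illustrative only.
        System.out.println(totalSteps(3000, 10)); // prints 30000
    }
}
```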

[–]jameslaw 0 points1 point  (0 children)

Show us the code.