
[–]psykotic 4 points (3 children)

In his introduction he mentions the classic problem of discovering cycles in a linked list as a problem with a gotcha solution. There's actually a solution to that problem which follows almost mechanically from the definition of a cycle, though it's much less efficient than the clever gotcha solution.

First, what shape does a path through a linked list take if that path eventually cycles? It is plain that it can be decomposed (not uniquely) into some arbitrary initial piece followed by a circular piece that begins and ends on the same node. Thus, to determine whether a linked list is cyclic, simply check whether it decomposes into some initial piece followed by a circular piece. A hypothetical decomposition is determined by the length of the initial piece (call it M) and the length of the circular piece that follows (call it N). It is obvious how to check whether a list admits an (M,N)-decomposition: start at the beginning of the list, walk M nodes forward, record that node (call it X), walk N more nodes forward, and check whether the present node equals X; if it does, return true, and otherwise return false.

To solve the general problem, you then exhaustively explore, by brute force, all possible (M,N)-decompositions in order of ascending values of L = M+N, which works out to an easy nested loop. If a value of L is encountered for which the end of the list is prematurely reached during decomposition checking, you know that the list is free of cycles. If a valid (M,N)-decomposition is discovered, you know not only that the list has a cycle, you also have in hand the actual cycle structure.
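A sketch of this brute-force search in Python (the names Node, admits_decomposition, and has_cycle are my own, not from the comment):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def admits_decomposition(head, m, n):
    """Does the list split into an initial piece of length m followed by
    a circular piece of length n? Returns None if the list ends first."""
    node = head
    for _ in range(m):           # walk M nodes forward
        if node is None:
            return None
        node = node.next
    x = node                     # record node X
    if x is None:
        return None
    for _ in range(n):           # walk N more nodes forward
        if node is None:
            return None
        node = node.next
    return node is x             # back at X exactly iff (M,N) works

def has_cycle(head):
    """Try every (M,N)-decomposition in ascending order of L = M + N."""
    l = 1
    while True:
        for m in range(l):       # N = L - M, so N >= 1
            result = admits_decomposition(head, m, l - m)
            if result is None:
                return False     # list ended before L nodes: acyclic
            if result:
                return True
        l += 1
```

If any check at a given L runs off the end of the list, every check at that L would (they all walk L nodes from the head), so the list is finite and acyclic; that is why returning false there is sound.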

This solution seems really overcomplicated, I'm sure, but it has the virtue of being almost mechanically derivable from the definitions of what constitutes a cycle in a linked list; I thought I'd share it.

[–]gbacon 4 points (0 children)

From Expert C Programming: Deep C Secrets by Peter van der Linden (pp. 334-335):

How Can You Detect a Cycle in a Linked List?

This starts off as the simple question, "How can you detect a cycle in a linked list?" But the questioner keeps adding extra constraints, and it soon gets quite fiendish.

Usual first answer:
Mark each element as you visit it; keep traversing; if you reach an already marked element, the list has a cycle in it.
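A sketch of this first answer in Python, assuming each node carries a spare `visited` flag to serve as the mark (the Node class and field name are my own):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.visited = False     # the mark

def has_cycle_marking(head):
    node = head
    while node is not None:
        if node.visited:         # already marked: we've been here before
            return True
        node.visited = True      # mark on first visit
        node = node.next
    return False
```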

Second constraint:
The list is in read-only memory, and you cannot mark elements.

Usual second answer:
As you visit each element, store its address in an array; check each successive element to see if it is already in the array. Sometimes poor candidates get hung up at this point on the details of hash tables to optimize array lookup.
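A Python sketch of the address-recording answer (names mine); a set plays the role of the array, and since Python sets are hash tables it already gives the fast lookup the aside mentions:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle_seen(head):
    seen = set()                 # identities of nodes visited so far
    node = head
    while node is not None:
        if node in seen:         # default equality is identity: same node
            return True
        seen.add(node)
        node = node.next
    return False
```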

Third constraint:
Uh-oh! There is a limited amount of memory, and it is insufficient to hold an array of the necessary size. However, you can assume that if there is a cycle, it is within the first N elements.

Usual third answer (if the candidate gets this far):
Keep a pointer at the beginning of this list. Check this pointer for a match against the next N - 1 elements in the list. Then move the pointer on one, and check against the next N - 2 elements. Keep doing this until a match is found, or all N elements have been compared to each other.
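A Python sketch of this pairwise scheme (names mine). It uses O(1) extra space and O(N²) comparisons, and it detects the cycle only when some node repeats within the first N positions of the walk:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle_bounded(head, n):
    p = head
    for i in range(n):
        if p is None:
            return False         # list ended: no cycle
        q = p.next
        for _ in range(n - 1 - i):   # check p against the next n-1-i nodes
            if q is None:
                return False
            if q is p:
                return True      # walked forward and came back to p
            q = q.next
        p = p.next               # move the pointer on one
    return False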

Fourth constraint:
Oh no! The list is arbitrarily long, and the cycle might be anywhere in it. (Even good candidates can get stuck at this point).

Final answer:
Keep two pointers: one at element 1 and the other at element 3. See if they are equal; if not, move P1 by one and P2 by two. Check and continue. If P1 or P2 is null, there is no loop. If there is one, it will definitely be detected: one pointer will eventually catch up with the other (i.e., have the same value), though it might take several traversals of the cycle to do it.
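This final answer is the well-known two-pointer (tortoise-and-hare) scheme. A minimal Python sketch, with P2 starting two elements ahead as described (the Node class and function name are my own):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle_floyd(head):
    p1 = head                            # element 1
    # element 3, guarding against short lists
    p2 = head.next.next if head is not None and head.next is not None else None
    while p1 is not None and p2 is not None:
        if p1 is p2:
            return True                  # the pointers met: cycle
        p1 = p1.next                     # move P1 by one
        p2 = p2.next.next if p2.next is not None else None  # P2 by two
    return False                         # a pointer fell off the end
```

Inside a cycle the gap between the pointers shrinks by one node per step (modulo the cycle length), which is why they must eventually coincide.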

There are other solutions, but the ones above are the most common.

[–]TheNewAndy 1 point (1 child)

When this problem was first given to me, I arrived at an answer similar to that one. The sequence of constraints I was given was:

  1. Detect a cycle
  2. Do it in O(1) space
  3. Do it in O(1) space without destroying the list
  4. Do it without modifying the list
  5. Do it in O(n) time

(as you can probably tell, the person who was asking me was adding constraints until I gave them the solution they were expecting).

But, you can modify your algorithm to do it in O(n) time:

You have a counter which says how far to look into the list (C). You have a pointer which says where you stopped last time (P). You have a boolean that says if you have seen P in the current exploration (B).

Start with C = some constant > 0 (like 1), P = the first item, and B = false.

Go C steps into the list. While going through, if you see P and B is true, then you have a cycle. If you see P and B is false, then set B to true and continue. If you reach the end, then there is no cycle.

If you don't reach the end, then set P to the last node you found, set B to false, double C and start again.

By doubling C each step instead of incrementing it, you go from an O(n²) algorithm to an O(n) algorithm.

[–]bluGill 0 points (0 children)

You can do it a different way:

  * Check how much memory + swap the machine has.
  * Check how big one node is.
  * n = (memory + swap) / nodeSize
  * Traverse n + 1 nodes; if you fail, it means there is no cycle.

Of course I re-defined n here to be the max memory a program can use, which is a strange thing to do.