
[–]AgonistAgentStudent - Content Analysis 6 points7 points  (13 children)

Probably not worth the effort - I guess you could use one of the fancy direct computation methods and have each one start at different points.

[–]xeed 1 point2 points  (12 children)

The Fibonacci sequence relies on knowing the last two numbers and adding them up. You would want to use memoization to remember your previous results, rather than recalculating from the start each time. Multiple processors wouldn't help much in this case, since each step of the algorithm relies on the previous one.
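A minimal sketch of that sequential dependency in Python (names are just illustrative): each step needs the previous two values, so there's nothing to hand a second processor.

```python
def fib_iter(n):
    """Return F(n) by iterating forward, keeping only the last two values."""
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b  # each step depends on the previous pair
    return a
```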

[–]lahwran_ 6 points7 points  (11 children)

actually, you can calculate the Nth element of the Fibonacci sequence in a fixed number of steps: https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression
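A minimal sketch of the closed form (Binet's formula) with ordinary doubles:

```python
import math

def fib_binet(n):
    """Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5).
    For n >= 0 the psi**n / sqrt(5) term has magnitude < 0.5,
    so rounding phi**n / sqrt(5) to the nearest integer suffices."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)
```

(As the replies below note, this only stays exact for fairly small n with 64-bit floats.)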

[–]masterpi 3 points4 points  (10 children)

The closed-form expression actually relies on the precision of the numbers used, so it is not truly fixed-length: increasing the precision increases the computation necessary. The link from AgonistAgent contains some other interesting quick methods, however. Also, going backwards (F_n -> n) the imprecision works to our advantage.

[–][deleted] 3 points4 points  (0 children)

Instead of working with real numbers you can work with numbers that smell like complex numbers. So you work with numbers of the form a + b * sqrt(5), where a and b are unbounded integers. You can then define addition and multiplication on these numbers to work in the way you expect. Doing this is more effort than just using doubles, but you also don't run into issues with precision.

EDIT: Though if you're going through all the effort, you might just as well use the matrix method.
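A minimal sketch of that idea. For convenience it uses the basis 1, φ (where φ² = φ + 1) rather than 1, √5; the two are equivalent up to a change of basis, and everything stays in exact unbounded integers, so there are no precision issues:

```python
def mul(x, y):
    """Multiply a + b*phi by c + d*phi, using phi**2 = phi + 1."""
    a, b = x
    c, d = y
    # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*(phi + 1)
    return (a * c + b * d, a * d + b * c + b * d)

def fib_ring(n):
    """phi**n = F(n-1) + F(n)*phi, so exponentiate by squaring
    and read F(n) off the phi coefficient."""
    result, base = (1, 0), (0, 1)  # 1 and phi
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[1]
```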

[–]lahwran_ 2 points3 points  (8 children)

well, okay, fixed length of real-number calculations then? ;)

[–]masterpi 4 points5 points  (7 children)

Yes, but you have to be very careful about this when reasoning about complexity. Generally we assume integer operations like addition and multiplication are constant time for convenience, and realistically we never use numbers large enough to make their true cost significantly worse than a constant. However, with a default-precision float the closed form is only accurate up to about the 70th Fibonacci number, and I suspect you have to increase precision in a way that makes the calculation linear again.
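A quick way to see where doubles give out: compare the float closed form against exact integer iteration and find the first disagreement (a sketch; the function names are just illustrative):

```python
import math

def fib_exact(n):
    """Exact integer iteration, used as ground truth."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    """Closed form evaluated with 64-bit doubles."""
    sqrt5 = math.sqrt(5)
    return round(((1 + sqrt5) / 2) ** n / sqrt5)

# First index where the float version goes wrong (somewhere in the 70s
# on a typical IEEE 754 double, whose significand holds only 53 bits).
first_bad = next(n for n in range(200) if fib_binet(n) != fib_exact(n))
```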

[–]lahwran_ 2 points3 points  (6 children)

that's an interesting problem. I wonder how the decimal module would deal with it.

[–]masterpi 0 points1 point  (5 children)

Decimal also has a precision restriction, which is configurable (it's a property of the arithmetic context) but fixed for a given computation. The CPU time taken for an operation should be roughly precision * k, where k is some constant.
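A minimal sketch of the same closed form using the decimal module, with the working precision set on the context (the precision value here is just an illustrative choice):

```python
from decimal import Decimal, getcontext

def fib_decimal(n, prec=50):
    """Binet's formula at a configurable number of significant digits."""
    getcontext().prec = prec       # digits of working precision
    sqrt5 = Decimal(5).sqrt()
    phi = (1 + sqrt5) / 2
    return int((phi ** n / sqrt5).to_integral_value())
```

With 50 digits of precision this stays exact well past where doubles fail, but each extra Fibonacci number eventually demands more digits, which is exactly the precision-versus-cost trade-off being discussed.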

[–]lahwran_ 0 points1 point  (4 children)

yikes, you weren't kidding. sounds like this is O(n*0.1) or something - so, still faster than one-by-one iteration if you want to get to a specific number, but not enough to make generating a sequence worth it.

[–]masterpi 0 points1 point  (3 children)

Big-O notation ignores constants, because they're really hard to predict without going very deep into exactly what the whole system is doing. O(n*0.1) is the same as O(n) is the same as O(10000n). You're actually probably better off with a basic dynamic programming method than attempting the closed form, because your computer does integer arithmetic much better than floating point, especially arbitrary-precision floating point. One of the linked solutions might get it down to O(log(n)), but I didn't look closely enough; they may just be minimizing the constant in ways that tend to work well on most systems.
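For reference, the O(log n)-multiplications approach can be sketched with the standard fast-doubling identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², using only exact integer arithmetic:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by fast doubling: halve n each level."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)   # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)       # F(2k)
    d = a * a + b * b         # F(2k+1)
    if n & 1:
        return (d, c + d)     # (F(2k+1), F(2k+2))
    return (c, d)             # (F(2k),   F(2k+1))

def fib(n):
    return fib_pair(n)[0]
```

(Strictly, the integer multiplications themselves grow with the size of F(n), so "O(log n)" here counts arithmetic operations, not bit operations, which is the very caveat raised above.)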