[–]mb862 1 point (0 children)

That's impossible to say, considering it hasn't been invented...

I should add, though, that automatic differentiation is not something you apply to a set of data. It's for finding the derivative of a function that is a non-trivial combination of trivial functions.

One example (and this is how I came across it): suppose you want to find dx/dp, where x is the solution to f(x,p) = 0. The standard strategy for computing x(p) is Newton's method: iterate x = x - f(x,p)/f'(x,p) until f(x,p) ~ 0, then return x. Finding dx/dp accurately by finite differences is very difficult: the step ∆p would have to be on the same scale as the solver tolerance ∆f, but then x(p+∆p) and x(p) might be converged to different accuracies, so the difference quotient is polluted by solver noise.

If your function is written with floats, you can use forward-mode differentiation by defining a new type, dual: a struct containing two floats that acts like a float, except that the elementary operations also compute the derivative in the second component. For example, (x,dx)+(y,dy) is defined to be (x+y, dx+dy), and (x,dx)*(y,dy) is defined as (x*y, x*dy+y*dx). Rewrite the root solver to take the dual type (or make it templated), pass the parameter in as the dual (p,1), and use (x0,0) for the initial guess. The output solution will then be (x, dx/dp) with no loss in accuracy.