Numerical Differentiation | Extreme Optimization Numerical Libraries for .NET Professional

The FunctionMath class contains a number of static (Shared) methods for approximating the derivative of a function. Finite differences are used to approximate the derivative. An adaptive method finds the best choice of finite difference and estimates the error. You can also create a delegate that numerically evaluates the derivative of an arbitrary function.

All methods listed in this section are defined as extension methods. This means that in most languages they can be called as if they were instance methods of their first argument. The examples illustrate how this is done.
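The mechanics of extension-method dispatch can be illustrated with a self-contained sketch. The MyFunctionMath class below is hypothetical, standing in for FunctionMath, and uses a fixed-step central difference rather than the library's adaptive algorithm:

```csharp
using System;

static class MyFunctionMath
{
    // Stand-in for FunctionMath.Derivative: a symmetric difference quotient
    // with a fixed step (the library chooses the step adaptively).
    public static double Derivative(this Func<double, double> f, double x)
    {
        double h = Math.Pow(2, -20);
        return (f(x + h) - f(x - h)) / (2 * h);
    }
}

class Program
{
    static void Main()
    {
        Func<double, double> f = Math.Cos;
        // Instance-style call, equivalent to MyFunctionMath.Derivative(f, 1.0):
        Console.WriteLine(f.Derivative(1.0));  // close to -Math.Sin(1)
    }
}
```

Because Derivative is declared with `this` on its first parameter, the delegate `f` appears to have it as an instance method; the library's extension methods are called the same way.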

The Derivative method
takes two or three parameters. The first argument is a Func<double, double> delegate that represents the function whose derivative is to be approximated. The second argument is the point at which to approximate the derivative.

```csharp
Console.WriteLine("Numerical derivative of cos(x):");
double result = FunctionMath.Derivative(Math.Cos, 1.0);
Console.WriteLine(" Result = {0}", result);
Console.WriteLine(" Actual = {0}", -Math.Sin(1));
```

The optional third parameter, of type DifferencesDirection, specifies the kind of finite differences to use in the calculation. The possible values for this parameter are summarized in table 3.

| Value | Description |
|---|---|
| Backward | Backward differences are used. Only function values at the target point and to the left of the target point are used. |
| Central | Central differences are used. Function values on both sides of the target point are used. |
| Forward | Forward differences are used. Only function values at the target point and to the right of the target point are used. |

Passing DifferencesDirection.Central, or omitting the third parameter, invokes the algorithm using central differences. The target function is evaluated on both sides of the target point.

In some cases this may not work. If the target function is undefined on one side of the target point, the algorithm will break down. If the target function contains a discontinuity at the target point, the result will be wrong. In these cases, you should use either forward or backward differences.
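A plain C# sketch (not using the library) shows why central differences fail at a domain boundary: Math.Sqrt is undefined left of 0, so a symmetric quotient at x = 0 evaluates the function at a negative argument and produces NaN, while a one-sided forward quotient does not:

```csharp
using System;

class BoundaryDemo
{
    static void Main()
    {
        Func<double, double> f = Math.Sqrt;   // undefined for x < 0
        double x = 0.0, h = 1e-6;

        // Central difference evaluates f on both sides of x:
        double central = (f(x + h) - f(x - h)) / (2 * h);   // f(-h) is NaN

        // Forward difference evaluates f only at x and to the right:
        double forward = (f(x + h) - f(x)) / h;             // finite

        Console.WriteLine("central: {0}", central);  // NaN
        Console.WriteLine("forward: {0}", forward);
    }
}
```

(The forward quotient is large here because the derivative of the square root diverges at 0; the point is only that it stays defined where the central quotient does not.)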

Passing DifferencesDirection.Backward to Derivative invokes the algorithm using backward differences. The target function is evaluated only at the target point and to its left, not to the right of the target point. Passing DifferencesDirection.Forward to Derivative invokes the algorithm using forward differences. The target function is evaluated only at the target point and to its right, not to the left of the target point.

The BackwardDerivative, CentralDerivative, and ForwardDerivative methods call the respective algorithms directly. They optionally take an out parameter, in which the estimated absolute error is returned.
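One common way to obtain such an error estimate, sketched below in plain C#, is to compare quotients computed with two step sizes. This illustrates only the general idea; the hypothetical helper is not the library's adaptive estimator:

```csharp
using System;

class ErrorEstimateDemo
{
    // Central difference with an out error estimate, in the spirit of
    // CentralDerivative(f, x, out error). Hypothetical sketch, fixed steps.
    static double CentralDerivative(Func<double, double> f, double x, out double error)
    {
        double h = 1e-4;
        double d1 = (f(x + h) - f(x - h)) / (2 * h);
        double d2 = (f(x + h / 2) - f(x - h / 2)) / h;
        // For central differences the truncation error shrinks about 4x when
        // the step is halved, so |d2 - d1| / 3 estimates the error of d2:
        error = Math.Abs(d2 - d1) / 3;
        return d2;
    }

    static void Main()
    {
        double error;
        double d = CentralDerivative(Math.Cos, 1.0, out error);
        Console.WriteLine("derivative = {0}, estimated error = {1}", d, error);
    }
}
```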

It may be useful to be able to use the numerical derivative as a function
that can be evaluated at arbitrary points.
This can be done using the
GetNumericalDifferentiator
method. This method returns a Func<double, double> delegate that evaluates the numerical derivative of the supplied function.

```csharp
Func<double, double> f = Math.Cos;
Func<double, double> df = FunctionMath.GetNumericalDifferentiator(f);
// Pass both to an equation solver:
double root = FunctionMath.FindZero(f, df, 1.0);
```

Using a numerical derivative in an algorithm that requires a derivative, as in the example above, is usually not a good idea. Most such algorithms have alternatives that do not use a derivative, although these may require more function evaluations. Still, numerical differentiation is itself relatively expensive, costing at least 5 evaluations of the original function per derivative. This is an important consideration when deciding which algorithm to use.

In addition to simple derivatives of functions of one variable, numerical approximations to the gradient of a multivariate function, or the Jacobian of a set of multivariate functions, can also be computed. The GetNumericalGradient method returns a delegate that evaluates the gradient of its argument. The GetNumericalJacobian method returns a delegate that evaluates the Jacobian of its argument.
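The idea behind a numerical gradient can be sketched in plain C#. This is a hypothetical stand-in using one fixed-step central difference per coordinate; it does not reproduce the library's delegate types or adaptive step selection:

```csharp
using System;

class GradientDemo
{
    // Returns a delegate that evaluates the gradient of f numerically.
    static Func<double[], double[]> GetNumericalGradient(Func<double[], double> f)
    {
        return x =>
        {
            double h = Math.Pow(2, -20);
            var grad = new double[x.Length];
            for (int i = 0; i < x.Length; i++)
            {
                double xi = x[i];
                x[i] = xi + h; double fp = f(x);
                x[i] = xi - h; double fm = f(x);
                x[i] = xi;                      // restore the coordinate
                grad[i] = (fp - fm) / (2 * h);  // central difference in x[i]
            }
            return grad;
        };
    }

    static void Main()
    {
        // f(x, y) = x^2 + 3y has gradient (2x, 3).
        Func<double[], double> f = x => x[0] * x[0] + 3 * x[1];
        var gradient = GetNumericalGradient(f);
        double[] g = gradient(new[] { 2.0, 5.0 });
        Console.WriteLine("({0}, {1})", g[0], g[1]);  // close to (4, 3)
    }
}
```

A Jacobian evaluator follows the same pattern, producing one such row of partial derivatives per component function.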

Numerical differentiation is an inherently unstable process. There is really no way to avoid subtracting two numbers that are very close, resulting in significant round-off error. The best we can hope for is a relative accuracy of roughly the square root of the machine precision (SqrtEpsilon), i.e. about half the significant digits.
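The trade-off is easy to observe by shrinking the step size: at first the truncation error falls, then round-off in the subtraction takes over and the total error grows again. A plain C# sketch with fixed steps, for cos at x = 1:

```csharp
using System;

class RoundOffDemo
{
    static void Main()
    {
        double x = 1.0, exact = -Math.Sin(1.0);
        for (int k = 2; k <= 12; k += 2)
        {
            double h = Math.Pow(10, -k);
            // Central difference: truncation error ~ h^2, round-off ~ eps / h.
            double d = (Math.Cos(x + h) - Math.Cos(x - h)) / (2 * h);
            Console.WriteLine("h = 1e-{0,-2}  error = {1:E2}", k, Math.Abs(d - exact));
        }
    }
}
```

The printed errors decrease down to a step of roughly 1e-5, then deteriorate: for very small h, the difference Math.Cos(x + h) - Math.Cos(x - h) cancels almost all significant digits.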


Copyright © 2004-2018, Extreme Optimization. All rights reserved.

*Extreme Optimization*, *Complexity made simple*, *M#*, and *M Sharp* are trademarks of ExoAnalytics Inc.

*Microsoft*, *Visual C#*, *Visual Basic*, *Visual Studio*, *Visual Studio.NET*, and the *Optimized for Visual Studio* logo are registered trademarks of Microsoft Corporation.