Represents a multi-dimensional optimizer that uses a quasi-Newton algorithm (DFP or BFGS).
Inheritance hierarchy: Extreme.Mathematics.Optimization.MultidimensionalOptimizer → Extreme.Mathematics.Optimization.DirectionalOptimizer → Extreme.Mathematics.Optimization.QuasiNewtonOptimizer
Namespace: Extreme.Mathematics.Optimization
Assembly: Extreme.Numerics.Net40 (in Extreme.Numerics.Net40.dll) Version: 6.0.16073.0 (6.0.16312.0)
public sealed class QuasiNewtonOptimizer : DirectionalOptimizer
Public NotInheritable Class QuasiNewtonOptimizer
	Inherits DirectionalOptimizer
public ref class QuasiNewtonOptimizer sealed : public DirectionalOptimizer
type QuasiNewtonOptimizer =
    class
        inherit DirectionalOptimizer
    end
The QuasiNewtonOptimizer type exposes the following members.
Use the QuasiNewtonOptimizer class to find an extremum of a multivariate function
using a quasi-Newton method. Two variations of this method are available: the BFGS method of
Broyden, Fletcher, Goldfarb and Shanno, and the DFP method of Davidon, Fletcher and Powell.
The default is the BFGS method.
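The choice between the two variants might look as follows. Note that the constructor overload and the QuasiNewtonMethod enumeration shown here are assumptions for illustration; they are not confirmed by this page.

```csharp
using Extreme.Mathematics.Optimization;

// BFGS is the default variant:
var bfgs = new QuasiNewtonOptimizer();

// A constructor overload (assumed) selects the DFP variant instead:
var dfp = new QuasiNewtonOptimizer(QuasiNewtonMethod.Dfp);
```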
A quasi-Newton method is the preferred method for smaller problems when the gradient
of the objective function is available. For large problems, the
Conjugate Gradient method is usually more efficient.
The objective function must be supplied as a multivariate function
delegate to the ObjectiveFunction property. The gradient of the objective function
can be supplied either as a delegate that returns the gradient as a new vector (by setting the
GradientFunction property), or as a delegate that returns the gradient in a vector passed
as its second argument (by setting the FastGradientFunction property). The latter has the
advantage that the same Vector instance is reused to hold the result, avoiding an allocation
on each call.
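The two ways of supplying a gradient can be sketched as follows. The exact delegate signatures are assumptions based on the description above, and the objective f(x) = x0² + 2·x1² is purely illustrative; `optimizer` is a QuasiNewtonOptimizer instance.

```csharp
// Gradient of f(x) = x0^2 + 2*x1^2.

// Option 1: return a newly created vector on every call.
optimizer.GradientFunction = x =>
    Vector.Create(2 * x[0], 4 * x[1]);

// Option 2 (assumed signature): fill and return the vector passed
// as the second argument, so no new Vector is allocated per call.
optimizer.FastGradientFunction = (x, gradient) =>
{
    gradient[0] = 2 * x[0];
    gradient[1] = 4 * x[1];
    return gradient;
};
```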
Sometimes, the gradient function is not available, or is very expensive to calculate.
In such instances, a numerical approximation may work better.
Before the algorithm is run, you must set the InitialGuess property to
a vector that contains an initial estimate for the extremum. The ExtremumType
property specifies whether a minimum or a maximum of the objective function is desired.
The FindExtremum method performs the actual
search for an extremum, and returns a Vector containing the best approximation.
The Extremum property also returns the best
approximation to the extremum. The ValueAtExtremum property
returns the value of the objective function at the extremum.
The Status property returns an AlgorithmStatus value that indicates the outcome of the algorithm.
A value of Normal indicates normal termination.
A value of Divergent usually indicates that the objective
function is not bounded.
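Putting the members above together, a typical run might look like the sketch below. The Rosenbrock test function (minimum at (1, 1)) and the exact delegate and factory signatures are illustrative assumptions, not taken from this page.

```csharp
using System;
using Extreme.Mathematics;
using Extreme.Mathematics.Optimization;

class QuasiNewtonExample
{
    static void Main()
    {
        var optimizer = new QuasiNewtonOptimizer();

        // Objective: the Rosenbrock function (illustrative choice).
        optimizer.ObjectiveFunction = x =>
            Math.Pow(1 - x[0], 2) + 100 * Math.Pow(x[1] - x[0] * x[0], 2);

        // Its analytic gradient.
        optimizer.GradientFunction = x => Vector.Create(
            -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] * x[0]),
            200 * (x[1] - x[0] * x[0]));

        // Required before the run: starting point and extremum type.
        optimizer.InitialGuess = Vector.Create(-1.2, 1.0);
        optimizer.ExtremumType = ExtremumType.Minimum;

        var extremum = optimizer.FindExtremum();

        Console.WriteLine("Status: {0}", optimizer.Status);
        Console.WriteLine("Extremum: {0}", optimizer.Extremum);
        Console.WriteLine("Value at extremum: {0}", optimizer.ValueAtExtremum);
    }
}
```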
The algorithm has three convergence tests. By default, the algorithm terminates
when any one of these tests is satisfied. You can deactivate a test by setting its Enabled
property to false. If all tests are deactivated, then the algorithm terminates only when
the maximum number of iterations or function evaluations is reached.
The first test is based on the uncertainty in the location
of the approximate extremum. The SolutionTest property returns a
VectorConvergenceTest&lt;T&gt; object that allows you to specify the desired tolerance.
See the VectorConvergenceTest&lt;T&gt; class for details on how to further customize this test.
The second test is based on the change in value of the objective function at the approximate extremum.
The test is successful when the change in the value of the objective function is within the tolerance.
Use this test with care: when the tolerance is too large, the algorithm may terminate prematurely.
The ValueTest property returns a SimpleConvergenceTest&lt;T&gt; object
that can be used to customize the test.
The third test is based on the value of the gradient at the approximate extremum.
The GradientTest property returns a VectorConvergenceTest&lt;T&gt; object
that can be used to customize the test. By default, the error is the gradient component
with the largest absolute value (the maximum norm).
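The three tests can be adjusted through the properties named above. The SolutionTest, ValueTest, and GradientTest member names come from this page; the Tolerance property is an assumption based on common convergence-test APIs, and Enabled is mentioned above. `optimizer` is a QuasiNewtonOptimizer instance.

```csharp
// Tighten the test on the uncertainty in the extremum's location:
optimizer.SolutionTest.Tolerance = 1e-8;

// Disable the function-value test to avoid premature termination:
optimizer.ValueTest.Enabled = false;

// Require the largest gradient component to be small in magnitude:
optimizer.GradientTest.Tolerance = 1e-6;
```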
Supported in: 6.0, 5.x, 4.x