Represents an optimizer that uses a conjugate gradient algorithm.
Inheritance Hierarchy:
  Extreme.Mathematics.Optimization.MultidimensionalOptimizer
    Extreme.Mathematics.Optimization.DirectionalOptimizer
      Extreme.Mathematics.Optimization.ConjugateGradientOptimizer
See Also: OptimizationSolutionReport
Namespace: Extreme.Mathematics.Optimization
Assembly: Extreme.Numerics.Net40 (in Extreme.Numerics.Net40.dll) Version: 6.0.16073.0 (6.0.17114.0)
C#
public sealed class ConjugateGradientOptimizer : DirectionalOptimizer
VB
Public NotInheritable Class ConjugateGradientOptimizer
	Inherits DirectionalOptimizer
C++
public ref class ConjugateGradientOptimizer sealed : public DirectionalOptimizer
F#
[<SealedAttribute>]
type ConjugateGradientOptimizer =
    class
        inherit DirectionalOptimizer
    end
The ConjugateGradientOptimizer type exposes the following members.
Use the ConjugateGradientOptimizer class to solve an optimization problem
using a conjugate gradient algorithm. Three variants of the algorithm are available: the method of
Fletcher and Reeves, the method of Polak and Ribière, and the positive method of Polak and Ribière.
The default is the positive Polak-Ribière method.
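Selecting a variant might look like the following sketch. The `Method` property and the `ConjugateGradientMethod` enumeration are assumptions based on the library's naming conventions and are not confirmed by this page; check the reference for your version.

```csharp
// Hypothetical sketch: the Method property and the
// ConjugateGradientMethod enumeration names are assumptions.
var optimizer = new ConjugateGradientOptimizer();

// Switch from the default (positive Polak-Ribiere) variant
// to the Fletcher-Reeves update formula:
optimizer.Method = ConjugateGradientMethod.FletcherReeves;
```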
The conjugate gradient method is the method of choice for large problems. For these problems,
it consumes less memory and performs less work per iteration than the other common methods.
On the downside, the search directions are often badly scaled, which makes the method
less suitable for smaller problems; for those, a quasi-Newton algorithm
is usually preferred.
The objective function must be supplied as a multivariate function
delegate through the ObjectiveFunction property. The gradient of the objective function
can be supplied in one of two ways: as a delegate that returns the gradient as a new vector (by setting the
GradientFunction property), or as a delegate that returns the gradient in its second argument
(by setting the FastGradientFunction property). The latter has the advantage
that the same Vector instance is reused to hold the result on every call.
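As an illustration, the two ways of supplying the gradient might look as follows in C#. This is a sketch: the exact delegate signatures and the `Vector.Create` factory method are assumptions that may vary between library versions, and in practice you would set only one of the two gradient properties.

```csharp
using System;
using Extreme.Mathematics;
using Extreme.Mathematics.Optimization;

var optimizer = new ConjugateGradientOptimizer();

// Objective: f(x) = (x0 - 1)^2 + 4 (x1 + 2)^2
optimizer.ObjectiveFunction = x =>
    Math.Pow(x[0] - 1.0, 2) + 4.0 * Math.Pow(x[1] + 2.0, 2);

// Option 1: a delegate that returns the gradient as a new vector.
optimizer.GradientFunction = x =>
    Vector.Create(2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0));

// Option 2: a delegate that writes the gradient into its second
// argument; the same Vector instance is reused on every call.
optimizer.FastGradientFunction = (x, g) =>
{
    g[0] = 2.0 * (x[0] - 1.0);
    g[1] = 8.0 * (x[1] + 2.0);
    return g;
};
```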
Before the algorithm is run, you must set the InitialGuess property to
a vector that contains an initial estimate for the extremum. The ExtremumType
property specifies whether a minimum or a maximum of the objective function is desired.
The FindExtremum method performs the actual
search for an extremum, and returns a Vector containing the best approximation.
The Extremum property also returns the best
approximation to the extremum. The ValueAtExtremum property
returns the value of the objective function at the extremum.
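Putting these pieces together, a complete run might look like the following sketch. The member names are taken from this page; the vector factory method is an assumption.

```csharp
var optimizer = new ConjugateGradientOptimizer();
optimizer.ObjectiveFunction = x => x[0] * x[0] + 4.0 * x[1] * x[1];
optimizer.GradientFunction = x => Vector.Create(2.0 * x[0], 8.0 * x[1]);

// Required before the run: a starting point, and whether to
// minimize or maximize the objective function.
optimizer.InitialGuess = Vector.Create(1.0, 1.0);
optimizer.ExtremumType = ExtremumType.Minimum;

// FindExtremum performs the search and returns the best approximation.
var extremum = optimizer.FindExtremum();

// The same point, and the objective value there, are also available
// through properties after the run:
var xBest = optimizer.Extremum;
double fBest = optimizer.ValueAtExtremum;
```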
The Status property is an AlgorithmStatus value that indicates the outcome of the algorithm.
A value of Normal indicates normal termination.
A value of Divergent usually indicates that the objective
function is not bounded.
A number of properties let you control the search for an extremum.
The LineSearch property returns a OneDimensionalOptimizer that
is used to locate a suitable new point along the current search direction. You can modify its
convergence criteria. Note that conjugate gradient algorithms require a fairly precise line search.
Sometimes, successive conjugate directions are almost parallel, or don't reflect the current
curvature of the objective function well, resulting in poor convergence.
This can be remedied in one of two ways. The RestartIterations property specifies
how often the conjugate direction is to be reset to the steepest descent direction. A value of 0, which
is the default, specifies not to reset the direction.
The RestartThreshold property is used to test whether successive search directions
are sufficiently orthogonal. When they are not, the search direction is reset to the steepest descent
direction. A lower value causes more frequent resets. The default value is 0.1.
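For example, a sketch using the two restart properties described above:

```csharp
var optimizer = new ConjugateGradientOptimizer();

// Reset the search direction to steepest descent every 20 iterations.
// (The default of 0 means: never reset based on an iteration count.)
optimizer.RestartIterations = 20;

// Reset when successive directions fail the orthogonality test.
// Lowering the threshold below the default of 0.1 makes resets
// more frequent.
optimizer.RestartThreshold = 0.05;
```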
The algorithm has three convergence tests. By default, the algorithm terminates
when any one of them is satisfied. You can deactivate a test by setting its Enabled
property to false. If all tests are deactivated, then the algorithm terminates only when
the maximum number of iterations or function evaluations is reached.
The first test is based on the uncertainty in the location
of the approximate extremum. The SolutionTest property returns a
VectorConvergenceTest&lt;T&gt; object that allows you to specify the desired
tolerance. See the VectorConvergenceTest&lt;T&gt; class for details on how to further customize
this test.
The second test is based on the change in the value of the objective function at the approximate extremum.
The test succeeds when the change in the value of the objective function is within the tolerance.
Care should be taken with this test: if the tolerance is too large, the algorithm may terminate prematurely.
The ValueTest property returns a SimpleConvergenceTest&lt;T&gt; object
that can be used to customize the test.
The third test is based on the value of the gradient at the approximate extremum.
The GradientTest property returns a VectorConvergenceTest&lt;T&gt; object
that can be used to customize the test. By default, the error is measured as the component
of the gradient with the largest absolute value.
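A sketch of customizing the three tests follows. The Enabled property is documented above; the `Tolerance` member is an assumption based on the library's convergence-test types and should be verified against the reference for your version.

```csharp
var optimizer = new ConjugateGradientOptimizer();

// Test 1: uncertainty in the location of the extremum.
optimizer.SolutionTest.Tolerance = 1e-8;

// Test 2: change in the objective function value. Disable it if a
// large tolerance would end the search prematurely.
optimizer.ValueTest.Enabled = false;

// Test 3: size of the gradient, measured by default through the
// component with the largest absolute value.
optimizer.GradientTest.Tolerance = 1e-6;
```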
Supported in: 6.0, 5.x, 4.x