Implements the Nelder-Mead simplex algorithm for multi-dimensional optimization.
Namespace: Extreme.Mathematics.Optimization
Inheritance hierarchy: MultidimensionalOptimizer → NelderMeadOptimizer
Assembly: Extreme.Numerics.Net40 (in Extreme.Numerics.Net40.dll) Version: 6.0.16073.0 (6.0.16312.0)
C#:
public sealed class NelderMeadOptimizer : MultidimensionalOptimizer
VB:
Public NotInheritable Class NelderMeadOptimizer
    Inherits MultidimensionalOptimizer
C++:
public ref class NelderMeadOptimizer sealed : public MultidimensionalOptimizer
F#:
type NelderMeadOptimizer =
    class
        inherit MultidimensionalOptimizer
    end
The NelderMeadOptimizer type exposes the following members.
Use the NelderMeadOptimizer class to find an extremum of an objective function
for which only the objective function is available, and the objective function itself may not be smooth.
The method is often called the downhill simplex method.
The main advantage of this method is that it converges for functions where other methods
would fail. This may happen, for instance, when the derivative of the objective function contains
discontinuities. The major drawback is that the method converges more slowly than other methods,
and performs poorly on large problems.
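To make the downhill simplex idea concrete, here is a minimal Python sketch of the Nelder-Mead iteration (reflection, expansion, contraction, shrink). This is an illustration of the general method only, not the library's implementation; all parameter values and names are illustrative.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, xatol=1e-8, fatol=1e-8, max_iter=500):
    """Illustrative downhill-simplex minimizer (not the library's code)."""
    n = len(x0)
    # Initial simplex: the initial guess plus n points offset along each axis.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        v = simplex[0].copy()
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]

    for _ in range(max_iter):
        # Order the vertices from best (lowest f) to worst.
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]

        # Two convergence tests, as described above: the spread of the
        # simplex (solution test) or the spread of the function values
        # (value test). Terminate when either is satisfied.
        if (np.max(np.abs(np.array(simplex[1:]) - simplex[0])) < xatol
                or abs(fvals[-1] - fvals[0]) < fatol):
            break

        centroid = np.mean(simplex[:-1], axis=0)

        # Reflect the worst vertex through the centroid of the others.
        xr = centroid + (centroid - simplex[-1])
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:
            # Expansion: the reflected point is the new best; try going further.
            xe = centroid + 2.0 * (centroid - simplex[-1])
            fe = f(xe)
            if fe < fr:
                simplex[-1], fvals[-1] = xe, fe
            else:
                simplex[-1], fvals[-1] = xr, fr
        else:
            # Contraction: pull the worst vertex toward the centroid.
            xc = centroid + 0.5 * (simplex[-1] - centroid)
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:
                # Shrink the whole simplex toward the best vertex.
                for i in range(1, n + 1):
                    simplex[i] = simplex[0] + 0.5 * (simplex[i] - simplex[0])
                    fvals[i] = f(simplex[i])

    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]

# Minimize a simple quadratic with minimum at (1, 2).
x, fx = nelder_mead(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2, [0.0, 0.0])
```

Note that no derivative of the objective function appears anywhere in the iteration, which is why the method applies to non-smooth objectives.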
The objective function must be supplied as a multivariate function
delegate to the ObjectiveFunction property. The gradient of the objective function
is not used.
Before the algorithm is run, you must set the InitialGuess property to
a vector that contains an initial estimate for the extremum. The ExtremumType
property specifies whether a minimum or a maximum of the objective function is desired.
The FindExtremum method performs the actual
search for an extremum, and returns a Vector containing the best approximation.
The Extremum property also returns the best
approximation to the extremum. The ValueAtExtremum property
returns the value of the objective function at the extremum.
The Status property is an AlgorithmStatus value that indicates the outcome of the algorithm.
A value of Normal indicates normal termination.
A value of Divergent usually indicates that the objective
function is not bounded.
The algorithm has two convergence tests. By default, the algorithm terminates
when either of these is satisfied. You can deactivate either test by setting its Enabled
property to false. If both tests are deactivated, then the algorithm terminates only when
the maximum number of iterations or function evaluations is reached.
The first test is based on the uncertainty in the location
of the approximate extremum. The test succeeds if the difference between the best and worst
approximation is within the tolerance. The SolutionTest property returns a
VectorConvergenceTest&lt;T&gt; object that allows you to specify the desired tolerance.
See the VectorConvergenceTest&lt;T&gt; class for details on how to further customize this test.
The second test is based on the difference in value of the objective function at the best and at the worst
point of the simplex. The test succeeds when the difference between the two is within the tolerance.
Care should be taken with this test: when the tolerance is too large, the algorithm will terminate prematurely.
The ValueTest property returns a SimpleConvergenceTest&lt;T&gt; object
that can be used to customize the test.
A third test, based on the value of the gradient at the approximate extremum,
is exposed through the GradientTest property.
However, this test does not apply to the Nelder-Mead method, and its
Enabled property is set to false
before the algorithm is executed.
Supported in: 6.0, 5.x, 4.x