Fixed interval methods

Extreme Optimization Numerical Libraries for .NET Professional

The integration algorithms in this category divide the integration interval into a number of equal subintervals. A weighted average of the function values at points determined by these subintervals gives an approximation for the integral.
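
To make this concrete, the sketch below computes such a weighted average directly in plain C#, using the composite trapezoid rule for sin(x) on [0,2]. It does not use the library; it only illustrates the structure that the methods in this section share.

C#

// Illustrative sketch only, not library code: a fixed interval rule approximates
// the integral as a weighted average of function values at equally spaced points.
Func<double, double> f = Math.Sin;
double a = 0.0, b = 2.0;
int n = 100;                      // number of equal subintervals
double h = (b - a) / n;           // width of each subinterval

// Composite trapezoid rule: endpoints get weight 1/2, interior points get weight 1.
double sum = 0.5 * (f(a) + f(b));
for (int i = 1; i < n; i++)
    sum += f(a + i * h);

Console.WriteLine("Trapezoid sum: {0}", h * sum);  // about 1.416 (exact value: 1 - cos 2)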

Left and Right Rectangle Rules

One way to compute an integral is to approximate the area using rectangles. This is what the rectangle rules do. The integration interval is divided into a large number of subintervals, and on each subinterval the function is approximated by a constant value. For the left-point rule, this is the function value at the lower bound (left endpoint) of the subinterval; for the right-point rule, it is the value at the upper bound (right endpoint). These rules are implemented by the LeftPointIntegrator and RightPointIntegrator classes.

These algorithms are of very low order: only constant functions are integrated exactly, and even linear functions are not. This makes them unsuitable for general problems. However, one property still makes them useful: for certain functions, the resulting approximation is guaranteed to be an upper or lower bound for the true value of the integral. For monotonically increasing functions, the left-point rule gives a lower bound. For monotonically decreasing functions, it gives an upper bound. The right-point rule gives the opposite bounds.
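
As a sketch of how this bracketing property can be used, the fragment below integrates the increasing function exp(x) on [0,1] with both rectangle rules. It assumes that LeftPointIntegrator and RightPointIntegrator expose the same members (RelativeTolerance, Integrate) as the integrator classes in the samples further down; check the class reference for the exact member set.

C#

// Sketch, assuming LeftPointIntegrator and RightPointIntegrator follow the same
// pattern as the SimpsonIntegrator and RombergIntegrator samples below.
LeftPointIntegrator left = new LeftPointIntegrator();
RightPointIntegrator right = new RightPointIntegrator();
left.RelativeTolerance = 1e-4;
right.RelativeTolerance = 1e-4;

// exp(x) is monotonically increasing on [0,1], so the left-point rule gives a
// lower bound and the right-point rule an upper bound for the exact value e - 1.
double lowerBound = left.Integrate(Math.Exp, 0, 1);
double upperBound = right.Integrate(Math.Exp, 0, 1);
Console.WriteLine("exp(x) on [0,1]: between {0} and {1}", lowerBound, upperBound);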

The Mid-Point and Trapezoid Rules

The mid-point rule, implemented by the MidpointIntegrator class, is another variation of the rectangle rule. It approximates the function on each subinterval by its value at the midpoint of the subinterval. The trapezoid rule approximates the function on each subinterval by a straight line through the function values at the left and right endpoints of the subinterval. The trapezoid rule is implemented by the TrapezoidIntegrator class.

These algorithms are of order one, meaning that linear functions are integrated exactly but quadratic functions are not, so they are also ill-suited for general problems. Once again, however, the resulting approximation is guaranteed to be an upper or lower bound for the true value of the integral of certain functions. For convex functions, the mid-point rule gives a lower bound. For concave functions, it gives an upper bound. The trapezoid rule gives the opposite bounds.
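
The sketch below uses this property to bracket the integral of the convex function exp(x) on [0,1]. As above, it assumes that MidpointIntegrator and TrapezoidIntegrator expose the same members as the integrator classes in the samples that follow.

C#

// Sketch, assuming MidpointIntegrator and TrapezoidIntegrator follow the same
// pattern as the samples below.
MidpointIntegrator midpoint = new MidpointIntegrator();
TrapezoidIntegrator trapezoid = new TrapezoidIntegrator();
midpoint.RelativeTolerance = 1e-4;
trapezoid.RelativeTolerance = 1e-4;

// exp(x) is convex, so the mid-point rule under-estimates and the trapezoid rule
// over-estimates the exact value e - 1.
double lowerBound = midpoint.Integrate(Math.Exp, 0, 1);
double upperBound = trapezoid.Integrate(Math.Exp, 0, 1);
Console.WriteLine("exp(x) on [0,1]: between {0} and {1}", lowerBound, upperBound);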

Simpson's Rule

One of the simplest integration algorithms is Simpson's rule, implemented by the SimpsonIntegrator class. This algorithm is of order 3. In each iteration, the number of points is doubled, and the difference between successive approximations is taken as the estimate for the integration error. Because the order of the algorithm is so low, its use is not recommended; it is included more for historical reasons than for practical application.

The following example evaluates the integral of the sine function over the interval [0,2] using Simpson's rule.

C#
SimpsonIntegrator simpson = new SimpsonIntegrator();
simpson.RelativeTolerance = 1e-5;
simpson.ConvergenceCriterion = ConvergenceCriterion.WithinRelativeTolerance;

double result = simpson.Integrate(Math.Sin, 0, 2);
Console.WriteLine("sin(x) on [0,2]");
Console.WriteLine("Simpson integrator:");
Console.WriteLine("  Value: {0}", simpson.Result);
Console.WriteLine("  Status: {0}", simpson.Status);
Console.WriteLine("  Estimated error: {0}", simpson.EstimatedError);
Console.WriteLine("  Iterations: {0}", simpson.IterationsNeeded);
Console.WriteLine("  Function evaluations: {0}", simpson.EvaluationsNeeded);

VB

Dim simpson As SimpsonIntegrator = New SimpsonIntegrator()
simpson.RelativeTolerance = 0.00001
simpson.ConvergenceCriterion = ConvergenceCriterion.WithinRelativeTolerance

Dim result As Double = simpson.Integrate(AddressOf Math.Sin, 0, 2)
Console.WriteLine("sin(x) on (0,2)")
Console.WriteLine("Simpson integrator:")
Console.WriteLine("  Value: {0}", simpson.Result)
Console.WriteLine("  Status: {0}", simpson.Status)
Console.WriteLine("  Estimated error: {0}", simpson.EstimatedError)
Console.WriteLine("  Iterations: {0}", simpson.IterationsNeeded)
Console.WriteLine("  Function evaluations: {0}", simpson.EvaluationsNeeded)

F#

let simpson = new SimpsonIntegrator()
simpson.RelativeTolerance <- 1e-5
simpson.ConvergenceCriterion <- ConvergenceCriterion.WithinRelativeTolerance

let result = simpson.Integrate(Math.Sin, 0.0, 2.0)
Console.WriteLine("sin(x) on [0,2]")
Console.WriteLine("Simpson integrator:")
Console.WriteLine("  Value: {0}", simpson.Result)
Console.WriteLine("  Status: {0}", simpson.Status)
Console.WriteLine("  Estimated error: {0}", simpson.EstimatedError)
Console.WriteLine("  Iterations: {0}", simpson.IterationsNeeded)
Console.WriteLine("  Function evaluations: {0}", simpson.EvaluationsNeeded)

Romberg Integration

Romberg integration is an enhancement of the basic trapezoid rule. It is implemented by the RombergIntegrator class. Once again, the number of points is doubled in each iteration. This time, however, the successive approximations are used to extrapolate to the exact value of the integral as the width of each subinterval approaches zero. Romberg integration has an effective order of 2k after k steps of the algorithm. Even though the number of function evaluations is still fairly high, it is a major improvement over Simpson's rule and a reasonable choice for target functions that are sufficiently smooth. The following example evaluates the same integral using Romberg integration.

C#
RombergIntegrator romberg = new RombergIntegrator();
romberg.RelativeTolerance = 1e-5;
romberg.ConvergenceCriterion = ConvergenceCriterion.WithinRelativeTolerance;
double result = romberg.Integrate(Math.Sin, 0, 2);
Console.WriteLine("sin(x) on [0,2]");
Console.WriteLine("Romberg integrator:");
Console.WriteLine("  Value: {0}", result);
Console.WriteLine("  Status: {0}", romberg.Status);
Console.WriteLine("  Estimated error: {0}", romberg.EstimatedError);
Console.WriteLine("  Iterations: {0}", romberg.IterationsNeeded);
Console.WriteLine("  Function evaluations: {0}", romberg.EvaluationsNeeded);

VB

Dim romberg As RombergIntegrator = New RombergIntegrator()
romberg.RelativeTolerance = 0.00001
romberg.ConvergenceCriterion = ConvergenceCriterion.WithinRelativeTolerance
Dim result As Double = romberg.Integrate(AddressOf Math.Sin, 0, 2)
Console.WriteLine("sin(x) on (0,2)")
Console.WriteLine("Romberg integrator:")
Console.WriteLine("  Value: {0}", result)
Console.WriteLine("  Status: {0}", romberg.Status)
Console.WriteLine("  Estimated error: {0}", romberg.EstimatedError)
Console.WriteLine("  Iterations: {0}", romberg.IterationsNeeded)
Console.WriteLine("  Function evaluations: {0}", romberg.EvaluationsNeeded)

F#

let romberg = new RombergIntegrator()
romberg.RelativeTolerance <- 1e-5
romberg.ConvergenceCriterion <- ConvergenceCriterion.WithinRelativeTolerance
let result = romberg.Integrate(Math.Sin, 0.0, 2.0)
Console.WriteLine("sin(x) on [0,2]")
Console.WriteLine("Romberg integrator:")
Console.WriteLine("  Value: {0}", result)
Console.WriteLine("  Status: {0}", romberg.Status)
Console.WriteLine("  Estimated error: {0}", romberg.EstimatedError)
Console.WriteLine("  Iterations: {0}", romberg.IterationsNeeded)
Console.WriteLine("  Function evaluations: {0}", romberg.EvaluationsNeeded)

Romberg integration does have its limits, however. For difficult integrands, nothing beats the robustness of the adaptive Gauss-Kronrod integrator discussed in the next section.


Send comments on this topic to support@extremeoptimization.com

Copyright © 2004-2021 ExoAnalytics Inc. All rights reserved.