Our customers

This is a partial list of companies who are using our libraries:

ABB Robotics
Allstate
Arcam
Astra Schedule
Babson College
Canadian Council on Learning
Canyon Associates
Caxton Associates
CECity
Constellation Energy
CreditSights
DeepOcean
Duke University
Dynamotive
Elecsoft
Engelhard Corporation
Epcor
Equipoise Software
Galileo International
GAM UK
Gammex
GlaxoSmithKline
Global Matrix
The Hartford
Infinera Corporation
Intel
JDS Uniphase
LaBranche & Co.
Learning & Skills Council
Jacobs Consultancy
Litman Gregory
Lucas Systems
Malvern Instruments
Medrio
Merck & Co.
Mintera
Monitor Software
MorningStar
NanoString Technologies
Paletta Invent
Parametric Portfolio Associates
Prosanos
RATA Associates
RiskShield
Ramboll
Standard & Poor's
Strategic Analysis Corporation
Univ. of Alicante
Univ. of South Carolina
vielife
Xerox
US Army


Extreme Optimization Numerical Libraries for .NET

Performance

The Extreme Optimization Numerical Libraries for .NET use native, processor-specific code for their core computations. This gives you performance comparable to the fastest code available.

For example, the classes in the Extreme.Mathematics.LinearAlgebra namespace use native BLAS and LAPACK routines wherever possible. BLAS (Basic Linear Algebra Subprograms) is the de facto standard for core numerical linear algebra routines such as matrix and vector products. LAPACK (Linear Algebra PACKage) is the standard for more complex functionality such as matrix decompositions and eigenvalue problems.
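
To illustrate this division of labor, the sketch below uses NumPy, which likewise delegates its dense linear algebra to BLAS and LAPACK. It is a generic illustration of which operations belong to which layer, not this library's own API.

```python
import numpy as np

# NumPy routes these calls to the same kind of BLAS/LAPACK routines
# described above.
a = np.arange(9.0).reshape(3, 3) + 10.0 * np.eye(3)
b = np.ones((3, 3))

product = a @ b                     # BLAS level 3: matrix-matrix product
q, r = np.linalg.qr(a)              # LAPACK: QR decomposition
eigenvalues = np.linalg.eigvals(a)  # LAPACK: eigenvalue problem

# The factors reproduce the original matrix.
assert np.allclose(q @ r, a)
```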

The BLAS and LAPACK interface is public, so you can plug in your own implementation if desired. This is particularly important if you wish to use the library on a non-Windows platform.
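
The pluggable-backend pattern behind such a public interface can be sketched as follows. The class and method names here are hypothetical illustrations chosen for this sketch, not the library's actual interface.

```python
from abc import ABC, abstractmethod

class BlasBackend(ABC):
    """Hypothetical interface a custom BLAS implementation would satisfy."""

    @abstractmethod
    def dot(self, x, y):
        """Vector dot product (BLAS level 1)."""

class ManagedBlas(BlasBackend):
    """Portable fallback written entirely in the host language."""

    def dot(self, x, y):
        return sum(a * b for a, b in zip(x, y))

# On supported platforms a native, processor-specific backend would be
# plugged in here instead; elsewhere the managed fallback is used.
backend = ManagedBlas()
result = backend.dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```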

All native routines also have managed equivalents. The managed code isn't as fast as the native code, especially for larger problems, but it has the advantage of portability and a smaller memory footprint.

The tables below show some performance benchmarks. The tests were run on a 3 GHz Pentium 4 with 512 MB of RAM.

Benchmark results for processor-specific, native implementation:

Matrix size            5x5        50x50      1000x1000
Number of iterations   500,000    10,000     10
LU decomposition       2.05s      1.31s      2.17s
QR decomposition       3.89s      5.10s      4.66s
Matrix multiply        0.37s      0.78s      4.22s

Benchmark results for 100% managed implementation:

Matrix size            5x5        50x50      1000x1000
Number of iterations   500,000    10,000     10
LU decomposition       1.18s      3.25s      10.11s
QR decomposition       2.57s      9.25s      45.30s
Matrix multiply        0.38s      3.24s      27.52s
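
Benchmarks of this kind time a fixed number of iterations of one operation. The sketch below shows the general shape of such a harness, using a naive pure-Python matrix multiply as a stand-in for a managed implementation; it is an illustration of the methodology, not the code used to produce the tables above.

```python
import time

def naive_matmul(a, b):
    """Triple-loop matrix product: a stand-in for a managed implementation."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def benchmark(fn, iterations):
    """Return total wall-clock time for the given number of iterations."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

n = 50
a = [[float(i + j) for j in range(n)] for i in range(n)]
elapsed = benchmark(lambda: naive_matmul(a, a), 10)
print(f"{n}x{n} matrix multiply, 10 iterations: {elapsed:.2f}s")
```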