Using Formulas to Define Statistical Models

In R, and the S language from which it was derived, model formulas (or model formulae as the official documentation calls them) are the standard way of specifying the variables in a model. The patsy package brought the same convenient syntax to Python. With an update we released earlier today, you can now use formulas to specify models in the Extreme Optimization Numerical Libraries for .NET.

Formula basics

Formulas work for almost all types of models. Here, we will use regression as an example. A regression model is defined by one dependent variable and one or more independent variables. To specify a model with dependent variable y and independent variables x1 and x2, we can use the formula:

y ~ x1 + x2

In this formula, the ~ operator separates the dependent variable from the independent variables, and the + operator indicates that x1 and x2 should both be included in the model.

It is useful to think of formulas as operating on sets of variables. So the terms y, x1, and x2 each stand for a set of one variable. The + operator stands for the union of the set of terms on the left and the set of terms on the right. It follows that including the same variable twice results in the same formula:

y ~ x1 + x1 + x2 + x1 + x2

is exactly the same as

y ~ x1 + x2

Other operators exist that specify different combinations of variables; they are all really just set operations. We will get to those in a minute.
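To make the set interpretation concrete, here is a toy C# sketch. It is not the library's formula parser; it only illustrates how + behaves as a set union on the right-hand side terms:

using System;
using System.Collections.Generic;

// '+' behaves like set union, so repeated terms collapse into one.
var terms = new List<string>();
foreach (var t in "x1 + x1 + x2 + x1 + x2".Split('+'))
{
    var term = t.Trim();
    if (!terms.Contains(term)) terms.Add(term);
}
Console.WriteLine(string.Join(" + ", terms));   // prints: x1 + x2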

In many multivariate models like clustering models, factor analysis or principal component analysis (PCA), there are no dependent variables, only features. In this case, the first part of the formula is omitted.

The Intercept

The special term 1 stands for the intercept (constant) term in a model. Because nearly all linear models include an intercept term, one is included by default. So, again for a regression, the model

y ~ x1 + x2

is equivalent to

y ~ 1 + x1 + x2

To exclude the intercept term, there are two options. The first is the - operator, which corresponds to the set difference: it returns the set of terms in its left operand with the terms in its right operand removed:

y ~ x1 + x2 - 1

Alternatively, the special no-intercept term, 0, can be added as the first term in the model:

y ~ 0 + x1 + x2

Interactions

Models may include not just main effects but also interactions between terms. The interaction between two numerical variables is the element-wise product of the two variables. Interactions are specified by the : operator. For example:

y ~ x1 + x2 + x1:x2

It is so common to include both the main effects and the interactions in a model that there is a special operator for this purpose: the product operator *. The model above can therefore be written more compactly as:

y ~ x1*x2

There is another shortcut operator: ** or ^ (both forms are equivalent). The right operand has to be an integer. It specifies how many times the left operand should be repeated in a product. So, the model

y ~ (x1 + x2)**3

is equivalent to

y ~ (x1 + x2)*(x1 + x2)*(x1 + x2)

Categorical variables

Most models require variables to be numerical. To include categorical variables in a model, they have to be encoded as numerical variables. This is done automatically. In a formula like

y ~ x + a

where a is a categorical variable, the term a really means “the set of variables needed to fully represent the variable a in the model.”

The standard way to encode categorical variables is called dummy encoding or one hot encoding. For each level of the categorical variable, a corresponding indicator variable is generated that has a 1 where the variable’s value equals that level, and a 0 otherwise. That’s not the full story, however.

The sum of all indicator variables for a categorical variable is a variable with ones everywhere. This is exactly the same as an intercept term. So, if an intercept term is present in a model, then for a categorical variable with 2 levels, you only need 1 indicator variable in the model. For a variable with 3 levels, you only need 2. Adding the last indicator variable does not add any more information. It is redundant.
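As a concrete illustration, here is a small stand-alone C# sketch (not the library's encoder) that dummy-encodes a three-level variable and drops the first level, since that is the column made redundant by the intercept:

using System;
using System.Linq;

string[] a = { "low", "high", "medium", "low", "high" };
string[] levels = a.Distinct().OrderBy(s => s).ToArray();   // high, low, medium

// With an intercept in the model, one level is dropped: k levels -> k - 1 indicators.
foreach (var value in a)
{
    var indicators = levels.Skip(1).Select(level => value == level ? 1 : 0);
    Console.WriteLine($"{value,-6} -> [{string.Join(", ", indicators)}]");
}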

Handling redundancies, especially for models that include interactions between variables, can get somewhat complicated. Complicated enough for R to get it wrong sometimes. No worries: we’ve done it right.

The Catch All Term

Often, the data set used to fit a model contains just the variables that are used in the model. In this case, it is convenient to use the special . term. This term captures all the variables in the data set that have not yet appeared in the model. So, for example, if the data set contains the variables y, x1, x2, …, x17, but nothing else, then instead of writing out the full formula:

y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 + x14 + x15 + x16 + x17

you can simply write:

y ~ .

A code example

This example is adapted from the linear regression QuickStart sample. We’ll use C#, but the code is also available in Visual Basic and F#.

The example uses information about the test scores of 200 high school students. We’ll try to predict the science score in terms of some other variables: the scores in reading, math, and social studies, and the gender of the student.

var data = DataFrame.ReadCsv(@"..\..\..\..\Data\hsb2.csv");
var formula = "science ~ math + female + socst + read";
var model = new LinearRegressionModel(data, formula);
model.Compute();
Console.WriteLine(model.Summarize());

We start by reading the data from a CSV file into a data frame. We can then go straight to setting up the model using a formula, and computing it. We then print a summary of the results. This gives us the following output:

# observations: 200
R Squared:            0.4892        Log-likelihood:      -674.63
Adj. R Squared:       0.4788        AIC:                 1359.25
Standard error:       7.1482        BIC:                 1375.75
===================================================================
Source of variation   Sum.Sq.   D.o.f. Var.Est.  F-stat.  prob.
Regression            9543.72     4.00  2385.93  46.6948   0.0000
                      9963.78   195.00    51.10
                     19507.50   199.00    98.03
===================================================================
Parameter                Value   St.Err.       t   p(>t)
Constant              12.32529   3.19356    3.86  0.0002
math                   0.38931   0.07412    5.25  0.0000
female                -2.00976   1.02272   -1.97  0.0508
socst                  0.04984   0.06223    0.80  0.4241
read                   0.33530   0.07278    4.61  0.0000

We see that the math and reading scores were the most significant. In this group, girls scored about 2 points lower than boys, on average.

Limitations

This is just our initial implementation of model formulas. Not everything works entirely as we would like it, and there are some limitations.

Expressions are not supported.

This is the biggest limitation. Formulas in R and in patsy can contain expressions of variables. So, for example, you can have

log(y) ~ x1 + x2

In this example, the dependent variable is the natural logarithm of the variable y. When the expression contains operators that could be confused with formula operators, the special I() function is used to indicate that a term is to be evaluated as a mathematical expression:

y ~ x1 + I(x1+x2)

This formula specifies a model with 2 independent variables: x1 and the sum of x1 and x2.  Such expressions are not supported in this release.

Special purpose variables are not supported.

Some models have variables that play a special role. For example, Poisson regression models sometimes have an offset variable that is associated with a parameter fixed at 1. In R, the offset parameter is indicated by the expression offset(variableName). This is currently not supported.

The ‘nested in’ operators / and %in% are not supported.

These operators are not very common and a little confusing, so we decided to leave them out, at least for now.

The no-intercept term must appear first.

Adding the no-intercept term, 0, anywhere other than at the start of the independent variables or features has undefined behavior.

Conclusion

In only a few lines of code, we've loaded a data set, set up a model and printed a summary of the results. We believe that specifying models using formulas is a very useful feature. You can read more in the documentation. The QuickStart samples have also been updated.

Of course, there's nothing like trying it out yourself. Get the update here!

New Version of Numerical Libraries for .NET released

Last week, we released version 6.0 of our Extreme Optimization Numerical Libraries for .NET. We’ve introduced a number of great new features. Here are some highlights.

The linear algebra library has undergone a major overhaul:

  • All vector and matrix types are generic and support arbitrary element types.
  • Thanks to type inference, the element type rarely needs to be specified.
  • New mutability options let you create read-only vectors, and writable vectors with copy-on-write semantics.
  • We’ve filled in missing operations and consistently offer them in three variants: perform the operation in-place, return a new array, or return the result in an existing array.
  • The native linear algebra libraries now support Conditional Numerical Reproducibility.

The data analysis library has been greatly expanded:

  • Group rows by value or quantile, sliding and expanding windows, partitions, time-based resampling and 2D pivot tables.
  • Many new aggregator functions with efficient implementations for specific types of groupings like moving averages.
  • Matrices and vectors can act as data frames. The same functionality that is available for data frames, like data manipulation, aggregation, and automatic missing value handling, can also be used on vectors and matrices.
  • LINQ queries are supported on data frames, matrices, and vectors.
  • Create data frames from databases, arbitrary .NET objects, or import and export to CSV files. Support for other file formats, including R data files and SAS files, will be released in the coming weeks and months.
  • Many other improvements, including data transformations and lookup/join on nearest.

The statistics library has been integrated with the data analysis library:

  • We’ve removed the distinction between vectors and statistical variables. Everything is now a vector.
  • Covariance matrices, parameter vectors and similar objects are automatically labeled with variable names.
  • Residuals, predictions, and similar vectors automatically get the observations’ labels.
  • Categorical variables are automatically expanded to indicator variables as needed.

We’ve also greatly improved the experience for interactive computing. Many objects have pretty printers that return descriptions suitable for an interactive environment, for example:

  • Data frames, vectors, and matrices limit the displayed rows and columns to fit the screen.
  • Statistical models give a summary of parameter estimates and diagnostic information.
  • Statistical tests give a brief description of the test results.

There’s more coming in that area.

Unfortunately, all this goodness did require us to make some breaking changes. Most drastic is the move to generic vectors and matrices, which caused changes in every area of the library. Fortunately, the required changes are not too extensive: just adding the generic type argument in declarations and some constructor calls takes care of most issues. The data analysis library has been moved to a different namespace, but otherwise the changes are minimal. Code using the statistics library requires considerably more changes, but we’re working to make that transition easier with a small ‘backwards compatibility’ library that we’ll be releasing soon.
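For example, a typical declaration change might look like the sketch below. The factory calls mirror the ones used in the F# and IronPython posts further down, and we assume a using Extreme.Mathematics; directive; treat it as an illustration of the kind of edit involved, not as the exact 6.0 signatures.

// Before (5.x): non-generic vector and matrix types.
// Vector v = Vector.Create(3, i => i + 1.0);
// Matrix A = Matrix.Create(3, 3, (i, j) => 1.0 / (1 + i + j));

// After (6.0): the element type becomes a generic type argument;
// thanks to type inference it rarely needs to be spelled out elsewhere.
Vector<double> v = Vector.Create(3, i => i + 1.0);
Matrix<double> A = Matrix.Create(3, 3, (i, j) => 1.0 / (1 + i + j));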

The new version installs side-by-side with earlier versions, so your existing code will continue to work.

In the coming weeks, I’ll be posting in more detail about different aspects of the new version. Stay tuned!

In the meantime, you can give the new version a try or upgrade now.

CUDA Support Preview

Within the next couple of months we’ll be releasing an update to our Numerical Libraries for .NET that includes support for GPU-accelerated calculations using NVIDIA’s CUDA libraries.

We wanted to make it very easy to offload calculations to the GPU, which means the number of required code changes should be minimal. At the same time, we wanted to make optimal use of the GPU. One of the most time-consuming parts of GPU computing is transferring data between CPU and GPU memory. To take full advantage of the processing power, data should be kept on the GPU as much as possible.

To support this functionality, we’ve introduced three new types: DistributedProvider, DistributedVector and DistributedMatrix. DistributedProvider is an abstract class that defines core functionality of the distributed computing platform such as memory management and computational kernels. At the moment, it has one concrete implementation: CudaProvider. We expect to add OpenCLProvider and possibly others (Xeon Phi, C++/AMP, MPI…) in the future.

DistributedVector and DistributedMatrix represent distributed (i.e. GPU-based) arrays that may or may not have a local copy in CPU memory. Data is copied to and from GPU memory only as needed. The result of, say, multiplying two matrices in GPU memory is a third matrix which also resides on the GPU. If further calculations are done using this matrix, its data is kept on the GPU. It is never copied to CPU memory, unless individual elements are accessed, or an operation is performed that is not supported on the GPU. In the latter case, the data is copied to local memory and the CPU-based implementation is used. We also fall back on the CPU-based code automatically if the GPU runs out of memory, or if there is no CUDA GPU at all.

So what does the code look like? Well, here is a small example that runs 100 iterations of the power method for computing the largest eigenvalue of a matrix using various configurations. The core computation is a matrix-vector product. We use the same code for the CPU-based and the GPU-based calculation.

static double DoPower(Matrix<double> A, Vector<double> b) {
    const int imax = 100;   // maximum number of power iterations
    double λ = 0;
    b.SetValue(1.0);
    Vector<double> temp = null;
    for (int i = 0; i < imax; i++) {
        temp = Matrix<double>.MultiplyInto(A, b, temp);
        // Note that temp will exist only on the GPU
        // if A and b are GPU arrays
        var λ1 = temp.Norm();
        Vector<double>.MultiplyInto(1 / λ1, temp, b);
        if (Math.Abs(λ1 - λ) < 1e-5) break;
        λ = λ1;
    }
    // In case this is a GPU array: free GPU memory.
    temp.Dispose();
    return λ;
}

// CPU:
l = DoPower(A, b);

// GPU:
l = DoPower(A.MakeDistributed(), b.MakeDistributed());

As you can see, the only difference between the CPU and GPU versions is that we called MakeDistributed on the input arguments.
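A hypothetical sketch of the opposite approach, in which the matrix is touched on the CPU in every iteration so that it has to be re-uploaded before each multiplication, might look like the code below. The method name and the element indexer are our own assumptions; everything else mirrors the DoPower method above.

static double DoPowerOffloaded(Matrix<double> A, Vector<double> b) {
    const int imax = 100;
    double λ = 0;
    b.SetValue(1.0);
    Vector<double> temp = null;
    for (int i = 0; i < imax; i++) {
        // Writing an element on the CPU invalidates the copy in GPU memory,
        // so the matrix must be transferred to the GPU again for the multiply.
        A[0, 0] = A[0, 0];
        temp = Matrix<double>.MultiplyInto(A, b, temp);
        var λ1 = temp.Norm();
        Vector<double>.MultiplyInto(1 / λ1, temp, b);
        if (Math.Abs(λ1 - λ) < 1e-5) break;
        λ = λ1;
    }
    temp.Dispose();
    return λ;
}

// l = DoPowerOffloaded(A.MakeDistributed(), b.MakeDistributed());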

In our benchmark we added a third variation along these lines: it touches the matrix on the CPU during each iteration, which forces the matrix to be copied to the GPU in each iteration, similar to what happens with naive offloading. Here are the results:

Size: 100
MKL (CPU only):        4.107 ms (lambda=6.07967352075151)
CUDA (keep on GPU):   25.101 ms (lambda=6.07967352075151)
CUDA (offloaded):     29.593 ms (lambda=6.07967352075151)

Size: 500
MKL (CPU only):       42.116 ms (lambda=13.3132987677261)
CUDA (keep on GPU):   30.376 ms (lambda=13.3132987677261)
CUDA (offloaded):     94.250 ms (lambda=13.3132987677261)

Size: 1000
MKL (CPU only):      171.170 ms (lambda=18.754878830699)
CUDA (keep on GPU):   35.196 ms (lambda=18.754878830699)
CUDA (offloaded):    276.329 ms (lambda=18.754878830699)

Size: 5000
MKL (CPU only):    4397.868 ms (lambda=41.3752599052634)
CUDA (keep on GPU): 282.907 ms (lambda=41.3752599052635)
CUDA (offloaded):  5962.417 ms (lambda=41.3752599052635)

This is on a GPGPU-poor GTX 680, with 1/24 double-precision capacity, compared to 1/2 for Titan/Kepler cards, and using Intel’s Math Kernel Library version 11. It clearly shows that using our implementation, the GPU version is competitive for moderate sized matrices (n=500) and really charges ahead for larger problems, while the simple offloading technique never quite catches up.

Using Numerical Libraries for .NET from F#

With the release of version 5.0 of our Numerical Libraries for .NET, we’ve also considerably improved our support for the F# language. In this article I want to walk you through setting up F# Interactive to use the Extreme Optimization Numerical Libraries for .NET, and show off some of the new features. This tutorial will focus on F# 3.0 in Visual Studio 2012. Most of the functionality also works with F# 2.0 in Visual Studio 2010.

Setting up F# Interactive

The first step is to bring the Extreme Optimization Numerical Libraries for .NET assemblies into the F# Interactive environment:

#I @"c:\Program Files\Extreme Optimization\Numerical Libraries for .NET\bin\Net40\";;
#r "Extreme.Numerics.Net40.dll";;
#r "Extreme.Numerics.FSharp30.Net40.dll";;

open Extreme.Mathematics;;

fsi.AddPrinter(fun (A : Extreme.Mathematics.ISummarizable) -> A.Summarize());;

We do this by first adding the path to the Extreme Optimization binaries to the DLL search path using the #I command. (Note: the path depends on the installation folder and may be different on your machine.) We then reference the main assembly, Extreme.Numerics.Net40, and the F#-specific assembly, Extreme.Numerics.FSharp30.Net40, using the #r command. Finally, we tell fsi to use a pretty printer for objects that implement the ISummarizable interface. This will print out vectors and matrices in a user-friendly form.

Working with vectors and matrices

So let’s get our hands dirty right away. When we referenced the Extreme.Numerics.FSharp30.Net40 dll, this automatically opened several modules that make it easier to do linear algebra in F#. We can call Vector and Matrix factory methods, or we can use some convenient F# functions:

> let A = rand 3 3;;

val A : LinearAlgebra.DenseMatrix =
  3x3 DenseMatrix
[[0.0780,0.1007,0.6917]
 [0.5422,0.8726,0.5718]
 [0.3528,0.0631,0.0772]]

> let c = !![1.0..10.0];;
val c : LinearAlgebra.DenseVector =
  [1.0000,2.0000,3.0000,4.0000,5.0000,6.0000,7.0000,8.0000,9.0000,10.0000]

Notice the !! operator, which creates a vector from a list or sequence. We can access elements of vectors and matrices using the familiar F# syntax. Slices work too, as do integer sequences and predicates:

> A.[0,1];;
val it : float = 0.100694183
> A.[1,1] <- 99.0;;
val it : unit = ()
> A.[1..2,0..1];;
val it : Matrix = 2x2 DenseMatrix
[[0.5422,99.0000]
 [0.3528,0.0631]]
> c.[1..3];;
val it : Vector = [2.0000,3.0000,4.0000]
> c.[fun x -> sin x > 0.0];;
val it : Vector = [1.0000,2.0000,3.0000,7.0000,8.0000,9.0000]
> c.[[2;4;5]];;
val it : Vector = [3.0000,5.0000,6.0000]
> c.[[for i in 1..7 -> i % 3]];;
val it : Vector = [2.0000,3.0000,1.0000,2.0000,3.0000,1.0000,2.0000]

Let’s do some simple calculations:

> let b = Vector.Create(3, fun i -> 1.0 + (float i)**2.0);;
val b : LinearAlgebra.DenseVector = [1.0000,2.0000,5.0000]
> A*b;;
val it : LinearAlgebra.DenseVector = [3.7378,5.1463,0.8651]
> A.T*b;;
val it : Vector = [2.9263,2.1614,2.2214]
> 2.0 * b;;
val it : LinearAlgebra.DenseVector = [2.0000,4.0000,10.0000]
> 1.0 + b;;
val it : Vector = [2.0000,3.0000,6.0000]
> log (1.0 + b);;
val it : Vector = [0.6931,1.0986,1.7918]
> det A;;
val it : float = -0.1707208593
> inv A;;
val it : Matrix =
  3x3 DenseMatrix
[[-0.1835,-0.2101,3.1982]
 [-0.9361,1.3939,-1.9355]
 [1.6027,-0.1792,-0.0791]]
> let x = A.Solve(b);;
val x : LinearAlgebra.DenseVector = [15.3876,-7.8258,0.8488]

The last command solves the system of equations represented by the matrix A for the right-hand side b. We can also work with matrix decompositions. There are actually two ways. Let’s use the SVD as an example. First, we can get the factors of the decomposition.

> let (U,Σ,V) = A.svd();;

val Σ : LinearAlgebra.DiagonalMatrix =
  3x3 DiagonalMatrix
[[1.2980,0.0000,0.0000]
 [0.0000,0.5101,0.0000]
 [0.0000,0.0000,0.2578]]
val V : Matrix =
  3x3 DenseMatrix
[[-0.4492,0.4032,-0.7973]
 [-0.6411,0.4760,0.6020]
 [-0.6222,-0.7816,-0.0446]]
val U : Matrix =
  3x3 DenseMatrix
[[-0.4083,-0.9041,-0.1260]
 [-0.8928,0.3667,0.2616]
 [-0.1903,0.2193,-0.9569]]

Or, if we just want the singular values, we can get those as well:

> let σ = A.svd();;

val σ : LinearAlgebra.DenseVector = [1.2980,0.5101,0.2578]

The second way to work with decompositions is by creating decomposition objects. Let’s work with a non-square matrix this time:

> let C = rand 15 4;;

val C : LinearAlgebra.DenseMatrix =
  15x4 DenseMatrix
[[0.4198,0.8147,0.8530,0.5364]
 [0.9181,0.7706,0.3164,0.8773]
 [0.4768,0.1423,0.7419,0.6521]
 [0.2463,0.4579,0.3474,0.0311]
 [0.0432,0.6366,0.4928,0.2399]
 ...
 [0.3991,0.3426,0.4452,0.1276]
 [0.7867,0.3247,0.5256,0.1940]
 [0.6504,0.0943,0.1169,0.4266]
 [0.0436,0.6716,0.5230,0.4922]
 [0.5329,0.5422,0.2448,0.0547]]

> let d = (rand 15 1).GetColumn(0);;

val d : Vector =
  [0.7446,0.8430,0.6296,0.1210,0.2190,...,0.8281,0.6643,0.0355,0.7120,0.3582]

Here we see that the output is truncated: only the first and last five rows are shown. We can find the least squares solution to C*y = d using the singular value decomposition of C.
We’ll also compute the residuals and their norm:

> let svdC = C.GetSingularValueDecomposition();;
> let y = svdC.GetPseudoInverse() * d;;

val y : Vector = [0.3221,0.4458,0.3383,0.1298]

> let r = d - C * y;;

val r : Vector =
  [-0.1120,-0.0171,0.0770,-0.2840,-0.2765,...,0.3796,0.0632,-0.3110,0.1577,-0.1450]

> norm r;;
val it : float = 0.8214250504

Working with functions

You’d expect a functional language to work with functions. So let’s see what we can do.

The modules discussed in this section are not automatically opened when you load the F# compatibility assembly. The available modules are:

  • Special: special functions, such as Gamma functions, Bessel functions, etc.
  • Functional: numerical integration and differentiation, finding zeros, maxima and minima of functions.
  • Statistics: some functions that are useful when doing statistical analysis.
  • Random: random number streams.

These are only the modules with F# specific functions. The full functionality of the libraries is of course available from F# as well.

Here we’ll focus mainly on the second category. Special functions can be very handy, however, so let’s start with those:

> open Extreme.Numerics.FSharp.Special;;
> gamma 0.5;;
val it : float = 1.772453851
> BesselJ 0.0 1.0;;
val it : float = 0.7651976866
> let J0 = BesselJ 0.0;;

val J0 : (float -> float)

> J0 1.0;;
val it : float = 0.7651976866
>

Notice how we used partial application of the Bessel function to get a function that returns the Bessel function of order 0. We can do something similar in many other places, for example with binomial coefficients:

> let binom9 = binomial 9;;

val binom9 : (int -> float)

> [for i in 0..9 -> binom9 i];;
val it : float list =
  [1.0; 9.0; 36.0; 84.0; 126.0; 126.0; 84.0; 36.0; 9.0; 1.0]

Now to the functional stuff. Two of the most basic operations on functions are differentiation and integration. Differentiation is done using the d function. Integration is done using integrate1. Partial application is supported, so you can create a function that computes the numerical derivative at any value you give it.

> open Extreme.Numerics.FSharp.Functional;;
> let J1 = BesselJ 1.0;;

val J1 : (float -> float)

> d J0 0.5;;
val it : float = -0.2422684577
> J1 0.5;; // d J0 = -J1
val it : float = 0.2422684577
> let dJ0 = d J0;;

val dJ0 : (float -> float)

> dJ0 0.5;;
val it : float = -0.2422684577
> dJ0 1.5;;
val it : float = -0.5579365079

Integration works in a similar way. The bounds of the integration interval are supplied as a tuple. Infinite intervals are supported as well.

> integrate1 J1 (0.0,5.0);;
val it : float = 1.177596771
> J0 5.0 - J0 0.0;; // Should give - the previous result:
val it : float = -1.177596771
> let iJ1 = integrate1 J1;;

val iJ1 : (float * float -> float)

> iJ1 (0.0,5.0);;
val it : float = 1.177596771
> integrate1 (fun x -> exp (-x)) (0.0,infinity);;
val it : float = 1.0

Finding zeros of functions is easy. Let’s quickly find the first 5 zeros of the Bessel function J0:

> [for x0 in [2.0;5.0;8.0;11.0;14.0] -> findzero (fun x -> J0 x) x0];;
val it : float list =
  [2.404825558; 5.52007811; 8.653727913; 11.79153444; 14.93091771]

Finding the minimum or maximum of a function can be difficult because most software requires that you supply the gradient of the function. Although computing derivatives is high school calculus, it’s still very error prone. Automatic differentiation comes to the rescue here. We’ll use the SymbolicMath class, which contains all kinds of functions that take advantage of symbolic computations to obtain a solution.

One of the most famous optimization problems is the Rosenbrock function. We can find the minimum of this function in just one line of code:

> SymbolicMath.FindMinimum((fun x y -> (1.0 - x)**2.0 + 100.0*(y-x**2.0)**2.0), !![-1.0;-1.0]);;
val it : SolutionReport =
  Extreme.Mathematics.SolutionReport`1[Extreme.Mathematics.Vector]
    {ConvergenceTest = null;
     Error = 2.778930274e-09;
     EvaluationsNeeded = 74;
     IterationsNeeded = 26;
     Result = [1.0000,1.0000];
     Status = Converged;}
>

It’s time now for you to play. Download the trial version of the Extreme Optimization Numerical Libraries for .NET here and have fun!

Using Numerical Libraries from IronPython

Today, we have the pleasure of announcing the availability of a new IronPython interface library and over 50 samples of using our Extreme Optimization Numerical Libraries for .NET from IronPython.

Python is used more and more for numerical computing. It has always been possible to call into our .NET libraries from IronPython. However, IDE support was minimal, and some of the more convenient features of Python, like slicing arrays, were not available.

In January, Microsoft announced the availability of Python Tools for Visual Studio. This is a big step forward in IDEs for Python development on Windows.

Now, with our new IronPython interface library you can take advantage of the following integration features:

  • Create vectors and matrices from Python lists.
  • Set and get slices of vectors and matrices.
  • Integrate Python’s complex number type with our DoubleComplex type.
  • Use Python-style format specifiers.

If you want to dive right in, the download is here: IronPython Tools for Extreme Optimization Numerical Libraries for .NET.

Prerequisites

In order to use the IronPython interface library, you need the following:

Installation

To install the IronPython interface library, follow these steps:

  1. Make sure all the prerequisites are installed.
  2. Download the zip archive containing the IronPython interface library for the Extreme Optimization Numerical Libraries for .NET.
  3. Copy the Extreme.Numerics.IronPython27.dll file from the zip archive to the DLLs folder in the IronPython installation folder.
  4. Copy the IronPython folder from the zip archive to the QuickStart folder in the Extreme Optimization Numerical Libraries for .NET installation folder.

Getting Started

To use the interface library, import the numerics module:

IronPython 2.7.1 (2.7.0.40) on .NET 4.0.30319.239 
Type "help", "copyright", "credits" or "license" for more information. 
>>> import numerics 

The main types reside in the Extreme.Mathematics namespace, so it’s a good idea to import everything from it:

>>> from Extreme.Mathematics import *

You can then start using mathematical objects like vectors, and manipulate them:

>>> a = Vector([1,2,3,4,5]) 
>>> b = Vector.Create(5, lambda i: sqrt(i+1)) 
>>> b 
Vector([1,1.4142135623730952,1.7320508075688772,2,2.23606797749979]) 
>>> a+b 
Vector([2,3.4142135623730949,4.7320508075688767,6,7.23606797749979]) 
>>> from math import * 
>>> a.Apply(sin) 
Vector([0.8414709848078965,0.90929742682568171,0.14112000805986721,-0.7568024953079282,-0.95892427466313845])

You can use Python slicing syntax, including counting from the end:

>>> a[0] 
1.0 
>>> a[-2] 
4.0 
>>> a[-2:] 
Vector([4,5]) 
>>> a[1:4] 
Vector([2,3,4])

Slicing works on matrices as well:

>>> H = Matrix.Create(5,5, lambda i,j: 1.0 / (1+i+j)) 
>>> H 
Matrix([[1,0.5,0.33333333333333331,0.25,0.2] 
[0.5,0.33333333333333331,0.25,0.2,0.16666666666666666] 
[0.33333333333333331,0.25,0.2,0.16666666666666666,0.14285714285714285] 
[0.25,0.2,0.16666666666666666,0.14285714285714285,0.125] 
[0.2,0.16666666666666666,0.14285714285714285,0.125,0.11111111111111111]]) 
>>> H[1,1] 
0.33333333333333331 
>>> H[1,:] 
Vector([0.5,0.33333333333333331,0.25,0.2,0.16666666666666666]) 
>>> H[:,1] 
Vector([0.5,0.33333333333333331,0.25,0.2,0.16666666666666666]) 
>>> H[0:5:2,0:5:2] 
Matrix([[1,0.33333333333333331,0.2] 
[0.33333333333333331,0.2,0.14285714285714285] 
[0.2,0.14285714285714285,0.11111111111111111]])

Many linear algebra operations are supported, from the simple to the more complex:

>>> H*a 
Vector([5,3.55,2.8142857142857141,2.3464285714285715,2.0174603174603174]) 
>>> H.Solve(a) 
Vector([124.99999999997976,-2880.0000000002506,14490.000000002709,-24640.000000005733,13230.000000003382]) 
>>> svd = H.GetSingularValueDecomposition() 
>>> svd.SingularValues 
Vector([1.5670506910982314, 0.20853421861101323, 0.011407491623419797, 0.00030589804015118552, 3.2879287721734089E-06])

Sample programs

We’ve converted over 60 of our QuickStart samples to Python scripts. The samples folder contains a solution that contains all the sample files. To run an individual sample, find it in Solution Explorer and choose “Set As Startup File” from the context menu. You can then run it in the interactive window by pressing Shift+Alt+F5.

Accurate trigonometric functions for large arguments

This week we introduced two new features in the Extreme Optimization Numerical Libraries for .NET.

Trigonometric functions with large arguments

The .NET Framework implementation of the trigonometric functions, sine, cosine, and tangent, relies on the corresponding processor instruction. This gives extremely fast performance, but may not give fully accurate results. This is the case for even fairly small arguments.

For example, Math.Sin(1000000.0) returns -0.34999350217129177 while the correct value is -0.34999350217129295. So we’ve already lost at least 2 digits. And things just go downhill from there. At 10^12, we only get 8 good digits.

For even larger arguments, something quite unexpected happens. The result of Math.Sin(1e20) is… 1e20! The argument is returned unchanged! Not only is this a completely meaningless return value; it can also cause a calculation to fail if it relies on the fact that sine and cosine are bounded by -1 and +1.

To understand what is going on, we need to go back to the implementation.

Math.Sin, Math.Cos and Math.Tan rely on processor instructions like fsin and fcos. These instructions work by first reducing the angle to a much smaller value, and then using a table-based approach to get the final result. Moreover, according to the documentation, the argument must be strictly between –2^63 and 2^63. The return value is not defined for arguments outside this interval. This explains the complete breakdown for large arguments.

The explanation for the gradual loss of accuracy is more subtle. As I said, the computation is done by first reducing the argument to a smaller interval, typically (–π, π]. The argument x is rewritten in the form

x = n π + f

where n is an even integer, and f is a real number with magnitude less than π.

The first step is to divide the number by π. The quotient is rounded to the nearest even integer to give us n. We get the reduced value f by subtracting this number times π from x. Because of the periodicity of the trigonometric functions, the value of the function at x is the same as the value at f, which can be calculated accurately and efficiently.

Now, if you start with a value like 10^300, you’ll need to divide this number by π with at least 300 digits of precision, round the quotient to the nearest even number, and subtract π times this number from 10^300. To do this calculation accurately over the whole range, we would need to work with a value of π that is accurate to 1144 bits.

Even today, this is just a little bit beyond the capabilities of current hardware. Moreover, it could be considered wasteful to spend so much silicon on a calculation that is not overly common. Intel processors use just 66 bits. This leaves us with just 14 extra bits. The effects will begin to show with arguments that are larger than 2^16.
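To see why the precision of π is the limiting factor, here is a small, self-contained C# sketch of the naive reduction using only the double-precision value of π. This is emphatically not the library's algorithm; it only shows where the digits go.

using System;

double x = 1e12;
// Nearest even integer n such that n * pi is close to x:
double n = Math.Round(x / (2 * Math.PI)) * 2;
// The reduced argument. Math.PI carries only about 16 digits, so the rounding
// error in pi gets multiplied by n and ends up entirely in f.
double f = x - n * Math.PI;
Console.WriteLine(Math.Sin(f));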

Accurate argument reduction for trigonometric functions

So, there are two problems with the standard trigonometric functions in the .NET Framework:

  1. The accuracy decreases with increasing size of the argument, losing several digits for even modest sized arguments.
  2. When the argument is very large, the functions break down completely.

To address these two problems, we’ve added Sin, Cos and Tan methods to the Elementary class. These methods perform fully accurate argument reduction so the results are accurate even for huge values of the argument.

For example, the 256-digit floating-point number 6381956970095103 × 2^797 is very close to a multiple of π/2. In fact, it is so close that the first 61 binary digits after the period of the reduced value are 0. As expected, Math.Cos gives the completely meaningless result 5.31937264832654141671e+255. Elementary.Cos returns -4.6871659242546267E-19, which is almost exactly the same as what Wolfram|Alpha gives: -4.6871659242546276E-19. The relative error is about the size of the machine precision.
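For reference, the argument in that example can be constructed exactly in code, because both factors are exactly representable in double precision. (We assume here that the Elementary class lives in the Extreme.Mathematics namespace.)

// 6381956970095103 * 2^797; both factors are exact doubles, so x is exact.
double x = 6381956970095103.0 * Math.Pow(2.0, 797);
Console.WriteLine(Math.Cos(x));        // on the .NET Framework of the time: ~5.32e+255, the argument itself
Console.WriteLine(Elementary.Cos(x));  // approximately -4.687e-19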

What about performance? Thanks to some tricks with modular multiplication, it is possible to do the reduction in nearly constant time. Moreover, since the reduction only needs to be done for larger arguments, the overhead in most situations is minimal. Our benchmarks show a 4% overall slowdown when no reduction is needed. For smaller values, the operation can take close to twice as long, while in the worst case, for large arguments that require a full reduction, we see a 10x slowdown.

Transcendental functions for System.Decimal

Also new in this week’s update is the DecimalMath class, which contains implementations of all functions from System.Math for the decimal type. All trigonometric, hyperbolic, exponential and logarithmic functions are supported, as well as the constants e and π.

The calculations are accurate to the full 96 bit precision of the decimal type. This is useful for situations where double precision is not enough, but going with an arbitrary precision BigFloat would be overkill.
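Here is a hypothetical usage sketch; the member names below are assumptions based on the statement that the class mirrors System.Math.

// Assumed members mirroring System.Math, operating on decimal:
decimal root2 = DecimalMath.Sqrt(2m);
decimal e = DecimalMath.Exp(1m);
Console.WriteLine(root2);   // accurate to the full precision of the decimal type
Console.WriteLine(e);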

New feature: Wilcoxon-Mann-Whitney and Kruskal-Wallis Tests

We just released an update to our Extreme Optimization Numerical Libraries for .NET that adds some new classes. We’ve added two non-parametric tests: the Mann-Whitney and Kruskal-Wallis tests. These are used to test the hypothesis that two or more samples were drawn from the same distribution. The Mann-Whitney test is used for two samples; the Kruskal-Wallis test is used when there are more than two samples.

For both tests, the test statistic only depends on the ranks of the observations in the combined sample, and no assumption about the distribution of the populations is made. This is the meaning of the term non-parametric in this context.
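To make the rank idea concrete, here is a small, purely illustrative C# sketch (not the library's implementation) that computes the Wilcoxon rank-sum statistic for two samples, giving tied observations their average rank:

using System;
using System.Linq;

double[] x = { 1.1, 2.4, 2.4, 3.0 };
double[] y = { 1.9, 2.4, 3.6 };
Console.WriteLine(RankSum(x, y));   // 15: the sum of the first sample's ranks

static double RankSum(double[] first, double[] second)
{
    // Pool the samples, remembering which group each value came from.
    var pooled = first.Select(v => (value: v, group: 0))
                      .Concat(second.Select(v => (value: v, group: 1)))
                      .OrderBy(p => p.value)
                      .ToArray();

    // Assign ranks; tied values share the average of their ranks.
    var ranks = new double[pooled.Length];
    for (int i = 0; i < pooled.Length; )
    {
        int j = i;
        while (j < pooled.Length && pooled[j].value == pooled[i].value) j++;
        double averageRank = (i + j + 1) / 2.0;   // average of the 1-based ranks i+1..j
        for (int k = i; k < j; k++) ranks[k] = averageRank;
        i = j;
    }

    // The statistic: the total rank of the observations in the first sample.
    return Enumerable.Range(0, pooled.Length)
                     .Where(i => pooled[i].group == 0)
                     .Sum(i => ranks[i]);
}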

The Mann-Whitney test, sometimes also called the Wilcoxon-Mann-Whitney test or the Wilcoxon Rank Sum test, is often interpreted to test whether the median of the distributions are the same. Although a difference in median is the dominant differentiator if it is present, other factors such as the shape or the spread of the distributions may also be significant.

For relatively small sample sizes, and if no ties are present, we return an exact result for the Mann-Whitney test. For larger samples or when some observations have the same value, the common normal approximation is used.

The Kruskal-Wallis test is an extension of the Mann-Whitney test to more than two samples. We always use an approximation for the distribution. The most common approximation is through a Chi-square distribution. We chose to go with an approximation in terms of the beta distribution that is generally more reliable, especially for smaller sample sizes. For comparison with other statistical software, the chi-square p-value is also available.

We created some QuickStart samples that illustrate how to use the new functionality.

You can also view the documentation on non-parametric tests, or download the trial version.

Dynamic times two with the Dynamic Language Runtime

Microsoft today announced their latest addition to the .NET family: the Dynamic Language Runtime (DLR). As Jim Hugunin points out, it is based on the IronPython 1.0 codebase, but has been generalized so it can support other dynamic languages, including Visual Basic.

Now, the word ‘dynamic’ here is often misunderstood. Technically, the word dynamic refers to the type system. The .NET CLR is statically typed: every object has a well-defined type at compile time, and all method calls and property references are resolved at compile time. Virtual methods are somewhat of an in-between case, because which code is called depends on the runtime type, a type which may not even have existed when the original code was compiled. But still, every type that overrides a method must inherit from a specific parent class.

In dynamic languages, the type of variables, their methods and properties may be defined at runtime. You can create new types and add properties and methods to existing types. When a method is called in a dynamic language, the runtime looks at the object, looks for a method that matches, and calls it. If there is no matching method, a run-time error is raised.

Writing code in dynamic languages can be very quick, because there is rarely a need to specify type information. It’s also very common to use dynamic languages interactively. You can execute IronPython scripts, but there’s also a Console that hosts interactive IronPython sessions.

And this is where it gets confusing. Because leaving out type information and interactive environments come naturally to dynamic languages, these features are often thought of as properties of dynamic languages. They are not.

Ever heard of F#? It is a statically typed, compiled language created by Don Syme and others at Microsoft Research. It can be used to build libraries and end-user applications, much like C# and VB. But it also has an interactive console and eliminates the need for most type specifications through a smart use of type inference.

F# is not a dynamic language in the technical sense: it is statically typed. But because it has an interactive console and you rarely have to specify types, it is a dynamic language in the eyes of a lot of people. In fact, at the Lang.NET symposium hosted by Microsoft last August, people were asked what their favorite dynamic language is. Many answered with F#. And these were programming language designers and compiler writers!

Anyway, the point I wanted to make with this post is that the new Dynamic Language Runtime has great support for both the technically dynamic languages (dynamic types) and the perceived as dynamic features like interactive environments. I hope the distinction between these two aspects will be clarified in the future.

Latest Supercomputer Top 500

Last week, the latest edition of the list of the 500 fastest supercomputers was released. Two recent developments make this list interesting.

  1. The arrival of multicore processors. Even though their presence is still modest on the current list, expect their share to rise. Intel is targeting 32 cores on a chip by 2010.
  2. Microsoft made its entry on the scene with Windows Compute Cluster Server 2003, an enhanced version of Windows Server 2003 Enterprise tweaked for high-performance computing. The first (and so far the only) entry on the Top500 list is at the National Center for Supercomputing Applications at the University of Illinois. It will be interesting to see how this number grows in the coming years. At the very least, it will give some indication of the headway Microsoft is making in the HPC market.

Some trivia:

The #1 spot is still held by IBM’s BlueGene/L supercomputer at the Lawrence Livermore National Laboratory. At over 280TFlops, this monster is faster than an IBM PC with 8087 co-processor by a factor of roughly one billion. That’s right: it’s as fast as 1,000,000,000 original IBM PC’s!

The first Top500 list was published in June 1993. It’s interesting to note that one dual processor machine based on Intel’s latest dual-core processors would, at 34.9GFlops, take the #2 spot on that original list. Today’s average desktop would make it into the top 100…

Yahoo Finance miscalculates monthly average daily volume

Am I missing something here?

While testing some time series functionality in the new version of our statistics library, we came across a rather curious discrepancy. We used the historical quotes available from Yahoo Finance as a reference resource. As it turns out, comparison with our data appears to show that Yahoo miscalculates some summary statistics.

The error occurs on the Historical Prices page when using a monthly timeframe. Take the monthly data for 2005 for Microsoft’s stock (symbol MSFT). This shows an average daily volume for January of 79,642,818 shares. According to the help document, this is “the average daily volume for all trading days in the reported month.”

When we look at the daily prices for January 2005, we find 20 trading days. When we add up all the daily volumes, we find 1,521,414,280 shares changed hands that month. That should give an average daily volume of 76,070,714 shares, more than 3 million shares less than Yahoo’s figure. Why the difference?

A brief investigation showed that the difference can be explained because Yahoo includes the volume on the last trading day of the month twice. If we add the volume for the last trading day of January to the total, we get 1,592,856,376. Dividing by the number of trading days (20) gives 79,642,818.8.
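For the record, the arithmetic is easy to check in a few lines of C# (the figures are the ones quoted above):

using System;

long totalVolume = 1_521_414_280;            // sum of the 20 daily volumes for January 2005
int tradingDays = 20;
Console.WriteLine(totalVolume / (double)tradingDays);              // 76,070,714: our average

long lastDay = 1_592_856_376 - totalVolume;                        // 71,442,096: the last trading day's volume
Console.WriteLine((totalVolume + lastDay) / (double)tradingDays);  // 79,642,818.8: Yahoo's figure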

When we look at other months, we find the same pattern: Yahoo consistently overstates the average daily volume for the month by a few percent. Each time, the difference can be explained by the double inclusion of the volume of the last trading day in the total volume for the month.

Here’s a random sample: Research in Motion for June 2000. Yahoo gives an average daily volume of 4,262,160 shares. Our calculation shows an average of 3,870,800 shares, corresponding to a total volume of 77,416,000 for the month. Yahoo’s total corresponds to 85,243,200 shares. The difference of 7,827,200 shares is exactly the volume for June 30th.

The weekly average daily volume appears to be correct.

I find it hard to believe that a service that is as widely used as Yahoo would show such an error. So, my question to the experts in technical analysis out there is: What am I missing???