Using Formulas to Define Statistical Models

In R, and the S language from which it was derived, model formulas (or model formulae as the official documentation calls them) are the standard way of specifying the variables in a model. The patsy package brought the same convenient syntax to Python. With an update we released earlier today, you can now use formulas to specify models in the Extreme Optimization Numerical Libraries for .NET.

Formula basics

Formulas work for almost all types of models. Here, we will use regression as an example. A regression model is defined by one dependent variable and one or more independent variables. To specify a model with dependent variable y and independent variables x1 and x2, we can use the formula:

y ~ x1 + x2

In this formula, the ~ operator separates the dependent variable from the independent variables, and the + operator indicates that x1 and x2 should both be included in the model.

It is useful to think of formulas as operating on sets of variables. So the terms y, x1, and x2 each stand for a set of one variable. The + operator stands for the union of the set of terms on the left and the set of terms on the right. It follows that including the same variable twice results in the same formula:

y ~ x1 + x1 + x2 + x1 + x2

is exactly the same as

y ~ x1 + x2

Other operators exist that specify different combinations of variables; they are all really just set operations. We will get to those in a minute.

In many multivariate models like clustering models, factor analysis or principal component analysis (PCA), there are no dependent variables, only features. In this case, the first part of the formula is omitted.

The Intercept

The special term 1 stands for the intercept (constant) term in a model. Because nearly all linear models include an intercept term, one is included by default. So, again for a regression, the model

y ~ x1 + x2

is equivalent to

y ~ 1 + x1 + x2

To exclude the intercept term, there are two options. The - operator can be used. This operator corresponds to the set difference: it returns the set of terms in its left operand with the terms in its right operand removed:

y ~ x1 + x2 - 1

Alternatively, the special no-intercept term, 0, can be added as the first term in the model:

y ~ 0 + x1 + x2

Interactions

Models may include not just main effects but also interactions between terms. The interaction between two numerical variables is the element-wise product of the two variables. Interactions are specified by the : operator. For example:

y ~ x1 + x2 + x1:x2

It is very common to include both the main effects and the interactions in a model. It is so common that there is a special operator for this purpose: the product operator *. So, the above model can also be written more concisely as:

y ~ x1*x2

There is another shortcut operator: ** or ^ (both forms are equivalent). The right operand has to be an integer. It specifies how many times the left operand should be repeated in a product. So, the model

y ~ (x1 + x2)**3

is equivalent to

y ~ (x1 + x2)*(x1 + x2)*(x1 + x2)
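For example, following the set rules above (and the convention used by R and patsy), y ~ (x1 + x2)**2 expands to the main effects plus the two-way interaction: y ~ x1 + x2 + x1:x2.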

Categorical variables

Most models require variables to be numerical. To include categorical variables in a model, they have to be encoded as numerical variables. This is done automatically. In a formula like

y ~ x + a

where a is a categorical variable, the term a really means “the set of variables needed to fully represent the variable a in the model.”

The standard way to encode categorical variables is called dummy encoding or one-hot encoding. For each level of the categorical variable, a corresponding indicator variable is generated that has a 1 where the variable’s value equals that level, and a 0 otherwise. That’s not the full story, however.

The sum of all indicator variables for a categorical variable is a variable with ones everywhere. This is exactly the same as an intercept term. So, if an intercept term is present in a model, then for a categorical variable with 2 levels, you only need 1 indicator variable in the model. For a variable with 3 levels, you only need 2. Adding the last indicator variable does not add any more information. It is redundant.
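For example, suppose a has three levels: low, mid, and high. In a model with an intercept, such as y ~ 1 + a, the term a expands to just two indicator variables, say one for mid and one for high; the indicator for low is dropped because it would be redundant given the intercept. (Which level is dropped, and how the generated columns are named, is an implementation detail.)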

Handling redundancies, especially for models that include interactions between variables, can get somewhat complicated. Complicated enough for R to get it wrong sometimes. No worries: we’ve done it right.

The Catch All Term

Often, the data set used to fit a model contains just the variables that are used in the model. In this case, it is convenient to use the special . term. This term captures all the variables in the data set that have not yet appeared in the model. So, for example, if the data set contains the variables y, x1, x2, …, x17, but nothing else, then instead of writing out the full formula:

y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 + x14 + x15 + x16 + x17

you can simply write:

y ~ .

A code example

This example is adapted from the linear regression QuickStart sample. We’ll use C#, but the code is also available in Visual Basic and F#.

The example uses information about the test scores of 200 high school students. We’ll try to predict the science score in terms of some other variables: the scores in reading, math, and social studies, and the gender of the student.

var data = DataFrame.ReadCsv(@"..\..\..\..\Data\hsb2.csv");
var formula = "math + female + socst + read";
var model = new LinearRegressionModel(data, formula);
model.Compute();
Console.WriteLine(model.Summarize());

We start by reading the data from a CSV file into a data frame. We can then go straight to setting up the model using a formula, and computing it. We then print a summary of the results. This gives us the following output:

# observations: 200
R Squared:            0.4892        Log-likelihood:      -674.63
Adj. R Squared:       0.4788        AIC:                 1359.25
Standard error:       7.1482        BIC:                 1375.75
===================================================================
Source of variation   Sum.Sq.   D.o.f. Var.Est.  F-stat.  prob.
Regression            9543.72     4.00  2385.93  46.6948   0.0000
                      9963.78   195.00    51.10
                     19507.50   199.00    98.03
===================================================================
Parameter                Value   St.Err.       t   p(>t)
Constant              12.32529   3.19356    3.86  0.0002
math                   0.38931   0.07412    5.25  0.0000
female                -2.00976   1.02272   -1.97  0.0508
socst                  0.04984   0.06223    0.80  0.4241
read                   0.33530   0.07278    4.61  0.0000

We see that the math and reading scores were the most significant. In this group, girls scored 2 points lower than boys, on average.

Limitations

This is just our initial implementation of model formulas. Not everything works entirely as we would like, and there are some limitations.

Expressions are not supported.

This is the biggest limitation. Formulas in R and in patsy can contain expressions of variables. So, for example, you can have

log(y) ~ x1 + x2

In this example, the dependent variable is the natural logarithm of the variable y. When the expression contains operators that could be confused with formula operators, the special I() function is used to indicate that a term is to be evaluated as a mathematical expression:

y ~ x1 + I(x1+x2)

This formula specifies a model with 2 independent variables: x1 and the sum of x1 and x2.  Such expressions are not supported in this release.

Special purpose variables are not supported.

Some models have variables that play a special role. For example, Poisson regression models sometimes have an offset variable that is associated with a parameter fixed at 1. In R, the offset parameter is indicated by the expression offset(variableName). This is currently not supported.

The ‘nested in’ operators / and %in% are not supported.

These operators are not very common and a little confusing, so we decided to leave them out, at least for now.

The no-intercept term must appear first.

Adding the no-intercept term, 0, anywhere other than at the start of the independent variables or features has undefined behavior.

Conclusion

In only a few lines of code, we've loaded a data set, set up a model and printed a summary of the results. We believe that specifying models using formulas is a very useful feature. You can read more in the documentation. The QuickStart samples have also been updated.

Of course, there's nothing like trying it out yourself. Get the update here!

New Version of Numerical Libraries for .NET released

Last week, we released version 6.0 of our Extreme Optimization Numerical Libraries for .NET. We’ve introduced a number of great new features. Here are some highlights.

The linear algebra library has undergone a major overhaul:

  • All vector and matrix types are generic and support arbitrary element types.
  • Thanks to type inference, the element type rarely needs to be specified.
  • New mutability options let you create read-only vectors, and writable vectors with copy-on-write semantics.
  • We’ve filled in missing operations and consistently offer them in three variants: perform the operation in-place, return a new array, or return the result in an existing array.
  • The native linear algebra libraries now support Conditional Numerical Reproducibility.

The data analysis library has been greatly expanded:

  • Group rows by value or quantile, sliding and expanding windows, partitions, time-based resampling and 2D pivot tables.
  • Many new aggregator functions with efficient implementations for specific types of groupings like moving averages.
  • Matrices and vectors can act as data frames. The same functionality that is available for data frames, like data manipulation, aggregation, and automatic missing value handling, can also be used on vectors and matrices.
  • LINQ queries are supported on data frames, matrices, and vectors.
  • Create data frames from databases, arbitrary .NET objects, or import and export to CSV files. Support for other file formats, including R data files and SAS files, will be released in the coming weeks and months.
  • Many improvements including data transformations, lookup/join on nearest.

The statistics library has been integrated with the data analysis library:

  • We’ve removed the distinction between vectors and statistical variables. Everything is now a vector.
  • Covariance matrices, parameter vectors and similar objects are automatically labeled with variable names.
  • Residuals, predictions, and similar vectors automatically get the observations’ labels.
  • Categorical variables are automatically expanded to indicator variables as needed.

We’ve also greatly improved the experience for interactive computing. Many objects have pretty printers that return descriptions suitable for an interactive environment, for example:

  • Data frames, vectors, and matrices limit the displayed rows and columns to fit the screen.
  • Statistical models give a summary of parameter estimates and diagnostic information.
  • Statistical tests give a brief description of the test results.

There’s more coming in that area.

Unfortunately, all this goodness did require us to make some breaking changes. Most drastic is the move to generic vectors and matrices, which caused changes in every area of the library. Fortunately, the required changes are not too extensive: just adding the generic type argument in declarations and some constructor calls takes care of most issues. The data analysis library has been moved to a different namespace, but otherwise the changes are minimal. Code using the statistics library requires considerably more changes, but we’re working to make that transition easier with a small ‘backwards compatibility’ library that we’ll be releasing soon.
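To give an idea of what this looks like in practice, here is a minimal sketch of the kind of change involved, using the factory methods that appear elsewhere in these posts (your own member names may differ):

// Before (5.x):
Vector b = Vector.Create(3, i => 1.0 + i);
// After (6.0), the element type is added as a generic type argument:
Vector<double> b2 = Vector<double>.Create(3, i => 1.0 + i);
// Thanks to type inference, 'var' often removes the need to spell out the type:
var b3 = Vector<double>.Create(3, i => 1.0 + i);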

The new version installs side-by-side with earlier versions, so your existing code will continue to work.

In the coming weeks, I’ll be posting in more detail about different aspects of the new version. Stay tuned!

In the meantime, you can give the new version a try or upgrade now.

CUDA Support Preview

Within the next couple of months we’ll be releasing an update to our Numerical Libraries for .NET that includes support for GPU-accelerated calculations using NVIDIA’s CUDA libraries.

We wanted to make it very easy to offload calculations to the GPU. This means the number of code changes should be minimized. At the same time, we wanted to make optimal use of the GPU. One of the most time-consuming parts of GPU computing is transferring data between CPU and GPU memory. In order to take full advantage of the processing power, data should be kept on the GPU as much as possible.

To support this functionality, we’ve introduced three new types: DistributedProvider, DistributedVector and DistributedMatrix. DistributedProvider is an abstract class that defines core functionality of the distributed computing platform such as memory management and computational kernels. At the moment, it has one concrete implementation: CudaProvider. We expect to add OpenCLProvider and possibly others (Xeon Phi, C++/AMP, MPI…) in the future.

DistributedVector and DistributedMatrix represent distributed (i.e. GPU-based) arrays that may or may not have a local copy in CPU memory. Data is copied to and from GPU memory only as needed. The result of, say, multiplying two matrices in GPU memory is a third matrix which also resides on the GPU. If further calculations are done using this matrix, its data is kept on the GPU. It is never copied to CPU memory, unless individual elements are accessed, or an operation is performed that is not supported on the GPU. In the latter case, the data is copied to local memory and the CPU-based implementation is used. We also fall back on the CPU-based code automatically if the GPU runs out of memory, or if there is no CUDA GPU at all.

So what does the code look like? Well, here is a small example that runs 100 iterations of the power method for computing the largest eigenvalue of a matrix using various configurations. The core computation is a matrix-vector product. We use the same code for the CPU-based and the GPU-based calculation.

static double DoPower(Matrix<double> A, Vector<double> b) {
    const int imax = 100; // maximum number of iterations
    double λ = 0;
    b.SetValue(1.0);
    Vector<double> temp = null;
    for (int i = 0; i < imax; i++) {
        temp = Matrix<double>.MultiplyInto(A, b, temp);
        // Note that temp will exist only on the GPU
        // if A and b are GPU arrays
        var λ1 = temp.Norm();
        Vector<double>.MultiplyInto(1 / λ1, temp, b);
        if (Math.Abs(λ1 - λ) < 1e-5) break;
        λ = λ1;
    }
    // In case this is a GPU array: free GPU memory.
    temp.Dispose();
    return λ;
}

// CPU:
l = DoPower(A, b);

// GPU:
l = DoPower(A.MakeDistributed(), b.MakeDistributed());

As you can see, the only difference between the CPU and GPU versions is that we called MakeDistributed on the input arguments.

In our benchmark we added a third variation that touches the matrix on the CPU during each iteration. This forces the matrix to be copied to the GPU in each iteration, which is similar to what happens with naive offloading. Here are the results:

Size: 100
MKL (CPU only):        4.107 ms (lambda=6.07967352075151)
CUDA (keep on GPU):   25.101 ms (lambda=6.07967352075151)
CUDA (offloaded):     29.593 ms (lambda=6.07967352075151)

Size: 500
MKL (CPU only):       42.116 ms (lambda=13.3132987677261)
CUDA (keep on GPU):   30.376 ms (lambda=13.3132987677261)
CUDA (offloaded):     94.250 ms (lambda=13.3132987677261)

Size: 1000
MKL (CPU only):      171.170 ms (lambda=18.754878830699)
CUDA (keep on GPU):   35.196 ms (lambda=18.754878830699)
CUDA (offloaded):    276.329 ms (lambda=18.754878830699)

Size: 5000
MKL (CPU only):    4397.868 ms (lambda=41.3752599052634)
CUDA (keep on GPU): 282.907 ms (lambda=41.3752599052635)
CUDA (offloaded):  5962.417 ms (lambda=41.3752599052635)

This is on a GPGPU-poor GTX 680, with 1/24 double-precision capacity, compared to 1/2 for Titan/Kepler cards, and using Intel’s Math Kernel Library version 11. It clearly shows that using our implementation, the GPU version is competitive for moderate sized matrices (n=500) and really charges ahead for larger problems, while the simple offloading technique never quite catches up.

Data Frame Library Preview

One of our core objectives in creating our Numerical Libraries for .NET was to try to close the gap between the exploratory phase of development and the implementation phase, when the results are incorporated into applications. We wanted to address the needs of one of the fastest growing categories of users: data scientists.

Several tools are available already, but few come close to fulfilling our objectives. R is the most widely used platform for data analysis. Unfortunately the interaction with .NET is painful at best. For Python users, pandas is an excellent choice. It has been around for a few years and in that time has really matured in terms of API and performance. But of course, it’s Python only.

Blue Mountain Capital recently released Deedle, an open source library for exploratory data analysis written in F# by the brilliant Tomas Petricek and others. The library is fairly complete and has a nice API, especially for F# users. Using F#’s interactive REPL with Deedle, it is truly possible to do interactive exploratory data analysis with .NET.

Although a good effort has been made to make the library easy to use from other languages like C# and VB.NET, it is still more awkward than it could be. However, as we’ll see, the biggest problem with Deedle is performance.

Our own library for exploratory data analysis, which is well on its way to completion, aims to be just as convenient while maintaining a high level of performance comparable to, and sometimes exceeding, that of pandas. I wanted to give you a preview of what’s coming.

Data Frame basics

The basic object in data analysis is a data frame. A data frame looks a lot like a database table. It is a two-dimensional table where the values within a column must all be of the same type, but different columns can have different types. It also has a row index and a column index. The column index is used to select specific columns, usually with strings as keys.

Similarly, the row index can be used to select rows, but it also enables automatic data alignment. Imagine two time series, where some dates that are present in one series may not be present in another. If you want to compute the difference between the values in the two series, you want to make sure that you are comparing values for the same date. Here’s a quick example:

var s1 = Vector<double>.Create(
    new double[] { 1, 2, 3, 4 },
    new string[] { "a", "b", "c", "d" });
var s2 = Vector<double>.Create(
    new double[] { 40, 50, 10, 30 },
    new string[] { "d", "e", "a", "c" });
Console.WriteLine(s1);
Console.WriteLine(s2);
Console.WriteLine(s1 + s2);

We first create two vectors indexed using some strings. You’ll notice that the second vector has an extra key (e), and another key (b) is missing. If we add the two vectors, here is what we get:

a: 11
b: -
c: 33
d: 44
e: -

The result has a value for all five keys. It has added values with corresponding keys correctly. Where a key was present in only one vector, a missing value was returned.

All vectors in the Extreme Optimization Numerical Libraries for .NET may have an index, and all matrices may have a row and a column index. Any operations involving indexed arrays will automatically align the data on the indexes.

Indexes on vectors and matrices are weakly typed in that the type of the keys is not part of the type definition. The keys are stored as strongly typed collections, of course.

A data frame is different from an indexed matrix in two ways. First, not all elements must be of the same type. Different columns can have different element types. Second, the indexes on a data frame are strongly typed, while the columns are weakly typed. You can easily access rows and columns by key, but if you want a strongly typed column, you need to do a tiny bit of extra work:

var df = DataFrame.Create<string, string>(
    new string[] { "First", "Second" },
    new Vector<double>[] { s1, s2 });
Console.WriteLine(df);

var s3 = df["First"].As<double>() + 10.0;
Console.WriteLine(s3);

This creates a data frame using the two vectors we used earlier.  We then get the first column, get it as a double, and add 10 to it. The output:

df:                 s3:
   First Second
a: 1     10         a: 11
b: 2     -          b: 21
c: 3     30         c: 31
d: 4     40         d: 41
e: -     50         e: -

Notice that the row index on the data frame once again contains all keys in the two indexes, and missing values have been inserted in a vector where the key was not present.

Titanic survivor analysis in 20 lines of C#

Let’s do something more complicated. The Deedle home page starts with an example that computes the percentages of people who survived the Titanic disaster grouped by gender. The code shown is currently 20 lines. Using our library, the C# code is also 20 lines, and would look like this:

var titanic = DataFrame.ReadCsv(filename);
// Compute pivot table of counts, grouped by Sex and Survived:
var counts = titanic
    .PivotBy<string, int>("Sex", "Survived")
    .Counts()
    .WithColumnIndex(new string[] { "Died", "Survived" });

// Add a column that contains the total count:
counts["Total"] = counts["Died"].As<double>() + counts["Survived"].As<double>();

// Create the data frame that contains the percentages:
var percentages = DataFrame.Create<string, string>(
    new Dictionary<string, object>() {
        { "Died (%)", 100 * Vector<double>.ElementwiseDivide(
             counts["Died"].As<double>(),
             counts["Total"].As<double>()) },
        { "Survived (%)", 100 * Vector<double>.ElementwiseDivide(
             counts["Survived"].As<double>(),
             counts["Total"].As<double>()) }
    });

After reading the data into a DataFrame from a CSV file, we can compute a pivot table that shows the count for each combination of Sex and Survived. We then add a column that computes the total number of people for each gender. Finally, we construct a new data frame by dividing the first two columns by the total column.

F# has somewhat more succinct syntax, thanks mainly to its better type inference and support for custom operators. We can do the same thing in just 14 lines:

let titanic = DataFrame.ReadCsv(filename)
// Compute pivot table of counts, grouped by Sex and Survived:
let counts =
    titanic.PivotBy<string, int>("Sex", "Survived")
         .Counts()
         .WithColumnIndex([| "Died"; "Survived" |]);

// Add a column that contains the total count:
counts?Total <- counts?Died + counts?Survived

// Create the data frame that contains the percentages:
DataFrame.Create<string, string>([
    ("Died (%)" , 100.0 * counts?Died ./ counts?Total ) ;
    ("Survived (%)" , 100.0 * counts?Survived ./ counts?Total )])

Down the road we will be adding some more F# extras.

Comparing Deedle and Extreme Optimization Performance

So, how does this code perform? Well, we added some timing code, making sure to ‘warm up’ the code so we’re only testing the actual running time and not time spent JIT’ing the code. Here is what we found on an older i7 930:

Platform: x86
Example from Deedle home page (100 iterations)
Extreme Optimization:  11.137 ms (0.1114 ms/iteration)
Deedle:              5395.866 ms (53.9587 ms/iteration)
Ratio: 484.49

Platform: x64
Example from Deedle home page (100 iterations)
Extreme Optimization:  11.049 ms (0.1105 ms/iteration)
Deedle:              8208.071 ms (82.0807 ms/iteration)
Ratio: 742.91

That’s right: our implementation is around 500 times faster than Deedle. We were as surprised as anyone by these results. You just don’t get these kinds of performance differences very often. So we looked a bit further. We summarized our findings in a chart:

[Chart: running times (ms) of the Extreme Optimization library vs. Deedle across the benchmarks]

The chart compares running times (in ms) of 9 benchmarks using the Extreme Optimization library and Deedle running as both x86 and x64. Note that we had to use a logarithmic scale for the running time!

The conclusion: the Extreme Optimization library is 1 to 3 orders of magnitude faster than Deedle.

A couple of points on why these results aren’t entirely fair. First, the two versions don’t always do the same thing. For example: in the time series benchmark, we take advantage of the fact that the indexes are sorted to speed up the alignment. Verifying that the index is sorted is only done once. There are no doubt other cases where our code incurs a cost only once. Second, we use some specific optimizations that could be implemented in Deedle with relative ease. For example: we use an incremental method to compute the moving average, whereas Deedle re-computes the average for each window from scratch.
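To illustrate the kind of optimization involved, here is a generic sketch of an incremental moving average (this shows the general technique only, not the library’s actual code):

using System;

static class MovingAverageSketch
{
    // Incremental moving average: O(n) total work, versus O(n × window)
    // when every window is re-summed from scratch.
    public static double[] MovingAverage(double[] values, int window)
    {
        var result = new double[values.Length];
        double sum = 0.0;
        for (int i = 0; i < values.Length; i++)
        {
            sum += values[i];                            // value entering the window
            if (i >= window) sum -= values[i - window];  // value leaving the window
            result[i] = sum / Math.Min(i + 1, window);
        }
        return result;
    }
}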

Still, a difference of 1-3 orders of magnitude is huge, and it begs for an explanation. I believe it’s rather simple, actually:

  • Performance was not a priority in the development of Deedle, especially in this initial release. (We used the first official release version, 0.9.12.) We expect performance to improve as the library matures. Still, 3 orders of magnitude will be hard to make up. Which brings me to my other point:
  • Sometimes, the F# language makes it a little too easy to write compact, elegant code that is very inefficient. For example: Types like tuples and discriminated unions are very convenient, but they are reference types, and so they have consequences for memory locality, cache efficiency, garbage collections…

Summary

Our upcoming data frame library will provide data scientists with a convenient way of interactively working with data sets with millions of rows. Compared to the only other library of its kind, Deedle, performance is currently 1 to 3 orders of magnitude better. A beta version of the library will be published soon. Contact me if you want to participate.

Using Numerical Libraries for .NET from F#

With the release of version 5.0 of our Numerical Libraries for .NET, we’ve also considerably improved our support for the F# language. In this article I want to walk you through setting up F# Interactive to use the Extreme Optimization Numerical Libraries for .NET, and show off some of the new features. This tutorial will focus on F# 3.0 in Visual Studio 2012. Most of the functionality also works with F# 2.0 in Visual Studio 2010.

Setting up F# Interactive

The first step is to bring the Extreme Optimization Numerical Libraries for .NET assemblies into the F# Interactive environment:

#I @"c:\Program Files\Extreme Optimization\Numerical Libraries for .NET\bin\Net40\";;
#r "Extreme.Numerics.Net40.dll";;
#r "Extreme.Numerics.FSharp30.Net40.dll";;

open Extreme.Mathematics;;

fsi.AddPrinter(fun (A : Extreme.Mathematics.ISummarizable) -> A.Summarize());;

We do this by first adding the path to the Extreme Optimization binaries to the DLL search path using the #I command. (Note: the path depends on the installation folder and may be different on your machine.) We then reference the main assembly Extreme.Numerics.Net40 and the F#-specific assembly Extreme.Numerics.FSharp30.Net40 using the #r command. Finally, we tell fsi to use a pretty printer for objects that implement the ISummarizable interface. This will print out vectors and matrices in a user-friendly form.

Working with vectors and matrices

So let’s get our hands dirty right away. When we referenced the Extreme.Numerics.FSharp30.Net40 dll, this automatically opened several modules that make it easier to do linear algebra in F#. We can call Vector and Matrix factory methods, or we can use some convenient F# functions:

> let A = rand 3 3;;

val A : LinearAlgebra.DenseMatrix =
  3x3 DenseMatrix
[[0.0780,0.1007,0.6917]
 [0.5422,0.8726,0.5718]
 [0.3528,0.0631,0.0772]]

> let c = !![1.0..10.0];;
val c : LinearAlgebra.DenseVector =
  [1.0000,2.0000,3.0000,4.0000,5.0000,6.0000,7.0000,8.0000,9.0000,10.0000]

Notice the !! operator, which creates a vector from a list or sequence. We can access and modify elements of vectors and matrices using the familiar F# syntax. Slices work too, as do integer sequences and predicates:

> A.[0,1];;
val it : float = 0.100694183
> A.[1,1] <- 99.0;;
> A.[1..2,0..1];;
val it : Matrix = 2x2 DenseMatrix
[[0.5422,99.0000]
 [0.3528,0.0631]]
> c.[1..3];;
val it : Vector = [2.0000,3.0000,4.0000]
> c.[fun x -> sin x > 0.0];;
val it : Vector = [1.0000,2.0000,3.0000,7.0000,8.0000,9.0000]
> c.[[2;4;5]];;
val it : Vector = [3.0000,5.0000,6.0000]
> c.[[for i in 1..7 -> i % 3]];;
val it : Vector = [2.0000,3.0000,1.0000,2.0000,3.0000,1.0000,2.0000]

Let’s do some simple calculations:

> let b = Vector.Create(3, fun i -> 1.0 + (float i)**2.0);;
val b : LinearAlgebra.DenseVector = [1.0000,2.0000,5.0000]
> A*b;;
val it : LinearAlgebra.DenseVector = [3.7378,5.1463,0.8651]
> A.T*b;;
val it : Vector = [2.9263,2.1614,2.2214]
> 2.0 * b;;
val it : LinearAlgebra.DenseVector = [2.0000,4.0000,10.0000]
> 1.0 + b;;
val it : Vector = [2.0000,3.0000,6.0000]
> log (1.0 + b);;
val it : Vector = [0.6931,1.0986,1.7918]
> det A;;
val it : float = -0.1707208593
> inv A;;
val it : Matrix =
  3x3 DenseMatrix
[[-0.1835,-0.2101,3.1982]
 [-0.9361,1.3939,-1.9355]
 [1.6027,-0.1792,-0.0791]]
> let x = A.Solve(b);;
val x : LinearAlgebra.DenseVector = [15.3876,-7.8258,0.8488]

The last command solved the system of equations with coefficient matrix A for the right-hand side b. We can also work with matrix decompositions. There are actually two ways. Let’s use the SVD as an example. First, we can get the factors of the decomposition.

> let (U,Σ,V) = A.svd();;

val Σ : LinearAlgebra.DiagonalMatrix =
  3x3 DiagonalMatrix
[[1.2980,0.0000,0.0000]
 [0.0000,0.5101,0.0000]
 [0.0000,0.0000,0.2578]]
val V : Matrix =
  3x3 DenseMatrix
[[-0.4492,0.4032,-0.7973]
 [-0.6411,0.4760,0.6020]
 [-0.6222,-0.7816,-0.0446]]
val U : Matrix =
  3x3 DenseMatrix
[[-0.4083,-0.9041,-0.1260]
 [-0.8928,0.3667,0.2616]
 [-0.1903,0.2193,-0.9569]]

Or, if we just want the singular values, we can get those as well:

> let σ = A.svd();;

val σ : LinearAlgebra.DenseVector = [1.2980,0.5101,0.2578]

The second way to work with decompositions is by creating decomposition objects. Let’s work with a non-square matrix this time:

> let C = rand 15 4;;

val C : LinearAlgebra.DenseMatrix =
  15x4 DenseMatrix
[[0.4198,0.8147,0.8530,0.5364]
 [0.9181,0.7706,0.3164,0.8773]
 [0.4768,0.1423,0.7419,0.6521]
 [0.2463,0.4579,0.3474,0.0311]
 [0.0432,0.6366,0.4928,0.2399]
 ...
 [0.3991,0.3426,0.4452,0.1276]
 [0.7867,0.3247,0.5256,0.1940]
 [0.6504,0.0943,0.1169,0.4266]
 [0.0436,0.6716,0.5230,0.4922]
 [0.5329,0.5422,0.2448,0.0547]]

> let d = (rand 15 1).GetColumn(0);;

val d : Vector =
  [0.7446,0.8430,0.6296,0.1210,0.2190,...,0.8281,0.6643,0.0355,0.7120,0.3582]

Here we see that the output is truncated, and only the first and last five rows are shown. We can find the least squares solution to C*y = d by creating the singular value decomposition object and using its pseudo-inverse.
We’ll also compute the residuals and their norm:

> let svdC = C.GetSingularValueDecomposition();;
> let y = svdC.GetPseudoInverse() * d;;

val y : Vector = [0.3221,0.4458,0.3383,0.1298]

> let r = d - C * y;;

val r : Vector =
  [-0.1120,-0.0171,0.0770,-0.2840,-0.2765,...,0.3796,0.0632,-0.3110,0.1577,-0.1450]

> norm r;;
val it : float = 0.8214250504

Working with functions

You’d expect a functional language to work with functions. So let’s see what we can do.

The modules discussed in this section are not automatically opened when you load the F# compatibility assembly. The available modules are:

  • Special: special functions, such as Gamma functions, Bessel functions, etc.
  • Functional: numerical integration and differentiation, finding zeros, maxima and minima of functions.
  • Statistics: some functions that are useful when doing statistical analysis.
  • Random: random number streams.

These are only the modules with F# specific functions. The full functionality of the libraries is of course available from F# as well.

Here we’ll focus mainly on the second category. Special functions can be very handy, however, so let’s start with those:

> open Extreme.Numerics.FSharp.Special;;
> gamma 0.5;;
val it : float = 1.772453851
> BesselJ 0.0 1.0;;
val it : float = 0.7651976866
> let J0 = BesselJ 0.0;;

val J0 : (float -> float)

> J0 1.0;;
val it : float = 0.7651976866
>

Notice how we used partial application of the Bessel function to get a function that returns the Bessel function of order 0. We can do something similar in many other places, for example with binomial coefficients:

> let binom9 = binomial 9;;

val binom9 : (int -> float)

> [for i in 0..9 -> binom9 i];;
val it : float list =
  [1.0; 9.0; 36.0; 84.0; 126.0; 126.0; 84.0; 36.0; 9.0; 1.0]

Now to the functional stuff. Two of the most basic operations on functions are differentiation and integration. Differentiation is done using the d function. Integration is done using integrate1. Partial application is supported, so you can create a function that computes the numerical derivative at any value you give it.

> open Extreme.Numerics.FSharp.Functional;;
> let J1 = BesselJ 1.0;;

val J1 : (float -> float)

> d J0 0.5;;
val it : float = -0.2422684577
> J1 0.5;; // d J0 = -J1
val it : float = 0.2422684577
> let dJ0 = d J0;;

val dJ0 : (float -> float)

> dJ0 0.5;;
val it : float = -0.2422684577
> dJ0 1.5;;
val it : float = -0.5579365079

Integration works in a similar way. The bounds of the integration interval are supplied as a tuple. Infinite intervals are supported as well.

> integrate1 J1 (0.0,5.0);;
val it : float = 1.177596771
> J0 5.0 - J0 0.0;; // Should give minus the previous result:
val it : float = -1.177596771
> let iJ1 = integrate1 J1;;

val iJ1 : (float * float -> float)

> iJ1 (0.0,5.0);;
val it : float = 1.177596771
> integrate1 (fun x -> exp (-x)) (0.0,infinity);;
val it : float = 1.0

Finding zeros of functions is easy. Let’s quickly find the first 5 zeros of the Bessel function J0:

> [for x0 in [2.0;5.0;8.0;11.0;14.0] -> findzero (fun x -> J0 x) x0];;
val it : float list =
  [2.404825558; 5.52007811; 8.653727913; 11.79153444; 14.93091771]

Finding the minimum or maximum of a function can be difficult because most software requires that you supply the gradient of the function. Although computing derivatives is high school calculus, it’s still very error prone. Automatic differentiation comes to the rescue here. We’ll use the SymbolicMath class, which contains all kinds of functions that take advantage of symbolic computations to obtain a solution.

One of the most famous optimization problems is the Rosenbrock function. We can find the minimum of this function in just one line of code:

> SymbolicMath.FindMinimum((fun x y -> (1.0 - x)**2.0 + 100.0*(y-x**2.0)**2.0), !![-1.0;-1.0]);;
val it : SolutionReport =
  Extreme.Mathematics.SolutionReport`1[Extreme.Mathematics.Vector]
    {ConvergenceTest = null;
     Error = 2.778930274e-09;
     EvaluationsNeeded = 74;
     IterationsNeeded = 26;
     Result = [1.0000,1.0000];
     Status = Converged;}
>

It’s time now for you to play. Download the trial version of the Extreme Optimization Numerical Libraries for .NET here and have fun!

Using Numerical Libraries from IronPython

Today, we have the pleasure of announcing the availability of a new IronPython interface library and over 50 samples of using our Extreme Optimization Numerical Libraries for .NET from IronPython.

Python is used more and more for numerical computing. It has always been possible to call into our libraries from IronPython. However, IDE support was minimal and some of the more convenient features of Python, like slicing arrays, were not available.

In January, Microsoft announced the availability of Python Tools for Visual Studio. This is a big step forward in IDEs for Python development on Windows.

Now, with our new IronPython interface library you can take advantage of the following integration features:

  • Create vectors and matrices from Python lists.
  • Set and get slices of vectors and matrices.
  • Integrate Python’s complex number type with our DoubleComplex type.
  • Use Python-style format specifiers.

If you want to dive right in, the download is here: IronPython Tools for Extreme Optimization Numerical Libraries for .NET.

Prerequisites

In order to use the IronPython interface library, you need the following:

Installation

To install the IronPython interface library, follow these steps:

  1. Make sure all the prerequisites are installed.
  2. Download the zip archive containing the IronPython interface library for the Extreme Optimization Numerical Libraries for .NET.
  3. Copy the Extreme.Numerics.IronPython27.dll file from the zip archive to the DLLs folder in the IronPython installation folder.
  4. Copy the IronPython folder from the zip archive to the QuickStart folder in the Extreme Optimization Numerical Libraries for .NET installation folder.

Getting Started

To use the interface library, import the numerics module:

IronPython 2.7.1 (2.7.0.40) on .NET 4.0.30319.239 
Type "help", "copyright", "credits" or "license" for more information. 
>>> import numerics 

The main types reside in the Extreme.Mathematics namespace, so it’s a good idea to import everything from it:

>>> from Extreme.Mathematics import *

You can then start using mathematical objects like vectors, and manipulate them:

>>> a = Vector([1,2,3,4,5]) 
>>> b = Vector.Create(5, lambda i: sqrt(i+1)) 
>>> b 
Vector([1,1.4142135623730952,1.7320508075688772,2,2.23606797749979]) 
>>> a+b 
Vector([2,3.4142135623730949,4.7320508075688767,6,7.23606797749979]) 
>>> from math import * 
>>> a.Apply(sin) 
Vector([0.8414709848078965,0.90929742682568171,0.14112000805986721,-0.7568024953079282,-0.95892427466313845])

You can use Python slicing syntax, including counting from the end:

>>> a[0] 
1.0 
>>> a[-2] 
4.0 
>>> a[-2:] 
Vector([4,5]) 
>>> a[1:4] 
Vector([2,3,4])

Slicing works on matrices as well:

>>> H = Matrix.Create(5,5, lambda i,j: 1.0 / (1+i+j)) 
>>> H 
Matrix([[1,0.5,0.33333333333333331,0.25,0.2] 
[0.5,0.33333333333333331,0.25,0.2,0.16666666666666666] 
[0.33333333333333331,0.25,0.2,0.16666666666666666,0.14285714285714285] 
[0.25,0.2,0.16666666666666666,0.14285714285714285,0.125] 
[0.2,0.16666666666666666,0.14285714285714285,0.125,0.11111111111111111]]) 
>>> H[1,1] 
0.33333333333333331 
>>> H[1,:] 
Vector([0.5,0.33333333333333331,0.25,0.2,0.16666666666666666]) 
>>> H[:,1] 
Vector([0.5,0.33333333333333331,0.25,0.2,0.16666666666666666]) 
>>> H[0:5:2,0:5:2] 
Matrix([[1,0.33333333333333331,0.2] 
[0.33333333333333331,0.2,0.14285714285714285] 
[0.2,0.14285714285714285,0.11111111111111111]])

Many linear algebra operations are supported, from the simple to the more complex:

>>> H*a 
Vector([5,3.55,2.8142857142857141,2.3464285714285715,2.0174603174603174]) 
>>> H.Solve(a) 
Vector([124.99999999997976,-2880.0000000002506,14490.000000002709,-24640.000000005733,13230.000000003382]) 
>>> svd = H.GetSingularValueDecomposition() 
>>> svd.SingularValues 
Vector([1.5670506910982314, 0.20853421861101323, 0.011407491623419797, 0.00030589804015118552, 3.2879287721734089E-06])

Sample programs

We’ve converted over 60 of our QuickStart samples to Python scripts. The samples folder contains a solution that contains all the sample files. To run an individual sample, find it in Solution Explorer and choose “Set As Startup File” from the context menu. You can then run it in the interactive window by pressing Shift+Alt+F5.

Using Quadratic Programming for Portfolio Optimization

This week’s update of the Extreme Optimization Numerical Libraries for .NET includes the ability to solve quadratic programming (QP) problems. These are optimization problems where the objective function is a quadratic function and the solution is subject to linear constraints.

Our QP solver uses an active set method, and can be used to solve programs with thousands of variables. Quadratic programming has many applications in finance, economics, agriculture, and other areas. It also serves as the core of some methods for nonlinear programming such as Sequential Quadratic Programming (SQP).

We created some QuickStart samples that demonstrate how to use the new functionality.

In this post, I’d like to illustrate an application in finance: portfolio optimization.

Say we have a set of assets. We know the historical average return and variances of each asset. We also know the correlations between the assets. With this information, we can look for the combination of assets that best meets our objectives. In this case, our goal will be to minimize the risk while achieving a minimal return. Now let’s see how we can model this.

The unknowns in our model will be the amount to be invested in each asset, which we’ll denote by xi. All values together form the vector x. We will assume that all xi ≥ 0.

We can use the variance of the value of the portfolio as a measure for the risk. A larger variance means that the probability of a large loss also increases, which is what we want to avoid. If we denote the variance-covariance matrix of the assets by R, then the variance of the asset value is xTRx. This is what we want to minimize.

Now let’s look at the constraints. We obviously have a budget, so the sum of all amounts must be less than or equal to our budget. Say our budget is $10,000, then we must have Σ xi ≤ 10,000. We also want a minimum expected return on our investment, say 10%. If ri is the return associated with asset i, then the total expected return for our investment is Σ ri xi, and we want this value to be at least 10% of our total budget, or $1,000.

So, in summary, our model looks like this:

Minimize xTRx
Subject to Σ xi ≤ 10000
           Σ ri xi ≥ 1000
           xi ≥ 0

The last thing we need to do is formulate this quadratic program in terms of the optimization classes in the Numerical Libraries for .NET. The QuickStart samples show three ways of doing this: you can define the model directly in terms of vectors and matrices; you can build the model from scratch, or you can read it from a model definition file in MPS format. Since the problem above is already pretty much expressed in terms of vectors and matrices, we’ll use that approach here.

Quadratic programming problems are represented by the QuadraticProgram class. One of the constructors lets you create a model in so-called standard form. A quadratic program in standard form looks like this:

Minimize c0 + cTx + ½xTHx
Subject to AEx = bE
           AIx ≤ bI
           l ≤ x ≤ u

where c0 is a scalar, H, AE, and AI are matrices, and all other values are vectors. The subscript E denotes equality constraints, while the subscript I denotes inequality constraints. In our case, we don’t have equality constraints.

As a concrete example, we will work with four assets with returns 5%, -20%, 15% and 30%, respectively. Our budget is $10,000, and the minimum return is 10%. The implementation is quite straightforward. Noteworthy is that, since in the standard form the right hand side is an upper bound, we have to change the sign of the minimum return constraint which had a lower bound right-hand side.

Here’s the C# code:

// The linear term in the objective function: 
Vector c = Vector.CreateConstant(4, 0.0); 
// The quadratic term in the objective function: 
Matrix R = Matrix.CreateSymmetric(4, new double[] 
    {
        0.08,-0.05,-0.05,-0.05,
        -0.05, 0.16,-0.02,-0.02,
        -0.05,-0.02, 0.35, 0.06,
        -0.05,-0.02, 0.06, 0.35
    }, MatrixTriangle.Upper); 
// The coefficients of the constraints: 
Matrix A = Matrix.Create(2, 4, new double[] 
    { 
        1, 1, 1, 1, 
        -0.05, 0.2, -0.15, -0.30
    }, MatrixElementOrder.RowMajor);
// The right-hand sides of the constraints: 
Vector b = Vector.Create(10000, -1000);
// We're now ready to call the constructor. 
// The last parameter specifies the number of equality 
// constraints. 
QuadraticProgram qp1 = new QuadraticProgram(c, R, A, b, 0); 
// Now we can call the Solve method to run the algorithm: 
Vector x = qp1.Solve();

This is how it looks in Visual Basic:

' The linear term in the objective function: 
Dim c As Vector = Vector.CreateConstant(4, 0.0) 
' The quadratic term in the objective function: 
Dim R As Matrix = Matrix.CreateSymmetric(4, _ 
    New Double() _
        { _
            0.08, -0.05, -0.05, -0.05, _
            -0.05, 0.16, -0.02, -0.02, _
            -0.05, -0.02, 0.35, 0.06, _
            -0.05, -0.02, 0.06, 0.35 _
        }, MatrixTriangle.Upper) 
' The coefficients of the constraints: 
Dim A As Matrix = Matrix.Create(2, 4, New Double() _
    { _
        1, 1, 1, 1, _
        -0.05, 0.2, -0.15, -0.3 _
    }, MatrixElementOrder.RowMajor) 
' The right-hand sides of the constraints: 
Dim b As Vector = Vector.Create(10000, -1000) 
' We're now ready to call the constructor. 
' The last parameter specifies the number of
' equality constraints. 
Dim qp1 As New QuadraticProgram(c, R, A, b, 0) 
' Now we can call the Solve method to run the algorithm: 
Dim x As Vector = qp1.Solve()

And finally in F#:

// The linear term in the objective function:
let c = Vector.CreateConstant(4, 0.0)
// The quadratic term in the objective function:
let R = Matrix.CreateSymmetric(4,
    [|
        0.08;-0.05;-0.05;-0.05;
        -0.05; 0.16;-0.02;-0.02;
        -0.05;-0.02; 0.35; 0.06;
        -0.05;-0.02; 0.06; 0.35
    |], MatrixTriangle.Upper) 
// The coefficients of the constraints: 
let A = Matrix.Create(2, 4, 
    [|
        1.0; 1.0; 1.0; 1.0;
        -0.05; 0.2; -0.15; -0.30
    |], MatrixElementOrder.RowMajor) 
// The right-hand sides of the constraints: 
let b = Vector.Create(10000.0, -1000.0) 
// We're now ready to call the constructor. 
// The last parameter specifies the number of 
// equality constraints. 
let qp1 = new QuadraticProgram(c, R, A, b, 0)
// Now we can call the Solve method to run the algorithm:
let x = qp1.Solve()

After we’ve run this code, the vector x will contain the optimal amounts for each asset. It gives us: $3,453, $0, $1,069, $2,223. Not surprisingly, the second asset, which has had a negative return on average, should not be part of our portfolio. However, it turns out that the asset with the lowest positive return should have the largest share.

Accurate trigonometric functions for large arguments

This week we introduced two new features in the Extreme Optimization Numerical Libraries for .NET.

Trigonometric functions with large arguments

The .NET Framework implementation of the trigonometric functions, sine, cosine, and tangent, relies on the corresponding processor instruction. This gives extremely fast performance, but may not give fully accurate results. This is the case for even fairly small arguments.

For example, Math.Sin(1000000.0) returns -0.34999350217129177 while the correct value is -0.34999350217129295. So we’ve already lost at least 2 digits. And things just go downhill from there. At 10^12, we only get 8 good digits.

For even larger arguments, something quite unexpected happens. The result of Math.Sin(1e20) is… 1e20! The argument is returned unchanged! Not only is this a completely meaningless return value; it can also cause a calculation to fail if it relies on the fact that sine and cosine are bounded by -1 and +1.

To understand what is going on, we need to go back to the implementation.

Math.Sin, Math.Cos and Math.Tan rely on processor instructions like fsin and fcos. These instructions work by first reducing the angle to a much smaller value, and then using a table-based approach to get the final result. Moreover, according to the documentation, the argument must be strictly between –2^63 and 2^63. The return value is not defined for arguments outside this interval. This explains the complete breakdown for large arguments.

The explanation for the gradual loss of accuracy is more subtle. As I said, the computation is done by first reducing the argument to a smaller interval, typically (–π, π]. The argument x is rewritten in the form

x = n π + f

where n is an even integer, and f is a real number with magnitude less than π.

The first step is to divide the number by π. The quotient is rounded to the nearest even integer to give us n. We get the reduced value f by subtracting this number times π from x. Because of the periodicity of the trigonometric functions, the value of the function at x is the same as the value at f, which can be calculated accurately and efficiently.
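To make the reduction step concrete, here is a naive sketch in plain double precision. It follows the recipe above but, because it uses only a 53-bit approximation of π, it suffers exactly the accuracy loss described next:

static double SinWithNaiveReduction(double x)
{
    // x = n·π + f with n the nearest even integer to x/π and |f| at most π.
    double n = 2.0 * Math.Round(x / (2.0 * Math.PI));
    double f = x - n * Math.PI;   // the cancellation here is where accuracy is lost
    return Math.Sin(f);           // f is small, so Math.Sin handles it accurately
}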

Now, if you start with a value like 10^300, you’ll need to divide this number by π with at least 300 digits of precision, round the quotient to the nearest even number, and subtract π times this number from 10^300. To do this calculation accurately over the whole range, we would need to work with a value of π that is accurate to 1144 bits.

Even today, this is just a little bit beyond the capabilities of current hardware. Moreover, it could be considered wasteful to spend so much silicon on a calculation that is not overly common. Intel processors use just 66 bits. This leaves us with just 14 extra bits. The effects will begin to show with arguments that are larger than 2^16.

Accurate argument reduction for trigonometric functions

So, there are two problems with the standard trigonometric functions in the .NET Framework:

  1. The accuracy decreases with increasing size of the argument, losing several digits for even modest sized arguments.
  2. When the argument is very large, the functions break down completely.

To address these two problems, we’ve added Sin, Cos and Tan methods to the Elementary class. These methods perform fully accurate argument reduction so the results are accurate even for huge values of the argument.

For example, the 256-digit floating-point number 6381956970095103·2^797 is very close to a multiple of π/2. In fact, it is so close that the first 61 binary digits after the period of the reduced value are 0. As expected, Math.Cos gives the completely meaningless result 5.31937264832654141671e+255. Elementary.Cos returns -4.6871659242546267E-19, which is almost exactly the same as what Wolfram|Alpha gives: 4.6871659242546276E-19. The relative error is about the size of the machine precision.
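As a quick illustration of how the new methods are used (a minimal sketch; the Elementary class is assumed to live in the Extreme.Mathematics namespace):

using System;
using Extreme.Mathematics;

class AccurateTrig
{
    static void Main()
    {
        double x = 1000000.0;
        Console.WriteLine(Math.Sin(x));        // loses the last few digits (see above)
        Console.WriteLine(Elementary.Sin(x));  // performs fully accurate argument reduction
    }
}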

What about performance? Thanks to some tricks with modular multiplication, it is possible to do the reduction in nearly constant time. Moreover, since the reduction only needs to be done for larger arguments, the overhead in most situations is minimal. Our benchmarks show a 4% overall slowdown when no reduction is needed. For smaller values, the operation can take close to twice as long, while in the worst case, for large arguments that require a full reduction, we see a 10x slowdown.

Transcendental functions for System.Decimal

Also new in this week’s update is the DecimalMath class, which contains implementations of all functions from System.Math for the decimal type. All trigonometric, hyperbolic, exponential and logarithmic functions are supported, as well as the constants e and π.

The calculations are accurate to the full 96 bit precision of the decimal type. This is useful for situations where double precision is not enough, but going with an arbitrary precision BigFloat would be overkill.
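Here is a minimal sketch of what using it looks like (the member names below follow the System.Math naming convention and are assumptions, as is the namespace):

using System;
using Extreme.Mathematics;

class DecimalMathExample
{
    static void Main()
    {
        decimal x = 0.5m;
        // Round-trips at the full 96-bit precision of the decimal type.
        Console.WriteLine(DecimalMath.Exp(DecimalMath.Log(x)));
        Console.WriteLine(DecimalMath.Pi);   // the constant π as a decimal
    }
}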

New feature: Wilcoxon-Mann-Whitney and Kruskal-Wallis Tests

We just released an update to our Extreme Optimization Numerical Libraries for .NET that adds some new classes. We’ve added two non-parametric tests: the Mann-Whitney and Kruskal-Wallis tests. These are used to test the hypothesis that two or more samples were drawn from the same distribution. The Mann-Whitney test is used for two samples. The Kruskal-Wallis test is used when there are more than two samples.

For both tests, the test statistic only depends on the ranks of the observations in the combined sample, and no assumption about the distribution of the populations is made. This is the meaning of the term non-parametric in this context.
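For example, if R1 denotes the sum of the ranks (taken in the combined sample) of the n1 observations in the first sample, the Mann-Whitney statistic can be written as U = R1 − n1(n1 + 1)/2, which depends on the data only through the ranks.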

The Mann-Whitney test, sometimes also called the Wilcoxon-Mann-Whitney test or the Wilcoxon Rank Sum test, is often interpreted as testing whether the medians of the distributions are the same. Although a difference in medians is the dominant differentiator if it is present, other factors such as the shape or the spread of the distributions may also be significant.

For relatively small sample sizes, and if no ties are present, we return an exact result for the Mann-Whitney test. For larger samples or when some observations have the same value, the common normal approximation is used.

The Kruskal-Wallis test is an extension of the Mann-Whitney test to more than two samples. We always use an approximation for the distribution. The most common approximation is through a Chi-square distribution. We chose to go with an approximation in terms of the beta distribution that is generally more reliable, especially for smaller sample sizes. For comparison with other statistical software, the chi-square p-value is also available.

We created some QuickStart samples that illustrate how to use the new functionality.

You can also view the documentation on non-parametric tests, or download the trial version.

My .NET Framework Wish List

This is now the fifth year I’ve been writing numerical software for the .NET platform. Over these years, I’ve discovered quite a few, let’s call them ‘unfortunate’, design decisions that make writing solid and fast numerical code on .NET more difficult than it needs to be.

What I’d like to do in the coming weeks is list some of the improvements that would make life easier for people in our specialized field of technical computing. The items mentioned in this post aren’t new: I’ve written about them before. But it’s nice to have them all collected in one spot.

Fix the “Inlining Problem”

First on the list: the “inlining” problem. Method calls with parameters that are value types do not get inlined by the JIT. This is very unfortunate, as it eliminates most of the benefit of defining specialized value types. For example: it’s easy enough to define a complex number structure with overloaded operators and enough bells and whistles to make you deaf. Unfortunately, none of those operator calls are inlined. You end up with code that is an order of magnitude slower than it needs to be.

Even though it has been the top performance issue for several years, there is no indication yet that it will be fixed any time soon. You can add your vote to the already sizeable number on Microsoft’s product feedback site.

Support the IEEE-754 Standard

Over 20 years ago, the IEEE published a standard for floating-point arithmetic that has since been adopted by all major manufacturers of CPUs. So, platform independence can’t be an issue. Why then is it that the .NET Framework has only the most minimal support for the standard? Surely the fact that people took the time to come up with a solid standard, and the fact that it has enjoyed such wide support from hardware vendors, should be an indication that this is something useful and would greatly benefit an important segment of the developer community.

I’ve written about the benefits of floating-point exceptions before, and I’ve discussed my proposal for a FloatingPointContext class. I’ve added a suggestion to this effect in LadyBug. Please go over there and vote for this proposal.

Allow Overloading of Compound Assignment Operators

This is another topic I’ve written about before. In a nutshell: C# and VB.NET don’t support custom overloaded assignment operators at all. C++/CLI supports them, and purposely violates the CLI spec in the process – which is a good thing! One point I would like to add: performance isn’t the only reason. Sometimes there is a semantic difference. Take a look at this code:

RowVector pivotRow = matrix.GetRow(pivot);
for (int row = pivot + 1; row < rowCount; row++)
{
    RowVector currentRow = matrix.GetRow(row);
    currentRow -= factor * pivotRow;
}

which could be part of the code for computing the LU Decomposition of a matrix. The GetRow method returns the row of the matrix without making a copy of the data. The code inside the loop subtracts a multiple of the pivot row from the current row. With the current semantics where x -= y is equivalent to x = x – y, this code does not perform as expected: the subtraction allocates a new vector and rebinds the local variable currentRow, so the row stored in the matrix is never modified.

What I would like to see is have the CLI spec changed to match what C++/CLI does. Compound assignment operators should be instance members.

 

Still to come: a proposal for some (relatively) minor modifications to .NET generics to efficiently implement generic arithmetic, better support for arrays, and more.