Using Numerical Libraries for .NET from F#

With the release of version 5.0 of our Numerical Libraries for .NET, we’ve also considerably improved our support for the F# language. In this article I want to walk you through setting up F# Interactive to use the Extreme Optimization Numerical Libraries for .NET, and show off some of the new features. This tutorial will focus on F# 3.0 in Visual Studio 2012. Most of the functionality also works with F# 2.0 in Visual Studio 2010.

Setting up F# Interactive

The first step is to bring the Extreme Optimization Numerical Libraries for .NET assemblies into the F# Interactive environment:

#I @"c:\Program Files\Extreme Optimization\Numerical Libraries for .NET\bin\Net40\";;
#r "Extreme.Numerics.Net40.dll";;
#r "Extreme.Numerics.FSharp30.Net40.dll";;

open Extreme.Mathematics;;

fsi.AddPrinter(fun (A : Extreme.Mathematics.ISummarizable) -> A.Summarize());;

We do this by first adding the path to the Extreme Optimization binaries to the DLL search path using the #I command. (Note: the path depends on the installation folder and may be different on your machine.) We then reference the main assembly, Extreme.Numerics.Net40, and the F#-specific assembly, Extreme.Numerics.FSharp30.Net40, using the #r command. Finally, we tell fsi to use a pretty printer for objects that implement the ISummarizable interface. This prints vectors and matrices in a user-friendly form.

Working with vectors and matrices

So let’s get our hands dirty right away. When we referenced the Extreme.Numerics.FSharp30.Net40 dll, this automatically opened several modules that make it easier to do linear algebra in F#. We can call Vector and Matrix factory methods, or we can use some convenient F# functions:

> let A = rand 3 3;;

val A : LinearAlgebra.DenseMatrix =
  3x3 DenseMatrix
[[0.0780,0.1007,0.6917]
 [0.5422,0.8726,0.5718]
 [0.3528,0.0631,0.0772]]

> let c = !![1.0..10.0];;
val c : LinearAlgebra.DenseVector =
  [1.0000,2.0000,3.0000,4.0000,5.0000,6.0000,7.0000,8.0000,9.0000,10.0000]

Notice the !! operator, which creates a vector from a list or sequence. We can access elements of vectors and matrices using the familiar F# syntax. Slices work too, as do integer sequences and predicates:

> A.[0,1];;
val it : float = 0.100694183
> A.[1,1] <- 99.0;;
> A.[1..2,0..1];;
val it : Matrix = 2x2 DenseMatrix
[[0.5422,99.0000]
 [0.3528,0.0631]]
> c.[1..3];;
val it : Vector = [2.0000,3.0000,4.0000]
> c.[fun x -> sin x > 0.0];;
val it : Vector = [1.0000,2.0000,3.0000,7.0000,8.0000,9.0000]
> c.[[2;4;5]];;
val it : Vector = [3.0000,5.0000,6.0000]
> c.[[for i in 1..7 -> i % 3]];;
val it : Vector = [2.0000,3.0000,1.0000,2.0000,3.0000,1.0000,2.0000]

Let’s do some simple calculations. First we define a vector b to work with:

> let b = Vector.Create(3, fun i -> 1.0 + (float i)**2.0);;
val b : LinearAlgebra.DenseVector = [1.0000,2.0000,5.0000]
> A*b;;
val it : LinearAlgebra.DenseVector = [3.7378,5.1463,0.8651]
> A.T*b;;
val it : Vector = [2.9263,2.1614,2.2214]
> 2.0 * b;;
val it : LinearAlgebra.DenseVector = [2.0000,4.0000,10.0000]
> 1.0 + b;;
val it : Vector = [2.0000,3.0000,6.0000]
> log (1.0 + b);;
val it : Vector = [0.6931,1.0986,1.7918]
> det A;;
val it : float = -0.1707208593
> inv A;;
val it : Matrix =
  3x3 DenseMatrix
[[-0.1835,-0.2101,3.1982]
 [-0.9361,1.3939,-1.9355]
 [1.6027,-0.1792,-0.0791]]
> let x = A.Solve(b);;
val x : LinearAlgebra.DenseVector = [15.3876,-7.8258,0.8488]

The last command solves the system of equations represented by the matrix A for the right-hand side b. We can also work with matrix decompositions. There are actually two ways. Let’s use the SVD as an example. First, we can get the factors of the decomposition.

> let (U,Σ,V) = A.svd();;

val Σ : LinearAlgebra.DiagonalMatrix =
  3x3 DiagonalMatrix
[[1.2980,0.0000,0.0000]
 [0.0000,0.5101,0.0000]
 [0.0000,0.0000,0.2578]]
val V : Matrix =
  3x3 DenseMatrix
[[-0.4492,0.4032,-0.7973]
 [-0.6411,0.4760,0.6020]
 [-0.6222,-0.7816,-0.0446]]
val U : Matrix =
  3x3 DenseMatrix
[[-0.4083,-0.9041,-0.1260]
 [-0.8928,0.3667,0.2616]
 [-0.1903,0.2193,-0.9569]]

Or, if we just want the singular values, we can get those as well:

> let σ = A.svd();;

val σ : LinearAlgebra.DenseVector = [1.2980,0.5101,0.2578]

The second way to work with decompositions is by creating decomposition objects. Let’s work with a non-square matrix this time:

> let C = rand 15 4;;

val C : LinearAlgebra.DenseMatrix =
  15x4 DenseMatrix
[[0.4198,0.8147,0.8530,0.5364]
 [0.9181,0.7706,0.3164,0.8773]
 [0.4768,0.1423,0.7419,0.6521]
 [0.2463,0.4579,0.3474,0.0311]
 [0.0432,0.6366,0.4928,0.2399]
 ...
 [0.3991,0.3426,0.4452,0.1276]
 [0.7867,0.3247,0.5256,0.1940]
 [0.6504,0.0943,0.1169,0.4266]
 [0.0436,0.6716,0.5230,0.4922]
 [0.5329,0.5422,0.2448,0.0547]]

> let d = (rand 15 1).GetColumn(0);;

val d : Vector =
  [0.7446,0.8430,0.6296,0.1210,0.2190,...,0.8281,0.6643,0.0355,0.7120,0.3582]

Here we see that the output is truncated: only the first and last five rows are shown. Let’s find the least squares solution to C*y = d. We first create the singular value decomposition object svdC for the matrix C; its pseudo-inverse gives us the least squares solution. We’ll also compute the residuals and their norm:

> let y = svdC.GetPseudoInverse() * d;;

val y : Vector = [0.3221,0.4458,0.3383,0.1298]

> let r = d - C * y;;

val r : Vector =
  [-0.1120,-0.0171,0.0770,-0.2840,-0.2765,...,0.3796,0.0632,-0.3110,0.1577,-0.1450]

> norm r;;
val it : float = 0.8214250504

Working with functions

You’d expect a functional language to work with functions. So let’s see what we can do.

The modules discussed in this section are not automatically opened when you load the F# compatibility assembly. The available modules are:

  • Special: special functions, such as Gamma functions, Bessel functions, etc.
  • Functional: numerical integration and differentiation, finding zeros, maxima and minima of functions.
  • Statistics: some functions that are useful when doing statistical analysis.
  • Random: random number streams.

These are only the modules with F# specific functions. The full functionality of the libraries is of course available from F# as well.

Here we’ll focus mainly on the second category. Special functions can be very handy, however, so let’s start with those:

> open Extreme.Numerics.FSharp.Special;;
> gamma 0.5;;
val it : float = 1.772453851
> BesselJ 0.0 1.0;;
val it : float = 0.7651976866
> let J0 = BesselJ 0.0;;

val J0 : (float -> float)

> J0 1.0;;
val it : float = 0.7651976866
>

Notice how we used partial application of the Bessel function to get a function that returns the Bessel function of order 0. We can do something similar in many other places, for example with binomial coefficients:

> let binom9 = binomial 9;;

val binom9 : (int -> float)

> [for i in 0..9 -> binom9 i];;
val it : float list =
  [1.0; 9.0; 36.0; 84.0; 126.0; 126.0; 84.0; 36.0; 9.0; 1.0]

Now to the functional stuff. Two of the most basic operations on functions are differentiation and integration. Differentiation is done using the d function. Integration is done using integrate1. Partial application is supported, so you can create a function that computes the numerical derivative at any value you give it.

> open Extreme.Numerics.FSharp.Functional;;
> let J1 = BesselJ 1.0;;

val J1 : (float -> float)

> d J0 0.5;;
val it : float = -0.2422684577
> J1 0.5;; // d J0 = -J1
val it : float = 0.2422684577
> let dJ0 = d J0;;

val dJ0 : (float -> float)

> dJ0 0.5;;
val it : float = -0.2422684577
> dJ0 1.5;;
val it : float = -0.5579365079

Integration works in a similar way. The bounds of the integration interval are supplied as a tuple. Infinite intervals are supported as well.

> integrate1 J1 (0.0,5.0);;
val it : float = 1.177596771
> J0 5.0 - J0 0.0;; // Should give minus the previous result:
val it : float = -1.177596771
> let iJ1 = integrate1 J1;;

val iJ1 : (float * float -> float)

> iJ1 (0.0,5.0);;
val it : float = 1.177596771
> integrate1 (fun x -> exp -x) (0.0,infinity);;
val it : float = 1.0

Finding zeros of functions is easy. Let’s quickly find the first 5 zeros of the Bessel function J0:

> [for x0 in [2.0;5.0;8.0;11.0;14.0] -> findzero (fun x -> J0 x) x0];;
val it : float list =
  [2.404825558; 5.52007811; 8.653727913; 11.79153444; 14.93091771]

Finding the minimum or maximum of a function can be difficult because most software requires that you supply the gradient of the function. Although computing derivatives is high school calculus, it’s still very error prone. Automatic differentiation comes to the rescue here. We’ll use the SymbolicMath class, which contains all kinds of functions that take advantage of symbolic computations to obtain a solution.

One of the most famous optimization problems is the Rosenbrock function. We can find the minimum of this function in just one line of code:

> SymbolicMath.FindMinimum((fun x y -> (1.0 - x)**2.0 + 100.0*(y-x**2.0)**2.0), !![-1.0;-1.0]);;
val it : SolutionReport =
  Extreme.Mathematics.SolutionReport`1[Extreme.Mathematics.Vector]
    {ConvergenceTest = null;
     Error = 2.778930274e-09;
     EvaluationsNeeded = 74;
     IterationsNeeded = 26;
     Result = [1.0000,1.0000];
     Status = Converged;}
>

It’s time now for you to play. Download the trial version of the Extreme Optimization Numerical Libraries for .NET here and have fun!

My .NET Framework Wish List

This is now the fifth year I’ve been writing numerical software for the .NET platform. Over these years, I’ve discovered quite a few, let’s call them ‘unfortunate’, design decisions that make writing solid and fast numerical code on .NET more difficult than it needs to be.

What I’d like to do in the coming weeks is list some of the improvements that would make life easier for people in our specialized field of technical computing. The items mentioned in this post aren’t new: I’ve written about them before. But it’s nice to have them all collected in one spot.

Fix the “Inlining Problem”

First on the list: the “inlining” problem. Method calls with parameters that are value types do not get inlined by the JIT. This is very unfortunate, as it eliminates most of the benefit of defining specialized value types. For example: it’s easy enough to define a complex number structure with overloaded operators and enough bells and whistles to make you deaf. Unfortunately, none of those operator calls are inlined. You end up with code that is an order of magnitude slower than it needs to be.
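For a concrete (and purely illustrative) picture of the kind of type this hurts, here is a bare-bones complex value type. With the JIT behavior described above, the call to the overloaded + operator is not inlined, so every addition in a tight loop pays the full method call overhead:

// Minimal sketch of a complex number value type with an overloaded operator.
// Because the operator takes value type parameters, the JIT does not inline it.
public struct Complex
{
    public readonly double Re;
    public readonly double Im;

    public Complex(double re, double im) { Re = re; Im = im; }

    public static Complex operator +(Complex a, Complex b)
    {
        return new Complex(a.Re + b.Re, a.Im + b.Im);
    }
}

// In a summation loop, the non-inlined operator call dominates the cost:
// Complex sum = new Complex(0.0, 0.0);
// for (int i = 0; i < values.Length; i++) sum = sum + values[i];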

Even though it has been the top performance issue for several years, there is no indication yet that it will be fixed any time soon. You can add your vote to the already sizeable number on Microsoft’s product feedback site.

Support the IEEE-754 Standard

Over 20 years ago, the IEEE published a standard for floating-point arithmetic that has since been adopted by all major manufacturers of CPUs. So, platform independence can’t be an issue. Why then does the .NET Framework have only the most minimal support for the standard? Surely the fact that people took the time to come up with a solid standard, and the fact that it has enjoyed such wide support from hardware vendors, should be an indication that it is something useful that would greatly benefit an important segment of the developer community.

I’ve written about the benefits of floating-point exceptions before, and I’ve discussed my proposal for a FloatingPointContext class. I’ve added a suggestion to this effect in LadyBug. Please go over there and vote for this proposal.

Allow Overloading of Compound Assignment Operators

This is another topic I’ve written about before. In a nutshell: C# and VB.NET don’t support custom overloaded assignment operators at all. C++/CLI supports them, and purposely violates the CLI spec in the process – which is a good thing! One point I would like to add: performance isn’t the only reason. Sometimes there is a semantic difference. Take a look at this code:

RowVector pivotRow = matrix.GetRow(pivot);
for (int row = pivot + 1; row < rowCount; row++)
{
   RowVector currentRow = matrix.GetRow(row);
   currentRow -= factor * pivotRow;
}

which could be part of the code for computing the LU decomposition of a matrix. The GetRow method returns the row of the matrix without making a copy of the data. The code inside the loop subtracts a multiple of the pivot row from the current row. With the current semantics, where x -= y is equivalent to x = x - y, the subtraction creates a brand new vector and assigns it to the local variable currentRow; the row stored in the matrix is never touched, so this code does not perform as expected.
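To spell out the difference (RowVector stands in for any non-copying row view; SubtractInPlace is a hypothetical in-place method):

// What the compiler generates today for "currentRow -= factor * pivotRow":
// the subtraction allocates a new vector and rebinds the local variable,
// leaving the underlying matrix row unchanged.
RowVector temp = currentRow - factor * pivotRow;
currentRow = temp;

// What an overloadable, instance-based compound assignment could express:
// currentRow.SubtractInPlace(factor * pivotRow);   // updates the matrix row itself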

What I would like to see is have the CLI spec changed to match what C++/CLI does. Compound assignment operators should be instance members.

 

Still to come: a proposal for some (relatively) minor modifications to .NET generics to efficiently implement generic arithmetic, better support for arrays, and more.

Dynamic times two with the Dynamic Language Runtime

Microsoft today announced their latest addition to the .NET family: the Dynamic Language Runtime (DLR). As Jim Hugunin points out, it is based on the IronPython 1.0 codebase, but has been generalized so it can support other dynamic languages, including Visual Basic.

Now, the word ‘dynamic’ here is often misunderstood. Technically, the word dynamic refers to the type system. The .NET CLR is statically typed: every object has a well-defined type at compile time, and all method calls and property references are resolved at compile time. Virtual methods are somewhat of an in-between case, because which code is called depends on the runtime type, a type which may not even have existed when the original code was compiled. But still, every type that overrides a method must inherit from a specific parent class.

In dynamic languages, the type of variables, their methods and properties may be defined at runtime. You can create new types and add properties and methods to existing types. When a method is called in a dynamic language, the runtime looks at the object, looks for a method that matches, and calls it. If there is no matching method, a run-time error is raised.

Writing code in dynamic languages can be very quick, because there is rarely a need to specify type information. It’s also very common to use dynamic languages interactively. You can execute IronPython scripts, but there’s also a Console that hosts interactive IronPython sessions.

And this is where it gets confusing. Because leaving out type information and interactive environments come naturally to dynamic languages, these features are often thought of as properties of dynamic languages. They are not.

Ever heard of F#? It is a statically typed, compiled language created by Don Syme and others at Microsoft Research. It can be used to build libraries and end-user applications, much like C# and VB. But it also has an interactive console and eliminates the need for most type specifications through a smart use of type inference.

F# is not a dynamic language in the technical sense: it is statically typed. But because it has an interactive console and you rarely have to specify types, it is a dynamic language in the eyes of a lot of people. In fact, at the Lang.NET symposium hosted by Microsoft last August, people were asked what their favorite dynamic language is. Many answered with F#. And these were programming language designers and compiler writers!

Anyway, the point I wanted to make with this post is that the new Dynamic Language Runtime has great support for both the technically dynamic languages (dynamic types) and the perceived as dynamic features like interactive environments. I hope the distinction between these two aspects will be clarified in the future.

Double trouble: an interesting puzzle

I’ve written before about some of the strange looking behavior you can find when working with numbers on a computer. As this article explains at length, this behavior is a completely logical consequence of the need to use a finite set of values to represent potentially infinitely many numbers.

Kathy Kam of the Microsoft CLR team recently posted another interesting piece of floating-point weirdness. The question boils down to this. When you run this code:

Console.WriteLine(4.170404 == Convert.ToDouble("4.170404"));

why does it print false instead of the expected true?

The answer turns out to be not trivial and comes down to the different ways that the .NET framework and the C# compiler convert text strings to numbers.
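If you want to see what is going on, you can compare the raw bit patterns of the two values; on an affected runtime the two patterns differ in the last bit (a small sketch; the exact output depends on the runtime version):

double compiled = 4.170404;                       // converted by the C# compiler
double converted = Convert.ToDouble("4.170404");  // converted at run time

// Print the underlying 64-bit patterns so a one-bit difference becomes visible.
Console.WriteLine("{0:x16}", BitConverter.DoubleToInt64Bits(compiled));
Console.WriteLine("{0:x16}", BitConverter.DoubleToInt64Bits(converted));
Console.WriteLine(compiled == converted);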

The C# compiler uses the Windows API function VarR8FromStr to convert the literal value to a double. The documentation isn’t very specific about its inner workings. The C# spec says in section 9.4.4.3 that doubles are rounded using the IEEE “round to nearest” mode.

So what happens here? First, the number is ‘normalized’: it is scaled by a power of two so that it lies between 1 and 2. The scale factor is moved to the exponent. Next, the number is rounded to double precision, which keeps 52 binary digits after the binary point.

Here, 4.170404 is divided by 4, which gives 1.042601. The binary representation of this number is:

1.0000101011100111111001100010110111000110111000101…

The part that interests us starts at digit #52 after the “binary point,” so let’s show everything after, say, the 40th digit:

4        5          6          7
1234567890 1234567890 1234567890
1110001010 1010000000 0000011001

To round to 52 binary digits, we need to round upwards:

4        5
1234567890 12
1110001010 11

This result is used to compose the double precision number.

If we follow the path of the Convert.ToDouble call, we find that it passes its string argument on to Double.Parse. This method prepares an internal structure called a NumberBuffer and eventually calls a function named NumberBufferToDouble internal to the CLR. In the Shared Source CLI implementation (Rotor) code, we find the following comment for the function that does all the hard work:

    The internal integer representation of the float number is
    UINT64 mantisa + INT exponent. The mantisa is kept normalized
    ie with the most significant one being 63-th bit of UINT64.

This is good news – extra precision is used to ensure we get the correct result. Looking further, we find that this function uses a helper function appropriately called Mul64Lossy: which multiplies two 64bit values. In the comments, we find this:

    // it’s ok to losse some precision here – Mul64 will be called
    // at most twice during the conversion, so the error won’t propagate
    // to any of the 53 significant bits of the result

This is bad news. If you look at the binary representation of 4.170404/4 above, you’ll see that all digits from the 54th up to the 65th are zero. So it is very much possible that there was some loss of precision here, and that it did propagate to the last significant digit of the final result. The assumption made by the developer of this code is mostly right, but sometimes wrong.

But why risk loss of precision when it can be avoided? The (misguided) answer is: speed. The Rotor code contains a function, Mul64Precise which doesn’t suffer from this loss of precision. However, it does use a few extra instructions to do some more shifting and multiplying. The function is only used in debug mode to verify that some internal conversion tables are correct.

In the grand scheme of things, the few extra instructions that would be used to get a correct result have only a very small effect on performance. The Convert.ToDouble method that started it all ends up spending most of its time parsing according to the specified locale, checking for currency symbols, thousands separators, etc. Only a tiny fraction of the time is spent in the Mul64 functions.

Let’s estimate how common this error is. For a conversion error to occur, the 12 bits from the 53rd to the 64th must all be zero. That’s about 1 in 4000. Also, the rounding must be affected, which gives us another factor of 2 or 4. So, as many as 1 conversion out of every 10000 may suffer from this effect!

The moral of the story: Be very careful with your assumptions about how errors will propagate. Don’t compromise for the sake of a few CPU cycles, unless performance is absolutely critical and is more important than accuracy.

Fast, Faster, Fastest

One of my favourite pastimes is getting the most out of a piece of code. Today, a comment on Rico Mariani’s latest performance challenge gave me an opportunity to play a bit: how to format the time part of a DateTime structure in the form “hh:mm:ss.mmm” in the fastest possible way.

Apparently, Alois Kraus and Greg Young have been at it for a bit. Their solution already gives us more than a five-fold increase in speed compared to the simplest solution using String.Format. But could we do even better?

As it turns out, we could. Here’s the code that we can improve:

long ticks = time.Ticks;
int hour = (int)((ticks / 0x861c46800L)) % 24;
int minute = (int)((ticks / 0x23c34600L)) % 60;
int second = (int)(((ticks / 0x989680L)) % 60L);
int ms = (int)(((ticks / 0x2710L)) % 0x3e8L);

The tick count is the number of 100 nanosecond (ns) intervals since the zero time value. For each of the hour, minute, second and millisecond parts, this code divides the number of ticks by the number of 100ns intervals in that time span, and reduces that number to the number of units in the larger time unit using a modulo. So, for example, there are 60x60x10000000 ticks in an hour, which is 0x861c46800 in hex, and there are 24 hours in a day.
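The hex constants are simply the tick counts of the corresponding time units. A quick check against the TimeSpan constants (not part of the original code, just a sanity check) makes that explicit:

// The hex constants in the code above are the tick counts of each time unit.
Console.WriteLine(0x861c46800L == TimeSpan.TicksPerHour);     // 36,000,000,000
Console.WriteLine(0x23c34600L == TimeSpan.TicksPerMinute);    //    600,000,000
Console.WriteLine(0x989680L == TimeSpan.TicksPerSecond);      //     10,000,000
Console.WriteLine(0x2710L == TimeSpan.TicksPerMillisecond);   //         10,000
Console.WriteLine(0x3e8L == 1000L);                           // ms per second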

What makes the above code less than optimal is that it starts from the number of ticks to compute every time part. This is a long (64 bit) value. 64-bit calculations are slower than 32-bit calculations. Moreover, divisions (or modulos) are much more expensive than multiplications.

We can fix both these issues by first finding the total number of milliseconds in the day. That number is always smaller than 100 million, so it fits in an int. We can calculate the number of hours with a simple division. We can “peel off” the hours from the total number of milliseconds in the day to find the total milliseconds remaining in the hour. From this, we can calculate the number of minutes with a simple division, and so on. The improved code looks like this:

long ticks = time.Ticks;
int ms = (int)((ticks / 10000) % 86400000);
int hour = ms / 3600000;
ms -= 3600000 * hour;
int minute = ms / 60000;
ms -= 60000 * minute;
int second = ms / 1000;
ms -= 1000 * second;

This change decreases the running time by about 28 percent from the fastest previous solution. We can shave off another 4% or so by replacing the modulo calculation by a subtraction in the code that computes the digits.

The question now is: can we do even better, still?

Once again, the answer is: Yes: by as much as another 25%!

The single most time consuming calculation is a division. Dividing by large numbers is an order of magnitude slower than multiplying. For smaller numbers, the difference is smaller, but still significant. Since we know the numbers we are dividing by in advance, we can do a little bit shifting magic and eliminate the divisions altogether.

Let’s take dividing by 10 as an example. The basic idea is to approximate the ratio 1/10 by another rational number with a power of two as the denominator. Instead of dividing, we can then multiply by the numerator, and shift by the exponent in the denominator. Since shifting chops off digits, it effectively rounds down the result of the division, so we always have to find an approximation that is larger than the ratio.

We see, for example, that 13/128 is a good approximation to 1/10. We can rewrite x/10 as (x*13) >> 7 as long as x is not too large. We run into trouble as soon as the error times x is larger than 1. In this case, that happens when x is larger than 13/(13-12.8) = 65. Fortunately, this is larger than the number of hours in a day, or the number of minutes in an hour, so we can use it for most calculations in our code. It won’t work for numbers up to 100, so to get the second digit of the milliseconds, we need the next approximation, 205/2048, which is good for values up to about 1,000.

To get the first digit of the milliseconds, we need to divide by 100. We find that 41/4096 works nicely.
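A quick brute-force check (a throwaway sketch, not part of the challenge code) confirms these approximations over the ranges we actually use them for:

// Verify the shift-and-multiply approximations over the ranges used here.
for (int x = 0; x < 65; x++)        // hours (0-23), minutes and seconds (0-59)
    if ((x * 13) >> 7 != x / 10) Console.WriteLine("13/128 fails at " + x);

for (int x = 0; x < 1000; x++)      // first digit of the milliseconds
    if ((x * 41) >> 12 != x / 100) Console.WriteLine("41/4096 fails at " + x);

for (int x = 0; x < 100; x++)       // second digit of the milliseconds
    if ((x * 205) >> 11 != x / 10) Console.WriteLine("205/2048 fails at " + x);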

Implementing this optimization, we go from (for example):

*a = (char)(hour / 10 + ‘0’);
a++;
*a = (
char)(hour % 10 + ‘0’);

to:

int temp = (hour * 13) >> 7;
*a = (
char)(temp + ‘0’
);
a++;
*a = (
char)(hour – 10 * temp + ‘0’);

Our running time for 1 million iterations goes down from 0.38s to 0.28s, a savings of almost 18% compared to the original.

The larger divisors give us a bit of a challenge. To get the number of seconds, we divide a number less than 60000 by 1000. Doing this the straight way has us multiplying by 536871, which would require a long value for the result of the multiplication. We can get around this once we realize that 1000 = 8*125. So if we shift the number of milliseconds by 3, we only need to divide by 125. As an added benefit, the numbers we’re multiplying are always less than 7500, so our multiplier can be larger. This gives us the simple expression good for numbers up to almost 4 million: ((x >> 3) * 67109) >> 23.

The same trick doesn’t work for getting the minutes and hours, but it does allow us to fit the intermediate result into a long. We can use the Math.BigMul method to perform the calculation efficiently.
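The divisions hidden in the final code below can be sanity-checked the same way (again just a sketch; the loops cover every value each expression is applied to):

// Check the three larger divisor replacements used in FormatFast6 below.
for (int ms = 0; ms < 86400000; ms++)     // hours: ms / 3600000
    if ((int)(Math.BigMul(ms >> 7, 9773437) >> 38) != ms / 3600000)
        Console.WriteLine("hour expression fails at " + ms);

for (int ms = 0; ms < 3600000; ms++)      // minutes: ms / 60000
    if ((int)(Math.BigMul(ms >> 5, 2290650) >> 32) != ms / 60000)
        Console.WriteLine("minute expression fails at " + ms);

for (int ms = 0; ms < 60000; ms++)        // seconds: ms / 1000
    if (((ms >> 3) * 67109) >> 23 != ms / 1000)
        Console.WriteLine("second expression fails at " + ms);

Console.WriteLine("checks done");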

The final code is given below. It is doubtful it can be improved by much. It runs in as little as 0.221s for one million iterations, 2.5 times faster than the previous fastest code and over 25 times faster than the original.

private unsafe static string FormatFast6(DateTime time)
{
    fixed (char* p = dateData)
    {
        long ticks = time.Ticks;
        char* a = p;
        int ms = (int)((ticks / 10000) % 86400000);
        int hour = (int)(Math.BigMul(ms >> 7, 9773437) >> 38);
        ms -= 3600000 * hour;
        int minute = (int)((Math.BigMul(ms >> 5, 2290650)) >> 32);
        ms -= 60000 * minute;
        int second = ((ms >> 3) * 67109) >> 23;
        ms -= 1000 * second;

        int temp = (hour * 13) >> 7;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(hour - 10 * temp + '0');
        a += 2;

        temp = (minute * 13) >> 7;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(minute - 10 * temp + '0');
        a += 2;

        temp = (second * 13) >> 7;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(second - 10 * temp + '0');
        a += 2;

        temp = (ms * 41) >> 12;
        *a = (char)(temp + '0');
        a++;
        ms -= 100 * temp;
        temp = (ms * 205) >> 11;
        *a = (char)(temp + '0');
        a++;
        ms -= 10 * temp;
        *a = (char)(ms + '0');
    }
    return new String(dateData);
}

Visual Studio as an interactive technical computing environment

A year ago, we were experimenting with an extensive sample that would illustrate the linear algebra capabilities of our math library. Unfortunately, M#, the .NET Matrix Workbench, didn’t get very far. We’re technical guys. Building user interfaces just isn’t our cup of tea, and the IDE we built wasn’t stable enough to make it into an end-user product.

At the time, I realized that much of the functionality needed for this kind of interactive computing environment was already present in Visual Studio .NET. For example, we already have an excellent code editor, a project workspace, and tool windows that display variables and their values. Moreover, in the Visual Studio SDK, we have a framework for extending that environment with visualizers for specific types of variables, intellisense, custom project items, and so on.

Plus, you have a great library (the .NET Base Class Libraries) that you can use to do just about anything you’d like.

In short, Visual Studio is the ideal starting point to build a great technical computing IDE.

A couple of recent news items bring this vision closer to reality. Aaron Marten reports that the February 2006 CTP of the Visual Studio 2005 SDK now contains a tool window that hosts an IronPython console. And just a few days ago, Don Syme gave us a taste of what is to come in the next release of F#. The screen shot is the kind you would expect from Matlab. (I guess I was right when I wrote that Don gets what scientists and engineers need.)

Now, all we need is a Matlab-like language for the .NET platform…

Overload resolution in C#

Programming language design is a very tough discipline. You can be sure that every feature will be used and just as often abused in unexpected ways. Mistakes are close to impossible to fix.

C# is a great language. It’s my language of choice for most applications. It derives a lot of its quality from the cautious nature of its lead architect, Anders Hejlsberg, who tries to guide programmers into writing good code. That’s why unsafe code is called unsafe. Not because it is a security risk, but because it is really easy to get yourself into a buggy mess with it. It is why methods and properties have to be explicitly declared virtual, because people tend to override methods when it is not appropriate.

In some cases, however, this protecting of developers against themselves can go too far. Overload resolution is one example I came across recently.

Consider a Curve class that represents a mathematical curve. It has a ValueAt method which gets the value of the curve at a specific x-value, which is passed in as a Double argument. This is a virtual method, of course, and specific types of curves, like Polynomial or CubicSpline, provide their own implementation.

Now, for polynomials in particular, it is sometimes desirable to get the value of the polynomial for a complex argument. So we define an overload that takes a DoubleComplex argument and also returns a DoubleComplex.

So far so good.

But now we want to add a conversion from Double to DoubleComplex. This is a widening conversion: every real number is also a complex number. So it is appropriate to make the conversion implicit. We can then write things like:

DoubleComplex a = 5.0;

instead of

DoubleComplex a = new DoubleComplex(5.0);

Unfortunately, this change breaks existing code. The following snippet will no longer compile:

Polynomial p = new Polynomial(1.0, 2.0, 3.0);
double y = p.ValueAt(1.0);

On the second line, we get an error message: “Cannot implicitly convert type DoubleComplex to Double.” Why? Because of the way C# resolves method overloads.

Specifically, member lookup in C# excludes methods marked override, and it considers methods declared on the type itself before anything declared in its base classes. Section 7.3 of the C# spec (“Member lookup”) states:

First, the set of all accessible (Section 3.5) members named N declared in T and the base types (Section 7.3.1) of T is constructed. Declarations that include an override modifier are excluded from the set. If no members named N exist and are accessible, then the lookup produces no match, and the following steps are not evaluated.

In this case, because there is an implicit conversion from Double to DoubleComplex, the ValueAt(DoubleComplex) overload is applicable. Even though there is an overload whose parameters match exactly, it isn’t even considered here, because it is an override.
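Here is a minimal, self-contained sketch of the situation (hypothetical stand-ins for Curve, Polynomial and DoubleComplex):

public struct DoubleComplex
{
    public readonly double Re, Im;
    public DoubleComplex(double re, double im) { Re = re; Im = im; }

    // The widening conversion: every real number is also a complex number.
    public static implicit operator DoubleComplex(double x)
    {
        return new DoubleComplex(x, 0.0);
    }
}

public class Curve
{
    public virtual double ValueAt(double x) { return 0.0; }
}

public class Polynomial : Curve
{
    public override double ValueAt(double x) { return x; }

    // New overload declared on the derived type.
    public DoubleComplex ValueAt(DoubleComplex z) { return z; }
}

// double y = new Polynomial().ValueAt(1.0);
// The override is excluded from member lookup, so the compiler binds to
// ValueAt(DoubleComplex) through the implicit conversion, and the result
// cannot be implicitly converted back to double: a compile error.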

This highly unintuitive behavior is justified by the following two rules:

  1. Whether or not a method is overridden is an implementation detail that should be allowed to change without breaking client code.
  2. Changes to a base class that don’t break an inherited class should not break clients of the inherited class.

Even though neither of these actually applies to our example, I can understand that these rules are useful in many situations. In this case, however, it essentially hides the ValueAt(Double) overload I defined, unless I use an ugly upcast construct like

double y = ((Curve)p).ValueAt(1.0);

My problem is this: If I define an overload that takes a specific argument type, clearly my intent is for that overload to be called whenever the argument is of that exact type. This rule violates that intent.

In my view, it was a mistake to violate developer intent in this way and not give an exact signature match precedence over every other overload candidate. Unfortunately, this is one of those choices that in all likelihood is next to impossible to fix.

Visual Basic has a different set of overload resolution rules. It looks for the overload with the least widening conversion, and so would pick the correct overload in this case.

Thanks to Neal Horowitz for helping me clear up this issue.

Accessing floating-point context from .NET applications

A few days ago, I wrote about the need for the .NET framework to support floating-point exceptions. Although the solution I proposed there goes some way towards fixing the problem, it would create a few problems of its own as well.

The main issue is that the processor’s FPU (floating-point unit) is a global resource. Therefore it should be handled with extreme care.

For an example of what can go wrong if you don’t, we can look at the story of Managed DirectX. DirectX is the high-performance multimedia/graphics/gaming API on the Windows platform. Because performance is of the essence, the DirectX guys decided to ask the processor to do its calculations in single precision by default. This is somewhat faster than the default, and all that is generally needed for the kinds of applications they encountered.

When people started using the .NET framework to build applications using Managed DirectX, some people found that it broke the .NET math functions. The reason: the precision setting of the FPU is global. It therefore affected all floating-point code in the application, including the calls into the .NET Base Class Libraries.

The problem with our initial solution is that it doesn’t behave nicely. It doesn’t isolate other code from changes to the floating-point state. This can affect code in unexpected ways.

So how do we fix this?

One option is to flag code that uses special floating-point settings with an attribute like “FloatingPointContextAware.” Any code that has this attribute can access floating-point state. However, any code that does not have this attribute would require the CLR to explicitly save the floating-point state.

Unfortunately, this fix suffers from a number of drawbacks:

  1. In some applications, like error analysis, you want to set global state. For example, one way to check whether an algorithm is stable is to run it twice using different rounding modes. If the two results are significantly different, you have a problem.
  2. It is overkill in most situations. If you’re going to use special floating-point features, you should be able to isolate yourself properly from any changes in floating-point state. If you have no knowledge such features exist, chances are your code won’t break if, say, the rounding mode is changed.
  3. It is a performance bottleneck. You can’t require all your client code to also have this attribute. Remember the goal was to allow ‘normal’ calculations to be done as fast as possible while making the edge cases reasonably fast as well. It is counter-productive to impose a performance penalty on every call.

The best solution I can think of is to actually treat the floating-point unit for what it is: a resource. If you want to use its special features, you should inform the system when you plan to use them, and also when you’re done using them.

We make floating-point state accessible only through instances of a FloatingPointContext class, which implements IDisposable and does the necessary bookkeeping. It saves enough of the floating-point state when it is constructed, and restores it when it is disposed. You don’t always have to save and restore the full floating-point state, including values on the stack, etc. In most cases, you only need to save the control word, which is relatively cheap.

A typical use for this class would be as follows:

using (FloatingPointContext context = new FloatingPointContext())
{
    context.RoundingMode = RoundingMode.TowardsZero;
    context.ClearExceptionFlags();
    // Do some calculations
    if (context.IsExceptionFlagRaised(FloatingPointExceptionFlag.Underflow))
        // Do some more stuff
}

I have submitted a proposal for a FloatingPointContext class to the CLR team. So far they have chosen to keep it under consideration for the next version. Let’s hope they’ll choose to implement it.

Floating-point exceptions and the CLR

I’ve decided to share some of our experiences during the development of our math and statistics libraries in the hope that they may contribute to improvements in the .NET platform as the next version is being designed.

The CLR is a general-purpose runtime environment, and cannot be expected to support every application at the fastest possible speed. However, I do expect it to perform reasonably well, and if a performance hit can be avoided, then it should be. The absence of any floating-point exception mechanism incurs such a performance hit in some fairly common situations.

As an example, let’s take an implementation of complex numbers. This is a type for general use, and it has to give accurate results whenever possible. For obvious reasons, we also want the core operations to be as fast as possible, which means we want them to be inlined wherever we can. Most operations are fairly straightforward, but division isn’t. Let’s start with the ‘naïve’ implementation:

struct Complex
{
  private double re, im;
  public static Complex operator /(Complex z1, Complex z2)
  {
    double d = z2.re*z2.re + z2.im*z2.im;
    double resultRe = z1.re * z2.re + z1.im * z2.im;
    double resultIm = z1.im * z2.re - z1.re * z2.im;
    return new Complex(resultRe / d, resultIm / d);
  }
}

If any of the values d, resultRe, and resultIm underflow, the result will lose accuracy, because subnormal numbers by definition don’t have the full 52 bit precision. The CLR also offers no indication that underflow has occurred. This can be fixed, mostly, by modifying the above to:

public static Complex operator /(Complex z1, Complex z2)
{
  if (Math.Abs(z2.re) > Math.Abs(z2.im))
  {
    double t = z2.im / z2.re;
    double d = z2.re + t * z2.im;
    double resultRe = (z1.re + t * z1.im);
    double resultIm = (z1.im - t * z1.re);
    return new Complex(resultRe / d, resultIm / d);
  }
  else
  {
    double t = z2.re / z2.im;
    double d = t * z2.re + z2.im;
    double resultRe = (t * z1.re + z1.im);
    double resultIm = (t * z1.im - z1.re);
    return new Complex(resultRe / d, resultIm / d);
  }
}

This will give accurate results in a larger domain, but is slower because of the extra division. Worse still, some operations that one would expect to give exact results now aren’t exact. For example, if z1 = 27-21i and z2 = 9-7i, the exact result is 3, but the round-off in the division by 9 destroys the exact result.
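A quick way to see this effect with plain doubles (a standalone sketch, not the Complex type above):

// z1 = 27 - 21i, z2 = 9 - 7i; the exact quotient is 3 + 0i.
double z1re = 27, z1im = -21, z2re = 9, z2im = -7;

// Naive formula: exact here, since no intermediate value needs rounding.
double d1 = z2re * z2re + z2im * z2im;
Console.WriteLine(((z1re * z2re + z1im * z2im) / d1).ToString("R"));  // real part
Console.WriteLine(((z1im * z2re - z1re * z2im) / d1).ToString("R"));  // imaginary part

// Scaled formula: t = -7/9 is not exactly representable, so round-off
// can creep into the last bit of the result.
double t = z2im / z2re;
double d2 = z2re + t * z2im;
Console.WriteLine(((z1re + t * z1im) / d2).ToString("R"));            // real part
Console.WriteLine(((z1im - t * z1re) / d2).ToString("R"));            // imaginary part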

IEEE-754 exceptions would come to the rescue here – if they were available. Exceptions (a term with a specific meaning in the IEEE-754 standard – not to be confused with CLR exceptions) raise a flag in the FPU’s status register, and can also be trapped by the operating system. We don’t need a trap here. We can do what we need to do with a flag. The code would look something like:

public static Complex operator /(Complex z1, Complex z2)
{
  FloatingPoint.ClearExceptionFlag(
    FloatingPointExceptionFlag.Underflow);

  double d = z2.re*z2.re + z2.im*z2.im;
  double resultRe = z1.re * z2.re + z1.im * z2.im;
  double resultIm = z1.im * z2.re - z1.re * z2.im;
  if (FloatingPoint.IsExceptionFlagRaised(
    FloatingPointExceptionFlag.Underflow))
  {
    // Code for the special cases.
  }
  return new Complex(resultRe / d, resultIm / d);
}

Note that the CLR strategy to “continue with default values” won’t work here, because complete underflow is defaulted to 0, which can not be distinguished from the common case when the result is exactly zero (and therefore no special action is required). The only way to do it right would be a whole series of ugly comparisons, which would make the code slower and harder to read/maintain. Even if a language supported a mechanism to check for underflow (by inserting comparisons to a suitable small value before and after storing the value), this would bloat the IL, introduce unnecessary round-off (by forcing a conversion from extended to double on each operation), and slow things down unnecessarily.

This type of scenario occurs many times in numerical calculations. You perform a calculation the quick and dirty way, and if it turns out you made a mess, you try again but you’re more cautious. The complex number example is the most significant one I have come across while developing our numerical libraries.

Nearly all hardware that the CLR (or its clones) runs on supports this floating-point exception mechanism. I found it somewhat surprising that a ‘standard’ virtual execution environment would not adopt another well-established standard (IEEE-754) for a specific subset of its functionality.

For more on the math jargon used in this post, see my article Floating-Point in .NET Part 1: Concepts and Formats.

Bill Gates on Technical Computing

At the recent supercomputing conference in Seattle, Bill Gates gave a keynote address (transcript) where he announced that Microsoft had re-discovered its interest in scientific computing. (Microsoft’s second product was a Fortran compiler for CP/M, released in August 1977.) The motive is obviously commercial, but the statement is still significant. Microsoft has treated technical computing as a second class citizen for too long.

A couple of points I found of particular interest:

  • The primary interest appears to be in smoothing out the process of scientific discovery, with a lot of emphasis on data integration (using XML, of course), transparent distributed computing, and enhanced collaboration.
  • There is no mention at all of the nuts and bolts programming of numerical applications. He briefly mentions ‘visual programming,’ but that’s about it.

Improving workflow is one of Microsoft’s strengths. Scientific computing, on the other hand, is not, which probably explains Gates’ silence on the subject. Many people applauded DEC’s acquisition of Microsoft Fortran; that product line is now being phased out in favour of Intel Visual Fortran.

The .NET platform doesn’t exactly make it easy to write good numerical software. Even aside from the lack of support for rounding modes and standard floating-point exception mechanisms, the standard .NET languages (C# and Visual Basic) are not technical-computing friendly. There is no doubt in my mind that the .NET platform delivers a major improvement in developer productivity for general line-of-business applications.

Unfortunately, where technical computing is concerned, the productivity promise remains largely unfulfilled. It takes more than a keynote speech to win over the hearts of a community that has been ignored for so long. “Developers, developers, developers!”, Bill. Make writing numerical software for .NET a joy for developers.

What we need is a .NET language specifically targeted at technical computing. Will F# be it? Don Syme is obviously doing a lot of great work there. But functional programming for everyone?