My .NET Framework Wish List

This is now the fifth year I’ve been writing numerical software for the .NET platform. Over these years, I’ve discovered quite a few, let’s call them ‘unfortunate’, design decisions that make writing solid and fast numerical code on .NET more difficult than it needs to be.

What I’d like to do in the coming weeks is list some of the improvements that would make life easier for people in our specialized field of technical computing. The items mentioned in this post aren’t new: I’ve written about them before. But it’s nice to have them all collected in one spot.

Fix the “Inlining Problem”

First on the list: the “inlining” problem. Method calls with parameters that are value types do not get inlined by the JIT. This is very unfortunate, as it eliminates most of the benefit of defining specialized value types. For example: it’s easy enough to define a complex number structure with overloaded operators and enough bells and whistles to make you deaf. Unfortunately, none of those operator calls are inlined. You end up with code that is an order of magnitude slower than it needs to be.
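
To make the cost concrete, here is a minimal sketch (not our actual library code, and the Sum helper is made up for illustration) of such a value type. Every ‘+’ below compiles to an operator call that, as things stand, the JIT will not inline:

struct Complex
{
  public double Re, Im;

  public Complex(double re, double im) { Re = re; Im = im; }

  // Compiles to a call to op_Addition. Because the parameters are value
  // types, the JIT does not inline it, even though the body is trivial.
  public static Complex operator +(Complex a, Complex b)
  {
    return new Complex(a.Re + b.Re, a.Im + b.Im);
  }
}

// Every iteration pays the full call overhead for the '+' operator.
static Complex Sum(Complex[] values)
{
  Complex total = new Complex(0.0, 0.0);
  for (int i = 0; i < values.Length; i++)
    total = total + values[i];
  return total;
}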

Even though it has been the top performance issue for several years, there is no indication yet that it will be fixed any time soon. You can add your vote to the already sizeable number on Microsoft’s product feedback site.

Support the IEEE-754 Standard

Over 20 years ago, the IEEE published a standard for floating-point arithmetic that has since been adopted by all major CPU manufacturers. So platform independence can’t be an issue. Why, then, does the .NET Framework have only the most minimal support for the standard? The fact that people took the time to come up with a solid standard, and that hardware vendors have supported it so widely, should be an indication that it is useful and would greatly benefit an important segment of the developer community.

I’ve written about the benefits of floating-point exceptions before, and I’ve discussed my proposal for a FloatingPointContext class. I’ve added a suggestion to this effect in LadyBug. Please go over there and vote for this proposal.

Allow Overloading of Compound Assignment Operators

This is another topic I’ve written about before. In a nutshell: C# and VB.NET don’t support custom overloaded assignment operators at all. C++/CLI supports them, and purposely violates the CLI spec in the process – which is a good thing! One point I would like to add: performance isn’t the only reason. Sometimes there is a semantic difference. Take a look at this code:

RowVector pivotRow = matrix.GetRow(pivot);
for (int row = pivot + 1; row < rowCount; row++)
{
   // factor is the multiplier that eliminates the pivot column entry in this row.
   RowVector currentRow = matrix.GetRow(row);
   currentRow -= factor * pivotRow;
}

which could be part of the code for computing the LU decomposition of a matrix. The GetRow method returns a row of the matrix without making a copy of the data. The code inside the loop subtracts a multiple of the pivot row from the current row. With the current semantics, where x -= y is equivalent to x = x - y, this code does not perform as expected: the subtraction creates a brand new RowVector and assigns it to the local variable currentRow, so the row stored in the matrix is never updated.
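
To make that concrete, here is a rough sketch built around a hypothetical, much-simplified RowVector (the SubtractInPlace method is made up for illustration). The static operator is what currentRow -= factor * pivotRow actually invokes today; the in-place method expresses what we really mean:

// Hypothetical, simplified RowVector: a view over one row of a matrix,
// sharing the matrix's storage rather than copying it.
class RowVector
{
  private double[] storage;
  private int offset, length;

  public RowVector(double[] storage, int offset, int length)
  {
    this.storage = storage;
    this.offset = offset;
    this.length = length;
  }

  // Current CLI semantics: 'a -= b' is rewritten as 'a = a - b', so this
  // static operator runs, allocates a brand new vector, and the local
  // variable is re-pointed at it. The row inside the matrix never changes.
  public static RowVector operator -(RowVector a, RowVector b)
  {
    double[] result = new double[a.length];
    for (int i = 0; i < a.length; i++)
      result[i] = a.storage[a.offset + i] - b.storage[b.offset + i];
    return new RowVector(result, 0, a.length);
  }

  // What an instance compound assignment operator would let us express
  // directly: subtract in place, so the matrix itself is updated.
  public void SubtractInPlace(RowVector other)
  {
    for (int i = 0; i < length; i++)
      storage[offset + i] -= other.storage[other.offset + i];
  }
}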

What I would like to see is the CLI spec changed to match what C++/CLI does. Compound assignment operators should be instance members.

 

Still to come: a proposal for some (relatively) minor modifications to .NET generics to efficiently implement generic arithmetic, better support for arrays, and more.

Dynamic times two with the Dynamic Language Runtime

Microsoft today announced their latest addition to the .NET family: the Dynamic Language Runtime (DLR). As Jim Hugunin points out, it is based on the IronPython 1.0 codebase, but has been generalized so it can support other dynamic languages, including Visual Basic.

Now, the word ‘dynamic’ here is often misunderstood. Technically, the word refers to the type system. The .NET CLR is statically typed: every variable and expression has a well-defined type at compile time, and all method calls and property references are resolved at compile time. Virtual methods are somewhat of an in-between case, because which code is called depends on the runtime type, a type that may not even have existed when the original code was compiled. But still, every type that overrides a method must inherit from a specific parent class.

In dynamic languages, the type of variables, their methods and properties may be defined at runtime. You can create new types and add properties and methods to existing types. When a method is called in a dynamic language, the runtime looks at the object, looks for a method that matches, and calls it. If there is no matching method, a run-time error is raised.
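
As a rough analogy in C# (using plain reflection, which is not how the DLR dispatches calls internally, and with a made-up helper name), resolving a call at run time amounts to something like this:

// Requires: using System; using System.Reflection;
static void CallToUpperDynamically(object target)
{
  // Look for a parameterless method named "ToUpper" on the object's runtime type.
  MethodInfo method = target.GetType().GetMethod("ToUpper", Type.EmptyTypes);
  if (method != null)
    Console.WriteLine(method.Invoke(target, null));   // "hello" -> "HELLO"
  else
    throw new MissingMethodException("No matching method found at run time.");
}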

Writing code in dynamic languages can be very quick, because there is rarely a need to specify type information. It’s also very common to use dynamic languages interactively. You can execute IronPython scripts, but there’s also a Console that hosts interactive IronPython sessions.

And this is where it gets confusing. Because leaving out type information and interactive environments come naturally to dynamic languages, these features are often thought of as properties of dynamic languages. They are not.

Ever heard of F#? It is a statically typed, compiled language created by Don Syme and others at Microsoft Research. It can be used to build libraries and end-user applications, much like C# and VB. But it also has an interactive console and eliminates the need for most type specifications through a smart use of type inference.

F# is not a dynamic language in the technical sense: it is statically typed. But because it has an interactive console and you rarely have to specify types, it is a dynamic language in the eyes of a lot of people. In fact, at the Lang.NET symposium hosted by Microsoft last August, people were asked what their favorite dynamic language is. Many answered with F#. And these were programming language designers and compiler writers!

Anyway, the point I wanted to make with this post is that the new Dynamic Language Runtime has great support both for languages that are technically dynamic (dynamically typed) and for the features that are merely perceived as dynamic, like interactive environments. I hope the distinction between these two aspects will be clarified in the future.

Latest Supercomputer Top 500

Last week, the latest edition of the list of the 500 fastest supercomputers was released. Two recent developments make this list interesting.

  1. The arrival of multicore processors. Even though their presence is still modest on the current list, expect their share to rise. Intel is targeting 32 cores on a chip by 2010.
  2. Microsoft made its entry on the scene with Windows Compute Cluster Server 2003, an edition of Windows Server 2003 tweaked for high-performance computing. The first (and so far the only) entry on the Top500 list is at the National Center for Supercomputing Applications at the University of Illinois. It will be interesting to see how this number grows in the coming years. At the very least, it will give some indication of the headway Microsoft is making in the HPC market.

Some trivia:

The #1 spot is still held by IBM’s BlueGene/L supercomputer at the Lawrence Livermore National Laboratory. At over 280 TFlops, this monster is faster than an IBM PC with an 8087 co-processor by a factor of roughly one billion. That’s right: it’s as fast as 1,000,000,000 original IBM PCs!

The first Top500 list was published in June 1993. It’s interesting to note that one dual processor machine based on Intel’s latest dual-core processors would, at 34.9GFlops, take the #2 spot on that original list. Today’s average desktop would make it into the top 100…

Visual Studio as an interactive technical computing environment

A year ago, we were experimenting with an extensive sample that would illustrate the linear algebra capabilities of our math library. Unfortunately, M#, the .NET Matrix Workbench, didn’t get very far. We’re technical guys. Building user interfaces just isn’t our cup of tea, and the IDE we built wasn’t stable enough to make it into an end-user product.

At the time, I realized that much of the functionality needed for this kind of interactive computing environment was already present in Visual Studio .NET. For example, we already have an excellent code editor, a project workspace, and tool windows that display variables and their values. Moreover, in the Visual Studio SDK, we have a framework for extending that environment with visualizers for specific types of variables, intellisense, custom project items, and so on.

Plus, you have a great library (the .NET Base Class Libraries) that you can use to do just about anything you’d like.

In short, Visual Studio is the ideal starting point to build a great technical computing IDE.

A couple of recent news items bring this vision closer to reality. Aaron Marten reports that the February 2006 CTP of the Visual Studio 2005 SDK now contains a tool window that hosts an IronPython console. And just a few days ago, Don Syme gave us a taste of what is to come in the next release of F#. The screen shot is the kind you would expect from Matlab. (I guess I was right when I wrote that Don gets what scientists and engineers need.)

Now, all we need is a Matlab-like language for the .NET platform…

Yahoo Finance miscalculates monthly average daily volume

Am I missing something here?

While testing some time series functionality in the new version of our statistics library, we came across a rather curious discrepancy. We used the historical quotes available from Yahoo Finance as a reference. As it turns out, the comparison with our data suggests that Yahoo miscalculates some summary statistics.

The error occurs on the Historical Prices page when using a monthly timeframe. Take the monthly data for 2005 for Microsoft’s stock (symbol MSFT). This shows an average daily volume for January of 79,642,818 shares. According to the help document, this is “the average daily volume for all trading days in the reported month.”

When we look at the daily prices for January 2005, we find 20 trading days. When we add up all the daily volumes, we find 1,521,414,280 shares changed hands that month. That should give an average daily volume of 76,070,714 shares, more than 3 million shares less than Yahoo’s figure. Why the difference?

A brief investigation showed that the difference can be explained by the fact that Yahoo includes the volume on the last trading day of the month twice. If we add the volume for Jan. 31st (the last trading day) to the total once more, we get 1,592,856,376. Dividing by the number of trading days (20) gives 79,642,818.8.

When we look at other months, we find the same pattern: Yahoo consistently overstates the average daily volume for the month by a few percentage points. Each time, this difference can be explained by the double inclusion of the volume of the last trading day in the total volume for the month.

Here’s a random sample: Research in Motion for June 2000. Yahoo gives an average daily volume of 4,262,160 shares. Our calculation shows an average of 3,870,800 shares, corresponding to a total volume of 77,416,000 for the month. Yahoo’s total corresponds to 85,243,200 shares. The difference of 7,827,200 shares is exactly the volume for June 30th.
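
For the record, here is a small sketch of the two calculations side by side (hypothetical helper methods, not Yahoo’s code). With the January 2005 daily volumes for MSFT, the first gives 76,070,714 and the second reproduces Yahoo’s 79,642,818.8:

// Correct figure: total volume for the month divided by the number of trading days.
static double AverageDailyVolume(long[] dailyVolumes)
{
  long total = 0;
  foreach (long volume in dailyVolumes)
    total += volume;
  return (double)total / dailyVolumes.Length;
}

// Reproduces Yahoo's figure: the last trading day is counted twice in the total.
static double YahooStyleAverageDailyVolume(long[] dailyVolumes)
{
  long total = 0;
  foreach (long volume in dailyVolumes)
    total += volume;
  total += dailyVolumes[dailyVolumes.Length - 1];
  return (double)total / dailyVolumes.Length;
}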

The weekly average daily volume appears to be correct.

I find it hard to believe that a service that is as widely used as Yahoo would show such an error. So, my question to the experts in technical analysis out there is: What am I missing???

Overload resolution in C#

Programming language design is a very tough discipline. You can be sure that every feature will be used and just as often abused in unexpected ways. Mistakes are close to impossible to fix.

C# is a great language. It’s my language of choice for most applications. It derives a lot of its quality from the cautious nature of its lead architect, Anders Hejlsberg, who tries to guide programmers into writing good code. That’s why unsafe code is called unsafe. Not because it is a security risk, but because it is really easy to get yourself into a buggy mess with it. It is why methods and properties have to be explicitly declared virtual, because people tend to override methods when it is not appropriate.

In some cases, however, this protecting of developers against themselves can go too far. Overload resolution is one example I came across recently.

Consider a Curve class that represents a mathematical curve. It has a ValueAt method that returns the value of the curve at a specific x-value, passed in as a Double argument. This is a virtual method, of course, and specific types of curves, like Polynomial or CubicSpline, provide their own implementation.

Now, for polynomials in particular, it is sometimes desirable to get the value of the polynomial for a complex argument. So we define an overload that takes a DoubleComplex argument and also returns a DoubleComplex.

So far so good.

But now we want to add a conversion from Double to DoubleComplex. This is a widening conversion: every real number is also a complex number. So it is appropriate to make the conversion implicit. We can then write things like:

DoubleComplex a = 5.0;

instead of

DoubleComplex a = new DoubleComplex(5.0);

Unfortunately, this change breaks existing code. The following snippet will no longer compile:

Polynomial p = new Polynomial(1.0, 2.0, 3.0);
double y = p.ValueAt(1.0);

On the second line, we get an error message: “Cannot implicitly convert type DoubleComplex to Double.” Why? Because of the way C# resolves method overloads.

Specifically, C# considers methods declared in the type itself before anything declared in a base type, and methods marked override don’t count as declarations at all. Section 7.3 of the C# spec (“Member lookup”) states:

First, the set of all accessible (Section 3.5) members named N declared in T and the base types (Section 7.3.1) of T is constructed. Declarations that include an override modifier are excluded from the set. If no members named N exist and are accessible, then the lookup produces no match, and the following steps are not evaluated.

In this case, because there is an implicit conversion from Double to DoubleComplex, the ValueAt(DoubleComplex) overload is applicable. Even though there is an overload whose parameters match exactly, it isn’t even considered here, because it is an override.
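
Putting the pieces together, a minimal repro looks something like this (with a stripped-down DoubleComplex standing in for the real type):

struct DoubleComplex
{
  public double Re, Im;

  public DoubleComplex(double re, double im) { Re = re; Im = im; }

  // The widening conversion that makes ValueAt(DoubleComplex) applicable
  // to a plain double argument.
  public static implicit operator DoubleComplex(double x)
  {
    return new DoubleComplex(x, 0.0);
  }
}

class Curve
{
  public virtual double ValueAt(double x) { return 0.0; }
}

class Polynomial : Curve
{
  // Excluded from member lookup because it is an override.
  public override double ValueAt(double x) { return 2.0 * x; }

  // Found first on Polynomial, and applicable via the implicit conversion,
  // so the base class overload is never considered.
  public DoubleComplex ValueAt(DoubleComplex z) { return z; }
}

class Client
{
  static void Main()
  {
    Polynomial p = new Polynomial();

    // The next line does not compile:
    // "Cannot implicitly convert type 'DoubleComplex' to 'double'"
    // double y = p.ValueAt(1.0);

    // The call binds to ValueAt(DoubleComplex) instead:
    DoubleComplex z = p.ValueAt(1.0);
  }
}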

This highly unintuitive behavior is justified by the following two rules:

  1. Whether or not a method is overridden is an implementation detail that should be allowed to change without breaking client code.
  2. Changes to a base class that don’t break an inherited class should not break clients of the inherited class.

Even though neither of these actually applies to our example, I can understand that these rules are useful in many situations. In this case, however, it essentially hides the ValueAt(Double) overload I defined, unless I use an ugly upcast construct like

double y = ((Curve)p).ValueAt(1.0);

My problem is this: If I define an overload that takes a specific argument type, clearly my intent is for that overload to be called whenever the argument is of that exact type. This rule violates that intent.

In my view, it was a mistake to violate developer intent in this way and not give an exact signature match precedence over every other overload candidate. Unfortunately, this is one of those choices that in all likelihood is next to impossible to fix.

Visual Basic has a different set of overload resolution rules. It looks for the overload with the least widening conversion, and so would pick the correct overload in this case.

Thanks to Neal Horowitz for helping me clear up this issue.

Accessing floating-point context from .NET applications

A few days ago, I wrote about the need for the .NET framework to support floating-point exceptions. Although the solution I proposed there goes some way towards fixing the problem, it would create a few of its own as well.

The main issue is that the processor’s FPU (floating-point unit) is a global resource. Therefore it should be handled with extreme care.

For an example of what can go wrong if you don’t, we can look at the story of Managed DirectX. DirectX is the high-performance multimedia/graphics/gaming API on the Windows platform. Because performance is of the essence, the DirectX team decided to ask the processor to do its calculations in single precision by default. This is somewhat faster than the default, and it is all that is generally needed for the kinds of applications they had in mind.

When people started using Managed DirectX in their .NET applications, some found that it broke the .NET math functions. The reason: the precision setting of the FPU is global. It therefore affected all floating-point code in the application, including calls into the .NET Base Class Libraries.

The problem with our initial solution is that it doesn’t behave nicely. It doesn’t isolate other code from changes to the floating-point state. This can affect code in unexpected ways.

So how do we fix this?

One option is to flag code that uses special floating-point settings with an attribute like “FloatingPointContextAware.” Any code that has this attribute can access the floating-point state. Whenever control passes to code that does not have this attribute, however, the CLR would have to explicitly save and restore the floating-point state.

Unfortunately, this fix suffers from a number of drawbacks:

  1. In some applications, like error analysis, you want to set global state. For example, one way to check whether an algorithm is stable is to run it twice using different rounding modes. If the two results are significantly different, you have a problem.
  2. It is overkill in most situations. If you’re going to use special floating-point features, you should be able to isolate yourself properly from any changes in floating-point state. If you have no knowledge such features exist, chances are your code won’t break if, say, the rounding mode is changed.
  3. It is a performance bottleneck. You can’t require all your client code to also have this attribute. Remember the goal was to allow ‘normal’ calculations to be done as fast as possible while making the edge cases reasonably fast as well. It is counter-productive to impose a performance penalty on every call.

The best solution I can think of is to actually treat the floating-point unit for what it is: a resource. If you want to use its special features, you should inform the system when you plan to use them, and also when you’re done using them.

We make floating-point state accessible only through instances of a FloatingPointContext class, which implements IDisposable and does the necessary bookkeeping. It saves enough of the floating-point state when it is constructed, and restores it when it is disposed. You don’t always have to save and restore the full floating-point state, including values on the stack, etc. In most cases, you only need to save the control word, which is relatively cheap.
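
Here is a rough skeleton of the bookkeeping part of such a class. The SaveControlWord and RestoreControlWord calls are placeholders for whatever mechanism the CLR would use internally, and the rounding-mode property and exception-flag methods from the example below would sit on top of it:

// Sketch only: SaveControlWord and RestoreControlWord stand in for the
// internal mechanism that reads and writes the FPU control word.
public sealed class FloatingPointContext : IDisposable
{
  private readonly ushort savedControlWord;
  private bool disposed;

  public FloatingPointContext()
  {
    // Capture only the control word: cheap, and sufficient in most cases.
    savedControlWord = SaveControlWord();
  }

  public void Dispose()
  {
    if (!disposed)
    {
      // Put the state back the way we found it, so code outside the
      // 'using' block never sees our rounding mode or exception masks.
      RestoreControlWord(savedControlWord);
      disposed = true;
    }
  }

  // Placeholders for the real implementation.
  private static ushort SaveControlWord() { return 0; }
  private static void RestoreControlWord(ushort controlWord) { }
}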

A typical use for this class would be as follows:

using (FloatingPointContext context = new FloatingPointContext())
{
    context.RoundingMode = RoundingMode.TowardsZero;
    context.ClearExceptionFlags();
    // Do some calculations
    if (context.IsExceptionFlagRaised(FloatingPointExceptionFlag.Underflow))
    {
        // Do some more stuff
    }
}

I have submitted a proposal for a FloatingPointContext class to the CLR team. So far they have chosen to keep it under consideration for the next version. Let’s hope they’ll choose to implement it.

Floating-point exceptions and the CLR

I’ve decided to share some of our experiences during the development of our math and statistics libraries in the hope that they may contribute to improvements in the .NET platform as the next version is being designed.

The CLR is a general-purpose runtime environment, and cannot be expected to support every application at the fastest possible speed. However, I do expect it to perform reasonably well, and if a performance hit can be avoided, then it should be. The absence of any floating-point exception mechanism incurs such a performance hit in some fairly common situations.

As an example, let’s take an implementation of complex numbers. This is a type for general use, and it has to give accurate results whenever possible. For obvious reasons, we also want the core operations to be as fast as possible: we want them to be inlined where they can be, and the code itself to be as fast as we can make it. Most operations are fairly straightforward, but division isn’t. Let’s start with the ‘naïve’ implementation:

struct Complex
{
  private double re, im;

  public Complex(double re, double im) { this.re = re; this.im = im; }

  public static Complex operator /(Complex z1, Complex z2)
  {
    double d = z2.re * z2.re + z2.im * z2.im;
    double resultRe = z1.re * z2.re + z1.im * z2.im;
    double resultIm = z1.im * z2.re - z1.re * z2.im;
    return new Complex(resultRe / d, resultIm / d);
  }
}

If any of the values d, resultRe, and resultIm underflows, the result will lose accuracy, because subnormal numbers by definition don’t have the full 52-bit precision. The CLR also offers no indication that underflow has occurred. This can be fixed, mostly, by modifying the above to:

public static Complex operator /(Complex z1, Complex z2)
{
  if (Math.Abs(z2.re) > Math.Abs(z2.im))
  {
    double t = z2.im / z2.re;
    double d = z2.re + t * z2.im;
    double resultRe = (z1.re + t * z1.im);
    double resultIm = (z1.im - t * z1.re);
    return new Complex(resultRe / d, resultIm / d);
  }
  else
  {
    double t = z2.re / z2.im;
    double d = t * z2.re + z2.im;
    double resultRe = (t * z1.re + z1.im);
    double resultIm = (t * z1.im - z1.re);
    return new Complex(resultRe / d, resultIm / d);
  }
}

This will give accurate results in a larger domain, but is slower because of the extra division. Worse still, some operations that one would expect to give exact results now aren’t exact. For example, if z1 = 27-21i and z2 = 9-7i, the exact result is 3, but the round-off in the division by 9 destroys the exact result.

IEEE-754 exceptions would come to the rescue here – if they were available. Exceptions (a term with a specific meaning in the IEEE-754 standard – not to be confused with CLR exceptions) raise a flag in the FPU’s status register, and can also be trapped by the operating system. We don’t need a trap here. We can do what we need to do with a flag. The code would look something like:

public static Complex operator /(Complex z1, Complex z2)
{
  FloatingPoint.ClearExceptionFlag(FloatingPointExceptionFlag.Underflow);

  double d = z2.re * z2.re + z2.im * z2.im;
  double resultRe = z1.re * z2.re + z1.im * z2.im;
  double resultIm = z1.im * z2.re - z1.re * z2.im;
  if (FloatingPoint.IsExceptionFlagRaised(FloatingPointExceptionFlag.Underflow))
  {
    // Code for the special cases.
  }
  return new Complex(resultRe / d, resultIm / d);
}

Note that the CLR strategy to “continue with default values” won’t work here, because complete underflow is defaulted to 0, which cannot be distinguished from the common case where the result is exactly zero (and therefore no special action is required). The only way to do it right would be a whole series of ugly comparisons, which would make the code slower and harder to read and maintain. Even if a language supported a mechanism to check for underflow (by inserting comparisons to a suitably small value before and after storing the value), this would bloat the IL, introduce unnecessary round-off (by forcing a conversion from extended to double precision on each operation), and slow things down unnecessarily.

This type of scenario occurs many times in numerical calculations. You perform a calculation the quick and dirty way, and if it turns out you made a mess, you try again but you’re more cautious. The complex number example is the most significant one I have come across while developing our numerical libraries.

Nearly all hardware that the CLR (or its clones) runs on supports this floating-point exception mechanism. I found it somewhat surprising that a ‘standard’ virtual execution environment would not adopt another well-established standard (IEEE-754) for a specific subset of its functionality.

For more on the math jargon used in this post, see my article Floating-Point in .NET Part 1: Concepts and Formats.

Bill Gates on Technical Computing

At the recent supercomputing conference in Seattle, Bill Gates gave a keynote address (transcript) where he announced that Microsoft had re-discovered its interest in scientific computing. (Microsoft’s second product was a Fortran compiler for CP/M, released in August 1977.) The motive is obviously commercial, but the statement is still significant. Microsoft has treated technical computing as a second class citizen for too long.

A couple of points I found of particular interest:

  • The primary interest appears to be in smoothing out the process of scientific discovery, with a lot of emphasis on data integration (using XML, of course), transparent distributed computing, and enhanced collaboration.
  • There is no mention at all of the nuts-and-bolts programming of numerical applications. He briefly mentions ‘visual programming,’ but that’s about it.

Improving workflow is one of Microsoft’s strengths. Scientific computing, on the other hand, is not, which probably explains Gates’ silence on the subject. Many people applauded DEC’s acquisition of Microsoft Fortran, which is now being phased out in favour of Intel Visual Fortran.

The .NET platform doesn’t exactly make it easy to write good numerical software. Even aside from the lack of support for rounding modes and standard floating-point exception mechanisms, the standard .NET languages (C# and Visual Basic) are not technical-computing friendly. There is no doubt in my mind that the .NET platform delivers a major improvement in developer productivity for general line-of-business applications.

Unfortunately, where technical computing is concerned, the productivity promise remains largely unfulfilled. It takes more than a keynote speech to win over the hearts of a community that has been ignored for so long. “Developers, developers, developers!”, Bill. Make writing numerical software for .NET a joy for developers.

What we need is a .NET language specifically targeted at technical computing. Will F# be it? Don Syme is obviously doing a lot of great work there. But functional programming for everyone?