Dynamic times two with the Dynamic Language Runtime

Microsoft today announced their latest addition to the .NET family: the Dynamic Language Runtime (DLR). As Jim Hugunin points out, it is based on the IronPython 1.0 codebase, but has been generalized so it can support other dynamic languages, including Visual Basic.

Now, the word ‘dynamic’ here is often misunderstood. Technically, the word dynamic refers to the type system. The .NET CLR is statically typed: every object has a well-defined type at compile time, and all method calls and property references are resolved at compile time. Virtual methods are somewhat of an in-between case, because which code is called depends on the runtime type, a type that may not even have existed when the original code was compiled. But still, every type that overrides a method must inherit from a specific parent class.

In dynamic languages, the type of variables, their methods and properties may be defined at runtime. You can create new types and add properties and methods to existing types. When a method is called in a dynamic language, the runtime looks at the object, looks for a method that matches, and calls it. If there is no matching method, a run-time error is raised.
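As a rough analogy in C# terms, a dynamic method call behaves a little like a lookup through reflection. This is only an illustration, not how the DLR actually dispatches calls, and GetSomeObject and Frobnicate are hypothetical names:

// MethodInfo lives in System.Reflection.
object obj = GetSomeObject();   // an object whose type we know nothing about
MethodInfo method = obj.GetType().GetMethod("Frobnicate");
if (method != null)
    method.Invoke(obj, null);            // a match was found at run time: call it
else
    throw new MissingMethodException();  // no match: fail at run time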

Writing code in dynamic languages can be very quick, because there is rarely a need to specify type information. It’s also very common to use dynamic languages interactively. You can execute IronPython scripts, but there’s also a Console that hosts interactive IronPython sessions.

And this is where it gets confusing. Because leaving out type information and interactive environments come naturally to dynamic languages, these features are often thought of as properties of dynamic languages. They are not.

Ever heard of F#? It is a statically typed, compiled language created by Don Syme and others at Microsoft Research. It can be used to build libraries and end-user applications, much like C# and VB. But it also has an interactive console and eliminates the need for most type specifications through a smart use of type inference.

F# is not a dynamic language in the technical sense: it is statically typed. But because it has an interactive console and you rarely have to specify types, it is a dynamic language in the eyes of a lot of people. In fact, at the Lang.NET symposium hosted by Microsoft last August, people were asked what their favorite dynamic language is. Many answered with F#. And these were programming language designers and compiler writers!

Anyway, the point I wanted to make with this post is that the new Dynamic Language Runtime has great support both for the technically dynamic languages (dynamic types) and for the features that are merely perceived as dynamic, like interactive environments. I hope the distinction between these two aspects will be clarified in the future.

Latest Supercomputer Top 500

Last week, the latest edition of the list of the 500 fastest supercomputers was released. Two recent developments make this list interesting.

  1. The arrival of multicore processors. Even though their presence is still modest on the current list, expect their share to rise. Intel is targeting 32 cores on a chip by 2010.
  2. Microsoft made its entry on the scene with Windows Compute Cluster Server 2003, an enhanced version of Windows Server 2003 Enterprise Edition tweaked for High Performance Computing. The first (and so far the only) entry on the Top500 list is at the National Center for Supercomputing Applications at the University of Illinois. It will be interesting to see how this number grows in the coming years. At the very least, it will give some indication of the headway Microsoft is making in the HPC market.

Some trivia:

The #1 spot is still held by IBM’s BlueGene/L supercomputer at the Lawrence Livermore National Laboratory. At over 280 TFlops, this monster is faster than an IBM PC with an 8087 co-processor by a factor of roughly one billion. That’s right: it’s as fast as 1,000,000,000 original IBM PCs!

The first Top500 list was published in June 1993. It’s interesting to note that one dual processor machine based on Intel’s latest dual-core processors would, at 34.9GFlops, take the #2 spot on that original list. Today’s average desktop would make it into the top 100…

Double trouble: an interesting puzzle

I’ve written before about some of the strange looking behavior you can find when working with numbers on a computer. As this article explains at length, this behavior is a completely logical consequence of the need to use a finite set of values to represent potentially infinitely many numbers.

Kathy Kam of the Microsoft CLR team recently posted another interesting piece of floating-point weirdness. The question boils down to this: when you run this code,

Console.WriteLine(4.170404 == Convert.ToDouble("4.170404"));

why does it print false instead of the expected true?
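One way to convince yourself that the two values really are different is to compare their raw bit patterns (a quick check of my own; on runtimes that show the behavior discussed below, the output differs in the last bit):

long literalBits = BitConverter.DoubleToInt64Bits(4.170404);
long parsedBits = BitConverter.DoubleToInt64Bits(Convert.ToDouble("4.170404"));
Console.WriteLine("{0:x16}", literalBits);
Console.WriteLine("{0:x16}", parsedBits);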

The answer turns out to be nontrivial, and comes down to the different ways in which the .NET Framework and the C# compiler convert text strings to numbers.

The C# compiler uses the Windows API function VarR8FromStr to convert the literal value to a double. The documentation isn’t very specific about its inner workings. The C# spec says in section 9.4.4.3 that doubles are rounded using IEEE “round to nearest” mode.

So what happens here? First, the number is ‘normalized’: it is scaled by a power of two so that it lies between 1 and 2. The scale factor is moved to the exponent. Next, the number is rounded to double precision, which has 52 binary digits after the “binary point.”

Here, 4.170404 is divided by 4, which gives 1.042601. The binary representation of this number is:

1.0000101011100111111001100010110111000110111000101…

The part that interests us starts at digit #52 after the “binary point,” so let’s show everything after, say, the 40th digit:

4        5          6          7
1234567890 1234567890 1234567890
1110001010 1010000000 0000011001

To round to 52 binary digits, we need to round upwards:

4        5
1234567890 12
1110001010 11

This result is used to compose the double precision number.

If we follow the path of the Convert.ToDouble call, we find that it passes its string argument on to Double.Parse. This method prepares an internal structure called a NumberBuffer and eventually calls a function named NumberBufferToDouble internal to the CLR. In the Shared Source CLI implementation (Rotor) code, we find the following comment for the function that does all the hard work:

    The internal integer representation of the float number is
    UINT64 mantisa + INT exponent. The mantisa is kept normalized
    ie with the most significant one being 63-th bit of UINT64.

This is good news – extra precision is used to ensure we get the correct result. Looking further, we find that this function uses a helper function appropriately called Mul64Lossy, which multiplies two 64-bit values. In the comments, we find this:

    // it’s ok to losse some precision here – Mul64 will be called
    // at most twice during the conversion, so the error won’t propagate
    // to any of the 53 significant bits of the result

This is bad news. If you look at the binary representation of 4.170404/4 above, you’ll see that all digits from the 54th up to the 65th are zero. So it is very much possible that there was some loss of precision here, and that it did propagate to the last significant digit of the final result. The assumption made by the developer of this code is mostly right, but sometimes wrong.

But why risk loss of precision when it can be avoided? The (misguided) answer is: speed. The Rotor code contains a function, Mul64Precise, which doesn’t suffer from this loss of precision. However, it does use a few extra instructions to do some more shifting and multiplying. The function is only used in debug mode, to verify that some internal conversion tables are correct.

In the grand scheme of things, the few extra instructions that would be used to get a correct result have only a very small effect on performance. The Convert.ToDouble method that started it all ends up spending most of its time parsing according to the specified locale, checking for currency symbols, thousands separators, etc. Only a tiny fraction of the time is spent in the Mul64 functions.

Let’s estimate how common this error is. For a conversion error to occur, the 12 bits from the 53rd to the 64th must all be zero. That’s about 1 in 4,000. Also, the rounding must be affected, which gives us another factor of 2 or 4. So, as many as 1 conversion out of every 10,000 may suffer from this effect!

The moral of the story: Be very careful with your assumptions about how errors will propagate. Don’t compromise for the sake of a few CPU cycles, unless performance is absolutely critical and is more important than accuracy.

Fast, Faster, Fastest

One of my favourite pastimes is getting the most out of a piece of code. Today, a comment on Rico Mariani’s latest performance challenge gave me an opportunity to play a bit: how to format the time part of a DateTime structure in the form “hh:mm:ss.mmm” in the fastest possible way.

Apparently, Alois Kraus and Greg Young have been at it for a bit. Their solution already gives us more than a five-fold increase in speed compared to the simplest solution using String.Format. But could we do even better?

As it turns out, we could. Here’s the code that we can improve:

long ticks = time.Ticks;
int hour = (int)((ticks / 0x861c46800L)) % 24;   // 0x861c46800 = 36,000,000,000 ticks per hour
int minute = (int)((ticks / 0x23c34600L)) % 60;  // 0x23c34600 = 600,000,000 ticks per minute
int second = (int)(((ticks / 0x989680L)) % 60L); // 0x989680 = 10,000,000 ticks per second
int ms = (int)(((ticks / 0x2710L)) % 0x3e8L);    // 0x2710 = 10,000 ticks per millisecond

The tick count is the number of 100-nanosecond (ns) intervals since the zero time value. For each of the hour, minute, second, and millisecond parts, this code divides the tick count by the number of 100ns intervals in one unit of that part, and then uses a modulo to strip off the completed units of the next larger part. For example, there are 60x60x10,000,000 ticks in an hour, which is 0x861c46800 in hex, and there are 24 hours in a day, so the hour is (ticks / 0x861c46800) % 24.

What makes the above code less than optimal is that it starts from the number of ticks to compute every time part. This is a long (64 bit) value. 64-bit calculations are slower than 32-bit calculations. Moreover, divisions (or modulos) are much more expensive than multiplications.

We can fix both these issues by first finding the total number of milliseconds in the day. That number is always smaller than 100 million, so it fits in an int. We can calculate the number of hours with a simple division. We can “peel off” the hours from the total number of milliseconds in the day to find the total milliseconds remaining in the hour. From this, we can calculate the number of minutes with a simple division, and so on. The improved code looks like this:

long ticks = time.Ticks;
int ms = (int)((ticks / 10000) % 86400000);  // total milliseconds so far today
int hour = ms / 3600000;
ms -= 3600000 * hour;    // milliseconds remaining in the hour
int minute = ms / 60000;
ms -= 60000 * minute;    // milliseconds remaining in the minute
int second = ms / 1000;
ms -= 1000 * second;     // milliseconds remaining in the second

This change decreases the running time by about 28% compared to the fastest previous solution. We can shave off another 4% or so by replacing the modulo calculation with a subtraction in the code that computes the digits.

The question now is: can we do even better, still?

Once again, the answer is yes: by as much as another 25%!

The single most time-consuming calculation is a division. Dividing by large numbers is an order of magnitude slower than multiplying. For smaller numbers, the difference is smaller, but still significant. Since we know in advance which numbers we are dividing by, we can do a little bit-shifting magic and eliminate the divisions altogether.

Let’s take dividing by 10 as an example. The basic idea is to approximate the ratio 1/10 by another rational number with a power of two as the denominator. Instead of dividing, we can then multiply by the numerator, and shift by the exponent in the denominator. Since shifting chops off digits, it effectively rounds down the result of the division, so we always have to find an approximation that is larger than the ratio.

We see, for example, that 13/128 is a good approximation to 1/10. We can rewrite x/10 as (x*13) >> 7 as long as x is not too large. We run into trouble as soon as the error times x is larger than 1. In this case, that happens when x is larger than 13/(13-12.8) = 65. Fortunately, this is larger than the number of hours in a day, or the number of minutes in an hour, so we can use it for most calculations in our code. It doesn’t work for all numbers up to 100, though, so to get the second digit of the milliseconds, we need the next approximation, 205/2048, which is good for values up to about 1,000.

To get the first digit of the milliseconds, we need to divide by 100. We find that 41/4096 works nicely.
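All three approximations are easy to verify by brute force. A quick sanity check (hypothetical test code; each assertion holds over the stated range):

for (int x = 0; x <= 65; x++)
    System.Diagnostics.Debug.Assert((x * 13) >> 7 == x / 10);
for (int x = 0; x <= 999; x++)
    System.Diagnostics.Debug.Assert((x * 205) >> 11 == x / 10);
for (int x = 0; x <= 999; x++)
    System.Diagnostics.Debug.Assert((x * 41) >> 12 == x / 100);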

Implementing this optimization, we go from (for example):

*a = (char)(hour / 10 + '0');
a++;
*a = (char)(hour % 10 + '0');

to:

int temp = (hour * 13) >> 7;
*a = (char)(temp + '0');
a++;
*a = (char)(hour - 10 * temp + '0');

Our running time for 1 million iterations goes down from 0.38s to 0.28s, a savings of almost 18% compared to the original.

The larger divisors give us a bit of a challenge. To get the number of seconds, we divide a number less than 60000 by 1000. Doing this the straightforward way has us multiplying by 536871, which would require a long for the result of the multiplication. We can get around this once we realize that 1000 = 8*125: if we first shift the number of milliseconds right by 3, we only need to divide by 125. As an added benefit, the numbers we’re multiplying are then always less than 7500, so our multiplier can be larger. This gives us the simple expression ((x >> 3) * 67109) >> 23, good for numbers up to almost 4 million.
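Again, a brute-force check (hypothetical test code) confirms the claim. The multiplication is done in a long here to avoid integer overflow for large arguments; in the actual code below, the values stay under 60000, where int arithmetic is safe:

for (int x = 0; x < 3900000; x++)
    System.Diagnostics.Debug.Assert(
        (int)(((long)(x >> 3) * 67109) >> 23) == x / 1000);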

The same trick doesn’t work for getting the minutes and hours, but it does allow us to fit the intermediate result into a long. We can use the Math.BigMul method to perform the calculation efficiently.
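For the record, the constants in the final code below can be derived in the same way. For the hours, ms/3600000 = (ms/128)/28125, and the smallest sufficient multiplier is ceil(2^38/28125) = 9773437, which gives hour = (int)(Math.BigMul(ms >> 7, 9773437) >> 38). For the minutes, ms/60000 = (ms/32)/1875, and ceil(2^32/1875) = 2290650.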

The final code is given below. It is doubtful it can be improved by much. It runs in as little as 0.221s for one million iterations, 2.5 times faster than the previous fastest code and over 25 times faster than the original.

private unsafe static string FormatFast6(DateTime time)
{
    // dateData is assumed to be a char[12] field preinitialized to
    // "00:00:00.000", so only the digits need to be written; the
    // separators are already in place (hence the a += 2 skips).
    fixed (char* p = dateData)
    {
        long ticks = time.Ticks;
        char* a = p;
        // Total milliseconds elapsed today:
        int ms = (int)((ticks / 10000) % 86400000);
        int hour = (int)(Math.BigMul(ms >> 7, 9773437) >> 38);
        ms -= 3600000 * hour;
        int minute = (int)((Math.BigMul(ms >> 5, 2290650)) >> 32);
        ms -= 60000 * minute;
        int second = ((ms >> 3) * 67109) >> 23;
        ms -= 1000 * second;

        int temp = (hour * 13) >> 7;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(hour - 10 * temp + '0');
        a += 2;

        temp = (minute * 13) >> 7;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(minute - 10 * temp + '0');
        a += 2;

        temp = (second * 13) >> 7;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(second - 10 * temp + '0');
        a += 2;

        temp = (ms * 41) >> 12;
        *a = (char)(temp + '0');
        a++;
        ms -= 100 * temp;
        temp = (ms * 205) >> 11;
        *a = (char)(temp + '0');
        a++;
        *a = (char)(ms - 10 * temp + '0');
    }
    return new String(dateData);
}

Visual Studio as an interactive technical computing environment

A year ago, we were experimenting with an extensive sample that would illustrate the linear algebra capabilities of our math library. Unfortunately, M#, the .NET Matrix Workbench, didn’t get very far. We’re technical guys. Building user interfaces just isn’t our cup of tea, and the IDE we built wasn’t stable enough to make it into an end-user product.

At the time, I realized that much of the functionality needed for this kind of interactive computing environment was already present in Visual Studio .NET. For example, we already have an excellent code editor, a project workspace, and tool windows that display variables and their values. Moreover, in the Visual Studio SDK, we have a framework for extending that environment with visualizers for specific types of variables, intellisense, custom project items, and so on.

Plus, you have a great library (the .NET Base Class Libraries) that you can use to do just about anything you’d like.

In short, Visual Studio is the ideal starting point to build a great technical computing IDE.

A couple of recent news items bring this vision closer to reality. Aaron Marten reports that the February 2006 CTP of the Visual Studio 2005 SDK now contains a tool window that hosts an IronPython console. And just a few days ago, Don Syme gave us a taste of what is to come in the next release of F#. The screen shot is the kind you would expect from Matlab. (I guess I was right when I wrote that Don gets what scientists and engineers need.)

Now, all we need is a Matlab-like language for the .NET platform…

Yahoo Finance miscalculates monthly average daily volume

Am I missing something here?

While testing some time series functionality in the new version of our statistics library, we came across a rather curious discrepancy. We used the historical quotes available from Yahoo Finance as a reference resource. As it turns out, comparison with our data appears to show that Yahoo miscalculates some summary statistics.

The error occurs on the Historical Prices page when using a monthly timeframe. Take the monthly data for 2005 for Microsoft’s stock (symbol MSFT). This shows an average daily volume for January of 79,642,818 shares. According to the help document, this is “the average daily volume for all trading days in the reported month.”

When we look at the daily prices for January 2005, we find 20 trading days. When we add up all the daily volumes, we find 1,521,414,280 shares changed hands that month. That should give an average daily volume of 76,070,714 shares, more than 3 million shares less than Yahoo’s figure. Why the difference?

A brief investigation showed that the difference can be explained by the fact that Yahoo includes the volume on the last trading day of the month twice. If we add the volume for Jan. 31st to the total, we get 1,592,856,376. Dividing by the number of trading days (20) gives 79,642,818.8.

When we look at other months, we find the same pattern: Yahoo consistently overstates the average daily volume for the month by a few percentage points. Each time, this difference can be explained by the double inclusion of the volume of the last trading day in the total volume for the month.

Here’s a random sample: Research in Motion for June 2000. Yahoo gives an average daily volume of 4,262,160 shares. Our calculation shows an average of 3,870,800 shares, corresponding to a total volume of 77,416,000 for the month. Yahoo’s total corresponds to 85,243,200 shares. The difference of 7,827,200 shares is exactly the volume for June 30th.

The weekly average daily volume appears to be correct.

I find it hard to believe that a service that is as widely used as Yahoo would show such an error. So, my question to the experts in technical analysis out there is: What am I missing???

Overload resolution in C#

Programming language design is a very tough discipline. You can be sure that every feature will be used and just as often abused in unexpected ways. Mistakes are close to impossible to fix.

C# is a great language. It’s my language of choice for most applications. It derives a lot of its quality from the cautious nature of its lead architect, Anders Hejlsberg, who tries to guide programmers into writing good code. That’s why unsafe code is called unsafe. Not because it is a security risk, but because it is really easy to get yourself into a buggy mess with it. It is why methods and properties have to be explicitly declared virtual, because people tend to override methods when it is not appropriate.

In some cases, however, this protecting of developers against themselves can go too far. Overload resolution is one example I came across recently.

Consider a Curve class that represents a mathematical curve. It has a ValueAt method which gets the value of the curve at a specific x-value, which is passed in as a Double argument. This is a virtual method, of course, and specific types of curves, like Polynomial or CubicSpline, provide their own implementation.

Now, for polynomials in particular, it is sometimes desirable to get the value of the polynomial for a complex argument. So we define an overload that takes a DoubleComplex argument and also returns a DoubleComplex.

So far so good.

But now we want to add a conversion from Double to DoubleComplex. This is a widening conversion: every real number is also a complex number. So it is appropriate to make the conversion implicit. We can then write things like:

DoubleComplex a = 5.0;

instead of

DoubleComplex a = new DoubleComplex(5.0);
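To make the discussion concrete, here’s a minimal sketch of the declarations involved. It is heavily simplified (the real types store coefficients, do real evaluations, and so on); only the shape of the declarations matters here:

struct DoubleComplex
{
    public readonly double Re, Im;

    public DoubleComplex(double re) { Re = re; Im = 0.0; }
    public DoubleComplex(double re, double im) { Re = re; Im = im; }

    // The widening conversion introduced above:
    public static implicit operator DoubleComplex(double x)
    { return new DoubleComplex(x); }
}

class Curve
{
    public virtual double ValueAt(double x) { return 0.0; }
}

class Polynomial : Curve
{
    public Polynomial(params double[] coefficients) { /* store them */ }

    // An override: the exact signature match for double arguments...
    public override double ValueAt(double x) { /* evaluate */ return 0.0; }

    // ...and an overload declared directly on Polynomial:
    public DoubleComplex ValueAt(DoubleComplex z) { /* evaluate */ return z; }
}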

Unfortunately, this change breaks existing code. The following snippet will no longer compile:

Polynomial p = new Polynomial(1.0, 2.0, 3.0);
double y = p.ValueAt(1.0);

On the second line, we get an error message: “Cannot implicitly convert type DoubleComplex to Double.” Why? Because of the way C# resolves method overloads.

Specifically, C# considers methods declared in a type before anything else, including override methods. Section 7.3 of the C# spec (“Member lookup”) states:

First, the set of all accessible (Section 3.5) members named N declared in T and the base types (Section 7.3.1) of T is constructed. Declarations that include an override modifier are excluded from the set. If no members named N exist and are accessible, then the lookup produces no match, and the following steps are not evaluated.

In this case, because there is an implicit conversion from Double to DoubleComplex, the ValueAt(DoubleComplex) overload is applicable. Even though there is an overload whose parameters match exactly, it isn’t even considered here, because it is an override.

This highly unintuitive behavior is justified by the following two rules:

  1. Whether or not a method is overridden is an implementation detail that should be allowed to change without breaking client code.
  2. Changes to a base class that don’t break an inherited class should not break clients of the inherited class.

Even though neither of these actually applies to our example, I can understand that these rules are useful in many situations. In this case, however, it essentially hides the ValueAt(Double) overload I defined, unless I use an ugly upcast construct like

double y = ((Curve)p).ValueAt(1.0);

My problem is this: If I define an overload that takes a specific argument type, clearly my intent is for that overload to be called whenever the argument is of that exact type. This rule violates that intent.

In my view, it was a mistake to violate developer intent in this way and not give an exact signature match precedence over every other overload candidate. Unfortunately, this is one of those choices that in all likelihood is next to impossible to fix.

Visual Basic has a different set of overload resolution rules. It looks for the overload with the least widening conversion, and so would pick the correct overload in this case.

Thanks to Neal Horowitz for helping me clear up this issue.

Accessing floating-point context from .NET applications

A few days ago, I wrote about the need for the .NET framework to support floating-point exceptions. Although the solution I proposed there goes some way towards fixing the problem, it would create a few of its own as well.

The main issue is that the processor’s FPU (floating-point unit) is a global resource. Therefore it should be handled with extreme care.

For an example of what can go wrong if you don’t, we can look at the story of Managed DirectX. DirectX is the high-performance multimedia/graphics/gaming API on the Windows platform. Because performance is of the essence, the DirectX guys decided to ask the processor to do its calculations in single precision by default. This is somewhat faster than double precision, and generally all that is needed for the kinds of applications they encountered.

When people started using the .NET framework to build applications using Managed DirectX, some people found that it broke the .NET math functions. The reason: the precision setting of the FPU is global. It therefore affected all floating-point code in the application, including the calls into the .NET Base Class Libraries.

The problem with our initial solution is that it doesn’t behave nicely. It doesn’t isolate other code from changes to the floating-point state. This can affect code in unexpected ways.

So how do we fix this?

One option is to flag code that uses special floating-point settings with an attribute like “FloatingPointContextAware.” Any code that has this attribute can access floating-point state. However, any code that does not have this attribute would require the CLR to explicitly save the floating-point state.

Unfortunately, this fix suffers from a number of drawbacks:

  1. In some applications, like error analysis, you want to set global state. For example, one way to check whether an algorithm is stable is to run it twice using different rounding modes. If the two results are significantly different, you have a problem.
  2. It is overkill in most situations. If you’re going to use special floating-point features, you should be able to isolate yourself properly from any changes in floating-point state. If you have no knowledge such features exist, chances are your code won’t break if, say, the rounding mode is changed.
  3. It is a performance bottleneck. You can’t require all your client code to also have this attribute. Remember the goal was to allow ‘normal’ calculations to be done as fast as possible while making the edge cases reasonably fast as well. It is counter-productive to impose a performance penalty on every call.

The best solution I can think of is to actually treat the floating-point unit for what it is: a resource. If you want to use its special features, you should inform the system when you plan to use them, and also when you’re done using them.

We make floating-point state accessible only through instances of a FloatingPointContext class, which implements IDisposable and does the necessary bookkeeping. It saves enough of the floating-point state when it is constructed, and restores it when it is disposed. You don’t always have to save and restore the full floating-point state, including values on the stack, etc. In most cases, you only need to save the control word, which is relatively cheap.
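Here’s a rough skeleton of what such a class might look like. This is purely hypothetical: no such class exists in the framework, and the actual FPU access would have to be implemented in platform-specific native code:

public enum RoundingMode { ToNearest, TowardsZero, Up, Down }

[Flags]
public enum FloatingPointExceptionFlag
{
    None = 0, Invalid = 1, DivideByZero = 2,
    Overflow = 4, Underflow = 8, Inexact = 16
}

public sealed class FloatingPointContext : IDisposable
{
    private int savedControlWord;

    public FloatingPointContext()
    {
        // Save the current control word (rounding mode, exception masks).
        savedControlWord = GetControlWord();
    }

    public RoundingMode RoundingMode
    {
        get { /* read the rounding bits of the control word */ return RoundingMode.ToNearest; }
        set { /* set the rounding bits of the control word */ }
    }

    public void ClearExceptionFlags() { /* clear the FPU status flags */ }

    public bool IsExceptionFlagRaised(FloatingPointExceptionFlag flag)
    { /* test the FPU status flags */ return false; }

    public void Dispose()
    {
        // Restore the state that was saved in the constructor.
        SetControlWord(savedControlWord);
    }

    // Placeholders for the platform-specific parts (fnstcw/fldcw on x86):
    private static int GetControlWord() { return 0; }
    private static void SetControlWord(int controlWord) { }
}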

A typical use for this class would be as follows:

using (FloatingPointContext context = new FloatingPointContext())
{
    context.RoundingMode = RoundingMode.TowardsZero;
    context.ClearExceptionFlags();
    // Do some calculations
    if (context.IsExceptionFlagRaised(FloatingPointExceptionFlag.Underflow))
    {
        // Do some more stuff
    }
}

I have submitted a proposal for a FloatingPointContext class to the CLR team. So far they have chosen to keep it under consideration for the next version. Let’s hope they’ll choose to implement it.

Floating-point exceptions and the CLR

I’ve decided to share some of our experiences during the development of our math and statistics libraries in the hope that they may contribute to improvements in the .NET platform as the next version is being designed.

The CLR is a general-purpose runtime environment, and cannot be expected to support every application at the fastest possible speed. However, I do expect it to perform reasonably well, and if a performance hit can be avoided, then it should be. The absence of any floating-point exception mechanism incurs such a performance hit in some fairly common situations.

As an example, let’s take an implementation of complex numbers. This is a type for general use, and it has to give accurate results whenever possible. For obvious reasons, we also want the core operations to be as fast as possible: they should be easy to inline, with no unnecessary overhead. Most operations are fairly straightforward, but division isn’t. Let’s start with the ‘naïve’ implementation:

struct Complex
{
  private double re, im;

  public Complex(double re, double im) { this.re = re; this.im = im; }

  public static Complex operator /(Complex z1, Complex z2)
  {
    double d = z2.re * z2.re + z2.im * z2.im;
    double resultRe = z1.re * z2.re + z1.im * z2.im;
    double resultIm = z1.im * z2.re - z1.re * z2.im;
    return new Complex(resultRe / d, resultIm / d);
  }
}
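A hypothetical example shows just how wrong this can go near the underflow threshold:

// d = z2.re*z2.re + z2.im*z2.im underflows to zero below, and so do both
// numerators, so the computed quotient is NaN + NaN i, even though the
// true result, 0.5 + 0.5i, is perfectly representable:
Complex z1 = new Complex(1e-308, 1e-308);
Complex z2 = new Complex(2e-308, 0.0);
Complex q = z1 / z2;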

If any of the values d, resultRe, or resultIm underflows, the result will lose accuracy, because subnormal numbers by definition don’t have the full 52-bit precision. The CLR also offers no indication that underflow has occurred. This can be fixed, mostly, by modifying the above to:

public static Complex operator /(Complex z1, Complex z2)
{
  if (Math.Abs(z2.re) > Math.Abs(z2.im))
  {
    double t = z2.im / z2.re;
    double d = z2.re + t * z2.im;
    double resultRe = (z1.re + t * z1.im);
    double resultIm = (z1.im - t * z1.re);
    return new Complex(resultRe / d, resultIm / d);
  }
  else
  {
    double t = z2.re / z2.im;
    double d = t * z2.re + z2.im;
    double resultRe = (t * z1.re + z1.im);
    double resultIm = (t * z1.im - z1.re);
    return new Complex(resultRe / d, resultIm / d);
  }
}

This will give accurate results in a larger domain, but is slower because of the extra division. Worse still, some operations that one would expect to give exact results now aren’t exact. For example, if z1 = 27-21i and z2 = 9-7i, the exact result is 3, but the round-off in the division by 9 destroys the exact result.

IEEE-754 exceptions would come to the rescue here – if they were available. Exceptions (a term with a specific meaning in the IEEE-754 standard – not to be confused with CLR exceptions) raise a flag in the FPU’s status register, and can also be trapped by the operating system. We don’t need a trap here. We can do what we need to do with a flag. The code would look something like:

public static Complex operator /(Complex z1, Complex z2)
{
  FloatingPoint.ClearExceptionFlag(FloatingPointExceptionFlag.Underflow);

  double d = z2.re * z2.re + z2.im * z2.im;
  double resultRe = z1.re * z2.re + z1.im * z2.im;
  double resultIm = z1.im * z2.re - z1.re * z2.im;
  if (FloatingPoint.IsExceptionFlagRaised(FloatingPointExceptionFlag.Underflow))
  {
    // Code for the special cases.
  }
  return new Complex(resultRe / d, resultIm / d);
}

Note that the CLR strategy to “continue with default values” won’t work here, because complete underflow defaults to 0, which cannot be distinguished from the common case where the result is exactly zero (and therefore no special action is required). The only way to do it right would be a whole series of ugly comparisons, which would make the code slower and harder to read and maintain. Even if a language supported a mechanism to check for underflow (by inserting comparisons with a suitably small value before and after storing each value), this would bloat the IL, introduce unnecessary round-off (by forcing a conversion from extended to double precision on each operation), and slow things down unnecessarily.

This type of scenario occurs many times in numerical calculations. You perform a calculation the quick and dirty way, and if it turns out you made a mess, you try again but you’re more cautious. The complex number example is the most significant one I have come across while developing our numerical libraries.

Nearly all hardware that the CLR (or its clones) runs on supports this floating-point exception mechanism. I found it somewhat surprising that a ‘standard’ virtual execution environment would not adopt another, well-established standard (IEEE-754) for a specific subset of its functionality.

For more on the math jargon used in this post, see my article Floating-Point in .NET Part 1: Concepts and Formats.

The future of games

Board games that is…

Once upon a time, a board game consisted of a board, some pawns and some dice. Over the years, game designers added more and more props. Dynamic game play was possible, but you had to do all the work.

Now, Philips Research has developed the Entertaible. Pawns and dice are still around, but now the board itself can change during play. My friend Peter Sels did some of the programming.

Since they’re only at the prototype stage, the price tag will likely be too high for some time, even for Santa. Maybe in a few years they’ll sell a “1000 in 1” board game for $19.95…