Programming language design is a very tough discipline. You can be sure that every feature will be used and just as often abused in unexpected ways. Mistakes are close to impossible to fix.
C# is a great language, and my language of choice for most applications. It derives a lot of its quality from the cautious nature of its lead architect, Anders Hejlsberg, who tries to guide programmers into writing good code. That’s why unsafe code is called unsafe: not because it is a security risk, but because it is really easy to get yourself into a buggy mess with it. It is also why methods and properties have to be explicitly declared virtual: people tend to override methods when it is not appropriate.
In some cases, however, this protection of developers against themselves can go too far. Overload resolution is one example I came across recently.
Consider a Curve class that represents a mathematical curve. It has a ValueAt method that returns the value of the curve at a specific x-value, passed in as a Double argument. This is a virtual method, of course, and specific types of curves, like Polynomial or CubicSpline, provide their own implementations.
Now, for polynomials in particular, it is sometimes desirable to get the value of the polynomial for a complex argument. So we define an overload that takes a DoubleComplex argument and also returns a DoubleComplex.
So far so good.
But now we want to add a conversion from Double to DoubleComplex. This is a widening conversion: every real number is also a complex number. So it is appropriate to make the conversion implicit. We can then write things like:
DoubleComplex a = 5.0;
instead of the more verbose
DoubleComplex a = new DoubleComplex(5.0);
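Putting the pieces together, here is a minimal sketch of the types described so far. The class and member names (Curve, Polynomial, ValueAt, DoubleComplex) come from the text; the constructor signature, the Horner evaluation, and the arithmetic operators are my own assumptions:

```csharp
using System;

public struct DoubleComplex
{
    public readonly double Re, Im;

    public DoubleComplex(double re, double im) { Re = re; Im = im; }

    // Widening conversion: every real number is also a complex number,
    // so the conversion is declared implicit.
    public static implicit operator DoubleComplex(double x)
        => new DoubleComplex(x, 0.0);

    public static DoubleComplex operator +(DoubleComplex a, DoubleComplex b)
        => new DoubleComplex(a.Re + b.Re, a.Im + b.Im);

    public static DoubleComplex operator *(DoubleComplex a, DoubleComplex b)
        => new DoubleComplex(a.Re * b.Re - a.Im * b.Im,
                             a.Re * b.Im + a.Im * b.Re);
}

public class Curve
{
    // Virtual, so specific curve types supply their own implementation.
    public virtual double ValueAt(double x)
        => throw new NotImplementedException();
}

public class Polynomial : Curve
{
    private readonly double[] coefficients;  // coefficients[i] multiplies x^i

    public Polynomial(params double[] coefficients)
        => this.coefficients = coefficients;

    // Exact-match override for real arguments (Horner's scheme).
    public override double ValueAt(double x)
    {
        double result = 0.0;
        for (int i = coefficients.Length - 1; i >= 0; i--)
            result = result * x + coefficients[i];
        return result;
    }

    // Overload for complex arguments; coefficients[i] is promoted to
    // DoubleComplex by the implicit conversion above.
    public DoubleComplex ValueAt(DoubleComplex z)
    {
        DoubleComplex result = new DoubleComplex(0.0, 0.0);
        for (int i = coefficients.Length - 1; i >= 0; i--)
            result = result * z + coefficients[i];
        return result;
    }
}
```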
Unfortunately, this change breaks existing code. The following snippet will no longer compile:
Polynomial p = new Polynomial(1.0, 2.0, 3.0);
double y = p.ValueAt(1.0);
On the second line, we get an error message: “Cannot implicitly convert type DoubleComplex to Double.” Why? Because of the way C# resolves method overloads.
Specifically, C# considers methods declared in a type before anything else, including override methods. Section 7.3 of the C# spec (“Member lookup”) states:
First, the set of all accessible (Section 3.5) members named N declared in T and the base types (Section 7.3.1) of T is constructed. Declarations that include an override modifier are excluded from the set. If no members named N exist and are accessible, then the lookup produces no match, and the following steps are not evaluated.
In this case, because there is an implicit conversion from Double to DoubleComplex, the ValueAt(DoubleComplex) overload is applicable. Even though there is an overload whose parameters match exactly, it isn’t even considered here, because it is an override.
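To make this concrete, here is a sketch of how the candidate set is built for p.ValueAt(1.0), assuming the Curve, Polynomial, and DoubleComplex types described above (the comments paraphrase the resolution steps; the error text is as quoted earlier):

```csharp
// Member lookup for ValueAt on the compile-time type Polynomial:
//
//   Polynomial.ValueAt(DoubleComplex)  - declared in Polynomial; applicable,
//                                        because double converts implicitly
//                                        to DoubleComplex
//   Polynomial.ValueAt(double)         - excluded: it carries the override
//                                        modifier
//   Curve.ValueAt(double)              - removed: an applicable method is
//                                        declared in the more derived type
//                                        Polynomial, so base declarations
//                                        are dropped
//
// Only ValueAt(DoubleComplex) survives, so the call's result type is
// DoubleComplex:
Polynomial p = new Polynomial(1.0, 2.0, 3.0);
DoubleComplex c = p.ValueAt(1.0);   // compiles, but not the overload we meant
// double y = p.ValueAt(1.0);       // error: cannot implicitly convert
//                                  // DoubleComplex to double
```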
This highly unintuitive behavior is justified by the following two rules:
- Whether or not a method is overridden is an implementation detail that should be allowed to change without breaking client code.
- Changes to a base class that don’t break an inherited class should not break clients of the inherited class.
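The second rule is easiest to see with a hypothetical pair of types (Base, Derived, and M are illustrative names, not part of the Curve example). Suppose a new library version adds an exact-match overload to the base class:

```csharp
namespace V1
{
    public class Base { }

    public class Derived : Base
    {
        public string M(double x) => "Derived.M(double)";
    }
    // Client code: new Derived().M(1) binds to M(double),
    // the only candidate.
}

namespace V2
{
    // The next library version adds an exact-match overload to the base:
    public class Base
    {
        public string M(int x) => "Base.M(int)";
    }

    public class Derived : Base
    {
        public string M(double x) => "Derived.M(double)";
    }
    // new Derived().M(1) STILL binds to Derived.M(double): because an
    // applicable method is declared in the more derived type, the base
    // declaration is dropped from the candidate set, and the client is
    // not silently rebound by the base-class change.
}
```

The same preference for members declared in the more derived type is exactly what hides the exact-match ValueAt(Double) in the polynomial example.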
Even though neither of these actually applies to our example, I can understand that these rules are useful in many situations. In this case, however, the rule essentially hides the ValueAt(Double) overload I defined, unless I use an ugly upcast construct like
double y = ((Curve)p).ValueAt(1.0);
My problem is this: If I define an overload that takes a specific argument type, clearly my intent is for that overload to be called whenever the argument is of that exact type. This rule violates that intent.
In my view, it was a mistake to violate developer intent in this way and not give an exact signature match precedence over every other overload candidate. Unfortunately, this is one of those choices that in all likelihood is next to impossible to fix.
Visual Basic has a different set of overload resolution rules: it looks for the overload requiring the least widening conversion, and so would pick the correct overload in this case.
Thanks to Neal Horowitz for helping me clear up this issue.