One of the things I love about C++ is the fact that it is a strongly-typed language. An essential element of good programming is writing code that is self-documenting. The more explicit you are in your code, the less chance that you, your fellow programmers, or even the compiler will misunderstand your purpose. That said, strong typing can be taken too far.

One of the problems I noted in the first math library I worked with was excessive repetition of common operations. Vector normalization in particular seemed to be done everywhere. The library had numerous functions that took a three-component vector as an input parameter and required that vector to be normalized. The principal consideration in the design of the library must have been ease of use, because most library functions accepted unnormalized inputs and always performed a normalization step themselves. The unfortunate downside of this approach was that vectors were frequently renormalized unnecessarily, and performance suffered.

struct Vector3 { … };

struct Plane { … };

// Construct a Plane object given a point on the plane and the plane’s normal
Plane CreatePlane(Vector3 point, Vector3 normal)
{
    normal = Normalize(normal);
    …
}

Years later when I had an opportunity to design my own math library, I attempted to solve this problem with C++ types. I introduced a new type in my math library called NormalizedVector3. Functions like CreatePlane were modified to take NormalizedVector3 objects and consequently they no longer needed to perform possibly redundant normalizations. My first implementation of NormalizedVector3 looked something like this:

struct NormalizedVector3
{
    // construction from Vector3 must normalize!
    NormalizedVector3(Vector3 v);

    // construction from scalars must normalize!
    NormalizedVector3(float x, float y, float z);

    // cast operator allows NormalizedVector3 to be compatible with Vector3
    operator Vector3() const;

    // can’t allow non-const access to members
    float X() const;
    float Y() const;
    float Z() const;

private:
    float x, y, z;
};

This worked reasonably well, but there were a few problems. NormalizedVector3 allowed implicit construction from Vector3 and implicit casting to Vector3, so it could take advantage of all the built-in functionality of a Vector3. Unfortunately, this also meant it introduced a lot of opportunities for implicit normalization.

NormalizedVector3 nv0, nv1;
NormalizedVector3 nv2 = nv0 * nv1; // implicit conversion to Vector3 and
                                   // back to NormalizedVector3

Vector3 point, normal;
Plane p = CreatePlane(point, normal); // implicit conversion to NormalizedVector3

Since operations like vector multiplication are not length-preserving, they are handled by implicit conversions to and from Vector3. Not only is this confusing, it is exactly the sort of hidden (and likely unnecessary) cost I was trying to avoid. My second implementation made the constructor of NormalizedVector3 explicit, which added a bit more overhead to the API but also helped highlight what kind of work was going on under the hood.

NormalizedVector3 nv0, nv1;
NormalizedVector3 nv2 = NormalizedVector3( nv0 * nv1 );

Vector3 point, normal;
Plane p = CreatePlane(point, NormalizedVector3(normal) );

Of course, as soon as all the hidden normalizations were brought to light, I discovered a host of undesirable ones. The math library included functions to multiply vectors and matrices, and most of the time the matrices represented orthogonal transformations. Now all of a sudden transforming a NormalizedVector3 by a matrix required renormalization!

NormalizedVector3 vOld;
Matrix3x3 orthogonalTransform;
NormalizedVector3 vNew =
    NormalizedVector3( orthogonalTransform * vOld ); // unnecessary normalization

Luckily I already had a solution to this problem: more C++ types! I created an OrthonormalMatrix3x3 class with a relationship to Matrix3x3 much like NormalizedVector3’s relationship to Vector3. I was then able to provide a function for transforming NormalizedVector3s by OrthonormalMatrix3x3s that returned NormalizedVector3s and, as if by magic, all concerns about unnecessary normalization disappeared! Okay, not really. What actually happened was I discovered lots of operations on orthonormal matrices that preserved orthonormality but were now resulting in unnecessary re-orthonormalizations. I attempted to provide overloads of those operations to remove the unnecessary conversions between Matrix3x3s and OrthonormalMatrix3x3s, and hilarity ensued.
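For the curious, here is a minimal sketch of what one such overload might look like. None of this is the article’s actual library code; the SkipNormalize tag constructor is an illustrative assumption. The idea is that an orthonormal transform preserves length, so a vector that was already unit length can be stamped as normalized without paying for another normalization.

```cpp
#include <cassert>
#include <cmath>

struct Vector3 { float x, y, z; };

struct Matrix3x3 { float m[3][3]; };

// Assumed wrapper: callers promise the wrapped matrix is orthonormal.
struct OrthonormalMatrix3x3 { Matrix3x3 m; };

Vector3 operator*(const Matrix3x3& a, const Vector3& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

struct NormalizedVector3 {
    // public constructor always normalizes (assumes v is non-zero)
    explicit NormalizedVector3(Vector3 v) {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        x = v.x / len; y = v.y / len; z = v.z / len;
    }
    operator Vector3() const { return { x, y, z }; }
    float x, y, z;

private:
    // private tag constructor: "trust me, this is already unit length"
    struct SkipNormalize {};
    NormalizedVector3(Vector3 v, SkipNormalize) : x(v.x), y(v.y), z(v.z) {}
    friend NormalizedVector3 operator*(const OrthonormalMatrix3x3&,
                                       const NormalizedVector3&);
};

// The overload: an orthonormal transform of a unit vector is still a
// unit vector, so the result bypasses the normalizing constructor.
NormalizedVector3 operator*(const OrthonormalMatrix3x3& a,
                            const NormalizedVector3& v) {
    return NormalizedVector3(a.m * static_cast<Vector3>(v),
                             NormalizedVector3::SkipNormalize{});
}
```

Of course, as the article goes on to describe, every new operation that preserves the invariant demands yet another overload like this one, which is exactly how the type zoo grows.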

By this time it was pretty clear I was halfway to Wonderland and it was time to turn around and crawl back out of the rabbit hole. I deleted all references to OrthonormalMatrix3x3 and NormalizedVector3, which by this point constituted an unhealthy percentage of my math library’s code, and made just two modifications to the original design:

// Construct a Plane object given a point on the plane and the plane’s normal.
// The normal vector must be unit length.
Plane CreatePlane(Vector3 origin, Vector3 normal)
{
    assert( IsUnitLength(normal) );
    …
}
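The article doesn’t show IsUnitLength itself. A plausible sketch (the tolerance value is an assumption) compares the squared length against one, which sidesteps the square root entirely:

```cpp
#include <cassert>
#include <cmath>

struct Vector3 { float x, y, z; };

// Hypothetical helper: a vector is "unit length" if its squared length
// is within a small tolerance of 1. Comparing squared lengths avoids
// computing a sqrt in what is purely a debug-time check.
bool IsUnitLength(const Vector3& v, float tolerance = 1e-4f) {
    float lenSq = v.x*v.x + v.y*v.y + v.z*v.z;
    return std::fabs(lenSq - 1.0f) <= tolerance;
}
```

Because the check lives inside an assert, it costs nothing in release builds; the contract is enforced during development and documented for everyone else.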

The lesson here is that simplicity is every bit as important in API design as correctness. You can try to make your API so clever that even an idiot can’t misuse it, or you can try to make your API so simple that even an idiot can’t misunderstand it. The result is usually the same: high-performance, error-free code. But your clients will be happier and your codebase will be much, much leaner.

I’d argue the real lesson here is that sometimes it is better to defer type checking to runtime, as your last example does.

It *might* be possible that a language with type inference could cover that issue too. Since I’m just starting to dig into that field, I have no idea if it’s true, though 😉

Either way, it seems to be pointing out that typing as-is in C++ is too verbose to actually support, at least in some instances.

The obvious solution to your angst is template programming. Instead of casting, which is undesirable, templates maintain strong type checking. For instance:

template <typename PointClass> void normalize(PointClass& point3d) …

will issue compiler errors if any operations on PointClass are illegal.
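A minimal sketch of that suggestion, assuming a PointClass that exposes x, y, and z members (all names here are illustrative, not from the article’s library):

```cpp
#include <cassert>
#include <cmath>

// Function template as the commenter proposes: it works with any type
// exposing x, y, z members that support float arithmetic. If a type
// lacks those members, the instantiation fails at compile time, so
// strong type checking is preserved without any casting.
template <typename PointClass>
void normalize(PointClass& point3d) {
    float len = std::sqrt(point3d.x * point3d.x +
                          point3d.y * point3d.y +
                          point3d.z * point3d.z);
    point3d.x /= len;  // assumes a non-zero input vector
    point3d.y /= len;
    point3d.z /= len;
}
```

Note, though, that this addresses where normalization happens, not the article’s central problem of whether it needs to happen at all; a template cannot tell a unit vector from any other.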

Great article, thanks for that. It’s a good example of the bigger question about the value of static type checking, which has been on my mind a lot lately. One of the surprising things I’ve discovered on learning Python (from a C++ and .NET background) is that losing static type checking doesn’t actually cost very much in terms of programmer time; the sort of bugs that compile-time type checks can detect are generally very shallow and immediately obvious (passing function parameters in the wrong order, for example). In contrast, and not at all obvious to me until the last couple of years, writing code to conform to a type system actually incurs a significant cost in terms of effort and code complexity. The simplicity of Python code to do the same thing as a chunk of C++ or C# is sometimes breathtaking, and a substantial part of that simplicity is due to the dynamic typing.

Clearly it’s about striking the most pragmatic medium, and different languages are good for different circumstances.

Best regards,