Lecture 13: Normal Matrix

Transforms Example

Consider a simple transform matrix $M$.

If we take the cross product of two vectors $a$ and $b$, we get one result, $a \times b$.

If we transform each by $M$ and then take the cross product, we get something different (as expected): $(Ma) \times (Mb)$.

What might not be expected is that if we instead apply the transform to the result of the first cross product, we get yet another result: in general, $M(a \times b) \neq (Ma) \times (Mb)$.

This shows us that the result of a cross product transforms differently from an ordinary vector. That’s because that result is not a vector - it’s a bivector.
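To make this concrete, here is a small numerical check (a sketch in Python/numpy; the matrix and vectors are made up for illustration and are not the lecture's original example):

```python
import numpy as np

# A hypothetical non-uniform scale, chosen so the difference is easy to see.
M = np.diag([2.0, 1.0, 1.0])

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

transform_then_cross = np.cross(M @ a, M @ b)   # (Ma) x (Mb)
cross_then_transform = M @ np.cross(a, b)       # M (a x b)

print(transform_then_cross)   # [0. 0. 2.]
print(cross_then_transform)   # [0. 0. 1.]  -- not the same!

# The transform that does reproduce (Ma) x (Mb), as derived below:
N = np.linalg.det(M) * np.linalg.inv(M).T
print(N @ np.cross(a, b))     # [0. 0. 2.]
```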

Derivation

Let’s take a closer look. What do we get if we transform some vector $v$ by a matrix $M$?
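The product is just a weighted sum of the columns of the matrix:

$$Mv = v_x\,m_1 + v_y\,m_2 + v_z\,m_3$$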

Here $m_i$ is the $i$-th column of $M$.

So what is the cross product of two vectors $a$ and $b$ after they are transformed?
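Substituting the column expansion for each transformed vector:

$$(Ma)\times(Mb) = (a_x\,m_1 + a_y\,m_2 + a_z\,m_3)\times(b_x\,m_1 + b_y\,m_2 + b_z\,m_3)$$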

The cross product is distributive over addition, so we can write:
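$$\begin{aligned}
(Ma)\times(Mb) = {}& a_x b_x\,(m_1\times m_1) + a_x b_y\,(m_1\times m_2) + a_x b_z\,(m_1\times m_3)\\
&+ a_y b_x\,(m_2\times m_1) + a_y b_y\,(m_2\times m_2) + a_y b_z\,(m_2\times m_3)\\
&+ a_z b_x\,(m_3\times m_1) + a_z b_y\,(m_3\times m_2) + a_z b_z\,(m_3\times m_3)
\end{aligned}$$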

Any vector crossed with itself is 0:
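$$m_1\times m_1 = m_2\times m_2 = m_3\times m_3 = 0$$

so three of the nine terms drop out, leaving:

$$\begin{aligned}
(Ma)\times(Mb) = {}& a_x b_y\,(m_1\times m_2) + a_x b_z\,(m_1\times m_3) + a_y b_x\,(m_2\times m_1)\\
&+ a_y b_z\,(m_2\times m_3) + a_z b_x\,(m_3\times m_1) + a_z b_y\,(m_3\times m_2)
\end{aligned}$$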

And cross products are anti-commutative ($m_j \times m_k = -\,m_k \times m_j$), so:
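$$(Ma)\times(Mb) = (a_y b_z - a_z b_y)\,(m_2\times m_3) + (a_z b_x - a_x b_z)\,(m_3\times m_1) + (a_x b_y - a_y b_x)\,(m_1\times m_2)$$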

Okay, there’s the formula for the cross product components on the left, but what about those cross products on the right?

Well we can write this in vector form like so:
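$$(Ma)\times(Mb) = (a\times b)_x\,(m_2\times m_3) + (a\times b)_y\,(m_3\times m_1) + (a\times b)_z\,(m_1\times m_2)$$

where $(a\times b)_x = a_y b_z - a_z b_y$ and so on.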

Which shows us that we can rewrite this as matrix-vector multiplication.
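Collecting the three column cross products as the columns of a single $3\times 3$ matrix:

$$(Ma)\times(Mb) = \begin{bmatrix} m_2\times m_3 & m_3\times m_1 & m_1\times m_2 \end{bmatrix}\,(a\times b)$$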

On the right is the result of the cross product between $a$ and $b$. On the left is a matrix we can multiply that result by to get what we would have gotten by transforming both vectors before crossing them.

So what is this matrix on the left?

Well, first let’s just point something out.
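Dotting each column of $M$ with its matching column of cross products gives a scalar triple product, and each of these equals the determinant of $M$:

$$m_1\cdot(m_2\times m_3) = m_2\cdot(m_3\times m_1) = m_3\cdot(m_1\times m_2) = \det(M)$$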

This follows from the formula for the determinant - if you multiply everything out, it’s the same quantity.
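For instance, multiplying out the first one component by component reproduces the cofactor expansion of $\det(M)$ down its first column:

$$m_1\cdot(m_2\times m_3) = m_{1x}\,(m_{2y} m_{3z} - m_{2z} m_{3y}) + m_{1y}\,(m_{2z} m_{3x} - m_{2x} m_{3z}) + m_{1z}\,(m_{2x} m_{3y} - m_{2y} m_{3x}) = \det(M)$$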

However, all other dot products are $0$, e.g.:
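$$m_2\cdot(m_2\times m_3) = 0$$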

This also follows from the multiplication:
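$$m_2\cdot(m_2\times m_3) = m_{2x}\,(m_{2y} m_{3z} - m_{2z} m_{3y}) + m_{2y}\,(m_{2z} m_{3x} - m_{2x} m_{3z}) + m_{2z}\,(m_{2x} m_{3y} - m_{2y} m_{3x}) = 0$$

Every product appears twice with opposite signs, so everything cancels.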

This is interesting because it gives us:
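$$M^T \begin{bmatrix} m_2\times m_3 & m_3\times m_1 & m_1\times m_2 \end{bmatrix} = \det(M)\,I$$

The diagonal entries are the triple products above, and every off-diagonal entry is one of the zero dot products.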

Which, by the definition of matrix inverse, means that:
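$$\begin{bmatrix} m_2\times m_3 & m_3\times m_1 & m_1\times m_2 \end{bmatrix} = \det(M)\,\bigl(M^T\bigr)^{-1} = \det(M)\,\bigl(M^{-1}\bigr)^T$$

Substituting this back into the matrix-vector form above:

$$(Ma)\times(Mb) = \det(M)\,\bigl(M^{-1}\bigr)^T\,(a\times b)$$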

There we go - the inverse transpose is how to transform the result of a cross product. Strictly the exact transform also carries a factor of $\det(M)$, but since we always normalize the result, that factor only really matters when it is negative - that is, for flipping normals.
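In practice that looks something like the following (a minimal sketch, again assuming numpy; the helper names are illustrative, not from the lecture):

```python
import numpy as np

def normal_matrix(M):
    """Matrix used to transform normals / cross-product results.

    The det(M) factor from the exact transform is dropped here because the
    result is re-normalized; note that a negative determinant (a mirroring
    transform) still flips the direction.
    """
    return np.linalg.inv(M).T

def transform_normal(M, n):
    """Transform the normal n by M and re-normalize it."""
    t = normal_matrix(M) @ n
    return t / np.linalg.norm(t)
```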

Next, we will cover why exactly it is that the result of a cross product is this different entity that obeys different rules.