```
{-# LANGUAGE CPP #-}
{-# LANGUAGE OverloadedLists #-}
import Control.Monad
import Data.Bool
import Data.List
import Data.Map (Map)
import qualified Data.Map as M
import Data.Maybe
#ifdef __HASTE__
import Data.IORef
import Haste.DOM
import Haste.Events
import Haste.Graphics.Canvas
import Haste.Graphics.AnimationFrame
#endif

type Multivector = Map String Double

m ! s = M.findWithDefault 0 s m
a ~== b = abs (a - b) < 10**(-6)

wedge :: Multivector -> Multivector -> Multivector
wedge m n = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ catMaybes $ f <$> M.assocs m <*> M.assocs n where
  f a@(s, _) b@(t, _)
    | length u == length s + length t = Just c
    | otherwise = Nothing
    where c@(u, _) = basisMul a b

dot :: Multivector -> Multivector -> Multivector
dot m n = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ catMaybes $ f <$> M.assocs m <*> M.assocs n where
  f a@(s, _) b@(t, _)
    | length u == length t - length s = Just c
    | otherwise = Nothing
    where c@(u, _) = basisMul a b

mul :: Multivector -> Multivector -> Multivector
mul x y = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ basisMul <$> M.assocs x <*> M.assocs y
```

# Projective Geometric Algebra

In the conformal model, null vectors are instrumental in throwing off the shackles of coordinates. We cleverly built null vectors starting from a wacky inner product that induced negative squared norms.

But why take the scenic route? Why not simply define a basis vector \(\newcommand{\vv}[1]{\mathbf{#1}} \newcommand{\a}{\vv{a}} \newcommand{\b}{\vv{b}} \newcommand{\P}{\vv{P}} \newcommand{\Q}{\vv{Q}} \newcommand{\I}{\vv{I}} \newcommand{\x}{\vv{x}} \newcommand{\y}{\vv{y}} \newcommand{\z}{\vv{z}} \newcommand{\E}{\vv{E}} \newcommand{\e}{\vv{e}} \vv{v}\) to be a null vector? What stops us demanding \(\vv{v}^2 = 0\) by fiat?

Nothing. However, it results in a *degenerate* inner product, leading to a
trickier algebra that historically deterred researchers. For example, \(\I\) is
no longer invertible, which makes duals less accessible.

Happily, other intrepid researchers overcame the obstacles, leading to
*projective geometric algebra* (PGA). Simpler and more efficient than the
conformal model, projective geometric algebra still handles rotations and
translations beautifully.

Look familiar? Let \(f(X) = e^{X/2}\) and:

\[ V = f(12 \z\infty) f(\y\x \gamma) f(8\x\infty) f(\z\y \beta) f(2\y\infty) f(\x\z \alpha) \]

Again, the map \(p \mapsto V p V^{-1}\) as \(\alpha, \beta, \gamma\) range over all angles produces the above animation. But this time we started from a 4-dimensional vector space, leading to a 16-dimensional geometric algebra. Thus the number of entries in a multivector is at most the number of entries in a traditional workhorse of computer graphics, a 4x4 matrix.

There are drawbacks. We can no longer wedge 4 points to describe a circle. We lose succinct inversion and dilation. Perhaps for proving advanced geometry theorems we may be better off with the conformal model. But for many applications projective geometric algebra may be ideal.

We follow Charles G. Gunn, *Geometric Algebra for Computer Graphics*, course notes, SIGGRAPH 2019.

## PGA Tour

We copy over definitions from older articles.

The bluntest approach might be to define one null vector for each dimension in our space. Points will certainly be represented by null vectors, but this takes degeneracy too far. In particular, if we want dot products to give useful information, then we need \(n\) non-null basis vectors.

Therefore we start with the standard inner product of \(\mathbb{R}^n\), then
add a null vector \(\e_0\), giving us an \((n+1)\)-dimensional space known as
\(\mathbb{R}_{n,0,1}\). In our code, we write `0` for \(\e_0\) and handle this
case specially, defaulting to the standard behaviour for all other basis
vectors.

```
basisMul (s, m) (t, n) = gnome [] (s ++ t) (m * n) where
  gnome pre (a:b:rest) c
    | a < b = gnome (pre ++ [a]) (b:rest) c
    | a == b = case a of
      '0' -> ([], 0)
      _   -> back pre rest c
    | a > b = back pre (b:a:rest) (-c)
    where
    back []  rest c = gnome [] rest c
    back pre rest c = gnome (init pre) (last pre:rest) c
  gnome pre rest c = (pre ++ rest, c)
```
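A few sanity checks on `basisMul` (extra examples of ours, assuming the definitions above): an already-sorted product needs no sign change, one swap flips the sign, and a repeated \(\e_0\) kills the term.

```
prop_basisMulExamples =
  basisMul ("1", 2) ("2", 3) == ("12", 6) &&   -- already sorted: no sign change
  basisMul ("2", 1) ("1", 1) == ("12", -1) &&  -- one swap: sign flips
  basisMul ("0", 1) ("0", 5) == ("", 0)        -- e0 is null: the product vanishes
```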

Next, we move to the dual space \(\mathbb{R}^*_{n,0,1}\) to spread \(\e_0\) around. Recall the dual of a \(k\)-blade is an \((n-k)\)-blade representing its orthogonal complement.

For \(\mathbb{R}^*_{2,0,1}\), if \(\e_1\) and \(\e_2\) are non-null basis
vectors (which show up as `1` and `2` in our code), then we represent 1-vectors
with the duals of \(\e_0, \e_1, \e_2\).

Already we are hampered by degeneracy. We usually multiply by \(\I\) to compute a dual, but this fails for deriving \(\E_0\) from \(\e_0\), since \(\e_0\) is a null vector. Furthermore, without this portal gun to teleport us between a space and its dual, we are forced to think "backwards" all the time: wedge products describe intersections and meet products describe spans. (If an orthogonal complement grows, then the original subspace shrinks, and vice versa.)

It so happens we can temporarily pretend to have a standard inner product and multiply \(\e_0, \e_1, \e_2\) by \(\I\) to get their duals:

\[ \E_0 = \e_1 \e_2, \E_1 = \e_2 \e_0, \E_2 = \e_0 \e_1 \]

The \(\E_1\) and \(\E_2\) represent the \(x\) and \(y\) directions, and \(\E_0\) is the origin. All but \(\E_0\) are null.
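We can confirm these claims directly with `mul` (a quick check we add here, assuming the definitions above):

```
-- E0 = e1 e2 squares to -1; the ideal directions E1 and E2 are null.
prop_dualBasisSquares =
  sq [("12", 1)] == [("", -1)] &&  -- E0 = e1 e2
  sq [("02", -1)] == [] &&         -- E1 = e2 e0 = -e0 e2
  sq [("01", 1)] == []             -- E2 = e0 e1
  where sq x = mul x (x :: Multivector)
```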

We represent euclidean points and ideal points as follows:

```
one :: Multivector
one = [("", 1)]
canon = mul one
point2 x y = canon [("20", x), ("01", y), ("12", 1)]
ideal2 x y = canon [("20", x), ("01", y)]
```

With these representations, for a finite point \(\P\) we have \(\P^2 = -1\), and for an ideal point \(\vv{V}\) we have \(\vv{V}^2 = 0\):

```
square x = mul x x
prop_finitePointSquaredIsMinusOne = square (point2 123 456) == [("",-1)]
prop_idealPointSquaredIsZero = square (ideal2 123 456) == []
```

As we’re in a projective space, \(\lambda \P\) also represents \(\P\) for any
nonzero scalar \(\lambda\); a finite point is *normalized* when \(\P^2 = -1\).
Ideal points always square to zero.

Like before, we avoid geometrically meaningless squares thanks to null vectors. The square of a (normalized) point is a constant, and hence independent of the choice of origin.

We test our code by reproducing the multiplication table of the geometric algebra basis vectors from one of Gunn’s papers:

```
basis2 :: [Multivector]
basis2 = one : map (flip M.singleton 1) (words "0 1 2 12 20 01 012")
pseudo2 = [("012", 1)] :: Multivector
table1 :: [[Multivector]]
table1 = [[mul r c | c <- basis2] | r <- basis2]
crudeTable1 = putStr $ unlines $ map (unwords . map (show . M.toList)) table1
```

```
[("",1.0)] [("0",1.0)] [("1",1.0)] [("2",1.0)] [("12",1.0)] [("02",-1.0)] [("01",1.0)] [("012",1.0)]
[("0",1.0)] [] [("01",1.0)] [("02",1.0)] [("012",1.0)] [] [] []
[("1",1.0)] [("01",-1.0)] [("",1.0)] [("12",1.0)] [("2",1.0)] [("012",1.0)] [("0",-1.0)] [("02",-1.0)]
[("2",1.0)] [("02",-1.0)] [("12",-1.0)] [("",1.0)] [("1",-1.0)] [("0",1.0)] [("012",1.0)] [("01",1.0)]
[("12",1.0)] [("012",1.0)] [("2",-1.0)] [("1",1.0)] [("",-1.0)] [("01",-1.0)] [("02",-1.0)] [("0",-1.0)]
[("02",-1.0)] [] [("012",1.0)] [("0",-1.0)] [("01",1.0)] [] [] []
[("01",1.0)] [] [("0",1.0)] [("012",1.0)] [("02",1.0)] [] [] []
[("012",1.0)] [] [("02",-1.0)] [("01",1.0)] [("0",-1.0)] [] [] []
```

The line \(a x + b y + c = 0\) is represented by:

`line2 a b c = [("1", a), ("2", b), ("0", c)] :: Multivector`

A euclidean line \(\a\) is normalized when \(\a^2 = 1\).
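For example, the line \(3x + 4y + 5 = 0\) has squared norm \(3^2 + 4^2 = 25\); the \(\e_0\) coefficient contributes nothing because \(\e_0\) is null (an extra check of ours, assuming the definitions above):

```
-- The e0 term drops out of the square, so only a^2 + b^2 survives.
prop_lineNormSquared = square (line2 3 4 5) == [("", 25)]
```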

We provide similar routines for 3D geometry:

```
point3 x y z = canon [("032", x), ("013", y), ("021", z), ("123", 1)]
ideal3 x y z = canon [("032", x), ("013", y), ("021", z)]
plane3 a b c d = [("1", a), ("2", b), ("3", c), ("0", d)] :: Multivector
pseudo3 = [("0123", 1)] :: Multivector
```

Our demo needs to retrieve the coordinates of a euclidean point for the 3D case:

```
-- Assumes input is a euclidean point.
coords3 :: Multivector -> [Double]
coords3 m = [-get "023", get "013", -get "012"] where
  d = m!"123"
  get s = m!s / d
```
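A round-trip check (ours, not Gunn's): `coords3` inverts `point3` on a normalized euclidean point.

```
prop_coords3RoundTrip = coords3 (point3 1 2 3) == [1, 2, 3]
```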

## Squared away

Define the squared norm of a euclidean \(k\)-vector \(\vv{X}\) by:

\[ || \vv{X} ||^2 = \vv{X}^2 \]

(This formula does not apply to all \(k\)-vectors, but rather only those representing points, lines, planes, and so on.)

The square of an ideal element is always zero, so we also define the square of
the *ideal norm* \(||\vv{X}||^2_\infty\) to be the sum of the squares of the
coefficients, and say an ideal \(k\)-vector \(\vv{X}\) is normalized when
\(||\vv{X}||^2_\infty = 1\).

This can be formulated without coordinates:

\[ ||\vv{X}||^2_\infty = ||\vv{X}\vee\P||^2 \]

where \(\P\) is any normalized finite point. However, our implementation simply reads off the coefficients:

```
scalarAdd = M.insertWith (+) ""
scalarMul a v = (*a) <$> v

normSquared :: Multivector -> Double
normSquared v = square v ! ""

idealNormSquared :: Multivector -> Double
idealNormSquared = sum . map (^2) . M.elems

-- For non-ideal inputs, scale so the square becomes 1 or -1; taking the
-- absolute value handles lines (positive square) as well as points
-- (negative square).
normalize :: Multivector -> Multivector
normalize v
  | d ~== 0 = scalarMul (sqrt (1/idealNormSquared v)) v
  | otherwise = scalarMul (sqrt (1/abs d)) v
  where d = normSquared v
```
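For instance, scaling a point by 7 changes nothing projectively, and `normalize` recovers the representative that squares to \(-1\) (a quick check we add, assuming the definitions above):

```
-- Normalizing 7P gives back P, up to floating-point error.
prop_normalizeScale = and $ zipWith (~==) (M.elems p) (M.elems q) where
  p = normalize $ scalarMul 7 $ point2 1 2
  q = point2 1 2
```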

## The meet product

Recall in the conformal model we computed wedge products to find the span of subspaces, such as the line defined by two points, or the plane defined by three points. Because PGA uses the dual space, we need meet products instead.

Usually we take advantage of \(A \vee B = (A^* \wedge B^*)^*\). But without an
easy dual operation, we resort to a *shuffle product* to compute the meet, a
sort of geometric version of
the
inclusion-exclusion principle.

We need a helper function that returns all partitions of a sequence into two subsequences, along with the sign of the permutation obtained by concatenating the subsequences.

```
signedPartitions sz xs = f sz xs where
  f n xs
    | n > length xs = []
    | n == 0 = [(1, [], xs)]
    | (x:xt) <- xs = [(s, x:as, bs) | (s, as, bs) <- f (n - 1) xt]
      ++ [(parity n s, as, x:bs) | (s, as, bs) <- f n xt]
  parity n | odd n = negate
           | otherwise = id

prop_signedPartitionsExample = signedPartitions 2 [1..5] ==
  [ (1, [1,2], [3,4,5])
  , (-1, [1,3], [2,4,5])
  , (1, [1,4], [2,3,5])
  , (-1, [1,5], [2,3,4])
  , (1, [2,3], [1,4,5])
  , (-1, [2,4], [1,3,5])
  , (1, [2,5], [1,3,4])
  , (1, [3,4], [1,2,5])
  , (-1, [3,5], [1,2,4])
  , (1, [4,5], [1,2,3])
  ]
```

To compute meets with a shuffle, we need the dimension of the space, which in a
cleaner library might be held somewhere by each `Multivector` value. (Such a
library could also prevent mixing up multivectors from different spaces.)
Instead, we require the caller to supply the dimension.

```
vee :: Int -> Multivector -> Multivector -> Multivector
vee dim m n = M.filter (not . (~== 0)) $
  M.fromListWith (+) $ concat $ f <$> M.assocs m <*> M.assocs n where
  f a@(s, m) b@(t, n)
    | dim > length s + length t = []
    | otherwise = [(bs, fromIntegral sgn*m*n) |
      (sgn, as, bs) <- signedPartitions (dim - length t) s,
      intersect as t == []]
vee2 = vee 3
vee3 = vee 4
```
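For example, the meet of the points \((0,0)\) and \((1,0)\) is the line through them, namely the \(x\)-axis \(y = 0\), up to scale (an extra check of ours, assuming the definitions above):

```
-- -e2 represents the line y = 0; scale is irrelevant projectively.
prop_meetExample = point2 0 0 `vee2` point2 1 0 == [("2", -1)]
```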

## Products in 2D

Let’s start with the geometric product in 2D on normalized inputs.

Multiplication by \(\I\) is commutative, and computes the orthogonal complement in the euclidean 2D space. (Not in the geometric algebra itself; otherwise we would have easy access to the dual!)

For a euclidean line \(\a\), the product \(\a\I\) is the ideal point in the direction perpendicular to the line.

For a euclidean point \(\P\), the product \(\P\I\) is \(\e_0\), the ideal line.

```
prop_linePerp = line2 1 2 3 `mul` pseudo2 == ideal2 1 2
prop_pointPerp = point2 42 9000 `mul` pseudo2 == [("0",-1)]
```

The product of two euclidean lines is:

\[ \a\b = \a \cdot \b + \a \wedge \b \]

The dot product \(\a \cdot \b\) is the cosine of the angle \(\theta\) between them.

Let \(\P\) denote their normalized point of intersection. If \(\P\) is finite, that is, if the lines intersect, then \(\a \wedge \b = (\sin \theta) \P\). If \(\P\) is ideal, that is, the lines are parallel, then \(\a \wedge \b = d_{ab} \P\) where \(d_{ab}\) denotes the distance between the lines.
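A concrete case we can check (our example, assuming the definitions above): the perpendicular lines \(x = 0\) and \(y = 0\) have \(\cos\theta = 0\), so the dot part vanishes, and \(\sin\theta = 1\), so the wedge part is exactly their normalized intersection point, the origin.

```
prop_perpLines = dot a b == [] && wedge a b == point2 0 0 where
  a = line2 1 0 0  -- the line x = 0
  b = line2 0 1 0  -- the line y = 0
```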

The product of two euclidean points is:

\[ \P \Q = -1 + d_{PQ} \vv{V} \]

where \(d_{PQ}\) is the distance between them and \(\vv{V}\) is an ideal line representing the direction perpendicular to the line joining them. We define

\[ \P \times \Q = d_{PQ} \vv{V} \]

hence \(d_{PQ} = || \P \times \Q ||_\infty\).
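For instance, the origin and \((3, 4)\) lie 5 apart; stripping the scalar \(-1\) from \(\P\Q\) leaves \(\P \times \Q\), whose squared ideal norm is 25 (a check we add here):

```
-- scalarAdd 1 cancels the -1 scalar part of PQ, leaving d V.
prop_pointDistance =
  idealNormSquared (scalarAdd 1 $ point2 0 0 `mul` point2 3 4) == 25
```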

The product of a euclidean line and a euclidean point is:

\[ \a \P = \a \cdot \P + \a \wedge \P = \a^\perp_\P + d_{\a\P}\I \]

where \(\a^\perp_\P = \a \cdot \P\) is the line through \(\P\) perpendicular to \(\a\) and \(d_{\a\P}\) is the distance between the point and the line.
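For example, take \(\a\) to be the line \(y = 0\) and \(\P = (0, 3)\) (our check, assuming the definitions above): the grade-1 part \(-\e_1\) is the line \(x = 0\), which passes through \(\P\) perpendicular to \(\a\), and the pseudoscalar coefficient 3 is the distance from \(\P\) to \(\a\).

```
prop_linePointProduct =
  line2 0 1 0 `mul` point2 0 3 == [("1", -1), ("012", 3)]
```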

It turns out we also have meaningful products when one or both of the inputs are ideal.

## Transformations

There are more geometric interpretations to explore and we have yet to touch the 3-dimensional case. But let’s fast-forward to the part we’ve all been waiting for: rotors!

The reflection of \(\vv{X}\) in the \((n-1)\)-hyperplane \(\a\) is:

\[ \a\vv{X}\a \]

This formula reflects points, lines, planes, and so on.

We could finish here, as all Euclidean isometries can be generated from reflections. Composing reflections through \(\a\) then \(\b\) is simply:

\[ \b\a\vv{X}\a\b \]

If \(\vv{R} = \b \a\) is normalized then we call it a *rotor*, and transform a
subspace \(\vv{X}\) by computing \(\vv{R} \vv{X} \tilde{\vv{R}}\), where
\(\tilde{\vv{R}}\) is the *reversal* of \(\vv{R}\), meaning we multiply the
factors in reverse order. Calculation shows \(\vv{R}\tilde{\vv{R}} = 1\).
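We can verify \(\vv{R}\tilde{\vv{R}} = 1\) numerically for a sample rotor (an extra check of ours, assuming the definitions above):

```
-- (cos t + sin t e12)(cos t - sin t e12) = cos^2 t + sin^2 t = 1.
prop_rotorReversal = all (~== 0) $ M.elems $ scalarAdd (-1) $ mul r rT where
  t = pi/5
  r  = [("", cos t), ("12", sin t)] :: Multivector
  rT = [("", cos t), ("12", -sin t)] :: Multivector
```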

Rotations have an exponential form:

\[ e^{\frac{\theta}{2} \P} \]

where \(\P\) is the normalized point (in 2D) or axis (in 3D) at the center of a rotation by the angle \(\theta\). For euclidean points, \(\P^2 = -1\) so \(\P\) takes the role played by \(\vv{i}\) in other models.

For example, a rotation around the \(y\)-axis by 120 degrees is given by:

```
approxEq v w = all (~== 0) $ M.elems $ M.unionWith (-) v w

prop_rotateExample =
  axis == [("13",-1)] && rotor `approxEq` [("",0.5),("13",-sqrt(3)/2)]
  where
  rotor = scalarAdd (cos (a/2)) $ scalarMul (sin (a/2)) axis
  axis = normalize $ point3 0 0 0 `vee3` point3 0 1 0
  a = 2*pi/3
```

Observe we computed the meet product of two points to find the multivector describing the \(y\)-axis. It turns out to have a simple form, though in general lines in 3D are less straightforward in PGA:

```
prop_lineExample =
  line == [("01",-23),("02",-14),("03",17),("12",-7),("13",-3),("23",-7)]
  where
  line = point3 1 2 3 `vee3` point3 6 5 4
```

In 2D, translation by \(d\) units can be viewed as a rotation of size \(d\) with the center at an ideal point \(\P\) in the perpendicular direction. By considering the Taylor series expansion, and recalling \(\P^2 = 0\), we find:

\[ e^{\frac{d}{2} \P} = 1 + \frac{d}{2} \P \]

For a 3D translation, the axis is an ideal line, which Gunn states can be found by computing \((\E_0 \vee \vv{V})\I\) where \(\vv{V}\) is the direction of travel. It seems we find the line joining the origin and \(\vv{V}\) then take its orthogonal complement to yield the axis.

\[ e^{\frac{d}{2} (\E_0 \vee \vv{V})\I} \]

For example, the following rotor translates 12 units along the \(z\)-axis:

```
prop_translateExample =
  zdir == [("03",-1)] && rotor == [("",1),("03",-6)]
  where
  zdir = mul (M.singleton "123" 1 `vee3` ideal3 0 0 1) pseudo3
  rotor = scalarAdd 1 $ scalarMul (12/2) zdir
```

Rotations and translations involving the coordinate axes have a simple form so are great for demos.

```
moon a b c p = foldl1 mul $ rs ++ [p] ++ map inv (reverse rs) where
  inv = M.mapWithKey f where
    f [] v = v
    f _ v = -v
  rs =
    [ tra "03" 12
    , rot "21" c
    , tra "01" 8
    , rot "32" b
    , tra "02" 2
    , rot "13" a
    ]
  rot axis ang = [("", cos (ang/2)), (axis, sin (ang/2))] :: Multivector
  tra dir d = [("", 1), (dir, d/2)] :: Multivector
```

Above, our definition for \(V\) uses notation from our adventure in the
conformal model: the symbols \(\infty, \x, \y, \z\) correspond to
\(\e_0,\e_1,\e_2,\e_3\), which are `0 1 2 3` in our code.

## Into orbit

Lastly, we slap together some code to animate a cube on this webpage.

```
#ifdef __HASTE__
-- Cube faces and their colours.
cubeFC = zip
  [[0, 1, 3, 2], [4, 6, 7, 5],
   [0, 4, 5, 1], [2, 3, 7, 6],
   [1, 5, 7, 3], [0, 2, 6, 4]]
  -- http://the-rubiks-cube.deviantart.com/journal/Using-Official-Rubik-s-Cube-Colors-268760351
  [RGB 0 0x51 0xba, RGB 0 0x9e 0x60,
   RGB 0xff 0xd5 0, RGB 0xff 0xff 0xff,
   RGB 0xff 0x58 0, RGB 0xc4 0x1e 0x3a]

paint canvas tRaw = sequence_ $ renderOnTop canvas . drawFace <$> filter vis cubeFC
  where
  ms = round tRaw :: Int
  ang s = fromIntegral (mod ms (s*1000)) * 2 * pi / fromIntegral (s*1000)
  cubeVertices = (\[a,b,c] -> point3 a b c) <$> replicateM 3 [-1, 1]
  ws = coords3 . moon (ang 2) (ang 5) (ang 23) <$> cubeVertices
  drawFace (face, rgb) = color rgb . fill . path $ project . (ws!!) <$> face
  project :: [Double] -> (Double, Double)
  project [x, y, z] = (x/z*160 + 160, y/z*160 + 160)
  -- Cull faces pointing away from us.
  vis (face, _) = d!"" <= 0 where
    (u:v:_) = zipWith (zipWith (-)) t ts
    d = foldl1' mul [toGA u, toGA v, M.singleton "zyx" 1, toGA h]
    ts@(h:t) = (ws!!) <$> face
    toGA = M.fromList . zip (pure <$> "xyz")

main = withElems ["canvas"] $ \[cElem] -> do
  Just canvas <- fromElem cElem
  paused <- newIORef False
  let
    anim t = do
      render canvas $ fill $ rect (0, 0) (320, 320)
      paint canvas t
      b <- readIORef paused
      unless b $ void $ requestAnimationFrame anim
    pause = do
      b <- readIORef paused
      writeIORef paused $ not b
      when b $ void $ requestAnimationFrame anim
  _ <- cElem `onEvent` MouseDown $ const pause
  requestAnimationFrame anim
#endif
```

*blynn@cs.stanford.edu*