# Geometric Algebra Primer

Notes for the Geometry and Linear Algebra course of Prof. D. Franco, covering the Geometric Algebra Primer: Subspaces (Bivectors, Trivectors, Blades), the Geometric Product, Multivectors, the Dot and Outer Product revisited, Inner, Outer and Geometric products, Tools, the Euclidian Plane, Euclidian Space, and Homogeneous Space.

Abstract

Adopted with great enthusiasm in physics, geometric algebra slowly emerges in computational science. Its elegance and ease of use is unparalleled. By introducing two simple concepts, the multivector and its geometric product, we obtain an algebra that allows subspace arithmetic. It turns out that being able to ‘calculate’ with subspaces is extremely powerful, and solves many of the hacks required by traditional methods. This paper provides an introduction to geometric algebra. The intention is to give the reader an understanding of the basic concepts, so advanced material becomes more accessible.

Copyright © 2003 Jaap Suter. Permission to make digital or hard copies of part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permission through the author's contact information.

Contents

1 Introduction
    1.1 Rationale
    1.2 Overview
    1.3 Acknowledgements
    1.4 Disclaimer
2 Subspaces
    2.1 Bivectors
        2.1.1 The Euclidian Plane
        2.1.2 Three Dimensions
    2.2 Trivectors
    2.3 Blades
3 Geometric Algebra
    3.1 The Geometric Product
    3.2 Multivectors
    3.3 The Geometric Product Continued
    3.4 The Dot and Outer Product revisited
    3.5 The Inner Product
    3.6 Inner, Outer and Geometric
4 Tools
    4.1 Grades
    4.2 The Inverse
    4.3 Pseudoscalars
    4.4 The Dual
    4.5 Projection and Rejection
    4.6 Reflections
    4.7 The Meet
5 Applications
    5.1 The Euclidian Plane
        5.1.1 Complex Numbers
        5.1.2 Rotations
        5.1.3 Lines
    5.2 Euclidian Space
        5.2.1 Rotations
        5.2.2 Lines
        5.2.3 Planes
    5.3 Homogeneous Space
        5.3.1 Three dimensional homogeneous space
        5.3.2 Four dimensional homogeneous space
        5.3.3 Concluding Homogeneous Space
6 Conclusion
    6.1 The future of geometric algebra
    6.2 Further Reading

List of Figures

2.1 The dot product
2.2 Vector a extended along vector b
2.3 Vector b extended along vector a
2.4 A two dimensional basis
2.5 A 3-dimensional bivector basis
2.6 A Trivector
2.7 Basis blades in 2 dimensions
2.8 Basis blades in 3 dimensions
2.9 Basis blades in 4 dimensions
3.1 Multiplication Table for basis blades in Cℓ₂
3.2 Multiplication Table for basis blades in Cℓ₃
3.3 The dot product of a bivector and a vector
4.1 Projection and rejection of vector a in bivector B
4.2 Reflection
4.3 The Meet
5.1 Lines in the Euclidian plane
5.2 Rotation in an arbitrary plane
5.3 An arbitrary rotation
5.4 A rotation using two reflections
5.5 A two dimensional line in the homogeneous model
5.6 An homogeneous intersection

Chapter 1

Introduction

1.1 Rationale

Information about geometric algebra is widely available in the field of physics. Knowledge applicable to computer science, graphics in particular, is lacking. As Leo Dorst [1] puts it:

“. . . A computer scientist first pointed to geometric algebra as a promising way to ‘do geometry’ is likely to find a rather confusing collection of material, of which very little is experienced as immediately relevant to the kind of geometrical problems occurring in practice. . .

. . . After perusing some of these, the computer scientist may well wonder what all the fuss is about, and decide to stick with the old way of doing things . . . ”

And indeed, disappointed by the mathematical obscurity, many people discard geometric algebra as something for academics only. Unfortunately they miss out on the elegance and power that geometric algebra has to offer.

Not only does geometric algebra provide us with new ways to reason about computational geometry, it also embeds and explains all existing theories, including complex numbers, quaternions, matrix algebra, and Plücker space. Geometric algebra gives us the necessary and unifying tools to express geometry and its relations without the need for tricks or special cases. Ultimately, it makes communicating ideas easier.

1.2 Overview

The layout of the paper is as follows: I start out by talking a bit about subspaces, what they are, what we can do with them, and how traditional vectors or one-dimensional subspaces fit in the picture. After that I will define what a geometric algebra is, and what the fundamental concepts are. This chapter is the most important, as all other theory builds upon it. The following chapter will introduce some common and handy concepts which I call tools. They are not fundamental, but useful in many applications. Once we have mastered the fundamentals, and armed with our tools, we can tackle some applications of geometric algebra. It is this chapter that tries to demonstrate the elegance of geometric algebra, and how and where it replaces traditional methods. Finally, I wrap things up, and provide a few references and a roadmap on how to continue a study of geometric algebra.

1.3 Acknowledgements

I would like to thank David Hestenes for his books [7] [8] and papers [10] and Leo Dorst for the papers on his website [6]. Anything you learn from this introduction, you indirectly learned from them.

My gratitude to Per Vognsen for explaining many of the mathematical obscurities that I encountered, and providing me with some of the proofs in this paper. Thanks to Kurt Miller, Conor Stokes, Patrick Harty, Matt Newport, Willem De Boer, Frank A. Krueger and Robert Valkenburg for comments. Finally, I am greatly indebted to Dirk Gerrits. His excellent skills as an editor and his thorough proofreading allowed me to correct many errors.

1.4 Disclaimer

Of course, any mistakes in this text are entirely mine. I only hope to provide an easy-to-read introduction. Proofs will be omitted if the required mathematics are beyond the scope of this paper. Many times only an example or an intuitive outline will be given. I am certain that some of my reasoning won’t hold in a thorough mathematical review, but at least you should get an impression. The enthusiastic reader should pick up some of the references to extend his knowledge, learn about some of the subtleties and find the actual proofs.

Chapter 2

Subspaces

It is often neglected that vectors represent 1-dimensional subspaces. This is mainly due to the fact that it seems the only concept at hand. Hence we abuse vectors to form higher-dimensional subspaces. We use them to represent planes by defining normals. We combine them in strange ways to create oriented subspaces. Some papers even mention quaternions as vectors on a 4-dimensional unit hypersphere.

For no apparent reason we have been denying the existence of 2-, 3- and higher-dimensional subspaces as simple concepts, similar to the vector. Geometric algebra introduces these and even defines the operators to perform arithmetic with them. Using geometric algebra we can finally represent planes as true 2-dimensional subspaces, define oriented subspaces, and reveal the true identity of quaternions. We can add and subtract subspaces of different dimensions, and even multiply and divide them, resulting in powerful expressions that can express any geometric relation or concept.

This chapter will demonstrate how vectors represent 1-dimensional subspaces and use this knowledge to express subspaces of arbitrary dimensions. However, before we get to that, let us consider the very basics by using a familiar example.

Figure 2.1: The dot product

What if we project a 1-dimensional subspace onto another? The answer is well known: for vectors a and b, the dot product a · b projects a onto b, resulting in the scalar magnitude of the projection relative to b’s magnitude. This is depicted in figure 2.1 for the case where b is a unit vector.

Scalars can be treated as 0-dimensional subspaces. Thus, the projection of a 1-dimensional subspace onto another results in a 0-dimensional subspace.

2.1 Bivectors

Geometric algebra introduces an operator that is in some ways the opposite of the dot product. It is called the outer product and, instead of projecting a vector onto another, it extends a vector along another. The ∧ (wedge) symbol is used to denote this operator. Given two vectors a and b, the outer product a ∧ b is depicted in figure 2.2.

Figure 2.2: Vector a extended along vector b

The resulting entity is a 2-dimensional subspace, and we call it a bivector. It has an area equal to the size of the parallelogram spanned by a and b, and an orientation depicted by the clockwise arc. Note that a bivector has no shape. Using a parallelogram to visualize the area provides an intuitive way of understanding, but a bivector is just an oriented area, in the same way a vector is just an oriented length.

Figure 2.3: Vector b extended along vector a

If b were extended along a, the result would be a bivector with the same area but an opposite (i.e. counter-clockwise) orientation, as shown in figure 2.3. In mathematical terms, the outer product is anticommutative, which means that:

a ∧ b = −b ∧ a    (2.1)

With the consequence that:

a ∧ a = 0    (2.2)

which makes sense if you consider that a ∧ a = −a ∧ a, and only 0 equals its own negation (0 = −0). The geometrical interpretation is a vector extended along itself. Obviously the resulting bivector will have no area.

Some other interesting properties of the outer product are:

(λa) ∧ b = λ(a ∧ b)    associative scalar multiplication    (2.3)
λ(a ∧ b) = (a ∧ b)λ    commutative scalar multiplication    (2.4)
a ∧ (b + c) = (a ∧ b) + (a ∧ c)    distributive over vector addition    (2.5)

for vectors a, b and c and scalar λ. Drawing a few simple sketches should convince you; otherwise most of the references provide proofs.

2.1.1 The Euclidian Plane

Given an n-dimensional vector a there is no way to visualize it until we see a decomposition onto a basis (e₁, e₂, …, eₙ). In other words, we express a as a linear combination of the basis vectors eᵢ. This allows us to write a as an n-tuple of real numbers, e.g. (x, y) in two dimensions, (x, y, z) in three, etcetera. Bivectors are much alike; they can be expressed as linear combinations of basis bivectors.

To illustrate, consider two vectors a and b in the Euclidian plane ℝ². Figure 2.4 depicts the real number decomposition a = (α₁, α₂) and b = (β₁, β₂) onto the basis vectors e₁ and e₂.

Written down, this decomposition looks as follows:

a = α₁e₁ + α₂e₂
b = β₁e₁ + β₂e₂

The outer product of a and b becomes:

a ∧ b = (α₁e₁ + α₂e₂) ∧ (β₁e₁ + β₂e₂)

Using (2.5) we may rewrite the above to:

a ∧ b = (α₁e₁ ∧ β₁e₁) + (α₁e₁ ∧ β₂e₂) + (α₂e₂ ∧ β₁e₁) + (α₂e₂ ∧ β₂e₂)

Equations (2.3) and (2.4) tell us we may reorder the scalar multiplications to obtain:

a ∧ b = (α₁β₁ e₁ ∧ e₁) + (α₁β₂ e₁ ∧ e₂) + (α₂β₁ e₂ ∧ e₁) + (α₂β₂ e₂ ∧ e₂)

8

β b

2

α a

2

e

2 I e β α

1 1 1

Figure 2.4: A two dimensional basis

Now, recall equation (2.2) which says that the outer product of a vector with itself equals zero. Thus we are left with:

a ∧ b = (α₁β₂ e₁ ∧ e₂) + (α₂β₁ e₂ ∧ e₁)

Now take another look at figure 2.4. There, I represents the outer product e₁ ∧ e₂. This will be our choice for the basis bivector. Because of (2.1) this means that e₂ ∧ e₁ = −I. Using this information in the previous equation, we obtain:

a ∧ b = (α₁β₂ − α₂β₁)I    (2.6)

which is how to calculate the outer product of two vectors a = (α₁, α₂) and b = (β₁, β₂). Thus, in two dimensions, we express bivectors in terms of a basis bivector called I. In the Euclidian plane we use I to represent e₁₂ = e₁ ∧ e₂.
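Equation (2.6) reduces the outer product in the plane to a single number, the coefficient of I. As a quick sanity check, here is a short Python sketch (the function name is mine, not from the paper) that computes this coefficient and exhibits the anticommutativity of (2.1):

```python
def outer_2d(a, b):
    """Coefficient of the basis bivector I = e1 ^ e2 in a ^ b,
    per equation (2.6): a ^ b = (a1*b2 - a2*b1) I."""
    return a[0] * b[1] - a[1] * b[0]

a, b = (2.0, 1.0), (1.0, 3.0)
print(outer_2d(a, b))   # 5.0, the oriented area spanned by a and b
print(outer_2d(b, a))   # -5.0, same area, opposite orientation (2.1)
print(outer_2d(a, a))   # 0.0, a vector extended along itself (2.2)
```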

2.1.2 Three Dimensions

In 3-dimensional space ℝ³, things become more complicated. Now, the orthogonal basis consists of three vectors: e₁, e₂, and e₃. As a result, there are three basis bivectors. These are e₁ ∧ e₂ = e₁₂, e₁ ∧ e₃ = e₁₃, and e₂ ∧ e₃ = e₂₃, as depicted in figure 2.5.

Figure 2.5: A 3-dimensional bivector basis

It is worth noticing that the choice between using either eᵢⱼ or eⱼᵢ as a basis bivector is completely arbitrary. Some people prefer to use {e₁₂, e₂₃, e₃₁} because it is cyclic, but this argument breaks down in four dimensions or higher; e.g. try making {e₁₂, e₁₃, e₁₄, e₂₃, e₂₄, e₃₄} cyclic. I use {e₁₂, e₁₃, e₂₃} because it solves some issues [12] in computational geometric algebra implementations.

The outer product of two vectors will result in a linear combination of the three basis bivectors. I will demonstrate this by using two vectors a and b:

a = α₁e₁ + α₂e₂ + α₃e₃
b = β₁e₁ + β₂e₂ + β₃e₃

The outer product a ∧ b becomes:

a ∧ b = (α₁e₁ + α₂e₂ + α₃e₃) ∧ (β₁e₁ + β₂e₂ + β₃e₃)

Using the same rewrite rules as in the previous section, we may rewrite this to:

a ∧ b = α₁e₁ ∧ β₁e₁ + α₁e₁ ∧ β₂e₂ + α₁e₁ ∧ β₃e₃ +
        α₂e₂ ∧ β₁e₁ + α₂e₂ ∧ β₂e₂ + α₂e₂ ∧ β₃e₃ +
        α₃e₃ ∧ β₁e₁ + α₃e₃ ∧ β₂e₂ + α₃e₃ ∧ β₃e₃

And reordering scalar multiplication:

a ∧ b = α₁β₁ e₁ ∧ e₁ + α₁β₂ e₁ ∧ e₂ + α₁β₃ e₁ ∧ e₃ +
        α₂β₁ e₂ ∧ e₁ + α₂β₂ e₂ ∧ e₂ + α₂β₃ e₂ ∧ e₃ +
        α₃β₁ e₃ ∧ e₁ + α₃β₂ e₃ ∧ e₂ + α₃β₃ e₃ ∧ e₃

Recalling equations (2.1) and (2.2), we have the following rules for i ≠ j:

eᵢ ∧ eᵢ = 0    outer product with self is zero
eᵢ ∧ eⱼ = eᵢⱼ    outer product of basis vectors equals basis bivector
eⱼ ∧ eᵢ = −eᵢⱼ    anticommutative

Using this, we can rewrite the above to the following:

a ∧ b = (α₁β₂ − α₂β₁)e₁₂ + (α₁β₃ − α₃β₁)e₁₃ + (α₂β₃ − α₃β₂)e₂₃    (2.7)

which is the outer product of two vectors in 3-dimensional Euclidian space.

For some, this looks remarkably like the definition of the cross product. But they are not the same. The outer product works in all dimensions, whereas the cross product is only defined in three dimensions.¹ Furthermore, the cross product calculates a perpendicular subspace instead of a parallel one. Later we will see why this causes problems in certain situations² and how the outer product solves these.

¹ Some cross product definitions are valid in all spaces with uneven dimension.
² If you ever tried transforming a plane, you will remember that you had to use an inverse of a transposed matrix to transform the normal of the plane.
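To make the comparison with the cross product concrete, the sketch below (my own illustration, not from the paper) computes the three bivector components of equation (2.7) next to the ordinary cross product. The same three numbers appear in both results, just placed on different basis elements and with one sign flipped:

```python
def outer_3d(a, b):
    """Components of a ^ b on the basis bivectors (e12, e13, e23),
    per equation (2.7)."""
    return (a[0] * b[1] - a[1] * b[0],   # e12
            a[0] * b[2] - a[2] * b[0],   # e13
            a[1] * b[2] - a[2] * b[1])   # e23

def cross(a, b):
    """Ordinary cross product in three dimensions, for comparison."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(outer_3d(a, b))   # (-3.0, -6.0, -3.0) on e12, e13, e23
print(cross(a, b))      # (-3.0, 6.0, -3.0) on e1, e2, e3
```

The x-component of the cross product equals the e₂₃ coefficient, the y-component is the negated e₁₃ coefficient, and the z-component equals the e₁₂ coefficient; the bivector is the parallel (spanned) subspace, while the cross product is its perpendicular counterpart.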

2.2 Trivectors

Until now, we have been using the outer product as an operator of two vectors. The outer product extended a 1-dimensional subspace along another to create a 2-dimensional subspace. What if we extend a 2-dimensional subspace along a 1-dimensional one?

If a, b and c are vectors, then what is the result of (a ∧ b) ∧ c? Intuition tells us this should result in a 3-dimensional subspace, which is correct and illustrated in figure 2.6.

Figure 2.6: A Trivector

A bivector extended by a third vector results in a directed volume element. We call this a trivector. Note that, like bivectors, a trivector has no shape; only volume and sign. Even though a box helps to understand the nature of trivectors intuitively, it could have been any shape.

In 3-dimensional Euclidian space ℝ³, there is one basis trivector equal to e₁ ∧ e₂ ∧ e₃ = e₁₂₃. Sometimes, in Euclidian space, this trivector is called I. We already saw this symbol being used for e₁₂ in the Euclidian plane, and we will come back to this later. The outer product of three arbitrary vectors results in a scalar multiple of this basis trivector. In 4-dimensional space ℝ⁴, there are four basis trivectors e₁₂₃, e₁₂₄, e₁₃₄, and e₂₃₄, and consequently an arbitrary trivector will be a linear combination of these four basis trivectors. But what about the Euclidian plane? Obviously, there can be no 3-dimensional subspaces in a 2-dimensional space ℝ². The following informal proof demonstrates why trivectors do not exist in two dimensions.

We need to show that for arbitrary vectors a, b, and c ∈ ℝ² the following holds:

(a ∧ b) ∧ c = 0

Again, we will decompose the vectors onto the basis vectors, using real numbers (α₁, α₂), (β₁, β₂), and (γ₁, γ₂):

a = α₁e₁ + α₂e₂
b = β₁e₁ + β₂e₂
c = γ₁e₁ + γ₂e₂

Using equation (2.6), we may write:

(a ∧ b) ∧ c = ((α₁β₂ − α₂β₁)e₁ ∧ e₂) ∧ (γ₁e₁ + γ₂e₂)

We can rewrite this to:

((α₁β₂ − α₂β₁)e₁ ∧ e₂) ∧ (γ₁e₁) + ((α₁β₂ − α₂β₁)e₁ ∧ e₂) ∧ (γ₂e₂)

Which becomes:

(γ₁(α₁β₂ − α₂β₁) e₁ ∧ e₂ ∧ e₁) + (γ₂(α₁β₂ − α₂β₁) e₁ ∧ e₂ ∧ e₂)

The scalar parts are not really important. Take a good look at the outer products of the basis vectors. We have:

e₁ ∧ e₂ ∧ e₁, and e₁ ∧ e₂ ∧ e₂

Because the outer product is anticommutative (equation (2.1)), we may rewrite the first one:

−e₁ ∧ e₁ ∧ e₂, and e₁ ∧ e₂ ∧ e₂

And using equation (2.2), which says that the outer product of a vector with itself equals zero, we are left with:

−0 ∧ e₂, and e₁ ∧ 0

From here, it does not take much to realize that the outer product of a vector and the null vector results in zero. I’ll come back to a more formal treatment of null vectors later, but for now it should be enough to understand that if we extend a vector by a vector that has no length, we are left with zero area. Thus we conclude that a ∧ b ∧ c = 0 in ℝ².

2.3 Blades

So far we have seen scalars, vectors, bivectors and trivectors representing 0-, 1-, 2- and 3-dimensional subspaces respectively. Nothing stops us from generalizing all of the above to allow subspaces with arbitrary dimension.

Therefore, we introduce the term k-blades, where k refers to the dimension of the subspace the blade represents; this k is also called the grade of the blade. Scalars are 0-blades, vectors are 1-blades, bivectors are 2-blades, and trivectors are 3-blades. In other words, the grade of a vector is one, and the grade of a trivector is three. In higher dimensional spaces there can be 4-blades, 5-blades, or even higher. As we have shown for n = 2 in the previous section, in an n-dimensional space there are no nonzero blades of grade higher than n.

Recall how we expressed vectors as a linear combination of basis vectors and bivectors as a linear combination of basis bivectors. It turns out that every k-blade can be decomposed onto a set of basis k-blades. The following tables contain all the basis blades for subspaces of dimensions 2, 3 and 4.

| blades | basis | count |
|---|---|---|
| 1-blades (vectors) | {e₁, e₂} | 2 |
| 2-blades (bivectors) | {e₁₂} | 1 |

Figure 2.7: Basis blades in 2 dimensions

| blades | basis | count |
|---|---|---|
| 1-blades (vectors) | {e₁, e₂, e₃} | 3 |
| 2-blades (bivectors) | {e₁₂, e₁₃, e₂₃} | 3 |
| 3-blades (trivectors) | {e₁₂₃} | 1 |

Figure 2.8: Basis blades in 3 dimensions

| blades | basis | count |
|---|---|---|
| 1-blades (vectors) | {e₁, e₂, e₃, e₄} | 4 |
| 2-blades (bivectors) | {e₁₂, e₁₃, e₁₄, e₂₃, e₂₄, e₃₄} | 6 |
| 3-blades (trivectors) | {e₁₂₃, e₁₂₄, e₁₃₄, e₂₃₄} | 4 |
| 4-blades | {e₁₂₃₄} | 1 |

Figure 2.9: Basis blades in 4 dimensions

Generalizing this: how many basis k-blades are needed in an n-dimensional space to represent arbitrary k-blades? It turns out that the answer lies in the binomial coefficient:

C(n, k) = n! / ((n − k)! k!)

This is because a basis k-blade is uniquely determined by the k basis vectors from which it is constructed. There are n different basis vectors in total. C(n, k) is the number of ways to choose k elements from a set of n elements, and thus it is easily seen that the number of basis k-blades is equal to C(n, k).

Here are a few examples which you can compare to the tables above. The number of basis bivectors or 2-blades in 3-dimensional space is:

C(3, 2) = 3! / ((3 − 2)! 2!) = 3

The number of basis trivectors or 3-blades in 3-dimensional space equals:

C(3, 3) = 3! / ((3 − 3)! 3!) = 1

The number of basis bivectors or 2-blades in 4-dimensional space is:

C(4, 2) = 4! / ((4 − 2)! 2!) = 6
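These counts are plain binomial coefficients, so they can be checked directly with Python's standard library:

```python
import math

# Number of basis k-blades in an n-dimensional space: C(n, k).
for n, k in [(3, 2), (3, 3), (4, 2)]:
    print(f"C({n},{k}) = {math.comb(n, k)}")
# C(3,2) = 3, C(3,3) = 1, C(4,2) = 6, matching the examples above
```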

Chapter 3

Geometric Algebra

All spaces ℝⁿ generate a set of basis blades that make up a geometric algebra of subspaces, denoted by Cℓₙ.¹ For example, a possible basis for Cℓ₂ is:

{ 1, e₁, e₂, I }

consisting of the basis scalar 1, the basis vectors e₁ and e₂, and the basis bivector I. Here, 1 is used to denote the basis 0-blade or scalar-basis. Every element of the geometric algebra Cℓ₂ can be expressed as a linear combination of these basis blades. Another example is a basis of Cℓ₃, which could be:

{ 1, e₁, e₂, e₃, e₁₂, e₁₃, e₂₃, e₁₂₃ }

consisting of the basis scalar, three basis vectors, three basis bivectors, and the basis trivector.

The total number of basis blades for an algebra can be calculated by adding the numbers required for all basis k-blades:

∑ₖ₌₀ⁿ C(n, k) = 2ⁿ    (3.1)

The proof relies on some combinatorial mathematics and can be found in many places. You can use the following table to check the formula for a few simple geometric algebras.

| Cℓₙ | basis blades | total |
|---|---|---|
| Cℓ₀ | {1} | 2⁰ = 1 |
| Cℓ₁ | {1; e₁} | 2¹ = 2 |
| Cℓ₂ | {1; e₁, e₂; e₁₂} | 2² = 4 |
| Cℓ₃ | {1; e₁, e₂, e₃; e₁₂, e₁₃, e₂₃; e₁₂₃} | 2³ = 8 |
| Cℓ₄ | {1; e₁, e₂, e₃, e₄; e₁₂, e₁₃, e₁₄, e₂₃, e₂₄, e₃₄; e₁₂₃, e₁₂₄, e₁₃₄, e₂₃₄; e₁₂₃₄} | 2⁴ = 16 |

¹ The reason we use Cℓₙ is because geometric algebra is based on the theory of Clifford algebras, a topic within mathematics beyond the scope of this paper.
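The 2ⁿ count of equation (3.1) can also be seen by enumerating the basis blades themselves, since a basis blade corresponds to a subset of the n basis vectors. A small sketch (the tuple-of-indices representation is my own, not notation from the paper) using the standard library:

```python
from itertools import combinations

def basis_blades(n):
    """All basis blades of Cl_n, each as a tuple of basis-vector indices:
    () is the scalar 1, (1,) is e1, (1, 2) is e12, and so on."""
    return [c for k in range(n + 1)
              for c in combinations(range(1, n + 1), k)]

for n in range(5):
    print(n, len(basis_blades(n)))   # 1, 2, 4, 8, 16: always 2**n
```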

3.1 The Geometric Product

Until now we have only used the outer product. If we combine the outer product with the familiar dot product we obtain the geometric product. For arbitrary vectors a, b the geometric product can be calculated as follows:

ab = a · b + a ∧ b    (3.2)

Wait, how is that possible? The dot product results in a scalar, and the outer product in a bivector. How does one add a scalar to a bivector?

Like complex numbers, we keep the two entities separated. The complex number (3 + 4i) consists of a real and an imaginary part. Likewise, ab = a · b + a ∧ b consists of a scalar and a bivector part. Such combinations of blades are called multivectors.
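In the plane, both parts of equation (3.2) are single numbers, so the geometric product of two vectors can be sketched as a pair (scalar part, coefficient of I); the function below is my own illustration, not notation from the paper:

```python
def geometric_2d(a, b):
    """Geometric product of two vectors in the plane, equation (3.2):
    ab = a . b + a ^ b, returned as (scalar part, coefficient of I)."""
    dot = a[0] * b[0] + a[1] * b[1]
    wedge = a[0] * b[1] - a[1] * b[0]
    return (dot, wedge)

print(geometric_2d((1.0, 0.0), (0.0, 1.0)))   # (0.0, 1.0): e1 e2 = I
print(geometric_2d((1.0, 0.0), (1.0, 0.0)))   # (1.0, 0.0): e1 e1 = 1
```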

3.2 Multivectors

A multivector is a linear combination of different k-blades. In ℝ² it will contain a scalar part, a vector part and a bivector part:

α₁ + α₂e₁ + α₃e₂ + α₄I

with scalar part α₁, vector part α₂e₁ + α₃e₂, and bivector part α₄I, where the αᵢ are real numbers, i.e. the components of the multivector. Note that components αᵢ can be zero, which means that blades are multivectors as well. For example, if α₁ and α₄ are zero, we have a vector or 1-blade.

In ℝ² we need 2² = 4 real numbers to denote a full multivector. A multivector in ℝ³ can be defined with 2³ = 8 real numbers and will look like this:

α₁ + α₂e₁ + α₃e₂ + α₄e₃ + α₅e₁₂ + α₆e₁₃ + α₇e₂₃ + α₈e₁₂₃

with a scalar part, a vector part, a bivector part and a trivector part. In the same way, a multivector in ℝ⁴ will have 2⁴ = 16 components.

Unfortunately, multivectors can’t be visualized easily. Vectors, bivectors and trivectors have intuitive visualizations in 2- and 3-dimensional space. Multivectors lack this way of thinking, because we have no way to visualize a scalar added to an area. However, we get something much more powerful than easy visualization. A multivector, as a linear combination of subspaces, turns out to be extremely expressive, and can be used to convey many different concepts in geometry.

3.3 The Geometric Product Continued

The generalized geometric product is an operator for multivectors. It has the following properties:

(AB)C = A(BC)    associativity    (3.3)
λA = Aλ    commutative scalar multiplication    (3.4)
A(B + C) = AB + AC    distributive over addition    (3.5)

for arbitrary multivectors A, B and C, and scalar λ. Proofs of these properties are beyond the scope of this paper. They are not difficult per se, but it is mostly formal algebra, even though all of the above intuitively feel right already. The interested reader should pick up some of the references for more information.

Note that the geometric product is, in general, not commutative:

AB ≠ BA

Nor is it anticommutative. This is a direct consequence of the fact that the anticommutative outer product and the commutative dot product are both part of the geometric product.

We have seen the geometric product for vectors using the dot product and the outer product. However, since the dot product is only defined for vectors, and the outer product only for blades, we need something different for multivectors. Consider two arbitrary multivectors A and B from Cℓ₂:

A = α₁ + α₂e₁ + α₃e₂ + α₄I
B = β₁ + β₂e₁ + β₃e₂ + β₄I

Multiplying A and B using the geometric product, we get:

AB = (α₁ + α₂e₁ + α₃e₂ + α₄I)B

Using equation (3.5) we may rewrite this to:

AB = α₁B + α₂e₁B + α₃e₂B + α₄IB

Now writing out B:

AB = (α₁(β₁ + β₂e₁ + β₃e₂ + β₄I))
   + (α₂e₁(β₁ + β₂e₁ + β₃e₂ + β₄I))
   + (α₃e₂(β₁ + β₂e₁ + β₃e₂ + β₄I))
   + (α₄I(β₁ + β₂e₁ + β₃e₂ + β₄I))

And this can be rewritten to:

AB = α₁β₁ + α₁β₂e₁ + α₁β₃e₂ + α₁β₄I
   + α₂e₁β₁ + α₂e₁β₂e₁ + α₂e₁β₃e₂ + α₂e₁β₄I
   + α₃e₂β₁ + α₃e₂β₂e₁ + α₃e₂β₃e₂ + α₃e₂β₄I
   + α₄Iβ₁ + α₄Iβ₂e₁ + α₄Iβ₃e₂ + α₄Iβ₄I

And in the same way as we did when we wrote out the outer product, we may reorder the scalar multiplications (3.4) to obtain:

AB = α₁β₁ + α₁β₂e₁ + α₁β₃e₂ + α₁β₄I    (3.6)
   + α₂β₁e₁ + α₂β₂e₁e₁ + α₂β₃e₁e₂ + α₂β₄e₁I
   + α₃β₁e₂ + α₃β₂e₂e₁ + α₃β₃e₂e₂ + α₃β₄e₂I
   + α₄β₁I + α₄β₂Ie₁ + α₄β₃Ie₂ + α₄β₄II

This looks like a monster of a calculation at first. But if you study it for a while, you will notice that it is fairly structured. The resulting equation demonstrates that we can express the geometric product of arbitrary multivectors as a linear combination of geometric products of basis blades.

So what we need is to understand how to calculate geometric products of basis blades. Let’s look at a few different combinations. For example, using equation (3.2) we can write:

e₁e₁ = e₁ · e₁ + e₁ ∧ e₁

But remember from equation (2.2) that a ∧ a = 0 because it has no area. Also, the dot product of a vector with itself is equal to its squared magnitude. If we choose the magnitude of the basis vectors e₁, e₂, etc. to be 1, we may simplify the above to:

e₁e₁ = e₁ · e₁ + e₁ ∧ e₁ = 1 + 0 = 1

Another example, again in Cℓ₂:

e₁e₂ = e₁ · e₂ + e₁ ∧ e₂

Now remember that e₁ is perpendicular to e₂, so the dot product e₁ · e₂ = 0. This leaves us with:

e₁e₂ = 0 + e₁ ∧ e₂ = I

A more complicated example involves the geometric product of e₁ and I. The previous example showed us that I = e₁₂ is equal to e₁e₂. We can use this and equation (3.3) to write:

e₁I = e₁e₁₂ = e₁(e₁e₂) = (e₁e₁)e₂ = 1e₂ = e₂

You might begin to see a pattern. Because the basis vectors are perpendicular, the dot and outer product have trivial results. We use this to simplify the result of a geometric product with a few rules.

1. A basis blade (e₁₂, e₁₂₃, etc.) can be written as an outer product of perpendicular vectors. Because of this, their dot product equals zero, and consequently we can write them as a geometric product of vectors. For example, in some high dimensional space, we could write:

   e₁₂₈₄₉ = e₁ ∧ e₂ ∧ e₈ ∧ e₄ ∧ e₉ = e₁e₂e₈e₄e₉

2. Equation (2.1) allows us to swap the order of two non-equal basis vectors if we negate the result. This means that we can write:

   e₁e₂e₃ = −e₂e₁e₃ = e₂e₃e₁ = −e₃e₂e₁

3. Whenever a basis vector appears next to itself, it annihilates itself, because the geometric product of a basis vector with itself equals one:

   eᵢeᵢ = 1    (3.7)

   Example: e₁₁₂₃₃₄ = e₂₄

Using these three rules we are able to simplify any geometric product of basis blades. Take the following example:

e₁e₂₃e₃₁e₂ = e₁e₂e₃e₃e₁e₂    using rule one
           = e₁e₂e₁e₂    using rule three
           = −e₁e₁e₂e₂    using rule two
           = −1    using rule three twice    (3.8)

We can now create a so-called multiplication table which lists all the combinations of geometric products of basis blades. For Cℓ₂ it would look like figure 3.1.

|    | 1 | e₁ | e₂ | I |
|---|---|---|---|---|
| 1 | 1 | e₁ | e₂ | I |
| e₁ | e₁ | 1 | I | e₂ |
| e₂ | e₂ | −I | 1 | −e₁ |
| I | I | −e₂ | e₁ | −1 |

Figure 3.1: Multiplication Table for basis blades in Cℓ₂

According to this table, the multiplication of I and I should equal −1, which can be calculated as follows:

I² = e₁₂e₁₂    by definition
   = e₁e₂e₁e₂    using rule one
   = −e₂e₁e₁e₂    using rule two
   = −e₂e₂    using rule three
   = −1    using rule three
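The three simplification rules are mechanical enough to implement directly. The sketch below (my own, not from the paper) writes a blade as a list of basis-vector indices as in rule one, then repeatedly applies rule two (swap and negate) and rule three (equal neighbours annihilate) until the product is in canonical form:

```python
def blade_product(a, b):
    """Geometric product of two basis blades, each given as a sequence of
    basis-vector indices (rule one), e.g. [2, 3] for e23.
    Returns (sign, blade) with the blade as a sorted tuple of indices."""
    factors = list(a) + list(b)
    sign, i = 1, 0
    while i < len(factors) - 1:
        if factors[i] == factors[i + 1]:
            del factors[i:i + 2]                      # rule three: ei ei = 1
            i = max(i - 1, 0)
        elif factors[i] > factors[i + 1]:
            factors[i], factors[i + 1] = factors[i + 1], factors[i]
            sign = -sign                              # rule two: swap, negate
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(factors)

# Example (3.8): (e1 e23)(e31 e2), i.e. [1,2,3] times [3,1,2], equals -1;
# the empty blade () is the scalar 1.
print(blade_product([1, 2, 3], [3, 1, 2]))   # (-1, ())
print(blade_product([1, 2], [1, 2]))         # (-1, ()): I I = -1
print(blade_product([1], [1, 2]))            # (1, (2,)): e1 I = e2
```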

Now that we have the required knowledge on geometric products of basis
blades, we can return to equation (3.6), repeated for convenience:

    AB = α₁β₁ + α₁β₂e₁ + α₁β₃e₂ + α₁β₄I
       + α₂β₁e₁ + α₂β₂e₁e₁ + α₂β₃e₁e₂ + α₂β₄e₁I
       + α₃β₁e₂ + α₃β₂e₂e₁ + α₃β₃e₂e₂ + α₃β₄e₂I
       + α₄β₁I + α₄β₂Ie₁ + α₄β₃Ie₂ + α₄β₄II

We can simply look up the geometric product of basis blades in the multiplication
table, and substitute the results:

    AB = α₁β₁ + α₁β₂e₁ + α₁β₃e₂ + α₁β₄I
       + α₂β₁e₁ + α₂β₂ + α₂β₃I + α₂β₄e₂
       + α₃β₁e₂ − α₃β₂I + α₃β₃ − α₃β₄e₁
       + α₄β₁I − α₄β₂e₂ + α₄β₃e₁ − α₄β₄

Now the last step is to group the basis-blades together:

    AB = (α₁β₁ + α₂β₂ + α₃β₃ − α₄β₄)                                (3.9)
       + (α₄β₃ − α₃β₄ + α₁β₂ + α₂β₁)e₁
       + (α₁β₃ − α₄β₂ + α₂β₄ + α₃β₁)e₂
       + (α₄β₁ + α₁β₄ + α₂β₃ − α₃β₂)I

The final result is a linear combination of the four basis blades {1, e₁, e₂, I} or,
in other words, a multivector. This proves that the geometric algebra is closed
under the geometric product.
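Equation (3.9) translates directly into code. In this sketch (my own; the paper gives no code) a Cℓ₂ multivector is stored as a (scalar, e₁, e₂, I) tuple, and the product of two such tuples is again such a tuple, which is the closure property in executable form.

```python
def gp2(A, B):
    """Geometric product in Cl_2: the four coefficients of equation (3.9).

    A and B are multivectors as (scalar, e1, e2, I) tuples.
    """
    a1, a2, a3, a4 = A
    b1, b2, b3, b4 = B
    return (
        a1 * b1 + a2 * b2 + a3 * b3 - a4 * b4,   # scalar part
        a4 * b3 - a3 * b4 + a1 * b2 + a2 * b1,   # e1 part
        a1 * b3 - a4 * b2 + a2 * b4 + a3 * b1,   # e2 part
        a4 * b1 + a1 * b4 + a2 * b3 - a3 * b2,   # I part
    )

print(gp2((0, 1, 0, 0), (0, 0, 1, 0)))  # e1 e2 = I  -> (0, 0, 0, 1)
print(gp2((0, 0, 0, 1), (0, 0, 0, 1)))  # I I = -1   -> (-1, 0, 0, 0)
```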

That is to say, so far I have only shown you how the geometric product
works in Cℓ₂. It is trivial to extend the same methods to Cℓ₃ or higher. The
same three simplification rules apply. Figure 3.2 contains the multiplication
table for Cℓ₃.

           1      e₁     e₂     e₃     e₁₂    e₁₃    e₂₃    e₁₂₃
    1      1      e₁     e₂     e₃     e₁₂    e₁₃    e₂₃    e₁₂₃
    e₁     e₁     1      e₁₂    e₁₃    e₂     e₃     e₁₂₃   e₂₃
    e₂     e₂     −e₁₂   1      e₂₃    −e₁    −e₁₂₃  e₃     −e₁₃
    e₃     e₃     −e₁₃   −e₂₃   1      e₁₂₃   −e₁    −e₂    e₁₂
    e₁₂    e₁₂    −e₂    e₁     e₁₂₃   −1     −e₂₃   e₁₃    −e₃
    e₁₃    e₁₃    −e₃    −e₁₂₃  e₁     e₂₃    −1     −e₁₂   e₂
    e₂₃    e₂₃    e₁₂₃   −e₃    e₂     −e₁₃   e₁₂    −1     −e₁
    e₁₂₃   e₁₂₃   e₂₃    −e₁₃   e₁₂    −e₃    e₂     −e₁    −1

Figure 3.2: Multiplication Table for basis blades in Cℓ₃

3.4 The Dot and Outer Product revisited

We defined the geometric product for vectors as a combination of the dot and
outer product:

    ab = a · b + a ∧ b

We can rewrite these equations to express the dot product and outer product
in terms of the geometric product:

    a ∧ b = ½(ab − ba)                                              (3.10)

    a · b = ½(ab + ba)                                              (3.11)

To illustrate, let us prove (3.10). Let's take two multivectors A and B ∈ Cℓ₂
for which the scalar and bivector parts are zero, i.e. two vectors. Using equation
(3.9) and taking into account that α₁ = β₁ = α₄ = β₄ = 0, we can write AB
and BA as:

    AB = (α₂β₂ + α₃β₃) + (α₂β₃ − α₃β₂)I                             (3.12)

    BA = (β₂α₂ + β₃α₃) + (β₂α₃ − β₃α₂)I                             (3.13)

Using these in equation (3.10) we get:

    (AB − BA)/2 = (((α₂β₂ + α₃β₃) + (α₂β₃ − α₃β₂)I) − ((β₂α₂ + β₃α₃) + (β₂α₃ − β₃α₂)I)) / 2

Reordering we get:

    ((α₂β₂ + α₃β₃) − (β₂α₂ + β₃α₃)) / 2 + ((α₂β₃ − α₃β₂)I − (β₂α₃ − β₃α₂)I) / 2

where the first term is the scalar part and the second the bivector part. Notice
the scalar part results in zero, which leaves us with:

    ((α₂β₃ − α₃β₂)I − (β₂α₃ − β₃α₂)I) / 2

Subtracting the two bivectors we get:

    (α₂β₃ − α₃β₂ − β₂α₃ + β₃α₂)I / 2

This may be rewritten as:

    (2α₂β₃ − 2α₃β₂)I / 2

And now dividing by 2 we obtain:

    A ∧ B = (α₂β₃ − α₃β₂)I

2 3 3 2

for multivectors A and B with zero scalar and bivector part. Compare this with
equation (2.6) that defines the outer product for two vectors a and b. If you
remember that the vector part of a multivector ∈ Cℓ₂ is in the second and third
component, you will realize that these equations are the same.
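The proof can also be checked numerically. The sketch below is mine, not the paper's: it reuses equation (3.9) as a small `gp2` helper and confirms that ½(ab − ba) of two vectors has only an I component, equal to α₂β₃ − α₃β₂.

```python
def gp2(A, B):
    """Geometric product in Cl_2 per equation (3.9); (scalar, e1, e2, I) tuples."""
    a1, a2, a3, a4 = A
    b1, b2, b3, b4 = B
    return (a1*b1 + a2*b2 + a3*b3 - a4*b4,
            a4*b3 - a3*b4 + a1*b2 + a2*b1,
            a1*b3 - a4*b2 + a2*b4 + a3*b1,
            a4*b1 + a1*b4 + a2*b3 - a3*b2)

def wedge(a, b):
    """a ∧ b = (ab - ba)/2, equation (3.10)."""
    ab, ba = gp2(a, b), gp2(b, a)
    return tuple((x - y) / 2 for x, y in zip(ab, ba))

a = (0, 2, 5, 0)                 # the vector 2 e1 + 5 e2
b = (0, 1, 4, 0)                 # the vector 1 e1 + 4 e2
print(wedge(a, b))               # (0.0, 0.0, 0.0, 3.0): (2*4 - 5*1) I
```

The symmetric part ½(ab + ba) of the same pair gives the pure scalar 2·1 + 5·4 = 22, as equation (3.11) promises.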

Note that (3.10) and (3.11) only hold for vectors. The inner and outer

product of higher order blades is more complicated, not to mention the inner

and outer product for multivectors. Yet, let us try to see what they could mean.

3.5 The Inner Product

I informally demonstrated what the outer product of a vector and a bivector

looks like when I introduced trivectors. What about the dot product? What

could the dot product of a vector and a bivector look like? Figure 3.3 depicts

the result.

Notice how the inner product is the vector perpendicular to the actual projection.
In more general terms, it is the complement (within the subspace of B)
of the orthogonal projection of a onto B. [2] We will no longer call this
generalization a dot product. The generic notion of projections and perpendicularity
is captured by an operator called the inner product.

Figure 3.3: The dot product of a bivector and a vector

Unfortunately, there is not just one definition of the inner product. There

are several versions floating around, their usefulness depending on the problem

area. They are not fundamentally different however, and all of them can be

expressed in terms of the others. In fact, one could say that the flexibility

of the different inner products is one of the strengths of geometric algebra.

Unfortunately, this does not really help those trying to learn geometric algebra,

as it can be overwhelming and confusing.

The default and best known inner product [8] is very useful in Euclidian me-

chanics, whereas the contraction inner product [2], also known as the Lounesto

inner product, is more useful in computer science. Other inner products in-

clude the semi-symmetric or semi-commutative inner product, also known as

the Hestenes inner product, the modified Hestenes or (fat)dot product and the

forced Euclidean contractive inner product. [13] [5]

Obviously, because of our interest in computer science, we are most interested
in the contraction inner product. We will use the ⌋ symbol to denote a
contraction. It may seem a bit weird at first, but it will turn out to be very
useful. Luckily, for two vectors it works exactly as the traditional inner product
or dot product. For different blades, it is defined as follows [2]:

    scalars                α ⌋ β = αβ                               (3.14)
    vector and scalar      a ⌋ β = 0                                (3.15)
    scalar and vector      α ⌋ b = αb                               (3.16)
    vectors                a ⌋ b = a · b   (the usual dot product)  (3.17)
    vector, multivector    a ⌋ (b ∧ C) = (a ⌋ b) ∧ C − b ∧ (a ⌋ C)  (3.18)
    distribution           (A ∧ B) ⌋ C = A ⌋ (B ⌋ C)                (3.19)

Try to understand how the above provides a recursive definition of the contraction
operator. There are the basic rules for vectors and scalars, and there is
(3.18) for the contraction between a vector and the outer product of a vector
and a multivector. Because linearity holds over the contraction, we can decompose

contractions with multivectors into contractions with blades. Now, remember
that any blade D with grade n can be written as the outer product of a vector
b and a blade C with grade n − 1. This means that the contraction a ⌋ D can
be written as a ⌋ (b ∧ C) and consequently as (a ⌋ b) ∧ C − b ∧ (a ⌋ C) according
to (3.18). We know how to calculate a ⌋ b by definition, and we can recursively
solve a ⌋ C until the grade of C is equal to 1, which reduces it to a contraction
of two vectors.
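For the smallest interesting case, a vector contracted onto a 2-blade b ∧ c, rule (3.18) unrolls to (a · b)c − (a · c)b, because both contractions on the right-hand side are plain dot products. The sketch below is my own (helper names assumed, vectors as ordinary coordinate tuples); it also checks the geometric claim of figure 3.3 that the result is perpendicular to a.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def contract_vector_bivector(a, b, c):
    """a ⌋ (b ∧ c) = (a ⌋ b) ∧ c - b ∧ (a ⌋ c), unrolled from rule (3.18).

    The two inner contractions are scalars, so the result is simply
    a vector lying in the plane spanned by b and c.
    """
    ab, ac = dot(a, b), dot(a, c)
    return tuple(ab * ci - ac * bi for bi, ci in zip(b, c))

a, b, c = (1.0, 2.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
r = contract_vector_bivector(a, b, c)
print(r)            # (-2.0, 1.0, 0.0), inside the b, c plane
print(dot(a, r))    # 0.0 -- perpendicular to a, as in figure 3.3
```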

Obviously, this is not a very efficient way of calculating the inner product.

Fortunately, the inner product can be expressed in terms of the geometric prod-

uct (and vice versa as we’ve done before), which allows for fast calculations.

[12]

We will return to the inner product in the tools chapter. In the chapter on
applications we will see where and how the contraction product is useful. From
now on, whenever I refer to the inner product I mean any of the generalized
inner products. If I need the contraction, I will mention it explicitly. I will
allow myself to be sloppy, and continue to use the · and ⌋ symbols interchangeably.

3.6 Inner, Outer and Geometric

We saw in equation (3.2) that the geometric product for vectors could be defined
in terms of the dot (inner) and outer product. What if we use (3.10) and (3.11)
combined:

    ½(ab + ba) + ½(ab − ba) = (ab + ba + ab − ba) / 2
                            = 2ab / 2
                            = ab

This demonstrates the two possible approaches to introduce geometric algebra.
Some books [7] give an abstract definition of the geometric product, by
means of a few axioms, and derive the inner and outer product from it. Other
material [8] starts with the inner and outer product and demonstrates how the
geometric product follows from them.

You may prefer one over the other, but ultimately it is the way the geometric
product, the inner product and the outer product work together that gives
geometric algebra its strength. For two vectors a and b we have:

    ab = a · b + a ∧ b

As a result, they are orthogonal if ab = −ba, because the inner product of two
perpendicular vectors is zero. And they are collinear if ab = ba, because the
wedge of two collinear vectors is zero. If the two vectors are neither collinear
nor orthogonal, the geometric product is able to express their relationship as
'something in between'.

Chapter 4

Tools

Strictly speaking, all we need is an algebra of multivectors with the geometric

product as its operator. Nevertheless, this chapter introduces some more defi-

nitions and operators that will be of great use in many applications. If you are

tired of all this theory, I suggest you skip over this section and start with some

of the applications. If you encounter unfamiliar concepts, you can refer to this

chapter.

4.1 Grades

The grade of a blade is the dimension of the subspace it represents. Thus
multivectors have combinations of grades, as they are linear combinations of
blades. We denote the blade-part with grade s of a multivector A using ⟨A⟩ₛ.
For multivector A = (4, 8, 5, 6, 2, 4, 9, 3) ∈ Cℓ₃ we have:

    ⟨A⟩₀ = 4             scalar part
    ⟨A⟩₁ = (8, 5, 6)     vector part
    ⟨A⟩₂ = (2, 4, 9)     bivector part
    ⟨A⟩₃ = 3             trivector part

Any multivector A in Cℓₙ can be denoted as a sum of blades, like we already
did informally:

    A = Σₖ₌₀ⁿ ⟨A⟩ₖ = ⟨A⟩₀ + ⟨A⟩₁ + . . . + ⟨A⟩ₙ
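Grade projection is easy to compute when a multivector is stored as a map from basis-blade bitmask to coefficient: the grade of a blade is simply its number of set bits. A minimal sketch with my own encoding (not the paper's), using the multivector A from the example above:

```python
# Cl_3 multivector A = (4, 8, 5, 6, 2, 4, 9, 3), keyed by blade bitmask
# (bit i set means e_(i+1) is a factor, so e_13 -> 0b101).
A = {0b000: 4, 0b001: 8, 0b010: 5, 0b100: 6,
     0b011: 2, 0b101: 4, 0b110: 9, 0b111: 3}

def grade_part(A, s):
    """<A>_s: keep only the blades whose grade (popcount) equals s."""
    return {blade: coeff for blade, coeff in A.items()
            if bin(blade).count("1") == s}

print(grade_part(A, 1))   # the vector part, coefficients 8, 5, 6
print(grade_part(A, 3))   # the trivector part, coefficient 3
```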

Using this notation I can demonstrate what the inner and outer product
mean for grades. For two vectors a and b the inner product a · b results in a
scalar:

    ⟨a⟩₁ · ⟨b⟩₁ = ⟨ab⟩₀

In figure 3.3 we saw that a vector a projected onto a bivector B resulted in a
vector. Here, we'll be using the contraction product. So, in multivector
notation:

    ⟨a⟩₁ ⌋ ⟨B⟩₂ = ⟨aB⟩₂₋₁

Generalizing this for blades A and B with grade s and t respectively:

    ⟨A⟩ₛ ⌋ ⟨B⟩ₜ = ⟨AB⟩ᵤ    where u = 0 if s > t, and u = t − s if s ≤ t

We might say that the contraction inner product is a 'grade-lowering' operation.
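For basis blades the grade-lowering formula is directly checkable in code: the geometric product of two basis blades is itself a single blade, and the contraction keeps it only when its grade is exactly t − s. A sketch of my own, using the same hypothetical bitmask encoding as the earlier examples:

```python
def blade_gp(a: int, b: int) -> tuple[int, int]:
    """Sign and bitmask of the geometric product of basis blades."""
    swaps, t = 0, a >> 1
    while t:
        swaps += bin(t & b).count("1")
        t >>= 1
    return (-1 if swaps % 2 else 1), a ^ b

def blade_contraction(a: int, b: int) -> tuple[int, int]:
    """<a>_s ⌋ <b>_t as the grade t-s part of ab; zero when s > t."""
    s, t = bin(a).count("1"), bin(b).count("1")
    sign, blade = blade_gp(a, b)
    if s > t or bin(blade).count("1") != t - s:
        return 0, 0               # the selected grade part is empty
    return sign, blade

print(blade_contraction(0b001, 0b011))   # e1 ⌋ e12 = e2
print(blade_contraction(0b001, 0b110))   # e1 ⌋ e23 = 0: no shared factor
```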

And, of course, the outer product is its opposite as a grade-increasing
operation. Recall that for two 1-blades or vectors the outer product resulted in a
2-blade or bivector:

    ⟨a⟩₁ ∧ ⟨b⟩₁ = ⟨ab⟩₂

The outer product between a 2-blade and a 1-blade results in a 2 + 1 = 3-blade
or trivector. Generalizing we get for two blades A and B with grade s and t:

    ⟨A⟩ₛ ∧ ⟨B⟩ₜ = ⟨AB⟩ₛ₊ₜ

Note that A and B have to be blades. These equations do not hold when

they are arbitrary multivectors.

4.2 The Inverse

Most multivectors have a left inverse Aₗ⁻¹ satisfying Aₗ⁻¹A = 1 and a right
inverse Aᵣ⁻¹ satisfying AAᵣ⁻¹ = 1. We can use these inverses to divide a
multivector by another. Recall that the geometric product is not commutative;
therefore the left and right inverse may or may not be equal. This means that
the notation A/B is ambiguous, since it can mean both Bₗ⁻¹A and ABᵣ⁻¹.

Unfortunately, calculating the inverse of a multivector is not trivial, much
like calculating inverses of matrices is complicated for all but a few special
cases.

Luckily there is an important set of multivectors for which calculating the
inverse is very straightforward. These are called the versors, and they have the
property that they are a geometric product of vectors. A multivector A is a
versor if it can be written as:

    A = v₁v₂v₃ . . . vₖ

where v₁ . . . vₖ are vectors, i.e. 1-blades.¹ As a fortunate consequence, all blades
are versors too. For a versor A we define its reverse, using the † symbol, as:

    A† = vₖvₖ₋₁ . . . v₂v₁                                          (4.1)

¹ Remember that we use vectors to create subspaces of higher dimension, using the outer
product.
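Reversing v₁ . . . vₖ takes k(k − 1)/2 adjacent swaps, so for a blade of grade k the reverse simply flips the sign whenever k(k − 1)/2 is odd. A small sketch (the helper name is mine, not the paper's):

```python
def reverse_sign(k: int) -> int:
    """Sign of A† for a k-blade: (-1)^(k(k-1)/2)."""
    return -1 if (k * (k - 1) // 2) % 2 else 1

# Grades 0..5 give the pattern + + - - + +; for example, in Cl_2
# the reverse of I is I† = e2 e1 = -I, matching reverse_sign(2) = -1.
print([reverse_sign(k) for k in range(6)])  # [1, 1, -1, -1, 1, 1]
```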
