An Introduction to Geometric Algebra and Calculus
Alan Bromborsky Army Research Lab (Retired)
[email protected] November 15, 2014
2
Introduction Geometric algebra is the Clifford algebra of a finite dimensional vector space over real scalars cast in a form most appropriate for physics and engineering. This was done by David Hestenes (Arizona (Arizona State Universit University) y) in the 1960’s. From this start he developed the geometric geometric calculus whose fundamental theorem includes the generalized Stokes theorem, the residue theorem, and new integral integral theorems theorems not realized realized before. Hestenes Hestenes likes to say he was motivated motivated by the fact that physicists and engineers did not know how to multiply vectors. Researchers at Arizona State and Cambridge have applied these developments to classical mechanics, chanics,quan quantum tum mechanic mechanics, s, general general relativit relativity y (gauge theory theory of gravity gravity), ), projective projective geometry geometry,, conformal geometry, etc.
Contents 1 Basic Basic Geome Geometri tricc Algebr Algebra a 1.1 Axioms Axioms of Geom Geometr etric ic Alge Algebra bra . . . . . . . . . . . . . . . . . . . . . . . . 1.2 1.2 Why Why Lea Learn rn Th This is Stuff? Stuff? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 1.3 Inne Inner, r, , and outer, , product of two vectors and their basic properties . 1.4 1.4 Oute Outer, r, , product for r for r Vectors in terms of the geometric product . . . . 1.5 Altern Alternate ate Definit Definition ion of of Outer, Outer, , product for r for r Vectors . . . . . . . . . . 1.6 Useful Useful Relati Relation’ on’ss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.7 Projecti Projection on Operato Operatorr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 1.8 Basi Basiss Blad Blades es . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.1 (3, (3, 0) Geometric Algebra (Euclidian Space) . . . . . . . . . . . . 1.8.2 (1, (1, 3) Geometric Algebra (Spacetime) . . . . . . . . . . . . . . . 1.9 1.9 Refle Reflect ctio ions ns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10 Rotations Rotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.1 Definitions Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.2 General General Rotatio Rotation n . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.3 Euclidean Euclidean Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.4 Minkows Minkowski ki Case Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.11 Expansion Expansion of geometric product and generalizat generalization ion of and . . . . . . 1.12 Duality Duality and the Pseudoscal Pseudoscalar ar . . . . . . . . . . . . . . . . . . . . . . . . 1.13 Reciprocal Reciprocal Frame Framess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.14 Coordinates Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15 Linear Transformations Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15.1 Definitions Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15.2 Adjoint Adjoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15.3 Inverse Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.16 Commutato Commutatorr Product Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
· ∧
∧
∧
G G
· ·
3
∧
. . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . .
7 7 7 8 9 9 11 11 11 12 13 14 15 15 16 16 17 17 18 19 20 21 21 23 24 24
4
CONTENTS
2 Examp Examples les of Geome Geometri tricc Algebra Algebra 2.1 2.1 Quat Quater erni nion onss . . . . . . . . . . . . . . . . . . 2.2 2.2 Sp Spin inor orss . . . . . . . . . . . . . . . . . . . . . 2.3 Geometric Geometric Algebra Algebra of the Minkows Minkowski ki Plane Plane . 2.4 Lorentz Lorentz Transformat ransformation ion . . . . . . . . . . . . 3 Geometri Geometricc Calculus Calculus - The Deriv Derivativ ative e 3.1 3.1 Defin Definit itio ions ns . . . . . . . . . . . . . . . 3.2 Deriv Derivatives atives of of Scalar Scalar Functions unctions . . . 3.3 3.3 Produ Product ct Rule Rule . . . . . . . . . . . . . 3.4 Interior Interior and Exter Exterior ior Deriv Derivativ ativee . . . 3.5 Deriv Derivative ative of a Multiv Multivecto ectorr Functi Function on 3.5.1 3.5.1 Sph Spheri erical cal Coordin Coordinate atess . . . . 3.6 Analyt Analytic ic Functio unctions ns . . . . . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
4 Geometri Geometricc Calculus Calculus - Integr Integration ation 4.1 Line Line Integ Integral ralss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Surfac Surfacee Integ Integral ralss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Direct Directed ed Integ Integrat ration ion - n-dimensional n -dimensional Surfaces . . . . . . . . . . . . . . . 4.3.1 k-Simplex Definition . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.2 k-Chain Definition (Algebraic Topology) . . . . . . . . . . . . . . 4.3.3 4.3.3 Simple Simplex x Notati Notation on . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Fundament undamental al Theorem Theorem of of Geometri Geometricc Calculus Calculus . . . . . . . . . . . . . . . 4.4.1 The Fundamenta undamentall Theore Theorem m At At Last! Last! . . . . . . . . . . . . . . . . 4.5 Examples Examples of of the the Fundam Fundament ental al Theore Theorem m . . . . . . . . . . . . . . . . . . . 4.5.1 Diverge Divergence nce and Green’s Green’s Theorems Theorems . . . . . . . . . . . . . . . . . . 4.5.2 Cauchy’ Cauchy’ss Integral Integral Form Formula ula In Two Dimensio Dimensions ns (Complex (Complex Plane) Plane) . 4.5.3 4.5.3 Green’ Green’ss Functio unctions ns in N in N -dimensional -dimensional Euclidean Spaces . . . . . . . 5 Geomet Geometric ric Calcu Calculus lus on Manif Manifold oldss 5.1 Definit Definition ion of of a Vect Vector or Mani Manifol fold d . . . . . . 5.1.1 5.1.1 The Pseudo Pseudosca scalar lar of of the the Manif Manifold old . 5.1.2 5.1.2 The Projecti Projection on Operato Operatorr . . . . . . 5.1.3 5.1.3 The Exclus Exclusion ion Operato Operatorr . . . . . . 5.1.4 The Intrinsic Intrinsic Deriv Derivative ative . . . . . . 5.1.5 The Covarian Covariantt Derivativ Derivativee . . . . . . 5.2 Coordinates Coordinates and Derivativ Derivatives es . . . . . . . . 5.3 Rieman Riemannia nian n Geomet Geometry ry . . . . . . . . . . . 5.4 Manifo Manifold ld Mappin Mappings gs . . . . . . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . .
27 27 28 28 30
. . . . . . .
33 33 34 34 35 37 39 40
. . . . . . . . . . . .
43 43 44 45 45 45 46 50 55 57 57 58 60
. . . . . . . . .
65 65 65 66 67 67 68 72 74 78
CONTENTS
5
5.5 The Fundmental Theorem of Geometric Calculus on Manifolds . 5.5.1 Divergence Theorem on Manifolds . . . . . . . . . . . . . 5.5.2 Stokes Theorem on Manifolds . . . . . . . . . . . . . . . 5.6 Differential Forms in Geometric Calculus . . . . . . . . . . . . . 5.6.1 Inner Products of Subspaces . . . . . . . . . . . . . . . . 5.6.2 Alternating Forms . . . . . . . . . . . . . . . . . . . . . 5.6.3 Dual of a Vector Space . . . . . . . . . . . . . . . . . . . 5.6.4 Standard Definition of a Manifold . . . . . . . . . . . . . 5.6.5 Tangent Space . . . . . . . . . . . . . . . . . . . . . . . . 5.6.6 Differential Forms and the Dual Space . . . . . . . . . . 5.6.7 Connecting Differential Forms to Geometric Calculus . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
84 85 86 88 88 90 91 93 95 97 98
6 Multivector Calculus 6.1 New Multivector Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Derivatives With Respect to Multivectors . . . . . . . . . . . . . . . . . . . . . . 6.3 Calculus for Linear Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . .
101 . 101 . 104 . 108
7 Multilinear Functions (Tensors) 7.1 Algebraic Operations . . . . . . . . . . . . . . . . . . . 7.2 Covariant, Contravariant, and Mixed Representations . 7.3 Contraction . . . . . . . . . . . . . . . . . . . . . . . . 7.4 Differentiation . . . . . . . . . . . . . . . . . . . . . . . 7.5 From Vector/Multivector to Tensor . . . . . . . . . . . 7.6 Parallel Transport Definition and Example . . . . . . . 7.7 Covariant Derivative of Tensors . . . . . . . . . . . . . 7.8 Coefficient Transformation Under Change of Variable .
. . . . . . . .
115 . 116 . 116 . 118 . 119 . 119 . 120 . 124 . 126
. . . . . . . . . . .
129 . 129 . 129 . 130 . 132 . 134 . 135 . 137 . 138 . 143 . 144 . 145
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
8 Lagrangian and Hamiltonian Methods 8.1 Lagrangian Theory for Discrete Systems . . . . . . . . . . . . . . 8.1.1 The Euler-Lagrange Equations . . . . . . . . . . . . . . . . 8.1.2 Symmetries and Conservation Laws . . . . . . . . . . . . . 8.1.3 Examples of Lagrangian Symmetries . . . . . . . . . . . . 8.2 Lagrangian Theory for Continuous Systems . . . . . . . . . . . . . 8.2.1 The Euler Lagrange Equations . . . . . . . . . . . . . . . . 8.2.2 Symmetries and Conservation Laws . . . . . . . . . . . . . 8.2.3 Space-Time Transformations and their Conjugate Tensors 8.2.4 Case 1 - The Electromagnetic Field . . . . . . . . . . . . . 8.2.5 Case 2 - The Dirac Field . . . . . . . . . . . . . . . . . . . 8.2.6 Case 3 - The Coupled Electromagnetic and Dirac Fields .
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
6
CONTENTS
9 Lie Groups as Spin Groups 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Simple Examples . . . . . . . . . . . . . . . . . . . . . . . 9.2.1 SO (2) - Special Orthogonal Group of Order 2 . . . 9.2.2 GL (2, ) - General Real Linear Group of Order 2 . 9.3 Properties of the Spin Group . . . . . . . . . . . . . . . . . 9.3.1 Every Rotor is the Exponential of a Bivector . . . . 9.3.2 Every Exponential of a Bivector is a Rotor . . . . . 9.4 The Grassmann Algebra . . . . . . . . . . . . . . . . . . . 9.5 The Dual Space to n . . . . . . . . . . . . . . . . . . . . 9.6 The Mother Algebra . . . . . . . . . . . . . . . . . . . . . 9.7 The General Linear Group as a Spin Group . . . . . . . . 9.8 Endomorphisms of n . . . . . . . . . . . . . . . . . . . .
V
10 Classical Electromagnetic Theory 10.1 Maxwell Equations . . . . . . . . . 10.2 Relativity and Particles . . . . . . . 10.3 Lorentz Force Law . . . . . . . . . 10.4 Relativistic Field Transformations . 10.5 The Vector Potential . . . . . . . . 10.6 Radiation from a Charged Particle
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . .
147 . 147 . 148 . 148 . 149 . 149 . 149 . 153 . 154 . 154 . 155 . 160 . 169
. . . . . .
173 . 174 . 175 . 177 . 177 . 179 . 180
A Further Properties of the Geometric Product
185
B BAC-CAB Formulas
191
C Reduction Rules for Scalar Projections of Multivectors
193
D Curvilinear Coordinates via Matrices
197
E Practical Geometric Calculus on Manifolds
201
F Direct Sum of Vector Spaces
205
G sympy/galgebra evaluation of GL (n, ) structure constants
207
H Blade Orientation Theorem
211
I
Case Study of a Manifold for a Model Universe 213 I.1 The Edge of Known Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Chapter 1 Basic Geometric Algebra 1.1
Axioms of Geometric Algebra
Let ( p, q ) be a finite dimensional vector space of signature ( p, q )1 over there exists a geometric product with the properties -
V
.
Then a,b,c
∀
∈ V
(ab)c = a(bc) a(b + c) = ab + ac (a + b)c = ac + bc aa
∈
If a2 = 0 then a −1 =
1.2
1 a. a2
Why Learn This Stuff?
√ −
The geometric product of two (or more) vectors produces something “new” like the 1 with respect to real numbers or vectors with respect to scalars. It must be studied in terms of its effect on vectors and in terms of its symmetries. It is worth the effort. Anything that makes understanding rotations in a N dimensional space simple is worth the effort! Also, if one proceeds 1
To be completely general we would have to consider ( p, q, r) where the dimension of the vector space is n = p + q + r and p, q , and r are the number of basis vectors respectively with positive, negative and zero squares.
V
7
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
8
on to geometric calculus many diverse areas in mathematics are unified and many areas of physics and engineering are greatly simplified.
1.3
Inner, , and outer, properties
·
∧,
product of two vectors and their basic
The inner (dot) and outer (wedge) products of two vectors are defined by
· ≡ 12 (ab + ba)
(1.1)
a b
∧ b ≡ 12 (ab − ba) ab = a · b + a ∧ b a ∧ b = −b ∧ a
(1.2)
a
(1.3)
(1.4)
c = a + b c = (a + b)2 c2 = a 2 + ab + ba + b2 2a b = c 2 a2 b2 a b
(1.5)
a b = a b cos(θ) if a2 , b2 > 0
(1.6)
2
·
·
− − · ∈
| || |
Orthogonal vectors are defined by a b = 0. For orthogonal vectors a (a b)2
∧ b = ab.
·
∧
Now compute
2
(a
∧ b) = − (a ∧ b) (b ∧ a) (1.7) = − (ab − a · b) (ba − a · b) (1.8) = − abba − (a · b) (ab + ba) + (a · b) (1.9) = − a b − (a · b) (1.10) = −a b 1 − cos (θ) (1.11) = −a b sin (θ) (1.12) Thus in a Euclidean space, a , b > 0, (a ∧ b) ≤ 0 and a ∧ b is proportional to sin (θ). If e and any two orthonormal unit vectors in a Euclidean space then e e = −1. Who needs e are √ the −1?
2
2
2 2
2 2 2 2
2
2
2
2
2
2
⊥
⊥
1.4. OUTER, , PRODUCT FOR R VECTORS IN TERMS OF THE GEOMETRIC PRODUCT 9
∧
1.4
Outer, , product for r Vectors in terms of the geometric product
∧
1 ...ir Define the outer product of r vectors to be (εi1...r is the mixed permutation symbol)
a1
∧ . . . ∧ a ≡ r
1 r! i
1 ...ir εi1...r ai1 . . . air
(1.13)
1 ,...,ir
Thus a1
∧ . . . ∧ (a + b ) ∧ . . . ∧ a = a ∧ . . . ∧ a ∧ . . . ∧ a + a ∧ . . . ∧ b ∧ . . . ∧ a j
j
r
1
j
r
1
j
r
(1.14)
and a1
∧ ... ∧ a ∧ a ∧ ... ∧ a = − a ∧ ... ∧ a ∧ a ∧ ... ∧ a j
j+1
r
1
j+1
j
r
(1.15)
The outer product of r vectors is called a blade of grade r.
1.5
Alternate Definition of Outer,
∧, product for r Vectors
Let e1 , e2 , . . . , er be an orthogonal basis for the set of linearly independent vectors a 1 , a2 , . . . , ar so that we can write (1.16) ai = αij e j
j
Then a1 a2 . . . ar =
α1 j1 e j1
j1
=
α2 j2 e j2
...
j2
αrjr e jr
jr
α1 j1 α2 j2 . . . αrjr e j1 e j2 . . . e jr
(1.17)
j1 ,...,jr
Now define a blade of grade n as the geometric product of n orthogonal vectors. Thus the product e j1 e j2 . . . e jr in equation 1.17 could be a blade of grade r, r 2, r 4, etc. depending upon the number of repeated factors.
−
−
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
10
If there are no repeated factors in the product we have that 1 ...jr e j1 . . . e jr = ε j1...r e1 . . . er
(1.18)
Due to the fact that interchanging two adjacent orthogonal vectors in the geometric product will reverse the sign of the product and we can define the outer product of r vectors as a1
∧ ... ∧ a
r
=
1 ...jr ε j1...r α1 j1 . . . αrjr e1 . . . er
(1.19)
j1 ,...,jr
= det(α) e1 . . . er
(1.20)
Thus the outer product of r independent vectors is the part of the geometric product of the r vectors that is of grade r. Equation 1.19 is equivalent to equation 1.13. This can be proved by substituting equation 1.17 into equation 1.13 to get a1
∧ ... ∧
1 ar = r! i = =
1 r! i
1 ...ir εi1...r αi1 j1 . . . αir jr e j1 . . . e jr
(1.21)
1 ,...,ir j1 ,...,jr 1 ...ir j1 ...jr εi1...r ε1...r αi1 j1 . . . αir jr e1 . . . er
(1.22)
1 ,...,ir j1 ,...,jr
1 r! j
1 ...jr j1 ...jr ε j1...r ε1...r det(α) e1 . . . er
(1.23)
1 ,...,jr
= det(α) e1 . . . er
We go from equation 1.22 to equation 1.23 by noting that
(1.24)
1 ...ir εi1...r αi1 j1 . . . αir jr is just det (α)
i1 ,...,ir
with the columns permuted. Multiplying det (α) nant with the columns permuted.
1 ...jr by ε j1...r
gives the correct sign for the determi-
If e 1 , . . . , en is an orthonormal basis for vector space the unit psuedoscalar is defined as I = e 1 . . . en
(1.25)
In equation 1.24 let r = n and the a 1 , . . . , an be another orthonormal basis for the vector space. Then we may write (1.26) a1 . . . an = det (α) e1 . . . en Since both the a’s and the e’s form orthonormal bases the matrix α is orthogonal and det (α) = 1. All psuedoscalars for the vector space are identical to within a scale factor of 1.2 Likewise a1 . . . an is equal to I times a scale factor.
±
∧ ∧ 2
It depends only upon the ordering of the basis vectors.
±
1.6. USEFUL RELATION’S
1.6
11
Useful Relation’s
1. For a set of r orthogonal vectors, e1 , . . . , er e1
∧ ... ∧ e
r
= e 1 . . . er
(1.27)
2. For a set of r linearly independent vectors, a1 , . . . , ar , there exists a set of r orthogonal vectors, e 1 , . . . , er , such that (1.28) a1 . . . ar = e 1 . . . er
∧ ∧
If the vectors, a1 , . . . , ar , are not linearly independent then a1
∧ ... ∧ a
r
=0
(1.29)
The product a 1 . . . ar is call a “blade” of grade r. The dimension of the vector space is the highest grade any blade can have.
∧ ∧
1.7
Projection Operator
A multivector, the basic element of the geometric algebra, is made of of a sum of scalars, vectors, blades. A multivector is homogeneous (pure) if all the blades in it are of the same grade. The grade of a scalar is 0 and the grade of a vector is 1. The general multivector A is decomposed with the grade projection operator A r as (N is dimension of the vector space):
N
A =
A
r
(1.30)
r=0
As an example consider ab, the product of two vectors. Then ab = ab 0 + ab
We define A
2
(1.31)
≡ A for any multivector A
1.8
0
Basis Blades
The geometric algebra of a vector space, ( p, q ), is denoted ( p, q ) or ( ) where ( p, q ) is the signature of the vector space (first p unit vectors square to +1 and next q unit vectors square to 1, dimension of the space is p + q ). Examples are:
V
−
G
G V
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
12
p 3 1 4
q 0 3 1
Type of Space 3D Euclidean Relativistic Space Time 3D Conformal Geometry
If the orthonormal basis set of the vector space is e1 , . . . , eN , the basis of the geometric algebra (multivector space) is formed from the geometric products (since we have chosen an orthonormal basis, ei 2 = 1) of the basis vectors. For grade r multivectors the basis blades are all the combinations of basis vectors products taken r at a time from the set of N vectors. Thus the number basis blades of rank r are N , the binomial expansion coefficient and the total dimension r of the multivector space is the sum of N over r which is 2 N . r
±
1.8.1
G (3, 0) Geometric Algebra (Euclidian Space)
The basis blades for
G (3, 0) are: Grade 0 1 2 3 1 e1 e1 e2 e1 e2 e3 e2 e1 e3 e3 e2 e3
The multiplication table for the 1 e1 e2 e3 e1 e2 e1 e3 e2 e3 e1 e2 e3
G (3, 0) basis blades is
1 e1 1 e1 e1 1 e2 e1 e2 e3 e1 e3 e1 e2 e2 e1 e3 e3 e2 e3 e1 e2 e3 e1 e2 e3 e2 e3
− − − −
e2 e3 e1 e2 e2 e3 e1 e2 e1 e2 e1 e3 e2 1 e2 e3 e1 e2 e3 1 e1 e2 e3 e1 e1 e2 e3 1 e1 e2 e3 e1 e2 e3 e3 e2 e1 e3 e1 e3 e1 e2 e3
− − − −
− − − −
e1 e3 e2 e3 e1 e2 e3 e1 e3 e2 e3 e1 e2 e3 e3 e1 e2 e3 e2 e3 e1 e2 e3 e3 e1 e3 e1 e2 e1 e2 e2 e3 e1 e3 e3 1 e1 e2 e2 e1 e2 1 e1 e2 e1 1
− − − −
− − − −
− − − −
Note that the squares of all the grade 2 and 3 basis blades are 1. The highest rank basis blade (in this case e 1 e2 e3 ) is usually denoted by I and is called the pseudoscalar.
−
1.8. BASIS BLADES
1.8.2
13
G (1, 3) Geometric Algebra (Spacetime)
The multiplication table for the 1 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 0 γ 2 γ 1 γ 2 γ 0 γ 3 γ 1 γ 3 γ 2 γ 3 γ 0 γ 1 γ 2 γ 0 γ 1 γ 3 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3
1 1 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 0 γ 2 γ 1 γ 2 γ 0 γ 3 γ 1 γ 3 γ 2 γ 3 γ 0 γ 1 γ 2 γ 0 γ 1 γ 3 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3
1 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 0 γ 2 γ 1 γ 2 γ 0 γ 3 γ 1 γ 3 γ 2 γ 3 γ 0 γ 1 γ 2 γ 0 γ 1 γ 3 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3
γ 0 γ 3 γ 0 γ 3 γ 3 γ 0 γ 1 γ 3 γ 0 γ 2 γ 3 γ 0 γ 1 γ 3 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 1 γ 0 γ 1 γ 0 γ 2 γ 1 γ 2 γ 3 γ 1 γ 2 γ 0 γ 1 γ 2 γ 1 γ 2
− − − −
− − − −
G (1, 3) basis blades is
γ 0 γ 1 γ 0 γ 1 1 γ 0 γ 1 γ 0 γ 1 1 γ 0 γ 2 γ 1 γ 2 γ 0 γ 3 γ 1 γ 3 γ 1 γ 0 γ 2 γ 0 γ 1 γ 2 γ 0 γ 1 γ 2 γ 2 γ 3 γ 0 γ 1 γ 3 γ 0 γ 1 γ 3 γ 3 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 1 γ 2 γ 0 γ 2 γ 1 γ 3 γ 0 γ 3 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 2 γ 3
− − − − − −
− −
− − − − − −
− −
γ 1 γ 3 γ 2 γ 3 γ 1 γ 3 γ 2 γ 3 γ 0 γ 1 γ 3 γ 0 γ 2 γ 3 γ 3 γ 1 γ 2 γ 3 γ 1 γ 2 γ 3 γ 3 γ 1 γ 2 γ 0 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 3 γ 2 γ 3 γ 1 γ 3 γ 0 γ 1 γ 0 γ 2 1 γ 1 γ 2 γ 1 γ 2 1 γ 0 γ 2 γ 3 γ 0 γ 1 γ 3 γ 0 γ 0 γ 1 γ 2 γ 0 γ 1 γ 2 γ 0 γ 2 γ 1 γ 0 γ 2 γ 0 γ 1
−
− − −
− − − −
−
− −
−
−
− − −
γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 0 γ 2 γ 0 γ 3 γ 1 γ 1 γ 2 γ 1 γ 3 γ 0 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 2 γ 3 1 γ 0 γ 1 γ 3 γ 0 γ 1 γ 2 γ 0 γ 1 γ 3 1 γ 0 γ 0 γ 2 γ 3 γ 1 γ 2 γ 1 γ 1 γ 2 γ 3 γ 0 γ 2 γ 0 γ 2 γ 3 γ 0 γ 1 γ 3 γ 1 γ 2 γ 3 γ 1 γ 0 γ 3 γ 3 γ 2 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 0 γ 1 γ 2 γ 3 γ 2 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 3 γ 0 γ 3 γ 0 γ 2 γ 1 γ 2 γ 3 γ 1 γ 3 γ 1 γ 2 γ 0 γ 2 γ 3 γ 0 γ 1 γ 3 γ 0 γ 1 γ 2 γ 2 γ 3
− − − − − − − −
γ 0 γ 1 γ 2 γ 0 γ 1 γ 3 γ 0 γ 1 γ 2 γ 0 γ 1 γ 3 γ 1 γ 2 γ 1 γ 3 γ 0 γ 2 γ 0 γ 3 γ 0 γ 1 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 1 γ 1 γ 2 γ 3 γ 0 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 1 γ 0 γ 2 γ 3 γ 0 γ 0 γ 1 γ 3 γ 0 γ 1 γ 2 1 γ 2 γ 3 γ 2 γ 3 1 γ 1 γ 3 γ 1 γ 2 γ 0 γ 3 γ 0 γ 2 γ 3 γ 2
−
−
− −
−
− −
− −
−
−
− −
− − − − − − −
−
− − − − − − −
−
− −
− − − −
γ 0 γ 2 γ 1 γ 2 γ 0 γ 2 γ 1 γ 2 γ 2 γ 0 γ 1 γ 2 γ 0 γ 1 γ 2 γ 2 γ 0 γ 1 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 1 γ 2 γ 0 γ 2 1 γ 0 γ 1 γ 0 γ 1 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 2 γ 3 γ 0 γ 3 γ 1 γ 3 γ 1 γ 0 γ 1 γ 2 γ 3 γ 0 γ 2 γ 3 γ 3 γ 0 γ 1 γ 3 γ 0 γ 1 γ 3 γ 3 γ 1 γ 3 γ 0 γ 3
−
−
− − −
γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 0 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 1 γ 2 γ 3 γ 0 γ 1 γ 2 γ 3 γ 2 γ 3 γ 0 γ 2 γ 3 γ 0 γ 3 γ 1 γ 3 γ 0 γ 1 γ 3 γ 0 γ 2 γ 1 γ 2 γ 0 γ 1 γ 2 γ 1 γ 2 γ 3 γ 0 γ 2 γ 3 γ 2 γ 3 γ 3 γ 0 γ 1 γ 3 γ 1 γ 3 γ 0 γ 1 γ 3 γ 3 γ 0 γ 3 γ 2 γ 0 γ 1 γ 2 γ 1 γ 2 γ 0 γ 1 γ 2 γ 2 γ 0 γ 2 γ 0 γ 1 γ 0 γ 1 γ 1 γ 3 γ 0 γ 3 γ 3 γ 1 γ 2 γ 0 γ 2 γ 2 1 γ 0 γ 1 γ 1 γ 0 γ 1 1 γ 0 γ 1 γ 0 1
−
− − − − − − −
− − − − − − − −
−
− −
− − − − −
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
14
1.9
Reflections
We wish to show that a, v
∈ V → ava ∈ V and v is reflected about a if a
2
= 1.
Figure 1.1: Reflection of Vector
1. Decompose v = v +v⊥ where v is the part of v parallel to a and v⊥ is the part perpendicular to a. 2. av = av + av⊥ = v a
−
3. ava = a2 v
⊥ a since a and
−v
v⊥ are orthogonal.
v⊥ is a vector since a 2 is a scalar.
4. ava is the reflection of v about the direction of a if a 2 = 1. 5. Thus a1 . . . ar var . . . a1 a2r = 1.
2 1
∈ V and produces a composition of reflections of v if a = · ·· =
1.10. ROTATIONS
1.10 1.10.1
15
Rotations Definitions
First define the reverse of a product of vectors. If R = a1 . . . as then the reverse is R† = (a1 . . . as )† = a r . . . a1 , the order of multiplication is reversed. Then let R = ab so that RR† = (ab)(ba) = ab 2 a = a2 b2 = R † R
(1.32)
2
Let RR † = 1 and calculate RvR † , where v is an arbitrary vector. RvR†
2
= RvR† RvR † = Rv 2 R† = v 2 RR† = v 2
(1.33)
Thus RvR † leaves the length of v unchanged. Now we must also prove Rv1 R† Rv2 R† = v 1 v2 . Since Rv1 R† and Rv2 R† are both vectors we can use the definition of the dot product for two vectors
·
·
1 Rv1 R† Rv2 R† + Rv2 R† Rv1 R† 2 1 = Rv1 v2 R† + Rv2 v1 R† 2 1 = R (v1 v2 + v2 v1 ) R† 2 = R (v1 v2 ) R†
Rv1 R† Rv2 R† =
·
·
= v 1 v2 RR† = v 1 v2
· ·
Thus the transformation RvR † preserves both length and angle and must be a rotation. The normal designation for R is a rotor. If we have a series of successive rotations R 1 , R2 , . . . , Rk to be applied to a vector v then the result of the k rotations will be Rk Rk−1 . . . R1 vR †1 R2† . . . Rk† Since each individual rotation can be written as the geometric product of two vectors, the composition of k rotations can be written as the geometric product of 2 k vectors. The multivector that results from the geometric product of r vectors is called a versor of order r. A composition of rotations is always a versor of even order.
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
16
1.10.2
General Rotation θ
The general rotation can be represented by R = e 2 u where u is a unit bivector in the plane of the rotation and θ is the rotation angle in the plane.3 The two possible non-degenerate cases are u2 = 1 θ (Euclidean plane) u2 = 1 : cos θ2 + u sin θ2 u 2 = (1.34) e (Minkowski plane) u2 = 1 : cosh θ2 + u sinh θ2
±
−
−
Decompose v = v + v v where v is the projection of v into the plane defined by u. Note that v v is orthogonal to all vectors in the u plane. Now let u = e ⊥ e where e is parallel to v and of course e ⊥ is in the plane u and orthogonal to e . v v anticommutes with e and e⊥ and v anticommutes with e ⊥ (it is left to the reader to show RR † = 1).
−
1.10.3
−
Euclidean Case
For the case of u2 =
−1
Figure 1.2: Rotation of Vector
RvR† = Since v
θ cos 2
θ + e⊥ e sin 2
− v anticommutes with e
− −
v + v
v
+ e e⊥ sin
θ 2
and e ⊥ it commutes with R and
RvR† = Rv R† + v ∞
3 A
θ cos 2
e is defined as the Taylor series expansion e A =
j =0
v
Aj where A is any multivector. j!
(1.35)
1.11. EXPANSION OF GEOMETRIC PRODUCT AND GENERALIZATION OF AND 17
·
∧
(1.36)
So that we only have to evaluate †
Rv R =
θ cos 2
+ e⊥ e sin
Since v = v e
θ 2
θ 2
v cos
+ e e⊥ sin
θ 2
Rv R† = v cos(θ) e + sin (θ) e⊥
(1.37)
and the component of v in the u plane is rotated correctly.
1.10.4
Minkowski Case
For the case of u2 = 1 there are two possibilities, v2 > 0 or v2 < 0. In the first case e2 = 1 and e2⊥ = 1. In the second case e2 = 1 and e2⊥ = 1. Again v v is not affected by the rotation so that we need only evaluate
−
−
θ Rv R = cosh 2 †
θ + e⊥ e sinh 2
v
θ cosh 2
+ e e⊥ sinh
θ 2
v2 and
Note that in this case v =
Rv R† =
1.11
−
v2 > 0 : v cosh(θ) e + sinh (θ) e⊥ v2 < 0 : v cosh (θ) e sinh (θ) e⊥
−
(1.38)
Expansion of geometric product and generalization of and
·
∧
(1.39)
If Ar and B s are respectively grade r and s pure grade multivectors then Ar Bs = Ar Bs
|r−s| +
A B + ··· + A B A · B ≡ A B A ∧ B ≡ A B r
s |r−s|+2
r r
s
r
r
s
r
s min(r+s,2N −(r+s))
s |r−s|
(1.40)
s r+s
(1.41)
Thus if r + s > N then A r Bs = 0, also note that these formulas are the most efficient way of calculating A r Bs and A r Bs . Using equations 1.28 and 1.39 we can prove that for a vector a and a grade r multivector B r
·
∧ ∧
1 a Br = (aBr 2
·
− (−1)
r
Br a)
(1.42)
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
18
1 = (aBr + ( 1)r Br a) (1.43) 2 If equations 1.42 and 1.43 are true for a grade r blade they are also true for a grade r multivector (superposition of grade r blades). By equation 1.28 let Br = e 1 . . . er where the e s are orthogonal and expand a a
∧B
−
r
r
a = a⊥ +
α j e j
(1.44)
j=1
where a ⊥ is orthogonal to all the e s. Then4 r
aBr =
−
( 1) j −1 α j e j2 e1
j=1
= a Br + a
·
··· e˘ ··· e + a j
r
⊥ e1 . . . er
∧B
r
(1.45)
Now calculate r
Br a =
−
( 1)r− j α j e j2 e1
j=1
= ( 1)r−1
−
j
r
−
a⊥ e1 . . . er
r
( 1) j −1 α j e j2 e1
j=1
= ( 1)r−1 (a Br
−
r −1
· ·· e˘ ··· e − (−1)
·· · e˘ ··· e − a j
r
⊥ e1 . . . er
· −a∧B )
(1.46)
r
Adding and subtracting equations 1.45 and 1.46 gives equations 1.42 and 1.43.
1.12
Duality and the Pseudoscalar
If e 1 , . . . , en is an orthonormal basis for the vector space the the pseudoscalar I is defined by I = e 1 . . . en
(1.47)
Since one can transform one orthonormal basis to another by an orthogonal transformation the I ’s for all orthonormal bases are equal to within a 1 scale factor with depends on the ordering of the basis vectors. If A r is a pure r grade multivector (Ar = Ar r ) then
±
Ar I = Ar I n−r
4
e1 . . . ej−1 e˘j ej +1 . . . er = e 1 . . . ej−1 ej +1 . . . er
(1.48)
1.13. RECIPROCAL FRAMES
19
or A r I is a pure n
− r grade multivector. Further by the symmetry properties of I we have (1.49) IA = (−1) A I I can also be used to exchange the · and ∧ products as follows using equations 1.42 and 1.43 (n−1)r
r
r
1 aAr I ( 1)n−r Ar Ia 2 1 − − = aAr I ( 1)n r ( 1)n 1 Ar aI 2 1 = (aAr + ( 1)r Ar a) I 2 = (a Ar ) I
a (Ar I ) =
·
−− −− −
−
(1.53)
≤ n we have using equation 1.40
Ar (Bs I ) = Ar Bs I |r−(n−s)|
·
= A B I = A B I = (A ∧ B ) I r
s
r
s r+s
r
(1.51) (1.52)
∧
More generally if Ar and Bs are pure grade multivectors with r+s and 1.48
(1.50)
n−(r+s)
(1.54)
(1.55)
s
(1.56) (1.57)
Finally we can relate I to I † by I † = ( 1)
−
1.13
n(n−1)
2
I
(1.58)
Reciprocal Frames
Let e1 , . . . , en be a set of linearly independent vectors that span the vector space that are not necessarily orthogonal. These vectors define the frame (frame vectors are shown in bold face since they are almost always associated with a particular coordinate system) with volume element E n So that E n
≡ e ∧ . . . ∧ e 1
n 1
n
∝ I . The reciprocal frame is the set of vectors e , . . . , e e · e = δ , ∀i, j = 1, . . . , n i
j
i j
(1.59)
that satisfy the relation
(1.60)
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
20 The ei are constructed as follows j −1
e j = ( 1)
−1 n
∧ e ∧ . . . ∧ ˘e ∧ . . . ∧ e E So that the dot product is (using equation 1.53 since E ∝ I ) e · e = (−1) e · e ∧ e ∧ . . . ∧ ˘ e ∧ . . . ∧ e E = (−1) (e ∧ e ∧ e ∧ . . . ∧ ˘ e ∧ . . . ∧ e ) E j = 0, ∀i = −
e1
2
j
n
(1.61)
−1 n
i
j −1
j
j −1
i
i
1
2
1
−1 n n −1 n n
j
2
j
(1.62)
(1.63) (1.64)
and e1 e1 = e 1
·
1.14
·
= (e1 =1
−1 n −1 n n
∧ . . . ∧ e E ∧ e ∧ . . . ∧ e ) E e2
n
2
(1.65) (1.66) (1.67)
Coordinates
The reciprocal frame can be used to develop a coordinate representation for multivectors in an arbitrary frame e1 , . . . , en with reciprocal frame e1 , . . . , en . Since both the frame and it’s reciprocal span the base vector space we can write any vector a in the vector space as a = a i ei = a i ei
(1.68)
where if an index such as i is repeated it is assumes that the terms with the repeated index will be summed from 1 to n. Using that ei e j = δ ji we have
·
ai = a ei
· = a · e
ai
(1.69)
i
(1.70)
In tensor notation a i would be the covariant representation and a i the contravariant representation of the vector a. Now consider the case of grade 2 and grade 3 blades: i
i
· ∧ b) = a · e b − b · e a e a · e b − b · e a = ab − ba = 2a ∧ b e · (a ∧ b ∧ c) = a · e b ∧ c − b · e a ∧ c + c · e a ∧ b a · e b ∧ c − b · e a ∧ c + c · e a ∧ b = ab ∧ c − ba ∧ c + ca ∧ b = 3a ∧ b ∧ c i
ei
ei (a
i
i
i
i
i
i
i
i
i
1.15. LINEAR TRANSFORMATIONS
21
for an r-blade A r we have (the proof is left to the reader) ei ei Ar = rA r
·
(1.71)
Since ei ei = n we have ei ei
∧A
r
= e i ei Ar
i
−e ·A
r
= (n
− r) A
r
(1.72)
Flipping ei and A r in equations 1.71 and 1.72 and subtracting equation 1.71 from 1.72 gives r
ei Ar ei = ( 1) (n
−
− 2r) A
r
(1.73)
In Hestenes and Sobczyk (3.14) it is proved that
ekr
k1
∧ . . . ∧ e · (e ∧ . . . ∧ e
jr )
j1
= δ jk11 δ jk22 . . . δ jk rr
(1.74)
so that the general multivector A can be expanded in terms of the blades of the frame and reciprocal frame as ek (1.75) A = Aij ···k ei e j
∧ ∧···∧
i
where
Aij ···k = (ek
∧···∧e ∧e )·A j
i
(1.76)
The components A ij···k are totally antisymmetric on all indices and are usually referred to as the components of an antisymmetric tensor .
1.15
Linear Transformations
1.15.1
Definitions
Let f be a linear transformation on a vector space f : with f (αa + βb) = αf (a) + βf (b) and α, β . Then define the action of f on a blade of the geometric algebra by a, b
∀ ∈ V
V → V
∈
f (a1
∧ . . . ∧ a ) = f (a ) ∧ . . . ∧ f (a ) and the action of f on any two A, B ∈ G (V ) by r
1
f (αA + βB ) = αf (A) + βf (B)
1
(1.77)
(1.78)
CHAPTER CHAPTER 1. BASIC GEOMETRI GEOMETRIC C ALGEBRA ALGEBRA
22
Since any multivector A can be expanded as a sum of blades f (A ( A) is defined defined.. This This has many consequences. Consider the following definition for the determinant of f of f ,, det det (f ). f ). ( I ) = det det (f ) I f (I
(1.79)
First show that this definition is equivalent to the standard definition of the determinant (again ). e1 , . . . , eN is an orthonormal basis for ).
V
N
(er ) = f (e
ars es
(1.80)
s=1
Then
∧ ∧ N
( I ) = f (I
N
a1s1 es
...
s1 =1
=
aN sN es
sN =1
a1s1 . . . aNs N es1 . . . esN
(1.81)
...sN es1 . . . esN = ε 1s1...N e1 . . . eN
(1.82)
s1 ,...,sN
But so that
( I ) = f (I
...sN ε1s1...N a1s1 . . . aN sN I
(1.83)
...sN ε1s1...N a1s1 . . . aNs N
(1.84)
s1 ,...,sN
or det(f det(f ) =
s1 ,...,sN
which which is the standard standard definitio definition. n. Now Now comput computee the determi determinan nantt of the product product of the linear linear transformations f transformations f and g det(f det(f g) I = f g (I ) = f (g ( g (I )) )) = f (det(g (det(g) I ) = det(g det(g) f (I (I ) = det(g det(g)det(f )det(f )) I
(1.85)
or det(f det(f g) = det det (f )det( f )det(gg)
(1.86)
Do you have any idea of how miserable that is to prove from the standard definition of determinant?
1.15. LINEAR TRANSFORMA TRANSFORMATIONS TIONS
1.15 1.15.2 .2
23
Adjo Adjoin intt
If F is F is linear transformation and a and b are two arbitrary vectors the adjoint function, F , F , is defined by ( b) = b F (a ( a) (1.87) a F (b
·
·
From the definition the adjoint is also a linear transformation. For an arbitrary frame e 1 , . . . , en we have ( a) = a F ( ( ei ) (1.88) ei F (a
·
·
So that we can explicitly construct the adjoint as
· · · ·
( a) = e i ei F (a ( a) F (a
= e i (a F ( ( ei ))
= e i F ( ( ei ) e j a j
(1.89)
so that F that F ij = F ( ( ei ) e j is the matrix representation of F of F for for the e1 , . . . , en frame. However
·
( a) = e i F e j F (a
ei a j
(1.90)
so that the matrix representation of F is F ij ( e j ) ei . If the e 1 , . . . , en are orthonormal then F is F ij = F ( all j and and F F ij = F ji e j = e j for all j ji exactly the same as the adjoint in matrices.
·
Other basic properties of the adjoint are: ( a) = e i a F ( ( ei ) = e i ei F (a ( a) = F (a ( a) F (a
·
(1.91)
·
and
· · · ·
F G (a) = e i ei F G (a)
= e i (a F (G ( G (ei ))) = e i F (a ( a) G (ei ) = e i ei G F (a ( a) = G F (a ( a)
(1.92)
so that F = F and F G = G F . symmetricc function function is one where F = F . examp mple le F . A symmetri F . As an exa consider F consider F F (1.93) F F = F F = F F
CHAPTER CHAPTER 1. BASIC GEOMETRI GEOMETRIC C ALGEBRA ALGEBRA
24
1.15 1.15.3 .3
Inv In verse erse
Another linear algebraic relation in geometric algebra is −1
f
(I −1 A) I f (I (A) = det(f det(f ))
∀A ∈ G (V )
(1.94)
where f is ( b) = b f (a ( a) a, b f is the adjoint transformation defined by a f (b explicit formula for the inverse of a linear transformation!
·
1.16 1.16
∀ ∈ V and and you have an
·
Comm Commutato utatorr Product Product
The commutator product of two multivectors A and B is defined as
× ≡ 12 (AB (AB − BA) BA)
(1.95)
A B
An important theorem for the commutator product is that for a grade 2 multivector, A 2 = A 2 , and a grade r grade r multivector B multivector B r = B r we have
A2 Br = A2
∧ B + A + A ×B + A + A · B r
2
r
2
r
(1.96)
From the geometric product grade expansion for multivectors we have A2 Br = A2 Br
r+2 +
A B + A B
2
r r
2
r |r−2|
(1.97)
Thus we must show that = A 2 Br
A B 2
×
r r
n
Let e1 , . . . , en be an orthogonal set for the vector space where Br = e 1 . . . er and A2 =
αlm el em
l
so we can write
n
A2 Br =
×
αlm el em
l
×
Now consider the following three cases
1. l and and m where e l em e1 . . . er = e 1 . . . er el em m > r where e 2. l
(1.98)
≤ r and and m where e e m > r where e
l m e1 . . . er
=
−e
1
. . . er el em
(e1 . . . er )
(1.99)
1.16. COMMUTA COMMUTATOR PRODUCT 3. l and and m m
≤ r where e where e e
l m e1
25
. . . er = e 1 . . . er el em
For case 1 and 3 el em commute with Br and the contribution to the commutator product is zero zero.. In cas casee 3 el em anticommutes with Br and thus are the only terms that contribute to the commuta commutator tor.. All these these terms terms are of grade r and the theorem is proved. proved. Additionall Additionally y, the commutator product obeys the Jacobi identity + B (A C ) A (B C ) = (A B ) C + B
× ×
× ×
× ×
This is important for the geometric algebra treatment of Lie groups and algebras.
(1.100)
26
CHAPTER 1. BASIC GEOMETRIC ALGEBRA
Chapter 2 Examples of Geometric Algebra 2.1
Quaternions
Any multivector A
∈ G (3, 0) may be written as A = α + a + B + βI
(2.1)
where α, β ,a (3, 0), B is a bivector, and I is the unit pseudoscalar. The quaternions are the multivectors of even grades
∈ ∈ V
A = α + B
(2.2)
B can be represented as B = α i + β j + γ k
(2.3)
where i = e 2 e3 , j = e 1 e3 , and k = e 1 e2 , and
i2 = j 2 = k 2 = ijk =
−1.
(2.4)
The quaternions form a subalgebra of (3, 0) since the geometric product of any two quaternions is also a quaternion since the geometric product of two even grade multivector components is a even grade multivector. For example the product of two grade 2 multivectors can only consist of grades 0, 2, and 4, but in (3, 0) we can only have grades 0 and 2 since the highest possible grade is 3.
G
G
27
CHAPTER 2. EXAMPLES OF GEOMETRIC ALGEBRA
28
2.2
Spinors
The general definition of a spinor is a multivector, ψ ( p, q ), such that ψvψ † ( p, q ) v ( p, q ). Practically speaking a spinor is the composition of a rotation and a dilation (stretching or shrinking) of a vector. Thus we can write
∈ G
V
∈ V
ψvψ † = ρRvR†
∀ ∈ (2.5)
where R is a rotor RR† = 1 . Letting U = R † ψ we must solve U vU † = ρv
(2.6)
U must generate a pure dilation. The most general form for U based on the fact that the l.h.s of equation 2.6 must be a vector is (2.7) U = α + βI so that
− −
U vU † = α 2 v + αβ Iv + vI † + β 2 IvI † = ρv Using vI † = ( 1)
−
(n−1)(n−2) 2
2
(n
(2.8)
Iv, vI † = ( 1)n−1 I † v, and II † = ( 1)q we get
−
α v + αβ 1 + ( 1) If
−
(n−1)(n−2) 2
Iv + ( 1)n+q−1 β 2 v = ρv
(2.9)
− 1) (n − 2) is even β = 0 and α = 0, otherwise α, β = 0. For the odd case 2
ψ = R (α + βI )
(2.10)
where ρ = α 2 + ( 1)n+q−1 β 2 . In the case of (1, 3) (relativistic space time) we have ρ = α 2 + β 2 , ρ > 0.
−
2.3
G
Geometric Algebra of the Minkowski Plane
Because of Relativity and QM the Geometric Algebra of the Minkowski Plane is very important for physical applications of Geometric Algebra so we will treat it in detail.
2.3. GEOMETRIC ALGEBRA OF THE MINKOWSKI PLANE
29
Let the orthonormal basis vectors for the plane be γ 0 and γ 1 where γ 02 = geometric product of two vectors a = a0 γ 0 + a1 γ 1 and b = b 0 γ 0 + b1 γ 1 is
2 1
= 1.1 Then the
(2.11) (2.12) (2.13)
−γ
ab = (a0 γ 0 + a1 γ 1 ) (b0 γ 0 + b1 γ 1 ) = a0 b0 γ 02 + a1 b1 γ 12 + (a0 b1 a1 b0 ) γ 0 γ 1 = a0 b0 a1 b1 + (a0 b1 a1 b0 ) I
−
−
−
so that
·
a b = a 0 b0 and
0 1
and
I 2 = γ 0 γ 1 γ 0 γ 1 = Thus ∞
e
1 1
∧ b = (a b − a b ) I
a
αI
−a b
= =
2 2 0 1
i=0 ∞
i=0
∞
i=0
(2.15)
=1
αi I i i!
α2i + (2i)!
1 0
−γ γ
(2.14)
(2.16)
(2.17)
α2i+1 I (2i + 1)!
= cosh (α) + sinh (α) I
(2.18)
(2.19)
since I 2i = 1.
In the Minkowski plane all vectors of the form a ± = α (γ 0 γ 1 ) are null a2± = 0 . One question to answer are there any vectors b ± such that a ± b± = 0 that are not parallel to a± .
±
·
∓
a± b± = α b± b± 0 1 = 0 ± ± b0 b1 = 0 b± b± 0 = 1
·
∓
±
Thus b ± must be proportional to a ± and the are no vectors in the space that can be constructed that are normal to a± . Thus there are no non-zero bivectors, a b, such that (a b)2 = 0. Conversely, if a b = 0 then (a b)2 > 0.
∧
∧
∧
∧
Finally for the condition that there always exist two orthogonal vectors e 1 and e 2 such that a
∧ b = e e
we can state that neither e 1 nor e 2 can be null. 1
I = γ 0 γ 1
1 2
(2.20)
CHAPTER 2. EXAMPLES OF GEOMETRIC ALGEBRA
30
2.4
Lorentz Transformation
We now have all the tools needed to derive the Lorentz transformation with Geometric Algebra. Consider a two dimensional time-like plane with with coordinates t 2 and x 1 and basis vectors γ 0 and γ 1 . Then a general space-time vector in the plane is given by x = tγ 0 + x1 γ 1 = t γ 0 + x1 γ 1
(2.21)
where the basis vectors of the two coordinate systems are related by γ µ = Rγ µ R† µ = 0, 1
(2.22)
and R is a Minkowski plane rotor R = sinh so that
α α + cosh γ 1 γ 0 2 2
(2.23)
Rγ 0 R† = cosh(α) γ 0 + sinh (α) γ 1
(2.24)
Rγ 1 R† = cosh(α) γ 1 + sinh (α) γ 0
(2.25)
and Now consider the special case that the primed coordinate system is moving with velocity β in the direction of γ 1 and the two coordinate systems were coincident at time t = 0. Then x 1 = βt and x 1 = 0 so we may write (2.26) tγ 0 + βtγ 1 = t Rγ 0 R† t (γ 0 + βγ 1 ) = cosh (α) γ 0 + sinh (α) γ 1 t Equating components gives
t t t sinh(α) = β t cosh (α) =
Solving for α and
(2.27)
(2.28)
(2.29)
(2.30)
t in equations 2.28 and 2.29 gives t tanh(α) = β
2
We let the speed of light c = 1.
2.4. LORENTZ TRANSFORMATION t = γ = t
31 1
− 1
(2.31)
β 2
Now consider the general case of x, t and x , t giving
tγ 0 + xγ 1 = t Rγ 0 R† + x Rγ 1 R† = t γ (γ 0 + βγ 1 ) + x γ (γ 1 + βγ 0 )
(2.32) (2.33)
Equating basis vector coefficients recovers the Lorentz transformation t = γ (t + βx ) x = γ (x + βt )
(2.34)
32
CHAPTER 2. EXAMPLES OF GEOMETRIC ALGEBRA
Chapter 3 Geometric Calculus - The Derivative 3.1
Definitions
If F (a) if a multivector valued function of the vector a, and a and b are any vectors in the space then the derivative of F is defined by b
· ∇F ≡ lim F (a + b) − F (a)
→0
(3.1)
then letting b = ek be the components of a coordinate frame with x = xk ek (we are using the summation convention that the same upper and lower indices are summed over 1 to N ) we have ek
F (x j e j + ek ) F = lim →0
·∇
j
− F (x e ) j
(3.2)
Using what we know about coordinates gives
∇F = e ∂F = e ∂ F ∂x j
j
j
j
∇ as a symbolic operator we may write ∇ = e ∂
(3.3)
or looking at
j
j
(3.4)
Due to the properties of coordinate frame expansions F is independent of the choice of the ek frame. If we consider x to be a position vector then F (x) is in general a multivector field.
∇
33
CHAPTER 3. GEOMETRIC CALCULUS - THE DERIVATIVE
34
3.2
Derivatives of Scalar Functions
If f (x) is scalar valued function of the vector x then the derivative is k
∇f = e ∂ f
k
(3.5)
which is the standard definition of the gradient of a scalar function (remember that in an orthonormal coordinate system e k = e k ). Using equation 3.5 we can show the following results for the gradient of some specific scalar functions f = x a, xk , xx f = a, ek , 2x
·
∇ 3.3
(3.6)
Product Rule
Let represent a bilinear product operator such as the geometric, inner, or outer product and note that for the multivector fields F and G we have
◦
∂ k (F G) = (∂ k F ) G + F (∂ k G)
◦
◦
(3.7)
◦
so that
∇ (F ◦ G) = e
k
= e k
((∂ k F ) G + F (∂ k G))
◦ ◦ (∂ F ) ◦ G + e F ◦ (∂ G) k
k
(3.8)
k
However since the geometric product is not communicative, in general
∇ (F ◦ G) = (∇F ) ◦ G + F ◦ (∇G)
(3.9)
The notation adopted by Hestenes is
∇ (F ◦ G) = ∇F ◦ G + ∇˙ F ◦ G˙
(3.10)
The convention of the overdot notation is i. In the absence of brackets,
∇ acts on the object to its immediate right
ii. When the
∇ is followed by brackets, the derivative acts on all the the terms in the brackets. iii. When the ∇ acts on a multivector to which it is not adjacent, we use overdots to describe the scope.
Note that with the overdot notation the expression A˙ ˙ makes sense!
∇
3.4. INTERIOR AND EXTERIOR DERIVATIVE
3.4
35
Interior and Exterior Derivative
The interior and exterior derivatives of an r-grade multivector field are simply defined as (don’t forget the summation convention) r r−1 = e
k
· ∂ A
r r+1 = e
k
∧ ∂ A
∇ · A ≡ ∇A r
k
r
(3.11)
(3.12)
and
∇ ∧ A ≡ ∇A r
k
r
Note that i
j
∇ ∧ (∇ ∧ A ) = e ∂ e ∧ ∂ A = e ∧ e ∧ (∂ ∂ A ) r
i
i
j
j
r
i j
r
=0
since ei
j
j
i
(3.13)
∧ e = −e ∧ e , but ∂ ∂ A = ∂ ∂ A . ∇ · (∇ · A ) = e · ∂ e · ∂ A = e · e · (∂ ∂ A ) = ±e · e · (∂ ∂ A I ) = ±e · e ∧ (∂ ∂ A ) I = ± e ∧ e ∧ (∂ ∂ A ) I i j
r
j i
r
i
r
i
j
i
j
i
r
i j
j
i
r
∗ r
i j
j
i
=0
j
i j
j
∗ r
i j
∗ r
(3.14)
Where ∗ indicates the dual of a multivector, A ∗ = AI (I is the pseudoscalar and A = I 2 = 1), and we use equation 1.53 to exchange the inner and outer products.
±
Thus for the general multivector field A (built from sums of A r ’s) we have ( A) = 0. If φ is a scalar function we also have
∇· ∇·
i i
=0
∇∧ (∇ ∧ A) = 0 and
j
∇ ∧ (∇φ) = e ∧ ∂ e ∂ φ = e ∧ e ∂ ∂ φ i j
∗
±A I since
j
i j
(3.15)
Another use for the overdot notation would in the case where f (x, a) is a linear function of its second argument (f (x,αa + βb) = αf (x, a) + βf (x, b)) and a is a general function of position
CHAPTER 3. GEOMETRIC CALCULUS - THE DERIVATIVE
36
(a(x) = ai (x) ei ). Now calculate
∂ k ∂ = e f (x, a) f x, ai (x) ei k k ∂x ∂x ∂ = e k k ai (x) f (x, ei ) ∂x i k ∂a i k ∂ = e e ) + a e f (x, f (x, ei ) i ∂x k ∂x k ∂a ∂ = e k f x, k + ai ek k f (x, ei ) ∂x ∂x
∇f (x, a) = e
k
(3.17) (3.18)
Defining
∇˙ f ˙ (a) ≡
Then suppressing the explicit x dependence of f we get
∇˙ f ˙ (a) = ∇f (a) − e f k
(3.19)
∂ k ∂ ) = e e ae f (x, f (x, a) i ∂x k ∂x k i k
(3.16)
(3.20)
a=constant
∂a ∂x k
(3.21)
Other basic results (examples) are
∇x · A = rA ∇x ∧ A = (n − r) A ∇˙ A ˙x = (−1) (n − 2r) A r
r
(3.22)
r
r
r
r
(3.23)
r
(3.24)
The basic identities for the case of a scalar field α and multivector field F are
∇ (αF ) = (∇α) F + α∇F ∇ · (αF ) = (∇α) · F + α∇ · F ∇ ∧ (αF ) = (∇α) ∧ F + α∇ ∧ F
(3.25)
(3.26)
(3.27)
if f 1 and f 2 are vector fields
∇ ∧ (f ∧ f ) = (∇ ∧ f ) ∧ f − (∇ ∧ f ) ∧ f 1
2
1
2
2
1
(3.28)
and finally if F r is a grade r multivector field
∇ · (F I ) = (∇ ∧ F ) I r
where I is the psuedoscalar for the geometric algebra.
r
(3.29)
3.5. DERIVATIVE OF A MULTIVECTOR FUNCTION
3.5
37
Derivative of a Multivector Function
For a vector space of dimension N spanned by the vectors ui the coordinates of a vector x are the xi = x ui so that x = x i ui (summation convention is from 1 to N ). Curvilinear coordinates for that space are generated by a one to one invertible differentiable mapping from x1 , . . . , xN θ1 , . . . , θN where the θ i are called the curvilinear coordinates. If the mapping is given by x θ1 , . . . , θN = x i θ1 , . . . , θN ui then the basis vectors associated with the transformation are given by ∂x ∂x i ek = = k ui (3.30) ∂θ k ∂θ The one critical relationship that is required to express the geometric derivative in curvilinear coordinated is ∂θ k i k e = u (3.31) ∂x i The proof is
·
k
e j e =
·
= = =
∂x m ∂θ k um un j n ∂θ ∂x ∂x m ∂θ k n δ ∂θ j ∂x n m ∂x m ∂θ k ∂θ j ∂x m ∂θ k = δ jk j ∂θ
↔
(3.32)
·
(3.33)
(3.34)
(3.35)
We wish to express the geometric derivative of an R-grade multivector field F R in terms of the curvilinear coordinates. Thus
∇
∂F R F R = u i i ∂x
i ∂θ
= u
k
∂x i
∂F R k ∂F R = e ∂θ k ∂θ k
(3.36)
Note that if we start by defining the ek ’s the reciprocal frame vectors ek can be calculated geometrically (we do not need the inverse partial derivatives). Now define a new blade symbol by (3.37) e[i1 ,...,iR ] = e i1 . . . eiR
∧ ∧
and represent an R-grade multivector function F by F = F i1 ...iR e[i1 ,...,iR ]
(3.38)
CHAPTER 3. GEOMETRIC CALCULUS - THE DERIVATIVE
38 Then
∇
∂F i1...iR k i1 ...iR k ∂ e e + F e e[i ,...,i ] F = [i ,...,i ] 1 R ∂θ k ∂θ k 1 R
Define
≡
C e[i1 ,...,iR ]
∂ e[i ,...,i ] ∂θ k 1 R
ek
(3.39)
(3.40)
Where C e[i1,...,iR ] are the connection multivectors for each base of the geometric algebra and we can write ∂F i1 ...iR k = e e[i1 ,...,iR ] + F i1 ...iR C e[i1 ,...,iR ] (3.41) F k ∂θ
∇
Note that all the quantities in the equation not dependent upon the F i1...iR can be directly calculated if the ek θ1 , . . . , θN is known so further simplification is not needed.
In general the ek ’s we have defined are not normalized so define
|e | = k
ˆk = e and note that eˆ2k =
| | e2k
ek ek
(3.42)
| |
(3.43)
±1 depending upon the metric. Note also that ˆ = |e | e e k
since e ˆ j eˆk =
·
k
k
| | · | | e j e j
ek ek
= δ jk
(3.44)
|e | = δ |e | j
k
j k
(3.45)
so that if F R is represented in terms of the normalized basis vectors we have F R = F Ri1 ...iR eˆ[i1 ,...,iR ]
(3.46)
and the geometric derivative is now ∂F i1 ...iR eˆk ˆ[i1 ,...,iR ] + F i1...iR ˆ e F = C eˆ[i1 ,...,iR ] k ∂θ ek
∇ and
| |
e ˆk ∂ ˆ ˆ e[i ,...,i ] C eˆ[i1 ,...,iR ] = ek ∂θ k 1 R
| |
(3.47)
(3.48)
3.5. DERIVATIVE OF A MULTIVECTOR FUNCTION
3.5.1
39
Spherical Coordinates
For spherical coordinates the coordinate generating function is: x = r (cos (θ) uz + sin (θ)(cos(φ) ux + sin (φ) uy ))
(3.49)
so that er = cos (θ)(cos(φ) ux + sin (φ) uy ) + sin (θ) uz eθ = r (
(3.50) (3.51) (3.52)
− sin(θ)(cos(φ) u + sin (φ) u ) + cos (θ) u ) e = r cos(θ) (− sin(φ) u + cos (φ) u ) x
φ
y
x
z
y
where
|e | = 1 |e | = r |e | = r sin(θ) r
θ
φ
(3.53)
and
ˆr = cos (θ)(cos(φ) ux + sin (φ) uy ) + sin (θ) uz e ˆθ = e
− sin(θ)(cos(φ) u + sin (φ) u ) + cos (θ) u ˆ = − sin(φ) u + cos (φ) u e x
φ
y
x
z
y
(3.54) (3.55) (3.56)
the connection mulitvectors for the normalize basis vectors are ˆ eˆr = 2 C r ˆ eˆθ = cos(θ) + 1 eˆr eˆθ C r sin(θ) r ˆ eˆφ = 1 eˆr eˆφ + cos(θ) eˆθ eˆφ C r r sin(θ) cos(θ) 1 ˆr eˆθ = ˆr + eˆθ e e r sin(θ) r 1 cos(θ) ˆr eˆφ = eˆφ ˆr eˆθ eˆφ e e r r sin(θ) 2 e ˆθ eˆφ = eˆr eˆθ eˆφ r ˆθ eˆφ = 0 e
{ } { }
{ }
ˆ C ˆ C ˆ C
∧
∧
{ ∧ }
−
∧ ∧
(3.58)
∧
{ ∧ } −
{ ∧ } ˆ {eˆ ∧ ∧ } C r
(3.57)
(3.60)
∧ ∧
(3.59)
(3.61) (3.62) (3.63)
CHAPTER 3. GEOMETRIC CALCULUS - THE DERIVATIVE
40
For a vector function A using equation 3.41 and that
∇A = ∇ · A + ∇ ∧ A
∇ − ∇ ∧ − − − − − ∇ − −
1 ∇ · A = r sin(θ) =
Aθ cos(θ) + ∂ φ Aφ +
1 2Ar + ∂ θ Aθ + ∂ r Ar r
(3.64)
1 1 2 r + ∂ r A ∂ θ sin(θ) Aθ + ∂ φ Aφ r 2 r r sin(θ)
×
I (
A =
=
×
(3.65)
(3.66)
A)
1 ∂ θ Aφ + Aφ cos(θ) ∂ φ Aθ r r sin(θ) ∂ φ Ar A φ + ∂ r Aφ eˆθ r sin(θ) r Aθ ∂ θ Ar θ ˆφ + + ∂ r A e r r
A =
ˆr e
(3.68)
1 ∂ θ sin(θ) Aφ ∂ φ Aθ eˆr r sin(θ) 1 1 + ∂ φ Ar ∂ r rAφ eˆθ r sin(θ) 1 + ∂ r rAθ ∂ θ Ar eˆφ r
(3.67)
(3.69)
(3.70) (3.71) (3.72)
These are the standard formulas for div and curl in spherical coordinates.
3.6
Analytic Functions
Starting with Then we have
G (2, 0) and orthonormal basis vectors e
x
and ey so that I = ex ey and I 2 =
r = x ex + y ey
−1.
(3.73)
∂ ∂ + ey ∂x ∂y
(3.74)
z = x + Iy = e x r
(3.75)
∇ = e
x
Map r onto the complex number z via
3.6. ANALYTIC FUNCTIONS
41
Define the multivector field ψ = u + Iv where u and v are scalar fields. Then
∇ψ =
∂u ∂x
−
∂ v ∂y
ex +
∂v ∂ u + ey ∂x ∂y
(3.76)
Thus the statement that ψ is an analytic function is equivalent to
∇ψ = 0
(3.77)
This is the fundamental equation that can be generalized to higher dimensions remembering that in general that ψ is a multivector rather than a scalar function! To complete the connection with complex analysis we define (z † = x Iy) 1 ∂ = ∂z 2
−
∂ ∂x
−
∂ I ∂y
1 ∂ = , 2 ∂z †
∂ ∂ + I ∂x ∂y
(3.78)
so that
∂z ∂z † = 1, =0 ∂z ∂z † (3.79) ∂z ∂z = 0, =1 ∂z † ∂z † An analytic function is one that depends on z alone so that we can write ψ (x + Iy) = ψ (z ) and ∂ψ (z ) =0 ∂z † equivalently 1 2
∂ ∂ + I ∂x ∂y
Now it is simple to show why solutions to
ψ =
(3.80)
1 ex ψ = 0 2
(3.81)
∇
∇ψ = 0 can be written as a power series in z . First ∇z = ∇ (e r) x
∂ r ∂ r + ey ex ∂x ∂y = e x ex ex + ey ex ey = e x ex =0 = e x ex
−
so that
∇ (z − z ) 0
k
= k
(3.82)
∇ (e r − z ) (z − z ) x
0
0
k−1
=0
(3.83)
42
CHAPTER 3. GEOMETRIC CALCULUS - THE DERIVATIVE
Chapter 4 Geometric Calculus - Integration 4.1
Line Integrals
If F (x) is a multivector field and x (λ) is a parametric representation of a vector path (curve) then the line Integral of F along the path x is defined to be
dx F (x) dλ = dλ
n
F dx
≡
lim
n→∞
¯ i ∆xi F
(4.1)
i=1
where
¯ i = 1 (F (xi−1 ) + F (xi )) (4.2) F 2 dx dx if xn = x 1 the path is closed. Since dx is a vector, that is F (x) = F (x), a more general dλ dλ line integral would be dx (4.3) F (x) G (x) dλ = F (x) dx G (x) dλ ∆xi = x i
−x
i−1 ,
The most general form of line integral would be
L (∂ λ x; x)
dλ =
L (dx)
(4.4)
where L (a) = L (a; x) = is a multivector-valued linear function of a. The position dependence in L can often be suppressed to streamline the notation. 43
CHAPTER 4. GEOMETRIC CALCULUS - INTEGRATION
44
4.2
Surface Integrals
The next step is a directed surface integral. Let F (x) be a multivector field and let a surface be parametrized by two coordinates x (x1 , x2 ). Then we can define a directed surface measure by dX =
∂x ∂x 1
∂x ∧ ∂x
2
dx1 dx2 = e 1
1
2
(4.5)
∧ e dx dx 2
A directed surface integral takes the form
F dX =
F e1
1
2
(4.6)
∧ e dx dx 2
In order to construct some of the more important proof it is necessary to express the surface integral as the limit of a sum. This requires the concept of a triangulated surface as shown Each
Figure 4.1: Triangulated Surface triangle in the surface is described by a planar simplex as shown The three vertices of the planar simplex are x 0 , x 1 , and x 2 with the vectors e1 and e2 defined by e1 = x 1
−x , 0
e2 = x 2
−x
0
(4.7)
so that the surface measure of the simplex is
≡ 12 e ∧ e = 12 (x ∧ x + x ∧ x + x ∧ x )
∆X
1
2
1
2
2
0
0
1
(4.8)
with this definition of ∆X we have
n
≡
F dX
lim
n→∞
k=1
¯ k is the average of F over the kth simplex. where F
¯ k ∆X k F
(4.9)
4.3. DIRECTED INTEGRATION - N -DIMENSIONAL SURFACES
45
Figure 4.2: Planar Simplex
4.3 4.3.1
Directed Integration - n-dimensional Surfaces k-Simplex Definition
In geometry, a simplex or k-simplex is an k-dimensional analogue of a triangle. Specifically, a simplex is the convex hull of a set of (k + 1) affinely independent points in some Euclidean space of dimension k or higher. For example, a 0-simplex is a point, a 1-simplex is a line segment, a 2-simplex is a triangle, a 3simplex is a tetrahedron, and a 4-simplex is a pentachoron (in each case including the interior). A regular simplex is a simplex that is also a regular polytope. A regular k-simplex may be constructed from a regular (k 1)-simplex by connecting a new vertex to all original vertices by the common edge length.
−
4.3.2
k-Chain Definition (Algebraic Topology)
A finite set of k-simplexes embedded in an open subset of n is called an affine k-chain. The simplexes in a chain need not be unique, they may occur with multiplicity. Rather than using standard set notation to denote an affine chain, the standard practice to use plus signs to separate each member in the set. If some of the simplexes have the opposite orientation, these are prefixed by a minus sign. If some of the simplexes occur in the set more than once, these are prefixed with an integer count. Thus, an affine chain takes the symbolic form of a sum with integer coefficients.
CHAPTER 4. GEOMETRIC CALCULUS - INTEGRATION
46
4.3.3
Simplex Notation
If (x0 , x1 , . . . , xk ) is the k-simplex defined by the k + 1 points x 0 , x1 , . . . , xk . This is abbreviated by (x)(k) = (x0 , x1 , . . . , xk ) (4.10) The order of the points is important for a simplex, since it specifies the orientation of the simplex. If any two adjacent points are swapped the simplex orientation changes sign. The boundary operator for the simplex is denoted by ∂ and defined by k
≡ −
( 1)i (x0 , . . . , ˘ xi , . . . , xk )(k−1)
∂ (x)(k)
(4.11)
(4.12)
i=0
To see that this makes sense consider a triangle (x)_(2) = (x_0, x_1, x_2). Then

∂(x)_(2) = (x_1, x_2)_(1) − (x_0, x_2)_(1) + (x_0, x_1)_(1)
         = (x_1, x_2)_(1) + (x_2, x_0)_(1) + (x_0, x_1)_(1)    (4.12)

Each 1-simplex in the boundary 1-chain connects head to tail with the same sign. Now consider the boundary of the boundary

∂²(x)_(2) = ∂(x_1, x_2)_(1) + ∂(x_2, x_0)_(1) + ∂(x_0, x_1)_(1)
          = (x_2)_(0) − (x_1)_(0) + (x_0)_(0) − (x_2)_(0) + (x_1)_(0) − (x_0)_(0)
          = 0    (4.13)
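The boundary operator of equation 4.11 is easy to experiment with numerically. The following minimal sketch (an illustration, not part of the text's construction) stores a chain as a dictionary from vertex tuples to integer coefficients and checks that applying ∂ twice to a tetrahedron gives the empty chain, in line with equation 4.13 and the general result proved next.

from collections import defaultdict

def boundary(chain):
    """Boundary operator of eq. 4.11 applied to a chain: a chain is a dict
    mapping a simplex (tuple of vertex labels) to an integer coefficient."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]      # delete vertex x_i
            out[face] += coeff * (-1) ** i            # sign (-1)^i
    return {s: c for s, c in out.items() if c != 0}   # drop cancelled faces

# A 3-simplex (tetrahedron) on vertex labels 0..3.
tet = {(0, 1, 2, 3): 1}
print(boundary(tet))             # four oriented triangular faces
print(boundary(boundary(tet)))   # {}  -- the boundary of a boundary is zero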
We now need to prove that in general ∂²(x)_(k) = 0. To do this, consider the boundary of the ith term on the r.h.s. of equation 4.11, letting A_ij^(k−2) = (x_0, ..., x̆_i, ..., x̆_j, ..., x_k)_(k−2). Then

∂(x_0, ..., x̆_i, ..., x_k)_(k−1) =
    i = 0:        Σ_{j=1}^{k} (−1)^{j−1} A_ij^(k−2)
    0 < i < k:    Σ_{j=0}^{i−1} (−1)^{j} A_ij^(k−2) + Σ_{j=i+1}^{k} (−1)^{j−1} A_ij^(k−2)
    i = k:        Σ_{j=0}^{k−1} (−1)^{j} A_ij^(k−2)    (4.14)
The critical point in equation 4.14 is that in the sums over j > i the exponent of −1 is not j, but j − 1. The reason for this is that when x_i was removed from the simplex the remaining vertices were not renumbered. We can now express the boundary of the boundary in terms of the matrix elements B_ij^(k−2) = (−1)^{i+j} A_ij^(k−2) as

∂²(x)_(k) = Σ_{j=1}^{k} (−1)^{j−1} A_0j^(k−2) + (−1)^k Σ_{j=0}^{k−1} (−1)^{j} A_kj^(k−2)
            + Σ_{i=1}^{k−1} (−1)^i [ Σ_{j=0}^{i−1} (−1)^{j} A_ij^(k−2) + Σ_{j=i+1}^{k} (−1)^{j−1} A_ij^(k−2) ]
          = −Σ_{j=1}^{k} B_0j^(k−2) + Σ_{j=0}^{k−1} B_kj^(k−2) + Σ_{i=1}^{k−1} Σ_{j=0}^{i−1} B_ij^(k−2) − Σ_{i=1}^{k−1} Σ_{j=i+1}^{k} B_ij^(k−2)
          = 0    (4.15)
Now consider B_ij^(k−2) as a matrix (i the row index, j the column index). The matrix is symmetric, and in equation 4.15 we are subtracting all the elements above the main diagonal from the elements below the main diagonal, so that ∂²(x)_(k) = 0 and the boundary of the boundary of a simplex is zero. Now add geometry to the simplex by defining the vectors

e_i = x_i − x_0,    i = 1, ..., k,    (4.16)
and the directed volume element

∆X = (1/k!) e_1 ∧ ··· ∧ e_k    (4.17)

We now wish to prove that

∫_{(x)_(k)} dX = ∆X    (4.18)
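For a k-simplex whose k + 1 vertices lie in R^k, the blade e_1 ∧ ··· ∧ e_k of equation 4.17 is det[e_1, ..., e_k] times the unit pseudoscalar, so ∆X reduces to the familiar signed simplex volume. A small sketch (the example vertices are arbitrary):

from math import factorial
import numpy as np

def directed_volume(vertices):
    """Pseudoscalar coefficient of Delta_X = (1/k!) e_1 ^ ... ^ e_k for a
    k-simplex whose k+1 vertices live in R^k: the k-blade e_1 ^ ... ^ e_k
    equals det([e_1, ..., e_k]) times the unit pseudoscalar."""
    x = np.asarray(vertices, dtype=float)
    e = x[1:] - x[0]                    # rows are the edge vectors e_i = x_i - x_0
    return np.linalg.det(e) / factorial(len(e))

# Unit right tetrahedron in R^3: Delta_X = (1/3!) e_1 ^ e_2 ^ e_3 -> volume 1/6.
print(directed_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))   #  0.1666...
# Swapping two vertices reverses the orientation and hence the sign.
print(directed_volume([[0, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]))   # -0.1666...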
Any point in the simplex can be written in terms of the coordinates λ^i as

x = x_0 + Σ_{i=1}^{k} λ^i e_i    (4.19)
with the restrictions

0 ≤ λ^i ≤ 1  and  Σ_{i=1}^{k} λ^i ≤ 1    (4.20)

First we show that

∫_{(x)_(k)} dX = ∫_{(x)_(k)} e_1 ∧ ··· ∧ e_k dλ^1 ··· dλ^k = ∆X    (4.21)

or

∫_{(x)_(k)} dλ^1 ··· dλ^k = 1/k!    (4.22)

Define Λ_j = 1 − Σ_{i=1}^{j} λ^i (note that Λ_0 = 1). From the restrictions on the λ^i's we have

∫_{(x)_(k)} dλ^1 ··· dλ^k = ∫_0^{Λ_0} dλ^1 ∫_0^{Λ_1} dλ^2 ··· ∫_0^{Λ_{k−1}} dλ^k    (4.23)
To prove that the r.h.s. of equation 4.23 is 1/k! we form the following sequence and use induction to prove that V_j is the result of the first j partial integrations of equation 4.23:

V_j = (1/j!) (Λ_{k−j})^j    (4.24)

Then

V_{j+1} = ∫_0^{Λ_{k−j−1}} dλ^{k−j} V_j
        = ∫_0^{Λ_{k−j−1}} (1/j!) (Λ_{k−j−1} − λ^{k−j})^j dλ^{k−j}
        = −(1/((j+1) j!)) (Λ_{k−j−1} − λ^{k−j})^{j+1} |_0^{Λ_{k−j−1}}
        = (1/(j+1)!) (Λ_{k−j−1})^{j+1}    (4.25)
so that V_k = 1/k! and the assertion is proved. Now let there be a multivector field F(x) that assumes the values F_i = F(x_i) at the vertices of the simplex, and define the interpolating function

f(x) = F_0 + Σ_{i=1}^{k} λ^i (F_i − F_0)    (4.26)
4.3. DIRECTED INTEGRATION - N -DIMENSIONAL SURFACES
49
We now wish to show that

∫_{(x)_(k)} f dX = (1/(k+1)) Σ_{i=0}^{k} F_i ∆X = F̄ ∆X    (4.27)

To prove this we must show that

∫_{(x)_(k)} λ^i dX = (1/(k+1)) ∆X,  ∀ λ^i    (4.28)
To do this consider the integral (equation 4.25 with V_j replaced by λ^{k−j} V_j)

∫_0^{Λ_{k−j−1}} dλ^{k−j} λ^{k−j} V_j = ∫_0^{Λ_{k−j−1}} (1/j!) λ^{k−j} (Λ_{k−j−1} − λ^{k−j})^j dλ^{k−j}
                                    = (1/(j+2)!) (Λ_{k−j−1})^{j+2}    (4.29)

Note that the extra λ^i factor occurs in exactly one of the subintegrals for each different λ^i, so the weight of the total integral becomes 1/(k+1)! instead of 1/k!; the final result is therefore multiplied by a factor of 1/(k+1), and the assertion (equation 4.28 and hence equation 4.27) is proved. Now summing over all the simplices making up the directed volume gives
∫_{volume} F dX = lim_{n→∞} Σ_{i=1}^{n} F̄_i ∆X_i    (4.30)
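Equations 4.22 and 4.28, which underlie the limit above, are easy to sanity-check numerically. The Monte Carlo sketch below (an illustration only; the sample size and dimension are arbitrary choices) samples the unit cube, keeps the points satisfying the restrictions of equation 4.20, and estimates the coordinate volume of the simplex (which should approach 1/k!) and the average of a coordinate λ^i over it (which should approach 1/(k+1)).

from math import factorial
import numpy as np

def simplex_check(k=4, samples=1_000_000, seed=0):
    """Monte Carlo check of eqs. 4.22 and 4.28 on the standard k-simplex
    0 <= lambda^i, sum_i lambda^i <= 1, sampled by rejection from [0,1]^k."""
    rng = np.random.default_rng(seed)
    lam = rng.random((samples, k))
    inside = lam.sum(axis=1) <= 1.0             # points satisfying the restrictions
    vol = inside.mean()                         # eq. 4.22: fraction of the cube -> 1/k!
    mean_l1 = lam[inside, 0].mean()             # eq. 4.28: <lambda^1> -> 1/(k+1)
    print(f"volume    : {vol:.5f}  (exact {1 / factorial(k):.5f})")
    print(f"<lambda^1>: {mean_l1:.5f}  (exact {1 / (k + 1):.5f})")

simplex_check()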
The most general statement of equation 4.30 is

∫_{volume} L(dX) = lim_{n→∞} Σ_{i=1}^{n} L̄_i (∆X_i)    (4.31)

where L(F_n; x) is a position dependent linear function of a grade-n multivector F_n and L̄_i is the average value of L(dX) over the vertices of each simplex. An example of this would be

L(F_n; x) = G(x) F_n H(x)    (4.32)

where G(x) and H(x) are multivector functions of x.
4.4 Fundamental Theorem of Geometric Calculus
We now prove that the directed measure of a simplex boundary is zero:

∆ ∂(x)_(k) = ∆ ∂(x_0, ..., x_k)_(k) = 0    (4.33)
Start with a planar simplex of three points,

∂(x_0, x_1, x_2)_(2) = (x_1, x_2)_(1) − (x_0, x_2)_(1) + (x_0, x_1)_(1)    (4.34)

so that

∆ ∂(x_0, x_1, x_2)_(2) = (x_2 − x_1) − (x_2 − x_0) + (x_1 − x_0) = 0    (4.35)
We shall now prove equation 4.33 via induction. First note that, writing (x̆_i)_(k−2) for the (k−2)-simplex obtained by deleting both x_i and x_k from (x)_(k),

∆(x̆_i)_(k−1) =
    i = 0:           (1/(k−1)) ∆(x̆_0)_(k−2) ∧ (x_k − x_1)
    0 < i ≤ k−1:     (1/(k−1)) ∆(x̆_i)_(k−2) ∧ (x_k − x_0)    (4.36)

and

∆(x̆_k)_(k−1) = (1/(k−1)!) (x_1 − x_0) ∧ ··· ∧ (x_{k−1} − x_0)    (4.37)

so that

∆ ∂(x)_(k) = (1/(k−1)) Σ_{i=1}^{k−1} (−1)^i ∆(x̆_i)_(k−2) ∧ (x_k − x_0) + C,
where
C = (1/(k−1)) ∆(x̆_0)_(k−2) ∧ (x_k − x_1) + (−1)^k ∆(x̆_k)_(k−1)    (4.38)

If we let δ = x_0 − x_1 we can write

C = (1/(k−1)) ∆(x̆_0)_(k−2) ∧ (x_k − x_0) + (1/(k−1)) ∆(x̆_0)_(k−2) ∧ δ + (−1)^k ∆(x̆_k)_(k−1)    (4.39)

Then

∆(x̆_0)_(k−2) ∧ δ = (1/(k−2)!) (x_2 − x_1) ∧ ··· ∧ (x_{k−1} − x_1) ∧ δ    (4.40)
                 = (1/(k−2)!) (x_2 − x_0 + δ) ∧ ··· ∧ (x_{k−1} − x_0 + δ) ∧ δ    (4.41)
                 = (1/(k−2)!) (x_2 − x_0) ∧ ··· ∧ (x_{k−1} − x_0) ∧ δ    (4.42)
                 = ((−1)^{k−2}/(k−2)!) δ ∧ (x_2 − x_0) ∧ ··· ∧ (x_{k−1} − x_0)    (4.43)
                 = ((−1)^{k−1}/(k−2)!) (x_1 − x_0) ∧ (x_2 − x_0) ∧ ··· ∧ (x_{k−1} − x_0)    (4.44)

Thus

(1/(k−1)) ∆(x̆_0)_(k−2) ∧ δ = ((−1)^{k−1}/(k−1)!) (x_1 − x_0) ∧ (x_2 − x_0) ∧ ··· ∧ (x_{k−1} − x_0)
                            = (−1)^{k−1} ∆(x̆_k)_(k−1)    (4.45)

However

(−1)^k ∆(x̆_k)_(k−1) = −(1/(k−1)) ∆(x̆_0)_(k−2) ∧ δ    (4.46)

so that

C = (1/(k−1)) ∆(x̆_0)_(k−2) ∧ (x_k − x_0)    (4.47)

and

∆ ∂(x)_(k) = (1/(k−1)) [ Σ_{i=0}^{k−1} (−1)^i ∆(x̆_i)_(k−2) ] ∧ (x_k − x_0)    (4.48)
           = (1/(k−1)) ∆ ∂(x)_(k−1) ∧ (x_k − x_0)    (4.49)
           = 0    (4.50)

where the last step uses the induction hypothesis applied to the (k−1)-simplex (x)_(k−1) = (x_0, ..., x_{k−1}).
We have proved equation 4.33. Note that to reduce equation 4.49 we had to use that for any set of vectors δ, y_1, ..., y_k we have

δ ∧ (y_1 + δ) ∧ ··· ∧ (y_k + δ) = δ ∧ y_1 ∧ ··· ∧ y_k    (4.51)

Think about equation 4.51. It is easy to prove (δ ∧ δ = 0)!

Equation 4.33 is sufficient to prove that the directed integral over the surface of a simplex is zero:

∫_{∂(x)_(k)} dS = Σ_{i=0}^{k} (−1)^i ∫_{(x̆_i)_(k−1)} dX = ∆ ∂(x)_(k) = 0    (4.52)
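Equations 4.33 and 4.52 can be checked numerically for a particular simplex. In the sketch below (an illustration; the random simplex is an arbitrary choice) an r-blade is stored by its coefficients on the basis r-vectors, each coefficient being a minor (determinant) of the matrix of factor vectors; the signed sum of the face measures of a 3-simplex then comes out as the zero bivector to machine precision.

from itertools import combinations
from math import factorial
import numpy as np

def blade(vectors):
    """Coefficients of v_1 ^ ... ^ v_r on the basis r-vectors e_I, where I is an
    increasing index tuple: the coefficient on e_I is the minor det(V[:, I])."""
    V = np.asarray(vectors, dtype=float)
    r, n = V.shape
    return {I: np.linalg.det(V[:, list(I)]) for I in combinations(range(n), r)}

def simplex_measure(vertices):
    """Directed measure of an r-simplex: (1/r!) (x_1 - x_0) ^ ... ^ (x_r - x_0)."""
    x = np.asarray(vertices, dtype=float)
    e = x[1:] - x[0]
    return {I: c / factorial(len(e)) for I, c in blade(e).items()}

def boundary_measure(vertices):
    """Directed measure of the boundary chain of a simplex (eq. 4.11):
    sum_i (-1)^i Delta[(x_0, ..., x_i deleted, ..., x_k)]."""
    total = {}
    for i in range(len(vertices)):
        face = vertices[:i] + vertices[i + 1:]
        for I, c in simplex_measure(face).items():
            total[I] = total.get(I, 0.0) + (-1) ** i * c
    return total

rng = np.random.default_rng(1)
x = [tuple(v) for v in rng.random((4, 3))]   # a random 3-simplex in R^3
print(boundary_measure(x))                   # every bivector component ~ 1e-16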
The characteristics of a general volume are:

1. A general volume is built up from a chain of simplices.

2. Simplices in the chain are defined so that at any common boundary the directed areas of the bounding faces are equal and opposite.

3. Surface integrals over two simplices cancel over their common face.

4. The surface integral over the boundary of the volume can be replaced by the sum of the surface integrals over each simplex in the chain.

If the boundary of the volume is closed we have
∮ dS = lim_{n→∞} Σ_{a=1}^{n} ∮ dS_a = 0    (4.53)
where ∮ dS_a is the surface integral over the ath simplex. Implicit in equation 4.53 is that the surface is oriented, simply connected, and closed. The next lemma to prove is that if b is a constant vector on the simplex (x)_(k) then
∫_{∂(x)_(k)} b·x dS = b·∆(x)_(k) = b·∆X    (4.54)
The starting point of the lemma is equation 4.28. First define

b = Σ_{i=1}^{k} b_i e^i,    (4.55)
where the e^i's are the reciprocal frame to the e_i = x_i − x_0, so that

x − x_0 = Σ_{i=1}^{k} λ^i e_i,    (4.56)
b_i = b·e_i,    (4.57)

and

Σ_{i=1}^{k} λ^i b_i = b·(x − x_0).    (4.58)
Substituting into equation 4.28 we get

∫_{(x)_(k)} Σ_{i=1}^{k} b_i λ^i dX = ∫_{(x)_(k)} b·(x − x_0) dX = (1/(k+1)) Σ_{i=1}^{k} b·e_i ∆X    (4.59)
Rearranging terms gives

∫_{(x)_(k)} b·x dX = (1/(k+1)) [ Σ_{i=1}^{k} b·(x_i − x_0) + (k+1) b·x_0 ] ∆X
                   = (1/(k+1)) b·( Σ_{i=0}^{k} x_i ) ∆X
                   = b·x̄ ∆X    (4.60)

where x̄ = (1/(k+1)) Σ_{i=0}^{k} x_i is the centroid (the average of the vertices) of the simplex.
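Equation 4.60 says that the average of the position vector over a simplex is the centroid of its vertices, so that ∫ b·x dX = b·x̄ ∆X for any constant b. A quick Monte Carlo check of the centroid statement for a random triangle (illustrative only; the triangle is arbitrary):

import numpy as np

rng = np.random.default_rng(3)
x0, x1, x2 = rng.random((3, 2))                  # a random triangle in R^2
e1, e2 = x1 - x0, x2 - x0

lam = rng.random((1_000_000, 2))
lam = lam[lam.sum(axis=1) <= 1.0]                # keep the points inside the simplex
points = x0 + lam[:, :1] * e1 + lam[:, 1:] * e2  # x = x_0 + lambda^1 e_1 + lambda^2 e_2

print(points.mean(axis=0))                       # average position over the simplex ...
print((x0 + x1 + x2) / 3)                        # ... matches the vertex centroid x_bar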
Now using the definition of a simplex boundary (equation 4.11) and equation 4.60 we get

∫_{∂(x)_(k)} b·x dS = (1/k) Σ_{i=0}^{k} (−1)^i b·(x_0 + ··· + x̆_i + ··· + x_k) ∆(x̆_i)_(k−1)    (4.61)

The coefficient multiplying the r.h.s. of equation 4.61 is 1/k and not 1/(k+1) because both (x_0 + ··· + x̆_i + ··· + x_k) and (x̆_i)_(k−1) refer to (k−1)-simplices (the boundary of (x)_(k) is the sum of all the simplices (x̆_i)_(k−1) with the proper sign assigned), so equation 4.60 is applied with k vertices rather than k + 1.
Now to prove equation 4.54 we need to prove one final, purely algebraic lemma:

Σ_{i=0}^{k} (−1)^i b·(x_0 + ··· + x̆_i + ··· + x_k) ∆(x̆_i)_(k−1) = (1/(k−1)!) b·(e_1 ∧ ··· ∧ e_k)    (4.62)
Begin with the definition of the l.h.s. of equation 4.62,

C = Σ_{i=0}^{k} (−1)^i b·(x_0 + ··· + x̆_i + ··· + x_k) ∆(x̆_i)_(k−1) = Σ_{i=0}^{k} C_i    (4.63)

where C_i is defined by

C_i =
    i = 0:        b·(x_1 + ··· + x_k) ∆(x̆_0)_(k−1)
    0 < i ≤ k:    (−1)^i b·(x_0 + ··· + x̆_i + ··· + x_k) ∆(x̆_i)_(k−1)    (4.64)

Now define E_k = e_1 ∧ ··· ∧ e_k so that (using equation 1.61 from the section on reciprocal frames)

(−1)^{i−1} e^i E_k = e_1 ∧ ··· ∧ ĕ_i ∧ ··· ∧ e_k,  ∀ 0 < i ≤ k    (4.65)

and, since ∆(x̆_i)_(k−1) = (1/(k−1)!) e_1 ∧ ··· ∧ ĕ_i ∧ ··· ∧ e_k for 0 < i ≤ k,

C_i = −(1/(k−1)!) b·(x_0 + ··· + x̆_i + ··· + x_k) e^i E_k,  0 < i ≤ k    (4.66)

The main problem is in evaluating C_0 since

∆(x̆_0)_(k−1) = (1/(k−1)!) (x_2 − x_1) ∧ ··· ∧ (x_k − x_1)    (4.67)

Using e_i = x_i − x_0 reduces equation 4.67 to

∆(x̆_0)_(k−1) = (1/(k−1)!) (e_2 − e_1) ∧ ··· ∧ (e_k − e_1)    (4.68)
But equation 4.68 can be expanded into equation 4.69. The critical point in doing the expansion is that in generating the sum on the r.h.s. of the first line of equation 4.69 all products containing e_1 (of course each term in the sum contains e_1 exactly once, since we are using the outer product) are put in normal order by bringing the e_1 factor to the front of the product, thus requiring the factor of (−1)^i in each term of the sum:

(e_2 − e_1) ∧ ··· ∧ (e_k − e_1) = e_2 ∧ ··· ∧ e_k − Σ_{i=2}^{k} (−1)^i e_1 ∧ e_2 ∧ ··· ∧ ĕ_i ∧ ··· ∧ e_k
                               = Σ_{i=1}^{k} (−1)^{i−1} e_1 ∧ e_2 ∧ ··· ∧ ĕ_i ∧ ··· ∧ e_k
                               = Σ_{i=1}^{k} e^i E_k    (4.69)

or

∆(x̆_0)_(k−1) = (1/(k−1)!) Σ_{i=1}^{k} e^i E_k    (4.70)
From equation 4.69 (that is, equation 4.70) together with equations 4.64 and 4.66, the coefficient of each e^i E_k in C is b·(x_1 + ··· + x_k) − b·(x_0 + ··· + x̆_i + ··· + x_k) = b·(x_i − x_0), so we have

C = (1/(k−1)!) Σ_{i=1}^{k} (b·(x_i − x_0)) e^i E_k
  = (1/(k−1)!) Σ_{i=1}^{k} (b·e_i) e^i E_k
  = (1/(k−1)!) Σ_{i=1}^{k} b_i e^i E_k
  = (1/(k−1)!) b E_k
  = (1/(k−1)!) (b·E_k + b ∧ E_k)
  = (1/(k−1)!) b·E_k    (4.71)

where the last step uses the fact that b lies in the span of the e_i (equation 4.55), so that b ∧ E_k = 0,
and equation 4.62 is proved. Substituting equation 4.62 into equation 4.61 then proves equation 4.54:

∫_{∂(x)_(k)} b·x dS = (1/k)(1/(k−1)!) b·(e_1 ∧ ··· ∧ e_k) = (1/k!) b·(e_1 ∧ ··· ∧ e_k) = b·∆X.
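As a final numerical spot-check (an illustration only; the random simplex and vector are arbitrary), the sketch below evaluates both sides of equation 4.62 for a 3-simplex in R^3, using the same minor-based blade representation as before and the contraction formula b·(a_1 ∧ ··· ∧ a_r) = Σ_j (−1)^{j+1} (b·a_j) a_1 ∧ ··· ∧ ă_j ∧ ··· ∧ a_r.

from itertools import combinations
from math import factorial
import numpy as np

def blade(vectors):
    """Coefficients of v_1 ^ ... ^ v_r on the basis r-vectors (minors)."""
    V = np.asarray(vectors, dtype=float)
    r, n = V.shape
    return {I: np.linalg.det(V[:, list(I)]) for I in combinations(range(n), r)}

def scale(B, s):
    return {I: s * c for I, c in B.items()}

def add(A, B):
    out = dict(A)
    for I, c in B.items():
        out[I] = out.get(I, 0.0) + c
    return out

def contract(b, vectors):
    """b . (a_1 ^ ... ^ a_r) via the cofactor expansion; 0-based j gives sign (-1)**j."""
    out = {}
    for j, a in enumerate(vectors):
        rest = np.delete(np.asarray(vectors, dtype=float), j, axis=0)
        out = add(out, scale(blade(rest), (-1) ** j * float(np.dot(b, a))))
    return out

rng = np.random.default_rng(2)
k = 3
x = rng.random((k + 1, k))     # vertices x_0 ... x_k of a 3-simplex in R^3
e = x[1:] - x[0]               # e_i = x_i - x_0
b = rng.random(k)              # b automatically lies in span{e_1, ..., e_k} = R^3

# l.h.s. of eq. 4.62: sum_i (-1)^i b.(sum of the remaining vertices) Delta(face_i)
lhs = {}
for i in range(k + 1):
    face = np.delete(x, i, axis=0)
    D = scale(blade(face[1:] - face[0]), 1.0 / factorial(k - 1))  # face measure
    lhs = add(lhs, scale(D, (-1) ** i * float(np.dot(b, face.sum(axis=0)))))

# r.h.s. of eq. 4.62: (1/(k-1)!) b.(e_1 ^ ... ^ e_k)
rhs = scale(contract(b, e), 1.0 / factorial(k - 1))

print({I: round(lhs[I] - rhs[I], 12) for I in rhs})   # all differences ~ 0

Dividing either side by k reproduces equation 4.61 and hence, by equation 4.54, the value b·∆X for this simplex.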