arXiv: [math.HO], Jun
Multivector Differential Calculus

Eckhard M. S. Hitzer

26 September 2002

Department of Mechanical Engineering, Fukui University, 3-9-1 Bunkyo, 910-0024 Fukui, Japan, e-mail: [email protected], homepage: http://sinai.mech.fukui-u.ac.jp/

ABSTRACT Universal geometric calculus simplifies and unifies the structure and notation of mathematics for all of science and engineering, and for technological applications. This paper treats the fundamentals of the multivector differential calculus part of geometric calculus. The multivector differential is introduced, followed by the multivector derivative and the adjoint of multivector functions. The basic rules of multivector differentiation are derived explicitly, as well as a variety of basic multivector derivatives. Finally factorization, which relates functions of vector variables and multivector variables, is discussed, and the concepts of both simplicial variables and derivatives are explained. Everything is proven explicitly, in a very elementary, step-by-step approach. The paper is thus intended to serve as reference material, providing a number of details which are usually skipped in more advanced discussions of the subject matter. The arrangement of the material closely follows chapter 2 of [3].
But the gate to life is narrow and the way that leads to it is hard, and there are few people who find it. ... I assure you that unless you change and become like children, you will never enter the Kingdom of heaven.

Jesus Christ [1]

... for geometry, you know, is the gate of science, and the gate is so low and small that one can only enter it as a little child.

William K. Clifford [2]
The work is presented as a series of definitions (D. or Def.), propositions (P.) and remarks. All propositions (except P. 76) are explicitly proved in a sometimes elaborate step-by-step procedure. Such a degree of explicitness seems to be lacking in the literature on the subject which I have come across so far. This motivated me to undertake this study. It is only assumed that the readers are familiar with geometric (multivector) algebra, as presented in chapter 1 of [3], the introductory sections of [6], [7] or in numerous other publications on geometric (Clifford) algebra. The presentation closely follows the arrangement in chapter 2 of [3]. It is in fact the major purpose of this work to help non-experts work thoroughly through such a text. There is therefore no great claim to originality, apart from the hope that it will assist and motivate non-experts and newcomers to delve into the material and become confident of comprehensively understanding and mastering it. (First presented in the Geometry session of the 6th Int. Clifford Conference, Cookeville, 23 May 2002. Compare e.g. chapter 2 of [3], the section on multivector calculus of [4] or [5].) The multivector derivative is to some extent just a generalization of the vector derivative. I encourage non-experts and newcomers therefore to study [6], because it contains a similar study restricted to the vector derivative of multivector functions. In the next section we will start by introducing the multivector differential of multivector functions, i.e. functions with arguments and ranges in the universal geometric algebra G(I) with unit pseudoscalar I. This will be followed by sections defining the multivector derivative and the adjoint of multivector functions.
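Before the formal definitions, a small numerical illustration may help. The sketch below is not part of the paper: it implements the Euclidean geometric algebra G(2) over the basis (1, e1, e2, e12) as length-4 coefficient arrays (the multiplication table and the names `gp` and `differential` are my own illustrative choices), and checks the finite-difference directional derivative of F(X) = XX against AX + XA, which is the k = 2 case of the power rule proved in section 7. On the full algebra the projection P of Definition 1 is the identity, so it is omitted.

```python
import numpy as np

# Multivectors of G(2) as length-4 arrays over the basis (1, e1, e2, e12).
# TABLE[i][j] = (index, sign) of the geometric product of basis blades i and j,
# for the Euclidean signature e1^2 = e2^2 = +1 (so e12^2 = -1).
TABLE = [[(0, 1), (1, 1), (2, 1), (3, 1)],
         [(1, 1), (0, 1), (3, 1), (2, 1)],
         [(2, 1), (3, -1), (0, 1), (1, -1)],
         [(3, 1), (2, -1), (1, 1), (0, -1)]]

def gp(a, b):
    """Geometric product of two multivectors in G(2)."""
    c = np.zeros(4)
    for i in range(4):
        for j in range(4):
            k, s = TABLE[i][j]
            c[k] += s * a[i] * b[j]
    return c

def differential(F, X, A, t=1e-6):
    """Finite-difference A-derivative lim_{t->0} [F(X + tA) - F(X)]/t."""
    return (F(X + t * A) - F(X)) / t

rng = np.random.default_rng(0)
X, A = rng.normal(size=4), rng.normal(size=4)
F = lambda Y: gp(Y, Y)          # F(X) = X X
lhs = differential(F, X, A)
rhs = gp(A, X) + gp(X, A)       # expected differential: A X + X A
print(np.allclose(lhs, rhs, atol=1e-4))  # True
```

The same scaffolding can be reused to spot-check the sum rule, product rule and the later derivative formulas on random multivectors.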
Both the multivector differential and the adjoint can be understood as two linear approximations associated pointwise to each continuously differentiable multivector function. Conventional scalar differential calculus does not distinguish the three concepts of multivector derivative, differential and adjoint, because in the scalar case this distinction becomes trivial. Standard definitions of continuity and scalar differentiability apply to multivector-valued functions, because the scalar product determines a unique "distance" |A − B| between two elements A, B ∈ G(I).

Definition 1 (differential, A-derivative). For a multivector function F = F(X) defined on the geometric algebra G(I):

F : X ∈ G(I) → F(X) ∈ G(I), I = ⟨I⟩_n, (2.1)

and for a multivector A and the projection P(A) = (A · I) · Ĩ, the A-derivative (or differential) is defined by:

A ∗ ∂_X F(X) ≡ ∂_τ F(X + τP(A))|_{τ=0} = lim_{τ→0} [F(X + τP(A)) − F(X)]/τ. (2.2)

∗ signifies the scalar product defined in [3], chap. 1, p. 13, (1.44). By its definition the A-derivative is a linear function of A, denoted in various ways as:

F̲ = F̲(A) = F̲(X, A) = F̲_A(X) = A ∗ ∂_X F(X) = A ∗ ∂F. (2.3)

Remark 2.
Note the important convention that inner (·), outer (∧) and scalar (∗) products have priority over geometric products. (In this context compare also section 2-8 of [8] on directional derivatives, section II-2 of [9] on vector derivatives and differentials, section V-20 of [10] on differentiation with its applications throughout [10], chapter 2 of [11] on geometric calculus, and section 5 of [12] on the vector derivative.) If X is restricted to a certain subspace of G, then the projection P in Definition 1 becomes the projection into that subspace. This is e.g. the case at the beginning of section 7.

Proposition 3.

F̲(A) = F̲(P(A)). (2.4)

Proof 3

F̲(A) (Def. 1) = ∂_τ F(X + τP(A))|_{τ=0} = ∂_τ F(X + τP(P(A)))|_{τ=0} (Def. 1) = F̲(P(A)). (2.5)

Proposition 4 (grade invariance).

A ∗ ∂⟨F⟩_r = ⟨A ∗ ∂F⟩_r = ⟨F̲(A)⟩_r. (2.6)

Proof 4

A ∗ ∂⟨F⟩_r (Def. 1) = ∂_τ ⟨F(X + τP(A))⟩_r |_{τ=0}
= lim_{τ→0} [⟨F(X + τP(A))⟩_r − ⟨F(X)⟩_r]/τ
(τ scalar) = ⟨ lim_{τ→0} [F(X + τP(A)) − F(X)]/τ ⟩_r
(Def. 1) = ⟨A ∗ ∂F⟩_r (Def. 1) = ⟨F̲(A)⟩_r. (2.7)

Proposition 5.

F̲(A + B) = F̲(A) + F̲(B). (2.8)

Proof 5

F̲(A + B) (Def. 1) = lim_{τ→0} [F(X + τP(A + B)) − F(X)]/τ
(linearity of P) = lim_{τ→0} [F(X + τP(A) + τP(B)) − F(X)]/τ
= lim_{τ→0} { [F(X + τP(A) + τP(B)) − F(X + τP(B))]/τ + [F(X + τP(B)) − F(X)]/τ }
(X_τ ≡ X + τP(B), Def. 1) = lim_{τ→0} [F(X_τ + τP(A)) − F(X_τ)]/τ + F̲(B)
(lim_{τ→0} X_τ = X, Def. 1) = A ∗ ∂_X F(X) + F̲(B) (Def. 1) = F̲(A) + F̲(B). (2.9)

An alternative rigorous proof of P. 5 is the following. I first define a function ε_F(X, A, τ), which is according to Def. 1 continuous at τ = 0:

ε_F(X, A, τ) ≡ [F(X + τP(A)) − F(X)]/τ − F̲(X, A) for τ ≠ 0, and ≡ 0 for τ = 0. (2.10)

We hence have

F(X + τP(A)) (2.10) = F(X) + τF̲(X, A) + τ ε_F(X, A, τ) (2.11)

and

F(X + τP(A + B)) (linearity of P) = F((X + τP(A)) + τP(B))
(2.11) = F(X + τP(A)) + τF̲(X + τP(A), B) + τ ε_F(X + τP(A), B, τ)
(2.11) = F(X) + τF̲(X, A) + τ ε_F(X, A, τ) + τF̲(X + τP(A), B) + τ ε_F(X + τP(A), B, τ). (2.12)

We can now calculate F̲(X, A + B) according to Def. 1:

F̲(X, A + B) (Def. 1) = lim_{τ→0} [F(X + τP(A + B)) − F(X)]/τ
(2.12) = lim_{τ→0} { F̲(X, A) + ε_F(X, A, τ) + F̲(X + τP(A), B) + ε_F(X + τP(A), B, τ) }
= lim_{τ→0} { F̲(X, A) + ε_F(X, A, τ) + F̲(X, B) + ε_F(X, B, τ) }
(2.10) = F̲(X, A) + F̲(X, B). (2.13)

Proposition 6.
For constant scalar λ:

F̲(λA) = λF̲(A). (2.14)

Proof 6

For λ ≠ 0:

F̲(λA) (Def. 1) = lim_{τ→0} [F(X + τλP(A)) − F(X)]/τ
= lim_{τ→0} λ [F(X + τλP(A)) − F(X)]/(τλ)
(τ → τ′ = τλ) = λ lim_{τ′→0} [F(X + τ′P(A)) − F(X)]/τ′ (Def. 1) = λF̲(A); (2.15)

in case of λ = 0 we simply have

F̲(0A) = F̲(0) (Def. 1) = lim_{τ→0} [F(X) − F(X)]/τ = 0 = 0 F̲(A). (2.16)

Proposition 7 (sum rule). For two multivector functions F, G on G(I):

A ∗ ∂_X(F + G) = A ∗ ∂_X F + A ∗ ∂_X G. (2.17)

Proof 7

A ∗ ∂_X(F + G) (Def. 1) = ∂_τ [F(X + τP(A)) + G(X + τP(A))]|_{τ=0} (2.18)
= ∂_τ F(X + τP(A))|_{τ=0} + ∂_τ G(X + τP(A))|_{τ=0} (Def. 1) = A ∗ ∂_X F + A ∗ ∂_X G.

Proposition 8 (product rule). For two multivector functions F, G on G(I):

A ∗ ∂_X(FG) = (A ∗ ∂_X F)G + F(A ∗ ∂_X G). (2.19)

Proof 8

A ∗ ∂_X(FG) (Def. 1) = lim_{τ→0} [F(X + τP(A))G(X + τP(A)) − F(X)G(X)]/τ
= lim_{τ→0} [F(X + τP(A))G(X + τP(A)) − F(X)G(X + τP(A)) + F(X)G(X + τP(A)) − F(X)G(X)]/τ
= lim_{τ→0} { ([F(X + τP(A)) − F(X)]/τ) G(X + τP(A)) + F(X) [G(X + τP(A)) − G(X)]/τ }
(Def. 1) = (A ∗ ∂_X F) lim_{τ→0} G(X + τP(A)) + F(A ∗ ∂_X G)
= (A ∗ ∂_X F)G + F(A ∗ ∂_X G). (2.20)

Proposition 9 (constant function). For B independent of X:

A ∗ ∂_X B = 0. (2.21)

Proof 9

Def. 1 gives for F ≡ B:

A ∗ ∂_X B = lim_{τ→0} (B − B)/τ = 0. (2.22)

Proposition 10 (identity).

A ∗ ∂_X X = P(A). (2.23)

Proof 10

For F(X) ≡ X:

A ∗ ∂_X X (Def. 1) = lim_{τ→0} [X + τP(A) − X]/τ = P(A). (2.24)

Proposition 11.
For B independent of X:

A ∗ ∂_X(X ∗ B) = P(A) ∗ B. (2.25)

Proof 11

A ∗ ∂_X(X ∗ B) (def. of scalar prod.) = A ∗ ∂_X ⟨XB⟩ = ⟨A ∗ ∂_X (XB)⟩ (2.26)
= ⟨(A ∗ ∂_X X)B + X(A ∗ ∂_X B)⟩ = ⟨P(A)B + 0⟩ = P(A) ∗ B.

Proposition 12 (Taylor expansion).

F(X + P(A)) = Σ_{k=0}^∞ (1/k!)(A ∗ ∂_X)^k F = exp(A ∗ ∂_X) F. (2.27)

Proof 12

G(τ) ≡ F(X + τP(A)) ⇒ dG(0)/dτ = ∂_τ F(X + τP(A))|_{τ=0} (Def. 1) = A ∗ ∂_X F, (2.28)

Def. of G ⇒ d²G(0)/dτ² = ∂_τ(∂_τ F(X + τP(A)))|_{τ=0} (2.29)
(Def. 1) = A ∗ ∂_X (∂_τ F(X + τP(A)))|_{τ=0} (Def. 1) = (A ∗ ∂_X)² F.

In general:

d^k G(0)/dτ^k = (A ∗ ∂_X)^k F. (2.30)

Taylor expansion of G:

G(1) = G(0 + 1) = G(0) + dG(0)/dτ + (1/2) d²G(0)/dτ² + ... = Σ_{k=0}^∞ (1/k!) d^k G(0)/dτ^k. (2.31)

⇒ G(1) = F(X + P(A)) = Σ_{k=0}^∞ (1/k!)(A ∗ ∂_X)^k F = exp(A ∗ ∂_X) F. (2.32)

Proposition 13 (chain rule). Let

f : X ∈ G(I) → X′ ∈ G(I′) with I′ = ⟨I′⟩_{n′}. (2.33)

The composite function F(X) = G(f(X)) has the differential

A ∗ ∂F = A ∗ ∂G(f(X)) = f̲(A) ∗ ∂′G. (2.34)

Proof 13

The Taylor expansion (P. 12) of f gives:

f(X + τP(A)) = f(X) + (τA ∗ ∂_X) f + (1/2)(τA ∗ ∂_X)² f + ...
(Def. 1) = f(X) + f̲(τA) + (1/2)(τA ∗ ∂_X)² f + ...
(P. 6) = f(X) + τ f̲(A) + (τ²/2)(A ∗ ∂_X)² f + ... (2.35)

Therefore

A ∗ ∂_X F = A ∗ ∂_X G(f(X)) (Def. 1) = ∂_τ G(f(X + τP(A)))|_{τ=0}
(Taylor) = ∂_τ G(f(X) + τ f̲(A))|_{τ=0}
(range of f, f̲ in G(I′)) = ∂_τ G(f(X) + τP′(f̲(A)))|_{τ=0}
(Def. 1) = f̲(A) ∗ ∂_{X′} G(X′)|_{X′=f(X)}. (2.36)

Remark 14.
Proof 13 employs the Taylor expansion (P. 12). Yet there are continuously differentiable functions without a Taylor expansion. (I owe this remark to Dr. H. Ishi, of Yokohama City University.) But the chain rule P. 13 will still apply as long as a function f has the linear approximation f(X + τP(A)) ≈ f(X) + τ f̲(A) for sufficiently small values of τ. I therefore want to give another proof for the chain rule, based on the linearization of multivector functions with the help of the differential. One such linearization is already given in (2.10) and (2.11). In the same way I define

ε_f(X, A, τ) ≡ [f(X + τP(A)) − f(X)]/τ − f̲(X, A) for τ ≠ 0, and ≡ 0 for τ = 0. (2.37)

Hence we have the linearization of f : X ∈ G → f(X) ∈ G:

f(X + τP(A)) (2.37) = f(X) + τ f̲(X, A) + τ ε_f(X, A, τ). (2.38)

I further define

ε_G(Y, B, τ) ≡ [G(Y + τP(B)) − G(Y)]/τ − G̲(Y, B) for τ ≠ 0, and ≡ 0 for τ = 0, (2.39)

and hence

G(Y + τP(B)) (2.39) = G(Y) + τ G̲(Y, B) + τ ε_G(Y, B, τ). (2.40)

According to Def. 1 both functions ε_f and ε_G are continuous at τ = 0. Considering that f(X) ∈ G (Def. 1 ⇒ f̲ ∈ G, (2.37) ⇒ ε_f ∈ G; the same is valid for a function f on a linear subspace of G), we can now rewrite F(X) = G(f(X)) as

G(f(X + τP(A))) (2.38) = G(f(X) + τ{f̲(X, A) + ε_f(X, A, τ)})
(f̲, ε_f ∈ G) = G(f(X) + τP(f̲(X, A) + ε_f(X, A, τ)))
(2.40) = G(f(X)) + τ G̲(f(X), f̲(X, A) + ε_f(X, A, τ)) + τ ε_G(f(X), f̲(X, A) + ε_f(X, A, τ), τ). (2.41)

Subtracting G(f(X)), dividing by τ and taking lim_{τ→0}, we get according to Def. 1 the differential of the composite function F(X) = G(f(X)):

lim_{τ→0} [G(f(X + τP(A))) − G(f(X))]/τ
= lim_{τ→0} { G̲(f(X), f̲(X, A) + ε_f(X, A, τ)) + ε_G(f(X), f̲(X, A) + ε_f(X, A, τ), τ) }
(2.37),(2.39) = G̲(f(X), f̲(X, A)). (2.42)

This concludes the alternative proof for the differential of composite functions (chain rule).

Definition 15 (second differential).

F̲_AB = F̲_AB(X) ≡ B ∗ ∂̇ A ∗ ∂ Ḟ. (2.43)

The over-dot notation indicates that ∂̇ acts only on Ḟ.

Proposition 16 (integrability). Let F(X + τP(A) + σP(B)) be a twice continuously differentiable function in the vicinity of X, i.e. for all values 0 ≤ τ ≤ τ₀ and 0 ≤ σ ≤ σ₀. Then

F̲_AB(X) = F̲_BA(X). (2.44)

Remark 17.
To motivate the slightly more elaborate proof of P. 16 which follows, I present this "handwaving" argument:

F̲_AB(X) (Def. 15) = B ∗ ∂̇ A ∗ ∂ Ḟ (Def. 1) = B ∗ ∂̇ (∂_τ Ḟ(X + τP(A))|_{τ=0})
(Def. 1) = ∂_σ ∂_τ F(X + τP(A) + σP(B))|_{τ=σ=0} (2.45)
= lim_{σ→0} lim_{τ→0} (1/σ) { [F(X + τP(A) + σP(B)) − F(X + σP(B))]/τ − [F(X + τP(A)) − F(X)]/τ }
= lim_{σ→0} lim_{τ→0} [F(X + τP(A) + σP(B)) − F(X + σP(B)) − F(X + τP(A)) + F(X)]/(στ).

The last expression is symmetric under the exchange (P(A), τ) ↔ (P(B), σ). Hence

F̲_AB(X) = B ∗ ∂̇ A ∗ ∂ Ḟ(X) = A ∗ ∂̇ B ∗ ∂ Ḟ(X) = F̲_BA(X). (2.46)

(Remark 17 is "handwaving", because the implied interchange of the limits in σ and τ is mathematically nontrivial. I am very much indebted to Dr. H. Ishi (Yokohama City University, Japan) for discussing P. 16 with me and for suggesting the proof of it given below.)

Proof 16

According to the fundamental theorem of calculus ([15], 216.C and [16], p. 140, Satz 3) we have, for functions g(τ) continuously differentiable in the interval [0, τ₀],

g(τ₀) − g(0) = ∫₀^{τ₀} ∂_η g(η) dη. (2.47)

We now use the in τ and σ twice continuously differentiable function

f(τ, σ) ≡ F(X + τP(A) + σP(B)) (2.48)

to define

M(τ₀, σ₀) ≡ f(τ₀, σ₀) − f(τ₀, 0) − f(0, σ₀) + f(0, 0). (2.49)

Following the notation of (2.45) we can therefore write

F̲_AB = lim_{σ₀→0} ( lim_{τ₀→0} M(τ₀, σ₀)/(σ₀τ₀) ) = ∂_σ ∂_τ f|_{τ=σ=0} = ∂_σ ∂_τ f(0, 0), (2.50)

F̲_BA = lim_{τ₀→0} ( lim_{σ₀→0} M(τ₀, σ₀)/(σ₀τ₀) ) = ∂_τ ∂_σ f|_{τ=σ=0} = ∂_τ ∂_σ f(0, 0). (2.51)

Using the fundamental theorem of calculus twice, the function M(τ₀, σ₀) can be re-expressed on the one hand by

M(τ₀, σ₀) (2.47) = ∫₀^{σ₀} ∂_σ f(τ₀, ξ) dξ − ∫₀^{σ₀} ∂_σ f(0, ξ) dξ = ∫₀^{σ₀} [∂_σ f(τ₀, ξ) − ∂_σ f(0, ξ)] dξ
(2.47) = ∫₀^{σ₀} ∫₀^{τ₀} ∂_τ ∂_σ f(η, ξ) dη dξ, (2.52)

and on the other hand by

M(τ₀, σ₀) = f(τ₀, σ₀) − f(0, σ₀) − f(τ₀, 0) + f(0, 0)
(2.47) = ∫₀^{τ₀} ∂_τ f(η, σ₀) dη − ∫₀^{τ₀} ∂_τ f(η, 0) dη = ∫₀^{τ₀} [∂_τ f(η, σ₀) − ∂_τ f(η, 0)] dη
(2.47) = ∫₀^{τ₀} ∫₀^{σ₀} ∂_σ ∂_τ f(η, ξ) dξ dη. (2.53)

It follows from (2.52) and (2.53) that

M(τ₀, σ₀) (2.52) = ∫₀^{σ₀} ∫₀^{τ₀} ∂_τ ∂_σ f(η, ξ) dη dξ (2.53) = ∫₀^{τ₀} ∫₀^{σ₀} ∂_σ ∂_τ f(η, ξ) dξ dη. (2.54)

The assumption of double continuous differentiability of f means that ∂_τ ∂_σ f(η, ξ) and ∂_σ ∂_τ f(η, ξ) are both continuous for

0 ≤ η ≤ τ₀, 0 ≤ ξ ≤ σ₀. (2.55)

This in turn has the consequence that

m(τ₀, σ₀) ≡ max_{0≤η≤τ₀, 0≤ξ≤σ₀} |∂_σ ∂_τ f(η, ξ) − ∂_σ ∂_τ f(0, 0)| → 0 for (τ₀, σ₀) → (0, 0), (2.56)

and

n(τ₀, σ₀) ≡ max_{0≤η≤τ₀, 0≤ξ≤σ₀} |∂_τ ∂_σ f(η, ξ) − ∂_τ ∂_σ f(0, 0)| → 0 for (τ₀, σ₀) → (0, 0). (2.57)

We therefore have

| M(τ₀, σ₀)/(σ₀τ₀) − ∂_τ ∂_σ f(0, 0) |
(2.52) = | (1/(σ₀τ₀)) ∫₀^{σ₀} ∫₀^{τ₀} ∂_τ ∂_σ f(η, ξ) dη dξ − (1/(σ₀τ₀)) ∫₀^{σ₀} ∫₀^{τ₀} ∂_τ ∂_σ f(0, 0) dη dξ |
≤ (1/(σ₀τ₀)) ∫₀^{σ₀} ∫₀^{τ₀} |∂_τ ∂_σ f(η, ξ) − ∂_τ ∂_σ f(0, 0)| dη dξ
(2.57) ≤ (1/(σ₀τ₀)) ∫₀^{σ₀} ∫₀^{τ₀} n(τ₀, σ₀) dη dξ = n(τ₀, σ₀). (2.58)

Because of (2.57) we then get

lim_{(τ₀,σ₀)→(0,0)} | M(τ₀, σ₀)/(σ₀τ₀) − ∂_τ ∂_σ f(0, 0) | (2.57) = 0. (2.59)

Likewise we have

| M(τ₀, σ₀)/(σ₀τ₀) − ∂_σ ∂_τ f(0, 0) |
(2.53) = | (1/(σ₀τ₀)) ∫₀^{τ₀} ∫₀^{σ₀} ∂_σ ∂_τ f(η, ξ) dξ dη − (1/(σ₀τ₀)) ∫₀^{τ₀} ∫₀^{σ₀} ∂_σ ∂_τ f(0, 0) dξ dη |
≤ (1/(σ₀τ₀)) ∫₀^{τ₀} ∫₀^{σ₀} |∂_σ ∂_τ f(η, ξ) − ∂_σ ∂_τ f(0, 0)| dξ dη
(2.56) ≤ (1/(σ₀τ₀)) ∫₀^{τ₀} ∫₀^{σ₀} m(τ₀, σ₀) dξ dη = m(τ₀, σ₀). (2.60)

Because of (2.56) we then get

lim_{(τ₀,σ₀)→(0,0)} | M(τ₀, σ₀)/(σ₀τ₀) − ∂_σ ∂_τ f(0, 0) | (2.56) = 0. (2.61)

Equations (2.54), (2.59) and (2.61) show that

∂_τ ∂_σ f(0, 0) = ∂_σ ∂_τ f(0, 0). (2.62)

Finally (2.62) results with (2.50) and (2.51) in the proof of P. 16:

F̲_AB(X) = F̲_BA(X). (2.63)

Definition 18.
The brackets ⟨...⟩_r indicate the selection of the r-grade part of the multivector expression enclosed by them:

A_r̄ ≡ ⟨A⟩_r, ⟨A⟩ ≡ ⟨A⟩₀ = A_0̄. (3.1)

Definition 19 (derivative). The derivative of a multivector function F(X) by its argument X,

∂_X F(X) = ∂F, (3.2)

with derivative operator ∂_X, is assumed

(i) to have the algebraic properties of a multivector in G(I), with I the unit pseudoscalar;

(ii) that, as in Definition 1, A ∗ ∂_X with A ∈ G(I) equals the differential:

A ∗ ∂_X F = F̲_A = Σ_{r=0}^n A_r̄ ∗ ∂_X F = Σ_{r=0}^n A ∗ ∂_r̄ F = Σ_{r=0}^n A_r̄ ∗ ∂_r̄ F. (3.3)

[Compare [3], p. 13, (1.46) and (1.47a).]

(It is important to note that, based on directed integration, which uses a "directed Riemann measure", one can define the multivector derivative by a limit process applied to an integral over the boundary of a smooth open r-dimensional surface "tangent" to (in this definition r-vector) X. In the limit process the volume of the r-dimensional surface is shrunk to zero. Compare [5], section 4. For an analogous definition of the vector derivative see section 5 of [12].)

Remark 20.
Property (ii) in Def. 19 expresses that the derivative operator acts like a map from the space (usually G or a linear subspace of G) of multivectors A to the space of (differentials of) multivector functions F:

∂_X F : A ∈ G → A ∗ ∂_X F ∈ G. (3.4)

In order to avoid easily occurring confusions, I want to point out that Remark 2 clearly implies that

A ∗ ∂_X F = (A ∗ ∂_X)F ≠ A ∗ (∂_X F). (3.5)

Proposition 21 (algebraic properties of ∂_X).

∂_X = P(∂_X) = Σ_J a^J a_J ∗ ∂_X (for a^J = const.) = Σ_J a_J ∗ ∂_X a^J, (3.6)

with P the multivector projection into G(a⃗₁ ∧ a⃗₂ ∧ ... ∧ a⃗_n). {a_J} is a simple blade basis of G(I):

a_J ≡ a⃗_{j₁} ∧ a⃗_{j₂} ∧ ... ∧ a⃗_{j_n}, (3.7)

with j_k = k or 0 and elimination of elements with j_k = 0; hence j₁ < j₂ < ... < j_n.

Proof 21

See the definition of the algebraic properties of ∂_X in Definition 19(i). For the last equality in (3.6) take into account that a_J ∗ ∂_X is algebraically scalar, and [3], chapter 1-1, (1.11).

Proposition 22 (constant scalar factor). Another algebraic property of the multivector derivative ∂_X is that we have, for constant scalar factors λ:

∂_X(λF) = λ∂_X F. (3.12)

Proof 22

∂_X(λF) (3.6) = Σ_J a^J a_J ∗ ∂_X(λF) (P. 6) = Σ_J a^J λ a_J ∗ ∂_X F ([3], chap. 1-1, (1.11)) = λ Σ_J a^J a_J ∗ ∂_X F (3.6) = λ∂_X F. (3.13)

Proposition 23.

∂_X = Σ_{r=0}^n ∂_{⟨X⟩_r} with ∂_{⟨X⟩_r} = ⟨∂_X⟩_r = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X. (3.14)

∂_{⟨X⟩_r} is thus the derivative with respect to a variable ⟨X⟩_r ∈ G_r(I).

Proof 23

a_J ∗ ∂_X ([3], p. 13, (1.46)) = Σ_{r=0}^n ⟨a_J⟩_r ∗ ∂_X (ibidem) = Σ_{r=0}^n ⟨a_J⟩_r ∗ ⟨∂_X⟩_r. (3.15)

⟨∂_X⟩_r is the r-blade part of ∂_X, which has the algebraic properties of a multivector (Def. 19, P. 21). The sum Σ_J a^J a_J ∗ ∂_X is therefore naturally performed in two steps: sum up over all index sets J = (j₁, j₂, ..., j_n) with r non-zero members, then sum up over all r = 0 ... n:

∂_X = Σ_{r=0}^n Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X. (3.16)

Proposition 24.

∂ = Σ_{r=0}^n ∂_r̄. (3.17)

Proof 24

∂_X (P. 23) = Σ_{r=0}^n ∂_{⟨X⟩_r} (P. 23) = Σ_{r=0}^n ⟨∂_X⟩_r (Def. 18) = Σ_{r=0}^n ∂_r̄. (3.18)

Proposition 25.

∂_X = ∂_A A ∗ ∂_X. (3.19)

Proof 25

∂_A A ∗ ∂_X (P. 21) = Σ_J a^J a_J ∗ ∂_A (A ∗ ∂_X) (P. 11) = Σ_J a^J P(a_J) ∗ ∂_X = Σ_J a^J a_J ∗ ∂_X (P. 21) = ∂_X. (3.20)

Proposition 26 (derivative from differential).

∂F ≡ ∂_X F(X) = ∂_A F̲(X, A) = ∂_A F̲_A(X) ≡ ∂ F̲. (3.21)

∂ F̲ means the derivative with respect to the differential argument A of F̲.

Proof 26

∂_X F(X) (P. 25) = ∂_A A ∗ ∂_X F (Def. 1) = ∂_A F̲(X, A). (3.22)

Definition 27 (adjoint). The adjoint of a multivector function F is

F̄ = F̄(A′) ≡ ∂(F̲ ∗ A′), (4.1)

or explicitly

F̄(A′) ≡ ∂_A [{A ∗ ∂_X F(X)} ∗ A′] (Def. 1) = ∂_A [F̲(X, A) ∗ A′]. (4.2)

Proposition 28.

F̄(A′) = ∂_X(F ∗ A′), (4.3)

or explicitly

F̄(X, A′) = ∂_X(F(X) ∗ A′). (4.4)

Proof 28

F̄(X, A′) (Def. 27) = ∂_A [{(A ∗ ∂_X)F(X)} ∗ A′] (A ∗ ∂_X alg. scalar) = ∂_A (A ∗ ∂_X)[F(X) ∗ A′] (P. 25) = ∂_X(F(X) ∗ A′). (4.5)

Proposition 29 (common definition of adjoint).

B ∗ F̄(A) = F̲(B) ∗ A. (4.6)

Proof 29

B ∗ F̄(A) (P. 28) = B ∗ ∂_X (F ∗ A) (B ∗ ∂_X algebraic scalar) = (B ∗ ∂_X F) ∗ A (Def. 1) = F̲(B) ∗ A. (4.7)

Proposition 30.

P(F̄(A)) = F̄(A). (4.8)

Proof 30

P(F̄(A)) (P. 28) = P(∂_X (F ∗ A)) (F ∗ A scalar) = P(∂_X)(F ∗ A) (P. 21) = ∂_X(F ∗ A) (P. 28) = F̄(A). (4.9)

Proposition 31.

⟨F̄(A)⟩_r = ∂_r̄(F̲ ∗ A) = ∂_r̄ F ∗ A. (4.10)

Proof 31

⟨F̄(A)⟩_r (Def. 27) = ⟨∂(F̲ ∗ A)⟩_r (F̲ ∗ A scalar) = ⟨∂_B⟩_r F̲(X, B) ∗ A (Def. 18) = ∂_r̄(F̲ ∗ A), (4.11)

⟨F̄(A)⟩_r (P. 28) = ⟨∂_X(F(X) ∗ A)⟩_r (F ∗ A scalar) = ⟨∂_X⟩_r (F(X) ∗ A) (Def. 18) = ∂_r̄ F(X) ∗ A. (4.12)

Proposition 32 (linearity of adjoint).

F̄(A + B) = F̄(A) + F̄(B), (4.13)
F̄(λA) = λF̄(A), if λ = ⟨λ⟩ scalar. (4.14)

Proof 32

F̄(A + B) (P. 28) = ∂_X(F ∗ (A + B)) (linearity of scalar product) = ∂_X(F ∗ A + F ∗ B)
(P. 21) = Σ_J a^J a_J ∗ ∂_X(F ∗ A + F ∗ B)
(P. 7, distributivity) = Σ_J a^J a_J ∗ ∂_X(F ∗ A) + Σ_J a^J a_J ∗ ∂_X(F ∗ B)
(P. 21) = ∂_X(F ∗ A) + ∂_X(F ∗ B) (P. 28) = F̄(A) + F̄(B), (4.15)

where I mean the distributivity of geometric multiplication with respect to addition as in [3] p. 3, (1.4), (1.5).

F̄(λA) (P. 28) = ∂_X (F ∗ (λA)) (scalar product) = ∂_X(λ F ∗ A)
(P. 21) = Σ_J a^J a_J ∗ ∂_X(λ F ∗ A) (P. 6) = Σ_J a^J λ a_J ∗ ∂_X(F ∗ A)
([3] p. 4, (1.11)) = λ Σ_J a^J a_J ∗ ∂_X(F ∗ A) (P. 28, P. 21) = λF̄(A). (4.16)

Proposition 33 (sum rule).

∂_r̄(F + G) = ∂_r̄ F + ∂_r̄ G. (5.1)

Proof 33

∂_r̄(F + G) (P. 23) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X(F + G)
(P. 7) = Σ_J ⟨a^J⟩_r {⟨a_J⟩_r ∗ ∂_X F + ⟨a_J⟩_r ∗ ∂_X G}
(distributivity) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X F + Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X G (P. 23) = ∂_r̄ F + ∂_r̄ G, (5.2)

where I again mean the distributivity of geometric multiplication with respect to addition as in [3] p. 3, (1.4), (1.5).

Proposition 34 (product rule).

∂_r̄(FG) = ∂̇_r̄ Ḟ G + ∂̇_r̄ F Ġ. (5.3)

Proof 34

∂_r̄(FG) (P. 23) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X(FG)
(P. 8) = Σ_J ⟨a^J⟩_r {(⟨a_J⟩_r ∗ ∂_X F)G + F(⟨a_J⟩_r ∗ ∂_X G)} (⟨a_J⟩_r ∗ ∂_X algebraic scalar)
= Σ_J ⟨a^J⟩_r {(⟨a_J⟩_r ∗ ∂̇_X)Ḟ G + (⟨a_J⟩_r ∗ ∂̇_X)F Ġ}
(distributivity) = Σ_J ⟨a^J⟩_r (⟨a_J⟩_r ∗ ∂̇_X)Ḟ G + Σ_J ⟨a^J⟩_r (⟨a_J⟩_r ∗ ∂̇_X)F Ġ (P. 23) = ∂̇_r̄ Ḟ G + ∂̇_r̄ F Ġ, (5.4)

where I again mean the distributivity of geometric multiplication with respect to addition as in [3] p. 3, (1.4), (1.5).

Definition 35 (product rule variation). Left and right side derivation is indicated by:

Ḟ ∂̇_r̄ Ġ = Ḟ ∂̇_r̄ G + F ∂̇_r̄ Ġ. (5.5)

Remark 36. Expanding the variation (5.5) of the product rule explicitly in the a_J blade basis (3.7) of the geometric algebra G shows how (5.3) and (5.5) are algebraically different (compare Proof 34):

Ḟ ∂̇_r̄ Ġ (D. 35) = Ḟ ∂̇_r̄ G + F ∂̇_r̄ Ġ (P. 23) = Σ_J (⟨a_J⟩_r ∗ ∂̇_X)Ḟ ⟨a^J⟩_r G + Σ_J F ⟨a^J⟩_r (⟨a_J⟩_r ∗ ∂̇_X)Ġ. (5.6)

Proposition 37 (change of variables). For F(X) = G(f(X)), i.e. f : X → X′ = f(X),

∂_r̄ F = ∂̇_r̄ G(ḟ) = ⟨f̄(∂̇′)⟩_r Ġ, (5.7)

i.e.

∂_r̄ = ⟨∂_X⟩_r = ⟨f̄(∂_{X′})⟩_r = ⟨f̄(∂′)⟩_r. (5.8)

Proof 37

∂_r̄ F(X) (P. 26) = ∂_A (A ∗ ⟨∂_X⟩_r) G(f(X)) ([3] p. 13, (1.46)) = ∂_{A_r̄}(A_r̄ ∗ ⟨∂_X⟩_r) G(f(X))
(Def. 1) = ∂_{A_r̄} ∂_τ G(f(X + τP(A_r̄)))|_{τ=0}
(P. 12) = ∂_{A_r̄} ∂_τ G(f(X) + τ(A_r̄ ∗ ⟨∂_X⟩_r) f(X))|_{τ=0}
(Def. 1) = ∂_{A_r̄} ({A_r̄ ∗ ⟨∂_X⟩_r f(X)} ∗ ∂_{X′}) G(X′)|_{X′=f(X)}
(Def. 1) = ∂_{A_r̄} (f̲(A_r̄) ∗ ∂_{X′}) G(X′)|_{X′=f(X)}
(P. 31) = ⟨f̄(∂_{X′})⟩_r G(X′)|_{X′=f(X)}. (5.9)

Remark 38. Remark 14 also applies to Proof 37. I conclude from Proof 37 that formulas (2.26a+b) in [3], p. 56, are slightly incorrect. There the grade selector ⟨...⟩_r is applied to the argument of f̄, but in order to be correct it should be applied to f̄ itself, i.e. appear around f̄ as in P. 37.

Proposition 39 (using the full derivative ∂_X).

Sum rule: ∂(F + G) = ∂F + ∂G. (5.10)

Constant scalar factor λ: ∂(λF) = λ∂F. (5.11)

Product rule: ∂(FG) = ∂̇ Ḟ G + ∂̇ F Ġ, (5.12)

with variation: Ḟ ∂̇ Ġ = Ḟ ∂̇ G + F ∂̇ Ġ. (5.13)

For F(X) = G(f(X)), i.e. f : X → X′ = f(X),

chain rule: ∂F = ∂̇ G(ḟ) = f̄(∂̇) Ġ, (5.14)

i.e. ∂ = f̄(∂_{X′}) = f̄(∂′). (5.15)

Proof 39

One just needs to take the sum over all grades r (Σ_{r=0}^n) on both left and right hand sides of P. 33, P. 34, Def. 35 and P. 37, as well as take into account P. 23 and that

f̄(∂′) = Σ_{r=0}^n ⟨f̄(∂′)⟩_r, compare [6], (13). (5.16)

For (5.11) compare P. 22.

Remark 40. The above results also obtain if F = F(X) is defined for X = X_r = ⟨X⟩_r ∈ G_r(I). The results can be adapted to functions on any linear subspace of G(I). The differential F̲ = F̲(X, A) (Def. 1) and the adjoint F̄ = F̄(X, A′) (Def. 27) are the only two linear functions of A (and A′ respectively) that can be formed from F using the derivative ∂ = ∂_X and the scalar product (compare the definitions Def. 1 and Def. 27).

Proposition 41 (scalar differential calculus). For a multivector function of a scalar variable X = X(τ) = f(τ):

Multivector derivative: ∂_τ X = ⟨∂_τ⟩₀ X = dX/dτ. (6.1)

Differential: X̲(λ) = λ ∗ ∂X = λ∂_τ X = λ dX/dτ. (6.2)

Adjoint: X̄(A) = ∂_τ X ∗ A = (dX/dτ) ∗ A. (6.3)

Chain rule: dF/dτ = f̄(∂_X) F(X) = (dX/dτ) ∗ ∂_X F(X). (6.4)

The second expression of equation (2.27d) in [3] seems to be slightly wrong compared to (6.4). In the special case of a scalar function X = x(τ) = ⟨x(τ)⟩, only the scalar part α = ⟨A⟩ of A contributes to the adjoint:

x̄(α) = α dx/dτ, (6.5)

and the chain rule for such a scalar function has the form:

dF/dτ = (dx/dτ)(dF/dx). (6.6)

Proof 41

For the multivector derivative (6.1): because τ is scalar, ⟨∂_τ⟩_r X = 0 for r ≠ 0, ⇒ ∂ = ∂_τ = d/dτ.

⇒ For the differential (6.2):

X̲(A) (Def. 1) = A ∗ ∂X = (A ∗ ∂_τ)X (scalar product) = ⟨A⟩ ∂_τ X (λ ≡ ⟨A⟩) = λ∂_τ X. (6.7)

For the adjoint (6.3) compare P. 28. For the chain rule (6.4) compare P. 37 and (6.3) with A = ∂_X. We further have from (6.3) for scalar x(τ) = f(τ):

dx/dτ = ⟨dx/dτ⟩, x̄(A) = (dx/dτ) ∗ A (scalar product) = (dx/dτ) ∗ ⟨A⟩ (α ≡ ⟨A⟩) = (dx/dτ) α, (6.8)

and from (6.4):

⇒ dF/dτ = f̄(∂_x) F(x) = ⟨dx/dτ⟩ ∗ ∂_x F(x) = (dx/dτ)(d/dx) F(x). (6.9)

Remark 42. In scalar differential calculus, differentials (6.2) and adjoints (6.5) are identical for λ = α. The distinction from the derivative (6.1) is trivial (i.e. only the scalar factor λ). The single concept of derivative in elementary differential calculus is therefore now generalized to three distinct, related concepts of multivector derivative, differential and adjoint.
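The collapse of the three concepts in the scalar case can also be checked numerically. The following sketch is not from the paper; the helper `d` and the sample functions are my own illustrative choices, verifying the scalar chain rule (6.6) by central differences.

```python
import math

def d(f, t, h=1e-6):
    """Central-difference derivative of a scalar function at t."""
    return (f(t + h) - f(t - h)) / (2 * h)

x = lambda t: t**2 + 1.0          # scalar-valued x(tau)
F = lambda x_: math.sin(x_)       # F as a function of x
comp = lambda t: F(x(t))          # composite F(x(tau))

t0 = 0.7
lhs = d(comp, t0)                 # dF/dtau computed directly
rhs = d(x, t0) * d(F, x(t0))      # chain rule (6.6): (dx/dtau)(dF/dx)
print(abs(lhs - rhs) < 1e-6)      # True
```

Here differential, adjoint and derivative differ only by the scalar factor, exactly as Remark 42 states.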
In the following I assume

(i) X = F(X) to be the identity function on some linear subspace of G(I) of dimension d,

(ii) /A ≡ P_subspace(A) to be the projection into the above mentioned d-dimensional subspace of G(I),

(iii) that singularities at X = 0 are to be excluded.

Proposition 43.

A ∗ ∂_X X = ∂̇_X Ẋ ∗ A = /A. (7.1)

Remark 44. To help avoid confusion I refer to Remarks 2 and 20 in order to clarify that the computation of (7.1) implies the following brackets:

(A ∗ ∂_X)X = ∂̇_X(Ẋ ∗ A) = /A. (7.2)

Proof 43

F(X) ≡ X:

A ∗ ∂_X X (Def. 1) = F̲(A) (P. 3) = F̲(P(A)) = F̲(/A) (P. 10) = /A, (7.3)

∂̇_X Ẋ ∗ A (Def. 19) = Σ_{J_d} a^J (a_J ∗ ∂̇_X Ẋ) ∗ A (using (7.3)) = Σ_{J_d} a^J a_J ∗ A = /A, (7.4)

where J_d is the index subset for the multivector basis of the assumed linear d-dimensional subspace of G(I) of X.

Proposition 45.

A ∗ ∂_X X̃ = ∂̇_X Ẋ̃ ∗ A = /Ã, (7.5)

where the tilde operation (˜) indicates reversion as defined in [3], pp. 5, 6, (1.17) to (1.20).

Proof 45

A ∗ ∂_X X̃ (A ∗ ∂_X algebraic scalar) = (A ∗ ∂_X X)˜ (P. 43) = /Ã, (7.6)

/Ã (P. 43) = ∂̇_X Ẋ ∗ Ã ([3] p. 13, (1.48)) = ∂̇_X Ẋ̃ ∗ A. (7.7)

Proposition 46.

∂_X X = d. (7.8)

Proof 46

∂_X X (Def. 19) = Σ_{J_d} a^J a_J ∗ ∂_X X (P. 43) = Σ_{J_d} a^J a_J = Σ_{J_d} δ_J^J = d, (7.9)

where J_d is the index subset for the multivector basis of the assumed linear d-dimensional subspace of G(I) of X.

Proposition 47.

∂_X |X|² = 2X̃. (7.10)

Proof 47

∂_X |X|² = ∂_X ⟨X X̃⟩ (Def. 19, P. 21) = Σ_{J_d} a^J a_J ∗ ∂_X ⟨X X̃⟩
(P. 8, P. 4, distributivity) = Σ_{J_d} a^J a_J ∗ ∂̇_X ⟨Ẋ X̃⟩ + Σ_{J_d} a^J a_J ∗ ∂̇_X ⟨X Ẋ̃⟩ (7.11)
(Def. 19, P. 21) = ∂̇_X ⟨Ẋ X̃⟩ + ∂̇_X ⟨X Ẋ̃⟩ = ∂̇_X(Ẋ ∗ X̃) + ∂̇_X(X ∗ Ẋ̃)
([3] p. 13, (1.47a)) = ∂̇_X(Ẋ ∗ X̃) + ∂̇_X(Ẋ̃ ∗ X) (P. 43, P. 45) = X̃ + X̃ = 2X̃,

where I mean the distributivity of geometric multiplication with respect to addition as in [3] p. 3, (1.4), (1.5), and J_d is the index subset for the multivector basis of the assumed linear d-dimensional subspace of G(I) of X.

Proposition 48.

A ∗ ∂_X X^k = /A X^{k−1} + X /A X^{k−2} + ... + X^{k−1} /A. (7.12)

Proof 48

F ≡ X:

A ∗ ∂_X X^k = A ∗ ∂_X F^k (Def. 1) = (F^k)̲ (P. 8) = F̲ F^{k−1} + F F̲ F^{k−2} + ... + F^{k−1} F̲
(P. 43, F ≡ X) = /A X^{k−1} + X /A X^{k−2} + ... + X^{k−1} /A. (7.13)

Proposition 49.

∂_X |X|^k = k |X|^{k−2} X̃. (7.14)

Proof 49

f(X) ≡ |X|²:

∂_X |X|^k = ∂_X (|X|²)^{k/2} (P. 37) = f̄(∂_α) α^{k/2}|_{α=f(X)=|X|²}
(P. 28) = ∂̇_X (|Ẋ|² ∗ ∂_α) α^{k/2}|_{α=|X|²} = (∂_X |X|²) ∂_α α^{k/2}|_{α=|X|²}
(P. 47) = 2X̃ (k/2)|X|^{k−2} = k |X|^{k−2} X̃. (7.15)

Proposition 50.

∂_X log|X| = X̃/|X|². (7.16)

Proof 50

f(X) ≡ |X|:

∂_X log|X| (P. 37) = f̄(∂_α) log α|_{α=f(X)=|X|}
(P. 28) = ∂̇_X (|Ẋ| ∗ ∂_α) log α|_{α=|X|} = (∂_X |X|)(1/|X|)
(P. 49) = |X|^{−1} X̃ (1/|X|) = X̃/|X|². (7.17)

Proposition 51.

A ∗ ∂_X(|X|^k X) = |X|^k ( /A + k (A ∗ X̃) X / |X|² ). (7.18)

Proof 51

A ∗ ∂_X(|X|^k X) (P. 8) = (A ∗ ∂_X |X|^k)X + |X|^k A ∗ ∂_X X (7.19)
(Def. 19(ii), P. 49, P. 43) = A ∗ (k |X|^{k−2} X̃)X + |X|^k /A = |X|^k ( /A + k (A ∗ X̃) X / |X|² ).

Proposition 52.

∂_X(|X|^k X) = |X|^k ( d + k X̃X/|X|² ). (7.20)

Proof 52

∂_X(|X|^k X) (P. 34, Rem. 40) = (∂_X |X|^k)X + ∂̇_X |X|^k Ẋ (|X|^k scalar)
(P. 49) = (k |X|^{k−2} X̃)X + |X|^k ∂_X X (P. 46) = k |X|^k X̃X/|X|² + |X|^k d = |X|^k ( d + k X̃X/|X|² ). (7.21)

Proposition 53. For X = Σ_{r=0}^n X_r̄ defined on the whole of G(I) and ∂ = ∂_X:

A ∗ ∂X = ∂̇ Ẋ ∗ A = P(A). (7.22)

Proof 53

In Proposition 43 the assumed "subspace" now becomes all of G(I), therefore /A = P(A).

Proposition 54. For X = Σ_{r=0}^n X_r̄ defined on the whole of G(I) and ∂ = ∂_X:

A ∗ ∂_r̄ X = ∂̇_r̄ Ẋ ∗ A = P(A_r̄). (7.23)

Proof 54

A ∗ ∂_r̄ X ([3], p. 13, (1.45a)) = A_r̄ ∗ ∂_X X (P. 53) = P(A_r̄), (7.24)

and

∂̇_r̄ Ẋ ∗ A (P. 23) = Σ_J ⟨a^J⟩_r (⟨a_J⟩_r ∗ ∂̇_X Ẋ) ∗ A (7.24) = Σ_J ⟨a^J⟩_r P(⟨a_J⟩_r) ∗ A
([3], p. 13, (1.45a)) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ A_r̄ = P(A_r̄). (7.25)

Proposition 55.

∂_r̄ X = ∂X_r̄ = ∂_r̄ X_r̄ = \binom{n}{r}. (7.26)

Proof 55

∂_r̄ X (P. 23) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_X X ([3], p. 13, (1.45a)) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_{⟨X⟩_r} X
(P. 54) = Σ_J ⟨a^J⟩_r P(⟨a_J⟩_r) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r = \binom{n}{r}, (7.27)

where \binom{n}{r} is the dimension of the r-vector subspace of G(I). X_r̄ is in this subspace and we therefore have, according to P. 23, that

∂_r̄ X_r̄ (P. 23) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r ∗ ∂_r̄ X_r̄ (P. 43) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r = \binom{n}{r}, (7.28)

and

∂X_r̄ (P. 23) = Σ_s ∂_s̄ X_r̄ (P. 23) = Σ_s Σ_J ⟨a^J⟩_s ⟨a_J⟩_s ∗ ∂_s̄ X_r̄
(P. 4) = Σ_s Σ_J ⟨a^J⟩_s ⟨⟨a_J⟩_s ∗ ∂_s̄ X⟩_r (P. 54) = Σ_s Σ_J ⟨a^J⟩_s ⟨P(⟨a_J⟩_s)⟩_r
= Σ_s Σ_J ⟨a^J⟩_s ⟨⟨a_J⟩_s⟩_r (= δ_{sr} ⟨a_J⟩_s) = Σ_J ⟨a^J⟩_r ⟨a_J⟩_r = \binom{n}{r}. (7.29)

Proposition 56.

∂X = Σ_r \binom{n}{r} = 2^n. (7.30)

Proof 56

∂X (P. 23) = Σ_r ∂_r̄ X (P. 55) = Σ_r \binom{n}{r} = 2^n. (7.31)

Proposition 57. For A = P(A), K = (r + s − |r − s|)/2 and \binom{i}{j} = 0 if j > i:

∂_s̄ ⟨XA_r̄⟩_m = ⟨A_r̄ ∂_s̄⟩_m X = \binom{r}{k} \binom{n−r}{s−k} δ_m^{r+s−2k} A_r̄, (7.32)

with δ_m^{r+s−2k} the Kronecker symbol for non-negative integers.

Proof 57 (this step-by-step proof extends over the next two pages)

I will now assume A_r to be a simple r-blade A_r = a⃗₁a⃗₂...a⃗_r, i.e. a geometric product of r orthogonal unit vectors. We now select an orthonormal basis of G(I) which includes a⃗₁, a⃗₂, ..., a⃗_r. The first step in our proof is to show the formula

∂_s̄ ⟨XA_r⟩_m = ∂_s̄ ⟨X_s̄ A_r⟩_m. (7.33)

The left hand side of (7.33) yields

∂_s̄ ⟨XA_r⟩_m (P. 23) = Σ_J ⟨a^J⟩_s ⟨(⟨a_J⟩_s ∗ ∂_s̄ X) A_r⟩_m (P. 54) = Σ_J ⟨a^J⟩_s ⟨⟨a_J⟩_s A_r⟩_m, (7.34)

and the right hand side of (7.33) gives

∂_s̄ ⟨X_s̄ A_r⟩_m (P. 23) = Σ_J ⟨a^J⟩_s ⟨(⟨a_J⟩_s ∗ ∂_s̄ X_s̄) A_r⟩_m
(P. 54) = Σ_J ⟨a^J⟩_s ⟨(∂̇_s̄ Ẋ ∗ ⟨a_J⟩_s) A_r⟩_m
(P. 54) = Σ_J ⟨a^J⟩_s ⟨(⟨a_J⟩_s ∗ ∂_s̄ X) A_r⟩_m (7.34) = ∂_s̄ ⟨XA_r⟩_m, (7.35)

where to obtain the second equality we first set in P. 54 r = s and A = ⟨a_J⟩_s:

⟨a_J⟩_s ∗ ∂_s̄ X = ∂̇_s̄ Ẋ ∗ ⟨a_J⟩_s. (7.36)

Second, by applying the s-grade selector ⟨ ⟩_s on both sides of (7.36) we get the necessary relationship

⟨a_J⟩_s ∗ ∂_s̄ X_s̄ = ∂̇_s̄ Ẋ_s̄ ∗ ⟨a_J⟩_s (alg. scalar). (7.37)

This completes the proof of (7.33). Writing the vector factors of ⟨a_J⟩_s, ⟨a^J⟩_s and A_r in the last expression of (7.34) explicitly, we obtain from (7.33) that

∂_s̄ ⟨XA_r⟩_m = ∂_s̄ ⟨X_s̄ A_r⟩_m = Σ_J ⟨a^J⟩_s ⟨⟨a_J⟩_s A_r⟩_m = Σ_{j₁<...}

Proposition 58. For A = P(A), K = (r + s − |r − s|)/2 and \binom{i}{j} = 0 if j > i:

∂_s̄ A_r̄ X_s̄ = Σ_J ⟨a^J⟩_s A_r̄ ⟨a_J⟩_s = Γ_rs A_r̄, (7.43)

with

Γ_rs = Σ_{k=0}^K (−1)^{rs−k} \binom{r}{k} \binom{n−r}{s−k}. (7.44)

Proof 58

⟨A_r̄ X_s̄⟩_m ([3], p. 6, (1.20a)) = (−1)^{m(m−1)/2} ⟨X̃_s̄ Ã_r̄⟩_m = (−1)^{m(m−1)/2} (−1)^{r(r−1)/2} (−1)^{s(s−1)/2} ⟨X_s̄ A_r̄⟩_m. (7.45)

Therefore

A_r̄ X_s̄ ([3], p. 10, (1.36)) = Σ_{k=0}^K ⟨A_r̄ X_s̄⟩_{|r−s|+2k} = Σ_{k=0}^K ⟨A_r̄ X_s̄⟩_{r+s−2k}
(7.45) = Σ_{k=0}^K ⟨X_s̄ A_r̄⟩_{r+s−2k} (−1)^{(r+s−2k)(r+s−2k−1)/2} (−1)^{r(r−1)/2} (−1)^{s(s−1)/2}. (7.46)

The two different summations in line one of (7.46) correspond to counting up as in [3], p. 10, (1.36): |r−s|, |r−s|+2, |r−s|+4, ..., |r−s|+2K = r+s, (7.47) and counting down as in [3], p. 58, after (2.38c): r+s, r+s−2, r+s−4, ..., r+s−2K = |r−s|. (7.48)

The exponent of (−1) in (7.46) can further be simplified: modulo 2 we have (r+s−2k)(r+s−2k−1)/2 ≡ (r+s)(r+s−1)/2 + k, and (r+s)(r+s−1)/2 = r(r−1)/2 + s(s−1)/2 + rs, so that

(−1)^{(r+s−2k)(r+s−2k−1)/2} (−1)^{r(r−1)/2} (−1)^{s(s−1)/2} = (−1)^{rs+k} = (−1)^{rs−k}. (7.49)

Equation (7.46) therefore simplifies to

A_r̄ X_s̄ = Σ_{k=0}^K ⟨X_s̄ A_r̄⟩_{r+s−2k} (−1)^{rs−k}. (7.50)

The s-grade derivative of (7.50) gives

∂_s̄ A_r̄ X_s̄ = Σ_{k=0}^K ∂_s̄ ⟨X_s̄ A_r̄⟩_{r+s−2k} (−1)^{rs−k} (7.40) = Σ_{k=0}^K (−1)^{rs−k} \binom{r}{k} \binom{n−r}{s−k} A_r̄ = Γ_rs A_r̄. (7.51)

To prove the first identity in P. 58 we write

∂_s̄ A_r̄ X_s̄ (P. 23) = Σ_J ⟨a^J⟩_s (⟨a_J⟩_s ∗ ∂_s̄) A_r̄ X_s̄ (⟨a_J⟩_s ∗ ∂_s̄ scalar)
= Σ_J ⟨a^J⟩_s A_r̄ (⟨a_J⟩_s ∗ ∂_s̄ X_s̄) (P. 43) = Σ_J ⟨a^J⟩_s A_r̄ P_{s-dim. subspace}(⟨a_J⟩_s) = Σ_J ⟨a^J⟩_s A_r̄ ⟨a_J⟩_s. (7.52)

Proposition 59.

∂_s̄ AX = ∂A X_s̄ = Σ_{r=0}^n Γ_rs A_r̄, (7.53)

with Γ_rs = Σ_{k=0}^K (−1)^{rs−k} \binom{r}{k} \binom{n−r}{s−k} and K = (r + s − |r − s|)/2.

Proof 59

We will first prove that for an arbitrary but fixed grade r

∂_s̄ A_r̄ X = ∂_s̄ A_r̄ X_s̄ = ∂A_r̄ X_s̄. (7.54)

(For r = 0 we end up with a scalar multiple of P. 55.)

∂_s̄ A_r̄ X (P. 23) = Σ_J ⟨a^J⟩_s (⟨a_J⟩_s ∗ ∂_s̄) A_r̄ X (⟨a_J⟩_s ∗ ∂_s̄ scalar)
= Σ_J ⟨a^J⟩_s A_r̄ (⟨a_J⟩_s ∗ ∂_s̄ X) (P. 54) = Σ_J ⟨a^J⟩_s A_r̄ ⟨a_J⟩_s (P. 58) = ∂_s̄ A_r̄ X_s̄. (7.55)

The second identity in (7.54) can be shown as follows:

∂A_r̄ X_s̄ (P. 23) = Σ_{t=0}^n Σ_J ⟨a^J⟩_t (⟨a_J⟩_t ∗ ∂) A_r̄ X_s̄ ([3], p. 13, (1.46)) = Σ_{t=0}^n Σ_J ⟨a^J⟩_t (⟨a_J⟩_t ∗ ∂_t̄) A_r̄ X_s̄
= Σ_{t=0}^n Σ_J ⟨a^J⟩_t A_r̄ (⟨a_J⟩_t ∗ ∂_t̄ X_s̄) (P. 4) = Σ_t Σ_J ⟨a^J⟩_t A_r̄ ⟨⟨a_J⟩_t ∗ ∂_t̄ X⟩_s
(P. 54) = Σ_t Σ_J ⟨a^J⟩_t A_r̄ ⟨P(⟨a_J⟩_t)⟩_s = Σ_t Σ_J ⟨a^J⟩_t A_r̄ ⟨⟨a_J⟩_t⟩_s (= δ_{ts} ⟨a_J⟩_t)
= Σ_J ⟨a^J⟩_s A_r̄ ⟨a_J⟩_s (P. 58) = ∂_s̄ A_r̄ X_s̄.
58 = ∂ ¯ s A ¯ r X ¯ s . (7.56)Performing the sum over all grades r in (7.54) we get ∂ ¯ s AX = ∂AX ¯ s = n X r =0 ∂ ¯ s A ¯ r X ¯ s P. 58 = n X r =0 Γ rs A ¯ r , (7.57)where I used the fact that according to Def. 18 and [6] (13): A = n X r =0 A ¯ r . (7.58) Proposition 60. ∂ ¯ s X ∧ A ¯ r = A ¯ r ∧ ∂ ¯ s X = (cid:18) n − rs (cid:19) A ¯ r . (7.59) Proof 60 ∂ ¯ s X ∧ A ¯ r = ∂ ¯ s h XA ¯ r i max. grade (7.40) = ∂ ¯ s h X ¯ s A ¯ r i r + s =max. grade = ∂ ¯ s h XA ¯ r i r + s P. 57 = h A ¯ r ∂ ¯ s i r + s X = A ¯ r ∧ ∂ ¯ s X, (7.60) ∂ ¯ s X ∧ A ¯ r (7.60) = ∂ ¯ s h XA ¯ r i r + s P. 57 = (cid:18) rk (cid:19) (cid:18) n − rs − k (cid:19) δ m = r + sr + s − k A ¯ rk = 0 = (cid:18) r (cid:19)| {z } =1 (cid:18) n − rs (cid:19) A ¯ r = (cid:18) n − rs (cid:19) A ¯ r . (7.61) Proposition 61. ∂ ¯ s X · A ¯ r = A ¯ r · ∂ ¯ s X = (cid:18) rs (cid:19) A ¯ r if < s ≤ r (cid:18) n − rs − r (cid:19) A ¯ r if < r ≤ s . (7.62) Proof 61 ∂ ¯ s X · A ¯ r = ∂ ¯ s h XA ¯ r i min. grade (7.40) = ∂ ¯ s h X ¯ s A ¯ r i | r − s | =min. grade = ∂ ¯ s h XA ¯ r i | r − s | P. 57 = h A ¯ r ∂ ¯ s i | r − s | X = A ¯ r · ∂ ¯ s X (7.63) ∂ ¯ s X · A ¯ r (7.63) = ∂ ¯ s h XA ¯ r i | r − s | P. 57 = (cid:18) rk (cid:19) (cid:18) n − rs − k (cid:19) δ m = | r − s | r + s − k A ¯ r = (cid:18) rk (cid:19) (cid:18) n − rs − k (cid:19) A ¯ r δ r − sr + s − k | {z } k = s if 0 < s ≤ r (cid:18) rk (cid:19) (cid:18) n − rs − k (cid:19) A ¯ r δ s − rr + s − k | {z } k = r if 0 < r ≤ s = (cid:18) rs (cid:19) (cid:18) n − rs − s = 0 (cid:19)| {z } =1 A ¯ r if 0 < s ≤ r (cid:18) rr (cid:19)| {z } =1 (cid:18) n − rs − k (cid:19) A ¯ r if 0 < r ≤ s . (7.64) Proposition 62. For simple, X -independent A r the expansion ∂ ¯ s X ¯ s = ∂ ¯ s ( X ¯ s A r ) A − r = ∂ ¯ s X ¯ s ∧ A r A − r + ∂ ¯ s h X ¯ s ∧ A r i r + s − A − r + . . . 
+ ∂ ¯ s h X ¯ s ∧ A r i | r − s | +2 A − r + ∂ ¯ s X ¯ s · A r A − r (7.65)is termwise equivalent to (assuming 0 < s ≤ r ) (cid:18) ns (cid:19) = (cid:18) r (cid:19) (cid:18) n − rs (cid:19) + (cid:18) r (cid:19) (cid:18) n − rs − (cid:19) + . . . + (cid:18) rs − (cid:19) (cid:18) n − r (cid:19) + (cid:18) rs (cid:19) (cid:18) n − r (cid:19) . (7.66) Proof 62 For simple A r : X ¯ s [3], p. 3, (1.3), [6], (30) = ( X ¯ s A r ) A − r . (7.67) The expansion of X ¯ s A r is done according to [3], p. 10, (1.36).The correspondence of the first and last term are shown by P. 60 and P. 61respectively. In general each term in the right hand side expansion (7.65)has the form ∂ ¯ s h X ¯ s A r i r + s − k A − r P. 57, (7.40) = (cid:18) rk (cid:19) (cid:18) n − rs − k (cid:19) A r A − r = (cid:18) rk (cid:19) (cid:18) n − rs − k (cid:19) , (7.68)with k = 0 , . . . , K and K = ( r + s − | r − s | ) . The left hand side of (7.65)is ∂ ¯ s X ¯ s P. 55 = (cid:18) ns (cid:19) . (7.69)The resulting binomial coefficient identity is known as theorem of addition:[14], p. 105, (2.4). Factorization relates functions of multivector variables with correspond-ing functions of (several) lower grade content multivector variables. In thesimplest case the latter functions will just be functions of several vector variables. Proposition 63. For two multivector variables A, B : ∂ A G ( A ∧ B ) = ˙ ∂ A ( ˙ A ∧ B ) ∗ ∂ U G U ( A ∧ B ) = ˙ ∂ A G ( A ∧ B, ˙ A ∧ B ) , (8.1)with ∂ U G U ( A ∧ B ) P. 26 = ∂ X G ( X ) | X = A ∧ B (8.2) Proof 63 For f ( A ) = A ∧ B and F = G ( f ( A )) :¯ f ( ∂ X ′ ) P. 28 = ˙ ∂ A ( ˙ A ∧ B ) ∗ ∂ X ′ . (8.3)Therefore ∂ A F = ∂ A G ( f ( A )) P. 37, P. 39 = ¯ f ( ∂ X ′ ) G ( X ′ ) | X ′ = f ( A )= A ∧ B (8.3) = ˙ ∂ A ( ˙ A ∧ B ) ∗ ∂ X ′ G ( X ′ ) | X ′ = A ∧ B Def. 1 = ˙ ∂ A G ( X ′ , ˙ A ∧ B ) | X ′ = A ∧ B = ˙ ∂ A G ( A ∧ B, ˙ A ∧ B ) . (8.4) Proposition 64. 
$$\partial_B \partial_A G(A \wedge B) = \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge \overset{\circ}{B}) \ast \partial_U\, G_U(A \wedge B) + \overset{\circ}{\partial}_B (A \wedge \overset{\circ}{B}) \ast \partial_V\; \dot{\partial}_A (\dot{A} \wedge B) \ast \partial_U\, G_{UV}(A \wedge B), \qquad (8.5)$$
with
$$G_{UV} = V \ast \dot{\partial}\; U \ast \partial\; \dot{G}(A \wedge B). \qquad (8.6)$$

Proof 64.
$$\partial_B \partial_A G(A \wedge B) \overset{\text{P. 63}}{=} \partial_B \big\{ \dot{\partial}_A (\dot{A} \wedge B) \ast \partial_U\, G_U(A \wedge B) \big\} \overset{\text{[3], p. 13 (1.44), P. 4, Def. 19(ii), P. 34}}{=} \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge \overset{\circ}{B}) \ast \partial_U\, G_U(A \wedge B) + \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge B) \ast \overset{\circ}{\big( \partial_U G_U(A \wedge B) \big)}. \qquad (8.7)$$
The second term on the right hand side of (8.7) becomes
$$\overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge B) \ast \overset{\circ}{\big( \partial_U G_U(A \wedge B) \big)} \overset{\text{P. 63}}{=} \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge B) \ast \big\{ (A \wedge \overset{\circ}{B}) \ast \partial_{X'}\, \underbrace{\partial_U G_U(X') \big|_{X' = A \wedge B}}_{\text{Def. 19(ii), P. 26, Def. 15: } \partial_V \partial_U G_{UV}(A \wedge B)} \big\} = \overset{\circ}{\partial}_B \big\{ (A \wedge \overset{\circ}{B}) \ast \partial_V \big\}\; \dot{\partial}_A \big\{ (\dot{A} \wedge B) \ast \partial_U \big\}\; G_{UV}(A \wedge B). \qquad (8.8)$$

Proposition 65. For a multivector function $G$ defined on $\mathcal{G}^r(I)$, i.e.
$$G(X) = G(\langle X \rangle_r), \qquad (8.9)$$
and for $A = \langle A \rangle_s$ in P. 62, P. 63 we have $B = \langle B \rangle_{r-s}$ and
$$\partial_A G(A \wedge B) = B \cdot \partial_U\, G_U(A \wedge B). \qquad (8.10)$$

Proof 65.
$$G(A \wedge B) = G(\langle A \wedge B \rangle_r) \overset{G \text{ on } \mathcal{G}^r(I)}{=} G(\langle \langle A \rangle_s \wedge B \rangle_r) = G(\langle A \rangle_s \wedge \langle B \rangle_{r-s}), \qquad (8.11)$$
and
$$\dot{\partial}_A \underbrace{(\dot{A} \wedge B)}_{= \langle \dot{A} \wedge B \rangle_r} \ast \underbrace{\partial_X}_{= \langle \partial \rangle_r} \overset{\text{[3], p. 13 (1.46)}}{=} \dot{\partial}_A \big( \underbrace{\dot{A}}_{= \langle A \rangle_s} \wedge \underbrace{B}_{= \langle B \rangle_{r-s}} \big) \cdot \partial_X \overset{\text{[3], p. 7 (1.25b)}}{=} \dot{\partial}_A\, \dot{A} \cdot (B \cdot \partial_X) \overset{\text{[3], p. 13 (1.46)},\, A = \langle A \rangle_s}{=} \dot{\partial}_A\, \dot{A} \ast (B \cdot \partial_X) \overset{\text{P. 54}}{=} B \cdot \partial_X. \qquad (8.12)$$
Using (8.11) and (8.12) we finally get
$$\partial_A G(A \wedge B) \overset{\text{P. 63}}{=} \dot{\partial}_A (\dot{A} \wedge B) \ast \partial_U\, G_U(A \wedge B) \overset{\text{similar to (8.12)}}{=} B \cdot \partial_U\, G_U(A \wedge B). \qquad (8.13)$$

Proposition 66. For a multivector function $G$ defined on $\mathcal{G}^r(I)$, i.e. $G(X) = G(\langle X \rangle_r)$, and for $A = \langle A \rangle_s$ in P. 62, P. 63 we have $B = \langle B \rangle_{r-s}$ and
$$\partial_B \partial_A G(A \wedge B) = \binom{r}{r-s}\, \partial_U\, G_U(A \wedge B) + (-1)^{s(r-s)}\, A \cdot \partial_V\; B \cdot \partial_U\, G_{UV}(A \wedge B). \qquad (8.14)$$

Proof 66. The following three identities arise from proof 65:
$$B = \langle B \rangle_{r-s}, \qquad (8.15)$$
$$\dot{\partial}_A (\dot{A} \wedge B) \ast \partial_X = B \cdot \partial_X, \qquad (8.16)$$
$$\dot{\partial}_B (A \wedge \dot{B}) \ast \partial_X \overset{\text{[6] (40)}}{=} (-1)^{s(r-s)}\, \dot{\partial}_B (\dot{B} \wedge A) \ast \partial_X = (-1)^{s(r-s)}\, A \cdot \partial_X. \qquad (8.17)$$
We can therefore show that
$$\overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge \overset{\circ}{B}) \ast \partial_X \overset{(8.16)}{=} \underbrace{\partial_B}_{\langle \partial_B \rangle_{r-s}} \underbrace{B}_{\langle B \rangle_{r-s}} \cdot \underbrace{\partial_X}_{\langle \partial_X \rangle_r} \overset{\text{P. 61}}{=} \binom{r}{r-s}\, \partial_X. \qquad (8.18)$$
We finally get
$$\partial_B \partial_A G(A \wedge B) \overset{\text{P. 64}}{=} \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge \overset{\circ}{B}) \ast \partial_U\, G_U(A \wedge B) + \underbrace{\overset{\circ}{\partial}_B (A \wedge \overset{\circ}{B}) \ast \partial_V}_{\text{similar to (8.17)}}\; \underbrace{\dot{\partial}_A (\dot{A} \wedge B) \ast \partial_U}_{\text{similar to (8.16)}}\; G_{UV}(A \wedge B) \overset{\text{similar to (8.18)}}{=} \binom{r}{r-s}\, \partial_U\, G_U(A \wedge B) + (-1)^{s(r-s)}\, A \cdot \partial_V\; B \cdot \partial_U\, G_{UV}(A \wedge B). \qquad (8.19)$$

Proposition 67. For a linear multivector function $L = L(X)$ we have
$$L_U(X) = L(U), \qquad (8.20)$$
$$L_{UV} = 0. \qquad (8.21)$$

Proof 67.
$$L_U(X) \overset{\text{Def. 1}}{=} \underbrace{U \ast \partial_X}_{\text{scalar}} L(X) \overset{\text{linearity}}{=} L(U \ast \partial_X X) = L(P(U)) \overset{P(U) = U}{=} L(U), \qquad (8.22)$$
$$L_{UV} \overset{\text{Def. 15}}{=} V \ast \dot{\partial}_X\; U \ast \partial_X\, \dot{L}(X) \overset{(8.22)}{=} V \ast \partial_X\, L(P(U)) \overset{\text{linearity}}{=} L\big( \underbrace{V \ast \partial_X P(U)}_{= 0} \big) \overset{\text{linearity}}{=} 0. \qquad (8.23)$$

Proposition 68. A multivector function $L = L(U)$ is linear if and only if it equals a multivector differential,
$$L(U) = L_U(X). \qquad (8.24)$$

Proof 68. ($\Rightarrow$)
$$L = L(X) \text{ is linear} \overset{\text{P. 67}}{\Rightarrow} L_U(X) = L(U). \qquad (8.25)$$
($\Leftarrow$)
$$L(U) = L_U(X) \;\Rightarrow\; L(\alpha U + \beta V) = L(X, \alpha U + \beta V) \overset{\text{P. 5, P. 6}}{=} \alpha L(X, U) + \beta L(X, V) \overset{L_U = L(U)}{=} \alpha L(U) + \beta L(V) \;\Rightarrow\; L \text{ is linear.} \qquad (8.26)$$

Proposition 69 (factorization of derivative of lin. function). If $G(A \wedge B) = L(A \wedge B)$ is a linear function on $\mathcal{G}(I)$, and $A = \langle A \rangle_s$, $B = \langle B \rangle_{r-s}$, then
$$\partial_B \partial_A L(A \wedge B) = \binom{r}{r-s}\, \partial_{\bar{r}} L. \qquad (8.27)$$

Proof 69.
$$\partial_B \partial_A L(A \wedge B) \overset{\text{P. 64, P. 67}}{=} \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge \overset{\circ}{B}) \ast \partial_U\, L_U(A \wedge B) \overset{\text{Def. 19(ii), P. 26}}{=} \overset{\circ}{\partial}_B \dot{\partial}_A (\dot{A} \wedge \overset{\circ}{B}) \ast \partial_X\, L(X) \big|_{X = A \wedge B} \overset{A = \langle A \rangle_s,\, B = \langle B \rangle_{r-s},\, \text{[3], p. 13 (1.45a)}}{=} \overset{\circ}{\partial}_B \dot{\partial}_A \langle \dot{A} \wedge \overset{\circ}{B} \rangle_r \ast \langle \partial_X \rangle_r\, L(X) \big|_{X = A \wedge B} \overset{\text{Proof 66, P. 23, Def. 18}}{=} \binom{r}{r-s}\, \partial_{\bar{r}} L = \binom{r}{r-s}\, \partial L, \qquad (8.28)$$
where the last equality holds if $L(X) = L(\langle X \rangle_r)$ on $\mathcal{G}^r(I)$.

Proposition 70 (vector derivative factor). For a linear function $L$:
$$\partial_B \partial_{\vec{a}} L(\vec{a} \wedge B) = r\, \partial_{\bar{r}} L. \qquad (8.29)$$

Proof 70.
$$\partial_B \partial_{\vec{a}} L(\vec{a} \wedge B) \overset{\text{P. 69}}{=} \underbrace{\binom{r}{r-1}}_{= r}\, \partial_{\bar{r}} L(\vec{a} \wedge B) = r\, \partial_{\bar{r}} L. \qquad (8.30)$$

Proposition 71. The function $\partial_{\vec{a}} L(\vec{a} \wedge B)$ is linear in $B$.

Proof 71. For $\alpha, \beta$ independent of $\vec{a}$:
$$\partial_{\vec{a}} L\big(\vec{a} \wedge (\alpha B + \beta C)\big) \overset{L \text{ linear, [6] (23)}}{=} \partial_{\vec{a}} \big\{ \alpha L(\vec{a} \wedge B) + \beta L(\vec{a} \wedge C) \big\} \overset{\text{[6] P. 51}}{=} \partial_{\vec{a}} \big\{ \alpha L(\vec{a} \wedge B) \big\} + \partial_{\vec{a}} \big\{ \beta L(\vec{a} \wedge C) \big\} \overset{\text{[6] Def. 17(i) and eq. (21), (23), Def. 19(i)}}{=} \alpha\, \partial_{\vec{a}} L(\vec{a} \wedge B) + \beta\, \partial_{\vec{a}} L(\vec{a} \wedge C). \qquad (8.31)$$

Proposition 72.
$$\partial_r \ldots \partial_2 \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) = r!\, \partial_{\bar{r}} L = \partial_r \wedge \ldots \wedge \partial_2 \wedge \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r), \qquad (8.32)$$
with $\partial_k \equiv \partial_{\vec{a}_k}$.

Proof 72.
$$\partial_{\bar{r}} L \overset{\text{P. 70}}{=} \frac{1}{r}\, \partial_B \partial_1\, L(\langle \vec{a}_1 \wedge \underbrace{\vec{a}_2 \ldots \wedge \vec{a}_r}_{= B_{r-1}} \rangle_r) = \frac{1}{r}\, \partial_B M(B), \quad M \text{ linear according to P. 71,}$$
$$= \frac{1}{r}\, \partial_{\overline{r-1}} M \overset{\text{P. 70}}{=} \frac{1}{r(r-1)}\, \partial_C \partial_2\, M(\langle \vec{a}_2 \wedge \underbrace{\vec{a}_3 \ldots \wedge \vec{a}_r}_{= C_{r-2}} \rangle_{r-1}) \overset{\text{P. 70}}{=} \ldots \overset{\text{P. 70}}{=} \frac{1}{r!}\, \partial_r Z(\vec{a}_r), \quad Z \text{ linear according to P. 71.} \qquad (8.33)$$
Hence
$$r!\, \partial_{\bar{r}} L = \partial_r Z(\vec{a}_r) = \ldots = \partial_r \partial_{r-1} \ldots \partial_2 \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r). \qquad (8.34)$$
We finally consider that
$$\partial_r \partial_{r-1} \ldots \partial_2 \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) = \partial_r \partial_{r-1} \ldots (\partial_2 \cdot \partial_1 + \partial_2 \wedge \partial_1)\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) \overset{\text{[6] Def. 17(i)}}{=} \partial_r \partial_{r-1} \ldots \partial_2 \cdot \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) + \partial_r \partial_{r-1} \ldots \partial_2 \wedge \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) = \partial_r \partial_{r-1} \ldots \partial_2 \wedge \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r), \qquad (8.35)$$
because
$$\partial_r \partial_{r-1} \ldots \partial_2 \cdot \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) = \partial_r \partial_{r-1} \ldots \tfrac{1}{2}(\partial_2 \partial_1 + \partial_1 \partial_2)\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) = \partial_r \partial_{r-1} \ldots \tfrac{1}{2}\big(\partial_2 \partial_1 - \underbrace{\partial_2 \partial_1}_{\text{relabeling } 1 \leftrightarrow 2}\big)\, L(\underbrace{\vec{a}_1 \wedge \vec{a}_2}_{\text{interchanging } \vec{a}_1 \wedge \vec{a}_2 = -\vec{a}_2 \wedge \vec{a}_1} \ldots \wedge \vec{a}_r) = 0. \qquad (8.36)$$
The factor $(-1)$ in the argument of $L$ caused by the interchange $\vec{a}_1 \wedge \vec{a}_2 = -\vec{a}_2 \wedge \vec{a}_1$ and the relabeling $(1 \leftrightarrow 2)$ can be factored out because of the linearity of $L$.
Applying the same consideration to any pair of indices $i, j \in \{1, 2, \ldots, r\}$ of $\partial_r \partial_{r-1} \ldots \partial_2 \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r)$ as to the pair $1, 2$, we obtain
$$\partial_r \partial_{r-1} \ldots \partial_2 \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r) = \partial_r \wedge \ldots \wedge \partial_2 \wedge \partial_1\, L(\vec{a}_1 \wedge \vec{a}_2 \ldots \wedge \vec{a}_r). \qquad (8.37)$$

9 Simplicial Variables and Derivatives

Definition 73 (simplicial variable, simplicial derivative). A simplicial variable $a_{(r)}$ is a simple blade of the form
$$a_{(r)} \equiv \vec{a}_1 \wedge \ldots \wedge \vec{a}_r. \qquad (9.1)$$
The simplicial derivative $\partial_{(r)}$ is defined as
$$\partial_{(r)} \equiv \frac{1}{r!}\, \partial_{\vec{a}_r} \wedge \ldots \wedge \partial_{\vec{a}_1}. \qquad (9.2)$$

Proposition 74 (equivalence). For linear functions $L$ the $r$-vector derivative (right hand side) is equivalent to the simplicial derivative (left hand side):
$$\partial_{(r)} L(a_{(r)}) = \partial_{\bar{r}} L. \qquad (9.3)$$

Proof 74. P. 72 and Def. 73.

Proposition 75.
$$\partial_{(r)} a_{(r)} = \partial_{\bar{r}} X_{\bar{r}} = \binom{n}{r}. \qquad (9.4)$$

Proof 75.
$$\partial_{(r)} a_{(r)} \overset{\text{P. 74}}{=} \partial_{\bar{r}} X \overset{\text{P. 55}}{=} \partial_{\bar{r}} X_{\bar{r}} \overset{\text{P. 55}}{=} \binom{n}{r}. \qquad (9.5)$$

Proposition 76.
$$\partial_{(r)} \big( a_{(r)} \big)^2 = (r+1)\, a_{(r)}. \qquad (9.6)$$

Remark 77. Though I lack the general proof so far, the examples of one, two and three dimensions are quite instructive.

1-D:
$$\partial_{(1)} \big( a_{(1)} \big)^2 = \partial_{\vec{a}_1}(\vec{a}_1^{\,2}) = 2\vec{a}_1. \qquad (9.7)$$
2-D:
$$\partial_{(2)} \big( a_{(2)} \big)^2 = \tfrac{1}{2}\, \partial_2 \wedge \partial_1 (\vec{a}_1 \wedge \vec{a}_2)^2 = \tfrac{1}{2}\, \partial_2 \wedge \partial_1 \big[ (\vec{a}_1 \cdot \vec{a}_2)^2 - \vec{a}_1^{\,2} \vec{a}_2^{\,2} \big] = \tfrac{1}{2}\, \partial_2 \wedge \partial_1 (\vec{a}_1 \cdot \vec{a}_2)^2 - \tfrac{1}{2}\, \partial_2 \wedge \partial_1 (\vec{a}_1^{\,2} \vec{a}_2^{\,2})$$
$$= \tfrac{1}{2}\, \partial_2 \wedge \big\{ 2 (\vec{a}_1 \cdot \vec{a}_2)\, \partial_1 (\vec{a}_1 \cdot \vec{a}_2) \big\} - \tfrac{1}{2}\, 2\vec{a}_2 \wedge (2\vec{a}_1) = \big( \partial_2 (\vec{a}_1 \cdot \vec{a}_2) \big) \wedge \vec{a}_2 + (\vec{a}_1 \cdot \vec{a}_2)\, \underbrace{\dot{\partial}_2 \wedge \dot{\vec{a}}_2}_{= 0,\ \text{[6], P. 67}} + 2\, \vec{a}_1 \wedge \vec{a}_2 = \vec{a}_1 \wedge \vec{a}_2 + 2\, \vec{a}_1 \wedge \vec{a}_2 = 3\, \vec{a}_1 \wedge \vec{a}_2. \qquad (9.8)$$
3-D:
$$\partial_{(3)} \big( a_{(3)} \big)^2 = \frac{1}{3!}\, \partial_3 \wedge \partial_2 \wedge \partial_1 (\vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3)^2 = \frac{1}{3!}\, \partial_3 \wedge \partial_2 \wedge \partial_1 \big[ -\vec{a}_1^{\,2} \vec{a}_2^{\,2} \vec{a}_3^{\,2} + \vec{a}_1^{\,2} (\vec{a}_2 \cdot \vec{a}_3)^2 + \vec{a}_2^{\,2} (\vec{a}_3 \cdot \vec{a}_1)^2 + \vec{a}_3^{\,2} (\vec{a}_1 \cdot \vec{a}_2)^2 - 2 (\vec{a}_1 \cdot \vec{a}_2)(\vec{a}_2 \cdot \vec{a}_3)(\vec{a}_3 \cdot \vec{a}_1) \big].$$
Differentiating term by term as in the two-dimensional case (all contributions of the form $\dot{\partial}_k \wedge \dot{\vec{a}}_k$ vanish by [6], P. 67), the first term yields $-\frac{1}{6}(2\vec{a}_3) \wedge (2\vec{a}_2) \wedge (2\vec{a}_1) = \frac{8}{6}\, \vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3$, each of the three middle terms yields $\frac{4}{6}\, \vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3$, and the last term also yields $\frac{4}{6}\, \vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3$, so that
$$\partial_{(3)} \big( a_{(3)} \big)^2 = \frac{8 + 4 + 4 + 4 + 4}{6}\, \vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3 = \frac{24}{6}\, \vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3 = 4\, \vec{a}_1 \wedge \vec{a}_2 \wedge \vec{a}_3. \qquad (9.9)$$

Proposition 78.
$$\binom{r+s}{s}\, \partial_{(r+s)} L(a_{(r+s)}) = \partial_{(r)} \wedge \partial_{(s)}\, L(a_{(r)} \wedge a_{(s)}) = \partial_{(r)} \partial_{(s)}\, L(a_{(r)} \wedge a_{(s)}) \qquad (9.10)$$
for $\partial_{(r)}$ and $a_{(r)}$ as in Def. 73 and $L$ a linear function.

Proof 78.
$$a_{(r+s)} \overset{\text{Def. 73}}{=} a_{(r)} \wedge a_{(s)}, \qquad (9.11)$$
$$\partial_{(r+s)} \overset{\text{Def. 73}}{=} \frac{1}{(r+s)!}\, \partial_{r+s} \wedge \ldots \wedge \partial_{s+1} \wedge \partial_s \wedge \ldots \wedge \partial_1 = \frac{r!\, s!}{(r+s)!} \Big( \frac{1}{r!}\, \partial_{r+s} \wedge \ldots \wedge \partial_{s+1} \Big) \wedge \Big( \frac{1}{s!}\, \partial_s \wedge \ldots \wedge \partial_1 \Big) = \left( \frac{(r+s)!}{(r+s-s)!\, s!} \right)^{-1} \partial_{(r)} \wedge \partial_{(s)} = \binom{r+s}{s}^{-1} \partial_{(r)} \wedge \partial_{(s)}. \qquad (9.12)$$
Hence
$$\binom{r+s}{s}\, \partial_{(r+s)} = \partial_{(r)} \wedge \partial_{(s)}, \qquad (9.13)$$
and
$$\binom{r+s}{s}\, \partial_{(r+s)} L(a_{(r+s)}) = \partial_{(r)} \wedge \partial_{(s)}\, L(a_{(r)} \wedge a_{(s)}) = \partial_{(r)} \partial_{(s)}\, L(a_{(r)} \wedge a_{(s)}). \qquad (9.14)$$
The last equality holds because of the skew-symmetry of the argument $a_{(r)} \wedge a_{(s)}$ (cf. (8.35) and (8.36)).

Proposition 79. For linear functions $L$:
$$\partial_B \partial_A L(A \wedge B) = \partial_B \wedge \partial_A\, L(A \wedge B). \qquad (9.15)$$

Proof 79.
$$\partial_B \partial_A L(A \wedge B) \overset{\text{P. 69}}{=} \binom{r}{r-s}\, \partial_{\bar{r}} L \overset{\text{P. 74}}{=} \binom{r}{r-s}\, \partial_{(r)} L(a_{(r)}) \overset{\binom{r}{r-s} = \binom{r}{s}}{=} \binom{(r-s)+s}{s}\, \partial_{((r-s)+s)} L(a_{((r-s)+s)}) \overset{\text{P. 78}}{=} \partial_{(r-s)} \wedge \partial_{(s)}\, L(a_{(s)} \wedge a_{(r-s)}) \qquad (9.16)$$
$$\overset{\text{P. 74}}{=} \partial_{(r-s)} \wedge \partial_{(s)}\, L(a_{(s)} \wedge B_{r-s}) \overset{\text{P. 74}}{=} \partial_{B} \wedge \partial_{A}\, L(A_s \wedge B_{r-s}).$$

Proposition 80. For a linear function $F_r = F_r(\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_r)$ of $r$ vector variables
$$\vec{a}_k \cdot \partial_{\vec{a}_k} F_r = F_r, \quad (k = 1, \ldots, r). \qquad (9.17)$$

Proof 80.
$$\vec{a}_k \cdot \partial_{\vec{a}_k} F_r = \vec{a}_k \cdot \partial_{\vec{a}_k} F_r(\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_k, \ldots, \vec{a}_r) \overset{\text{P. 67, P. 68},\, X \equiv \vec{a}_k = U}{=} F_r(\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_k, \ldots, \vec{a}_r), \qquad (9.18)$$
for all $k = 1, \ldots, r$.
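The two-dimensional example (9.8) of Remark 77 can be checked numerically. The following sketch (my own illustration, not part of the original text; the function and variable names are hypothetical) evaluates the simplicial derivative $\frac12\,\partial_2\wedge\partial_1$ of the scalar $(\vec a_1\wedge\vec a_2)^2 = (\vec a_1\cdot\vec a_2)^2 - \vec a_1^{\,2}\vec a_2^{\,2}$ in Euclidean $\mathbb{R}^3$ by central finite differences and compares the resulting bivector with $3\,\vec a_1\wedge\vec a_2$:

```python
import itertools

# Numerical check of Remark 77 / Proposition 76 for r = 2:
# the simplicial derivative (1/2) d2 ^ d1 of f(a1, a2) = (a1 ^ a2)^2
# should equal 3 a1 ^ a2.  In Euclidean R^n the scalar square is
# (a1 ^ a2)^2 = (a1.a2)^2 - |a1|^2 |a2|^2, and the bivector a1 ^ a2
# has components (a1_i a2_j - a1_j a2_i) on e_i ^ e_j, i < j.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def f(a1, a2):
    # scalar (a1 ^ a2)^2
    return dot(a1, a2) ** 2 - dot(a1, a1) * dot(a2, a2)

def mixed_partial(a1, a2, i, j, h=1e-3):
    # second mixed partial d^2 f / (d a2_j d a1_i), central differences
    def shift(v, k, d):
        w = list(v); w[k] += d; return w
    return (f(shift(a1, i, h), shift(a2, j, h))
            - f(shift(a1, i, h), shift(a2, j, -h))
            - f(shift(a1, i, -h), shift(a2, j, h))
            + f(shift(a1, i, -h), shift(a2, j, -h))) / (4 * h * h)

def simplicial_derivative_2(a1, a2, n):
    # (1/2) d2 ^ d1 f as a dict of bivector components on e_i ^ e_j, i < j.
    # Sum_{i,j} (e_j ^ e_i) H[i][j] collapses to (H[j][i] - H[i][j]) on
    # e_i ^ e_j for i < j; the extra 1/2 comes from the definition of d_(2).
    H = [[mixed_partial(a1, a2, i, j) for j in range(n)] for i in range(n)]
    return {(i, j): 0.5 * (H[j][i] - H[i][j])
            for i, j in itertools.combinations(range(n), 2)}

n = 3
a1, a2 = [1.0, 2.0, -1.0], [0.5, -1.0, 3.0]
deriv = simplicial_derivative_2(a1, a2, n)
for (i, j), c in deriv.items():
    expected = 3 * (a1[i] * a2[j] - a1[j] * a2[i])  # 3 (a1 ^ a2)_ij
    assert abs(c - expected) < 1e-4, ((i, j), c, expected)
print("OK: (1/2) d2 ^ d1 (a1 ^ a2)^2 = 3 a1 ^ a2")
```

Since $f$ is only quadratic in each vector argument separately, the central differences are essentially exact here; the check reproduces the factor $r + 1 = 3$ of Proposition 76.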
Definition 81 (skew-symmetrizer).
$$G_r \equiv \frac{1}{r!}\, (\vec{a}_1 \wedge \ldots \wedge \vec{a}_r) \cdot (\partial_r \wedge \ldots \wedge \partial_1)\, F_r = a_{(r)} \cdot \partial_{(r)} F_r, \qquad (9.19)$$
for a linear function $F_r$ of $r$ vector variables.

Proposition 82.
$$G_r = G_r(\vec{a}_1, \ldots, \vec{a}_r) \qquad (9.20)$$
is according to Def. 81 skew-symmetric in its arguments. If $F_r$ is skew-symmetric, then
$$G_r = F_r. \qquad (9.21)$$

Proof 82.
$$(\vec{a}_1 \wedge \ldots \wedge \vec{a}_r) \cdot (\partial_r \wedge \ldots \wedge \partial_1)\, F_r \overset{\text{[3] p. 7 (1.25b), p. 11 (1.38)}}{=} (\vec{a}_1 \wedge \ldots \wedge \vec{a}_{r-1}) \cdot \big[ \vec{a}_r \cdot (\partial_r \wedge \ldots \wedge \partial_1) \big]\, F_r$$
$$\overset{\text{[3] p. 11 (1.38)}}{=} \sum_{k=1}^{r} (-1)^{r-k}\, (\vec{a}_1 \wedge \ldots \wedge \vec{a}_{r-1}) \cdot (\partial_r \wedge \ldots \wedge \check{\partial}_k \wedge \ldots \wedge \partial_1)\; \vec{a}_r \cdot \partial_k\, F_r(\vec{x}_1, \ldots, \vec{x}_k, \ldots, \vec{x}_r)$$
$$\overset{F_r \text{ linear, P. 80}}{=} \sum_{k=1}^{r} (-1)^{r-k}\, (\vec{a}_1 \wedge \ldots \wedge \vec{a}_{r-1}) \cdot (\partial_r \wedge \ldots \wedge \check{\partial}_k \wedge \ldots \wedge \partial_1)\, F_r(\vec{x}_1, \ldots, \underset{\text{position } k}{\vec{a}_r}, \ldots, \vec{x}_r)$$
$$\overset{F_r \text{ skew-symmetric}}{=} \sum_{k=1}^{r} (\vec{a}_1 \wedge \ldots \wedge \vec{a}_{r-1}) \cdot (\partial_r \wedge \ldots \wedge \check{\partial}_k \wedge \ldots \wedge \partial_1)\, F_r(\vec{x}_1, \ldots, \check{\vec{x}}_k, \ldots, \vec{x}_r, \vec{a}_r)$$
$$\overset{\text{relabeling of } \vec{x}_1, \ldots, \vec{x}_r}{=} r\, (\vec{a}_1 \wedge \ldots \wedge \vec{a}_{r-1}) \cdot (\partial_{r-1} \wedge \ldots \wedge \partial_1)\, F_r(\vec{x}_1, \ldots, \vec{x}_{r-1}, \vec{a}_r)$$
$$\overset{\text{P. 80}}{=} r\, (\vec{a}_1 \wedge \ldots \wedge \vec{a}_{r-1}) \cdot (\partial_{r-1} \wedge \ldots \wedge \partial_1)\; \vec{a}_r \cdot \partial_r\, F_r(\vec{x}_1, \ldots, \vec{x}_{r-1}, \vec{x}_r) = \ldots = r!\; \vec{a}_1 \cdot \partial_1 \ldots \vec{a}_r \cdot \partial_r\, F_r \overset{\text{P. 80}}{=} r!\, F_r, \qquad (9.22)$$
where the symbols $\check{\vec{x}}_k$, $\check{\partial}_k$ in the above mean that $\vec{x}_k$ and $\partial_k = \partial_{\vec{x}_k}$ are to be left out.

Proposition 83 (canonical form of alternating linear form). The simplicial derivative of an alternating linear $r$-form $\alpha_r = \alpha_r(\vec{a}_1, \ldots, \vec{a}_r)$ ([3], pp. 33 ff.) yields an $r$-vector
$$A_r^+ \equiv \partial_{(r)} \alpha_r, \qquad (9.23)$$
with the property
$$\alpha_r = (\vec{a}_1 \wedge \ldots \wedge \vec{a}_r) \cdot A_r^+ = a_{(r)} \cdot A_r^+, \qquad (9.24)$$
also called the canonical form of the alternating linear $r$-form $\alpha_r$.

Proof 83. Def. 73 and Def. 19(i) (for vector derivatives) show that the above defined $A_r^+$ must be an $r$-vector. Further
$$a_{(r)} \cdot A_r^+ = a_{(r)} \cdot \partial_{(r)} \alpha_r = \alpha_r. \qquad (9.25)$$
The last equality is due to P. 80, and the linearity and skew-symmetry of $\alpha_r$.
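The skew-symmetrizer of Definition 81 and Proposition 82 can likewise be illustrated numerically for $r = 2$. The sketch below (my own illustration, not part of the original text; all names are hypothetical) uses the expansion $(\vec a_1\wedge\vec a_2)\cdot(\partial_2\wedge\partial_1) = (\vec a_2\cdot\partial_2)(\vec a_1\cdot\partial_1) - (\vec a_2\cdot\partial_1)(\vec a_1\cdot\partial_2)$ into scalar directional derivatives, evaluated by central differences on a bilinear form $F(\vec x_1, \vec x_2) = \vec x_1 \cdot (M \vec x_2)$:

```python
import random

# Numerical sketch of Def. 81 / P. 82 for r = 2: for a bilinear scalar
# F(x1, x2) = x1 . (M x2), the skew-symmetrizer G_2 = a_(2) . d_(2) F
# equals (F(a1, a2) - F(a2, a1)) / 2, is skew-symmetric in (a1, a2),
# and reproduces F whenever F itself is skew-symmetric.

n = 4
random.seed(0)
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def F(x1, x2, mat=None):
    mat = M if mat is None else mat
    return sum(x1[i] * mat[i][j] * x2[j] for i in range(n) for j in range(n))

def dir_deriv(func, x1, x2, slot, d, h=1e-4):
    # directional derivative (d . d_slot) func, central differences
    def shift(s):
        y1 = [a + (s * h * b if slot == 1 else 0) for a, b in zip(x1, d)]
        y2 = [a + (s * h * b if slot == 2 else 0) for a, b in zip(x2, d)]
        return func(y1, y2)
    return (shift(1) - shift(-1)) / (2 * h)

def G2(func, a1, a2, x1, x2):
    # a_(2) . d_(2) F = (1/2)[(a2.d2)(a1.d1) - (a2.d1)(a1.d2)] F
    t1 = dir_deriv(lambda y1, y2: dir_deriv(func, y1, y2, 1, a1), x1, x2, 2, a2)
    t2 = dir_deriv(lambda y1, y2: dir_deriv(func, y1, y2, 2, a1), x1, x2, 1, a2)
    return 0.5 * (t1 - t2)

a1 = [random.uniform(-1, 1) for _ in range(n)]
a2 = [random.uniform(-1, 1) for _ in range(n)]
x1 = [random.uniform(-1, 1) for _ in range(n)]
x2 = [random.uniform(-1, 1) for _ in range(n)]

g = G2(F, a1, a2, x1, x2)
# skew-symmetrizer value: G_2(a1, a2) = (F(a1, a2) - F(a2, a1)) / 2
assert abs(g - 0.5 * (F(a1, a2) - F(a2, a1))) < 1e-6
# skew-symmetry of G_2 in its arguments (Proposition 82)
assert abs(G2(F, a2, a1, x1, x2) + g) < 1e-6
# if F is already skew-symmetric (M antisymmetric), G_2 = F
S = [[M[i][j] - M[j][i] for j in range(n)] for i in range(n)]
Fs = lambda y1, y2: F(y1, y2, S)
assert abs(G2(Fs, a1, a2, x1, x2) - Fs(a1, a2)) < 1e-6
print("OK: skew-symmetrizer checks pass for r = 2")
```

Because $F$ is bilinear, the nested central differences are exact up to rounding, and the check confirms both statements of Proposition 82 in this special case.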
10 Conclusion

In order to make the treatment more self-contained in the future, it may be useful to compile a set of important geometric algebra relationships which are necessary for multivector differential calculus and are often referred to in this paper. It may also be of interest to notice that there is a MAPLE package software implementation of the multivector derivative and multivector differential [13]. After discussing vector differential calculus [6] and multivector differential calculus in some detail, I would like to proceed to a similar discussion of the directed integration of multivector functions in the future.

Acknowledgements

I first of all thank God for the joy of studying his creation: ". . . since the creation of the world God's invisible qualities - his eternal power and divine nature - have been clearly seen, being understood from what has been made . . . " [17]. I thank my wife for encouragement, H. Ishi for corrections and improvements, G. Sobczyk for some hints, and O. Giering and J.S.R. Chisholm for attracting me to geometry. Fukui University provided a good research environment. I thank K. Shinoda (Kyoto) for his prayerful support.

References

[1] Jesus Christ, Gospel according to Matthew, Bible, Today's English Version.
[2] S. Gull, A. Lasenby, C. Doran, Imaginary Numbers are not Real - the Geometric Algebra of Spacetime, Found. Phys. 23 (9), 1175 (1993).
[3] D. Hestenes, G. Sobczyk, Clifford Algebra to Geometric Calculus, Kluwer, Dordrecht, 1999.
[4] I. Bell, Maths for (Games) Programmers, Section 4 - Multivector Methods, < >
[5] D. Hestenes, Multivector Calculus, J. Math. Anal. and Appl. 24 (2), 313-325 (1968). <http://modelingnts.la.asu.edu/html/GeoCalc.html>
[6] E.M.S. Hitzer, Vector Differential Calculus, Mem. Fac. Eng. Fukui Univ. 50 (1) (2002). <http://sinai.mech.fukui-u.ac.jp/gcj/pubs.html>
[7] E.M.S. Hitzer, Antisymmetric Matrices are Real Bivectors, Mem. Fac. Eng. Fukui Univ. 49 (2), 283-298 (2001).
<http://sinai.mech.fukui-u.ac.jp/gcj/pubs.html>
[8] D. Hestenes, New Foundations for Classical Mechanics, Kluwer, Dordrecht, 1999.
[9] D. Hestenes, Space Time Calculus, <http://modelingnts.la.asu.edu/pdf/SpaceTimeCalc.pdf>
[10] D. Hestenes, Space Time Algebra, Gordon and Breach, New York, 1966.
[11] D. Hestenes, New Foundations for Mathematical Physics, <http://modelingnts.la.asu.edu/html/NFMP.html>
[12] G. Sobczyk, Simplicial Calculus with Geometric Algebra, available at <http://modelingnts.la.asu.edu/html/GeoCalc.html>
[13] Cambridge Maple GA package, < >
[14] I.N. Bronstein, K.A. Semendjajew, Taschenbuch der Mathematik (German), 22nd ed., Verlag Harri Deutsch, Frankfurt am Main, 1985.
[15] K. Ito (ed.), Encyclopedic Dictionary of Mathematics, 2nd ed., Mathematical Society of Japan, The MIT Press, Cambridge, Massachusetts, 1996.
[16] O. Forster, Analysis 1 (Differential- und Integralrechnung einer Veränderlichen), Vieweg, Wiesbaden & Rowohlt Taschenbuch Verlag, Reinbek bei Hamburg, 1976.
[17] Paul, Romans 1:20, The Holy Bible, New International Version, IBS, Colorado, 1973. <http://bible.gospelcom.net/>