Boosting Vector Calculus with the Graphical Notation
Joon-Hwi Kim∗
Department of Physics and Astronomy, Seoul National University, Seoul, South Korea

Maverick S. H. Oh† and Keun-Young Kim‡
Department of Physics and Photon Science, Gwangju Institute of Science and Technology, Gwangju, South Korea
(Dated: January 9, 2020)

Learning vector calculus techniques is one of the major hurdles faced by physics undergraduates. However, beginners report various difficulties dealing with the index notation due to its bulkiness. Meanwhile, there have been graphical notations for tensor algebra that are intuitive and effective in calculations and can serve as a quick mnemonic for algebraic identities. Although they have been introduced and applied to vector algebra in the educational context, to the best of our knowledge, there have been no publications that employ the graphical notation for three-dimensional Euclidean vector calculus, involving differentiation and integration of vector fields. Aiming for physics students and educators, we introduce such "graphical vector calculus," demonstrate its pedagogical advantages, and provide enough exercises containing both purely mathematical identities and practical calculations in physics. The graphical notation can readily be utilized in the educational environment to not only lower the barriers in learning and practicing vector calculus but also make students interested and self-motivated to manipulate the vector calculus syntax and heuristically comprehend the language of tensors by themselves.
I. INTRODUCTION
As an essential tool in all fields of physics, vector calculus is one of the mathematical skills that physics undergraduates have to be acquainted with. However, vector calculus in the index notation can be challenging to beginners due to its abstractness and bulkiness. They report various difficulties: manipulating indices, getting lost and not knowing where to proceed during long calculations, memorizing the vector calculus identities, and so on. Meanwhile, there have been graphical languages for tensor algebra, such as the Penrose graphical notation, birdtracks, or trace diagrams, that are intuitive and effective in calculations. Although they can readily be applied to three-dimensional Euclidean vector calculus, to the best of our knowledge, publications covering vector calculus in a graphical notation remain absent. Previous works dealt only with linear "algebraic" calculations and did not consider vector differential and integral "calculus."

In response to this, for physics learners and educators, we introduce "graphical vector calculus," advertise how easily and quickly the graphical notation derives vector calculus identities, and provide practical examples in the physics context. Here, we consider differential calculus only; vector integral calculus, which also frequently appears in physics, might be covered in a following paper. See the supplementary material for a brief discussion.

The pedagogical advantages of the graphical notation are numerous. First of all, it evidently resolves the aforementioned difficulties of a beginner. It serves as an intuitive language that is easy to acquire but lacks no essential element of vector calculus compared to the ordinary index notation. In addition, students who are already acquainted with the index notation would also benefit from learning the graphical notation. It will increase their virtuosity in index gymnastics and prompt them to develop concrete ideas of coordinate-free tensor algebra. Lastly, the graphical notation of vector calculus serves as an excellent primer for graphical tools in modern physics, such as perturbative diagrams in field theories, as a conceptual precursor to Feynman diagrams. We anticipate that this "user's manual" of graphical vector calculus will lower the barriers in learning and practicing vector calculus, as Feynman diagrams did in quantum field theory.

II. GRAPHICAL VECTOR ALGEBRA

A. Motivation and Basic Rules
We have two vectors, $\vec{A}$ and $\vec{B}$. We can make a scalar from these two by the dot product. In the ordinary index notation, we write $\vec{B}\cdot\vec{A} = B_i A_i$. Now, let us give some artistic touch to it (Eq. (1)): the "$B$-atom" and the "$A$-atom" are pairing their "electrons" (the repeated index $i$) to form a "covalent bond!" Analogous to chemistry, we depict a "shared electron pair" by a line connecting the two "atoms" (Eq. (2)). Vectors $\vec{A}$ and $\vec{B}$ are graphically represented as boxes with a line attached, and the inner product is depicted by connecting the two lines of the two boxes. Furthermore, an additional insight from this is that scalars are graphically represented as objects with no "external" lines. The diagram for $\vec{B}\cdot\vec{A}$ only has an "internal" line; no lines are connected to the outside. It is isolated, so that if the entire diagram is put inside a black box, no lines will poke out from it. In other words, scalars do not have free indices, while a vector is a box with one external line (Eq. (3)).

The basic observations here are summarized in Table I.

TABLE I. Translation between the index language and the graphical language.

  Index language                       Graphical language
  An n-index quantity                  A box with n attached lines
  The name of a quantity               The character written inside the box
  Pairing (contracting) two indices    Connecting two ends of lines
  Free indices                         External lines
  Contracted (dummy) indices           Internal lines

Meanwhile, for scalar multiplication, addition, and subtraction, we do not introduce new notational rules but simply borrow the ordinary notation; that is, they are denoted by juxtaposition and by the "+" and "$-$" symbols (Eq. (4)). When two objects are juxtaposed, their relative position is irrelevant: $fg = gf = \cdots$, etc.

However, it should be noted that in Eq. (2), $\vec{B}$ is depicted as a box with a line attached at its right side. It turns out that one need not care from which side of a box a line stems when denoting a vector. A line can start from the left side, the right side, the upper side, the lower side, or anywhere on the box, as if it freely "dangles" and can be repositioned at will (Eq. (5)). It can be seen that an arbitrary rotation does not affect the value of a graphical equation; moreover, neither does an arbitrary rearrangement of the boxes. For example, Eq. (5) can be deformed further as in Eq. (6). So even if a diagram is drawn to look a little bit stiff, please remember that it is "dancing" freely behind the scene! Also, a line can freely pass under boxes, as can be seen in the second equality of Eq. (6). In addition, intersections of lines have no significance; think of them as just overpassing each other. When such intersections occur, we will always draw them in a manner that leaves no ambiguity if one follows the "law of good continuation": a crossing is read as an overlap of two lines that each continue smoothly, not as two lines that kink at the crossing point.
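The translation rules of Table I can be exercised numerically. A minimal numpy sketch (the sample vectors are arbitrary illustrations, not from the paper): the repeated index $i$ in $B_i A_i$ is an "internal line," and the result carries no free index.

```python
import numpy as np

# Two sample "atoms" (one-terminal boxes)
A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# B_i A_i: the repeated index i is summed over (an "internal line"),
# leaving no free index -- the empty output signature '->' means a scalar.
dot = np.einsum('i,i->', B, A)
assert np.isclose(dot, B @ A)

# A free index survives as an "external line": one output index -> a vector.
scaled = np.einsum(',i->i', 2.0, A)   # scalar multiplication f*A
assert np.allclose(scaled, 2.0 * A)
```

The einsum subscript string is, in effect, a one-line transcription of a diagram: repeated letters are internal lines, letters after `->` are external lines.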
B. Meet the Kronecker Delta
The diagram for $\vec{B}\cdot\vec{A}$ can be interpreted from a different perspective. The last diagram in Eq. (5) looks as if the two vectors $B$ and $A$ are "plugged into" a cup-shaped object (Eq. (7)). Then, what does the cup-shaped object represent? It is a "machine" that takes two vectors as input and gives a scalar: it is the inner product "$\cdot$," or in the index notation, "$\delta_{ij}$." Plugging lines into the machine corresponds to contraction of indices, $B_i \delta_{ij} A_j$ (Eqs. (8) and (9)). In the second line, we turn on "index markers" to avoid confusion as to which terminal of the line corresponds to the index $i$ and which to $j$.

A comment should be made about the symmetry of the Kronecker delta. The fact that $\delta_{ij} = \delta_{ji}$ is already reflected in the design of our graphical notation, that is, in the appearance of $\delta_{ij}$ together with the dancing rule of equivalent diagrams. In the graphical notation, $\delta_{ij}$ is an undirected line, so there is no way to distinguish its "left" and "right" terminals. For instance, see the first equality of Eq. (5). If you want to write this symmetry condition without "test vectors" plugged in, observe the second form of $\vec{B}\cdot\vec{A}$ in Eq. (5) and the last form in Eq. (6); it can be seen that the two bare diagrams are equal (Eq. (10)). Turning on the index markers gives Eq. (11), or, with one more touch, Eq. (12). The left-hand side assigns $i$ to the left terminal of the cup shape and $j$ to the right terminal; the right-hand side assigns $i$ to the right terminal and $j$ to the left. If we pretend for a moment that the index assigned to the left terminal should be placed first when reading the cup shape in Eq. (12) in the index notation, then we have $\delta_{ij} = \delta_{ji}$.

C. Meet the Cross Product Machine
Now, we move on to the next important structure, the cross product. The cross product is a machine that takes two vectors as input and gives a vector. Hence, two lines are needed for input and one line for output (Eq. (13)). Please do not forget that the diagrams are dancing; Eq. (13) shows just three snapshots out of the infinitude of possible configurations in which $\vec{A}\times\vec{B}$ can be drawn. Also, note that the third diagram is read as $\vec{A}\times\vec{B}$, the same as the first one. The lines attached to the cross product machine should be read counterclockwise from the core (the small dot) of the machine. The left and right arms of the cross product machine are connected to $\vec{A}$ and $\vec{B}$, respectively, in both the first and third diagrams in Eq. (13), so they are equivalent. Continuous deformations do not affect the value of a diagram.

However, what about discontinuous deformations? In the case of the inner product, yanking a twist, a discontinuous deformation that yields a cusp during the process, did not affect the value, because the inner product is symmetric. The cross product, in contrast, is antisymmetric, $\vec{A}\times\vec{B} = -\vec{B}\times\vec{A}$; therefore, when the two arms of the first diagram in Eq. (13) are swapped (giving the third diagram) and then yanked, a minus sign pops out, as depicted in Fig. 1. A kinesthetic imagery that the lines of the cross product machine are elastic but particularly stiff near the core might help in remembering this intuitively. Do not forget the minus sign: yanking a twist is a discontinuous "clank" process. Note that in the case of a general object (tensor), the value after swap-then-yanking its two arms is by no means related to the original value, unless the object is symmetric or antisymmetric under permutation of the two indices.

FIG. 1. A minus sign pops out with a "clank!" sound when you swap-then-yank the two arms of a cross product machine. The plaintext equation corresponding to this action is "$\vec{A}\times\vec{B} = -\vec{B}\times\vec{A}$."

D. Triple Products
Having introduced the graphical notation for the cross product, let us now graphically express the triple product identities. First, the scalar triple product $\vec{C}\cdot(\vec{A}\times\vec{B})$ can be drawn by connecting the free terminals of $C$ and Eq. (13), as in Eq. (14). The cyclic symmetry of the scalar triple product is already reflected in its graphical design: the diagram looks the same under a threefold rotation, so that (Eq. (15))

$\vec{C}\cdot(\vec{A}\times\vec{B}) = \vec{A}\cdot(\vec{B}\times\vec{C}) = \vec{B}\cdot(\vec{C}\times\vec{A})$.

This is the economy of graphical notations: redundant plaintext expressions are brought to the same, or at least manifestly equivalent, diagram.

As a side note, imagine what it would mean if the cross product machine were naked, while it is fully dressed in Eq. (14), which is $\epsilon_{ijk} C_i A_j B_k$ in the index notation. As some readers might have already noticed, another name for the cross product machine is the Levi-Civita symbol, $\epsilon_{ijk}$ (Eq. (16)). It is a three-terminal machine (a three-index tensor), antisymmetric in every pair of its arms (indices).

Next is the vector triple product. The BAC-CAB formula translates into the graphical language as Eq. (17):

$\vec{A}\times(\vec{B}\times\vec{C}) = \vec{B}(\vec{A}\cdot\vec{C}) - \vec{C}(\vec{A}\cdot\vec{B})$.

This holds for arbitrary $\vec{A}$, $\vec{B}$, and $\vec{C}$; thus, one can extract the "bones" only (Eq. (18)). Until now, all graphical equations followed from the defining rules of the graphical representation. Eq. (18), however, is the first, and indeed the only, nontrivial formula relating cross product machines and Kronecker deltas. It is the most important identity and serves as the basic "syntax" of our calculations.

Equation (18) is by no means "new." With the index markers turned on (Eq. (19)), it turns out to be the well-known formula for the contraction of two $\epsilon_{ijk}$'s:

$\epsilon_{ijk}\,\epsilon_{klm} = \delta_{jm}\delta_{il} - \delta_{jl}\delta_{im}$.  (20)

However, the graphical way has multiple appealing points. First, it naturally serves as a quick visual mnemonic for Eq. (20). Also, in practical circumstances, the graphical form avoids the bulkiness of dummy indices and significantly simplifies the procedure of index replacement by $\delta_{ij}$'s. One does not have to repeat "$i$ to $l$, $j$ to $m$" in one's mind while organizing the expanded terms. This makes a greater difference in calculation time as the equation involves more operations and dummy indices (the proof of the Jacobi identity, for example). On the other hand, classification of vector algebraic identities is immediate when they are written in the graphical notation, because it shows the (tensorial) structure of equations explicitly. One can recognize identical structures at a single glance, as comprehension of visuals is much faster than that of text. Some may take a critical stance on this, because mere counting of the symbols "$\times$" and "$\cdot$" would also reveal the structure of equations, albeit only in simple cases. However, with the graphical notation, generating different identities of the same structure is also straightforward: it is accomplished by just attaching "flesh pieces" (vectors or arbitrary multi-terminal objects) to the "bone." For instance, one can easily write down the equations equivalent to the BAC-CAB rule or the Jacobi identity. Knowing which fundamental rules identities are rooted in, together with being able to generate equivalent identities, will effectively promote a concrete understanding of the structure of vector algebra.
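All of the algebra in this section can be spot-checked numerically. The sketch below (sample vectors chosen arbitrarily for illustration) builds the cross product machine $\epsilon_{ijk}$ as a numpy array and verifies the "clank" sign flip, the contraction identity Eq. 20, and the BAC-CAB rule:

```python
import numpy as np

# The cross product machine: epsilon_ijk, antisymmetric in every pair of arms
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

delta = np.eye(3)  # the Kronecker delta: an undirected line

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
C = np.array([7.0, 8.0, 9.0])

# (A x B)_i = eps_ijk A_j B_k -- feeding two boxes into the machine
assert np.allclose(np.einsum('ijk,j,k->i', eps, A, B), np.cross(A, B))

# The "clank": swapping the two input arms flips the sign
assert np.allclose(np.einsum('ijk,k,j->i', eps, A, B), -np.cross(A, B))

# Eq. 20: eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,klm->ijlm', eps, eps)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
assert np.allclose(lhs, rhs)

# Attaching "flesh" A, B, C to the bones gives the BAC-CAB rule
assert np.allclose(np.cross(A, np.cross(B, C)), B * (A @ C) - C * (A @ B))
```

Note that the "bone" identity is checked once, on the bare four-index arrays, and the BAC-CAB rule then follows for any choice of the three vectors.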
III. GRAPHICAL VECTOR CALCULUS
Now is the time for graphical vector "calculus." Here, we consider not just scalars and vectors but "scalar fields" $f(\vec{r}), g(\vec{r}), \cdots$ and "vector fields" $\vec{A}(\vec{r}), \vec{B}(\vec{r}), \cdots$; they depend on the spatial coordinates or, equivalently, on the position vector $\vec{r}$. In this section, "$(\vec{r})$" is omitted unless there is an ambiguity as to whether a quantity depends on $\vec{r}$ or not.

A. The Basics
The first mission is to graphically represent $\nabla = \vec{e}_i\,\partial/\partial x_i := \vec{e}_i\,\partial_i$, where $\vec{e}_i$ and $x_i$ are the $i$-th Cartesian basis vector and coordinate, respectively. $\nabla$ is a "vector" (that is, it carries an index) but is also a differential operator at the same time. Therefore, to accomplish the mission, a notation that has one terminal and is capable of representing the Leibniz property (the product rule of derivatives) should be devised. The latter can be achieved by an empty circle, reminiscent of a balloon. Things inside the balloon are subject to differentiation. The balloon "eats" $fg$ by first biting $f$ only, then $g$ only: $(fg)' = f'g + fg'$. To "vectorize" this, we simply attach a single tail to the balloon, giving Eq. (21):

$\partial_i(fg) = \partial_i(f)\,g + f\,\partial_i(g)$.

This "differentiation hook" design was previously suggested by Penrose. However, he did not publish how to do Euclidean vector calculus in three dimensions using it. As you will soon see, it is powerful in distinguishing vector algebraic manipulations from the range of differentiation while keeping an index-free format, whereas both are denoted without distinction by parentheses in the ordinary notation.

The Leibniz rule, Eq. (21), can be applied regardless of the operand type. For instance, a vector can be fed to $\nabla$, giving the two-terminal object of Eq. (22):

$\partial_i A_j = \big(\text{``}\nabla\vec{A}\text{''}\big)_{ij}$.

Here, visual reasoning comes first, naturally suggesting the concept "$\nabla\vec{A}$" without reference to coordinates (before we attach index markers). This is one of the instances where the graphical notation intuitively hints students who do not yet have an abstract and rigorous mathematical understanding to enter the world of tensors, with its coordinate-free nature unspoiled.

The expression in Eq. (22) can be physically or geometrically meaningful by itself, but it frequently appears in a particular encoding: divergence and curl. They are obtained when we let the two tails of Eq. (22) "interact" with each other through the machines we have seen in Section II (Eq. (23)): contracting them with a Kronecker delta gives $\nabla\cdot\vec{A}$, and contracting them with a cross product machine gives $\nabla\times\vec{A}$.

A final note: the differentiation applies only to boxes, not to lines, because $\delta_{ij}$'s and $\epsilon_{ijk}$'s are all constants. So one can freely rearrange the balloons (differentiations) relative to connecting lines and cross product machines, regardless of how they are entangled with each other. An imagery that the balloon membrane is impermeable to boxes but indifferent to lines or cross product machines passing through can be helpful.

B. First Derivative Identities
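Before deriving identities graphically, the definitions so far can be cross-checked in code. A small sympy sketch (the field $\vec{A}$ below is an arbitrary illustration, not from the paper): the divergence and curl of Eq. 23 are exactly the two contractions of the two-terminal object $\partial_i A_j$ of Eq. 22.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
A = sp.Matrix([x*y, y*z, z*x])   # a sample vector field

# The two-terminal object of Eq. 22: (grad A)_ij = d_i A_j
gradA = sp.Matrix(3, 3, lambda i, j: sp.diff(A[j], coords[i]))

# Divergence: contract the two terminals with a Kronecker delta (the trace)
assert sp.simplify(gradA.trace() - (x + y + z)) == 0

# Curl: contract the two terminals with the cross product machine
curlA = sp.Matrix([gradA[1, 2] - gradA[2, 1],
                   gradA[2, 0] - gradA[0, 2],
                   gradA[0, 1] - gradA[1, 0]])
assert curlA == sp.Matrix([-y, -z, -x])
```

The point of the sketch is that `gradA` is the primitive object; divergence and curl are just two different wirings of its two terminals.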
Finally, we now show how easy it is to derive vector calculus identities with the graphical notation! Essential examples are demonstrated here; the remaining identities are worked out in the supplementary material as exercises.

1. $\nabla\cdot(\vec{A}\times\vec{B})$

From the diagrams for the cross product (Eq. (13)) and the divergence of a vector field (Eq. (23)), $\nabla\cdot(\vec{A}\times\vec{B})$ can easily be represented graphically. Then, apply the Leibniz rule, Eq. (21), to obtain Eq. (24): one term in which the balloon bites $\vec{A}$ and one in which it bites $\vec{B}$. The second term is a contraction that reads $\vec{B}\cdot(\nabla\times\vec{A})$. The first term, after a "clank," reads $(-\nabla\times\vec{B})\cdot\vec{A}$.

FIG. 2. The "ecosystem" of the graphical vector calculus.

Thus, we obtain

$\nabla\cdot(\vec{A}\times\vec{B}) = \vec{B}\cdot(\nabla\times\vec{A}) - \vec{A}\cdot(\nabla\times\vec{B})$.

We do not need to memorize the tricky minus sign or look up a list of vector identities every time. All we need to do is just doodle the diagrams and see what happens.

2. $\nabla\times(\vec{A}\times\vec{B})$

$\nabla\times(\vec{A}\times\vec{B})$ can readily be written in graphical form from the diagrams for the cross product (Eq. (13)) and the curl of a vector field (Eq. (23)). The formula is rather complex-looking:

$\nabla\times(\vec{A}\times\vec{B}) = (\nabla\cdot\vec{B})\vec{A} + (\vec{B}\cdot\nabla)\vec{A} - (\nabla\cdot\vec{A})\vec{B} - (\vec{A}\cdot\nabla)\vec{B}$.

While proving this in the index notation, you may frown at the equations to recognize which index corresponds to which epsilon and delta; it is much neater in the graphical notation. The proof proceeds by applying the Leibniz rule, Eq. (21), and the identity Eq. (18), as shown in Eq. (25). Translating back to the ordinary notation gives the desired result. Note that the second term in the bottom line translates into $(\vec{B}\cdot\nabla)\vec{A}$, since a balloon "modified" by $\vec{B}$ differentiates its content "$\vec{B}$-likely"; that is, it is the directional derivative with respect to $\vec{B}$, namely $B_i\partial_i(\cdots)$.

3. $\nabla(\vec{A}\cdot\vec{B})$

Lastly, we demonstrate a graphical reasoning for the notorious vector calculus identity $\nabla(\vec{A}\cdot\vec{B})$. The formula is given by Eq. (28). It is perhaps the most complicated among all vector calculus identities. A bigger problem, however, is that it is not clear how to massage $\nabla(\vec{A}\cdot\vec{B})$ into smaller expressions. In the graphical notation, one can see the motivation of each step more transparently. Start from the diagram for $\nabla(\vec{A}\cdot\vec{B})$ and apply the Leibniz rule (Eq. (26)). We aim to express Eq. (26) in tractable terms; we must transform it into vectorial terms that can be written in a coordinate-free manner in the ordinary notation (such as divergences, curls, or directional derivatives). The second term on the right-hand side is identical to the first with $\vec{A}$ and $\vec{B}$ interchanged; therefore, we may work on the first term and then simply perform the substitution to obtain the result for the second.

The central observation that guides us is that if the two tails in the first term were contracted the other way, it could be written as $(\vec{B}\cdot\nabla)\vec{A}$. Interchanging two lines is readily possible by Eq. (18), which yields Eq. (27); in the second line of Eq. (27), the upper cross product machine is "clanked," giving

$(\vec{B}\cdot\nabla)\vec{A} + \vec{B}\times(\nabla\times\vec{A})$.

Finally,

$\nabla(\vec{A}\cdot\vec{B}) = (\vec{B}\cdot\nabla)\vec{A} + \vec{B}\times(\nabla\times\vec{A}) + (\vec{A}\leftrightarrow\vec{B})$,  (28)

where "$+(\vec{A}\leftrightarrow\vec{B})$" means adding the same expression with $\vec{A}$ and $\vec{B}$ interchanged. This trick of interchanging two lines is often useful; with the graphical notation, utilizing it, and recognizing when to use it, is achieved without difficulty.

C. Second and Higher-order Derivative Identities
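Before moving on to second derivatives, the three first-derivative identities just derived can be confirmed with sympy (the two sample fields are arbitrary choices for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def ddir(V, F):
    """(V . nabla) F: a balloon 'modified' by V."""
    return sp.Matrix([V.dot(grad(F[i])) for i in range(3)])

A = sp.Matrix([x*y, y*z, z*x])
B = sp.Matrix([sp.sin(x), sp.cos(y), x*y*z])

# div(A x B) = B . curl A - A . curl B
assert sp.simplify(div(A.cross(B)) - (B.dot(curl(A)) - A.dot(curl(B)))) == 0

# curl(A x B) = (div B) A + (B . nabla) A - (div A) B - (A . nabla) B
assert sp.simplify(curl(A.cross(B))
                   - (div(B)*A + ddir(B, A) - div(A)*B - ddir(A, B))) == sp.zeros(3, 1)

# grad(A . B) = (B . nabla) A + B x curl A + (A <-> B)
assert sp.simplify(grad(A.dot(B))
                   - (ddir(B, A) + B.cross(curl(A))
                      + ddir(A, B) + A.cross(curl(B)))) == sp.zeros(3, 1)
```

Such checks on concrete fields do not replace the graphical proofs, but they are a quick way for a student to gain confidence in each derived identity.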
Graphical proofs of second and higher-order identities proceed analogously. Second-order derivatives are depicted as double-balloon diagrams. No new graphical rules are introduced except the following "commutativity of derivatives" (Eq. (29)), where anything smooth enough for the derivatives to commute can go inside the balloons. This is translated into the ordinary notation as the operator identity $\partial_j\partial_i = \partial_i\partial_j$.

One of the most immediate results for second-order derivatives is Eq. (30), in which a cross product machine contracted with two balloons vanishes. At the first equality, the inner balloon is rearranged to be the outer one according to Eq. (29); the second equality comes from the dancing rule; at the third equality, the "clank" process is used, so the diagram equals minus itself and hence is zero. One can easily see that $\nabla\times(\nabla f) = 0$ and $\nabla\cdot(\nabla\times\vec{A}) = 0$ are both consequences of this property. The details are contained in the supplementary material, along with proofs of other second and higher-order identities.

IV. PRACTICAL EXAMPLES
So far, this is the story of the graphical notation, a beginners' companion to vector calculus. In this section, we provide practical examples in the physics context.
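As a quick first example in code, the two second-order results of Section III C reduce to one-line sympy checks (the sample fields are arbitrary smooth choices):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

f = sp.exp(x) * sp.sin(y * z)            # any smooth scalar field
A = sp.Matrix([x*y*z, sp.cos(x), y**2])  # any smooth vector field

# Both vanish because partial derivatives commute (Eq. 29)
assert sp.simplify(curl(grad(f))) == sp.zeros(3, 1)
assert sp.simplify(div(curl(A))) == 0
```

Both assertions succeed for any smooth field, which is precisely the content of Eq. 30: the antisymmetric machine annihilates the symmetric pair of derivatives.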
A. The Economy of the Graphical Notation: The Same Diagram, Different Readings
Remember the economy of the graphical notation in Section II C? In music, there are objects that have multiple names in the ordinary notation; for example, D♯ and E♭ are the same when realized aurally. Likewise, there are situations where different plaintext equations are represented by a single graphical expression, so that one can easily recognize their equivalence. The following two expressions, which appear when one deals with the equations of motion of a rigid electric dipole translating and rotating in a magnetic field, are equal in value but spelled differently in the ordinary notation:

$\vec{v}\cdot\big((\vec{\omega}\times\vec{p})\times\vec{B}\big), \qquad -\big(\vec{p}\times(\vec{v}\times\vec{B})\big)\cdot\vec{\omega}$.  (31)

To see their equivalence, one would have to spend time permuting the vectors according to the properties of the triple products. However, it is strikingly easy if one draws the corresponding diagram (Eq. (32)). The two expressions in Eq. (31) are just different readings (groupings) of Eq. (32): it is a matter of grouping the left branch ($\vec{\omega}\times\vec{p}$) first or the right branch ($\vec{v}\times\vec{B}$) first. Permuting the vectors in the ordinary notation and deforming the diagram in the graphical notation are just two different ways of manipulating an identical tensor structure, but the latter is much easier. Then why not use the graphical notation, at least as a mnemonic?

B. Cross Your Fingers
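Before demonstrating the calculational power of the notation, note that the equivalence of the two readings in Eq. 31 is also easy to confirm numerically; a small numpy sketch with randomly sampled vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
v, w, p, B = (rng.normal(size=3) for _ in range(4))   # w plays the role of omega

# Two readings of the same diagram, Eq. 32:
lhs = v @ np.cross(np.cross(w, p), B)      # v . ((omega x p) x B)
rhs = -np.cross(p, np.cross(v, B)) @ w     # -(p x (v x B)) . omega
assert np.isclose(lhs, rhs)
```

A random-vector check like this is a useful classroom habit: it tests the tensor structure of an identity without proving it, in one or two lines.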
The capacity of the graphical notation goes beyond a mnemonic. It is a calculation tool equipped with its own syntax, so that one can carry out the entire process of vector calculus in the graphical notation without reference to indices. Let us demonstrate such calculational advantages.

The trick of interchanging lines introduced in Section III B 3 serves to reassign contractions between indices so as to obtain a more convenient form. For an example of its practical usage, consider the electrostatic force formula for a point electric dipole $\vec{p}$ in an electric field $\vec{E}(\vec{r})$. It is given by $\nabla\big(\vec{p}\cdot\vec{E}(\vec{r})\big)$, but also by $(\vec{p}\cdot\nabla)\vec{E}(\vec{r})$. It would be overkill to look up the vector calculus identity table and apply the general formula, Eq. (28), because $\vec{p}$ is not differentiated by $\nabla$. The graphical equations in Eq. (33) alone complete the proof of the equivalence of the two, once one notes that $\nabla\times\vec{E}(\vec{r}) = 0$ in electrostatics. This shows the intention of the calculation evidently, without memorizing the whole formula. In the case of a point magnetic dipole $\vec{m}$ in a magnetic field $\vec{B}(\vec{r})$, the same manipulation (Eq. (34)) gives the force exerted on the dipole as

$\nabla\big(\vec{m}\cdot\vec{B}(\vec{r})\big) = (\vec{m}\cdot\nabla)\vec{B}(\vec{r}) + \mu_0\,\vec{m}\times\vec{J}(\vec{r})$,

where $\vec{J}(\vec{r}) = \mu_0^{-1}\,\nabla\times\vec{B}(\vec{r})$ is the current density at $\vec{r}$.

C. Identities Involving $\vec{r}$
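Before specializing to $\vec{r}$, the dipole manipulation above can be cross-checked with sympy: for any electrostatic field $\vec{E} = -\nabla\phi$ (hence curl-free) and a constant $\vec{p}$, the two force expressions agree. The potential below is an arbitrary choice, not from the paper.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
px, py, pz = sp.symbols('p_x p_y p_z')   # constant dipole moment components

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

phi = sp.exp(-x**2 - y**2 - z**2)        # an arbitrary smooth potential
E = -grad(phi)                           # curl-free by construction
p = sp.Matrix([px, py, pz])

# grad(p . E) vs. (p . nabla) E -- equal because d_i E_j = d_j E_i here
lhs = grad(p.dot(E))
rhs = sp.Matrix([p.dot(grad(E[i])) for i in range(3)])
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```

The equality hinges on $\partial_i E_j = \partial_j E_i$, which is exactly the curl-free condition exploited in Eq. 33.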
As a specific and important example, consider vector calculus with the position vector $\vec{r}$. First, note Eq. (35), which is $\partial_i x_j = \delta_{ij}$ in the ordinary notation. If the two terminals are connected by a Kronecker delta, a "vacuum bubble" is obtained (Eq. (36)):

$\nabla\cdot\vec{r} = \delta_{ij}\,\delta_{ij} = 3$.

If a cross product machine is used instead (Eq. (37)), the diagram equals minus itself after swap-then-yanking the cross product machine and the Kronecker delta part, so $\nabla\times\vec{r} = 0$, as you already know. Lastly, note that $\nabla r = \vec{n}$, where $\vec{n} := \vec{r}/r$ (with $r := |\vec{r}|$) is the unit radial vector.

With these basic graphical equations, one can graphically prove identities involving $r$ and $\vec{r}$, such as the following:

$(\vec{A}\cdot\nabla)\,\vec{r} = \vec{A}$,  (38)

$\nabla\nabla\vec{r} = 0$,  (39)

where in the second identity the fact that $\partial_k\partial_i x_j = \partial_k\delta_{ij} = 0$ is used. Also, expressions such as $\nabla\times(r\sin\theta\,\vec{e}_{\hat\phi})$ (where $\vec{e}_{\hat\phi} := \nabla\phi/|\nabla\phi|$ and $\phi$ is the azimuthal angle) can be calculated by recasting them into a coordinate-free expression, $\nabla\times(\vec{e}_z\times\vec{r})$, which the graphical calculation in Eq. (40) evaluates to $2\vec{e}_z$. The last step uses the contraction $\epsilon_{ijk}\,\epsilon_{ljk} = 2\delta_{il}$, which itself can be proved from Eq. (18). This machinery also offers an algebraic way to find the $\delta^{(3)}(\vec{r})$ term in the divergence or curl of a vector field. It is notable that such advantages are doubled by the graphical notation, which significantly lowers the difficulty of handling higher-rank index manipulations. For various physical examples, such as dipolar electromagnetic fields and flow configurations in fluid dynamics, refer to the supplementary material.

D. A First Look on Tensors
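Before turning to tensors, the basic graphical equations for $\vec{r}$ above condense into a few sympy checks:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
rvec = sp.Matrix([x, y, z])
r = sp.sqrt(x**2 + y**2 + z**2)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

assert div(rvec) == 3                    # the "vacuum bubble": delta_ij delta_ij = 3
assert curl(rvec) == sp.zeros(3, 1)      # nabla x r = 0
assert sp.simplify(grad(r) - rvec / r) == sp.zeros(3, 1)   # nabla r = n

ez = sp.Matrix([0, 0, 1])
assert curl(ez.cross(rvec)) == 2 * ez    # Eq. 40: nabla x (e_z x r) = 2 e_z
```

Each assertion is the plaintext shadow of one of the diagrams in Eqs. 35 through 40.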
Lastly, we want to comment on tensors, since they occasionally appear in undergraduate physics. Students are likely to develop the ideas of tensors by themselves while using graphical vector calculus; the extension from zero- and one-terminal objects to multi-terminal objects is straightforward, and the graphical notation naturally involves the manipulation of multiple terminals. Graphical representations are also useful for explaining the concept of tensors to students through the "machine view." For example, think about the inertia tensor, $I_{ij}$. It is simply a two-terminal device that "modulates" a one-terminal input (the angular velocity $\vec\omega$) into a one-terminal output (the angular momentum $\vec{L}$, with $L_i = I_{ij}\omega_j$). Imagine a "signal" generated from the $\omega$ box propagating through the machine. Swapping the two arms of the inertia tensor does not affect the value, because it is symmetric: $I_{ij} = I_{ji}$. However, this is not the case for a general multi-terminal object unless it is symmetric, as we have already discussed in Section II C. For the details of the graphical representation of such general objects, refer to the supplementary material. Here, we restrict our attention to symmetric rank-2 tensors.

There are at least three practical benefits of using the graphical notation for tensor equations. First, it is convenient for calculating the trace and related quantities of a tensor. Next, the graphical notation provides a transparent and unambiguous way to denote contraction structures. For example, consider the two expressions in Eq. (42), denoting $K = \omega_i I_{ij}\omega_j$ and $\epsilon_{ijk}\omega_j L_k = \epsilon_{ijk}\omega_j I_{kl}\omega_l$, respectively, or the more complex example in Eq. (43), which appears in the formula for the angular profile of electric quadrupole radiation power. Here, $Q_{ij}$ is the electric quadrupole moment, which is also a symmetric tensor, and the asterisk stands for complex conjugation.
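The contraction structures of Eq. 42 map directly onto einsum subscript strings; a small numpy sketch with a randomly generated symmetric "machine":

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
I = M + M.T                        # a symmetric two-terminal machine (inertia-like)
w = rng.normal(size=3)             # angular velocity omega

L = np.einsum('ij,j->i', I, w)     # L_i = I_ij w_j: one terminal dressed, one free
K = np.einsum('i,ij,j->', w, I, w) # K = w_i I_ij w_j: fully dressed, a scalar
assert np.isclose(K, w @ L)

# Swapping the two arms does not change a symmetric machine
assert np.allclose(np.einsum('ji,j->i', I, w), L)
```

Reading the subscript strings as wiring diagrams (`'ij,j->i'`: one external line left over; `'i,ij,j->'`: no external lines) is exactly the machine view described above.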
For a calculus example, consider the divergence of the stress tensor $\sigma$, $\nabla\cdot\sigma$. Which index of $\sigma$ is in charge of the inner product in the expression "$\nabla\cdot\sigma$"? The diagram in Eq. (44) answers at a glance. The contraction structures and their symmetries are clearly evident and can be denoted quickly, in an unambiguous and less bulky form, compared with the ordinary notations. Moreover, as one finds in the supplementary material, one can shortcut enormous tensor calculations with the guidance of the graphical notation. Lastly, the graphical notation is considerably useful for denoting and explaining the invariance properties of tensorial expressions. As elaborated in the supplementary material, one can intuitively examine how the terminals of a tensor expression transform under rotation by "arrow pushing": the pair creation/annihilation and propagation of arrowheads.

V. CONCLUSIONS
Graphical notations of tensor algebra have a historyspanning over a century. The basic idea can be tracedback to the late 19 th century works on invariant theorythat related invariants to graphs. In the mid-20 th century, diagrammatic methods such as Levinson andYutsis’ diagrams for 3 n - j symbols and Cvitanovi´c’sbirdtracks are devised to conduct group-theoreticcalculations and applied to quantum theory. Ac-cording to Levinson, one of the major motivations todevelop such apparatus was “the extreme inconveniencedue to the bulkiness” of the ordinary plaintext notation.On the other hand, Penrose devised a graphical no-tation for tensor algebra and utilized it in tensors andspinors in general relativity, theory of angular momen-tum and spin networks, and twistor theory. Simi-lar to Levinson, one of his motivation was also to sim-plify the complicated equations and to effectively graspthe various interrelations they have by visual reasoning; however, he was also intended to introduce the conceptof “abstract tensor system” by a coordinate-free notationthat transparently retains the full syntactic structures oftensor equations. The concept of the abstract ten-sor system and the Penrose graphical notation motivatedthe study of category theory and its graphical languagein algebraic geometry, and served as a background to “language engineering” works to physics, such asdiagrams in tensor network of states or quantum in-formation and computing. So, why is the three-dimensional Euclidean vector cal-culus so quiet with such “graphicalism?” Perhaps it hasbeen already being used as a private calculation tech-nique, but its intractability to be printed due to graphi-cal format might hindered its publication.
However, given the popularity of Feynman diagrams, which are also a graphical notation, it is worth casting light on the graphical tensor notation, as graphical vector calculus has its own pedagogical benefits. (Moreover, it conceptually precedes Feynman diagrams.) On the other hand, educators, already well acquainted with the index notation and less sensitive to beginners’ difficulties, might not have tried to employ graphical machinery to do vector calculus. However, there are introductory materials for graphical vector algebra and linear algebra, where differentiation does not come into play. Therefore, publishing an educator’s manual for the application of the graphical notation to vector calculus would be a useful thing to do. What is newly proposed in this work is the graphical derivations and tricks of vector differential calculus. No previous publications have dealt with the differentiation and integration of vector fields, while the graphical vector algebra introduced in this paper can also be found in other publications.
Also, the pedagogical value of the graphical notation is demonstrated, and sufficient exercises containing both mathematical and physical calculations are provided. Overall, this paper will serve as a self-contained educational material. The graphical notation has many advantages. First, it provides a quick mnemonic or derivation for identities (e.g., Eq. 18 or the vector calculus identities). It also enhances calculation speed, giving a bird’s-eye view of the calculation scenario; the strategy for reducing complicated expressions can be wisely decided. Although they are best performed in the graphical environment, the index-gymnastics techniques gained from graphical representations carry over entirely to the index notation environment. An index notation user will also benefit from associating a tensorial expression with a graphical image. Next, it has advantages in denoting and comprehending tensors. When it is unambiguous, an index-free notation is preferred; that is, “∇ × A⃗” is preferred over “e⃗_i ε_ijk ∂_j A_k,” probably because it is simpler and easier to read off the tensorial structure in groups of semantic units (such as parsing B⃗ · ∇ × A⃗ into “B⃗ dot ∇ × A⃗,” not “(B⃗ cross ∇) dot A⃗”). In particular, the graphical notation is preferable to other index-free notations because it can flexibly represent tensor equations that become bulky in the ordinary index-free notation, and it transparently displays the contraction structure. The symmetry of a tensorial expression can also be grasped at a single glance. Moreover, students will automatically discover the concept of a tensor as an invariant n-terminal object and develop essential ideas of tensors in a coordinate-free setting using the graphical notation. For example, students will find themselves interpreting the first term on the right-hand side of Eq. 26 as Eq.
22 contracted with B⃗ at its second terminal (“input slot”). As a result, the idea of the tensor “∇A⃗” can be understood without leaving a vague impression, as its graphical representation provides a concrete comprehension of its functionality (as a “machine”). Just as parse trees (graphs) can promote understanding syntactic structures and generating sentences of the same structure, the graphical representation can do the same in tensor calculus and its education. Furthermore, the unsupervised acquisition of tacit knowledge during graphical manipulation experiences, such as “the equations are also valid after undressing test vectors from them” (Section II D) or “a compound n-terminal object that has a permutation symmetry can be reduced into a simpler expression of the same symmetry up to a proportionality constant,” is also notable. Finally, it serves as an excellent primer to the graphical languages of advanced physics for undergraduates. After learning graphical vector algebra, one can easily learn the birdtracks notation, which is capable of group-theoretic calculations in quantum theory. Also, graphical vector calculus provides exercises in “diagrammatics,” translating equations into graphics and vice versa, which is an everyday task when one learns quantum field theory. Enthusiastic undergraduates who have always been curious about the working principles of Feynman diagrams will quench their thirst by learning graphical tensor algebra. In essence, birdtracks, graphs for tensorial expressions of various symmetry groups, are the group-theoretic portion of Feynman diagrams. It is easy to learn Feynman diagrams after learning birdtracks or graphical tensor algebra, and vice versa, because the way they denote mathematical structures is alike: loop diagrams for traces (“vacuum bubbles,” Eq. 36), etc.
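The kind of identity involved here, ∇(A⃗ · B⃗) = (A⃗ · ∇)B⃗ + (B⃗ · ∇)A⃗ + A⃗ × (∇ × B⃗) + B⃗ × (∇ × A⃗), whose “differentiate only A⃗” part is the component expression B_j ∂_i A_j, can also be checked symbolically. A minimal sketch with sympy (the componentwise setup is our own, not from the paper):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
R = (x, y, z)
A = [sp.Function(f"A{i}")(*R) for i in range(3)]
B = [sp.Function(f"B{i}")(*R) for i in range(3)]

def curl(V):
    # (curl V)_i = eps_ijk d_j V_k
    return [sum(sp.LeviCivita(i, j, k) * sp.diff(V[k], R[j])
                for j in range(3) for k in range(3)) for i in range(3)]

def cross(U, V):
    # (U x V)_i = eps_ijk U_j V_k
    return [sum(sp.LeviCivita(i, j, k) * U[j] * V[k]
                for j in range(3) for k in range(3)) for i in range(3)]

def directional(U, V):
    # ((U . grad) V)_i = U_j d_j V_i
    return [sum(U[j] * sp.diff(V[i], R[j]) for j in range(3)) for i in range(3)]

# Left-hand side: i-th component of grad(A . B)
lhs = [sp.diff(sum(a * b for a, b in zip(A, B)), R[i]) for i in range(3)]

# Split into B_j d_i A_j (differentiating A only) and its partner A_j d_i B_j
term_A = [sum(B[j] * sp.diff(A[j], R[i]) for j in range(3)) for i in range(3)]
term_B = [sum(A[j] * sp.diff(B[j], R[i]) for j in range(3)) for i in range(3)]

# The familiar four-term expansion
rhs = [directional(A, B)[i] + directional(B, A)[i]
       + cross(A, curl(B))[i] + cross(B, curl(A))[i] for i in range(3)]

ok_split = all(sp.expand(lhs[i] - term_A[i] - term_B[i]) == 0 for i in range(3))
ok_four = all(sp.expand(lhs[i] - rhs[i]) == 0 for i in range(3))
```

Both checks reduce to exact cancellation of derivative monomials, mirroring how the graphical derivation pairs each term with a wire diagram.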
Meanwhile, birdtracks may leave a more concrete impression because they have graphical “progression rules” that enable one to jump from one expression to another via equalities, unlike Feynman diagrams. Furthermore, when one considers a series expansion of a tensorial expression, one encounters an exact parallel with diagrammatic perturbation theory in statistical mechanics or quantum field theory. Pedagogical examples can be found in the supplementary material. The core characteristic that underlies all these advantages is the “physically implemented syntax” of the notation. It is believed that Feynman diagrams work because they are indeed a faithful representation of physical reality (to the best of our knowledge)—nature is implemented by worldlines of particles, which are isomorphic to Feynman diagrams. In the graphical notation of tensors, the grammar of tensors is “embodied” in the wires, 3-junctions, nodes, beads, and all that: the symbols behave as their physical appearance suggests (the self-explanatory design of symbols in Sections II B and II C). Consequently, the language is highly intuitive and automatically simplifies tensorial expressions (the economy of the graphical notation). The association of kinesthetic imagery further simplifies the perception and manipulation of the elements (the dancing rule and the “clank” in Section II C). As Feynman diagrams are the most natural language to describe the microscopic processes of elementary particles, the graphical notation is the canonical language of the vector calculus system. Last but not least, the graphical notation will change a vector calculus class into an enjoyable game. Like a child playing with educational toys such as Lego blocks or magnetic building sticks, students will find it an entertaining experience to “doodle” with the dancing diagrams.
Even a calculation of complicated tensorial invariants can be a challenging task that thrills a person; one would feel as if one were doing cat’s cradle or literally “gymnastics” involving visual, kinesthetic, or even multimodal neural substrates. Such an amusing character can attract students’ interest and offer motivation to study vector calculus. Students would voluntarily build various tensorial structures, heuristically find the identities, and gain intuition. One possible “creative classroom” scenario is to present students with only the basic grammar of the graphical notation and let them spontaneously and exploratively find the “sentences (identities),” perhaps in groups. The teacher can collect their results, have a group presentation, and then introduce any missing identities. This will turn a formula-memorizing class into an amusing, voluntary learning experience. So, how about boosting your education with the graphical notation?

VI. ACKNOWLEDGEMENT
We thank Elisha Peterson for providing access to his research materials and for clarifying, via e-mail, why he gave the name “trace diagrams” to his diagrams. The work of K.-Y. Kim was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B4004810) and a GIST Research Institute (GRI) grant funded by GIST in 2019.

∗ [email protected] † [email protected] ‡ [email protected]; https://phys.gist.ac.kr/gctp/

R. Penrose, Combinatorial mathematics and its applications, 221 (1971). P. Cvitanović,
Group theory: birdtracks, Lie’s, and exceptional groups (Princeton University Press, 2008) p. 273. G. E. Stedman,
Diagram techniques in group theory (Cambridge University Press, Cambridge, 2009). E. Peterson,
Trace diagrams, representations, and low-dimensional topology, Ph.D. thesis, University of Maryland (2006). J. Blinn, IEEE Computer Graphics and Applications, 86 (2002). E. Peterson, arXiv e-prints, arXiv:0910.1362 (2009), arXiv:0910.1362 [math.HO]. E. Peterson, arXiv e-prints, arXiv:0712.2058 (2007), arXiv:0712.2058 [math.HO]. E. Peterson,
On a Diagrammatic Proof of the Cayley-Hamilton Theorem, Tech. Rep. (United States Military Academy, 2009), arXiv:0907.2364v1. J. Richter-Gebert and P. Lebmeir, Discrete & Computational Geometry, 305 (2009). J.-H. Kim, M. S. H. Oh, and K.-Y. Kim, “An Invitation to Graphical Tensor Methods: Exercises in Graphical Vector and Tensor Calculus and More.” Cf. the “machine” view of tensors, such as in Misner, Thorne, and Wheeler’s book. When you translate a graphical equation with no index markers specified, the locations of the terminals are the reference for assigning indices. Imagine both sides of the equation are wrapped in a black box. Then, assign the same indices to identical sites on the black-box surface; i.e., “same index for same terminal” of the black box. This rule must also be respected when you write a graphical equation with no index markers specified. If the terminals of both sides of the equation do not match, such as “⊃ = |”, then it is invalid. We say, “types do not match.” In category theory, what we are calling “objects” here is named “morphisms”; “objects” rather refer to indices in category theorists’ terminology. However, we do not intend to imply such a technical term when we say “objects.” R. Penrose and W. Rindler,
Spinors and space-time, Cambridge Monographs on Mathematical Physics, Vol. 1 (Cambridge University Press, Cambridge, 1987). R. Penrose,
The road to reality: A complete guide to the physical universe, vintage ed. (Vintage Books, New York, 2007). However, if one considers covariant derivatives, their action depends on the operand type. The graphical notation for them can also be devised easily. “∇A⃗” is decomposed into three invariant combinations under the SO(3) action: divergence, curl, and “shear.” However, as Romano and Price point out, shear is a rather unpopular concept in usual undergraduate courses. The i-th component of the first term in Eq. (26) is B_j ∂_i(A_j), which is inaccessible in the ordinary coordinate-free notation. Some notations, such as Hestenes’ overdot notation and Feynman’s subscript notation, have been suggested to avoid such a componentwise description, the graphical notation being the clearest and most transparent one. In Hestenes’ overdot notation, e⃗_i B_j ∂_i(A_j) is denoted as ∇̇(Ȧ⃗ · B⃗); the overdot specifies which quantity is subject to differentiation, and the parentheses are for vector-algebraic parsing. On the other hand, in Feynman’s subscript notation, it is denoted as ∇_A⃗(A⃗ · B⃗). Both can be used as long as they do not cause confusion with preexisting notations (such as time derivatives and directional derivatives with respect to vector fields). For example, see theoretical problem 2 of the Asian Physics Olympiad in 2001, an undergraduate-level problem that is interesting and physically meaningful, having implications for special relativistic electrodynamics and “Gilbert-Ampère duality.” As mentioned before, the coordinates we are considering here are Cartesian. In curvilinear coordinates, one should prove ∇²r⃗ = 0 by ∇²r⃗ = ∇(∇ · r⃗) − ∇ × (∇ × r⃗) = 0. This is also true for the integral calculus of vector fields. Our graphical notation for the Kronecker delta and the cross product machine is identical to Peterson’s “trace diagrams.”
In fact, the name “trace diagrams” originates from the fact that the trace of a matrix is one of the simplest diagrams and is easily calculated in the notation. J. J. Sylvester, American Journal of Mathematics, 64 (1878). W. K. Clifford, American Journal of Mathematics, 126 (1878). A. B. Kempe, Proceedings of the London Mathematical Society s1-17, 107 (1885). A. Cayley, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 172 (1857). I. B. Levinson, Proceed. Physical-Technical Inst. Acad. Sci. Lithuanian SSR, 4 (1956). A. P. Yutsis, V. Vanagas, and I. B. Levinson,
Mathematical apparatus of the theory of angular momentum (Israel Program for Scientific Translations, 1960). P. Cvitanović, Physical Review D, 1536 (1976). P. Cvitanović and A. D. Kennedy, Physica Scripta, 5 (1982). G. Canning, Physical Review D: Particles and Fields, 395 (1978). P. Cvitanović, P. Lauwers, and P. Scharbach, Nuclear Physics B, 165 (1981). J. Paldus, B. G. Adams, and J. Čížek, International Journal of Quantum Chemistry, 813 (1977). R. Penrose,
Tensor methods in algebraic geometry, Ph.D. thesis, St John’s College, Cambridge (1957). R. Penrose and M. MacCallum, Physics Reports, 241 (1973). R. Penrose,
Roger Penrose: Collected works, slp ed., Vol. 1 (Oxford University Press, 2010) p. 25. A. Joyal and R. Street, Advances in Mathematics, 55 (1991). P. J. Freyd and D. N. Yetter, Advances in Mathematics, 156 (1989). P. Selinger, in
New structures for physics (Springer, Berlin, Heidelberg, 2010) pp. 289–355. B. Coecke and R. Duncan, New Journal of Physics, 043016 (2011). B. Coecke, in
AIP Conference Proceedings (AIP, 2006) pp. 81–98. B. Coecke,
New Structures for Physics (Lecture Notes in Physics) (Springer, 2010). B. Coecke and É. Paquette, in
New Structures for Physics, edited by B. Coecke (Springer, 2010) pp. 173–286. P. Selinger, Mathematical Structures in Computer Science, 527 (2004). S. Abramsky and B. Coecke, in
Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004 (IEEE, 2004) pp. 415–425. V. Bergholm and J. D. Biamonte, Journal of Physics A: Mathematical and Theoretical, 245304 (2011). J. Biamonte, V. Bergholm, and M. Lanzagorta, Journal of Physics A: Mathematical and Theoretical, 475301 (2013). S. J. Denny, J. D. Biamonte, D. Jaksch, and S. R. Clark, Journal of Physics A: Mathematical and Theoretical, 15309 (2012). S. Singh and G. Vidal, Physical Review B, 195114 (2012). B. Coecke and A. Kissinger,
Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning, 1st ed. (Cambridge University Press, 2017) p. 827. R. Penrose, Quantum theory and beyond, 151 (1971). J. Blinn,
Jim Blinn’s Corner: Notation, Notation, Notation (Morgan Kaufmann Publishers, 2003) p. 327. S. Keppeler, arXiv e-prints, arXiv:1707.07280 (2017), arXiv:1707.07280 [math-ph]. Tip: during quick calculations, you can omit the boxes surrounding characters. In fact, (the graphical representation of) tensor calculus can be regarded as a formal language and shares many aspects with languages. The “mathematics as a language” metaphor (such as in the title of the article “Diagrammar”) is valid in this sense. See also an interesting work that introduces “Feynman rules” for weighted context-free grammars: E. DeGiuli, Journal of Physics A: Mathematical and Theoretical (IOP Publishing, 2019). This is related in essence to Schur’s lemma or the Wigner-Eckart theorem. Cf. harmonic progression in music. C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (1973). J. D. Romano and R. H. Price, American Journal of Physics, 519 (2012). D. Hestenes and G. Sobczyk,
Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics (Springer Science & Business Media, 2012) p. 46. R. Feynman, R. Leighton, and M. Sands,
The Feynman Lectures on Physics, Vol. II: The New Millennium Edition (Basic Books, 2011) p. 27-4. Y. Zheng,
Asian Physics Olympiad (1st-8th): Problems and Solutions (World Scientific Publishing Company, 2009) p. 308. G. E. Vekstein, European Journal of Physics, 113 (1997). V. Namias, American Journal of Physics, 171 (1989). L. Vaidman, American Journal of Physics, 978 (1990). Y. Aharonov, P. Pearle, and L. Vaidman, Physical Review A, 4052 (1988). G. ’t Hooft and M. J. G. Veltman,