Picard-Fuchs equations of special one-parameter families of invertible polynomials
Dissertation submitted to the Faculty of Mathematics and Physics of the Gottfried Wilhelm Leibniz Universität Hannover for the degree of Doctor of Natural Sciences (Dr. rer. nat.) by Dipl.-Math. Swantje Gährs, born 08 May 1983 in Hamburg.

Abstract
The thesis deals with calculating the Picard-Fuchs equation of special one-parameter families of invertible polynomials. In particular, for an invertible polynomial $g(x_1,\dots,x_n)$ we consider the family $f(x_1,\dots,x_n) = g(x_1,\dots,x_n) + s\cdot\prod_{i=1}^n x_i$, where $s$ denotes the parameter. For the families of hypersurfaces defined by these polynomials, we compute the Picard-Fuchs equation, i.e. the ordinary differential equation whose solutions are exactly the period integrals. For the proof of the exact appearance of the Picard-Fuchs equation we use a combinatorial version of the Griffiths-Dwork method and the theory of GKZ systems. As consequences of our work and facts from the literature, we show the relation between the Picard-Fuchs equation, the Poincaré series and the monodromy in the space of period integrals.

Keywords:
Picard-Fuchs equation, invertible polynomials, Griffiths-Dwork method, GKZ systems, Poincaré series, monodromy

Contents
Introduction
1 Background on invertible polynomials and the Griffiths-Dwork method
A Examples: Simple K3 singularities
B The Griffiths-Dwork method in Singular
Bibliography

Introduction
In this thesis we investigate the Picard-Fuchs equation of special one-parameter families of Calabi-Yau varieties. Calabi-Yau varieties have been studied in much detail, especially in Mirror Symmetry. Much of the early interest in this field focused on toric varieties. This is mostly due to Batyrev [Bat94], who showed that for hypersurfaces in toric varieties duality in the sense of Mirror Symmetry can be reduced to polar duality between polytopes of toric varieties. This was the starting point for many achievements in Mirror Symmetry of Calabi-Yau varieties. The work of Batyrev, however, does not cover the families in weighted projective space that we consider in this thesis. In particular, Batyrev requires an ambient space that is Gorenstein. This implies that every weight divides the sum of all weights. In the case of hypersurfaces in weighted projective spaces this restricts the class covered by Batyrev's approach to polynomials of Brieskorn-Pham type.

The hypersurfaces we investigate are defined by invertible polynomials. These are weighted homogeneous polynomials $g(x_1,\dots,x_n) \in \mathbb{C}[x_1,\dots,x_n]$ which are a sum of exactly $n$ monomials, such that the weights $q_1,\dots,q_n$ of the variables $x_1,\dots,x_n$ are unique up to a constant and the affine hypersurface defined by the polynomial has an isolated singularity at $0$. The class of invertible polynomials includes all polynomials of Brieskorn-Pham type, but is much bigger. These polynomials were already studied by Berglund and Hübsch [BH92], who showed that a mirror manifold is related to a dual polynomial. For an invertible polynomial $g(x_1,\dots,x_n) = \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ij}}$ the dual polynomial $g^t(x_1,\dots,x_n)$ is defined by transposing the exponent matrix $E = (E_{ij})_{i,j}$ of the original polynomial, so $g^t(x_1,\dots,x_n) = \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ji}}$. If the polynomial is of Brieskorn-Pham type then the polynomial is in the above sense self-dual.
This work was made precise by Krawitz et al. (cf. [KPA+]).
In this thesis we analyse the Picard-Fuchs equations of these one-parameter families of hypersurfaces. The Picard-Fuchs equation is a differential equation that is satisfied by the periods of the family, i.e. the integrals of a form over a basis of cycles. These differential equations have been studied by many people, and this can lead to several aspects of Mirror Symmetry. For example, Morrison [Mor92] used the Picard-Fuchs equations of hypersurfaces to calculate the mirror map and Yukawa couplings for mirror manifolds. In [CYY08] Chen, Yang and Yui study the monodromy for Picard-Fuchs equations of Calabi-Yau threefolds in terms of monodromy groups. These give two potential applications of the results of this thesis to further research.

We consider a special one-parameter family over an invertible polynomial and calculate the Picard-Fuchs equation for this family. In detail, we start with an invertible polynomial $g(x_1,\dots,x_n)$, and in addition we require that the weights $q_1,\dots,q_n$ of $g$ add up to the degree $d$ of $g$. This is called the Calabi-Yau condition, because in [Dol82] Dolgachev showed that under this condition the canonical bundle of the hypersurface $\{g(x_1,\dots,x_n) = 0\} \subset \mathbb{P}(q_1,\dots,q_n)$ is trivial. The special one-parameter family we are dealing with is given by
$$f(x_1,\dots,x_n) := g(x_1,\dots,x_n) + s\prod_{i=1}^n x_i,$$
where $s$ is a parameter. We calculate the Picard-Fuchs equation for this one-parameter family by using the Griffiths-Dwork method, which provides an algorithm to calculate the Picard-Fuchs equation (cf. [CK99]). Unfortunately this method can be quite computationally expensive. Therefore we develop a combinatorial approach to the Griffiths-Dwork method. This approach, among other things, allows us to prove the order of the Picard-Fuchs equation. With this statement and the computation of the GKZ system satisfied by the same periods, we can prove a general formula for the Picard-Fuchs equation.
For the one-parameter family $f$ defined above the Picard-Fuchs equation is given by
$$\prod_{i=1}^n \hat{q}_i^{\hat{q}_i}\; s^{\hat{d}} \prod_{i=1}^n \prod_{j=0}^{\hat{q}_i-1} \Bigl(\delta + \frac{j\,\hat{d}}{\hat{q}_i}\Bigr) \prod_{\ell \in I} (\delta+\ell)^{-1} \;-\; (-\hat{d})^{\hat{d}} \prod_{j=0}^{\hat{d}-1} (\delta-j) \prod_{\ell \in I} (\delta-\ell)^{-1},$$
where $\hat{q}_1,\dots,\hat{q}_n$ are the weights of the dual polynomial $g^t$, $\hat{d}$ is the degree of $g^t$, and
$$I = \{0,\dots,\hat{d}-1\} \cap \bigcup_{i=1}^n \Bigl\{0, \tfrac{\hat{d}}{\hat{q}_i}, \tfrac{2\hat{d}}{\hat{q}_i}, \dots, \tfrac{(\hat{q}_i-1)\hat{d}}{\hat{q}_i}\Bigr\}.$$
One interesting observation is that the Picard-Fuchs equation consists only of the data given by the dual polynomial, namely the dual weights and the dual degree. As pointed out to us by Stienstra, this Picard-Fuchs equation was already obtained in a work by Corti and Golyshev [CG06] in the context of local systems and Landau-Ginzburg pencils. Our combinatorial approach however is very constructive and yields not only the Picard-Fuchs equation itself but also computes a basis of the important part of the cohomology. These computations will again relate to the duality between the polynomials. In addition we are able to show, for certain values of the parameter, a 1-1 correspondence between the roots of the Picard-Fuchs equation of $f$, the Poincaré series of the dual polynomial $g^t$ and the monodromy in the solution space of the Picard-Fuchs equation.

Structure of the thesis
This thesis starts with some preliminaries in Chapter 1. We recall definitions and relevant statements needed for discussions and fix notation here. The first section of this chapter is devoted to invertible polynomials and the duality among them. After that, in the second section we concentrate on the Griffiths-Dwork method to calculate the Picard-Fuchs equations. In Subsection 1.2.1 we present the method in general for hypersurfaces in weighted projective spaces and in Subsection 1.2.2 we introduce a new combinatorial notation which will be used to reconstruct the Griffiths-Dwork method in detail for one-parameter families $f$ of the form mentioned above.

The main goal of Chapter 2 is to prove the order of the Picard-Fuchs equation for a one-parameter family. To achieve this goal we will investigate the structure of the underlying combinatorics. This will shorten the calculations for the Griffiths-Dwork method, and also allows us to construct explicitly the forms involved in the calculations of the Picard-Fuchs equation. One important ingredient for the whole procedure to work nicely will be that the Calabi-Yau condition holds, so the weights of the polynomial add up to the degree. In the last section (Section 2.3) of this chapter we calculate the complete Picard-Fuchs equation with the Griffiths-Dwork method in one example.

Chapter 3 combines several results achieved so far in the thesis with results that can be found in the literature. The main theorem (see Section 3.2) presents the Picard-Fuchs equation for a one-parameter family associated to an invertible polynomial in general. We already calculated the order of the Picard-Fuchs equation, and together with the GKZ system, computed in Section 3.1, the theorem is proved. We take advantage of the constructive proof of the order of the Picard-Fuchs equation in Section 3.3, where we investigate the cohomology of the hypersurface defined by the one-parameter family.
In Section 3.4 we calculate the Picard-Fuchs equations explicitly for the famous class of examples of the 14 exceptional unimodal hypersurface singularities. In addition to this being an important class, it was the origin of our work, and most of the interesting phenomena can already be seen there. In the last section (Section 3.5) we investigate the 1-1 correspondence between the Picard-Fuchs equation of a one-parameter family, the Poincaré series of the transposed polynomial and the monodromy in the solution space of the Picard-Fuchs equation.

Finally, the Appendix is divided into two parts. The first part (Appendix A) shows another class of examples for which we calculated the Picard-Fuchs equation. This special class of examples is extracted from the list of 95 hypersurfaces stated in [Yon90]. In Appendix B one can find the code of the algorithm that is provided by the Griffiths-Dwork method. The algorithm is written for Singular, but it can easily be adapted for any computer algebra system.
Acknowledgements:
First of all I would like to thank Prof. Wolfgang Ebeling for supervising me and for always giving me the opportunity to take advantage of his knowledge, his good intuition and his enthusiasm. I would also like to thank Prof. Noriko Yui and Prof. Ragnar-Olaf Buchweitz, who were great hosts during my stay at the Fields Institute in Toronto and gave me the possibility to speak in front of a wide audience, and in particular I want to thank Prof. Yui for discussing my work in a very encouraging way. I want to gratefully mention Prof. Jan Stienstra, who showed me the relation between my work and the work of Golyshev and Corti and gave me a final input to the proof of the main theorem, and Prof. Victor Golyshev, who took the time to answer my questions on his paper. I am obliged to Dr. Nathan Broomhead, who helped me solve several problems in his patient manner, and in addition all other members of the Institute of Algebraic Geometry at the Leibniz Universität Hannover.

Chapter 1
Background on invertible polynomialsand the Griffiths-Dwork method
We start this chapter by defining invertible polynomials and proving some properties we need later.

1.1 Invertible polynomials
Definition 1.1.
Let
$$g(x) = \sum_{i=1}^m c_i \prod_{j=1}^n x_j^{E_{ij}} \in \mathbb{C}[x]$$
be a quasihomogeneous polynomial with weights $q_1,\dots,q_n \in \mathbb{Z}_{>0}$, where $x = (x_1,\dots,x_n)$ and $E_{ij} \in \mathbb{N}_0$. Then $g(x)$ is an invertible polynomial if the following conditions hold:

(i) the number of variables equals the number of summands, i.e. $n = m$,
(ii) the weights $q_1,\dots,q_n$ are unique up to a common scalar, and
(iii) the Milnor ring $\mathbb{C}[x]/J(g)$ has a finite basis, where $J(g) = \langle \frac{\partial g}{\partial x_1},\dots,\frac{\partial g}{\partial x_n} \rangle$ is the Jacobian ideal. This is equivalent to $0$ being an isolated singularity of $\{g(x) = 0\} \subseteq \mathbb{C}^n$.

Remark 1.2. We want to state some conventions we are using throughout the thesis:
• We require the weights to be reduced, i.e. $\gcd(q_1,\dots,q_n) = 1$. This way the weights are unique.
• Some authors call the polynomial $g(x)$ invertible if the first two conditions are satisfied, and a non-degenerate invertible polynomial if $g(x)$ satisfies all three conditions.
• From now on we assume that the coefficients $c_i$ are all equal to $1$. This can always be achieved by an easy coordinate transformation.
• The weights are also defined by the smallest numbers $q_1,\dots,q_n \in \mathbb{N}$ and $d \in \mathbb{N}$ satisfying the equation
$$\begin{pmatrix} E_{11} & \cdots & E_{1n}\\ \vdots & & \vdots\\ E_{n1} & \cdots & E_{nn} \end{pmatrix} \begin{pmatrix} q_1\\ \vdots\\ q_n \end{pmatrix} = \begin{pmatrix} d\\ \vdots\\ d \end{pmatrix}$$
or concisely $E \cdot q = d$. We call $E$ the exponent matrix.

M. Kreuzer and H. Skarke showed that the invertible polynomials are compositions of only two types.
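The relation $E \cdot q = d$ determines the reduced weights completely. As a small illustration of this (a sketch of ours, not part of the thesis; the helper name `reduced_weights` is an assumption), one can solve $E \cdot w = (1,\dots,1)^t$ over $\mathbb{Q}$ and then clear denominators:

```python
from fractions import Fraction
from math import gcd, lcm

def reduced_weights(E):
    """Smallest positive integers q_1,...,q_n and d with E*q = d*(1,...,1),
    where the rows of E are the exponent vectors of the monomials."""
    n = len(E)
    # Gaussian elimination over Q on the augmented system E*w = (1,...,1)
    A = [[Fraction(v) for v in row] + [Fraction(1)] for row in E]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [a - A[r][c] * b for a, b in zip(A[r], A[c])]
    w = [A[i][n] for i in range(n)]
    d = lcm(*[v.denominator for v in w])   # clear denominators
    q = [int(v * d) for v in w]
    g = gcd(d, *q)                         # reduce so that gcd(q_1,...,q_n) = 1
    return [v // g for v in q], d // g

# E13 from Table 1.1 below: g = x^5*y + y^3 + z^2, rows = exponent vectors
print(reduced_weights([[5, 1, 0], [0, 3, 0], [0, 0, 2]]))  # ([4, 10, 15], 30)
```

For a Brieskorn-Pham polynomial the matrix is diagonal and the function reproduces $q_i = \operatorname{lcm}(k_1,\dots,k_n)/k_i$.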
Theorem 1.3. (Kreuzer and Skarke [KS92])
Every invertible polynomial is a sum of polynomials with distinct variables of the following two types:

loop: $x_1^{k_1}x_2 + x_2^{k_2}x_3 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}x_1$ for $m \ge 2$,
chain: $x_1^{k_1}x_2 + x_2^{k_2}x_3 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}$ for $m \ge 1$.

Example 1.4.
We want to list two very famous classes of examples here.

(i) A polynomial is of Brieskorn-Pham type if it is of the form $g(x) = \sum_{i=1}^n x_i^{k_i}$ with $k_i \in \mathbb{N}$. In this case the polynomial is always invertible and the exponent matrix is a diagonal matrix with the exponents $k_i$ on the diagonal. It follows that $q_i = \frac{\operatorname{lcm}(k_1,\dots,k_n)}{k_i}$ and $d = \operatorname{lcm}(k_1,\dots,k_n)$.

(ii) For the 14 exceptional unimodal singularities, invertible polynomials can be chosen. Table 1.1 lists their name, invertible polynomial, reduced weights and degree in the first four columns. In the last columns the dual singularity due to Arnol'd [Arn75] is listed. In the next definition we will see how this duality fits into the context of invertible polynomials, which also explains the rest of the table. The example of Arnold's strange duality will be studied in detail in Section 3.4.

In their paper [BH92] P. Berglund and T. Hübsch proposed a way to define dual pairs of invertible polynomials by transposing the exponent matrix.

Definition 1.5. If $g(x) = \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ij}}$ is an invertible polynomial then the Berglund-Hübsch transpose is given by
$$g^t(x) = \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ji}}.$$

Example 1.6.
As noticed before, the dual singularities in the examples of Arnold's strange duality are given by transposed polynomials:

Name | g(x,y,z)              | Weights   | Deg | Dual weights | g^t(x,y,z)            | Dual
E12  | x^7 + y^3 + z^2       | (6,14,21) | 42  | (6,14,21)    | x^7 + y^3 + z^2       | E12
E13  | x^5 y + y^3 + z^2     | (4,10,15) | 30  | (6,8,15)     | x^5 + x y^3 + z^2     | Z11
Z11  | x^5 + x y^3 + z^2     | (6,8,15)  | 30  | (4,10,15)    | x^5 y + y^3 + z^2     | E13
E14  | x^4 z + y^3 + z^2     | (3,8,12)  | 24  | (6,8,9)      | x^4 + y^3 + x z^2     | Q10
Q10  | x^4 + y^3 + x z^2     | (6,8,9)   | 24  | (3,8,12)     | x^4 z + y^3 + z^2     | E14
Z12  | x^4 y + x y^3 + z^2   | (4,6,11)  | 22  | (4,6,11)     | x^4 y + x y^3 + z^2   | Z12
W12  | x^5 + y^2 z + z^2     | (4,5,10)  | 20  | (4,10,5)     | x^5 + y^2 + y z^2     | W12
Z13  | x^3 z + x y^3 + z^2   | (3,5,9)   | 18  | (4,6,7)      | x^3 y + y^3 + x z^2   | Q11
Q11  | x^3 y + y^3 + x z^2   | (4,6,7)   | 18  | (3,5,9)      | x^3 z + x y^3 + z^2   | Z13
W13  | x^4 y + y^2 z + z^2   | (3,4,8)   | 16  | (4,6,5)      | x^4 + x y^2 + y z^2   | S11
S11  | x^4 + y^2 z + x z^2   | (4,5,6)   | 16  | (3,8,4)      | x^4 z + y^2 + y z^2   | W13
Q12  | x^3 z + y^3 + x z^2   | (3,5,6)   | 15  | (3,5,6)      | x^3 z + y^3 + x z^2   | Q12
S12  | x^3 y + y^2 z + x z^2 | (3,4,5)   | 13  | (3,5,4)      | x^3 z + x y^2 + y z^2 | S12
U12  | x^4 + y^2 z + y z^2   | (3,4,4)   | 12  | (3,4,4)      | x^4 + y^2 z + y z^2   | U12

Table 1.1: Arnold's strange duality
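Definition 1.5 is easy to experiment with. The following sketch (ours, not from the thesis; it assumes SymPy and the row convention for $E$ used above) rebuilds a polynomial from its exponent matrix and checks one row of Table 1.1, namely that the transpose of E13 is the polynomial listed for Z11:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
gens = (x, y, z)

def from_matrix(E):
    """Invertible polynomial whose exponent matrix is E (rows = monomials)."""
    return sum(sp.prod(v**E[r, i] for i, v in enumerate(gens))
               for r in range(E.rows))

# E13 from Table 1.1: g = x^5*y + y^3 + z^2
E = sp.Matrix([[5, 1, 0],
               [0, 3, 0],
               [0, 0, 2]])

g  = from_matrix(E)     # x^5*y + y^3 + z^2
gt = from_matrix(E.T)   # the Berglund-Huebsch transpose g^t
print(gt)               # x^5 + x*y^3 + z^2, the polynomial of Z11
```

Running the same check on the diagonal (Brieskorn-Pham) rows of the table shows directly why those polynomials are self-dual.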
Remark 1.7. Notice that taking the transpose does not change the type of the polynomial. The exponent matrix is a direct sum of matrices, where every summand belongs to a polynomial of chain or loop type. Therefore we can transpose every chain and loop separately:

• $g(x) = x_1^{k_1}x_2 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}x_1 \;\Rightarrow\; g^t(x) = x_m x_1^{k_1} + x_1 x_2^{k_2} + \cdots + x_{m-1} x_m^{k_m}$ and
• $g(x) = x_1^{k_1}x_2 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m} \;\Rightarrow\; g^t(x) = x_1^{k_1} + x_1 x_2^{k_2} + \cdots + x_{m-1} x_m^{k_m}$.

Definition 1.8.
Let $g(x)$ be an invertible polynomial. We set $f(x)$ to be the one-parameter family associated to $g(x)$ via
$$f(x) = g(x) + s \prod_{i=1}^n x_i,$$
where $s$ denotes the parameter.

This one-parameter family $f(x)$ will be one of the main objects of interest in this thesis. Because we still want this family to be quasihomogeneous, we require that the weights of $g(x)$ add up to the degree of $g(x)$. In [Dol82] I. Dolgachev showed that this is the condition for a quasihomogeneous polynomial to define a Calabi-Yau hypersurface.

Proposition 1.9. ([Dol82])
Let $g(x)$ be a quasihomogeneous polynomial with weights $q_1,\dots,q_n$. Then $g(x)$ defines a hypersurface in $\mathbb{P}(q_1,\dots,q_n)$ that is Calabi-Yau if
$$\sum_{i=1}^n q_i = d = \deg g(x).$$

Lemma 1.10.
If the Calabi-Yau condition holds for the weights of an invertible polynomial, then it also holds for the weights of the transposed polynomial.

Notation 1.11. For an invertible polynomial $g(x)$ we denote the reduced weights by $q_1,\dots,q_n$ and $\deg g = d$. For the dual polynomial $g^t$ the weights are $\hat{q}_1,\dots,\hat{q}_n$ and $\deg g^t = \hat{d}$. The diagonal entries of the exponent matrix $E$ are $k_1,\dots,k_n$. Notice that these are the same for $g$ and $g^t$.

Proof.
The Calabi-Yau condition is equivalent to the condition
$$\det \begin{pmatrix} 1 & 1 & \cdots & 1\\ 1 & & &\\ \vdots & & E &\\ 1 & & & \end{pmatrix} = 0.$$
This is due to the fact that the weights are unique up to scaling, so the kernel of the above matrix has to be spanned by the vector $(-d, q_1,\dots,q_n)^t$: the $n$ lower rows give $-d + \sum_j E_{ij} q_j = 0$ by the weight equation, and the first row gives $-d + \sum_i q_i$, which vanishes exactly when the Calabi-Yau condition holds. Now it is obvious that if the above condition holds for $E$ it also holds for $E^t$, since transposing the matrix does not change the determinant.

We want to investigate another relation of the dual weights here, which has to do with the partial derivatives of the one-parameter family $f(x)$. This connection between the Jacobian ideal of $f(x)$ and the exponent matrix of $g(x)$ occurs again in the next chapter.

Remark 1.12. First assume $g(x) = x_1^{k_1}x_2 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}x_1$ is a loop of length $m$; then we have $f(x) = x_1^{k_1}x_2 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}x_1 + s x_1 \cdots x_m$. The dual weights of $g(x)$ can be calculated via the equation
$$\begin{pmatrix} k_1 & 0 & \cdots & 0 & 1\\ 1 & k_2 & 0 & \cdots & 0\\ 0 & 1 & k_3 & & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & \cdots & 0 & 1 & k_m \end{pmatrix} \begin{pmatrix} \hat{q}_1\\ \vdots\\ \hat{q}_m \end{pmatrix} = \begin{pmatrix} \hat{d}\\ \vdots\\ \hat{d} \end{pmatrix}.$$
Since the dual weights of $g(x)$ also satisfy the Calabi-Yau equation due to Lemma 1.10, we have $\hat{d} = \sum_i \hat{q}_i$, and subtracting the all-ones matrix from the matrix above we get the following equation:
$$\begin{pmatrix} k_1-1 & -1 & \cdots & -1 & 0\\ 0 & k_2-1 & -1 & \cdots & -1\\ -1 & 0 & k_3-1 & & \vdots\\ \vdots & & \ddots & \ddots & -1\\ -1 & \cdots & -1 & 0 & k_m-1 \end{pmatrix} \begin{pmatrix} \hat{q}_1\\ \vdots\\ \hat{q}_m \end{pmatrix} = \begin{pmatrix} 0\\ \vdots\\ 0 \end{pmatrix}.$$
This is interesting, because the $i$-th column of this matrix is connected to $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ in the sense that the $i$-th column is given by subtracting the exponent vector of the summand $s\prod_{j\neq i} x_j$ from the exponent vector of the summand $k_i x_i^{k_i-1}x_{i+1}$. Of course the index is taken modulo $m$.

The same happens if $g(x) = x_1^{k_1}x_2 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}$ is a polynomial of chain type. The equation from above is now given by
$$\begin{pmatrix} k_1-1 & -1 & \cdots & \cdots & -1\\ 0 & k_2-1 & -1 & \cdots & -1\\ -1 & 0 & k_3-1 & & \vdots\\ \vdots & & \ddots & \ddots & -1\\ -1 & \cdots & -1 & 0 & k_m-1 \end{pmatrix} \begin{pmatrix} \hat{q}_1\\ \vdots\\ \hat{q}_m \end{pmatrix} = \begin{pmatrix} 0\\ \vdots\\ 0 \end{pmatrix}$$
and the partial derivatives of $f(x)$ are $\frac{\partial f}{\partial x_1} = k_1 x_1^{k_1-1}x_2 + s\prod_{j\neq 1} x_j$, $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ for $i = 2,\dots,m-1$ and $\frac{\partial f}{\partial x_m} = k_m x_m^{k_m-1} + x_{m-1}^{k_{m-1}} + s\prod_{j\neq m} x_j$. If we again subtract the exponent vector of the summand $s\prod_{j\neq i} x_j$ from the exponent vector of the summand $k_1 x_1^{k_1-1}x_2$, $k_i x_i^{k_i-1}x_{i+1}$ or $k_m x_m^{k_m-1}$ in the corresponding partial derivative, then the result is exactly given by the columns of the matrix above.
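The relations of Remark 1.12 and Lemma 1.10 can be checked numerically. The sketch below (ours, not from the thesis; the small solver repeats the linear algebra of Remark 1.2) computes weights and dual weights for the chain $g = x_1^3 x_2 + x_2^2 x_3 + x_3^2$ and verifies the Calabi-Yau condition on both sides together with the rewritten dual weight equation $(E^t - \mathbb{1})\,\hat{q} = 0$:

```python
from fractions import Fraction
from math import gcd, lcm

def weights(E):
    """Smallest positive integers with E*q = d*(1,...,1) (rows = monomials)."""
    n = len(E)
    A = [[Fraction(v) for v in row] + [Fraction(1)] for row in E]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [a - A[r][c] * b for a, b in zip(A[r], A[c])]
    d = lcm(*[A[i][n].denominator for i in range(n)])
    q = [int(A[i][n] * d) for i in range(n)]
    g = gcd(d, *q)
    return [v // g for v in q], d // g

# chain g = x1^3*x2 + x2^2*x3 + x3^2 (k1, k2, k3) = (3, 2, 2)
E = [[3, 1, 0], [0, 2, 1], [0, 0, 2]]
Et = [list(r) for r in zip(*E)]
q,  d  = weights(E)    # weights of g
qh, dh = weights(Et)   # dual weights, i.e. weights of g^t

assert sum(q) == d and sum(qh) == dh       # Calabi-Yau on both sides (Lemma 1.10)
for row in Et:                             # (E^t - all-ones matrix) * qhat = 0
    assert sum((e - 1) * w for e, w in zip(row, qh)) == 0
print(q, d, qh, dh)  # [1, 1, 2] 4 [1, 1, 1] 3
```

The same check works for loops, e.g. $x_1^2x_2 + x_2^2x_3 + x_3^2x_1$, as long as the Calabi-Yau condition holds.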
1.2 Introduction to the Griffiths-Dwork method

This section is divided into two parts. In the first part we explain the Griffiths-Dwork method, which is a well-known method to calculate the Picard-Fuchs equation of a one-parameter family of hypersurfaces. In the second part we will introduce our own combinatorial notation to describe the Griffiths-Dwork method. Using this we are able to describe the Griffiths-Dwork method in Chapter 2 in a sufficiently abstract way, and so we are able to present facts about the Picard-Fuchs equation in general. Before we start describing the Griffiths-Dwork method, we want to recall the definition of a Picard-Fuchs equation.
Definition 1.13.
The Picard-Fuchs equation of a one-parameter family $f(x)$ of hypersurfaces is defined as the ordinary differential equation, with differential operator $\delta = s\frac{\partial}{\partial s}$, where $s$ is the parameter, which has as solutions exactly the period integrals. So the solutions are given by $\int_{\gamma_i} \omega$ for a basis $\{\gamma_i\}$ of $H_{n-2}(V(f))$ and $\omega \in H^{n-2}(V(f))$.

In this section we want to repeat how the Griffiths-Dwork method works. The method is due to Griffiths [Gri69] and Dwork [Dwo62], and the generalization to weighted polynomials was done by Dolgachev [Dol82]. A good reference for this method in general is Chapter 5.3 of the book by Cox and Katz [CK99]. We will not do everything in general but restrict ourselves to the cases which are important to us. In particular we will only consider the one-parameter family $f(x) = g(x) + s\prod_{i=1}^n x_i \in \mathbb{C}(s)[x]$ defined in 1.8 and explain the Griffiths-Dwork method for these families. We will recall and fix the notation we already used in the last section. We use this notation throughout the rest of the thesis.

Notation 1.14. The polynomial $g(x) = \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ij}}$ is an invertible polynomial. The diagonal entries of the exponent matrix $E$ are denoted by $k_1,\dots,k_n$. The polynomial $g(x)$ is quasihomogeneous with weights $q_1,\dots,q_n$ and degree $d$. We define $f(x) = g(x) + s\prod_{i=1}^n x_i$ to be a one-parameter family with parameter $s$.

We want to calculate the Picard-Fuchs equation for $f(x)$. The first step is to describe the cohomology $H^{n-2}(V(f))$ in more detail. For this we use the residue map
$$\operatorname{Res}\colon H^{n-1}(\mathbb{P}(q_1,\dots,q_n) \setminus V(f)) \twoheadrightarrow PH^{n-2}(V(f)) \subseteq H^{n-2}(V(f)),$$
where $PH^{n-2}(V(f)) = \{\eta \in H^{n-2}(V(f)) \mid \eta \cdot H = 0 \text{ for the hyperplane class } H\}$ is the primitive cohomology. Note that $PH^{n-2}(V(f)) = H^{n-2}(V(f))$ if $n$ is odd. The advantage of this map is that the cohomology $H^{n-1}(\mathbb{P}(q_1,\dots,q_n) \setminus V(f))$ was explicitly described by Griffiths in [Gri69], and we can use the residue map to carry this description over to the cohomology of the hypersurface. In detail, the classes in $H^{n-1}(\mathbb{P}(q_1,\dots,q_n) \setminus V(f))$ can be represented by forms of the form $\frac{Q\,\Omega}{f^l}$, where
$$\Omega = \sum_{j=1}^n (-1)^j q_j x_j\; dx_1 \wedge \cdots \wedge \widehat{dx_j} \wedge \cdots \wedge dx_n,$$
$l \in \mathbb{N}$ and $Q \in \mathbb{C}(s)[x]$ is a polynomial with $\deg Q = (\deg f)(l-1)$ (recall that $\sum_i q_i = \deg f$ by the Calabi-Yau condition). We can now define the residue map as follows:
$$\int_\gamma \operatorname{Res} \frac{Q\,\Omega}{f^l} = \int_{T\gamma} \frac{Q\,\Omega}{f^l} \tag{1.1}$$
for an $(n-2)$-cycle $\gamma$ and $T\gamma$ a tubular neighbourhood around $\gamma$.

Let us go back to our goal. The Picard-Fuchs equation is of the form
$$\bigl(p_r(s)\delta^r + \cdots + p_1(s)\delta + p_0(s)\bigr) \Bigl(\int_{\gamma_i} \omega\Bigr) = 0,$$
where $p_i(s) \in \mathbb{C}(s)$. Suppose $\omega \in PH^{n-2}(V(f))$, so $\omega = \operatorname{Res}\frac{Q\,\Omega}{f^l}$ for some $Q, f, l$. Using equation (1.1) we get
$$\bigl(p_r(s)\delta^r + \cdots + p_0(s)\bigr)\Bigl(\int_{\gamma_i} \operatorname{Res}\frac{Q\,\Omega}{f^l}\Bigr) = \int_{T\gamma_i} \Bigl(p_r(s)\delta^r \frac{Q\,\Omega}{f^l} + \cdots + p_0(s)\frac{Q\,\Omega}{f^l}\Bigr) = \int_{\gamma_i} \bigl(p_r(s)\delta^r \omega + \cdots + p_1(s)\delta\omega + p_0(s)\omega\bigr),$$
because the integral commutes with the differential operator. This means that if we find a differential equation satisfied by the $(n-2)$-form $\omega$, then this also holds for the period integrals of $\omega$. From now on we will write $\frac{Q\,\Omega}{f^l}$ instead of $\operatorname{Res}\frac{Q\,\Omega}{f^l}$ for a form in $PH^{n-2}(V(f))$. The idea is now to calculate $\delta^i \frac{Q\,\Omega}{f^l}$ for $i = 0, 1, \dots$ until we find a linear relation between these forms. Unfortunately this is not so easy to do, because as $i$ increases, the pole order $l$ also increases. The Griffiths-Dwork method tells us how to solve this problem. The primitive cohomology can be compared with the Milnor ring by the following isomorphism:
$$\bigl(\mathbb{C}(s)[x]/J(f)\bigr)_{(\deg f)(l-1)} \cong PH^{n-1-l,\,l-1}(V(f)),\qquad Q \mapsto \frac{Q\,\Omega}{f^l},$$
where the subscript $(\deg f)(l-1)$ denotes the graded piece of degree $(\deg f)(l-1)$ in the Milnor ring.
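The isomorphism above turns the analytic problem into commutative algebra: one only needs monomial bases of graded pieces of the Milnor ring. As a minimal illustration (our sketch, with the parameter specialised to $s = 1$ so that SymPy can work over $\mathbb{Q}$ rather than $\mathbb{C}(s)$), we compute a monomial basis of the full Milnor ring for $g = x^3 + y^3 + z^3$:

```python
from itertools import product
import sympy as sp

x, y, z = sp.symbols('x y z')

# family f = g + s*x*y*z for g = x^3 + y^3 + z^3, specialised at s = 1
f = x**3 + y**3 + z**3 + x*y*z

J = [sp.diff(f, v) for v in (x, y, z)]      # generators of the Jacobian ideal J(f)
G = sp.groebner(J, x, y, z, order='grevlex')

def is_standard(m):
    """m is a basis monomial of the Milnor ring iff it equals its normal form."""
    _, r = G.reduce(m)
    return sp.expand(r - m) == 0

# standard monomials here have total degree <= 3, so exponents up to 3 suffice
basis = [x**a * y**b * z**c
         for a, b, c in product(range(4), repeat=3)
         if is_standard(x**a * y**b * z**c)]
print(len(basis))  # Milnor number (3-1)^3 = 8
```

In the actual algorithm one works degree by degree over $\mathbb{C}(s)$; Appendix B does exactly this in Singular.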
The fundamental ingredient for this isomorphism is the Griffiths formula (cf. Theorem 4.3 in [Gri69]) that tells us how and when to reduce the pole order of an $(n-2)$-form:
$$\frac{(l-1)\sum_{j=1}^n G_j \frac{\partial f}{\partial x_j}\,\Omega}{f^l} = \frac{\sum_{j=1}^n \frac{\partial G_j}{\partial x_j}\,\Omega}{f^{l-1}} \quad (\text{modulo exact forms}). \tag{1.2}$$
This can be seen from a direct calculation: the difference
$$\frac{(l-1)\sum_{j=1}^n G_j \frac{\partial f}{\partial x_j}\,\Omega}{f^l} - \frac{\sum_{j=1}^n \frac{\partial G_j}{\partial x_j}\,\Omega}{f^{l-1}}$$
is the exterior derivative of an explicit rational $(n-2)$-form of pole order $l-1$ built from the $G_j$, cf. [Gri69].

The big advantage is now that all computations can be done with a Gröbner basis in the Milnor ring, and the Picard-Fuchs equation can be calculated with the following algorithm.

Algorithm 1.15. (cf. Cox and Katz [CK99], Section 5.3) With the following steps one can determine the Picard-Fuchs equation for the one-parameter family $f(x)$ with parameter $s$:
(i) Find a basis $B$ of the Milnor ring $\mathbb{C}(s)[x]/J(f)$ in the degrees $d(l-1)$ for $1 \le l \le n-1$ (this is equivalent to having a basis of the primitive cohomology).
(ii) Write $\delta^i \omega = \bigl(s\frac{\partial}{\partial s}\bigr)^i \omega$ in the basis $B$ for all $0 \le i \le |B|$. This is done by writing $\delta^i \omega$ as a sum of a part that is in the basis and a part that is in the Jacobian ideal and can therefore be reduced with the Griffiths formula. After reducing, this process can be repeated until pole order $1$ is reached.
(iii) Now there are $|B|$ basis elements and $|B| + 1$ derivatives of $\omega$, so there is a linear relation between them. The linear relation between the $\delta^i \omega$ gives the Picard-Fuchs equation of $f$.

Remark 1.16. One could ask why it is still interesting to investigate the Griffiths-Dwork method in even more detail. The reason is that some of the calculations done in the above algorithm are very expensive. Furthermore, it very often happens that not all elements of the basis of the Milnor ring are needed in the calculations. The goal is therefore to find an abstract way to describe the steps in the Griffiths-Dwork method and try to restrict the calculations to a minimum.

Remark 1.17.
If we have an invertible polynomial $g(x)$ that satisfies the Calabi-Yau condition 1.9 and the one-parameter family we are looking at is $f(x) = g(x) + s\prod x_i$, then we can easily calculate $\delta^i \omega$ for $\omega = \frac{s\,\Omega}{f}$ and all $i \ge 1$:
$$\delta\omega = \frac{s\,\Omega}{f} - \frac{s^2 \prod x_i\,\Omega}{f^2},$$
$$\delta^2\omega = \frac{s\,\Omega}{f} - \frac{3 s^2 \prod x_i\,\Omega}{f^2} + \frac{2 s^3 (\prod x_i)^2\,\Omega}{f^3},$$
$$\delta^3\omega = \frac{s\,\Omega}{f} - \frac{7 s^2 \prod x_i\,\Omega}{f^2} + \frac{12 s^3 (\prod x_i)^2\,\Omega}{f^3} - \frac{6 s^4 (\prod x_i)^3\,\Omega}{f^4},$$
$$\vdots$$
$$\delta^i\omega = \sum_{m=0}^i (-1)^m\, r_{im}\, m!\, \frac{s^{m+1} (\prod x_j)^m\,\Omega}{f^{m+1}},$$
where $r_{im} = r_{i-1,m-1} + (m+1)\, r_{i-1,m}$ for $i, m \ge 1$, $m < i$, with $r_{i0} = r_{ii} = 1$ for all $i$. This means that in the second step of Algorithm 1.15 we have to write every $m!\, \frac{s^{m+1} (\prod x_j)^m\,\Omega}{f^{m+1}}$ in the basis $B$ of the Milnor ring.

Remark 1.18. In practice we are going to interchange the first and the second step in Algorithm 1.15. The basic idea is to first write the $\delta^i \omega$ with monomials of degree $\le d(n-2)$ and then see which of them are linearly independent in the Milnor ring and choose a basis this way.

In this section we want to show a possibility how to give a diagrammatic version of the Griffiths-Dwork method. This means we will develop diagrams for all the steps in the Griffiths-Dwork method. This is helpful later to reduce the algorithm to the important parts and to do the steps in a clear way, so that one can see what is happening. From now on we restrict ourselves completely to invertible polynomials. So $g(x) = \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ij}}$ is an invertible polynomial with weights $q_1,\dots,q_n$ and $\deg g = d$. We denote by $k_1,\dots,k_n$ the diagonal entries of the matrix $E = (E_{ij})_{i,j}$; all other entries of this matrix are $0$ or $1$. The one-parameter family $f$ is given by $f(x) = g(x) + s\prod_{i=1}^n x_i$. We will assume that the weights of $g$ satisfy the Calabi-Yau condition 1.9, so that $f$ is still weighted homogeneous. First we will have a closer look at the Jacobian ideal $J(f)$.
We start with an invertible polynomial, so every variable can appear in at most two terms of $g$, or equivalently three terms of $f$. The possibilities for the terms that contain the variable $x_i$ in $g$ are the following:
(i) $x_i^{k_i}$, which occurs if $x_i$ is in a chain of length $1$,
(ii) $x_i^{k_i} + x_{i-1}^{k_{i-1}}x_i$, which occurs if $x_i$ is the end of a chain of length $\ge 2$,
(iii) $x_i^{k_i}x_{i+1}$, which occurs if $x_i$ is the beginning of a chain of length $\ge 2$, or
(iv) $x_i^{k_i}x_{i+1} + x_{i-1}^{k_{i-1}}x_i$, which occurs if $x_i$ is in the middle of a chain of length $\ge 3$ or in a loop of arbitrary length.
Therefore there are only 4 possibilities for the partial derivative of $f$ with respect to $x_i$:
(i) $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$,
(ii) $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$,
(iii) $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1}x_{i+1} + s\prod_{j\neq i} x_j$, or
(iv) $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$.

Partial derivatives in a diagrammatic way

Now we want to write these partial derivatives in a diagrammatic way. To do this we will not write down monomials, but restrict ourselves to the exponents. So instead of writing $\prod_{i=1}^n x_i^{a_i}$ we write the tuple of exponents $(a_1, a_2, \dots, a_n)$. In the next step we want to write sums of monomials in the diagrammatic notation. Because the Jacobian ideal of $f$ is generated only by sums of two or three monomials, we concentrate on how to write these sums of two or three monomials in the Jacobian ideal in a good way. First let us assume our partial derivative with respect to $x_i$ is a sum of two monomials, i.e. $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$ or $k_i x_i^{k_i-1}x_{i+1} + s\prod_{j\neq i} x_j$.
Then we can identify the two involved monomials with two points given by the exponents and indicate the fact that they are in a sum by an arrow that points to the monomial which has the parameter $s$ as coefficient. The coefficient $k_i$ of the other monomial will be written at the beginning of the arrow. Later on, if it is not important, we will omit all coefficients to reduce the notation to the essential information. So we would write the partial derivative $k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$ as

$(0,\dots,0,k_i-1,0,\dots,0)$ --$k_i$--> $(1,\dots,1,0,1,\dots,1)$

Figure 1.1: Diagram associated to $k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$

and similarly the sum $k_i x_i^{k_i-1}x_{i+1} + s\prod_{j\neq i} x_j$ would be represented by

$(0,\dots,0,k_i-1,1,0,\dots,0)$ --$k_i$--> $(1,\dots,1,0,1,1,\dots,1)$

Figure 1.2: Diagram associated to $k_i x_i^{k_i-1}x_{i+1} + s\prod_{j\neq i} x_j$

Now let us study what happens if we multiply a partial derivative by a monomial. Multiplication with a monomial does not change the number of summands. So we still end up with a sum of two or three monomials, and the exponent of the monomial just gets added to the exponents of the partial derivative. For example, if we multiply $k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$ by a monomial $m = c\prod x_i^{a_i}$ then the product is represented in our new notation by

$(a_1,\dots,a_{i-1},a_i+k_i-1,a_{i+1},\dots,a_n)$ --$k_i$, $c$--> $(a_1+1,\dots,a_{i-1}+1,a_i,a_{i+1}+1,\dots,a_n+1)$

Figure 1.3: Multiplication of $k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$ by a monomial $m = c\prod x_i^{a_i}$

We keep track of the coefficient $c$ next to the middle of the arrow. Again, if the information is not important, we will omit the coefficient. A sum of two monomials can be written as a monomial times a partial derivative if the difference between the two monomials is $(1,\dots,1,-k_i+1,1,\dots,1)$ and there is a partial derivative of the form $k_i x_i^{k_i-1} + s\prod_{j\neq i} x_j$, or the difference is $(1,\dots,1,-k_i+1,0,1,\dots,1)$, the $(i+1)$-th entry of the vertex at the arrow tip is $> 0$ and there is a partial derivative of the form $k_i x_i^{k_i-1}x_{i+1} + s\prod_{j\neq i} x_j$. Of course the coefficients have to fit, but this will not need extra attention here.

Now let us discuss the case that the partial derivative is a sum of three monomials, so it is either $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ or $k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$. As in the case of two monomials, we connect all monomials that form the partial derivative. This is best understood in an example. First consider the case $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$; this leads to the picture

$(0,\dots,0,k_{i-1},0,0,\dots,0)$ --- $(0,\dots,0,k_i-1,0,\dots,0)$ --$k_i$--> $(1,\dots,1,0,1,\dots,1)$

Figure 1.4: Diagram associated to $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$

and for the case $k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ the picture is given by

$(0,\dots,0,k_{i-1},0,0,\dots,0)$ --- $(0,\dots,0,k_i-1,1,0,\dots,0)$ --$k_i$--> $(1,\dots,1,0,1,1,\dots,1)$

Figure 1.5: Diagram associated to $k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$

We put the arrow in the direction where the difference is given by $(1,\dots,1,-k_i+1,1,\dots,1)$ or $(1,\dots,1,-k_i+1,0,1,\dots,1)$ to indicate the sum between the monomial with the biggest $x_i$ exponent and the monomial with coefficient $s$. In the same way as before we can describe what happens if we multiply such a derivative consisting of three monomials by another monomial $m = c\prod x_i^{a_i}$. The result of multiplying $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ by $m$ would be

$(a_1,\dots,a_{i-1}+k_{i-1},a_i,\dots,a_n)$ --- $(a_1,\dots,a_{i-1},a_i+k_i-1,a_{i+1},\dots,a_n)$ --$k_i$, $c$--> $(a_1+1,\dots,a_{i-1}+1,a_i,a_{i+1}+1,\dots,a_n+1)$

Figure 1.6: Multiplication of $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ by a monomial $m = c\prod x_i^{a_i}$

In the same way as before we can state the conditions under which a sum of three monomials is given by multiplying a partial derivative by a monomial. In the case of $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ the pairwise differences between the three monomials have to be $(1,\dots,1,-k_i+1,1,\dots,1)$, $(1,\dots,1,-k_{i-1}+1,0,1,\dots,1)$ and $(0,\dots,0,k_{i-1},-k_i+1,0,\dots,0)$. In the case of $k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$ the differences of the summands have to be $(1,\dots,1,-k_i+1,0,1,\dots,1)$, $(1,\dots,1,-k_{i-1}+1,0,1,\dots,1)$ and $(0,\dots,0,k_{i-1},-k_i+1,-1,0,\dots,0)$. Again we ignore the fact that the coefficients have to match.

Remark 1.19. We will sometimes say that an arrow of one of the two above types which has three adjacent vertices creates an extra vertex. This is meant in the sense that if we want to connect two vertices with distance $(1,\dots,1,-k_i+1,1,\dots,1)$ or $(1,\dots,1,-k_i+1,0,1,\dots,1)$, then an extra vertex has to be created in order to get all the differences correct.

Remark 1.20. One should notice that the vertices adjacent to the same arrow all have the same weighted degree, because all summands of the partial derivative have the same weighted degree. This means in particular that if you know the degree of one vertex, you know the degree of all others.

The role of chains and loops

Remark 1.21. The property of being a chain or a loop is also reflected in the partial derivatives. In a loop $x_1^{k_1}x_2 + \cdots + x_m^{k_m}x_1$ all partial derivatives are of the form $\frac{\partial f}{\partial x_i} = k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$. Notice that the difference between the exponents of $x_{i-1}^{k_{i-1}}$ and $s\prod_{j\neq i} x_j$ is $(1,\dots,1,-k_{i-1}+1,0,1,\dots,1)$, which is exactly the difference of the arrow which belongs to the partial derivative $\frac{\partial f}{\partial x_{i-1}} = k_{i-1} x_{i-1}^{k_{i-1}-1}x_i + x_{i-2}^{k_{i-2}} + s\prod_{j\neq i-1} x_j$ with respect to $x_{i-1}$. We have to be a little bit careful that all exponents involved are positive, which means that $\frac{\partial f}{\partial x_i}$ has to be at least multiplied by $x_i$.
In our notation this means that if the numbers are big enough, i.e. all entries of the vertex at the arrow tip are $\geq 1$, the partial derivatives of a polynomial of loop type form a loop. We show here the smallest possible example; in general this works if all entries are at least as big as shown here:

[Figure 1.7: Partial derivatives of a loop: the arrows $\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_m}$ fit together in a cycle, with every arrow tip pointing at the vertex where the next arrow starts.]

In the case of a chain $x_1^{k_1}x_2 + \cdots + x_{m-1}^{k_{m-1}}x_m + x_m^{k_m}$ of length $\geq 3$, there are three types of partial derivatives involved. The partial derivatives of the variables in the middle of the chain are of the form $k_i x_i^{k_i-1}x_{i+1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$, and the beginning and the end are of the form $k_1 x_1^{k_1-1}x_2 + s\prod_{j\neq 1} x_j$ and $k_m x_m^{k_m-1} + x_{m-1}^{k_{m-1}} + s\prod_{j\neq m} x_j$ respectively. In the same way as for polynomials of loop type these partial derivatives match in our notation to give a chain. Again we show an example with the smallest non-negative entries.

[Figure 1.8: Partial derivatives of a chain of length $m > 2$: the arrows $\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_m}$ fit together in a chain of arrows.]

Of course this also works for chains of length $2$, where only two partial derivatives are involved. One is of the type $k_{i-1} x_{i-1}^{k_{i-1}-1}x_i + s\prod_{j\neq i-1} x_j$ and the other $k_i x_i^{k_i-1} + x_{i-1}^{k_{i-1}} + s\prod_{j\neq i} x_j$. How these match is also shown in the following picture:

[Figure 1.9: Partial derivatives of a chain of length $2$.]

Later, one of the important points is to know when an arrow is generated by a partial derivative. We have seen before that for this to happen the difference between the monomials has to be appropriate. To shorten the notation we define the following.

Notation 1.22. $\partial_i$ is an abbreviation for the partial derivative in the new notation. In detail, $\partial_i$ is a short notation for the arrow connecting all vertices of the partial derivative of $f$ with respect to $x_i$.

Writing a monomial with the generators of the Jacobian ideal

Now let us see how our new way of writing the partial derivatives fits into the Griffiths-Dwork method. We concentrate on the second part of Algorithm 1.15 and repeat what there is to do. We assume we have a basis of the Milnor ring in the appropriate degrees. From Remark 1.17 we know that all $\delta^i\omega$ are linear combinations of $\frac{m!\,s^{m+1}(\prod x_j)^m\,\Omega}{f^{m+1}}$ with $m\leq i$. So all we have to do is write every $m!\,s^{m+1}(\prod x_j)^m$ in the basis of the Milnor ring using the Griffiths formula. We should notice that for $m < n-1$ the monomial $m!\,s^{m+1}(\prod x_j)^m$ is not in the Jacobian ideal. To make things easier we can without loss of generality assume that these are basis elements. But for $m\geq n-1$ the monomial $m!\,s^{m+1}(\prod x_j)^m$ is in the Jacobian ideal and can be written as a linear combination of the partial derivatives. So for $m\geq n-1$ one can find polynomials $p_i^m(x)$ for $i = 1,\ldots,n$ such that $m!\,s^{m+1}(\prod x_j)^m = \sum_{i=1}^n p_i^m(x)\frac{\partial f}{\partial x_i}$.

Let us see what this means in our notation. First we represent the monomial $m!\,s^{m+1}(\prod x_j)^m$ by the vertex $(m,\ldots,m)$. On the other side we have a sum of the $p_i^m(x)\frac{\partial f}{\partial x_i}$. Here we get an arrow (maybe with an additional vertex) for every monomial in $p_i^m(x)$, but these arrows are not completely independent in the following sense: if we expand $p_i^m(x)\frac{\partial f}{\partial x_i}$, then every monomial apart from $(\prod x_j)^m$ has to appear at least twice, because all monomials apart from $(\prod x_j)^m$ have to cancel after adding up everything. In the new notation this means that there are always at least two arrows meeting at a vertex.
Putting this information for every vertex together, we can say that, because there are only finitely many monomials involved, all arrows that represent the sum $\sum_{i=1}^n p_i^m(x)\frac{\partial f}{\partial x_i}$ form not necessarily oriented cycles, with the exception of a line meeting one of the cycles at one end and the point $(m,\ldots,m)$, which corresponds to the monomial $(\prod x_j)^m$, at the other end. Notice that all cycles are connected; otherwise they can be omitted.

Summarizing the above, we know that the monomial $(\prod x_j)^m$ written as a linear combination of partial derivatives must consist of connected cycles and maybe an additional line from one of the cycles ending at $(m,\ldots,m)$. However, a representation of $(\prod x_j)^m$ by the partial derivatives is not given a priori, and it is not necessarily unique. Therefore the goal is to find such a linear combination of partial derivatives. Is it possible to arrange the arrows so that we end up with a linear combination of partial derivatives giving $(\prod x_j)^m$? The answer is yes, because otherwise the monomial would not be in the Jacobian ideal. A first solution of how this arrangement of arrows looks will be given in the next chapter in Proposition 2.3. After that we develop an explicit method of finding such a representation with the arrows of partial derivatives.

Using the Griffiths formula

Now let us return to Algorithm 1.15 and assume we have found a way to write $(m,\ldots,m)$ with the arrows representing the partial derivatives. The Griffiths formula (1.2) tells us that we can reduce the monomial $(\prod x_j)^m$ to a sum of monomials of degree $d(m-1)$ by differentiating the coefficient polynomials in the representation by the partial derivatives. In other words, if we can write $m!\,s^{m+1}(\prod x_j)^m = m\sum_{i=1}^n p_i^m(x)\frac{\partial f}{\partial x_i}$, then in the primitive cohomology $m!\,s^{m+1}(\prod x_j)^m$ can be identified with the polynomial $\sum_{i=1}^n \frac{\partial}{\partial x_i} p_i^m(x)$. Now we want to translate this behaviour to our notation.

For every arrow belonging to a partial derivative we have to extract the information what the coefficient monomial is, i.e. the monomial the partial derivative was multiplied with to give this arrow, and then take the appropriate partial derivative. This means that using the Griffiths formula every arrow gets contracted to a point in the following way:

[Figure 1.10: Griffiths formula: an arrow of $\frac{\partial f}{\partial x_i}$ with tip vertex $(b_1,\ldots,b_n)$, $b_i\neq 0$, gets contracted to the single point $(b_1-1,\ldots,b_n-1)$.]

This states that no matter which of the 4 types of partial derivatives we have, as long as $b_i\neq 0$ (all other $b_j\neq 0$ anyway) the Griffiths formula maps the whole arrow to the point at the arrow tip subtracted by $(1,\ldots,1)$. Let us see why this is correct: the point at the arrow tip represents the exponents of the coefficient monomial times $\prod_{j\neq i} x_j$, therefore the coefficient monomial is given by $x_i^{b_i}\prod_{j\neq i} x_j^{b_j-1}$. If we differentiate this monomial with respect to $x_i$ and $b_i\neq 0$, we end up with the monomial $\prod_{j=1}^n x_j^{b_j-1}$. If the entry $b_i = 0$, then the conclusion that the coefficient monomial is $\prod_{j\neq i} x_j^{b_j-1}$ is still true, but if we differentiate this with respect to $x_i$ it simply vanishes.

Summary of the diagrammatic Griffiths-Dwork method

We want to give a final summary of how the Griffiths-Dwork method and in particular Algorithm 1.15 work in our diagrammatic interpretation. We will skip the step of choosing a basis; this issue will be addressed later in Chapter 2, and we will also not deal with finding the linear relation. But these two steps are not the hard part of Algorithm 1.15. The most difficult part is to use the Griffiths formula 1.2. So we want to write the derivatives $\delta^i\omega$ in the basis for a given $i$. We have seen in Remark 1.17 that they consist only of the monomials $(\prod x_j)^m$ for $m\leq i$. So we can restrict ourselves to writing $(\prod x_j)^m$ in a basis. For $m\geq n-1$ these monomials are in the Jacobian ideal, $(\prod x_j)^m\in J(f)$, and for $m\leq n-2$ we can assume they are basis elements, because they are definitely linearly independent. So we concentrate on writing $(\prod x_j)^m$ for $m\geq n-1$ in the basis. The first thing we have to do to achieve this goal is to write $(\prod x_j)^m$ in terms of partial derivatives or, in our new diagrammatic notation, to find a way starting at $(m,\ldots,m)$ and using the arrows belonging to the partial derivatives in such a way that there are always at least two adjacent arrows at each vertex. As mentioned before this is not a sufficient condition, so if we have found such an arrangement of arrows we have to check that it really works and that the way does not turn out to be trivial in the sense that the coefficients we ignored are trivial. If we have found a valid arrangement giving $(\prod x_j)^m$ in terms of partial derivatives, we can use the Griffiths formula, which diagrammatically involves replacing every arrow by a single point, namely the vertex at the arrow tip subtracted by $(1,\ldots,1)$ if the appropriate entry is $\geq 1$; if this entry is $0$ the arrow just vanishes when we use the Griffiths formula. After using the Griffiths formula once we end up with vertices corresponding to monomials which are either in the basis or can be written as something in the basis plus something in the Jacobian ideal. We then have to repeat the same procedure for everything in the Jacobian ideal until we end up with monomials in the basis. This is the idea; in practice, however, it becomes slightly more complicated. We will come back to this in Chapter 2 in the cases important to us. In particular we will see that we can restrict ourselves to understanding the whole procedure for the monomial $(\prod x_j)^m$ for just one $m$, and from this we can deduce what happens in all the other cases.
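The single-arrow contraction rule of Figure 1.10 can be sketched directly; the following snippet is our own illustration (the function name is ours) of the rule just summarized.

```python
def griffiths_contract(tip, i):
    """Contract the arrow of df/dx_i with tip vertex (b_1, ..., b_n):
    the result is the tip minus (1, ..., 1), unless the i-th entry is 0,
    in which case the arrow vanishes (the coefficient monomial does not
    contain x_i, so differentiating with respect to x_i kills it)."""
    if tip[i] == 0:
        return None
    return [b - 1 for b in tip]

print(griffiths_contract([3, 3, 3, 3], 0))  # [2, 2, 2, 2]
print(griffiths_contract([2, 0, 2, 2], 1))  # None: the arrow vanishes
```

This is exactly the step that gets iterated in the summary above: contract all arrows, then rewrite whatever is left in terms of basis elements and the Jacobian ideal.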
Chapter 2

Calculations for the Picard-Fuchs equation with the Griffiths-Dwork method

In this chapter we will analyse the Picard-Fuchs equation for our one-parameter family in a lot more detail. We will simulate all steps of the Griffiths-Dwork method in our new notation. The goal of this chapter is to calculate the order of the Picard-Fuchs equation. The proof of the order of the Picard-Fuchs equation will also play a role in Chapter 3, where we will use this result together with the calculation of the GKZ system to show exactly what the Picard-Fuchs equation looks like.

We will use the same notation as before, but we want to recall it again here and use it throughout this chapter without further notice.

Notation 2.1. Let $g(x) = g(x_1,\ldots,x_n) := \sum_{i=1}^n \prod_{j=1}^n x_j^{E_{ij}}$ be an invertible polynomial with reduced weights $q_1,\ldots,q_n$ and $\deg g = d$ for which the Calabi-Yau condition $d = \sum_{i=1}^n q_i$ holds. The diagonal entries of the exponent matrix $E = (E_{ij})_{i,j}$ are denoted by $k_1,\ldots,k_n$. We denote by $g^t(x)$ the transposed polynomial of $g$; the dual reduced weights belonging to $g^t$ are denoted by $\hat q_1,\ldots,\hat q_n$ and the degree by $\deg g^t = \hat d$.

The invertible polynomial consists of loops and chains of arbitrary length. For a variable $x_i$ we always take $x_{i-1}$ and $x_{i+1}$ to be the neighbouring variables in the loop or chain. The indices are without further notice taken modulo the length of the loop or chain. We always denote by $f(x_1,\ldots,x_n)$ the one-parameter family with parameter $s$ defined by $f(x) = f(x_1,\ldots,x_n) := g(x_1,\ldots,x_n) + s\prod_{i=1}^n x_i$.

We will prove in Chapter 3 that the Picard-Fuchs equation has a very special form, which only depends on the dual weights and the dual degree of the invertible polynomial. This special form can be seen in the following theorem:

Theorem 3.6. The Picard-Fuchs equation for the one-parameter family $f(x_1,\ldots,x_n) = g(x_1,\ldots,x_n) + s\prod x_i$ is given by

$\prod_{i=1}^n \hat q_i^{\hat q_i}\; s^{\hat d} \prod_{i=1}^n \prod_{j=0}^{\hat q_i-1}\Big(\delta + j\cdot\tfrac{\hat d}{\hat q_i}\Big) \prod_{\ell\in I}(\delta+\ell)^{-1} \;-\; (-\hat d)^{\hat d} \prod_{j=0}^{\hat d-1}(\delta - j) \prod_{\ell\in I}(\delta-\ell)^{-1},$

where $I = \{0,\ldots,\hat d-1\}\cap\bigcup_{i=1}^n \big\{0,\tfrac{\hat d}{\hat q_i},2\tfrac{\hat d}{\hat q_i},\ldots,(\hat q_i-1)\tfrac{\hat d}{\hat q_i}\big\}$.

So in this chapter we will prove that the order of the Picard-Fuchs equation in the theorem above is correct.

2.1 Combinatorial ideas for the order of the Picard-Fuchs equation

Before we state the main theorem (Theorem 2.8) of this chapter, which presents what the order of the Picard-Fuchs equation is, we will have a closer look at the Griffiths-Dwork method. In Section 1.2.2 we already discussed how one can see the Griffiths-Dwork method in a diagrammatic way. But as promised we will be more concrete in this chapter. The first thing we want to make concrete is how to write $(\prod x_i)^{n-1}$ with the generators of the Jacobian ideal. We will see in the proof of Theorem 2.8 that this information is everything one needs to determine $(\prod x_i)^m$ for arbitrary $m$. We already know from Section 1.2.2 that we need a path using the arrows representing the partial derivatives, and we know from Remark 1.20 that all vertices, or correspondingly monomials, on this path have the same degree. So in this case all have degree $d(n-1)$. In the first lemma we will study all monomials of this degree.

Lemma 2.2. Suppose we have a monomial of weighted degree $d(n-1)$, the weights satisfy the Calabi-Yau condition and the monomial is not $\prod_{i=1}^n x_i^{k_i-1}$. Then there is an $i\in\{1,\ldots,n\}$ such that $x_i$ has an exponent $\geq k_i$.

Proof. Assume $m(x)$ is a monomial where for all $i$ the exponent of $x_i$ is $\leq k_i-1$. Then the weighted degree of this monomial satisfies

$\deg m \leq \sum_{i=1}^n q_i(k_i-1) = \sum_{i=1}^n q_i k_i - d \leq nd - d = d(n-1).$
This means the degree of $m(x)$ is smaller than $d(n-1)$ except if $q_ik_i = d$ for all $i$ (polynomial of Brieskorn-Pham type) and $m(x) = \prod x_i^{k_i-1}$.

Using Lemma 2.2 we can give a quite concrete construction of the path using the arrows representing the partial derivatives.

Proposition 2.3. The shortest non-trivial way from $(n-1,\ldots,n-1)$ to itself in any diagram that can be constructed using the arrows corresponding to the partial derivatives has length $\hat d$. Furthermore there is a shortest way that has at most one vertex with a zero entry. We call such a way a Jacobi path.

Proof. The idea is to first restrict ourselves to a main path, which will be a cycle starting and ending at $(n-1,\ldots,n-1)$, and then we will take care of the extra vertices we created. So first of all we forget about the extra vertices and treat every partial derivative as if it would consist of one arrow and two vertices. If we use every step $\partial_i$ (cf. Notation 1.22) of the Jacobian ideal exactly $\hat q_i$ times, then in total we end up at the starting point. This can be seen in the following calculation. We will look at the entries separately and show that after adding all the steps all entries are zero. In the case when $\hat q_ik_i = \hat d$, the $i$-th entry of adding up all steps is

$-\hat q_i(k_i-1) + \sum_{j\neq i}\hat q_j = -\hat q_ik_i + \sum_{j=1}^n \hat q_j = -\hat d + \hat d = 0.$

Notice that $\hat q_ik_i = \hat d$ means that in our polynomial $g(x)$ the variable $x_i$ only appears in the term $x_i^{k_i}$ or $x_i^{k_i}x_{i+1}$.

Now let $\hat q_ik_i \neq \hat d$. This means that in $g(x)$ the variable $x_i$ appears in the terms $x_i^{k_i} + x_{i-1}^{k_{i-1}}x_i$ or $x_i^{k_i}x_{i+1} + x_{i-1}^{k_{i-1}}x_i$. So $x_i$ is the end of a chain, or in the middle of a chain, or in a loop. In these cases we have $\hat q_ik_i + \hat q_{i-1} = \hat d$, and the $i$-th entry of adding up all steps is

$-\hat q_i(k_i-1) + \sum_{j\neq i,i-1}\hat q_j = -\hat q_ik_i + \sum_{j\neq i-1}\hat q_j = -\hat d + \sum_{j=1}^n \hat q_j = 0,$

because $\partial_{i-1}$ has $0$ as $i$-th entry.

According to Lemma 2.2 we know that we can arrange the arrows in a way that at each vertex all entries are $> 0$. The lemma states that we can always use some arrow, because at least for one $x_i$ the exponent is at least $k_i$, i.e. one entry in the vertex is at least $k_i$, and we can exclude that we have to use $\partial_i$ more than $\hat q_i$ times, because this would produce a negative entry somewhere, which is not possible.

Now we have to take care of the extra vertices, but this is very easy, because according to Lemma 2.2 all entries are $\geq 1$ in the case where an extra vertex appears. So we can use Remark 1.21, which shows that we can use the rest of the chain or the full loop here. This means that all arrows of the rest of the chain or loop fit here with the arrow tip pointing at the same vertex, as can be seen in Figures 1.7 and 1.8. After doing this, every vertex has at least two adjacent arrows.

There is obviously no non-trivial shorter way, because this would mean that there is a linear relation between the rows of $E$, and therefore the weights would not be reduced.

The last thing we have to exclude is that this path is trivial. This is not possible, because we produced a path where every arrow on the main path points in the same direction. At every vertex except $(n-1,\ldots,n-1)$ the coefficients are chosen in a way that they add up to zero, but the arrow tip always carries the $s$ as a coefficient, so in order to add up to zero the coefficient of the next arrow has to have a higher $s$ exponent. This means that at the point $(n-1,\ldots,n-1)$ there is one arrow with a coefficient that contains $s$ and one arrow with a coefficient containing $s^{\hat d}$. So they can never add up to zero, but we can easily normalise all coefficients to get the coefficient $1$ at this vertex.

Remark 2.4. In regular notation the output of Proposition 2.3 is that the monomial $\prod_{i=1}^n x_i^{n-1}$ can be written exactly as a sum $\sum_{i=1}^n (p_i(x) + h_i(x))\frac{\partial f}{\partial x_i}$, where $p_i(x)$ is a sum of exactly $\hat q_i$ monomials, and with only one exception all of these monomials include all variables with a positive exponent, and $h_i(x)$ includes all monomials that come from the arrows of the chains and loops to accommodate the extra vertices we created on the Jacobi path.

Remark 2.5. We want to draw some attention to the fact that the dual weights and the dual degree come into play in Proposition 2.3. The reason for that was already established in Remark 1.12. There we found out that the differences of the exponent vectors are given by the matrix $E^t - \left(\begin{smallmatrix}1&\cdots&1\\ \vdots & & \vdots\\ 1&\cdots&1\end{smallmatrix}\right)$. Also in the equations in Remark 1.12 we can see that the relations between these columns are given by the dual weights:

$\left(E^t - \begin{pmatrix}1&\cdots&1\\ \vdots & & \vdots\\ 1&\cdots&1\end{pmatrix}\right)\begin{pmatrix}\hat q_1\\ \vdots\\ \hat q_n\end{pmatrix} = \begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}$

This explains immediately why the dual weights give the relation between the steps done by the partial derivatives.

Now we know exactly how many times we need each arrow $\partial_i$, corresponding to a partial derivative, to have the shortest possible way of writing the monomial $\prod_{i=1}^n x_i^{n-1}$ with the partial derivatives. However, we still do not know in which order to use them. To achieve this, we will define a few sets that will tell us exactly where to use each partial derivative (see Lemma 2.9) and that will also be used in the formulation of the main theorem, telling us the order of the Picard-Fuchs equation.

Definition 2.6.

$D := \{1,2,\ldots,\hat d\}$
$Q_i := \big\{\tfrac{\hat d}{\hat q_i}, 2\tfrac{\hat d}{\hat q_i},\ldots,(\hat q_i-1)\tfrac{\hat d}{\hat q_i}, \hat d\big\}$
$Q_i^{\mathbb Z} := Q_i\cap\mathbb Z$
$Q_i^{\mathbb Q} := Q_i\cap(\mathbb Q\setminus\mathbb Z)$
$V := D\setminus\big(\bigcup_{i=1}^n Q_i^{\mathbb Z}\big)$
$v := |V|$
$u := \sum_{i=1}^n |Q_i| - \big|\bigcup_{i=1}^n Q_i^{\mathbb Z}\big|$
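Both the weight systems of Notation 2.1 and the quantities of Definition 2.6 can be computed mechanically. The sketch below is our own (the helper names `weights` and `pf_order` are not from the thesis); it uses exact rational arithmetic from the standard library, takes as test data the exponent matrix of the polynomial $x_1^{18} + x_2^2x_3 + x_3^3x_4 + x_4^3$ appearing in Example 2.12 below, and checks the identity $u = v$ of Remark 2.7 as well as the fact that the operator of Theorem 3.6 has $\sum_i \hat q_i - |I|$ factors.

```python
from fractions import Fraction
from math import gcd, lcm

def weights(E):
    """Reduced weights: smallest integral solution of E q = d*(1,...,1)."""
    n = len(E)
    A = [[Fraction(x) for x in row] + [Fraction(1)] for row in E]
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    q = [A[r][n] for r in range(n)]           # solves E q = (1,...,1)
    d = lcm(*[x.denominator for x in q])
    w = [int(x * d) for x in q]
    g = gcd(d, *w)
    return [c // g for c in w], d // g

def pf_order(dual_weights, d_hat):
    """The sets of Definition 2.6 and the order u of Theorem 2.8."""
    Q = [[Fraction(d_hat, qh) * c for c in range(1, qh + 1)]
         for qh in dual_weights]
    union_QZ = {int(x) for Qi in Q for x in Qi if x.denominator == 1}
    V = set(range(1, d_hat + 1)) - union_QZ
    u = sum(len(Qi) for Qi in Q) - len(union_QZ)
    assert u == len(V)                        # Remark 2.7: u = v
    # the set I of Theorem 3.6 has as many elements as the union of the
    # Q_i^Z, so the operator there also has sum(q_i) - |I| = u factors:
    I = {int(c * Fraction(d_hat, qh)) for qh in dual_weights
         for c in range(qh) if (c * Fraction(d_hat, qh)).denominator == 1}
    assert len(I) == len(union_QZ)
    return u

# Example 2.12: g = x1^18 + x2^2 x3 + x3^3 x4 + x4^3 (rows = monomials)
E = [[18, 0, 0, 0], [0, 2, 1, 0], [0, 0, 3, 1], [0, 0, 0, 3]]
Et = [list(r) for r in zip(*E)]
print(weights(E))                  # ([1, 7, 4, 6], 18): weights and degree
print(weights(Et))                 # ([1, 9, 3, 5], 18): dual weights, dual degree
print(pf_order([1, 9, 3, 5], 18))  # 9
```

The dual weight system is simply the weight system of the transposed matrix, which is why a single helper suffices for both.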
Remark 2.7. Since $|D| = \hat d$ and $|Q_i| = \hat q_i$, we have

$v = |V| = \hat d - \Big|\bigcup_{i=1}^n Q_i^{\mathbb Z}\Big| = \sum_{i=1}^n \hat q_i - \Big|\bigcup_{i=1}^n Q_i^{\mathbb Z}\Big| = \sum_{i=1}^n |Q_i| - \Big|\bigcup_{i=1}^n Q_i^{\mathbb Z}\Big| = u$

and $u = v \geq \varphi(\hat d)$, where $\varphi$ is Euler's phi function. In addition we have $u \geq n-1$.

Now we have everything to state the main theorem of the chapter:

Theorem 2.8. Let $g(x)$ be an invertible polynomial and $f(x) = g(x) + s\prod_i x_i$ a one-parameter family with parameter $s$. Then the Picard-Fuchs equation of $f(x)$ has order $u$.

Before we prove this theorem we will work out some details about the polynomial $g(x)$ and the one-parameter family $f(x)$. We will need this information to prove Theorem 2.8. Especially we need to know in more detail what our Jacobi path looks like. We know which steps have to be done, but we do not know in which order they are used. But this is important to keep track of which monomials vanish when we use the Griffiths formula. As before we will concentrate on the monomial $(\prod_{i=1}^n x_i)^{n-1}$ and try to figure out everything in this case first.

We already know that the Jacobi path has length $\hat d$. So there are $\hat d$ positions on our path that have to be filled. We want to figure out now at which position a partial derivative produces a vertex where at least one entry is $0$. This is important, because we know, with only one exception, that all entries on our Jacobi path are $\geq 1$. So if a partial derivative produces a vertex where at least one entry is $0$, then this is the earliest position where this partial derivative will be used. The following lemma tells us in detail where these earliest positions are.

Lemma 2.9. The smallest position where a partial derivative $\partial_i$ can be used is where it produces a $0$ entry in the vertex at the arrow tip. To state the smallest positions we distinguish between two cases.
The smallest positions for the partial derivative $\partial_i$ are

(i) $q - n + 2$ for $q\in Q_i$, if $\hat q_ik_i = \hat d$, or
(ii) $\lfloor q\rfloor - n + 2$ for $q\in Q_i^{\mathbb Q}$ and $q - n + 1$ for $q\in Q_i^{\mathbb Z}$, if $\hat q_ik_i \neq \hat d$.

Remark 2.10. The numbers $q-n+2$, $q-n+1$ and $\lfloor q\rfloor-n+2$ can be $\leq 0$. If this happens, this obviously means that we cannot use this partial derivative at a position where we produce an entry $0$. We have to move these partial derivatives at least to position $1$. But this will become clear later.

Proof. Let us assume we are in case (i), so $\hat q_ik_i = \hat d$. This means that in $g(x)$ the variable $x_i$ appears only as $x_i^{k_i}$ or $x_i^{k_i}x_{i+1}$. It follows that all $\partial_l$ for $l\neq i$ have $1$ as $i$-th entry. So if we assume that all other positions are taken, then the first time we can use $\partial_i$ is when the $i$-th entry is $k_i = \frac{\hat d}{\hat q_i}$, but because we started with the monomial $(n-1,\ldots,n-1)$ and at every step $\neq\partial_i$ we add $1$ in the $i$-th entry, this happens after $k_i-(n-1)$ steps. Now the next time we can use $\partial_i$ is after $k_i$ further steps of adding $1$ to the $i$-th entry. So in total we can use $\partial_i$ at the positions $q-n+2$ for $q\in Q_i$. This proves case (i).

For case (ii) we assume $\hat q_ik_i + \hat q_{i-1} = \hat d$. The numbers in $Q_i$ are evenly spread between $0$ and $\hat d$. These numbers minus $n-2$ nearly give the smallest positions of $\partial_i$, but we have to investigate this a little bit more to see what is happening. Similar to the other case it follows that the only terms that contain $x_i$ in $g(x)$ are $x_i^{k_i} + x_ix_{i-1}^{k_{i-1}}$ or $x_i^{k_i}x_{i+1} + x_ix_{i-1}^{k_{i-1}}$. But this time not all of the other partial derivatives add $1$ to the $i$-th entry: the partial derivative $\partial_{i-1}$ adds $0$ to the $i$-th entry. We can use $\partial_i$ for the first time if the $i$-th entry is $k_i$, so we have to use $k_i - n + 2 = \frac{\hat d - \hat q_{i-1}}{\hat q_i} - n + 2$ of the partial derivatives with respect to the $x_j$ with $j\neq i,i-1$. In addition we have to count how often $\partial_{i-1}$ got used before we use $\partial_i$. Since for both partial derivatives the numbers in $Q_i$ and $Q_{i-1}$ are evenly spread, the relation between $\hat q_i$ and $\hat q_{i-1}$ tells us exactly the relative position of both numbers on the Jacobi path. The term $\frac{\hat q_{i-1}}{\hat q_i}$ tells us exactly how often $\partial_{i-1}$ is used before we use $\partial_i$ for the first time. If this is not a natural number we have to round down and get $\lfloor q\rfloor - n + 2$ as position for $\partial_i$, and as before the other positions are at the multiples of these. If $\frac{\hat q_{i-1}}{\hat q_i}$ is a natural number, then this means that $\partial_{i-1}$ can be used at the same position. But because this does not contribute anything in the entry $i$, we can use $\partial_i$ one position earlier. This proves part (ii).

Remark 2.11. Notice that all partial derivatives $\partial_i$ with $\hat q_ik_i = \hat d$ are at position $\hat d - n + 2$ and all partial derivatives $\partial_i$ with $\hat q_ik_i \neq \hat d$ are at position $\hat d - n + 1$, which agrees with the fact that after using every partial derivative $\partial_i$ exactly $\hat q_i - 1$ times we always end up with the monomial $\prod_{\hat q_ik_i = \hat d} x_i^{k_i-1} \prod_{\hat q_ik_i \neq \hat d} x_i^{k_i}$.

We want to draw a picture illustrating the Jacobi path and indicating where to use the partial derivatives. We want to explain this using an example.

Example 2.12. Consider the one-parameter family $f(x_1,x_2,x_3,x_4) = x_1^{18} + x_2^2x_3 + x_3^3x_4 + x_4^3 + s\,x_1x_2x_3x_4$ with weights $(q_1,q_2,q_3,q_4) = (1,7,4,6)$ and weighted degree $\deg f = d = 18$. The dual weights and the dual weighted degree are given by $(\hat q_1,\hat q_2,\hat q_3,\hat q_4) = (1,9,3,5)$ and $\hat d = 18$. The sets needed for calculating the path in the Jacobian ideal are given as follows:

$Q_1 = \{18\}$, $Q_2 = \{2,4,6,8,10,12,14,16,18\}$,   (2.1)
$Q_3 = \{6,12,18\}$, $Q_4 = \big\{\tfrac{18}{5},\tfrac{36}{5},\tfrac{54}{5},\tfrac{72}{5},18\big\}$

We know that the Jacobi path has length $18 = \hat d$, so we will make a table where every position can be entered.
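Before drawing the table, note that the smallest positions of Lemma 2.9 can also be generated mechanically. The following sketch is ours (the helper name is an assumption, not the thesis's notation); it reproduces the data of the table below for Example 2.12.

```python
from fractions import Fraction
from math import floor

def smallest_positions(dual_weights, d_hat, k, n):
    """Smallest positions of Lemma 2.9 for each partial derivative.
    k[i] is the i-th diagonal entry of the exponent matrix
    (0-based lists, 1-based path positions)."""
    result = []
    for qh, ki in zip(dual_weights, k):
        Qi = [Fraction(d_hat, qh) * c for c in range(1, qh + 1)]
        if qh * ki == d_hat:                          # case (i)
            result.append([int(q) - n + 2 for q in Qi])
        else:                                         # case (ii)
            result.append([floor(q) - n + 2 if q.denominator != 1
                           else int(q) - n + 1 for q in Qi])
    return result

# Example 2.12: dual weights (1, 9, 3, 5), d_hat = 18, k = (18, 2, 3, 3):
print(smallest_positions([1, 9, 3, 5], 18, [18, 2, 3, 3], 4))
# [[16], [0, 2, 4, 6, 8, 10, 12, 14, 16], [3, 9, 15], [1, 5, 8, 12, 15]]
```

The entry $0$ for $\partial_2$ is exactly the situation of Remark 2.10: that derivative has to be moved to a position $\geq 1$.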
The table stops at position $16 = \hat d - n + 2$ because that is the biggest number that can occur as a smallest position.

[Figure 2.1: Smallest positions for the partial derivatives. For Example 2.12 these are: $\partial_1$ at position $16$; $\partial_2$ at positions $0, 2, 4, 6, 8, 10, 12, 14, 16$; $\partial_3$ at positions $3, 9, 15$; $\partial_4$ at positions $1, 5, 8, 12, 15$.]

The idea is that, because we know the smallest position where we can use $\partial_i$, we get a Jacobi path by just shifting the positions until we have one partial derivative at every position. Proposition 2.3 tells us that this is always possible. Basically we should only shift the partial derivatives to the right, because shifting to the left is only possible by one position and this might disconnect the path. The only position where we allow a shift to the left will be from position $\hat d - n + 2$ to position $\hat d - n + 1$. The reason for this will become clear later. We will have a look at what to do in the example and indicate the shifts by arrows in the table.

Example 2.13. (Continuation of Example 2.12) The shifting we have to do can be seen in the following picture:

[Figure 2.2: Shifting of positions on the Jacobi path: the collisions in the table of smallest positions are resolved by shifting the surplus partial derivatives until each of the positions $1$ to $18$ carries exactly one partial derivative.]

At this point it is not entirely clear why we choose to shift exactly like this. But the important point here is that we have exactly one partial derivative at each position, and apart from one partial derivative, which is moved to the left from position $16$ to position $15$, we shifted all arrows to the right.

The above picture tells us which partial derivative we have to use at every position. So we can use it to write down the Jacobi path in the diagrammatic notation introduced in Section 1.2.2. The only thing we have to take care of in addition is to complete the loops and chains so that the coefficients can be chosen such that after adding up everything else but $(\prod_{i=1}^n x_i)^{n-1}$ vanishes. From Remark 1.21 we know that this is nearly always possible without any difficulties.
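A proposed filling of the positions can be checked by simply walking the path. The sketch below is ours: the order it tests is one valid resolution of the collisions, obtained by shifting to the right as described, and is an assumption; it need not literally coincide with the choice displayed in Figure 2.2. Each step adds the difference vector of the corresponding arrow; the path must stay non-negative, have at most one vertex with a zero entry, and return to $(3,3,3,3)$.

```python
# Difference vectors (tip minus tail) of the four arrows for Example 2.12,
# g = x1^18 + x2^2 x3 + x3^3 x4 + x4^3, f = g + s x1 x2 x3 x4:
step = {1: (-17, 1, 1, 1),   # df/dx1 = 18 x1^17 + s x2 x3 x4
        2: (1, -1, 0, 1),    # df/dx2 = 2 x2 x3 + s x1 x3 x4
        3: (1, 1, -2, 0),    # df/dx3 = x2^2 + 3 x3^2 x4 + s x1 x2 x4
        4: (1, 1, 1, -2)}    # df/dx4 = x3^3 + 3 x4^2 + s x1 x2 x3

# One valid order for positions 1..18 (an assumed resolution of the
# collisions, not necessarily the one shown in Figure 2.2):
order = [2, 4, 2, 3, 2, 4, 2, 2, 4, 3, 2, 2, 4, 2, 1, 3, 4, 2]

v = (3, 3, 3, 3)
zero_vertices = 0
for i in order:
    v = tuple(a + b for a, b in zip(v, step[i]))
    assert min(v) >= 0                 # the path never leaves the octant
    zero_vertices += 1 if 0 in v else 0
assert v == (3, 3, 3, 3)               # the path closes up
assert zero_vertices <= 1              # at most one vertex with a zero entry
print("valid Jacobi path with", order.count(2), "uses of d2")
```

Note that $\partial_i$ is used exactly $\hat q_i$ times ($1, 9, 3, 5$ uses), as Proposition 2.3 requires, and that the vertex reached after $14 = \hat d - n$ steps is $(17,1,3,3) = x_1^{17}x_2x_3^3x_4^3$, the monomial of Remark 2.11.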
We will go back to the example now to see how this works.

Example 2.14. (Continuation of Example 2.12) We will write down explicitly what the Jacobi path looks like. We start with the monomial $(3,3,3,3)$ and use the partial derivatives in the order given by Figure 2.2. This leads to the following picture, where we write down every monomial on the Jacobi path. Notice that we neglect all the coefficients here.

[Figure 2.3: The Jacobi path for Example 2.12: the $18$ arrows of Figure 2.2 applied one after the other to the starting vertex $(3,3,3,3)$, listing all intermediate exponent vectors until the path returns to $(3,3,3,3)$.]

Recall that the Griffiths formula contracts every arrow to the vertex at the arrow tip subtracted by $(1,\ldots,1)$, or, if the point has $0$ as an entry, the arrow vanishes completely. But from Proposition 2.3 we know that the Jacobi path starting at $(n-1,\ldots,n-1)$ has at most one vertex with $0$ as an entry. This means that after using the Griffiths formula at most one vertex will vanish. The vertices that are still there after the use of the Griffiths formula have the same differences as before. So we can basically put the arrows in again, but we have to be a little bit careful. If the partial derivative belongs to a loop or a chain, then all entries belonging to another variable of the loop or to the rest of the chain have to be $> 0$. If an arrow fits in between two vertices together with the rest of the chain, or the loop, as can be seen in Figures 1.7 and 1.8, then we can adjust all coefficients. This means we only need one basis element for every part of a path. But we have to be careful that our shifting did not disconnect two vertices which are not linearly dependent. Therefore we want to investigate what a good and what a bad way of shifting is.
This means we want to find out how to shift the partial derivatives suchthat if a path gets disconnected there is no other connection between two vertices. The last n positions play a special role here and we will take care of them in the end. We distinguishbetween the following cases:(i) Two partial derivatives ∂ i and ∂ i are at the same position p , where(a) ∂ i and ∂ i are not neighbouring elements in a chain or a loop or(b) ∂ i and ∂ i are neighbouring elements in a chain or a loop.(ii) Two partial derivatives ∂ i and ∂ i are at two succeeding positions p and p + 1 and thefirst one ∂ i gets shifted, where(a) ∂ i and ∂ i are not neighbouring elements in a chain or a loop or(b) ∂ i and ∂ i are neighbouring elements in a chain or a loop.Step by step we will show how to shift in all these cases. Before we take care of all special cases,we will state a lemma that shows us that for some partial derivatives it is always possible toshift them. Lemma 2.15. Let M ⊆ { , . . . , n } be the set of all indices with b q m k m = b d for m ∈ M .Then, without one exception, every monomial Q ni =1 x α i i , with α m = k m for m ∈ M and α i < k i for i ∈ { , . . . , n } \ M , has degree < d ( n − . The only exception is the monomial Q m ∈ M x k m m Q i/ ∈ M x k i − i .Proof. Assume the statement is false. This means that there is a monomial Q ni =1 x α i i with α m = k m for m ∈ M and α i < k i for i ∈ { , . . . , n } \ M that has weighted degree d ( n − .Notice that b d = b q m k m means that either x k m m + x k m − m − x m or x k m m x m +1 + x k m − m − x m is in thepolynomial g ( x ) , where the indices are taken modulo the length of the appropriate chain or0 Calculations for the Picard-Fuchs equation with the Griffiths-Dwork method loop. It follows that q m + q m − k m − = d . If we calculate the degree of the monomial Q ni =1 x α i i ,we get: deg n Y i =1 x α i i ! 
deg(∏_{i=1}^n x_i^{α_i}) = Σ_{i=1}^n q_i α_i = Σ_{m∈M} q_m k_m + Σ_{i∉M} q_i α_i
≤ Σ_{m∈M} q_m k_m + Σ_{i∉M} q_i (k_i − 1)
= −d + Σ_{m∈M} q_m + Σ_{i=1}^n q_i k_i
= −d + Σ_{m∈M} (q_m + q_{m−1} k_{m−1}) + Σ_{i: i+1∉M} q_i k_i,  where each q_m + q_{m−1} k_{m−1} = d and each remaining q_i k_i = d,
= −d + d·|M| + d·(n − |M|) = d(n − 1).

It follows that the degree of the monomial ∏_{i=1}^n x_i^{α_i} with α_m = k_m for m ∈ M and α_i < k_i for i ∈ {1, . . . , n} \ M is < d(n − 1) unless α_i = k_i − 1 for all i ∉ M.

We want to relate Lemma 2.15 to what we know. The lemma states in particular that if ∂_i creates an extra vertex, then there is, with one exception, no monomial where x_i has exponent k_i and every other x_j has exponent smaller than k_j. This means that on the Jacobi path there is no position, apart from d̂ − n + 1 (cf. Remark 2.11), where we can use ∂_i exclusively. In other words: there is always the possibility to shift ∂_i if it produces an extra vertex.

To make the notation a little easier, we state two extra definitions.

Definition 2.16. For a fixed Jacobi path, we denote by κ(p) := min_{1≤i≤n} {a_i | (a_1, . . . , a_n) is the p-th vertex on the Jacobi path}. So κ(p) is the smallest entry of the vertex at position p. The second number we define is ∂(p): for every position on a fixed Jacobi path, ∂(p) := ∂_i if ∂_i is the arrow connecting the vertices at positions p and p + 1, and ∂(p) := 0 if there is no arrow connecting the vertices at positions p and p + 1.

Now we want to investigate how to shift in case (i). So we have two partial derivatives ∂_{i_1} and ∂_{i_2} that are possible at the same position p. If the variables x_{i_1} and x_{i_2} are not neighbours in a loop or a chain, then they are independent of each other; this is subcase (a). Assume we shifted ∂_{i_2}, so we assume ∂(p) = ∂_{i_1} and ∂(p + 1) = ∂_{i_2}. This means that κ(p + 1) = 1 and κ(p + 2) = 2. If we use the Griffiths formula once, we have κ(p + 2) = 1 and therefore we still have ∂(p + 1) = ∂_{i_2}.
After using the Griffiths formula twice, the vertex at position p + 1 will vanish, because after the first use of the Griffiths formula we had κ(p + 1) = 0. For the arrow at position p it can make a difference which partial derivative we use here. If κ(p) = κ(p + 1), then after the first use of the Griffiths formula there is an arrow between the two vertices if and only if the partial derivative belongs to a chain of length 1 or is the beginning of a chain. So if only one of the two partial derivatives belongs to the middle or end of a chain or to a loop, then this one should be shifted to position p + 1; otherwise it does not matter.

Now we get to subcase (b), which means that the two partial derivatives ∂_{i_1} and ∂_{i_2} at position p belong to neighbouring variables of a chain or a loop. The following lemma covers this situation.

Lemma 2.17. Let x_1^{k_1} x_2 + · · · + x_m^{k_m} x_1 be a loop of length m in g(x).
(i) q ∈ Q_i^Z for some i ∈ {1, . . . , m} ⇒ q ∈ Q_i^Z for all i ∈ {1, . . . , m}.
(ii) q = ⌊q̃_i⌋ = ⌊q̃_{i+1}⌋ with q̃_i ∈ Q_i, q̃_{i+1} ∈ Q_{i+1} ⇒ q ∈ Q_i^Z for all i ∈ {1, . . . , m}.
Let x_1^{k_1} x_2 + · · · + x_{m−1}^{k_{m−1}} x_m + x_m^{k_m} be a chain of length m in g(x).
(iii) q ∈ Q_i^Z for some i ∈ {1, . . . , m} ⇒ q ∈ Q_j^Z for all j ∈ {1, . . . , i}.
(iv) q = ⌊q̃_i⌋ = ⌊q̃_{i+1}⌋ with q̃_i ∈ Q_i, q̃_{i+1} ∈ Q_{i+1} ⇒ q ∈ Q_j^Z for all j ∈ {1, . . . , i}.

Proof. (i): If x_1^{k_1} x_2 + · · · + x_m^{k_m} x_1 is in g(x), then this means that q̂_1 k_1 + q̂_m = d̂, q̂_m k_m + q̂_{m−1} = d̂, . . . , q̂_2 k_2 + q̂_1 = d̂. If q = c d̂/q̂_i ∈ Q_i^Z, then q̂_i | c d̂ and it follows immediately that q̂_i | c q̂_{i−1}, . . . , c q̂_1, c q̂_m, . . . , c q̂_{i+1}, c d̂. Define β_j := c q̂_j / q̂_i; then we have β_j d̂/q̂_j = β_j c d̂/(β_j q̂_i) = q, and therefore q ∈ Q_j^Z for all j ∈ {1, . . . , m}.

(ii): Let q̃_i = c_i d̂/q̂_i and q̃_{i+1} = c_{i+1} d̂/q̂_{i+1}; then q = (c_i d̂ − a_i)/q̂_i = (c_{i+1} d̂ − a_{i+1})/q̂_{i+1} with a_i, a_{i+1} ∈ Z, 0 ≤ a_i < q̂_i and 0 ≤ a_{i+1} < q̂_{i+1}.
Because d̂ = q̂_{i+1} k_{i+1} + q̂_i, the following calculation holds:

q̂_{i+1} (c_i d̂ − a_i) = q̂_i (c_{i+1} d̂ − a_{i+1})
q̂_{i+1} (c_i d̂ − a_i) = (d̂ − q̂_{i+1} k_{i+1}) (c_{i+1} d̂ − a_{i+1})
q̂_{i+1} (c_i d̂ − a_i) = d̂ (c_{i+1} d̂ − a_{i+1}) − q̂_{i+1} k_{i+1} (c_{i+1} d̂ − a_{i+1})
c_i d̂ − a_i = d̂ q − k_{i+1} (c_{i+1} d̂ − a_{i+1}).

It follows that d̂ | (a_i + k_{i+1} a_{i+1}) and 0 ≤ a_i + k_{i+1} a_{i+1} < q̂_i + k_{i+1} q̂_{i+1} = d̂. Therefore a_i = a_{i+1} = 0, and with (i) the result follows.

(iii): If x_1^{k_1} x_2 + · · · + x_{m−1}^{k_{m−1}} x_m + x_m^{k_m} is in g(x), then it follows that q̂_1 k_1 = d̂, q̂_2 k_2 + q̂_1 = d̂, . . . , q̂_i k_i + q̂_{i−1} = d̂. If q = c d̂/q̂_i ∈ Q_i^Z, then q̂_i | c d̂ and it follows that q̂_i | c q̂_{i−1}, . . . , c q̂_1. Define β_j := c q̂_j / q̂_i; then we have β_j d̂/q̂_j = β_j c d̂/(β_j q̂_i) = q, and therefore q ∈ Q_j^Z for all j ∈ {1, . . . , i}.

(iv): The proof is essentially the same as in (ii). The only extra case is i = 1, but then q ∈ Z, and this just means a_1 = 0 from the beginning.

We have proved that if we are in case (ib), where two partial derivatives are at the same position and belong to neighbouring variables in a chain or loop, then either the whole loop is at this position, or the chain from its beginning up to (at least) one of the two variables is at this position. To be more precise, if two neighbouring variables ∂_i and ∂_{i−1} of a chain have the same number q ∈ Q_i ∩ Q_{i−1}, then q ∈ Q_j for all j ≤ i as long as x_j is part of the chain. According to Lemma 2.9 this means that the beginning of the chain is at position q − n + 2 and at least everything in the chain between the beginning and ∂_i is at position q − n + 1. We will now show in detail what to do if a loop or a chain of arbitrary length is at one position. In the first remark we will see how to shift a complete loop and what the linear dependencies between the monomials are.
In Remark 2.19 we will do the same for a chain of arbitrary length.

Remark 2.18. Assume the loop x_1^{k_1} x_2 + · · · + x_m^{k_m} x_1 is in g(x) and there is an element q ∈ ∩_{i=1}^m Q_i. This means that all partial derivatives with respect to a variable in the loop have the same smallest position q − n + 1. In the pictures below we want to see what happens if we use the partial derivatives in order, i.e. starting with ∂_1, then ∂_2, until in the end we use ∂_m. The polynomial g(x) might have variables x_{m+1}, . . . which are not in the loop, but we will omit all entries in the vertices that do not belong to the loop, i.e. all other variables in the monomials, because they only increase by 1 in every step and do not have any effect on the partial derivatives we use here. All partial derivatives have the same smallest position, so the starting monomial has to be (k_1 + c, . . . , k_m + c), where c depends on how often the whole loop got shifted. We will start at c = −1, because this is the first time 0 appears as an entry; if c is bigger, nothing interesting happens until we have used the Griffiths formula several times. So we start with the vertex (k_1 − 1, . . . , k_m − 1) and use every partial derivative of the variables in the loop exactly once. The picture we get is the following.

Figure 2.4: The case of a complete loop with the same smallest position

As long as the entries are > 0, we can take care of the extra vertices with the rest of the partial derivatives, as we have seen in Figure 1.7. So there are a lot of extra vertices that we do not write down, because this is the normal case as in Remark 1.21.
In the above picture all arrows starting from the third vertex until the end point to a vertex with positive entries, and therefore we marked these arrows with a dot in the middle. Also, in this part of the picture the smallest number increases by 1 in every step, so there is nothing to worry about here. The part that needs more attention is between the first and the second vertex and between the second and the third vertex. In both cases the vertex that the arrow would point to has a zero entry, so the situation looks the same in both cases. We can use the arrow ∂_1 between the first two vertices, and we can use ∂_2 between the second and the third vertex. However, in both cases we are not able to use all the other partial derivatives in such a way that they point to the second or third vertex, which means that there would be a vertex with just one adjacent arrow. What we will see in the following is that this problem can be fixed for the arrow between the second and the third vertex, but not for the arrow between the first two vertices. Therefore in Figure 2.4 we draw the arrow ∂_2 but not the arrow ∂_1.

In the next picture we see that there is a path which at one end is connected with the extra vertex from Figure 2.4 and at the other end with the vertex (m − 1, . . . , m − 1), where the part of the path we are looking at ends. Now we draw the complete picture of this part of the path; after that we will show the interesting part of this picture again in more detail. The original path from Figure 2.4 is shown in the first row of the picture.

Figure 2.5: The case of a complete loop at one position

To show the structure of what is happening, we used the following abbreviations for the vertices:
v_0 = (k_1 − 1, . . . , k_m − 1), v_m = (m − 1, . . . , m − 1), and v_{i,j} = (v_{i,j}^1, . . . , v_{i,j}^m), where the entry v_{i,j}^l equals k_l + i − 1, k_l + i − 2, i − 1 or i − 2 according to whether l ≡ j + 1, l ≡ j + 2, . . . , j − i, l ≡ j − i + 1, or l ≡ j − i + 2, . . . , j (mod m).

In the picture we marked in light grey the vertices which are not produced by the Jacobi path, i.e. which are not included in Figure 2.4, but which we can put in additionally in order to have two adjacent arrows at every vertex. Again, the dot in the middle of an arrow indicates that there is actually an extra vertex, and this extra vertex can be adopted with a normal loop as in Remark 1.21. To get a better view of what is happening here, we draw a detailed picture of the second and the third vertex on the original path, which includes the extra vertices we have to put in additionally to take care of the extra vertex created by ∂_1.

Figure 2.6: The second and third vertex of a complete loop at one position

Here we can see in detail that the extra vertex from Figure 2.4 and all the other vertices on the left side have two adjacent arrows. This means that it is always possible to adjust the coefficients, and therefore the extra vertex is linearly dependent on the vertices that are already on the path. The vertices on the right side also have another adjacent arrow: for every vertex on the right side there is a path ending at (m − 1, . . . , m − 1), as was shown in Figure 2.5 above.

We will now recall the important facts of this remark, which we will also need later. The vertices on the Jacobi path can be seen in Figure 2.4, and we want to summarize the relations of the vertices on this path.
First of all, starting from the third vertex until the end, we have κ(p) + 1 = κ(p + 1). This is all the information we need for these vertices. We also see that between the first and the second vertex there is a gap as soon as a 0 appears in the second vertex. The special behaviour is between the second and the third vertex. Here we found out that, even if a 0 appears in the third vertex, and although the arrow connecting the two vertices creates an extra vertex, the second vertex is still linearly dependent on the rest of the path. The extra vertices we put in to make the second vertex linearly dependent on the already existing path do not really play an extra role here. The reason is that we were able to find a path starting at each of these extra vertices and ending at (m − 1, . . . , m − 1). It follows that after using the Griffiths formula we lose the beginning of these paths, and they will always be connected to (m − 1, . . . , m − 1). Therefore we can ignore all the extra vertices we put in, and just keep in mind that the second vertex is linearly dependent despite the fact that the smallest number of the third vertex is 0.

Another important fact to notice is that nothing changes if we change the order of the partial derivatives within the positions, because we will always have an arrow that connects two vertices with the same smallest numbers. So the basic idea is the same.

The above picture does not work if m = 2, but there something similar happens. One of the important facts in the above picture was that the second vertex did not stand alone. The same thing happens in the special case of a loop of length 2. This time we will not omit the coefficients, because they are the key ingredient in this case. We mark the coefficients coming from the partial derivatives in purple and the ones we can choose in blue. So assume x_1^{k_1} x_2 + x_2^{k_2} x_1 is in g(x).
Then we get the following picture.

Figure 2.7: The case of a loop of length 2 at one position

Here α is the coefficient that comes from the arrow before the loop starts. The following coefficients are chosen such that we get 0 at this vertex. If this is not the first time we used the Griffiths formula, which means that the coefficients should not add up to 0 at a vertex but to a certain constant, it is very easy to adjust the coefficients. We can see in the picture that the second vertex vanishes after the use of the Griffiths formula, because the coefficient is 0. This means again that the second vertex does not stand alone. In other words, if we have a loop of length 2, everything is linearly dependent as long as all entries are bigger than 0. If we use the Griffiths formula and the entries become 0, we get only one gap, and not 2 gaps as one might expect.

With this remark we know what to do if there is a complete loop at one position. Now we will show what happens if a chain (or a part of a chain) is at one position. This partially involves case (ii), because the beginning of the chain and the partial derivative before it are never at the same position. In the case that occurs in Lemma 2.17, the partial derivative belonging to the beginning of the chain is at position p + 1 and the rest of that part of the chain is at position p.

Remark 2.19. From Lemma 2.17 we know that if two partial derivatives of neighbouring variables x_i and x_{i−1} in a chain have the same element p in Q_i and Q_{i−1}, then this element is also in all Q_j for j ∈ {1, . . . , i}. This means that we can use ∂_1 at position p and ∂_j at the position p − 1 for all 2 ≤ j ≤ i. So let x_1^{k_1} x_2 + · · · + x_{m−1}^{k_{m−1}} x_m + x_m^{k_m} be a chain in g(x) and assume ∂_1, . . . , ∂_i are at the same position.
Let a_{i+1} > k_{i+1}. Then the picture is the following.

Figure 2.8: The case of a chain at one position

It is important to use ∂_1 first, because otherwise one of the arrows creating an extra vertex would connect two vertices with the same smallest numbers and therefore disconnect the path earlier. The question in which order to use the rest of the partial derivatives is similar. The answer is that if we use them in a different order, then there is an arrow ∂_j where, for the vertex (a_1, . . . , a_m) at the arrow tip, we have a_j = a_i for some i < j and a_j is also the smallest number. If we use the Griffiths formula several times, so that a_j = a_i = 0, then the partial derivative ∂_i does not fit here, and therefore we are not able to take care of the extra vertex produced by ∂_j. For later purposes the important part of this remark is that the smallest number increases at every position.

Now let us investigate the last two cases, listed under (ii). So the smallest possible positions for ∂_{i_1} and ∂_{i_2} are p and p + 1 respectively, and the first partial derivative ∂_{i_1} gets shifted. First assume that we are in subcase (a), i.e. the two partial derivatives do not belong to neighbouring variables of a chain or loop. Then we can choose ∂(p + 1) = ∂_{i_1} and ∂(p + 2) = ∂_{i_2}. This way κ(p + 2) = κ(p + 3), and the vertices only get disconnected if ∂_{i_1} is in a loop or in the middle or at the end of a chain and the smallest number is 0, and after the next Griffiths step everything vanishes. If we choose ∂(p + 1) = ∂_{i_2} and ∂(p + 2) = ∂_{i_1} instead, then κ(p + 2) ≤ κ(p + 3) − 2, because ∂_{i_1} was shifted further than ∂_{i_2}.
So the connection gets cut earlier or at the same time, and we should use the first way of shifting to be on the safe side.

The last case to consider is (iib). As before, the two partial derivatives ∂_{i_1} and ∂_{i_2} can be used first at positions p and p + 1, and the first partial derivative ∂_{i_1} gets shifted; this time, however, they correspond to neighbouring variables in a loop or a chain. If ∂_{i_2} is the beginning of a chain, then this is the case of Remark 2.19 and we should choose ∂(p + 1) = ∂_{i_2} and ∂(p + 2) = ∂_{i_1}. Otherwise both partial derivatives are either from a loop or from the middle or end of a chain, and then we should choose ∂(p + 1) = ∂_{i_1} and ∂(p + 2) = ∂_{i_2}. The argument is the same as in case (iia): if we use the partial derivatives as indicated, then κ(p + 2) = κ(p + 3), so the diagram disconnects only when κ(p + 2) = κ(p + 3) = 0. If we change the positions, then κ(p + 2) = 1 and κ(p + 3) = 2 and the connection is cut earlier, so we stick to the first way of ordering the partial derivatives.

Now we know where to shift everything, and this explains most of the shifting we did in Example 2.12 and the shifting we have to do in general. But as mentioned before, we still have to look closer at the last n positions. In Remark 2.11 we saw that all partial derivatives are possible at the positions d̂ − n + 2 and d̂ − n + 1. At all positions p > d̂ − n + 2 there is no partial derivative that produces 0 as an entry. So we have to spread all partial derivatives over the positions d̂ − n + 1, . . . , d̂, and there we have to use every partial derivative exactly once. In order to see all linear dependencies between the monomials, this should be done in the same way as before: every chain and every loop itself should be used in the order suggested in Remarks 2.18 and 2.19. Because separate chains and loops do not interact, the order between the loops and chains does not matter.
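The assignment of the n partial derivatives to the last n positions d̂ − n + 1, . . . , d̂ can be sketched as follows. This is only an illustration under the stated assumptions, with invented names: each block is one chain or loop of g, already listed in its internal order from Remarks 2.18 and 2.19, and since separate chains and loops do not interact, the blocks themselves may come in any order.

```python
from itertools import chain

def order_last_positions(blocks):
    """Flatten the chains and loops (each block already in its internal
    order) into one sequence for the positions d-n+1, ..., d, using
    every partial derivative exactly once."""
    order = list(chain.from_iterable(blocks))
    # every partial derivative must appear exactly once
    assert len(order) == len(set(order))
    return order

# Hypothetical example: a loop on the variables 1, 2, 3 and a chain on 4, 5.
print(order_last_positions([[1, 2, 3], [4, 5]]))  # [1, 2, 3, 4, 5]
```

Swapping the two blocks gives an equally valid assignment, which mirrors the statement that only the internal order of each chain or loop matters.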
Notice that at position d̂ − n + 1 there is either an arrow with an extra vertex and κ(d̂ − n + 1) = 1, or an arrow without an extra vertex and κ(d̂ − n + 1) = 0. In the second case we have to shift this partial derivative to the left, which also explains the shift to the left in Example 2.12. This means that, no matter what the rest of the path looks like, after the first use of the Griffiths formula there will be a gap behind the vertex at position d̂ − n, so ∂(d̂ − n) = 0. In the first case the vertex at position d̂ − n + 1 will not vanish, but the arrow will not fit with the complete loop anymore; in the second case the vertex at position d̂ − n + 1 simply vanishes. Because of Remarks 2.18 and 2.19 the smallest numbers never decrease between the positions d̂ − n + 1 and d̂. Therefore the vertex at position p + 1 always vanishes after the vertex at position p, and it follows that only the beginning of the path from d̂ − n + 1 to d̂ vanishes. So the last n steps, and what is left after all the Griffiths steps, are always linearly dependent on the vertex (a, . . . , a), a ≤ n − 1, where we started.

First we will prove a weak form of the main theorem of this chapter. We will show that one needs u basis elements to write one special form among the forms appearing in the Picard-Fuchs equation of f(x). In the proof of Theorem 2.8 we will see that these are all the basis elements we need for all forms appearing in the Picard-Fuchs equation, which means that u is also the order of the Picard-Fuchs equation.

Proposition 2.20. The form s^n (∏_{i=1}^n x_i)^{n−1} Ω/f^n is a linear combination of u basis elements of the primitive cohomology with coefficients in C(s). In other words, using the Griffiths formula, (∏_{i=1}^n x_i)^{n−1} can be written as a combination of u basis elements of the Milnor ring C(s)[x_1, . . . , x_n]/J(f) with coefficients in C(s).
Notice that (∏_{i=1}^n x_i)^{n−1} can never be a basis element of the Milnor ring itself, because we have (∏_{i=1}^n x_i)^j ∈ J(f) for j ≥ n − 1. Before we prove this proposition we want to calculate an example, to get a better understanding of what we have to do in the proof. We continue the example we already used throughout the whole chapter; the notation for it can be found in Example 2.12. In order to illustrate the proposition in this example, we will start with (∏_{i=1}^n x_i)^{n−1} and count how many basis elements we need to write (∏_{i=1}^n x_i)^{n−1} as a linear combination of them. We will do this by following the steps of the Griffiths-Dwork method.

Example 2.21. (Continuation of Example 2.12) We want to calculate the number of basis elements we need to write (wxyz)^3 in this example. We know that (wxyz)^3 is an element of the Jacobian ideal. Following the Griffiths-Dwork method we have to write down the Jacobi path, which we already did in Example 2.14. We start with this Jacobi path and use the Griffiths formula once. This means we subtract (1, 1, 1, 1) from every vertex. Again we do not mention the various coefficients one needs for the actual calculations, because they do not give any interesting input for the calculation of the number of basis elements. The important part for us is to count the disconnected parts of the Jacobi path in every degree. We want to note that for counting the basis elements we would not need to write down the extra vertices; it suffices to keep the vertices on the Jacobi path and remember which of the arrows produces an extra vertex.
So after using the Griffiths formula once, Figure 2.3 becomes the following.

Figure 2.9: The Jacobi path for Example 2.12 after the first use of the Griffiths formula

As we can see from the picture, all monomials are still connected; we only have one gap, between the 14th and the 15th vertex. But we knew before that we would create a gap after the vertex at position d̂ − n = 18 − 4 = 14. This means we need one basis element in this degree (indicated by a purple colour) in order to be able to choose all coefficients appropriately. Let us assume we add this basis element with the appropriate coefficient, such that the resulting path is in the Jacobian ideal. Now we can use the Griffiths formula again and see how many gaps we get in the next degree. We end up with the following picture.

Figure 2.10: The Jacobi path for Example 2.12 after the second use of the Griffiths formula

Again we coloured a choice of the basis elements we need in purple; we need 7 basis elements in this degree. If we use the Griffiths formula once more, we are left with (0, 0, 0, 0), which therefore has to be a basis element as well. So in total we counted one basis element in the first and in the last degree and 7 basis elements in the degree in between, which adds up to 9 basis elements overall. If the theorem is true, then it should hold that u = 9. So we calculate u with the Q_i calculated in (2.1):

u = d̂ − |∪_{i=1}^4 Q_i^Z| = 18 − 9 = 9.

Lemma 2.22.
Let

η_1 := {p | κ(p) + 1 = κ(p + 1), 1 ≤ p ≤ d̂},
η_2 := {p | κ(p) + 2 = κ(p + 1), 1 ≤ p ≤ d̂},
η_3 := {p | κ(p) ≥ κ(p + 1), 1 ≤ p ≤ d̂, ∂(p) = ∂_i satisfies Condition (∗)}.

Then u = |η_1| + 2|η_2| + |η_3| =: η.

Condition (∗):
• ∂_i creates an extra vertex, i.e. q̂_i k_i ≠ d̂.
• If x_i is part of the loop x_{i_1}^{k_{i_1}} x_{i_2} + · · · + x_{i_m}^{k_{i_m}} x_{i_1} and the partial derivatives are used in the order ∂_{i_1}, ∂_{i_2}, . . . , ∂_{i_m} whenever they have the same smallest position, then i should not equal i_2.

Proof. First recall that u = Σ_{i=1}^n |Q_i| − |∪_{i=1}^n Q_i^Z| = Σ_{i=1}^n |Q_i^Q| + Σ_{i=1}^n |Q_i^Z| − |∪_{i=1}^n Q_i^Z|.

Part 1: u ≤ η.
First assume q ∈ Q_{i_j}^Q for j = 1, . . . , ℓ; this means we have the summand ℓ in u. But this also means that position p = ⌊q⌋ − n + 2 is the smallest possible position for ∂_{i_1}, . . . , ∂_{i_ℓ}, and due to Lemma 2.17 none of the x_{i_j} are neighbouring variables in a loop or a chain. Therefore they are all independent, and ∂_{i_j} adds 1 to all the entries i_1, . . . , i_ℓ except i_j. Let us assume that ∂_{i_1} got shifted to position p̃ (according to Lemma 2.15 it has to be shifted) and all others were shifted correspondingly, i.e. ∂(p̃ + j − 1) = ∂_{i_j} for 1 ≤ j ≤ ℓ. This means that after completely shifting we get κ(p̃ + ℓ) = κ(p̃ + ℓ − 1) + 1 = · · · = κ(p̃ + 2) + (ℓ − 2) = κ(p̃ + 1) + (ℓ − 1), and therefore p̃ + 1, . . . , p̃ + ℓ − 1 ∈ η_1. In addition we get κ(p̃) ≤ κ(p̃ + 1): either κ(p̃) = κ(p̃ + 1), which means p̃ ∈ η_3, because each ∂_{i_j} creates an extra vertex, or κ(p̃) + 1 = κ(p̃ + 1), which means p̃ ∈ η_1. In total this means that we also sum up ℓ in η.

Now assume q ∈ Q_{i_j}^Z for j = 1, . . . , ℓ + 1; this means that we sum up ℓ in u. But this also means that whole loops are at position q − n + 1, or the beginning of a chain is at position q − n + 2 and the rest of the chain, stopping somewhere, is at position q − n + 1, as shown in Lemma 2.17.
Remember that we saw in Lemma 2.15 that all partial derivatives at position q − n + 1 get shifted anyway. So assume that everything gets shifted to position p̃. Now we additionally shift every loop and every chain according to Remark 2.18 and Remark 2.19 respectively, and we want to look at κ(p̃), . . . , κ(p̃ + ℓ). First suppose a loop of length m got shifted to p̃ + a. Then Remark 2.18 tells us that κ(p̃ + a + m) = κ(p̃ + a + m − 1) + 1 = · · · = κ(p̃ + a + 2) + (m − 2) and κ(p̃ + a + 1) = κ(p̃ + a + 2). In addition we get κ(p̃ + a) + 2 = κ(p̃ + a + 1): if there is another loop at the positions before, then, as we can see in this loop, κ(p̃ + a + 1) + (m − 2) = κ(p̃ + a + m), so the smallest numbers increase by m − 2 in m − 1 steps, and in addition the partial derivative at position p̃ + a − 1 got shifted one less, which leads to κ(p̃ + a) + 2 = κ(p̃ + a + 1). If there is a chain at the positions before, then the beginning of the chain was originally possible at position q − n + 2 and the loop at position q − n + 1, and now the loop is one position later; therefore the loop got shifted two more, and again κ(p̃ + a) + 2 = κ(p̃ + a + 1). Now we can calculate η: we have p̃ + a + 2, . . . , p̃ + a + m − 1 ∈ η_1 and p̃ + a ∈ η_2, which means adding up (m − 2) + 2 = m in η. Now let us assume that the beginning of a chain, which now has length m, is at position p̃ + a. Then Remark 2.19 tells us that κ(p̃ + a + m) = κ(p̃ + a + m − 1) + 1 = · · · = κ(p̃ + a + 1) + (m − 1). In addition κ(p̃ + a) + 1 = κ(p̃ + a + 1), because if there is a chain at the positions before, we just shifted one more, and if there is a loop at the positions before, it got shifted the same amount, but as mentioned before the smallest numbers increased one less. In total this gives p̃ + a, . . . , p̃ + a + m − 1 ∈ η_1, and therefore we added m in η.
The last thing to do is to look at the chain or loop at position p̃ itself. The smallest numbers at position p̃ + 1 and the later ones are as above. But the arrow at position p̃ − 1 got shifted at least as much as the one at position p̃, so κ(p̃) ≥ κ(p̃ + 1). If there is a chain at position p̃, we definitely get no contribution to η from this position. If there is a loop at position p̃, then p̃ ∈ η_3. So in general this means that the chain or loop at position p̃ adds up one less than its length, and all the others add up exactly their lengths in η. Since the lengths add up to ℓ + 1, we add up ℓ in η.

Part 2: u ≥ η.
We start with a position p in η_1. This means that κ(p) + 1 = κ(p + 1), and therefore ∂(p) = ∂_{i_1} and ∂(p + 1) = ∂_{i_2} have the same smallest possible position p̃. This means that either p̃ + n − 2 ∈ Q_{i_1} ∩ Q_{i_2}, or ⌊q⌋ = p̃ + n − 2 with q ∈ Q_{i_1} ∩ Q_{i_2}, or p̃ + n − 2 ∈ Q_{i_1} and ⌊q⌋ = p̃ + n − 2 with q ∈ Q_{i_2}. But in all cases we add up 1 in u.

Now let p ∈ η_2. So κ(p) + 2 = κ(p + 1); this only happens if we shifted ∂(p + 1) over ∂(p), which we only do if a full loop is at the same position, and as we saw in Part 1 this leads to adding up 2 for this position in u.

The last possibility is p ∈ η_3. Here we have κ(p) ≥ κ(p + 1), and ∂(p) = ∂_i creates an extra vertex. It follows that either ∂_i comes from a position p̃ with ⌊q⌋ = p̃ + n − 2 and q ∈ Q_i^Q, or p̃ + n − 2 ∈ Q_i^Z and this is the first position of a complete loop. But in both cases we have added up 1 in u.

Proof of Proposition 2.20. We will now prove Proposition 2.20 using all the results we have achieved so far.
The rough idea is the following: we count the holes that occur after using the Griffiths formula and relate them to the sets η_1, η_2 and η_3, and therefore, with Lemma 2.22, to the number u, because for each hole on the Jacobi path we need an extra basis element. So we investigate all cases in which a path becomes disconnected. This depends on the smallest numbers occurring in a monomial, or correspondingly in a vertex, given that a vertex vanishes if its smallest number was 0 before using the Griffiths formula. If two vertices are neighbours on the Jacobi path, we distinguish between the possible relations between their smallest numbers. Let p and p + 1 be two positions on the Jacobi path; then the following situations can occur:

(i) κ(p) = κ(p + 1),
(ii) κ(p) < κ(p + 1), or
(iii) κ(p) > κ(p + 1).

We should notice that the maximal gap between the smallest numbers in (ii) is 2, because in the way we shift, the arrow used before is at most shifted two less than the arrow we use between the two vertices. We already investigated this in the proof of Lemma 2.22. We will count one basis element for every start of a disconnected part of the path.

First consider case (iii), so κ(p) > κ(p + 1). If ∂(p) does not create an extra vertex, then this will only shorten a path after using the Griffiths formula the appropriate number of times, but this will never be the beginning of a path. So we can neglect this case. But if ∂(p) does create an extra vertex and we get to the point that κ(p + 1) = 0, the arrow ∂(p) does not fit here anymore and the vertex at position p + 1 vanishes. So we have to count one basis element for every time that κ(p) > κ(p + 1) and ∂(p) creates an extra vertex. But this is done in |η_3|.

Now consider case (ii). As long as κ(p) ≥ 1 the positions p and p + 1 are always connected by an arrow. So assume that we used the Griffiths formula several times until κ(p) = 0 and κ(p + 1) > 0.
Then there is still an arrow connecting the two vertices, no matter which kind of partial derivative it is, but after the next use of the Griffiths formula the vertex at position p will vanish and the one at position p + 1 will still be there. This means that at position p + 1 a disconnected part of the path starts, which shows that we need an extra basis element here. The vertex at position p + 1 will stay until κ(p + 1) = 0, so we need an extra basis element in every degree until the vertex vanishes. So we need one basis element if κ(p) + 1 = κ(p + 1) and two basis elements if κ(p) + 2 = κ(p + 1). The set η counts exactly the first case and η the second case.
The last case to consider is (i). Here we have to distinguish between several cases: First assume that the two vertices are connected by an arrow ∂_i with q̂_i k_i = d̂, i.e. the arrow has no additional vertex. In this case we can put the arrow in as long as κ(p + 1) ≥ 1, and then the vertices at positions p and p + 1 vanish at the same time, so we don't need an extra basis element here. Remember that if ∂_i belongs to the beginning of a chain of length ≥ 2, there is an arrow as long as the (i + 1)th entry of the vertex at position p + 1 is > 0. This can only occur if ∂_{i+1} was used at the position before. But we already discussed in Remark 2.19 that if ∂_{i+1} is at the position before ∂_i, we shift ∂_{i+1} over ∂_i. So this case never occurs, and we can be sure that we never need an extra basis element if ∂_i is between two vertices with the same smallest numbers and q̂_i k_i = d̂. Assume now that we are still in case (i), so the smallest numbers are the same, but the arrow ∂_i between the two vertices produces an extra vertex. As long as κ(p + 1) ≥ 1, the arrow and all the other arrows from the chain or loop can be used here, which means that we can always choose the coefficients in a way that the additional vertices vanish and the vertex on the path has the appropriate coefficient.
If we use the Griffiths formula until κ(p + 1) = 0, we might still be able to put in the arrow, but the next arrow in the loop or chain does not fit anymore. So we need an extra basis element, except if we are in the situation of Remark 2.18, where all partial derivatives of the loop or chain are at the same position. This is exactly what is counted in |η| in addition to case (iii). In total we see that counting basis elements is the same as |η| + |η| + |η|, and therefore the number of basis elements is u.
Remark. We have n − 1 basis elements for sure, because we need at least one basis element in every degree. We choose these to be s^k (∏ x_i)^{k−1} Ω/f^k for 1 ≤ k ≤ n − 1, because we need them anyway to write down the first n − 1 derivatives of ω. Now we are able to put everything together and prove Theorem 2.8.
Proof of Theorem 2.8. To prove Theorem 2.8 we will show that all the powers of ∏_{i=1}^n x_i can be written as a combination of the same u basis elements we needed for (∏_{i=1}^n x_i)^{n−1}, as seen in Proposition 2.20. If we have done that, it is clear that δ^i ω can be written as a combination of u basis elements for all i. So if we take all δ^i ω up to i = u, then we get a linear relation between them. So the Picard-Fuchs equation has order u. We will show by induction that (∏_{i=1}^n x_i)^j can for all j be written as a combination of the same basis elements. Therefore we first look at (∏_{i=1}^n x_i)^n: We know how to write (∏_{i=1}^n x_i)^{n−1} in terms of the partial derivatives, so if we multiply this by ∏_{i=1}^n x_i we get an expression for (∏_{i=1}^n x_i)^n. In our notation this means adding (1, . . . , 1) to every monomial that appears on the Jacobi path. Now we can use the Griffiths formula, and because all entries were bigger than 0, all monomials are still there. Now we can use the Griffiths formula again; the only thing we have to add is a multiple of (∏_{i=1}^n x_i)^{n−1}.
From now on everything works as in the case for (∏_{i=1}^n x_i)^{n−1}. This means we can write (∏_{i=1}^n x_i)^n as a linear combination of the u basis elements and (∏_{i=1}^n x_i)^{n−1}, but this monomial is itself a linear combination of the u basis elements. So (∏_{i=1}^n x_i)^n can be written with the same u basis elements as (∏_{i=1}^n x_i)^{n−1}. Now look at (∏_{i=1}^n x_i)^j for a j > n − 1; then again we get the expression of (∏_{i=1}^n x_i)^j in the partial derivatives by multiplying the expression of (∏_{i=1}^n x_i)^{n−1} by (∏_{i=1}^n x_i)^{j−n+1}. And again we can use the Griffiths formula once, and no monomial will vanish until we have used the Griffiths formula j − n + 2 times. This means that we can write (∏_{i=1}^n x_i)^j as a combination of the u basis elements and all (∏_{i=1}^n x_i)^l with l < j, but by induction all these powers can themselves be written as a linear combination of the same u basis elements. So in total we get that we can write all powers of ∏_{i=1}^n x_i as a linear combination of the same u basis elements. This leads to our statement that the Picard-Fuchs equation of f(x) has order u.
In this section we want to calculate in all detail an example of computing the Picard-Fuchs equation with the Griffiths-Dwork method. We will choose a slightly smaller example than in the section before. Theorem 2.8 tells us immediately how many calculations we have to do, because we know how many basis elements we need and therefore how many of the δ^i ω we have to calculate. We will prove the actual appearance of the Picard-Fuchs equation of Theorem 3.6 not by using these calculations, but we want to show how this can be done using the Griffiths-Dwork method, and especially that, with the help of Theorem 2.8 and our new diagrammatic notation, it can be done relatively quickly.
Example 2.24. Let g(x_1, x_2, x_3, x_4) = x_1^5 x_2 + x_2^4 x_3 + x_3^8 + x_4^2. This is the polynomial we are looking at.
The reduced weights for this polynomial are given by (q_1, q_2, q_3, q_4) = (5, 7, 4, 16) and the degree is d = 32. The transposed polynomial is given by g^t(w, x, y, z) = w^5 + wx^4 + xy^8 + z^2 and has weights (q̂_1, q̂_2, q̂_3, q̂_4) = (2, 2, 1, 5), and the degree is d̂ = 10. So according to Theorem 2.8 the Picard-Fuchs equation of f(x_1, x_2, x_3, x_4) = x_1^5 x_2 + x_2^4 x_3 + x_3^8 + x_4^2 + s x_1x_2x_3x_4 has order
u = d̂ − |⋃_{i=1}^n Q_i^Z| = 10 − |{0, 5} ∪ {0, 5} ∪ {0} ∪ {0, 2, 4, 6, 8}| = 10 − 6 = 4.
This means we have to calculate δ^i ω for i = 0, . . . , 4 and ω = s Ω/f. We know from Remark 1.17 that the derivatives of ω can be written as a sum of the forms ℓ! s^{ℓ+1} (x_1x_2x_3x_4)^ℓ Ω/f^{ℓ+1} for 0 ≤ ℓ. In detail we get:
δω = s Ω/f − s^2 x_1x_2x_3x_4 Ω/f^2
δ^2 ω = s Ω/f − 3 s^2 x_1x_2x_3x_4 Ω/f^2 + 2 s^3 (x_1x_2x_3x_4)^2 Ω/f^3
δ^3 ω = s Ω/f − 7 s^2 x_1x_2x_3x_4 Ω/f^2 + 12 s^3 (x_1x_2x_3x_4)^2 Ω/f^3 − 6 s^4 (x_1x_2x_3x_4)^3 Ω/f^4
δ^4 ω = s Ω/f − 15 s^2 x_1x_2x_3x_4 Ω/f^2 + 50 s^3 (x_1x_2x_3x_4)^2 Ω/f^3 − 60 s^4 (x_1x_2x_3x_4)^3 Ω/f^4 + 24 s^5 (x_1x_2x_3x_4)^4 Ω/f^5.
We are looking at the Milnor ring in degrees 0, 32 and 64. Therefore we can choose s Ω/f, s^2 x_1x_2x_3x_4 Ω/f^2 and 2 s^3 (x_1x_2x_3x_4)^2 Ω/f^3 to be basis elements. We define them as b_1, b_2 and b_3 respectively. Then the above expression reduces to
ω = b_1
δω = b_1 − b_2
δ^2 ω = b_1 − 3 b_2 + b_3
δ^3 ω = b_1 − 7 b_2 + 6 b_3 − 6 s^4 (x_1x_2x_3x_4)^3 Ω/f^4
δ^4 ω = b_1 − 15 b_2 + 25 b_3 − 60 s^4 (x_1x_2x_3x_4)^3 Ω/f^4 + 24 s^5 (x_1x_2x_3x_4)^4 Ω/f^5. (2.2)
Now we have to figure out how to write s^4 (x_1x_2x_3x_4)^3 Ω/f^4 and s^5 (x_1x_2x_3x_4)^4 Ω/f^5 in a basis of the Milnor ring. From Proposition 2.20 we already know that we need 4 basis elements. So the three we had so far are not enough. We will find out what the extra basis element should be in the process of calculating. We want to shorten the notation for numbers that occur throughout the whole calculation.
Notation. The number ∆ is defined as ∆ := ∏ q̂_i^{q̂_i} s^{d̂} − (−d̂)^{d̂} = 50000 s^{10} − 10^{10} and will occur as a normalization factor in the calculations. We also want to define c_q := ∏ q̂_i^{q̂_i} = 50000 and c_d := (−d̂)^{d̂} = 10^{10} separately.
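The weights and dual weights quoted above can be recomputed directly from the exponent matrix. The following is a minimal sketch, assuming the exponent matrix E of the example polynomial g = x1^5 x2 + x2^4 x3 + x3^8 + x4^2; the helper `weights` is an illustration of the defining property E q = d·(1, . . . , 1), not notation from the text:

```python
from fractions import Fraction
from math import lcm

# Exponent matrix of g = x1^5 x2 + x2^4 x3 + x3^8 + x4^2
# (rows are the exponent vectors e_i of the monomials).
E = [[5, 1, 0, 0],
     [0, 4, 1, 0],
     [0, 0, 8, 0],
     [0, 0, 0, 2]]

def weights(E):
    """Solve E q = (1,...,1) over Q and clear denominators:
    returns the reduced integer weights q_i and the degree d."""
    n = len(E)
    # augmented matrix [E | 1], Gaussian elimination with exact rationals
    A = [[Fraction(x) for x in row] + [Fraction(1)] for row in E]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    q = [A[r][n] for r in range(n)]
    d = lcm(*[f.denominator for f in q])
    return [int(f * d) for f in q], d

q, d = weights(E)                             # weights of g
qt, dt = weights([list(r) for r in zip(*E)])  # transposed matrix -> dual weights
```

Running this reproduces (q_1, . . . , q_4) = (5, 7, 4, 16) with d = 32 and the dual data (2, 2, 1, 5) with d̂ = 10.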
So ∆ = c_q s^{10} − c_d.
Calculating (x_1x_2x_3x_4)^3
We start by calculating s^4 (x_1x_2x_3x_4)^3 Ω/f^4. The first step for this is to write down the Jacobi path with all coefficients. This is done in the following picture. In the last two sections we mostly ignored all coefficients, but now we have to calculate all of them. In the diagram the coefficients are marked in three ways, distinguished by three colours. The blue number near each vertex is the coefficient that the corresponding monomial should have after adding everything up. The green number next to each arrow is the coefficient the corresponding partial derivative needs such that after adding up we get the blue numbers as results. In addition, in purple we marked the exponents k_i which appear as coefficients inside the partial derivative.
Figure 2.11: The Jacobi path for (x_1x_2x_3x_4)^3
In this first step the goal is to get a description of (x_1x_2x_3x_4)^3. So after adding up, all other monomials should vanish. Therefore the blue numbers, i.e. the coefficients of the monomials, are all 0 except the first one. In Figure 2.11 the green numbers are relatively easy to find. The last one is always c_q s^{d̂−1}/∆ and the others can be calculated inductively. The basic idea is that from one arrow to the next one has to divide by d̂ and multiply by the appropriate q̂_i. One needs to put more effort into doing this in general and we will not do this here. Nevertheless, it is easy to find these coefficients, because they only consist of powers of d̂, s and q̂_i for i = 1, . . . , n.
If one translates the above picture into normal notation, it tells us that the form s^4 (x_1x_2x_3x_4)^3 Ω/f^4 can be written in the following way as a linear combination of the partial derivatives:
s^4 (x_1x_2x_3x_4)^3 Ω/f^4 = 6 s^4 Ω/f^4 (− s∆ x x x x − s· x x x x − s∆ x x x − s· x x x − s·· x x x x) ∂f/∂x + 6 s^4 Ω/f^4 (s x x x x + 10 s x x x + 10 s· x x x x) ∂f/∂x + 6 s^4 Ω/f^4 (− s x x x x) ∂f/∂x + 6 s^4 Ω/f^4 (− x x x x + 10 s∆ x x x x − s∆ x x x x + 10 s∆ x x x x + 5 s∆ x x x x) ∂f/∂x.
Now we use the Griffiths formula (1.2). Because we have just written everything in terms of the partial derivatives, we can do this directly and get the following result:
s^4 (x_1x_2x_3x_4)^3 Ω/f^4 = 2 s^4 Ω/f^3 (− s∆ x x x − s· x x x x − s· x x − s·· x x x x) + 2 s^4 Ω/f^3 (s x x x x + 10 s x x + 2·10 s· x x x x) + 2 s^4 Ω/f^3 (− s x x x x) + 2 s^4 Ω/f^3 (− x x x x + 10 s∆ x x x − s∆ x x x x + 10 s∆ x x x + 3·5 s∆ x x x x).
We can also very easily do this step in our picture, and the advantage is that we directly get a decomposition of the result as a linear combination of partial derivatives. We want to stress that in this new picture the coefficients are closely related to the coefficients from before. The blue number, i.e. the coefficient of the monomial, in the second picture is the green number next to the arrow pointing to the vertex in the last picture multiplied by a constant, because the Griffiths formula contracts an arrow to the vertex at the arrow tip. The factor one has to multiply by is given by the i-th entry if the arrow pointing to the vertex is the partial derivative with respect to x_i. The green numbers in the second picture can now be calculated such that all blue numbers are correct after adding up. This is possible everywhere but at a vertex corresponding to a basis element.
At these vertices we will have to add a multiple of the basis element. The factor of the basis element is written down next to the vertex with the comment “extra”.
Figure 2.12: The Jacobi path for (x_1x_2x_3x_4)^3 after the first use of the Griffiths formula
Interpreting the picture, we get a description of s^4 (x_1x_2x_3x_4)^3 Ω/f^4 in terms of the basis in degree 64, which is only given by b_3 = 2 s^3 (x_1x_2x_3x_4)^2 Ω/f^3, and a linear combination of the partial derivatives:
s^4 (x_1x_2x_3x_4)^3 Ω/f^4 = 2 s^3 (x_1x_2x_3x_4)^2 Ω/f^3 ((c_q s^{10} + 9 c_d)/∆) + 2 s^4 Ω/f^3 (− s∆ x x − s· x x x + 10 s·· x x) ∂f/∂x + 2 s^4 Ω/f^3 (s x x x − s· x x) ∂f/∂x + 2 s^4 Ω/f^3 (s x x) ∂f/∂x + 2 s^4 Ω/f^3 (− s∆ x x x x + 7·10 ∆ x x x − s∆ x x x x + 10 s∆ x x x − s∆ x x x x) ∂f/∂x.
s^4 (x_1x_2x_3x_4)^3 Ω/f^4 = ((c_q s^{10} + 9 c_d)/∆) b_3 + s^4 Ω/f^2 (− s· x x + 10 s·· x) + s^4 Ω/f^2 (s x x − s· x) + s^4 Ω/f^2 (s x) + s^4 Ω/f^2 (− s∆ x x x − s∆ x x x − s∆ x x x x).
In total there are still monomials apart from b_3 in the formula, but this is not the end, because some of them are linearly dependent on each other. This is not always easy to see in the normal notation, but it is very easy to see in our new notation. We can see this if we go back to our picture.
After subtracting ((c_q s^{10} + 9 c_d)/∆) b_3 we get a polynomial in the Jacobian ideal and we can use the Griffiths formula, which leads to the following picture:
Figure 2.13: The Jacobi path for (x_1x_2x_3x_4)^3 after using the Griffiths formula twice
Before we do further calculations, we define a fourth basis element b_4 = s^3 x x Ω/f^3. We know from Theorem 2.8 that we need a fourth basis element, and from the picture above we can see that this is a good way of choosing b_4, because we can immediately see the coefficients of this element in the picture. Choosing a good basis is another reason why our construction helps to make the computations faster. If one has bigger examples and uses a computer algebra system to compute the coefficients, knowing a good basis makes it more efficient. So in total we have one basis element in degree 0, which is b_1, one in degree 32, which is b_2, and two basis elements in degree 64, which are b_3 and b_4. Now translating back gives
s^4 (x_1x_2x_3x_4)^3 Ω/f^4 = ((c_q s^{10} + 9 c_d)/∆) b_3 + ((− c_q s^{10} + 9 c_d)/∆) b_2 + (8/∆) b_4 + s^4 Ω/f^2 (− s∆ x x x − s∆ x + 5 s∆ x) ∂f/∂x.
It is very easy to use the Griffiths formula here, because there is only a multiple of one partial derivative left, and because two of the terms vanish when one takes the partial derivative, we end up with a rather short expression for s^4 (x_1x_2x_3x_4)^3 Ω/f^4:
s^4 (x_1x_2x_3x_4)^3 Ω/f^4 = ((c_q s^{10} + 9 c_d)/∆) b_3 + ((− c_q s^{10} + 9 c_d)/∆) b_2 + (8/∆) b_4 + (c_q s^{10}/∆) b_1. (2.3)
Of course the same happens in the picture. Every vertex except (1, 1, 1, 1) in Figure 2.13 has a zero entry and therefore vanishes after the use of the Griffiths formula.
The only remaining vertex is (0, 0, 0, 0), and its coefficient is just the same as the coefficient next to the arrow pointing at (1, 1, 1, 1) in Figure 2.13. So the picture is given by
Figure 2.14: The Jacobi path for (x_1x_2x_3x_4)^3 after using the Griffiths formula three times
It is obviously not necessary to translate back between the pictures and formulas the whole time. We can do the whole calculation in the pictures, and it is less work to draw all pictures first and after that translate back to the formulas. We will do this in the next step, where we treat s^5 (x_1x_2x_3x_4)^4 Ω/f^5. We will also see that we can reuse some of the calculations we have already done for computing a linear combination of s^4 (x_1x_2x_3x_4)^3 Ω/f^4 in a basis of the Milnor ring for the case of s^5 (x_1x_2x_3x_4)^4 Ω/f^5. In particular, we have already chosen all the basis elements we need, and now the goal is to find the correct coefficients.
Calculating (x_1x_2x_3x_4)^4
Again we start with writing down the Jacobi path, because (x_1x_2x_3x_4)^4 ∈ J(f) and therefore this is possible. This can be done very easily, because we just have to add (1, 1, 1, 1) to the Jacobi path in Figure 2.11. In addition, the coefficients are all the same as in Figure 2.11. This does not hold for the rest of the pictures, because the factors we have to multiply the coefficients by after using the Griffiths formula are bigger than in the earlier pictures, because the entries in the vertices, and therefore the exponents of the monomials, are bigger. However, the coefficients will not be too far from each other. From Figure 2.11 we get that the Jacobi path for (4, 4, 4, 4) is given by
Figure 2.15: The Jacobi path for (x_1x_2x_3x_4)^4
Now we use the Griffiths formula for the first time. The picture looks very similar to the one in the first case before we used the Griffiths formula.
The vertices are exactly like they were before, but of course the coefficients are different. But one should notice that throughout all calculations the denominator of the coefficients is always the same, i.e. a power of ∆. The reason is that the denominator ∆ always appears in the first step of writing (x_1x_2x_3x_4)^i as a normalising factor and gets carried on afterwards. We will see below that this gives rise to the fact that ∆ is also the leading coefficient of the Picard-Fuchs equation.
Figure 2.16: The Jacobi path for (x_1x_2x_3x_4)^4 after the first use of the Griffiths formula
Here we have to take (c_q s^{10} + 15 c_d)/∆ of the form s^4 (x_1x_2x_3x_4)^3 Ω/f^4 extra in order to have an expression in the Jacobian ideal. The form s^4 (x_1x_2x_3x_4)^3 Ω/f^4 itself can be written with the monomials appearing on the path, as we already calculated, so we could adjust the coefficients in the picture by adding (c_q s^{10} + 15 c_d)/∆ times the coefficients from Figure 2.11, but then the picture gets much bigger, and this is not necessary because we already know how to write s^4 (x_1x_2x_3x_4)^3 Ω/f^4 in the basis. So there is no need to put this information in the picture and do the calculations again. It is enough to remember the coefficient (c_q s^{10} + 15 c_d)/∆ and add (c_q s^{10} + 15 c_d)/∆ times formula (2.3) to the description of s^5 (x_1x_2x_3x_4)^4 Ω/f^5 in the end. We want to mention here that in the description of s^4 (x_1x_2x_3x_4)^3 Ω/f^4 in formula (2.3) all coefficients already have ∆ as denominator, so after multiplying with (c_q s^{10} + 15 c_d)/∆ we have ∆^2 as denominator. This is true in general: In the description of ℓ! s^{ℓ+1} (x_1x_2x_3x_4)^ℓ Ω/f^{ℓ+1} the denominator ∆^{ℓ−1} will appear, and ℓ − 1 is the biggest exponent that appears.
This means that we have to multiply everything by at least ∆ to get a relation between the partial derivatives. This explains why ∆ is the leading coefficient of the Picard-Fuchs equation. Now we use the Griffiths formula for the second time. Again the monomials that occur are the same ones we had for the calculations of (x_1x_2x_3x_4)^3, but the coefficients are bigger. The Jacobi path we get after using the Griffiths formula two times is the following:
Figure 2.17: The Jacobi path for (x_1x_2x_3x_4)^4 after the second use of the Griffiths formula
As expected, we have one gap in this picture. So we have to use the basis element b_3 = 2 s^3 (x_1x_2x_3x_4)^2 Ω/f^3 here. If we add the corresponding multiple (again with denominator ∆) of this basis element to the above picture, we end up with an expression in the Jacobian ideal and we are able to use the Griffiths formula again. There are only two steps until we have everything we need to write down the linear combination of s^5 (x_1x_2x_3x_4)^4 Ω/f^5 in the basis, and because we already know what the basis is, we can concentrate on the coefficients at the corresponding vertices. In particular, in the last step we only have to figure out the coefficient at the vertex (0, 0, 0, 0). Now we will show the pictures after the third and fourth use of the Griffiths formula:
Figure 2.18: The Jacobi path for (x_1x_2x_3x_4)^4 after using the Griffiths formula three times
Figure 2.19: The Jacobi path for (x_1x_2x_3x_4)^4 after using the Griffiths formula four times
Now we can put everything together and write down the expression of s^5 (x_1x_2x_3x_4)^4 Ω/f^5 in the basis {b_1, b_2, b_3, b_4}.
First the formula we got from the Jacobi path starting and ending at (4, 4, 4, 4):
s^5 (x_1x_2x_3x_4)^4 Ω/f^5 = ((c_q s^{10} + 15 c_d)/∆) s^4 (x_1x_2x_3x_4)^3 Ω/f^4 + ((− c_q s^{10} + 80 c_d)/∆) b_3 + ((c_q s^{10} + 80 c_d)/∆) b_2 + (40/∆) b_4 + (− c_q s^{10}/∆) b_1.
Now we insert the expression (2.3) for s^4 (x_1x_2x_3x_4)^3 Ω/f^4:
s^5 (x_1x_2x_3x_4)^4 Ω/f^5 = ((c_q s^{10} + 15 c_d)/∆) (((c_q s^{10} + 9 c_d)/∆) b_3 + ((− c_q s^{10} + 9 c_d)/∆) b_2 + (8/∆) b_4 + (c_q s^{10}/∆) b_1) + ((− c_q s^{10} + 80 c_d)/∆) b_3 + ((c_q s^{10} + 80 c_d)/∆) b_2 + (40/∆) b_4 + (− c_q s^{10}/∆) b_1
= b_3 ((c_q s^{10} + 15 c_d)(c_q s^{10} + 9 c_d)/∆^2 + (− c_q s^{10} + 80 c_d)/∆) + b_2 ((c_q s^{10} + 15 c_d)(− c_q s^{10} + 9 c_d)/∆^2 + (c_q s^{10} + 80 c_d)/∆) + b_4 (8 (c_q s^{10} + 15 c_d)/∆^2 + 40/∆) + b_1 ((c_q s^{10} + 15 c_d) c_q s^{10}/∆^2 + (− c_q s^{10})/∆). (2.4)
From Theorem 3.6 we know that the Picard-Fuchs equation should be
s^{10} c_q δ^3 (δ + 5) ω − c_d (δ − 1)(δ − 3)(δ − 7)(δ − 9) ω = (c_q s^{10} − c_d) δ^4 ω + (5 · c_q s^{10} + 20 · c_d) δ^3 ω − 130 · c_d δ^2 ω + 300 · c_d δω − 189 · c_d ω.
Now we can put equation (2.2) into this Picard-Fuchs equation and end up with the following formula to check:
(c_q s^{10} − c_d) (b_1 − 15 b_2 + 25 b_3 − 60 s^4 (x_1x_2x_3x_4)^3 Ω/f^4 + 24 s^5 (x_1x_2x_3x_4)^4 Ω/f^5) + (5 · c_q s^{10} + 20 · c_d) (b_1 − 7 b_2 + 6 b_3 − 6 s^4 (x_1x_2x_3x_4)^3 Ω/f^4) − 130 · c_d (b_1 − 3 b_2 + b_3) + 300 · c_d (b_1 − b_2) − 189 · c_d b_1
= (c_q s^{10} − c_d) 24 s^5 (x_1x_2x_3x_4)^4 Ω/f^5 + (− 15 · c_q s^{10} − 10 · c_d) 6 s^4 (x_1x_2x_3x_4)^3 Ω/f^4 + (55 · c_q s^{10} − 35 · c_d) b_3 + (− 50 · c_q s^{10} − 35 · c_d) b_2 + 6 · c_q s^{10} b_1.
Now we can put in the expression of s^4 (x_1x_2x_3x_4)^3 Ω/f^4 in the basis (2.3):
(c_q s^{10} − c_d) 24 s^5 (x_1x_2x_3x_4)^4 Ω/f^5 + ((− 15 · c_q s^{10} − 10 · c_d) 6 (c_q s^{10} + 9 c_d)/∆ + 55 · c_q s^{10} − 35 · c_d) b_3 + ((− 15 · c_q s^{10} − 10 · c_d) 6 (− c_q s^{10} + 9 c_d)/∆ − 50 · c_q s^{10} − 35 · c_d) b_2 + ((− 15 · c_q s^{10} − 10 · c_d) 48/∆) b_4 + ((− 15 · c_q s^{10} − 10 · c_d) 6 c_q s^{10}/∆ + 6 · c_q s^{10}) b_1.
Finally we put in formula (2.4), which describes s^5 (x_1x_2x_3x_4)^4 Ω/f^5 in the basis, and we check that the Picard-Fuchs equation holds:
((− c_q s^{10} − … · c_d)(c_q s^{10} + 45 c_d)/∆ + 30 · c_q s^{10} + 45 · c_d) b_3 + ((c_q s^{10} − … · c_d)(c_q s^{10} + 45 c_d)/∆ − … · c_q s^{10} + 45 · c_d) b_2 + ((− … · c_q s^{10} − … · c_d)/∆ + 40) b_4 + ((− c_q s^{10} + 5 · c_d) c_q s^{10}/∆ + 5 · c_q s^{10}) b_1 = (1/∆) (0 · b_1 + 0 · b_2 + 0 · b_3 + 0 · b_4).
Chapter 3
The Picard-Fuchs equation for invertible polynomials and consequences
In this chapter we focus on the Picard-Fuchs equation of the one-parameter family f(x) and discuss some consequences of the results achieved so far. From the last chapter we already know the order of the Picard-Fuchs equation. In the first section of this chapter we calculate the GKZ system, and in the second section we see how this fits together with the result on the order of the Picard-Fuchs equation to prove Theorem 3.6, which states the Picard-Fuchs equation for the one-parameter family f(x). In Section 3.2 we will also see how this relates to a paper by Corti and Golyshev [CG06], where the same differential equation appears. This is also the starting point for Section 3.3, where we concentrate on relations between the cohomology of the hypersurface defined by the one-parameter family f(x) and the cohomology of the solution space of the Picard-Fuchs equation. In Section 3.4 we will discuss the results in an important class of examples given by Arnold's strange duality. This was also the starting point of the research done in this thesis. Finally, in the last section we cover the relation between the zero sets of the Picard-Fuchs equation of f for special choices of the parameter, the Poincaré series of the dual polynomial g^t and the monodromy in the solution space of the Picard-Fuchs equation.
This section is devoted to GKZ systems.
We will give a short introduction to GKZ systems and do the calculations for invertible polynomials afterwards.
Introduction to GKZ systems
In this first part we want to give a short introduction to GKZ systems as far as we need it. The theory of GKZ systems is much larger than the part we present here. Good references for an introduction as well as an overview of several aspects of GKZ systems are the article by Stienstra [Sti07], which has a large part on solutions of GKZ systems, the book by Cox and Katz [CK99], which among other things embeds GKZ systems in a bigger context, and the article of Hosono [Hos98], which focuses on the case of toric varieties. The theory of GKZ systems was originally established in a series of articles by Gelfand, Kapranov and Zelevinsky [GZK89, GZK93, GKZ90, GKZ91] as a generalisation of hypergeometric differential equations. This also explains the name GKZ systems.
Notation. Let A ⊂ Z^n be a finite subset which generates Z^n as an abelian group and for which there exists a group homomorphism h : Z^n → Z such that h(A) = 1, i.e. A lies in an (n − 1)-dimensional hypersurface. Let γ ∈ C^n be an arbitrary vector. Let |A| = N; then L := {(l_1, . . . , l_N) ∈ Z^N : l_1 a_1 + · · · + l_N a_N = 0, a_i ∈ A} denotes the lattice of linear relations among A. Because A lies in a hypersurface, ∑ l_i = 0 holds for (l_1, . . . , l_N) ∈ L.
Remark. We will calculate the GKZ system for the one-parameter family f(x) later. Keep in mind that for these calculations A will be the set of all exponent vectors of our one-parameter family. The reasons for this will also become clear later.
Definition 3.3. The GKZ system (sometimes also called A-system) for A and γ is a system of differential equations for functions Φ of N variables v_1, . . .
, v_N given by
∏_{l_i > 0} (∂/∂v_i)^{l_i} Φ = ∏_{l_i < 0} (∂/∂v_i)^{−l_i} Φ for every l ∈ L and (3.1)
∑_{i=1}^N a_{ij} v_i ∂Φ/∂v_i = γ_j Φ for all j = 1, . . . , k + 1 and (a_{i1}, . . . , a_{i,k+1}) ∈ A. (3.2)
The above definition gives a system of partial differential equations. We stated the basic definition of a GKZ system. From this point on we will continue in the special case of invertible polynomials.
Calculation of the GKZ system for invertible polynomials
We will now start calculating the GKZ system for the one-parameter family f(x) = g(x) + s ∏_i x_i, where g(x) is an invertible polynomial. The notation in this section is the same as before and can be found in 2.1 and 3.1. In addition we will define some extra notation:
Notation. We define the rows of the exponent matrix E to be e_i = (e_{i1}, . . . , e_{in}) for i = 1, . . . , n. Then we can write g(x) as g(x) = ∑_{i=1}^n x^{e_i}, where x^{e_i} = ∏_{j=1}^n x_j^{e_{ij}}. Now we define a general (n + 1)-parameter family f_v(x) = f_{v_1,...,v_n}(x) = ∑_{i=1}^n v_i x^{e_i} + s x^{(1,...,1)} with parameters v_1, . . . , v_n and s. So in the previously used notation we have N = n + 1 and we set v_{n+1} := s. In this way the notation is consistent with the previous chapters, because we have that
f_{1,...,1}(x) = ∑_{i=1}^n x^{e_i} + s x^{(1,...,1)} = g(x) + s ∏_{i=1}^n x_i = f(x).
We will now start calculating the GKZ system for A = {e_1^t, . . . , e_n^t, (1, . . . , 1)^t} and γ = (−1, . . . , −1)^t. The reason for the choice of γ will become clear when we look at the solutions of the GKZ system. For the first equation (3.1) we need to calculate the lattice of linear relations L among the vectors in A. If we define A to be the matrix with columns e_1^t, . . . , e_n^t, (1, . . . , 1)^t, then A is an n × (n + 1)-matrix and L is 1-dimensional. We know that
A · (q̂_1, . . . , q̂_n, −d̂)^t = E^t · (q̂_1, . . . , q̂_n)^t − d̂ · (1, . . . , 1)^t = (0, . . . , 0)^t
and therefore L = ⟨(q̂_1, . . .
, q̂_n, −d̂)^t⟩. Now we are able to write down equation (3.1) for this lattice L:
(∂/∂s)^{d̂} Φ = (∂/∂v_1)^{q̂_1} · · · · · (∂/∂v_n)^{q̂_n} Φ. (3.3)
In the end we want to compare the GKZ system to the Picard-Fuchs equation from Theorem 3.6. To do this we will write the GKZ system with the differential operators δ = s ∂/∂s and δ_i = v_i ∂/∂v_i for i = 1, . . . , n by inserting s^{−1} δ = ∂/∂s and v_i^{−1} δ_i = ∂/∂v_i:
(s^{−1} δ)^{d̂} Φ = (v_1^{−1} δ_1)^{q̂_1} · · · · · (v_n^{−1} δ_n)^{q̂_n} Φ.
Now we move s^{−1} and v_i^{−1} to the front, and the product rule gives us an easy way to interchange the differential operators δ, δ_i with the variables s, v_i:
δ s^p = s^p (δ + p) for p ∈ Z and δ_i v_i^p = v_i^p (δ_i + p) for i = 1, . . . , n and p ∈ Z. (3.4)
Using these equations we can move every s and every v_i very quickly to the front of the equation:
(s^{−1} δ)^{d̂} = s^{−1} δ s^{−1} δ . . . s^{−1} δ s^{−1} δ = s^{−2} (δ − 1) δ s^{−1} δ . . . s^{−1} δ = . . . = s^{−d̂} (δ − (d̂ − 1)) · · · · · (δ − 1) δ
and in the same way we get
(v_i^{−1} δ_i)^{q̂_i} = v_i^{−1} δ_i v_i^{−1} δ_i . . . v_i^{−1} δ_i = . . . = v_i^{−q̂_i} (δ_i − (q̂_i − 1)) · · · · · (δ_i − 1) δ_i.
Putting this all together, the first equation of the GKZ system is given by
s^{−d̂} (δ − (d̂ − 1)) · · · · · (δ − 1) δ Φ = ∏_{i=1}^n v_i^{−q̂_i} (δ_i − (q̂_i − 1)) · · · · · (δ_i − 1) δ_i Φ. (3.5)
We will work with this equation later on and calculate the second part (3.2) of the GKZ system next. The second system of equations of the GKZ system is given by putting γ = (−1, . . . , −1)^t in (3.2):
A · (δ_1, . . . , δ_n, δ)^t Φ = E^t · (δ_1, . . . , δ_n)^t Φ + (1, . . . , 1)^t δ Φ = (−1, . . . , −1)^t Φ. (3.6)
Before we do any further calculations, we focus on solutions of the GKZ system.
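The operator manipulations behind (3.4) and (3.5) are easy to sanity-check mechanically. The following sketch encodes Laurent polynomials in s as exponent-to-coefficient dictionaries (an illustrative encoding, not notation from the text) and verifies that (s^{-1}δ)^m agrees with s^{-m}(δ − (m−1)) ··· (δ − 1)δ:

```python
# Polynomials in s are encoded as {exponent: coefficient} dictionaries.
def delta(p):
    """Euler operator δ = s d/ds: sends s^a to a*s^a (zero terms dropped)."""
    return {a: c * a for a, c in p.items() if c * a != 0}

def shift(p, k):
    """Multiplication by s^k."""
    return {a + k: c for a, c in p.items()}

def sub(p, q):
    """p - q with zero terms dropped."""
    r = dict(p)
    for a, c in q.items():
        r[a] = r.get(a, 0) - c
    return {a: c for a, c in r.items() if c != 0}

def lhs(p, m):
    """(s^{-1} δ)^m applied to p."""
    for _ in range(m):
        p = shift(delta(p), -1)
    return p

def rhs(p, m):
    """s^{-m} (δ - (m-1)) ··· (δ - 1) δ applied to p."""
    out = delta(p)
    for k in range(1, m):
        out = sub(delta(out), {a: k * c for a, c in out.items()})
    return shift(out, -m)

p = {7: 1, 2: 3}   # test function s^7 + 3 s^2
assert lhs(p, 4) == rhs(p, 4)
```

On a monomial s^a both sides give the falling factorial a(a−1)···(a−m+1) times s^{a−m}, which is exactly the commutation rule (3.4) applied m times.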
There is a whole theory on solutions of GKZ systems which, for example, is explained in [Sti07]. We, however, do not need the full strength of this, because to compare the GKZ system to the Picard-Fuchs equation in Theorem 3.6 it is enough to know that the form ω = s Ω/f(x) is a solution of the GKZ system given by equations (3.5) and (3.6). This is the goal, but we will start with a slightly different solution in the next lemma.
Lemma 3.5. The form Φ = Ω/f_v(x) is a solution of the above GKZ system.
Proof. We will calculate the differentials for Φ = Ω/f_v(x) and see that equations (3.3) and (3.6) hold. For equation (3.3) we need the partial derivatives with respect to s and v_i for i = 1, . . . , n. They are easy to calculate:
(∂/∂s)^{d̂} Φ = (−1)^{d̂} d̂! (∏ x_i)^{d̂} Ω/(f_v(x))^{d̂+1},
∂/∂v_i Φ = −x^{e_i} Ω/(f_v(x))^2,
∏_{i=1}^n (∂/∂v_i)^{q̂_i} Φ = (−1)^{∑ q̂_i} (∑ q̂_i)! x^{∑ q̂_i e_i} Ω/(f_v(x))^{∑ q̂_i + 1}.
Because of the Calabi-Yau condition we have ∑ q̂_i = d̂, and from the definition of the dual weights and degree we get ∑ q̂_i e_i = E^t · (q̂_1, . . . , q̂_n)^t = (d̂, . . . , d̂)^t. Therefore we have
(∂/∂s)^{d̂} Φ = (−1)^{d̂} d̂! (∏ x_i)^{d̂} Ω/(f_v(x))^{d̂+1} = (−1)^{∑ q̂_i} (∑ q̂_i)! x^{∑ q̂_i e_i} Ω/(f_v(x))^{∑ q̂_i + 1} = ∏_{i=1}^n (∂/∂v_i)^{q̂_i} Φ.
This proves that Φ is a solution of equation (3.3). Now we check the second equation, where we need δΦ and δ_i Φ for i = 1, . . . , n, because the system of equations is given by:
(1, . . . , 1)^t δΦ + E^t (δ_1, . . . , δ_n)^t Φ + (1, . . . , 1)^t Φ = (0, . . . , 0)^t.
So for every j = 1, . . .
, n we have the following equation: δ Φ + n X i =1 e ij δ i Φ + Φ = − sx (1 ,..., Ω (cid:0) f v ( x ) (cid:1) + n X i =1 e ij ( − v i x e i ) Ω (cid:0) f v ( x ) (cid:1) + Ω f v ( x )= − (cid:0) sx (1 ,..., + P ni =1 e ij v i x e i (cid:1) Ω (cid:0) f v (cid:1) ( x ) + Ω f v ( x )= − x j ∂∂x j f v ( x )Ω (cid:0) f v ( x ) (cid:1) + Ω f v ( x )= 0 , The Picard-Fuchs equation for invertible polynomials and consequences where the last expression is an exact form due to the Griffiths formula and is therefore zero.As mentioned before Φ is not the solution we want to have. A solution that would fit ourpurposes would be ω v = s Ω f v ( x ) , because ω ,..., = s Ω f ( x ) = ω . So we insert Φ = s − ω v in theequations 3.5 and 3.6. So equation (3.5) leads to: s − b d ( δ − ( b d − · · · · · ( δ − δs − ω v = n Y i =1 v − b q i i ( δ i − ( b q i − · · · · · ( δ i − δ i s − ω v . We can use equation (3.4) as earlier to move the variable s to the front and get the followingequation. s − b d ( δ − b d ) · · · · · ( δ − ω v = n Y i =1 v − b q i i ( δ i − ( b q i − · · · · · ( δ i − δ i ω v . (3.7)By putting Φ = s − ω v in equation (3.6) and using (3.4) again we get: ... δ Φ + E t δ ... δ n Φ = − ... − Φ ... δs − ω v + E t δ ... δ n s − ω v = − ... − s − ω v s − ... ( δ − ω v + s − E t δ ... δ n ω v = s − − ... − ω v ... δω v + E t δ ... δ n ω v = ... ω v . Solving this equation for ( δ , . . . , δ n ) t gives δ ... δ n = − ( E t ) − ... δ = b q b d ... b q n b d δ. .2 The Picard-Fuchs equation δ , . . . , δ n in terms of δ . For all i = 1 , . . . , n we have δ i = − b q i b d δ. 
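The linear system solved here is exactly the defining system of the dual weights: (E^t)^{−1}(1, . . . , 1)^t = (q̂_1/d̂, . . . , q̂_n/d̂)^t. A small sketch (our own illustration, not from the thesis) checking this for the chain polynomial x^4z + y^3 + z^2 of type E_14, whose transpose has reduced weights (6, 8, 9) and degree 24:

```python
from fractions import Fraction

def dual_weight_ratios(E):
    """Solve (E^t) v = (1,...,1) exactly; v should equal (q-hat_i / d-hat)."""
    n = len(E)
    # build the transpose as a matrix of Fractions
    M = [[Fraction(E[j][i]) for j in range(n)] for i in range(n)]
    b = [Fraction(1)] * n
    # Gauss-Jordan elimination with exact arithmetic
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        inv = M[col][col]
        M[col] = [x / inv for x in M[col]]
        b[col] /= inv
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
                b[r] -= f * b[col]
    return b

# exponent matrix of g = x^4 z + y^3 + z^2 (rows = monomials)
E = [[4, 0, 1],
     [0, 3, 0],
     [0, 0, 2]]
ratios = dual_weight_ratios(E)
# dual weights (6, 8, 9) with dual degree 24
assert ratios == [Fraction(1, 4), Fraction(1, 3), Fraction(3, 8)]
assert ratios == [Fraction(q, 24) for q in (6, 8, 9)]
```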
We can use this equation to write equation (3.7) as an ordinary differential equation withdifferential operator δ : s − b d ( δ − b d ) · · · · · ( δ − ω v = n Y i =1 v − b q i i ( δ i − ( b q i − · · · · · ( δ i − δ i ω v = n Y i =1 v − b q i i (cid:18) − b q i b d δ − ( b q i − (cid:19) · · · · · (cid:18) − b q i b d δ − (cid:19) (cid:18) − b q i b d δ (cid:19) ω v = n Y i =1 (cid:18) − b q i b d v − i (cid:19) b q i δ + ( b q i − b d b q i ! · · · · · δ + b d b q i ! δω v . Now we set v i = 1 , which brings us back to our one-parameter family f ( x ) . Because thesolutions of the differential equation before are given by ω v , we get a differential equation for ω = s Ω f ( x ) . So our final expression is given by s − b d ( δ − b d ) · · · · · ( δ − ω = n Y i =1 (cid:18) − b q i b d (cid:19) b q i δ + ( b q i − b d b q i ! · · · · · δ + b d b q i ! δω. or to have the same appearance as in Theorem 3.6: s b d n Y i =1 ( b q i ) b q i δ δ + b d b q i ! · · · · · δ + ( b q i − b d b q i ! ω − ( − b d ) − b d ( δ − · · · · · ( δ − b d ) ω. (3.8) In the last chapter we already proved the order of the Picard-Fuchs equation. If we lookat examples such as those in Section 2.3 and 3.4 and in Appendix A, we can also conjectureexactly what the Picard-Fuchs equation looks. We can use the GKZ system that we calculatedin the last section to confirm that this is true.4 The Picard-Fuchs equation for invertible polynomials and consequences Theorem 3.6. Let g ( x , . . . , x n ) be an invertible polynomial with weighted degree deg g = d and reduced weights q , . . . , q n for which the Calabi-Yau condition, d = P q i , holds.Let g t ( x , . . . , x n ) be the transposed polynomial with reduced weights b q , . . . , b q n and degree deg g t = b d . Then the Picard-Fuchs equation for the one-parameter family f ( x , . . . , x n ) = g ( x , . . . 
, x_n) + s·∏ x_i is given by

∏_{i=1}^n q̂_i^{q̂_i} · s^{d̂} · ∏_{i=1}^n ∏_{j=0}^{q̂_i−1} (δ + j·d̂/q̂_i) · ∏_{ℓ∈I} (δ + ℓ)^{−1} − (−d̂)^{d̂} · ∏_{j=0}^{d̂−1} (δ − j) · ∏_{ℓ∈I} (δ − ℓ)^{−1},

where I = {0, . . . , d̂−1} ∩ ∪_{i=1}^n {0, d̂/q̂_i, 2d̂/q̂_i, . . . , (q̂_i−1)d̂/q̂_i}.

Proof. From Lemma 3.5 we know that ω = sΩ/f(x) is a solution of equation (3.8). It follows that all period integrals are solutions of (3.8) and therefore the Picard-Fuchs equation divides

s^{d̂} ∏_{i=1}^n (q̂_i)^{q̂_i} ∏_{i=1}^n δ (δ + d̂/q̂_i) · · · · · (δ + (q̂_i−1)·d̂/q̂_i) ω − (−1)^{d̂} d̂^{d̂} (δ − 1) · · · · · (δ − d̂) ω.

We also know from Theorem 2.8 that the order of the Picard-Fuchs equation is given by

u = d̂ − |{0, 1, . . . , d̂−1} ∩ ∪_{i=1}^n {0, d̂/q̂_i, . . . , (q̂_i−1)·d̂/q̂_i}|.

So we try to cancel common factors between the summands of (3.8) until the order of the equation is u. If we multiply equation (3.8) by s^{−d̂} and use the commutation relations (3.4) to pass it through the differential operators, we get

∏_{i=1}^n (q̂_i)^{q̂_i} ∏_{i=1}^n δ (δ + d̂/q̂_i) · · · · · (δ + (q̂_i−1)·d̂/q̂_i) ω − (−1)^{d̂} d̂^{d̂} (δ + (d̂−1)) · · · · · (δ + 1) δ s^{−d̂} ω.

Now it is easy to see that every linear factor δ + j with j ∈ {0, 1, . . . , d̂−1} ∩ ∪_{i=1}^n {0, d̂/q̂_i, . . . , (q̂_i−1)·d̂/q̂_i} occurs in both summands and can therefore be cancelled. This leads us to the equation

∏_{i=1}^n q̂_i^{q̂_i} · s^{d̂} · ∏_{i=1}^n ∏_{j=0}^{q̂_i−1} (δ + j·d̂/q̂_i) · ∏_{ℓ∈I} (δ + ℓ)^{−1} − (−d̂)^{d̂} · ∏_{j=0}^{d̂−1} (δ − j) · ∏_{ℓ∈I} (δ − ℓ)^{−1},

where I = {0, . . . , d̂−1} ∩ ∪_{i=1}^n {0, d̂/q̂_i, 2d̂/q̂_i, . . . , (q̂_i−1)d̂/q̂_i}. Finally, this equation is divisible by the Picard-Fuchs equation and has the order of the Picard-Fuchs equation (cf. Theorem 2.8), so up to a constant it is the Picard-Fuchs equation.

We give another class of examples here, the simple elliptic singularities. There are only 3 examples and their Picard-Fuchs equations are known.

Example 3.7.
In the following table we list the 3 polynomials that define the simple elliptic singularities, their weights, the degree and the Picard-Fuchs equation, which can easily be calculated with Theorem 3.6 or directly with the Griffiths-Dwork method.

Name  Invertible polynomial  Degree  Weights  Picard-Fuchs equation
Ẽ_6   x^3 + y^3 + z^3        3       (1,1,1)  s^3 δ^2 + 3^3 (δ−1)(δ−2)
Ẽ_7   x^4 + y^4 + z^2        4       (1,1,2)  s^4 δ^2 − 2^6 (δ−1)(δ−3)
Ẽ_8   x^6 + y^3 + z^2        6       (1,2,3)  s^6 δ^2 − 2^4·3^3 (δ−1)(δ−5)

Table 3.1: Simple elliptic singularities and their Picard-Fuchs equations

A similar result, but approached from a different point of view, can be found in a paper by Corti and Golyshev [CG06]. In this paper the differential equation that they look at is the same as our Picard-Fuchs equation, but they start with a local system, which is given in the following way:

Y = { ∏_{i=1}^n y_i^{w_i} = λ, ∑_{i=1}^n y_i = 1 } ⊂ (C^*)^n × C^*    (3.9)

If we insert y_i = −s^{−1} x^{e_i − (1,...,1)} and w_i = q̂_i, then we get that Y consists of the following two equations:

λ = ∏_{i=1}^n y_i^{w_i} = ∏_{i=1}^n ( −s^{−1} x^{e_i − (1,...,1)} )^{q̂_i} = (−s)^{−∑ q̂_i} x^{∑ q̂_i e_i − (∑ q̂_i, ..., ∑ q̂_i)} = (−s)^{−d̂} x^{(d̂,...,d̂) − (d̂,...,d̂)} = (−s)^{−d̂},

∑_{i=1}^n y_i = ∑_{i=1}^n ( −s^{−1} x^{e_i − (1,...,1)} ) = −s^{−1} x^{−(1,...,1)} ∑_{i=1}^n x^{e_i}.

So from the first equation we get (−s)^{−d̂} = λ, and the second equation can easily be rewritten as

∑_{i=1}^n x^{e_i} + s x^{(1,...,1)} = f(x).

This shows the direct connection to our hypersurface V(f). It is very easy to write the Picard-Fuchs equation with the differential operator D = λ ∂/∂λ, because the relation between D and δ is just given by

δ = s ∂/∂s = −d̂ (−s)^{−d̂} ∂/∂(−s)^{−d̂} = −d̂ λ ∂/∂λ = −d̂ D.
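For the Ẽ_6 family x^3 + y^3 + z^3 + s·xyz the solution can be made explicit. Substituting δ = −3D into the operator s^3δ^2 + 3^3(δ−1)(δ−2) from Theorem 3.6 and multiplying by −λ/9 gives, up to a constant, the hypergeometric operator D^2 − 27λ(D + 1/3)(D + 2/3), whose holomorphic solution at λ = 0 is ∑_n (3n)!/(n!)^3 λ^n. The following sketch (our own consistency check, not part of the thesis; the truncation order N = 8 is arbitrary) verifies the annihilation on a truncated series:

```python
import sympy as sp

lam = sp.symbols('lam')
N = 8  # truncation order

# holomorphic solution of the hypergeometric operator at lam = 0
y = sum(sp.factorial(3*n) / sp.factorial(n)**3 * lam**n for n in range(N + 1))

def D(expr):
    # logarithmic derivative D = lam * d/dlam
    return sp.expand(lam * sp.diff(expr, lam))

# apply D^2 - 27*lam*(D + 1/3)*(D + 2/3) to the truncated series
first = D(D(y))
inner = D(D(y)) + D(y) + sp.Rational(2, 9) * y   # (D+1/3)(D+2/3) = D^2 + D + 2/9
result = sp.expand(first - 27 * lam * inner)

# all coefficients up to order N vanish; only the truncation tail survives
poly = sp.Poly(result, lam)
assert all(poly.coeff_monomial(lam**k) == 0 for k in range(N + 1))
```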
So in terms of D the Picard-Fuchs equation is given by

∏_{i=1}^n q̂_i^{q̂_i} s^{d̂} ∏_{i=1}^n ∏_{j=0}^{q̂_i−1} (δ + j·d̂/q̂_i) ∏_{ℓ∈I} (δ + ℓ)^{−1} − (−d̂)^{d̂} ∏_{j=0}^{d̂−1} (δ − j) ∏_{ℓ∈I} (δ − ℓ)^{−1}

= ∏_{i=1}^n q̂_i^{q̂_i} (−1)^{d̂} λ^{−1} ∏_{i=1}^n ∏_{j=0}^{q̂_i−1} (−d̂D + j·d̂/q̂_i) ∏_{ℓ∈I} (−d̂D + ℓ)^{−1} − (−d̂)^{d̂} ∏_{j=0}^{d̂−1} (−d̂D − j) ∏_{ℓ∈I} (−d̂D − ℓ)^{−1}

= (−1)^{d̂} (−d̂)^u λ^{−1} ( ∏_{i=1}^n q̂_i^{q̂_i} ∏_{i=1}^n ∏_{j=0}^{q̂_i−1} (D − j/q̂_i) ∏_{ℓ∈I} (D − ℓ/d̂)^{−1} − d̂^{d̂} λ ∏_{j=0}^{d̂−1} (D + j/d̂) ∏_{ℓ∈I} (D + ℓ/d̂)^{−1} ),   (3.10)

which agrees with formula (1) in [CG06].

In Theorem 1.1 of the article [CG06] it is stated that the solutions of the Picard-Fuchs equation come from the local system (3.9), and in Conjecture 1.4 and Proposition 1.5 the Hodge numbers of the solution space are given. This brings us to the next section, where we will investigate this in detail.

3.3 Statements on the cohomology of the solution space

We want to relate already known statements from the literature to the work we have done so far in the thesis. First we continue the last section and relate our results to the work of Corti and Golyshev [CG06]. In their paper there is a result that calculates the Hodge numbers of the solution space of the Picard-Fuchs equation. We state their result in a form which is compatible with our setting.

Proposition 3.8. ([CG06] Conjecture 1.4 and Proposition 1.5) Consider the sets A := ⊔_{i=1}^n ((Q_i \ {d̂}) ∪ {0}) and D = (D \ {d̂}) ∪ {0} (cf. Definition 2.6). Set {α_1, . . . , α_u} := A \ (A ∩ D) with α_i ≤ α_{i+1} for all i, and {β_1, . . . , β_u} := D \ (A ∩ D) with β_i < β_{i+1} for all i. Now consider the Picard-Fuchs equation from Theorem 3.6, which with the above notation is given by

s^{d̂} ∏_{i=1}^n q̂_i^{q̂_i} ∏_{i=1}^u (δ + α_i) − (−d̂)^{d̂} ∏_{i=1}^u (δ − β_i) = 0.

Now define the function

p(k) := |{ j | α_j < β_k }| − (k − 1) for k = 1, . . . , u,

and let p_+ := max{p(k)} and p_− := min{p(k)}.
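The function p(k) is easy to evaluate once the multisets {α_i} and {β_i} are known. As an illustration (our own computation from the definitions above, using the E_12 family with dual weights (1, 6, 14, 21) and d̂ = 42), the following sketch computes α, β and the multiplicities |p^{−1}(j)|, recovering h^{2,0} = h^{0,2} = 1 and h^{1,1} = 10, as one expects for a family of K3 surfaces:

```python
from fractions import Fraction
from collections import Counter

def alpha_beta(dual_weights, dhat):
    """Multisets A \\ (A ∩ D) and D \\ (A ∩ D) as in Proposition 3.8."""
    A = Counter()
    for q in dual_weights:
        for j in range(q):
            A[Fraction(j * dhat, q)] += 1
    Dset = Counter({Fraction(b): 1 for b in range(dhat)})
    for c in set(A) & set(Dset):
        A[c] -= 1       # remove one copy of each common element
        Dset[c] -= 1
    return sorted(A.elements()), sorted(Dset.elements())

alpha, beta = alpha_beta((1, 6, 14, 21), 42)
assert len(alpha) == len(beta) == 12   # the order u

# p(k) = |{j : alpha_j < beta_k}| - (k - 1), here with k running from 0
p = [sum(1 for a in alpha if a < b) - k for k, b in enumerate(beta)]
hodge = Counter(p)                      # hodge[j] = |p^{-1}(j)|
assert (max(p), min(p)) == (3, 1)       # p_+ = n - 1 and p_- = 1
assert (hodge[1], hodge[2], hodge[3]) == (1, 10, 1)
```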
.3 Statements on the cohomology of the solution space Then the local system of solutions of the ordinary differential equation above supports a realpolarised variation of Hodge structure of weight p + − p − and Hodge numbers h j − p − ,p + − j = | p − ( j ) | . Corollary 3.9. It follows easily from the calculations in Chapter 2 that the following numberscoincide: • p + = p (1) = n − • p − = p ( u ) = 1 • P n − j =1 h j − ,n − − j = P n − j =1 | p − ( j ) | = u . We are able to make the relation between u and the above Hodge numbers even more precise. Proposition 3.10. Let u = u + · · · + u n − , where u i denotes the number of degree i · d basiselements of the u basis elements one needs to write the Picard-Fuchs equation as calculated inthe proof of Theorem 2.8. Then u i − = h i − ,n − i − = h i − p − ,p + − i . Remark . Notice that u i ≥ for all i , because we have at least one basis element in everydegree, and u = u n − = 1 , because in degree and n − we have exactly the basis elements s Ω f and s n − ( Q x i ) n − Ω f n − respectively. Proof. We will relate the function p ( k ) to the u basis elements. In particular we show thatfor every ≤ k ≤ u with p ( k ) = i we need one basis element in degree ( n − i − d . Noticethat p ( k ) can be written recursively as follows: p ( k + 1) = |{ j | α j < β k +1 }| − k = |{ j | α j < β k }| − ( k − 1) + |{ j | β k < α j < β k +1 }| − p ( k ) + |{ j | β k < α j < β k +1 }| − . Now we will show the statement via induction. If p ( k ) = i corresponds to a basis element indegree ( n − i − d , then p ( k + 1) corresponds to a basis element in degree ( n − i − d +( |{ j | β k − < α j < β k }| + 1) d . 
For the correspondence between the function p and the basiselements, we just view the α i and β i as potential positions on the Jacobi path, where the α i correspond to positions which have multiple possibilities or which are occupied by a partialderivative that creates an extra vertex and the β i correspond to free positions before shifting.Notice that in contrast to the proof of Theorem 2.8 the p ( k ) count the end of a connectedpart on the Jacobi path and not the beginning.First we investigate k = 1 . We know that α = · · · = α n − = 0 and β = 1 , so p (1) = n − .This makes sense, because at position we have the vertex ( n − , . . . , n − and when we haveused the Griffiths formula ( n − -times, we reach (0 , , , and there is definitely a connectedpart of the path ending here and we need one basis element in degree n − ( n − − d .8 The Picard-Fuchs equation for invertible polynomials and consequences Now assume that p ( k ) = i and we already know that this corresponds to the fact that afterusing the Griffiths formula the appropriate number of times, there is a connected path endingat position β k with a vertex of degree ( n − i − d . Consider the next number k + 1 , then p ( k + 1) = p ( k ) + |{ j | β k < α j < β k +1 }| − i + |{ j | β k < α j < β k +1 }| − and we want to show that this leads to a basis element in degree ( n − i −|{ j | β k < α j < β k +1 }| ) d .To prove this, we have 3 distinct cases:(i) p ( k + 1) = p ( k ) − (ii) p ( k + 1) = p ( k ) (iii) p ( k + 1) > p ( k ) In the first case, there are no α i between β k and β k +1 , so every position in between is coveredby exactly one partial derivative, therefore the smallest number drops by one at position β k + 1 , i.e. κ ( β k ) − κ ( β k + 1) , and stays the same until β k +1 is reached. This marks theend of the connected part of the path. 
Since the smallest number is one less in this case wehave to use the Griffiths formula one time less before we reach the basis element and thereforethe degree of the basis element for this part of the path is d times bigger than before, so thedegree of this basis element is ( n − i ) d = ( n − ( i − − d which agrees with the fact that p ( k + 1) = i − .In case (ii) there is exactly one α i between β k and β k +1 . If α i ∈ Z , then this means thereis one position between β k and β k +1 that is occupied by two partial derivatives. So again κ ( β k ) − κ ( β k + 1) , but before this part of the path ends, the smallest number increases byone due to the double occupation. So with the argument from before, we get a basis elementof the same degree ( n − i − d = ( n − p ( k + 1) − d . If α i ∈ Q \ Z , then either the partialderivative corresponding to α i is at position β k + 1 and the smallest number did not drop, orit is somewhere between β k and β k +1 . In this case it is in the same place as another partialderivative, because every number not in the set { β i } is occupied and we are back to the firstconsideration. So either way we have a basis element with the same smallest number as before,which therefore also has degree ( n − i − d = ( n − p ( k + 1) − d .Finally, the third case is just an expansion of the previous case. Let us define the number of α i between β k and β k +1 as a k := |{ j | β k < α j < β k +1 }| , then with the same argumentation asbefore, we can see that the smallest number increases by a k − on the path between β k and β k +1 , i.e. κ ( β k ) + a k − κ ( β k +1 ) . It follows that the degree of the basis element of this partof the path is ( a k − d times smaller than the previous basis element. So this basis elementhas degree ( n − i − d − ( a k − d = ( n − ( i + a k − − d = ( n − i − a k ) d , which agreeswith the above formula and ends the proof. Remark . We have p ( k ) + p ( u − k + 1) = n . 
This is due to the fact that the α i = 0 and the β i are evenly spread between and b d − . This implies |{ j | β k < α j < β k +1 }| = .4 The case of Arnold’s strange duality |{ j | β u − k < α j < β u − k +1 }| for < k < u/ and therefore p ( k ) + p ( u − k + 1) = |{ j | α j < β }| + k − X i =1 |{ j | β i < α j < β i +1 }| − ( k − |{ j | α j < β }| + u − k X i =1 |{ j | β i < α j < β i +1 }| − ( u − k )= 2( n − 1) + u − X i = u − k +1 |{ j | β i < α j < β i +1 }| + u − k X i =1 |{ j | β i < α j < β i +1 }| − ( u − n − 1) + u − ( n − − ( u − 1) = n. This also implies h i − ,n − i − = h n − i − ,i − . In [CG06] one can also find a more detailed description of the Hodge numbers that appearhere. This relies mainly on the work of Danilov [DK86] on Deligne-Hodge numbers and Newtonpolyhedra. Remark . The Hodge numbers h i − ,n − i − = u i that appear in our work as well asin [CG06] are the Deligne-Hodge numbers of the cohomology with compact support of ahypersurface defined by a Laurent polynomial with Newton polyhedron ∆ , where ∆ = D(cid:16) b q b d , . . . , b q n b d (cid:17) , (1 , . . . , , . . . , (0 , . . . , , E . In particular from this viewpoint the u i areDeligne-Hodge numbers of a toric variety with polytope ∆ in the lattice Z (cid:16) b q b d , . . . , b q n b d (cid:17) + Z n . In this section we will show all the results and some more details for the 14 exceptionalunimodal hypersurfaces singularities. This is a class of examples first studied by Arnol’din [Arn75] where he among other things discovered that there is a duality among this 14exceptional unimodal hypersurfaces singularities which is now known as Arnold’s strangeduality. One can define Gabrielov and Dolgachev numbers for every one of these hypersurfacesingularities and he showed that for every one of the 14 exceptional unimodal hypersurfacesingularities there is another singularity in this list with interchanged Dolgachev and Gabrielovnumbers. 
The consequences of this duality between the exceptional unimodal hypersurface singularities have been studied by a number of people. An overview of many aspects of this duality can be found in a paper by Ebeling [Ebe99]. These examples were also the starting point for the analysis of the Picard-Fuchs equations in this thesis. In this section we want to concentrate on the duality between the invertible polynomials of Arnold's strange duality and the consequences we get from the results achieved so far.

In the following table we list the important data of the 14 exceptional unimodal hypersurface singularities that we need. In particular, we look at the compactification that comes from compactifying the Milnor fibres in the weighted projective space with one additional dimension, which has weight one. The table consists of the polynomial defining the compactification of the hypersurface singularity, the degree of this polynomial, the weights and the dual singularity due to Arnol'd. As in Table 1.1, the duality between the invertible polynomials is still visible.

Name  g(w,x,y,z)                  Deg  Weights      Dual
E_12  w^42 + x^7 + y^3 + z^2      42   (1,6,14,21)  E_12
E_13  w^30 + x^5y + y^3 + z^2     30   (1,4,10,15)  Z_11
Z_11  w^30 + x^5 + xy^3 + z^2     30   (1,6,8,15)   E_13
E_14  w^24 + x^4z + y^3 + z^2     24   (1,3,8,12)   Q_10
Q_10  w^24 + x^4 + y^3 + xz^2     24   (1,6,8,9)    E_14
Z_12  w^22 + x^4y + xy^3 + z^2    22   (1,4,6,11)   Z_12
W_12  w^20 + x^5 + y^2z + z^2     20   (1,4,5,10)   W_12
Z_13  w^18 + x^3z + xy^3 + z^2    18   (1,3,5,9)    Q_11
Q_11  w^18 + x^3y + y^3 + xz^2    18   (1,4,6,7)    Z_13
W_13  w^16 + x^4y + y^2z + z^2    16   (1,3,4,8)    S_11
S_11  w^16 + x^4 + y^2z + xz^2    16   (1,4,5,6)    W_13
Q_12  w^15 + x^3z + y^3 + xz^2    15   (1,3,5,6)    Q_12
S_12  w^13 + x^3y + y^2z + xz^2   13   (1,3,4,5)    S_12
U_12  w^12 + x^4 + y^2z + yz^2    12   (1,3,4,4)    U_12

Table 3.2: The compactification of Arnold's strange duality

We will now list the sets {α_1, . . . , α_u} = A \ (A ∩ D) and {β_1, . . . , β_u} = D \ (A ∩ D), where A = ⊔_{i=1}^n ((Q_i \ {d̂}) ∪ {0}) and D = (D \ {d̂}) ∪ {0} as in Proposition 3.8, and the resulting order u of the Picard-Fuchs equation.
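The order u from Theorem 2.8 depends only on the dual weights and the dual degree, so the whole list can be cross-checked mechanically. A sketch (our own verification; the dual weight systems are read off from the duality of Table 3.2, i.e. each polynomial gets the weights of its transpose):

```python
def pf_order(dual_weights, dhat):
    """u = dhat - |{0,...,dhat-1} intersected with the union of the
    sets {j*dhat/q : 0 <= j < q} over the dual weights q (Theorem 2.8)."""
    covered = set()
    for q in dual_weights:
        for j in range(q):
            num = j * dhat
            if num % q == 0:          # keep only the integer positions
                covered.add(num // q)
    return dhat - len(covered)

# (name, dual weights of f, dual degree)
data = [
    ("E12", (1, 6, 14, 21), 42), ("E13", (1, 6, 8, 15), 30),
    ("Z11", (1, 4, 10, 15), 30), ("E14", (1, 6, 8, 9), 24),
    ("Q10", (1, 3, 8, 12), 24),  ("Z12", (1, 4, 6, 11), 22),
    ("W12", (1, 4, 5, 10), 20),  ("Z13", (1, 4, 6, 7), 18),
    ("Q11", (1, 3, 5, 9), 18),   ("W13", (1, 4, 5, 6), 16),
    ("S11", (1, 3, 4, 8), 16),   ("Q12", (1, 3, 5, 6), 15),
    ("S12", (1, 3, 4, 5), 13),   ("U12", (1, 3, 4, 4), 12),
]
orders = [pf_order(w, d) for _, w, d in data]
assert orders == [12, 12, 10, 12, 8, 10, 8, 12, 9, 12, 8, 8, 12, 6]
```

The same function reproduces the orders of the simple elliptic examples, e.g. pf_order((1, 1, 1), 3) gives 2 for Ẽ_6.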
Remember from Definition 2.6 that the sets Q i and D are defined via the dual weights. Notice that δ + α i and δ − β i are the linear factorsin the two summands of the Picard-Fuchs equation. Together with the dual weights and thedual degree they completely determine the Picard-Fuchs equation. We want to mention that { β , . . . , β u } contains all numbers ≤ b ≤ b d which are coprime to b d . The set { α , . . . , α u } ∪ Z .4 The case of Arnold’s strange duality b d .Name α , . . . , α u β , . . . , β u u E , , , , , , , , , , , 36 1 , , , , , , , , , , , 41 12 E , , , , , , , , , , , , , , , , , , , , , , 29 12 Z , , , , , , , , , 24 1 , , , , , , , , , 29 10 E , , , , , , , , , , , , , , , , , , , , , , , 23 12 Q , , , , , , , 18 1 , , , , , , , 23 8 Z , , , , , , , , , , , , , , , , , , 21 10 W , , , , , , , 16 1 , , , , , , , 19 8 Z , , , , , , , , , , , , , , , , , , , , , , 17 12 Q , , , , , , , , , , , , , , , , 17 9 W , , , , , , , , , , , , , , , , , , , , , , 15 12 S , , , , , , , 12 1 , , , , , , , 15 8 Q , , , , , , , , , , , , , , 14 8 S , , , , , , , , , , , , , , , , , , , , , , 12 12 U , , , , , , , , , , 11 6 Table 3.3: The sets defining the linear factors of the Picard-Fuchs equationWith the table above and Proposition 3.8 we are able to state how many basis elements weneed in every degree. Using the methods from Chapter 2, we can give an exact basis of thepart of the cohomology that is used in the calculations. This can be done by relating thenumbers α i and β i to the positions on the Jacobi path, where a basis element can be chosen.In the examples we have h , = h , = 1 and we can choose the basis elements s Ω f and s ( wxyz ) Ω f respectively. This means we have to find u − basis elements in degree d . We willonly list a basis for the part of the middle cohomology we used and not a basis for the wholecohomology, or equivalently Milnor ring. We show how to calculate these basis elements in anexample. 
The rest of the basis elements in Table 3.4 can be calculated the same way.2 The Picard-Fuchs equation for invertible polynomials and consequences Example 3.14. We concentrate on the singularity S , so f ( x ) = w + x + y z + xz withweights (1 , , , and degree d = 16 . The dual weights are (1 , , , and b d = d = 16 . Wehave Q = { } , Q = { , , } , Q = { , , , , , , , } and Q = { , , , } . We can read off from the above table what the α i are and we can see that we need basiselements in the middle cohomology. Apart from , the α i consist of the elements in the disjointunion of the Q i which are rational or appear twice. So the number α i tells us the position ofthe basis element on the Jacobi path. This becomes clear in the calculation:We start with α = 4 . To calculate the corresponding basis elements we need to check whicharrows have already been used, in other words how many numbers in Q i are ≤ . In Q thereis no element, so ∂ w does not appear on the Jacobi path before the vertex of the basis elementwe are looking for. The set Q has also no element ≤ , but Q contains and and Q contains also . This means ∂ y was used twice and ∂ z was used once on the Jacobi path beforewe arrive at the basis element. So starting at (1 , , , we have to add up (1 , , − , twiceand (1 , , , − once and we end up at the vertex (1 , , , 1) + 2 · ∂ y + ∂ z = (1 , , , 1) + 2 · (1 , , − , 0) + (1 , , , − 1) = (4 , , , which means that the basis element we are looking for is given by w x .For α = , we check that counting the elements ≤ in every Q i , we have to use ∂ x once, ∂ y twice and ∂ z once to get the basis element. We calculate that (1 , , , 1) + ∂ z + 2 · ∂ y + ∂ z = (1 , , , 1) + (1 , − , , 1) + 2 · (1 , , − , 0) + (1 , , , − , , , which leads to w yz as basis element. In the same way one gets that α = 8 corresponds tothe basis element w x , α = gives w y and finally α = 12 leads to w x . 
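The multisets used in Example 3.14 can be reproduced directly from the definitions. A sketch (our own check; the dual weight system (1, 3, 8, 4) and d̂ = 16 for this singularity are taken from the sizes of the sets Q_i in the example) computing the α_i, including the two rational positions 16/3 and 32/3:

```python
from fractions import Fraction
from collections import Counter

def alpha_multiset(dual_weights, dhat):
    """A \\ (A ∩ D) as in Proposition 3.8, allowing rational entries."""
    A = Counter()
    for q in dual_weights:
        for j in range(q):
            A[Fraction(j * dhat, q)] += 1
    D = set(Fraction(b) for b in range(dhat))
    for c in set(A) & D:
        A[c] -= 1                     # cancel one copy per free position
    return sorted(A.elements())

alpha = alpha_multiset((1, 3, 8, 4), 16)
F = Fraction
assert alpha == [0, 0, 0, 4, F(16, 3), 8, F(32, 3), 12]
assert len(alpha) == 8   # the order u of this family
```

The three copies of 0 account for the factor δ^3, and the integer entries 4, 8, 12 are exactly the positions where the example places basis elements.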
Now we have basis elements and in addition we have wxyz which corresponds to one of the zeroes in A .With this construction we are able to calculate the basis in the middle cohomology for allexamples, and this is listed in the following table. So the table includes the name of thesingularity, the number h , = u − from Proposition 3.8 and a basis for the part of theMilnor ring in degree d , which gives also a basis of the part of the middle cohomology we areusing. .4 The case of Arnold’s strange duality u − basis elements in the Milnor ring in degree d E wxyz, w x, w x , w x , w x , w x , w x , w y, w y , w z E wxyz, w x, w x , w x , w y, w y , w z, w xz, w x z, w x z Z wxyz, w x, w x , w x , w x , w y, w z, w yz E wxyz, w x, w x , w y, w y , w z, w xy, w x y, w xy , w x y Q wxyz, w x, w x , w x , w y, w y Z wxyz, w x, w x , w y, w z, w xz, w x z, w yz W wxyz, w x, w x , w x , w x , w y Z wxyz, w x, w x , w y, w z, w xy, w x y, w xy , w x y , w yz Q wxyz, w x, w x , w y, w y , w xz, w x z W wxyz, w x, w x , w x , w y, w xy, w x y, w x y, w xz, w x z S wxyz, w x, w x , w x , w z, w yz Q wxyz, w x, w y, w y , w xy, w xy S wxyz, w x, w x , w y, w z, w xy, w x y, w xz, w x z, w yz U wxyz, w x, w x , w x Table 3.4: Basis elements for the middle cohomologyOf course from the previous work we can immediately calculate the Picard-Fuchs equation,either with the Griffiths-Dwork method as shown in Section 2.3, with a computer algebrasystem as shown in Appendix B, or by inserting in the α i and β j as linear factors as inTheorem 3.6. The output for all singularities we investigated in this section is shown in thenext table. 
T h e P i c a r d - F u c h s e qu a t i o n f o r i n v e r t i b l e p o l y n o m i a l s a nd c o n s e qu e n ce s Name Picard-Fuchs equation for f E s δ ( δ + 6)( δ + 12)( δ + 14)( δ + 18)( δ + 21)( δ + 24)( δ + 28)( δ + 30)( δ + 36) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − E s δ ( δ + )( δ + )( δ + 10)( δ + )( δ + 15)( δ + )( δ + 20)( δ + )( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − Z s δ ( δ + 6)( δ + )( δ + 12)( δ + 15)( δ + 18)( δ + )( δ + 24) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − E s δ ( δ + )( δ + )( δ + 8)( δ + )( δ + 12)( δ + )( δ + 16)( δ + )( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − Q s δ ( δ + 6)( δ + 8)( δ + 12)( δ + 16)( δ + 18) − ( δ − δ − δ − δ − δ − δ − δ − δ − Z s δ ( δ + )( δ + )( δ + )( δ + 11)( δ + )( δ + )( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − W s δ ( δ + 4)( δ + 8)( δ + 10)( δ + 12)( δ + 16) − ( δ − δ − δ − δ − δ − δ − δ − δ − Z s δ ( δ + )( δ + )( δ + )( δ + )( δ + 9)( δ + )( δ + )( δ + )( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − Q s δ ( δ + )( δ + 6)( δ + )( δ + )( δ + 12)( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − . T h ec a s e o f A r n o l d ’ ss t r a n g e du a li t y Name Picard-Fuchs equation for f W s δ ( δ + )( δ + )( δ + )( δ + )( δ + 8)( δ + )( δ + )( δ + )( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − S s δ ( δ + 4)( δ + )( δ + 8)( δ + )( δ + 12) − ( δ − δ − δ − δ − δ − δ − δ − δ − Q s δ ( δ + )( δ + 5)( δ + )( δ + 10)( δ + ) − ( δ − δ − δ − δ − δ − δ − δ − δ − S s δ ( δ + )( δ + )( δ + )( δ + )( δ + )( δ + )( δ + )( δ + )( δ + )+13 ( δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − δ − U s δ ( δ + 3)( δ + 6)( δ + 9) − ( δ − δ − δ − δ − δ − δ − The Picard-Fuchs equation for invertible polynomials and consequences We give another viewpoint on the Picard-Fuchs equation. From Theorem 3.6 we know thatthe Picard-Fuchs equation always consists of exactly two summands. They can be separatedby setting s b d = 0 or s b d = ∞ . 
We already know that if we view the Picard-Fuchs equation as apolynomial with variable δ then the zeroes of these polynomials after setting s b d = 0 are givenby β , . . . , β u and the zeroes for s b d = ∞ are given by − α , . . . , − α u . Now we want to focus ona polynomial that has related zeroes. Namely, we define χ to be the polynomial with zeroes exp (cid:16) π i β i b d (cid:17) for i = 0 , . . . , n and χ ∞ the polynomial with roots exp (cid:16) π i α i b d (cid:17) for i = 0 , . . . , n and notice that multiple roots in the Picard-Fuchs equation lead to multiple roots of χ ∞ and χ . Equivalently, we can first write the Picard-Fuchs equation for the variable λ = ( − s ) − b d and then start with the zeroes of this equation for λ = ∞ and λ = 0 . Notation . We will shorten the notation for a rational function with only roots of unityas zeroes and poles. We will write ν · · · ν m /η · · · η m for the rational function χ ( t ) = (1 − t ν ) · · · · · (1 − t ν m )(1 − t η ) · · · · · (1 − t η m ) With this notation we write down the functions χ and χ ∞ in the following table.Name Deg Weights χ χ ∞ E 42 (1,6,14,21) · · · / · · · 21 2 · · E 30 (1,4,10,15) · / · 15 1 · · Z 30 (1,6,8,15) · / · 15 1 · · E 24 (1,3,8,12) · / · · · Q 24 (1,6,8,9) · / · 12 1 · · Z 22 (1,4,6,11) · / · 11 1 · · · / W 20 (1,4,5,10) · / · 10 1 · · Z 18 (1,3,5,9) / · · Q 18 (1,4,6,7) / · · W 16 (1,3,4,8) / · · S 16 (1,4,5,6) / · · Q 15 (1,3,5,6) · / · · · S 13 (1,3,4,5) / · · U 12 (1,3,4,4) · / · · · .5 Relations to the Poincaré series and monodromy χ and χ ∞ are in all cases a little bit different, but the interesting thing is thatthe quotient of the two functions is always the same. Remark . 
The rational functions χ_0 and χ_∞ described in the above table always have the property that

χ_0(t) / χ_∞(t) = (1 − t^{d̂}) / ( (1 − t^{q̂_1})(1 − t^{q̂_2})(1 − t^{q̂_3})(1 − t^{q̂_4}) ).

In the next section we will look at this phenomenon in more generality, and we will also see that the roots of χ_0 and χ_∞ are the eigenvalues of the local monodromy around (−1)^{d̂} λ^{−1} = s^{d̂} = 0 and (−1)^{d̂} λ^{−1} = s^{d̂} = ∞ respectively.

3.5 Relations to the Poincaré series and monodromy

In this section we want to relate the numbers in the Picard-Fuchs equation of f(x) to the Poincaré series of g^t(x) and to the monodromy around 0 and ∞ in the solution space of the Picard-Fuchs equation. The last remark in the previous section already showed us the direction.

Poincaré series

First we want to investigate the relation to the Poincaré series. For this we consider the Picard-Fuchs equation in the form of (3.10), which is a differential equation with parameter λ = (−s)^{−d̂}. If we view this differential equation as a polynomial with variable D, then we can immediately read off the zeroes for λ = 0 and λ = ∞:

λ = 0 : α_1/d̂, . . . , α_u/d̂
λ = ∞ : −β_1/d̂, . . . , −β_u/d̂

Remark 3.17. Because of the symmetry of the α_j and β_j, the sets { exp(2πi α_j/d̂) } and { exp(2πi β_j/d̂) } are closed under complex conjugation.

We will now relate these numbers α_j and β_j, or exp(2πi α_j/d̂) and exp(2πi β_j/d̂) respectively, to the Poincaré series of g^t(x). Let us first recall how the Poincaré series is defined.

Definition 3.18. Let A := C[x]/(g(x)) be the coordinate algebra of the hypersurface {g(x) = 0}. Then A naturally admits a grading A = ⊕_{m=0}^∞ A_m, where A_m is generated by the monomials in A of weighted degree m. The Poincaré series for this hypersurface is given by

p_A(t) := p_g(t) := ∑_{m=0}^∞ dim_C A_m · t^m.

Remark 3.19. (cf. [AGZV85]) If g(x) is quasihomogeneous with weights q_1, . . .
, q_n and weighted degree d, then the Poincaré series is given by

p_g(t) = (1 − t^d) / ( (1 − t^{q_1}) · · · · · (1 − t^{q_n}) ).

A rational function of this form is of course uniquely determined by its sets of zeroes and poles. So we study these sets for the Poincaré series of g^t(x), because, as mentioned before, this will be related to the Picard-Fuchs equation of f(x). So we study the zeroes and poles of the function

p_{g^t}(t) = (1 − t^{d̂}) / ( (1 − t^{q̂_1}) · · · · · (1 − t^{q̂_n}) ).

The zeroes of (1 − t^{d̂}) are given by the set { exp(2πi j/d̂) | 0 ≤ j ≤ d̂−1 }, and the zeroes of (1 − t^{q̂_1}) · · · · · (1 − t^{q̂_n}) are given by the set ∪_{k=1}^n { exp(2πi j/q̂_k) | 0 ≤ j ≤ q̂_k−1 }, counted with multiplicity. So, putting this together, the zeroes of the Poincaré series of g^t(x) are given by

{ exp(2πi j/d̂) | j ∈ Z } \ ( { exp(2πi j/d̂) | j ∈ Z } ∩ ∪_{k=1}^n { exp(2πi j/q̂_k) | j ∈ Z } )
= { exp(2πi b/d̂) | b ∈ D } \ ( { exp(2πi b/d̂) | b ∈ D } ∩ { exp(2πi a/d̂) | a ∈ A } )
= { exp(2πi β_j/d̂) | j = 1, . . . , u }

and the poles are given by the set

⊔_{k=1}^n { exp(2πi j/q̂_k) | j ∈ Z } \ ( { exp(2πi j/d̂) | j ∈ Z } ∩ ∪_{k=1}^n { exp(2πi j/q̂_k) | j ∈ Z } )
= { exp(2πi a/d̂) | a ∈ A } \ ( { exp(2πi b/d̂) | b ∈ D } ∩ { exp(2πi a/d̂) | a ∈ A } )
= { exp(2πi α_j/d̂) | j = 1, . . . , u },

where the disjoint union indicates that the poles in this set are counted with multiplicity.
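This cancellation of factors can be carried out on any of the examples by comparing multisets of the arguments b/d̂ of the roots of unity instead of the roots exp(2πi b/d̂) themselves. A sketch (our own verification, using the Z_11 family with dual weights (1, 4, 10, 15) and d̂ = 30; the explicit α and β values are computed from the definitions):

```python
from fractions import Fraction as F
from collections import Counter

dhat, dual_weights = 30, (1, 4, 10, 15)

# zeroes of the numerator 1 - t^dhat of p_{g^t}, as exponents j/dhat (mod 1)
num = Counter(F(j, dhat) for j in range(dhat))
# zeroes of the denominator, the product of (1 - t^{q}): exponents j/q (mod 1)
den = Counter(F(j, q) for q in dual_weights for j in range(q))

common = num & den            # multiset intersection = cancelled factors
zeros = sorted((num - common).elements())
poles = sorted((den - common).elements())

# the surviving exponents are exactly beta_j/dhat and alpha_j/dhat
assert zeros == [F(b, 30) for b in (1, 5, 7, 11, 13, 17, 19, 23, 25, 29)]
assert poles == [F(a, 30) for a in (0, 0, 0, 6, F(15, 2), 12, 15, 18, F(45, 2), 24)]
assert len(zeros) == len(poles) == 10   # the order u of the Z11 family
```

Note that the poles keep track of multiplicity: the triple 0 corresponds to the factor δ^3 in the Picard-Fuchs equation, and the non-integer entries 15/2 and 45/2 come from the dual weight 4, which does not divide d̂ = 30.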
Noticethat the notation n exp (cid:16) π i a b d (cid:17) | a ∈ A o , where A = F nk =1 (cid:16)(cid:16) Q k \ { b d } (cid:17) ∪ { } (cid:17) , is short for F nk =1 n exp (cid:16) π i a k b d (cid:17) | a k ∈ (cid:16) Q k \ { b d } (cid:17) ∪ { } o .In the above we can see clearly the relation between the zeroes of the Picard-Fuchs equationof f ( x ) for λ = 0 and λ = ∞ and the Poincaré series of g t ( x ) . We summarize this in thefollowing corollary. .5 Relations to the Poincaré series and monodromy Corollary 3.20. The zeroes of the Poincaré series of g t ( x ) are in − correspondence withthe zeroes of the Picard-Fuchs equation of f ( x ) for λ = ∞ or s = 0 and the poles of thePoincaré series of g t ( x ) are in − correspondence to the zeroes of the Picard-Fuchs equationof f ( x ) for λ = 0 or s = ∞ .Equivalently the same holds for the Picard-Fuchs equation of f t ( x ) = g t ( x ) + s Q x i and thePoincaré series of g ( x ) . Monodromy Now we want to explain why the roots of the Picard-Fuchs equation for λ = ( − s ) − b d = 0 and λ = ( − s ) − b d = ∞ are in 1-1 correspondence with the eigenvalues of the local monodromyaround and ∞ in the solution space of the Picard-Fuchs equation, i.e. the space of theperiod integrals. More precisely, the eigenvalues of the monodromy around and ∞ are equalto the poles and zeroes of the Poincaré series respectively. First we recall monodromy in thecontext of Picard-Fuchs equations in as much generality as we need. References for the relationbetween monodromy and the Picard-Fuchs equation are [CK99], [Mor92] and [Del70].In this subsection we will always regard the Picard-Fuchs equation in D = λ ∂∂λ , so we areworking with the differential equation (3.10) n Y i =1 b q b q i i n Y i =1 b q i − Y j =0 ( D − j b q i ) Y ℓ ∈ I ( D − ℓ b d ) − Φ − b d b d λ b d − Y j =0 ( D + j b d ) Y ℓ ∈ I ( D + ℓ b d ) − Φ . Due to [Del70] this Picard-Fuchs equation has only regular singular points. 
This can for example be seen from the fact that the Picard-Fuchs equation, written as

(3.11)  D^u Φ + Σ_{i=0}^{u−1} h_i(λ) D^i Φ = 0,

has coefficients h_i(λ) that are all holomorphic functions of λ. Now we can define the residue matrix for λ = 0.

Definition 3.21. Let ω_1, …, ω_u be a basis of the solution space of the Picard-Fuchs equation and define the connection matrix (Γ)_{ij} via D ω_i = Σ_j Γ_{ij} ω_j. Then the residue matrix is given by Res = Res_{λ=0}((Γ)_{ij}).

Remark 3.22. In the cases we consider (Γ)_{ij} has no poles at λ = 0, so the residue matrix is just given by Res = ((Γ)_{ij})|_{λ=0}.

Theorem 3.23. ([Del70]) The following relations between the residue matrix and the monodromy around λ = 0 in the solution space of the Picard-Fuchs equation hold.
(i) η is an eigenvalue of Res ⇔ exp(2πi η) is an eigenvalue of the monodromy.
(ii) exp(−2πi Res) is conjugate to the monodromy.
(iii) The monodromy is unipotent ⇔ Res is nilpotent.

We cannot be sure that ω, Dω, …, D^{u−1}ω, with ω a solution of the Picard-Fuchs equation, is a basis for the solution space, but we can easily write down the connection matrix for this basis:

Γ = ( 0        1        0        ⋯   0
      0        0        1        ⋯   0
      ⋮                 ⋱        ⋱   ⋮
      0        0        ⋯        0   1
      −h_0(λ)  −h_1(λ)  −h_2(λ)  ⋯   −h_{u−1}(λ) )

A theorem by Morrison gives a condition for these elements ω, Dω, …, D^{u−1}ω to be a basis of the solution space. The condition depends on the eigenvalues of the matrix Γ.

Theorem 3.24. ([Mor92]) Let D ω⃗(λ) = Γ ω⃗(λ) be a system of ordinary differential equations with a regular singular point at λ = 0. If distinct eigenvalues of Γ|_{λ=0} do not differ by integers, then ω_1, …, ω_u with ω⃗ = (ω_1, …, ω_u) is a basis for the solution space of the system of ordinary differential equations.

So we calculate the eigenvalues of Γ|_{λ=0}.
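For a concrete instance of Theorems 3.23 and 3.24, one can take the companion-form connection matrix of a second-order equation D²Φ + h₁(λ)DΦ + h₀(λ)Φ = 0, evaluate it at λ = 0, and exponentiate its eigenvalues. The sketch below uses a Gauss hypergeometric operator with arbitrary illustrative parameters, not values from the thesis:

```python
import numpy as np

# Gauss hypergeometric operator D(D + c - 1) - lam*(D + a)(D + b), rewritten as
# D^2 y + h1(lam) D y + h0(lam) y = 0; a, b, c are illustrative choices.
a, b, c = 0.5, 0.5, 0.25

def gamma(lam):
    h1 = ((c - 1) - lam * (a + b)) / (1 - lam)
    h0 = -lam * a * b / (1 - lam)
    # companion/connection matrix for the basis (y, D y)
    return np.array([[0.0, 1.0], [-h0, -h1]])

res = gamma(0.0)                  # Gamma is holomorphic at lam = 0, so Res = Gamma(0)
eta = np.linalg.eigvals(res)      # eigenvalues 0 and 1 - c; they differ by 0.75,
                                  # not an integer, so Theorem 3.24 applies
mono = np.exp(2j * np.pi * eta)   # eigenvalues of the local monodromy around lam = 0
print(sorted(eta.real.tolist()))  # approximately [0.0, 0.75]
print(mono)                       # exp(0) = 1 and exp(2*pi*i*(1 - c))
```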
For this purpose we only have to remember that equation (3.11), or equally equation (3.10), has the following solutions for λ = 0:

⊔_{i=1}^n { 0, 1/q̂_i, …, (q̂_i − 1)/q̂_i } \ ( { 0, 1/d̂, …, (d̂ − 1)/d̂ } ∩ ∪_{i=1}^n { 0, 1/q̂_i, …, (q̂_i − 1)/q̂_i } ).

This means that no distinct eigenvalues differ by an integer and therefore Γ|_{λ=0} = Res is the residue matrix by Theorem 3.24. In addition it follows from Theorem 3.23 that for every eigenvalue η of Γ we get an eigenvalue exp(2πi η) of the monodromy. So together with Corollary 3.20, we get the following statement.

Corollary 3.25. The poles of the Poincaré series of g^t(x) are the eigenvalues of the monodromy around λ = (−s)^{−d̂} = 0 in the solution space of the Picard-Fuchs equation of f(x) = g(x) + s ∏_i x_i, and the zeroes of the Poincaré series of g^t(x) are the eigenvalues of the monodromy around λ = (−s)^{−d̂} = ∞.

The second part of this statement is proved analogously to the first part, substituting only λ by λ^{−1}.

Remark 3.26. For the calculations in the last section this means that the eigenvalues of the monodromy around λ = 0 are given by the roots of χ_∞ and the eigenvalues of the monodromy around λ = ∞ are given by the roots of χ_0.

Remark 3.27. Notice that the monodromy around 0 and ∞ is not unipotent, but it is quasi-unipotent, i.e. a power of the monodromy is unipotent. This agrees with Theorem 2.3 in [Del70].

We want to mention that the points 0 and ∞ are not the only points with monodromy. At λ = ∏ q̂_i^{q̂_i} / d̂^{d̂} the Picard-Fuchs equation degenerates, and therefore we can consider monodromy around this point in the solution space as well. But the monodromy around this point is just a combination of the monodromy around the other two points. This can be seen from the fact that the parameter can be considered on a projective line (cf.
[Mor01]). Also we want to mention that the critical points of λ in the solution space of the Picard-Fuchs equation apart from λ = ∞ are in 1-1 correspondence with the critical values of f(x) in s. Namely, λ = (−s)^{−d̂} = 0 and λ = (−s)^{−d̂} = ∏ q̂_i^{q̂_i} / d̂^{d̂} are the critical values of f(x) in s.

Appendix A

Examples: Simple K3 singularities

In this first part of the appendix we state some famous examples. These were additional examples leading to Theorem 3.6.
The following examples were all calculated with the Griffiths-Dwork method using Singular. The code for these calculations can be found in the second part of the appendix. The polynomials given below are those from the list of 95 polynomials in [Yon90] that can be described as invertible polynomials by considering an involution on the corresponding hypersurface. The polynomials that can be achieved by the involution are due to personal communication with Noriko Yui and will be published soon in a joint paper with Yasuhiro Goto and Ron Livné [GLY].
In all the examples below Theorem 3.6 can be checked. This can easily be done by computing the dual weights and the dual degree and comparing them to the numbers appearing in the Picard-Fuchs equation we calculated.
The number in the first column of the table is the index given in the article of Yonemura [Yon90]. The second column contains the invertible polynomial that was stated as g(x_1, x_2, x_3, x_4) before. The third and fourth columns contain the weights of the invertible polynomial and the order of the Picard-Fuchs equation respectively. The order can also be calculated with the result of Theorem 2.8, which is a fast computation once the dual weights and the dual degree are known. Finally, in the last column the result of the calculations, namely the Picard-Fuchs equation of f(x_1, x_2, x_3, x_4) = g(x_1, x_2, x_3, x_4) + s x_1 x_2 x_3 x_4, is given, and this is exactly the formula given in Theorem 3.6.

Nr.
invertible polynomial | weights | order PF | Picard-Fuchs equation

[Table: one row for each polynomial from Yonemura's list, giving the invertible polynomial g(x_1, x_2, x_3, x_4), its weights (q_1, q_2, q_3, q_4), the order of the Picard-Fuchs equation, and the Picard-Fuchs equation of f = g + s x_1 x_2 x_3 x_4 in the operator δ = s ∂/∂s; each equation is a difference of two products of linear factors in δ, of the form given in Theorem 3.6.]

Appendix B

The Griffiths-Dwork method in Singular

Below we show the algorithm for the Griffiths-Dwork method in Singular, which can be used for calculating the Picard-Fuchs equation of a special one-parameter family associated to an arbitrary polynomial in C[w, x, y, z]. Of course it can easily be adjusted to a different number of variables, but because most of the examples in this thesis are K3 surfaces this is not necessary here. This method of calculation was used for all computations in this thesis unless the calculations are given explicitly. First we fix the notation:
Let g(x_1, x_2, x_3, x_4) be any polynomial defining a hypersurface in P(q_1, q_2, q_3, q_4). Then in this case the algorithm calculates the Picard-Fuchs equation of the one-parameter family f(x_1, x_2, x_3, x_4) = g(x_1, x_2, x_3, x_4) + s x_1 x_2 x_3 x_4 using the Griffiths-Dwork method.
But one can do it the same way for any other one-parameter family.
The output is the polynomial pf(x1), which is the Picard-Fuchs equation if one replaces the variable x1 by the differential operator δ = s ∂/∂s. The splitting of the summands of the Picard-Fuchs equation into linear factors has to be done by hand afterwards.

> ring r=(0,s),(x1,x2,x3,x4),wp(q1,q2,q3,q4); // the ring r is
// C(s)[x1,x2,x3,x4] with a weighted order
> LIB "general.lib";
> intvec q=(q1,q2,q3,q4); // the weight vector is defined
> poly f=g(x1,x2,x3,x4)+s*x1*x2*x3*x4; // defining the one-parameter family f
> ideal j=jacob(f); // defining the Jacobian ideal of the family
> ideal sj=std(j); // calculates a Gröbner basis of the Jacobian ideal
> int d=q[1]+q[2]+q[3]+q[4]; // this number is the degree of f
> ideal kb0=weightKB(sj,0,q); // here the basis of the Milnor ring
> ideal kb1=weightKB(sj,d,q); // in degree 0, d and 2d is calculated
> ideal kb2=weightKB(sj,2*d,q);
> list kl=kb0,kb1,kb2;
> int kn=ncols(kb0)+ncols(kb1)+ncols(kb2);
> matrix m[kn+1][kn]; // the matrix m stores the important
> m[1,1]=s; // information derived below
> for(int k=1;k<=kn;k++)
. {
. poly p=factorial(k)*(-1)^k*s^(k+1)*(x1*x2*x3*x4)^k; // These are the
. while(deg(p)>0) // polynomials that need to be written in the basis
. { // of the Milnor ring
. while(reduce(p,sj)==0) // If p is in the Jacobian ideal,
. { // use the Griffiths formula to reduce the degree
. poly h=0;
. ideal l=lift(j,p);
. for(int jj=1;jj<=4;jj++)
. {
. h=h+diff(l[jj],var(jj)); // this is the Griffiths formula
. }
. p=h*1/(deg(h)/d+1);
. if(deg(p)==0){break;}
. }
. if(deg(p)==0){break;}
. int u=deg(p)/d+1; // If p is not in the Jacobian ideal, calculate the degree to
. ideal kb=kl[u]; // use the correct degree of the Milnor ring and
. ideal li=lift(kb,reduce(p,sj)); // reduce with respect to the basis
. if(u==1) // In the following part we store in the matrix m the coefficients
. { // of the basis elements we have used so far
. m[k+1,1]=m[k+1,1]+li[1];
. }
. if(u==3)
. {
. m[k+1,kn]=m[k+1,kn]+li[1];
. }
. if(u==2)
. {
. for(int jl=1;jl<=ncols(kb);jl++)
. {
. m[k+1,jl+1]=m[k+1,jl+1]+li[jl];
. }
. }
. p=p-reduce(p,sj);
. }
. m[k+1,1]=m[k+1,1]+p;
. }
> matrix w[kn+1][kn+1]; // The matrix w consists of the coefficients of the
> for(int kk=1;kk<=kn+1;kk++) // partial derivatives of the holomorphic form
. { // omega we began with
. w[1,kk]=1;
. w[kk,kk]=1;
. if(kk>=3)
. {
. w[2,kk]=w[2,kk-1]+2^(kk-2);
. }
. for(int ll=3;ll<=kk-1;ll++)
. {
. w[ll,kk]=ll*w[ll,kk-1]+w[ll-1,kk-1];
. }
. }
> matrix en=transpose(w)*m; // the matrix en now contains the coefficients of
// the partial derivatives of omega in the basis of the Milnor ring
> module end=transpose(en);
> module ende=syz(end); // The module ende gives all linear relations between
// the partial derivatives
> poly pf=0;
> for(int lk=1;lk<=nrows(ende);lk++) // The coefficients of the Picard-Fuchs
. { // equation are put in a polynomial with variable x1
. pf=pf+ende[lk,1]*x1^(lk-1);
. }
> pf;

Bibliography

[AGZV85] V. I. Arnol'd, S. M. Gusein-Zade, and A. N. Varchenko. Singularities of differentiable maps. Vol. I, volume 82 of Monographs in Mathematics. Birkhäuser Boston Inc., Boston, MA, 1985. The classification of critical points, caustics and wave fronts. Translated from the Russian by Ian Porteous and Mark Reynolds.

[Arn75] V. I. Arnol'd. Critical points of smooth functions, and their normal forms. Uspehi Mat. Nauk, 30(5(185)):3–65, 1975.

[Bat94] Victor V. Batyrev. Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties. J. Algebraic Geom., 3(3):493–535, 1994.

[BB97] Victor V. Batyrev and Lev A. Borisov. Dual cones and mirror symmetry for generalized Calabi-Yau manifolds. In Mirror symmetry, II, volume 1 of AMS/IP Stud. Adv. Math., pages 71–86. Amer. Math. Soc., Providence, RI, 1997.

[BH92] Per Berglund and Tristan Hübsch. A generalized construction of mirror manifolds. In Essays on mirror manifolds, pages 388–407. Int. Press, Hong Kong, 1992.

[Bor10] Lev A. Borisov.
Berglund-Hübsch mirror symmetry via vertex algebras. preprint, arXiv:1007.2633v3, 2010.

[CG06] Alessio Corti and Vasily Golyshev. Hypergeometric equations and weighted projective spaces. preprint, arXiv:math/0607016v1, 2006.

[CK99] David A. Cox and Sheldon Katz. Mirror symmetry and algebraic geometry, volume 68 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1999.

[CR10] Alessandro Chiodo and Yongbin Ruan. LG/CY correspondence: The state space isomorphism. preprint, arXiv:0908.0908v2, 2010.

[CYY08] Yao-Han Chen, Yifan Yang, and Noriko Yui. Monodromy of Picard-Fuchs differential equations for Calabi-Yau threefolds. J. Reine Angew. Math., 616:167–203, 2008. With an appendix by Cord Erdenberger.

[Del70] Pierre Deligne. Équations différentielles à points singuliers réguliers, volume 163 of Lecture Notes in Math. Springer, Berlin, 1970.

[DK86] V. I. Danilov and A. G. Khovanskii. Newton polyhedra and an algorithm for calculating Hodge-Deligne numbers. Izv. Akad. Nauk SSSR Ser. Mat., 50(5):925–945, 1986.

[Dol82] Igor Dolgachev. Weighted projective varieties. In Group actions and vector fields (Vancouver, B.C., 1981), volume 956 of Lecture Notes in Math., pages 34–71. Springer, Berlin, 1982.

[Dol96] I. V. Dolgachev. Mirror symmetry for lattice polarized K3 surfaces. J. Math. Sci., 81(3):2599–2630, 1996.

[Dwo62] Bernard Dwork. On the zeta function of a hypersurface. Inst. Hautes Études Sci. Publ. Math., (12):5–68, 1962.

[Ebe99] Wolfgang Ebeling. Strange duality, mirror symmetry, and the Leech lattice. In Singularity theory (Liverpool, 1996), volume 263 of London Math. Soc. Lecture Note Ser., pages xv–xvi, 55–77. Cambridge Univ. Press, Cambridge, 1999.

[FJR09] Huijun Fan, Tyler Jarvis, and Yongbin Ruan. The Witten equation, mirror symmetry and quantum singularity theory. preprint, arXiv:0712.4021v3, 2009.

[GKZ90] I. M. Gel'fand, M. M. Kapranov, and A. V. Zelevinsky. Generalized Euler integrals and A-hypergeometric functions.
Adv. Math., 84(2):255–271, 1990.

[GKZ91] I. M. Gel'fand, M. M. Kapranov, and A. V. Zelevinsky. Hypergeometric functions, toric varieties and Newton polyhedra. In Special functions (Okayama, 1990), ICM-90 Satell. Conf. Proc., pages 104–121. Springer, Tokyo, 1991.

[GLY] Y. Goto, R. Livné, and N. Yui. The modularity of K3-fibered Calabi-Yau 3-folds with involution. to appear.

[Gri69] Phillip A. Griffiths. On the periods of certain rational integrals. I, II. Ann. of Math. (2), 90:460–495; ibid. (2), 90:496–541, 1969.

[GZK89] I. M. Gel'fand, A. V. Zelevinskii, and M. M. Kapranov. Hypergeometric functions and toric varieties. Funktsional. Anal. i Prilozhen., 23(2):12–26, 1989.

[GZK93] I. M. Gel'fand, A. V. Zelevinskii, and M. M. Kapranov. Correction to [GZK89]. Funktsional. Anal. i Prilozhen., 27(4):91, 1993.

[Hos98] Shinobu Hosono. GKZ systems, Gröbner fans, and moduli spaces of Calabi-Yau hypersurfaces. In Topological field theory, primitive forms and related topics (Kyoto, 1996), volume 160 of Progr. Math., pages 239–265. Birkhäuser Boston, Boston, MA, 1998.

[KPA+10] Marc Krawitz, Nathan Priddis, Pedro Acosta, Natalie Bergin, and Himal Rathnakumara. FJRW-rings and mirror symmetry. Comm. Math. Phys., 296(1):145–174, 2010.

[Kra09] Marc Krawitz. FJRW rings and Landau-Ginzburg mirror symmetry. preprint, arXiv:0906.0796, 2009.

[KS92] Maximilian Kreuzer and Harald Skarke. On the classification of quasihomogeneous functions. Comm. Math. Phys., 150(1):137–147, 1992.

[Mor92] David R. Morrison. Picard-Fuchs equations and mirror maps for hypersurfaces. In Essays on mirror manifolds, pages 241–264. Int. Press, Hong Kong, 1992.

[Mor01] David R. Morrison. Geometric aspects of mirror symmetry. In Mathematics unlimited—2001 and beyond, pages 899–918. Springer, Berlin, 2001.

[Sti07] Jan Stienstra. GKZ hypergeometric structures. In Arithmetic and geometry around hypergeometric functions, volume 260 of