Rational Solutions of the Painlevé-III Equation
THOMAS BOTHNER, PETER D. MILLER, AND YUE SHENG
ABSTRACT. All of the six Painlevé equations except the first have families of rational solutions, which are frequently important in applications. The third Painlevé equation in generic form depends on two parameters m and n, and it has rational solutions if and only if at least one of the parameters is an integer. We use known algebraic representations of the solutions to study numerically how the distributions of poles and zeros behave as n ∈ ℤ increases and how the patterns vary with m ∈ ℂ. This study suggests that it is reasonable to consider the rational solutions in the limit of large n ∈ ℤ with m ∈ ℂ being an auxiliary parameter. To analyze the rational solutions in this limit, algebraic techniques need to be supplemented by analytical ones, and the main new contribution of this paper is to develop a Riemann-Hilbert representation of the rational solutions of Painlevé-III that is amenable to asymptotic analysis. Assuming further that m is a half-integer, we derive from the Riemann-Hilbert representation a finite-dimensional Hankel system for the rational solution in which n ∈ ℤ appears as an explicit parameter.
1. INTRODUCTION
This paper is the first in a series concerned with the large degree asymptotic analysis of rational solutions u_n(x;m) to the generic Painlevé-III equation parametrized by n ∈ ℤ and m ∈ ℂ. The six Painlevé equations are best known for their transcendental solutions, and indeed their general solutions are frequently referred to as Painlevé transcendents. These transcendental solutions are modern special functions that have appeared in numerous applications, most famously in similarity solutions of nonlinear partial differential equations and in integrable probability. However, all of the Painlevé equations except the first are actually families of ordinary differential equations indexed by complex parameters, and it is well-known that if the parameters take on certain special values, then the Painlevé equation admits particular solutions that are either finitely constructed from elementary special functions or rational functions. For example, the Painlevé-II equation u″ = 2u³ + xu + m has a complex parameter m, and it is elementary that if m = 0 then the equation admits the trivial rational solution u(x) ≡ 0. With this solution in hand for m = 0, one can apply the Bäcklund transformation

    u(x) ↦ û(x) := −u(x) − (2m + 1)/(2u(x)² + 2u′(x) + x)

taking a solution of the equation with parameter m into another solution of the same equation but with parameter m ↦ m̂ := m + 1. The Bäcklund transformation obviously preserves rationality, and with its help one quickly obtains a rational solution of the Painlevé-II equation for each integer value of m. It turns out that the integer values of m are the only ones for which the equation admits a rational solution, and for each m ∈ ℤ there is exactly one rational solution, denoted u_m(x), m ∈ ℤ.
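As a quick illustration of how this iteration produces the whole family, the following sketch (our own illustrative script, not from the paper; sympy assumed) applies the Painlevé-II Bäcklund transformation repeatedly starting from the seed u ≡ 0 and verifies each iterate against the equation:

```python
import sympy as sp

x = sp.symbols('x')

def backlund_p2(u, m):
    # Painleve-II Backlund map: u -> -u - (2m+1)/(2u^2 + 2u' + x), m -> m+1
    return sp.cancel(-u - (2*m + 1)/(2*u**2 + 2*sp.diff(u, x) + x))

sols = {0: sp.Integer(0)}            # seed: u = 0 solves Painleve-II with m = 0
for m in range(3):
    sols[m + 1] = backlund_p2(sols[m], m)

# each iterate satisfies u'' = 2u^3 + x u + m identically
for m, u in sols.items():
    assert sp.simplify(sp.diff(u, x, 2) - 2*u**3 - x*u - m) == 0
```

For instance, the m = 1 iterate is u(x) = −1/x, and the degrees of the numerator and denominator grow with m, which already hints at why direct iteration becomes unwieldy for asymptotic purposes.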
Motivated by applications, the family of functions {u_m(·)}_{m∈ℤ} has recently been studied from the analytic perspective, i.e., from the point of view of asymptotic analysis in the limit of large integer m [2, 4, 5, 17].

1.1. The Painlevé-III equation, its symmetries and its rational solutions.
The generic Painlevé-III equation

    d²u/dx² = (1/u)(du/dx)² − (1/x)(du/dx) + (4Θ₀u² + 4(1 − Θ∞))/x + 4u³ − 4u⁻¹   (1.1)

is the simplest of the Painlevé equations having a fixed singular point (x = 0), and it involves two distinct complex parameters Θ₀ and Θ∞. As we shall see, both of these features introduce new phenomena into the behavior of even the most elementary, rational solutions.

In order to study the rational solutions of (1.1), it will be convenient to represent the constant parameters Θ₀ and Θ∞ in the form

    Θ₀ = n + m and Θ∞ = m − n + 1.   (1.2)

Equation (1.1) has many symmetries, including the following elementary ones:
∙ Inversion: if u(x) satisfies (1.1)–(1.2), then u(x) ↦ I[u](x) := 1/u(x) satisfies (1.1) with modified parameters I: Θ₀ ↦ Θ∞ − 1 = m − n and I: Θ∞ ↦ Θ₀ + 1 = m + n + 1 (corresponding to changing the sign of n while holding m fixed). The mapping I: (u(x), Θ₀, Θ∞) ↦ (1/u(x), Θ∞ − 1, Θ₀ + 1) is an involution.
∙ Rotation: if u(x) satisfies (1.1)–(1.2), then u(·) ↦ R[u](x) := −iu(−ix) satisfies (1.1) with modified parameters R: Θ₀ ↦ Θ̃₀ = n + m and R: Θ∞ ↦ Θ̃∞ = n − m + 1 (corresponding to swapping m and n). The mapping R: (u(x), Θ₀, Θ∞) ↦ (−iu(−ix), Θ̃₀, Θ̃∞) is the generator of a cyclic symmetry group of order 4.

Date: January 16, 2018.
2010 Mathematics Subject Classification. Primary 34M55; Secondary 34M35, 34E05.
Key words and phrases. Painlevé-III equation, rational solutions, isomonodromy method, Riemann-Hilbert problem, large degree asymptotics.
TB acknowledges support by the AMS and the Simons Foundation through a travel grant, and the work of PDM is supported by the National Science Foundation under grant DMS-1513054. The authors are grateful to P. Clarkson, A. Its, C.-K. Law and W. Van Assche for useful conversations.
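Both elementary symmetries are easy to confirm on a concrete example. The sketch below (our own check, assuming sympy, and using the n = 1 rational solution u₁(x;m) = (4x + 2m − 1)/(4x + 2m + 1) that results from the scheme of this section) verifies that I and R transform a solution of (1.1) exactly as claimed:

```python
import sympy as sp

x, m = sp.symbols('x m')

def p3_residual(u, t0, ti):
    # residual of the Painleve-III equation (1.1) with Theta_0 = t0, Theta_inf = ti
    res = (sp.diff(u, x, 2) - sp.diff(u, x)**2/u + sp.diff(u, x)/x
           - (4*t0*u**2 + 4*(1 - ti))/x - 4*u**3 + 4/u)
    return sp.cancel(sp.together(res))

# u_1(x;m) = (4x + 2m - 1)/(4x + 2m + 1): the n = 1 member of the family (1.7)
u1 = (4*x + 2*m - 1)/(4*x + 2*m + 1)
n = 1
t0, ti = n + m, m - n + 1
assert p3_residual(u1, t0, ti) == 0

# Inversion I: 1/u solves (1.1) with parameters (Theta_inf - 1, Theta_0 + 1)
assert p3_residual(1/u1, ti - 1, t0 + 1) == 0

# Rotation R: -i u(-i x) solves (1.1) with m and n swapped
uR = sp.cancel(-sp.I*u1.subs(x, -sp.I*x))
assert p3_residual(uR, n + m, n - m + 1) == 0
```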
Note that R² fixes the parameters (Θ₀, Θ∞) in (1.1) but maps the solution u(x) to its odd reflection −u(−x). A nontrivial symmetry is the following Bäcklund transformation u(x) ↦ û(x), which was discovered by Gromak [13]: the function

    û(x) := (xu′(x) + 2xu(x)² + 2x − 2(1 − Θ∞)u(x) − u(x)) / (u(x)(xu′(x) + 2xu(x)² + 2x + 2Θ₀u(x) + u(x)))   (1.3)

solves (1.1) for modified parameters Θ₀ ↦ Θ̂₀ := Θ₀ + 1 = (n + 1) + m and Θ∞ ↦ Θ̂∞ := Θ∞ − 1 = m − (n + 1) + 1, which amounts to incrementing n for fixed m.

Proposition 1.
Suppose that (1.1) has a solution u(x) that is rational. Then either m ∈ ℤ or n ∈ ℤ or both.

Proof. Indeed, assuming u(x) = ax^p + O(x^{p−1}) as x → ∞ for p ∈ ℤ and a ≠ 0, from (1.1) we obtain a dominant balance only for p = 0, yielding (from the last two terms on the right-hand side) a⁴ = 1. Continuing the Laurent expansion to the next order by writing u(x) = a + bx⁻¹ + O(x⁻²) as x → ∞ with a⁴ = 1, the calculation of b only brings in the remaining terms in (1.1) that are not proportional to derivatives of u, and we find b = a²(Θ∞ − 1)/4 − Θ₀/4. Therefore, the sum of all finite residues of the assumed rational solution u(x) must equal b as well. If x = 0 is a pole of u(x), then a similar dominant balance argument involving the terms u″(x), u′(x)²/u(x), u′(x)/x, u(x)²/x, and u(x)³ shows that it must be a simple pole of residue −Θ₀. Finally, if x₀ ≠ 0 is a pole of u(x), then it must be a simple pole, and a dominant balance involving u″(x), u′(x)²/u(x), and u(x)³ shows that the residue is either ½ or −½. Letting k ∈ ℤ denote the difference between the number of nonzero poles of u(x) with residues ½ and −½, we therefore arrive at the identities

    k/2 ∓ ¼(Θ∞ − 1) + ¼Θ₀ = { Θ₀, if x = 0 is a pole of u; 0, if x = 0 is not a pole of u,   (1.4)

where a² = ±1. Using (1.2) then shows that, if x = 0 is not a pole of u, then a² = 1 implies n = −k ∈ ℤ, while a² = −1 implies m = −k ∈ ℤ. On the other hand, if x = 0 is a pole of u, then by inversion symmetry I[u](x) = 1/u(x) is a rational solution of (1.1) analytic at the origin and corresponding to the modified parameters I: Θ₀ ↦ m − n and I: Θ∞ ↦ m + n + 1. Applying (1.4) to I[u] with parameters replaced by their modified values then yields the same conclusion as in the case that u is analytic at the origin, namely that n ∈ ℤ if a² = 1 and m ∈ ℤ if a² = −1.
□

This argument shows that each rational solution of (1.1) tends to one of four nonzero limits, ±1 or ±i, as x → ∞, and hence cannot be an odd function of x. Furthermore, it follows from the odd reflection symmetry R²: u(x) ↦ −u(−x) that for given parameters (1.2) with m ∈ ℤ or n ∈ ℤ, the rational solutions come in distinct pairs permuted by odd reflection.

(In the most general form of the Painlevé-III equation one replaces the terms 4u³ − 4u⁻¹ on the right-hand side by γu³ + δu⁻¹ for arbitrary parameters (γ, δ) ∈ ℂ². Under the generic assumption that γδ ≠ 0, a suitable rescaling of the dependent and independent variables results in the form (1.1). There are two singular reductions: one in which either γ = 0 or δ = 0 but not both, which can be reduced by scaling to a one-parameter family of equations (or, in the more special case that either Θ₀ or Θ∞ vanishes, to an equation whose general solution is known in closed form), and one in which γ = δ = 0, which can be reduced by scaling to a unique form if Θ₀(1 − Θ∞) ≠ 0. See [21, §32.2.2] and [12, Section 2.2].)

It turns out that if m ∈ ℤ or n ∈ ℤ there indeed exists a rational solution of (1.1)–(1.2). If only one of m and n is integral, then there are exactly two rational solutions, while if both are integral there are exactly four rational solutions. The existence and precise number of the rational solutions can be established by iterated Bäcklund transformations once the cases of m = 0 or n = 0 are analyzed. Suppose n = 0 and m ∉ ℤ. Then it is obvious that (1.1)–(1.2) has at least the two distinct rational (equilibrium) solutions u(x) ≡ ±1. It is easy to see that there are no other rational solutions in this case.
Indeed, if we consider the rational solutions that tend to ±1 as x → ∞ and take n = 0 in (1.1)–(1.2), a simple dominant balance argument shows that these solutions satisfy u(x) = ±1 + O(x^{−p}) as x → ∞ for every positive integer p; hence, as u(x) is rational, the error terms vanish identically, so the exact solutions u(x) ≡ ±1 are the only ones recovered. On the other hand, if we consider the rational solutions that tend to ±i as x → ∞ and take n = 0 in (1.4), we find that for some k ∈ ℤ we have m = k if x = 0 is a pole of u and m = −k otherwise, both of which contradict the assumption that m ∉ ℤ. Similarly, if m = 0 and n ∉ ℤ, then (1.1)–(1.2) has the pair u(x) ≡ ±i as its only rational solutions (this also follows directly using the rotation symmetry generator R). Finally, if m = n = 0 there are precisely four rational solutions: u(x) ≡ ±1 and u(x) ≡ ±i. In Section 5.3 we use these facts to determine the precise number of rational solutions of (1.1) for non-integral m.

The rational solutions of (1.1) have been known at least since the paper of Gromak [13]. The paper [19] is an exhaustive survey of special solutions of the Painlevé-III equation that describes the effect of iterating transformations such as (1.3), including cataloguing the exact numbers of poles and zeros of the iterates. This paper also includes complete references on applications of the Painlevé-III equation accurate to the date of publication. Since rational functions are naturally presented as ratios of polynomials, it is compelling to ask whether the polynomials themselves have a simple recurrence formula like (1.3). Such a result was first found for the Painlevé-II equation by Yablonskii [25] and Vorob'ev [23], and since then many algebraic representations of these polynomials have been discovered. For the Painlevé-III equation, a representation of rational solutions in terms of special polynomials was first obtained by Umemura [22, Section 9].
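The Umemura-type scheme recorded next in (1.5)–(1.7) is straightforward to implement in a computer algebra system, and one can then confirm independently that the resulting rational functions really do satisfy (1.1). A sketch of such a check (our own, assuming sympy):

```python
import sympy as sp

x, m = sp.symbols('x m')

def s_list(N, mu):
    # s_{-1} = s_0 = 1 and the recurrence (1.6); each quotient is a polynomial
    s = [sp.Integer(1), sp.Integer(1)]
    for _ in range(N):
        sn, sprev = s[-1], s[-2]
        num = ((4*x + 2*mu + 1)*sn**2 - sn*sp.diff(sn, x)
               - x*(sn*sp.diff(sn, x, 2) - sp.diff(sn, x)**2))
        s.append(sp.cancel(num/sprev))
    return s[1:]                                  # s_0, ..., s_N

def u_n(n, mu):
    # the rational solution (1.7)
    A, B = s_list(n, mu - 1), s_list(n, mu)
    return sp.cancel(A[n]*B[n - 1]/(B[n]*A[n - 1]))

# the first iterate in closed form
assert sp.cancel(u_n(1, m) - (4*x + 2*m - 1)/(4*x + 2*m + 1)) == 0

# each u_n solves (1.1) with Theta_0 = n + m, Theta_inf = m - n + 1
for n in [1, 2]:
    u = u_n(n, m)
    t0, ti = n + m, m - n + 1
    res = (sp.diff(u, x, 2) - sp.diff(u, x)**2/u + sp.diff(u, x)/x
           - (4*t0*u**2 + 4*(1 - ti))/x - 4*u**3 + 4/u)
    assert sp.cancel(sp.together(res)) == 0
```

The same few lines also exhibit the cancellation claim made after (1.6): sp.cancel returns a polynomial at every step of the recurrence.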
Clarkson further developed Umemura's scheme; in [7] a sequence of functions is defined by setting

    s₋₁(x;m) ≡ s₀(x;m) ≡ 1   (1.5)

and then using the recurrence relation

    s_{n+1}(x;m) := [(4x + 2m + 1)s_n(x;m)² − s_n(x;m)s′_n(x;m) − x(s_n(x;m)s″_n(x;m) − s′_n(x;m)²)] / s_{n−1}(x;m),  n ∈ ℤ≥0.   (1.6)

It turns out that the denominator is always a factor of the numerator, so the functions {s_n(x;m)}_{n=0}^∞ are all polynomials in x. Note that, comparing with the notation of [7, 8], we have μ = m + ½, z = 2x, β = 2(1 − Θ∞), and α = 2Θ₀. The result of the scheme is the following.

Proposition 2 (Umemura [22], Clarkson [7], Clarkson, Law, and Lin [8]). The result of applying the Bäcklund transformation (1.3) n times to the seed solution u(x) ≡ 1 is the function

    u(x) = u_n(x;m) := s_n(x;m−1)s_{n−1}(x;m) / (s_n(x;m)s_{n−1}(x;m−1)),  n ∈ ℤ≥0,   (1.7)

defined in terms of polynomials {s_n(x;m)}_{n=0}^∞ determined by (1.5)–(1.6). Furthermore, u_n(x;m) is the unique rational solution of (1.1) for parameters (1.2) for which u_n(x;m) → 1 as x → ∞.

The family of rational solutions u_n(x;m) can be extended to negative integral values of n through the inversion symmetry I:

    u_{−n}(x;m) := I[u_n](x;m) = 1/u_n(x;m),  n ∈ ℤ≥0.   (1.8)

It obviously holds that u_{−n}(x;m) → 1 as x → ∞, so the family captures every rational solution of the Painlevé-III equation (1.1) that tends to 1 as x → ∞. It is clearly sufficient to study the family for integers n ≥ 0. (Taking n = 0 in (1.1)–(1.2) yields the so-called sine-Gordon reduction: writing u(x) = e^{−iφ(x)} and setting n = 0 in (1.1)–(1.2) gives d²φ/dx² + (1/x)(dφ/dx) = (8m/x)sin(φ) + 8 sin(2φ).) Without loss of generality we may also restrict attention to values of m in the closed right half-plane Re(m) ≥ 0; indeed, composing inversion I with two rotations,

    u_n(x;−m) = R∘I∘R[u_n](x;m) = 1/u_n(−x;m).
(1.9)

Moreover, unless m ∈ ℤ, studying the family {u_n(x;m)} of rational solutions tending to 1 as x → ∞ captures all rational solutions of (1.1), because R²[u_n](x;m) = −u_n(−x;m) is the rational solution of exactly the same Painlevé-III equation (1.1) tending to −1 as x → ∞. If both n and m are integers, we may invoke the rotation symmetry generator R to finally exhaust all rational solutions of (1.1).

Remark 1.
It has been proven by Clarkson, Law, and Lin [8, Theorem 4.6] that if m + ½ ∈ ℤ, then for n > |m + ½|, s_n has n(n+1)/2 roots, s_n vanishes to order (n − |m + ½|)(n − |m + ½| + 1)/2 at the origin, and all remaining roots are simple and nonzero. This shows that when m is a half-integer and n is large, s_n has a root of order O(n²) at the origin and merely O(n) simple nonzero roots. This result implies that when m = ½, 3/2, 5/2, …, u_n(x;m) has a simple zero at the origin, while when m = −½, −3/2, −5/2, …, u_n(x;m) has a simple pole at the origin.

1.2. Riemann-Hilbert problem formulation and main result.
The purpose of this paper is to take the first steps toward understanding the family {u_n(x;m)}_{n=0}^∞ of rational solutions of the Painlevé-III equation (1.1) from the perspective of mathematical analysis, a goal which essentially begs the question of how u_n(x;m) behaves when n is large and how the result depends on (x, m) ∈ ℂ². In Section 2 we present the results of several plots of poles and zeros of u_n(x;m), set in the context of a formal scaling analysis of the Painlevé-III equation in the limit of large (integral) n. These results suggest numerous remarkable phenomena that can occur in this limit, but whose proofs would require other methods. The issue at hand is that the methods described above for constructing the rational function u_n(x;m) all involve some sort of iteration, producing formulæ that generally become more complicated as n increases. The recurrence (1.6) is preferable to iteration of the Bäcklund transformation (1.3) in the sense that it takes advantage of explicit factorization of the numerator and denominator polynomials in the rational function u_n(x;m), but it is a recurrence nonetheless. Kajiwara and Masuda [16] found a way to express (essentially) the polynomial s_n(x;m) in closed form via Wronskian determinants of polynomials obtained from an elementary generating function. However, unlike certain determinantal representations of Hankel type appearing in the theory of the rational solutions of the Painlevé-II [2] and (for the "generalized Hermite" rational solutions) Painlevé-IV [6] equations, the determinants of Kajiwara and Masuda do not appear to be amenable to asymptotic analysis in the limit of large n (in which the size of the determinant grows without bound). The lack of an analytically tractable formula for u_n(x;m) is the main problem that we address and solve in this paper.
After a review of the isomonodromy theory of the Painlevé-III equation in Section 3, in Sections 4 and 5 we construct a Riemann-Hilbert representation of the function u_n(x;m) that can be used [3] to successfully analyze the rational solution for large n. To formulate this problem here in the introduction, given a nonzero x ∈ ℂ with −π < Arg(x) < π, let L = L_⬔^∞ ∪ L_⬔^0 ∪ L_⬕^∞ ∪ L_⬕^0 be a contour in the complex λ-plane consisting of four arcs with the following properties. There is an intersection point p such that:
∙ L_⬔^∞ originates from λ = ∞ in such a direction that ixλ is negative real and terminates at λ = p, L_⬔^0 begins at λ = p and terminates at λ = 0 in a direction such that −ixλ⁻¹ is negative real, and the net increment of the argument of λ along L_⬔^∞ ∪ L_⬔^0 is

    Δarg(L_⬔) = 2Arg(x) − 2π sgn(Im(x)).   (1.10)

∙ L_⬕^∞ originates from λ = ∞ in such a direction that −ixλ is negative real and terminates at λ = p, L_⬕^0 begins at λ = p and terminates at λ = 0 in a direction such that ixλ⁻¹ is negative real, and the net increment of the argument of λ along L_⬕^∞ ∪ L_⬕^0 is

    Δarg(L_⬕) = 2Arg(x).   (1.11)

∙ The arcs L_⬔^∞, L_⬔^0, L_⬕^∞, and L_⬕^0 do not otherwise intersect.
See Figure 14 below for an illustration. Consider now the following problem.

Riemann-Hilbert Problem 1.
Given parameters m ∈ ℂ and n ∈ ℤ, as well as x ∈ ℂ∖{0} with −π < Arg(x) < π, let L denote an x-dependent contour as above, and seek a matrix function 𝐘(λ) = 𝐘^{(n)}(λ; x, m) with the following properties:
(1) Analyticity: 𝐘(λ) is analytic in λ in the domain λ ∈ ℂ∖L. It takes continuous boundary values on L∖{0} from each maximal domain of analyticity.
(2) Jump conditions:
The boundary values 𝐘±(λ) are related on each arc of L by the following formulæ (matrices written row-wise as [ a , b ; c , d ]):

    𝐘₊(λ) = 𝐘₋(λ) [ 1 , √(2π)λ_⬕^{−(m+1)}λ^n e^{ix(λ−λ⁻¹)}/Γ(½ − m) ; 0 , 1 ],  λ ∈ L_⬔^0   (1.12)

    𝐘₊(λ) = 𝐘₋(λ) [ 1 , √(2π)λ_⬕^{−(m+1)}λ^n e^{ix(λ−λ⁻¹)}/Γ(½ − m) ; 0 , 1 ],  λ ∈ L_⬔^∞   (1.13)

    𝐘₊(λ) = 𝐘₋(λ) [ 1 , 0 ; √(2π)(λ_⬕^{(m+1)/2})₊(λ_⬕^{(m+1)/2})₋ λ^{−n} e^{−ix(λ−λ⁻¹)}/Γ(½ + m) , 1 ],  λ ∈ L_⬕^∞   (1.14)

    𝐘₊(λ) = 𝐘₋(λ) [ −e^{2πim} , 0 ; √(2π)(λ_⬕^{(m+1)/2})₊(λ_⬕^{(m+1)/2})₋ λ^{−n} e^{−ix(λ−λ⁻¹)}/Γ(½ + m) , −e^{−2πim} ],  λ ∈ L_⬕^0.   (1.15)

(3) Asymptotics: 𝐘(λ) → 𝕀 as λ → ∞. Also, the matrix function 𝐘(λ)λ_⬕^{−(Θ₀+Θ∞)σ₃/2} = 𝐘(λ)λ_⬕^{−(m+½)σ₃} has a well-defined limit as λ → 0 (the same limit from each side of L).

Here, λ_⬕^p is notation for a certain well-defined (see Section 4.2 below) branch of the power function with its branch cut on the contour L_⬕^0 ∪ L_⬕^∞, σ₃ := diag[1, −1] denotes a standard Pauli spin matrix, and subscripts +/− refer to boundary values taken on the indicated contour from the left/right. We introduce the expansions

    𝐘(λ) = 𝕀 + 𝐘₁^∞(x)λ⁻¹ + O(λ⁻²), λ → ∞;  𝐘₁^∞(x) = [Y^∞_{1,jk}(x)]_{j,k=1}^2   (1.16)

and

    𝐘(λ)λ_⬕^{−(m+½)σ₃} = 𝐘⁰(x) + O(λ), λ → 0;  𝐘⁰(x) = [Y⁰_{jk}(x)]_{j,k=1}^2.   (1.17)

Note that the matrix coefficients 𝐘₁^∞(x) and 𝐘⁰(x) depend parametrically on both n and m, as well as x. Then we have the following result.

Theorem 1.
The rational solution u_n(x;m) of the Painlevé-III equation (1.1), with parameters m and n ∈ ℤ, defined in Proposition 2 and extended to negative integral n by inversion I, is given equivalently in terms of the solution 𝐘^{(n)}(λ; x, m) of Riemann-Hilbert Problem 1 by

    u_n(x;m) = −i Y^∞_{1,12}(x) / (Y⁰_{11}(x)Y⁰_{12}(x)),   (1.18)

where we have suppressed the parametric dependence on n ∈ ℤ and m ∈ ℂ on the right-hand side.

The proof of this theorem will be completed at the end of Section 5. Finally, in Section 6 we study how the Riemann-Hilbert representation degenerates when m ∈ ℤ + ½.

2. NUMERICAL OBSERVATIONS AND FORMAL SCALING THEORY
2.1. Scaling analysis.
Eliminating Θ₀ and Θ∞ in favor of m and n by (1.2), the Painlevé-III equation (1.1) becomes

    d²u/dx² = (1/u)(du/dx)² − (1/x)(du/dx) + (4(n + m)u² + 4(n − m))/x + 4u³ − 4u⁻¹.   (2.1)

Considering m fixed and n large, we introduce a new independent variable by the scaling x = ny, and then to further zoom in on the neighborhood of a particular point y₀ we set y = y₀ + w/n. A simple calculation then shows that if we set p(w) := −iu(x) = −iu(ny₀ + w), (2.1) becomes

    d²p/dw² = (1/p)(dp/dw)² + 4iy₀⁻¹(p² − 1) − 4p³ + 4p⁻¹ + O(n⁻¹),

where the final term combines several others, all of which are proportional to n⁻¹. Neglecting this formally small term and replacing p with the symbol ṗ indicating a formal approximation yields an autonomous nonlinear equation parametrized by y₀ ∈ ℂ∖{0}:

    d²ṗ/dw² = (1/ṗ)(dṗ/dw)² + 4iy₀⁻¹(ṗ² − 1) − 4ṗ³ + 4ṗ⁻¹.   (2.2)

This model equation admits a first integral: multiply (2.2) through by ṗ′/ṗ² (′ = d/dw) and rearrange to obtain

    ṗ′ṗ″/ṗ² − (ṗ′)³/ṗ³ = 4[iy₀⁻¹(1 − ṗ⁻²) − ṗ + ṗ⁻³]ṗ′,

which is easily integrated to yield

    (ṗ′)²/(2ṗ²) = 4[iy₀⁻¹(ṗ + ṗ⁻¹) − ½ṗ² − ½ṗ⁻²] + 8Cy₀⁻²,

where C is a constant of integration. Therefore,

    (dṗ/dw)² = (16/y₀²)P(ṗ; y₀, C),  P(ṗ; y₀, C) := −(y₀²/4)ṗ⁴ + (iy₀/2)ṗ³ + Cṗ² + (iy₀/2)ṗ − y₀²/4.   (2.3)

Suppose that y₀ and C are such that the quartic P(ṗ; y₀, C) has a double root ṗ = p₀; eliminating C between the equations P(p₀; y₀, C) = 0 and P′(p₀; y₀, C) = 0 shows that p₀ is a solution of the quartic equation

    y₀p₀⁴ − ip₀³ + ip₀ − y₀ = 0.   (2.4)

Obviously, p₀² − 1 is a factor of the left-hand side: y₀p₀⁴ − ip₀³ + ip₀ − y₀ = (p₀² − 1)(y₀(p₀² + 1) − ip₀), so there are four possibilities for double roots of P(ṗ; y₀, C), namely:

    p₀ = 1,  p₀ = −1,  p₀ = p₀⁺(y₀) := (i/(2y₀))(1 − √(4y₀² + 1)),  p₀ = p₀⁻(y₀) := (i/(2y₀))(1 + √(4y₀² + 1)).
(2.5)

Note that since the quartic equation (2.4) is the same equation as arises upon setting ṗ = p₀ and neglecting derivatives of ṗ in (2.2), the four values (2.5) are precisely the equilibrium solutions of the differential equation (2.2). The corresponding values of C are then obtained explicitly from the equation P′(p₀; y₀, C) = 0, which is linear in C (and the coefficient of C is nonzero in each case):

    C = −iy₀/(4p₀) − (3iy₀/4)p₀ + (y₀²/2)p₀².   (2.6)

Thus, whenever C is given by (2.6) and p₀ is a root of the quartic equation (2.4) (equivalently, an equilibrium solution of (2.2)),

    P(ṗ; y₀, C) = −(y₀²/4)(ṗ − p₀)²(ṗ² + bṗ + c),  where b := 2p₀ − 2i/y₀, c := 1/p₀².

For each fixed (y₀, C) pair, the root locus of P(ṗ; y₀, C) is invariant under ṗ ↦ ṗ⁻¹. Since ±1 are individually fixed by this involution while the other two possible double roots listed in (2.5) are permuted by it, we see that if there exists a double root distinct from 1 or −1, then there are two distinct double roots, and hence P(ṗ; y₀, C) factors as a perfect square of a quadratic with distinct roots. If one of the points ±1 is a double root, then either all four roots coincide, the two remaining roots coalesce at ∓1, or the two remaining roots are distinct simple roots that are permuted by the involution.
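All of the algebra surrounding the model equation, its first integral, and the double-root analysis in (2.2)–(2.6) can be confirmed symbolically. The following sketch (our own check, assuming sympy) verifies the chain of identities, including the four-fold degeneration at the corner value y₀ = i/2 discussed below:

```python
import sympy as sp

p, y0, C, q = sp.symbols('p y0 C q')     # q stands for (dp/dw)^2

P = -y0**2/4*p**4 + sp.I*y0/2*p**3 + C*p**2 + sp.I*y0/2*p - y0**2/4

# (a) differentiating (p')^2 = (16/y0^2) P(p) reproduces the model equation (2.2)
rhs = q/p + 4*sp.I/y0*(p**2 - 1) - 4*p**3 + 4/p
assert sp.simplify(8/y0**2*sp.diff(P, p) - rhs.subs(q, 16*P/y0**2)) == 0

# (b) eliminating C between P = 0 and P' = 0 yields the quartic (2.4)
Csol = sp.solve(sp.diff(P, p), C)[0]     # P' is linear in C, cf. (2.6)
quartic = y0*p**4 - sp.I*p**3 + sp.I*p - y0
assert sp.simplify(P.subs(C, Csol) - y0/4*quartic) == 0

# (c) the factorization of (2.4) and its two roots p0^{+-} besides +-1
assert sp.expand((p**2 - 1)*(y0*(p**2 + 1) - sp.I*p) - quartic) == 0
for sign in (-1, 1):
    p0 = sp.I/(2*y0)*(1 + sign*sp.sqrt(4*y0**2 + 1))
    assert sp.simplify(y0*(p0**2 + 1) - sp.I*p0) == 0

# (d) at the corner y0 = i/2, P acquires a single four-fold root at p = 1
corner = P.subs(C, Csol.subs(p, 1)).subs(y0, sp.I/2)
assert sp.expand(corner - sp.Rational(1, 16)*(p - 1)**4) == 0
```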
To begin to assess the validity of predictions following from the above formallarge- 𝑛 scaling arguments, we may try to examine a finite number of the functions 𝑢 𝑛 ( 𝑥 ; 𝑚 ) , say for 𝑛 = 0 , , , … , 𝑁 ,and plot their poles and zeros in 𝑥 . Since according to Proposition 2, 𝑢 𝑛 ( 𝑥 ; 𝑚 ) → as 𝑥 → ∞ and 𝑢 𝑛 ( 𝑥 ; 𝑚 ) is rational in 𝑥 with simple poles and zeros only, such plots actually convey complete information. In practice, it is substantially moreefficient for large 𝑛 to implement the polynomial recurrence scheme of Umemura/Clarkson than to directly iterate theBäcklund transformation (1.3). Therefore, we symbolically compute a sufficient number of the polynomials 𝑠 𝑛 , whichhave coefficients rational in 𝑚 . Then by using rational values for the real and imaginary parts of 𝑚 , we may applythe Mathematica routine NSolve with the option
WorkingPrecision->30 to obtain accurate approximations of the roots. We then plot separately the roots of the four polynomial factors in the representation (1.7). As long as the roots of the factors are simple and distinct, no information is lost in making such a plot; this is known to be the case [7, 8] unless m ∈ ℤ + ½, in which case for large enough n there is a common root of high order at the origin in all four factors, leading to a high degree of cancellation. We restrict our numerical calculations of poles and zeros to nonnegative values of n and to Re(m) ≥ 0 without loss of generality; compare (1.8) and (1.9).

Since the scaling formalism is based at first on the scaling x = ny, it is useful to initially view the plots of poles/zeros of u_n(x;m) in the y-plane. Figures 1–4 study the convergence properties of the pole/zero patterns in the y-plane as n increases for several values of m ∈ ℂ. The key feature evident in the plots of Figures 1, 2, and 3 is that while there
FIGURE 1. Poles of u_n(x;m) (red dots, filled for the roots of s_n(x;m) and unfilled for the roots of s_{n−1}(x;m−1)) and zeros of u_n(x;m) (blue dots, filled for the roots of s_n(x;m−1) and unfilled for the roots of s_{n−1}(x;m)) rendered in the y = x/n-plane for m = 0. Left: n = 5, center: n = 10, right: n = 20. The black curves are independent of n and m and form the boundaries of two half-eye-shaped regions known to contain the poles and zeros of u_n(x;m) for large n [3].

is some variability with the value of m ∈ ℂ, as n increases the region of the y-plane that contains the poles and zeros of u_n(ny;m) appears to stabilize to an eye-shaped domain E that is independent of both n and m. Figure 4 shows a similar convergence study, here for a half-integral value of m. While the poles and zeros seem to move toward the same eye-shaped domain E as n increases, the distribution of poles and zeros within E appears to be completely different than in Figures 1–3, with poles and zeros concentrating only along one "eyebrow" of the eye E.

Taken together, these figures suggest that u_n(ny;m) may have a well-defined limit as n → ∞ as long as y is restricted to the exterior of E. We are led to formulate the following conjecture.

Conjecture 1.
Assume that y lies outside of a certain eye-shaped bounded domain E ⊂ ℂ. Then

    lim_{n→∞} u_n(ny;m) = ip₀⁺(y),   (2.7)

where p₀⁺(y) is defined by (2.5), in which the square root refers to the principal branch.

This conjecture asserts that for y outside of E the quartic P(ṗ; y, C) has a distinct pair of double roots at ṗ = p₀^±(y), and that the equilibrium ṗ = p₀⁺(y) (we are identifying y with the constant y₀) is the relevant solution of the autonomous

We observed that if the real or imaginary part of m is irrational then NSolve performs poorly for moderately large n. We used
Mathematica version 11.
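Conjecture 1 is also easy to probe numerically with exact rational arithmetic. The sketch below (our own experiment, assuming sympy, using the recurrence (1.5)–(1.6)) evaluates u_n(ny; 0) at the point y = 2, well outside the eye, and compares with ip₀⁺(2) = (√17 − 1)/4:

```python
import sympy as sp

x = sp.symbols('x')

def s_list(N, mu):
    # Umemura/Clarkson recurrence (1.5)-(1.6)
    s = [sp.Integer(1), sp.Integer(1)]
    for _ in range(N):
        sn, sprev = s[-1], s[-2]
        num = ((4*x + 2*mu + 1)*sn**2 - sn*sp.diff(sn, x)
               - x*(sn*sp.diff(sn, x, 2) - sp.diff(sn, x)**2))
        s.append(sp.cancel(num/sprev))
    return s[1:]

def u_n_expr(n, mu):
    # the rational solution (1.7)
    A, B = s_list(n, mu - 1), s_list(n, mu)
    return sp.cancel(A[n]*B[n - 1]/(B[n]*A[n - 1]))

y = 2                                   # fixed point outside the eye E
limit = float((sp.sqrt(17) - 1)/4)      # i p0^+(2) = (sqrt(17) - 1)/4
errs = []
for n in [2, 4, 6]:
    un = u_n_expr(n, sp.Integer(0))     # m = 0
    errs.append(abs(float(un.subs(x, n*y)) - limit))
assert all(e < 1e-2 for e in errs)
```

In our runs the error at n = 2 is already below 10⁻³, consistent with the convergence claimed in (2.7).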
FIGURE 2. As in Figure 1 but for m = 1.
FIGURE 3. As in Figure 1 but for m = i.
FIGURE 4. As in Figure 1 but for m = ½. Here we know from [8] that the apparent pole near the origin in the plots is an artifact of our method of plotting separately the roots of the polynomial factors in (1.7); in fact u_n(x; ½) has a simple zero at x = 0.

model differential equation (2.2). Note that ip₀⁺(y) is independent of the second parameter m, and ip₀⁺(y) → 1 as y → ∞, which is consistent with the fact that for each fixed n, u_n(x;m) → 1 as x → ∞. A suitably precise version of Conjecture 1 is proven in [3] using the Riemann-Hilbert representation of u_n(x;m) presented in Theorem 1, formulated in Section 1.2; part of the proof is to correctly specify the domain E. The black curves shown in Figures 1–4 are described in [3]; in particular, the top and bottom corners of the domain E lie at the points y = ±i/2.

The asymptotic pattern of poles and zeros of u_n(x;m) is qualitatively similar to that shown in Figure 4 whenever m ∈ ℤ + ½, but different details emerge as m is increased through half-integers, as illustrated in Figure 5. From these
FIGURE 5. As in Figure 4 but for n = 20 and m = 3/2 (left), m = 5/2 (center), and m = 7/2 (right).

plots we may formulate a second conjecture.

Conjecture 2.
Suppose that m = ½ + k, k ∈ ℤ≥0. Then as n → ∞, the poles and zeros of u_n(ny;m) accumulate near the left boundary arc of the domain E in the y-plane. In more detail, the poles and zeros are arranged along 4k + 2 non-intersecting arcs roughly parallel to, and o(1) distance from, the left boundary arc of E. The outermost curve contains n poles of u_n(ny;m) coming from roots of s_n(ny;m), and moving inwards the next curve contains n − 1 zeros of u_n(ny;m) coming from roots of s_{n−1}(ny;m). If k > 0, there are then k families of four nested curves each; the j-th family lies to the outside of the (j+1)-st and consists of (in order from outside to inside, j = 1, …, k):
∙ A curve containing n − j + 1 zeros of u_n(ny;m) coming from roots of s_n(ny;m−1).
∙ A curve containing n − j poles of u_n(ny;m) coming from roots of s_{n−1}(ny;m−1).
∙ A curve containing n − j poles of u_n(ny;m) coming from roots of s_n(ny;m).
∙ A curve containing n − j − 1 zeros of u_n(ny;m) coming from roots of s_{n−1}(ny;m).

A suitably precise form of Conjecture 2 is proven in [3] using classical steepest-descent analysis for certain Hankel systems with Bessel-function coefficients derived from Riemann-Hilbert Problem 1 in Section 6 below.

Comparing Figures 1–3 with Figures 4–5 makes clear that the asymptotic behavior of u_n(x;m) cannot possibly be uniform with respect to m in any neighborhood of a half-integral value. It therefore appears compelling to investigate how u_n(x;m) behaves if n is large while simultaneously m is close to a given half-integer. Such an experiment is reproduced in Figure 6. This figure suggests that if m is taken to be very close to a half-integer, the majority of the poles and zeros of u_n(x;m) are captured in the midst of a process in which they are collapsing toward the origin, leaving just a small fraction of them near the left (for positive half-integer m) "eyebrow".
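The bookkeeping in Conjecture 2 can be cross-checked against the root counts quoted in Remark 1: the populations of the conjectured curves must exhaust the simple nonzero roots of the four polynomial factors in (1.7). A sketch of this arithmetic check, together with a direct symbolic verification of the root structure of s_n at a half-integer parameter (our own check, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')

def nonzero_roots(n, j):
    # simple nonzero roots of s_n(x;m) when |m + 1/2| = j (cf. Remark 1, [8])
    return n*(n + 1)//2 - (n - j)*(n - j + 1)//2

# (a) the curve populations in Conjecture 2 exhaust the nonzero roots
for k in range(6):                        # m = 1/2 + k
    for n in range(k + 2, 30):
        fams = [(n - j + 1, n - j, n - j, n - j - 1) for j in range(1, k + 1)]
        assert n + sum(f[2] for f in fams) == nonzero_roots(n, k + 1)
        assert (n - 1) + sum(f[3] for f in fams) == nonzero_roots(n - 1, k + 1)
        assert sum(f[0] for f in fams) == nonzero_roots(n, k)
        assert sum(f[1] for f in fams) == nonzero_roots(n - 1, k)

# (b) direct check of the root structure of s_5(x; 1/2) from the recurrence (1.6)
def s_poly(N, mu):
    s = [sp.Integer(1), sp.Integer(1)]
    for _ in range(N):
        sn, sprev = s[-1], s[-2]
        num = ((4*x + 2*mu + 1)*sn**2 - sn*sp.diff(sn, x)
               - x*(sn*sp.diff(sn, x, 2) - sp.diff(sn, x)**2))
        s.append(sp.cancel(num/sprev))
    return sp.Poly(s[-1], x)

p5 = s_poly(5, sp.Rational(1, 2))         # m = 1/2, so |m + 1/2| = 1
coeffs = p5.all_coeffs()[::-1]            # ascending order
order0 = next(i for i, c in enumerate(coeffs) if c != 0)
assert p5.degree() == 15                  # n(n+1)/2 roots in total for n = 5
assert order0 == 10                       # root of order (n-1)n/2 at the origin
assert p5.degree() - order0 == nonzero_roots(5, 1)   # n = 5 simple nonzero roots
```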
In this situation, the domain containing the majority of the poles and zeros appears to be smaller than the full domain E. This collapse process can be studied [3] with the help of Theorem 1 and asymptotic analysis in a double-scaling limit in which n is large and m differs from a half-integer by an exponentially small amount. The green curve plotted in Figure 6 is one of the outcomes of this analysis. The same analysis shows that the convergence claimed in Conjecture 1 also holds for y in the annular region between the boundary of E and the green curve, as well as near the right "eyebrow" (but something more like Conjecture 2 occurs near the left "eyebrow").

Taking now m ∉ ℤ + ½, an interesting question suggested by the scaling analysis above is whether u_n(ny₀ + w;m) behaves asymptotically (as a function of w for fixed y₀ ∈ E) like an elliptic function solving (2.3) for a suitable choice of integration constant C such that the quartic P has four distinct roots. To investigate this, we select a point y₀ in the domain E and display in Figure 7 the poles and zeros of u_n(ny₀ + w;m) in the w-plane. This figure suggests that indeed, for given large n, the poles and zeros are arranged roughly in a doubly-periodic lattice, with the lattice becoming more rigid as n increases. An important observation is that the lattice does not appear to become fixed as n increases, although its lattice vectors do. To the contrary, there appears to be a strong fluctuation of the offset of the lattice as n
6. As in Figure 1 but for 𝑛 = 20 and 𝑚 = 1/2 − 10⁻⁴ (left), 𝑚 = 1/2 (center), and 𝑚 = 1/2 + 10⁻⁴ (right). Superimposed in green is another curve that better approximates the central pole/zero region in a double-scaling limit where 𝑛 grows while 𝑚 approaches a half-integer [3].
FIGURE
7. As in Figure 1 but plotted in the 𝑤-plane for 𝑚 = 0 and 𝑦 = 0, with 𝑛 = 18 (left), 𝑛 = 19 (center), and 𝑛 = 20 (right).
is increased in integer increments. These observations are consistent with the approximation of 𝑢ₙ(𝑛𝑦 + 𝑤; 𝑚) by a family of solutions of the autonomous elliptic function differential equation (2.3) differing by an 𝑛-dependent shift in the argument 𝑤. We formulate this as a conjecture. Conjecture 3.
Assume that 𝑚 ∉ ℤ + 1/2 is fixed, and fix 𝑦 ∈ 𝐸. Then there is a solution ṗ = ṗₙ(𝑤; 𝑦) (an elliptic function of 𝑤) of the differential equation (2.3) for suitable 𝐶 = 𝐶(𝑦) such that the quartic 𝑃 has distinct roots, for which
lim_{𝑛→∞} ( 𝑢ₙ(𝑛𝑦 + 𝑤; 𝑚) − i ṗₙ(𝑤; 𝑦) ) = 0. (2.8)
This conjecture is proved in [3] using Theorem 1. Part of the proof involves isolating the correct value of the integration constant 𝐶 given 𝑦 ∈ 𝐸. It is also important in the proof that 𝑦 not lie on the imaginary axis, which is excluded from 𝐸 as shown in Figures 1–6. Also, 𝑤 should be restricted to a bounded domain that excludes arbitrarily small fixed neighborhoods of certain lattice points.
We have already pointed out that the two “corner points” of the eye-shaped domain 𝐸 occur at the values 𝑦 = ±i. These values are the only ones for which the quartic 𝑃 can have only one four-fold root. This particularly severe degeneration of the quartic suggests that the rational solution 𝑢ₙ(𝑥; 𝑚) may behave in a special way for large 𝑛 when 𝑥 ≈ ±i𝑛, a notion that is reinforced by another suitable rescaling of (2.1). Indeed, to localize 𝑦 = 𝑥/𝑛 near 𝑦 = ±i, we set 𝑥 = ±i(𝑛 + 𝑛^{1/3}ξ±) and consider ξ± to be bounded. Similarly, since 𝑝₀⁺(±i) = ±1, we wish to localize 𝑢 near ±i, so we set 𝑢 = ±i(1 − 𝑛^{−1/3}𝑊±) and consider 𝑊± to be bounded. (The exponents ±1/3 are chosen to achieve a dominant balance, and the numerical coefficients are chosen for convenience.) Making these substitutions, we multiply (2.1) through by ∓i𝑥𝑢(𝑥) and obtain
d²𝑊/dξ² = 2𝑊³ + ξ𝑊 + 𝑚 + 𝑂(𝑛^{−1/3}), ξ = ξ±, 𝑊 = 𝑊±,
where again the final term combines several others all proportional to 𝑛^{−1/3} or more negative powers of 𝑛. Neglecting the error terms and relabeling 𝑊 as Ẇ yields as a model equation
d²Ẇ/dξ² = 2Ẇ³ + ξẆ + 𝑚 (2.9)
which is the Painlevé-II equation with parameter 𝑚.
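As a quick sanity check on the model equation (2.9): balancing the cubic and linear terms, 2Ẇ³ + ξẆ = 0, yields the nontrivial formal leading behaviors Ẇ ≈ ±i(ξ/2)^{1/2} for large ξ, for which the algebraic terms cancel exactly and only a residual of order ξ^{−3/2} (plus the constant 𝑚) remains. A short symbolic sketch with sympy:

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
for sign in (+1, -1):
    W = sign*sp.I*sp.sqrt(xi/2)            # formal leading behavior W ~ ±i(xi/2)^(1/2)
    # residual of Painlevé-II (2.9) with parameter m = 0
    res = sp.diff(W, xi, 2) - (2*W**3 + xi*W)
    # the dominant algebraic terms cancel exactly...
    assert sp.simplify(2*W**3 + xi*W) == 0
    # ...so the residual is just -W'' = O(xi^(-3/2))
    assert sp.simplify(res*xi**sp.Rational(3, 2)).is_constant()
print("leading balance checks out")
```

This is only the formal balance; which actual solutions of (2.9) realize these behaviors is the subject of the discussion that follows.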
Based on this calculation, we may expect that when 𝑛 is large and 𝑚 is held fixed, the rational Painlevé-III functions behave near the points 𝑥 = ±i𝑛 like certain solutions of the Painlevé-II equation (2.9); moreover, the dependence on the fixed parameter 𝑚 becomes apparent at leading order in this approximation. To explore this possibility, we plot the poles and zeros of 𝑢ₙ(𝑥; 𝑚) in the ξ±-planes for two fixed values of 𝑚 and for increasing 𝑛 in Figures 8–11.
FIGURE
8. As in Figure 1 but plotted in the ξ₊-plane for 𝑚 = 0 and 𝑛 = 18 (left), 𝑛 = 19 (center), and 𝑛 = 20 (right). Also shown with dashed lines are the rays Arg(ξ₊) = ±π, which are the tangents to the boundary of 𝐸 at the upper corner.
FIGURE
9. As in Figure 1 but plotted in the ξ₋-plane for 𝑚 = 0 and 𝑛 = 18 (left), 𝑛 = 19 (center), and 𝑛 = 20 (right). Also shown with dashed lines are the rays Arg(ξ₋) = ±π, which are the tangents to the boundary of 𝐸 at the lower corner.
FIGURE
10. As in Figure 8 (zooming into the upper corner of the domain 𝐸) but for 𝑚 = i.
FIGURE
11. As in Figure 9 (zooming into the lower corner of the domain 𝐸) but for 𝑚 = i.
In each of these figures, the three plots for consecutive reasonably large values of 𝑛 are nearly indistinguishable to the eye, suggesting convergence to a particular solution of (2.9) independent of 𝑛. To try to identify the relevant particular solutions, we may start with the outer approximation given in Conjecture 1 and re-express it in terms of the recentered and rescaled independent variables ξ±, taking careful account of the principal branch interpretation of the square root in (2.5). Thus, 𝑢ₙ(𝑥; 𝑚) ≈ i𝑝₀⁺(𝑦) = i𝑝₀⁺(𝑛⁻¹𝑥) = ±i(1 ∓ i𝑛^{−1/3}(ξ±/2)^{1/2}) + 𝑂(𝑛^{−2/3}ξ±), assuming that Conjecture 1 holds and that ξ± is small compared to 𝑛^{2/3}. If this expression is to agree in some overlap domain with an approximation based on the Painlevé-II equation (2.9), we should express 𝑊 = 𝑊± in terms of 𝑢ₙ(𝑥; 𝑚) ≈ i𝑝₀⁺(𝑦). Thus, 𝑊± = 𝑛^{1/3}(1 ± i𝑢ₙ(𝑥; 𝑚)) ≈ 𝑛^{1/3}(1 ∓ 𝑝₀⁺(𝑦)) = ±i(ξ±/2)^{1/2} + 𝑂(𝑛^{−1/3}ξ±) if also ξ± is small compared to 𝑛^{2/3}. Assumption of an overlap domain then suggests that the relevant solutions of the Painlevé-II equation (2.9) should satisfy Ẇ± ∼ ±i(ξ±/2)^{1/2} as ξ± → ∞ in the exterior domain where the outer approximation is valid. In the limit 𝑛 → ∞, this region corresponds to the sector Arg(ξ±) ∈ (−π, π). It is known [11, Chapter 11] that for each complex 𝑚 there are two and only two solutions of the Painlevé-II equation (2.9), denoted Ẇ = Ẇ±(ξ; 𝑚), with the asymptotic behavior Ẇ±(ξ; 𝑚) ∼ ±i(ξ/2)^{1/2} as ξ → ∞ with |Arg(ξ)| ≤ π − ε for ε > 0 sufficiently small, where the one-half power denotes the principal branch. These are known as tritronquée solutions of (2.9). We are led to formulate the following conjecture. Conjecture 4.
Let 𝑚 ∈ ℂ be fixed. Then,
lim_{𝑛→∞} 𝑛^{1/3}(1 ± i𝑢ₙ(±i(𝑛 + 𝑛^{1/3}ξ); 𝑚)) = Ẇ±(ξ; 𝑚), (2.10)
where Ẇ = Ẇ±(ξ; 𝑚) are the aforementioned tritronquée solutions of the Painlevé-II equation (2.9).
The convergence might be expected to be uniform on compact subsets of the ξ-plane from which arbitrarily small open disks centered at the poles of the tritronquée solution in question have been excised. The assertion that the particular solutions of (2.9) should be of tritronquée type means that they are asymptotically analytic in a sector of the complex ξ-plane of opening angle π, consistent with the plots in Figures 8–11. Tronquée and tritronquée solutions of the Painlevé-II equation (2.9) were originally studied long ago by Boutroux; see also Joshi and Mazzocco [15]. When 𝑚 = 0, the Painlevé-II equation (2.9) has the obvious symmetry Ẇ(ξ) ↦ −Ẇ(ξ), and by uniqueness of the two tritronquée solutions this means that Ẇ₋(ξ; 0) = −Ẇ₊(ξ; 0). Comparing Figures 8–9, we might therefore expect a sign change, while the figures instead clearly show some sort of reciprocation, with poles and zeros of 𝑢ₙ(𝑥; 𝑚) being exchanged. The explanation for this lies in the relation 𝑢 = ±i(1 − 𝑛^{−1/3}𝑊±), which shows that both poles and zeros of 𝑢 correspond to 𝑊± becoming very large; in other words, both the red and the blue dots in Figures 8–11 should be attracted in the limit 𝑛 → +∞ toward the fixed simple poles of the corresponding tritronquée solution of the Painlevé-II equation (2.9). More to the point, assuming the validity of Conjecture 4 with the suggested nature of convergence, one may apply the argument principle to the rational function 𝑢ₙ(±i(𝑛 + 𝑛^{1/3}ξ); 𝑚) about a Jordan curve 𝐶 in the ξ-plane that encloses exactly one pole of the corresponding tritronquée solution of (2.9).
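The counting mechanism about to be invoked can be illustrated numerically on a toy example (the rational function and the locations 0.3 and 0.2 below are illustrative choices, not objects from the paper): by the argument principle, the winding number of 𝑓 around 0 along a closed contour equals the number of enclosed zeros minus the number of enclosed poles, so a function with one zero and one pole inside has index zero.

```python
import numpy as np

def winding(f, radius=1.0, n=4096):
    """Winding number of f about 0 along |z| = radius; by the argument
    principle this equals (#zeros - #poles) of f inside the contour."""
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    w = f(radius*np.exp(1j*t))
    dphi = np.angle(w[np.r_[1:n, 0]] / w)   # phase increments in (-pi, pi]
    return round(dphi.sum() / (2*np.pi))

# one zero and one pole inside the contour: index 0
assert winding(lambda z: (z - 0.3)/(z - 0.2)) == 0
# one zero, no poles: index 1
assert winding(lambda z: z - 0.3) == 1
print("winding checks passed")
```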
The index (increment of the argument) of 𝑢ₙ about 𝐶 is zero for sufficiently large 𝑛, because 𝑢ₙ converges uniformly on 𝐶 to ±i, as Ẇ± is analytic and therefore bounded on 𝐶. This means that in fact each pole of the Painlevé-II tritronquée solution would be expected to attract (in the ξ-plane) an equal number of poles and zeros of 𝑢ₙ in the large-𝑛 limit. One can see the indicated pairing of poles with zeros in Figures 8–11, although with larger values of 𝑛 the phenomenon should become even more obvious to the eye. Remark 2.
While tritronquée solutions are by definition asymptotically (i.e., for large |ξ|) pole-free in a certain sector of the complex plane, the pole-free property is not a priori guaranteed in any bounded region of the complex plane. However, it was recently shown [9] that all tritronquée solutions of the Painlevé-I equation are actually analytic down to the origin in the asymptotically pole-free sector, proving a conjecture of Dubrovin. See [1] for related results on certain solutions of the Painlevé-II equation (2.9). It is not known whether the tritronquée solutions Ẇ±(ξ; 𝑚) of the Painlevé-II equation are exactly pole-free in the sector −π < Arg(ξ) < π. Because we expect pole/zero pairs of 𝑢ₙ to converge toward fixed poles of Ẇ± in the ξ-plane, in our opinion the plots shown in Figures 8–11 are not sufficiently resolved (i.e., 𝑛 is not sufficiently large) to provide convincing evidence one way or the other, even though Figure 11 shows some poles and zeros of 𝑢ₙ lying in the asymptotic pole-free sector for Ẇ₋(ξ; i) near the origin.
The origin 𝑥 = 0 is a fixed singular point of the Painlevé-III equation (1.1), and its presence appears to affect the pattern of poles and zeros of 𝑢ₙ(𝑥; 𝑚) close to the origin if 𝑚 ∉ ℤ + 1/2, as can be seen in Figures 1–3. In particular, the density of the regular distribution of poles and zeros within the domain 𝐸 seems to blow up as 𝑦 → 0, a phenomenon that is confirmed by the asymptotic analysis in [3]. However, this accumulation phenomenon cannot be uniformly valid in any neighborhood of the origin because 𝑢ₙ(𝑥; 𝑚) is rational. Our numerical computations suggest that the 𝑥-distance of the smallest poles and zeros of 𝑢ₙ(𝑥; 𝑚) to the origin scales as 𝑛⁻¹ when 𝑛 is large, which suggests introducing into (2.1) the scaling 𝑥 = 𝑛⁻¹𝑧 and considering 𝑛 large for 𝑚 bounded.
Then (2.1) becomes
d²𝑢/d𝑧² = (1/𝑢)(d𝑢/d𝑧)² − (1/𝑧)(d𝑢/d𝑧) + (4𝑢² + 4)/𝑧 + 𝑂(𝑛⁻¹), (2.11)
which is a perturbation of the parameter-free Painlevé-III equation
d²u̇/d𝑧² = (1/u̇)(du̇/d𝑧)² − (1/𝑧)(du̇/d𝑧) + (4u̇² + 4)/𝑧 (2.12)
(arising from the general Painlevé-III equation in the special case γ = δ = 0; see [12, Section 2.2]). We may therefore expect that 𝑢ₙ(𝑛⁻¹𝑧; 𝑚) should behave like a particular solution (or possibly a family of particular solutions parametrized by 𝑚 and/or 𝑛) of this limiting equation when 𝑛 is large and 𝑧 is bounded. To explore this possibility, we plotted the poles and zeros of 𝑢ₙ(𝑛⁻¹𝑧; 𝑚) in the complex 𝑧-plane for two different fixed values of 𝑚 and increasing large 𝑛 in Figures 12 and 13. Noting the alternation in the pattern of poles and zeros with increasing 𝑛 in each case, and taking into account the symmetry u̇ ↦ −u̇⁻¹ of (2.12), leads to the following conjecture. Conjecture 5.
Let 𝑚 ∈ ℂ ∖ (ℤ + 1/2) be given. Then there exists a corresponding particular solution u̇(𝑧; 𝑚) of the 𝑚-independent model equation (2.12) such that
lim_{𝑗→∞} 𝑢₂ⱼ((2𝑗)⁻¹𝑧; 𝑚) = u̇(𝑧; 𝑚) and lim_{𝑗→∞} 𝑢₂ⱼ₊₁((2𝑗 + 1)⁻¹𝑧; 𝑚) = −u̇(𝑧; 𝑚)⁻¹. (2.13)
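The alternation between even and odd 𝑛 in (2.13) reflects the symmetry u̇ ↦ −u̇⁻¹ of (2.12) invoked above. A short sympy sketch (assuming (2.12) in the form d²u̇/d𝑧² = (1/u̇)(du̇/d𝑧)² − (1/𝑧)(du̇/d𝑧) + (4u̇² + 4)/𝑧) verifying that this substitution maps solutions of (2.12) to solutions:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
u = sp.Function('u')(z)

# residual of the parameter-free PIII equation (2.12):
#   u'' - [ u'^2/u - u'/z + (4u^2 + 4)/z ]
def residual(w):
    return sp.diff(w, z, 2) - (sp.diff(w, z)**2/w - sp.diff(w, z)/z
                               + (4*w**2 + 4)/z)

v = -1/u                                   # candidate symmetry u -> -1/u
res = residual(v)
# eliminate u'' using the equation satisfied by u, then simplify
u2 = sp.diff(u, z)**2/u - sp.diff(u, z)/z + (4*u**2 + 4)/z
res = res.subs(sp.Derivative(u, (z, 2)), u2)
print(sp.simplify(res))  # 0  ->  -1/u solves (2.12) whenever u does
```

The check exploits that the two constant coefficients in (4u̇² + 4)/𝑧 are equal, which is exactly the condition for the map u̇ ↦ −u̇⁻¹ to preserve the equation.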
FIGURE 12. As in Figure 1 but plotted in the 𝑧-plane for 𝑚 = 0 and 𝑛 = 18 (left), 𝑛 = 19 (center), and 𝑛 = 20 (right).
FIGURE
13. As in Figure 1 but plotted in the 𝑧-plane for 𝑚 = i and 𝑛 = 18 (left), 𝑛 = 19 (center), and 𝑛 = 20 (right).
The reason for excluding half-integral values of 𝑚 from this statement is that 𝑢ₙ(𝑥; 𝑚) has either a simple pole or a simple zero at the origin [8] for such 𝑚, and asymptotic analysis [3] shows convergence to a function of 𝑦 = 𝑥/𝑛 (the analytic continuation of i𝑝₀⁺(𝑦) to the complement of the “eyebrow”), which would correspond under rescaling either to u̇ ≡ 0 or u̇ ≡ ∞; moreover, this limit is independent of whether 𝑛 is odd or even. Naturally, this discrepancy raises again the question of how the solution behaves near the origin in a double-scaling limit of large 𝑛 and 𝑚 close to a half-integer.
The asymptotic analysis needed to establish Conjectures 4 and 5 using Theorem 1 is work in progress. The proof of Conjecture 5 is expected to be particularly challenging because Riemann-Hilbert Problem 1 cannot even be formulated for 𝑥 = 0.

3. LAX PAIR AND ISOMONODROMY THEORY FOR THE PAINLEVÉ-III EQUATION
The representation of the Painlevé-III equation (1.1) as the compatibility condition for a Lax pair of first-order linear systems was discovered by Jimbo and Miwa [14]. Consider the linear differential equations
∂𝚿/∂λ (λ; 𝑥) = 𝐀(λ; 𝑥)𝚿(λ; 𝑥), 𝐀(λ; 𝑥) := (i𝑥/2)σ₃ + (1/λ)[ −Θ∞/2  𝑦 ; 𝑣  Θ∞/2 ] + (1/λ²)[ i𝑥/2 − i𝑠𝑡  i𝑠 ; −i𝑡(𝑠𝑡 − 𝑥)  −i𝑥/2 + i𝑠𝑡 ], (3.1)
and
∂𝚿/∂𝑥 (λ; 𝑥) = 𝐁(λ; 𝑥)𝚿(λ; 𝑥), 𝐁(λ; 𝑥) := (iλ/2)σ₃ + (1/𝑥)[ 0  𝑦 ; 𝑣  0 ] − (1/(λ𝑥))[ i𝑥/2 − i𝑠𝑡  i𝑠 ; −i𝑡(𝑠𝑡 − 𝑥)  −i𝑥/2 + i𝑠𝑡 ]. (3.2)
Here, Θ∞ is a constant parameter and 𝑦 = 𝑦(𝑥), 𝑣 = 𝑣(𝑥), 𝑠 = 𝑠(𝑥), and 𝑡 = 𝑡(𝑥) are coefficient functions (potentials). The matrix coefficient of λ⁻² in (3.1) and of −(λ𝑥)⁻¹ in (3.2) looks complicated, but it simply represents the most general matrix having ±i𝑥/2 as its eigenvalues (all such matrices depend on two parameters, whose roles are played by 𝑠(𝑥) and 𝑡(𝑥)). The compatibility condition 𝐀ₓ − 𝐁_λ + [𝐀, 𝐁] = 𝟎 for the simultaneous equations (3.1)–(3.2) is the first-order system of nonlinear differential equations
𝑥 d𝑦/d𝑥 = −2𝑥𝑠 + Θ∞𝑦,  𝑥 d𝑣/d𝑥 = −2𝑥𝑡(𝑠𝑡 − 𝑥) − Θ∞𝑣,
𝑥 d𝑠/d𝑥 = (1 − Θ∞)𝑠 − 2𝑥𝑦 + 4𝑦𝑠𝑡,  𝑥 d𝑡/d𝑥 = Θ∞𝑡 − 2𝑦𝑡² + 2𝑣. (3.3)
This system admits an integral of motion:
𝐼 := (2Θ∞/𝑥)𝑠𝑡 − Θ∞ − (2/𝑥)𝑦𝑡(𝑠𝑡 − 𝑥) + (2/𝑥)𝑣𝑠 (3.4)
is a conserved quantity, i.e., (3.3) implies that d𝐼/d𝑥 = 0 holds identically. Using (3.3) one can show that the combination
𝑢(𝑥) := −𝑦(𝑥)/𝑠(𝑥) (3.5)
satisfies the differential equation
𝑥 d𝑢/d𝑥 = 2𝑥 − (1 − 2Θ∞)𝑢 + 4𝑠𝑡𝑢² − 2𝑥𝑢². (3.6)
Taking another 𝑥-derivative and letting Θ₀ denote the constant value of the integral 𝐼, one then obtains the Painlevé-III equation in the form (1.1). (For some details of these calculations, see the last lines of the proof of Lemma 2 in Section 5.2 below.) The isomonodromy method algorithm for solving the initial-value problem for (1.1) with initial conditions 𝑢(𝑥₀) = 𝑢₀ and 𝑢′(𝑥₀) = 𝑢₀′ is then the following [11].
Given constants (Θ₀, Θ∞, 𝑥₀, 𝑢₀, 𝑢₀′) ∈ ℂ⁵ with 𝑥₀𝑢₀ ≠ 0:
(1) Choose an arbitrary nonzero initial value of 𝑦: 𝑦(𝑥₀) = 𝑦₀ ≠ 0. Then from (3.5) at 𝑥 = 𝑥₀ one obtains the initial value of 𝑠: 𝑠₀ := 𝑠(𝑥₀) = −𝑦₀/𝑢₀, which is well-defined and nonzero. Next, since 𝑠₀𝑢₀² = −𝑢₀𝑦₀ ≠ 0, 𝑡₀ := 𝑡(𝑥₀) is well-defined from (3.6) at 𝑥 = 𝑥₀:
𝑡₀ = (1/(4𝑢₀𝑦₀)) (2𝑥₀ − (1 − 2Θ∞)𝑢₀ − 2𝑥₀𝑢₀² − 𝑥₀𝑢₀′). (3.7)
Finally, from (3.4) using 𝐼 = Θ₀ and substituting for 𝑠₀ and 𝑡₀, we get the initial value of 𝑣: 𝑣₀ := 𝑣(𝑥₀), where
𝑣₀ = (1/(16𝑦₀𝑢₀²)) (4𝑥₀² + (1 − 4Θ∞²)𝑢₀² − 4𝑥₀²𝑢₀⁴ − 8Θ₀𝑥₀𝑢₀³ − 4𝑥₀𝑢₀ − 4𝑥₀²𝑢₀′ + 2𝑥₀𝑢₀𝑢₀′ + 𝑥₀²𝑢₀′²). (3.8)
Note that 𝑠₀ is proportional, while 𝑡₀ and 𝑣₀ are inversely proportional, to the arbitrary nonzero constant 𝑦₀.
(2) Taking 𝑦 = 𝑦₀, 𝑣 = 𝑣₀, 𝑠 = 𝑠₀, 𝑡 = 𝑡₀, and 𝑥 = 𝑥₀ ≠ 0, seek four specific fundamental solution matrices of (3.1), called canonical solutions: namely, two satisfying the normalization condition
𝚿 λ^{Θ∞σ₃/2} e^{−i𝑥λσ₃/2} → 𝕀, λ → ∞ (3.9)
in two different abutting sectors with opening angle π and bisected by directions in which the factors e^{±i𝑥λ} are oscillatory; and two satisfying the normalization condition
[ 𝑎(𝑥)  𝑏(𝑥) ; 𝑠(𝑥)𝑎(𝑥)  𝑡(𝑥)𝑏(𝑥)(𝑠(𝑥)𝑡(𝑥) − 𝑥) ]⁻¹ 𝚿 λ^{−Θ₀σ₃/2} e^{i𝑥λ⁻¹σ₃/2} → 𝕀, λ → 0. (3.10)
Our parametrization of the Lax system (3.1)–(3.2) differs from that of Jimbo and Miwa [14], who instead of 𝑠(𝑥) and 𝑡(𝑥) worked with the combinations (in the notation of [11]) 𝑈(𝑥) := 𝑠(𝑥)𝑡(𝑥) and 𝑤(𝑥) := 𝑡(𝑥)⁻¹. The parametrization (3.1)–(3.2) has the advantage that the singularities of the potentials 𝑦, 𝑣, and 𝑠 are exactly the singularities of the simultaneous solution 𝚿 with respect to the parameter 𝑥. Given any constant α ≠ 0, the system of equations (3.3) is obviously invariant under the substitution (𝑦(𝑥), 𝑣(𝑥), 𝑠(𝑥), 𝑡(𝑥)) ↦ (α𝑦(𝑥), α⁻¹𝑣(𝑥), α𝑠(𝑥), α⁻¹𝑡(𝑥)), which also leaves 𝑢(𝑥) defined by (3.5) invariant.
The condition (3.10) is imposed in two different abutting sectors with opening angle π and bisected by directions in which the factors e^{±i𝑥λ⁻¹} are oscillatory. In (3.10), 𝑎(𝑥) and 𝑏(𝑥) are arbitrary except that the determinant of the matrix factor on the left should be equal to 1, and therefore 𝑎(𝑥)𝑏(𝑥) = −𝑥⁻¹. The two fundamental matrices near λ = 0 are obviously related by right-multiplication by one λ-independent Stokes matrix for each of the two sector boundary arcs; similarly for the fundamental solution matrices near λ = ∞. A fifth connection matrix relates the solution in one sector near λ = 0 to that in one sector near λ = ∞. The four Stokes matrices and the connection matrix constitute the solution of the direct monodromy problem.
(3) The equation (3.2) implies that the Stokes matrices and the connection matrix are independent of 𝑥 when 𝑦, 𝑣, 𝑠, and 𝑡 evolve according to (3.3); this is the isomonodromy property of the representation (3.1)–(3.2). Hence, letting 𝑥 ∈ ℂ be arbitrary, solve the inverse monodromy (Riemann-Hilbert) problem of determining the four fundamental solution matrices from the jump conditions relating them via right-multiplication by the Stokes matrices and the connection matrix, and from the asymptotic normalization conditions (3.9)–(3.10). From the solution of this problem the coefficients (𝑦, 𝑣, 𝑠, 𝑡) of equation (3.1) can then be extracted, and from them 𝑢 is obtained for 𝑥 ≠ 𝑥₀ from (3.5).

4. MONODROMY DATA FOR 𝑢(𝑥) = 𝑢₀(𝑥; 𝑚) = 1

In the special case that Θ₀ = Θ∞ − 1, i.e., 𝑛 = 0 for arbitrary 𝑚 ∈ ℂ, the Painlevé-III equation (1.1) has the rational (constant) solutions 𝑢(𝑥) = ±1. Our aim in this section is to calculate the necessary monodromy data so that the solution 𝑢(𝑥) = 1 can be obtained from an appropriate Riemann-Hilbert problem.
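The conservation claim for (3.4) under the flow (3.3), which underpins step (1) of the isomonodromy algorithm above, can be verified symbolically. A sketch (assuming (3.3) and (3.4) in the forms given in Section 3, with the four potentials treated as functions of 𝑥 via the chain rule):

```python
import sympy as sp

x, Th = sp.symbols('x Theta_infty')
y, v, s, t = sp.symbols('y v s t')

# right-hand sides of (3.3), solved for the first derivatives
dy = (-2*x*s + Th*y)/x
dv = (-2*x*t*(s*t - x) - Th*v)/x
ds = ((1 - Th)*s - 2*x*y + 4*y*s*t)/x
dt = (Th*t - 2*y*t**2 + 2*v)/x

# integral of motion (3.4)
I = 2*Th*s*t/x - Th - 2*y*t*(s*t - x)/x + 2*v*s/x

# total derivative dI/dx along the flow (3.3)
dI = (sp.diff(I, x) + sp.diff(I, y)*dy + sp.diff(I, v)*dv
      + sp.diff(I, s)*ds + sp.diff(I, t)*dt)
print(sp.simplify(dI))  # 0 -> I is conserved
```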
Although this appears to involve the study of the direct problem (3.1) alone, our approach will be to leverage the compatibility with the isomonodromic deformation (3.2), to solve the latter equation instead, and then build in additional dependence on λ via integration constants so as to satisfy (3.1) as well. With these results in hand, in Section 5 we will apply Schlesinger transformations to increment/decrement the parameters so that the difference becomes Θ∞ − Θ₀ = 1 − 2𝑛, and thus obtain a Riemann-Hilbert representation for the Bäcklund chain of rational solutions with seed solution 𝑢(𝑥) = 1.

4.1. The Lax pair for Θ₀ = Θ∞ − 1 and 𝑢(𝑥) = 1. Since we will be exploiting the differential equation (3.2) to construct the monodromy data, we need to know how the coefficients (𝑦, 𝑣, 𝑠, 𝑡) depend on 𝑥. From (3.5) with 𝑢(𝑥) ≡ 1 we find that 𝑠(𝑥) ≡ −𝑦(𝑥), so the differential equation for 𝑦(𝑥) in (3.3) closes as a linear equation with solution
𝑦(𝑥) = −(1/4)𝐾e^{2𝑥}𝑥^{Θ∞} and hence also 𝑠(𝑥) = (1/4)𝐾e^{2𝑥}𝑥^{Θ∞}, (4.1)
where 𝐾 ≠ 0 is an arbitrary constant of integration. Using this result and 𝑢(𝑥) ≡ 1 in (3.6), we obtain 𝑡(𝑥):
𝑡(𝑥) = (1 − 2Θ∞)𝐾⁻¹e^{−2𝑥}𝑥^{−Θ∞}.
Finally, using these along with 𝐼 = Θ₀ = Θ∞ − 1 in (3.4), we solve for 𝑣(𝑥):
𝑣(𝑥) = −(1/4)(1 − 2Θ∞)(4𝑥 + 1 + 2Θ∞)𝐾⁻¹e^{−2𝑥}𝑥^{−Θ∞}.
In order that the coefficients in the Lax pair are well-defined, we assume for the purposes of this calculation that 𝑥 ∈ ℂ ∖ ℝ₋ and agree to label the argument of 𝑥 as being in the interval (−π, π), i.e., we use the principal branch arg(𝑥) = Arg(𝑥). The arbitrary constant 𝐾 plays a role similar to that of the arbitrary nonzero initial value 𝑦₀ = 𝑦(𝑥₀) in the solution of the initial-value problem for (1.1) by the isomonodromy method. Next, introducing into (3.2) the well-defined substitution 𝚿 = e^{𝑥σ₃}𝑥^{Θ∞σ₃/2}𝑥^{−1/2}𝐖, one finds that the first-row matrix entries 𝑊₁ⱼ are solutions 𝑊 of the confluent hypergeometric equation (cf., [21, Eq.
13.14.1])
d²𝑊/dζ² + [ −1/4 + κ/ζ + (1/4 − μ²)/ζ² ]𝑊 = 0, μ = 1/4, κ = (1/2)(Θ∞ − 1), (4.2)
where ζ := i𝑥(λ + 2i − λ⁻¹). The elements 𝑊₂ⱼ of the second row are obtained from those in the first row by the formula
𝑊₂ⱼ = [ −4ζ(𝑊′₁ⱼ(ζ) − ½𝑊₁ⱼ(ζ)) + (4κ − i(1 − 2Θ∞)λ⁻¹)𝑊₁ⱼ(ζ) ] / [ 𝐾(1 + iλ⁻¹) ].
If we fix a fundamental pair of solutions of (4.2) that depend on λ only through the variable ζ as the first row of the matrix 𝐖, then the general solution of (3.2) can be written in the form
𝚿 = e^{𝑥σ₃}𝑥^{Θ∞σ₃/2}𝑥^{−1/2}𝐖𝐂(λ), (4.3)
where 𝐂(λ) cannot depend on 𝑥 but might depend on λ. Having found the general solution of the “𝑥-equation” (3.2) in the Lax pair for the Painlevé-III equation, we can now determine 𝐂(λ) such that the expression (4.3) is simultaneously a solution of both (compatible, because 𝑦(𝑥), 𝑣(𝑥), 𝑠(𝑥), and 𝑡(𝑥) satisfy (3.3)) equations (3.1)–(3.2). Upon substitution of (4.3) into (3.1) one easily finds that 𝐂(λ) = (λ + i)^{−1/2}𝐂, where 𝐂 is a matrix independent of both 𝑥 and λ.

4.2. Normalized simultaneous solutions for Im(𝑥) ≠ 0. For the moment, we assume that
Im(𝑥) ≠ 0 and define 𝑥^{𝑝} (e.g., in (4.3)) by taking arg(𝑥) = Arg(𝑥) ∈ (−π, π). Later, in Section 4.4, we will consider the exceptional cases arg(±𝑥) = 0. Our goal now is to determine the values of the matrix 𝐂 in order to define the four canonical fundamental solution matrices satisfying the normalization conditions (3.9)–(3.10). Note that (3.10) here takes the form
[ 𝑎(𝑥)  𝑏(𝑥) ; ¼𝐾e^{2𝑥}𝑥^{Θ∞}𝑎(𝑥)  ¼𝐾⁻¹(1 − 2Θ∞)(1 − 2Θ∞ − 4𝑥)e^{−2𝑥}𝑥^{−Θ∞}𝑏(𝑥) ]⁻¹ 𝚿 λ^{−Θ₀σ₃/2} e^{i𝑥λ⁻¹σ₃/2} → 𝕀, λ → 0, (4.4)
where
𝑎(𝑥)𝑏(𝑥) = −1/𝑥. (4.5)
To specify these four solutions carefully, we should make sure that the power functions λ^{𝑝} for various 𝑝 appearing in the normalization conditions, as well as the scalar factor (λ + i)^{−1/2} and the solutions 𝑊 of the confluent hypergeometric equation (4.2) that are chosen for the first row of the matrix 𝐖, are all unambiguous. We do this as follows. Firstly, we note that, according to the Wronskian identity [21, Eq. 13.14.30], we may choose as a fundamental pair of solutions of (4.2) the two Whittaker functions 𝑊₁₁ := 𝑊₋κ,μ(−ζ) and 𝑊₁₂ := 𝑊κ,μ(ζ). Now, 𝑊±κ,μ(𝑧) are multi-valued functions, and to be completely unambiguous we select in both cases the principal branches, whose argument 𝑧 lies in the domain arg(𝑧) ∈ (−π, π). These solutions are related by the identity (cf., [21, Eq. 13.14.13])
lim_{ε↓0} 𝑊±κ,μ(−𝑧 + iε) = e^{±2πiκ} lim_{ε↓0} 𝑊±κ,μ(−𝑧 − iε) + [ 2πi e^{±iπκ} / (Γ(½ + μ ∓ κ)Γ(½ − μ ∓ κ)) ] 𝑊∓κ,μ(𝑧), 𝑧 > 0, (4.6)
and its (negative) derivative
lim_{ε↓0} 𝑊′±κ,μ(−𝑧 + iε) = e^{±2πiκ} lim_{ε↓0} 𝑊′±κ,μ(−𝑧 − iε) − [ 2πi e^{±iπκ} / (Γ(½ + μ ∓ κ)Γ(½ − μ ∓ κ)) ] 𝑊′∓κ,μ(𝑧), 𝑧 > 0, (4.7)
which express jump conditions for 𝑊±κ,μ(𝑧) and its derivative across the branch cut on the negative real 𝑧-axis. We also have the asymptotic behavior (cf., [21, Eq.
13.14.21])
𝑊±κ,μ(𝑧) = e^{−𝑧/2}𝑧^{±κ}(1 + 𝑂(𝑧⁻¹)), 𝑧 → ∞, arg(𝑧) ∈ (−π, π),
as well as
ζ(𝑊′₁₁(ζ) − ½𝑊₁₁(ζ)) = −κ e^{ζ/2}(−ζ)^{−κ}(1 + 𝑂(ζ⁻¹)), ζ → ∞, arg(−ζ) ∈ (−π, π),
and
ζ(𝑊′₁₂(ζ) − ½𝑊₁₂(ζ)) = −e^{−ζ/2}ζ^{κ+1}(1 + 𝑂(ζ⁻¹)), ζ → ∞, arg(ζ) ∈ (−π, π),
and in these last three relations the indicated power functions all have their principal values. Now, with the principal branches selected, given Arg(𝑥) ∈ (−π, π), the matrix 𝐖 becomes a well-defined analytic function of λ, henceforth denoted 𝐖 = 𝐖(𝑥, λ), defined in the complement of the preimage under ζ of the real axis. This 𝑥-dependent preimage is therefore the jump contour 𝐿 for 𝐖, and it takes different forms for −π < Arg(𝑥) < 0 and 0 < Arg(𝑥) < π; see Figure 14. Given a value of 𝑥 with Im(𝑥) ≠ 0 and a corresponding jump contour 𝐿 as illustrated in this figure, we will now define the multivalued functions λ^{𝑝} and (λ + i)^{−1/2} precisely as follows. For λ^{𝑝}, we take as a branch cut 𝐿∞⬕ ∪ 𝐿⬕. Furthermore, noting that as 𝑥 varies in the upper half-plane 𝐿∞⬕ sweeps through the left half λ-plane, we define arg(λ) = 0 for sufficiently large positive λ when Im(𝑥) > 0. Similarly, as 𝑥 varies in the lower half-plane 𝐿∞⬕ sweeps through the right half λ-plane, and we therefore define arg(λ) = π for λ < 0 of sufficiently large magnitude when Im(𝑥) < 0. This choice of branch along with the cut 𝐿∞⬕ ∪ 𝐿⬕ unambiguously determines arg(λ) and hence λ^{𝑝}
FIGURE
14. The jump contour 𝐿 for the Whittaker matrix 𝐖(𝑥, λ) takes a different form depending on whether 0 < Arg(𝑥) < π (left) or −π < Arg(𝑥) < 0 (right). The arcs 𝐿∞⬔ and 𝐿⬔ (red) are where ζ < 0, and the arcs 𝐿∞⬕ and 𝐿⬕ (cyan) are where ζ > 0. All four contour arcs meet at the only zero of ζ, namely λ = −i. Together with the unit circle (dotted), the contour arcs divide the complex λ-plane into four disjoint domains as indicated: bounded domains Ω₀± adjacent to λ = 0 and where ±Im(ζ) > 0 holds, and unbounded domains Ω∞± where ±Im(ζ) > 0 holds. The subscript notation ⬕/⬔ on the contour arcs is a mnemonic for the lower/upper triangular structure of jump matrices defined below (cf., (4.34)–(4.35)) that will be carried by the corresponding contour arcs.
for any 𝑝 ∈ ℂ given 𝑥 with Im(𝑥) ≠ 0. We use the notation λ^{𝑝}⬕ to indicate this branch. Note that if arg⬕(λ) denotes the value of the argument corresponding to this choice of branch, we have
−π/2 − Arg(𝑥) < arg⬕(λ) < 3π/2 − Arg(𝑥), |λ| → ∞, (4.8)
while
Arg(𝑥) − π/2 < arg⬕(λ) < Arg(𝑥) + 3π/2, |λ| → 0. (4.9)
Then, to define (λ + i)^{−1/2}, we select 𝐿∞⬕ as the branch cut, and for Im(𝑥) > 0 we take (λ + i)^{−1/2} to be positive for sufficiently positive values of λ + i, while for Im(𝑥) < 0 we take (λ + i)^{−1/2} to be negative imaginary for sufficiently negative values of λ + i. We denote the resulting well-defined function as (λ + i)^{−1/2}⬕. With this choice, we have in particular that
(λ + i)^{−1/2}⬕ = e^{−iπ/4} + 𝑂(λ), λ → 0, (4.10)
and
(λ + i)^{−1/2}⬕ = λ^{−1/2}⬕(1 + 𝑂(λ⁻¹)), λ → ∞. (4.11)
With these definitions in hand, we now construct the four normalized solutions for 𝑢(𝑥) = 1 as analytic functions of λ in the four disjoint domains Ω∞± and Ω₀±. We will denote the resulting piecewise-analytic simultaneous matrix solution of (3.1)–(3.2) by 𝚿(λ; 𝑥).

4.2.1. Defining 𝚿(λ; 𝑥) for λ ∈ Ω∞+.
We define 𝚿(λ; 𝑥) for λ ∈ Ω∞+ by the formula
𝚿(λ; 𝑥) = e^{𝑥σ₃}𝑥^{Θ∞σ₃/2}𝑥^{−1/2}(λ + i)^{−1/2}⬕ 𝐖(𝑥, λ) 𝐂∞+, λ ∈ Ω∞+, (4.12)
and we determine the constant matrix 𝐂∞+ so that 𝚿 = 𝚿(λ; 𝑥) satisfies (3.9) (with λ^{Θ∞σ₃/2} defined carefully as λ^{Θ∞σ₃/2}⬕) in the limit λ → ∞ in Ω∞+. Note that the precisely-defined factor (λ + i)^{−1/2}⬕ satisfies (4.11), and that when λ → ∞ the Whittaker matrix 𝐖(𝑥, λ) takes the following asymptotic form:
𝐖(𝑥, λ) = ( [ 1  1 ; ∗  4𝐾⁻¹ζ ] + 𝑂(λ⁻¹) ) [ e^{ζ/2}(−ζ)^{−κ}  0 ; 0  e^{−ζ/2}ζ^{κ} ], λ → ∞, (4.13)
where the entry marked ∗ remains bounded and plays no role in what follows. This can be further simplified by recalling that ζ = i𝑥(λ + 2i − λ⁻¹) is large when λ is large, and by making use of the fact that the expressions (±ζ)^{±κ} refer to the principal branch. Indeed, by definition Im(ζ) > 0 and Im(−ζ) < 0 hold for λ in the domain Ω∞+. Therefore, to define (−ζ)^{−κ} by the principal branch we need to have −π < arg(−ζ) < 0 or, for large λ, −π < arg(−i𝑥λ(1 + 𝑂(λ⁻¹))) < 0. Writing arg(−i𝑥λ(1 + 𝑂(λ⁻¹))) = −π/2 + Arg(𝑥) + arg⬕(λ) + Arg(1 + 𝑂(λ⁻¹)) + 2π𝓁, 𝓁 ∈ ℤ, where arg⬕(λ) satisfies (according to Figure 14 and (4.8) for large λ ∈ Ω∞+) arg⬕(λ) + Arg(𝑥) ∈ (−π/2, π/2), we see that 𝓁 = 0, and therefore (−ζ)^{−κ} = e^{iπκ/2}𝑥^{−κ}λ^{−κ}⬕(1 + 𝑂(λ⁻¹)) as λ → ∞ in Ω∞+, where 𝑥^{−κ} refers to the principal branch. Similarly, to define ζ^{κ} by the principal branch we need to have 0 < arg(ζ) < π or, for large λ, 0 < arg(i𝑥λ(1 + 𝑂(λ⁻¹))) < π. Writing arg(i𝑥λ(1 + 𝑂(λ⁻¹))) = π/2 + Arg(𝑥) + arg⬕(λ) + Arg(1 + 𝑂(λ⁻¹)) + 2π𝓁 and again using arg⬕(λ) + Arg(𝑥) ∈ (−π/2, π/2) gives 𝓁 = 0, so that ζ^{κ} = e^{iπκ/2}𝑥^{κ}λ^{κ}⬕(1 + 𝑂(λ⁻¹)) as λ → ∞ in Ω∞+, where again 𝑥^{κ} is the principal branch. Putting these results together gives
𝚿(λ; 𝑥)e^{−i𝑥λσ₃/2}λ^{Θ∞σ₃/2}⬕ = ( [ e^{iπκ/2}  0 ; 0  4𝐾⁻¹e^{iπ(κ+1)/2} ] + 𝑂(λ⁻¹) ) ⋅ λ^{−Θ∞σ₃/2}⬕ e^{i𝑥λσ₃/2} 𝐂∞+ e^{−i𝑥λσ₃/2} λ^{Θ∞σ₃/2}⬕, λ → ∞, λ ∈ Ω∞+. (4.14)
Since Ω∞+ contains directions in which both exponential factors e^{±i𝑥λ} are exponentially large as λ → ∞, this can only have a finite limit if 𝐂∞+ is a diagonal matrix, in which case the correct normalization requires that
𝐂∞+ := [ e^{−iπκ/2}  0 ; 0  −(i/4)𝐾e^{−iπκ/2} ]. (4.15)
Using this formula for 𝐂∞+ in (4.12) completes the precise definition of 𝚿(λ; 𝑥) for λ ∈ Ω∞+.

4.2.2. Defining 𝚿(λ; 𝑥) for λ ∈ Ω∞−. In a similar way, we define 𝚿(λ; 𝑥) for λ ∈ Ω∞− by the formula
𝚿(λ; 𝑥) = e^{𝑥σ₃}𝑥^{Θ∞σ₃/2}𝑥^{−1/2}(λ + i)^{−1/2}⬕ 𝐖(𝑥, λ) 𝐂∞−, λ ∈ Ω∞−, (4.16)
and we determine 𝐂∞− so that 𝚿 = 𝚿(λ; 𝑥) satisfies (3.9), with λ^{Θ∞σ₃/2} interpreted as λ^{Θ∞σ₃/2}⬕, in the limit λ → ∞ with λ ∈ Ω∞−. Again we may use both (4.11) and (4.13), and it remains to interpret the principal branch power functions (±ζ)^{±κ} appearing in (4.13). Now by definition, Im(ζ) < 0 and Im(−ζ) > 0 hold for λ ∈ Ω∞−, so for the principal branch powers we have −π < arg(ζ) < 0 and 0 < arg(−ζ) < π. Writing arg(ζ) = arg(i𝑥λ(1 + 𝑂(λ⁻¹))) = π/2 + Arg(𝑥) + arg⬕(λ) + Arg(1 + 𝑂(λ⁻¹)) + 2π𝓁, 𝓁 ∈ ℤ, and taking into account that arg⬕(λ) + Arg(𝑥) ∈ (π/2, 3π/2) according to Figure 14 and (4.8), we find that 𝓁 = −1 and so ζ^{κ} = e^{−3πiκ/2}𝑥^{κ}λ^{κ}⬕(1 + 𝑂(λ⁻¹)) as λ → ∞ from Ω∞−, where 𝑥^{κ} is the principal branch. Similarly, writing arg(−ζ) = arg(−i𝑥λ(1 + 𝑂(λ⁻¹))) = −π/2 + Arg(𝑥) + arg⬕(λ) + Arg(1 + 𝑂(λ⁻¹)) + 2π𝓁, we get that 𝓁 = 0 and so (−ζ)^{−κ} = e^{iπκ/2}𝑥^{−κ}λ^{−κ}⬕(1 + 𝑂(λ⁻¹)) as λ → ∞ from Ω∞−, where 𝑥^{−κ} is the principal branch. Using this information and imposing the normalization condition (3.9) on the formula (4.16), we learn that the matrix 𝐂∞− must again be diagonal for the required limit to exist, and then
𝐂∞− = [ e^{−iπκ/2}  0 ; 0  −(i/4)𝐾e^{3πiκ/2} ].
Combining this with (4.16) completes the definition of 𝚿(λ; 𝑥) for λ ∈ Ω∞−.

4.2.3. Defining 𝚿(λ; 𝑥) for λ ∈ Ω₀₋. We write 𝚿(λ; 𝑥) for λ ∈ Ω₀₋ in the form
𝚿(λ; 𝑥) = e^{𝑥σ₃}𝑥^{Θ∞σ₃/2}𝑥^{−1/2}(λ + i)^{−1/2}⬕ 𝐖(𝑥, λ) 𝐂₀₋, λ ∈ Ω₀₋, (4.17)
and try to determine the constant matrix 𝐂₀₋ such that (4.4) holds (with λ^{−Θ₀σ₃/2} carefully interpreted as λ^{−Θ₀σ₃/2}⬕) for some appropriate 𝑎 and 𝑏 in the limit λ → 0 from Ω₀₋. Note that the precisely-defined factor (λ + i)^{−1/2}⬕ is analytic near λ = 0 and satisfies (4.10), while in the limit λ → 0 the Whittaker matrix 𝐖(𝑥, λ) takes the following asymptotic form:
𝐖(𝑥, λ) = ( [ 1  1 ; 𝐾⁻¹(1 − 2Θ∞)  𝐾⁻¹(1 − 2Θ∞ − 4𝑥) ] + 𝑂(λ) ) [ e^{ζ/2}(−ζ)^{−κ}  0 ; 0  e^{−ζ/2}ζ^{κ} ], λ → 0. (4.18)
We carefully interpret the principal branch powers appearing in (4.18) by noting that λ ∈ Ω₀₋ means by definition that Im(ζ) < 0, so we need to have −π < arg(ζ) < 0 and 0 < arg(−ζ) < π. Writing arg(ζ) = arg(−i𝑥λ⁻¹(1 + 𝑂(λ))) = −π/2 + Arg(𝑥) − arg⬕(λ) + Arg(1 + 𝑂(λ)) + 2π𝓁, 𝓁 ∈ ℤ, and observing from Figure 14 and (4.9) that λ small and in Ω₀₋ means arg⬕(λ) − Arg(𝑥) ∈ (−π/2, π/2), we see that 𝓁 = 0, and so ζ^{κ} = e^{−iπκ/2}𝑥^{κ}λ^{−κ}⬕(1 + 𝑂(λ)) as λ → 0 from Ω₀₋, where 𝑥^{κ} is the principal branch. Similarly, writing arg(−ζ) = arg(i𝑥λ⁻¹(1 + 𝑂(λ))) = π/2 + Arg(𝑥) − arg⬕(λ) + Arg(1 + 𝑂(λ)) + 2π𝓁 and again using arg⬕(λ) − Arg(𝑥) ∈ (−π/2, π/2), we find that 𝓁 = 0 and so (−ζ)^{−κ} = e^{−iπκ/2}𝑥^{−κ}λ^{κ}⬕(1 + 𝑂(λ)) as λ → 0 from Ω₀₋, where 𝑥^{−κ} denotes the principal branch. Using this information in (4.4) we see that again 𝐂₀₋ must be a diagonal matrix, say
𝐂₀₋ = [ 𝑐  0 ; 0  𝑑 ] (4.19)
with 𝑐 and 𝑑 independent of both 𝑥 and λ, and then 𝚿 = 𝚿(λ; 𝑥) indeed satisfies (4.4) provided that
𝑎(𝑥) = e^{−iπκ/2}e^{−iπ/4}𝑐,  𝑏(𝑥) = 4𝐾⁻¹e^{−iπκ/2}e^{−iπ/4}𝑥⁻¹𝑑. (4.20)
Note that 𝑎(𝑥) is independent of 𝑥. The unimodularity condition (4.5) is then equivalent to the following condition on the constants 𝑐 and 𝑑:
det(𝐂₀₋) = 𝑐𝑑 = −¼i𝐾e^{iπκ}. (4.21)
Therefore, to completely define 𝚿(λ; 𝑥) we should simply choose convenient values for 𝑐 and 𝑑 consistent with (4.21) and then combine (4.19) with (4.17).

4.2.4. Defining 𝚿(λ; 𝑥) for λ ∈ Ω₀₊. We write 𝚿(λ; 𝑥) for λ ∈ Ω₀₊ in the form
𝚿(λ; 𝑥) = e^{𝑥σ₃}𝑥^{Θ∞σ₃/2}𝑥^{−1/2}(λ + i)^{−1/2}⬕ 𝐖(𝑥, λ) 𝐂₀₊, λ ∈ Ω₀₊, (4.22)
for a constant matrix 𝐂₀₊ to be determined from the normalization condition (4.4), in which λ^{−Θ₀σ₃/2} is interpreted as λ^{−Θ₀σ₃/2}⬕. We may again use (4.10) and (4.18), and it remains to interpret the principal branch power functions ζ^{κ} and (−ζ)^{−κ} for λ ∈ Ω₀₊. By definition, λ ∈ Ω₀₊ means Im(ζ) > 0, so 0 < arg(ζ) < π and −π < arg(−ζ) < 0.
Writing arg( 𝜁 ) = arg(−i 𝑥𝜆 −1 (1 + 𝑂 ( 𝜆 ))) = − 𝜋 + Arg( 𝑥 ) − arg ⬕ ( 𝜆 ) + Arg(1 + 𝑂 ( 𝜆 )) + 2 𝜋 𝓁 , 𝓁 ∈ ℤ , andnoting from Figure 14 and (4.9) that 𝜆 small in Ω means that arg ⬕ ( 𝜆 ) − Arg( 𝑥 ) ∈ ( 𝜋, 𝜋 ) , we obtain 𝓁 = 1 and therefore 𝜁 𝜅 = e 𝜋 i 𝜅 ∕2 𝑥 𝜅 𝜆 − 𝜅 ⬕ (1 + 𝑂 ( 𝜆 )) ) as 𝜆 → from Ω where 𝑥 𝜅 is the principal branch. Likewise writing arg(− 𝜁 ) = arg(i 𝑥𝜆 −1 (1 + 𝑂 ( 𝜆 ))) = 𝜋 + Arg( 𝑥 ) − arg ⬕ ( 𝜆 ) + Arg(1 + 𝑂 ( 𝜆 )) + 2 𝜋 𝓁 we see that 𝓁 = 0 and therefore (− 𝜁 ) − 𝜅 = e −i 𝜋𝜅 ∕2 𝑥 − 𝜅 𝜆 𝜅 ⬕ (1 + 𝑂 ( 𝜆 )) as 𝜆 → from Ω where 𝑥 − 𝜅 is the principal branch. Using this information in(4.4) we see that the matrix 𝐂 must be diagonal: 𝐂 = [ 𝑔 ℎ ] (4.23)where the constants 𝑔 and ℎ are related to 𝑎 ( 𝑥 ) and 𝑏 ( 𝑥 ) by 𝑎 ( 𝑥 ) = e −i 𝜋𝜅 ∕2 e −i 𝜋 ∕4 𝑔𝑏 ( 𝑥 ) = 4 𝐾 −1 e 𝜋 i 𝜅 ∕2 e −i 𝜋 ∕4 𝑥 −1 ℎ. (4.24) nce again, 𝑎 ( 𝑥 ) is independent of 𝑥 , and the unimodularity condition (4.5) is then equivalent to det( 𝐂 ) = 𝑔ℎ = − 14 i 𝐾 e −i 𝜋𝜅 . (4.25)Choosing any constants 𝑔 and ℎ consistent with (4.25) therefore determines 𝚿 ( 𝜆 ; 𝑥 ) for 𝜆 ∈ Ω by combining (4.23)with (4.22).4.3. Jump matrices for
Im(x) ≠ 0. Before computing the jump matrices, we will remove the ambiguity of the constants c, d, g, h still present in the definition of Ψ(λ; x) for λ ∈ Ω₀ in the following way:
∙ If Im(x) > 0, we choose c and d so that C₀ = C∞−. This is allowed because the diagonal elements of C∞− obviously also satisfy (4.21) because 2κ + 1 = Θ∞. Similarly, if Im(x) < 0, we choose g and h such that C₀ = C∞+, which is consistent because the diagonal elements of C∞+ satisfy (4.25).
∙ We then insist that the normalization factors a(x) and b(x) appearing in (4.4) are exactly the same regardless of whether λ → 0 from one component of Ω₀ or the other.
The first choice implies that at every point λ ≠ −i of the unit circle forming the common boundary of Ω∞− and Ω₀ (for Im(x) > 0) or the common boundary of Ω∞+ and Ω₀ (for Im(x) < 0), the boundary values taken by Ψ(λ; x) agree, i.e., the jump matrix for Ψ(λ; x) across the unit circle S¹ ⧵ {−i} is exactly the identity matrix. The second choice together with the first implies, in light of (4.20) and (4.24), that the matrices C₀ are necessarily given by

C₀ = [ e^{−iπκ/2} , 0 ; 0 , −(i/4)K e^{iπκ/2} ]  and  C₀ = [ e^{−iπκ/2} , 0 ; 0 , −(i/4)K e^{−iπκ/2} ].

Note that these formulæ do not depend on the sign of
Im( 𝑥 ) . Thus, the matrix function 𝚿 ( 𝜆 ; 𝑥 ) has been determinedmodulo only the value of the constant 𝐾 ≠ , as an analytic function of 𝜆 ∈ ℂ ⧵ 𝐿 where 𝐿 = 𝐿 ∞ ⬔ ∪ 𝐿 ⬔ ∪ 𝐿 ∞ ⬕ ∪ 𝐿 ⬕ is the jump contour for the Whittaker matrix 𝐖 illustrated with red and cyan curves in Figure 14.The jump conditions satisfied by 𝚿 ( 𝜆 ; 𝑥 ) across the four arcs of 𝐿 oriented as shown in Figure 14 are computed bycomparing the formulæ for 𝚿 ( 𝜆 ; 𝑥 ) on either side using the identities (4.6)–(4.7) together with the fact that 𝜁 < along 𝐿 ⬔ and 𝐿 ∞ ⬔ while 𝜁 > along 𝐿 ⬕ and 𝐿 ∞ ⬕ . One also has to take into account that the factor ( 𝜆 + i) −1∕2 ⬕ changes signacross 𝐿 ∞ ⬕ by definition, but otherwise is analytic. The jump conditions are as follows: ∙ The arc 𝐿 ∞ ⬔ separates the domain Ω ∞+ on its left from Ω ∞− on its right. Using 𝜁 < for 𝜆 ∈ 𝐿 ∞ ⬔ we deducethat 𝚿 + ( 𝜆 ; 𝑥 ) = 𝚿 − ( 𝜆 ; 𝑥 ) 𝐕 ∞ ⬔ , 𝜆 ∈ 𝐿 ∞ ⬔ (4.26)where 𝐕 ∞ ⬔ ∶= ⎡⎢⎢⎣ 𝐾 e i 𝜋𝜅 ⋅ 𝜋 Γ( + 𝜇 − 𝜅 )Γ( − 𝜇 − 𝜅 )0 1 ⎤⎥⎥⎦ . (4.27) ∙ The arc 𝐿 ⬔ separates the domain Ω on its left from Ω on its right. Using 𝜁 < we get 𝚿 + ( 𝜆 ; 𝑥 ) = 𝚿 − ( 𝜆 ; 𝑥 ) 𝐕 ⬔ , 𝜆 ∈ 𝐿 ⬔ (4.28)where 𝐕 ⬔ ∶= ⎡⎢⎢⎣ 𝐾 e i 𝜋𝜅 ⋅ 𝜋 Γ( + 𝜇 − 𝜅 )Γ( − 𝜇 − 𝜅 )0 1 ⎤⎥⎥⎦ . (4.29) ∙ The arc 𝐿 ⬕ separates the domain Ω on its left from Ω on its right. Using 𝜁 > we arrive at 𝚿 + ( 𝜆 ; 𝑥 ) = 𝚿 − ( 𝜆 ; 𝑥 ) 𝐕 ⬕ , 𝜆 ∈ 𝐿 ⬕ (4.30)where 𝐕 ⬕ ∶= ⎡⎢⎢⎣ e 𝜋 i 𝜅 𝐾 e i 𝜋𝜅 ) −1 ⋅ 𝜋 Γ( + 𝜇 + 𝜅 )Γ( − 𝜇 + 𝜅 ) e −2 𝜋 i 𝜅 ⎤⎥⎥⎦ . (4.31) Finally, the arc 𝐿 ∞ ⬕ separates the domain Ω ∞− on its left from Ω ∞+ on its right. Using 𝜁 > and taking intoaccount that ( 𝜆 + i) −1∕2 ⬕ changes sign across 𝐿 ∞ ⬕ we obtain 𝚿 + ( 𝜆 ; 𝑥 ) = 𝚿 − ( 𝜆 ; 𝑥 ) 𝐕 ∞ ⬕ , 𝜆 ∈ 𝐿 ∞ ⬕ , (4.32)where 𝐕 ∞ ⬕ ∶= ⎡⎢⎢⎣ −e −2 𝜋 i 𝜅 𝐾 e i 𝜋𝜅 ) −1 ⋅ 𝜋 Γ( + 𝜇 + 𝜅 )Γ( − 𝜇 + 𝜅 ) −e 𝜋 i 𝜅 ⎤⎥⎥⎦ . (4.33)These formulæ may be simplified further by recalling the definitions 𝜇 = and 𝜅 = (Θ ∞ − 1) (so Θ ∞ = 𝑚 + 1 for 𝑛 = 0 implies 𝜅 = 𝑚 ), using the duplication formula [21, Eq. 
5.5.5] Γ(2z) = π^{−1/2} 2^{2z−1} Γ(z)Γ(z + 1/2), and choosing K = 2^{m+2} e^{−iπm/2}. Thus we find

V∞⬔ = V∞⬔(m) := [ 1 , √(2π)/Γ(1/2 − m) ; 0 , 1 ],   V₀⬔ = V₀⬔(m) := [ 1 , −√(2π)/Γ(1/2 − m) ; 0 , 1 ],   (4.34)

V₀⬕ = V₀⬕(m) := [ e^{iπm} , 0 ; √(2π)/Γ(1/2 + m) , e^{−iπm} ],   V∞⬕ = V∞⬕(m) := [ −e^{−iπm} , 0 ; √(2π)/Γ(1/2 + m) , −e^{iπm} ].   (4.35)

In the general theory [11] of the direct monodromy problem for (1.1), the Stokes constants are subject to an identity known as the cyclic relation. In this setting, the cyclic relation is simply equivalent to the statement that for consistency, the ordered product of the jump matrices around the self-intersection point λ = −i must be the identity:

V∞⬕(m)⁻¹ V∞⬔(m)⁻¹ V₀⬕(m) V₀⬔(m) = 𝕀.   (4.36)

While it is straightforward to check directly that (4.36) holds, this identity is in fact a simple consequence of the way the jump matrices were computed, namely by comparing four functions, each of which admits analytic continuation to a full neighborhood of the self-intersection point λ = −i and that differ only by right-multiplication by constant matrices. In other words, (4.36) holds as a (Čech-)cohomological identity.

4.4. The limiting cases of x > 0 and x < 0. The jump contour L for the Whittaker matrix W(x, λ) undergoes a bifurcation when x crosses either the positive or negative real axis. The bifurcation that occurs as Arg(x) passes through zero is illustrated in Figure 15. Clearly, the arcs L₀⬕ and L∞⬕ depend continuously on Arg(x) near Arg(x) = 0, but the parts of L₀⬔ and L∞⬔ close to the unit circle become interchanged as Arg(x) passes through zero. However, noting that the matrices V∞⬔(m) and V₀⬔(m) as defined in (4.34) are inverse to each other, we easily conclude that the jump conditions satisfied by the matrix Ψ(λ; x) actually depend continuously on Arg(x) near Arg(x) = 0. This makes it possible to define the jump conditions by continuity for Arg(x) = 0.
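The two classical Gamma-function identities at work here, the duplication formula just quoted and the reflection formula Γ(1/2 − m)Γ(1/2 + m) = π/cos(πm) that underlies the consistency of the jump matrices, are easy to confirm numerically. The following minimal sketch checks both at a few arbitrary real test points:

```python
import math

# Legendre duplication formula (DLMF 5.5.5):
#   Gamma(2z) = pi^(-1/2) * 2^(2z-1) * Gamma(z) * Gamma(z + 1/2)
def duplication_residual(z: float) -> float:
    lhs = math.gamma(2 * z)
    rhs = math.pi ** -0.5 * 2 ** (2 * z - 1) * math.gamma(z) * math.gamma(z + 0.5)
    return abs(lhs - rhs)

# Reflection formula at half-integer-shifted arguments:
#   Gamma(1/2 - m) * Gamma(1/2 + m) = pi / cos(pi * m)
def reflection_residual(m: float) -> float:
    lhs = math.gamma(0.5 - m) * math.gamma(0.5 + m)
    rhs = math.pi / math.cos(math.pi * m)
    return abs(lhs - rhs)

for z in (0.25, 0.6, 1.3):
    assert duplication_residual(z) < 1e-10
for m in (0.1, 0.3, 0.45):
    assert reflection_residual(m) < 1e-10
```

In particular, the duplication formula with z = 1/4 − m/2 is what combines the product Γ(1/2 + μ − κ)Γ(1/2 − μ − κ) into a single factor involving Γ(1/2 − m).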
Note also that not only are the branch cuts of the functions λ^p⬕ and (λ + i)^{−1/2}⬕ continuous with respect to Arg(x) near Arg(x) = 0, but so also are the functions themselves. On the other hand, as x approaches the negative real axis from above and below, the bifurcation as illustrated in Figure 16 is apparently more serious. Indeed, the arcs of L∞⬕ and L₀⬕ near the unit circle are now interchanged, while L∞⬔ and L₀⬔ depend continuously on Arg(−x). Since, according to (4.35), V₀⬕(m)V∞⬕(m) = −𝕀, it is not hard to see that in the limit Arg(−x) → 0 the limiting jump conditions from Im(x) > 0 and Im(x) < 0 differ precisely on the unit circle, by a sign. In terms of the matrix Ψ(λ; x) itself,

lim_{ε↓0} Ψ(λ; x + iε) = sgn(ln|λ|) lim_{ε↓0} Ψ(λ; x − iε),  x < 0.

Naturally, both limiting values correspond to simultaneous solutions of the Painlevé-III Lax pair (3.1)–(3.2) for exactly the same solution u(x) = 1; the apparent monodromy in the function Ψ(λ; x) about x = 0 can be absorbed into a sign change in the arbitrary constants a and b appearing in (4.4). For practical calculations one has to be careful about the values of the power functions λ^p⬕ for |λ| < 1 in taking the limit of Ψ(λ; x) as x approaches a negative real value from

FIGURE
15. As in Figure 14 except for values of x close to the positive real axis.

FIGURE
16. As in Figure 14 except for values of x close to the negative real axis.

the upper/lower half-planes. Indeed, keeping track of the dependence of arg⬕(λ) on x with the augmented notation arg⬕(λ; x), we have the identity

lim_{ε↓0} arg⬕(λ; x + iε) = lim_{ε↓0} arg⬕(λ; x − iε) − 2π sgn(ln|λ|),  x < 0.
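Before moving on, the structural identities used in this section — that V∞⬔(m) and V₀⬔(m) are mutually inverse, that V₀⬕(m)V∞⬕(m) = −𝕀, and the cyclic relation (4.36) — can be confirmed numerically. The sketch below assumes that the off-diagonal entries of the simplified jump matrices take the values ±√(2π)/Γ(1/2 ∓ m); that normalization is our reading of (4.34)–(4.35), and it is the one consistent with the reflection formula Γ(1/2 − m)Γ(1/2 + m) = π/cos(πm):

```python
import numpy as np
from math import gamma, pi

m = 0.3  # any m with 1/2 +- m away from the poles of Gamma
a = np.exp(1j * pi * m)
c = (2 * pi) ** 0.5 / gamma(0.5 - m)  # assumed entry of the upper-triangular pair
s = (2 * pi) ** 0.5 / gamma(0.5 + m)  # assumed entry of the lower-triangular pair

V_inf_up = np.array([[1, c], [0, 1]], dtype=complex)        # V^infty upper arc, (4.34)
V_0_up   = np.array([[1, -c], [0, 1]], dtype=complex)       # V^0 upper arc, (4.34)
V_0_lo   = np.array([[a, 0], [s, 1 / a]], dtype=complex)    # V^0 lower arc, (4.35)
V_inf_lo = np.array([[-1 / a, 0], [s, -a]], dtype=complex)  # V^infty lower arc, (4.35)

I2 = np.eye(2)
# structural identities quoted in Section 4.4:
assert np.allclose(V_inf_up @ V_0_up, I2)   # mutually inverse pair
assert np.allclose(V_0_lo @ V_inf_lo, -I2)  # product equals -I
# the cyclic relation (4.36):
P = np.linalg.inv(V_inf_lo) @ np.linalg.inv(V_inf_up) @ V_0_lo @ V_0_up
assert np.allclose(P, I2)
```

The cyclic relation forces the product of the two off-diagonal entries to equal 2 cos(πm), which is exactly what the √(2π) normalization delivers via the reflection formula.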
5. SCHLESINGER-BÄCKLUND TRANSFORMATIONS
Schlesinger transformations to increment/decrement 𝑛 . Now suppose that 𝐕 ∞ ⬕ , 𝐕 ∞ ⬔ , 𝐕 ⬕ , and 𝐕 ⬔ are anyunimodular matrices satisfying the cyclic relation (4.36), and that 𝚿 ( 𝜆 ; 𝑥 ) is an analytic function of 𝜆 in thedomain ℂ ⧵ 𝐿 , 𝐿 ∶= 𝐿 ∞ ⬕ ∪ 𝐿 ∞ ⬔ ∪ 𝐿 ⬕ ∪ 𝐿 ⬔ , satisfying jump conditions of the form (4.26), (4.28), (4.30), (4.32), as ell as asymptotic conditions of the form 𝚿 ( 𝜆 ; 𝑥 ) 𝜆 Θ ∞ 𝜎 ∕2 ⬕ e −i 𝑥𝜆𝜎 ∕2 = 𝕀 + 𝚿 ∞1 ( 𝑥 ) 𝜆 −1 + ⋯ , 𝜆 → ∞ (5.1)and 𝚿 ( 𝜆 ; 𝑥 ) 𝜆 −Θ 𝜎 ∕2 ⬕ e i 𝑥𝜆 −1 𝜎 ∕2 = 𝚿 ( 𝑥 ) + 𝚿 ( 𝑥 ) 𝜆 + ⋯ , 𝜆 → . (5.2)Here, 𝚿 ∞ 𝑘 ( 𝑥 ) , 𝑘 ≥ and 𝚿 𝑘 ( 𝑥 ) , 𝑘 ≥ , are certain matrix coefficients. Since it necessarily holds that det( 𝚿 ( 𝜆 ; 𝑥 )) = 1 ,it follows that det( 𝚿 ( 𝑥 )) = 1 and tr( 𝚿 ∞1 ( 𝑥 )) = 0 . We define the Pauli-type matrices ̂𝜎 ∶= [ ] and ̌𝜎 ∶= [ ] , and supposing further that the matrix element Ψ , ( 𝑥 ) is not identically zero, we consider the Schlesinger transforma-tion (also known as a Darboux transformation) given by ̂ 𝚿 ( 𝜆 ; 𝑥 ) ∶= ( ̂𝜎𝜆 ⬕ + ̂ 𝐁 ( 𝑥 ) 𝜆 −1∕2 ⬕ ) 𝚿 ( 𝜆 ; 𝑥 ) , (5.3)where ̂ 𝐁 ( 𝑥 ) ∶= [ Ψ , ( 𝑥 )Ψ ∞1 , ( 𝑥 )∕Ψ , ( 𝑥 ) −Ψ ∞1 , ( 𝑥 )−Ψ , ( 𝑥 )∕Ψ , ( 𝑥 ) 1 ] . (5.4)Note that det( ̂ 𝚿 ( 𝜆 ; 𝑥 )) = det( 𝚿 ( 𝜆 ; 𝑥 )) by direct calculation. Since 𝜆 ±1∕2 ⬕ are analytic except on 𝐿 ⬕ ∪ 𝐿 ∞ ⬕ across whichthese factors change sign, ̂ 𝚿 ( 𝜆 ; 𝑥 ) is also analytic for 𝜆 ∈ ℂ ⧵ 𝐿 , and it is a direct matter to check the following jumpconditions: ̂ 𝚿 + ( 𝜆 ; 𝑥 ) = ̂ 𝚿 − ( 𝜆 ; 𝑥 ) ⎧⎪⎪⎨⎪⎪⎩ 𝐕 ⬔ , 𝜆 ∈ 𝐿 ⬔ , 𝐕 ∞ ⬔ , 𝜆 ∈ 𝐿 ∞ ⬔ , − 𝐕 ⬕ , 𝜆 ∈ 𝐿 ⬕ , − 𝐕 ∞ ⬕ , 𝜆 ∈ 𝐿 ∞ ⬕ . (5.5)Next, combining (5.1) and (5.3), observe that in the limit 𝜆 → ∞ we have ̂ 𝚿 ( 𝜆 ; 𝑥 ) 𝜆 (Θ ∞ −1) 𝜎 ∕2 ⬕ e −i 𝑥𝜆𝜎 ∕2 = ( ̂𝜎𝜆 ⬕ + ̂ 𝐁 ( 𝑥 ) 𝜆 −1∕2 ⬕ )( 𝕀 + 𝚿 ∞1 ( 𝑥 ) 𝜆 −1 + ⋯ ) 𝜆 − 𝜎 ∕2 ⬕ = 𝜆 ( ̂𝜎 + ̂ 𝐁 ( 𝑥 ) 𝜆 −1 )( 𝕀 + 𝚿 ∞1 ( 𝑥 ) 𝜆 −1 + ⋯ )( ̌𝜎 + ̂𝜎𝜆 −1 )= ̂𝜎̌𝜎𝜆 + [ ̂𝜎 + ̂𝜎 𝚿 ∞1 ( 𝑥 ) ̌𝜎 + ̂ 𝐁 ( 𝑥 ) ̌𝜎 ] + ̂ 𝚿 ∞1 ( 𝑥 ) 𝜆 −1 + ⋯ = 𝕀 + ̂ 𝚿 ∞1 ( 𝑥 ) 𝜆 −1 + ⋯ , where ̂ 𝚿 ∞1 ∶= ̂𝜎 𝚿 ∞1 ( 𝑥 ) ̂𝜎 + ̂𝜎 𝚿 ∞2 ( 𝑥 ) ̌𝜎 + ̂ 𝐁 ( 𝑥 ) ̂𝜎 + ̂ 𝐁 ( 𝑥 ) 𝚿 ∞1 ( 𝑥 ) ̌𝜎. 
(5.6)Similarly, combining (5.2) with (5.3) shows that in the limit 𝜆 → we have ̂ 𝚿 ( 𝜆 ; 𝑥 ) 𝜆 −(Θ +1) 𝜎 ∕2 ⬕ e i 𝑥𝜆 −1 𝜎 ∕2 = ( ̂𝜎𝜆 ⬕ + ̂ 𝐁 ( 𝑥 ) 𝜆 −1∕2 ⬕ )( 𝚿 ( 𝑥 ) + 𝚿 ( 𝑥 ) 𝜆 + ⋯ ) 𝜆 − 𝜎 ∕2 ⬕ = 𝜆 −1 ( ̂ 𝐁 ( 𝑥 ) + ̂𝜎𝜆 )( 𝚿 ( 𝑥 ) + 𝚿 ( 𝑥 ) 𝜆 + ⋯ )( ̂𝜎 + ̌𝜎𝜆 )= ̂ 𝐁 ( 𝑥 ) 𝚿 ( 𝑥 ) ̂𝜎𝜆 −1 + ̂ 𝚿 ( 𝑥 ) + ̂ 𝚿 ( 𝑥 ) 𝜆 + ⋯ = ̂ 𝚿 ( 𝑥 ) + ̂ 𝚿 ( 𝑥 ) 𝜆 + ⋯ , where ̂ 𝚿 ( 𝑥 ) ∶= ̂ 𝐁 ( 𝑥 ) 𝚿 ( 𝑥 ) ̌𝜎 + ̂ 𝐁 ( 𝑥 ) 𝚿 ( 𝑥 ) ̂𝜎 + ̂𝜎 𝚿 ( 𝑥 ) ̂𝜎. (5.7)Thus, the Schlesinger transformation (5.3) results in a simple modification of the jump conditions and preserves the formof the asymptotic conditions (5.1)–(5.2), but with the replacements Θ ∞ ↦ ̂ Θ ∞ ∶= Θ ∞ − 1 and Θ ↦ ̂ Θ ∶= Θ + 1 .Comparing with (1.2), we see that these replacements have the effect of incrementing the value of 𝑛 by and holding 𝑚 fixed. Similarly, assuming that Ψ , ( 𝑥 ) is not identically zero and setting ̌ 𝚿 ( 𝜆 ; 𝑥 ) ∶= ( ̌𝜎𝜆 ⬕ + ̌ 𝐁 ( 𝑥 ) 𝜆 −1∕2 ⬕ ) 𝚿 ( 𝜆 ; 𝑥 ) , (5.8) here ̌ 𝐁 ( 𝑥 ) ∶= [ , ( 𝑥 )∕Ψ , ( 𝑥 )−Ψ ∞1 , ( 𝑥 ) Ψ , ( 𝑥 )Ψ ∞1 , ( 𝑥 )∕Ψ , ( 𝑥 ) ] (5.9)respectively, one finds that again det( ̌ 𝚿 ( 𝜆 ; 𝑥 )) = det( 𝚿 ( 𝜆 ; 𝑥 )) and (5.5) holds with ̌ 𝚿 replacing ̂ 𝚿 , but now as 𝜆 → ∞ , ̌ 𝚿 ( 𝜆 ; 𝑥 ) 𝜆 (Θ ∞ +1) 𝜎 ∕2 ⬕ e −i 𝑥𝜆𝜎 ∕2 = 𝕀 + ̌ 𝚿 ∞1 ( 𝑥 ) 𝜆 −1 + ⋯ , where ̌ 𝚿 ∞1 ( 𝑥 ) ∶= ̌𝜎 𝚿 ∞1 ( 𝑥 ) ̌𝜎 + ̌𝜎 𝚿 ∞2 ( 𝑥 ) ̂𝜎 + ̌ 𝐁 ( 𝑥 ) ̌𝜎 + ̌ 𝐁 ( 𝑥 ) 𝚿 ∞1 ( 𝑥 ) ̂𝜎, and similarly, as 𝜆 → , ̌ 𝚿 ( 𝜆 ; 𝑥 ) 𝜆 −(Θ −1) 𝜎 ∕2 ⬕ e i 𝑥𝜆 −1 𝜎 ∕2 = ̌ 𝚿 ( 𝑥 ) + ̌ 𝚿 ( 𝑥 ) 𝜆 + ⋯ where ̌ 𝚿 ( 𝑥 ) ∶= ̌ 𝐁 ( 𝑥 ) 𝚿 ( 𝑥 ) ̂𝜎 + ̌ 𝐁 ( 𝑥 ) 𝚿 ( 𝑥 ) ̌𝜎 + ̌𝜎 𝚿 ( 𝑥 ) ̌𝜎. Therefore, the Schlesinger transformation (5.8) also results in a simple modification of the jump conditions and pre-serves the form of the asymptotic conditions (5.1)–(5.2), but now with the replacements Θ ∞ ↦ ̌ Θ ∞ ∶= Θ ∞ + 1 and Θ ↦ ̌ Θ ∶= Θ − 1 , replacements having the effect of decrementing the value of 𝑛 by and holding 𝑚 fixed. We nowshow that the transformations (5.3) and (5.8) are in fact inverse to each other: Lemma 1. ̌̂ 𝚿 ( 𝜆 ; 𝑥 ) = ̂̌ 𝚿 ( 𝜆 ; 𝑥 ) = 𝚿 ( 𝜆 ; 𝑥 ) . Proof.
Fix 𝑥 ∈ ℂ such that 𝚿 ( 𝜆 ; 𝑥 ) exists satisfying the appropriate analyticity, jump, and normalization conditions;hence in particular the diagonal elements of 𝚿 ( 𝑥 ) are finite. If Ψ , ( 𝑥 ) ≠ so that ̂ 𝚿 ( 𝜆 ; 𝑥 ) exists, then accordingto (5.7) with (5.4), the fact that det( 𝚿 ( 𝑥 )) = 1 implies that ̂ Ψ , ( 𝑥 ) = 1∕Ψ , ( 𝑥 ) ≠ . Therefore, (5.8) can beapplied to ̂ 𝚿 ( 𝜆 ; 𝑥 ) with the elements of ̌ 𝐁 ( 𝑥 ) obtained from ̂ 𝚿 ∞1 ( 𝑥 ) and ̂ 𝚿 ( 𝑥 ) rather than 𝚿 ∞1 ( 𝑥 ) and 𝚿 ( 𝑥 ) . Bothrows of the latter matrix are proportional to [1 , − ̂ Ψ , ( 𝑥 )∕ ̂ Ψ , ( 𝑥 )] , while both columns of ̂ 𝐁 ( 𝑥 ) are proportional to [−Ψ ∞1 , ( 𝑥 ) , ⊤ , with the inner product being −Ψ ∞1 , ( 𝑥 ) − ̂ Ψ , ( 𝑥 ) ̂ Ψ , ( 𝑥 ) = −Ψ ∞1 , ( 𝑥 ) − ̂ Ψ , ( 𝑥 )Ψ , ( 𝑥 ) = 0 , again using (5.7) with (5.4). Therefore, since ̌𝜎̂𝜎 = , ̌̂ 𝚿 ( 𝜆 ; 𝑥 ) = [ , ( 𝑥 )∕Ψ , ( 𝑥 ) − ̂ Ψ ∞1 , ( 𝑥 ) 1 ] 𝚿 ( 𝜆 ; 𝑥 ) = 𝚿 ( 𝜆 ; 𝑥 ) , with the help of (5.6) and (5.4). Another proof of this result is simply to note that the matrices ̌̂ 𝚿 ( 𝜆 ; 𝑥 ) and 𝚿 ( 𝜆 ; 𝑥 ) satisfyexactly the same analyticity, jump, and normalization conditions, and therefore since det( 𝚿 ( 𝜆 ; 𝑥 )) = 1 , Liouville’stheorem shows that ̌̂ 𝚿 ( 𝜆 ; 𝑥 ) 𝚿 ( 𝜆 ; 𝑥 ) −1 = 𝕀 . The proof that (5.3) can be applied to ̌ 𝚿 ( 𝜆 ; 𝑥 ) provided that Ψ , ( 𝑥 ) ≠ sothat the latter exists, with the result that ̂̌ 𝚿 ( 𝜆 ; 𝑥 ) = 𝚿 ( 𝜆 ; 𝑥 ) , is completely analogous. (cid:3) The defining inverse monodromy problem for the rational solution 𝑢 𝑛 ( 𝑥 ; 𝑚 ) . Let 𝚿 (0) ( 𝜆 ; 𝑥, 𝑚 ) ∶= 𝚿 ( 𝜆 ; 𝑥 ) be the matrix function defined in Sections 4.1–4.2, which satisfies (5.1)–(5.2) with Θ = 𝑚 and Θ ∞ = 𝑚 + 1 , andfor which Ψ , ( 𝑥 ) = 𝑎 ( 𝑥 ) = e −i 𝜋𝜅 ∕2 e −i 𝜋 ∕4 𝑐 ≠ and Ψ , ( 𝑥 ) ≢ for 𝑏 ( 𝑥 ) ≢ (note that both inequalities followfrom (4.20)–(4.21)). 
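By construction, Ψ⁽⁰⁾(λ; x, m) corresponds to the seed solution u(x) ≡ 1 of the Painlevé-III equation. As a minimal symbolic sanity check, the snippet below assumes the normalization u″ = (u′)²/u − u′/x + (4Θ₀u² + 4(1 − Θ∞))/x + 4u³ − 4/u for (1.1) (the equation itself is not reproduced in this excerpt) and verifies that u ≡ 1 solves it precisely when Θ∞ = Θ₀ + 1, which is the n = 0 case of (1.2):

```python
import sympy as sp

x, Theta0, ThetaInf = sp.symbols('x Theta0 ThetaInf')
u = sp.Integer(1)  # the seed solution u(x) == 1

# Painleve-III residual in the normalization we assume for (1.1)
residual = (sp.diff(u, x, 2) - sp.diff(u, x) ** 2 / u + sp.diff(u, x) / x
            - (4 * Theta0 * u ** 2 + 4 * (1 - ThetaInf)) / x
            - 4 * u ** 3 + 4 / u)
residual = sp.simplify(residual)
# the residual collapses to -4*(Theta0 + 1 - ThetaInf)/x, which vanishes
# exactly when ThetaInf = Theta0 + 1, i.e. n = 0 in (1.2)
assert sp.simplify(residual.subs(ThetaInf, Theta0 + 1)) == 0
```

For any other relation between the parameters the residual is a nonzero multiple of 1/x, consistent with the constant seed being tied to n = 0.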
We now apply the Schlesinger transformations (5.3) and (5.8) repeatedly, assuming that after each iteration the condition Ψ₀,11(x)Ψ₀,22(x) ≢ 0 persists (see statement 2 of Lemma 2), to obtain for each integer n ∈ ℤ a matrix function Ψ⁽ⁿ⁾(λ; x, m) that satisfies (5.1)–(5.2) as well as the jump conditions

Ψ⁽ⁿ⁾₊(λ; x, m) = Ψ⁽ⁿ⁾₋(λ; x, m) · { V₀⬔(m) for λ ∈ L₀⬔;  V∞⬔(m) for λ ∈ L∞⬔;  (−1)ⁿ V₀⬕(m) for λ ∈ L₀⬕;  (−1)ⁿ V∞⬕(m) for λ ∈ L∞⬕ },   (5.10)

where now the matrices V₀⬔(m) and V∞⬔(m) are defined in (4.34) and V₀⬕(m) and V∞⬕(m) are defined in (4.35). Since det(Ψ⁽⁰⁾(λ; x, m)) = 1 it follows that det(Ψ⁽ⁿ⁾(λ; x, m)) = 1 for all n ∈ ℤ. The inverse monodromy problem consists of fixing n ∈ ℤ, m ∈ ℂ, and x ∈ ℂ ⧵ {0} and attempting to determine Ψ⁽ⁿ⁾(λ; x, m) from the following conditions only:
∙ Analyticity: Ψ⁽ⁿ⁾(λ; x, m) is analytic for λ ∈ ℂ ⧵ L, and analyticity extends to the contour L from each component of its complement.
∙ Jump conditions: The boundary values taken by Ψ⁽ⁿ⁾(λ; x, m) on the four oriented arcs of L are to be related by the jump conditions (5.10).
∙ Behavior for small and large λ: Ψ⁽ⁿ⁾(λ; x, m) satisfies the two conditions (5.1)–(5.2) in which Θ₀ and Θ∞ are defined in terms of m and n by (1.2).
By its construction in Sections 4.1–4.2, Ψ⁽⁰⁾(λ; x, m) is the simultaneous solution of a Lax pair of linear problems. We now show that this is also true for Ψ⁽ⁿ⁾(λ; x, m) for all n ∈ ℤ, establishing simultaneously some related important properties.

Lemma 2.
Let 𝑛 ∈ ℤ and 𝑚 ∈ ℂ be fixed and suppose the above inverse monodromy problem for 𝚿 ( 𝜆 ; 𝑥 ) = 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) is solvable for 𝑥 in some domain 𝐷 ⊂ ℂ ⧵ {0} . For 𝜆 ∈ ℂ ⧵ 𝐿 , the function 𝚿 ( 𝜆 ; 𝑥 ) = 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) is a simultaneous solution matrix of the Lax system (3.1) – (3.2) in which the 𝑥 -dependent coefficients 𝑦 , 𝑣 , 𝑠 , and 𝑡 are given in terms of the leading matrix coefficientsin the expansions (5.1) – (5.2) by 𝑦 ( 𝑥 ) = −i 𝑥 Ψ ∞1 , ( 𝑥 ) , 𝑣 ( 𝑥 ) = i 𝑥 Ψ ∞1 , ( 𝑥 ) , 𝑠 ( 𝑥 ) = − 𝑥 Ψ , ( 𝑥 )Ψ , ( 𝑥 ) , 𝑡 ( 𝑥 ) = Ψ , ( 𝑥 )Ψ , ( 𝑥 ) . (5.11)2. None of the three matrix elements Ψ , ( 𝑥 ) , Ψ , ( 𝑥 ) , nor Ψ , ( 𝑥 ) of the leading coefficient in the expansion (5.2) of 𝚿 ( 𝜆 ; 𝑥 ) = 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) vanishes identically on the domain 𝐷 . The combination 𝑢 ( 𝑥 ) ∶= − 𝑦 ( 𝑥 )∕ 𝑠 ( 𝑥 ) (cf., (3.5) ) is a solution of the Painlevé-III equation (1.1) meromorphicon 𝐷 with parameters Θ and Θ ∞ given by (1.2) .Proof. It is a standard result based on Liouville’s theorem and the fact that the jump matrices are all unimodular thatthere can be at most one solution of the inverse monodromy conditions and that this solution satisfies det( 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 )) =1 . Applying analytic Fredholm theory to a suitable singular integral equation equivalent to the inverse monodromyproblem and parametrized analytically by 𝑥 ∈ ℂ ⧵ {0} , existence of a solution for 𝑥 ∈ 𝐷 implies that for each 𝑚 ∈ ℂ and for each fixed 𝜆 disjoint from the jump contour 𝐿 for all 𝑥 ∈ 𝐷 , 𝑥 ↦ 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) is analytic on 𝐷 . 
In particular,in a neighborhood of such fixed 𝜆 and any 𝑥 ∈ 𝐷 , 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) is jointly differentiable with respect to both 𝜆 and 𝑥 .Because the jump matrices in (5.10) are independent of both 𝜆 (on each arc) and 𝑥 , it follows that the matrices 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) ∶= 𝜕 𝚿 ( 𝑛 ) 𝜕𝜆 ( 𝜆 ; 𝑥, 𝑚 ) 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) −1 and 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) ∶= 𝜕 𝚿 ( 𝑛 ) 𝜕𝑥 ( 𝜆 ; 𝑥, 𝑚 ) 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) −1 are both analytic functions of ( 𝜆, 𝑥 ) in the domain ( ℂ ⧵ {0}) × 𝐷 . Note that to define 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) , we may take thejump contour 𝐿 to be locally independent of 𝑥 because the boundary values taken from each sector on 𝐿 are analytic unctions of 𝜆 . From (5.1) we see that in the limit 𝜆 → ∞ , 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i 𝑥 𝜎 + ( i 𝑥 [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] − Θ ∞ 𝜎 ) 𝜆 −1 + ( − 𝚿 ∞1 ( 𝑥 ) − Θ ∞ [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] + i 𝑥 {[ 𝚿 ∞2 ( 𝑥 ) , 𝜎 ] − [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] 𝚿 ∞1 ( 𝑥 ) }) 𝜆 −2 + 𝑂 ( 𝜆 −3 ) , 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i2 𝜎 𝜆 + i2 [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] + ( 𝚿 ∞′1 ( 𝑥 ) + i2 [ 𝚿 ∞2 ( 𝑥 ) , 𝜎 ] − i2 [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] 𝚿 ∞1 ( 𝑥 ) ) 𝜆 −1 + 𝑂 ( 𝜆 −2 ) . (5.12)Similarly, in the limit 𝜆 → , from (5.2) we get 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i 𝑥 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 𝜆 −2 + ( Θ 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 + i 𝑥 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 − i 𝑥 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 𝚿 ( 𝑥 ) 𝚿 ( 𝑥 ) −1 ) 𝜆 −1 + 𝑂 (1) 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = − i2 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 𝜆 −1 + 𝚿 ( 𝑥 ) 𝚿 ( 𝑥 ) −1 + i2 [ 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 , 𝚿 ( 𝑥 ) 𝚿 ( 𝑥 ) −1 ] + 𝑂 ( 𝜆 ) . (5.13)Therefore, Liouville’s theorem shows that 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) and 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) are Laurent polynomials: 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i 𝑥 𝜎 + ( i 𝑥 [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] − Θ ∞ 𝜎 ) 𝜆 −1 + i 𝑥 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 𝜆 −2 (5.14)and 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i2 𝜎 𝜆 + i2 [ 𝚿 ∞1 ( 𝑥 ) , 𝜎 ] − i2 𝚿 ( 𝑥 ) 𝜎 𝚿 ( 𝑥 ) −1 𝜆 −1 . 
(5.15)Furthermore, the coefficients of different powers of 𝜆 in (5.14)–(5.15) are analytic matrix-valued functions of 𝑥 on 𝐷 .Since 𝚿 ( 𝑛 ) 𝜆 ( 𝜆 ; 𝑥, 𝑚 ) = 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) and 𝚿 ( 𝑛 ) 𝑥 ( 𝜆 ; 𝑥, 𝑚 ) = 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) , matching (5.14)–(5.15)with (3.1)–(3.2) using also det( 𝚿 ( 𝑥 )) = 1 yields the expressions (5.11) and proves statement 1.Suppose Ψ , ( 𝑥 ) ≡ holds as an identity on 𝐷 . From det( 𝚿 ( 𝑥 )) ≡ we then get Ψ , ( 𝑥 )Ψ , ( 𝑥 ) ≡ −1 .Therefore 𝑠 ( 𝑥 ) ≡ and i 𝑥 − i 𝑠 ( 𝑥 ) 𝑡 ( 𝑥 ) ≡ − i 𝑥 , so the matrices 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) and 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) can be written in thealternate form 𝐀 = 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i 𝑥 𝜎 + 1 𝜆 [ − Θ ∞ 𝑦𝑣 Θ ∞ ] + 1 𝜆 [ − i 𝑥 𝑉 i 𝑥 ] 𝐁 = 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) = i 𝜆 𝜎 + 1 𝑥 [ 𝑦𝑣 ] − 1 𝜆𝑥 [ − i 𝑥 𝑉 i 𝑥 ] (5.16)with 𝑦 ( 𝑥 ) and 𝑣 ( 𝑥 ) defined as in (5.11), while 𝑉 ( 𝑥 ) ∶= − 𝑥 Ψ , ( 𝑥 )Ψ , ( 𝑥 ) . Existence of the simultaneous fundamental solution matrix 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) of the Lax system implies that these coefficientmatrices satisfy the zero-curvature compatibility condition 𝐀 𝑥 − 𝐁 𝜆 + [ 𝐀 , 𝐁 ] = , which in turn implies that 𝑦 ( 𝑥 ) ≡ also, making 𝐀 and 𝐁 lower-triangular with explicit diagonal entries. Therefore, the elements of the first row aredetermined from the Lax system up to overall constants 𝑐 and 𝑐 by [ Ψ ( 𝑛 )11 ( 𝜆 ; 𝑥, 𝑚 ) Ψ ( 𝑛 )12 ( 𝜆 ; 𝑥, 𝑚 ) ] = [ 𝑐 e i 𝑥 ( 𝜆 + 𝜆 −1 )∕2 𝜆 −Θ ∞ ∕2 ⬕ 𝑐 e i 𝑥 ( 𝜆 + 𝜆 −1 )∕2 𝜆 −Θ ∞ ∕2 ⬕ ] . Applying the condition (5.1) then forces the choice 𝑐 = 0 , so Ψ ( 𝑛 )12 ( 𝜆 ; 𝑥, 𝑚 ) ≡ and therefore also Ψ , ( 𝑥 ) ≡ on 𝐷 .But since det( 𝚿 ( 𝑥 )) ≡ , this contradicts the assumption that Ψ , ( 𝑥 ) ≡ .Suppose next that Ψ , ( 𝑥 ) ≡ . Then using det( 𝚿 ( 𝑥 )) ≡ shows that the combination −i 𝑡 ( 𝑥 )( 𝑠 ( 𝑥 ) 𝑡 ( 𝑥 ) − 𝑥 ) vanishes identically, and then the compatibility condition for the matrices 𝐀 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) and 𝐁 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) implies that lso 𝑣 ( 𝑥 ) ≡ . 
Therefore, the coefficient matrices are upper-triangular in this case, and since also i 𝑥 − i 𝑠 ( 𝑥 ) 𝑡 ( 𝑥 ) ≡ − i 𝑥 , the second row of 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) takes the form [ Ψ ( 𝑛 )21 ( 𝜆 ; 𝑥, 𝑚 ) Ψ ( 𝑛 )22 ( 𝜆 ; 𝑥, 𝑚 ) ] = [ 𝑐 e −i 𝑥 ( 𝜆 + 𝜆 −1 )∕2 𝜆 Θ ∞ ∕2 ⬕ 𝑐 e −i 𝑥 ( 𝜆 + 𝜆 −1 )∕2 𝜆 Θ ∞ ∕2 ⬕ ] (5.17)where 𝑐 and 𝑐 are constants. Applying as before the condition (5.1) now forces 𝑐 = 0 , so Ψ , ( 𝑥 ) and Ψ , ( 𝑥 ) bothvanish identically in contradiction to det( 𝚿 ( 𝑥 )) ≡ .Finally, suppose that Ψ , ( 𝑥 ) ≡ on 𝐷 . Then also 𝑠 ( 𝑥 ) ≡ and 𝑠 ( 𝑥 ) 𝑡 ( 𝑥 ) ≡ , and the compatibility condition forthe Lax system implies that also 𝑦 ( 𝑥 ) ≡ , making the coefficient matrices lower-triangular. Solving for the first rowof 𝚿 ( 𝑛 ) ( 𝜆 ; 𝑥, 𝑚 ) now yields [ Ψ ( 𝑛 )11 ( 𝜆 ; 𝑥, 𝑚 ) Ψ ( 𝑛 )12 ( 𝜆 ; 𝑥, 𝑚 ) ] = [ 𝑐 e i 𝑥 ( 𝜆 − 𝜆 −1 )∕2 𝜆 −Θ ∞ ∕2 ⬕ 𝑐 e i 𝑥 ( 𝜆 − 𝜆 −1 )∕2 𝜆 −Θ ∞ ∕2 ⬕ ] (5.18)for constants 𝑐 and 𝑐 , and applying the normalization condition (5.1) forces 𝑐 = 1 and 𝑐 = 0 . For this result to becompatible with (5.2) it is then necessary that Θ + Θ ∞ = 0 , i.e., that 𝑚 = − . But, if 𝑚 = − , the jump conditionacross the arc 𝐿 ∞ ⬔ implies that (using Θ ∞ = − 𝑛 for 𝑚 = − ) Ψ ( 𝑛 )12+ ( 𝜆 ; 𝑥, − ) − Ψ ( 𝑛 )12− ( 𝜆 ; 𝑥, − ) = √ 𝜋 Ψ ( 𝑛 )11− ( 𝜆 ; 𝑥, − ) = √ 𝜋 e i 𝑥 ( 𝜆 − 𝜆 −1 )∕2 𝜆 𝑛 ∕2−1∕4 ⬕ , 𝜆 ∈ 𝐿 ∞ ⬔ . (5.19)The right-hand side is nonzero on the indicated contour, which is obviously inconsistent with Ψ ( 𝑛 )12 ( 𝜆 ; 𝑥, − ) ≡ implied by 𝑐 = 0 . All together, since assuming Ψ , ( 𝑥 ) ≡ , Ψ , ( 𝑥 ) ≡ , or Ψ , ( 𝑥 ) ≡ leads in each case to acontradiction, we have established statement 2.The potentials 𝑦 ( 𝑥 ) , 𝑣 ( 𝑥 ) , and 𝑠 ( 𝑥 ) are analytic on 𝐷 by analytic Fredholm theory, and by statement 2 it also holdsthat 𝑡 ( 𝑥 ) is meromorphic on 𝐷 . In general, the compatibility condition 𝐀 𝑥 − 𝐁 𝜆 + [ 𝐀 , 𝐁 ] = on the matrices (5.14)–(5.15) implies that these four functions satisfy the coupled nonlinear differential equations (3.3). 
The system (3.3) has a conserved quantity I defined by (3.4); to determine its constant value, it suffices to evaluate it at any x ∈ D that makes each term in I finite (it is only necessary to avoid the isolated zeros of the entries of Ψ₀(x) appearing in (5.11)). Note that the direct monodromy problem (3.1) has an irregular singular point of Poincaré rank 1 at λ = 0, and hence by general theory two fundamental solutions exist in a vicinity of λ = 0 which are uniquely specified by their asymptotics as λ → 0 in the associated Stokes sectors. An explicit computation of the formal expansions directly from the differential equation (3.1) (cf., [24]) yields, upon comparison with the expansion (5.2), the identity I = Θ₀. Now, the expression u(x) = −y(x)/s(x) defines a meromorphic function on D because the zeros of s(x) are isolated by statement 2. Differentiating this expression using (3.3) and eliminating y(x) = −s(x)u(x), one finds that u(x) and the product s(x)t(x) are related by the first-order differential equation (3.6). Solving this identity for s(x)t(x) in terms of u(x) and u′(x) and differentiating the result yields a second-order differential expression involving u(x) alone. On the other hand, the product s(x)t(x) can be differentiated directly using (3.3), after which y(x) can be eliminated using y(x) = −s(x)u(x), v(x) can be eliminated using the integral of motion I = Θ₀, and finally the product s(x)t(x) can be eliminated once again using (3.6). Equating these two equivalent expressions for the derivative of s(x)t(x) yields precisely the Painlevé-III equation (1.1) for u(x). This proves statement 3. □

Next, we have the following result.
Lemma 3.
Given 𝑛 ∈ ℤ and 𝑚 ∈ ℂ , there is a finite set 𝑃 𝑛 ( 𝑚 ) such that the inverse monodromy problem is uniquelysolvable for 𝑥 ∈ ℂ ⧵ ( ℝ − ∪ 𝑃 𝑛 ( 𝑚 )) . The corresponding solution 𝑢 ( 𝑥 ) of the Painlevé-III equation (1.1) is a rationalfunction.Proof. Since existence of a solution implies uniqueness by a Liouville argument, it is sufficient to establish existencefor suitable 𝑥 . To this end we first consider 𝑛 = 0 . The explicit solution 𝚿 (0) ( 𝜆 ; 𝑥, 𝑚 ) of the direct monodromyproblem constructed in Section 4 obviously satisfies the conditions of the inverse monodromy problem as well, and itis well-defined for 𝑥 ∈ ℂ ⧵ ℝ − . A calculation shows that the leading term 𝚿 ( 𝑥 ) takes the form 𝚿 ( 𝑥 ) = [ e −i 𝜋 ∕4 e −i 𝜋𝑚 ∕2 𝑚 e −3 𝜋 i∕4 e 𝑥 𝑥 𝑚 e 𝜋 i∕4 − 𝑚 (2 𝑚 + 1) 𝑥 −1 e −2 𝑥 𝑥 − 𝑚 e i 𝜋 ∕4 e i 𝜋𝑚 ∕2 (2 𝑚 + 1 + 4 𝑥 ) 𝑥 −1 ] , 𝑛 = 0 . (5.20) bviously, Ψ , ( 𝑥 ) , Ψ , ( 𝑥 ) , e −2 𝑥 𝑥 − 𝑚 Ψ , ( 𝑥 ) , and e 𝑥 𝑥 𝑚 Ψ , ( 𝑥 ) are all rational functions (with poles at 𝑥 = 0 only).Similar calculations give Ψ ∞1 , ( 𝑥 ) = −i2 𝑚 e −i 𝜋𝑚 ∕2 e 𝑥 𝑥 𝑚 and Ψ ∞1 , ( 𝑥 ) = −i2 −( 𝑚 +4) e i 𝜋𝑚 ∕2 (2 𝑚 + 1)(4 𝑥 − 2 𝑚 − 1)e −2 𝑥 𝑥 − 𝑚 , 𝑛 = 0 . (5.21)Therefore also e −2 𝑥 𝑥 − 𝑚 Ψ ∞1 , ( 𝑥 ) and e 𝑥 𝑥 𝑚 Ψ ∞1 , ( 𝑥 ) are rational functions. Clearly, 𝑃 ( 𝑚 ) = ∅ (the pole at 𝑥 = 0 isalready excluded as ℝ − ), and the corresponding solution 𝑢 ( 𝑥 ) = −iΨ ∞1 , ( 𝑥 )∕(Ψ , ( 𝑥 )Ψ , ( 𝑥 )) ≡ is clearlyrational. Next, let 𝑘 ≥ be an integer, and suppose that 𝑃 𝑘 ( 𝑚 ) is finite, that the inverse monodromy problem for 𝑛 = 𝑘 is (uniquely) solvable for 𝑚 ∈ ℂ and 𝑥 ∈ ℂ ⧵ ( ℝ − ∪ 𝑃 𝑘 ( 𝑚 )) , and that for 𝑛 = 𝑘 the expansion coefficients Ψ , ( 𝑥 ) , Ψ , ( 𝑥 ) , e −2 𝑥 𝑥 − 𝑚 Ψ , ( 𝑥 ) , e 𝑥 𝑥 𝑚 Ψ , ( 𝑥 ) , e −2 𝑥 𝑥 − 𝑚 Ψ ∞1 , ( 𝑥 ) , and e 𝑥 𝑥 𝑚 Ψ ∞1 , ( 𝑥 ) are all rational functions. Taking 𝐷 = ℂ ⧵ ( ℝ − ∪ 𝑃 𝑘 ( 𝑚 )) and applying Lemma 2 we see that Ψ , ( 𝑥 ) ≢ holds on 𝐷 , so the Schlesinger transformation(5.3) exists on 𝐷 except at the finitely-many zeros of the rational function Ψ , ( 𝑥 ) in 𝐷 . 
Letting P_{k+1}(m) denote the union of the set of these zeros with P_k(m), the matrix Ψ⁽ᵏ⁺¹⁾(λ; x, m) := Ψ̂⁽ᵏ⁾(λ; x, m) clearly satisfies all of the properties of the inverse monodromy problem for n = k + 1, m ∈ ℂ, and x ∈ ℂ ⧵ (ℝ₋ ∪ P_{k+1}(m)). Since, according to (5.4) and the inductive hypotheses in force, the matrix e^{−xσ₃} x^{−mσ₃/2} B̂(x) x^{mσ₃/2} e^{xσ₃} is a rational function of x, it then follows that the transformed expansion coefficients are such that Ψ̂₀,11(x), Ψ̂₀,22(x), e^{−2x}x^{−m}Ψ̂₀,12(x), e^{2x}x^{m}Ψ̂₀,21(x), e^{−2x}x^{−m}Ψ̂∞1,12(x), and e^{2x}x^{m}Ψ̂∞1,21(x) are all rational functions, as is û(x) = −iΨ̂∞1,12(x)/(Ψ̂₀,11(x)Ψ̂₀,12(x)), which by Lemma 2 satisfies the Painlevé-III equation with parameters n = k + 1 and m. The desired conclusion therefore holds for all integers n ≥ 0 by induction on n.

For n ≤ 0, we apply instead the transformation (5.8)–(5.9) to decrease n, making use of the fact that Ψ₀,22(x) ≢ 0. A parallel induction argument shows that the desired conclusion holds for all negative integers n as well. □

We remark that the points at which the inverse monodromy problem fails to have a solution need not coincide with the poles or zeros of the rational function u(x).

5.3. Induced Bäcklund transformations.
The Schlesinger transformation (5.3) implies a corresponding Bäcklundtransformation for the potentials 𝑣 ( 𝑥 ) , 𝑦 ( 𝑥 ) , 𝑠 ( 𝑥 ) and 𝑡 ( 𝑥 ) : ̂𝑣 ( 𝑥 ) ∶= −i 𝑥𝑡 ( 𝑥 ) ̂𝑦 ( 𝑥 ) ∶= i 𝑥 ( 𝑥𝑠 ( 𝑥 ) − (Θ ∞ − 1) 𝑦 ( 𝑥 ) + 𝑦 ( 𝑥 ) 𝑡 ( 𝑥 ) ) ̂𝑠 ( 𝑥 ) ∶= i 𝑦 ( 𝑥 ) 𝑥 ( 𝑥 + 𝑦 ( 𝑥 ) 𝑡 ( 𝑥 ) − Θ ∞ 𝑦 ( 𝑥 ) 𝑡 ( 𝑥 ) − 𝑣 ( 𝑥 ) 𝑦 ( 𝑥 ) ) ̂𝑡 ( 𝑥 ) ∶= i 𝑥 𝑦 ( 𝑥 ) 𝑡 ( 𝑥 ) − Θ ∞ 𝑡 ( 𝑥 ) − 𝑣 ( 𝑥 ) 𝑥 + 𝑦 ( 𝑥 ) 𝑡 ( 𝑥 ) − Θ ∞ 𝑦 ( 𝑥 ) 𝑡 ( 𝑥 ) − 𝑣 ( 𝑥 ) 𝑦 ( 𝑥 ) . (5.22)It is straightforward to confirm directly that whenever ( 𝑣, 𝑦, 𝑠, 𝑡 ) solves (3.3), then so does ( ̂𝑣, ̂𝑦, ̂𝑠, ̂𝑡 ) when Θ ∞ is replacedin (3.3) by ̂ Θ ∞ ∶= Θ ∞ − 1 . Defining ̂𝑢 ( 𝑥 ) ∶= − ̂𝑦 ( 𝑥 )∕ ̂𝑠 ( 𝑥 ) and using (5.22) along with 𝑢 ( 𝑥 ) = − 𝑦 ( 𝑥 )∕ 𝑠 ( 𝑥 ) , the identity 𝐼 = Θ , and (3.6), one arrives at Gromak’s transformation (1.3). This proves the following. Proposition 3.
The rational function u(x) obtained from the inverse monodromy problem with parameters m ∈ ℂ and n ∈ ℤ≥0 coincides with the function u(x) = u_n(x; m) obtained via n iterations of the Bäcklund transformation (1.3) starting from the seed u₀(x; m) ≡ 1.

This result establishes the link between the algebraic representation (1.6)–(1.7) of u_n(x; m) and the analytic representation afforded by the inverse monodromy problem. It is easy to check that the Bäcklund transformation (1.3) preserves the property u(x) → 1 as x → ∞, and therefore u_n(x; m) and its odd reflection Ru_n(x; m) = −u_n(−x; m) are distinct rational solutions of the Painlevé-III equation (1.1) for the same values of n ∈ ℤ and m ∈ ℂ. Suppose that m ∉ ℤ, but u(x) is a rational solution of (1.1) for parameters (m, n). We may invert the Bäcklund transformation (the corresponding explicit formula for the inverse can be obtained from the n-reducing Schlesinger transformation (5.8) in the same way that Gromak's transformation can be deduced from (5.3)) and apply the inverse n times to u(x), thereby arriving at a rational solution of (1.1) with parameters (m, 0). However, it has been shown that when n = 0 and m ∉ ℤ, the only rational solutions of (1.1) are the constants ±1. By Lemma 1, the inverse transformation is injective, and therefore it follows that either u(x) = u_n(x; m) or u(x) = Ru_n(x; m), i.e., for m ∉ ℤ and n ∈ ℤ, there are exactly two rational solutions. From this it follows that for general m it is sufficient to study the family of functions {u_n(x; m)}_{n∈ℤ} to analyze all rational solutions of (1.1). This can be done using the inverse monodromy problem, suitably reformulated in the form of Riemann-Hilbert Problem 1, which we are now in a position to establish.

5.4. Renormalization.
To study the asymptotic behavior of the rational solutions for n a large integer and m ∈ ℂ fixed, it is useful to study in place of Ψ⁽ⁿ⁾(λ; x, m) a matrix that is normalized to the identity matrix as λ → ∞. Therefore, we consider the matrix Y⁽ⁿ⁾(λ; x, m) defined by a small modification of the left-hand side of (5.1):

Y⁽ⁿ⁾(λ; x, m) := Ψ⁽ⁿ⁾(λ; x, m) λ^{Θ∞σ₃/2}⬕ e^{−ix(λ−λ⁻¹)σ₃/2},

where Θ∞ is given by (1.2). It is easy to check that if it exists for a given x ∈ ℂ, this matrix satisfies the conditions of Riemann-Hilbert Problem 1. Recalling the expansions (1.16)–(1.17), the coefficients Y∞₁(x) and Y₀(x) are related to the expansions of Ψ⁽ⁿ⁾(λ; x, m) by

Ψ∞₁(x) = Y∞₁(x) − (ix/2)σ₃  and  Ψ₀(x) = Y₀(x),   (5.23)

and therefore combining (3.5), (5.11), and (5.23), the rational solution u_n(x; m) of the Painlevé-III equation (1.1) is given by (1.18).

It is a consequence of the cyclic relation (4.36) that at this point we may take the contour L to be arbitrary subject to the restrictions indicated in Subsection 1.1. Such a modified form of L can always be connected with the original L by a homotopy that moves the intersection point but maintains the increment of arguments as specified by (1.10)–(1.11), and throughout which the power functions λ^p⬕ appearing in the jump conditions (1.12)–(1.15) are deformed in a natural way by analytic continuation. This completes the proof of Theorem 1.

6. ALGEBRAIC SOLUTION OF RIEMANN-HILBERT PROBLEM 1 FOR m ∈ ℤ + 1/2

Note that the jump matrices on L∞⬔ ∪ L₀⬔ reduce to the identity if m = 1/2, 3/2, 5/2, …. Likewise, the jump matrices on L∞⬕ ∪ L₀⬕ reduce to the identity if m = −1/2, −3/2, −5/2, …. This observation results in an algebraic solution technique for half-integer values of m that we will now describe. Suppose first that m = 1/2 + k, k ∈ ℤ≥0.
Then, according to Riemann-Hilbert Problem 1, $\mathbf{Y}^{(n)}(\lambda;x,\tfrac12+k)$ is analytic for $\lambda\in\mathbb{C}\setminus L$, where now we may take $L=L^0_{⬕}\cup L^\infty_{⬕}$ because the jump matrices on $L^0_{⬔}\cup L^\infty_{⬔}$ reduce to the identity, so analyticity across those arcs follows by Morera's theorem. Moreover, the jump condition on $L$ takes the form
$$\mathbf{Y}^{(n)}_+(\lambda;x,\tfrac12+k)=\mathbf{Y}^{(n)}_-(\lambda;x,\tfrac12+k)\begin{bmatrix}1&0\\[4pt]\dfrac{2\sqrt{\pi}}{k!}\bigl(\lambda^{k/2+3/4}_{⬕}\bigr)_+\bigl(\lambda^{k/2+3/4}_{⬕}\bigr)_-\lambda^{-n}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}&1\end{bmatrix},\quad\lambda\in L,\quad k\in\mathbb{Z}_{\ge 0}.\tag{6.1}$$
A similar Morera argument therefore implies that the second column of $\mathbf{Y}^{(n)}(\lambda;x,\tfrac12+k)$ has no jump across $L$ and hence is analytic for $\lambda\in\mathbb{C}\setminus\{0\}$. Applying the normalization condition at $\lambda=\infty$ yields $Y^{(n)}_{12}(\lambda;x,\tfrac12+k)=O(\lambda^{-1})$ and $Y^{(n)}_{22}(\lambda;x,\tfrac12+k)=1+O(\lambda^{-1})$ as $\lambda\to\infty$, while $Y^{(n)}_{j2}(\lambda;x,\tfrac12+k)=O(\lambda^{-(k+1)})$ as $\lambda\to 0$ for $j=1,2$. It follows by Liouville's theorem that
$$Y^{(n)}_{12}(\lambda;x,\tfrac12+k)=\sum_{j=1}^{k+1}a^{(n,k)}_j(x)\lambda^{-j}\quad\text{and}\quad Y^{(n)}_{22}(\lambda;x,\tfrac12+k)=1+\sum_{j=1}^{k+1}b^{(n,k)}_j(x)\lambda^{-j},$$
where $a^{(n,k)}_j(x)$ and $b^{(n,k)}_j(x)$ are coefficients to be determined. The first column of the jump condition (6.1) can then be used together with the Plemelj formula and the normalization conditions $Y^{(n)}_{11}(\lambda;x,\tfrac12+k)=1+O(\lambda^{-1})$ and $Y^{(n)}_{21}(\lambda;x,\tfrac12+k)=O(\lambda^{-1})$ as $\lambda\to\infty$ to express $Y^{(n)}_{j1}(\lambda;x,\tfrac12+k)$ explicitly in terms of $Y^{(n)}_{j2}(\lambda;x,\tfrac12+k)$:
$$Y^{(n)}_{11}(\lambda;x,\tfrac12+k)=1+\frac{1}{\mathrm{i}k!\sqrt{\pi}}\int_L Y^{(n)}_{12}(\mu;x,\tfrac12+k)\bigl(\mu^{k/2+3/4}_{⬕}\bigr)_+\bigl(\mu^{k/2+3/4}_{⬕}\bigr)_-\frac{\mu^{-n}\mathrm{e}^{-\mathrm{i}x(\mu-\mu^{-1})}}{\mu-\lambda}\,\mathrm{d}\mu$$
and
$$Y^{(n)}_{21}(\lambda;x,\tfrac12+k)=\frac{1}{\mathrm{i}k!\sqrt{\pi}}\int_L Y^{(n)}_{22}(\mu;x,\tfrac12+k)\bigl(\mu^{k/2+3/4}_{⬕}\bigr)_+\bigl(\mu^{k/2+3/4}_{⬕}\bigr)_-\frac{\mu^{-n}\mathrm{e}^{-\mathrm{i}x(\mu-\mu^{-1})}}{\mu-\lambda}\,\mathrm{d}\mu.$$
It only remains to enforce the condition that $Y^{(n)}_{j1}(\lambda;x,\tfrac12+k)=O(\lambda^{k+1})$ as $\lambda\to 0$ for $j=1,2$.
Expanding $(\mu-\lambda)^{-1}$ in a geometric series for small $\lambda$ and eliminating the second-column entries in favor of $a^{(n,k)}_j(x)$ and $b^{(n,k)}_j(x)$, $j=1,\dots,k+1$, yields $(k+1)\times(k+1)$ linear systems of Hankel type, one for the $a^{(n,k)}_j(x)$ and one for the $b^{(n,k)}_j(x)$: defining coefficients $I^+_{n,k,j}(x)$ by
$$I^+_{n,k,j}(x):=\int_L\bigl(\lambda^{k/2+3/4}_{⬕}\bigr)_+\bigl(\lambda^{k/2+3/4}_{⬕}\bigr)_-\lambda^{-n-j}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda,\tag{6.2}$$
the systems are
$$\mathbf{H}^+_{n,k}(x)\mathbf{a}^{(n,k)}(x)=-\mathrm{i}\sqrt{\pi}\,k!\,\mathbf{e}^{(1)}\quad\text{and}\quad\mathbf{H}^+_{n,k}(x)\mathbf{b}^{(n,k)}(x)=-\mathbf{v}^+_{n,k}(x),$$
where $\mathbf{e}^{(1)}:=(1,0,0,\dots,0)^\top$ denotes the first coordinate unit vector, the unknowns are arranged in vectors as
$$\mathbf{a}^{(n,k)}(x):=\bigl(a^{(n,k)}_1(x),\dots,a^{(n,k)}_{k+1}(x)\bigr)^\top,\quad\mathbf{b}^{(n,k)}(x):=\bigl(b^{(n,k)}_1(x),\dots,b^{(n,k)}_{k+1}(x)\bigr)^\top,$$
and the Hankel matrix and right-hand side vector for the $\mathbf{b}^{(n,k)}(x)$ system are
$$\mathbf{H}^+_{n,k}(x):=\bigl\{I^+_{n,k,p+q}(x)\bigr\}_{p,q=1}^{k+1},\quad\mathbf{v}^+_{n,k}(x):=\bigl(I^+_{n,k,1}(x),\dots,I^+_{n,k,k+1}(x)\bigr)^\top.$$
Therefore, when $m=\tfrac12+k$, $k\in\mathbb{Z}_{\ge 0}$, Riemann-Hilbert Problem 1 has a solution obtained by linear algebra in dimension $k+1$, provided that $x$ is such that the complex Hankel determinant
$$D^+_{n,k}(x):=\det\bigl(\mathbf{H}^+_{n,k}(x)\bigr)$$
is nonzero. From the formula (1.18) we then get the corresponding rational solution $u_n(x;\tfrac12+k)$ of the Painlevé-III equation (1.1) for $k=0,1,2,3,\dots$ in the form
$$u_n(x;\tfrac12+k)=\frac{\sqrt{\pi}\,k!\,a^{(n,k)}_1(x)}{a^{(n,k)}_{k+1}(x)\displaystyle\sum_{j=1}^{k+1}a^{(n,k)}_j(x)\,I^+_{n,k,j+k+2}(x)},\quad k\in\mathbb{Z}_{\ge 0}.\tag{6.3}$$
For instance, if $k=0$, then we obtain
$$a^{(n,0)}_1(x)=\frac{-\mathrm{i}\sqrt{\pi}}{D^+_{n,0}(x)}\quad\text{and}\quad b^{(n,0)}_1(x)=-\frac{1}{D^+_{n,0}(x)}\int_L\bigl(\lambda^{3/4}_{⬕}\bigr)_+\bigl(\lambda^{3/4}_{⬕}\bigr)_-\lambda^{-n-1}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda,$$
where
$$D^+_{n,0}(x):=\int_L\bigl(\lambda^{3/4}_{⬕}\bigr)_+\bigl(\lambda^{3/4}_{⬕}\bigr)_-\lambda^{-n-2}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda.$$
Therefore, assuming that $D^+_{n,0}(x)\neq 0$, the solution of Riemann-Hilbert Problem 1 has been obtained in closed form for arbitrary integer $n$ and for $m=\tfrac12$.
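To make the linear-algebra step concrete, here is a minimal sketch (in Python with NumPy) that assembles and solves the two Hankel systems above. The moments used below are synthetic stand-ins: in practice each $I^+_{n,k,j}(x)$ is a contour integral that would have to be computed by quadrature, and the constant $-\mathrm{i}\sqrt{\pi}\,k!$ on the right-hand side follows the reconstructed system above.

```python
import math
import numpy as np

def solve_hankel_systems(I, k):
    """Build H = {I[p+q]} for 1-based p, q = 1..k+1 and solve the two systems
    H a = -i*sqrt(pi)*k! e1  and  H b = -v with v = (I[1], ..., I[k+1])."""
    H = np.array([[I[p + q] for q in range(1, k + 2)] for p in range(1, k + 2)],
                 dtype=complex)
    e1 = np.zeros(k + 1, dtype=complex)
    e1[0] = 1.0
    a = np.linalg.solve(H, -1j * math.sqrt(math.pi) * math.factorial(k) * e1)
    v = np.array([I[j] for j in range(1, k + 2)], dtype=complex)
    b = np.linalg.solve(H, -v)
    return H, a, b

# Synthetic (hypothetical) moments standing in for the integrals I^+_{n,k,j}(x);
# indices 1, ..., 2k+2 are needed to fill the (k+1) x (k+1) Hankel matrix.
k = 2
I = {j: (0.3 + 0.1j) ** j + 1.0 / (j + 1) for j in range(1, 2 * (k + 1) + 1)}
H, a, b = solve_hankel_systems(I, k)
print(a, b)
```

With real moments computed by quadrature, the coefficient $a^{(n,k)}_1(x)$ and the sum in (6.3) would then produce $u_n(x;\tfrac12+k)$ directly, with $n$ entering only through the moments.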
The corresponding rational solution of the Painlevé-III equation (1.1) is
$$u_n(x;\tfrac12)=\mathrm{i}\,\frac{\displaystyle\int_{L^\infty_{⬕}\cup L^0_{⬕}}\bigl(\lambda^{3/4}_{⬕}\bigr)_+\bigl(\lambda^{3/4}_{⬕}\bigr)_-\lambda^{-(n+2)}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda}{\displaystyle\int_{L^\infty_{⬕}\cup L^0_{⬕}}\bigl(\lambda^{3/4}_{⬕}\bigr)_+\bigl(\lambda^{3/4}_{⬕}\bigr)_-\lambda^{-(n+3)}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda}.\tag{6.4}$$
Assuming that the integrals in the fraction (6.4) have no common zeros, we see that the zeros of $u_n(x;\tfrac12)$ are the points where Riemann-Hilbert Problem 1 has no solution for $m=\tfrac12$, while the poles of $u_n(x;\tfrac12)$ are regular points for $\mathbf{Y}^{(n)}(\lambda;x,\tfrac12)$.

Next assume that $m=-(\tfrac12+k)$, $k\in\mathbb{Z}_{\ge 0}$. Then, according to Riemann-Hilbert Problem 1, the matrix $\mathbf{Y}^{(n)}(\lambda;x,-\tfrac12-k)$ is analytic for $\lambda\in\mathbb{C}\setminus L$, where we may now take $L$ to be the contour $L=L^\infty_{⬔}\cup L^0_{⬔}$, across which we may write the jump condition in the form
$$\mathbf{Y}^{(n)}_+(\lambda;x,-\tfrac12-k)=\mathbf{Y}^{(n)}_-(\lambda;x,-\tfrac12-k)\begin{bmatrix}1&\dfrac{2\sqrt{\pi}}{k!}\bigl(\lambda^{k-1/2}_{⬕}\bigr)_\infty\lambda^{n}\mathrm{e}^{\mathrm{i}x(\lambda-\lambda^{-1})}\\[4pt]0&1\end{bmatrix},\quad\lambda\in L,\quad k\in\mathbb{Z}_{\ge 0},$$
where $(\lambda^{k-1/2}_{⬕})_\infty$ denotes the function
$$\bigl(\lambda^{k-1/2}_{⬕}\bigr)_\infty:=\begin{cases}\lambda^{k-1/2}_{⬕},&\lambda\in L^\infty_{⬔},\\-\lambda^{k-1/2}_{⬕},&\lambda\in L^0_{⬔}.\end{cases}$$
Note that $(\lambda^{k-1/2}_{⬕})_\infty$ is continuous at the junction point between $L^0_{⬔}$ and $L^\infty_{⬔}$ because $\lambda^{k-1/2}_{⬕}$ changes sign across its jump contour $L^0_{⬕}\cup L^\infty_{⬕}$. Obviously, it is now the first column of $\mathbf{Y}^{(n)}(\lambda;x,-\tfrac12-k)$ that is analytic for $\lambda\in\mathbb{C}\setminus\{0\}$, and from the normalization conditions $Y^{(n)}_{11}(\lambda;x,-\tfrac12-k)=1+O(\lambda^{-1})$ and $Y^{(n)}_{21}(\lambda;x,-\tfrac12-k)=O(\lambda^{-1})$ as $\lambda\to\infty$, while $Y^{(n)}_{j1}(\lambda;x,-\tfrac12-k)=O(\lambda^{-k})$ as $\lambda\to 0$, we see that the entries of the first column necessarily take the form
$$Y^{(n)}_{11}(\lambda;x,-\tfrac12-k)=1+\sum_{j=1}^{k}c^{(n,k)}_j(x)\lambda^{-j}\quad\text{and}\quad Y^{(n)}_{21}(\lambda;x,-\tfrac12-k)=\sum_{j=1}^{k}d^{(n,k)}_j(x)\lambda^{-j},$$
where $c^{(n,k)}_j(x)$ and $d^{(n,k)}_j(x)$ are coefficients to be determined. The jump condition together with the normalization conditions $Y^{(n)}_{12}(\lambda;x,-\tfrac12-k)=O(\lambda^{-1})$ and $Y^{(n)}_{22}(\lambda;x,-\tfrac12-k)=1+O(\lambda^{-1})$ as $\lambda\to\infty$ then determines the second column from the first:
$$Y^{(n)}_{12}(\lambda;x,-\tfrac12-k)=\frac{1}{\mathrm{i}k!\sqrt{\pi}}\int_L Y^{(n)}_{11}(\mu;x,-\tfrac12-k)\bigl(\mu^{k-1/2}_{⬕}\bigr)_\infty\frac{\mu^{n}\mathrm{e}^{\mathrm{i}x(\mu-\mu^{-1})}}{\mu-\lambda}\,\mathrm{d}\mu$$
and
$$Y^{(n)}_{22}(\lambda;x,-\tfrac12-k)=1+\frac{1}{\mathrm{i}k!\sqrt{\pi}}\int_L Y^{(n)}_{21}(\mu;x,-\tfrac12-k)\bigl(\mu^{k-1/2}_{⬕}\bigr)_\infty\frac{\mu^{n}\mathrm{e}^{\mathrm{i}x(\mu-\mu^{-1})}}{\mu-\lambda}\,\mathrm{d}\mu.$$
Then, demanding that $Y^{(n)}_{j2}(\lambda;x,-\tfrac12-k)=O(\lambda^{k})$ as $\lambda\to 0$ yields two Hankel systems on the coefficients $c^{(n,k)}_j(x)$ and $d^{(n,k)}_j(x)$. Setting
$$I^-_{n,k,j}(x):=\int_L\bigl(\lambda^{k-1/2}_{⬕}\bigr)_\infty\lambda^{n-j}\mathrm{e}^{\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda,$$
these systems take the form
$$\mathbf{H}^-_{n,k}(x)\mathbf{c}^{(n,k)}(x)=-\mathbf{v}^-_{n,k}(x)\quad\text{and}\quad\mathbf{H}^-_{n,k}(x)\mathbf{d}^{(n,k)}(x)=-\mathrm{i}\sqrt{\pi}\,k!\,\mathbf{e}^{(1)},$$
where
$$\mathbf{c}^{(n,k)}(x):=\bigl(c^{(n,k)}_1(x),\dots,c^{(n,k)}_k(x)\bigr)^\top,\quad\mathbf{d}^{(n,k)}(x):=\bigl(d^{(n,k)}_1(x),\dots,d^{(n,k)}_k(x)\bigr)^\top,$$
and the Hankel matrix and right-hand side vector for the $\mathbf{c}^{(n,k)}(x)$ system are
$$\mathbf{H}^-_{n,k}(x):=\bigl\{I^-_{n,k,p+q}(x)\bigr\}_{p,q=1}^{k},\quad\mathbf{v}^-_{n,k}(x):=\bigl(I^-_{n,k,1}(x),\dots,I^-_{n,k,k}(x)\bigr)^\top.$$
Therefore, if $k\in\mathbb{Z}_{\ge 0}$ and $m=-\tfrac12-k$, then Riemann-Hilbert Problem 1 has a solution obtained by $k\times k$ linear algebra, provided that the Hankel determinant
$$D^-_{n,k}(x):=\det\bigl(\mathbf{H}^-_{n,k}(x)\bigr)$$
is nonzero for the given $x$. From (1.18) we get the corresponding rational solution of the Painlevé-III equation (1.1) in the form
$$u_n(x;-\tfrac12-k)=\frac{\mathrm{i}\,I^-_{n,k,0}(x)+\mathrm{i}\displaystyle\sum_{j=1}^{k}c^{(n,k)}_j(x)I^-_{n,k,j}(x)}{c^{(n,k)}_k(x)I^-_{n,k,k+1}(x)+c^{(n,k)}_k(x)\displaystyle\sum_{j=1}^{k}c^{(n,k)}_j(x)I^-_{n,k,j+k+1}(x)},\quad k\in\mathbb{Z}_{\ge 1}.\tag{6.5}$$
Note that if $k=0$, the linear-algebra system is trivial, and hence Riemann-Hilbert Problem 1 always has a solution when $m=-\tfrac12$:
$$\mathbf{Y}^{(n)}(\lambda;x,-\tfrac12)=\begin{bmatrix}1&\dfrac{1}{\mathrm{i}\sqrt{\pi}}\displaystyle\int_L\bigl(\mu^{-1/2}_{⬕}\bigr)_\infty\frac{\mu^{n}\mathrm{e}^{\mathrm{i}x(\mu-\mu^{-1})}}{\mu-\lambda}\,\mathrm{d}\mu\\[4pt]0&1\end{bmatrix}.$$
The corresponding rational solution of the Painlevé-III equation (1.1) is
$$u_n(x;-\tfrac12)=\mathrm{i}\,\frac{\displaystyle\int_{L^\infty_{⬔}\cup L^0_{⬔}}\bigl(\lambda^{-1/2}_{⬕}\bigr)_\infty\lambda^{n}\mathrm{e}^{\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda}{\displaystyle\int_{L^\infty_{⬔}\cup L^0_{⬔}}\bigl(\lambda^{-1/2}_{⬕}\bigr)_\infty\lambda^{n-1}\mathrm{e}^{\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda}.$$
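For a rough numerical feel for the $m=-\tfrac12$ formula, one can evaluate the two integrals on a single ray on which the integrand decays. The sketch below is an illustration under stated assumptions, not the paper's precise contour: for $x>0$ we take the ray $\lambda=\mathrm{i}s$, $s>0$, and a single analytic branch of $\lambda^{-1/2}$, ignoring the piecewise sign convention of $(\lambda^{-1/2}_{⬕})_\infty$ (these choices can change the result by an overall reflection or normalization). With these assumptions the phase becomes $\mathrm{e}^{-x(s+s^{-1})}$, the branch factors cancel in the ratio, and the ratio reduces to $-\int_0^\infty s^{n-1/2}\mathrm{e}^{-x(s+1/s)}\mathrm{d}s\big/\int_0^\infty s^{n-3/2}\mathrm{e}^{-x(s+1/s)}\mathrm{d}s=-K_{n+1/2}(2x)/K_{n-1/2}(2x)$, which is indeed a rational function of $x$ tending to a constant of unit modulus:

```python
import math

def ratio(n, x, tmax=20.0, steps=4000):
    """Trapezoid rule for J(p) = ∫_0^∞ s^p e^{-x(s+1/s)} ds after s = e^t,
    then the (sign-simplified) ratio -J(n-1/2)/J(n-3/2)."""
    def J(p):
        h = 2 * tmax / steps
        total = 0.0
        for i in range(steps + 1):
            t = -tmax + i * h
            w = 0.5 if i in (0, steps) else 1.0
            # s^p ds = e^{(p+1)t} dt after the substitution s = e^t
            total += w * math.exp((p + 1) * t - 2 * x * math.cosh(t))
        return total * h
    return -J(n - 0.5) / J(n - 1.5)

print(ratio(0, 1.0))  # ≈ -1, since K_{1/2} = K_{-1/2}
print(ratio(1, 1.0))  # ≈ -(1 + 1/(2x)) = -1.5 at x = 1
```

That the computed ratio comes out rational in $x$ (e.g., $-(1+1/(2x))$ for $n=1$) is consistent with the claim that these contour integrals encode rational solutions.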
Remark 3. We remark that in both cases the solution becomes more complicated as $|m|$ increases. This is similar to the situation with the explicit solution of the Fokas-Its-Kitaev Riemann-Hilbert problem for orthogonal polynomials [10]. Significantly, however, the large parameter $n$ appears explicitly in the (algebraic) solution of the Hankel system corresponding to any fixed half-integer value of $m$. It is this latter feature that enables a direct large-$n$ asymptotic analysis by classical steepest descent methods [3].

Another observation is that the formula (6.4) can be written in terms of Bessel functions. Indeed, we may write this formula in the simplified form
$$u_n(x;\tfrac12)=\mathrm{i}\,\frac{\displaystyle\int_0^\infty\lambda^{-n-1/2}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda}{\displaystyle\int_0^\infty\lambda^{-n-3/2}\mathrm{e}^{-\mathrm{i}x(\lambda-\lambda^{-1})}\,\mathrm{d}\lambda},$$
where in both integrals the path of integration is the same, chosen (depending on $x$) so that the integrals are convergent at $\lambda=0,\infty$, and also the branch of $\lambda^{-n-1/2}$ is arbitrary as long as it is analytic along the contour of integration and taken to be the same in both integrals. By the substitution $\lambda=\mathrm{e}^t$ and comparison with [21, Equation 10.9.18] we then find that if $\mathrm{Im}(x)>0$, then
$$u_n(x;\tfrac12)=\mathrm{i}\,\frac{H^{(2)}_{n-1/2}\bigl(-\tfrac{\mathrm{i}}{2}x\bigr)}{H^{(2)}_{n+1/2}\bigl(-\tfrac{\mathrm{i}}{2}x\bigr)},$$
where $H^{(2)}_\nu(z)$ denotes a Hankel function. This formula admits meromorphic continuation to the whole complex $x$-plane. The same formula can then be expressed in terms of spherical Bessel functions of the second kind [21, 10.47(ii)] as
$$u_n(x;\tfrac12)=\mathrm{i}\,\frac{\mathsf{h}^{(2)}_{n-1}\bigl(-\tfrac{\mathrm{i}}{2}x\bigr)}{\mathsf{h}^{(2)}_{n}\bigl(-\tfrac{\mathrm{i}}{2}x\bigr)}.$$
The functions $\mathrm{e}^{\mathrm{i}z}\mathsf{h}^{(2)}_n(z)$ are explicit polynomials in $z^{-1}$ [21, Equation 10.49.7], and this in turn leads to the explicit formula
$$u_n(x;\tfrac12)=\frac{\displaystyle\sum_{j=1}^{n}\frac{(2n-j-1)!}{(n-j)!\,(j-1)!}\,x^j}{\displaystyle\sum_{j=0}^{n}\frac{(2n-j)!}{(n-j)!\,j!}\,x^j}.$$
The identification of $u_n(x;\tfrac12)$ with ratios of Bessel polynomials was also noted in [8].
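The internal consistency of the last two formulas can be checked numerically. The spherical Hankel functions satisfy the standard three-term recurrence $\mathsf{h}^{(2)}_{m+1}(z)=\frac{2m+1}{z}\mathsf{h}^{(2)}_m(z)-\mathsf{h}^{(2)}_{m-1}(z)$ with $\mathsf{h}^{(2)}_0(z)=\mathrm{i}\,\mathrm{e}^{-\mathrm{i}z}/z$ and $\mathsf{h}^{(2)}_1(z)=\mathrm{e}^{-\mathrm{i}z}(\mathrm{i}/z^2-1/z)$ (upward recurrence is stable here since $\mathsf{h}^{(2)}$ contains the dominant solution). A minimal sketch comparing the two expressions for $u_n(x;\tfrac12)$, with $z=-\tfrac{\mathrm{i}}{2}x$ as above:

```python
import cmath
from math import factorial

def u_poly(n, x):
    """Explicit Bessel-polynomial ratio for u_n(x; 1/2)."""
    num = sum(factorial(2*n - j - 1) // (factorial(n - j) * factorial(j - 1)) * x**j
              for j in range(1, n + 1))
    den = sum(factorial(2*n - j) // (factorial(n - j) * factorial(j)) * x**j
              for j in range(0, n + 1))
    return num / den

def u_hankel(n, x):
    """Same quantity via spherical Hankel functions of argument z = -i x / 2."""
    z = -0.5j * x
    h_prev = 1j * cmath.exp(-1j * z) / z                  # h^{(2)}_0(z)
    h_curr = cmath.exp(-1j * z) * (1j / z**2 - 1 / z)     # h^{(2)}_1(z)
    for m in range(1, n):                                  # recurse up to h^{(2)}_n
        h_prev, h_curr = h_curr, (2*m + 1) / z * h_curr - h_prev
    return 1j * h_prev / h_curr if n >= 1 else None

x = 0.7
for n in (1, 2, 3, 4):
    print(n, u_poly(n, x), u_hankel(n, x))
# e.g. u_1(x; 1/2) = x/(x+2) and u_2(x; 1/2) = x(x+2)/(x^2 + 6x + 12).
```

Both routes give, for example, $u_1(x;\tfrac12)=x/(x+2)$, a rational function tending to $1$ as $x\to\infty$, in line with the normalization of the family $u_n(x;m)$.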
More generally, from [21, Equation 10.9.18] it is clear that the integrals $I^\pm_{n,k,j}(x)$ are proportional to Hankel functions, and hence the expression for $u_n(x;\pm(\tfrac12+k))$ can always be written in terms of ratios of Hankel-type determinants whose entries are Bessel functions. More important from the point of view of asymptotic analysis in the large-$n$ limit, however, is the fact that the coefficients are integrals that may be analyzed by classical steepest descent methods; see [3].

REFERENCES
[1] M. Bertola, "On the location of poles for the Ablowitz-Segur family of solutions to the second Painlevé equation," Nonlinearity, 1179–1185, 2012.
[2] M. Bertola and T. Bothner, "Zeros of large degree Vorob'ev-Yablonski polynomials via a Hankel determinant identity," Int. Math. Res. Not., 9330–9399, 2015.
[3] T. Bothner and P. D. Miller, "Rational solutions of the Painlevé-III equation: large parameter asymptotics," in preparation, 2018.
[4] R. J. Buckingham and P. D. Miller, "Large-degree asymptotics of rational Painlevé-II functions: noncritical behaviour," Nonlinearity, 2489–2577, 2014.
[5] R. J. Buckingham and P. D. Miller, "Large-degree asymptotics of rational Painlevé-II functions: critical behaviour," Nonlinearity, 1539–1596, 2015.
[6] R. J. Buckingham, "Large-degree asymptotics of rational Painlevé-IV functions associated to generalized Hermite polynomials," arXiv:1706.09005, 2017.
[7] P. A. Clarkson, "The third Painlevé equation and associated special polynomials," J. Phys. A: Math. Gen., 9507–9532, 2003.
[8] P. A. Clarkson, C.-K. Law, and C.-H. Lin, "An algebraic proof for the Umemura polynomials for the third Painlevé equation," arXiv:1609.00495, 2016.
[9] O. Costin, M. Huang, and S. Tanveer, "Proof of the Dubrovin conjecture and analysis of the tritronquée solutions of $P_I$," Duke Math. J., 665–704, 2014.
[10] A. S. Fokas, A. R. Its, and A. V. Kitaev, "Discrete Painlevé equations and their appearance in quantum gravity," Comm. Math. Phys., 313–344, 1991.
[11] A. S. Fokas, A. R. Its, A. A. Kapaev, and V. Yu. Novokshenov, Painlevé Transcendents: The Riemann-Hilbert Approach, Mathematical Surveys and Monographs, Volume 128, American Mathematical Society, Providence, 2006.
[12] O. Gamayun, N. Iorgov, and O. Lisovyy, "How instanton combinatorics solves Painlevé VI, V, and III's," J. Phys. A: Math. Theor., 335203, 2013.
[13] V. I. Gromak, "The solutions of Painlevé's third equation," Differencial'nye Uravnenija, 2082–2083, 1973 (in Russian).
[14] M. Jimbo and T. Miwa, "Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. II," Physica D, 407–448, 1981.
[15] N. Joshi and M. Mazzocco, "Existence and uniqueness of tri-tronquée solutions of the second Painlevé hierarchy," Nonlinearity, 427–439, 2003.
[16] K. Kajiwara and T. Masuda, "On the Umemura polynomials for the Painlevé III equation," Phys. Lett. A, 462–467, 1999.
[17] P. D. Miller and Y. Sheng, "Rational solutions of the Painlevé-II equation revisited," SIGMA, 65, 29 pages, 2017.
[18] A. E. Milne and P. A. Clarkson, "Rational solutions and Bäcklund transformations for the third Painlevé equation," in P. A. Clarkson, ed., Applications of Analytic and Geometric Methods to Nonlinear Differential Equations, Kluwer Academic Publishers, 341–352, 1993.
[19] A. E. Milne, P. A. Clarkson, and A. P. Bassom, "Bäcklund transformations and solution hierarchies for the third Painlevé equation," Stud. Appl. Math., 139–194, 1997.
[20] V. Yu. Novokshenov, "Tronquée solutions of the Painlevé II equation," Theor. Math. Phys., 1136–1146, 2012.
[21] F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds., NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.14, 2016.
[22] H. Umemura, "Painlevé equations in the past 100 years," Amer. Math. Soc. Transl. (2), 2001.
[23] A. P. Vorob'ev, "On the rational solutions of the second Painlevé equation," Differ. Equations, 58–59, 1965.
[24] W. Wasow, Asymptotic Expansions for Ordinary Differential Equations, Interscience-Wiley, New York, 1965.
[25] A. I. Yablonskii, "On rational solutions of the second Painlevé equation," Vesti AN BSSR, Ser. Fiz.-Tech. Nauk, no. 3, 30–35, 1959.
Department of Mathematics, University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI 48109-1043, United States
E-mail address: [email protected]

Department of Mathematics, University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI 48109-1043, United States
E-mail address: [email protected]

Department of Mathematics, University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI 48109-1043, United States. Current address: Department of Mathematics, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104-6395, United States
E-mail address: [email protected]@sas.upenn.edu