CERN-TH-2020-218
Scattering from production in 2d
Piotr Tourkine a,b and Alexander Zhiboedov b
a CNRS, LPTHE, Sorbonne Université, 4 place Jussieu, 75005 Paris, France
b CERN, Theoretical Physics Department, CH-1211 Geneva 23, Switzerland
Abstract
In 1968, Atkinson proved the existence of functions that satisfy all S-matrix axioms in four space-time dimensions. His proof is constructive and to our knowledge it is the only result of this type. Remarkably, the methods to construct such functions used in the proof were never implemented in practice. In the present paper, we test the applicability of those methods in the simpler setting of two-dimensional S-matrices. We successfully implement two numerical iterative schemes (fixed-point iteration and Newton's method), which, by iterating unitarity and dispersion relations, converge to solutions to the S-matrix axioms. We characterize the region in the amplitude-space in which our algorithms converge, and discover a fractal structure connected to the so-called CDD ambiguities which we call "CDD fractal". To our surprise, the question of convergence naturally connects to the recent study of the coupling maximization in the two-dimensional S-matrix bootstrap. The methods exposed here pave the way for applications to higher dimensions, and expose some of the potential challenges that will have to be overcome.
Dedicated to the memory of André Martin

Appendices
A  Linear interpolation
B  Interpolation with Bernstein polynomials
C  Newton's method: oscillations and fractals
   C.1  Oscillations
   C.2  Fractals
D  Coupling maximization
   D.1  Coupling maximization
   D.2  Shift of the zero due to inelasticity
Introduction
Scattering and particle production are deeply connected in relativistic theories. In two-dimensional spacetime, d = 2, scattering without production leads to integrability [1]. In higher dimensions, d > 2, scattering without production is impossible [2]. In this paper we will be interested in the converse question:
Given particle production, can we reconstruct scattering?
In the simplest case of 2 → 2 scattering, unitarity schematically reads

    Disc_s T_{2→2} − |T_{2→2}|²  =  (MP) ,        (1.1)

where the left-hand side ("scattering") involves only the elastic amplitude, and the right-hand side ("production") is the multi-particle contribution. Here T_{2→2} is the two-to-two scattering amplitude, Disc_s stands for the discontinuity with respect to the Mandelstam variable s, and (MP) = Σ_{n>2} |T_{2→n}|² describes particle production in a given scattering process. The nontrivial problem we are interested in here is to solve unitarity (1.1) combined with analyticity and crossing (see section 2.1 for the precise formulation).

Following the pioneering works by Atkinson [3–7], we treat the particle production (MP) as fixed, given data, and solve for T_{2→2} as a function of (MP). This is the central idea of the present work. In practice, we do this by implementing numerically an iterative procedure developed by Atkinson, which converges to a solution of (1.1). Remarkably, provided (MP) is not too large, solutions to (1.1) can be shown to exist and to be reachable by the simple iteration procedures that we describe in detail below.

In this paper, we present and implement numerically two algorithms to reconstruct scattering from production in two spacetime dimensions. Our first method applies the unitarity equation (1.1) iteratively to find a fixed point thereof; our second method is the standard Newton's method applied to the same equation.

Two dimensions is an obvious starting point because S-matrices are much simpler there: they have only one kinematic invariant, and no phase-space integral in the unitarity equation. Furthermore, for scattering of identical particles the problem can be solved analytically, and therefore the performance and limitations of the numerical algorithms can be explored in great depth. While in two dimensions our algorithm simply recovers known solutions in a novel way, its power lies in the fact that it can be easily generalized to higher dimensions, where no solutions are known.

We call a function of Mandelstam invariants that satisfies the fundamental constraints of analyticity, unitarity, and crossing an amplitude-function.
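Before going into details, the structure of both schemes can be seen in a zero-dimensional caricature of (1.1): at a single energy, with the real part frozen, unitarity becomes a scalar quadratic equation for the imaginary part. The numbers a, r, v below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Zero-dimensional caricature of Disc T = |T|^2 + production:
# with a fixed real part r and a production input v, the imaginary part
# must satisfy rho = a*(rho**2 + r**2) + v.  (a, r, v are toy numbers.)
a, r, v = 0.1, 0.5, 0.2

def F(rho):
    # a root of F corresponds to a solution of the unitarity constraint
    return rho - a * (rho**2 + r**2) - v

# Scheme 1: fixed-point iteration rho -> a*(rho^2 + r^2) + v
rho_fp = 0.0
for _ in range(100):
    rho_fp = a * (rho_fp**2 + r**2) + v

# Scheme 2: Newton's method on F(rho) = 0, with F'(rho) = 1 - 2*a*rho
rho_nt = 0.0
for _ in range(20):
    rho_nt -= F(rho_nt) / (1 - 2 * a * rho_nt)

print(rho_fp, rho_nt)
```

Both schemes land on the same (small) root of the quadratic; for large production input the discriminant turns negative and no real solution exists, a scalar shadow of the convergence boundaries studied later in the paper.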
It is one of the aims of the nonperturbative S-matrix program to characterize the space of such functions. An amplitude-function does not necessarily describe scattering in some physical theory, but the converse is definitely true: every physical scattering amplitude is an amplitude-function. Constructing amplitude-functions and characterizing the space of physical parameters that they span, which we can loosely call the space of couplings, is an interesting and important problem, sometimes also called the primal problem; see [12] for a recent discussion.

By "scattering" we mean elastic scattering amplitudes n → n. By "production" we mean inelastic scattering amplitudes n → m. For illustration purposes, we suppress the phase-space integrals and simple kinematical prefactors. For d = 2 the precise statement can be found in the bulk of the paper. For d = 4 Atkinson proved sufficient conditions for convergence in [3–6]. Note that the basic principles discussed here are known to lead to many nontrivial constraints on the physical parameters of the theory; see [8–11] for some recent works. In these works, however, only some of the constraints coming from unitarity are implemented. In the problem we are considering, we aim at imposing unitarity fully at the level of the two-to-two scattering amplitude in a gapped theory (no massless particles in the spectrum).

A special role in this problem is played by elastic unitarity. Elastic unitarity is the statement of unitarity at energies where no particle production is possible. In this energy range (e.g. 4m² ≤ s < 16m² for pion scattering), (MP) vanishes in (1.1). When combined with analyticity and crossing, elastic unitarity is known to give many powerful and nontrivial constraints on the scattering amplitudes; see e.g. [18] for a recent discussion of the question. The methods that have been developed so far do not allow one to impose elastic unitarity, due to its nonlinear nature.
It is therefore an open problem to explore the space of amplitudes that satisfy both elastic and inelastic unitarity, analyticity, and crossing.

In this context, one remarkable result that so far has attracted little attention is a constructive proof by Atkinson [3–6] of the existence of functions satisfying all of the S-matrix axioms (crossing, elastic and inelastic unitarity) in four dimensions. The mathematical proof is based on constructing an iterative sequence and exhibiting sufficient conditions for its convergence. To our knowledge (see also [19]), it is the only result about two-to-two scattering amplitudes in gapped theories that has all the desired unitarity properties.

The main message of our paper is that the approach of imposing unitarity advocated in the papers by Atkinson is actually suitable for efficient numerical implementation. It therefore offers an exciting avenue to explore the space of scattering amplitudes that satisfy all the desired unitarity properties in higher dimensions, and to overcome some of the present limitations of the S-matrix program. If implemented in four dimensions, it would be the first method able to construct amplitude-functions which satisfy all axioms of the S-matrix program, and we therefore believe that this idea deserves further attention [20].

From a more philosophical standpoint, choosing (MP) seems to imply a huge amount of indeterminacy in the resulting scattering amplitude-function. Nevertheless, we see two main aspects that justify this approach. Firstly, when deriving bounds on the space of low-energy S-matrix parameters, we expect the detailed form of the multi-particle input to be irrelevant. By elevating the particle production amplitude (MP) to the role of an input parameter and observing how the 2 → 2 amplitude depends on it, one can test this expectation explicitly.

Our main results are the following:

• We implemented numerically, for the first time, iterative solutions to unitarity, analyticity, and crossing with a given inelasticity (MP) in two space-time dimensions.
We recovered the known integrable S-matrices, and analogues with inelasticity which match the known solutions to unitarity in 2d.

• We characterized in detail the range of convergence of the two methods we used to solve (1.1): fixed-point iteration, described in section 3, and Newton's method, described in section 4. We see that most of the parameter space of theories can be captured, apart from a slice near the edge, for which our algorithms do not converge. Covering the complete space of solutions in two space-time dimensions requires methods that go beyond the ones described in the paper.

• In d = 2, particle production (MP) does not specify the scattering amplitude T_{2→2} uniquely. In fact, there are infinitely many scattering amplitudes T_{2→2} with a given inelasticity. This space of solutions to our problem is characterized by CDD ambiguities [21–23]. We explored how different starting points of Newton's method allow us to recover S-matrices with different CDD factors.

The plan of the paper is as follows. In section 2, we review the basics of S-matrices in 2d and the analytic solutions which we will recover through our numerical implementation. In section 3 we describe our numerical implementation of the fixed-point iteration in two dimensions, and in section 4 we explain how we used Newton's method to answer the question posed at the beginning of the paper. Section 5 presents a discussion of the results and some open directions.

S-matrices in 2d

In this section, we review the basic properties of two-dimensional S-matrices and formulate precisely the problem of scattering from production that we would like to solve. We review the analytic solution to the problem, which involves the notion of CDD S-matrices [23], central to this work. Then, we present our numerical iterative solution to the problem, which is the subject of the present paper.
We consider scattering of identical massive bosons in two spacetime dimensions. The two-to-two kinematics is captured by a single kinematic invariant, the center-of-mass energy s, while t = 0 and u = 4m² − s. The S-matrix is therefore a function of s only, and the scattering amplitude T is defined by

    S(s) = 1 + iT(s)/√(s(s − 4m²)) .        (2.1)

In the free theory, T(s) = 0. The inverse factor of √(s(s − 4m²)) comes from the Jacobian that relates δ⁽²⁾(p₁ + p₂ − p₃ − p₄) to δ⁽²⁾(p₁ − p₃) δ⁽²⁾(p₂ − p₄) in two dimensions (see for instance [24]).

We assume that S(s) ≡ lim_{ε→0} S(s + iε) satisfies Mandelstam analyticity, namely that it is holomorphic outside of the unitarity cuts s ≤ 0 and s ≥ 4m², apart from potential poles that correspond to bound states located at 0 < s < 4m². Real analyticity implies that T(s*) = T*(s). Crossing symmetry takes the form

    S(s) = S(4m² − s) .        (2.2)

Finally, assuming that particle production starts from s = s₀, unitarity takes the following form:

    |S(s)| = 1 ,  4m² ≤ s < s₀ ,        (2.3)
    |S(s)| ≤ 1 ,  s ≥ s₀ .              (2.4)

In a theory with a single stable particle of mass m, s = 9m² and s = 16m² stand for the three- and four-particle thresholds, respectively. The explicit value of s₀ does not play an important role in the subsequent analysis.

To characterize particle production, we explicitly introduce inelasticity as follows:

    S(s) S*(s) ≡ 1 − f_i(s) ≥ 0 ,        (2.5)

where f_i(s) is defined for real, positive s and by assumption has nontrivial support starting from s ≥ s₀. It characterizes the amount of particle production. Below, for concreteness, we set s₀ = 16m², which corresponds to a situation where T_{2→3} = 0.

In terms of T(s), unitarity and inelasticity read

    Im T(s) = |T(s)|² / (2√(s(s − 4m²))) + v_i(s) ,        (2.6)

where

    v_i(s) = f_i(s) √(s(s − 4m²)) / 2 .        (2.7)

At this point we can formulate precisely the problem that we would like to solve.

Problem:
Given the inelasticity v_i(s), find the scattering amplitude T(s) that satisfies Mandelstam analyticity, crossing (2.2), elastic unitarity (2.3), and inelastic unitarity (2.4).

It turns out that an explicit solution to the problem stated above exists in two dimensions. We consider the cases v_i(s) = 0 and v_i(s) ≠ 0 separately.

Elastic amplitudes (v_i(s) = 0).  In two dimensions, it is possible to have scattering without particle production. Such theories are integrable, and they are characterized by an S-matrix which is a pure phase. The building blocks thereof are known as CDD factors. Following the terminology of [14], we will distinguish CDD poles and CDD zeros. The corresponding S-matrices are given by

    S^pole_CDD(s) = ( √(s(s − 4m²)) + i√(m_p²(4m² − m_p²)) ) / ( √(s(s − 4m²)) − i√(m_p²(4m² − m_p²)) ) ,        (2.8)

    S^zero_CDD(s) = ( √(s(s − 4m²)) − i√(m_z²(4m² − m_z²)) ) / ( √(s(s − 4m²)) + i√(m_z²(4m² − m_z²)) ) ,        (2.9)

where √(s(s − 4m²)) is continued from the upper half-plane, so that it equals i√(s(4m² − s)) for 0 < s < 4m². These S-matrices satisfy Mandelstam analyticity, crossing, and unitarity, |S^pole_CDD(s)| = |S^zero_CDD(s)| = 1 on the cut. It is easy to check that S^pole_CDD(s) has a pole at s = m_p², and that S^zero_CDD(m_z²) = 0.

It is also possible for S(s) to have a pair of complex-conjugate zeros. This can be achieved either by choosing a single CDD factor with m_z² = 2m² + iα, α ∈ ℝ, or by taking a product of two S^zero_CDD factors with complex-conjugate zeros.

It is easy to see that any purely elastic S-matrix in 2d is a product of CDD poles and CDD zeros. Firstly, map one half of the complex plane, Re(s) >
2m², minus the right cut s ≥ 4m², to the unit disk via

    s → z = ( √(s(s − 4m²)) − i√(a(4m² − a)) ) / ( √(s(s − 4m²)) + i√(a(4m² − a)) )

for some a (see [15, fig. 1]). Suppose that there exists an S-matrix S̃(z) with poles and zeros at some locations in the unit disk, and define f(z) to be the ratio of S̃(z) to the product of CDD-pole and CDD-zero S-matrices with the same poles and zeros. f(z) is therefore holomorphic, and since it possesses no zeros in the unit disk, 1/f(z) is also holomorphic. Since the map sends the right cut s ≥ 4m² to the boundary of the unit disk (again, see [15, fig. 1]), |f(e^{iθ})| = 1 for θ ∈ [0, 2π]. The application of the maximum modulus principle to f and 1/f gives that f(z) = 1 everywhere. Crossing symmetry allows one to extend this to the full complex plane.

Inelastic amplitudes (v_i(s) ≠ 0).  Following [14, 22], we can immediately write down a general solution to eq. (2.6), as follows:

    S(s) = S_elastic(s) exp[ ∫_{4m²}^∞ (ds′/2πi) log(1 − f_i(s′)) √( s(s − 4m²) / (s′(s′ − 4m²)) ) ( 1/(s′ − s) + 1/(s′ − (4m² − s)) ) ] ,        (2.10)

where |S_elastic(s)| = 1 is a purely elastic S-matrix given by a product of CDD zeros and CDD poles, as well as possibly an overall sign factor. It is immediate to check that (2.10) satisfies all the conditions described previously. Note that this formula assumes that log(1 − f_i(s)) decays sufficiently fast at large s. This will be enough for the purposes of this paper.

Properties of T(s).  Below we will be interested in the properties of the scattering amplitude T(s), in addition to those of S(s).
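With the conventions reconstructed above (units m = 1, and √(s(s−4)) continued from the upper half-plane, which numpy's principal complex square root reproduces), the defining properties of the CDD factors (2.8)–(2.9) can be verified numerically. This is our own sketch, not code from the paper; the values m_p², m_z² are arbitrary test choices.

```python
import numpy as np

def J(s):
    # sqrt(s(s-4)) on the physical sheet; for 0 < s < 4 the principal branch
    # gives +i*sqrt(s(4-s)), as required (units m = 1)
    s = np.asarray(s, dtype=complex)
    return np.sqrt(s * (s - 4))

def S_pole(s, mp2):
    B = np.sqrt(mp2 * (4 - mp2))
    return (J(s) + 1j * B) / (J(s) - 1j * B)

def S_zero(s, mz2):
    B = np.sqrt(mz2 * (4 - mz2))
    return (J(s) - 1j * B) / (J(s) + 1j * B)

mp2, mz2 = 2.8, 2.5
# pure phase on the two-particle cut s > 4
s_cut = np.linspace(4.1, 50, 100)
print(np.max(np.abs(np.abs(S_pole(s_cut, mp2)) - 1)))
# CDD zero vanishes at s = m_z^2; CDD pole blows up near s = m_p^2
print(abs(S_zero(mz2, mz2)), abs(S_pole(mp2 + 1e-9, mp2)))
# crossing s -> 4 - s holds since s(s-4) is crossing invariant
s_test = 1.3 + 0.7j
print(abs(S_pole(s_test, mp2) - S_pole(4 - s_test, mp2)))
# large-s limit: T = (S - 1) J / i approaches 2*sqrt(mp2(4-mp2)), cf. (2.12)
T_inf = (S_pole(1e8, mp2) - 1) * J(1e8) / 1j
print(T_inf, 2 * np.sqrt(mp2 * (4 - mp2)))
```

The last check is the single-pole contribution to the constant at infinity c_∞ in (2.12).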
Let us list a few of its relevant properties, which concern its behavior close to the two-particle threshold s = 4m² as well as its high-energy limit s → ∞.

Consider an amplitude with N_zeros ≥ N_poles ≥ 0. As s → ∞, this amplitude goes to a constant,

    lim_{s→∞} T(s) := c_∞ ,        (2.11)

given by (the computation is straightforward)

    c_∞ = 2 Σ_{i=1}^{N_poles} √(m_{p_i}²(4m² − m_{p_i}²)) − 2 Σ_{j=1}^{N_zeros} √(m_{z_j}²(4m² − m_{z_j}²))
          + (1/π) ∫_{4m²}^∞ ds′ log[1 − f_i(s′)] (s′ − 2m²) / √(s′(s′ − 4m²)) .        (2.12)

Close to the threshold, s ≃ 4m², the behavior of the amplitude depends on whether the total number of CDD poles and zeros, N_tot = N_zeros + N_poles, is even or odd:

    T(s) = 4im√(s − 4m²) + O(s − 4m²) ,        N_tot odd ,
    T(s) = c₁ (s − 4m²) + O((s − 4m²)²) ,      N_tot even .        (2.13)

In [14] such pairs of complex-conjugate zeros were called CDD resonances. By changing the power in (2.10) to (s(s − 4m²)/(s′(s′ − 4m²)))^{1/2+n}, we can generalize this formula to describe log(1 − f_i(s)) growing at large s.
We work in units where m = 1 and map the physical region to the interval x ∈ (0, 1] via

    x = 4/s .        (3.1)

To avoid confusion, we should use a different name for functions of x, for instance T̃(x) ≡ T(4/x). Hoping that it will not confuse the reader, below we omit the tildes and simply write T(x). In this variable, the two-particle threshold at s = 4 is mapped to x = 1, infinite s is mapped to x = 0, and the four-particle inelastic threshold at s = 16 maps to x = 1/4.
Finally, we introduce a notation for the imaginary part of the amplitude,

    ρ(x) ≡ Im T(x) ,  x ∈ [0, 1] .        (3.2)

Namely, working on a given discretized grid with some fixed numerical precision. Stated differently, we can say that we find that the iterative map (2.22a) is contracting. For the definition of contracting maps and a review of the corresponding fixed-point theorems, see e.g. [25].
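As a generic illustration of this contraction property (all numbers below are illustrative toy values, not the paper's data): a differentiable map converges locally to a fixed point when the spectral radius of its Jacobian there is below one, at a geometric rate set by that radius. The toy map has the same quadratic structure as the unitarity iteration of this section.

```python
import numpy as np

# A small nonlinear map rho -> a*(rho^2 + (G rho + q)^2) + v acting on a
# vector, mimicking the quadratic structure of the unitarity iteration.
rng = np.random.default_rng(0)
N = 6
a = 0.05 * np.ones(N)                  # toy stand-in for the kinematic prefactor
G = 0.1 * rng.standard_normal((N, N))  # toy stand-in for the dispersion matrix
q = 0.2 * np.ones(N)                   # inhomogeneous term
v = 0.1 * np.ones(N)                   # inelastic input

def phi(rho):
    re = G @ rho + q
    return a * (rho**2 + re**2) + v

rho = np.zeros(N)
for _ in range(200):                   # fixed-point iteration
    rho = phi(rho)

def jacobian(rho):
    re = G @ rho + q
    return a[:, None] * (2 * np.diag(rho) + 2 * re[:, None] * G)

residual = np.max(np.abs(phi(rho) - rho))
spectral_radius = np.max(np.abs(np.linalg.eigvals(jacobian(rho))))
print(residual, spectral_radius)
```

With these small parameters the spectral radius is far below one and the iteration settles on the fixed point to machine precision; increasing a or v pushes the spectral radius up and eventually destroys convergence, which is the mechanism analyzed later in the paper.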
1. Discretization.
We discretize the values of the function that we iterate on a grid of N + 1 points,

    x₀ = 0 < x₁ < ··· < x_{N−1} < x_N = 1 .        (3.3)

The grid spacing controls the precision of the overall agreement with the analytic solutions.

2. Interpolation.
We adopt two methods of interpolation: linear interpolation, and interpolation with Bernstein polynomials; see respectively appendices A and B. Linear interpolation allows easier experimentation with various grids and a local increase of precision near points of interest. The grid we used for the results of this paper with linear interpolants is given in (A.1). Bernstein polynomials require a uniformly spaced grid, but result in smoother functions and are known to converge uniformly upon increasing the resolution of the grid. The values of the function ρ_n at step n of the iteration on the grid are called ρ_{n,i}:

    ρ_n(x_i) = ρ_{n,i} ,  ρ_{n,0} = 0 ,  ρ_{n,N} = 0 .        (3.4)

Here x₀ = 0 and x_N = 1.

3. Dispersion integral (2.23).  The dispersion integral reconstructs the real part of T(x) from its imaginary part obtained using (2.23). Given a grid and an interpolation scheme, this dispersion integral reduces to the following matrix operation on ρ_n, just like in [14]:

    Re T_{n,i} = c_∞ − g_n ( 1/(4/x_i − m_p²) − 1/(4/x_i − (4 − m_p²)) ) + (1/π) Σ_{j=1}^{N−1} B_ij ρ_{n,j} .        (3.5)

The reconstruction matrix B_ij is computed once and for all, given the discretization and interpolation schemes. The explicit form of this matrix for linear interpolation and for interpolation with Bernstein polynomials can be found in the dedicated appendices.

The coupling g_n in (3.5) is defined by demanding that Re T_{n,N} = 0 (recall that x_N = 1 corresponds to s = 4):

    g_n = ( 1/(4 − m_p²) − 1/m_p² )⁻¹ ( (1/π) Σ_j B_Nj ρ_{n,j} + c_∞ ) .        (3.6)

This is the discretized version of (2.24). Once this is done, Re T_n reads

    Re T_{n,i} = Σ_j G_ij ρ_{n,j} + q_i ,        (3.7)

where

    G_ij = (1/π) ( B_ij − (P(x_i)/P(1)) B_Nj ) ,  q_i = c_∞ ( 1 − P(x_i)/P(1) ) ,        (3.8)

and P(x) is the pole function in the x-variable:

    P(x) ≡ 1/(4/x − m_p²) − 1/(4/x − (4 − m_p²)) .        (3.9)

4. Iteration.
The resulting map assumes the final form

    ρ_{n+1,i} = Φ(ρ_n)_i := x_i/(8√(1 − x_i)) ( ρ_{n,i}² + (Σ_j G_ij ρ_{n,j} + q_i)² ) + v_i(x_i) .        (3.10)

This is the discretized version of (2.22a)–(2.24).

With the explicit form of the map (3.10) at hand, we can apply and implement numerically theorems about the convergence of such maps. In finite dimensions, a map ρ⃗′ = Φ(ρ⃗) converges locally to a fixed point ρ⃗* = Φ(ρ⃗*) if the largest modulus among the eigenvalues of the Jacobian of Φ at the fixed point, also called the spectral radius, is strictly smaller than one. This result is standard and easily proven. Upon iteration, in a small neighborhood of the fixed point, any eigenvalue of modulus larger than one will drive the iteration away, while eigenvalues of modulus smaller than one will make it converge.

In our case, the Jacobian of the map can be computed explicitly and is given by

    J_{n,ij} = ∂ρ_{n+1,i}/∂ρ_{n,j} = x_i/(4√(1 − x_i)) ( ρ_{n,i} δ_ij + G_ij (Σ_k G_ik ρ_{n,k} + q_i) ) ,  i, j = 1, …, N − 1 ,        (3.11)

where i is not summed. The eigenvalues of the Jacobian are easily computed numerically, and we verify very accurately that, when the parameters are such that the spectral radius of the iteration is bigger than one, the algorithm diverges, while when it converges it does so at the exponential rate set by the largest eigenvalue. We now present our results, and start with a few examples of iterations.
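Step 3 above is the only nontrivial linear-algebra ingredient. The paper's dispersion kernel (2.23) is not reproduced in this excerpt, so as an illustration we build the analogous reconstruction matrix for the plain Cauchy kernel: a matrix B with (1/π) PV∫ ρ(t)/(t − x_i) dt ≈ Σ_j B_ij ρ_j for linear (hat-function) interpolants. The closed forms below come from integrating each hat segment against the kernel; the grid and test function are our choices.

```python
import numpy as np

def hat_pv(x, xl, xm, xr):
    """Principal-value integral of the hat function centered at xm
    (supported on [xl, xr]) against the kernel 1/(t - x)."""
    hl, hr = xm - xl, xr - xm
    if x == xm:
        # the divergent endpoint logs of the two segments cancel pairwise
        return np.log(hr / hl)
    def log_term(c, num, den):
        return 0.0 if c == 0.0 else c * np.log(abs(num) / abs(den))
    rising = 1.0 + log_term((x - xl) / hl, xm - x, xl - x)
    falling = log_term((xr - x) / hr, xr - x, xm - x) - 1.0
    return rising + falling

# grid on [-1, 1]; interior hats only (the test function vanishes at the ends)
N = 400
xs = np.linspace(-1.0, 1.0, N + 1)
B = np.array([[hat_pv(xi, xs[j - 1], xs[j], xs[j + 1]) for j in range(1, N)]
              for xi in xs])

# classical finite-Hilbert-transform pair: (1/pi) PV int sqrt(1-t^2)/(t-x) dt = -x
rho = np.sqrt(1 - xs**2)
H = B @ rho[1:N] / np.pi
err = np.max(np.abs(H + xs)[np.abs(xs) < 0.5])
print(err)
```

As in the paper, the matrix is computed once and then every iteration step is a single matrix-vector product; only the kernel (and hence the closed-form segment integrals) would change for the crossing-symmetric dispersion relation.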
Here we present examples of amplitudes for which the iteration converges. In terms of the analytical solution (2.10), we consider mainly two cases: no bound state (and no CDD zeros), and one bound state (and one CDD zero). We have also determined the convergence criterion of the algorithm in these cases, which we present below.
We start with the case where we have no bound state below the two-particle threshold s = 4. This is a small subset of the amplitudes we study, one for which S_elastic = 1 in (2.10). For these amplitudes, convergence is easy to understand, and the algorithm simply stops converging when f_i > 1, because there are no such S-matrices, as is evident from eq. (2.5). Note also that we need some inelasticity, otherwise the algorithm simply converges to zero.

Besides, since we have no bound state, we temporarily turn to the other strategy described around (2.21), and adjust the constant at infinity at each step of the iteration so as to cancel the real part at s = 4.

In figure 1, we display the results of the iteration for the following inelasticity:

    v_i(x) = H x(1 − x)  for x ≤ 1/4 ,    v_i(x) = 0  otherwise        (3.12)

(see (2.7) for the relation between v_i and f_i), starting from an arbitrary initial point. The parameter H > 0 sets the overall size of the inelasticity. Upon increasing H, the iteration eventually fails when v_i becomes so large that (2.5) cannot be satisfied; in our parametrization, this corresponds to H = 197. Near H = 197, convergence is no longer exponential, and eventually gives way to divergence, as expected.

Of course if, for instance, the matrix has blocks which are not contributing, one can refine this statement. This is so because the quantities we look at, for instance the coupling or the constant at infinity, are not eigenvectors of this Jacobian; hence they receive contributions from all modes.

Figure 1: Results of the algorithm with no bound state and an arbitrary but regular enough starting point (displayed). Convergence is exponentially fast, so we show only the first five iterations. The inelastic input chosen here is given in (3.12) with H = 120. Linear interpolants, grid given in (A.1). (a) Imaginary part of the S-matrix: iterations (color), analytic (dot-dashed). (b) Real part. (c) Modulus of the S-matrix. About 7 seconds are needed for 100 iterations on the grid of (A.1).

Figure 2: Convergence of the algorithm is dictated by the spectral radius ξ of the Jacobian. The horizontal axis labels the number of iterations.
The norm ||·||_∞ represents the maximum over the grid points of the discretized S-matrix, viewed as the vector of its values: ||S_n − S_{n−1}||_∞ = max_{i=0,…,N} |S_{n,i} − S_{n−1,i}|, where S_{n,i} = S_n(x_i).

A rather wide class of starting points can be used. We tried a variety of functions, but we did not attempt to characterize this space exactly, since the result of the iteration is unique and determined by (2.10); therefore there cannot be any subtleties in the final answer. In particular, one can start from the null initial amplitude ρ₀(x) = 0.

We now introduce one bound state at mass s = m_p² (x = 4/m_p²) in the analysis. From now on, we use the version of the algorithm where we keep c_∞ fixed and update the coupling; therefore we have a sequence g_n as well as a sequence ρ_n(x). We provide two examples to illustrate the dynamics of the algorithm, with and without inelasticity. Then we discuss the function space in which convergence is achieved below in section 3.2.

No inelasticity
In the case without inelasticity, the input of the algorithm is solely the mass of the bound state and the constant at infinity. Given this data, and for a wide class of input functions, the fixed-point iteration systematically produced amplitudes with one zero. An example thereof is provided in figure 3. The algorithm converges to an amplitude with one zero, whose location is exactly the one fixed by (2.12); in this particular example, m_z² ≃ …. Figure 3 shows the perfect agreement between this solution and the result of the iteration.

Figure 3: Results of the iterations for one bound state and a fixed constant at infinity. Obtained for c_∞ = 1.…, m_p² = 2.8. We show only the first 5 iterations. The analytical solution is given by (2.10) with S_elastic = S^pole_CDD S^zero_CDD, with m_z² ≃ … the value which solves (2.12) with v_i = 0. Linear interpolant, grid of eq. (A.1). A hundred iterations are performed in about 5 seconds.

With the given input data, other options would have included 2 or more CDD zeros, whose contributions to the constant at infinity add up according to (2.25). As alluded to above, many such solutions are possible. However, we also observed that for those solutions the spectral radius is always larger than one; hence the version of the algorithm considered in this section does not converge to them. We discuss this in more detail below.
With inelasticity

In the presence of inelasticity, we similarly found that we converge to the solution (2.10) with only one zero. We show an example of the iteration for the sake of the reader's curiosity in fig. 4 below.

Figure 4: Results of the iterations for one bound state and a fixed constant at infinity. Obtained for c_∞ = −…, m_p² = 2.…, H = 40. The result of the first four iterations only. The analytic solution is given by (2.10) with S_elastic = S^pole_CDD S^zero_CDD, with m_z² = 3.…, as per eq. (2.12). Linear interpolant, grid given in eq. (A.1).

As we vary some of those parameters (the inelasticity when present, or the constant at infinity), convergence is lost, and this draws the boundary of a domain of convergence, which we describe at length in section 3.2.2 below.
Implementing the algorithm for more bound states presents no technical difficulty and can be done easily. If we have n bound states, the input data of the algorithm, in the spirit of [14], will be the locations of the poles m_p₁², …, m_pₙ², the couplings of the poles except the first one, g₂, …, gₙ, and the constant at infinity c_∞. The simplest convergent possibility for this case is the amplitude with n CDD zeros. The problem is therefore to determine n variables: the coupling g₁ and the locations of the zeros m_z₁², …, m_zₙ². Each coupling g_i at a given value of the m_pⱼ²'s furnishes one constraint on the m_zⱼ²'s, leaving effectively a single unknown, as in the one-pole case, which leads us to conjecture that the systematics is exactly identical. Only the shape of the domain in which convergence is achieved is changed. We have not performed an exhaustive study of this shape in the case with more poles.

Let us now turn to the analysis of the region in parameter space in which the algorithm converges.
In the convergent examples above, the total number of CDD factors (poles and zeros) N_tot was always even. It is easy to understand why we never converged to solutions with N_tot odd: this is related to the near-threshold behavior (2.13). When N_tot is odd, the amplitude behaves near threshold as

    ρ(x) = 8√(1 − x) (1 + O(1 − x)) ,        (3.13)
    Re T(x) = O(1 − x) .

With this near-threshold behavior, it is straightforward to show that there exists at least one eigenvector of the Jacobian with eigenvalue close to 2, and therefore the iterative map is divergent, as explained below the presentation of the fixed-point algorithm, around (3.11).

To demonstrate this explicitly, let us consider the behavior of the Jacobian (3.11) near x = 1. From the behaviors (3.13) and (3.11) it immediately follows that

    J_{N−1,N−1} ≃ 2 ,  J_{N−1,i} ≃ 0  (i ≠ N − 1) .        (3.14)

This implies that (J^T)_{ij} δ_{j,N−1} ≃ 2 δ_{i,N−1}, and therefore J has an eigenvalue close to 2. As a result, even if we start exactly on the solution, finite grid precision induces a small deviation from the actual solution, which grows exponentially and eventually causes the map to diverge after a finite number of iterations (empirically, 5–10).

When we compute the eigenvalues of the iterative map explicitly, we discover an intricate pattern of complex eigenvalues, but we confirm the presence of eigenvalues close to 2. We show one example in figure 5. The eigenvalues are intriguingly contained in a circle of radius one centered at one, but unfortunately the criterion for convergence is that they lie within the unit disk centered at zero.
This might be related to the eigenvalues of Newton's method, but in that case the criterion for convergence is different: as we explain below, the divergence then comes from the fact that the Jacobian becomes singular.

Within the CDD-even class, we have found convergence only when the number of zeros equals the number of poles, and within a certain range of parameters among them, which we discuss in the subsection below. For amplitudes with an even number of CDD factors but different numbers of poles and zeros, there seems to exist no region where the iteration converges. We leave a more systematic understanding of this for future work.

Figure 5: Eigenvalues of the Jacobian for a pure CDD-pole amplitude with m_p² = 2.8: (a) linear interpolation; (b) interpolation with Bernstein polynomials.
In this section we present one of the main results of the paper, summarized in figure 6. We analyzed the shape of the convergence space for the one-zero, one-pole amplitudes. Remarkably, it happens to be given by translates of the optimal-coupling curve of [14]. We also found that, both with linear and Bernstein interpolants, the full space of one-zero, one-pole amplitudes cannot be covered: the algorithm stops converging at a small but nonzero distance away from the full space (in section 4 we show how Newton's method allowed us to extend this domain).

Taking for granted that our algorithm, when fed with one pole as in (2.23), converges only to a one-pole solution (see hereafter sec. 3.2.3), the problem of the shape of the convergence space becomes effectively one-dimensional. Indeed, the inelastic contribution is fixed entirely by (2.12), and the only unknown that remains is the position of the zero, or equivalently the coupling, i.e. the residue of the pole (they are related, hence determining one fixes the other). As there is only one unknown, we can choose to represent it either as the location of the zero, the value of the coupling, or the value of the constant at infinity, since all these quantities are related to each other. To allow a more direct comparison with [14], we choose to represent the coupling, or rather the logarithm of the coupling squared.

To better understand the results, we first need to describe two phenomena. Firstly, take a one-pole, one-zero amplitude. At fixed m_p² (mass of the bound state), the coupling increases as the CDD zero moves towards 4, and becomes maximal when the zero is exactly at 4: that is the optimal-coupling amplitude [14, 22] (black curve in figure 6). That fact is easy to check by extracting explicitly the residue of the corresponding amplitude.
Furthermore, as the constant at infinity is the input of our algorithm and is related to the position of the zero by (2.12) or (3.16), increasing the constant at infinity can also be seen to push the zero towards 4 and, correspondingly, to increase the coupling, until it reaches its maximal value. Relatedly, when decreasing the constant at infinity, the zero approaches 2 and eventually moves off into the complex plane, with Re(m_z²) = 2m² and m̄_z² = 4m² − m_z², so that the constant at infinity remains real but can then be arbitrarily negative while keeping ρ(x = 0) = 0. A complex constant at infinity would imply that the imaginary part of the amplitude does not decay to zero, and hence that the dispersion integral needs to be written with subtractions; we do not consider such amplitudes in this work.

Secondly, in the presence of inelasticity, the coupling is given by

    g = g_elastic exp[ ∫_{4m²}^∞ (ds′/2π) log(1 − f_i(s′)) √( m_p²(4m² − m_p²) / (s′(s′ − 4m²)) ) ( 1/(s′ − m_p²) + 1/(s′ − (4m² − m_p²)) ) ] ,        (3.15)

and the position of the zero is constrained by

    c_∞ = 2√(m_p²(4m² − m_p²)) − 2√(m_z²(4m² − m_z²)) + (1/π) ∫_{4m²}^∞ ds′ log[1 − f_i(s′)] (s′ − 2m²) / √(s′(s′ − 4m²)) .        (3.16)

In (3.16), we see that, at fixed c_∞, increasing the inelasticity shifts the zero towards 4 and hence increases the factor g_elastic above. To determine whether g itself increases or not, one also needs to look at the exponential. It turns out that for most inelasticities the exponent is numerically very small, because of the fast decay of the integrand, so that the exponential is hardly different from one. Therefore, from this perspective, increasing inelasticity increases the coupling.
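This mechanism is easy to check numerically in the conventions reconstructed above (units m = 1). The inelasticity profile below is an arbitrary sample of ours, not the paper's: fixing c_∞ and turning on f_i shifts the zero of a one-pole, one-zero amplitude towards threshold, which enhances g_elastic.

```python
import numpy as np

m_p2 = 2.8                                    # bound-state position (m = 1)
Bp = np.sqrt(m_p2 * (4 - m_p2))
Bz = lambda mz2: np.sqrt(mz2 * (4 - mz2))

# sample inelasticity supported above the four-particle threshold s = 16
sp = np.linspace(16.0, 40.0, 4001)
f_i = 0.3 * np.exp(-((sp - 20.0) ** 2) / 4.0)
y = np.log(1 - f_i) * (sp - 2) / np.sqrt(sp * (sp - 4))
# trapezoidal rule for the integral term of (3.16); I_c <= 0 since log(1-f) <= 0
I_c = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(sp)) / np.pi

def solve_zero(c_inf, I):
    # solve (3.16) for m_z^2 in (2, 4) by bisection; Bz is monotonically
    # decreasing on that interval
    lo, hi = 2.0, 4.0 - 1e-12
    target = (2 * Bp + I - c_inf) / 2          # required value of Bz
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Bz(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c_inf = 2 * (Bp - Bz(3.0))                     # chosen so m_z^2 = 3 when f_i = 0
mz2_elastic = solve_zero(c_inf, 0.0)
mz2_inelastic = solve_zero(c_inf, I_c)
print(mz2_elastic, mz2_inelastic)
```

Since the integral term is negative, matching the same c_∞ forces a smaller √(m_z²(4 − m_z²)), i.e. a zero closer to 4, exactly as stated in the text.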
We can now describe our results. The first is a map of the convergence domain without inelasticity, obtained by applying the process described above: at a fixed value of $m_p^2$, for which values of the constant at infinity does the map converge? As expected, we found that the constant at infinity can be arbitrarily negative, which simply pushes the zero into the complex plane with real part 2. The actual limit of convergence is reached for positive values of the constant at infinity, which are such that the coupling of the resulting amplitude is a certain fraction of the optimal coupling,

$$\frac{g^2}{g^2_{\max}} \sim c. \qquad (3.17)$$

With linear interpolants, we found that fraction $g^2/g^2_{\max}$ to be of order 0.5, while with Bernstein polynomials we had a slight improvement and observed a factor of 0.55. These are the blue and yellow thin curves in figure 6.

We also explored the convergence space in the presence of inelasticity; this is our second result. Surprisingly, we found that inelasticity does not play a role: whatever function we used, we could only reach the same maximal coupling as without inelasticity. Later, when exploring Newton's method in the next section, we will again find that, remarkably, the convergence region is well characterized by the criterion that $g^2/g^2_{\max}$ is less than some number.

Furthermore, we have not been able to produce any improvement of these bounds by refining our grid, and we believe that the bound is strict: in two dimensions the fixed-point iteration map should not cover all of parameter space. In the next section we will enlarge the region of convergence using Newton's method.

To produce our results, we applied the bisection method by hand for a given grid of mass values $m_p^2$: in this way we could find an optimal $c_\infty$ at which the algorithm is at the edge of no longer converging. Our criterion was that, after a few hundred iterations, the S-matrix should be of modulus one to $10^{-\ldots}$ accuracy over the last 10 iterations. We came up with this criterion by empirical search. It turned out to correspond rather accurately to the spectral radius becoming equal to 1 (within $10^{-\ldots}$). (If the amplitude grows at infinity, the dispersion integral needs to be written with subtractions; we do not consider such amplitudes in this work.)

Figure 6: Convergence of the fixed-point iteration occurs in the blue-shaded and yellow-shaded regions, depending on the interpolation scheme.

1-pole, $n > 1$ zeros amplitudes

We now look at amplitudes with one pole and more zeros and ask whether our fixed-point iteration can converge to them. The first observation is that one can always take an amplitude with $n$ zeros and send $n-1$ of them away from 4, in order to have a spectral radius below one and hence possible convergence.
At this point, the S-matrices look completely identical, and this relates to the question of the numerical accuracy of the algorithm. Excluding this pathological behaviour, we have scanned extensively the bulk of parameter space of the 1-pole $n$-zero amplitudes ($n$ even and odd) up to $n = 5$.

We first designed a grid of $n$-tuples $(m_p^2, m_{z_1}^2, \ldots, m_{z_n}^2)$ of 10 evenly-spaced, distinct elements within $[2.1, 3.9]$ (11 for $n = 5$). We computed systematically the spectral radius of the Jacobian for all the elements (for $n = 5$, there are more than 400 tuples) and found all spectral radii to be equal exactly to two, to $10^{-\ldots}$ accuracy, for linear interpolants. We also computed systematically, for $n = 2$ zeros, the spectral radius for possibly identical elements (which include zeros of higher order). This represents around 1000 eigenvalue computations, which all ended up being equal to 2 to the same accuracy.

Note that while we presented an argument for the existence of such an eigenvalue for an odd number of poles and zeros around (3.14), we do not have such an argument for $N_{\rm tot}$ even. It would be interesting to understand the origin of this fact.

Given the observed similarity of eigenvalue patterns between even and odd $n$, and knowing that for $n$ odd we can determine that the spectral radius is at least two, we are led to conjecture that, for all $n$, the spectral radius of the Jacobian of the map of the 1-pole $n$-zero amplitude is exactly 2. It would be very interesting to have a demonstration of this fact.

4 Newton's method
In the previous section we explored the fixed-point mapping (2.22a) to solve unitarity and crossing. We observed convergence of the algorithm for a wide class of amplitudes and within some range of parameters (positions of poles and zeros, constant at infinity). We gave evidence that the fixed-point iteration diverges at a finite distance away from the boundary of amplitude space, and explained that improving the grid did not decrease this distance.

The aim of this section is to present results obtained using a different iteration scheme, with better convergence properties: Newton's method (2.22b). In one dimension, Newton's method aims at finding a root of a function $f(x)$ via the sequence

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. \qquad (4.1)$$

For a function continuously differentiable near a root, there always exists a neighborhood of the root in which Newton's method converges to the root. In this case, convergence is "quadratic". Convergence of the Newton method is a rich mathematical domain. One famous result that perhaps illustrates this point best is the fractal structure of the basins of attraction of the method when applied to the complex roots of polynomials; see appendix C for details. In this section we will observe similar patterns, which we relate to CDD ambiguities and discuss in section 4.2.

The point that interests us here is that Newton's method can be used to solve a fixed-point equation $x_* = g(x_*)$ by searching for the roots of $f = \mathrm{id} - g$, where id is the identity function, $\mathrm{id}(x) = x$. Atkinson suggested, see e.g. [7], that when the iteration method described in the previous section reaches the boundary of convergence, one could extend convergence by using Newton's method. The conditions for convergence of the Newton method are indeed better than those of the fixed-point iteration, because the gradient helps to direct the iteration.

The steps of the iteration as we implemented it are as follows.
Steps 1-3 (discretization, interpolation, and dispersion integral) are implemented in the same way as before. At step 4 the algorithm changes, as follows:

4'. Newton's method iteration.
The map (2.22b) is implemented as

$$J^\Psi_n\, (\rho_{n+1} - \rho_n) = -\Psi(\rho_n), \qquad (4.2)$$

where the $i,j$ indices have been suppressed, $\Psi \equiv \mathrm{id} - \Phi$, id is the identity operator, $\Phi$ is the iterative map defined in the previous section in (3.10), and $J^\Psi_{n,ij}$ is the Jacobian of the map $\Psi$. In effect, this way of implementing Newton's method is equivalent to the form

$$\rho_{n+1} = \rho_n - (J^\Psi_n)^{-1}\, \Psi(\rho_n), \qquad (4.3)$$

whose continuous version was eq. (2.18). But eq. (4.2) has an immense advantage in terms of numerical cost and stability. The Jacobian is defined explicitly by

$$J^\Psi_{n,ij} = \partial(\rho_{n,i} - \Phi[\rho_n]_i)/\partial\rho_{n,j} = \delta_{ij} - J_{n,ij}, \qquad (4.4)$$

where $J$ was defined in (3.11).

(Quadratic convergence means that the error at one step is proportional to the square of the error at the previous step. This is faster than the fixed-point iteration, where convergence is only "linear": the error is proportional to the error at the previous step and typically decreases like $q^n$ for $q < 1$. Linear solvers perform better than brute-force inversion for a variety of reasons; in our case, the main reason seems to be that inversion is tantamount to having solved the system for all values of the input vectors.)

Figure 7: Maximal coupling as a function of the mass of the bound state. The fixed-point iteration method converges in the blue-shaded and yellow-shaded regions. The Newton method extends convergence to the red-shaded region. With the finite grids that we used we were not able to fill in the gray region.
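Schematically, one Newton update in the discretized setting amounts to solving the linear system (4.2) for the increment, rather than forming the inverse in (4.3). The sketch below is a minimal stand-in, assuming a hypothetical smooth two-component map `Phi` in place of the actual amplitude map (3.10), with a finite-difference Jacobian and a 2×2 solve by Cramer's rule:

```python
# Toy fixed-point problem: find rho with rho = Phi(rho), via Newton on Psi = id - Phi.
def Phi(r):
    # hypothetical smooth two-component map (a stand-in for the amplitude map)
    x, y = r
    return [0.5 * (x * x + y) - 0.1, 0.25 * (x + y * y)]

def Psi(r):
    return [r[i] - Phi(r)[i] for i in range(2)]

def jacobian(f, r, h=1e-7):
    # finite-difference Jacobian J[i][j] = d f_i / d r_j
    f0 = f(r)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        rp = list(r)
        rp[j] += h
        fj = f(rp)
        for i in range(2):
            J[i][j] = (fj[i] - f0[i]) / h
    return J

def newton_step(r):
    # Solve J * delta = -Psi(r) (here a 2x2 system, by Cramer's rule)
    # instead of inverting J, mirroring the structure of eq. (4.2).
    J, F = jacobian(Psi, r), Psi(r)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    dx = (-F[0] * J[1][1] + F[1] * J[0][1]) / det
    dy = (-F[1] * J[0][0] + F[0] * J[1][0]) / det
    return [r[0] + dx, r[1] + dy]

r = [0.0, 0.0]
for _ in range(8):      # quadratic convergence: a handful of steps suffice
    r = newton_step(r)
assert max(abs(v) for v in Psi(r)) < 1e-10   # r is now a fixed point of Phi
```

In the actual implementation the same structure appears with the discretized amplitude vector in place of `r` and a dense Jacobian of the map (3.10); only the linear solve is delegated to a library routine.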
In Mathematica, we use LinearSolve[$J^\Psi_n$, $b$], where $b = J^\Psi_n \rho_n - \Psi(\rho_n)$, which performs very well in terms of speed for grids of a few hundred points.

We first consider the case discussed in detail in section 3.2.2: the S-matrix of interest has one CDD pole and one CDD zero. When doing the fixed-point iteration, we observed that the algorithm stopped converging as the CDD zero approached the two-particle threshold, and that convergence was lost when an eigenvalue of the map became bigger than one. Let us now show the results of our analysis using Newton's method.

We found that Newton's method can be used to significantly extend the region of convergence of the fixed-point mapping. This is an important result from two points of view. From the perspective of Atkinson's program, it remained unknown whether the convergence radius of the fixed-point iteration could be extended at all; our successful implementation of Newton's method shows that it is indeed the case. Secondly, from the point of view of our bootstrapping procedure, it is important to be able to describe a wider class of amplitudes.

We depict our extended convergence region by plotting again the maximal coupling as a function of the mass of the bound state in figure 7. We could get, with both linear and Bernstein interpolants, to $g^2/g^2_{\max} \sim 0.75$, which is a significant improvement over the $g^2/g^2_{\max} \sim 0.5$-$0.55$ that we could achieve using the fixed-point iteration. Interestingly, linear interpolation gave better results than Bernstein's this time.

In the bulk of the convergence region, we observe (see figure 8) that the convergence of the Newton method is extremely fast, as expected from the fact that it typically converges quadratically: we converge to $10^{-\ldots}$ in 5-10 iterations. This counterbalances the time lost in computing the Jacobian.

Figure 8: Fast convergence of Newton's method. On the vertical axis we plot the difference between consecutive iterates. The horizontal axis labels the number of iterations. To be compared with figure 2.

Figure 9: Distribution of eigenvalues of $J^\Psi$ for a purely elastic scattering amplitude consisting of a single CDD pole and a CDD zero. We plot it close to the boundary of convergence, with $m_z^2 = 3.98$ for linear interpolation (left) and $m_z^2 = 3.97$ for Bernstein polynomial interpolation (right). As we move the location of the zero closer to 4, the support of the distribution of eigenvalues on the right panel moves to negative values for interpolation with Bernstein polynomials; it has a more obscure pattern for linear interpolants. The amplitude functions, past the limit of convergence, look more and more jagged, with an instability growing near $x = 0.9$, as in figure 10.

A natural question one can then ask is the following: as we improve the resolution of the grid, can we fill in the gray region of figure 7? To our surprise, given the efficiency of Newton's method, we found that for the few grid resolutions that we used, we could not improve the results in a clear-cut way. This leads us to conclude that there is a set of amplitudes which the default implementation of Newton's method is not able to capture. We motivate this intuition as follows. In analyzing how convergence is lost for one-zero one-pole amplitudes, we found that, as $m_z^2$ approaches 4 (i.e. as we increase the coupling), the Jacobian $J^\Psi$ becomes singular and develops a zero eigenvalue. We plot the distribution of eigenvalues of $J^\Psi$ close to the boundary of convergence in figure 9.

We observed that the distribution of eigenvalues does not qualitatively change as we change the grid. It simply becomes more dense along the lines that are clearly visible in both plots of figure 9. In particular, for Bernstein interpolants the picture is very clear: the eigenvalues populate a continuum on the real line, and once the minimal eigenvalue has passed 0, the Jacobian becomes uniformly singular. The situation is a little more involved for linear interpolants, where one could think that if a line of eigenvalues crosses the origin, the algorithm might converge again. We observed that this is not the case: past this point, the algorithm either diverges or yields pathological results.

Figure 10: An example of unstable behavior of Newton's method with linear interpolation, for $m_z^2$ past the limit of convergence.

We therefore believe that, in order to fill the gray region that remains in figure 7, a more sophisticated method of solving unitarity is needed. It would be interesting to develop such a method, but we do not pursue this further in the present paper. For illustrative purposes, in figure 10 we display an example of a non-convergent iteration past the convergence limit for linear interpolants. The functions exhibit an instability growing near $x = 1$ and look more and more jagged or dented, while on the coupling plot one can see oscillations appearing.

Given that we are now in possession of an algorithm that converges for a much broader set of amplitudes, we can ask again the question considered in section 3.1. Given input data, namely the constant at infinity and the position of the poles, how does the algorithm determine the number of CDD zeros in the final solution? Indeed, many combinations of CDD-zero factors can produce the same constant at infinity, and it cannot be known in advance what the algorithm converges to. This is one aspect of the CDD ambiguity.

The answer to this question is actually simple: it depends on the starting point. We have been able to observe this very explicitly. By choosing generic initial conditions, we found that the algorithm converges to the one-pole one-zero solution. By tuning the initial data close to another many-zero solution, we could reach the solutions with multiple zeros, as we demonstrate in figure 11.

In this figure, we report the results of the following experiment. We chose a particular CDD amplitude with one pole at $m_p^2 = 2.\ldots$ and one zero at $m_z^2 = 2.2$. We further chose one CDD amplitude with the same pole and three zeros, at $m_{z_1}^2 = 3.\ldots$, $m_{z_2}^2 = 3.\ldots$, and $m_{z_3}^2$ chosen such that the constants at infinity of both amplitudes are identical, which means that $m_{z_3}^2$ solves the following equation:

$$\sqrt{m_{z_1}^2(4-m_{z_1}^2)} + \sqrt{m_{z_2}^2(4-m_{z_2}^2)} + \sqrt{m_{z_3}^2(4-m_{z_3}^2)} = \sqrt{m_z^2(4-m_z^2)}. \qquad (4.5)$$

Numerically, this gives $m_{z_3}^2 \simeq \ldots$. We then considered a family of initial conditions $\mathrm{Im}\, T(x) = f_\lambda(x)$ that interpolates between the one-zero amplitude and the three-zero amplitude, defined by

$$f_\lambda(x) = (1-\lambda)\, \mathrm{Im}\, T_{1\text{-zero}}(x; m_p, m_z) + \lambda\, \mathrm{Im}\, T_{3\text{-zero}}(x; m_p, m_{z_1}, m_{z_2}, m_{z_3}), \qquad \lambda \in [0, 1]. \qquad (4.6)$$

Figure 11: CDD fractal. (a) Position of the zero(s) of the amplitude as the initial point, determined by $\lambda$ in eq. (4.6), is varied. For $\lambda < 1/2$ the iteration converges to the one-zero solution; for $\lambda \geq 1/2$ it reaches solutions with more zeros. (b) The real zero $\mathrm{Re}(m_z^2)$ as a function of $\lambda$, on two superposed x-axes (top and bottom ticks). Panels (c) and (d) represent projections of the three-dimensional plot (a). Those plots illustrate the roughness of the curve for $\lambda \geq 1/2$, at various scales, around $\lambda = 0.85$ (red) and $\lambda = 0.851$ (blue). This is indicative of some chaotic fractal behaviour; hence the name: CDD fractal.
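For concreteness, the matching condition (4.5) is easily solved numerically for the third zero: since $\sqrt{m^2(4-m^2)}$ decreases monotonically from 2 to 0 on $[2,4]$, bisection applies. The sketch below uses illustrative values (not those of figure 11):

```python
from math import sqrt

def h(m2):
    # contribution of a zero (or pole) at m2 to the constant at infinity, in units m = 1
    return sqrt(m2 * (4.0 - m2))

def third_zero(mz_sq, z1_sq, z2_sq):
    # Solve h(z1) + h(z2) + h(m3) = h(mz) for m3 in (2, 4) by bisection, cf. eq. (4.5).
    target = h(mz_sq) - h(z1_sq) - h(z2_sq)
    lo, hi = 2.0, 4.0          # h decreases monotonically from 2 to 0 on [2, 4]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if h(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative values: one-zero amplitude with mz^2 = 2.2,
# three-zero amplitude with two zeros fixed at 3.9 and 3.8
m3 = third_zero(2.2, 3.9, 3.8)
assert abs(h(3.9) + h(3.8) + h(m3) - h(2.2)) < 1e-12
```

With these assumed inputs the third zero lands close to 4, as expected: the two fixed zeros already nearly saturate the constant at infinity, so the remaining zero must contribute little.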
We asked the following question: how does the result of Newton's method iterations depend on $\lambda$? In the first part of our analysis, we studied a uniform grid in $\lambda$-space with increments of 0.05. We discovered that for $\lambda < 1/2$ we systematically converge to the one-zero solution. We probed more values in this range, in particular close to $\lambda \lesssim 1/2$, and confirmed this behaviour.
5, but the rough behaviours are identical. Then, for other evenly spaced values of λ ≥ / λ in 3 dimensions in figure (11a). The position on the real zero on the blue-shadedplane in particular is visibly erratic on this plot. We did not represent this but the complex zerosalso undergo such an erratic movement in the complex plane.The abrupt change around 1 / λ pushed us to repeat the experiment near other points of λ > / λ = 0 . λ = 0 .
85 for no specialreason. We observed at various scales a similar roughness, as we try to illustrate in figure (11b)where we represented only the position of the real part of the real zero. This figure shows thetwo series of points λ = 0 . λ = 0 .
85 in red and blue, respectively, whose x-graduations areread on the top and bottom axis, respectively. A zoomed version of a transition near 0 . N tot . This isremarkable because this was not possible using the fixed-point iteration method. This fact alsoexcludes the possibility that the square-root behaviour near s = 4 presents a problem for thealgorithm of the algorithm, because all amplitudes with odd N tot exhibit the same near-thresholdbehavior (3.13), and not just the pure CDD-pole amplitude for which Newton’s method does notconverge. We used two different iterative strategies to solve equation (1.1) together with analyticity andcrossing: fixed-point method (2.22a), and Newton’s method (2.22b). We further used two differentinterpolating strategies for each of the algorithms: linear interpolation described in appendix A,and interpolation with Bernstein polynomials described in appendix B. Here we review the prosand cons of various strategies. vs Newton
To remind the reader, the iterative map was defined by

$$\rho_{n+1} = \Phi[\rho_n], \qquad (5.1)$$

and Newton's method was applied to find the roots of $\Psi \equiv \mathrm{id} - \Phi$,

$$\rho_{n+1} = \rho_n - (\Psi')^{-1} \cdot \Psi[\rho_n]. \qquad (5.2)$$

While both methods lead to the same result, namely a solution to unitarity and crossing, the domains in which the algorithms converge are not identical. We could partially characterize the domains of convergence in both cases. For the fixed-point iteration, we related convergence of the algorithm to the spectral radius (the maximal eigenvalue, in modulus) of the Jacobian (3.11) of $\Phi$: the fixed-point iteration converges whenever the spectral radius at the fixed point is smaller than one. For Newton's method, convergence is lost whenever the Jacobian of $\Psi$ becomes singular. There exist adaptations of the Newton method which can be employed to deal with singular Jacobians. We did not attempt this: since the problem of unitarity and crossing in two dimensions is already solved theoretically, improving this aspect did not seem pressing.

Newton's method is also slightly more complex to implement, as one needs to compute the Jacobian of the map at every step. Since it was easily computable for us, it is not surprising that we could implement this method efficiently. If the Jacobian is not available or hard to compute, one can use so-called quasi-Newton methods. Note also that the time spent in computing the Jacobian is traded off against the much faster convergence of Newton's method.

Results

We observed that the implementation of Newton's method vastly increased the range of convergence of the fixed-point iterative algorithm. It extended convergence in two directions. Firstly, it allowed us to describe more 1-zero 1-pole amplitudes, see figure 7. Secondly, it allowed us to describe amplitudes with an arbitrary number of CDD zeros.
In the latter case, we observed a rich pattern of convergence to various solutions as a function of the starting point, with a fractal structure, see figure 11.

Newton's method also converged much faster than the fixed-point iteration, due to the help of the gradient; compare for instance figures 2 and 8. From the perspective of Atkinson's program, these results are truly new, as it had not been shown, even theoretically, that one could go beyond the convergence radius of the fixed-point iteration. Our results show that it can be done.

Linear interpolation vs Bernstein polynomials
We used two different types of interpolants: linear, and polynomial (Bernstein). For both, we were able to devise grids that would yield results with good accuracy, within a certain range of parameters, with laptop computing power in Mathematica. The results we obtained at finite grid size were very good, and in principle we could converge to an even better accuracy by increasing the size of the grid (see below).

The advantage of the linear interpolants is that we were able to place more points near $x = 1$ ($s \to 4^+$). It was however a delicate task to know where specifically to add points to improve convergence, while with Bernstein polynomials it is more straightforward, as the spacing is uniform. With our tweaked, 200-point grid in the linear case we could converge to $10^{-\ldots}$ accuracy, all the error coming from the trapezoidal rule in the dispersion integral. With Bernstein polynomials, we obtained a clear uniform convergence as we increased the grid size.

In both iteration schemes, fixed-point and Newton's method, both interpolation strategies would exhibit, near the edge of convergence, jagged solutions, with oscillations growing to a stable $O(1)$ size similar to those displayed in figure 10. Eventually, the oscillations become so large that the S-matrix no longer satisfies unitarity. While spikes could have been expected with linear interpolants, which have no notion of smoothness, this is more surprising for the Bernstein polynomials, which could have been expected to stay smooth. Finally, in terms of performance, linear interpolants performed better for the Newton method; Bernstein polynomials performed better with the fixed-point iteration.

The most exciting aspect of the method described in the present paper is that it generalizes to higher dimensions. Indeed, an algorithm similar to the one described in section 2.3 can be numerically implemented in four dimensions [20]. This refers both to the fixed-point iteration, as well as to Newton's method. Technically, the main difference is that higher-dimensional amplitudes are functions of both energy and angle, or, equivalently, of $s$ and $t$, which makes the problem two-dimensional. Upon discretization and interpolation, the problem again reduces to multiplication of pre-computed matrices. Increasing the resolution of the two-dimensional grid is computationally more costly; however, we believe it is feasible on a cluster, since most of the computations can be parallelized. (Remarkably, such an attempt was made by Boguta in 1974 [26]. Unfortunately, the system of equations analyzed in that paper was not the physical one, and only two iterations were performed, which overall makes the results of the analysis inconclusive.)

Figure 12: Graphical depiction of the fixed-point mapping action. In the RHS we included the contributions of both the $s$- and $u$-channels, as required by crossing. On a given unitarity cut, $s > 4$ or $s < 0$, only one of them contributes.
The implications of elastic unitarity are much richer in higher dimensions, and the method described in this paper is, to the best of our knowledge, the only available tool to implement the correct structure of the support of the double spectral density, as well as to satisfy the Mandelstam elastic unitarity equation. In particular, it would be very interesting to construct amplitude functions that both satisfy elastic unitarity and maximize the coupling. The results of [15] suggest that the maximal coupling amplitudes have negligible inelasticity; therefore we expect that such amplitudes are excellent candidates to be approached using the methods described in the present paper. We hope to report on this in the near future.
It is instructive to compare the iterations discussed in this paper to standard perturbation theory. The diagrammatic representation of the iterative map is given in figure 12. The black dot represents inelastic processes, namely it has support only in the multi-particle region, and is kept fixed during the iteration process. This is in sharp contrast with usual perturbation theory, where the number of new multi-particle diagrams grows factorially, see e.g. [27, 28] and [29] for a more recent discussion; as a result, perturbation theory is typically divergent (in two dimensions, integrable theories are notable exceptions [1, 30]).

In the iterative solution to unitarity, by contrast, we can relate the number of graphs at step $n+1$ to the number of graphs at step $n$,

$$N_{n+1} = 2\,(N_n)^2 + 1. \qquad (5.3)$$

The $n$-th iteration step does not involve $n$-loop diagrams, but $L = 2^n$ loops. It is easy to see that (5.3) implies $\log(N_n) \sim 2^n \sim L$. Therefore, we have $e^{cL}$ graphs at $L$ loops, which is much fewer than the usual $L!$ Feynman graphs at $L$ loops. In particular, denoting inelasticity by a coupling $\lambda$, the series becomes $\sum_L \lambda^L e^{cL}$ and has a finite radius of convergence given by the condition $\lambda e^c < 1$.

In higher dimensions there is in addition $s$-$t$ crossing (as opposed to only $s$-$u$ crossing in two dimensions). Rather than summing the one-dimensional chains of bubbles generated by iterations with beads of inelasticity $v_i$, higher-dimensional iterations naturally lead to two-dimensional diagrams. It would be very interesting to study them in detail. Nevertheless, we expect that the basic scaling of the number of diagrams as a function of the number of loops, with the coupling measured by inelasticity, stays the same, and therefore, given that inelasticity is not too big, iterations should converge.
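The scaling implied by the recursion (5.3) can be checked directly: iterating $N_{n+1} = 2N_n^2 + 1$ from a hypothetical seed $N_0 = 1$ (the seed is an assumption for illustration), the ratio $\log(N_n)/2^n$ rapidly settles to a constant $c$, i.e. $N \sim e^{cL}$ with $L = 2^n$ loops:

```python
from math import log

N = [1]                           # hypothetical seed: one graph at step 0
for _ in range(12):
    N.append(2 * N[-1] ** 2 + 1)  # eq. (5.3)

# L = 2**n loops at step n; check that log(N_n) / 2**n approaches a constant c,
# i.e. the number of graphs grows like e^{c L} -- far slower than L!.
a = [log(N[n]) / 2 ** n for n in range(1, 13)]
assert all(x < y for x, y in zip(a, a[1:]))   # monotonically increasing ...
assert a[-1] - a[-2] < 1e-3                   # ... and rapidly saturating
```

Successive differences of the ratio shrink like $\log 2 / 2^{n+1}$, so the limit $c$ exists; this is the $e^{cL}$ counting quoted in the text.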
This picture is consistent with Atkinson's rigorous results on the existence of a nonzero convergence radius in four dimensions [3–6].

One immediate observation regarding standard perturbation theory versus unitarity iterations in higher dimensions is that any fixed order in the coupling result for the scattering amplitude is not consistent with Gribov's theorem, see e.g. [18] for a recent review, and thus cannot be used as an input for the convergent iterations. In other words, it is crucial for the iterative algorithm to work to have as an input a UV-improved model for inelastic effects. We leave exploration of this aspect for the future.

Figuring out exact bounds of Atkinson's convergence in 2d
In the present paper we analyzed convergence of the iterative map (2.22a)-(2.24) numerically, in the discretized setting. It would be interesting to establish convergence criteria for the functional map directly, in the spirit of Atkinson's proofs [3–6]. We were not able to generalize the proof technique of the original papers in four dimensions to the two-dimensional case.

Atkinson's proof is nicely exemplified by the forward limit of the amplitude iteration in the lectures [7]. It proceeds in two steps. Firstly, one proves the existence of an open set of functions that maps to itself under the map, which gives existence of a solution. Then one shows that the map is contracting within this set, which ensures uniqueness of this solution.

The problem we encounter in reproducing the proof in 2d is that the map involves a factor of $1/\sqrt{1-x}$, while in 4d the square root is in the numerator, $\sqrt{1-x}$. Defining $\rho' = \Phi(\rho)$, for the first step we want to compare $\|\rho'\| = \|\Phi(\rho)\|$ to $\|\rho\|$, where the norm is the so-called Hölder norm,

$$\|\rho(x)\| = \sup_{0 \leq x_1, x_2 \leq 1} \frac{|\rho(x_1) - \rho(x_2)|}{|x_1 - x_2|^\mu}, \qquad (5.4)$$

for some parameter $0 < \mu < 1$. The factor $1/\sqrt{1-x}$ in the unitarity condition, however, seems to allow the function $\rho'$ to grow parametrically bigger than $\|\rho\|$ near $x = 1$. Where Atkinson could find a way out by simply defining the set as the ball of radius $b$, $\|\rho\| < b$, we need to specify more data about the function to ensure that $\|\rho\| < b \implies \|\rho'\| < b$.

Looking back, we know that the fixed-point iteration is divergent for 1-pole 3-zero amplitudes, even though they closely resemble the convergent 1-pole 1-zero amplitudes, both from the perspective of the near-threshold behaviour, eq. (2.13), and from the fact that they have similar Hölder norms. Therefore, it is perhaps not surprising that a generic proof à la Atkinson, which assumes nothing but bounded Hölder norms, should fail without further assumptions, for instance on the derivatives of the functions.

It would be interesting to understand this in detail and rigorously establish the space of inelasticities and input parameters that lead to convergent iterations for the continuous map.
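The obstruction can be illustrated numerically. The sketch below (an illustration of ours, not part of Atkinson's argument) evaluates a discrete version of the Hölder seminorm (5.4) with $\mu = 1/2$ on grids accumulating at $x = 1$: for the 4d-like factor $\sqrt{1-x}$ the seminorm stays bounded, while for the 2d factor $1/\sqrt{1-x}$ it grows without bound as the grid is refined.

```python
from math import sqrt

def holder_seminorm(f, xs, mu=0.5):
    # discrete version of the Hoelder seminorm (5.4): sup |f(x1)-f(x2)| / |x1-x2|^mu
    return max(abs(f(a) - f(b)) / abs(a - b) ** mu
               for i, a in enumerate(xs) for b in xs[i + 1:])

results = {}
for n in (10, 100, 1000):
    xs = [1.0 - 1.0 / (k + 2) for k in range(n)]      # grid accumulating at x = 1
    results[n] = (holder_seminorm(lambda x: sqrt(1.0 - x), xs),        # 4d-like numerator
                  holder_seminorm(lambda x: 1.0 / sqrt(1.0 - x), xs))  # 2d denominator factor

# sqrt(1-x) is Hoelder-1/2: its seminorm stays below 1 on every grid,
# while 1/sqrt(1-x) is unbounded near x = 1 and its seminorm blows up.
assert all(b < 1.0 for b, u in results.values())
assert results[10][1] < results[100][1] < results[1000][1]
```

This is exactly the sense in which a bound $\|\rho\| < b$ alone does not control $\|\Phi(\rho)\|$ in 2d: the $1/\sqrt{1-x}$ enhancement near threshold escapes any fixed Hölder ball.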
Theories with multiple stable particles
An obvious direction in which to generalize the present analysis is to consider theories with several stable particles, e.g. theories with a nontrivial flavor symmetry [31–34]. In this case the analog of the analytic solution (2.10) is not readily available, and the numerical techniques developed in the present work might be useful.
Theories with massless particles
In theories with massless particles there is no separation in energy between the elastic and inelastic contributions, which start at the same threshold. Nevertheless, we can still write the unitarity equation as

$$\mathrm{Disc}_s\, T_{2\to2} - |T_{2\to2}|^2 = \text{Multi-particle}. \qquad (5.5)$$

(In $d$ dimensions, the phase-space factor is $(s-4)^{(d-3)/2} \sim (1-x)^{(d-3)/2}$.)

Acknowledgements
We would like to thank Amit Sever for collaboration at an early stage of the project. We wouldlike to thank Miguel Correia, Andr´e Martin, Slava Rychkov, and Amit Sever for useful discussionson related topics. PT is grateful to CERN for hospitality during the final phase of the project.This project has received funding from the European Research Council (ERC) under the EuropeanUnion’s Horizon 2020 research and innovation programme (grant agreement number 949077).
A Linear interpolation
All the plots which show iterations of both algorithms have been produced with the following grid,with 197 elements:
Join[(·/·)^Range[·, ·], Range[·, ·, ·], Range[·, ·/·, ·], Range[·/·, ·/·, ·], Range[·/·, ·, 0.25·10^-·], (1 − (·/·)^Range[·, ·])] // DeleteDuplicates // Rationalize // Sort    (A.1)

We provide this grid in an ancillary file pts196.dat. On this grid, we defined our linear interpolant throughout the whole domain [0, 1]. At step $n$ we defined

$$\rho_n(x) = \rho_{n,i-1} + (\rho_{n,i} - \rho_{n,i-1})\, \frac{x - x_{i-1}}{x_i - x_{i-1}}, \qquad 0 \leq x_{i-1} < x < x_i \leq 1, \qquad (A.2)$$

$$\rho_{n,i} = \rho_n(x_i). \qquad (A.3)$$

The next step is the dispersion integral of the interpolating function. We can compute this integral analytically on each segment $[x_i, x_{i+1}]$, in the spirit of [14]. Care must be taken with the position of the point $x_j$ at which the dispersion integral is evaluated, to remove logarithmic divergences at the boundaries of the segment, as prescribed by the principal-value integral. This led to the definition of a matrix $B_{i,j}$ such that

$$\frac{\mathrm{P.V.}}{\pi} \int_0^1 \rho_n(x)\, K(x, x_j)\, dx = \frac{1}{\pi} \sum_{i=0}^{N} B_{j,i}\, \rho_{n,i}, \qquad (A.4)$$

where

$$K(x, x_0) = \frac{2}{x} + \frac{1}{x_0 - x} - \frac{1 - x_0}{x_0 + x - x\, x_0} \qquad (A.5)$$

is the integration kernel resulting from changing $s \to x = 4/s$ in the dispersion integral. This step is the essential part of our implementation of the Atkinson program in 2d: it reduces the complexity of computing integrals at each step to the action of a matrix on a vector. This very same step renders the calculations of [14] amenable to efficient numerics.

To be complete, we should also mention that we developed a mixed-interpolation strategy, using square-root interpolants, in order to have better convergence properties in the $N_{\rm tot}$ even sector, whereby, close to 1, we would resort to the following interpolants:

$$\rho_n(x) = \frac{\rho_{n,i-1}\sqrt{1-x_i} - \rho_{n,i}\sqrt{1-x_{i-1}}}{\sqrt{1-x_i} - \sqrt{1-x_{i-1}}} + \sqrt{1-x}\; \frac{\rho_{n,i} - \rho_{n,i-1}}{\sqrt{1-x_i} - \sqrt{1-x_{i-1}}}, \qquad x_{i-1} < x < x_i. \qquad (A.6)$$

We did not observe a significant gain in accuracy, so we do not report on the related results, which are essentially identical to those obtained with the pure linear interpolant. In particular, the pure-pole CDD factor is still not a convergent fixed point of either algorithm.

B Interpolation with Bernstein polynomials
Here we describe the interpolation method with Bernstein polynomials. We use this method in the one-dimensional case relevant for the present paper, but it can be straightforwardly adapted to higher-dimensional cases as well. Bernstein polynomials are defined as follows:

$$b_{\nu,N}(x) = \frac{N!}{\nu!\,(N-\nu)!}\, x^\nu (1-x)^{N-\nu}. \qquad (B.1)$$

Given a continuous function $f(x)$ on the interval $x \in [0,1]$, one defines

$$f_N(x) \equiv \sum_{\nu=0}^{N} f\!\left(\frac{\nu}{N}\right) b_{\nu,N}(x). \qquad (B.2)$$

The famous result by Bernstein then states that, as $N \to \infty$, $f_N(x)$ converges to $f(x)$ uniformly in $x$:

$$\lim_{N\to\infty} f_N(x) = f(x). \qquad (B.3)$$

Let us now consider the following ansatz for the imaginary part of the amplitude $\rho(x)$:

$$\rho(x) = \sqrt{1-x}\, \sum_{\nu=0}^{N} \rho\!\left(\frac{\nu}{N}\right) b_{\nu,N}(x). \qquad (B.4)$$

Imposing the decay at infinity, $\rho(0) = 0$, and elastic unitarity at $x = 1$, namely $\lim_{x\to1} \rho(x) = 8\, \delta_{N_{\rm tot},\,\rm even}\, \sqrt{1-x}\,(1 + O(1-x))$, we can rewrite this ansatz as follows:

$$\rho(x) = \sqrt{1-x}\, \Big( 8\, \delta_{N_{\rm tot},\,\rm even}\, b_{N,N}(x) + \sum_{\nu=1}^{N-1} \rho_\nu\, b_{\nu,N}(x) \Big), \qquad (B.5)$$

where the presence of $\delta_{N_{\rm tot},\,\rm even}$ reflects the difference in the threshold behavior of the amplitude for $N_{\rm tot}$ even versus odd. The precise coefficient 8 is fixed by elastic unitarity, see (2.13). Alternatively, we can also use (B.4) without the $\sqrt{1-x}$ factor in front. We tried both options in implementing the iterative algorithms. While the performance of both is very similar for $N_{\rm tot}$ odd, for $N_{\rm tot}$ even (B.4) works much better, because it correctly captures the near-threshold behavior of the amplitude.

The dispersion integral relevant for the iterations takes the form

$$I_{\nu,N}(x) \equiv \int_{4}^{\infty} ds' \left( \frac{1}{s'-s} + \frac{1}{s'-(4-s)} \right) \left(1 - \frac{4}{s'}\right)^{1/2} b_{\nu,N}\!\left(\frac{4}{s'}\right) = \frac{\Gamma(N+1)\,\Gamma(N-\nu+\cdot)}{\Gamma(N+\cdot)\,\Gamma(N-\nu+1)} \left[ {}_2F_1(1, \nu, N+\cdot;\, x) + {}_2F_1\!\Big(1, \nu, N+\cdot;\, \frac{x}{x-1}\Big) \right], \qquad x = \frac{4}{s}. \qquad (B.6)$$

This integral is singular for $\nu = 0$; however, in the present paper we work with functions that satisfy $\rho(0) = 0$, so the integral is always effectively finite. Using this result, we can immediately write down the matrix $B_{ij}$ that enters the discretized unitarity relation (3.5):

$$B_{ij} = \mathrm{Re}\Big[ I_{j,N}\Big(\frac{i}{N}\Big) \Big], \qquad i = 1, \ldots, N, \quad j = 1, \ldots, N-1. \qquad (B.7)$$

C Newton's method: oscillations and fractals
In this short section we describe two properties of Newton’s method.
C.1 Oscillations
Sometimes, the Newton method results in oscillations of fixed amplitude. This is not surprising, and a similar phenomenon is easily observed in one dimension. Let f be a function such that f(x) \sim_{x\to 0} |x|^\alpha. It is known that the root at x = 0 can be reached by the standard Newton method only for \alpha > 1/2. For \alpha = 1/2, one is stuck in a cycle of 2-point oscillations, which are therefore of O(1) if the starting point is O(1) away from the root. For 0 < \alpha < 1/2, the Newton method overshoots the root and eventually diverges.
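The three regimes of \alpha are easy to reproduce numerically. The sketch below is our own minimal illustration (not code from the paper): for the model choice f(x) = sign(x) |x|^\alpha the Newton map reduces to x_{n+1} = (1 - 1/\alpha) x_n, so convergence, the 2-cycle, and divergence can be read off from the factor |1 - 1/\alpha|.

```python
import math

def f(x, alpha):
    # model function with a root at x = 0 and f(x) ~ |x|^alpha near it
    return math.copysign(abs(x) ** alpha, x)

def df(x, alpha):
    return alpha * abs(x) ** (alpha - 1)

def newton(x0, alpha, steps):
    # plain Newton iteration x -> x - f(x)/f'(x)
    x = x0
    for _ in range(steps):
        x -= f(x, alpha) / df(x, alpha)
    return x

# alpha > 1/2: the map contracts by |1 - 1/alpha| < 1 and converges to the root
print(newton(1.0, 0.8, 20))                          # ~ (-1/4)^20, essentially 0
# alpha = 1/2: x -> -x exactly, an O(1) two-point oscillation
print(newton(1.0, 0.5, 19), newton(1.0, 0.5, 20))    # -1.0 1.0
# 0 < alpha < 1/2: |1 - 1/alpha| > 1 and the iteration diverges
print(abs(newton(1.0, 0.3, 20)) > 1e6)               # True
```

The 2-cycle at \alpha = 1/2 is exact in floating point here, since the Newton step is x \to x - x/\alpha = -x.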
C.2 Fractals
Here we review a simple fact about the fractal structure of the Newton algorithm. It is a well-known property of the Newton method that it gives fractal basins of attraction (Fatou sets), separated by so-called Julia curves. This is illustrated in figure 13 for the Newton method applied to find the roots of the polynomial p(z) = z^3 - 1.

D Coupling maximization
D.1 Coupling maximization
It is interesting to consider the following variation of the coupling maximization problem. Consider a scattering amplitude that has a single bound state at the location m_p^2. Let us also assume in addition that the amplitude satisfies the following bound at high energies:

\lim_{s\to\infty} |T(s)| \le c\, |s|^{\alpha},    (D.1)

where \alpha is a fixed exponent, and let us maximize the coupling g that is defined as the residue of the amplitude at s = m_p^2.

Figure 13: Plot of the basins of attraction of the Newton method applied to find the roots of p(z) = z^3 - 1. In higher dimensions, an even more complicated structure is to be expected, which is consistent with what we observe.

It is easy to see that the coupling is maximized by minimizing inelasticity. Indeed, we can write [14]

g^2 = g_0^2\, e^{\int_{4m^2}^{\infty} \frac{ds'}{\pi} \log[1-f_i(s')]\, \sqrt{\frac{m_p^2(4m^2-m_p^2)}{s'(s'-4m^2)}} \left( \frac{1}{s'-m_p^2} + \frac{1}{s'-(4m^2-m_p^2)} \right)},    (D.2)

where g_0^2 comes from S_{\mathrm{elastic}}(s) in (2.10). It is clear from (D.2) that the argument of the exponential is nonpositive, and that it is maximized (set to zero) by choosing f_i(s) = 0. It is easy to see then that adding CDD zeros can only reduce the coupling as well, and as a result the maximum coupling is achieved by a single CDD pole \pm S^{\mathrm{pole}}_{\mathrm{CDD}}(s).

This argument is, however, not valid for general \alpha. Indeed, a single CDD pole corresponds to (D.1) with \alpha = 0. Therefore, if we want to consider amplitudes with a different behavior in the UV, we should modify this amplitude. Let us argue, however, that such modifications do not affect the maximal value of the coupling. In other words, we can introduce a small inelasticity that implements (D.1) while making its contribution to (D.2) arbitrarily small.

Let us demonstrate this very explicitly for the case \alpha = -1. To implement such an amplitude we would like to introduce an inelasticity that cancels the constant piece in the large-s expansion of the CDD pole. To this end we get the following equation:

\lim_{s\to\infty} T(s) = 2\sqrt{m_p^2(4m^2-m_p^2)} + \int_{4m^2}^{\infty} \frac{ds'}{\pi}\, \log[1-f_i(s')]\, \frac{s'-2m^2}{\sqrt{s'(s'-4m^2)}} + O\!\left( \frac{1}{s} \right).    (D.3)

By taking f_i(s) = c\, \theta(s-s_0)\, \frac{\epsilon\, s_0}{s} \left( \frac{s}{s_0} \right)^{-\epsilon} and choosing c appropriately, it is clear that (D.3) can be satisfied for arbitrarily small \epsilon and arbitrarily large s_0. On the other hand, the contribution of such an inelasticity to the coupling (D.2) can be made arbitrarily small.

Therefore, at least in 2d, we see that there are infinitely many amplitudes that satisfy all the required properties arbitrarily close to the maximal value of the coupling. In some sense, we can hide the high-energy properties of the amplitude so far in the UV that they do not affect the IR. It would be very interesting to understand if this principle continues to hold in higher dimensions as well.

D.2 Shift of the zero due to inelasticity
Let us next consider a situation where the amplitude goes to a constant, \lim_{s\to\infty} T(s) = c_\infty, or equivalently S = 1 + i\, c_\infty/s + \ldots. Let us also assume that the amplitude has poles at m_{p_i}^2 and zeros at m_{z_j}^2.

Using the general solution (2.10), such asymptotic behavior leads to the following relation:

2\sum_i \sqrt{m_{p_i}^2(4m^2-m_{p_i}^2)} - 2\sum_j \sqrt{m_{z_j}^2(4m^2-m_{z_j}^2)} = c_\infty - \int_{4m^2}^{\infty} \frac{ds'}{\pi}\, \log[1-f_i(s')]\, \frac{s'-2m^2}{\sqrt{s'(s'-4m^2)}}.    (D.4)

The relation (D.4) means that, if we want to keep c_\infty and the m_{p_i}^2 fixed while turning on inelasticity, the zeros have to shift towards the two-particle threshold 2m. Indeed, since -\log[1-f_i(s')] increases as we increase f_i(s'), the LHS of (D.4) should increase as well. Given that the m_{p_i}^2 are fixed, the only way to achieve this is by reducing the contribution of the zeros. This in turn requires moving them closer to 2m. If the amplitude does not have any zeros, as is the case for the optimal coupling, turning on inelasticity demands decreasing c_\infty.

Let us compare the formula above to the case of an S-matrix with one pole and one zero, with the same constant at infinity but no inelasticity. For an S-matrix with a pole at m_p^2 and a constant c_\infty at infinity, it is easy to see that the location of the zero m'^2_z is given by

2\sqrt{m_p^2(4m^2-m_p^2)} - 2\sqrt{m'^2_z(4m^2-m'^2_z)} = c_\infty.    (D.5)

Therefore, equating eqs. (D.4) and (D.5) gives

2\sqrt{m'^2_z(4m^2-m'^2_z)} - 2\sqrt{m_z^2(4m^2-m_z^2)} = -\int_{4m^2}^{\infty} \frac{ds'}{\pi}\, \log[1-f_i(s')]\, \frac{s'-2m^2}{\sqrt{s'(s'-4m^2)}} \ge 0.    (D.6)

Adding inelasticity shifts the position of m_z^2 towards the threshold (compared to the location m'^2_z in the analogous elastic amplitude), and as a result increases the coupling.
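The mechanism at work here, that reducing a zero's contribution to the relation (D.5) forces the zero toward the threshold, can be checked with a few lines of arithmetic. The sketch below is our own illustration (function names and conventions are ours, not the paper's): in units m = 1, so that the threshold sits at s = 4, and restricting for illustration to the branch m_z^2 \in (2, 4), it verifies that the contribution decreases monotonically and inverts it by bisection.

```python
# Illustration in units m = 1 (two-particle threshold at s = 4 m^2 = 4).
# A zero at s = mz2 contributes sqrt(mz2 * (4 - mz2)) to relations like (D.5).

def zero_contribution(mz2):
    return (mz2 * (4.0 - mz2)) ** 0.5

# On the branch mz2 in (2, 4) the contribution decreases monotonically
# toward 0 at threshold, so reducing it pushes the zero toward s = 4.
grid = [2.0 + 0.1 * k for k in range(20)]
vals = [zero_contribution(x) for x in grid]
assert all(a > b for a, b in zip(vals, vals[1:]))

def zero_from_contribution(target, tol=1e-12):
    """Invert the contribution by bisection on mz2 in (2, 4)."""
    lo, hi = 2.0, 4.0  # contribution is 2 at mz2 = 2 and 0 at mz2 = 4
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if zero_contribution(mid) > target:
            lo = mid  # contribution too big: move the zero toward threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

# smaller required contribution (more inelasticity) -> zero closer to threshold
print(zero_from_contribution(1.0) < zero_from_contribution(0.5))   # True
```

For instance, a target contribution of 1 lands the zero at m_z^2 = 2 + \sqrt{3} \approx 3.73, and any smaller target moves it closer to 4.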
The converse operation, adding inelasticity while keeping the position of the zero fixed, results in lowering both the constant at infinity and the coupling, in agreement with (D.2). These comments describe the basic behaviour of the fixed-point iteration algorithm that we observe in section 3.
Footnote: Looking at (D.2), one can object that increasing inelasticity also has the opposite effect of decreasing the coupling. We observe that, of the two effects on the coupling, the shift of the zero towards 2m wins, and overall the coupling increases as we increase inelasticity.

References

[1] P. Dorey, Exact S matrices, in Eotvos Summer School in Physics: Conformal Field Theories and Integrable Models, pp. 85–125, 1996, hep-th/9810026.
[2] S. Aks, Proof that scattering implies production in quantum field theory, Journal of Mathematical Physics (1965) 516–532.
[3] D. Atkinson, A Proof of the Existence of Functions That Satisfy Exactly Both Crossing and Unitarity: I. Neutral Pion-Pion Scattering. No Subtractions, Nucl. Phys. B (1968) 375–408.
[4] D. Atkinson, A Proof of the Existence of Functions That Satisfy Exactly Both Crossing and Unitarity (II). Charged Pions. No Subtractions, Nucl. Phys. B (1968) 377–390.
[5] D. Atkinson, A proof of the existence of functions that satisfy exactly both crossing and unitarity (III). Subtractions, Nucl. Phys. B (1969) 415–436.
[6] D. Atkinson, A proof of the existence of functions that satisfy exactly both crossing and unitarity. IV. Nearly constant asymptotic cross-sections, Nucl. Phys. B (1970) 397–412.
[7] D. Atkinson, S matrix construction project: existence theorems, rigorous bounds and models.
[8] B. Bellazzini, J. Elias Miró, R. Rattazzi, M. Riembau and F. Riva, Positive Moments for Scattering Amplitudes.
[9] A. J. Tolley, Z.-Y. Wang and S.-Y. Zhou, New positivity bounds from full crossing symmetry.
[10] S. Caron-Huot and V. Van Duong, Extremal Effective Field Theories.
[11] N. Arkani-Hamed, T.-C. Huang and Y.-t. Huang, The EFT-Hedron.
[12] A. L. Guerrieri, A. Homrich and P. Vieira, Dual S-matrix bootstrap. Part I. 2D theory, JHEP (2020) 084.
[13] M. F. Paulos, J. Penedones, J. Toledo, B. C. van Rees and P. Vieira, The S-matrix bootstrap. Part I: QFT in AdS, JHEP (2017) 133.
[14] M. F. Paulos, J. Penedones, J. Toledo, B. C. van Rees and P. Vieira, The S-Matrix Bootstrap II: Two Dimensional Amplitudes, JHEP (2017) 143.
[15] M. F. Paulos, J. Penedones, J. Toledo, B. C. van Rees and P. Vieira, The S-Matrix Bootstrap. Part III: Higher Dimensional Amplitudes, JHEP (2019) 040.
[16] A. Homrich, J. Penedones, J. Toledo, B. C. van Rees and P. Vieira, The S-matrix Bootstrap IV: Multiple Amplitudes, JHEP (2019) 076.
[17] A. Guerrieri, J. Penedones and P. Vieira, S-matrix Bootstrap for Effective Field Theories: Massless Pions.
[18] M. Correia, A. Sever and A. Zhiboedov, An Analytical Toolkit for the S-matrix Bootstrap.
[19] J. Kupsch, Towards the Saturation of the Froissart Bound.
[20] P. Tourkine and A. Zhiboedov, Scattering from production in 4d, work in progress.
[21] K. Symanzik, The asymptotic condition and dispersion relations, in Lectures on field theory and the many-body problem (E. R. Caianiello, ed.), ch. 10, pp. 67–92. Academic Press, 1961.
[22] M. Creutz, Rigorous Bounds on Coupling Constants in Two-Dimensional Field Theories, Phys. Rev. D6 (1972) 2763–2765.
[23] G. Mussardo and P. Simon, Bosonic Type S Matrix, Vacuum Instability and CDD Ambiguities, Nucl. Phys. B (2000) 527–551, hep-th/9903072.
[24] P. Vieira, S-matrix bootstrap, in TASI 2019 school lecture notes, 2019.
[25] D. Atkinson, Introduction to the Use of Non-Linear Techniques in S-Matrix Theory, Acta Phys. Austriaca Suppl. (1970) 32–70.
[26] J. Boguta, Numerical Strategies in the Construction of Amplitudes Satisfying Unitarity, Analyticity and Crossing Symmetry. I, Nucl. Phys. B (1974) 167–188.
[27] V. Rubakov, Nonperturbative aspects of multiparticle production, 1995, hep-ph/9511236.
[28] F. Bezrukov, M. Libanov, D. Son and S. V. Troitsky, Singular classical solutions and tree multiparticle cross-sections in scalar theories, pp. 228–238, 1995, hep-ph/9512342.
[29] G. Badel, G. Cuomo, A. Monin and R. Rattazzi, The Epsilon Expansion Meets Semiclassics, JHEP (2019) 110.
[30] B. Gabai, D. Mazáč, A. Shieber, P. Vieira and Y. Zhou, No Particle Production in Two Dimensions: Recursion Relations and Multi-Regge Limit, JHEP (2019) 094.
[31] Y. He, A. Irrgang and M. Kruczenski, A note on the S-matrix bootstrap for the 2d O(N) bosonic model, JHEP (2018) 093.
[32] L. Córdova and P. Vieira, Adding flavour to the S-matrix bootstrap, JHEP (2018) 063.
[33] M. F. Paulos and Z. Zheng, Bounding scattering of charged particles in 1+1 dimensions, JHEP (2020) 145.
[34] L. Córdova, Y. He, M. Kruczenski and P. Vieira, The O(N) S-matrix Monolith, JHEP (2020) 142.
[35] S. B. Giddings and M. Srednicki, High-energy gravitational scattering and black hole resonances, Phys. Rev. D (2008) 085025, 0711.5012.