Light Curve Calculations for Triple Microlensing Systems
Renkun Kuang, Shude Mao, Tianshu Wang, Weicheng Zang, Richard J. Long
MNRAS, 1–13 (2021). Preprint 19 February 2021. Compiled using MNRAS LaTeX style file v3.0
Department of Astronomy and Tsinghua Centre for Astrophysics, Tsinghua University, Beijing 100084, China
National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China
Department of Engineering Physics, Tsinghua University, Beijing 100084, China
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
Accepted 2021 February 17. Received 2021 February 11; in original form 2020 August 27
ABSTRACT
We present a method to compute the magnification of a finite source star lensed by a triple lens system, based on the image boundary (contour integration) method. We describe a new procedure to obtain continuous image boundaries from the solutions of the tenth-order polynomial obtained from the lens equation. Contour integration is then applied to calculate the image areas within the image boundaries, which yields the magnification of a source with uniform brightness. We extend the magnification calculation to limb-darkened stars approximated with a linear profile. In principle, this method works for all multiple lens systems, not just triple lenses. We also include an adaptive sampling and interpolation method for calculating densely covered light curves. The C++ source code and a corresponding Python interface are publicly available.
Key words: gravitational microlensing – triple lens system – finite source magnification
1 INTRODUCTION

Gravitational microlensing has opened a unique window for probing extrasolar planets beyond the snowline (the minimum radius from a star at which water ice could have condensed) (Mao & Paczynski 1991; Gould & Loeb 1992; Gaudi 2012; Mao 2012; Gould 2016). Among the 132 confirmed microlens planets, 12 were detected in a triple lens event. Triple lens systems are usually categorised into two groups: a host star plus two planets, or a binary star system plus a single planet. Three two-planet systems have been firmly established (OGLE-2006-BLG-109, Gaudi et al. 2008; Bennett et al. 2010; OGLE-2012-BLG-0026, Han et al. 2013; Beaulieu et al. 2016; Madsen & Zhu 2019; OGLE-2018-BLG-1011, Han et al. 2019), along with three likely candidates (OGLE-2014-BLG-1722, Suzuki et al. 2018; OGLE-2018-BLG-0532, Ryu et al. 2019; KMT-2019-BLG-1953, Han et al. 2020b). A planet in a binary star system (i.e., a circumbinary planet) is the other triple lens case. So far, six such cases have been reported in the literature (OGLE-2006-BLG-284, Bennett et al. 2020; OGLE-2007-BLG-349, Bennett et al. 2016; OGLE-2008-BLG-092, Poleski et al. 2014b; OGLE-2013-BLG-0341, Gould et al. 2014; OGLE-2016-BLG-0613, Han et al. 2017; OGLE-2018-BLG-1700, Han et al. 2020a).

⋆ E-mail: [email protected]
Information on the confirmed extrasolar planets is from http://exoplanet.eu as of Nov 20, 2020.
The discovery rate of triple lens systems has nearly doubled since 2016, mainly owing to the inauguration of the Korea Microlensing Telescope Network (KMTNet, Kim et al. 2016) in that year. With the continuous operation of KMTNet and upcoming facilities such as EUCLID (Beaulieu et al. 2010; Bachelet & Penny 2019) and WFIRST (Penny et al. 2019), we expect to encounter more triple lens events.

Analysing microlensing events is time consuming; some binary events may even take years (Bozza 2010). The situation will be even worse when handling triple lens events because of the exponentially increasing parameter space. Thus, an efficient method is needed to model light curves for such systems. However, due to the inverse nature of solving the lens equation, and the finite source effect, the computation of the magnification by a triple lens system is challenging. Three aspects need to be addressed: solving the lens equation numerically, dealing with complex caustic structures and image topologies, and handling finite source effects including limb-darkening. Our approach is explained in the next section.

Currently, there are two main schemes for modelling triple lens events. The first is based on the "binary superposition" approximation (Han et al. 2001; Rattenbury et al. 2002; Han 2005; Han et al. 2013), while the second is based on the ray-shooting method (Kayser et al.). Our method, described in § 2, is based on calculating image areas by contour integration, but with an alternative procedure for obtaining the image boundaries. In § 3, we introduce the complex lens equation and our notations, while in § 4, we present the details of our method.

Modelling lensed light curves is achieved by setting up a representation of the lensing system and then repeatedly moving a source across the lensing system, using its magnifying effect to generate model light curves. Every model light curve is compared with the observed light curve, and the various lensing parameters are adjusted until a best-fitting curve is found. The focus of this paper is on the magnification calculation required for a limb-darkened, finite-sized source.

Real source stars have finite sizes, and this causes their light curves to be significantly modified in high magnification regions (Witt & Mao 1994; Gould 1994). Finite source effects are essential for binary and triple lens systems because they are related to the mass ratios of the lens components, and can lead to measurements of the angular Einstein radius θ_E when combined with knowledge of the angular source radius θ∗ (e.g., Yoo et al. 2004). Finite source effects are particularly important when the source crosses a caustic (where the magnification diverges to infinity) or comes close to a cusp (Witt & Mao 1994; Gould 1994; Nemiroff & Wickramasinghe 1994). In these cases, the source cannot be regarded as point-like, and the observed magnification is an average of the magnification pattern over the face of the source. The surface brightness of a star is, however, not uniform (Milne 1921). This is known as the limb-darkening effect, which also affects the microlensing magnification (Gould & Welch 1996; Gaudi & Gould 1999; Heyrovský 2003; Dominik 2005). Limb darkening needs to be considered to model precisely observed light curves, as in the first limb-darkening measurement by microlensing, for the event MACHO 1997-BLG-28 (Albrow et al. 2001).
It is thus crucial to have a reliable and efficient way to calculate the magnification of a limb-darkened, finite size source for triple lens systems.

For every position of the finite source, the lens equation has to be solved at multiple points around the source boundary so that the boundaries of the images can be determined. Gravitational lensing preserves surface brightness (Misner et al. 1973). For a source with uniform surface brightness, the magnification due to lensing is therefore equal to the ratio of the total image area to the source area. Usually, a two-dimensional image area is computed with double integrals, but these can be converted into a line integral according to Stokes' theorem. This is the contour integration technique (Schramm & Kayser 1987; Dominik 1993, 1995, 1998; Gould & Gaucherel 1997; Bozza 2010; Bozza et al. 2018) that we use in our work.

The triple lens equation has to be solved numerically. The lens equation for a single lens can be easily solved: the two image positions and magnifications can be derived analytically (Einstein 1936; Paczynski 1986). The lens equation for a binary lens is considerably more complicated, as it is no longer analytically tractable. The binary lens equation can be transformed into a fifth-order complex polynomial (Witt 1990). Skowron & Gould (2012) provided an algorithm to solve complex polynomial equations, and we use this algorithm in our modelling. It has also been used in a public package named VBBinaryLensing to calculate microlensing light curves for binary lens systems (Bozza 2010; Bozza et al. 2018). They found that ∼80% of the computer time is spent in the root finding routine (Bozza 2010). This fraction would increase for the tenth-order polynomial of a triple lens system if their method were extended to the triple lens scenario.

We choose not to use the method for obtaining image boundaries in VBBinaryLensing, as we believe it would require significant effort to extend it to the triple lens scenario. We have instead implemented an alternative strategy for determining image boundaries, which we describe in § 4.

In this section we introduce the complex lens equation and the parametrizations we use.
Using complex notation, the N point lens equation can be written as (Witt 1990)

ζ = z + f(z̄),   f(z̄) ≡ −∑_{j=1}^{N} m_j / (z̄ − z̄_j),   (1)

where ζ = y_1 + i y_2 is the source position and z = x_1 + i x_2 is the corresponding image position. m_j and z_j are the fractional mass and position of the j-th lens, with ∑_j m_j = 1; z̄ and z̄_j are the complex conjugates of z and z_j. If a point z in the lens plane satisfies the lens equation, it maps back to the source position ζ through the lens equation; in this case we call z a true image of ζ. The position coordinates ζ, z, z_j are in units of the angular Einstein radius θ_E of the lensing system,

θ_E = √( (4GM/c²) · D_ls/(D_l D_s) ),   (2)

where D_l and D_s are the angular diameter distances from the observer to the lens and to the source, D_ls is the angular diameter distance between the lens plane and the source plane, M is the total mass of the lens system, c is the speed of light, and G is the gravitational constant.

Taking the conjugate of equation (1), we obtain an expression for z̄, which one can substitute back into the original lens equation to obtain a complex polynomial of order N² + 1 in z only (Witt 1990). So the maximum number of roots (images) cannot exceed N² + 1. In fact, some of these roots are not true images: although they satisfy the complex polynomial, they do not satisfy the original lens equation. In this case, they are called false images; in Bozza (2010), such images are called ghost images. It has been shown that the true upper limit on the number of true images is 5(N − 1) (Rhie 2001, 2003; Khavinson & Neumann 2004). Notice that for N > 3, there must be false images from solving the polynomial for any given source position. We will introduce how to distinguish true and false images in § 4.

For a true image z, its magnification µ is related to the Jacobian J by

µ = J^{−1},   J = 1 − f′(z̄) · \overline{f′(z̄)} = 1 − |f′(z̄)|²,   f′(z̄) = df(z̄)/dz̄,   (3)

where f is defined in equation (1). Some image positions lead to J = 0, with the magnification µ becoming infinite. These image positions constitute "critical curves" in the lens plane, with the corresponding source positions forming "caustics" in the source plane. The total point source magnification µ_PS can be obtained by summing the absolute inverse Jacobian over each true image z_I (I = 1, · · · , N_im) corresponding to the source ζ:

µ_PS = ∑_{I=1}^{N_im} |J(z_I)|^{−1},   (4)

where N_im is the total number of true images.

We ignore the source baseline flux (the source flux before the microlensing event happens, Han 2000) and the blending flux (flux that is not physically associated with the lensed source, Di Stefano & Esin 1995; Smith et al. 2007), which are easy to model linearly.
Figure 1. Schematic diagram of a triple lens system. m_1, m_2 and m_3 are the fractional masses of the three lenses (m_1 + m_2 + m_3 = 1, m_1 ≥ m_2 ≥ m_3). The separations of m_2, m_3 relative to m_1 are labelled as s_2 and s_3; m_1 and m_2 lie on the horizontal axis, and ψ is the direction angle of m_3 relative to the horizontal axis. The source trajectory is parametrized by the trajectory angle α and the impact parameter u_0.

With this simplification, there are ten parameters in modelling a triple lens light curve: five lens parameters (s_2, q_2, s_3, q_3, ψ) and four trajectory parameters (t_0, u_0, t_E, α), plus the source radius ρ. Here ρ = θ∗/θ_E, where θ∗ and θ_E are the angular source radius and the angular Einstein ring radius of the lensing system, respectively. s_2 and q_2 are the separation and mass ratio between the first and second lenses, i.e., q_2 = m_2/m_1. Similarly, s_3 and q_3 are the separation and mass ratio between the first and third lenses. ψ is the orientation angle of the third body. t_0 is the time when the source is closest to the primary mass m_1. The impact parameter u_0 is the primary lens-source separation at t_0. s_2, s_3 and u_0 are in units of θ_E. The Einstein timescale t_E controls the duration of the event. α is the source trajectory angle with respect to the m_1–m_2 axis. We note that the number of parameters in many triple events is even higher due to non-trivial lens motion (parallax or orbital motion effects). In addition, for a finite source, limb-darkening parameters (filter-specific) are relevant too. To start, we focus on the magnification of a uniform brightness star. A graphical illustration of the triple lens system (similar to Fig. 1 of Han et al. 2017) is shown in Fig. 1.

Since most of the observed triple microlensing events have a third, low-mass body (either a second planet or a planet in a binary), we choose the origin of the coordinate system to be the centre of mass of the first two masses. Thus the conversion from (s_2, q_2, s_3, q_3, ψ) to m_j, z_j is as follows:

m_1 = 1 / (1 + q_2 + q_3),
m_2 = q_2 m_1,
m_3 = q_3 m_1 = 1 − m_1 − m_2,
z_1 = −q_2 s_2 / (1 + q_2) + 0i,
z_2 = s_2 / (1 + q_2) + 0i,
z_3 = −q_2 s_2 / (1 + q_2) + s_3 cos(ψ) + i s_3 sin(ψ).   (5)

We note that whichever coordinate system is chosen, the magnification can still be calculated using this method. The input parameters are m_j, z_j, ρ, and ζ, and the output is the magnification at ζ.
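Equation (5) can be sketched directly in Python (the function name and the parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

def lens_params(s2, q2, s3, q3, psi):
    """Eq. (5): convert (s2, q2, s3, q3, psi) to fractional masses m_j and
    complex lens positions z_j, with the origin at the centre of mass of the
    first two lenses and m1, m2 on the real axis."""
    m1 = 1.0 / (1.0 + q2 + q3)
    m2 = q2 * m1
    m3 = q3 * m1                      # = 1 - m1 - m2
    z1 = -q2 * s2 / (1.0 + q2) + 0j
    z2 = s2 / (1.0 + q2) + 0j
    z3 = z1 + s3 * np.exp(1j * psi)   # third body offset s3 e^{i psi} from m1
    return np.array([m1, m2, m3]), np.array([z1, z2, z3])

# Illustrative (arbitrary) parameter values.
m, z = lens_params(s2=1.2, q2=0.01, s3=0.8, q3=3e-4, psi=1.0)
```

By construction m_1 z_1 + m_2 z_2 = 0 (centre-of-mass origin), |z_2 − z_1| = s_2 and |z_3 − z_1| = s_3, which is a convenient self-check on the conversion.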
For a circular source with radius ρ, centred at ζ^(c) = y_1^(c) + i y_2^(c), its boundary can be represented as

ζ(θ) = ζ^(c) + ρ e^{iθ},   θ ∈ [0, 2π].   (6)

In practice, we approximate the circular source boundary by a polygon with n distinct vertices ζ(θ_k), where θ_0 < θ_1 < · · · < θ_k < · · · < θ_n = θ_0 + 2π. The images of the source are distorted by the lens system (see Fig. 3). The θ_k are not necessarily sampled at equal intervals. For each θ_k, we need to solve the corresponding lens equation to obtain both true and false images, and attach them to image tracks. Then we pick out the true image segments to obtain the true image boundaries. Finally, we can apply Stokes' theorem to obtain the enclosed area of these image boundaries.

Before applying Stokes' theorem to calculate the magnification of a uniform brightness star, one needs to obtain continuous image boundaries. We also discuss when the point source magnification µ_PS is sufficient to approximate the magnification µ_FS of a uniform brightness star. The flowchart in Fig. 2 shows the major stages we execute. Once we can calculate the magnification of a uniform brightness star, we can model limb-darkened light curves by regarding the source star as a set of annuli weighted by the radial limb-darkening profile.

We first illustrate the topology of both true and false image boundaries for a circular source lensed by a triple system in Fig. 3. We adopt the parameters of the triple lens solution "Sol C (wide)" of the microlensing event OGLE-2016-BLG-0613 (Han et al. 2017, their Table 3); to visualise the image boundaries better, we enlarge the source radius to ρ = 0.1. Two source centres are shown. In the right panel, the two primary image boundaries are nested, forming a "ring-like" image, which needs special care when calculating the enclosed image areas.

In Fig. 3, the blue curves are true image boundaries, and the grey curves are false image boundaries arising from the tenth-order polynomial. In the left panel, there are four true image boundaries (three large ones plus a small one near the third lens mass, in blue) and six false image boundaries (three large ones and a small one near the second mass, plus two small ones near the third mass, in grey). In the right panel, there are four true image boundaries (two large ones forming the ring structure, and two smaller ones close to the second and third lens masses) and four false image boundaries (two large mushroom-like images and two small circular boundaries close to the second and third lens masses).

Such topological figures merely give a preliminary impression of the shape and configuration of the image boundaries. The plotted data are created by solving the lens equation to give points along the boundary. Quantitative information, i.e., the enclosed areas, cannot be calculated using line integrals until these points are connected in order (clockwise or anti-clockwise).
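The boundary sampling of eq. (6) is straightforward; a minimal sketch (uniform spacing, function name ours):

```python
import numpy as np

def source_boundary(zeta_c, rho, n):
    """Sample n + 1 boundary points zeta(theta_k) = zeta_c + rho e^{i theta_k}
    (eq. 6) with theta_n = theta_0 + 2*pi, so that the first and last vertices
    coincide. Uniform spacing is used here, although the theta_k need not be
    equally spaced."""
    theta = np.linspace(0.0, 2.0 * np.pi, n + 1)
    return zeta_c + rho * np.exp(1j * theta)

pts = source_boundary(0.1 + 0.05j, rho=0.1, n=256)
```

Each of these n + 1 source positions is then fed to the polynomial solver, producing the candidate image points that the following subsections assemble into boundaries.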
We use the configuration in the left panel of Fig. 3 as an example. We use N² + 1 linked lists to store the images obtained from solving the lens equations, where N is the number of lenses. At each source position ζ(θ_k), k = 0, 1, · · · , n, we solve the corresponding polynomial lens equation, which generates N² + 1 solutions. Each solution is attached to a linked list, depending on its distance from the tails of the linked lists.

After the above process, we obtain N² + 1 linked lists, which store the image points. Each linked list contains the same number of points. As Fig. 4 shows, points in different linked lists are plotted with different colours, and the head point of the i-th linked list is labelled as H{i}. We add arrows to indicate the direction in which points are linked from the head to the tail of a linked list. For a triple lens, i.e., N = 3, there are ten linked lists. In Fig. 4, we show only a subset of those ten linked lists to help visualisation.

Connecting points at this stage is just a preliminary procedure, since it is not guaranteed that every point will be attached to the right place. Some linked lists contain only true images (e.g., the 9-th list in Fig. 4), some contain only false images (e.g., the 8-th list in Fig. 4), while others may contain both. We then cut the N² + 1 linked lists obtained previously into several image segments, as shown in Fig. 5. Notice that the number of resulting segments is not necessarily N² + 1 anymore. Different segments are labelled as S{i}. Arrows indicate the direction in which points are linked in each segment from head to tail. The colour of each segment is inherited from the original linked list in Fig. 4.
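The preliminary point-attachment step can be sketched as follows; `attach_to_tracks` is an illustrative name, and the greedy nearest-tail rule is a simplification of the actual bookkeeping:

```python
import numpy as np

def attach_to_tracks(solutions_per_step):
    """Preliminary point attachment: for each sampled theta_k the polynomial
    solver returns a set of roots in arbitrary order; append each root to the
    track whose current tail is nearest (a greedy simplification)."""
    tracks = [[z] for z in solutions_per_step[0]]
    for roots in solutions_per_step[1:]:
        remaining = list(roots)
        for track in tracks:
            tail = track[-1]
            i = min(range(len(remaining)), key=lambda j: abs(remaining[j] - tail))
            track.append(remaining.pop(i))
    return tracks

# Two synthetic, well-separated image tracks, delivered step by step.
thetas = np.linspace(0.0, np.pi, 100)
steps = [np.array([5.0 + 2.0 * np.exp(-1j * t), np.exp(1j * t)]) for t in thetas]
tracks = attach_to_tracks(steps)
```

With dense enough sampling, consecutive points of the same physical image track are much closer to each other than to any other track, which is what makes this distance-based attachment work in the first place.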
Figure 2. A flowchart describing our procedure to calculate the magnification of a uniform brightness star. The three procedures along the bottom of the flowchart (solve the lens equations, remove false images, and connect segments) are described in detail in § 4.

Figure 3. Illustrations of image boundary topologies formed by the roots of the complex polynomial for a triple lens system. Parameters of this system are introduced in § 4, with ρ = 0.1.

Figure 4. Illustrations of the N² + 1 = 10 linked lists after triple lens equation solving and preliminary point-connecting. Parameters are the same as in the left panel of Fig. 3. Points in different linked lists are plotted with different colours, and the head point of the i-th linked list is labelled as H{i}. Arrows indicate the direction in which points are linked inside each linked list, from head to tail. We show only part of all ten linked lists to help visualisation.

Figure 5. Demonstration of true image solution segments after removing the false image points in Fig. 4. Different segments are labelled with S{i}. The colour of each segment is inherited from its original linked list in Fig. 4. Arrows indicate the direction in which points are linked from the head to the tail of each individual segment.

The last step before applying Stokes' theorem is to connect those image segments into closed continuous image boundaries. To do this, we first check whether any given segment is closed by evaluating the distance d between the head and tail of the segment against a small threshold (explained in more detail in § 4). If d falls below the threshold, this completes our task for the current segment (see segment "S7" in Fig. 5). We move to the next segment until all segments are processed. If the current segment is not closed, we first check whether its tail connects with the head or tail of other image segments by checking whether they have identical image positions. If not, we also conduct the check for the head of the current segment (see the segments "S0" and "S1" in Fig. 5; the head of "S0" is connected with the tail of "S1"). Another possibility to connect two segments occurs when the image boundary crosses the critical curve (see the tails of "S0" and "S2" in Fig. 5). In this case, one must "jump" over the critical curve to connect a close pair of images. The condition is that the close image pair must have the same source position, and their magnifications must have comparable absolute values but opposite signs. In practice, the "connecting" procedure is repeated until all the image segments are closed. Once the connecting procedure is complete, we have obtained closed image boundaries from the previously un-closed true solution segments. Finally, we can move on to calculate the enclosed area using a line integral.

For a triple lens system (N = 3), the lens equation can be converted into a tenth-order complex polynomial (Rhie 2002), which can be solved numerically (Skowron & Gould 2012), yielding ten (complex) roots. In most cases, however, not all of these are true images of the original lens equation, i.e., eq. (1).
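The segment-gluing step described above can be sketched as follows (`connect_segments` is an illustrative name; the critical-curve "jump" test on source positions and magnification signs is omitted here):

```python
import numpy as np

def connect_segments(segments, eps=1e-10):
    """Glue open segments (lists of complex points) whose endpoints coincide
    within eps, reversing a segment when tail meets tail or head meets head,
    until every boundary is closed (head == tail)."""
    closed, open_ = [], [list(s) for s in segments]
    while open_:
        cur = open_.pop(0)
        while abs(cur[0] - cur[-1]) > eps:
            for i, seg in enumerate(open_):
                if abs(cur[-1] - seg[0]) <= eps:    # cur tail meets seg head
                    cur += open_.pop(i)[1:]; break
                if abs(cur[-1] - seg[-1]) <= eps:   # cur tail meets seg tail
                    cur += open_.pop(i)[::-1][1:]; break
                if abs(cur[0] - seg[-1]) <= eps:    # seg tail meets cur head
                    cur = open_.pop(i)[:-1] + cur; break
                if abs(cur[0] - seg[0]) <= eps:     # seg head meets cur head
                    cur = open_.pop(i)[::-1][:-1] + cur; break
            else:
                break  # no match: would need a critical-curve "jump"
        closed.append(cur)
    return closed

# A unit circle chopped into three pieces (one reversed) reassembles.
pts = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 91))
segs = [list(pts[0:31]), list(pts[60:91]), list(pts[30:61])[::-1]]
loops = connect_segments(segs)
```

In the real procedure the endpoint test is supplemented by comparing the corresponding source positions, and by the magnification-sign condition when jumping over a critical curve.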
In the following, we discuss in more detail how to check whether a root from the complex polynomial root solver is a true root of the original lens equation. Numerically, the complex root solver of Skowron & Gould (2012) locates roots one by one. At each step, they deflate the original polynomial by the found root, then proceed to search for the next root. The deflation process introduces numerical noise, so they also conduct a "root polishing" procedure. This involves taking each root as the initial guess in Newton's (or Laguerre's) method to find a more accurate root of the full polynomial.

The roots coming from the polynomial solver are not necessarily true images of the original lens equation. For a given source position ζ, theoretically a true solution z should satisfy δ = |ζ − z − f(z̄)| = 0. Nevertheless, due to numerical noise, in practice δ is not strictly zero even for true images. We found that, in general, true images correspond to much smaller δ than false images, with a clear separation between the two in terms of δ. We therefore set a fixed criterion between the two ranges: an image is accepted as true if its δ falls below that threshold.

We note that this threshold is not valid in all cases. When the source is just outside cusps or folds, there will be "nearly true" false images; alternatively, if the source is just inside cusps or folds, there will be "nearly false" true images. Skowron & Gould (2012) implemented their algorithm in double precision and found that, for fifth-order polynomials, there is a limiting precision below which very close roots cannot be separated, for both two close roots (near a fold caustic) and three very close roots (near a cusp).

The situations discussed above do not affect our overall method for the following reasons.
• Only a small fraction of the source boundary will experience caustic crossing. The probability that discretely sampled points happen to be very close to the caustics is low; thus only a few points, or none, will experience this ambiguity.
• Even if we miss several true images, we can still link the points into image boundaries using the procedures introduced in § 4.
• Even if we include some false image points, since they are close to the true images, they do not affect the final area significantly.

An animation showing how closed image boundaries are linked can be visualised at https://github.com/rkkuang/triplelens, together with the source code and documentation.

In § 4 we check whether segments are connected using the distance d between their heads or tails, or the distance between the corresponding source positions. We now elaborate further on why we choose this distance threshold; it relies on several numerical observations. The first is that although we use n distinct points to approximate the source boundary, there are actually n + 1 points available in the code, with θ_n = θ_0 + 2π. These two source positions, ζ(θ_0) and ζ(θ_n), correspond to exactly the same image points, and they usually correspond to the head and tail of a linked list; the distance d between the head and the tail is in this case exactly zero. The second is that when two segments happen to be separated by the critical curves, the segments' heads or tails correspond to exactly the same source positions. Finally, in our method, we use this connection check only a few times: it is applied after the linked lists are linked and the false images are removed. At that point several segments are left. They correspond either to individual boundaries (e.g., "S7" in Fig. 5), or they are connected at the same points (e.g., "S0" and "S1" in Fig. 5), or they are "jumping" over critical curves (e.g., "S6" and "S3" in Fig. 5).

It may seem that the threshold is too large when there are multiple close images, but we note that in a previous step the procedure has already separated the different segments from each other. Additionally, image segments which belong to different image boundaries are in general well separated, or at least their heads and tails are not in the same place. By checking the head-tail distance, we will not mix different segments together if they do not belong to the same image boundary. One could set a stricter threshold, although for a typical source radius in microlensing events the default is already sufficient. It is also possible to check for segment connectivity by looking for the nearest loose end (head or tail) of a segment. However, when the sampled points are not dense enough, the segments obtained in the previous step may be incorrectly linked (especially when two image boundaries are very close to each other, or when a ring-like structure is formed), and the calculated image areas could then be wrong. Taking all these points into account, we choose to use the distance criterion.

Previously published papers use different ways to obtain the image boundaries. Gould & Gaucherel (1997), which introduced the contour integration method, included a method for constructing continuous image boundaries. To avoid inverting the lens equation, Schramm & Kayser (1987) proposed the contour plot method to find the image positions. This contour plot method was improved by Dominik (1995). By constructing a squared deviation function and plotting its contours, the image boundaries corresponding to a circular source can be obtained. The overall image information, such as image numbers and shapes, is encoded in the contour plot of the squared deviation function. This method has a limitation: it provides plots, but not precise numerical values for the image boundaries. To find more precise image contours, one needs high density sampling in the lens plane.

Dominik (1993, 1995) further developed this idea into a contour-plot-and-correct method. The contour plot data can be stored in purpose-designed structures, which can potentially be analysed to obtain microlensing light curves. Dominik (1998) used the contour plot method to obtain image boundaries and then applied Stokes' theorem to calculate the image areas. In a further development, Dominik (2007) proposed an adaptive contouring algorithm to determine the image contours.

Later works used numerical algorithms to solve the lens equation. In Bozza (2010), he introduced several error estimators which enable adaptive sampling on the source boundary. He starts with two points on the source boundary, and then inserts new points one by one, between the pair of points which has the largest error estimate. This strategy allows efficient sampling near caustics, and accurate image areas can be obtained with a minimum number of calculations. This method has been developed into the widely used VBBinaryLensing package (Bozza et al. 2018), which has been integrated into microlensing event modelling Python packages such as pyLIMA (Bachelet et al. 2017) and MulensModel (Poleski & Yee 2019).

The caustic structures in triple lenses are much more intricate (Daněk & Heyrovský 2015, 2019) than those of binary lenses, resulting in more complicated image topologies and degeneracies (Song et al. 2014). This poses a challenge for obtaining the area of highly distorted images of a source star. Bozza's strategy would require effort to extend it to the triple lens scenario.
As a consequence, we have designed and imple-mented a different approach to determine image boundaries(see § Given any continuous image boundaries, (cid:110) z ( k ) = x ( k )1 + i x ( k )2 (cid:111) , k = 0 , , · · · , n , where the firstand last points are identical, the enclosed area can becalculated as A = 12 n (cid:88) k =1 ( x ( k )2 + x ( k − )( x ( k )1 − x ( k − ) , (7) MNRAS000
Figure 5. Demonstration of true image solution segments after removing false image points in Fig. 4. Different segments are labelled with S{i}. The colour of each segment is inherited from their original linked list in Fig. 4. Arrows indicate the direction in which points are linked from the head to the tail of each individual segment.

We check whether a segment is closed by evaluating the distance d between the head and tail of the segment (we choose a threshold of 10−, which is explained in more detail in §). If d ≤ 10−, this completes our task for the current segment (see segment "S7" in Fig. 5), and we move to the next segment until all segments are processed. If the current segment is not closed, we first check whether its tail connects with the head or tail of another image segment, by checking whether they have identical image positions; if not, we conduct the same check for the head of the current segment (see segments "S0" and "S1" in Fig. 5: the head of "S0" is connected with the tail of "S1"). Another possibility to connect two segments occurs when the image boundary crosses the critical curve (see the tail of "S0" and "S2" in Fig. 5). In this case, one must "jump" over the critical curve to connect a close pair of images; the condition is that the close image pair must have the same source position, and their magnifications must have comparable absolute values but opposite signs. In practice, the connecting procedure is repeated until all the image segments are closed.

Once the connecting procedure is complete, we obtain closed image boundaries from the previously un-closed true solution segments. Finally, we can move on to calculating the enclosed areas using a line integral.

For a triple lens system (N = 3), the lens equation can be converted into a tenth-order complex polynomial (Rhie 2002), which can be solved numerically (Skowron & Gould 2012), yielding ten (complex) roots. In most cases, however, not all of these are true images of the original lens equation, i.e., eq. (1).
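The true-image check discussed in the following can be illustrated on the simpler binary lens case, where the lens equation converts to a fifth-order polynomial (Witt 1990). This is an independent sketch, not the paper's C++ implementation, and the lens parameters and source position are arbitrary choices:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Binary lens: masses m1, m2 at real positions a1, a2 (Einstein units).
m1, m2, a1, a2 = 0.8, 0.2, -0.5, 0.5
zeta = 0.1 + 0.9j                     # source position, away from the caustics

# The conjugate lens equation gives zbar = N(z)/D(z) as a rational function:
# D = (z - a1)(z - a2),  N = conj(zeta)*D + m1*(z - a2) + m2*(z - a1).
D = P.polymul([-a1, 1.0], [-a2, 1.0])
N = P.polyadd(np.conj(zeta) * D,
              P.polyadd(m1 * np.array([-a2, 1.0]), m2 * np.array([-a1, 1.0])))

# Substituting zbar into zeta = z - m1/(zbar - a1) - m2/(zbar - a2) and
# clearing denominators yields a fifth-order polynomial in z.
A = P.polysub(N, a1 * D)
B = P.polysub(N, a2 * D)
poly = P.polyadd(P.polymul(P.polysub([zeta], [0.0, 1.0]), P.polymul(A, B)),
                 P.polyadd(m1 * P.polymul(D, B), m2 * P.polymul(D, A)))
roots = P.polyroots(poly)

def delta(z):
    """Residual of the original lens equation at a candidate image z."""
    return abs(zeta - z + m1 / (np.conj(z) - a1) + m2 / (np.conj(z) - a2))

# Keep only candidates whose residual falls below the threshold.
true_images = [z for z in roots if delta(z) < 1e-6]
```

For a source well outside the caustics this yields three true images out of the five polynomial roots, with a clear gap in δ between the true and false candidates.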
In the following, we discuss in more detail how to check whether a root from the complex polynomial roots solver is a true root of the original lens equation. Numerically, the complex roots solver of Skowron & Gould (2012) locates roots one by one. At each step, they deflate the original polynomial by the root found, then proceed to search for the next root. The deflation process introduces numerical noise, so they also conduct a "root polishing" procedure: each root is taken as the initial guess in Newton's (or Laguerre's) method to find a more accurate root of the full polynomial.

The roots coming from the polynomial solver are not necessarily true images of the original lens equation. For a given source position ζ, theoretically a true solution z should satisfy δ = |ζ − z − f(z)| = 0. Nevertheless, due to numerical noise, in practice δ is not strictly zero even for true images. We found that, in general, true images correspond to δ ∼ 10− to 10−, while false images lead to δ ∼ 10− to 10. There is usually a clear separation between true and false images in terms of δ. We set the criterion to be 10−, i.e., an image is true if it satisfies δ < 10−.

(An animation showing how closed image boundaries are linked can be visualised at https://github.com/rkkuang/triplelens, together with the source code and documentation.)

We note that the 10− criterion is not valid in all cases. When the source is just outside cusps or folds, there will be "nearly true" false images; alternatively, when the source is just inside cusps or folds, there will be "nearly false" true images. Skowron & Gould (2012) implemented their algorithm in double precision. They found that for fifth-order polynomials, the limiting precision for very close roots is ∼10− for two close roots (near a fold caustic), and ∼10−
for three very close roots (near a cusp).

The situations discussed above do not affect our overall method for the following reasons:
• Only a small fraction of the source boundary will experience caustic crossing. The probability that discretely sampled points happen to be very close to the caustics is low; thus only a few points, or none, will experience this ambiguity.
• Even if we miss several true images, we can still link the points into image boundaries using the procedures introduced in §.
• If we include some false image points, since they are close to the true images, they will not affect the final area significantly.

In §, we check the distance d between segments' heads or tails, or the distance between the corresponding source positions. We now elaborate further on why we choose a threshold of 10−. It relies on several numerical observations. The first is that although we use n different points to approximate the source boundary, there are actually n + 1 points to use in the code, with θ_n = θ_0 + 2π. The two source positions ζ(θ_0) and ζ(θ_n) correspond to exactly the same image points, and they usually correspond to the head and tail of a linked list; thus the distance d between the head and the tail is in this case exactly zero. The second is that when two segments happen to be separated by the critical curves, the segments' heads or tails correspond to exactly the same source positions. Finally, in our method, we use this connection check only a few times: it is applied after the linked lists are linked and the false images have been removed. At that point, several segments are left. They correspond either to individual boundaries (e.g., "S7" in Fig. 5), or they are connected at the same points (e.g., "S0" and "S1" in Fig. 5), or they "jump" over critical curves (e.g., "S6" and "S3" in Fig.
5). It may seem that the threshold 10− is too large when there are multiple close images, but we note that in a previous step the procedure has already separated different segments from each other. Additionally, in general, image segments which belong to different image boundaries are well separated, or at least their heads and tails are not in the same place. By checking the head-tail distance, we will not mix different segments together if they do not belong to the same image boundary. One could set a stricter threshold, e.g., 10− (although for a typical source radius in microlensing events, 10− is already sufficient). It is also possible to check for segment connectivity by looking for the nearest loose end (head or tail) of a segment. However, when the sampled points are not dense enough, the segments obtained in the previous step may be incorrectly linked (especially when two image boundaries are very close to each other, or when a ring-like structure is formed), and the image areas calculated could then be wrong. Taking all these points into account, we choose to use the distance criterion.

Previously published papers use different ways to obtain the image boundaries. Gould & Gaucherel (1997), who introduced the contour integration method, included a method for constructing continuous image boundaries. To avoid inverting the lens equation, Schramm & Kayser (1987) proposed the contour plot method to find the image positions. This contour plot method was improved by Dominik (1995). By constructing a squared deviation function and plotting its contour, the image boundaries corresponding to a circular source can be obtained. The overall image information, such as image numbers and shapes, is encoded in the contour plot of the squared deviation function. This method has a limitation: it only provides plots, not precise numerical values for the image boundaries.
To find more precise image contours, one needs high density sampling in the lens plane. Dominik (1993, 1995) further promoted this idea into a contour-plot-and-correct method. The contour plot data can be stored in purpose-designed structures, which can potentially be analysed to obtain microlensing light curves. Dominik (1998) used the contour plot method to obtain image boundaries and then applied Stokes's theorem to calculate the image areas. In a further development, Dominik (2007) proposed an adaptive contouring algorithm to determine the image contour.

Later works used numerical algorithms to solve the lens equation. Bozza (2010) introduced several error estimators which enable adaptive sampling on the source boundary. He starts with two points on the source boundary, and then inserts new points one by one, between the pair of points which has the largest error estimate. This strategy allows efficient sampling near caustics: accurate image areas can be obtained with a minimum number of calculations. This method has been developed into the widely used VBBinaryLensing package (Bozza et al. 2018), which has been integrated into microlensing event modelling Python packages such as pyLIMA (Bachelet et al. 2017) and MulensModel (Poleski & Yee 2019).

The caustic structures of triple lenses are much more intricate (Daněk & Heyrovský 2015, 2019) than those of binary lenses, resulting in more complicated image topologies and degeneracies (Song et al. 2014). This poses a challenge for obtaining the area of highly distorted images of a source star. Bozza's strategy would require effort to extend to the triple lens scenario.
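Bozza's insertion strategy can be sketched with a priority queue: always split the interval with the largest error estimate. The following is only a schematic illustration of the idea, not VBBinaryLensing's actual code; the error estimator here is a stand-in (the deviation of the interval midpoint from the chord), and the function name is invented:

```python
import heapq, math

def sample_boundary(f, n_max, a=0.0, b=2.0 * math.pi):
    """Adaptively sample f on [a, b]: repeatedly split the interval with
    the largest error estimate (midpoint deviation from the chord)."""
    def err(x0, x1):
        xm = 0.5 * (x0 + x1)
        return abs(f(xm) - 0.5 * (f(x0) + f(x1))), xm

    pts = {a: f(a), b: f(b)}
    e, xm = err(a, b)
    heap = [(-e, a, b, xm)]               # max-heap via negated error
    while len(pts) < n_max and heap:
        _, x0, x1, xm = heapq.heappop(heap)
        pts[xm] = f(xm)                   # insert the worst midpoint
        for lo, hi in ((x0, xm), (xm, x1)):
            e2, mid = err(lo, hi)
            heapq.heappush(heap, (-e2, lo, hi, mid))
    xs = sorted(pts)
    return xs, [pts[x] for x in xs]
```

With a fixed budget of points, sampling automatically concentrates where the sampled quantity varies rapidly, which is the property that makes this strategy efficient near caustics.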
As a consequence, we have designed and implemented a different approach to determine image boundaries (see §).

Given any continuous image boundary, \{ z^{(k)} = x_1^{(k)} + i x_2^{(k)} \}, k = 0, 1, \cdots, n, where the first and last points are identical, the enclosed area can be calculated as

A = \frac{1}{2} \sum_{k=1}^{n} \left( x_2^{(k)} + x_2^{(k-1)} \right) \left( x_1^{(k)} - x_1^{(k-1)} \right), (7)
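Equation (7) is the discrete form of the Stokes-theorem area integral and is straightforward to implement; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def contour_area(z):
    """Signed area enclosed by a closed polygonal image boundary, given
    as complex points z[0..n] with z[0] == z[-1] (equation 7). The sign
    encodes the orientation in which the boundary is traced."""
    x1, x2 = np.real(z), np.imag(z)
    return 0.5 * np.sum((x2[1:] + x2[:-1]) * (x1[1:] - x1[:-1]))
```

Summing the areas of all image boundaries (with appropriate signs for nested ones) and dividing by the source area πρ² then gives the magnification of a uniform source.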
Equivalently, Dominik (1998) gives a more symmetrical expression,

A = \frac{1}{4} \sum_{k=1}^{n} \left[ \left( x_2^{(k)} + x_2^{(k-1)} \right) \left( x_1^{(k)} - x_1^{(k-1)} \right) - \left( x_1^{(k)} + x_1^{(k-1)} \right) \left( x_2^{(k)} - x_2^{(k-1)} \right) \right]. (8)

If there are no nested image boundaries, the magnification is then simply the total area of all the image boundaries divided by the source area. However, there will be nested images in some cases. We handle this by assigning each image boundary object a "parity" attribute with value +1 or −1, so that the magnification is the parity-weighted sum of the enclosed areas divided by the source area πρ².

The simplest scheme is to start with an approximation of the source limb using, e.g., n = 256 uniformly sampled points, and to find the finite source magnification. We can then double n and compare the change in the magnification, iterating until the relative change is smaller than a preset accuracy, e.g., ε = 10−. However, uniform sampling of the source boundary does not take care of special places on the source boundary, for example where the source straddles the caustics; these places need denser sampling. So we first uniformly sample, e.g., n = 45 different points on the source boundary, i.e., the k-th point ζ(θ_k) corresponding to an angle θ_k = 2πk/n, with θ_0 = 0 and θ_n = 2π. For each θ_k, we compute the point source magnification μ_PS(ζ(θ_k)), which controls the density of points to be sampled around ζ(θ_k). In this way, we obtain an initial sample on the source boundary which takes special care around high magnification places.

Since there is no fundamental difference between binary and triple lens systems, we adopt the quadrupole test introduced in Bozza et al. (2018) to detect whether the source star is close to the caustics, and to decide whether it is necessary to use the finite source computation. If it is not necessary, we use the point source magnification μ_PS as an approximation. The finite source magnification of a uniform brightness source can be expanded as (Pejcha & Heyrovský 2009)

\mu_{FS} = \mu_{PS} + \frac{\rho^2}{8} \Delta \mu_{PS} + \frac{\rho^4}{192} \Delta^2 \mu_{PS} + O(\rho^6), (9)

where ∆ = ∂²/∂x² + ∂²/∂y² is the Laplacian and ∆² = ∆∆ is the biharmonic operator. The quadrupole term in equation (9) for each image z_I can be written as (Bozza 2018)

\mu_Q^I = -\frac{2\,\mathrm{Re}\left[ 3 \bar{f}'^3 f''^2 \right] - \left( 3 - 3J + J^2/2 \right) |f''|^2 + J \bar{f}'^2 f'''}{2 J^5}\, \rho^2, (10)

where f'(z) = df/dz, f''(z) = d²f/dz², f'''(z) = d³f/dz³, and J is the Jacobian determinant of the lens mapping. To detect cusp caustics, Bozza et al. (2018) also constructed an error estimator,

\mu_C^I = \frac{6\,\mathrm{Im}\left[ 3 \bar{f}'^3 f''^2 \right]}{J^5}\, \rho^2. (11)

Thus the condition in the quadrupole test can be written as

\sum_I c_Q \left( |\mu_Q^I| + |\mu_C^I| \right) < \delta, (12)

where c_Q and δ are chosen empirically so as to make sure there is enough safety margin; in our code, c_Q = 1 and δ = 10− ∼ 10−, similar to the values chosen in Bozza et al. (2018).

In practice, precise modelling of observed light curves needs to include limb-darkening. The linear profile is a reasonable approximation to the limb-darkening of most stars (Milne 1921),

I(r) = \bar{I} f(r), \qquad f(r) = \frac{3}{3-u} \left[ 1 - u \left( 1 - \sqrt{1-r^2} \right) \right], (13)

where r = ρ_i/ρ is the fractional radius at a certain radius ρ_i relative to the source radius ρ, \bar{I} is the average surface brightness, and u is the limb-darkening coefficient. It is related to the Γ-convention limb-darkening law (An et al. 2002),

I(\vartheta) = \bar{I} \left[ (1-\Gamma) + \frac{3\Gamma}{2} \cos\vartheta \right], (14)

by u = 3Γ/(2+Γ) and r = sin ϑ, where ϑ is the emergent angle and 0 ≤ Γ ≤ 1. The cumulative profile,

F(r) = 2 \int_0^r x f(x)\, dx, (15)

gives the fraction of the total flux enclosed within the fractional radius r.

We note that Dominik (1998) introduced a different method to calculate the magnification of a limb-darkened source star, involving two-dimensional numerical integration.

We show several examples of light curves; the triple lens parameters (other than ρ) are the same as in §.
Figure 6. The dashed magenta line shows the source trajectory for which the light curves are shown in Fig. 7, for four source sizes indicated by three circles and a dot (for a point source) at the upper right. The red curve shows the caustics.

The resulting light curves are shown in Fig. 7 for four source sizes. As the source size increases, the magnification peaks are increasingly smoothed out compared to the point source case, and the number of magnification peaks may differ for different source sizes.

We show an example of a light curve of a limb-darkened source in Fig. 8. The source radius is ρ = 0.01 and the limb-darkening coefficient is Γ = 0.51. To test our method, we compare our results with those from ray-shooting. The rays are uniformly generated in the lens plane, with 1.× rays per θ; thus, without lensing, there will be ∼× rays inside the source boundary. We make comparisons both with and without limb-darkening. The top panel of Fig. 8 shows two light curves calculated using our method for the uniform brightness (blue) and limb-darkened (red) cases. The second and third panels show the relative error of the magnification from our method (μ_ours) with respect to the result from ray-shooting (μ_ray) for the uniform brightness and limb-darkened cases, respectively. In both cases, the relative errors are ∼×−. The bottom panel shows the residual of the magnification of the limb-darkened star relative to that of the uniform brightness star from ray-shooting, i.e., (μ_limb − μ_uni)/μ_uni, which shows that the limb-darkening deviation mainly occurs during caustic crossings, for example during the caustic entrance (HJD − ∼ ∼ ∼).

Computing the limb-darkened light curve takes ∼20 CPU minutes. This is due to the need to use 15-30 annuli to reduce the modelling error of limb-darkened light curves to ∼×−. In practice, the computing speed can be adjusted by changing the accuracy goal, and by avoiding unnecessary calculations, e.g., during HJD − −.

Since the strategy we adopt to model limb-darkened light curves is based on the magnification calculation of uniform brightness stars, we compare the magnification map from our method with both ray-shooting and VBBinaryLensing results for a uniform brightness star.
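A limb-darkened magnification built from uniform-brightness calculations over concentric annuli, weighted with the cumulative profile F(r) of equation (15), can be sketched as follows. This is an illustration of the idea only: the uniform annuli placement and the callable `mu_uniform` are assumptions, not the paper's exact scheme.

```python
import numpy as np

def f_linear(r, u):
    """Linear limb-darkening profile, equation (13)."""
    return 3.0 / (3.0 - u) * (1.0 - u * (1.0 - np.sqrt(1.0 - r ** 2)))

def F_cumulative(r, u):
    """Cumulative profile F(r) = 2 int_0^r x f(x) dx (equation 15),
    in closed form for the linear profile; F(1) = 1."""
    return 3.0 / (3.0 - u) * (
        r ** 2 - u * (r ** 2 - 2.0 / 3.0 * (1.0 - (1.0 - r ** 2) ** 1.5)))

def limb_darkened_mag(mu_uniform, rho, u, n_annuli=20):
    """Combine uniform-disc magnifications mu_uniform(radius) of nested
    discs into a limb-darkened magnification: each annulus is weighted
    by its mean surface brightness from the cumulative profile."""
    r = np.linspace(0.0, 1.0, n_annuli + 1)   # fractional annulus edges
    mu = 0.0
    for r0, r1 in zip(r[:-1], r[1:]):
        w = F_cumulative(r1, u) - F_cumulative(r0, u)   # flux fraction
        # image areas of the nested uniform discs, in units of pi*rho^2
        a1 = mu_uniform(r1 * rho) * r1 ** 2
        a0 = mu_uniform(r0 * rho) * r0 ** 2
        mu += w * (a1 - a0) / (r1 ** 2 - r0 ** 2)
    return mu
```

Two quick sanity checks of this construction: for a uniform star (u = 0) the sum telescopes exactly to mu_uniform(rho), and for a constant magnification it returns that constant for any u.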
We show one magnification map generated using our method, and compare it with one generated using ray-shooting to test the accuracy of our computation. The triple lens parameters (other than ρ) are the same as in §; ρ = 0.01. The number density of rays shot is 2.× per θ. The left panel of Fig. 9 shows the magnification map generated by our method. The right panel of Fig. 9 shows the relative error of the magnification compared with the ray-shooting result, which is of the order of 10−, with the maximum relative error of the magnification (absolute value) being 5.×−.

In generating our magnification maps, obtaining the polynomial coefficients from the lens equation costs 24% of the computation time, solving the complex polynomials 59%, the initial sampling on the source boundary 4.7%, and checking whether a solution is true 2.7%. The rest (9.6%) of the computation time is mainly spent on obtaining continuous image boundaries.

We have applied our code to the binary lens case, and compared it with the VBBinaryLensing package (Bozza et al. 2018). We used MulensModel to calculate the full finite source magnification. To obtain a suitable run for comparison purposes, the accuracy goal (VBBL.Tol) is set to 10−. We choose the binary lens separation s = 0.8, mass ratio q = 0.1, and source radius ρ = 0.01. The results are shown in Fig. 10. The left panel shows the magnification map generated by our method, and the right panel shows the relative error of the magnification compared with the result from the VBBinaryLensing package. We note that, although both software packages use contour integration to obtain the image areas, this does not imply that the resultant magnification maps will be exactly the same. Although both use polygons to approximate the source boundary as well as the image boundaries, the sampling strategies are different. In addition, the stopping criterion of Bozza's algorithm is more optimized, being controlled by error estimators. Even though the results are different, the maximum relative error is 3.×−. Overall, we find our magnification calculation code to be slower by a factor of ∼15 compared to Bozza's package for binary lens systems. This is mostly due to the more efficient sampling of points on the limb of the source star according to error estimators in their package.

https://github.com/rpoleski/MulensModel
Figure 7. Light curves corresponding to the trajectory indicated in Fig. 6 for four source sizes, ρ = 0, 0.01, 0.…, 0.15 (black, green, gold, and blue). The time starts from HJD − −; μ is the magnification. Notice that as the source size increases, the values of the magnification peaks usually decrease.

Figure 8. Example light curves for the green source in Fig. 6, with ρ = 0.01. The time starts from HJD −; μ is the magnification. The top panel shows two light curves calculated using our method for the uniform brightness (blue) and limb-darkened (red) cases. The second and third panels show the relative error of the magnification from our method (μ_ours) with respect to the result from ray-shooting (μ_ray) for the uniform brightness and limb-darkened cases, respectively. In both the uniform brightness and limb-darkened cases, the relative errors of the magnification are ∼×−. The bottom panel shows the residual of the magnification of the limb-darkened star relative to that of the uniform brightness star, i.e., (μ_limb − μ_uni)/μ_uni.

Figure 9.
Triple lens magnification (μ) map of a uniform brightness star generated by our method (left panel), and the relative error of the magnification between the results from our method and ray-shooting (right panel); the source radius is ρ = 0.01. The yellow curves in both panels show the caustics. The pixel size of each map is 100 × 60. The colourbar scale goes from −×− to 6×− in the right panel; the maximum error (absolute value) is 5.×−. The triple lens parameters (other than ρ) are the same as in §.

Figure 10.
Binary lens magnification (μ) map of a uniform brightness star generated by our method (left panel), and the relative error of the magnification between the results from our method and the VBBinaryLensing package (right panel). The binary lens separation is s = 0.8, the mass ratio q = 0.1, and the source radius ρ = 0.01. The yellow curve in the left panel shows the caustics. The pixel size of each map is 128 × . The colourbar scale goes from −×− to 3×− in the right panel; the maximum relative error (absolute value) is 3.×−. The VBBinaryLensing package is used to calculate the finite source magnification as a baseline. Other than in the region close to the caustics, our code uses the point source approximation, and the relative error of this approximation is ∼−×−.

In our method, which is based on Stokes' theorem, the light curve calculation is time-consuming, since we have to solve the lens equation many times and need many points to connect the image boundaries. To remedy this, we introduce another refinement to speed up the light curve calculation: adaptive sampling.

As shown in Fig. 7, the light curve of a typical event is globally smooth except when approaching or crossing the caustics. Thus, we can sample the most important points (usually places with large slopes) of the light curve adaptively, and perform interpolation elsewhere to obtain the full light curve. In this way, we can reduce the number of finite source computations substantially.

The adaptive sampling procedure is performed as follows: we first compute the magnifications A₁ and A₂ at points p₁ and p₂; we then compute the magnification A_c at the midpoint p_c and compare it with A_mid = (A₁ + A₂)/2. If the difference between A_c and A_mid is larger than a threshold, e.g., ε = 5 ×−, then we add p_c to our sampled points and repeat the procedure for (p₁, p_c) and (p_c, p₂). This process stops once the error is smaller than ε.

Fig. 11 shows one example of this adaptive sampling procedure. The full light curve is uniformly sampled with 10 points, while for the adaptively sampled light curve only 314 points are needed. With linear interpolation, we recover the full light curve with a relative error of ∼×−, and the CPU time is reduced by a factor of ∼3. Notice that the speedup is not simply the ratio of the number of points (10 / 314 ≈ …).
We note that higher-order spline interpolations converge faster globally; however, at the entrance and exit of caustics the light curve is steeper, and higher-order interpolation yields oscillations there. This is known as the "Runge phenomenon" (Runge 1901).
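The midpoint-refinement procedure described above can be sketched as follows. This is an independent illustration: in practice f would be the finite source magnification as a function of time, but any callable works, and the default tolerance is only an example value:

```python
def adaptive_sample(f, a, b, eps=5e-4, max_depth=25):
    """Adaptively sample f on [a, b]: an interval is split whenever its
    midpoint value deviates from the average of its endpoint values
    (i.e. the linear interpolation) by more than eps."""
    pts = {a: f(a), b: f(b)}

    def refine(x0, x1, depth):
        xc = 0.5 * (x0 + x1)
        fc = f(xc)
        if depth > 0 and abs(fc - 0.5 * (pts[x0] + pts[x1])) > eps:
            pts[xc] = fc
            refine(x0, xc, depth - 1)
            refine(xc, x1, depth - 1)

    refine(a, b, max_depth)
    xs = sorted(pts)
    return xs, [pts[x] for x in xs]
```

Linear interpolation between the returned points then reproduces the curve to roughly eps away from the sharpest features; as noted above, higher-order splines are best avoided near caustic entrances because of Runge-type oscillations.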
Currently, modelling of microlensing light curves of triple lens events often uses methods based on perturbation or ray-shooting. We have developed a method based on establishing continuous image boundaries and contour integration to calculate triple microlensing light curves. Before this work, the contour integration method had not been developed beyond binary lens systems. We first implemented a procedure to obtain the magnification of a source star with uniform brightness, and then extended the procedure to handle limb-darkening. It is efficient for small source sizes, and complements the ray-shooting method. Our approach has two advantages: 1) unlike the contour plot method, we obtain image boundaries accurately by solving the lens equation; 2) it starts from successively sampled points on the source boundary, which correspond to successive image tracks. From these image tracks, which contain both true and false images, we identify the true image boundaries and calculate the enclosed areas. Our method is general and can be applied to any multiple lens system, not just triple lens systems. Ray-shooting, on the other hand, is efficient for big finite source sizes, because the Poisson noise in the number of rays each pixel collects is then less important. Our independent modelling code is available for cross-checking with other methods.

We have tested our method on light curves and magnification maps, and compared the results with the ray-shooting method. Due to the need to connect continuous closed image boundaries, our method requires more CPU time when the source radius is large (e.g., ρ = 0., m/t_E ∼ ×− for t_E = 20 days). In such cases, there is no need to calculate the magnification for every epoch.
In practice, the speedup will likely depend on the source size and the number of data points in the light curve.

Our code is publicly available, including both the C++ source code and the materials to build a Python module "TripleLensing" for calling the C++ code from Python scripts. We plan to further improve the efficiency of our code and apply it to real triple lens events.

We thank the anonymous referee for providing many helpful comments and suggestions. We thank Valerio Bozza, Jan Skowron, and Andrew Gould for making their codes publicly available. This work is partly supported by the National Science Foundation of China (Grant No. 11821303, 11761131004 and 11761141012 to SM).

https://github.com/rkkuang/triplelens

The data generated as part of this project may be shared on reasonable request to the corresponding author.
REFERENCES
Albrow M. D., et al., 2001, ApJ, 549, 759
An J. H., et al., 2002, ApJ, 572, 521
Atwood B., et al., 2012, in Proc. SPIE. p. 84466G, doi:10.1117/12.925383
Bachelet E., Penny M., 2019, ApJ, 880, L32
Bachelet E., Norbury M., Bozza V., Street R., 2017, AJ, 154, 203
Beaulieu J. P., et al., 2010, in Coudé du Foresto V., Gelino D. M., Ribas I., eds, Astronomical Society of the Pacific Conference Series Vol. 430, Pathways Towards Habitable Planets. p. 266 (arXiv:1001.3349)
Beaulieu J. P., et al., 2016, ApJ, 824, 83
Bennett D. P., 2010, ApJ, 716, 1408
Bennett D. P., Rhie S. H., 1996, ApJ, 472, 660
Bennett D. P., et al., 2010, ApJ, 713, 837
Bennett D. P., et al., 2016, AJ, 152, 125
Bennett D. P., et al., 2020, AJ, 160, 72
Bozza V., 2010, MNRAS, 408, 2188
Bozza V., Bachelet E., Bartolić F., Heintz T. M., Hoag A. R., Hundertmark M., 2018, MNRAS, 479, 5157
Daněk K., Heyrovský D., 2015, ApJ, 806, 99
Daněk K., Heyrovský D., 2019, ApJ, 880, 72
Di Stefano R., Esin A. A., 1995, ApJ, 448, L1
Dominik M., 1993, in Surdej J., Fraipont-Caro D., Gosset E., Refsdal S., Remy M., eds, Liège International Astrophysical Colloquia Vol. 31, Liège International Astrophysical Colloquia. p. 541
Dominik M., 1995, A&AS, 109, 597
Dominik M., 1998, A&A, 333, L79
Dominik M., 2005, MNRAS, 361, 300
Dominik M., 2007, MNRAS, 377, 1679
Dong S., et al., 2006, ApJ, 642, 842
Dong S., et al., 2009, ApJ, 698, 1826
Einstein A., 1936, Science, 84, 506
Gaudi B. S., 2012, ARA&A, 50, 411
Gaudi B. S., Gould A., 1999, ApJ, 513, 619
Gaudi B. S., et al., 2008, Science, 319, 927
Gould A., 1994, ApJ, 421, L71
Gould A., 2016, Microlensing Planets. p. 135, doi:10.1007/978-3-319-27458-4_3
Gould A., Gaucherel C., 1997, ApJ, 477, 580
Gould A., Loeb A., 1992, ApJ, 396, 104
Gould A., Welch D. L., 1996, ApJ, 464, 212
Gould A., et al., 2014, Science, 345, 46
Han C., 2000, MNRAS, 312, 807
Han C., 2005, ApJ, 629, 1102
Han C., Chang H.-Y., An J. H., Chang K., 2001, MNRAS, 328, 986
Han C., et al., 2013, ApJ, 762, L28
Han C., et al., 2017, AJ, 154, 223
Han C., et al., 2019, AJ, 158, 114
Han C., et al., 2020a, AJ, 159, 48
Han C., et al., 2020b, AJ, 160, 17
Heyrovský D., 2003, ApJ, 594, 464
Kayser R., Refsdal S., Stabell R., 1986, A&A, 166, 36
Khavinson D., Neumann G., 2004, arXiv Mathematics e-prints, p. math/0401188
Kim S.-L., et al., 2010, in Proc. SPIE. p. 77333F, doi:10.1117/12.856833
Kim S.-L., et al., 2016, Journal of Korean Astronomical Society, 49, 37
Madsen S., Zhu W., 2019, AJ, 878, L29
Mao S., 2012, Research in Astronomy and Astrophysics, 12, 947
Mao S., Paczynski B., 1991, ApJ, 374, L37
Mediavilla E., Muñoz J. A., Lopez P., Mediavilla T., Abajas C., Gonzalez-Morcillo C., Gil-Merino R., 2006, ApJ, 653, 942
Mediavilla E., Mediavilla T., Muñoz J. A., Ariza O., Lopez P., Gonzalez-Morcillo C., Jimenez-Vicente J., 2011, ApJ, 741, 42
Milne E. A., 1921, MNRAS, 81, 361
Misner C. W., Thorne K. S., Wheeler J. A., 1973, Gravitation
Nemiroff R. J., Wickramasinghe W. A. D. T., 1994, ApJ, 424, L21
Paczynski B., 1986, ApJ, 304, 1
Pejcha O., Heyrovský D., 2009, ApJ, 690, 1772
Penny M. T., Gaudi B. S., Kerins E., Rattenbury N. J., Mao S., Robin A. C., Calchi Novati S., 2019, ApJS, 241, 3
Poleski R., Yee J. C., 2019, Astronomy and Computing, 26, 35
Poleski R., et al., 2014a, ApJ, 782, 47
Poleski R., et al., 2014b, ApJ, 795, 42
Rattenbury N. J., Bond I. A., Skuljan J., Yock P. C. M., 2002, MNRAS, 335, 159
Rhie S. H., 2001, arXiv e-prints, pp astro-ph/0103463
Rhie S. H., 2002, arXiv e-prints, pp astro-ph/0202294
Rhie S. H., 2003, arXiv e-prints, pp astro-ph/0305166
Runge C., 1901, Zeitschrift für Mathematik und Physik, 46, 20
Ryu Y.-H., et al., 2019, arXiv e-prints, p. arXiv:1905.08148
Schneider P., Weiss A., 1987, A&A, 171, 49
Schramm T., Kayser R., 1987, A&A, 174, 361
Skowron J., Gould A., 2012, arXiv e-prints, p. arXiv:1203.1034
Smith M. C., Woźniak P., Mao S., Sumi T., 2007, MNRAS, 380, 805
Song Y.-Y., Mao S., An J. H., 2014, MNRAS, 437, 4006
Suzuki D., et al., 2018, AJ, 155, 263
Vermaak P., 2000, MNRAS, 319, 1011
Witt H. J., 1990, A&A, 236, 311
Witt H. J., Mao S., 1994, ApJ, 430, 505
Yoo J., et al., 2004, ApJ, 603, 139
Zhu W., Gould A., Penny M., Mao S., Gendron R., 2014, ApJ, 794, 53

Figure 11. Result of adaptive sampling of the light curve (top panel). The full light curve (green solid line) uniformly samples 10 points, corresponding to a sampling interval of 5.× minutes. With adaptive sampling, merely 314 points (red dots) are sampled, and the linearly interpolated light curve (blue solid line) deviates from the full light curve within a relative error of 5 ×− (bottom panel); μ is the magnification.

This paper has been typeset from a TEX/LaTeX file prepared by the author.