Tails: Chasing Comets with the Zwicky Transient Facility and Deep Learning
Dmitry A. Duev, Bryce T. Bolin, Matthew J. Graham, Michael S. P. Kelley, Ashish Mahabal, Eric C. Bellm, Michael W. Coughlin, Richard Dekany, George Helou, Shrinivas R. Kulkarni, Frank J. Masci, Thomas A. Prince, Reed Riddle, Maayane T. Soumagnac, Stéfan J. van der Walt
Draft version March 1, 2021
Typeset using LaTeX twocolumn style in AASTeX63
Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA
IPAC, California Institute of Technology, 1200 E. California Blvd, Pasadena, CA 91125, USA
Department of Astronomy, University of Maryland, College Park, MD 20742, USA
Center for Data Driven Discovery, California Institute of Technology, Pasadena, CA 91125, USA
DIRAC Institute, Department of Astronomy, University of Washington, 3910 15th Avenue NE, Seattle, WA 98195, USA
School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA
Caltech Optical Observatories, California Institute of Technology, Pasadena, CA 91125, USA
Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA
Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 76100, Israel
Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA 94720, USA

(Received January 11, 2021; Revised February 23, 2021; Accepted February 25, 2021)
Submitted to AJ

ABSTRACT

We present Tails, an open-source deep-learning framework for the identification and localization of comets in the image data of the Zwicky Transient Facility (ZTF), a robotic optical time-domain survey currently in operation at the Palomar Observatory in California, USA. Tails employs a custom EfficientDet-based architecture and is capable of finding comets in single images in near real time, rather than requiring multiple epochs as with traditional methods. The system achieves state-of-the-art performance with 99% recall, a 0.01% false positive rate, and a 1-2 pixel root mean square error in the predicted position. We report the initial results of the Tails efficiency evaluation in a production setting on the data of the ZTF Twilight survey, including the first AI-assisted discovery of a comet (C/2020 T2) and the recovery of a comet (P/2016 J3 = P/2021 A3).
Keywords: astroinformatics — astronomy data analysis — convolutional neural networks — comets — surveys

1. INTRODUCTION

Comets have mesmerized humans for millennia, frequently offering, arguably, some of the most spectacular sights in the night sky. Containing the original materials from when the Solar System first formed, comets provide a unique insight into the distant past of our Solar System. The recent discovery of the first interstellar comet 2I/Borisov by amateur astronomer Gennadiy Borisov predictably sparked much excitement and enthusiasm
Corresponding author: Dmitry A. Duev, [email protected]

among astronomers and the general public alike (e.g., Bolin et al. 2020; Fitzsimmons et al. 2019; Guzik et al. 2020). Such objects could potentially provide important information on the formation of other stellar systems. It is a very exciting time to look for comets: the large-scale time-domain surveys that are currently in operation, such as ZTF (Bellm et al. 2019a; Graham et al. 2019), Pan-STARRS (Chambers et al. 2016), or ATLAS (Tonry et al. 2018), and the upcoming ones such as BlackGEM (Bloemen et al. 2016) and the Vera Rubin Observatory / LSST (Ivezić et al. 2008), offer the richest data sets ever available to mine for comets.

Traditional comet detection algorithms rely on multiple observations of cometary objects that are linked together and used to fit an orbital solution. To the best of our knowledge, previous attempts to take the comet's morphology in the optical image data into consideration in the detection algorithms have not led to reliable and robust results.

In this work, we present Tails, a state-of-the-art deep-learning-based system for the identification and localization of comets in the image data of ZTF. Tails employs an EfficientDet-based architecture (Tan et al. 2019) and is thus capable of finding comets in single images in near real time, rather than requiring multiple epochs as with traditional methods.

The Tails code is open-source and can be found in the "dmitryduev/tails" repository on GitHub. The version of the code aligned with this publication is archived on Zenodo at 10.5281/zenodo.4563226.
1.1. The Zwicky Transient Facility
The Zwicky Transient Facility (ZTF) is a state-of-the-art robotic time-domain sky survey capable of visiting the entire visible sky north of −31° declination every night. ZTF observes the sky in the g, r, and i bands at different cadences depending on the scientific program and sky region (Bellm et al. 2019a; Graham et al. 2019). The 576-megapixel camera with a 47 deg² field of view, installed on the Samuel Oschin 48-inch (1.2-m) Schmidt Telescope, can scan more than 3750 deg² per hour, to a 5σ detection limit of 20.7 mag in the r band with a 30-second exposure during new moon (Masci et al. 2019a; Dekany et al. 2020; see https://ztf.caltech.edu).

The ZTF Partnership has been running a specialized survey, the Twilight Survey (ZTF-TS), that operates at Solar elongations down to 35 degrees with an r-band limiting magnitude of 19.5 (Ye et al. 2020; Bellm et al. 2019b). ZTF-TS has so far resulted in the discovery of a number of Atira asteroids (orbits interior to the Earth's) as well as the first inner-Venus object, 2020 AV2 (Ip et al. 2020). Motivated by this success, ZTF-TS will be expanded in Phase II of the project, which commenced in December 2020.

Comets become more easily detectable when close to the Sun as they become brighter and start exhibiting more pronounced comae and tails. Furthermore, it has been shown that the most detectable direction of approach of an interstellar object is from directly behind the Sun because of observational selection effects (Jedicke et al. 2016) and the fact that this direction has a greater cross section for asteroids to bend around and pass into the visibility volume (Engelhardt et al. 2017; Do et al. 2018).

Figure 1. Distribution of over 60,000 individual observations of comets as a function of the predicted total magnitude (as reported by JPL Horizons) used in the seed sample.

Tails automates the search for comets with detectable morphology. While trained and evaluated on a large corpus of ZTF data, in this work we focus on Tails' performance when applied to the ZTF-TS data.

2. TAILS: A DEEP LEARNING FRAMEWORK FOR THE IDENTIFICATION AND LOCALIZATION OF COMETS

Deep learning (DL) is a subset of machine learning that employs artificial many-layer neural networks (McCulloch & Pitts 1943). DL systems are able to discover, in a highly automated manner, efficient representations of the data, simplifying the task of finding the meaningful sought-after patterns in them. We refer the reader to a brilliant introduction to DL given in Géron (2019).

DL systems often reach near-optimal performance for a given task and are able to learn even very complicated, highly non-linear mappings between the input and output spaces. The art of building applied DL systems involves two major challenges: finding a suitable network architecture and, more importantly, constructing a large, labeled, representative data set for the network training. In the case of comet detection, the training set must reflect the possible variations across different seeing conditions, filters, sky locations, and CCDs, and include data artifacts caused by, for example, cross-talk or telescope reflections.
2.1. Data set
To build a seed sample for labeling, we first identified all potential observations of known comets conducted with ZTF from March 5, 2018 to March 4, 2020, based on their predicted position and brightness. The code for accomplishing that is based on the Python libraries pypride (Duev et al. 2016) and solarsyslib (Jensen-Clem et al. 2018) and uses the comet ephemerides obtained from the Minor Planet Center (MPC; https://minorplanetcenter.net/iau/MPCORB/CometEls.txt) for a coarse search, followed by a JPL Horizons (Giorgini et al. 1996; https://ssd.jpl.nasa.gov/horizons.cgi) query for precision.

To provide more contextual information, epochal image data are supplemented by properly aligned reference images of the corresponding patches of sky and difference (epochal minus reference) images generated with the ZOGY algorithm (Zackay et al. 2016), all produced by the ZTF Science Data System at Caltech's IPAC (Masci et al. 2019a). Finally, we generate image triplet cutouts of size 256 by 256 pixels, which at ZTF's pixel scale of about 1″ per pixel corresponds to roughly 4.3′ on a side.

We selected over 60,000 individual observations with the total comet magnitude ranging from 10 to 23 (as reported by JPL Horizons; see Fig. 1), out of which about 20,000 were sourced for manual annotation. This resulted in an initial sample of 3,000 examples with identifiable morphology.

We also compiled a set of approximately 20,000 negative examples consisting of point-like cometary detections, patches of sky with no identified transient or variable sources, CCD-edge cases, and a wide range of real (point-source) transient and bogus (e.g. artifacts due to bright stars, optical ghosts, and "dementors") samples from the Braai data set (Duev et al. 2019).

To expand the data set, we then assembled a standard ResNet-based (He et al. 2015) classifier for comet identification. With this basic classification model, we ran several rounds of an active-learning-like procedure, where we would first train the classifier, evaluate it on the whole data set, sample both confident predictions and the cases close to the classifier's decision boundary, manually inspect and label those examples, and add them to the training set.

Figure 2. Comet examples from the training set: (a) 114P/Wiseman–Skiff observed on 2019/10/19; (b) 29P/Schwassmann–Wachmann observed on 2019/10/19; (c) C/2019 D1 (Flewelling) observed on 2019/07/22; (d) 260P/McNaught observed on 2019/10/19. The first image in a triplet is the epochal science image, the second is the reference image, and the third is the ZOGY difference image.
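The sampling step of this active-learning-like loop can be sketched as follows. This is an illustrative sketch: the function name and its parameters are hypothetical, not the actual Tails implementation.

```python
import numpy as np

def select_for_labeling(scores, n_confident=100, n_boundary=100, threshold=0.5):
    """Pick candidates for the next round of manual annotation:
    examples closest to the classifier's decision boundary plus the
    most confident predictions on either side of it."""
    scores = np.asarray(scores, dtype=float)
    # indices sorted by distance from the decision boundary, closest first
    order = np.argsort(np.abs(scores - threshold))
    boundary = order[:n_boundary]          # near-boundary: most informative
    confident = order[::-1][:n_confident]  # far from boundary: sanity checks
    return np.unique(np.concatenate([boundary, confident]))
```

The near-boundary examples refine the decision surface, while the high-confidence samples catch systematic errors that would otherwise be silently propagated into the training set.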
Roughly 2,000 positive and 2,000 negative examples were added to the training set via this method.

The resulting training data set contains about 5,000 positive and 22,000 negative examples (see Fig. 2). Each triplet in the set has been assigned a label [p_c, x, y], where p_c marks the presence of a comet in the image and x, y ∈ [0, 1] are the relative positions of the comet's "center of mass", as reported by JPL Horizons. For positive examples this translates into [1, x_JPL, y_JPL]; for negative ones, [0, ?, ?], where the question marks mean that these values do not affect the loss in this case.

2.2. Deep neural network architecture and training
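The labeling scheme above can be expressed compactly. This is an illustrative sketch (the helper name and the zero placeholders for negatives are assumptions; the loss masks the positions of negatives out regardless of their value):

```python
def make_label(has_comet, x_jpl=None, y_jpl=None):
    """Build a [p_c, x, y] label: p_c flags comet presence, and (x, y)
    are the relative JPL-Horizons positions in [0, 1] for positives.
    For negatives, the position entries are placeholders ignored by the loss."""
    if has_comet:
        assert 0.0 <= x_jpl <= 1.0 and 0.0 <= y_jpl <= 1.0
        return [1.0, x_jpl, y_jpl]
    return [0.0, 0.0, 0.0]
```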
Tails adopts a custom architecture (see Fig. 3) based on EfficientDet D0 (Tan et al. 2019), a variant of a state-of-the-art architecture designed for object detection, a computer vision technique for the identification and localization of objects in image data. This architecture delivers best-in-class object detection efficiency and performance across a wide range of resource constraints. This is achieved by using EfficientNet, state-of-the-art backbone networks for feature extraction; a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multi-scale feature fusion; and a compound scaling method that simultaneously and uniformly scales the resolution, depth,
Figure 3. Tails architecture: a custom EfficientDet D0-based network (Tan et al. 2019). A batch of duplet or triplet image stacks of size (n_b, 256, 256, [2|3]), where n_b is the number of stacks in the batch, is passed through the backbone. The extracted features from the last five blocks/levels of the backbone network are passed through a bidirectional feature-pyramid network (BiFPN). The resulting five output tensors, denoted by colored circles, are fed into the head network, which outputs the probability of the image containing a comet, p_c, and its predicted relative position (x, y).

and width for all backbone, feature, and location/class prediction networks (Tan et al. 2019).

The use of a BiFPN, which effectively represents and processes multi-scale features, makes this architecture particularly well-suited for the problem of morphology-based comet identification and localization.

A batch of triplet image stacks of size (n_b, 256, 256, 3), where n_b is the number of stacks in the batch, is passed through an EfficientNet B0 backbone (Tan & Le 2019). The extracted features from the last five blocks/levels of the network are passed through the BiFPN. The resulting five output tensors, denoted by colored circles in Fig. 3, are fed into the head network, which outputs the probability of the image containing a comet, p_c, and its centroid's predicted relative (x, y) position.

We defined the loss function as follows:

L = w_c · L_c + w_p · L_p,    (1)

where L_c denotes the binary cross-entropy function for the label c (1: there is a comet in the image; 0: there is no comet) and the predicted probability p_c. If ⌊p_c⌉ = 1, L_p is computed as an L1 loss for the relative position (x, y) and its prediction (x_p, y_p), with a small L2 regularizing term (with a small constant ε), and w_c and w_p denote the weights of the two terms, respectively:

L_c = −Σ [c · log(p_c) + (1 − c) · log(1 − p_c)]
L_p = Σ ⌊p_c⌉ · (|x − x_p| + |y − y_p| + ε · √((x − x_p)² + (y − y_p)²))    (2)

(We note that standard object detection algorithms typically output bounding boxes and corresponding object class probabilities, i.e. sets of (4 + n_classes) numbers. Our approach allowed us to simplify the head network architecture and both simplify and speed up the assembly of the training data set, bypassing the unnecessary complexity and potential inaccuracy of drawing bounding boxes around known comet detections.)

We employed the Adam optimizer (Kingma & Ba 2014), a batch size of 32, and an 81%/9%/10% training/validation/test data split. For data augmentation, we applied random horizontal and vertical flips of the input data; no random rotations or translations were added. We note that the test/validation sets did not contain augmented data from the training set. We used standard techniques to maximize training performance: if no improvement in validation loss was observed for 10 epochs, the learning rate was reduced by a factor of 2, and training was stopped early if no improvement was observed for 30 epochs.

The EfficientNet's weights were randomly initialized. (We experimented with pre-trained weights; however, that neither helped the network to reach convergence faster, nor did it affect the final performance. We believe this is likely due to the fact that astronomical images are very different from those in commonly-used data sets.) We first set w_c = 10, w_p = 1 to allow for a fast convergence of the feature-extracting part of the network.
, w p = 1 and moni-tored the validation loss for early stopping, then bumped w p = 2 and monitored the validation positional RMSE;finally, added the omitted negative examples and againmonitored the validation loss for early stopping.The resulting classifiers were put through the sameactive-learning-like procedure as was employed in theinitial data set assembly, using several months of ZTFTwilight survey data. TAILS PERFORMANCEEvaluated on the test set, with a score p c threshold of0.5, Tails demonstrates false positive and false negativerates (FPR and FNR) of 1.7%, and a ∼ − × science CCDs. The raw ZTF image dataare split into four readout quadrants per CCD and allprocessing is conducted independently on each CCDreadout quadrant. We tessellate each × CCD-quadrant image into a 13 ×
13 grid of overlapping 256 ×
256 pixels tiles and evaluate Tails on those. Tails has been deployed in production since late June2020. We have implemented a “sentinel” service thatprocesses the incoming data in real time and posts theplausible candidates to Fritz , the ZTF Phase II open-source science data platform (van der Walt et al. 2019;Duev et al. 2019; Kasliwal et al. 2019), for further man-ual inspection and vetting. The candidates are auto-annotated with the detailed information on the detec-tion such as the score, CCD and sky positions, and cross-matches with known Solar System objects. Fig. 5 showsscreenshots of the Fritz user interfaces used in the pro-cess.It takes about 5 hours to run inference on a typicalset of nightly ZTF Twilight data ( ∼
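The tessellation can be reproduced with evenly spaced start offsets. The exact quadrant dimensions (roughly 3072 × 3080 px for a ZTF readout quadrant) and the even-spacing scheme are assumptions about the pipeline, not documented behavior:

```python
import numpy as np

def tile_offsets(quadrant_px=3080, tile_px=256, n_tiles=13):
    """Start offsets of n_tiles overlapping tiles along one axis of a
    CCD-quadrant image; with 13 tiles of 256 px over ~3k px, adjacent
    tiles overlap by roughly 20 px, so no pixel falls between tiles."""
    return np.linspace(0, quadrant_px - tile_px, n_tiles).astype(int)

offsets = tile_offsets()
```

With these defaults the stride is about 235 px, i.e. each tile overlaps its neighbor by about 21 px, so a comet that straddles a tile edge still appears whole in the adjacent tile.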
45 30-second exposures) on an e2-highcpu-32 virtual machine instance (32 vCPUs, 32 GB memory, SSD disk) on the Google Cloud Platform, including I/O operations.

Consistent with the expected rate of comet observations, a typical run on nightly Twilight data yields a few dozen candidates, which, given the typical number of processed tiles, gives an empirical false positive rate (FPR) value of about 0.01%. (Standard fully-convolutional approaches often used in computer vision proved to be overkill in this case; see https://github.com/dmitryduev/tails and https://github.com/fritz-marshal/fritz.)

Table 1. Orbital elements of C/2020 T2 provided by the MPC.

element | value
--------|---------------------
e       | 0.9934213
Incl.   | 27.87307
Peri.   | 150.38279
Node    | 83.04834
q       | 2.0546940
T       | 2021 July 11.14638 TT
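The quoted ~0.01% empirical FPR is consistent with quick order-of-magnitude arithmetic. The per-night counts below are assumptions pieced together from numbers quoted in the text (45 exposures, 16 CCDs × 4 readout quadrants, 13 × 13 tiles per quadrant, "a few dozen" candidates):

```python
exposures = 45                 # typical nightly Twilight set
quadrants = 16 * 4             # CCDs x readout quadrants per exposure
tiles_per_quadrant = 13 * 13   # overlapping 256 x 256 px tiles
candidates = 50                # "a few dozen" plausible candidates per night

total_tiles = exposures * quadrants * tiles_per_quadrant
fpr_percent = 100.0 * candidates / total_tiles
print(f"{total_tiles} tiles -> empirical FPR ~ {fpr_percent:.3f}%")
# prints: 486720 tiles -> empirical FPR ~ 0.010%
```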
The scanning results are accumulated and used to expand the training set and improve Tails' performance. We have evaluated Tails' performance on a random sample of 200 observations of known comets with identifiable morphology in July-August 2020 and found an empirical recall value of 99%.

Fig. 6 shows a number of comet candidates not from the training set identified by Tails, including some of the ZTF observations of the comet 2I/Borisov. Optical artifacts resembling cometary objects are the main source of contamination.
3.1. Discovery of comet C/2020 T2
On October 7, 2020, Tails discovered a candidate that was posted to the MPC's Possible Comet Confirmation Page (PCCP; https://minorplanetcenter.net/iau/NEO/pccp_tabular.html) as ZTFDD01 (see Fig. 8). It was later confirmed to be a long-period comet and designated C/2020 T2 (Palomar), marking the first DL-assisted comet discovery (Duev 2020). The candidate was found in the Twilight survey data; it was at 19.3 mag in the ZTF r band. The FWHM of the object was approximately 2″, slightly larger than that of nearby background stars. The object showed a tail extending up to 5″ in the westward direction. Table 1 summarizes the orbital elements of C/2020 T2 provided by the MPC, and Fig. 7 shows its orbit as of the discovery date.

To determine if Tails could have discovered C/2020 T2 before 2020 October 7, we searched the ZTF archive for all Twilight Survey data covering the ephemeris position of the comet with the ZChecker software (Kelley et al. 2019). Eleven nights of data were found between 2020 June 11-20 (evening twilight) and October 7-21 (morning twilight). The comet was in conjunction with the Sun between the two sets, and not observable by ZTF. We measured the brightness of the coma in 4-pixel radius apertures, and aperture-corrected the photometry according to the ZTF pipeline documentation. The data are shown in Fig. 9. Typical seeing was 2″ in June; the comet was very faint (r = 20.2 mag), near the single-image detection limit (r = 20.4-20.9 mag, 5σ point source), and had no morphological features for Tails to pick up. Thus October 7, 2020 was really the first opportunity for Tails to discover the comet.

Figure 4. Test set performance of Tails. The set contains 1,400 negative and 650 positive examples. (a) False positive rate (FPR) and false negative rate (FNR) as a function of score p_c; FPR and FNR balance out at around 1.7% for a score threshold of 0.5. (b) RMSE of the predicted comet position versus that reported by JPL Horizons for the 650 positive samples from the test set.

Figure 5. Screenshots of the Fritz user interfaces used for Tails candidate inspection and vetting. (a) Candidate scanning page: users can inspect the candidates and save vetted objects to one or more groups; candidates that are not saved to any group within 7 days are removed from Fritz. (b) Source page: it aggregates and displays in an interactive manner all kinds of information related to an object that exists on Fritz, such as photometry, spectroscopy, auto-annotations, comments, finder charts, follow-up requests, and other data.

3.2. Recovery of comet P/2016 J3 = P/2021 A3 (STEREO)
A comet candidate was identified by a combination of Tails and the ZTF Moving Object Detection Engine (Masci et al. 2019b) on 2021 January 04 UTC and submitted to the PCCP as ZTF0Ion (see Fig. 10). It was later identified as a recovery of comet P/2016 J3 (STEREO) and given the designation P/2016 J3 = P/2021 A3 (STEREO) (Bolin 2021). P/2021 A3 was identified in the evening Twilight survey data at r = 19.3 mag, with a clearly-extended appearance (scoring 0.9), a coma several arcseconds wide, and a tail extending past 20″ in the north-east direction.

4. DISCUSSION

This work demonstrates the potential of state-of-the-art deep-learning computer-vision architecture designs when applied to the problem of astronomical source detection and localization, with a specific focus on comets.

We experimented with the input data and trained a version of Tails that instead of triplet image stacks uses duplets (epochal/reference images), omitting the ZOGY difference images. Our tests show that this version achieves essentially the same performance as the one trained on triplets without requiring image differencing, expanding the range of potential use cases of Tails.

Figure 6. Candidates identified by Tails. Panels (a), (b), and (c) on the left show detections of real comets: (a) 2I/Borisov observed on 2019/10/05; (b) 2I/Borisov observed on 2019/10/15; (c) C/2010 U3 (Boattini) observed on 2020/08/11. Typical false positives are shown on panels (d), (e), and (f) on the right: (d) a bogus detection next to a bright star; (e) a bogus detection due to a brightened satellite trail; (f) a bogus detection due to a telescope reflection. For each image triplet, the left pane shows the epochal science exposure, the middle pane the reference image of the corresponding patch of sky, and the right pane the ZOGY difference image.
Figure 7.
The orbit of comet C/2020 T2 as of October 7, 2020. Image credit: NASA/JPL-Caltech/D. Duev.
While Tails is trained only on ZTF data, with transfer learning it can be adapted to other sky surveys, including the upcoming Vera Rubin Observatory's Legacy Survey of Space and Time (LSST) (Ivezić et al. 2008).

ACKNOWLEDGMENTS

D. A. Duev would like to thank Ivan Duev for assistance with data labeling. D. A. Duev acknowledges support from Google Cloud and from the Heising-Simons Foundation under Grant No. 12540303.

Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.

This research has made use of data and/or services provided by the International Astronomical Union's Minor Planet Center.
Figure 8.
Discovery image of C/2020 T2 (Palomar), the first DL-assisted comet discovery by Tails. Taken on October 7, 2020 with the ZTF camera on the 48-inch Schmidt telescope at Palomar. The left pane shows the epochal science exposure (256x256 pixel cutout), the middle pane the reference image, the right pane the ZOGY difference image. East is to the left, north is down.
Figure 9.
Photometry of comet C/2020 T2 (Palomar) derived from ZTF-TS images (r-band) versus time from perihelion. A best-fit model lightcurve is also shown: r = 9.85 + 9.54 log(r_h) + 5 log(Δ) − Φ(θ), where r_h is the heliocentric distance in au, Δ is the comet-observer distance in au, and Φ(θ) is the phase angle correction from Schleicher et al. (1998). T_p denotes the time of perihelion passage (July 11, 2021).

The authors would like to express gratitude to the anonymous referee.
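The best-fit model from the Figure 9 caption is straightforward to evaluate. In this sketch the Schleicher phase correction Φ(θ) is taken as a pre-computed input in magnitudes (a simplification; sbpy implements the full phase function):

```python
import math

def model_r_mag(r_h_au, delta_au, phase_corr_mag=0.0):
    """Best-fit lightcurve from the Figure 9 caption:
    r = 9.85 + 9.54 * log10(r_h) + 5 * log10(delta) - Phi(theta),
    with r_h and delta in au and Phi supplied pre-computed in mag."""
    return (9.85 + 9.54 * math.log10(r_h_au)
            + 5.0 * math.log10(delta_au) - phase_corr_mag)
```

At r_h = Δ = 1 au (and zero phase correction), both logarithmic terms vanish and the model reduces to the fitted constant of 9.85 mag.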
Facilities:
PO:1.2m, ZTF
Software: astropy (Astropy Collaboration et al. 2018), Fritz (https://github.com/fritz-marshal/fritz), Kowalski (Duev et al. 2019), matplotlib (Hunter 2007), numpy (Harris et al. 2020), pandas (pandas development team 2020), pypride (Duev et al. 2016), SEP (Barbary 2016), sbpy (Mommert et al. 2019), TensorFlow (Abadi et al. 2016), ZChecker (Kelley et al. 2019)

REFERENCES
Abadi, M., Agarwal, A., Barham, P., et al. 2016, arXiv e-prints, arXiv:1603.04467. https://arxiv.org/abs/1603.04467
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Barbary, K. 2016, Journal of Open Source Software, 1(6), 58, doi: 10.21105/joss.00058
Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019a, PASP, 131, 018002, doi: 10.1088/1538-3873/aaecbe
Bellm, E. C., Kulkarni, S. R., Barlow, T., et al. 2019b, PASP, 131, 068003, doi: 10.1088/1538-3873/ab0c2a
Bloemen, S., Groot, P., Woudt, P., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9906, Ground-based and Airborne Telescopes VI, 990664, doi: 10.1117/12.2232522
Bolin, B. T., Lisse, C. M., Kasliwal, M. M., et al. 2020, AJ, 160, 26, doi: 10.3847/1538-3881/ab9305

Figure 10.