A fast algorithm for the detection of faint orbital debris tracks in optical images
P. Hickson a,b,c,∗

a Department of Physics and Astronomy, The University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1, Canada
b Space science, Technologies and Astrophysics Research (STAR) Institute, Université de Liège, Institut d'Astrophysique et de Géophysique, Allée du 6 Août 19c, 4000 Liège, Belgium
c National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing, China
Abstract
Moving objects leave extended tracks in optical images acquired with a telescope that is tracking stars or other targets. By searching images for these tracks, one can obtain statistics on populations of space debris in Earth orbit. The algorithm described here combines matched filtering with a Fourier implementation of the discrete Radon transform and can detect long linear tracks with high sensitivity and speed. Monte-Carlo simulations show that such tracks, in a background of Poisson random noise, can be reliably detected even if they are invisible to the eye. On a 2.2 GHz computer the algorithm can process a 4096 × 4096 pixel image in roughly 30 s.

Keywords: space debris, streak detection
1. Introduction
The detection of linear tracks in a two-dimensional image is a common problem in image processing. One important application is the detection of orbital debris. While known objects can be tracked with a telescope, increasing the signal-to-noise ratio, unknown objects cannot. An appropriate observing strategy is to track at the sidereal rate, thereby minimizing image contamination by stars, and search for tracks produced in the image by objects moving across the field of view during the exposure. Automated algorithms can process large data sets and can reach detection limits fainter than can human observers. Many groups have developed algorithms to find streaked images, for the detection of moving celestial objects (Sara & Cvrcek, 2017; Waszczak et al., 2017) as well as for satellite or debris detection (Zimmer et al., 2013; Ciurte & Danescu, 2014; Vananti et al., 2015; Virtanen et al., 2017; Vallduriola et al., 2018). A variety of techniques have been employed. In segmentation-based methods (Liu, 1992; Virtanen et al., 2017), pixels having intensities above a threshold are analyzed. This is well-suited to short, relatively bright streaks. Stacking methods are useful when an object is observed in multiple images (Yanagisawa et al., 2012). Other methods employ the Radon or Hough transform (Zimmer et al., 2013; Ciurte & Danescu, 2014) or

∗ Corresponding author
Email address: [email protected] (P. Hickson)

matched filtering (Gural et al., 2005; Schildknecht et al., 2015; Sara & Cvrcek, 2017).

The classical Radon transform (Radon, 1917; Radon & Parks, 1986), and its close relative the Hough transform (Hough, 1959; Duda et al., 1972), have long been employed to identify linear features in images. These transforms map lines to points in a two-dimensional "Hough space", whose axes correspond to position and angle. The angle is that between the normal to the line and the reference axis, and the position is the perpendicular distance from the line to the origin, usually taken to be the centre of the image. Thus the problem of detecting long streaks becomes a simpler problem of detecting local maxima in Hough space. A great advantage of this approach is that fast techniques have been developed for the computation of the Radon transform (Beylkin, 1987; Götz & Druckmüller, 1995; Press, 2006). These are fast in the sense that for an N × N image, processing time grows roughly in proportion to N² ln N, rather than N³ for the classical Radon transform. For a typical astronomical CCD image, N ∼ 10³–10⁴, so the fast Radon transform requires typically two to three orders of magnitude fewer computations.

This paper describes a method that combines matched filtering with a fast discrete Radon transform in order to achieve high sensitivity and speed. It is best suited for the detection of long faint streaks in a single image, as would be produced by fast-moving objects. The algorithm was tested by Monte Carlo simulation of random linear tracks, having a Gaussian cross-section of specified FWHM, superimposed on a constant background, to which was then added random Poisson noise.
It was found to be capable of reliably detecting faint tracks that were invisible to the eye.

Preprint submitted to Advances in Space Research, September 5, 2018

The sensitivity of the method comes from the use of matched filtering, which provides the highest possible signal-to-noise ratio of any linear detection technique, allowing the faintest possible detection limit. Direct application of matched filtering, by integrating along all possible directions and positions in the image, would be equally sensitive but very slow. The Radon transform decreases the number of dimensions of the search, providing a large increase in speed.

We begin by briefly reviewing the concepts of optimal detection and the Radon transform. The algorithm is then described and results of the simulations are presented. Our Python source code implementing the Radon transform is reproduced in Appendix A.
2. Method
It has been known since 1953 that the optimal linear technique for the detection and measurement of a signal in the presence of uniform uncorrelated stochastic noise is that of matched filtering (Woodward, 1953; Turin, 1960). It is optimal in the sense that no other linear filter can give a higher signal-to-noise ratio.

In order to apply this method to the problem of detecting and measuring faint tracks in noisy images, let us first suppose that we know a priori the angle that the track makes with one of the axes of the image. Then, we can integrate along this direction, summing the pixel values along lines parallel to the track, in order to produce a one-dimensional mean profile of the cross-section of the track. Effectively, this amounts to projecting the image onto a line that is orthogonal (transverse) to the track. In order to measure the position of the track, and the total flux that it contains, we can search this one-dimensional projection for a local maximum.

If we further know the shape and width of the track, in the transverse direction, we can employ matched filtering. This will generally be true as tracks left by orbital debris are typically unresolved, having a transverse intensity profile that is well-approximated by the one-dimensional projected profile of the point-spread function (PSF), found from images of stars in the field. Denote the summed intensity along the projection line by p(r), where r is a coordinate in the direction transverse to the track. Assume that any constant intensity I, such as the sky background, has first been subtracted. Let f(r) be the expected profile, i.e. the projected PSF, normalized so that its one-dimensional integral is unity,

    ∫ f(r) dr = 1.    (1)

For simplicity we show here the results for continuous functions, and the integral extends over the entire domain of the function.
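The projection step described above can be illustrated with a short sketch (our own illustration using scipy.ndimage, not the code of Appendix A): rotate the image so that the assumed track direction is vertical, then sum each column to obtain the transverse profile.

```python
import numpy as np
from scipy import ndimage

def transverse_profile(image, angle_deg):
    """Project an image onto the line transverse to an assumed track
    orientation, by rotating so the track runs vertically and then
    summing each column."""
    rotated = ndimage.rotate(image, angle_deg, reshape=True, order=1)
    return rotated.sum(axis=0)

# A vertical track in column 32 of a 64 x 64 image projects to a
# single strong peak at that column.
img = np.zeros((64, 64))
img[:, 32] = 1.0
prof = transverse_profile(img, 0.0)
print(int(np.argmax(prof)))  # peak at the track's column
```

In practice the rotation angle is unknown, which is exactly why the fast Radon transform of Section 2 replaces this brute-force projection.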
The extension to discrete values can be made by replacing integrals by summations.

If the track has the expected profile, and is centred at r = r₀, we may write

    p(r) = F f(r − r₀) + n(r),    (2)

where F is the total flux in the track (the integral along the projection line of the summed intensity in the track) and n(r) is a random variable having zero mean and variance σₙ²(r), representing the noise. Normally this will be the single-pixel noise variance σₚ² multiplied by the number of pixels that were summed in each line parallel to the track.

To detect the track with optimal sensitivity, we cross-correlate the summed intensity profile with a function h(r) that is proportional to the expected signal divided by the noise variance (King, 1983),

    h(r) = α f(r) / σₙ²(r).    (3)

The constant α is chosen to make the integral of h(r) unity,

    ∫ h(r) dr = 1.    (4)

The cross-correlation will have a maximum at the location of the track, where it takes the value

    g(r₀) = F ∫ h(r) f(r) dr + ∫ h(r) n(r) dr.    (5)

This is a fluctuating quantity having an ensemble average

    ⟨g(r₀)⟩ = F ∫ f(r) h(r) dr.    (6)

The second term has disappeared by virtue of n(r) having zero mean. The best estimate of the true flux F is therefore

    F̂ = g(r₀) / Q,    (7)

where

    Q = ∫ f(r) h(r) dr.    (8)

The variance of this estimate is

    Var F̂ = (1/Q²) Var g(r₀) = (1/Q²) ∫ h²(r) σₙ²(r) dr = α/Q,    (9)

so the signal-to-noise ratio is

    s = F √(Q/α).    (10)

If the noise variance can be assumed to be constant, independent of r, then Eqns. (1), (3) and (4) require that α = σₙ², and therefore h(r) = f(r). The matched filter is proportional to the expected signal. In that case,

    Q = ∫ f²(r) dr,    (11)

and the signal-to-noise ratio (SNR) becomes

    s = (F/σₙ) √Q = g(r₀) / (σₙ √Q).    (12)

This shows the importance of a sharp PSF (small FWHM), which increases Q, improving the SNR.

Although the noise in astronomical images arises from a number of different sources (Newberry, 1991; Howell et al.
, 2003), it is often dominated by the Poisson statistics of the detected photons. In that case, the noise variance will be

    σₙ²(r) = F₀ + F f(r),    (13)

where F₀ represents the projected intensity of the background light, before sky subtraction. Even though the mean background has been subtracted, its noise remains. For the Poisson case, the optimal filter, Eqn. (3), becomes

    h(r) = (α/F₀) f(r) / [1 + β f(r)],    (14)

where

    α = F₀ [ ∫ f(r) dr / (1 + β f(r)) ]⁻¹.    (15)

Here β = F/F₀ is a measure of the relative brightness of the track compared to the background. The constant Q is now

    Q = (α/F₀) ∫ f²(r) dr / (1 + β f(r)),    (16)

so the SNR, Eqn. (10), becomes

    s = (F/σ₀) [ ∫ f²(r) dr / (1 + β f(r)) ]^(1/2),    (17)

where σ₀² = F₀ is the variance of the background.

For the Poisson case we see that the optimal filter depends on the flux of the track, which is generally not known in advance. But the problem considered in this paper is the efficient detection of faint tracks in a noisy image. In that case, β ≪
1, and the Poisson equations approach those of the constant-variance case, as expected. For bright streaks, it does not matter if the matched filter that is used is somewhat less than optimal; they will be detected in any case. This is the primary justification for employing the constant-variance matched filter when searching for faint tracks.
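The constant-variance case of Eqns. (2)–(12) can be sketched numerically. The following is a minimal illustration of our own (not the paper's code, and the circular correlation and variable names are our choices): build a normalized Gaussian profile f, simulate a summed track profile p(r), cross-correlate with the matched filter h = f, and recover the flux and SNR. The last lines also show that the Poisson filter of Eqn. (14) reverts to h = f when β ≪ 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete analogue of Eqns (2)-(12), constant noise variance,
# so the matched filter h equals the normalized profile f.
n_pix, sigma_n, F_true, r0 = 256, 1.0, 60.0, 100
sig = 3.0 / np.sqrt(8 * np.log(2))     # Gaussian sigma for FWHM = 3 pixels

def profile(center):
    """Normalized transverse profile f, wrapped circularly (Eqn 1)."""
    r = np.arange(n_pix)
    d = (r - center + n_pix / 2) % n_pix - n_pix / 2
    f = np.exp(-0.5 * (d / sig) ** 2)
    return f / f.sum()

f0 = profile(0)                        # template centred at r = 0
Q = np.sum(f0 ** 2)                    # discrete Eqn (11)
p = F_true * profile(r0) + rng.normal(0.0, sigma_n, n_pix)   # Eqn (2)

# Circular cross-correlation of p with the matched filter h = f
g = np.fft.ifft(np.fft.fft(p) * np.conj(np.fft.fft(f0))).real

r_hat = int(np.argmax(g))              # recovered track position
F_hat = g[r_hat] / Q                   # flux estimate, Eqn (7)
snr = g[r_hat] / (sigma_n * np.sqrt(Q))  # Eqn (12)

# Poisson filter, Eqn (14), for a faint track: nearly identical to f
beta = 0.01
h = f0 / (1.0 + beta * f0)
h /= h.sum()                           # normalization absorbs alpha/F0
print(r_hat, F_hat)                    # position near 100, flux near 60
```

The recovered flux fluctuates about F_true with standard deviation σₙ/√Q, exactly the matched-filter limit of Eqn. (9).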
Of course, if one knew in advance the orientation of the track, the detection problem would be relatively simple. But in general the orientation is not known, so the algorithm must search all possible orientations. Directly computing projections for ∼N orientations is time consuming. However, the speed of the process can be increased greatly by the use of the fast Radon transform.

The algorithm that we employ to compute the Radon transform of the image is based on the Fourier slice theorem (Bracewell, 1960). This theorem, which is easily proved, states that the values on a slice through the origin of the two-dimensional Fourier transform of the image are equal to the one-dimensional Fourier transform of the projection of the image onto a line parallel to the slice. This allows one to employ fast Fourier transforms to compute the Radon transform, by taking the inverse Fourier transform of each slice, for a complete set of angles.

One way to compute the values on the slice would be to use two-dimensional interpolation in the transformed image to estimate the values at integer distances (in units of pixels) along the slice. However, a simpler and faster method is employed here. If the angle θ between the slice and the x axis is in the range |θ| ≤ 45° (for square images), a value on the slice is determined for every x pixel by taking the value of the image at (x, y), where y = x tan θ. Sinc interpolation is used to estimate the value of the transformed image at fractional values of y. In this way, the problem of interpolation in two dimensions is reduced to one-dimensional interpolation. The resulting intensities along the slice have a spacing of Δx/cos θ. To compensate, according to the Fourier scaling theorem, the intervals of r after taking the inverse Fourier transform are multiplied by a factor of cos θ and the intensity is multiplied by a factor of sec θ.
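The Fourier slice theorem is easy to verify numerically for the simplest, axis-aligned slice (θ = 0), where no interpolation is needed: the ky = 0 row of the 2-D FFT equals the 1-D FFT of the image summed over y. A minimal check (our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))

# Slice through the origin of the 2-D transform, at angle theta = 0
slice_ky0 = np.fft.fft2(img)[0, :]

# 1-D transform of the projection of the image onto the x axis
proj_fft = np.fft.fft(img.sum(axis=0))

print(np.allclose(slice_ky0, proj_fft))  # True
```

For oblique angles the same identity holds, but the slice values must be interpolated between frequency samples, which is where the sinc interpolation discussed below enters.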
If |θ| > 45°, the approach is similar but with x and y interchanged, tan θ replaced by cot θ, and cos θ replaced by sin θ.

This differs from the standard Radon transform in that the position coordinate no longer measures perpendicular distance from the track to the origin, but distance along the x or y axis, depending on the value of θ. The matched filter is easily adapted to this by scaling the FWHM of the PSF by a factor of sec θ (or csc θ if |θ| > 45°) in order to account for the oblique cut through the track.

The interpolation scheme that is used to compute values on the slice is important. Simple linear interpolation, or even polynomial interpolation, produces artifacts in the Radon transform, which degrade the photometric accuracy. The correct approach is to use sinc interpolation. Here there is a tradeoff between the order of the interpolation (the number of pixels that are included in the summation) and the speed of the technique. A full N-point sinc filter completely eliminates the artifacts, but significantly increases processing time. On the other hand, linear interpolation, which is very fast, results in systematic photometric errors that can be as great as 15%. Employing a sinc interpolating filter encompassing 7 pixels reduces photometric errors to less than 5%; increasing this to ∼51 points reduces them to approximately 1%.

Our method involves the following steps:

• If the image is not square, divide it into overlapping square sub-images. Then for each sub-image:
• Mask stars and image defects.
• Subtract the median background.
• Compute the Radon transform.
• Determine the RMS noise σₙ in the Radon image and the threshold value of g corresponding to the desired SNR limit (from Eqn. 12).
• Find the highest value in the Radon image and record the corresponding position, angle, flux and SNR.
• Mask the region around the highest value bounded by specified tolerances in position and angle.
• Repeat this, finding the highest value in the masked Radon image, and continue until the highest value falls below the threshold.
• Combine the detections for all sub-images and reject duplicate detections.

The procedure requires some judgement about what constitutes a duplicate detection. This is best found from experience, but typically, two detections that have positions within a few FWHM of each other and angles within two or three degrees are considered equivalent, and the detection with the highest flux is selected. In some cases, "ghost" detections may occur, which have the same angle but differ in position by the number of pixels along an axis of the image. This results from the periodicity of the fast Fourier transform.
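The iterative peak-find-and-mask loop in the steps above can be sketched as follows (a toy illustration with invented names and tolerances, not the paper's implementation):

```python
import numpy as np

def find_peaks(radon, threshold, mask_half=(5, 3)):
    """Repeatedly take the brightest pixel of a Radon image, record it,
    and mask a surrounding box (tolerances in position and angle),
    until the highest remaining value falls below the threshold."""
    work = radon.copy()
    dp, da = mask_half
    detections = []
    while True:
        idx = tuple(int(i) for i in np.unravel_index(np.argmax(work), work.shape))
        value = float(work[idx])
        if value < threshold:
            break
        detections.append((idx, value))
        p, a = idx
        work[max(0, p - dp):p + dp + 1, max(0, a - da):a + da + 1] = -np.inf
    return detections

# Two isolated peaks above threshold in a toy "Radon image"
toy = np.zeros((100, 90))
toy[40, 10] = 8.0
toy[70, 60] = 6.0
dets = find_peaks(toy, threshold=5.0)
print([d[0] for d in dets])   # [(40, 10), (70, 60)]
```

A real implementation would also convert the (position, angle) indices back to track coordinates and apply the duplicate-rejection rules described above.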
3. Simulations and results
The algorithm was coded in Python 3 and uses the Numpy and Scipy libraries. In order to test it, an image was created and filled with random values from a standard normal distribution (σₚ = 1). Then, a streak was constructed having a random orientation and a transverse profile given by a Gaussian having standard deviation σ = w/√(8 ln 2), where w is the desired FWHM in pixels. The profile was normalized so that its integral, multiplied by the number of pixels along the length of the track, equals F, the flux required to achieve the desired signal-to-noise ratio according to Eqn. (12). An example of a simulated image containing several tracks, and the corresponding Radon transform, is shown in Figure 1.

Figure 1: Simulated image containing three orbital debris tracks (upper) and its Radon transform (lower). The horizontal scale on the lower image is pixel number, but it represents a range of angles from −90° to 90°. The vertical scale is the x position of the midpoint of the streak. The tracks have a FWHM of 3.0 pixels and signal-to-noise ratios of 100, 50 and 25. Tracks having a signal-to-noise ratio of ∼20 or less are generally invisible to the eye in a 1K × 1K or larger image, but are nevertheless detected by the algorithm.
A total of 2700 simulations were run using a range of SNR and FWHM. The results are summarized in Table 1. Here FWHM is measured in pixels, and the success rate is the fraction of runs for which the strongest detection found by the algorithm matched the simulated track. The last column lists the magnitude error, defined by −2.5 log₁₀(F_measured/F_true).

The SNR values listed in Table 1 are computed using Eqn. (12), where F is the modelled total flux of the track and σₙ = √F₀ is the background Poisson noise.
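The flux corresponding to a target SNR in these simulations follows from Eqn. (12): F = s σₙ/√Q, with σₙ = σₚ√(track length) for a track crossing the image. A sketch with our own variable names (not the paper's code) also shows why such tracks are invisible to the eye: at SNR = 10, the brightest pixel of the track is only ∼0.2 σₚ above the background.

```python
import numpy as np

# Flux needed for a target total SNR, per Eqn (12): s = (F / sigma_n) sqrt(Q)
target_snr = 10.0
fwhm = 3.0                 # pixels
track_len = 1024           # pixels along the track
sigma_p = 1.0              # per-pixel noise (standard normal)

sig = fwhm / np.sqrt(8 * np.log(2))
r = np.arange(-15, 16)
f = np.exp(-0.5 * (r / sig) ** 2)
f /= f.sum()               # normalized transverse profile, Eqn (1)
Q = np.sum(f ** 2)

sigma_n = sigma_p * np.sqrt(track_len)   # noise of the summed profile
F = target_snr * sigma_n / np.sqrt(Q)    # total flux in the track

peak = (F / track_len) * f.max()         # brightest single-pixel intensity
print(round(F), round(peak, 2))          # roughly 680 and 0.21
```

Summing over the full track length is what turns this per-pixel whisper into a clear detection.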
4. Effect of track curvature
The algorithm is designed to find linear tracks. Tracks that are slightly curved will still be detected, but with lower sensitivity. Tracks produced by orbital debris are not perfectly straight, although the deviation from linearity over the field of view of a typical astronomical camera is generally quite small.

An exact analysis of the curvature of debris tracks is beyond the scope of this paper, but a simple approximate treatment will suffice to provide an estimate of the magnitude of

Table 1: Monte-Carlo simulation summary
SNR    Size   No. of trials   FWHM   Success rate   Mag error
2.0    1024   1000            2.0    0.001           1.15
                              3.0    0.002          -1.02
                              4.0    0.001           1.01
3.0    1024   1000            2.0    0.008           0.76
                              3.0    0.011           0.66
                              4.0    0.027           0.66
4.0    1024   1000            2.0    0.075           0.39
                              3.0    0.094           0.42
                              4.0    0.125           0.37
5.0    1024   1000            2.0    0.255           0.20
                              3.0    0.328           0.22
                              4.0    0.370           0.22
6.0    1024   1000            2.0    0.620           0.13
                              3.0    0.642           0.14
                              4.0    0.687           0.15
7.0    1024   1000            2.0    0.859           0.14
                              3.0    0.879           0.13
                              4.0    0.914           0.14
8.0    1024   1000            2.0    0.964           0.15
                              3.0    0.971           0.14
                              4.0    0.979           0.13
9.0    1024   1000            2.0    0.997           0.15
                              3.0    0.999           0.14
                              4.0    0.997           0.13
10.0   1024   1000            2.0    0.999           0.14
                              3.0    1.000           0.13
                              4.0    1.000           0.12
Figure 2: Completeness and photometric error vs. signal-to-noise ratio, for tracks having a FWHM of 3.0 pixels.

the effect. Long tracks are made by fast-moving debris in low or middle Earth orbit, which cross a typical imager in a few minutes or less. For such objects, there is little error in ignoring the motion of the observer due to the rotation of the Earth. Also, for simplicity, we shall assume that the orbit is circular.

The relevant geometry is shown in Figure 3. We choose
Figure 3: Geometry for estimating track curvature. A satellite at point S in a circular orbit is observed from point O on the surface of the Earth. The track appears elliptical when projected perpendicular to the line of sight.

a barycentric Cartesian coordinate system in which the object orbits in the x–y plane. Consider an observer O viewing the orbiting object when it is highest in the sky, which is the point where it crosses the x–z plane. The observer sees the orbit projected on the sky, where it appears to be elliptical. The apparent curvature of the track (the reciprocal of the angular radius of curvature in radians) is

    κ = (r/R) sin α = (R⊕/R) sin θ,    (18)

where R is the orbital radius, R⊕ is the radius of the Earth, r is the distance from the observer to the object, α is the angle between the line of sight and the orbital plane, and θ is the angle between the line connecting the observer to the centre and the orbital plane. The second equality follows by application of the sine rule for plane triangles.

From this we see that for a given angle θ, the curvature is maximized by making R as small as possible. The smallest possible value is R = R⊕/cos θ, which places the object on the observer's horizon. With this choice of R, Equation (18) becomes

    κ = (1/2) sin 2θ,    (19)

which has a maximum value κ_max = 1/2 at θ = π/4. For a track of angular length β radians, the maximum angular deviation from the best-fit straight line is

    ε = κβ²/16.    (20)

In order for there to be no significant loss of sensitivity, this deviation should be smaller than the half-width of the PSF. For example, if the PSF has a FWHM of 1 arcsec, the maximum track length is 0.5 degrees. Longer tracks will spread beyond the extent of the matched filter, lowering the detection sensitivity. A lower limit on the sensitivity can be obtained by assuming that the outer regions of the track, beyond this maximum length, contribute nothing to the detection.
In that case, the signal-to-noise ratio for the detection of a maximally-curved 1-degree track would be reduced by a factor of two. In practice, the loss would be smaller than this. Also, this is the worst-case curvature; most tracks will have much less deviation from linearity.

This analysis suggests that track curvature should not have a significant impact for imagers having a field of view of less than one degree. However, the deviation increases quadratically with track length, so it is clear that curvature could be an issue for wide-field cameras having a larger field of view.

A second important source of nonlinearity is distortion within the telescope and camera optics. This distortion needs to be corrected to a fairly tight tolerance, on the order of 0.1% or less, in order to prevent significant distortion of the tracks.
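The curvature budget of Eqns. (19)–(20) can be checked in a few lines (our own sketch): with κ_max = 1/2 and a tolerance of half the PSF FWHM, the maximum track length for a 1-arcsec PSF comes out near the 0.5 degrees quoted above.

```python
import numpy as np

# Worst-case curvature budget from Eqns (19)-(20): kappa_max = 1/2, and
# a track of angular length beta deviates from a straight line by at
# most eps = kappa * beta**2 / 16.
ARCSEC = np.pi / (180 * 3600)       # one arcsecond, in radians

kappa_max = 0.5
fwhm = 1.0 * ARCSEC                 # 1-arcsec PSF
eps_max = fwhm / 2                  # tolerate half the PSF width

beta = np.sqrt(16 * eps_max / kappa_max)   # maximum track length, radians
print(round(float(np.degrees(beta)), 2))   # ~0.5 degrees, as in the text
```

Since β scales only as the square root of the tolerated deviation, even a much sharper PSF shortens the allowable track length only modestly.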
5. Discussion
It can be seen from Table 1 and Figure 2 that the completeness depends primarily on the total signal-to-noise ratio. For a given SNR, there is no significant variation with the width of the track or with the intensity or flux of the track. The 50% completeness limit corresponds to SNR ∼ 5.5, and essentially all objects with SNR ≳ 9 are detected. Tracks with SNR ∼
20 or less are generally invisible to the eye. Yet they are readily detected by the algorithm described here.

The photometric errors found in these simulations are consistent with the expected Poisson noise. As a check for systematic errors, several simulations were run with very bright tracks, for which the Poisson noise was negligible. These had photometric errors that were less than 0.01 magnitudes (approximately 1%) when 51-point interpolation was used, and 0.05 magnitudes for 7-point interpolation.

No attempt was made to simulate stars, which need to be masked before running this algorithm on astronomical images. Masking of stars can be done automatically, and the effect on streaks is generally quite small unless the field is very crowded (Zimmer et al., 2013).

The method is most sensitive for the detection of tracks that completely cross the image. Tracks that end within the image can also be detected, but with lower efficiency; the signal-to-noise ratio for such objects is proportional to the track length. Tracks that cross the image are suboptimal for the estimation of orbital and photometric parameters because the angular speed and intrinsic luminosity of the object cannot be determined unless both endpoints are contained within the image. Nevertheless, the relative brightness of the track, its orientation, and the time of passage all provide useful information.

The algorithm was implemented and tested on a computer having a 2.2 GHz 64-bit processor. Execution time was a few seconds for a 1K × 1K image, ∼10 s for a 2K × 2K image, and ∼30 s for a 4K × 4K image. This speed could be increased by parallelizing the code in order to take advantage of multiple cores; however, it is already fast enough for many applications. For example, it could be useful for surveys such as that planned for the International Liquid Mirror Telescope, which will regularly scan the sky at ∼29° N latitude, acquiring a 16-Mpixel image every 102 seconds (Surdej et al., 2006). Such images can be scanned for streaks in near-real time in order to acquire statistics on orbital debris populations.
Acknowledgements
I am grateful to Prof. J. Surdej for many discussions and a careful reading of the manuscript, and to the University of Liège for hospitality during a sabbatical visit. This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada and the Fonds de la Recherche Scientifique (FNRS) of Belgium, R.FNRS.4164-J-F-G. The hospitality of NAOC and support from the Chinese Academy of Sciences, via the CAS President's International Fellowship Initiative, 2017VMA0013, is also gratefully acknowledged.
References
Beylkin, G. 1987. Discrete Radon transform. IEEE Transactions on Acoustics, Speech and Signal Processing, 162–171.
Bracewell, R. N. 1960. Strip Integration in Radio Astronomy. Aust. J. Phys., 198.
Ciurte, A., & Danescu, R. 2014. Automatic detection of MEO satellite streaks from single long exposure astronomic images. Pages 538–544 of: Danesy, D. (ed), International Conference on Computer Vision Theory and Applications, vol. 1.
Duda, R. O., Hart, P. E., & Johnson, C. G. 1972. Use of the Hough Transformation to Detect Lines and Curves in Pictures. Comm. ACM, 11–15.
Götz, W. A., & Druckmüller, H. J. 1995. A fast digital Radon transform – An efficient means for evaluating the Hough transform. Pattern Recognition, 1985–1992.
Gural, P. S., Larsen, J. A., & Gleason, A. E. 2005. Matched filter processing for asteroid detection. Astronomical Journal, 1951.
Hough, P. V. C. 1959. Machine analysis of bubble chamber pictures. In: Proc. Int. Conf. High Energy Accelerators and Instrumentation.
Howell, S. B., Everett, M. E., Tonry, J. L., Pickles, A., & Dain, C. 2003. Photometric observations using orthogonal transfer CCDs. Pub. Astron. Soc. Pacific, 1340–1350.
King, I. R. 1983. Accuracy of measurement of star images on a pixel array. Pub. Astron. Soc. Pacific, 163–168.
Liu, J.-G. 1992 (Aug.). A computer vision process to detect and track space debris using ground-based optical telephoto images. Pages 522–525 of: Proceedings 11th IAPR International Conference on Pattern Recognition.
Newberry, M. V. 1991. Signal-to-noise considerations for sky-subtracted CCD data. Pub. Astron. Soc. Pacific, 122–130.
Press, W. H. 2006. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines. Proc. Nat. Acad. Sciences of the U.S.A., 19249–19254.
Radon, J. 1917. Zur Bestimmung von Funktionen durch ihre Integralwerte bestimmter Mannigfaltigkeiten. Reports on the proceedings of the Royal Saxonian Academy of Sciences at Leipzig, mathematical and physical section, 262–277.
Radon, J., & Parks, P. C. 1986. On the determination of functions from their integral values along certain manifolds. IEEE Transactions on Medical Imaging, vol. 5, 170–176.
Sara, R., & Cvrcek, V. 2017. Faint streak detection with certificate by adaptive multi-level Bayesian inference. In: Proceedings of the 7th European Conference on Space Debris.
Schildknecht, T., Schild, K., & Vananti, A. 2015. Streak Detection Algorithm for Space Debris Detection on Optical Images. Page 136 of: Advanced Maui Optical and Space Surveillance Technologies Conference.
Surdej, J., Absil, O., Bartczak, P., Borra, E., Chisogne, J.-P., Claeskens, J.-F., Collin, B., De Becker, M., Defrère, D., Denis, S., Flebus, C., Garcet, O., Gloesener, P., Jean, C., Lampens, P., Libbrecht, C., Magette, A., Manfroid, J., Mawet, D., Nakos, T., Ninane, N., Poels, J., Pospieszalska, A., Riaud, P., Sprimont, P.-G., & Swings, J.-P. 2006. The 4m international liquid mirror telescope (ILMT). Page 626704 of: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol. 6267.
Turin, G. L. 1960. An introduction to matched filters. IRE Transactions on Information Theory, 311–329.
Vallduriola, G. V., Trujillo, D. A. S., Helfers, T., Daens, D., Utzmann, J., Pittet, J.-N., & Lièvre, N. 2018. The use of streak observations to detect space debris. International Journal of Remote Sensing, 2066–2077.
Vananti, A., Schild, K., & Schildknecht, T. 2015. Streak detection algorithm for space debris detection on optical images. In: Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference.
Virtanen, J., Poikonen, J., Säntti, T., Komulainen, T., Torppa, J., Granvik, M., Muinonen, K., Pentikäinen, H., Martikainen, J., Näränen, J., Lehti, J., & Flohrer, T. 2017. Streak detection and analysis pipeline for space-debris optical images. Advances in Space Research, 1607–1623.
Waszczak, A., Prince, T. A., Laher, R., Masci, F., Bue, B., Rebbapragada, U., Barlow, T., Surace, J., Helou, G., & Kulkarni, S. 2017. Small near-Earth asteroids in the Palomar Transient Factory Survey: A real-time streak-detection system. Publications of the Astronomical Society of the Pacific, 0344029.
Woodward, P. M. 1953. Probability and information theory with applications to radar. London: Pergamon Press.
Yanagisawa, T., Kurosaki, H., Banno, H., Kitazawa, Y., Uetsuhara, M., & Hanada, T. 2012. Comparison between four detection algorithms for GEO objects. In: Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference.
Zimmer, P. C., Ackermann, M. R., & McGraw, J. T. 2013. GPU-accelerated faint streak detection for uncued surveillance of LEO.