Real-Time Ellipse Detection for Robotics Applications

Azarakhsh Keipour, Guilherme A. S. Pereira, and Sebastian Scherer

Robotics Institute, Carnegie Mellon University, Pittsburgh, PA. [keipour, basti]@cmu.edu
Department of Mechanical and Aerospace Engineering, West Virginia University, WV. [email protected]

Abstract — We propose a new algorithm for real-time detection and tracking of elliptic patterns suitable for real-world robotics applications. The method fits ellipses to each contour in the image frame and rejects ellipses that do not yield a good fit. It can detect complete, partial, and imperfect ellipses in extreme weather and lighting conditions and is lightweight enough to be used on robots' resource-limited onboard computers. The method is used in an example application of autonomous UAV landing on a fast-moving vehicle to show its performance indoors, outdoors, and in simulation on a real-world robotics task. The comparison with other well-known ellipse detection methods shows that our proposed algorithm outperforms the other methods with an F1 score of 0.981 on a dataset with over 1,500 frames. The videos of the experiments, the source code, and the collected dataset are provided with the paper.
I. INTRODUCTION

Real-time detection and tracking of circular and elliptic shapes using an onboard vision system are essential for many real-world (mainly robotics) applications. For example, many aerial robots' landing zones consist of an elliptical shape, and autonomous cars need to detect circular road signs.

Detecting ellipses in images has been a topic of interest for researchers for a long time. In general, the available ellipse detection approaches can be classified into four classes.

The first class contains voting-based algorithms, including the Hough Transform (HT) [1] and the methods based on it. The HT algorithm uses a 5-dimensional parametric space for the ellipse detection task and is too slow for real-time applications. Other methods try to enhance HT by reducing the dimensionality of the parametric space [2], [3], [4], performing piecewise-linear approximation of the curved segments [5], or randomizing the method [6], [7], [8]. Some of these modified HT-based methods are fast but generally less accurate and not suitable for many robotics applications.

The second class contains optimization-based methods. The most popular methods convert the ellipse fitting problem into a least-squares optimization problem and solve it to find the best fit [9], [10]. These methods are generally not robust and tend to produce many false positives. However, their high processing speed makes them useful for estimating an initial ellipse as the first step of other methods. Another group of optimization-based methods tries to solve the nonlinear optimization problem of fitting an ellipse using a genetic algorithm [11], [12]. These algorithms perform well in noisy images with multiple ellipses, but their processing time makes them impractical for real-time applications.
The third class consists of methods based on edge linking and curve grouping [13]. These methods can detect ellipses in complex and noisy images but are generally computationally too expensive and cannot be used in real-time applications.

The last class contains methods that use an ad-hoc approach or combine methods from the first three groups to achieve faster and more accurate ellipse detection. A real-time method proposed by Nguyen et al. [14] detects arc segments in the image and groups them into elliptical arcs to estimate the ellipse parameters using a least-squares optimization. A method proposed by Fornaciari et al. [15] combines arc grouping with the Hough Transform in a decomposed parameter space to estimate the ellipse parameters. In this way, it achieves real-time performance comparable to slower, more robust algorithms.

While many ellipse detection algorithms have been proposed for computer vision tasks ([16], [17], [18]), these methods' performance drops when they are used in real-world robotics tasks. Some of the challenges in these applications include:
• The algorithm should work online (with a frequency greater than 10 Hz), generally on the robot's resource-limited onboard computer.
• The elliptical pattern should be detected both when it is far from and close to the robot.
• The shape of the pattern is transformed by a projective distortion, which occurs when the shape is seen from different points of view.
• There is a wide range of possible illumination conditions (e.g., cloudy, sunny, morning, evening, indoor lighting).
• Due to the reflection of light (e.g., from sources like the sun or bulbs), the pattern may not always be seen in all the frames, even when the camera is close to it.
• In some frames, there may be shadows on the pattern (e.g., the shadow of the robot or trees around).
• In some frames, the pattern may be seen only partially (due to occlusion or being outside of the camera view).

Considering these challenges, different approaches have been devised in the literature to detect and track an elliptic pattern in real applications. For example, [19] uses the circular Hough transform algorithm for the initial detection, which is a slow algorithm, and then uses other features in their pattern for the tracking. [20] developed a Convolutional Neural Network to detect the elliptic pattern, which was trained with over 5,000 images collected from the pattern moving at various heights at a maximum speed of km/h. [21] uses the method of [15] for ellipse detection only when their robot is far from the pattern and switches to other features on their pattern for the closer frames.

This paper proposes a novel ellipse detection method that performs in real-time on resource-limited onboard computers. Our proposed method first extracts all contours from the input image and then uses a least-squares method to fit an ellipse to each contour. It tests how well the estimated ellipse fits the contour and rejects contours using several criteria. The remaining contours are accepted as the resulting ellipses. The detection method is combined with a simple tracking algorithm that changes the detection parameters as necessary and can significantly increase the precision and performance of the elliptical pattern detection. The resulting process can deal with lighting variations and detect the pattern even when the view is partial.
Comparing the results of our experiments on a collected dataset with those of other methods tested on similar datasets shows that our method outperforms the other real-time ellipse detection methods proposed so far.

Section II explains the developed method for the detection and tracking of elliptical targets; Section III shows an example real-time application that uses the proposed algorithm and compares its performance with some other well-known available methods. Finally, Section IV discusses how the proposed method can be further improved in the future.

II. ELLIPSE DETECTION AND TRACKING

The idea of the proposed ellipse detection algorithm is to fit ellipses to all the contours in a frame or region of interest and then decide whether each ellipse is a good fit. Utilizing real-time ellipse fitting algorithms and fast criteria for checking the fit, the result is a real-time detection algorithm that can detect ellipses as long as the elliptic pattern's contours are (at least partially) extracted during the contour extraction. With a suitable contour extraction method, the whole detection becomes robust to lighting and environmental changes. The pseudocode for the proposed ellipse detector is shown in Algorithm 1.

The detection function receives a frame and a set of threshold values used in the function and returns a set of detected ellipses. A step-by-step explanation of the algorithm is as follows:
1) The first step of the algorithm is to extract edges from the input frame. We used the Canny edge detector [22], considering that the thresholds should be selected carefully to extract suitable edges in a large variety of conditions (e.g., illumination) while preventing the generation of too many edges. Usually, it is beneficial for smaller targets to produce more edges; this increases the processing time but reduces the probability of not having an edge for the elliptic target.
2) The resulting edges are used to extract contours with the algorithm proposed by Suzuki and Abe [23]. This step helps make connections between relevant edges and enables the extraction of shapes in a frame. Ideally, each contour is a collection of connected points constructing a shape's border in the edge image.
Algorithm 1 Proposed approach for ellipse detection

  ▷ This function detects all the ellipses found using the input criteria (thresholds)
  function DetectEllipses(frame, thresholds)
      ▷ Initialize an empty set for the detected ellipses
      Detections ← ∅
      ▷ Detect all edges in the frame
      edges ← DetectEdges(frame)
      ▷ Extract contours from the detected edges
      contours ← ExtractContours(edges)
      for each c ∈ contours do
          ▷ Reject if the contour is too small
          if |c.pixels| is small then continue
          ▷ Fit an ellipse to the contour
          ellipse ← FitEllipse(c)
          ▷ Reject ellipses with unreasonable dimensions
          if ellipse.largeAxis is too small then continue
          if ellipse.smallAxis is too small then continue
          axisRatio ← ellipse.largeAxis / ellipse.smallAxis
          if axisRatio is too large then continue
          ▷ Reject contours that have a small overlap percentage with their ellipse
          overlap ← c.pixels ∩ ellipse.pixels
          if |overlap| / |c.pixels| is small then continue
          ▷ Reject ellipses that have a small overlap percentage with the edges
          overlap ← ellipse.pixels ∩ edges.pixels
          if |overlap| / |ellipse.pixels| is small then continue
          ▷ Add the detected ellipse to the set of detections
          Detections.Insert(ellipse)
      end for
      ▷ Return the detections after all contours are processed
      return Detections
  end function
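For concreteness, the following is a minimal Python sketch of the pipeline in Algorithm 1, assuming the standard OpenCV Python bindings. The threshold values, the 3×3 dilation used to thicken the edge map, and the helper name detect_ellipses are illustrative assumptions, not the exact settings of our released implementation.

import cv2
import numpy as np

def detect_ellipses(frame, canny_low=100, canny_high=200, min_contour_len=20,
                    min_axis=5, max_axis_ratio=5.0,
                    min_contour_overlap=0.95, min_ellipse_overlap=0.95):
    """Illustrative sketch of Algorithm 1; all thresholds are example values."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)              # step 1: edge detection
    # A thickened edge map makes the second overlap check tolerant to imperfect ellipses
    thick_edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)       # step 2: contour extraction

    detections = []
    for c in contours:
        if len(c) < min_contour_len:                            # reject tiny contours (noise)
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)             # least-squares ellipse fit
        large, small = max(w, h), min(w, h)
        if small < min_axis or large > max_axis_ratio * max(small, 1e-6):
            continue                                            # degenerate, line-like fit

        # Rasterize the contour and the fitted ellipse on image-sized masks
        ellipse_mask = np.zeros_like(edges)
        cv2.ellipse(ellipse_mask, ((cx, cy), (w, h), angle), 255, 1)
        contour_mask = np.zeros_like(edges)
        cv2.drawContours(contour_mask, [c], -1, 255, 1)

        # Contour overlap (Eq. (1) below): fraction of contour pixels on the fitted ellipse
        fat_ellipse = cv2.dilate(ellipse_mask, np.ones((3, 3), np.uint8))
        contour_overlap = (np.count_nonzero(contour_mask & fat_ellipse) /
                           max(np.count_nonzero(contour_mask), 1))
        if contour_overlap < min_contour_overlap:
            continue

        # Ellipse overlap (Eq. (2) below): fraction of ellipse pixels supported by edges
        ellipse_overlap = (np.count_nonzero(ellipse_mask & thick_edges) /
                           max(np.count_nonzero(ellipse_mask), 1))
        if ellipse_overlap < min_ellipse_overlap:
            continue

        detections.append(((cx, cy), (w, h), angle))
    return detections

Because the fitted ellipse is rasterized on an image-sized mask, only the ellipse pixels that actually fall inside the frame are counted, which is what allows partially visible ellipses (as in Fig. 1(e)) to pass the second overlap check.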
3) Each contour is processed individually to determine whether it is part of an ellipse. For robotics tasks, an ellipse can have one of the following contour types:
• A single contour containing a full ellipse without any occlusions, broken parts, or additional contours (Fig. 1(a)).
• A single contour containing a full ellipse with additional connected contour branches from the rest of the pattern (Fig. 1(b)).
• Multiple contours, together constructing a full ellipse (Fig. 1(c)).
• A single contour containing an ellipse that is partially occluded by other objects (Fig. 1(d)).
• A single contour containing an ellipse that is partially seen in the frame (Fig. 1(e)).
• A combination of the above contour types (e.g., Fig. 1(f)).

Fig. 1: Contour types containing ellipses in different conditions: (a) A single contour containing a full ellipse. (b) A single contour containing a full ellipse with additional connected contour branches from the rest of the pattern. (c) Multiple contours, together constructing a full ellipse. (d) A single contour containing an ellipse that is partially occluded by another object. (e) A single contour containing an ellipse that is partially seen in the frame. (f) A single contour containing a full ellipse with additional connected contour branches that is partially seen in the frame.

In order to correctly detect the above contour types, the following process is performed on each contour:
a) If a contour has a very small number of pixels, it is ignored, since it is most probably just noise.
b) An ellipse is fit to the contour using the least-squares approximation method described by Fitzgibbon and Fisher [10]. The method fits an ellipse to any input contour; therefore, the contour should be further processed to determine the actual ellipses.
c) The resulting ellipse is ignored if any of its axes is too small or if the ellipse's eccentricity is too high (close to 1). With a very high eccentricity, the resulting ellipse is similar to a line and is not really an ellipse.
d) The current contour is intersected with the resulting ellipse, and the ratio of the number of pixels in the intersection to the number of all pixels in the contour is calculated:
    ContourOverlap = |Contour ∩ Ellipse| / |Contour|,    (1)

where |·| denotes the number of pixels. A low result means that the contour and the resulting ellipse do not fit well, and a significant portion of the contour is not lying on the fitted ellipse. In this case, the contour is ignored and not further processed.
e) Finally, the ellipse is intersected with the edges, and the ratio of the number of pixels in the intersection to the number of all the pixels in the ellipse is calculated:

    EllipseOverlap = |Ellipse ∩ Edges| / |Ellipse|,    (2)

where |·| denotes the number of pixels. A low result means that a significant portion of the ellipse does not correspond to any contours in the image. The reason that the ellipse is intersected with the edge image instead of only its constructing contour is that, due to noise or an imperfect contour detection step, an ellipse is sometimes broken into two or more contours (e.g., the cases like Fig. 1(c)). In these cases, checking the ellipse against a single contour will give a low ratio and result in false negatives. To take care of the cases similar to Figure 1(e), it is essential to count only the ellipse pixels that are actually lying in the image; otherwise, the result will be too low, and a partially viewed ellipse may get rejected. Additionally, to make the algorithm more robust to slightly imperfect ellipses, it is beneficial to increase the edges' thickness before intersecting them with the ellipse.
f) If an ellipse is not rejected in the previous steps, it represents a real ellipse in the image and is added to the set of detected ellipses.
4) Due to the target ellipse's thickness, there is a chance of detecting two or more concentric ellipses. Therefore, the returned set of detected ellipses in the frame is further processed to find the concentric ellipses. When this happens, all the non-concentric ellipses are ignored (a sketch of this check is shown below).

The proposed algorithm can also detect ellipses with partial occlusion or ellipses exceeding the image boundaries. The rejection criteria can be chosen in a way to accept ellipses in such cases.
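A minimal sketch of the concentric-ellipse filtering is the following; the center-distance tolerance and the function name keep_concentric are illustrative assumptions.

def keep_concentric(ellipses, center_tol=0.1):
    """Keep only ellipses whose center (approximately) coincides with another ellipse's.

    Each ellipse is ((cx, cy), (width, height), angle), as returned by the detector
    sketch above; center_tol is the allowed center distance as a fraction of the
    smaller of the two major axes (illustrative value).
    """
    kept = []
    for i, ((cx1, cy1), (w1, h1), _) in enumerate(ellipses):
        for j, ((cx2, cy2), (w2, h2), _) in enumerate(ellipses):
            if i == j:
                continue
            dist = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2) ** 0.5
            if dist < center_tol * min(max(w1, h1), max(w2, h2)):
                kept.append(ellipses[i])  # ellipse i is concentric with at least one other
                break
    return kept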
Choosing higher rejection criteria for ellipse detection generally helps to eliminate potential false positives. Therefore, it is beneficial to have higher rejection thresholds when there is no prior information about the pattern location and size in the image. However, after the first detection of the elliptical pattern, it is possible to change the initial parameters and conditions of the detection to enhance the performance and increase the detection rate. Decreasing the detection threshold values reduces the probability of losing the target ellipse in the following frames due to illumination variation, noise, occlusion, or other changes in the conditions. Additionally, if the approximate movement of the target is known, setting the region of interest (ROI) to the area where the pattern is expected to be seen decreases both the processing time and the probability of falsely detecting other similar shapes that were rejected in the initial frame due to the higher thresholds. Furthermore, whenever the detected target is large enough to be detected in the input frame at a smaller scale, the frame can be scaled down to increase the processing speed. Performing ellipse detection on the smaller frame takes less CPU time and is much faster.

We propose the following steps, which can be combined with the ellipse detection algorithm to track the detected elliptic pattern in the next frames more efficiently (a sketch of a tracking wrapper follows the list):
1) Significantly decrease the ContourOverlap and EllipseOverlap threshold values. This threshold change increases the detection rate and helps the algorithm keep tracking the target.
2) Determine the region of interest, which includes the currently detected target and expands in all directions based on the distance from the target, the relative speed of the robot and the pattern, and other available information. For example, if the distance is far and the relative linear and angular speeds are low, the target is expected to be found close to the current detection coordinates in the next frame.
3) Decrease the scale of the frame by a factor of two (down to a set threshold) every time the detected target is larger than a specified size, and scale the frame up again by a factor of two (up to the actual frame scale) every time the detected target is smaller than a chosen threshold. To make the approach more robust, ellipse detection with the initial, higher parameters is performed once again on a higher scale if no candidate targets are found on the lower scale. The scale change reduces the execution time, as the algorithm needs to process only a quarter of the pixels every time it scales the frame down.
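The following sketch illustrates how these three steps can wrap the detector from Section II (the detect_ellipses sketch above). The ROI margin, the pixel-size limits used to switch scales, and the structure of the two threshold dictionaries are illustrative assumptions rather than the exact values used in our experiments.

import cv2

class EllipseTracker:
    """Illustrative tracking wrapper around an ellipse detector such as detect_ellipses()."""

    def __init__(self, detect_params, track_params, roi_margin=2.0, min_scale=0.25):
        self.detect_params = detect_params   # strict thresholds for the initial detection
        self.track_params = track_params     # relaxed thresholds once the target is locked
        self.roi_margin = roi_margin         # ROI size as a multiple of the target diameter
        self.min_scale = min_scale           # lower bound for frame downscaling
        self.scale = 1.0                     # current downscaling factor
        self.last = None                     # last detection in full-frame coordinates

    def process(self, frame):
        h, w = frame.shape[:2]
        # Step 2: region of interest around the previous detection (full frame otherwise)
        if self.last is not None:
            (cx, cy), (ew, eh), _ = self.last
            r = int(self.roi_margin * max(ew, eh) / 2)
            x0, y0 = max(int(cx) - r, 0), max(int(cy) - r, 0)
            x1, y1 = min(int(cx) + r, w), min(int(cy) + r, h)
            params = self.track_params        # step 1: relaxed thresholds while tracking
        else:
            x0, y0, x1, y1 = 0, 0, w, h
            params = self.detect_params

        # Step 3: run the detector on a downscaled crop to save CPU time
        crop = frame[y0:y1, x0:x1]
        small = cv2.resize(crop, None, fx=self.scale, fy=self.scale) if self.scale < 1.0 else crop
        detections = detect_ellipses(small, **params)

        if not detections and self.scale < 1.0:
            # No candidate on the downscaled frame: retry at a higher scale before giving up
            self.scale = min(1.0, self.scale * 2.0)
            return self.process(frame)

        if not detections:
            self.last = None                  # target lost: full-frame detection next time
            return []

        # Map the first candidate back to full-frame coordinates (the full system would
        # keep the concentric target instead of just the first candidate)
        (dcx, dcy), (dw, dh), ang = detections[0]
        s = self.scale
        self.last = ((x0 + dcx / s, y0 + dcy / s), (dw / s, dh / s), ang)

        # Adapt the scale to the apparent target size (illustrative size limits in pixels)
        size = max(dw, dh) / s
        if size > 200 and self.scale > self.min_scale:
            self.scale /= 2.0
        elif size < 50 and self.scale < 1.0:
            self.scale *= 2.0
        return [self.last]

In this sketch the two parameter sets correspond to the detection and tracking rows of Table I: strict thresholds while searching the whole frame, relaxed thresholds once a target is being tracked.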
III. EXPERIMENTS AND RESULTS

A. Elliptic Target Dataset and Parameter Selection
We created a dataset with sequences recorded using a UAV from a stationary and moving vehicle carrying an elliptical platform at various distances, angles, and illumination conditions. The dataset contains 1,511 frames (1,378 positive and 133 negative frames) and 456 frames of a thinner version of the same pattern. The size of the frames is ×, and the ground truth for the detections is provided. The dataset can be accessed from http://theairlab.org/landing-on-vehicle.

The selection of the proposed ellipse detection algorithm's thresholds depends on the tolerance for false positives vs. false negatives in the application. However, in practice, for most cases, the detection is not too sensitive to the parameters, and they can be selected from a broad range. For our tests on the AirLab Elliptic Target Detection Dataset, we empirically chose the values shown in Table I. The GUI tool provided with the code helps with the calibration process by letting the user see the parameters' effects in real-time. The ellipse detection parameters are independent of the lighting conditions. Therefore, after a one-time calibration, the algorithm should detect the pattern in a wide range of weather conditions (e.g., sunny, cloudy, snowy) as long as the light is enough for the camera to capture the pattern.

TABLE I: Ellipse detection parameters chosen for the tests on the AirLab Elliptic Target Detection Dataset.

Parameter                        Value   Justification
ContourOverlap for detection     0.95    To prevent false positives.
ContourOverlap for tracking      0.7     To enhance target tracking.
EllipseOverlap for detection     0.95    To prevent false positives.
EllipseOverlap for tracking      0.3     To enhance target tracking.
In order to assess the sensitivity of the algorithm to the thresholds, Figure 2 shows the performance of the algorithm for different values of ContourOverlap with the value of EllipseOverlap held fixed. Additionally, Figure 3 shows the performance of the algorithm for different values of EllipseOverlap with the value of ContourOverlap held fixed.

Fig. 2: Performance and execution times of our algorithm vs. the contour overlap threshold (ContourOverlap parameter). The value of the EllipseOverlap threshold is held fixed for all the experiments. Wrong Detections are the wrong detections when the target was present in the frame. (a) Accuracy, Recall, and F1 Score of the algorithm increase with the increase of ContourOverlap up to a point. (b) Increasing ContourOverlap increases the number of True Negatives, while it may result in fewer True Positives after some point. (c) Increasing ContourOverlap results in a lower number of False Positives and Wrong Detections, while it may result in an increase in the number of False Negatives after some point. (d) The execution time decreases with increasing ContourOverlap.
Fig. 3: Performance and execution times of our algorithm vs. the ellipse overlap threshold (EllipseOverlap parameter). The value of the ContourOverlap threshold is held fixed for all the experiments. (a) Accuracy, Recall, and F1 Score of the algorithm increase very slowly with the increase of EllipseOverlap up to a breaking point, where they suddenly drop. (b) Increasing EllipseOverlap increases the number of True Negatives while reducing the number of True Positives after some point. (c) Increasing EllipseOverlap results in a lower number of False Positives, while it may result in an increase in the number of False Negatives after some point. (d) The execution time generally decreases with increasing EllipseOverlap.

Let us define N_TP, N_FP, N_TN, N_FN, N_All, and N_WD as the number of True Positive detections, False Positives, True Negatives, False Negatives, the total number of frames, and the number of frames with visible targets where the detection was wrong. Then the measures in the plots are defined as follows.
• Accuracy measures the ratio of all the frames for which the algorithm gives the correct result (either the target is detected correctly or no target is detected in a frame without a target). It is defined as (N_TP + N_TN) / N_All.
• Precision measures what ratio of all the target detections is actually correct. It is defined as N_TP / (N_TP + N_FP + N_WD).
• Recall (or sensitivity) measures the ratio of all the frames containing targets that are correctly detected. It is defined as N_TP / (N_TP + N_FN + N_WD).
• F1 Score is the weighted average of precision and recall and takes both false positives and false negatives into account. It is defined as 2 × (Recall × Precision) / (Recall + Precision).
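For reference, a direct transcription of these definitions into code looks as follows; the function name and the assumption that every frame falls into exactly one of the five categories are ours.

def compute_measures(n_tp, n_tn, n_fp, n_fn, n_wd):
    """Measures defined in Section III-A, computed from per-frame counts.

    n_wd counts frames where the target was visible but the detection was wrong;
    it is treated as an incorrect result by both precision and recall.
    """
    n_all = n_tp + n_tn + n_fp + n_fn + n_wd  # assumes one outcome per frame
    accuracy = (n_tp + n_tn) / n_all
    precision = n_tp / (n_tp + n_fp + n_wd)
    recall = n_tp / (n_tp + n_fn + n_wd)
    f1 = 2 * (recall * precision) / (recall + precision)
    return accuracy, precision, recall, f1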
TABLE II: Results of the ellipse detector methods on the AirLab Elliptic Target Detection Dataset.

Method           TP*    TN*    FP*    FN*    WD*
Ours
Xie & Ji [24]    55     0      133    166    4

* TP = True Positives, TN = True Negatives, FP = False Positives, FN = False Negatives, WD = Wrong Detections (wrong results when the target was present in the frame).
TABLE III: Performance of the ellipse detector methods on the AirLab Elliptic Target Detection Dataset.

Method                    Accuracy*    Precision*    Recall*    F1 Score*
Ours                                                            0.981
Xie & Ji [24]             3.64%        3.64%         3.96%      0.038
Fornaciari et al. [15]    88.75%       99.67%        87.98%     0.935

* As defined in Section III-A.
It can be seen that increasing the value of ContourOverlap up to some point generally decreases the number of false results (both false positives and false negatives) and therefore increases the number of correct results and the statistical measures. However, after some point, the number of detections (true or false positives) starts dropping, and the performance slowly decreases. Additionally, a higher ContourOverlap results in more candidate contours being eliminated before further processing, reducing the algorithm's execution time.

On the other hand, the value of EllipseOverlap has a smaller effect on performance and execution time, slightly improving the performance up to a point. If it is set too high, the algorithm starts rejecting more ellipses, causing a drop in the number of positive results, which significantly decreases the algorithm's performance.

Finally, the True Negative frames are faster to process than the True Positives due to the rejection of the contours in the first few steps of the algorithm, eliminating the need for further processing of the contours.
B. Comparison With Other Methods
To compare the performance of our algorithm with other methods, we ran two other methods on the dataset introduced in Section III-A:
1) The first method is a MATLAB implementation of a well-known Hough Transform-inspired approach proposed by [24], with random sub-sampling inspired by the work in [25].
2) The second method is the C++ implementation of a real-time ellipse detection method proposed by [15], which is currently the most used method in real-time robotics applications.
Tables II and III show the results and performance of the algorithms on the dataset.

The results show that our algorithm outperforms the other two methods in all the criteria. Xie & Ji's method was unable to perform well on the real frames of our test environment due to the relatively low resolution of our frames and the small size of the target in the frames; the few cases where it could detect the elliptic target were when the target was covering a large portion of the frame. The main problem with the Fornaciari method was that it was unable to detect the elliptic targets when more than 25% of the target was outside of the frame (case (e) in Figure 1).
Fig. 4: Result of the proposed algorithm on sample frames. The red ellipse indicates the detected pattern on the deck of the moving vehicle.

At the same time, our proposed algorithm was still able to detect the target's elliptic pattern in partial views. Additionally, we should note that all the false positive cases of our algorithm on the dataset were detections of elliptical drawings on the ground, which would have been rejected if the UAV's altitude information had been used.

Figure 4 shows results for the detection of the elliptical pattern in some sample frames from the dataset. The method by Fornaciari et al. is unable to detect the ellipses in Figure 4(a) and Figure 4(d).

Table IV compares the execution times of the ellipse detection methods on the same dataset (using an Intel Core i5-4460 CPU @ 3.20 GHz). The implementation of Xie & Ji's method is in MATLAB, which gives much higher execution times than C++ implementations; therefore, we excluded it from Table IV. The method of Fornaciari et al. has average execution times similar to our algorithm's. However, their approach provides slightly more consistent execution times, which can be convenient for control systems using the detection output for robot control. On the other hand, our method's speed increases significantly (with the frame processing time going down to just a few milliseconds) when the robot gets closer to the target pattern. This increase in speed especially helps the system to have a much higher detection rate when a higher processing speed is needed for the robot to approach the moving vehicle for landing.

TABLE IV: Execution times of the ellipse detector methods on the AirLab Elliptic Target Detection Dataset (all times are in milliseconds).

Method                    True Positive        True Negative        False Positive       False Negative
                          Avg.  Max.  Min.     Avg.  Max.  Min.     Avg.  Max.  Min.     Avg.  Max.  Min.
Our Method                32.2
Fornaciari et al. [15]
C. Example Application
To test the proposed ellipse detection and tracking method's performance, it was used in a visual servoing method for autonomous UAV landing on a circular pattern painted on top of moving platforms in indoor, outdoor, and simulated environments. Figure 5 shows screenshots of the method in these different lighting conditions. Videos of these experiments and the project details can be accessed at http://theairlab.org/landing-on-vehicle.

Fig. 5: Screenshots from video sequences showing our autonomous UAV landing on a moving circular pattern in various lighting and environmental conditions.

IV. FUTURE WORK AND DISCUSSION

The proposed ellipse detection and tracking algorithm has shown its performance in an example application and has outperformed other standard methods in our tests. However, there are possible improvements that can further enhance the performance and increase the usability of the method in real-world robotics applications.

The underlying algorithms used in the ellipse detection steps (see Algorithm 1) are the most common algorithms already available in the OpenCV library. This choice has been made for convenience and to allow fast implementation by the potential reader. If better performance is required, the whole method's execution speed and performance can be improved by replacing steps such as ellipse fitting with faster and better algorithms.

Additionally, we have noticed that if the robot's camera is not perfectly calibrated, it may lose track of the elliptic target at close distances, where only a small portion of the pattern is visible at such a skewed angle that the circle is seen as non-elliptic in the camera. The problem exists for any ellipse detection algorithm but can be mitigated by using a robust tracker instead of a detector to track the target ellipse when the robot's camera is too close to the elliptical pattern.

ACKNOWLEDGMENT

The authors want to thank Rogerio Bonatti, Rohit Garg, Puru Rastogi, Geetesh Dubey, Nikhil Baheti, Miaolei He, Zihan (Atlas) Yu, Koushil Sreenath, and Near Earth Autonomy for their support and help in this project. The project was sponsored by the Carnegie Mellon University Robotics Institute and the Mohamed Bin Zayed International Robotics Challenge. During the realization of this work, G. Pereira was supported by UFMG and CNPq/Brazil.

REFERENCES
[1] J. Illingworth and J. Kittler, "A survey of the Hough transform," Computer Vision, Graphics, and Image Processing, vol. 44, no. 1.
[2] vol. 4, 1997, pp. 3154–3157.
[3] H. Yuen, J. Illingworth, and J. Kittler, "Detecting partially occluded ellipses using the Hough transform," Image and Vision Computing.
[4] Pattern Recognition.
[5] Pattern Recognition.
[6] International Conference on Information Technology: Coding and Computing (ITCC 2004), 2004.
[7] Pattern Recognition Letters.
[8] Pattern Recognition Letters.
[9] Pattern Recognition.
[10] A. Fitzgibbon and R. Fisher, "A buyer's guide to conic fitting," in British Machine Vision Conference, 1995.
[11] K.-U. Kasemir and K. Betzler, "Detecting ellipses of limited eccentricity in images with high noise levels," Image and Vision Computing.
[12] Pattern Analysis and Applications, vol. 8, no. 1, pp. 149–162, Sep. 2005. [Online]. Available: https://doi.org/10.1007/s10044-005-0252-7
[13] F. Mai, Y. Hung, H. Zhong, and W. Sze, "A hierarchical approach for fast and robust ellipse extraction," Pattern Recognition.
[14] Oct. 2009, pp. 3280–3286.
[15] M. Fornaciari, A. Prati, and R. Cucchiara, "A fast and effective ellipse detector for embedded vision applications," Pattern Recognition.
[16] Pattern Recognition.
[17] Machine Vision and Applications, vol. 31, no. 7, p. 64, Sep. 2020. [Online]. Available: https://doi.org/10.1007/s00138-020-01113-1
[18] C. Meng, Z. Li, X. Bai, and F. Zhou, "Arc adjacency matrix-based fast ellipse detection," IEEE Transactions on Image Processing, vol. 29, pp. 4406–4420, 2020.
[19] M. Beul, M. Nieuwenhuisen, J. Quenzel, R. A. Rosu, J. Horn, D. Pavlichenko, S. Houben, and S. Behnke, "Team NimbRo at MBZIRC 2017: Fast landing on a moving target and treasure hunting with a team of micro aerial vehicles," Journal of Field Robotics, vol. 36, no. 1, pp. 204–229, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21817
[20] R. Jin, H. M. Owais, D. Lin, T. Song, and Y. Yuan, "Ellipse proposal and convolutional neural network discriminant for autonomous landing marker detection," Journal of Field Robotics, vol. 36, no. 1, pp. 6–16, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21814
[21] Z. Li, C. Meng, F. Zhou, X. Ding, X. Wang, H. Zhang, P. Guo, and X. Meng, "Fast vision-based autonomous detection of moving cooperative target for unmanned aerial vehicle landing," Journal of Field Robotics, vol. 36, no. 1, pp. 34–48, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21815
[22] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 6, pp. 679–698, 1986.
[23] S. Suzuki et al., "Topological structural analysis of digitized binary images by border following," Computer Vision, Graphics, and Image Processing, vol. 30, no. 1, pp. 32–46, 1985.
[24] Y. Xie and Q. Ji, "A new efficient ellipse detection method," in Object Recognition Supported by User Interaction for Service Robots, vol. 2, 2002, pp. 957–960.
[25] C. A. Basca, M. Talos, and R. Brad, "Randomized Hough transform for ellipse detection with result clustering."