Autonomous Quadrotor Flight despite Rotor Failure with Onboard Vision Sensors: Frames vs. Events
Sihao Sun, Giovanni Cioffi, Coen de Visser, Davide Scaramuzza
This paper has been accepted for publication at the IEEE Robotics and Automation Letters, 2021. © IEEE
Abstract—Fault-tolerant control is crucial for safety-critical systems, such as quadrotors. State-of-the-art flight controllers can stabilize and control a quadrotor even when subjected to the complete loss of a rotor. However, these methods rely on external sensors, such as GPS or motion-capture systems, for state estimation. To the best of our knowledge, this has not yet been achieved with only onboard sensors. In this paper, we propose the first algorithm that combines fault-tolerant control and onboard vision-based state estimation to achieve position control of a quadrotor subjected to complete failure of one rotor. Experimental validations show that our approach is able to accurately control the position of a quadrotor during a motor-failure scenario, without the aid of any external sensors. The primary challenge to vision-based state estimation stems from the inevitable high-speed yaw rotation (over 20 rad/s) of the damaged quadrotor, which causes motion blur in the camera and is detrimental to visual-inertial odometry (VIO). We compare two types of visual input to the vision-based state estimation algorithm: standard frames and events. Experimental results show the advantage of using an event camera, especially in low-light environments, due to its inherent high dynamic range and high temporal resolution. We believe that our approach will render autonomous quadrotors safer in GPS-denied or GPS-degraded environments. We release both our controller and VIO algorithm open source.
Index Terms—Aerial Systems: Perception and Autonomy, Robot Safety, Sensor-based Control, Event Camera.
Source code and video

The source code of both our controller and VIO algorithm is available at: https://github.com/uzh-rpg/fault_tolerant_control. A video of the experiments is available at: https://youtu.be/Ww8u0KH7Ugs

I. Introduction

Multi-rotor drones have been widely used in many applications, such as inspection, delivery, surveillance, agriculture, and entertainment. Among different types of multi-rotor drones, quadrotors are the most popular by virtue of their simple structure and relatively high aerodynamic efficiency. However, owing to their lack of rotor redundancy, quadrotors are also more vulnerable to motor failures. Because safety is a major concern restricting the expansion of the drone industry, it is crucial to develop methods that prevent quadrotors from crashing after motor failures. Fault-tolerant flight control of quadrotors is a feasible solution, since only a software adaptation is required, which is an obvious advantage over adding rotor redundancy or parachutes.

S. Sun, G. Cioffi and D. Scaramuzza are with the Robotics and Perception Group, University of Zurich, Switzerland (http://rpg.ifi.uzh.ch). C. de Visser is with the Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands.
Fig. 1: Controlling a quadrotor with complete failure of a rotor causes fast yaw rotation, over 20 rad/s (top figure). Bottom left: standard frame with motion blur. Bottom center: events only (blue: positive events, red: negative events). Bottom right: event frame generated from events; blue circles are detected features, green dots indicate tracked features.

Previous works have achieved autonomous flight of a quadrotor subjected to complete failure of a single rotor [1], [2], and even in high-speed flight conditions where aerodynamic disturbances are apparent [3], [4]. However, these methods rely on external sensors, such as GPS or motion-capture systems, for position tracking, which completely eliminate or alleviate the effects of state estimation errors. To improve quadrotor safety in GPS-denied or GPS-degraded environments, we need to resort to fully onboard solutions, such as vision-based state estimation.

Complete failure of a rotor results in a fast spinning motion (over 20 rad/s) of the quadrotor [5]. This high-speed motion poses a significant challenge to onboard state estimation. First, in vision-based estimators, it causes motion blur (bottom-left plot in Fig. 1). Such motion blur deteriorates feature detection and tracking and subsequently degrades visual-inertial odometry (VIO) performance, especially in low-light environments. Second, the large centrifugal acceleration read by the inertial measurement unit (IMU) often causes large errors in commonly used attitude estimators. These problems need to be resolved to achieve autonomous quadrotor flight despite rotor failure, using only onboard sensors.

A. Contributions
To address the aforementioned problems, we make the following contributions:

1) We achieve and demonstrate the first closed-loop flight of a quadrotor subjected to complete failure of a rotor, using only onboard sensors and computation. We demonstrate that a damaged quadrotor can both hover and follow a sequence of setpoints.

2) We propose a novel state-estimation algorithm combining measurements from an IMU, a range sensor, and a vision sensor that can be either a standard camera or an event camera [6]. We show that an event camera becomes advantageous in low-light environments because the sensor does not suffer from motion blur and has a very high dynamic range.

3) We improve the complementary filter, commonly used for attitude estimation, to account for the measurement error (up to 1 g) induced on the accelerometer by the fast spinning motion. We show that this yields significant improvements in pitch and roll estimates in this high-speed spinning flight condition.

B. Related Work

1) Quadrotor Fault-Tolerant Control:
The first flight controller resilient to complete failure of a single rotor was proposed in [5], using a feedback-linearization approach. The authors demonstrated that sacrificing stability in the yaw direction is inevitable in order to keep full control of pitch, roll, and thrust. As a consequence, the drone spins fast due to the imbalanced drag torques of the remaining rotors. Following this idea, different approaches were proposed for this problem, such as PID [7], backstepping [8], robust feedback linearization [9], and incremental nonlinear dynamic inversion (INDI) [10]. However, these works were only validated in simulation environments.

The first real-world flight test of a quadrotor with a complete failure of a rotor was achieved in [11], where a linear quadratic regulator (LQR) was applied. The authors also proposed the relaxed-hovering solution as a quasi-equilibrium, in which the quadrotor spins about a fixed point in 3-D space with constant body rates, pitch, and roll angles [1]. The authors of [2] considered the initial spinning phase with a varying yaw rate, using a linear parameter-varying controller. As the above controllers did not consider aerodynamic effects, the INDI approach was applied in [3] and [4] to render the controller resilient to significant aerodynamic disturbances in high-speed wind-tunnel flight tests.
2) Visual Inertial Odometry:
VIO [12], [13] fuses visual and inertial measurements to estimate the 6-DoF pose of the sensor rig. Recent progress has made VIO algorithms computationally efficient enough to be deployed on resource-constrained platforms such as quadrotors [14]. In this work, we are interested in sliding-window optimization-based, also known as fixed-lag smoothing, VIO estimators. Such methods estimate a sliding window of the most recent system states in a nonlinear least-squares optimization problem. They are normally more accurate than filter-based methods [15], [16]. Two successful sliding-window optimization-based estimators are [17], [18]. These estimators minimize a cost function containing visual, inertial, and marginalization residuals. A limited number of the most recent system states is kept in the sliding window; old states are marginalized together with part of the measurements. In this work, we use a sliding-window optimization-based approach by virtue of its favorable trade-off between computational efficiency and estimation accuracy.

Recent works [19], [20] showed that event cameras allow deploying VIO-based state estimators in scenarios where standard cameras are not reliable, such as high-dynamic-range scenes and high-speed motions, which cause large motion blur. In [19], an event-based algorithm was presented that is able to estimate the pose of the camera and build a semi-dense 3D map when the camera undergoes fast motion and operates in challenging illumination conditions. In [20], standard and event cameras were combined into a sliding-window-based VIO pipeline. The authors showed that leveraging the complementary properties of the standard and event cameras leads to large accuracy improvements with respect to the standard-frame-only or event-only algorithms.
C. Notation
Throughout the paper, we use bold lowercase letters to represent vectors and bold capital letters for matrices; otherwise, variables are scalars. Superscript $B$ indicates that a vector is expressed in the body frame, and $C$ stands for the camera frame. By default, a vector without superscript is expressed in the inertial frame. A 3-D vector with subscript $\times$ denotes its skew-symmetric matrix for the cross product, such that $\boldsymbol{a}_\times \boldsymbol{b} = \boldsymbol{a} \times \boldsymbol{b}$ for any $\boldsymbol{a}, \boldsymbol{b} \in \mathbb{R}^3$. $\mathrm{diag}(a, b)$ represents the diagonal matrix with $a$ and $b$ as diagonal elements.
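For readers implementing the notation, a minimal NumPy sketch of the skew-symmetric operator (our illustration, not from the paper's code):

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix a_x, such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[ 0.0, -a[2],  a[1]],
                     [ a[2],  0.0, -a[0]],
                     [-a[1],  a[0],  0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
```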
D. Organization

This paper is organized as follows. We first introduce the fault-tolerant flight controller in Section II. Section III then details the state estimator, including the VIO algorithm and the improved complementary filter. The evaluation and closed-loop flight results are given in Section IV. Finally, conclusions are drawn in Section V.

II. Flight Controller Design
A. Quadrotor Model

1) Quadrotor Dynamic Model:
The translational and rotational dynamic equations of a quadrotor are:

$$m\ddot{\boldsymbol{\xi}} = m\boldsymbol{g} + \boldsymbol{R}_{IB}\boldsymbol{f}^B, \qquad (1)$$

$$\boldsymbol{I}_v^B\dot{\boldsymbol{\omega}}^B + \boldsymbol{\omega}^B \times \boldsymbol{I}_v^B\boldsymbol{\omega}^B = \boldsymbol{m}^B + \boldsymbol{m}_g^B, \qquad (2)$$

where $\boldsymbol{\xi} = [x, y, z]^T$ indicates the center-of-gravity (c.g.) location of the drone in the inertial frame, and $\boldsymbol{R}_{IB} \in \mathrm{SO}(3)$ is the rotation matrix representing the quadrotor attitude. The angular velocity of the body frame w.r.t. the inertial frame is expressed as $\boldsymbol{\omega}^B = [\omega_x, \omega_y, \omega_z]^T$. The vehicle mass and inertia are denoted by $m$ and $\boldsymbol{I}_v^B$ respectively, and $\boldsymbol{g} = [0, 0, g]^T$ denotes the gravity vector, where $g$ is the local gravitational constant. $\boldsymbol{m}_g^B$ is the propeller gyroscopic moment. $\boldsymbol{f}^B = [0, 0, -T]^T$ is the external force vector projected on the body frame.

The collective thrust $T$ and control torques $\boldsymbol{m}^B = [\tau_x, \tau_y, \tau_z]^T$ are generated by the rotors through the control allocation model:

$$[T, \tau_x, \tau_y, \tau_z]^T = \boldsymbol{G}\boldsymbol{u}, \qquad (3)$$

where $\boldsymbol{u} \in \mathbb{R}^4$ is the control input vector containing the thrust generated by each rotor, and $\boldsymbol{G} \in \mathbb{R}^{4\times4}$ is the control effectiveness matrix mapping the individual rotor thrusts to the collective thrust and torques.
2) Reduced Control Allocation Model:
After the occurrence of a complete failure of the $i$-th rotor, where $i \in \{1, 2, 3, 4\}$, a quadrotor cannot maintain a stationary hover [5]. A common strategy is to give up controlling the quadrotor heading angle, leading to a high-speed yaw rotation. In this case, we remove the last row and the $i$-th column of $\boldsymbol{G}$ and obtain a reduced control effectiveness matrix $\tilde{\boldsymbol{G}} \in \mathbb{R}^{3\times3}$. The control input $\boldsymbol{u}$ is also reduced to $\tilde{\boldsymbol{u}} \in \mathbb{R}^3$, where the $i$-th element is removed. We then obtain the reduced control allocation model

$$[T, \tau_x, \tau_y]^T = \tilde{\boldsymbol{G}}\tilde{\boldsymbol{u}}. \qquad (4)$$
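To make the allocation step concrete, here is a minimal sketch (our illustration, not the authors' code) of building $\tilde{\boldsymbol{G}}$ by deleting the yaw row and the failed rotor's column. The arm length, rotor signs, and drag coefficient below are placeholder assumptions, not the paper's platform parameters:

```python
import numpy as np

# Placeholder nominal allocation matrix G (eq. (3)); l and c are illustrative.
l, c = 0.08, 0.01
G = np.array([
    [1.0, 1.0, 1.0, 1.0],   # collective thrust T
    [-l, -l, l, l],         # roll torque tau_x
    [l, -l, -l, l],         # pitch torque tau_y
    [c, -c, c, -c],         # yaw torque tau_z (rotor drag)
])

def reduced_allocation(G, i):
    """Reduced matrix G~ of eq. (4): drop the yaw row and the i-th column."""
    return np.delete(G[:3, :], i, axis=1)

G_tilde = reduced_allocation(G, i=2)  # complete failure of rotor 3 (index 2)
# eq. (4) inverted: thrusts of the three remaining rotors for a given command
u_tilde = np.linalg.solve(G_tilde, np.array([6.0, 0.1, -0.05]))
print(u_tilde)
```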
3) Reduced Attitude Kinematic Model:
Since the heading is not being controlled, instead of a full attitude kinematic model we use a reduced attitude kinematic model [3] to design the flight controller:

$$\dot{\boldsymbol{n}}^B = \boldsymbol{n}^B \times \boldsymbol{\omega}^B + \boldsymbol{R}_{IB}^T\dot{\boldsymbol{n}}, \qquad (5)$$

where $\boldsymbol{n}$ is an arbitrary unit vector and $\boldsymbol{n}^B = [n_x, n_y, n_z]^T$ is its expression in the body frame.

B. Fault-Tolerant Controller
Given the reference acceleration $\boldsymbol{a}_\text{des}$ from a PD position controller [11], we obtain the desired thrust orientation as $\boldsymbol{n} = (\boldsymbol{a}_\text{des} - \boldsymbol{g})/\|\boldsymbol{a}_\text{des} - \boldsymbol{g}\|$. The reduced attitude controller aims at aligning $\boldsymbol{n}$ with a body-fixed vector $\boldsymbol{n}^B_\text{fix} = [\bar{n}_x, \bar{n}_y, \bar{n}_z]^T$, where $\bar{n}_x, \bar{n}_y, \bar{n}_z$ are constants [21]. A straightforward option is to select $\boldsymbol{n}^B_\text{fix} = [0, 0, -1]^T$, namely the thrust direction, to align with $\boldsymbol{n}$. However, we keep a slight angle (around 15 deg) between $\boldsymbol{n}^B_\text{fix}$ and the thrust direction. This reduces the spinning rate at the cost of a wobbling motion of the drone. We refer the reader to [11] for more details.

To align $\boldsymbol{n}^B_\text{fix}$ with $\boldsymbol{n}$, we use a nonlinear dynamic inversion (NDI) approach [22]. We define the control variable as

$$\boldsymbol{y} = \begin{bmatrix} n_x - \bar{n}_x \\ n_y - \bar{n}_y \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}. \qquad (6)$$

We then design the control input $\tilde{\boldsymbol{u}}$ to obtain stable second-order closed-loop dynamics of $\boldsymbol{y}$:

$$\ddot{\boldsymbol{y}}(\tilde{\boldsymbol{u}}) = -\mathrm{diag}(k_p, k_p)\,\boldsymbol{y} - \mathrm{diag}(k_d, k_d)\,\dot{\boldsymbol{y}}, \qquad (7)$$

where $k_p$ and $k_d$ are proportional and derivative gains. An integral term may be added to (7) as well to address slight model mismatch. To simplify the derivation of $\tilde{\boldsymbol{u}}$, we assume that $\omega_z$ and $n_z$ are constant after rotor failure. Similarly to [11], we assume a slowly changing $\boldsymbol{n}$ and set $\dot{\boldsymbol{n}} = \boldsymbol{0}$. The quadrotor inertia matrix $\boldsymbol{I}_v^B$ is approximated by $\mathrm{diag}(I_x, I_y, I_z)$. The propeller gyroscopic moment $\boldsymbol{m}_g^B$ is negligible compared with the rotor-thrust-generated torques, so we omit it in the controller design. We validate these assumptions in the real flights. Substituting (2), (5), and (6) into (7) yields expressions for the roll and pitch torque commands:

$$\begin{bmatrix} \tau_{x,\text{cmd}} \\ \tau_{y,\text{cmd}} \end{bmatrix} = \begin{bmatrix} \left[-k_p y_2 + k_d(n_x\omega_z - n_z\omega_x) + (n_y\omega_z - n_z\omega_y)\,\omega_z\right] I_x/n_z + I_z\omega_y\omega_z - I_y\omega_y\omega_z \\ \left[\;k_p y_1 + k_d(n_y\omega_z - n_z\omega_y) + (n_z\omega_x - n_x\omega_z)\,\omega_z\right] I_y/n_z + I_x\omega_x\omega_z - I_z\omega_x\omega_z \end{bmatrix}. \qquad (8)$$

The thrust command $T_\text{cmd}$ is obtained from the desired vertical acceleration $a_{z,\text{des}}$ and (1), yielding

$$T_\text{cmd} = m\,(g - a_{z,\text{des}})/(\cos\phi\cos\theta), \qquad (9)$$

where $\theta$ and $\phi$ are the pitch and roll angles of the quadrotor. Finally, substituting (8) and (9) into (4), we get the control input $\tilde{\boldsymbol{u}}$, namely the thrust commands of the three remaining rotors:

$$\tilde{\boldsymbol{u}} = \tilde{\boldsymbol{G}}^{-1}\left[T_\text{cmd}, \tau_{x,\text{cmd}}, \tau_{y,\text{cmd}}\right]^T. \qquad (10)$$

Note that this controller is modified from the one in [4] to avoid using motor speed measurements, which are unobtainable on some platforms. As a consequence, it is less robust against model mismatch caused by, e.g., high-speed-induced aerodynamic effects. If the rotor speeds are measurable, we refer readers to the controller proposed in [4] to improve resilience against aerodynamic disturbances. The proposed method also assumes that the motor failure is already known to the controller; a failure can be readily detected by methods such as monitoring the motor current [23] or using a Kalman filter [24].
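The control law (6)-(10) can be summarized in a few lines of code. The following is a minimal sketch under the stated assumptions (constant $\omega_z$ and $n_z$, diagonal inertia, $\dot{\boldsymbol{n}} = \boldsymbol{0}$); the gains are placeholders, and the sign conventions of eq. (8) follow our reconstruction above, so they should be checked against one's own platform:

```python
import numpy as np

def fault_tolerant_control(n_B, n_fix, omega, inertia, phi, theta,
                           az_des, G_tilde, m, kp=30.0, kd=8.0, g=9.81):
    """One step of the reduced-attitude NDI law, eqs. (6)-(10).
    n_B:    desired thrust direction n expressed in the body frame
    n_fix:  body-fixed vector n_fix^B = [nbar_x, nbar_y, nbar_z]
    omega:  body rates [w_x, w_y, w_z]; inertia: [I_x, I_y, I_z]."""
    nx, ny, nz = n_B
    wx, wy, wz = omega
    Ix, Iy, Iz = inertia
    y1, y2 = nx - n_fix[0], ny - n_fix[1]                    # eq. (6)
    # eq. (8): roll/pitch torques from the inverted error dynamics (7)
    tau_x = (-kp * y2 + kd * (nx * wz - nz * wx)
             + (ny * wz - nz * wy) * wz) * Ix / nz + (Iz - Iy) * wy * wz
    tau_y = ( kp * y1 + kd * (ny * wz - nz * wy)
             + (nz * wx - nx * wz) * wz) * Iy / nz + (Ix - Iz) * wx * wz
    T = m * (g - az_des) / (np.cos(phi) * np.cos(theta))     # eq. (9)
    return np.linalg.solve(G_tilde, np.array([T, tau_x, tau_y]))  # eq. (10)
```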
III. Onboard State Estimation

To achieve autonomous flight with rotor failure, the state estimator provides orientation and position information using only onboard sensors: an IMU, a downward-looking range sensor, and a downward-looking camera. The camera can be either a standard camera or an event camera. The block diagram of our system is given in Fig. 2. A range-sensor-aided monocular VIO algorithm provides pose estimates of the camera at 50 Hz. These estimates are fused with the IMU measurements from a low-level flight control board located at the center of gravity, using a Kalman filter and a complementary filter, to provide position, velocity, and orientation estimates at 200 Hz for the fault-tolerant flight controller. A rotation-corrected complementary filter is proposed for orientation estimation instead of directly using the orientation from the VIO; this improves the robustness of the algorithm in case of loss of feature tracks.

Fig. 2: General diagram of the onboard state estimator and the fault-tolerant controller.

Fig. 3: Diagram of the visual-inertial odometry (either event-based or frame-based).
A. Visual Inertial Odometry
The proposed VIO provides position and yaw estimates. It can use either standard frames or events as visual input. A block diagram of the algorithm is given in Fig. 3.
1) Front-end:
If events are selected as visual input, we first generate synthetic event frames [25]. Each event frame $I_k(\boldsymbol{x})$ is aggregated from the events in a spatio-temporal window $W_k$:

$$I_k(\boldsymbol{x}) = \sum_{e_j \in W_k} p_j\,\delta(\boldsymbol{x} - \boldsymbol{x}'_j), \qquad (11)$$

where $\delta(\cdot)$ is the Kronecker delta, $p_j$ is the polarity of the event $e_j$, and $\boldsymbol{x}'_j$ is the event position corrected for the motion of the camera:

$$\boldsymbol{x}'_j = \pi\left(T_{t_k, t_j}\left(Z(\boldsymbol{x}_j)\,\pi^{-1}(\boldsymbol{x}_j)\right)\right), \qquad (12)$$

where $\pi(\cdot)$ is the camera projection model, $T_{t_k, t_j}$ is the transformation of the camera pose between $t_k$ and $t_j$, and $Z(\boldsymbol{x}_j)$ is the scene depth approximated by the range sensor.

As the rotation of the damaged quadrotor in this time window dominates the linear motion, we approximate $T_{t_k, t_{k-1}}$ by a pure rotation, generated by integrating the angular rate measurements from the gyroscope. Then $T_{t_k, t_j}$ is approximated by the linear interpolation $T_{t_k, t_j} = T_{t_k, t_{k-1}}\,(t_j - t_{k-1})/(t_k - t_{k-1})$.

We then use the FAST feature detector [26] and the Lucas-Kanade tracker [27] to detect and track features. If a feature has been tracked over 3 consecutive frames, it is declared persistent and is triangulated; the corresponding landmark is added to the map. These settings are the same as in [20] and [25]. If standard frames are selected as visual input, they simply replace the synthetic event frames in the above procedure.
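As an illustration of eqs. (11)-(12), the sketch below accumulates motion-corrected events into a frame. It is a simplified stand-in for the front-end, not the released code: a pinhole model replaces $\pi(\cdot)$, the scene is assumed flat at the range-sensor depth, and the rotation-only motion model is scaled by the interpolation factor via its rotation vector:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def fractional_rotation(R, s):
    """Scale rotation matrix R by factor s in [0, 1] via its rotation vector."""
    rv = Rotation.from_matrix(R).as_rotvec()
    return Rotation.from_rotvec(s * rv).as_matrix()

def event_frame(events, R_km1_to_k, K, depth, t_k, t_km1, size=(240, 346)):
    """Eqs. (11)-(12): sum event polarities at motion-corrected pixels.
    events:      iterable of (t_j, x, y, polarity) tuples
    R_km1_to_k:  gyro-integrated rotation between frames k-1 and k
    depth:       scene depth from the range sensor (flat-scene assumption)."""
    K_inv = np.linalg.inv(K)
    frame = np.zeros(size, dtype=np.float32)
    for t_j, x, y, p in events:
        s = (t_j - t_km1) / (t_k - t_km1)            # interpolation factor
        R_j = fractional_rotation(R_km1_to_k, s)     # approximates T_{t_k, t_j}
        X = depth * (K_inv @ np.array([x, y, 1.0]))  # back-projection pi^-1
        u = K @ (R_j @ X)                            # re-projection pi
        xi, yi = int(round(u[0] / u[2])), int(round(u[1] / u[2]))
        if 0 <= yi < size[0] and 0 <= xi < size[1]:
            frame[yi, xi] += p                       # Kronecker delta of eq. (11)
    return frame
```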
2) Range-Based Landmark Triangulation:
Since the motion of the damaged quadrotor is mostly rotational, triangulating landmarks from disparity is inaccurate. For this reason, we use a downward range sensor to measure the range from the camera to the ground, where most features are detected, and we assume that all landmarks lie in the ground plane. For the $j$-th landmark, whose position in the inertial coordinate frame is defined as $\boldsymbol{p}_j$, we have

$$\lambda\,[u, v, 1]^T = \boldsymbol{K}\boldsymbol{R}_{CI}\left(\boldsymbol{p}_j - \boldsymbol{p}_c\right), \qquad p_{c,z} - p_{j,z} = \hat{h} = r\cos\theta\cos\phi, \qquad (13)$$

where $u, v$ are the observations of the landmark in the last (event) frame, $\boldsymbol{p}_c$ is the position of the camera in the inertial coordinate frame, $\boldsymbol{K}$ is the camera calibration matrix, $\boldsymbol{R}_{CI}$ is the rotation matrix between the camera frame and the inertial frame, $\hat{h}$ is the height estimate, and $r$ is the range measurement from the range sensor. As $\boldsymbol{R}_{CI}$ and $\boldsymbol{p}_c$ are obtained from the back-end optimization, and $\phi$ and $\theta$ are estimated by the complementary filter, we can solve (13) for $\boldsymbol{p}_j$. The position estimates of the triangulated landmarks are then used as initial guesses for the nonlinear optimization problem in the back-end.
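Solving (13) in closed form takes one line once the bearing is rotated into the inertial frame; the scale $\lambda$ is fixed by the height constraint. A minimal sketch (our naming, not the paper's code):

```python
import numpy as np

def triangulate_ground_landmark(uv, K, R_CI, p_c, h_hat):
    """Eq. (13): landmark position p_j from a single observation (u, v),
    assuming the landmark lies on the ground plane p_{c,z} - p_{j,z} = h_hat."""
    ray = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))  # K^-1 [u, v, 1]^T
    d = R_CI.T @ ray                  # bearing expressed in the inertial frame
    lam = -h_hat / d[2]               # scale lambda from the height constraint
    return p_c + lam * d              # p_j
```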
3) Back-end:
We use a keyframe-based fixed-lag smoother inspired by [17] to estimate the pose of the camera. The optimization cost function is formulated as

$$J(\boldsymbol{X}) = \sum_{k=1}^{K}\sum_{j \in J(k)} \|\boldsymbol{e}_v^{j,k}\|^2_{\boldsymbol{W}_v^{j,k}} + \sum_{k=1}^{K-1} \|\boldsymbol{e}_i^{k}\|^2_{\boldsymbol{W}_i^{k}} + \sum_{k=1}^{K}\sum_{j \in J(k)} w_h^{j,k}\left(p_{c,z}^k - \hat{h} - p_{j,z}\right)^2, \qquad (14)$$

where $k$ is the frame index, $K$ denotes the number of frames in the sliding window, $j$ is the landmark index, and $J(k)$ is the set of all landmarks visible from frame $k$. The first and second terms in (14) represent the reprojection error and the inertial error, respectively. The optimization variables $\boldsymbol{X}$ are the states of the $K$ frames in the sliding window, which include position, velocity, orientation, and IMU biases.

Differently from the optimization-based back-end in [17], we add the third term to the cost function (14), where $w_h^{j,k}$ is its weight. It forces the vertical difference between the position of the camera and the observed landmark to equal the height estimate $\hat{h}$ from the range sensor. In this way, we add scale information from the range sensor, since the IMU-based scale information becomes less reliable in this task dominated by fast rotational motion.

The optimization is run whenever a new (event) frame is generated. To improve computational efficiency, we do not perform marginalization and simply discard the states and measurements outside the sliding window. We use the Google Ceres Solver [28] to solve this nonlinear least-squares problem.
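To illustrate how the extra range term enters the optimization, the toy sketch below solves a deliberately simplified version of (14) with a generic least-squares solver: only the camera heights are estimated, the visual and inertial terms are replaced by placeholder residuals, and the landmarks are fixed at $p_{j,z} = 0$. It shows the structure of the added term only, not the Ceres-based back-end:

```python
import numpy as np
from scipy.optimize import least_squares

K_frames = 5
h_hat, w_h = 1.5, 10.0                        # range-sensor height and weight
rng = np.random.default_rng(0)
z_meas = h_hat + 0.05 * rng.standard_normal(K_frames)  # stand-in measurements

def residuals(z):                             # z: camera heights p_{c,z}^k
    r_visual = z - z_meas                     # placeholder visual term
    r_inertial = np.diff(z)                   # placeholder inertial term
    r_height = np.sqrt(w_h) * (z - h_hat - 0.0)  # third term of eq. (14)
    return np.concatenate([r_visual, r_inertial, r_height])

sol = least_squares(residuals, x0=np.ones(K_frames))
print(sol.x)  # heights pulled toward the range-derived h_hat
```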
B. Rotation-Corrected Complementary Filter

The complementary filter is widely used to provide attitude estimates of a quadrotor. Its major advantage is that pitch and roll are estimated only from IMU measurements; it is thus robust to failures of the other sensors. The accelerometer measurement is expressed as

$$\boldsymbol{a}_\text{IMU} = \boldsymbol{a}^B - \boldsymbol{g}^B + \boldsymbol{\omega}^B \times (\boldsymbol{\omega}^B \times \boldsymbol{d}^B_\text{ba}) + \dot{\boldsymbol{\omega}}^B \times \boldsymbol{d}^B_\text{ba} + \boldsymbol{w}_\text{acc} + \boldsymbol{b}_\text{acc}, \qquad (15)$$

where $\boldsymbol{a}^B$ is the acceleration of the center of gravity, $\boldsymbol{w}_\text{acc}$ and $\boldsymbol{b}_\text{acc}$ are the measurement noise and bias respectively, and $\boldsymbol{d}^B_\text{ba}$ is the displacement from the center of gravity to the accelerometer.

A standard complementary filter assumes that, over the long term, the accelerometer measures the negative gravitational vector expressed in the body frame; in other words, it neglects all terms in (15) except $-\boldsymbol{g}^B$. This works well when a quadrotor spends significant periods of time in hover or slow forward flight [29]. However, when the quadrotor spins fast at a near-constant body rate $\boldsymbol{\omega}^B$ and enters the relaxed-hovering condition [1], $\boldsymbol{a}^B$ becomes a constant non-zero centripetal acceleration, and the displacement-induced term $\boldsymbol{\omega}^B \times (\boldsymbol{\omega}^B \times \boldsymbol{d}^B_\text{ba})$ also becomes non-negligible. These two terms need to be considered in the filter, yielding the rotation-corrected complementary filter.

We estimate the acceleration $\boldsymbol{a}^B$ by assuming that the quadrotor stays in the relaxed-hovering condition over the long term. From the geometry in Fig. 4, we obtain the estimate

$$\hat{\boldsymbol{a}}^B = \left[-\frac{\omega_x}{\|\boldsymbol{\omega}\|}g,\; -\frac{\omega_y}{\|\boldsymbol{\omega}\|}g,\; g\tan\alpha\sin\alpha\right]^T, \qquad (16)$$

where $\alpha = \arccos(\cos\theta\cos\phi)$. Note that (16) is valid only in the relaxed-hovering condition, where the quadrotor spins fast. Therefore, we still assume a zero $\hat{\boldsymbol{a}}^B$ when $|\omega_z| < \bar{\omega}$, where $\bar{\omega}$ is a positive threshold. We use $\bar{\omega} = 10$ rad/s in our experiments.
Knowing $\hat{\boldsymbol{a}}^B$, we can subsequently estimate the displacement $\boldsymbol{d}_\text{ba}$ from (15) with a least-squares estimator, using real flight data. Fig. 5 presents the diagram of the proposed complementary filter, where the rotation-correction block is marked in yellow. Since no magnetometer is used on our platform, we adopt the yaw angle from the VIO to fuse with the gyroscope measurement and obtain the yaw estimate. Finally, we estimate the full attitude from this rotation-corrected complementary filter.
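A minimal sketch of the correction step (our simplification of eqs. (15)-(16); the angular-acceleration, noise, and bias terms are neglected, as in the text):

```python
import numpy as np

def corrected_accel_reference(omega, phi, theta, g=9.81, omega_bar=10.0):
    """Eq. (16): expected c.g. acceleration a^B in relaxed hovering; below
    the spin threshold the standard zero-acceleration assumption is kept."""
    if abs(omega[2]) < omega_bar:
        return np.zeros(3)
    alpha = np.arccos(np.cos(theta) * np.cos(phi))
    w = np.linalg.norm(omega)
    return np.array([-omega[0] / w * g,
                     -omega[1] / w * g,
                     g * np.tan(alpha) * np.sin(alpha)])

def gravity_measurement(a_imu, omega, d_ba, phi, theta):
    """Invert eq. (15) (neglecting the angular-acceleration, noise, and bias
    terms) to recover -g^B, the quantity the tilt update actually needs."""
    a_hat = corrected_accel_reference(omega, phi, theta)
    lever = np.cross(omega, np.cross(omega, d_ba))
    return a_imu - a_hat - lever              # approx. -g^B
```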
Fig. 4: In the relaxed-hovering condition, the projection of the acceleration on the body frame ($\boldsymbol{a}^B$) is constant.

Fig. 5: Diagram of the rotation-corrected complementary filter.

Fig. 6: Photo of a quadrotor flying with three rotors, where an event camera is used for state estimation.

IV. Experiments

A. Hardware Description

As Fig. 6 shows, the tested quadrotor is built with a carbon-fiber frame and 3D-printed parts. It is powered by four 2306-2400KV motors with 5-inch propellers. The state estimation and control algorithms run on an NVIDIA Jetson TX2, which contains a quad-core 2.0 GHz CPU and a dual-core 2.5 GHz CPU running Ubuntu 18.04 and ROS [30]. The motor thrust commands from the control algorithm are sent to the motors through a Radix FC flight control board, which runs custom firmware that also sends the IMU measurements to the TX2 at 200 Hz. We use a TeraRanger One, a LED time-of-flight range sensor, to measure the distance to the ground. Both the standard and the event camera face downward, and both use a 110° field-of-view lens. For the event camera, we use an iniVation DAVIS-346 with a resolution of 346 × 240 pixels. For the standard camera, we use an mvBlueFOX-200w [31] with a resolution of 376 × 240 pixels, chosen intentionally to be close to that of the event camera to enable a fair comparison. It is worth noting that the maximum gain (12 dB) of the mvBlueFOX-200w camera is used to minimize the required exposure time, which we found essential for reducing motion blur in the standard frames.
Fig. 7: Top two plots: pitch and roll angle estimates of the proposed rotation-corrected complementary filter and of a standard complementary filter, compared with the ground truth measured by the motion capture system. Bottom plot: comparison between the gravity projection on the body frame and the accelerometer measurement.
B. Validation of the Rotation Corrected Complementary Filter
We use flight data to evaluate the performance of the proposed complementary filter against a standard filter that does not account for the rotational motion. Fig. 7 shows that the roll error of the standard complementary filter becomes large as the quadrotor starts spinning, at around 5 s. By contrast, the rotation-corrected complementary filter estimates the pitch and roll angles well despite the spinning rate of over 20 rad/s. The bottom plot of Fig. 7 explains the large error of the roll estimates: since the standard complementary filter assumes that the accelerometer measures the negative gravitational vector, the measured $a_{\text{IMU},x}$ should align with $-g^B_x$. This assumption becomes invalid when a considerable centripetal acceleration appears, which introduces a significant bias in the accelerometer measurement.

C. Closed-loop Flight Validation
We conducted setpoint tracking tasks in closed-loop flight experiments to validate the entire pipeline, including the vision-based state estimator and the fault-tolerant flight controller. During the test, the quadrotor took off with four rotors, and the VIO was initialized while the drone was hovering. Then we switched off one rotor, and the fault-tolerant flight controller took over control of the quadrotor. As shown in Fig. 8a, nine setpoints forming a square were given to the flight controller in steps of 5 seconds; the damaged quadrotor then flew a square trajectory by tracking these setpoints.

Two tests were performed, in which standard frames and events, respectively, were used as visual input to the state estimator. Fig. 8a shows the closed-loop flight result using standard frames, presenting the estimated trajectory from the VIO in the inertial frame, the ground-truth trajectory measured by the motion capture system, and the reference trajectory. The difference between the real trajectory and the reference is caused by the position estimation error and the controller tracking error. Although the tracking performance is not perfect, it is sufficient for controlling a damaged quadrotor to a safe area for landing. Similarly, Fig. 8b shows another test where events are used as visual input to the state estimator. Both tests were conducted in a well-illuminated environment (500 lux), which corresponds to a bright indoor office lighting condition [32].

Fig. 8: Top view of the closed-loop flight trajectories: ground truth from the motion capture system (blue), estimated trajectory (green), and reference trajectory (red) including 9 setpoints. (a) Closed-loop flight trajectory using standard frames. (b) Closed-loop flight trajectory using events.

TABLE I: Position tracking accuracy (RMSE) with state estimators using standard frames or events in different lighting conditions. For standard frames, the camera gain is set to 12 dB. Env lux: environment illuminance. Cam lux: illuminance at the camera lens. The left column shows photos of the test environment.
D. Comparison between Frames and Events
In this section, we test the algorithm under different environment lighting conditions and compare frames and events as visual input. In these tests, we let the damaged quadrotor track a single setpoint (i.e., hover), and we measure the root mean square error (RMSE) of the closed-loop position tracking to evaluate the performance of the entire pipeline. The exposure time of the standard camera is changed according to the environment brightness to capture frames with sufficient intensity for detecting and tracking features. For the event camera, we observe that the number of events generated in a fixed time window is smaller in a darker environment; hence, we accordingly reduce the number of events needed to construct an event frame in low-light conditions.

Table I reports the position tracking RMSE when using standard frames or events in different lighting conditions. When the environment illuminance is around 500 lux, both frames and events accomplish the task with similar tracking error. However, with standard frames as visual input, the tracking error doubles as the illuminance drops to 100 lux, and the damaged quadrotor crashes immediately when the illuminance falls below 100 lux. By contrast, with events as visual input, the damaged quadrotor can still fly when the illuminance is decreased to 10 lux. These comparisons clearly show the advantage of using an event camera for state estimation in low-light conditions.

Fig. 9 presents the standard frames and the event frames in a bright (500 lux) and a dark (50 lux) indoor environment. When the environment illuminance is 50 lux, a relatively long exposure time (12 ms) of the standard camera is required. Hence, the standard frames experience significant motion blur (Fig. 9c) owing to the fast rotational motion caused by the motor failure. Although more than 20 features are detected in this blurry image, few of them are successfully tracked (i.e., become persistent features) and added to the map, causing failure of the standard-frame-based VIO. By contrast, the event frames are sharp enough for feature detection and tracking in both bright and dark conditions.

Fig. 9: Standard and event frames, including features (light blue circles: detected features; green dots: persistent features), in a bright (500 lux) and a dark (50 lux) environment. (a) Standard frame in the bright environment with 2 ms exposure time. (b) Event frame in the bright environment. (c) Standard frame in the dark environment with 12 ms exposure time. (d) Event frame in the dark environment.

Fig. 10: Outdoor flight to validate the proposed fault-tolerant flight solution. Left: snapshot of the quadrotor flying with only three propellers. Right: top view of the forward-backward flight trajectory given by the state estimator.
E. Outdoor Flight Test
Finally, we validate the proposed algorithm in an outdoor environment with natural light and textures. Fig. 10 shows a snapshot of the flying quadrotor and the top view of the flight trajectory from the state estimator. The quadrotor was commanded to fly forward for 4 meters and then return to the origin. The entire flight can be found in the supplementary video. In this flight, the environment illuminance was 2000 lux; according to Table I, both standard frames and events are reliable in such bright conditions. Hence, a standard camera was used in this flight.

V. Conclusions

In this work, to the best of our knowledge, we achieved the first autonomous flight of a quadrotor despite the loss of a single rotor, using only onboard sensors. A new state estimation pipeline was proposed, including a rotation-corrected complementary filter and a VIO algorithm aided by a range sensor. Despite the fast spinning motion of the damaged quadrotor, we demonstrated that the proposed method provides reliable state estimates to perform hovering and setpoint-tracking tasks with only onboard sensors.

Comparisons were made between different visual inputs to the proposed algorithm: standard frames and events. In a well-illuminated environment, we demonstrated that both forms of visual input provide sufficiently accurate state estimates. However, in a relatively low-light environment with illuminance lower than 100 lux, the standard frames were affected by significant motion blur due to the fast rotation and the long exposure time required. By contrast, the event-based algorithm withstood closed-loop tests in a much darker environment (10 lux).

Finally, we conducted outdoor flight tests to validate the proposed method in realistic conditions, with natural light and texture. We believe that this work can significantly improve quadrotor flight safety in GPS-denied or GPS-degraded environments.

Acknowledgment

We appreciate valuable discussions and suggestions from Zichao Zhang, Henri Rebecq, and Philipp Föhn. We also appreciate the help from Thomas Längle, Manuel Sutter, Roberto Tazzari, Tobias Büchli, Yunlong Song, Daniel Gehrig, and Elia Kaufmann.

References

[1] M. W. Mueller and R. D'Andrea, "Relaxed hover solutions for multicopters: Application to algorithmic redundancy and novel vehicles," Int. J. Robot. Research, vol. 35, no. 8, pp. 873–889, 2016.

[2] J. Stephan, L. Schmitt, and W. Fichter, "Linear parameter-varying control for quadrotors in case of complete actuator loss," Journal of Guidance, Control, and Dynamics, vol. 41, no. 10, pp. 2232–2246, 2018.
[3] S. Sun, L. Sijbers, X. Wang, and C. de Visser, "High-speed flight of quadrotor despite loss of single rotor," IEEE Robot. Autom. Lett. (RA-L), vol. 3, no. 4, pp. 3201–3207, 2018.

[4] S. Sun, X. Wang, Q. Chu, and C. de Visser, "Incremental nonlinear fault-tolerant control of a quadrotor with complete loss of two opposing rotors," IEEE Trans. Robot., pp. 1–15, 2020.

[5] A. Freddi, A. Lanzon, and S. Longhi, "A feedback linearization approach to fault tolerance in quadrotor vehicles," IFAC Proceedings Volumes, vol. 44, no. 1, pp. 5413–5418, 2011.

[6] G. Gallego, T. Delbruck, G. M. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, and D. Scaramuzza, "Event-based vision: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2020.

[7] V. Lippiello, F. Ruggiero, and D. Serra, "Emergency landing for a quadrotor in case of a propeller failure: A backstepping approach," in IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), pp. 4782–4788, 2014.

[8] V. Lippiello, F. Ruggiero, and D. Serra, "Emergency landing for a quadrotor in case of a propeller failure: A PID based approach," in IEEE Intern. Symp. on Safety, Security, and Rescue Robot., pp. 4782–4788, 2014.

[9] A. Lanzon, A. Freddi, and S. Longhi, "Flight control of a quadrotor vehicle subsequent to a rotor failure," Journal of Guidance, Control, and Dynamics, vol. 37, no. 2, pp. 580–591, 2014.

[10] P. Lu and E.-J. van Kampen, "Active fault-tolerant control for quadrotors subjected to a complete rotor failure," in IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), 2015.

[11] M. W. Mueller and R. D'Andrea, "Stability and control of a quadrocopter despite the complete loss of one, two, or three propellers," in IEEE Int. Conf. Robot. Autom. (ICRA), 2014.

[12] G. Huang, "Visual-inertial navigation: A concise review," in IEEE Int. Conf. Robot. Autom. (ICRA), 2019.

[13] D. Scaramuzza and Z. Zhang, "Visual-inertial odometry of aerial robots," Encyclopedia of Robotics, 2019.

[14] J. Delmerico and D. Scaramuzza, "A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots," in IEEE Int. Conf. Robot. Autom. (ICRA), 2018.

[15] H. Strasdat, J. M. Montiel, and A. J. Davison, "Visual SLAM: Why filter?," Image and Vision Computing, vol. 30, no. 2, pp. 65–77, 2012.

[16] J. Delmerico and D. Scaramuzza, "A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots," in IEEE Int. Conf. Robot. Autom. (ICRA), pp. 2502–2509, 2018.

[17] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, "Keyframe-based visual-inertial odometry using nonlinear optimization," Int. J. Robot. Research, vol. 34, no. 3, pp. 314–334, 2015.

[18] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Trans. Robot., vol. 34, no. 4, pp. 1004–1020, 2018.

[19] H. Rebecq, T. Horstschäfer, G. Gallego, and D. Scaramuzza, "EVO: A geometric approach to event-based 6-DOF parallel tracking and mapping in real time," IEEE Robot. Autom. Lett. (RA-L), vol. 2, no. 2, pp. 593–600, 2016.

[20] A. R. Vidal, H. Rebecq, T. Horstschäfer, and D. Scaramuzza, "Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios," IEEE Robot. Autom. Lett. (RA-L), vol. 3, no. 2, pp. 994–1001, 2018.

[21] N. A. Chaturvedi, A. K. Sanyal, and N. H. McClamroch, "Rigid-body attitude control," IEEE Control Systems Magazine, vol. 31, no. 3, pp. 30–51, 2011.

[22] R. Da Costa, Q. Chu, and J. Mulder, "Reentry flight controller design using nonlinear dynamic inversion," Journal of Spacecraft and Rockets, vol. 40, no. 1, pp. 64–71, 2003.

[23] O. Moseler and R. Isermann, "Application of model-based fault detection to a brushless DC motor," IEEE Transactions on Industrial Electronics, vol. 47, no. 5, pp. 1015–1020, 2000.

[24] M. H. Amoozgar, A. Chamseddine, and Y. Zhang, "Experimental test of a two-stage Kalman filter for actuator fault detection and diagnosis of an unmanned quadrotor helicopter," Journal of Intelligent & Robotic Systems, vol. 70, no. 1-4, pp. 107–117, 2013.

[25] H. Rebecq, T. Horstschaefer, and D. Scaramuzza, "Real-time visual-inertial odometry for event cameras using keyframe-based nonlinear optimization," British Machine Vision Conf. (BMVC), 2017.

[26] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in European Conf. on Comp. Vis. (ECCV), pp. 430–443, Springer, 2006.

[27] B. D. Lucas, T. Kanade, et al., "An iterative image registration technique with an application to stereo vision," 1981.

[28] S. Agarwal et al., "Ceres Solver." http://ceres-solver.org/. Accessed: 2020-10-14.

[29] R. Mahony, V. Kumar, and P. Corke, "Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor," IEEE Robotics and Automation Magazine, vol. 19, no. 3, pp. 20–32, 2012.

[30] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: An open-source robot operating system," in