Self-calibration of a differential wheeled robot using only a gyroscope and a distance sensor
arXiv [cs.RO]
Imperial College London, Department of Computing
Author:
Carlos Garcia-Saura
Supervisor:
Prof. Andrew J. Davison
Submitted in partial fulfillment of the requirements for the MSc degree in Computing Science (Specialism in A.I.) of Imperial College London, September 2015

Abstract
Research in mobile robotics often demands platforms that have an adequate balance between cost and reliability. In the case of terrestrial robots, one of the available options is the GNBot, an open-hardware project intended for the evaluation of swarm search strategies. The lack of basic odometry sensors such as wheel encoders had so far hindered the implementation of an accurate high-level controller on this platform. Thus, the aim of this thesis is to improve motion control in the GNBot by incorporating a gyroscope whilst maintaining the requirement of no wheel encoders. Among the problems that have been tackled are: accurate in-place rotations, minimal drift during linear motions, and arc-performing functionality. Additionally, the resulting controller is calibrated autonomously using both the gyroscope module and the infrared rangefinder on board each robot, greatly simplifying the calibration of large swarms. The report first explains the design decisions that were made in order to implement the self-calibration routine, and then evaluates the performance of the new motion controller by means of off-line video tracking. The motion accuracy of the new controller is also compared with the previously existing solution in an odor search experiment.
Keywords:
Mobile robotics, differential wheeled robot, motion control, self-calibration, continuous-rotation servomotor, PID auto-tuning, multimodal sensing, MEMS gyroscope, IR distance sensor, open-source

Acknowledgements
First, I must thank Professor Andrew J. Davison at Imperial College London for his patient guidance and for pointing this project in the right direction. I found the Robotics course very inspiring, so I also want to thank the rest of the coordinators and course assistants.

I am also thankful to Professors Pablo Varona and Francisco de Borja Rodríguez at Universidad Autónoma de Madrid for allowing me to use their infrastructure to perform the experiments. My friends Alejandro, Irene and Aarón also deserve a mention for making the research time in the lab (and outside the lab!) much more enjoyable.

From the great people I have met this year at Imperial College London, I want to mention in particular Hadeel, Bob, Matthew, Stefan, Marta, Doniyor, Pawel, Tony, Siwook, Yui, and Win for always being there, for the good and the bad, no matter how complicated the assignment or how big the cake! :-) I must also mention the IC Robotics Society and IC Advanced Hackspace, in particular Larissa and Audrey, for helping a lot with the 3D printers and laser cutters. Also, my year in London wouldn't have been the same without yOPERO, Erin, Charlaine, Christian and Cliona.

I also want to thank the whole open-source community (especially GNU/Linux, Python, Numpy, Matplotlib, OpenCV, Arduino, RepRap, etc.) for making it possible to perform this project without requiring any proprietary software or machinery.

Finally, big thanks go to my parents for their continuous loving support, and also thanks to Rodrigo, FJ and Sandra for tolerating my mood swings while working on this project during our family holidays. ¡Muchas gracias a todos! (Many thanks to everyone!)

– Carlos
Chapter 1: Introduction

The starting point of this project is the
GNBot swarm robot platform that was presented in “Design principles for cooperative robots with uncertainty-aware and resource-wise adaptive behavior” [1] and “Cooperative strategies for the detection and localization of odorants with robots and artificial noses” [2]. Figure 1.1 describes the most relevant characteristics of these robots.
Figure 1.1:
One of the robots from the GNBot swarm used for the project.
The electronics are based on the Arduino MEGA board. A custom shield contains all of the multimodal sensors, which provide the robot with gas sensing capabilities (TGS-2900), as well as distance sensing (GP2Y0A21YK0F), temperature & humidity sensing (DHT11), and light intensity sensing.
This project additionally incorporated a gyroscope module (MPU6050). The main actuators are two continuous-rotation servomotors, and each robot also has a multicolor RGB LED and a piezoelectric speaker. Finally, the wireless communication layer is based on ZigBee, and the green/yellow top markers allow for external video tracking of the swarm. From [1, 2].
The GNBot is a differential wheeled robot, which means that it has two active wheels sharing the same axis, each actuated independently. There is also a passive, free-turning ball caster that acts as a third stand. If the desired trajectory is a straight path, the velocities are set to equal magnitude; if a rotation is required instead, then different velocities can be applied to each motor.

Using this setup it is possible to estimate the motion of the robot in the XY plane given its starting pose (P_x, P_y, α), the rotational velocity of each motor (ω_L, ω_R), the diameter of the wheels (D_L, D_R) and their separation (d). This estimation process is known as dead reckoning.

The accuracy of dead reckoning relies on how dependable the measurements used in the calculations are. These include the wheel separation and diameters but, most importantly, the actual rotational velocities of each motor. For instance, small deviations from the theoretical velocities can turn a desirably linear trajectory into an arc. An example of this issue is displayed in Figure 1.2.

Figure 1.2:
Example of an odor search strategy based on Lévy walks, as performed by the original robot controller.
The search is defined at a high level as random in-place rotations combined with linear trajectories whose lengths vary according to a heavy-tailed probability distribution.
It can be appreciated that the segments that were supposed to be linear are actually performed as arcs (there is drift in yaw). In this experiment, the target odor source (red marker) took 12 minutes to be found. The ground-truth trajectory was recorded with a webcam, after perspective correction and color-marker tracking with the OpenCV software library. (Dead reckoning: https://en.wikipedia.org/wiki/Dead_reckoning)

This thesis has tackled the improvement of those low-level control routines in order to minimize motion drift, as well as the implementation of a distance-based abstraction layer that facilitates the use of the GNBot platform in practical applications. It also explores the automatic calibration of the robots using only on-board sensors (a gyroscope and a distance sensor).

The following two sections are an overview of the project's approach towards the improvement of the motion controller; Chapter 2 provides an in-depth explanation of the self-calibration algorithm; Chapter 3 evaluates the resulting controller; finally, Chapter 4 analyses the outcome of the project and suggests future research paths.

Undesired drift in the estimation of position, very characteristic when using dead reckoning in mobile robotics, can be reduced with the incorporation of basic odometry such as rotational encoders. Wheel encoders are a very common choice given their simplicity and reduced cost, but they do have some drawbacks:

• Incorporating wheel encoders into a previously existing design is often nontrivial, as the process involves hardware modifications near critical moving parts.

• Electrical, optical or magnetic wheel encoders can be susceptible to dirt; placing them near the wheels may make periodic maintenance necessary.

• Most importantly, wheel encoders cannot easily detect whether there is any slipping between the wheel and the ground surface.

For these reasons the incorporation of wheel encoders in the GNBot has been dismissed.
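The dead-reckoning estimate mentioned above can be made concrete with a single Euler integration step of the standard differential-drive model. The sketch below is illustrative (function and variable names are not from the GNBot codebase, and the heading α follows the usual counter-clockwise-positive convention):

```python
import math

def dead_reckoning_step(x, y, alpha, omega_L, omega_R, D_L, D_R, d, dt):
    """One Euler step of differential-drive dead reckoning.

    (x, y, alpha): current pose estimate (alpha in radians, CCW-positive).
    omega_L, omega_R: wheel rotational velocities in rad/s.
    D_L, D_R: wheel diameters; d: wheel separation (same length unit as x, y).
    """
    v_L = omega_L * D_L / 2.0        # ground speed of each wheel
    v_R = omega_R * D_R / 2.0
    v = (v_L + v_R) / 2.0            # forward speed of the robot centre
    omega = (v_R - v_L) / d          # rotation rate of the body
    return (x + v * math.cos(alpha) * dt,
            y + v * math.sin(alpha) * dt,
            alpha + omega * dt)
```

With identical wheels and equal wheel speeds the pose advances along a straight line; a small mismatch between D_L and D_R turns the very same command into an arc, which is exactly the drift mechanism visible in Figure 1.2.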
Instead, the first trials towards the reduction of yaw drift included the use of an electronic compass. The goal was to obtain a global, unbiased measure of heading (yaw) by measuring the earth's magnetic field. Unfortunately, the compass was deemed unusable for indoor environments during the first trials of the GNBot[2], since the magnetometer measurements are substantially distorted by nearby metals (i.e. buried cables) commonly present indoors.

Another option for odometry is the gyroscope, which is the approach studied in this project (see Fig. 1.3). Electronic gyroscopes are a form of inertial sensor that can measure relative rotations in any orientation, without requiring interactions with external moving parts. As opposed to wheel encoders, the physical incorporation of a gyroscope into the GNBot is quite straightforward, and it also has the advantage of easily detecting wheel slipping.

Odometry:
Use of sensory data to improve motion estimates. Rotational encoder:
A sensor that can measure wheel rotations accurately and in real time.

Figure 1.3:
Detail of the gyroscope module installed in the GNBot.
The selected board is based on the MPU6050 module, which is Arduino-compatible. The module fits on the I²C socket of the GNBot without any additional modification. An orange wire was soldered to the interrupt pin, but it was not needed in the end, as polling was used to read values from the module.

The use of inertial sensors for position estimation has been a matter of research for many decades. The last few years in particular have seen the development of smartphones, tablets, intelligent cameras and other devices in need of inertial sensors, which has greatly boosted the industry. The technology has achieved high miniaturization, high accuracy and low power consumption whilst substantially reducing costs, yielding the excellent inertial sensors that are now widely available on the market.

In robotics, Inertial Measurement Units (IMUs) are quite often employed for navigation in different environments. For instance, the combination of IMUs with GPS allows for remarkably accurate 3D localization in aerial robotics[3]. These techniques have also been demonstrated in ground robots without GPS[4, 5], as well as in underwater robots without encoders[6, 7, 8]. More recently, 3D position tracking with inertial sensors has also been employed to correct the effect of the rolling shutter of RGB[9] and RGB-D[10] camera sensors.

Many authors use
Kalman filters in their motion controllers. Kalman filtering is a probabilistic method for sensor fusion, which means it can be used to efficiently combine measurements from the inertial sensor with other forms of odometry such as wheel encoders, a compass, or even barometers; this way it is possible to achieve very robust motion controllers with minimal drift[4, 9, 11, 12]. In fact, most commercially-available IMUs, such as the one selected for this project, already contain internal Kalman filters that can integrate measurements from a gyroscope, an accelerometer or a compass. (https://en.wikipedia.org/wiki/Kalman_filter)

For the final evaluation of the positioning accuracy of the controller, the project took advice from work by J. Borenstein et al.[11, 13], where the robots were programmed to describe square trajectories that were first logged and then compared against ground truth.

At the start of this project, the motion control of the GNBot robots was speed-based and open-loop (with no wheel encoders). This resulted in very inaccurate motions that drifted rapidly from the desired path. For example, when a robot was commanded to describe a straight trajectory, an arc path would be observed instead. In-place rotations were also quite inaccurate.

Thus the goal of this project has been to improve the motion accuracy of these robots and to provide a high-level distance-based controller. In particular, the following matters have been tackled:

• Incorporation of an inertial sensor (MPU6050) into the GNBot design, as well as the implementation of low-level functionality that reads yaw, pitch and roll from the gyroscope module with minimal drift.

• Analysis of the servomotor response curves. Since there are no wheel encoders, the rotations were tracked using the gyroscope, by independently rotating each wheel over the entire range of velocities.
• Implementation of a PID controller for accurate yaw regulation; additionally, the constants are auto-calibrated with a process based on the Ziegler-Nichols method.

• Analysis of the infrared sensor response curve, and its linearisation by curve fitting.

• Analysis of the nonlinear effect of the PID yaw controller on the robot's linear velocity over the whole input range.

• Integration of the results of these analyses into a high-level distance-based motion controller that supports linear motions and arcs. Additionally, the calibration of each robot is performed autonomously, using only on-board sensors (the gyroscope and the IR rangefinder) and requiring minimal user interaction.

• Implementation of an off-line video tracking method capable of reliably recording ground-truth robot paths. The process involved perspective & distance correction using the OpenCV library.

• Evaluation of the new controller by making the robots perform lines, circles and squares of known dimensions; the system was also compared against the previously existing solution in an odor search task based on a wall-bounce strategy.

The new motion controller has a linear drift on the order of cm/m in the low velocity settings and around cm/m at higher velocities. The rotational drift is on the order of deg/min when the robot is moving.

In summary, this project has provided the high-level functionality needed in order to achieve accurate control over the GNBot robots.
The self-calibrating nature of the approach also facilitates its use in large robot swarms.

The outcome of the project is open-source (Attribution-ShareAlike 4.0 International, http://creativecommons.org/licenses/by-sa/4.0/) and can be accessed in the following GitHub repository: https://github.com/carlosgs/GNBot

Chapter 2: The self-calibration algorithm

The original controller on board the GNBot was velocity-based and its input units were arbitrary: it did not have an internal mapping between the motor inputs and real-world units (i.e. cm/s). It also lacked methods for basic calibration, in part due to the absence of any form of odometry. Altogether, these facts led to large amounts of drift. Far from being a constant that could have been compensated by basic calibration, drift varied with the velocity of the robots (not only in magnitude but also in direction), so straightforward calibration was not an option (see Figure 2.1). This project has tackled the problem by incorporating a gyroscope for odometry. The advantages of this approach in contrast with the use of wheel encoders were already discussed in Section 1.1.

The selected
Inertial Measurement Unit (IMU) is the MPU6050 chip, which is based on MEMS technology (Micro-Electro-Mechanical Systems). It is commonly available as an Arduino-compatible break-out board that conveniently fits on the I²C socket of the GNBot without any additional modifications (c.f. Figure 1.3).

The gyroscope module has the same I²C pinout as the magnetometer that was originally present in the robot, but the routines that handle the two modules are very different. In fact, the example code for the inertial sensor (part of the I²C Device Library) depends on the use of hardware interrupts rather than polling. The use of interrupts was highly undesirable, since the main processor needs to handle time-sensitive tasks such as sensing and radio communications as well as motion control, so the gyroscope library was modified to use a polling scheme.

The MPU6050 incorporates a Digital Motion Processor (DMP) that can be used to offload computations from the robot's main processor. These computations include

Interrupt:
Signal to the processor indicating that some event needs immediate attention. Polling:
As opposed to interrupts, polling actively samples the status of an external device.

yaw, pitch and roll. In the GNBot controller the raw values are sampled and processed at Hz by the DMP co-processor. The gyroscope module also has an internal FIFO buffer that allows for reliable high-frequency data readings (up to Hz), though in this project it was deemed unnecessary since the sample rate is low (< Hz). Once the IMU had been incorporated into the GNBot, it was possible to read the yaw, pitch and roll of the robot back into the computer (see Figure 2.1).

Figure 2.1:
Initial experiments to analyse drift in the original controller.
The robot was commanded to describe a straight path by setting both motors to the same velocity during 3 seconds. The trajectory (upper panel) was recorded with a ceiling camera for three different velocities, while yaw (lower panel) was being logged using the on-board gyroscope. It can be observed that the drift is accurately tracked by the empirical yaw measurements. The input speed setting is arbitrary, since both continuous-rotation servomotors are commanded with the built-in
Servo.write([degrees]) functions from the Arduino IDE, which do not have a direct mapping to real-world velocities.

FIFO: “First In First Out” queuing policy.

At this point, a basic proportional yaw controller was implemented in order to demonstrate the viability of the system. The controller could now successfully correct the direction of the robot so that it followed a straight path, even after external perturbations had been applied (i.e. rotating the robot or placing obstacles).

With those results as a motivation, the design of a high-level self-calibrating motion controller was tackled. An overview of the resulting calibration routine is shown in Figure 2.2; further sections describe the technical details in greater depth.

Figure 2.2:
Full self-calibration procedure. (a)
The robot is first placed in front of a wall and remains static while the gyro drift is compensated. (b)
The minimum motion threshold is found by gradually increasing the input of each motor until a rotation is perceived. (c)
The motors are then independently run at their maximum speed setting in order to record the corresponding real-world rotational velocity. (d)
Next, the yaw PID controller is heuristically calibrated by analysing the oscillations for different parameters. (e)
Finally, the robot uses the distance sensor to measure its approach velocity towards the wall, creating a map between the input values (rad/s) and the resulting real-world linear velocity (cm/s). Technical details for steps (a), (b & c), (d) and (e) can be found in the corresponding sections below.

Most electronic gyroscopes calculate rotational magnitudes by integrating measurements from rotational accelerometers over time. Ideally, this integration process would consistently yield the same outputs when a sequence of rotations is applied; in practice, rotational accelerometers have inaccuracies (i.e. bias, sensitivity limits, variability with temperature, etc.) that cause drift in the integration process. This would imply, for instance, that the yaw measurement of a static robot would erroneously vary over time.

Gyroscope drift is often minimized with a proper calibration of the accelerometer bias, with signal filtering, or by pausing the integration process while no motion is detected. Fortunately, the DMP co-processor of the MPU6050 already implements these features very efficiently (see Fig. 2.3).

In order to measure the yaw drift rate (whose units are degrees per minute) it was necessary to differentiate the yaw measurements provided by the gyroscope (radian units). This was achieved by accurately timing distinct yaw measurements with the function millis() from Arduino (times converted to seconds):

drift rate = (yaw(t₂) − yaw(t₁)) / (t₂ − t₁) · (180 deg / π rad) · (60 s / min)   [deg/min]   (2.1)

Figure 2.3:
Analysis of gyroscope drift (with static robot).
The gyroscope has an initial transient that must be respected, during which the internal DMP (Digital Motion Processor) performs self-calibration to account for drift. The module is fully calibrated after 23 seconds of the robot remaining static, when drift falls below 0.5 deg/min (step A in Fig. 2.2). It must be noted, though, that the magnitude of this drift is not constant; it is greatly affected by the robot's motions.

This section tackles the calibration of the continuous-rotation servomotors, which are actuators that generate a rotational velocity proportional to a PWM input signal (Pulse Width Modulation). The relationship between the input and output magnitudes is arbitrary, so a real-world mapping needs to be learned first.

To better understand the calibration process it helps to be familiar with the following notation:
Figure 2.4:
Diagram of the robot’s top view, displaying the notation used.
The grey arrow points towards the front of the robot. D_L and D_R are the wheel diameters, and d is their separation. V_L and V_R are the contributions of each motor to the robot's linear velocity, and α is the yaw measurement (positive for clockwise rotations).

Most mobile robotic platforms have encoders that simplify the calibration process by measuring the real-world rotational velocities of the wheels. In the case of the GNBot there are no wheel encoders, so a different approach was needed.

The implemented method can measure the response curve of each motor independently by using only the gyroscope. This process is described in Figure 2.5.
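The per-motor measurement reduces to differentiating timed gyroscope yaw samples while a single motor runs. A minimal sketch (the helper name is illustrative, not from the GNBot firmware):

```python
def measure_rotation_rate(yaw_samples_rad, dt_s):
    """Estimate the rotation rate ω (rad/s) from evenly spaced gyroscope
    yaw samples taken while only one motor is running.

    Uses the endpoints of the recording as a simple finite difference;
    dt_s is the sampling period in seconds.
    """
    elapsed = dt_s * (len(yaw_samples_rad) - 1)
    return (yaw_samples_rad[-1] - yaw_samples_rad[0]) / elapsed
```

Repeating this for each motor over a sweep of input values yields the response curves shown later in Figure 2.6.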
Figure 2.5:
Method employed to measure ω_R, the contribution of the right motor to the robot's rotational velocity ω. A constant velocity is first applied to the right wheel (V_R = C) while the left wheel is fixed (V_L = 0). This results in a rotation of the robot around the ground-contact point of the left wheel. The gyroscope is then used to measure the rotational velocity ω_R = ω. Afterwards, the same process is repeated for the left motor. This method makes it possible to evaluate the response curve of each motor separately, without requiring wheel encoders.

The angular velocity ω, directly measured by the gyroscope, is coupled with the rotational speed of both motors through constants K_L & K_R, which account for D_L, D_R (the wheel diameters) and d (the distance between the wheels). Using this technique it was possible to record the response curve of each motor by independently performing a sweep over the full range of velocity inputs. The result is shown in Figure 2.6.

Figure 2.6:
Measured motor response curves.
The panels show the response curves of the left (red) and right (blue) motors for different wheel sizes. The input of the continuous-rotation servomotors is PWM (Pulse Width Modulation), and the output (ω) is the rotational rate of the robot around its vertical axis (the setup is described in Fig. 2.5). Each trial was run for a single motor at a time, and the sweep took around 30 seconds per motor. Three regions can be appreciated: a flat dead-zone (ω = 0 rad/s), a linear response region, and saturation (highlighted in red). For the same wheel size (middle panel) the L/R response curves are almost identical. Robots with larger left or right wheels (left & right panels) instead have visibly different L/R response curves. The calibration routine implemented in this project accounts for these differences.

Self-calibration of these curves has been implemented by means of a linear fit between the minimum and maximum velocities within the linear region. First, the robot measures the dead-zone of a motor (the minimum motion threshold) by gradually increasing its input velocity until a rotation is perceived. Then, the maximum velocity is measured by running the motor at the maximum speed setting within the linear range. Each motor contributes to ω in an additive manner (ω = ω_L + ω_R, so the two individual contributions simply add up). Knowing this fact, and using the data from Figure 2.6, it was possible to determine the input region that corresponds to a linear motion. This is when both rotational contributions cancel out so that the robot does not rotate: ω = ω_L + ω_R = 0 rad/s → ω_L = −ω_R. This region is represented in Figure 2.7.

Figure 2.7:
Region of motor input values that correspond to a linear trajectory.
The red and green data points are from robots with larger left or right wheels, respectively, while the blue data points are from a robot with identical wheels. It can be observed that each of these curves has a different slope: a steeper slope indicates that the right motor needs to rotate faster in order to compensate for a larger left wheel, and vice versa.
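The dead-zone search, the linear fit of each motor's response, and the ω_L = −ω_R pairing of Figure 2.7 can be sketched in pure Python as below. This is a hypothetical off-line version with illustrative names; the robot performs the equivalent steps on-board:

```python
def fit_motor_response(inputs, omegas, motion_eps=0.02):
    """Least-squares line omega ≈ a*u + b over the inputs that actually
    produced motion (|omega| > motion_eps), i.e. past the dead-zone."""
    pts = [(u, w) for u, w in zip(inputs, omegas) if abs(w) > motion_eps]
    n = len(pts)
    sx = sum(u for u, _ in pts)
    sy = sum(w for _, w in pts)
    sxx = sum(u * u for u, _ in pts)
    sxy = sum(u * w for u, w in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    dead_zone_end = min(u for u, _ in pts)  # first input that moved the robot
    return a, b, dead_zone_end

def matching_right_input(u_L, fit_L, fit_R):
    """Right-motor input that cancels the left motor's yaw contribution
    (omega_L + omega_R = 0), i.e. one point on the line of Fig. 2.7."""
    (a_L, b_L), (a_R, b_R) = fit_L, fit_R
    return (-(a_L * u_L + b_L) - b_R) / a_R
```

Sweeping `u_L` over the linear region and pairing each value with `matching_right_input(u_L, ...)` traces the straight-trajectory line; its slope differs between robots with mismatched wheels, as the figure shows.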
At this point a basic motor controller had been implemented, so the rotational rate (rad/s) of the robot could be accurately commanded. Closed-loop yaw control was now possible with the implementation of a PID regulator (Proportional-Integral-Derivative) that combines the motor controller with real-time gyroscope measurements (Eqn. 2.2). The error measure is calculated as e(t) = yaw_target − yaw(t).

output rotation rate(t) = K_p · e(t) + K_i · ∫₀ᵗ e(τ) dτ + K_d · de(t)/dt   [rad/s]   (2.2)

A PID controller is based on user-tunable gains (K_p, K_i & K_d) that, when properly adjusted, can minimize the overshoot of the transient response; in the case of the GNBot, the PID is responsible for achieving fast and accurate changes in yaw. This is represented in Figure 2.8.

Figure 2.8:
Response of the yaw controller to a step function (upper panel) and to asinusoid (lower panel).
The controller was commanded with time-varying yaw targets while the transitions were recorded using the gyroscope. It can be appreciated that a purely proportional controller is either too slow (cyan curve), has overshoot (blue curve), or even ripple (red curve). The auto-tuned PID (green curve) does have some overshoot as well, but it is a much better approximation to the ideal response.
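A discrete-time version of the controller in Eqn. 2.2, together with a conversion from the ultimate gain and period to PID gains via the Tyreus-Luyben rules discussed in the next section, might look as follows. This is a sketch: the class and function names are illustrative, and the TLC constants used are the commonly published ones:

```python
class YawPID:
    """Discrete form of Eqn. 2.2: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, yaw_target, yaw_measured, dt):
        error = yaw_target - yaw_measured
        self.integral += error * dt          # rectangular integration
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def tlc_gains(K_u, T_u):
    """Tyreus-Luyben gains from the ultimate gain/period, converted to
    standard form via Ki = Kp/Ti and Kd = Kp*Td (commonly published
    constants: Kp = Ku/2.2, Ti = 2.2*Tu, Td = Tu/6.3)."""
    K_p = K_u / 2.2
    return K_p, K_p / (2.2 * T_u), K_p * (T_u / 6.3)
```

In use, `K_u` and `T_u` would come from the automated oscillation experiment of Figure 2.9, and the resulting gains would be fed into `YawPID`.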
Rather than manually specifying a fixed parameter set for the PID, this project tackled the self-calibration of the controller with the popular technique proposed by Ziegler and Nichols[14]. This heuristic tuning method is performed by setting the three gains (K_p, K_i & K_d) to zero, and then increasing the proportional term K_p until it reaches the ultimate gain K_u at which the controller presents self-sustained oscillations of period T_u. Ziegler and Nichols (ZN) then provide the mapping between K_u & T_u and the three gains of the PID controller. In this project the modified tuning values proposed by Tyreus and Luyben[15] were used instead, as they increase the robustness of the controller. Table 2.1 compares the gain specifications of both tuning methods.

                         K_p       T_i       T_d
  Ziegler-Nichols (ZN)   K_u/1.7   T_u/2     T_u/8
  Tyreus-Luyben (TLC)    K_u/2.2   2.2·T_u   T_u/6.3

Table 2.1:
Prescribed PID gain values for the Ziegler-Nichols (ZN) & Tyreus-Luyben (TLC)heuristic tuning methods.
Compared to the original values proposed by Ziegler & Nichols, the TLC rules tend to reduce the oscillatory effects and improve the robustness of the yaw controller. The T_i and T_d values were converted into the standard PID form using K_i = K_p/T_i and K_d = K_p·T_d.

In order to provide a consistent PID calibration, the parameters K_u (ultimate gain) and T_u (oscillation period) needed to be measured accurately and reliably. An automated method was implemented for this purpose (see Figure 2.9).

Figure 2.9:
PID auto-tuning process: empirical measurement of the values K_u & T_u. The tuning is done as follows: first, K_i & K_d are set to zero, and K_p is set to a very large value that ensures oscillation (the value is such that the maximum motor speed is applied at 1 degree of error). Next, the proportional controller is perturbed by setting a 45-degree angle as the target yaw. The algorithm then checks whether the oscillation is self-sustained, in which case it reduces K_p by a constant factor. This process is repeated until the oscillations become attenuated; at that point K_u & T_u are finally registered (step D in Fig. 2.2).

Using the basic motion controller it was now possible to command the robot to perform linear trajectories quite accurately, but the input units were still rotational magnitudes related to yaw (ω_L & ω_R [rad/s]). The next step was to create a mapping between these inputs and the real-world linear velocities of the robot (i.e. cm/s).

First, it was necessary to have a method that could accurately measure the robot's motions. External tracking systems (i.e. VICON or a ceiling camera) are among the best solutions available for this purpose, but they were deemed too sophisticated to be incorporated into the basic calibration routine. These methods were employed for the final evaluation of the motion controller instead (Chapter 3).

Rather than requiring an external tracking system, the problem of measuring the robot's real-world velocity has been tackled with the distance sensor on board the GNBot. The sensor is the Sharp GP2Y0A21YK0F infrared rangefinder, whose voltage response curve is analysed in Figure 2.10.

Figure 2.10:
Analysis of the response curve of the infrared distance sensor.
The figure displays the output voltage of the
Sharp GP2Y0A21YK0F rangefinder and its variability with the distance between the sensor and a wall. Manufacturer specifications rate this module for distance measurements in the range 10 cm < d < 80 cm, and indeed it can be appreciated that the response outside that region is either flat (d > 80 cm) or nonlinear (d < 10 cm, highlighted in red).

The approach undertaken towards the calibration of linear motions was to use the IR distance sensor on board the GNBot to achieve accurate real-world velocity measurements. The robot would be driven towards a wall at a constant speed, and its velocity could then be calculated by differentiating distance measurements over time.

In order to achieve a fair degree of resolution in these measurements, the output of the IR sensor first needed to be calibrated by means of linearisation. The method is introduced in Figure 2.11.

Figure 2.11:
Linearisation of the IR rangefinder sensor response.
The left panel shows the response curves that were recorded for three different GP2Y0A21YK0F distance sensors (red, green and blue curves). These curves were fitted with an inverse-law decay (cyan curve) in the range of highest sensitivity. The right panel evaluates the accuracy of the fitted model, showing a maximum theoretical deviation of ≈ cm.

The fitting function in Figure 2.11 is an inverse function with two fitting parameters, K and C, calculated from two data points located at the limits of the highest-sensitivity range (d = d₁ and d = d₂) using the following equations:

v_fit(d) = K/d + C  →  K = (v(d₁) − v(d₂)) / (1/d₁ − 1/d₂),   C = v(d₁) − K/d₁   (2.3)

The calibration of the IR distance sensor is not part of the self-calibration routine, since it requires user interaction; instead, the fitted curve can be generalised to every robot that uses the GP2Y0A21YK0F sensor model.

This newly-calibrated distance-sensing functionality was verified to work within the theoretically-calculated tolerances for different wall surfaces. With this infrastructure in place, the problem of linear velocity calibration could finally be tackled. For this purpose, the GNBot was commanded to move towards a wall at different velocities while yaw was corrected by the closed-loop PID controller. The real-world linear approach velocity (v [cm/s]) was then recorded for multiple input velocity settings (ω_c [rad/s]), yielding the results shown in Figure 2.12.

Figure 2.12:
Linear velocity calibration for two different ground surfaces.
The panels show how the real-world velocity of the robot varies over its input range, for both backward and forward motions (red and blue curves) and for two different ground surfaces (left and right panels). The green and magenta dashed lines represent a linear fitting of the velocity response for each material. It can be appreciated that the speed of the robot over foam (right panel) is lower for the same input values; this is due to friction effects. Friction also causes an asymmetry in the velocity responses for backward and forward motions, because the robot is not perfectly symmetric. The nonlinear region is caused by the PID operating the motors outside their linear range (c.f. Fig. 2.6); this effect could be minimised with velocity-dependent PID gains.
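This calibration chain, the two-point inverse fit of Equation 2.3 followed by differentiating distance readings into velocity, can be sketched in Python. The function names and calibration values below are hypothetical examples, not values from the GNBot firmware:

```python
# Sketch of the two-point inverse fit of Eq. 2.3 and the
# distance-to-velocity differentiation used during calibration.
# The calibration distances and voltages are made-up examples.

def fit_inverse(d1, v1, d2, v2):
    """Fit v(d) = K/d + C through two calibration points (Eq. 2.3)."""
    K = (v1 - v2) / (1.0 / d1 - 1.0 / d2)
    C = v1 - K / d1
    return K, C

def voltage_to_distance(v, K, C):
    """Invert the fitted model: d = K / (v - C)."""
    return K / (v - C)

def velocity_from_distances(distances, dt):
    """Approximate velocity by differentiating distance over time."""
    return [(d1 - d0) / dt for d0, d1 in zip(distances, distances[1:])]

if __name__ == "__main__":
    # Two hypothetical calibration points (d in cm, v in volts)
    K, C = fit_inverse(10.0, 2.3, 60.0, 0.6)
    # Distance corresponding to a 1.0 V sensor reading
    print(K, C, voltage_to_distance(1.0, K, C))
```

Feeding a stream of linearised distance readings taken while approaching a wall into `velocity_from_distances` yields the approach-velocity samples that the calibration averages.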
Self-calibration was then implemented by means of a linear fitting that measures the real-world velocity at the highest velocity setting within the linear region. This measurement is performed four times and then averaged (step E in Fig. 2.2), and constitutes the last step of the self-calibration process. Each of the calibration parameters is then stored in the non-volatile EEPROM memory built into the Arduino, so the tuning process is not required every time the robots are powered on.

Finally, high-level distance-based control was implemented by integrating the current velocity setting over time; this way the robot can stop whenever a specified distance has been reached. This is a very simple method that does not account for transients in velocity. Arc functionality was implemented by interpolating the target yaw throughout a distance-based motion; specifying different initial/final yaw angles effectively converts a linear segment into an arc.

Chapter 3. Evaluation of the motion controller

The purpose of the controller described in Chapter 2 is to provide accurate high-level motion control for the GNBot robots. This chapter evaluates the real-world performance of this controller.

First, a series of experiments were designed to measure the different motion drift characteristics. For instance, the robots were commanded to perform squares and circles of known dimensions while external video tracking provided unbiased measurements of the actual trajectories. The tracking process is described in Figures 3.1 and 3.2.
Figure 3.1:
Perspective correction for accurate video tracking.
The video tracking system was calibrated using perspective correction with a square of known dimensions. This allows for a 1:1 mapping between image units and real-world distances in the XY plane.
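The perspective correction of Figure 3.1 can be illustrated with a minimal sketch: a homography is estimated from the four corners of the reference square and then used to convert pixel coordinates into real-world XY positions. This is a plain NumPy version of the computation that OpenCV's getPerspectiveTransform performs; the corner coordinates are invented for the example:

```python
import numpy as np

def perspective_transform(src, dst):
    """Estimate the 3x3 homography mapping 4 src points to 4 dst points
    (the same result as cv2.getPerspectiveTransform, via a linear solve)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, point):
    """Map an image point to real-world XY coordinates."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return x / w, y / w

if __name__ == "__main__":
    # Hypothetical pixel corners of the reference square...
    img_corners = [(102, 510), (871, 495), (830, 120), (150, 135)]
    # ...mapped to a 100x100 cm square in world coordinates.
    world_corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
    H = perspective_transform(img_corners, world_corners)
    print(apply_homography(H, (102, 510)))  # → approximately (0.0, 0.0)
```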
Figure 3.2:
How color markers are tracked with the OpenCV library.
First, the reference points for perspective correction need to be specified using the cursor (1). Next, the marker color is selected (2), and a threshold is applied (3). The algorithm finally looks for the largest blob, which yields the coordinates of the robot (4). From [2].
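Steps (3) and (4), thresholding and largest-blob extraction, can be sketched without OpenCV as a flood fill over a boolean mask; in the real tracker the mask would come from the color threshold, whereas here it is synthetic:

```python
import numpy as np
from collections import deque

def largest_blob_centroid(mask):
    """Return the (row, col) centroid of the largest 4-connected region
    of True pixels in a boolean mask."""
    visited = np.zeros_like(mask, dtype=bool)
    best = []
    for r0, c0 in zip(*np.nonzero(mask)):
        if visited[r0, c0]:
            continue
        # Flood-fill one connected component
        queue, blob = deque([(r0, c0)]), []
        visited[r0, c0] = True
        while queue:
            r, c = queue.popleft()
            blob.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not visited[rr, cc]):
                    visited[rr, cc] = True
                    queue.append((rr, cc))
        if len(blob) > len(best):
            best = blob
    rows, cols = zip(*best)
    return sum(rows) / len(best), sum(cols) / len(best)

if __name__ == "__main__":
    mask = np.zeros((20, 20), dtype=bool)
    mask[2:4, 2:4] = True      # small distractor blob (4 px)
    mask[10:14, 10:14] = True  # the marker (16 px)
    print(largest_blob_centroid(mask))  # → (11.5, 11.5)
```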
Using this tracking system it was possible to record the trajectories of each robot with very high resolution. The next sections describe the experiments that were performed.
Though gyroscope drift is always compensated upon initialization of the GNBots, such compensation is far from ideal; as a consequence, the robot's motions present yaw drift over time. This effect is analysed in Figure 3.3.
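The startup compensation can be illustrated with a short sketch: the gyroscope rate is averaged while the robot is known to be static, and the resulting bias is subtracted before integrating yaw. The sampling rate and bias values are invented for the example and may differ from the GNBot firmware:

```python
# Sketch of gyroscope bias compensation at startup; all sample
# values are synthetic, not real MPU6050 readings.

def estimate_bias(samples):
    """Average rate readings taken while the robot is static."""
    return sum(samples) / len(samples)

def integrate_yaw(rates, bias, dt):
    """Integrate bias-corrected angular rate into a yaw angle (degrees)."""
    yaw = 0.0
    for rate in rates:
        yaw += (rate - bias) * dt
    return yaw

if __name__ == "__main__":
    # Synthetic static readings: true rate 0 deg/s plus a 0.3 deg/s bias
    static = [0.3, 0.31, 0.29, 0.3]
    bias = estimate_bias(static)
    # One second of motion at a true 90 deg/s, sampled at 100 Hz
    moving = [90.0 + bias] * 100
    print(integrate_yaw(moving, bias, 0.01))  # → approximately 90.0
```

The residual error after subtracting a single startup estimate is exactly the slowly-varying drift that Figure 3.3 measures.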
Figure 3.3:
Cumulative yaw error due to gyroscope drift.
In this experiment, the robot was commanded to repeat a linear trajectory multiple times (alternating the target yaw between ◦ and ◦) for both forward motion (red path) and backward motion (green path) at a velocity setting of cm/s. Using this data, the rotational drift of the controller has been calculated to be in the order of deg/min for a moving robot.

Next, the general performance of the distance-based controller was evaluated by first commanding the robots to perform squares and circles, and then comparing the results against ground-truth dimensions. Square trajectories were implemented as straight line segments interleaved with in-place rotations of ◦. This functionality yielded the results shown in Figure 3.4.

Figure 3.4:
Evaluation of the linear accuracy of the distance-based controller.
A GNBot robot (lower-right corner) was commanded to describe square trajectories of different sizes while its path was recorded using the vision-based method described in Figures 3.1 & 3.2. The red path corresponds to a purposely-miscalibrated robot (it was calibrated with a different set of wheels). The cyan and magenta paths correspond to robots with different wheel sizes that were properly self-calibrated. Finally, the green path corresponds to a self-calibrated robot with equal wheel diameters. Linear drift has been calculated to be around cm/m. This was done by scaling up the maximum deviation from the lower-left corner of the largest square trajectory (about cm for the calibrated robots). It can be appreciated that the shape of the squares is correctly maintained thanks to the gyroscope (which is always calibrated upon start-up), and the square dimensions only fall out of specification for the miscalibrated robot (red path).

In order to simplify the video tracking process, measurements were conducted in one single continuous experiment for each robot. In the case of square trajectories, the GNBot performed the different square sizes one after another, without an intermediate re-positioning of the robot. This generated an undesired drift in the starting position of each trial, which made manual alignment necessary. The post-processing consisted of an affine translation of the starting points to the origin (x, y = 0, 0) and a posterior re-setting of the initial yaw angle by means of a global rotation. This process effectively removed the effects of yaw drift between trials (an effect that had already been analysed in Figure 3.3).

Circular motions were implemented with the arc functionality, by setting the initial and final arc angles to 0◦ and 360◦. Path length was calculated as l = 2πr. This approach yielded the results shown in Figure 3.5.

Figure 3.5:
Evaluation of the accuracy of arcs performed by the controller.
A GNBot robot (lower-right corner) was commanded to describe circular trajectories of different sizes while its path was recorded. The cyan and magenta paths correspond to robots with different wheel sizes that were properly self-calibrated. The green path corresponds to a self-calibrated robot with equal wheel diameters. Finally, the red path corresponds to a different robot that was also self-calibrated. The maximum drift from ground-truth diameters is around cm/m, though it can be appreciated that the circular shape is correctly maintained thanks to the gyroscope (which is always calibrated upon start-up).

The implemented motion controller relies on the integration of velocities over time for its distance estimations. In Section 2.4 it was observed that there is a nonlinear region in the velocity response curves (c.f. Figure 2.12). Since these responses are linearly fitted, the use of velocities outside the linear range will result in inaccurate distance estimations. The effect of velocity is analysed in Figures 3.6 & 3.7.
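The integration scheme just described (accumulating the velocity setting over time, with the target yaw interpolated along the traveled distance to produce arcs) can be sketched as follows; this is an illustrative reconstruction, not the actual GNBot controller code:

```python
# Sketch of distance-based control by time-integration of the
# commanded velocity, with linear yaw interpolation for arcs.

def drive_distance(v, target_distance, dt, yaw_start=0.0, yaw_end=0.0):
    """Integrate the commanded velocity over time and stop once the
    target distance is reached. The target yaw is interpolated with
    traveled distance, so different start/end yaws produce an arc."""
    traveled = 0.0
    targets = []  # (distance, target_yaw) pairs that would feed the yaw PID
    while traveled < target_distance:
        traveled += v * dt
        frac = min(traveled / target_distance, 1.0)
        target_yaw = yaw_start + (yaw_end - yaw_start) * frac
        targets.append((traveled, target_yaw))
    return traveled, targets

if __name__ == "__main__":
    # 10 cm/s over a 50 cm segment, bent into a 90-degree arc
    traveled, targets = drive_distance(10.0, 50.0, 0.01, 0.0, 90.0)
    print(round(traveled, 3), round(targets[-1][1], 3))
```

Because the commanded velocity rather than a measured one is integrated, any unmodelled transient (the nonlinear region above) translates directly into distance error, which is what Figures 3.6 and 3.7 quantify.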
Figure 3.6:
Effect of velocity in the repeatability of square trajectories.
The setup of these trials is similar to the one in Figure 3.4, but in this case the size of the squares was set to a constant l = cm and the squares were performed at different speeds. The maximum drift is in the order of cm/m at the low velocity settings ( to cm/s) and around cm/m at higher velocities ( to cm/s).

Figure 3.7:
Effect of velocity in the repeatability of circular trajectories.
The setup of these trials is similar to the one in Figure 3.5, but in this case the circle diameters were set to a constant D = cm and the circles were performed at different speeds. The maximum drift in diameter is around cm/m at the low velocity settings ( to cm/s) and around cm/m at higher velocities ( to cm/s).

The GNBot robots were originally designed as a platform to evaluate swarm search strategies in real-world environments [1, 2]. The targets of those searches are odor sources based on volatile compounds (such as ethanol) that can be detected by the gas sensor on board each robot.

Among the search strategies that have so far been evaluated with this platform are Lévy-based search (c.f. Fig. 1.2) and wall-bounce search. Unfortunately, at the time of those trials a calibrated motion controller was not yet available. This caused great uncertainty in the evaluation process: the calculated motion commands for a particular search were not being faithfully reproduced by the swarm.

Some of those experiments have been re-run for this project, this time using the self-calibrated motion controller, in order to evaluate the improvement. Trials were performed in the seminar room shown in Figure 3.8.
Figure 3.8:
Room used for the odor search experiments.
The odor search experiments required an effective area of approximately m, an indoors environment free of disturbances, and uniform light conditions that facilitate video tracking. Chairs were moved to the back of the room, and the search arena was installed in the center (blue walls). Air conditioning was turned off during the experiments in order to minimize air currents. The robot swarm can be seen on top of the table, as well as the computer used to record tracking data.

Wall-bounce search is a very primitive form of search in which a robot is commanded to maintain a constant velocity until either an obstacle or the target is encountered; for obstacles, the robot must rotate away and continue with a different heading. The simplicity of the wall-bounce strategy makes it very sensitive to the linearity of the robot's motions. For instance, a robot with enough yaw drift may rotate in circles over the same area without ever finding any obstacle that modifies its path. Given its sensitivity to drift, wall-bounce search is a very good candidate for the comparison of both motion controllers. The experimental setup is described in Figure 3.9.

[Figure 3.9 panel annotations, a): odor sources (goal); square of known dimensions (for perspective correction); starting position of the robots. b): total search time: 13 min 19 sec.]
Figure 3.9:
Setup of a wall-bounce search experiment (conducted with the new motion controller).
The upper panel shows the starting robot positions, as well as the location of the odor targets. These are based on cotton pads impregnated with ethanol. The lower panel shows an overlay of the search path described by each robot. It can be appreciated that the trajectories are accurately performed as lines, which faithfully matches the high-level definition of the wall-bounce algorithm.
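A minimal simulation of the wall-bounce behaviour (drive straight, pick a new heading on contact with a wall) can be sketched as follows; the arena size, step length, and random-heading bounce rule are arbitrary illustrative choices:

```python
import math
import random

def wall_bounce_path(steps, arena=100.0, step_len=1.0, seed=0):
    """Simulate wall-bounce search in a square arena of side `arena`:
    move straight until a wall would be hit, then rotate away by
    picking a new random heading and continue."""
    rng = random.Random(seed)
    x, y = arena / 2.0, arena / 2.0
    heading = rng.uniform(0.0, 2.0 * math.pi)
    path = [(x, y)]
    for _ in range(steps):
        nx = x + step_len * math.cos(heading)
        ny = y + step_len * math.sin(heading)
        if not (0.0 <= nx <= arena and 0.0 <= ny <= arena):
            # Obstacle encountered: continue with a different heading
            heading = rng.uniform(0.0, 2.0 * math.pi)
            continue
        x, y = nx, ny
        path.append((x, y))
    return path

if __name__ == "__main__":
    path = wall_bounce_path(1000)
    print(len(path), path[-1])
```

A robot with yaw drift deviates from these straight segments, which is exactly why this strategy discriminates so clearly between the two controllers compared in Chapter 4.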
The results of these search experiments are further discussed in the next section.

Chapter 4. Conclusions

This thesis has tackled the improvement of the motion controller on board the GNBot by incorporating gyroscope-based odometry that does not require wheel encoders. The outcome is a high-level controller that can faithfully perform in-place rotations, arcs, and linear trajectories. This controller is calibrated, so its inputs are in real-world units (centimeters and degrees). Additionally, the calibration process is automatic and uses only two on-board sensing capabilities: the gyroscope and the distance sensor.

The general approach of the project has been to separately analyse the sources of motion uncertainty and then find ways to minimise them (i.e. by observing a response curve and working only within its linear region). Many of the error sources could not be separated, so the calibration process had to progressively build a layered confidence base, which in the end allowed for complete self-calibration:

• First, the output of the gyroscope (the MPU6050 inertial sensor) was analysed. It was found that the yaw measurement varied over time even when the robot was static, so this drift had to be compensated (Section 2.1).

• Then the response curves of the motors were analysed using the calibrated gyroscope. This was done by activating a single motor at a time and measuring the rotational velocities of the robot (c.f. Fig. 2.5). The response curves were plotted for robots with different wheel diameters (c.f. Fig. 2.6). These were then linearised, which allowed control in real-world units of rotational speed (Section 2.2).

• Using the calibrated motor functions, a PID yaw controller was implemented and tested in order to guarantee accurate yaw control (c.f. Fig. 2.8). This allowed for fast and accurate yaw transitions, as well as very accurate closed-loop linear trajectories (Section 2.3).
• The IR distance sensor on board the GNBot (Sharp GP2Y0A21YK0F) was then evaluated and linearised (c.f. Figs. 2.10 & 2.11). Finally, the PID yaw controller was used in combination with the distance sensing capability in order to perform accurate velocity measurements. This made it possible to measure the linear velocity response curves of the robot (c.f. Fig. 2.12). Basic fitting allowed for accurate velocity-based control with real-world input units (Section 2.4).

The self-calibration process that has been implemented (c.f. Fig. 2.2) uses only the gyroscope and distance sensors on board the GNBot to automatically measure all the necessary parameters, without requiring any user interaction. The process simply requires manual placement of the robot in front of a wall, and takes approximately 92 seconds.

Video tracking was used for the evaluation of the controller's accuracy (Chapter 3). Tracking was implemented using the OpenCV library, first performing perspective correction, and then identifying the position of the color marker by applying a threshold (c.f. Figs. 3.1 & 3.2). This tracking method was used to compare the new controller with the previously existing solution in a wall-bounce odor search experiment (see Figure 4.1).
Figure 4.1:
Comparison of a "wall bounce" search using the original controller (left) and the new self-calibrated controller (right).
The figure shows the trajectories of four GNBot robots in two different odor search experiments. The starting positions are represented with triangle markers and the targets are represented as squares. It can be appreciated that the original controller (left panel) has large amounts of yaw drift. The new controller (right panel) faithfully matches the theoretical definition of wall-bounce search.

The tracking method also made it possible to measure the effect of gyroscope drift (by having the robot repeatedly perform straight lines, as seen in Section 3.1), the linear positioning accuracy (by performing squares and circles of different sizes, as seen in Section 3.2), and the effect of velocity on the controller (by performing squares and circles at different velocities, as seen in Section 3.3).

The new motion controller has a linear drift in the order of cm/m at the low velocity settings ( to cm/s) and around cm/m at higher velocities ( to cm/s). The rotational drift is in the order of deg/min when the robot is moving.

In summary, this project has implemented the high-level functionality needed to achieve accurate motion control of the GNBot robots. The self-calibrating nature of the approach also facilitates its use in large robot swarms. The outcome of the project has been published as open-source in a GitHub repository.

Using the new controller it will be possible to evaluate search strategies that depend on accurate position-based control (i.e. environment mapping, efficient area covering, flocking behaviours, etc.).

In regards to self-calibration, the studied method provides static tuning parameters that remain unchanged until a re-calibration is manually issued. Some authors have studied SLAM approaches that instead update the calibration estimates automatically in real time [16, 17, 18].
These techniques could be explored with the incorporation of other forms of odometry such as wheel encoders.

The inertial sensor used in this project includes not only a gyroscope but also an accelerometer. It would be very interesting to incorporate these measurements into the motion controller. For instance, the accelerometer could be used to detect changes in ground surface and account for the differences in friction.

https://github.com/carlosgs/GNBot

References

[1] Carlos García-Saura, Francisco de Borja Rodríguez, and Pablo Varona. Design principles for cooperative robots with uncertainty-aware and resource-wise adaptive behavior. In Armin Duff, Nathan F. Lepora, Anna Mura, Tony J. Prescott, and Paul F. M. J. Verschure, editors, Biomimetic and Biohybrid Systems, volume 8608 of
Lecture Notes in Computer Science, pages 108–117. Springer International Publishing, 2014.

[2] Carlos García-Saura. Cooperative strategies for the detection and localization of odorants with robots and artificial noses. Technical report, 2014.

[3] M. Barczyk, M. Jost, D. R. Kastelan, A. F. Lynch, and K. D. Listmann. An experimental validation of magnetometer integration into a GPS-aided helicopter UAV navigation system. In American Control Conference (ACC), 2010, pages 4439–4444, June 2010.

[4] Pei-Chun Lin, H. Komsuoglu, and D. E. Koditschek. Sensor data fusion for body state estimation in a hexapod robot with dynamical gaits. IEEE Transactions on Robotics, 22(5):932–943, Oct 2006.

[5] Seungbeom Woo, Jaeyong Kim, Jungmin Kim, and Sungshin Kim. Calibration of accelerometer using fuzzy inference system. In 11th International Conference on Control, Automation and Systems (ICCAS), 2011, pages 1448–1450, Oct 2011.

[6] D. J. Stilwell, C. E. Wick, and B. E. Bishop. Small inertial sensors for a miniature autonomous underwater vehicle. In Proceedings of the 2001 IEEE International Conference on Control Applications (CCA '01), pages 841–846, 2001.

[7] J. C. Kinsey and L. L. Whitcomb. Towards in-situ calibration of gyro and doppler navigation sensors for precision underwater vehicle navigation. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (ICRA '02), volume 4, pages 4016–4023, 2002.

[8] R. Panish and M. Taylor. Achieving high navigation accuracy using inertial navigation systems in autonomous underwater vehicles. In OCEANS, 2011 IEEE - Spain, pages 1–7, June 2011.

[9] Hyoung-Ki Lee, Kiwan Choi, Jiyoung Park, and Hyun Myung. Self-calibration of gyro using monocular SLAM for an indoor mobile robot. International Journal of Control, Automation and Systems, 10(3):558–566, 2012.

[10] H. Ovren, P. Forssen, and D. Tornqvist. Why would I want a gyroscope on my RGB-D sensor? In 2013 IEEE Workshop on Robot Vision (WORV), pages 68–75, Jan 2013.

[11] Hakyoung Chung, L. Ojeda, and J. Borenstein. Sensor fusion for mobile robot dead-reckoning with a precision-calibrated fiber optic gyroscope. In Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), volume 4, pages 3588–3593, 2001.

[12] Surachai Panich and Nitin Afzulpurkar. Mobile robot integrated with gyroscope by using IKF. International Journal of Advanced Robotic Systems, 8(2):122, 2011.

[13] J. Borenstein. Experimental evaluation of a fiber optics gyroscope for improving dead-reckoning accuracy in mobile robots. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation, volume 4, pages 3456–3461, May 1998.

[14] John G. Ziegler and Nathaniel B. Nichols. Optimum settings for automatic controllers. Trans. ASME, 64(11), 1942.

[15] Michael L. Luyben and William L. Luyben. Essentials of Process Control. McGraw-Hill College, 1997.

[16] Agostino Martinelli, Nicola Tomatis, and Roland Siegwart. Simultaneous localization and odometry self calibration for mobile robot. Autonomous Robots, 22(1):75–85, 2007.

[17] M. De Cecco. Self-calibration of AGV inertial-odometric navigation using absolute-reference measurements. In Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference (IMTC/2002), volume 2, pages 1513–1518, 2002.

[18] Nicholas Roy and Sebastian Thrun. Online self-calibration for mobile robots. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation, volume 3, pages 2292–2297. IEEE, 1999.