Synchronous Robotic Framework
Nagarathna Hema Balaji, Jyothsna Kilaru, and Oscar Morales-Ponce
Department of Computer Engineering and Computer Science, California State University Long Beach
[email protected], [email protected], [email protected]
Abstract.
We present a synchronous robotic testbed called SyROF that allows fast implementation of robotic swarms. Our main goal is to lower the entry barriers to cooperative-robot systems for undergraduate and graduate students. The testbed provides a high-level programming environment that allows the implementation of Timed Input/Output Automata (TIOA). SyROF offers the following unique characteristics: 1) a transparent mechanism to synchronize robot maneuvers, 2) a membership service with a failure detector, and 3) a transparent service to provide common knowledge in every round. These characteristics are fundamental to simplifying the implementation of robotic swarms. The software is organized in five layers: the lowest layer consists of a real-time publish-subscribe system that allows efficient communication between tasks. The second layer is an implementation of a Kalman filter to estimate the position, orientation, and speed of the robot. The third layer consists of a synchronizer that synchronously executes the robot maneuvers, provides common knowledge to all the active participants, and handles failures. The fourth layer is the programming environment (a virtual machine), and applications built on top of it form the fifth layer.
1 Introduction

In natural disasters, the response time of the search-and-rescue team is a critical factor in minimizing casualties. A swarm of robots, e.g., drones, can be used to concurrently sweep the region, minimizing the response time without risk to the search-and-rescue team [12]. Some researchers have followed a centralized approach where the robots are supervised and operated from a central control [5], [2]. However, these solutions are not scalable and are prone to failure. More robust solutions focus on fully autonomous systems, where the robots self-coordinate their actions without the help of preexisting infrastructure [1]. However, the implementation of these systems requires an interdisciplinary team with expertise in robotics, distributed systems, wireless sensor networks, etc., making them accessible only to experienced research teams. In this paper we present a testbed, called Synchronous Robotic Framework (SyROF), that enables undergraduate computer science and computer engineering students at California State University Long Beach to rapidly implement and demonstrate robotic swarms. In essence, SyROF provides the look-compute-move model proposed by Suzuki and Yamashita in their seminal paper [14]. In our framework, robots broadcast their state infinitely often. Then, every participant robot obtains the state of every other active robot using a membership service, which it can use to compute its next movement. In the look-compute-move model, we can distinguish two main variants: 1) the fully synchronous FSYNC, where all robots start the cycles at the same time [6], and 2) the asynchronous ASYNC, where no assumption is made about the cycles. We choose to implement FSYNC to reduce the complexity of implementing cooperative algorithms. The power of the model has been shown in [3], [18], [17]. However, SyROF can also be used to implement the asynchronous variant. We implement a synchronizer to execute the synchronized cycle.
In FSYNC every robot has common knowledge, i.e., every robot knows that every other robot knows, and so on, the state of the participant robots. Thus, the output of any deterministic function is identical in all robots. However, the outputs can be conflicting if robots do not attain common knowledge. We consider wireless networks where messages can be dropped at any time, which makes it impossible to attain common knowledge deterministically. To overcome this impossibility, we adopt the stream consensus protocol proposed by Morales-Ponce et al. [8], which guarantees common knowledge at almost every time, with periods of disagreement of bounded length.

In this paper we describe the design of the SyROF testbed, which consists of multiple assorted mobile robots including omnidirectional robots, drones and rovers. Each robot is equipped with a microprocessor, a gyroscope/accelerometer sensor, a flow sensor, a GPS-like sensor and a Bluetooth chip. We design and implement a real-time publish-subscribe system as a base system. On top of it, we design and implement a Kalman filter that reduces the noise of the sensory data to compute the state of the robot. Then we implement a synchronizer to synchronize the tasks of the robots. On top of the synchronizer, we implement a programming interface, called the virtual machine, that allows implementing Timed Input/Output Automata. The main software architecture is shown in Figure 1. Our design allows adding external sensors, and even small external computers, for more complex applications.
Fig. 1.
Software architecture
The paper is organized as follows: in Section 2, we present the architecture of the robots. The formal requirements of the system are then presented in Section 3, followed by the related work in Section 4. We present the design of the real-time publish-subscribe system in Section 5, then we present the design of the Kalman filter in Section 6 and the membership service and synchronizer in Section 7. The programming interface is presented in Section 8. Finally, we conclude the paper in Section 9.
2 Robot Architecture

We design the main components of the system to be compatible with different robotic models. Currently, we have built the testbed with omnidirectional robots that are able to move in any direction without rotating. However, the components and software were designed to support different robot models, such as rovers that can rotate and move forward and backward, non-holonomic mobile robots, drones, etc.

The main components of SyROF are implemented on a Cortex M4 with a real-time operating system. In particular, we use a Teensy 3.2 and FreeRTOS for the omnidirectional robots, which allows executing tasks in bounded time. SyROF uses different sensors to compute the robot's state: an MPU-9250 (Inertial Measurement Unit, or IMU) to obtain the robot's orientation, a PMW3901 (optical flow sensor) to compute the distance traversed by the robot, an NRF51822 (Bluetooth module) to communicate, and a DWM1000 (an ultra-wideband sensor) to compute the robot's absolute position. SyROF also controls all the actuators. To simplify the connections, we designed a printed circuit board (PCB), depicted in Figure 2.
Fig. 2.
Autopilot PCB
Many robotic applications can be enhanced with the use of computer vision and machine learning algorithms. However, these algorithms require more computational power. Hence, we design the system to be highly configurable so that it can include different sensors and actuators, including external small computers. Throughout the paper we assume that robots are within communication range and that the clock drift of each process is bounded by a constant.
3 Requirements

In this section, we list the main functional requirements of SyROF.
– Provide a membership service. In robotic swarms, every robot needs to be aware of every other robot that is participating in the system. This requirement adds dynamism to the system by allowing robots to join and leave at any time.
– Synchronize robotic primitives. The time to complete a maneuver (or primitive) is non-deterministic due to inaccuracies in the actuators and sensors, battery levels, etc. Robots should synchronize primitives to provide the look-compute-move FSYNC model.
– Provide common knowledge infinitely often, even with communication faults. Common knowledge is attained when every robot knows the state of all participant robots. If all robots communicate successfully, they must switch to cooperative mode and provide their state, including position, speed, intentions, etc., to every other robot. If any robot does not receive the message of at least one robot, all robots must switch to autonomous mode.
– Provide a high-level programming environment to implement cooperative robotic systems using Timed Input/Output Automata (TIOA). The programming environment should provide instructions to control the actuators, read sensory data, and perform complex queries.
4 Related Work

Testbeds for lowering the entrance barrier to multi-robot systems have been proposed previously; see for example [9], [13], where the authors present testbeds, called GRITSBot and Kilobots respectively, using inexpensive robots. In GRITSBot, a central server is responsible for executing the code of each robot. Moreover, it requires a precise tracking system to determine the position of each robot. Thus, users can design and test synchronous robotic systems. However, any failure in the server affects the whole system. Unlike GRITSBot, in SyROF all the robots have the same role and, therefore, there is no single point of failure. In the Kilobot testbed, robots asynchronously run their own code. However, those robots do not have enough computational power to estimate their positions. Unlike Kilobots, in our proposed testbed robots can estimate their own position and synchronize their primitives.

Another closely related framework is ROS (Robot Operating System) [11]. A system built using ROS consists of several processes connected in a peer-to-peer topology. ROS has been very successful, and many packages for a great variety of applications are available, including SLAM (Simultaneous Localization and Mapping). However, ROS provides support neither for synchronizing robotic primitives nor for common-knowledge guarantees. Our framework provides a new high-level, platform-independent programming language that provides users with synchronous execution of robotic primitives and common knowledge in real time. Moreover, SyROF can be integrated with ROS to extend its functionalities without compromising real-time execution.

Our synchronization mechanism is based on consensus among all robots. Many papers have based their systems on consensus, where processes need to agree on specific tasks [4]. Another approach is to use tuple spaces: processes insert tuples in a common space that can be read by every process [10], but there is no guarantee that a process is able to read.
5 Real-Time Publish-Subscribe System
In this section we present the design of the real-time publish-subscribe system that simplifies the implementation of robotic systems. The publish-subscribe system can be logically separated into three parts: the Broker, Publishers, and Subscribers, as shown in Figure 3. Each process can be a publisher, a subscriber, or both.

– The Broker. The main functionality of the Broker is to disseminate data from publishers to the subscribers that are interested in the topic. Thus, it plays a key role in the expected functionality of the system. Publishers register with the Broker using unique identifiers, and subscribers subscribe to publishers with the Broker using those identifiers. It is assumed that identifiers are unique.
Fig. 3. Publish-subscribe system

– Publishers. They are dedicated to producing data without knowing which processes consume it. For example, publishers can read data from sensors, execute an algorithm to filter data, etc. When a publisher is created, it registers as a publisher using a unique identifier. When data is ready, it notifies the Broker.
– Subscribers. They are dedicated to consuming data. When a subscriber is created, it subscribes to the topics it is interested in using their unique identifiers. It can subscribe to more than one topic. When a publisher publishes data, the Broker notifies all the subscribers that have declared interest in it. Subscribers can also play the role of publisher.

In this paper we report the implementation in FreeRTOS. However, the design can also be implemented in other real-time operating systems. FreeRTOS is based on co-routines to emulate a multiprocessor system. Essentially, a co-routine can have multiple entry points and maintains its state between activations. In FreeRTOS, each process executes a task that does not return during the execution of the program. To switch to other tasks, an operating system method must be invoked, such as vTaskDelayUntil, vTaskDelay, etc. Since we are interested in providing a system for time-critical applications, each task must complete a step of computation in bounded time.

5.1 Broker Interface

The real-time publish-subscribe system interface is presented in Interface 1, which consists of three methods:

– registerPublisher. Registers a publisher in the real-time publish-subscribe system using a unique id. First, the publisher task instantiates a FreeRTOS queue using the operating system method xQueueCreate and passes the handler to the system, as well as the size of the tuple. The Broker inserts these values into a list of publishers. The function returns the handler (index) of the publisher, which is later used to publish new data.
– subscribe. Registers a subscriber to a given publisher. Before the subscriber calls subscribe, it creates a task and a queue that stores the tuples produced by the publisher. In FreeRTOS, xCreateTask and xQueueCreate are used to create the task and the queue, respectively. These values are passed as parameters to the function subscribe, and the Broker inserts them into the publisher's queue.
– publish. Publishers use the function publish to publish new data. To avoid blocking, the function notifies the main Broker task. When the Broker receives the notification, it reads the data from the publisher using the queue handler and then copies it to all the subscribers using their queue handlers.

Interface 1 Publish-Subscribe Interface
1: uint8_t registerPublisher(uint8_t id, uint8_t size, QueueHandle_t queueHandler)
2: bool subscribe(uint8_t id, uint32_t taskHandler, QueueHandle_t queueHandler)
3: uint8_t publish(uint8_t index)
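The Broker logic described above can be sketched in plain C. The following is a simplified, single-threaded model in which fixed-size arrays and memcpy stand in for the FreeRTOS queues and task notifications of the real implementation; the names, sizes, and return conventions are illustrative assumptions, not the SyROF API.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_PUBLISHERS  8
#define MAX_SUBSCRIBERS 8
#define MAX_PAYLOAD     32

/* One slot per publisher: the latest published tuple. */
typedef struct {
    uint8_t id;                 /* unique topic identifier            */
    uint8_t size;               /* size of the tuple in bytes         */
    uint8_t data[MAX_PAYLOAD];  /* stands in for the FreeRTOS queue   */
} Publisher;

/* One mailbox per subscription. */
typedef struct {
    uint8_t pubIndex;           /* which publisher we listen to       */
    uint8_t data[MAX_PAYLOAD];  /* stands in for the subscriber queue */
    int fresh;                  /* set when the Broker delivers data  */
} Subscriber;

static Publisher pubs[MAX_PUBLISHERS];
static Subscriber subs[MAX_SUBSCRIBERS];
static uint8_t nPubs, nSubs;

/* Register a publisher under a unique id; returns its handler (index). */
uint8_t registerPublisher(uint8_t id, uint8_t size) {
    pubs[nPubs].id = id;
    pubs[nPubs].size = size;
    return nPubs++;
}

/* Subscribe to the publisher registered under `id`; returns a mailbox
   index, or -1 if no publisher registered that topic. */
int subscribe(uint8_t id) {
    for (uint8_t i = 0; i < nPubs; i++) {
        if (pubs[i].id == id) {
            subs[nSubs].pubIndex = i;
            subs[nSubs].fresh = 0;
            return nSubs++;
        }
    }
    return -1;
}

/* The Broker copies the publisher's tuple to every interested subscriber. */
void publish(uint8_t index, const void *tuple) {
    memcpy(pubs[index].data, tuple, pubs[index].size);
    for (uint8_t s = 0; s < nSubs; s++) {
        if (subs[s].pubIndex == index) {
            memcpy(subs[s].data, tuple, pubs[index].size);
            subs[s].fresh = 1;
        }
    }
}
```

In the real system the copy happens in the Broker task after a task notification, so the publisher never blocks; the single-threaded model above preserves only the data flow.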
6 Kalman Filter

In this section we present the implementation of the Kalman filter [7]. The Kalman filter estimates the state of the system in real time from noisy sensory information, assuming that the noise follows a Gaussian distribution. The Kalman filter is critical for the correct functionality of our testbed. Sensors suffer from errors occurring for many reasons. For instance, GPS sensors might suffer deviations while the device does not have a direct line of sight to the satellites. Furthermore, odometric values must be accurate to determine the actual position of the vehicle.
Let Δ be the rate at which the Kalman filter runs and X(t) = (x(t), y(t), ẋ(t), ẏ(t), θ(t)) be the robot's state at time t, where x(t) and y(t) are the absolute position, ẋ(t) and ẏ(t) are the speed, and θ(t) is the orientation of the robot. Since the current state only depends on the previous state, we omit t. Let P be the covariance matrix of X at time t; P represents the uncertainties in the current state. Let u_a be the motor thrust and u_θ be the steering angle. Based on the dynamic model, we can compute the new state as follows:

  x̂ = x + Δ ẋ
  ŷ = y + Δ ẏ
  ẋ̂ = ẋ + Δ u_a cos(Δ u_θ)
  ŷ̂ = ẏ + Δ u_a sin(Δ u_θ)
  θ̂ = θ + Δ u_θ

This can be written compactly as X̂ = f(X) + f(U). Let F be the Jacobian matrix of f(X) and Q be the covariance matrix representing the process noise. The estimate of the covariance matrix at time t + Δ is P̂ = F P Fᵀ + Q.

Let Z = (gyro_z, flow_x, flow_y, pos_x, pos_y) be the sensor readings at time t and let R be the covariance matrix measuring the sensor noise. We use the following equations to define the measurement function h(X̂):

  flow_x = ẋ̂,  flow_y = ŷ̂,  pos_x = x̂,  pos_y = ŷ̂,  gyro_z = θ̇̂

Let J_h be the Jacobian matrix of h. We can summarize the Kalman filter as follows.

Model forecast:

  X̂ = f(X) + f(U)
  P̂ = F P Fᵀ + Q    (1)

Correction step:

  K = P̂ J_hᵀ (J_h P̂ J_hᵀ + R)⁻¹
  X = X̂ + K (Z − h(X̂))
  P = (I − K J_h) P̂    (2)

We implement the described Kalman filter using the real-time publish-subscribe system. Thus, we create dedicated tasks for each sensor: IMU, which reads the gyroscope/accelerometer data; optical flow, which reads the offset between two consecutive images; and ultra-wideband, which reads the ranging between the sensor and the fixed stations. We register these tasks as publishers and publish the data at each sensor's refresh time. The core of the Kalman filter is implemented using two tasks:

1. The state task. The state task subscribes to IMU, optical flow and ultra-wideband. When a publisher publishes new data, the state task updates Z and sets the uncertainties of the sensor in R accordingly.

Protocol 2 Synchronization protocol
Signature:
    external SendP⟨Id, mbrInP, OM, Progress⟩
    external SendW⟨Id, mbrInW, OM, Wait⟩
    external SendV⟨Id, mbrInV, OM, Vote⟩
    external Receive⟨Id, information, OM, state⟩
    external FailureDetector
    external GoToAutonomous
    external GoToCooperative

State:
    analog now ∈ R, initially 0
    analog nextSend ∈ R, initially 0
    discrete information, the set of members, which can be a list of nodes in Progress, Wait or Vote
    discrete OM, the operation mode, initially ∅
    discrete state ∈ {Progress, Wait, Vote}, the state of the node
    discrete memberInP, the list of members in the Progress state
    discrete memberInW, the list of members in the Wait state
    discrete memberInV, the list of members in the Vote state
    discrete messageLost, a list that keeps track of the messages lost for each process

Actions:
    external SendP⟨Id, mbrInP, OM, Progress⟩
        precondition: state = Progress ∧ nextSend ≤ now
        effect: nextSend ← now + α
    external SendW⟨Id, mbrInW, OM, Wait⟩
        precondition: state = Wait ∧ nextSend ≤ now
        effect: nextSend ← now + α
    external SendV⟨Id, mbrInV, OM, Vote⟩
        precondition: state = Vote ∧ nextSend ≤ now
        effect: nextSend ← now + α
    external Receive⟨Id, information, OM, state⟩
        precondition: full(Channel) = True
        effect:
            if state = Progress then mbrInP ← mbrInP ∪ {Id}
            if state = Wait then mbrInW ← mbrInW ∪ {Id}
            if state = Vote then mbrInV ← mbrInV ∪ {Id}
    external FailureDetector
        precondition: full(Channel) = False
        effect: messageLost_Id ← messageLost_Id + 1
    external GoToAutonomous
        precondition: messageLost > K
        effect: OM ← Autonomous
    external GoToCooperative
        precondition: OM = Autonomous ∨ received OM = Autonomous from all other nodes
        effect: OM ← Cooperative
    internal Progress
        precondition: actionCompleted = True
        effect: state ← Wait; mbrInW ← {Id}
    internal Wait
        precondition: mbrInP = mbrInW ∨ |mbrInV| > 0
        effect: state ← Vote; mbrInV ← mbrInV ∪ {Id}
    internal Vote
        precondition: mbrInW = mbrInV
        effect: state ← Progress; mbrInV ← ∅; mbrInP ← {Id}

(a) MPU 9250: The MPU returns the linear acceleration as a unitary vector. To convert it to m/s², we multiply by G = 9.81 m/s². The MPU also returns the angular speed in degrees per second, so we simply convert it to radians.

(b) Flow Breakout (optical flow): It returns the difference in pixels in x and y between two consecutive images.

Fig. 4. Flow Breakout data with orientation α

As depicted in Figure 4, the Flow Breakout sensor measures the lateral velocity of the sensor in both directions, that is, δx and δy. The effective viewing angle θ of the sensor is assumed to be equal to 25°. Since there may be a slight tilt of the sensor's axis, we add an angle α as the orientation. Let h be the height of the sensor above the ground. Observe that β = π − θ − α, and a + x can be calculated using β since we already know the height of the sensor; that is, a + x = h cot(β). The Flow Breakout sensor measures the lateral velocity in pixels per second, and the number of pixels in the x and y axes of the image frame is 30 each. Therefore, the normalized speed is x = h cot(β) − a.

(c) DWM1000 (ultra-wideband): The sensor returns the distance to a fixed number of anchors. An anchor is a station at a well-known position. We compute the position using simple trigonometry. We consider four anchors a₁, a₂, a₃, a₄ with the layout depicted in Figure 5, in which every pair of anchors is at distance √2 from each other. Let p = (x, y, z) be the robot position and let [p₁, p₂, p₃] denote the plane determined by the three points p₁, p₂, p₃.

Consider any two different anchors a_i, a_j, the line segment (a_i, a_j) and its perpendicular line segment (q, p) such that q is on (a_i, a_j). Observe that the perpendicular line has an angle of 45° and dist(a_i, a_j) = √2. Let dist(a_i, q) = b, dist(q, a_j) = √2 − b, dist(q, p) = c, dist(a_i, p) = d_i and dist(a_j, p) = d_j, where d_i and d_j are returned by the DWM1000. From the Pythagorean theorem, b² + c² = d_i² and (√2 − b)² + c² = d_j². Therefore, b = (d_i² − d_j² + 2)/(2√2).

Fig. 5. Upper: layout; lower: plane xy

Observe that q₁ = (x, y) is the intersection of the perpendicular lines of (a₁, a₂) and (a₃, a₄) in the plane XY, q₂ = (x, z) is the intersection of the perpendicular lines of (a₁, a₃) and (a₂, a₄) in the plane XZ, and q₃ = (y, z) is the intersection of the perpendicular lines of (a₁, a₄) and (a₂, a₃) in the plane YZ. Let y = x − m₁,₂ be the equation of the perpendicular line of (a₁, a₂) passing through p. Since the angle is 45°, m₁,₂ is determined by d₁² − d₂²; see Figure 5. Similarly, let y = −x + m₃,₄ be the equation of the perpendicular of (a₃, a₄) passing through p, with m₃,₄ determined by d₃² − d₄². Solving each pair of perpendicular lines gives the coordinates of p in the corresponding plane, and combining the three planes yields x, y and z.
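The plane-by-plane construction can be checked numerically. As a sketch, assume the four anchors sit at (0,0,0), (1,1,0), (1,0,1) and (0,1,1): one concrete layout in which every pair of anchors is √2 apart, matching the distances used in the derivation (the paper's exact coordinates are not recoverable from our copy). Subtracting the sphere equation of the first anchor from the others linearizes the problem:

```c
#include <assert.h>
#include <math.h>

/* Anchors assumed at (0,0,0), (1,1,0), (1,0,1), (0,1,1): a regular
   tetrahedron in which every pair of anchors is sqrt(2) apart.
   d1..d4 are the ranges returned by the DWM1000. */
void positionFromRanges(double d1, double d2, double d3, double d4,
                        double *x, double *y, double *z) {
    /* |p|^2 = d1^2 and |p - a_k|^2 = d_k^2; subtracting the first
       equation from the others leaves the linear sums below. */
    double sxy = (d1 * d1 - d2 * d2 + 2.0) / 2.0; /* x + y */
    double sxz = (d1 * d1 - d3 * d3 + 2.0) / 2.0; /* x + z */
    double syz = (d1 * d1 - d4 * d4 + 2.0) / 2.0; /* y + z */
    *x = ( sxy + sxz - syz) / 2.0;
    *y = ( sxy - sxz + syz) / 2.0;
    *z = (-sxy + sxz + syz) / 2.0;
}
```

With exact ranges the three sums are consistent; with noisy ranges the Kalman filter absorbs the residual error through the measurement covariance R.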
2. The Kalman task. This task executes at a fixed rate. First it updates the state using equations (1) and (2), and then it sets high uncertainties in R for the new calculations. Recall that in the Kalman filter the weight of each sensor reading depends on the uncertainties. Once a new sensory reading is completed, the uncertainty matrix R is adjusted. Observe that if one step of the Kalman filter executes without reading the sensory data, the state is determined by the model alone. This approach allows the Kalman filter to handle sensors that read at different rates and reduces the maintenance time.

We show a sketch of the code that receives the data to populate the matrices Z and R:

void stateTask(void* args) {
    while (true) {
        if (xTaskNotifyWait(0xffffffff, 0xffffffff, &publisherId,
                            portMAX_DELAY) == pdTRUE) {
            switch (publisherId) {
            case MPU_DATA:
                xQueueReceive(mpuQueue, &gyroAcc, (TickType_t)0);
                // gyroAcc contains the data of the IMU
                break;
            case FLOW_DATA:
                xQueueReceive(flowQueue, &flowData, (TickType_t)0);
                // flowData contains the data of the flowdeck
                break;
            case DWM_DATA:
                xQueueReceive(dwmQueue, &dwmData, (TickType_t)0);
                // dwmData contains the data of the DWM1000
                break;
            }
        }
    }
}

The Kalman filter task executes at the frequency FREQUENCY_KF. It executes one step of the Kalman filter, updates the variables for the next step and publishes the new state. After publishing the data, the matrices Z and R are reset to high uncertainties.

void kalmanTask(void* args) {
    while (true) {
        vTaskDelayUntil(&lastWakeTime, FREQUENCY_KF / portTICK_RATE_MS);
        if (ekf_step(&ekf)) {                     // executes one Kalman filter step
            xQueueOverwrite(stateQueue, &state);  // insert in the queue
            publish(stateId);                     // publish
        }
        // reset the matrices Z and R
    }
}

7 Membership Service and Synchronizer

In this section, we describe the protocol that allows the synchronization of Timed Input/Output Automata. We present the algorithm as a Timed Input/Output Automaton in Protocol 2. Let R = {r₁, r₂, ..., r_n} denote the set of robots. The protocol relies on three states: PROGRESS, where robots are performing a maneuver; WAIT, where robots have completed the maneuver; and VOTING, where robots are voting to start the next instruction. Let M_i, initially set to ∅, be the set of boolean values indicating which robots r_i knows to have completed their maneuvers, and let v_i denote the number of rounds robot r_i has spent voting.

For simplicity of presentation, we assume that every robot is aware of the rest of the robots; later we explain how to remove this assumption. Every robot broadcasts its id, the state of its automaton and the number of rounds it has been in the VOTING state. While robots are in the PROGRESS state, they are performing a maneuver. When a robot completes its maneuver, it changes its state to WAIT and waits for the other robots to complete their maneuvers. When a robot r_i receives the WAIT state from robot r_k, it sets M_i[k] = TRUE. Robot r_i changes its state from WAIT to VOTING when ∧_{k=1}^{n} M_i[k] = TRUE, or if it receives the VOTING state from a robot r_k, in which case it sets v_i = v_k. Robot r_i increases v_i in every round and, when v_i = K, it changes to PROGRESS and resets M_i = ∅ and v_i = 0. Since the vote counters are synchronized, all robots start the new maneuver at the same time. We assume that every maneuver takes a bounded number of rounds and at least K rounds.

Theorem 1.
If the number of consecutive rounds with message loss in the system is less than K, Protocol 2 satisfies the following properties:
1) Safety: if a robot is in the WAIT state, then eventually all robots reach the WAIT state. Further, if a robot changes its state to PROGRESS at round r, all other robots change their state to PROGRESS at round r; and
2) Liveness: each robot executes the PROGRESS state infinitely often.

Proof. We first prove that property (1) holds under the assumptions. Consider any robot r_i that reaches the WAIT state. Observe that r_i remains in WAIT until it learns that all other robots are in the WAIT state (while ∧_{k=1}^{n} M_i[k] = FALSE), or until it receives VOTING from a robot. Therefore, the earliest time that a robot can change its state to VOTING is when all robots have been in WAIT for at least one round. Let m be the round where the last robot changes its state to WAIT. Observe that there is a round in the interval [m, m + K] where all robots receive the message with the state WAIT, since the number of consecutive rounds with message loss in the system is less than K. Consider the first robot r_i that hears WAIT from all other robots in round m' ∈ [m, m + K]. Then r_i changes its state to VOTING and broadcasts v_i + j in each round j after m'. Observe that r_i changes its state to PROGRESS when v_i + j > K. When any other robot, say r_k, receives the message from r_i at any round during this interval, it changes its state to VOTING and sets v_k = v_i + j. Therefore, every robot changes its state to PROGRESS when v_i + j > K.

It is not difficult to see that property (2) holds under the assumption that the number of consecutive rounds with message loss is less than K. □

Observe that from Theorem 1, robots reach a consensus to start a new maneuver at the same time; the protocol therefore provides a synchronous robotic system. Although one can expect the number of consecutive rounds with message loss to be less than K, in practice we can have communication faults, i.e., the number of consecutive rounds with message loss can be at least K. In that case, properties (1) and (2) may not hold. Indeed, it is known that the consensus problem cannot be solved deterministically if messages can be lost. To alleviate the problem, we consider two operation modes: COOPERATIVE mode, where every robot receives the message from every other robot, and AUTONOMOUS mode, where at least one robot does not receive the message from some other robot. Observe that if there is a partition among the robots such that some robots are in AUTONOMOUS mode and others are in COOPERATIVE mode, a cooperative robotic application may not be completed successfully, or robots may even collide. Therefore, the goal is to let robots agree on the operation mode. However, this is again a consensus problem, and consequently there is no deterministic algorithm that solves it. We can, however, minimize the number of rounds where robots disagree using the protocol proposed in [8]. In this protocol, when a robot does not receive a message at round r from another robot, or when it receives the AUTONOMOUS operation mode from any other robot, it changes its operation mode to AUTONOMOUS. The robots return to COOPERATIVE operation mode when they receive the AUTONOMOUS operation mode from every other robot.
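The mode-switching rule can be sketched as a per-round update function. The sketch below is our reading of the rule described above, not the protocol of [8] itself; the encoding of the modes and the precedence between the two conditions are assumptions.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { COOPERATIVE, AUTONOMOUS } Mode;

/* One round of the operation-mode rule for a robot.
   received[k] : whether this robot heard robot k this round
   modes[k]    : the operation mode robot k announced
   n, me       : number of robots and our own index              */
Mode nextMode(Mode self, int n, int me,
              const bool received[], const Mode modes[]) {
    bool allAutonomous = true, anyFault = false;
    for (int k = 0; k < n; k++) {
        if (k == me) continue;
        if (!received[k] || modes[k] == AUTONOMOUS) anyFault = true;
        if (!received[k] || modes[k] != AUTONOMOUS) allAutonomous = false;
    }
    if (self == AUTONOMOUS)
        /* return to COOPERATIVE only once every other robot
           has announced AUTONOMOUS as well */
        return allAutonomous ? COOPERATIVE : AUTONOMOUS;
    /* a lost message, or an AUTONOMOUS announcement anywhere,
       switches the robot to AUTONOMOUS */
    return anyFault ? AUTONOMOUS : COOPERATIVE;
}
```

Because every robot applies the same rule to (almost) the same received information, disagreement windows stay short, which is exactly what Lemma 1 bounds.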
Lemma 1 (Stream Consensus [8]). Robots may disagree in at most two consecutive rounds.
Consider the case where robots are in the WAIT or VOTING state. Observe that if the robots change to AUTONOMOUS operation mode due to message loss, then it is safe to start the next maneuver when they change back to COOPERATIVE operation mode. From Lemma 1, robots may start within two rounds of difference. However, when robots are in the PROGRESS state, they always send COOPERATIVE as the operation mode and do not execute the Stream Consensus protocol. The following theorem summarizes the previous discussion and is presented without proof.

Theorem 2. Protocol 2 allows the robots to start a new maneuver within at most two rounds of difference. Further, all robots have common knowledge before starting a new maneuver.
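The behaviour guaranteed by Theorems 1 and 2 can be illustrated with a small round-based simulation of the PROGRESS/WAIT/VOTE cycle under reliable broadcast. This is a didactic model, not the SyROF implementation: the number of robots, K, the maneuver lengths and the bookkeeping are chosen purely for illustration.

```c
#include <assert.h>
#include <string.h>

#define N 3   /* robots */
#define K 3   /* voting rounds before the next maneuver starts */

typedef enum { PROGRESS, WAIT, VOTE } St;

typedef struct {
    St  st;
    int done;       /* rounds left in the current maneuver  */
    int inWait[N];  /* M_i: who we know has reached WAIT    */
    int v;          /* v_i: voting round counter            */
} Robot;

static int allWait(const Robot *r) {
    for (int k = 0; k < N; k++) if (!r->inWait[k]) return 0;
    return 1;
}

/* One synchronous round with reliable broadcast: every robot sees the
   state every other robot had at the START of the round. */
void round_(Robot r[]) {
    Robot snap[N];
    memcpy(snap, r, sizeof snap);
    for (int i = 0; i < N; i++) {
        int maxV = -1;
        for (int k = 0; k < N; k++) {
            if (snap[k].st == WAIT || snap[k].st == VOTE) r[i].inWait[k] = 1;
            if (snap[k].st == VOTE && snap[k].v > maxV) maxV = snap[k].v;
        }
        switch (r[i].st) {
        case PROGRESS:
            if (--r[i].done == 0) r[i].st = WAIT;  /* maneuver finished */
            break;
        case WAIT:
            if (allWait(&r[i]) || maxV >= 0) {     /* everyone is ready */
                r[i].st = VOTE;
                r[i].v = (maxV >= 0) ? maxV : 0;   /* adopt v_k if heard */
            }
            break;
        case VOTE:
            if (++r[i].v >= K) {                   /* start together */
                r[i].st = PROGRESS;
                r[i].done = 2 + i;                 /* next maneuver length */
                memset(r[i].inWait, 0, sizeof r[i].inWait);
                r[i].v = 0;
            }
            break;
        }
    }
}
```

Even though the robots finish their maneuvers at different rounds, the voting phase forces every VOTE-to-PROGRESS transition to happen in the same round for all of them, which is the safety property of Theorem 1.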
Now we show how to remove the assumption that robots need to know the number of active robots. During the PROGRESS state, robots implement a membership service. More precisely, each robot r_i maintains the set members_i of participating robots that it receives messages from. Further, each robot maintains the set membersInWait_i of robots that are in the WAIT state, and broadcasts membersInWait_i while it is in the WAIT state. A robot decides to change its state to VOTING when it receives membersInWait_k = members_i from every other robot.

8 Programming Interface

In this section we present the implementation of the programming interface that is used to implement robotic applications in the framework. It consists of three callback functions and one method to send messages; see Interface 3.

Interface 3 Programming Interface
1: void *init()
2: uint8_t (*newRound)(CarCommand *command, LDMap *ldmap, uint8_t globalState, uint32_t clock)
3: uint8_t (*endOfRound)(LDMap *ldmap, uint8_t globalState, uint32_t clock)
4: void sendMessageData(uint8_t *payload)

– init. It is called when the system starts and is used to initialize the local state of the robot.
– newRound. It is called at the beginning of each application round, i.e., when the robots change their state to PROGRESS. This function allows setting the robot's commands. Application rounds are suggested to be of 1 or 2 seconds to let the robot make progress. The function receives the local dynamic map, which contains the state of all the participant robots at the end of the previous round. If the operation mode in the local dynamic map is COOPERATIVE, then all robots agree on the content of the local dynamic map. It also receives the globalState and the local clock.
– endOfRound. The function receives the local dynamic map that contains the state of all the participant robots at the end of the round, i.e., when the robots complete the maneuver. It also receives the globalState and the local clock. The TIOA is responsible for performing the transitions of globalState accordingly.
– sendMessageData. It allows broadcasting up to 9 bytes in each round.

We enhance SyROF by implementing a set of local and distributed queries to simplify the design of cooperative robotic applications:

– NumberOfMembers. Returns the number of active members performing the current primitive.
– Leader. Returns the leader, defined as the robot with minimum id.
– ShortestEnclosingCircle. Returns the center and the radius of the smallest circle that contains all the robots.
– ConvexHull. Returns the list of active robots on the convex hull.
– MinWeightedMatching. Given a set of points, returns the matching point for each robot that minimizes the maximum sum of distances from each point to its destination.
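As an illustration of the interface, the following newRound callback implements a simple gathering behaviour: each robot steers toward the centroid of all participants in the local dynamic map. Only the function signature comes from Interface 3; the field layouts of CarCommand and LDMap, and the convention that entry 0 of the map is the robot itself, are hypothetical.

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Hypothetical layouts; the real CarCommand/LDMap fields may differ. */
typedef struct { float vx, vy; } CarCommand;
typedef struct { float x, y; } Member;
typedef struct {
    uint8_t numberOfMembers;
    Member  member[8];
} LDMap;

/* newRound callback: steer toward the centroid of all participants.
   Returns the next globalState (here unchanged: the TIOA decides). */
uint8_t newRound(CarCommand *cmd, LDMap *ldmap,
                 uint8_t globalState, uint32_t clock) {
    (void)clock;
    float cx = 0.0f, cy = 0.0f;
    for (uint8_t i = 0; i < ldmap->numberOfMembers; i++) {
        cx += ldmap->member[i].x;
        cy += ldmap->member[i].y;
    }
    cx /= ldmap->numberOfMembers;
    cy /= ldmap->numberOfMembers;
    /* Command a velocity toward the centroid; entry 0 is assumed
       to be our own state in this sketch. */
    cmd->vx = cx - ldmap->member[0].x;
    cmd->vy = cy - ldmap->member[0].y;
    return globalState;
}
```

Because the local dynamic map is common knowledge in COOPERATIVE mode, every robot computes the same centroid, so the swarm contracts consistently round after round.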
In this paper, we presented the current work on SyROF, a testbed that allows undergraduate students to implement and demonstrate collaborative robotic applications. SyROF provides a synchronous robotic system on which cooperative applications can be implemented. We expect the testbed to lower the barriers for students of computer science and computer engineering at CSULB. We plan to perform extensive experiments with different robots. Our implementation so far demonstrates that SyROF is feasible.
References
1. Haris Balta, Janusz Bedkowski, Shashank Govindaraj, Karol Majek, Pawel Musialik, Daniel Serrano, Kostas Alexis, Roland Siegwart, and Geert De Cubber. Integrated data management for a fleet of search-and-rescue robots. Journal of Field Robotics, 34(3):539–582, 2017.
2. Shih-Yi Chien, Michael Lewis, Siddharth Mehrotra, Nathan Brooks, and Katia Sycara. Scheduling operator attention for multi-robot control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 473–479. IEEE, 2012.
3. Reuven Cohen and David Peleg. Local spreading algorithms for autonomous robot systems. Theoretical Computer Science, 399(1-2):71–82, 2008.
4. Duane T Davis, Timothy H Chung, Michael R Clement, and Michael A Day. Consensus-based data sharing for large-scale aerial swarm coordination in lossy communications environments. In , pages 3801–3808. IEEE, 2016.
5. Barzin Doroodgar, Yugang Liu, and Goldie Nejat. A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims. IEEE Trans. Cybernetics, 44(12):2719–2732, 2014.
6. Paola Flocchini, Giuseppe Prencipe, Nicola Santoro, and Peter Widmayer. Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theoretical Computer Science, 407(1-3):412–447, 2008.
7. Simon J Julier and Jeffrey K Uhlmann. New extension of the Kalman filter to nonlinear systems. In Signal Processing, Sensor Fusion, and Target Recognition VI, volume 3068, pages 182–194. International Society for Optics and Photonics, 1997.
8. Oscar Morales-Ponce, Elad M Schiller, and Paolo Falcone. How to stop disagreeing and start cooperating in the presence of asymmetric packet loss. Sensors (Basel, Switzerland), 18(4), 2018.
9. Daniel Pickem, Myron Lee, and Magnus Egerstedt. The GRITSBot in its natural habitat - a multi-robot testbed. In , pages 4062–4067. IEEE, 2015.
10. Carlo Pinciroli, Adam Lee-Brown, and Giovanni Beltrame. A tuple space for data sharing in robot swarms. In Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS), pages 287–294. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2016.
11. Morgan Quigley, Brian Gerkey, Ken Conley, Josh Faust, Tully Foote, Jeremy Leibs, Eric Berger, Rob Wheeler, and Andrew Ng. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, 2009.
12. Ariel Rosenfeld, Noa Agmon, Oleg Maksimov, and Sarit Kraus. Intelligent agent supporting human–multi-robot team collaboration. Artificial Intelligence, 252:211–231, 2017.
13. Michael Rubenstein, Christian Ahler, and Radhika Nagpal. Kilobot: A low cost scalable robot system for collective behaviors. In , pages 3293–3298. IEEE, 2012.
14. Ichiro Suzuki. Formation and agreement problems for anonymous mobile robots. In Proceedings of the 31st Annual Allerton Conference on Communication, Control, and Computing, University of Illinois, Urbana, Illinois, October 1993.
15. Vivek Shankar Varadharajan, David St Onge, Christian Guß, and Giovanni Beltrame. Over-the-air updates for robotic swarms. IEEE Software, 35(2):44–50, 2018.
16. Xu Wei, Duan Fengyang, Zhang Qingjie, Zhu Bing, and Sun Hongchang. A new fast consensus algorithm applied in rendezvous of multi-UAV. In The 27th Chinese Control and Decision Conference (2015 CCDC), pages 55–60. IEEE, 2015.
17. Yukiko Yamauchi, Taichi Uehara, Shuji Kijima, and Masafumi Yamashita. Plane formation by synchronous mobile robots in the three-dimensional Euclidean space. Journal of the ACM (JACM), 64(3):16, 2017.
18. Yukiko Yamauchi and Masafumi Yamashita. Pattern formation by mobile robots with limited visibility. In