Miguel Aranda
University of Zaragoza
Publication
Featured research published by Miguel Aranda.
Systems, Man and Cybernetics | 2012
Gonzalo López-Nicolás; Miguel Aranda; Youcef Mezouar; Carlos Sagüés
This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework, relying on the homography induced by the multirobot system, in which a desired homography defines the reference target, together with a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
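The framework above is built on planar homographies. As a minimal illustrative sketch (not the paper's controller), the function below shows how a 3x3 homography maps an image point expressed in homogeneous coordinates; the function name and the example matrix are hypothetical.

```python
# Minimal sketch (not the paper's control law): applying a 3x3 planar
# homography H to an image point via homogeneous coordinates, p' ~ H p.
def apply_homography(H, point):
    """Map a 2D point (x, y) through homography H given as a 3x3 nested list."""
    x, y = point
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    wp = H[2][0] * x + H[2][1] * y + H[2][2]
    # Dehomogenize to recover the mapped pixel location.
    return (xp / wp, yp / wp)

# Example: a pure-translation homography shifts every point by (2, -1).
H_shift = [[1.0, 0.0, 2.0],
           [0.0, 1.0, -1.0],
           [0.0, 0.0, 1.0]]
```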
IEEE Transactions on Robotics | 2015
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés; Youcef Mezouar
This paper describes a new vision-based control method to drive a set of robots moving on the ground plane to a desired formation. As the main contribution, we propose to use multiple camera-equipped unmanned aerial vehicles (UAVs) as control units. Each camera views, and is used to control, a subset of the ground team. Thus, the method is partially distributed, combining the simplicity of centralized schemes with the scalability and robustness of distributed strategies. Relying on a homography computed for each UAV-mounted camera, our approach is purely image-based and has low computational cost. In the control strategy we propose, if a robot is seen by multiple cameras, it computes its motion by combining the commands it receives. Then, if the intersections between the sets of robots viewed by the different cameras satisfy certain conditions, we formally guarantee the stabilization of the formation, considering unicycle robots. We also propose a distributed algorithm to control the camera motions that preserves these required overlaps, using communications. The effectiveness of the presented control scheme is illustrated via simulations and experiments with real robots.
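The abstract states that a robot seen by multiple cameras combines the commands it receives, without giving the combination rule here. A plain average is assumed below purely for illustration; the function name is hypothetical.

```python
# Hedged sketch: a ground robot viewed by several UAV-mounted cameras gets one
# velocity command per camera and fuses them. The specific fusion rule is an
# assumption (simple averaging), not necessarily the paper's exact scheme.
def fuse_commands(commands):
    """Average a list of (vx, vy) velocity commands; return zero if unseen."""
    if not commands:
        return (0.0, 0.0)
    n = len(commands)
    vx = sum(c[0] for c in commands) / n
    vy = sum(c[1] for c in commands) / n
    return (vx, vy)
```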
Automatica | 2015
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés; Michael M. Zavlanos
This paper presents a method to stabilize a group of agents moving in a two-dimensional space to a desired rigid geometric configuration. A common approach is to use information of relative interagent position vectors to carry out this specific control task. However, existing works in this vein either require the agents to express their measurements in a global coordinate reference, or generally fail to provide global stability guarantees. Our contribution is a globally convergent method that uses relative position information expressed in each agent's local reference frame, and can be implemented in a distributed networked fashion. The proposed control strategy, which is shown to have exponential convergence properties, makes each agent move so as to minimize a cost function that encompasses all the agents in the team and captures the collective control objective. The coordinate-free nature of the method emerges through the introduction of a rotation matrix, computed by each agent, in the cost function. We consider that the agents form a nearest-neighbor communications network, and they obtain the required relative position information via multi-hop propagation, which is inherently affected by time-delays. We support the feasibility of such distributed networked implementation by obtaining global stability guarantees for the formation controller when these time-delays are incorporated in the analysis. The performance of our approach is illustrated with simulations.
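The coordinate-free idea can be sketched in a few lines: align the desired inter-agent displacements with the measured ones through a best-fit rotation (the closed-form 2D Procrustes angle), then move each agent toward the rotated targets. This is an illustrative simplification, not the paper's exact controller; function names and the gain are assumptions.

```python
import math

def best_fit_angle(desired, current):
    """Rotation angle minimizing sum ||current_k - R(theta) * desired_k||^2.

    Closed form in 2D: theta = atan2(sum of cross products, sum of dot products).
    """
    s = sum(d[0] * c[1] - d[1] * c[0] for d, c in zip(desired, current))
    co = sum(d[0] * c[0] + d[1] * c[1] for d, c in zip(desired, current))
    return math.atan2(s, co)

def formation_velocities(positions, shape, gain=1.0):
    """Velocity for each agent toward the best-fit rotated desired formation."""
    n = len(positions)
    desired, current = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                desired.append((shape[i][0] - shape[j][0], shape[i][1] - shape[j][1]))
                current.append((positions[i][0] - positions[j][0],
                                positions[i][1] - positions[j][1]))
    th = best_fit_angle(desired, current)
    c, s = math.cos(th), math.sin(th)
    vels = []
    for i in range(n):
        vx = vy = 0.0
        for j in range(n):
            if i != j:
                dx = shape[i][0] - shape[j][0]
                dy = shape[i][1] - shape[j][1]
                # Desired displacement from j to i, rotated into the current frame.
                tx, ty = c * dx - s * dy, s * dx + c * dy
                vx += positions[j][0] + tx - positions[i][0]
                vy += positions[j][1] + ty - positions[i][1]
        vels.append((gain * vx / (n - 1), gain * vy / (n - 1)))
    return vels
```

When the agents already occupy a rotated and translated copy of the desired shape, the best-fit angle recovers the rotation exactly and all velocities vanish, matching the invariance the abstract describes.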
International Conference on Robotics and Automation | 2010
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés
This paper presents a new method for visual homing to be used on a robot moving on the ground plane. A relevant issue in vision-based navigation is the field-of-view constraints of conventional cameras. We overcome this problem by means of omnidirectional vision and we propose a vision-based homing control scheme that relies on the 1D trifocal tensor. The technique employs a reference set of images of the environment previously acquired at different locations and the images taken by the robot during its motion. In order to take advantage of the qualities of omnidirectional vision, we define a purely angle-based approach, without requiring any distance information. This approach, taking the planar motion constraint into account, motivates the use of the 1D trifocal tensor. In particular, the additional geometric constraints enforced by the tensor improve the robustness of the method in the presence of mismatches. The interest of our proposal is that the designed control scheme computes the robot velocities from angular information only, which is very precise; in addition, we present a procedure that computes the angular relations between all the views even if they are not directly related by feature matches. The feasibility of the proposed approach is supported by the stability analysis and the results from simulations and experiments with real images.
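In bearing-only (1D) vision, each feature matched across three views gives one linear equation, sum over i, j, k of T_ijk * u_i * v_j * w_k = 0, in the eight entries of the 1D trifocal tensor. The sketch below only builds that equation's coefficient row from three bearing angles; the (sin, cos) parameterization of the projective bearing and the function names are assumptions, and solving the stacked homogeneous system is left out to keep the sketch dependency-free.

```python
import math

def bearing(theta):
    """Homogeneous 1D projection of a landmark seen at angle theta.

    The (sin, cos) convention is one common choice for 1D bearings.
    """
    return (math.sin(theta), math.cos(theta))

def tensor_row(t1, t2, t3):
    """One linear-constraint row: coefficients of (T_111, T_112, ..., T_222)."""
    u, v, w = bearing(t1), bearing(t2), bearing(t3)
    return [u[i] * v[j] * w[k] for i in range(2) for j in range(2) for k in range(2)]
```

Stacking one such row per matched feature and extracting the null space (e.g. by SVD) yields the tensor estimate; extra correspondences give the robustness to mismatches mentioned above.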
IEEE Transactions on Automatic Control | 2016
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés; Michael M. Zavlanos
In this paper, we present a novel distributed method to stabilize a set of agents moving in a two-dimensional environment to a desired rigid formation. In our approach, each agent computes its control input using the relative positions of a set of formation neighbors but, contrary to most existing works, this information is expressed in each agent's own independent local coordinate frame, without requiring any common reference. The controller is based on the minimization of a Lyapunov function that includes locally computed rotation matrices, which are required due to the absence of a common orientation. Our contribution is that the proposed distributed coordinate-free method achieves global stabilization to a rigid formation with the agents using only partial information of the team, does not require any leader units, and is applicable to both single-integrator and unicycle agents. To guarantee global stability, we require that the network induced by the agent interactions belongs to a certain class of undirected rigid graphs in two dimensions, which we explicitly characterize. The performance of the proposed method is illustrated with numerical simulations.
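A controller designed for single integrators can also drive a unicycle via a standard conversion: project the commanded planar velocity onto the robot's heading for the forward speed, and steer toward the command direction. This is a common technique sketched here under assumed names and gain, not necessarily the paper's exact transformation.

```python
import math

def unicycle_from_integrator(ux, uy, theta, k_turn=2.0):
    """Convert a desired planar velocity (ux, uy) into unicycle inputs (v, omega).

    theta is the robot heading; k_turn is an illustrative steering gain.
    """
    # Forward speed: component of the commanded velocity along the heading.
    v = ux * math.cos(theta) + uy * math.sin(theta)
    desired_heading = math.atan2(uy, ux)
    # Heading error wrapped to (-pi, pi] to avoid winding.
    err = math.atan2(math.sin(desired_heading - theta),
                     math.cos(desired_heading - theta))
    return v, k_turn * err
```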
Intelligent Robots and Systems | 2014
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés; Michael M. Zavlanos
This paper presents a novel method that enables a team of aerial robots to enclose a target in 3D space by attaining a desired geometric formation around it. We propose an approach in which each robot obtains its motion commands using measurements of the relative position of the other agents and of the target, without the need for a central coordinator. As a contribution, our method permits any desired 3D target enclosing configuration to be defined, in contrast with the planar circular patterns commonly encountered in the literature. The proposed control strategy relies on the minimization of a cost function that captures the collective motion objective. In our method, the robots do not need to use a common reference frame. This coordinate independence is achieved through the introduction in the cost function of a rotation matrix computed locally by each robot. We prove that our motion controller is exponentially stable, and illustrate its performance through simulations.
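The enclosing objective can be sketched with the locally computed rotation fixed to the identity for simplicity (the paper optimizes it): each robot flies toward its slot of an arbitrary 3D pattern centered on the target. Names, the gain, and the example tetrahedral pattern are illustrative assumptions.

```python
def enclosing_velocities(positions, target, pattern, gain=1.0):
    """Velocities driving robots to target + pattern offsets (all 3D tuples).

    Simplification: the best-fit rotation from the paper is taken as identity.
    """
    vels = []
    for p, c in zip(positions, pattern):
        vels.append(tuple(gain * (target[k] + c[k] - p[k]) for k in range(3)))
    return vels

# Any 3D enclosing pattern is allowed, e.g. tetrahedron vertices around the target.
tetra = [(1.0, 1.0, 1.0), (1.0, -1.0, -1.0), (-1.0, 1.0, -1.0), (-1.0, -1.0, 1.0)]
```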
American Control Conference | 2013
Miguel Aranda; Youcef Mezouar; Gonzalo López-Nicolás; Carlos Sagüés
We present a new method for visual control of a set of robots moving on the ground plane. As contributions, we first propose a purely image-based control strategy that drives the set to a desired configuration while minimizing at all times the sum of the squared distances the robots have to travel. This homography-based method, which has low computational cost and generates smooth trajectories for the robots, is then used in a multirobot control framework featuring multiple cameras, each of them observing a subset of the robot team. In particular, we present a novel control approach that makes the complete robot set reach its global target configuration when there exists partial overlap between the subsets of robots observed by the different cameras. Each camera is associated with a control unit which sends the control commands to its observed subset of robots, but no other communication is required between the robots or control units. Our method, which overcomes the field-of-view limitations of single-camera methods and increases their scalability, exploits the advantages of both centralized and distributed multirobot control strategies. Simulations are provided to illustrate the performance of the proposal.
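The "minimum total squared travel" idea has a simple closed form in the translation-only case, sketched here as an assumption-laden illustration of the objective (the paper handles the full image-based setting): among all translations of the desired configuration, the sum of squared robot displacements is minimized by matching centroids.

```python
def best_translation(positions, shape):
    """Translation t minimizing sum ||p_i - (c_i + t)||^2 over the robots.

    Setting the gradient to zero gives t = mean(positions) - mean(shape).
    """
    n = len(positions)
    px = sum(p[0] for p in positions) / n
    py = sum(p[1] for p in positions) / n
    cx = sum(c[0] for c in shape) / n
    cy = sum(c[1] for c in shape) / n
    return (px - cx, py - cy)
```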
Autonomous Robots | 2013
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés
This paper presents a visual homing method for a robot moving on the ground plane. The approach employs a set of omnidirectional images acquired previously at different locations (including the goal position) in the environment, and the current image taken by the robot. As a contribution, we present a method to obtain the relative angles between all these locations, using the computation of the 1D trifocal tensor between views and an indirect angle estimation procedure. The tensor is particularly well suited for planar motion and provides important robustness properties to our technique. Another contribution of our paper is a new control law that uses the available angles, with no range information involved, to drive the robot to the goal. Therefore, our method takes advantage of the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. We present a formal proof of the stability of the proposed control law. The performance of our approach is illustrated through simulations and different sets of experiments with real images.
International Conference on Control, Automation, Robotics and Vision | 2012
Miguel Aranda; Gonzalo López-Nicolás; Carlos Sagüés
This paper addresses the estimation of planar camera motion using 1D homographies. As contributions, we show analytically that, contrary to what occurs with the 2D homography, there is an infinite family of solutions to the 1D homography decomposition, and therefore infinite possible motion reconstructions. In addition, we propose a new method to compute the planar motion between two images from the information provided by two different 1D homographies, employing their associated homology transformations. Therefore, our approach computes a general planar camera motion from only two 1D views, whereas previous works needed three 1D views for this task. The use of 1D information makes the method particularly suitable for omnidirectional cameras, due to the wide field of view and precise angular information provided by this kind of sensor. The performance of our proposal is illustrated through simulations and experiments on real images.
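A 1D homography is a 2x2 matrix acting on points of the projective line, x' ~ H x; in inhomogeneous coordinates this is a Moebius-type map. The small sketch below only shows that action (the decomposition analyzed in the paper is not reproduced); the function name is hypothetical.

```python
# Hedged sketch: applying a 1D homography H = [[a, b], [c, d]] to a point of
# the projective line, written in inhomogeneous form x -> (a x + b) / (c x + d).
def apply_1d_homography(H, x):
    """Map scalar coordinate x through the 2x2 homography H."""
    a, b = H[0]
    c, d = H[1]
    return (a * x + b) / (c * x + d)
```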
Advances in Computing and Communications | 2016
Miguel Aranda; Rosario Aragues; Gonzalo López-Nicolás; Carlos Sagüés
In this paper, we present a novel formation control method to stabilize the positions of a multiagent team moving in a two-dimensional environment to a specified rigid pattern. Agent interactions are typically range-constrained in this kind of system, which makes it critical to maintain the connectivity of the underlying network formed by the mobile agents to enable completion of the desired task. To address this issue, we study the problem of connectivity preservation coupled with the formation control objective. Our contribution is a globally stable formation stabilization approach that maintains connectivity and is designed for unicycle kinematics. Each agent computes its motion using the relative positions of the other agents, expressed in its local arbitrarily oriented coordinate frame. To preserve connectivity, our method relies on a procedure where the desired formation is adaptively scaled to ensure maintenance of the links in the minimum-distance spanning tree of the communications graph. This way, instead of requiring additional control components for connectivity management, which may interfere with the formation control objective, we integrate the two goals in a single bounded control input. We show formally that the controller provides global stability and ensures connectivity maintenance, and we illustrate its performance in simulation.
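The scaling idea can be sketched with a static rule (the paper's adaptive law is more involved): compute the minimum-distance spanning tree and shrink the desired formation whenever its longest tree edge would exceed the communication range. Function names and the rule itself are illustrative assumptions.

```python
import math

def mst_max_edge(points):
    """Longest edge of the Euclidean minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n
    dist[0] = 0.0
    longest = 0.0
    for _ in range(n):
        # Pick the closest point not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        longest = max(longest, dist[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.hypot(points[u][0] - points[v][0],
                               points[u][1] - points[v][1])
                dist[v] = min(dist[v], d)
    return longest

def formation_scale(desired, comm_range):
    """Scale in (0, 1] keeping the desired formation's MST links within range."""
    longest = mst_max_edge(desired)
    return min(1.0, comm_range / longest) if longest > 0 else 1.0
```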