Publication


Featured research published by Adrian Carrio.


International Conference on Robotics and Automation | 2014

Robust real-time vision-based aircraft tracking from Unmanned Aerial Vehicles

Changhong Fu; Adrian Carrio; Miguel A. Olivares-Mendez; Ramon Suarez-Fernandez; Pascual Campoy

Aircraft tracking plays a key role in the Sense-and-Avoid system of Unmanned Aerial Vehicles (UAVs). This paper presents a novel, robust visual tracking algorithm that allows a UAV in midair to track an arbitrary aircraft at real-time frame rates, together with a unique evaluation system. The visual algorithm combines an adaptive discriminative visual tracking method, a Multiple-Instance (MI) learning approach, a Multiple-Classifier (MC) voting mechanism and a Multiple-Resolution (MR) representation strategy, and is therefore called the Adaptive M3 (AM3) tracker. In this tracker, the importance of each test sample is integrated to improve tracking stability, accuracy and real-time performance. Experimental results show that the algorithm is more robust, efficient and accurate than existing state-of-the-art trackers, overcoming problems caused by challenging situations such as obvious appearance changes, varying ambient illumination, partial aircraft occlusion, motion blur, rapid pose variation, onboard mechanical vibration, low computational capacity and delayed communication between the UAV and the Ground Station (GS). To the best of our knowledge, this is the first work to present a tracker for online learning and tracking of an arbitrary aircraft/intruder from a UAV.
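The Multiple-Classifier voting idea above can be illustrated with a minimal sketch: several classifiers each score a set of candidate patches, and a weighted vote selects the winner. This is an illustrative toy, not the authors' AM3 implementation; the function name, scores and weights are invented for the example.

```python
import numpy as np

def mc_vote(scores, weights=None):
    """Combine per-classifier confidence scores for candidate patches
    by weighted voting; returns the index of the winning candidate."""
    scores = np.asarray(scores, dtype=float)   # shape: (n_classifiers, n_candidates)
    if weights is None:
        weights = np.ones(scores.shape[0])
    combined = np.asarray(weights) @ scores    # weighted sum across classifiers
    return int(np.argmax(combined))

# Three classifiers score four candidate patches; classifier 2 is trusted more.
scores = [[0.2, 0.9, 0.1, 0.3],
          [0.1, 0.7, 0.8, 0.2],
          [0.3, 0.8, 0.2, 0.1]]
best = mc_vote(scores, weights=[1.0, 2.0, 1.0])
```

In a real tracker the per-classifier weights would be adapted online as samples are labeled, which is where the sample-importance weighting mentioned in the abstract would enter.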


Sensors | 2015

Automated Low-Cost Smartphone-Based Lateral Flow Saliva Test Reader for Drugs-of-Abuse Detection.

Adrian Carrio; Carlos Sampedro; Jose Luis Sanchez-Lopez; Miguel Pimienta; Pascual Campoy

Lateral flow assay tests are nowadays becoming powerful, low-cost diagnostic tools. Obtaining a result is usually subject to visual interpretation of colored areas on the test by a human operator, which introduces subjectivity and the possibility of errors in the extraction of the results. While automated test readers providing a consistent reading are widely available, they usually lack portability. In this paper, we present a smartphone-based automated reader for drug-of-abuse lateral flow assay tests, consisting of an inexpensive light box and a smartphone device. Test images captured with the smartphone camera are processed on the device using computer vision and machine learning techniques to automatically extract the results. A thorough validation has been carried out, demonstrating the high accuracy of the system. The proposed approach, applicable to any line-based or color-based lateral flow test on the market, effectively reduces the manufacturing cost of the reader and makes it portable and widely available while providing accurate, reliable results.
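The core of reading a line-based lateral flow strip can be sketched as follows: average the grayscale strip across its width to get a 1-D intensity profile, then flag rows that are significantly darker than the background. This is a minimal illustration of the idea, not the paper's pipeline; the function name and threshold are assumptions.

```python
import numpy as np

def detect_test_lines(strip, threshold=0.25):
    """Given a grayscale image of the test strip (2-D array, white background,
    dark reaction lines), return row indices whose darkness relative to the
    background exceeds `threshold`."""
    strip = np.asarray(strip, dtype=float)
    profile = strip.mean(axis=1)          # 1-D intensity profile along the strip
    background = np.median(profile)       # robust estimate of the white background
    darkness = (background - profile) / background
    return np.flatnonzero(darkness > threshold)

# Synthetic 10-row strip: background 0.9, two dark lines at rows 2 and 7.
strip = np.full((10, 5), 0.9)
strip[2, :] = 0.3
strip[7, :] = 0.4
rows = detect_test_lines(strip)
```

A real reader would additionally correct for perspective and illumination (which is what the light box helps control) before extracting the profile.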


International Conference on Unmanned Aircraft Systems | 2015

Efficient visual odometry and mapping for Unmanned Aerial Vehicle using ARM-based stereo vision pre-processing system

Changhong Fu; Adrian Carrio; Pascual Campoy

Visual odometry and mapping methods can provide accurate navigation and comprehensive environment (obstacle) information for autonomous flights of Unmanned Aerial Vehicles (UAVs) in GPS-denied, cluttered environments. This work presents a new lightweight, small-scale, low-cost ARM-based stereo vision pre-processing system, which is used not only as an onboard sensor to continuously estimate the 6-DOF UAV pose, but also as an onboard assistant computer that pre-processes visual information, thereby saving computational capacity on the UAV's onboard host computer for other tasks. Visual odometry is performed by a plugin specifically developed for this new system, which has a fixed baseline (12 cm). In addition, the pre-processed information from this new system is sent via a Gigabit Ethernet cable to the onboard host computer of the UAV for real-time environment reconstruction and obstacle detection with an octree-based 3D occupancy grid mapping approach, i.e. OctoMap. The visual algorithm is evaluated with the stereo video datasets from EuRoC Challenge III in terms of efficiency, accuracy and robustness. Finally, the new system is mounted and tested on a real quadrotor UAV carrying out visual odometry and mapping tasks.
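The occupancy mapping approach mentioned above (OctoMap) maintains a clamped log-odds occupancy value per voxel. A minimal sketch of that update rule, for a single cell, is shown below; the increment and clamp values are illustrative approximations of typical defaults, not taken from the paper.

```python
import math

L_HIT, L_MISS = 0.85, -0.4     # log-odds increments for occupied/free observations
L_MIN, L_MAX = -2.0, 3.5       # clamping bounds, allowing the map to adapt quickly

def update_cell(logodds, hit):
    """One occupancy update for a single voxel: add the hit/miss log-odds
    increment and clamp, as in occupancy-grid / octree mapping."""
    logodds += L_HIT if hit else L_MISS
    return min(max(logodds, L_MIN), L_MAX)

def occupancy_probability(logodds):
    """Convert log-odds back to a probability of occupancy."""
    return 1.0 / (1.0 + math.exp(-logodds))

l = 0.0                        # unknown cell: p = 0.5
for _ in range(3):             # cell observed occupied three times
    l = update_cell(l, hit=True)
```

Clamping is what makes the map efficiently updatable when the environment changes: a long-occupied voxel can be freed again after a bounded number of "miss" observations.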


International Conference on Unmanned Aircraft Systems | 2014

A Vision-based Quadrotor Swarm for the participation in the 2013 International Micro Air Vehicle Competition

Jesús Pestana; Jose Luis Sanchez-Lopez; Paloma de la Puente; Adrian Carrio; Pascual Campoy

This paper presents a completely autonomous solution to participate in the 2013 International Micro Air Vehicle Indoor Flight Competition (IMAV2013). Our proposal is a modular multi-robot swarm architecture, based on the Robot Operating System (ROS) software framework, where the only information shared among swarm agents is each robot's position. Each swarm agent consists of an AR Drone 2.0 quadrotor connected to a laptop which runs the software architecture. In order to present a completely vision-based solution, the localization problem is simplified by the use of ArUco visual markers. These visual markers are used to sense and map obstacles and to improve the pose estimation, based on IMU and optical flow data, by means of an Extended Kalman Filter localization and mapping method. The presented solution and the performance of the CVG UPM team were awarded the First Prize in the Indoor Autonomy Challenge of the IMAV2013 competition.
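The marker-based pose correction described above follows the standard Extended Kalman Filter structure: the IMU/optical-flow prediction is corrected whenever a marker yields a position fix. The sketch below shows only the correction step for a direct position measurement (so the measurement Jacobian is the identity); it is a generic EKF illustration, not the paper's estimator, and all numbers are invented.

```python
import numpy as np

def ekf_update(x, P, z, R):
    """EKF correction step for a direct position measurement (H = I),
    e.g. a pose fix derived from a detected visual marker."""
    H = np.eye(len(x))
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # corrected state
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

x = np.array([1.0, 2.0])                 # predicted position from IMU/optical flow
P = np.eye(2) * 0.5                      # prediction uncertainty
z = np.array([1.2, 1.8])                 # marker-based position measurement
x_new, P_new = ekf_update(x, P, z, np.eye(2) * 0.1)
```

Because the marker measurement is much more certain than the prediction here, the corrected state moves most of the way toward the measurement and the covariance shrinks accordingly.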


Journal of Intelligent and Robotic Systems | 2016

A Vision-based Quadrotor Multi-robot Solution for the Indoor Autonomy Challenge of the 2013 International Micro Air Vehicle Competition

Jesús Pestana; Jose Luis Sanchez-Lopez; Paloma de la Puente; Adrian Carrio; Pascual Campoy

This paper presents a completely autonomous solution to participate in the Indoor Challenge of the 2013 International Micro Air Vehicle Competition (IMAV 2013). Our proposal is a multi-robot system with no centralized coordination whose robotic agents share their position estimates. The capability of each agent to navigate avoiding collisions is a consequence of the resulting emergent behavior. Each agent consists of a ground station running an instance of the proposed architecture that communicates over WiFi with an AR Drone 2.0 quadrotor. Visual markers are employed to sense and map obstacles and to improve the pose estimation based on Inertial Measurement Unit (IMU) and ground optical flow data. Based on our architecture, each robotic agent can navigate avoiding obstacles and other members of the multi-robot system. The solution is demonstrated and the achieved navigation performance is evaluated by means of experimental flights. This work also analyzes the capabilities of the presented solution in simulated flights of the IMAV 2013 Indoor Challenge. The performance of the CVG_UPM team was awarded the First Prize in the Indoor Autonomy Challenge of the IMAV 2013 competition.


Journal of Sensors | 2017

A Review of Deep Learning Methods and Applications for Unmanned Aerial Vehicles

Adrian Carrio; Carlos Sampedro; Alejandro Rodriguez-Ramos; Pascual Campoy

Deep learning has recently shown outstanding results for solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capability for learning representations from the complex data acquired in real environments makes it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are currently being extensively applied to several types of civilian tasks, in applications ranging from security, surveillance, and disaster rescue to parcel delivery and warehouse management. In this paper, a thorough review is performed of recently reported uses and applications of deep learning for UAVs, including the most relevant developments as well as their performance and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning to UAV-based solutions.


Robot | 2014

Visual Quadrotor Swarm for the IMAV 2013 Indoor Competition

Jose Luis Sanchez-Lopez; Jesús Pestana; Paloma de la Puente; Adrian Carrio; Pascual Campoy

This paper presents a low-cost framework for visual quadrotor swarm prototyping, which will be utilized to participate in the 2013 International Micro Air Vehicle Indoor Flight Competition. The testbed facilitates the swarm design problem by utilizing a cost-efficient quadrotor platform, the Parrot AR Drone 2.0; by using markers to simplify the visual localization problem; and by broadcasting the estimated locations of the swarm members to obviate the partner detection problem. The development team can then focus their attention on the design of a successful swarming behaviour for the problem at hand. ArUco codes [2] are used to sense and map obstacles and to improve the pose estimation, based on IMU data and optical flow, by means of an Extended Kalman Filter localization and mapping method. A collision-free trajectory for each drone is generated by using a combination of well-known trajectory planning algorithms: probabilistic road maps, the potential field map algorithm and the A-Star algorithm. The control loop of each drone of the swarm is closed by a robust mid-level controller. A highly modular design for integration within the Robot Operating System (ROS) [13] is proposed.
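Of the planning algorithms listed above, A-Star is the easiest to show concretely. The sketch below runs a minimal A* search on a 4-connected occupancy grid with a Manhattan-distance heuristic; it is a generic textbook illustration, not the swarm framework's planner, and the grid is invented.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (1 = obstacle).
    Returns the length of the shortest path in steps, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]   # (f = g + h, g, node)
    best = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour around the right side
        [0, 0, 0]]
steps = a_star(grid, (0, 0), (2, 0))
```

In the paper's pipeline this grid search would run over the map built from the ArUco markers, with probabilistic road maps and potential fields shaping the search space and smoothing the result.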


International Conference on Unmanned Aircraft Systems | 2017

A fully-autonomous aerial robotic solution for the 2016 International Micro Air Vehicle competition

Carlos Sampedro; Hriday Bavle; Alejandro Rodriguez-Ramos; Adrian Carrio; Ramón Suárez Fernández; Jose Luis Sanchez-Lopez; Pascual Campoy

In this paper, a fully-autonomous quadrotor aerial robot for solving the different missions proposed in the 2016 International Micro Air Vehicle (IMAV) Indoor Competition is presented. The missions proposed in the IMAV 2016 competition involve the execution of high-level tasks such as entering and exiting a building, exploring an unknown indoor environment, recognizing and interacting with objects, and landing autonomously on a moving platform. For solving the aforementioned missions, a fully-autonomous quadrotor aerial robot has been designed, based on a complete hardware configuration and a versatile software architecture, which allows the aerial robot to complete all the missions in a fully autonomous and consecutive manner. A thorough evaluation of the proposed system has been carried out in both simulated flights, using the Gazebo simulator in combination with PX4 Software-In-The-Loop, and real flights, demonstrating the appropriate capabilities of the proposed system for performing high-level missions and its flexibility for being adapted to a wide variety of applications.


International Conference on Unmanned Aircraft Systems | 2014

A ground-truth video dataset for the development and evaluation of vision-based Sense-and-Avoid systems

Adrian Carrio; Changhong Fu; Jesús Pestana; Pascual Campoy

The importance of vision-based systems for Sense-and-Avoid is increasing nowadays as remotely piloted and autonomous UAVs become part of the non-segregated airspace. The development and evaluation of these systems demand flight scenario images, which are expensive and risky to obtain. Currently, augmented reality techniques allow the compositing of real flight scenario images with 3D aircraft models to produce useful, realistic images for system development and benchmarking purposes at a much lower cost and risk. With the techniques presented in this paper, 3D aircraft models are first positioned in a simulated 3D scene with controlled illumination and rendering parameters. Realistic simulated images are then obtained using an image processing algorithm which fuses the images obtained from the 3D scene with images from real UAV flights, taking into account on-board camera vibrations. Since the intruder and camera poses are user-defined, ground truth data are available. These ground truth annotations make it possible to develop and quantitatively evaluate aircraft detection and tracking algorithms. This paper presents the software developed to create a public dataset of 24 videos, together with their annotations and some tracking application results.
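The fusion of a rendered aircraft with a real flight frame reduces, at its core, to per-pixel "over" compositing with an alpha matte produced by the renderer. The sketch below shows that blend for grayscale frames; it is a generic illustration of the compositing operation, not the paper's fusion algorithm (which also models camera vibration), and the toy images are invented.

```python
import numpy as np

def composite(background, render, alpha):
    """Blend a rendered aircraft image over a real flight frame using a
    per-pixel alpha matte in [0, 1] (standard 'over' compositing)."""
    background = np.asarray(background, dtype=float)
    render = np.asarray(render, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return alpha * render + (1.0 - alpha) * background

# 2x2 grayscale frames: the rendered aircraft fully covers only the top-left pixel.
bg = np.full((2, 2), 0.5)                   # real flight frame (mid-gray sky)
fg = np.full((2, 2), 1.0)                   # rendered aircraft (white)
alpha = np.array([[1.0, 0.0],
                  [0.0, 0.0]])              # matte from the 3D renderer
out = composite(bg, fg, alpha)
```

Because the intruder's pose in the 3D scene is user-defined, the same matte that drives the blend directly yields the ground-truth bounding box for each frame.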


Robot | 2016

UBRISTES: UAV-Based Building Rehabilitation with Visible and Thermal Infrared Remote Sensing

Adrian Carrio; Jesús Pestana; Jose-Luis Sanchez-Lopez; Ramon Suarez-Fernandez; Pascual Campoy; Ricardo Tendero; Beatriz González-Rodrigo; Javier Bonatti; Juan Gregorio Rejas-Ayuga; Rubén Martínez-Marín; Miguel Marchamalo-Sacristán

Building inspection is a critical issue in designing rehabilitation projects, which are recently gaining importance for environmental and energy efficiency reasons. Image sensors on board unmanned aerial vehicles are a powerful tool for building inspection, given the diversity and complexity of facades and materials and, above all, their vertical disposition. The UBRISTES (UAV-Based Building Rehabilitation with vISible and ThErmal infrared remote Sensing) system is proposed as an effective solution for facade inspection in urban areas, validating a method for the simultaneous acquisition of visible and thermal aerial imagery applied to the detection of the main types of facade anomalies/pathologies, and showcasing its possibilities using a first-principles analysis. Two public buildings have been considered for evaluating the proposed system. UBRISTES is ready to use for building inspection and has proven to be a useful tool in the design of rehabilitation projects for inaccessible, complex building structures in the context of energy efficiency.

Collaboration

Top co-authors of Adrian Carrio:

- Pascual Campoy (Technical University of Madrid)
- Changhong Fu (Spanish National Research Council)
- Jesús Pestana (Spanish National Research Council)
- Jose Luis Sanchez-Lopez (Spanish National Research Council)
- Carlos Sampedro (Spanish National Research Council)
- Alejandro Rodriguez-Ramos (Spanish National Research Council)
- Jean-François Collumeau (Spanish National Research Council)
- Paloma de la Puente (Technical University of Madrid)
- Ramon Suarez-Fernandez (Spanish National Research Council)