Publications


Featured research published by Luis Mejias.


Journal of Field Robotics | 2006

Visual servoing of an autonomous helicopter in urban areas using feature tracking

Luis Mejias; Srikanth Saripalli; Pascual Campoy; Gaurav S. Sukhatme

We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used to estimate the position and velocity of features in the image plane (urban features such as windows) in order to generate velocity references for flight control. These vision-based references are then combined with GPS positioning references to navigate towards these features and track them. We present results from experimental flight trials performed on two UAV systems under different conditions, which show the feasibility and robustness of our approach.
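The mapping from image-plane feature error to velocity references can be sketched as a simple proportional law. A minimal sketch follows; the gain, image size, and saturation limits are illustrative assumptions, not the controller described in the paper:

    # Minimal sketch: turn a tracked feature's pixel offset from the image
    # centre into velocity commands. Gains/limits are illustrative only.
    import numpy as np

    def velocity_reference(feature_px, image_size=(640, 480), gain=0.005):
        """Map pixel error to lateral/vertical velocity commands (m/s)."""
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        ex, ey = feature_px[0] - cx, feature_px[1] - cy  # pixel error
        # Proportional law driving the error to zero; saturate for safety.
        v_lat = float(np.clip(gain * ex, -1.0, 1.0))
        v_vert = float(np.clip(-gain * ey, -1.0, 1.0))  # image y grows downward
        return v_lat, v_vert

    # Example: a window tracked at pixel (420, 200) in a 640x480 frame
    print(velocity_reference((420, 200)))  # -> (0.5, 0.2)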


Journal of Field Robotics | 2011

Airborne vision-based collision-detection system

John Lai; Luis Mejias; Jason J. Ford

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and the Traffic Alert and Collision Avoidance System). This paper describes the development and evaluation of a real-time, vision-based collision-detection system suitable for fixed-wing aerial robotics. Using two fixed-wing unmanned aerial vehicles (UAVs) to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 s ahead of impact, which approaches the 12.5-s response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units (GPUs) found on commercial-off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1,024 × 768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft in which all processing is performed onboard will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
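The advance-warning figures follow from a simple range-over-closing-speed calculation. A small sketch below; the closing speeds are illustrative assumptions chosen to reproduce the quoted 8 to 10 s window, not values taken from the paper:

    # Back-of-envelope check of the advance-warning arithmetic.
    def warning_time(detection_range_m, closing_speed_ms):
        """Seconds of warning before impact at a constant closing speed."""
        return detection_range_m / closing_speed_ms

    # Assumed closing speeds per scenario (illustrative only):
    for rng, speed in ((400, 50.0), (900, 90.0)):
        print(f"{rng} m at {speed} m/s -> {warning_time(rng, speed):.0f} s")
    # -> 8 s and 10 s, consistent with the quoted 8-10 s advance warning
    #    versus the 12.5 s response time recommended for human pilots.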


Journal of Intelligent and Robotic Systems | 2009

Computer Vision Onboard UAVs for Civilian Tasks

Pascual Campoy; Juan F. Correa; Iván F. Mondragón; Carol Martinez; Miguel Olivares; Luis Mejias; Jorge Artieda

Computer vision is much more than a technique to sense and recover environmental information from a UAV. It should play a main role in UAV functionality because of the large amount of information that can be extracted, its possible uses and applications, and its natural connection to human-driven tasks, given that vision is our main interface to world understanding. Our current research focus lies on the development of techniques that allow UAVs to maneuver in spaces using visual information as their main input source. This task involves the creation of techniques that allow a UAV to maneuver towards features of interest whenever a GPS signal is not reliable or sufficient, e.g. when signal dropouts occur (which usually happens in urban areas, when flying through terrestrial urban canyons, or when operating on remote planetary bodies), or when tracking or inspecting visual targets, including moving ones, without knowing their exact UTM coordinates. This paper also investigates visual servoing control techniques that use the velocity and position of suitable image features to compute the references for flight control. This paper aims to give a global view of the main aspects of the research field of computer vision for UAVs, clustered into four main active research lines: visual servoing and control, stereo-based visual navigation, image processing algorithms for detection and tracking, and visual SLAM. Finally, the results of applying these techniques in several applications are presented and discussed, encompassing power line inspection, mobile target tracking, stereo distance estimation, mapping, and positioning.
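The GPS-fallback decision the abstract describes can be sketched as a simple reference selector. The health thresholds (satellite count, dilution of precision) below are hypothetical; a real system would also gate on consistency with the inertial solution:

    # Minimal sketch of selecting between GPS and visual navigation
    # references. Thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class GpsFix:
        num_satellites: int
        hdop: float  # horizontal dilution of precision

    def select_reference(fix: GpsFix, visual_ref, gps_ref):
        """Return the navigation reference to feed the flight controller."""
        healthy = fix.num_satellites >= 6 and fix.hdop < 2.0
        return gps_ref if healthy else visual_ref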


Autonomous Robots | 2010

Unmanned aerial vehicles (UAVs) attitude, height, motion estimation and control using visual systems

Iván F. Mondragón; Miguel A. Olivares-Mendez; Pascual Campoy; Carol Martinez; Luis Mejias

This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV) or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras to attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, from which we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft's height and to measure the UAV's motion. Finally, we present a visual tracking system based on fuzzy controllers working on both a UAV and a camera pan-and-tilt platform. Every part is tested on the UAV COLIBRI platform to validate the different approaches, including comparison of the estimated data with the inertial values measured onboard the helicopter platform and validation of the tracking schemes on real flights.
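The appearance-based visual compass can be sketched as a column-shift search, assuming the catadioptric view has been unwrapped into a panoramic strip where a yaw rotation becomes a horizontal shift. The brute-force search below is illustrative, not the paper's implementation:

    # Minimal sketch of a visual compass on unwrapped panoramic images.
    import numpy as np

    def heading_change(prev_pano, curr_pano):
        """Estimate relative yaw (degrees) as the column shift that best
        aligns two panoramic grayscale images (2-D float arrays)."""
        cols = prev_pano.shape[1]
        errors = [np.mean((np.roll(curr_pano, s, axis=1) - prev_pano) ** 2)
                  for s in range(cols)]
        best = int(np.argmin(errors))
        if best > cols // 2:          # wrap shifts into (-180, 180] degrees
            best -= cols
        return 360.0 * best / cols    # each column spans 360/cols degrees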


Intelligent Robots and Systems | 2010

Vision-based detection and tracking of aerial targets for UAV collision avoidance

Luis Mejias; Scott McNamara; John Lai; Jason J. Ford

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved (as opposed to radar). This paper describes the development and evaluation of a vision-based collision-detection algorithm suitable for fixed-wing aerial robotics. The system was evaluated using highly realistic vision data of the moments leading up to a collision. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We make use of the enormous potential of graphics processing units to achieve processing rates of 30 Hz for images of size 1024 × 768. Integration into the final platform is currently under way.


IEEE International Symposium on Intelligent Signal Processing | 2007

Visual Model Feature Tracking For UAV Control

Iván F. Mondragón; Pascual Campoy; Juan F. Correa; Luis Mejias

This paper explores the possibility of using robust object tracking algorithms based on visual model features as a generator of visual references for UAV control. A scale-invariant feature transform (SIFT) algorithm is used to detect the salient points in every processed image; a projective transformation for evaluating the visual references is then obtained using a version of the RANSAC algorithm, in which matched key-point pairs that fulfill the transformation equations are selected and corrupted data are rejected. The system has been tested on diverse image sequences, showing its capability to track objects that change significantly in scale, position, and rotation, while generating velocity references for the UAV flight controller. The robustness of our approach has also been validated using images taken from real flights exhibiting noise and lighting distortions. The results presented are promising for use as a reference generator for the control system.
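A minimal sketch of this SIFT-plus-RANSAC pipeline using OpenCV's stock implementations (cv2.SIFT_create requires OpenCV 4.4 or later); the ratio-test and reprojection thresholds are conventional values, not necessarily those used in the paper:

    # Sketch: detect SIFT key points, match them, and fit a projective
    # transformation with RANSAC, which rejects corrupted matches.
    import cv2
    import numpy as np

    def match_and_estimate(img_ref, img_cur):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_ref, None)
        kp2, des2 = sift.detectAndCompute(img_cur, None)
        # Keep unambiguous matches via Lowe's ratio test.
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC fits the homography while flagging outlier pairs.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H, inliers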


International Conference on Robotics and Automation | 2006

A visual servoing approach for tracking features in urban areas using an autonomous helicopter

Luis Mejias; Pascual Campoy; Srikanth Saripalli; Gaurav S. Sukhatme

The use of unmanned aerial vehicles (UAVs) in civilian and domestic applications is highly demanding, requiring a high level of capability from the vehicles. This work addresses the design and implementation of a vision-based feature tracker for an autonomous helicopter. Using vision in the control loop allows estimating the position and velocity of a set of features with respect to the helicopter. The helicopter is then autonomously guided to track these features (in this case windows in an urban environment) in real time. The results obtained from flight trials in a real-world scenario demonstrate that the algorithm for tracking features in an urban environment, used for visual servoing of an autonomous helicopter, is reliable and robust.


Journal of Field Robotics | 2013

Characterization of Sky-region Morphological-temporal Airborne Collision Detection

John Lai; Jason J. Ford; Luis Mejias; Peter O'Shea

Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false-alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based aircraft collision-detection system that is based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for the detection of potential collision threats against a ground-clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue-sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false-alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false-alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection-range versus false-alarm curves generated from airborne target and non-target image data.
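One common instance of morphological-temporal processing for small-target detection is a close-minus-open (CMO) filter followed by temporal accumulation. The sketch below is a generic version of that idea; the kernel size and decay factor are illustrative assumptions, not the paper's tuned parameters:

    # Sketch of a close-minus-open (CMO) morphological filter, which
    # emphasises small blobs against a smooth sky background.
    import cv2
    import numpy as np

    def cmo(gray, ksize=5):
        """Grayscale close-minus-open: responds to small bright or dark
        blobs while suppressing large-scale background structure."""
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
        closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
        opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
        return cv2.subtract(closed, opened)

    # Simplified temporal stage: persist responses across frames so a true
    # target, which moves slowly in the image, accumulates evidence.
    def accumulate(prev_acc, response, decay=0.8):
        return cv2.addWeighted(prev_acc, decay,
                               response.astype(np.float32), 1.0 - decay, 0.0)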


IEEE Transactions on Geoscience and Remote Sensing | 2009

Investigation of Fish-Eye Lenses for Small-UAV Aerial Photography

Alex Gurtner; Duncan G. Greer; Richard Glassock; Luis Mejias; Rodney A. Walker; Wageeh W. Boles

Aerial photography obtained by unmanned aerial vehicles (UAVs) is a rising market for civil applications. Small UAVs are believed to close gaps in niche markets, such as acquiring airborne image data for remote sensing purposes. Small UAVs can fly at low altitudes, in dangerous environments, and over long periods of time. However, their small lightweight construction leads to new problems, such as higher agility and more susceptibility to turbulence, which have a big impact on the quality of the data and its suitability for aerial photography. This paper investigates the use of fish-eye lenses to overcome field-of-view (FOV) issues for highly agile UAV platforms susceptible to turbulence. A fish-eye lens has the benefit of a large observation area (large FOV) and, unlike traditional mechanical stabilizing systems, does not add additional weight to the aircraft. We present the implementation of a fish-eye lens for aerial photography and mapping purposes, with potential use in remote sensing applications. We describe a detailed investigation from the fish-eye lens distortion to the registering of the images. Results of the process are presented using low-quality sensors typically found on small UAVs. The system was flown on a midsize platform (a more stable Cessna aircraft) and also on ARCAA's small (<10 kg) UAV platform. The effectiveness of the approach is compared across the two platform sizes.
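A minimal sketch of fish-eye rectification using OpenCV's fisheye camera model; the intrinsics K and distortion coefficients D below are placeholders standing in for a real calibration of the lens:

    # Sketch: rectify a fish-eye frame with OpenCV's fisheye model.
    import cv2
    import numpy as np

    K = np.array([[300.0, 0.0, 320.0],
                  [0.0, 300.0, 240.0],
                  [0.0, 0.0, 1.0]])          # assumed camera matrix
    D = np.array([0.1, -0.05, 0.01, 0.0])    # assumed distortion (k1..k4)

    def undistort(frame):
        h, w = frame.shape[:2]
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
        return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)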


IEEE Transactions on Geoscience and Remote Sensing | 2010

Evaluation of Aerial Remote Sensing Techniques for Vegetation Management in Power-Line Corridors

Steven Mills; Marcos P.G. Castro; Zhengrong Li; Jinhai Cai; Ross F. Hayward; Luis Mejias; Rodney A. Walker

This paper presents an evaluation of airborne sensors for use in vegetation management in power-line corridors. Three integral stages in the management process are addressed: the detection of trees, relative positioning with respect to the nearest power line, and vegetation height estimation. Image data, including multispectral and high-resolution imagery, are analyzed along with LiDAR data captured from fixed-wing aircraft. Ground truth data are then used to establish the accuracy and reliability of each sensor, thus providing a quantitative comparison of sensor options. Tree detection was achieved through crown delineation using a pulse-coupled neural network and morphological reconstruction applied to multispectral imagery. Through testing, it was shown to achieve a detection rate of 96%, while the accuracy in correctly segmenting groups of trees and single trees was shown to be 75%. Relative positioning using LiDAR achieved root-mean-square error (RMSE) values of 1.4 and 2.1 m for cross-track distance and along-track position, respectively, while direct georeferencing achieved an RMSE of 3.1 m in both instances. The estimation of pole and tree heights measured with LiDAR had RMSE values of 0.4 and 0.9 m, respectively, while stereo matching achieved 1.5 and 2.9 m. Overall, only a small number of poles were missed, with detection rates of 98% and 95% for LiDAR and stereo matching, respectively.
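The RMSE figures above follow the standard definition; a minimal sketch with illustrative numbers:

    # Root-mean-square error between estimates and ground truth.
    import numpy as np

    def rmse(estimates, ground_truth):
        e = np.asarray(estimates, dtype=float)
        g = np.asarray(ground_truth, dtype=float)
        return float(np.sqrt(np.mean((e - g) ** 2)))

    # e.g. sensor-derived tree heights vs. surveyed heights (made-up data)
    print(rmse([10.2, 7.9, 12.4], [10.0, 8.5, 12.0]))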

Collaboration


Dive into Luis Mejias's collaborations.

Top Co-Authors

Pascual Campoy | Technical University of Madrid
Jason J. Ford | Queensland University of Technology
John Lai | Queensland University of Technology
Rodney A. Walker | Queensland University of Technology
Xilin Yang | Queensland University of Technology
Aaron Mcfadyen | Queensland University of Technology
Peter O'Shea | Queensland University of Technology
Ben Upcroft | Queensland University of Technology
Michael Warren | Queensland University of Technology
Pillar C. Eng | Queensland University of Technology