Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis E. Navarro-Serment is active.

Publication


Featured research published by Luis E. Navarro-Serment.


Autonomous Robots | 2000

Heterogeneous Teams of Modular Robots for Mapping and Exploration

Robert Grabowski; Luis E. Navarro-Serment; Christiaan J.J. Paredis; Pradeep K. Khosla

In this article, we present the design of a team of heterogeneous, centimeter-scale robots that collaborate to map and explore unknown environments. The robots, called Millibots, are configured from modular components that include sonar and IR sensors, camera, communication, computation, and mobility modules. Robots with different configurations use their special capabilities collaboratively to accomplish a given task. For mapping and exploration with multiple robots, it is critical to know the relative positions of each robot with respect to the others. We have developed a novel localization system that uses sonar-based distance measurements to determine the positions of all the robots in the group. With their positions known, we use an occupancy grid Bayesian mapping algorithm to combine the sensor data from multiple robots with different sensing modalities. Finally, we present the results of several mapping experiments conducted by a user-guided team of five robots operating in a room containing multiple obstacles.
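The occupancy-grid fusion step above can be sketched in a few lines. The log-odds increments and function names below are illustrative assumptions, not the Millibot system's actual parameters:

```python
import numpy as np

# Minimal log-odds occupancy-grid update, in the spirit of the Bayesian
# mapping step described above. Increment values are illustrative.
L_OCC, L_FREE = 0.85, -0.4   # log-odds increments for a sensed hit / pass-through

def update_grid(log_odds, hits, misses):
    """Fuse one range reading into the shared grid.

    hits   -- (row, col) cells where an obstacle was sensed
    misses -- (row, col) cells the beam passed through
    """
    for r, c in hits:
        log_odds[r, c] += L_OCC
    for r, c in misses:
        log_odds[r, c] += L_FREE
    return log_odds

def occupancy_prob(log_odds):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Because the update is additive in log-odds, readings from multiple robots with different sensing modalities can be fused into the same grid in any order.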


The International Journal of Robotics Research | 2010

Pedestrian Detection and Tracking Using Three-dimensional LADAR Data

Luis E. Navarro-Serment; Christoph Mertz; Martial Hebert

The approach investigated in this work employs three-dimensional LADAR measurements to detect and track pedestrians over time. The sensor is mounted on a moving vehicle. The algorithm quickly detects objects that could potentially be humans using a subset of the measured points, and then classifies each object using statistical pattern recognition techniques, relying on geometric and motion features to recognize human signatures. The perceptual capabilities described form the basis for the safe and robust navigation of autonomous vehicles, which is necessary to safeguard pedestrians in the vicinity of a moving robotic vehicle.
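A minimal sketch of the detect-then-classify idea, assuming simple Euclidean clustering and an illustrative human-sized geometric gate; the paper's actual thresholds and statistical classifier are richer:

```python
import numpy as np

def cluster_points(points, eps=0.5):
    """Greedy Euclidean clustering of 3-D points (N x 3 array).
    A simplified stand-in for the object-detection stage."""
    clusters, used = [], np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        if used[i]:
            continue
        stack, members = [i], []
        used[i] = True
        while stack:
            j = stack.pop()
            members.append(j)
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < eps) & ~used)[0]:
                used[k] = True
                stack.append(k)
        clusters.append(points[members])
    return clusters

def human_candidate(cluster):
    """Geometric gate: keep clusters with roughly human extent.
    Thresholds are illustrative, not the paper's values."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return 0.3 < extent[2] < 2.2 and extent[0] < 1.2 and extent[1] < 1.2
```

In the paper, clusters passing such a gate would then be scored with learned geometric and motion features rather than fixed thresholds.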


Journal of Field Robotics | 2013

Moving object detection with laser scanners

Christoph Mertz; Luis E. Navarro-Serment; Robert A. MacLachlan; Paul E. Rybski; Aaron Steinfeld; Arne Suppé; Chris Urmson; Nicolas Vandapel; Martial Hebert; Charles E. Thorpe; David Duggins; Jay Gowdy

The detection and tracking of moving objects is an essential task in robotics. The CMU-RI Navlab group has developed such a system that uses a laser scanner as its primary sensor. We describe our algorithm and its use in several applications. Our system has worked successfully on indoor and outdoor platforms and with several different kinds and configurations of two-dimensional and three-dimensional laser scanners. The applications range from collision warning systems and people classification to observing human tracks and providing input to a dynamic planner. Several of these systems were evaluated in live field tests and shown to be robust and reliable.


Intelligent Robots and Systems | 2001

Fault tolerant localization for teams of distributed robots

Renato Tinós; Luis E. Navarro-Serment; Christiaan J.J. Paredis

To combine sensor information from distributed robot teams, it is critical to know the locations of all the robots relative to each other. This paper presents a novel fault tolerant localization algorithm developed for centimeter-scale robots, called Millibots. To determine their locations, the Millibots measure the distances between themselves with an ultrasonic distance sensor. They then combine these distance measurements with dead reckoning in a maximum likelihood estimator. The focus of this paper is on detecting and isolating measurement faults that commonly occur in this localization system. Such failures include dead reckoning errors when the robots collide with undetected obstacles, and distance measurement errors due to destructive interference between direct and multi-path ultrasound wavefronts. Simulations show that the fault tolerance algorithm accurately detects erroneous measurements and significantly improves the reliability and accuracy of the localization system.
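The fault-detection idea can be illustrated with a residual test of each range measurement against the dead-reckoning prediction. This is a simplified stand-in for the paper's maximum-likelihood formulation, with a hypothetical threshold:

```python
import numpy as np

def detect_faulty_ranges(positions, measured, threshold=0.3):
    """Flag pairwise distance measurements that disagree with the
    dead-reckoning estimate. A simplified residual test, not the
    paper's full maximum-likelihood formulation.

    positions -- dict robot_id -> (x, y) dead-reckoning estimate
    measured  -- dict (id_a, id_b) -> measured range in meters
    """
    faults = []
    for (a, b), r in measured.items():
        predicted = np.hypot(*np.subtract(positions[a], positions[b]))
        if abs(r - predicted) > threshold:
            faults.append((a, b))
    return faults
```

Measurements flagged this way (e.g., multi-path ultrasound returns) would be excluded before the remaining ranges are fused into the position estimate.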


International Conference on Robotics and Automation | 2004

Optimal sensor placement for cooperative distributed vision

Luis E. Navarro-Serment; John M. Dolan; Pradeep K. Khosla

This work describes a method for observing maneuvering targets using a group of mobile robots equipped with video cameras. These robots are part of a team of small-size (7×7×7 cm) robots configured from modular components that collaborate to accomplish a given task. The cameras seek to observe the target while facing it as much as possible from their respective viewpoints. This work considers the problem of scheduling and maneuvering the cameras based on an evaluation of their current positions in terms of how well they can maintain a frontal view of the target. We describe our approach, which distributes the task among several robots and avoids excessive energy consumption on any single robot. We explore the concept in simulation and present results.
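One way to score camera positions by how frontal their view is, as the scheduling step requires. The metric below is an illustrative assumption, not necessarily the paper's evaluation function:

```python
import numpy as np

def frontalness(camera_xy, target_xy, target_heading):
    """Score in [0, 1]: 1 when the camera faces the target head-on,
    0 when it views the target from directly behind."""
    to_cam = np.subtract(camera_xy, target_xy)
    to_cam = to_cam / np.linalg.norm(to_cam)
    front = np.array([np.cos(target_heading), np.sin(target_heading)])
    return 0.5 * (1.0 + front @ to_cam)

def best_camera(cameras, target_xy, target_heading):
    """Pick the camera with the most frontal view of the target."""
    return max(cameras, key=lambda c: frontalness(c, target_xy, target_heading))
```

A scheduler could hand the observation task to `best_camera` while idle robots reposition, spreading the energy cost across the team.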


International Conference on Intelligent Transportation Systems | 2012

Detection of parking spots using 2D range data

Jifu Zhou; Luis E. Navarro-Serment; Martial Hebert

This paper addresses the problem of reliably detecting parking spots in semi-filled parking lots using on-board laser line scanners. In order to identify parking spots, one needs to detect parked vehicles and interpret the parking environment. Our approach uses a supervised learning technique to achieve vehicle detection by identifying vehicle bumpers from laser range scans. In particular, we use AdaBoost to train a classifier based on relevant geometric features of data segments that correspond to car bumpers. Using the detected bumpers as landmarks of vehicle hypotheses, our algorithm constructs a topological graph representing the structure of the parking space. Spatial analysis is then performed on the topological graph to identify potential parking spots. Algorithm performance is evaluated through a series of experimental tests.
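The supervised bumper-detection stage can be sketched with scikit-learn's AdaBoost on simple geometric features of scan segments. The feature set and the synthetic training segments below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def segment_features(segment):
    """Geometric features of a 2-D scan segment (N x 2 points):
    width, depth, and point count. Illustrative, not the paper's
    exact feature set."""
    extent = segment.max(axis=0) - segment.min(axis=0)
    return [extent[0], extent[1], len(segment)]

# Synthetic training data: bumper-like segments are wide, shallow
# arcs of many points; clutter segments are small and sparse.
rng = np.random.default_rng(0)
bumpers = [rng.normal(0.0, 0.02, (30, 2)) for _ in range(20)]
for s in bumpers:
    s[:, 0] += np.linspace(0.0, 1.8, 30)     # ~1.8 m wide, shallow
clutter = [rng.uniform(0.0, 0.3, (8, 2)) for _ in range(20)]

X = [segment_features(s) for s in bumpers + clutter]
y = [1] * len(bumpers) + [0] * len(clutter)
clf = AdaBoostClassifier(n_estimators=20).fit(X, y)
```

Detected bumpers then serve as landmarks of vehicle hypotheses from which the topological graph of the parking space is built.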


10th Biennial International Conference on Engineering, Construction, and Operations in Challenging Environments and Second NASA/ARO/ASCE Workshop on Granular Materials in Lunar and Martian Exploration | 2006

A Robot Supervision Architecture for Safe and Efficient Space Exploration and Operation

Ehud Halberstam; Luis E. Navarro-Serment; Ronald Conescu; Sandra Mau; Gregg Podnar; Alan D. Guisewite; H. Benjamin Brown; Alberto Elfes; John M. Dolan; Marcel Bergerman

Current NASA plans envision human beings returning to the Moon in 2018 and, once there, establishing a permanent outpost from which we may initiate a long-term effort to visit other planetary bodies in the Solar System. This will be a bold, risky, and costly journey, comparable to the Great Navigations of the fifteenth and sixteenth centuries. Therefore, it is important that all possible actions be taken to maximize the astronauts’ safety and productivity. This can be achieved by deploying fleets of autonomous robots for mineral prospecting and mining, habitat construction, fuel production, inspection and maintenance, etc.; and by providing the humans with the capability to telesupervise the robots’ operation and to teleoperate them whenever necessary or appropriate, all from a safe, “shirtsleeve” environment.


Proceedings of SPIE | 2012

Semantic perception for ground robotics

Martial Hebert; J. A. Bagnell; Max Bajracharya; Kostas Daniilidis; Larry H. Matthies; L. Mianzo; Luis E. Navarro-Serment; J. Shi; M. Wellfare

Semantic perception involves naming objects and features in the scene, understanding the relations between them, and understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central component of future UGVs to provide representations which 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) provide an intuitive description of the robot's environment in terms of semantic elements that can be shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.


Automotive User Interfaces and Interactive Vehicular Applications | 2010

Semi-autonomous virtual valet parking

Arne Suppé; Luis E. Navarro-Serment; Aaron Steinfeld

Despite regulations specifying parking spots that support wheelchair vans, it is not uncommon for end users to encounter problems with clearance for van ramps. Even if a driver elects to park in the far reaches of a parking lot as a precautionary measure, there is no guarantee that the spot next to their van will be empty when they return. Likewise, the prevalence of older drivers who experience significant difficulty with ingress and egress from vehicles is nontrivial, and the ability to fully open a car door is important. This work describes a method and user interaction for low-cost, short-range parking without a driver in the car. This will enable ingress/egress without the doors being blocked by neighboring cars.


Intelligent Robots and Systems | 2016

Reducing adaptation latency for multi-concept visual perception in outdoor environments

Maggie Wigness; John G. Rogers; Luis E. Navarro-Serment; Arne Suppé; Bruce A. Draper

Multi-concept visual classification is emerging as a common environment perception technique, with applications in autonomous mobile robot navigation. Supervised visual classifiers are typically trained with large sets of images, hand annotated by humans with region boundary outlines followed by label assignment. This annotation is time consuming, and unfortunately, a change in environment requires new or additional labeling to adapt visual perception. The time it takes for a human to label new data is what we call adaptation latency. High adaptation latency is not simply undesirable but may be infeasible for scenarios with limited labeling time and resources. In this paper, we introduce a labeling framework to the environment perception domain that significantly reduces adaptation latency using unsupervised learning in exchange for a small amount of label noise. Using two real-world datasets we demonstrate the speed of our labeling framework, and its ability to collect environment labels that train high-performing multi-concept classifiers. Finally, we demonstrate the relevance of this label collection process for visual perception as it applies to navigation in outdoor environments.
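The cluster-then-label trade-off can be sketched as plain k-means followed by propagating one human-provided label per cluster. Both functions are illustrative stand-ins for the paper's framework, not its actual method:

```python
import numpy as np

def kmeans_assign(features, k, iters=20):
    """Plain k-means clustering; a stand-in for the unsupervised
    grouping step of the labeling framework."""
    # Deterministic init: pick k evenly spaced points as centers.
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return assign

def propagate_labels(assign, exemplar_labels):
    """Copy one human-provided label per cluster to every member,
    trading a little label noise for far less annotation time."""
    return [exemplar_labels[c] for c in assign]
```

The human labels only one exemplar per cluster instead of every image region, which is where the reduction in adaptation latency comes from.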

Collaboration


Dive into Luis E. Navarro-Serment's collaborations.

Top Co-Authors

Martial Hebert
Carnegie Mellon University

Arne Suppé
Carnegie Mellon University

Pradeep K. Khosla
Carnegie Mellon University

Robert Grabowski
Carnegie Mellon University

Christiaan J.J. Paredis
Georgia Institute of Technology

Christoph Mertz
Carnegie Mellon University

Jean Oh
Carnegie Mellon University

Aaron Steinfeld
Carnegie Mellon University

Anthony Stentz
Carnegie Mellon University