Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christoph Mertz is active.

Publication


Featured research published by Christoph Mertz.


Intelligent Robots and Systems | 2009

Planning-based prediction for pedestrians

Brian D. Ziebart; Nathan D. Ratliff; Garratt Gallagher; Christoph Mertz; Kevin M. Peterson; J. Andrew Bagnell; Martial Hebert; Anind K. Dey; Siddhartha S. Srinivasa

We present a novel approach for determining robot movements that efficiently accomplish the robot's tasks while not hindering the movements of people within the environment. Our approach models the goal-directed trajectories of pedestrians using maximum entropy inverse optimal control. The advantage of this modeling approach is that the learned cost function generalizes to changes in the environment and to entirely different environments. We employ the predictions of this model of pedestrian trajectories in a novel incremental planner and quantitatively show the improvement in hindrance-sensitive robot trajectory planning provided by our approach.
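
The paper itself presents no code; as a rough, hypothetical sketch of the maximum-entropy idea behind this kind of pedestrian model, the snippet below runs softmax (log-sum-exp) value iteration over a small grid with a made-up cost map and samples one plausible path. Grid size, costs, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_value_iteration(cost, goal, iters=100):
    """Softmax Bellman backups (instead of max) give a stochastic,
    maximum-entropy policy toward the goal. Illustrative sketch only."""
    h, w = cost.shape
    v = np.full((h, w), -1e9)            # log-desirability of each cell
    v[goal] = 0.0
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    q = np.full((4, h, w), -1e9)
    for _ in range(iters):
        for a, (dx, dy) in enumerate(moves):
            nx = np.clip(np.arange(h)[:, None] + dx, 0, h - 1)
            ny = np.clip(np.arange(w)[None, :] + dy, 0, w - 1)
            q[a] = -cost + v[nx, ny]     # pay this cell's cost, then continue
        v = np.logaddexp.reduce(q, axis=0)
        v[goal] = 0.0                    # the goal is absorbing
    return np.exp(q - v)                 # P(action | cell)

def sample_path(policy, start, goal, max_steps=60, seed=0):
    """Draw one plausible pedestrian path from the stochastic policy."""
    rng = np.random.default_rng(seed)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    h, w = policy.shape[1:]
    x, y = start
    path = [(x, y)]
    while (x, y) != goal and len(path) < max_steps:
        p = policy[:, x, y]
        a = rng.choice(4, p=p / p.sum())
        x = min(max(x + moves[a][0], 0), h - 1)
        y = min(max(y + moves[a][1], 0), w - 1)
        path.append((x, y))
    return path

# Hypothetical 8x8 cost map with a high-cost block; the pedestrian heads to (7, 7).
cost = np.ones((8, 8)); cost[3:5, 2:6] = 5.0
print(sample_path(soft_value_iteration(cost, goal=(7, 7)), start=(0, 0), goal=(7, 7)))
```

Because the policy is a distribution rather than a single optimal path, sampling or accumulating it over time yields the kind of trajectory predictions the planner can then avoid.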


The International Journal of Robotics Research | 2010

Pedestrian Detection and Tracking Using Three-dimensional LADAR Data

Luis E. Navarro-Serment; Christoph Mertz; Martial Hebert

The approach investigated in this work employs three-dimensional LADAR measurements to detect and track pedestrians over time. The sensor is mounted on a moving vehicle. The algorithm quickly detects objects that could potentially be humans using a subset of the measured points, and then classifies each object using statistical pattern recognition techniques. The algorithm uses geometric and motion features to recognize human signatures. The perceptual capabilities described form the basis for safe and robust navigation in autonomous vehicles, which is necessary to safeguard pedestrians operating in the vicinity of a moving robotic vehicle.
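
As a rough, hypothetical sketch of the "cluster the returns, then classify with geometric features" pattern described above, the snippet below groups LADAR points by ground-plane grid cell, merges touching cells into objects, and scores each object with simple size features. The thresholds are placeholders; the paper trains a statistical classifier on geometric and motion features, which is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def cluster_returns(points, cell=0.25):
    """points: (N, 3) array of LADAR returns in meters. Returns point groups."""
    ij = np.floor((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    labels, n = ndimage.label(grid)          # merge occupied, touching cells
    point_label = labels[ij[:, 0], ij[:, 1]]
    return [points[point_label == k] for k in range(1, n + 1)]

def pedestrian_features(pts):
    return {"height": float(np.ptp(pts[:, 2])),                        # vertical extent
            "footprint": float(np.linalg.norm(np.ptp(pts[:, :2], axis=0))),
            "count": len(pts)}

def looks_like_pedestrian(f):
    """Placeholder rule standing in for the learned classifier."""
    return 1.0 < f["height"] < 2.2 and f["footprint"] < 1.2 and f["count"] >= 20

# e.g.: [c for c in cluster_returns(scan) if looks_like_pedestrian(pedestrian_features(c))]
```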


Mechatronics | 2003

Perception for Collision Avoidance and Autonomous Driving

Romuald Aufrère; Jay Gowdy; Christoph Mertz; Charles E. Thorpe; Chieh-Chih Wang; Teruko Yata

The Navlab group at Carnegie Mellon University has a long history of development of automated vehicles and intelligent systems for driver assistance. The earlier work of the group concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, to look all around the vehicle for safe driving. The current system uses video sensing, laser rangefinders, a novel light-stripe rangefinder, software to process each sensor individually, a map-based fusion system, and a probability-based predictive model. The complete system has been demonstrated on the Navlab 11 vehicle, monitoring the environment of a vehicle driving through a cluttered urban environment and detecting and tracking fixed objects, moving objects, pedestrians, curbs, and roads.
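
The abstract mentions a map-based fusion system combining several sensors. A generic log-odds occupancy map, sketched below, is one common way such fusion is done; it is offered only as an illustration under that assumption, not as the Navlab system's actual design, and all parameters are made up.

```python
import numpy as np

class FusionMap:
    """Log-odds occupancy grid fusing detections from several sensors."""

    def __init__(self, size=(200, 200), cell=0.2):
        self.log_odds = np.zeros(size)   # 0 means unknown (probability 0.5)
        self.cell = cell                 # meters per grid cell

    def integrate(self, detections, hit_logodds):
        """detections: iterable of (x, y) positions in meters from one sensor;
        hit_logodds encodes how much that sensor is trusted."""
        for x, y in detections:
            i, j = int(x / self.cell), int(y / self.cell)
            if 0 <= i < self.log_odds.shape[0] and 0 <= j < self.log_odds.shape[1]:
                self.log_odds[i, j] += hit_logodds

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.log_odds))

# Fuse, say, laser and light-stripe returns with different confidences:
# m = FusionMap(); m.integrate(laser_hits, 0.9); m.integrate(stripe_hits, 0.6)
```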


Journal of Field Robotics | 2013

Moving object detection with laser scanners

Christoph Mertz; Luis E. Navarro-Serment; Robert A. MacLachlan; Paul E. Rybski; Aaron Steinfeld; Arne Suppé; Chris Urmson; Nicolas Vandapel; Martial Hebert; Charles E. Thorpe; David Duggins; Jay Gowdy

The detection and tracking of moving objects is an essential task in robotics. The CMU-RI Navlab group has developed such a system that uses a laser scanner as its primary sensor. We will describe our algorithm and its use in several applications. Our system worked successfully on indoor and outdoor platforms and with several different kinds and configurations of two-dimensional and three-dimensional laser scanners. The applications range from collision warning systems and people classification to observing human tracks and providing input to a dynamic planner. Several of these systems were evaluated in live field tests and shown to be robust and reliable.
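
To make the first step of such a pipeline concrete, here is a hypothetical sketch of segmenting a single 2-D laser scan into object candidates by breaking it at large gaps between consecutive returns. The thresholds and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def segment_scan(ranges, angle_min, angle_inc, gap=0.5, min_points=3):
    """ranges: 1-D array of range readings (meters). Returns a list of
    (centroid_xy, points_xy) segments in the sensor frame."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    xy = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    # Break the scan wherever neighboring returns are far apart.
    breaks = np.where(np.linalg.norm(np.diff(xy, axis=0), axis=1) > gap)[0] + 1
    segments = []
    for pts in np.split(xy, breaks):
        if len(pts) >= min_points and np.all(np.isfinite(pts)):
            segments.append((pts.mean(axis=0), pts))
    return segments

# Tracking each segment across scans (data association plus a filter) and
# comparing its motion to the vehicle's own motion is what flags it as moving.
```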


Intelligent Vehicles Symposium | 2003

Multiple sensor fusion for detecting location of curbs, walls, and barriers

Romuald Aufrère; Christoph Mertz; Charles E. Thorpe

Knowledge of the location of curbs, walls, or barriers is important for guidance of vehicles or for the understanding of their surroundings. We have developed a method to detect such continuous objects alongside and in front of a host vehicle. We employ a laser line stripper, a vehicle state estimator, a video camera, and a laser scanner to detect the object at one location, track it alongside the vehicle, search for it in front of the vehicle and eliminate erroneous readings caused by occlusion from other objects.
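
The "detect at one location, then track alongside and search in front" idea can be illustrated with a small geometric sketch: curb points detected in a previous vehicle pose are re-expressed in the current vehicle frame using the state estimator's motion, and a line fit predicts where to look ahead. The frame conventions and helper names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def transform_to_current_frame(prev_points, dx, dy, dyaw):
    """prev_points: (N, 2) curb points in the previous vehicle frame.
    (dx, dy, dyaw): vehicle motion between the two poses.
    Returns the same points expressed in the current vehicle frame."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    rot_t = np.array([[c, s], [-s, c]])          # inverse of the yaw rotation
    return (prev_points - np.array([dx, dy])) @ rot_t.T

def search_window_ahead(curb_points, ahead=5.0):
    """Fit a line y = a*x + b to the tracked curb (assumed roughly parallel to
    the travel direction) and predict its lateral offset a few meters ahead,
    which defines where the sensors search next."""
    a, b = np.polyfit(curb_points[:, 0], curb_points[:, 1], 1)
    return a * ahead + b
```

Readings that fall far outside the predicted window can then be treated as occlusions from other objects and discarded, in line with the filtering the abstract describes.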


International Conference on Intelligent Transportation Systems | 2006

Tracking of Moving Objects from a Moving Vehicle Using a Scanning Laser Rangefinder

Robert A. MacLachlan; Christoph Mertz

The capability to use a moving sensor to detect moving objects and predict their future path enables both collision warning systems and autonomous navigation. This paper describes a system that combines linear feature extraction, tracking, and a motion evaluator to accurately estimate the motion of vehicles and pedestrians with a low rate of false motion reports. The tracker was used in a prototype collision warning system that was tested on two transit buses during 7000 km of regular passenger service.
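
As a generic stand-in for the tracking stage described above, the sketch below implements a minimal constant-velocity Kalman filter for one tracked object. The noise magnitudes are illustrative; the actual system also extracts linear (line) features from the scan and evaluates whether the estimated motion is significant before reporting it, which is not reproduced here.

```python
import numpy as np

class ConstantVelocityTrack:
    """Kalman filter with state [x, y, vx, vy] and position-only measurements."""

    def __init__(self, x, y, dt=0.1, q=0.5, r=0.05):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.diag([r, r, 4.0, 4.0])        # start with uncertain velocity
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.Q = q * np.eye(4)
        self.H = np.eye(2, 4)
        self.R = r * np.eye(2)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, z):
        y = np.asarray(z) - self.H @ self.state              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state

# A motion-evaluator pass would report a track as moving only when its speed
# estimate exceeds its uncertainty, keeping false motion reports low.
```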


Information Visualization | 2002

Eye-safe laser line striper for outside use

Christoph Mertz; John Kozar; J.R. Miller; Charles E. Thorpe

Collision warning or autonomous driving in cluttered environments like urban areas requires short-range, high-resolution sensors. Rangefinders using laser triangulation fulfil these requirements, but they face the additional challenge of operating in bright sunlight while remaining eye-safe. The laser line striper introduced in this paper achieves all of these requirements without the need for expensive components. The sensor can be built with a variety of fields of view.
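
The range measurement in a laser line striper comes from triangulation. The sketch below uses a simplified pinhole parameterization of that geometry (my own choice of symbols and frame, not the paper's calibration model): the camera sits at the origin looking along +z, the laser emitter is offset by a baseline b along x, and its beam in the x-z plane makes angle theta with the camera axis.

```python
import math

def stripe_range(u_px, f_px, baseline_m, theta_rad):
    """Depth of the illuminated point seen at horizontal pixel offset u_px
    (relative to the principal point), with focal length f_px in pixels.
    Intersects the camera ray x/z = u/f with the laser beam x = b + z*tan(theta)."""
    denom = u_px - f_px * math.tan(theta_rad)
    if abs(denom) < 1e-9:
        raise ValueError("ray parallel to the laser beam; no intersection")
    return baseline_m * f_px / denom

# Example with made-up calibration values:
# stripe_range(u_px=120, f_px=800, baseline_m=0.3, theta_rad=math.radians(5)) -> 4.8 m
```

Because a one-pixel shift corresponds to an ever larger depth change as range grows, triangulation resolution degrades with distance, which is why this design targets short-range sensing.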


IEEE Intelligent Vehicles Symposium | 2000

Side collision warning systems for transit buses

Christoph Mertz; Sue McNeil; Charles E. Thorpe

Transit buses are involved in many more accidents than other vehicles, so collision warning systems (CWS) are deployed most effectively on these buses. In our project, we investigate their operating environment and the available technologies in order to develop performance specifications for such CWS. The paper discusses our findings that transit buses drive through very cluttered surroundings and are involved in many different types of accidents in which currently available CWS do not work effectively. One focus of our work is the detection of pedestrians around the bus.


The International Journal of Robotics Research | 2005

Safe Robot Driving in Cluttered Environments

Charles E. Thorpe; Justin Carlson; Dave Duggins; Jay Gowdy; Robert A. MacLachlan; Christoph Mertz; Arne Suppé; Bob Wang

The Navlab group at Carnegie Mellon University has a long history of development of automated vehicles and intelligent systems for driver assistance. The earlier work of the group concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, to look all around the vehicle for safe driving. The current system uses video sensing, laser rangefinders, a novel light-stripe rangefinder, software to process each sensor individually, and a map-based fusion system. The complete system has been demonstrated on the Navlab 11 vehicle for monitoring the environment of a vehicle driving through a cluttered urban environment, detecting and tracking fixed objects, moving objects, pedestrians, curbs, and roads.


Workshop on Applications of Computer Vision | 2014

Vision for road inspection

Srivatsan Varadharajan; Sobhagya Jose; Karan Sharma; Lars Wander; Christoph Mertz

Road surface inspection in cities is, for the most part, a task performed manually. Being a subjective and labor-intensive process, it is an ideal candidate for automation. We propose a solution based on computer vision and data-driven methods to detect distress on the road surface. Our method works on images collected from a camera mounted on the windshield of a vehicle. We use an automatic procedure to select images suitable for inspection based on lighting and weather conditions. From the selected data we segment the ground plane and use texture, color, and location information to detect the presence of pavement distress. We describe an over-segmentation algorithm that identifies image regions that are coherent not just in color but also in texture. We also discuss the problem of learning from unreliable human annotations and propose using a weakly supervised learning algorithm (Multiple Instance Learning) to train a classifier. We present results from experiments comparing the performance of this approach against multiple individual human labelers, with the ground-truth labels obtained from an ensemble of other human labelers. Finally, we show pavement distress scores computed using our method over a subset of a citywide road network.
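
To make the multiple-instance idea concrete, here is a toy sketch in the spirit of the weak supervision described above (not the authors' implementation): each image is a "bag" of region feature vectors carrying only a bag-level distress label, and an mi-SVM-style relabeling heuristic is used with an ordinary logistic-regression instance classifier. All names and settings are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_mil(bags, bag_labels, iters=5):
    """bags: list of (n_i, d) region-feature arrays; bag_labels: 0/1 per bag."""
    # Start by letting every instance inherit its bag's label.
    y = np.concatenate([np.full(len(b), lbl, dtype=float)
                        for b, lbl in zip(bags, bag_labels)])
    X = np.vstack(bags)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(iters):
        clf.fit(X, y)
        # Re-label: in each positive bag keep only the highest-scoring region
        # positive (mi-SVM-style heuristic); negative bags stay all negative.
        relabeled = []
        for b, lbl in zip(bags, bag_labels):
            inst = np.zeros(len(b))
            if lbl == 1:
                inst[np.argmax(clf.predict_proba(b)[:, 1])] = 1.0
            relabeled.append(inst)
        y = np.concatenate(relabeled)
    return clf

def bag_score(clf, bag):
    """An image is as distressed as its most distressed-looking region."""
    return clf.predict_proba(bag)[:, 1].max()
```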

Collaboration


Dive into Christoph Mertz's collaborations.

Top Co-Authors

Charles E. Thorpe (Carnegie Mellon University)
Martial Hebert (Carnegie Mellon University)
Sue McNeil (University of Delaware)
Arne Suppé (Carnegie Mellon University)
Jay Gowdy (Carnegie Mellon University)
John Kozar (Carnegie Mellon University)
David Duggins (Carnegie Mellon University)