Publication


Featured research published by Herman Herman.


The International Journal of Robotics Research | 2006

Toward Reliable Off Road Autonomous Vehicles Operating in Challenging Environments

Alonzo Kelly; Anthony Stentz; Omead Amidi; Mike Bode; David M. Bradley; Antonio Diaz-Calderon; Michael Happold; Herman Herman; Robert Mandelbaum; Thomas Pilarski; Peter Rander; Scott M. Thayer; Nick Vallidis; Randy Warner

The DARPA PerceptOR program implemented a rigorous evaluative test program that fosters the development of field-relevant outdoor mobile robots. Autonomous ground vehicles were deployed on diverse test courses throughout the USA and quantitatively evaluated on factors such as autonomy level, waypoint acquisition, failure rate, speed, and communications bandwidth. Our efforts over the three-year program produced new approaches in planning, perception, localization, and control, driven by the quest for reliable operation in challenging environments. This paper focuses on some of the most unique aspects of the systems developed by the CMU PerceptOR team, the lessons learned during the effort, and the most immediate challenges that remain to be addressed.
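
The evaluation factors named above lend themselves to simple run-log arithmetic. The sketch below is illustrative only (it is not the program's actual scoring code, and the field names are hypothetical): it computes an autonomy level and a mean distance between interventions.

    # Illustrative sketch, not PerceptOR's scoring code; field names are hypothetical.

    def autonomy_level(segments):
        """Fraction of total distance driven without operator intervention.
        `segments` is a list of (distance_m, was_autonomous) tuples."""
        total = sum(d for d, _ in segments)
        auto = sum(d for d, a in segments if a)
        return auto / total if total else 0.0

    def mean_distance_between_interventions(total_distance_m, n_interventions):
        """A simple reliability proxy: meters traveled per operator intervention."""
        return total_distance_m / max(n_interventions, 1)

    # Example: 4.2 km course, 3.9 km driven autonomously, 5 interventions.
    segments = [(3900.0, True), (300.0, False)]
    print(autonomy_level(segments))                          # ~0.93
    print(mean_distance_between_interventions(4200.0, 5))    # 840.0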


Autonomous Robots | 2002

A System for Semi-Autonomous Tractor Operations

Anthony Stentz; Cristian Dima; Carl Wellington; Herman Herman; David Stager

Tractors are the workhorses of the modern farm. By automating these machines, we can increase productivity, improve safety, and reduce costs for many agricultural operations. Many researchers have tested computer-controlled machines for farming, but few have investigated the larger issues, such as how humans can supervise machines and work amongst them. In this paper, we present a system for tractor automation. A human programs a task by driving the relevant routes. The task is divided into subtasks and assigned to a fleet of tractors that drive portions of the routes. Each tractor uses on-board sensors to detect people, animals, and other vehicles in the path of the machine, stopping for such obstacles until it receives advice from a supervisor over a wireless link. A first version of the system was implemented on a single tractor. Several features of the system were validated, including accurate path tracking, the detection of obstacles based on both geometric and non-geometric properties, and self-monitoring to determine when human intervention is required. Additionally, the complete system was tested in a Florida orange grove, where it autonomously drove seven kilometers.
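
The stop-and-ask-the-supervisor behavior described above reduces to a small control loop. The following is a minimal sketch of that pattern; the detector, radio, and drive interfaces are hypothetical stand-ins, not the system's actual components.

    # Sketch of the supervise-on-demand loop; all callables are hypothetical stand-ins.
    import time

    def follow_route(route, detect_obstacle, ask_supervisor, drive_to):
        """Drive each waypoint; on an obstacle, stop and wait for remote advice."""
        for waypoint in route:
            while True:
                obstacle = detect_obstacle()
                if obstacle is None:
                    break
                # Stop and ask the human supervisor over the wireless link.
                advice = ask_supervisor(obstacle)
                if advice == "abort":
                    return False
                if advice == "proceed":
                    break               # supervisor cleared it (e.g., a false alarm)
                time.sleep(0.5)         # "wait": re-check the sensor shortly
            drive_to(waypoint)
        return True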


The International Journal of Robotics Research | 2011

Real-time photorealistic virtualized reality interface for remote mobile robot control

Alonzo Kelly; Nicholas Chan; Herman Herman; Daniel Huber; Robert Meyers; Peter Rander; Randy Warner; Jason Ziglar; Erin Capstick

The task of teleoperating a robot over a wireless video link is known to be very difficult. Teleoperation becomes even more difficult when the robot is surrounded by dense obstacles, speed requirements are high, video quality is poor, or the wireless link is subject to latency. Thanks to high-quality lidar data and improvements in computing and video compression, virtualized reality now has the capacity to dramatically improve teleoperation performance, even in high-speed situations that were formerly impossible. In this paper, we demonstrate the conversion of dense geometry and appearance data, generated on-the-move by a mobile robot, into a photorealistic rendering model that gives the user a synthetic exterior line-of-sight view of the robot, including the context of its surrounding terrain. This technique converts teleoperation into virtual line-of-sight remote control. The underlying metrically consistent environment model also introduces the capacity to remove latency and enhance video compression. Display quality is sufficiently high that the user experience is similar to a driving video game in which the surfaces are textured with live video.
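
A core step in fusing geometry with appearance is associating range points with live imagery. The sketch below shows one standard way to do this, coloring lidar points by projecting them into a synchronized camera image; the pinhole intrinsics and extrinsics are assumed inputs, and the paper's actual pipeline builds a far richer rendering model than this.

    # Minimal sketch of lidar/camera fusion under assumed calibration; not the
    # paper's pipeline.
    import numpy as np

    def colorize_points(points_lidar, image, K, T_cam_from_lidar):
        """points_lidar: (N,3); image: (H,W,3); K: 3x3 intrinsics;
        T_cam_from_lidar: 4x4 rigid transform. Returns (points, colors)."""
        N = points_lidar.shape[0]
        homog = np.hstack([points_lidar, np.ones((N, 1))])
        cam = (T_cam_from_lidar @ homog.T).T[:, :3]   # points in camera frame
        in_front = cam[:, 2] > 0.1                    # keep points ahead of the camera
        pix = (K @ cam[in_front].T).T
        pix = pix[:, :2] / pix[:, 2:3]                # perspective divide
        H, W = image.shape[:2]
        u, v = pix[:, 0].astype(int), pix[:, 1].astype(int)
        valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        pts = points_lidar[in_front][valid]
        colors = image[v[valid], u[valid]]            # sample image at projections
        return pts, colors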


Journal of Field Robotics | 2015

CHIMP, the CMU Highly Intelligent Mobile Platform

Anthony Stentz; Herman Herman; Alonzo Kelly; Eric Meyhofer; G. Clark Haynes; David Stager; Brian Zajac; J. Andrew Bagnell; Jordan Brindza; Christopher M. Dellin; Michael David George; Jose Gonzalez-Mora; Sean Hyde; Morgan Jones; Michel Laverne; Maxim Likhachev; Levi Lister; Matthew Powers; Oscar Ramos; Justin Ray; David Rice; Justin Scheifflee; Raumi Sidki; Siddhartha S. Srinivasa; Kyle Strabala; Jean-Philippe Tardif; Jean-Sebastien Valois; Michael Vande Weghe; Michael D. Wagner; Carl Wellington

We have developed the CHIMP (CMU Highly Intelligent Mobile Platform) robot as a platform for executing complex tasks in dangerous, degraded, human-engineered environments. CHIMP has a near-human form factor, work envelope, strength, and dexterity to work effectively in these environments. It avoids the need for complex control by maintaining static rather than dynamic stability. Using sensors embedded in the robot's head, CHIMP generates full three-dimensional representations of its environment and transmits these models to a human operator to achieve latency-free situational awareness. This awareness is used to visualize the robot within its environment and preview candidate free-space motions. Operators using CHIMP are able to select between task, workspace, and joint-space control modes to trade between speed and generality. Thus, they are able to perform remote tasks quickly, confidently, and reliably, thanks to the overall design of the robot and software. CHIMP's hardware was designed, built, and tested over 15 months leading up to the DARPA Robotics Challenge. The software was developed in parallel using surrogate hardware and simulation tools. Over a six-week span prior to the DRC Trials, the software was ported to the robot, the system was debugged, and the tasks were practiced continuously. Given the aggressive schedule leading to the DRC Trials, development of CHIMP focused primarily on manipulation tasks. Nonetheless, our team finished 3rd out of 16. With an upcoming year to develop new software for CHIMP, we look forward to improving the robot's capability and increasing its speed to compete in the DRC Finals.
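
The speed-versus-generality trade among the three control modes can be pictured as successive reductions to joint space. In the sketch below, only the mode names come from the abstract; the command types, task library, and IK call are hypothetical stand-ins, not CHIMP's software interfaces.

    # Illustrative sketch of mode-dependent command reduction; not CHIMP's code.
    from enum import Enum

    class ControlMode(Enum):
        TASK = "task"            # highest level, fastest to use, most specific
        WORKSPACE = "workspace"  # Cartesian end-effector goals
        JOINT = "joint"          # direct joint targets: slowest to use, fully general

    def to_joint_targets(mode, command, ik_solve, task_library):
        """Reduce a command at any level to executable joint-space targets."""
        if mode is ControlMode.TASK:
            # Expand a canned task (e.g., "turn the valve") into workspace goals.
            pose_goals = task_library[command.name](command.params)
            return [ik_solve(p) for p in pose_goals]
        if mode is ControlMode.WORKSPACE:
            return [ik_solve(command.pose)]
        return [command.joint_angles]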


Intelligent Robots and Systems | 2001

Extending shape-from-motion to noncentral omnidirectional cameras

Dennis Strelow; Jeffrey Mishler; Sanjiv Singh; Herman Herman

Algorithms for shape-from-motion simultaneously estimate the camera motion and scene structure. When extended to omnidirectional cameras, shape-from-motion algorithms are likely to provide robust motion estimates, in particular because of the camera's wide field of view. In this paper, we describe both batch and online shape-from-motion algorithms for omnidirectional cameras, and a precise calibration technique that improves the accuracy of both methods. The shape-from-motion and calibration methods are general, and they handle a wide variety of omnidirectional camera geometries. In particular, the methods do not require that the camera-mirror combination have a single center of projection. We describe a noncentral camera that we have developed, and show experimentally that combining shape-from-motion with this design produces highly accurate motion estimates.
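
The noncentral property changes the estimation math: with no single center of projection, each pixel maps to its own 3D ray, so a natural residual is point-to-ray distance rather than ordinary pixel reprojection error. The sketch below illustrates such a cost under an assumed pixel_to_ray calibration function; it is a generic formulation, not the authors' implementation.

    # Sketch of a ray-based SfM cost for noncentral cameras; pixel_to_ray is an
    # assumed calibration function, not given here.
    import numpy as np

    def point_to_ray_error(X, ray_origin, ray_dir):
        """Distance from 3D point X to a ray; ray_dir must be unit length."""
        v = X - ray_origin
        perp = v - np.dot(v, ray_dir) * ray_dir   # component perpendicular to the ray
        return np.linalg.norm(perp)

    def sfm_cost(points, poses, observations, pixel_to_ray):
        """Sum of squared point-to-ray errors over all (camera i, point j, pixel)."""
        total = 0.0
        for i, j, pixel in observations:
            R, t = poses[i]                        # world-from-camera rotation, translation
            o_cam, d_cam = pixel_to_ray(pixel)     # per-pixel ray in the camera frame
            o_w, d_w = R @ o_cam + t, R @ d_cam    # express the ray in world frame
            total += point_to_ray_error(points[j], o_w, d_w) ** 2
        return total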


Information Visualization | 2002

Stereo perception on an off-road vehicle

A. Rieder; B. Southall; Garbis Salgian; Robert Mandelbaum; Herman Herman; Peter Rander; T. Stentz

This paper presents a vehicle for autonomous off-road navigation built in the framework of DARPA's PerceptOR program. Special emphasis is given to the perception system. A set of three stereo camera pairs provides color and 3D data over a wide field of view (greater than 100 degrees) at high resolution (2160 × 480 pixels) and high frame rates (5 Hz). This is made possible by integrating powerful image-processing hardware called Acadia. These high data rates require efficient sensor fusion, terrain reconstruction, and path planning algorithms. The paper quantifies sensor performance and shows examples of successful obstacle avoidance.
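
The quoted resolution and frame rate imply a substantial raw data rate, which is why dedicated hardware such as Acadia matters. The sketch below does the back-of-the-envelope arithmetic and adds the standard disparity-to-depth relation; the focal length and baseline are made-up illustrative values, not the vehicle's calibration.

    # Back-of-the-envelope data rate plus the standard stereo relation Z = f*B/d.
    width, height, fps, bytes_per_pixel = 2160, 480, 5, 3   # color imagery
    raw_rate = width * height * fps * bytes_per_pixel
    print(f"{raw_rate / 1e6:.1f} MB/s of raw color data")   # ~15.6 MB/s

    def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.3):
        """Standard stereo triangulation; parameters are illustrative only."""
        return focal_px * baseline_m / disparity_px if disparity_px > 0 else float("inf")

    print(depth_from_disparity(10.0))   # 24.0 m for a 10-pixel disparity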


international conference on computer vision | 2009

Real-time photo-realistic visualization of 3D environments for enhanced tele-operation of vehicles

Daniel Huber; Herman Herman; Alonzo Kelly; Pete Rander; Jason Ziglar

This paper describes a method for creating photorealistic three-dimensional (3D) models of real-world environments in real-time for the purpose of improving and extending the capabilities of vehicle tele-operation. Our approach utilizes the combined data from a laser scanner (for modeling 3D geometry) and a video camera (for modeling surface appearance). The sensors are mounted on a moving vehicle platform, and a photo-realistic 3D model of the vehicles environment is generated and displayed to the remote operator in real time. Our model consists of three main components: a textured ground surface, textured or colorized non-ground objects, and a textured background for representing regions beyond the laser scanners sensing horizon. Our approach enables many unique capabilities for vehicle tele-operation, including viewing the scene from virtual viewpoints (e.g., behind the vehicle or top down), seamless augmentation of the environment with digital objects, and improved robustness to transmission latencies and data dropouts.
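
Separating the first two model components, ground versus non-ground, is commonly done with a height test over a grid. The sketch below uses a simple lowest-point-per-cell heuristic as a stand-in for the paper's actual ground-surface estimation.

    # Illustrative ground/non-ground split; a flat-per-cell assumption stands in
    # for the paper's surface model.
    import numpy as np

    def split_ground(points, cell=0.5, height_tol=0.25):
        """points: (N,3). A point is 'ground' if it lies within height_tol of the
        lowest point in its grid cell. Returns (ground_mask, nonground_mask)."""
        cells = np.floor(points[:, :2] / cell).astype(int)
        min_z = {}
        for idx, c in enumerate(map(tuple, cells)):   # lowest z per cell
            z = points[idx, 2]
            if c not in min_z or z < min_z[c]:
                min_z[c] = z
        ground = np.zeros(len(points), dtype=bool)
        for idx, c in enumerate(map(tuple, cells)):   # compare each point to its cell
            ground[idx] = points[idx, 2] - min_z[c] < height_tol
        return ground, ~ground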


International Symposium on Experimental Robotics | 2009

Coordinated Control and Range Imaging for Mobile Manipulation

Dean Anderson; Thomas M. Howard; David Apfelbaum; Herman Herman; Alonzo Kelly

Mobile manipulators currently deployed for explosive ordnance disposal are typically controlled via crude forms of teleoperation. Manipulator joints are actuated individually in joint space, making precise motions in state space difficult. Scene understanding is limited, as monocular cameras provide little (if any) depth information. Furthermore, the operator must manually coordinate the manipulator articulation with the positioning of the mobile base. These limitations place greater demands on the operator, decrease task efficiency, and can increase exposure in dangerous environments.
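
One common remedy for the joint-by-joint control problem described above is to treat the base and arm as a single kinematic system and solve for all velocities at once, i.e., resolved-rate control. The sketch below assumes a combined Jacobian supplied by a robot model and uses damped least squares; it names a standard technique, not this paper's implementation.

    # Sketch of resolved-rate control over combined base+arm DOF; the Jacobian J
    # is an assumed input from a robot model.
    import numpy as np

    def resolved_rate_step(J, ee_twist, damping=0.01):
        """J: (6, n) end-effector Jacobian over all n DOF (base + arm).
        ee_twist: desired (vx, vy, vz, wx, wy, wz). Returns n velocity commands.
        Damped least squares keeps the solution stable near singularities."""
        JT = J.T
        return JT @ np.linalg.solve(J @ JT + damping * np.eye(6), ee_twist)

    # Usage: each control tick, recompute J from the current configuration and
    # command the returned rates to the base and manipulator together.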


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Remote operation of the Black Knight unmanned ground combat vehicle

Jean-Sebastien Valois; Herman Herman; John Bares; David Rice

The Black Knight is a 12-ton, C-130-deployable Unmanned Ground Combat Vehicle (UGCV). It was developed to demonstrate how unmanned vehicles can be integrated into a mechanized military force to increase combat capability while protecting Soldiers in a full spectrum of battlefield scenarios. The Black Knight is used in military operational tests that allow Soldiers to develop the necessary techniques, tactics, and procedures to operate a large unmanned vehicle within a mechanized military force. It can be safely controlled by Soldiers from inside a manned fighting vehicle, such as the Bradley Fighting Vehicle. Black Knight control modes include path tracking, guarded teleoperation, and fully autonomous movement. Its state-of-the-art Autonomous Navigation Module (ANM) includes terrain-mapping sensors for route planning, terrain classification, and obstacle avoidance. In guarded teleoperation mode, the ANM data, together with automotive dials and gauges, are used to generate video overlays that assist the operator during both day and night driving. Remote operation of various sensors also allows Soldiers to perform effective target location and tracking. This document covers Black Knight's system architecture and includes implementation overviews of the various operation modes. We conclude with lessons learned and development goals for the Black Knight UGCV.
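
The "guarded" part of guarded teleoperation can be pictured as a speed governor driven by obstacle proximity from the ANM's terrain map. The sketch below uses illustrative distances, not Black Knight's actual parameters or software.

    # Illustrative guarded-teleoperation speed governor; thresholds are assumptions.
    def guard_speed(commanded_mps, nearest_obstacle_m, stop_at=2.0, slow_from=10.0):
        """Clamp an operator's speed command based on obstacle proximity."""
        if nearest_obstacle_m <= stop_at:
            return 0.0                    # hard stop: obstacle too close
        if nearest_obstacle_m >= slow_from:
            return commanded_mps          # clear ahead: pass command through
        # Linear ramp between the stop and slow-down distances.
        scale = (nearest_obstacle_m - stop_at) / (slow_from - stop_at)
        return commanded_mps * scale

    print(guard_speed(5.0, 6.0))   # 2.5 m/s at the midpoint of the ramp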


International Conference on Multimedia Information Networking and Security | 2000

Training and performance assessment of land mine detector operator using motion tracking and virtual mine lane

Herman Herman; Jeffrey D. McMahill; George Kantor

Landmine detection is a complex and highly dangerous task. Most demining operations are done using hand-held detectors, which means that the operator is always at risk of serious injury or death. One of the most important factors that determines the probability of detection is operator performance. Therefore, it is very important to train operators well and to be able to assess their performance accurately. To achieve these objectives, we have been developing two training tools: the 3D tracker, for real-time feedback during training, and the virtual mine lane, for interactive training. We have been using the 3D tracker successfully to assess the performance of an operator as part of a successful training program.
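
One way tracked detector-head positions can feed a performance assessment is a sweep-coverage score over the lane. The sketch below rasterizes the swept path onto a grid and reports the fraction of lane cells covered; the grid size and sweep radius are illustrative assumptions, not the 3D tracker's actual metrics.

    # Illustrative sweep-coverage score from tracked detector positions; the
    # lane dimensions, cell size, and sweep radius are assumptions.
    import numpy as np

    def sweep_coverage(head_xy, lane_w=1.0, lane_l=10.0, cell=0.05, sweep_r=0.15):
        """head_xy: (N,2) detector positions in lane coordinates (meters).
        Returns the fraction of lane cells within roughly sweep_r (square
        footprint) of any tracked position."""
        nx, ny = int(lane_w / cell), int(lane_l / cell)
        covered = np.zeros((nx, ny), dtype=bool)
        r_cells = int(sweep_r / cell)
        for x, y in head_xy:
            if not (0 <= x < lane_w and 0 <= y < lane_l):
                continue                 # ignore samples outside the lane
            i, j = int(x / cell), int(y / cell)
            covered[max(i - r_cells, 0):min(i + r_cells + 1, nx),
                    max(j - r_cells, 0):min(j + r_cells + 1, ny)] = True
        return covered.mean()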

Collaboration


Dive into Herman Herman's collaborations.

Top Co-Authors

Anthony Stentz, Carnegie Mellon University
Sanjiv Singh, Carnegie Mellon University
Alonzo Kelly, Carnegie Mellon University
Aaron J. Bruns, Carnegie Mellon University
Mark M. Chaney, Carnegie Mellon University
Peter Rander, Carnegie Mellon University