Publications

Featured research published by Jason Ziglar.


Journal of Field Robotics | 2006

A Robust Approach to High-Speed Navigation for Unrehearsed Desert Terrain

Chris Urmson; Charlie Ragusa; David Ray; Joshua Anhalt; Daniel Bartz; Tugrul Galatali; Alexander Gutierrez; Josh Johnston; Sam Harbaugh; Hiroki Kato; William C. Messner; Nicholas Miller; Kevin M. Peterson; Bryon Smith; Jarrod M. Snider; Spencer Spiker; Jason Ziglar; Michael Clark; Phillip L. Koon; Aaron Mosher; Joshua Struble

This article presents a robust approach to navigating at high speed across desert terrain. A central theme of this approach is the combination of simple ideas and components to build a capable and robust system. A pair of robots was developed that completed the 212-kilometer Grand Challenge desert race in approximately seven hours. A path-centric navigation system uses a combination of LIDAR- and RADAR-based perception sensors to traverse trails and avoid obstacles at speeds up to 15 m/s. The onboard navigation system leverages a human-based pre-planning system to improve reliability and robustness. The robots were extensively tested, traversing over 3,500 kilometers of desert trails prior to completing the challenge. This article describes the mechanisms, algorithms, and testing methods used to achieve this performance.


The International Journal of Robotics Research | 2011

Real-time photorealistic virtualized reality interface for remote mobile robot control

Alonzo Kelly; Nicholas Chan; Herman Herman; Daniel Huber; Robert Meyers; Peter Rander; Randy Warner; Jason Ziglar; Erin Capstick

The task of teleoperating a robot over a wireless video link is known to be very difficult. Teleoperation becomes even more difficult when the robot is surrounded by dense obstacles, speed requirements are high, video quality is poor, or the wireless link is subject to latency. Thanks to high-quality lidar data and improvements in computing and video compression, virtualized reality has the capacity to dramatically improve teleoperation performance, even in high-speed situations that were formerly impossible. In this paper, we demonstrate the conversion of dense geometry and appearance data, generated on-the-move by a mobile robot, into a photorealistic rendering model that gives the user a synthetic exterior line-of-sight view of the robot, including the context of its surrounding terrain. This technique converts teleoperation into virtual line-of-sight remote control. The underlying metrically consistent environment model also introduces the capacity to remove latency and enhance video compression. Display quality is sufficiently high that the user experience is similar to that of a driving video game in which the surfaces are textured with live video.


Intelligent Robots and Systems (IROS) | 2008

Fast feature detection and stochastic parameter estimation of road shape using multiple LIDAR

Kevin M. Peterson; Jason Ziglar; Paul E. Rybski

This paper describes an algorithm for an autonomous car to identify the shape of a roadway by detecting geometric features via LIDAR. The data from multiple LIDARs are fused together to detect both obstacles and geometric features such as curbs, berms, and shoulders. These features identify the boundaries of the roadway and are used by a stochastic state estimator to identify the most likely road shape. This algorithm has been used successfully to allow an autonomous car to drive on paved roadways as well as on off-road trails without requiring different sets of parameters for the different domains.
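The feature-to-estimate step above can be sketched as a tiny importance-sampling estimator. This is an illustrative assumption, not the paper's actual method: the `(center, width)` road parameterization, the Gaussian curb-observation model, and all thresholds here are invented for the sketch.

```python
import math
import random

random.seed(0)  # deterministic sketch

def curb_likelihood(particle, left_curb, right_curb, sigma=0.5):
    """Gaussian likelihood of observed curb offsets given a (center, width) hypothesis."""
    center, width = particle
    pred_left = center - width / 2.0
    pred_right = center + width / 2.0
    err = (pred_left - left_curb) ** 2 + (pred_right - right_curb) ** 2
    return math.exp(-err / (2.0 * sigma ** 2))

def estimate_road_shape(left_curb, right_curb, n_particles=2000):
    """Sample road hypotheses, weight them by the curb observations, return the weighted mean."""
    particles = [(random.uniform(-3, 3), random.uniform(2, 8)) for _ in range(n_particles)]
    weights = [curb_likelihood(p, left_curb, right_curb) for p in particles]
    total = sum(weights)
    center = sum(w * p[0] for p, w in zip(particles, weights)) / total
    width = sum(w * p[1] for p, w in zip(particles, weights)) / total
    return center, width

# Curbs detected 2 m to either side of the vehicle: estimate lands near center 0, width 4.
center, width = estimate_road_shape(left_curb=-2.0, right_curb=2.0)
```

The real system estimates a richer road model from features fused across several LIDARs; the sketch only shows how noisy boundary detections can drive a probabilistic shape estimate.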


International Conference on Computer Vision | 2009

Real-time photo-realistic visualization of 3D environments for enhanced tele-operation of vehicles

Daniel Huber; Herman Herman; Alonzo Kelly; Pete Rander; Jason Ziglar

This paper describes a method for creating photorealistic three-dimensional (3D) models of real-world environments in real time for the purpose of improving and extending the capabilities of vehicle tele-operation. Our approach utilizes the combined data from a laser scanner (for modeling 3D geometry) and a video camera (for modeling surface appearance). The sensors are mounted on a moving vehicle platform, and a photorealistic 3D model of the vehicle's environment is generated and displayed to the remote operator in real time. Our model consists of three main components: a textured ground surface, textured or colorized non-ground objects, and a textured background for representing regions beyond the laser scanner's sensing horizon. Our approach enables many unique capabilities for vehicle tele-operation, including viewing the scene from virtual viewpoints (e.g., behind the vehicle or top down), seamless augmentation of the environment with digital objects, and improved robustness to transmission latencies and data dropouts.
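The three-component decomposition described above can be sketched as a single classification pass over lidar points. Everything here is an illustrative assumption rather than the paper's implementation: the height and range thresholds, the class names, and the point representation are all invented for the sketch.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds (the paper's actual segmentation is more sophisticated).
GROUND_HEIGHT = 0.3     # meters above the local ground plane
SENSING_HORIZON = 30.0  # meters; beyond this, points fall into the background shell

@dataclass
class SceneModel:
    """Holds the three rendering components: ground, non-ground objects, background."""
    ground: list = field(default_factory=list)
    objects: list = field(default_factory=list)
    background: list = field(default_factory=list)

def partition_points(points, model):
    """Assign each (x, y, z) point to one of the three scene components."""
    for x, y, z in points:
        horizontal_range = (x ** 2 + y ** 2) ** 0.5
        if horizontal_range > SENSING_HORIZON:
            model.background.append((x, y, z))
        elif z < GROUND_HEIGHT:
            model.ground.append((x, y, z))
        else:
            model.objects.append((x, y, z))
    return model

# One ground point, one obstacle point, one far point beyond the sensing horizon.
model = partition_points([(1, 0, 0.0), (2, 0, 1.5), (40, 0, 0.0)], SceneModel())
```

Each component can then be textured independently from the video stream, which is what allows the background shell to stand in for geometry the scanner cannot reach.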


Journal of Aerospace Computing Information and Communication | 2008

Software Infrastructure for an Autonomous Ground Vehicle

Matthew McNaughton; Christopher R. Baker; Tugrul Galatali; Bryan Salesky; Chris Urmson; Jason Ziglar

The DARPA Urban Challenge required robots to drive 60 miles on suburban roads while following the rules of the road in interactions with human drivers and other robots. Tartan Racing’s Boss won the competition, completing the course in just over 4 hours. This paper describes the software infrastructure developed by the team to support the perception, planning, behavior generation, and other artificial intelligence components of Boss. We discuss the organizing principles of the infrastructure, as well as details of the operator interface, interprocess communications, data logging, system configuration, process management, and task framework, with attention to the requirements that led to the design. We identify the requirements as valuable, reusable artifacts of the development process.


Journal of Field Robotics | 2008

Autonomous driving in urban environments: Boss and the Urban Challenge

Chris Urmson; Joshua Anhalt; Drew Bagnell; Christopher R. Baker; Robert Bittner; M. N. Clark; John M. Dolan; Dave Duggins; Tugrul Galatali; Christopher Geyer; Michele Gittleman; Sam Harbaugh; Martial Hebert; Thomas M. Howard; Sascha Kolski; Alonzo Kelly; Maxim Likhachev; Matthew McNaughton; Nicholas Miller; Kevin M. Peterson; Brian Pilnick; Raj Rajkumar; Paul E. Rybski; Bryan Salesky; Young-Woo Seo; Sanjiv Singh; Jarrod M. Snider; Anthony Stentz; Ziv Wolkowicki; Jason Ziglar


Archive | 2007

Tartan Racing: A Multi-Modal Approach to the DARPA Urban Challenge

Chris Urmson; J. Andrew Bagnell; Christopher R. Baker; Martial Hebert; Alonzo Kelly; Raj Rajkumar; Paul E. Rybski; Sebastian Scherer; Reid G. Simmons; Sanjiv Singh; Anthony Stentz; Jason Ziglar; Darpa Urban Challenge Team


Archive | 2007

Obstacle detection arrangements in and for autonomous vehicles

Joshua Johnston; Jason Ziglar


Archive | 2008

Plowing for Controlled Steep Crater Descents

Jason Ziglar; David Kohanbash; David Wettergreen


Archive | 2014

System and Method for Terrain Mapping

Kenneth L. Stratton; Louis Bojarski; Peter Rander; Randon Warner; Jason Ziglar

Collaboration

Dive into Jason Ziglar's collaborations.

Top Co-Authors

Chris Urmson | Carnegie Mellon University
Tugrul Galatali | Carnegie Mellon University
Alonzo Kelly | Carnegie Mellon University
Bryan Salesky | Carnegie Mellon University
Kevin M. Peterson | Carnegie Mellon University
Paul E. Rybski | Carnegie Mellon University
Anthony Stentz | Carnegie Mellon University
Jarrod M. Snider | Carnegie Mellon University