Publication


Featured research published by Jay Gowdy.


Mechatronics | 2003

Perception for collision avoidance and autonomous driving

Romuald Aufrère; Jay Gowdy; Christoph Mertz; Charles E. Thorpe; Chieh-Chih Wang; Teruko Yata

The Navlab group at Carnegie Mellon University has a long history of development of automated vehicles and intelligent systems for driver assistance. The earlier work of the group concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, to look all around the vehicle for safe driving. The current system uses video sensing, laser rangefinders, a novel light-stripe rangefinder, software to process each sensor individually, a map-based fusion system, and a probability based predictive model. The complete system has been demonstrated on the Navlab 11 vehicle for monitoring the environment of a vehicle driving through a cluttered urban environment, detecting and tracking fixed objects, moving objects, pedestrians, curbs, and roads.
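
The map-based fusion step mentioned here can be pictured as a single object map that every sensor pipeline writes into: reports that land near an existing object reinforce it, and anything else starts a new object. A minimal sketch under that reading; the class names and the 1 m association gate are invented for illustration and are not taken from the Navlab code:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One object report from a single sensor, in vehicle coordinates (m)."""
    x: float
    y: float
    confidence: float
    sensor: str

@dataclass
class MapObject:
    """A fused object hypothesis accumulated from several sensors."""
    x: float
    y: float
    confidence: float
    sensors: set = field(default_factory=set)

class FusionMap:
    """Toy map-based fusion: detections within GATE metres of an existing
    object reinforce it; anything farther away spawns a new object."""
    GATE = 1.0  # association gate in metres (illustrative value)

    def __init__(self):
        self.objects: list[MapObject] = []

    def update(self, det: Detection) -> None:
        for obj in self.objects:
            if math.hypot(obj.x - det.x, obj.y - det.y) < self.GATE:
                # Confidence-weighted average pulls the object toward the report.
                w = det.confidence / (obj.confidence + det.confidence)
                obj.x += w * (det.x - obj.x)
                obj.y += w * (det.y - obj.y)
                obj.confidence = min(1.0, obj.confidence + det.confidence)
                obj.sensors.add(det.sensor)
                return
        self.objects.append(MapObject(det.x, det.y, det.confidence, {det.sensor}))

fusion = FusionMap()
fusion.update(Detection(4.9, 2.0, 0.6, "ladar"))
fusion.update(Detection(5.1, 2.1, 0.5, "light-stripe"))
print(len(fusion.objects), fusion.objects[0].sensors)  # 1 object, seen by both sensors
```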


Systems, Man, and Cybernetics | 1990

Annotated maps for autonomous land vehicles

Charles E. Thorpe; Jay Gowdy

The use of annotated maps to manage the information needed by autonomous mobile robots is discussed. Annotations tie specific information to particular locations of objects in the map, such as the description of a landmark or the proper control strategy for crossing an intersection. Descriptor annotations are retrieved on demand. Triggers are automatically sent to a specified process when the robot reaches a given location. The most ambitious runs involved navigating through a suburban neighborhood, including image processing to follow roads, 3-D perception for landmark identification, and inertial navigation to turn at intersections. Annotated maps were built by first driving the robot by hand and recording the location of roads and the location and description of objects along the way. During mission planning, triggers were added to specify how and where to drive and what to look for. While executing the mission, the triggers specified when to use image processing, when to slow down and identify landmarks, how to negotiate intersections, and when to stop at the goal.
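
The two annotation kinds the abstract distinguishes, descriptors retrieved on demand and triggers pushed to a process when the robot reaches a location, map naturally onto a small data structure. A minimal sketch of that idea; the class and method names are hypothetical, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Annotation:
    """Information tied to a map location (coordinates in metres)."""
    x: float
    y: float
    payload: dict          # e.g. a landmark description or control strategy

class AnnotatedMap:
    TRIGGER_RADIUS = 2.0   # how close the robot must be to fire a trigger

    def __init__(self):
        self.descriptors: list[Annotation] = []
        self.triggers: list[tuple[Annotation, Callable[[dict], None]]] = []

    def add_descriptor(self, ann: Annotation) -> None:
        """Descriptor annotations are stored and retrieved on demand."""
        self.descriptors.append(ann)

    def add_trigger(self, ann: Annotation, handler: Callable[[dict], None]) -> None:
        """Trigger annotations are pushed to a process at a given location."""
        self.triggers.append((ann, handler))

    def query(self, x: float, y: float, radius: float) -> list[dict]:
        """On-demand retrieval: all descriptors within `radius` of (x, y)."""
        return [a.payload for a in self.descriptors
                if (a.x - x) ** 2 + (a.y - y) ** 2 <= radius ** 2]

    def robot_moved_to(self, x: float, y: float) -> None:
        """Fire (and consume) any triggers the robot has reached."""
        remaining = []
        for ann, handler in self.triggers:
            if (ann.x - x) ** 2 + (ann.y - y) ** 2 <= self.TRIGGER_RADIUS ** 2:
                handler(ann.payload)
            else:
                remaining.append((ann, handler))
        self.triggers = remaining

m = AnnotatedMap()
m.add_trigger(Annotation(50.0, 0.0, {"action": "slow down, identify landmark"}),
              lambda p: print("trigger:", p["action"]))
m.robot_moved_to(49.0, 0.5)   # within 2 m of the annotation, so the trigger fires
```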


The International Journal of Robotics Research | 2001

Distributed Coordination in Modular Precision Assembly Systems

Alfred A. Rizzi; Jay Gowdy; Ralph L. Hollis

A promising approach to enabling the rapid deployment and reconfiguration of automated assembly systems is to make use of cooperating, modular, robust robotic agents. Over the past 5 years, the authors have been constructing just such a system suitable for assembly of high-precision, high-value products. Within this environment, each robotic agent executes its own program, coordinating its activity with that of its peers to produce globally cooperative precision behavior. To simplify the problems associated with deploying such systems, each agent adheres to a strict notion of modularity, both physically and computationally. The intent is to provide an architecture within which it is straightforward to specify strategies for the robust execution of potentially complex and fragile cooperative behaviors. The underlying behaviors use a runtime environment that includes tools to automatically sequence the activities of an agent. Taken together, these abstractions enable a designer to rapidly and effectively describe the high-level behavior of a collection of agents while relying on a set of formally correct control strategies to properly execute and sequence the necessary continuous behaviors.
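
The peer coordination described here can be illustrated with two agents that each run their own program and synchronize a cooperative step by exchanging readiness messages. A toy sketch only; both the agent names and the message protocol are invented, not the authors' system:

```python
import queue
import threading

class Agent(threading.Thread):
    """Toy cooperating agent: runs its own program and synchronizes a
    cooperative step with a peer by exchanging 'ready' tokens."""

    def __init__(self, name: str, inbox: queue.Queue, peer_inbox: queue.Queue):
        super().__init__()
        self.name = name
        self.inbox = inbox
        self.peer_inbox = peer_inbox

    def run(self):
        print(f"{self.name}: moving to rendezvous point")
        self.peer_inbox.put(("ready", self.name))   # announce arrival to peer
        msg, sender = self.inbox.get()              # block until peer arrives
        assert msg == "ready"
        print(f"{self.name}: peer {sender} ready, executing cooperative step")

a_inbox, b_inbox = queue.Queue(), queue.Queue()
agent_a = Agent("courier", a_inbox, b_inbox)
agent_b = Agent("manipulator", b_inbox, a_inbox)
agent_a.start(); agent_b.start()
agent_a.join(); agent_b.join()
```

Because each agent posts its token before blocking on its own inbox, the handshake cannot deadlock; neither agent needs a central coordinator, which is the point of the fully distributed design the abstract describes.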


Engineering Applications of Artificial Intelligence | 1991

Combining artificial neural networks and symbolic processing for autonomous robot guidance

Dean A. Pomerleau; Jay Gowdy; Charles E. Thorpe

Artificial neural networks are capable of performing the reactive aspects of autonomous driving, such as staying on the road and avoiding obstacles. This paper describes an efficient technique for training individual networks to perform these reactive driving tasks. But driving requires more than a collection of isolated capabilities. To achieve true autonomy, a system must determine which capabilities should be employed in the current situation to achieve its objectives. Such goal-directed behavior is difficult to implement in an entirely connectionist system. This paper describes a rule-based technique for combining multiple artificial neural networks with map-based symbolic reasoning to achieve high-level behaviors. The resulting system is not only able to stay on the road, but also to follow a route to a predetermined destination, turning appropriately at intersections and stopping when it has reached its goal.
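
The division of labor the abstract describes, networks handling reactive control while rules over a map decide which network should be driving, can be made concrete with a small arbiter. A hypothetical sketch, with stub functions standing in for the trained networks:

```python
# Hypothetical stand-ins for trained driving networks: each maps a sensor
# image to a steering command.
def road_following_net(image):
    return 0.02    # small correction to stay centered on the road

def left_turn_net(image):
    return -0.50   # committed left-turn arc through an intersection

def stop_behavior(image):
    return 0.0     # hold heading; a real system would also brake

class RouteArbiter:
    """Toy rule-based arbitration: map-based symbolic reasoning decides
    WHICH network drives; the chosen network decides HOW to steer."""

    def __init__(self, route):
        self.route = route   # symbolic route plan, e.g. ["turn_left"]
        self.step = 0

    def select(self, at_intersection: bool, at_goal: bool):
        if at_goal:
            return stop_behavior
        if (at_intersection and self.step < len(self.route)
                and self.route[self.step] == "turn_left"):
            self.step += 1
            return left_turn_net
        return road_following_net

arbiter = RouteArbiter(["turn_left"])
print(arbiter.select(at_intersection=False, at_goal=False)(None))  # 0.02
print(arbiter.select(at_intersection=True, at_goal=False)(None))   # -0.5
```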


The International Journal of Robotics Research | 2005

Safe Robot Driving in Cluttered Environments

Charles E. Thorpe; Justin Carlson; David Duggins; Jay Gowdy; Robert A. MacLachlan; Christoph Mertz; Arne Suppé; Bob Wang

The Navlab group at Carnegie Mellon University has a long history of development of automated vehicles and intelligent systems for driver assistance. The earlier work of the group concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, to look all around the vehicle for safe driving. The current system uses video sensing, laser rangefinders, a novel light-stripe rangefinder, software to process each sensor individually, and a map-based fusion system. The complete system has been demonstrated on the Navlab 11 vehicle for monitoring the environment of a vehicle driving through a cluttered urban environment, detecting and tracking fixed objects, moving objects, pedestrians, curbs, and roads.


International Conference on Intelligent Transportation Systems | 2004

Development of the side component of the transit integrated collision warning system

Aaron Steinfeld; David Duggins; Jay Gowdy; John Kozar; Robert A. MacLachlan; Christoph Mertz; Arne Suppé; Charles E. Thorpe; Chieh-Chih Wang

This paper describes the development activities leading up to field testing of the transit integrated collision warning system, with special attention to the side component. Two buses, one each in California and Pennsylvania, have been outfitted with sensors, cameras, computers, and driver-vehicle interfaces in order to detect threats and generate appropriate warnings. The overall project goals, integrated concept, side component features, and future plans are documented here.


Proc. SPIE 4715, Unmanned Ground Vehicle Technology | 2002

Driving in traffic: short-range sensing for urban collision avoidance

Charles E. Thorpe; David Duggins; Jay Gowdy; Robert A. MacLachlan; Christoph Mertz; Mel Siegel; Arne Suppé; Bob Wang; Teruko Yata

Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as runoff-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for driving in urban areas. We need to sense cars and pedestrians and curbs and fire plugs and bicycles and lamp posts; we need to predict the paths of our own vehicle and of other moving objects; and we need to decide when to issue alerts or warnings to both the driver of our own vehicle and (potentially) to nearby pedestrians. No single sensor is currently able to detect and track all relevant objects. We are working with radar, ladar, stereo vision, and a novel light-stripe range sensor. We have installed a subset of these sensors on a city bus, driving through the streets of Pittsburgh on its normal runs. We are using different kinds of data fusion for different subsets of sensors, plus a coordinating framework for mapping objects at an abstract level.
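
The "predict paths, then decide when to warn" logic sketched above reduces, in its simplest constant-velocity form, to a closest-approach test on relative motion. A minimal illustration; the thresholds and the straight-line prediction are our assumptions, not the paper's:

```python
import math

def time_to_closest_approach(p_rel, v_rel):
    """Time (s) at which two constant-velocity objects are closest.
    p_rel, v_rel: relative position (m) and velocity (m/s) as (x, y)."""
    vv = v_rel[0] ** 2 + v_rel[1] ** 2
    if vv == 0.0:
        return 0.0           # no relative motion: closest right now
    t = -(p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1]) / vv
    return max(t, 0.0)       # only look forward in time

def should_warn(p_rel, v_rel, danger_radius=2.0, horizon=5.0) -> bool:
    """Warn if, under straight-line prediction, the object passes within
    danger_radius metres of us inside the warning horizon (seconds)."""
    t = time_to_closest_approach(p_rel, v_rel)
    if t > horizon:
        return False
    dx = p_rel[0] + v_rel[0] * t
    dy = p_rel[1] + v_rel[1] * t
    return math.hypot(dx, dy) < danger_radius

# Pedestrian 10 m ahead and 3 m to the side, drifting toward our path
# at 1 m/s while we close at 4 m/s:
print(should_warn(p_rel=(10.0, 3.0), v_rel=(-4.0, -1.0)))  # True
```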


Archive | 1997

SAUSAGES: Between Planning and Action

Jay Gowdy

Early in the development of unmanned ground vehicles, it became apparent that some form of mission execution and monitoring was needed to integrate the capabilities of the perception systems. We had road followers that robustly followed roads [3.6], object detectors that avoided obstacles [3.3], landmark recognizers that localized the vehicle position in a map [3.3], but we had no consistent architectural glue to join them. A robust road follower is impressive on its own, but a road follower alone has no way to know which way to turn at an intersection, no way to know when to speed up or slow down for important events, etc. A system that can execute a complex mission cannot simply be the sum of its perceptual modalities; there needs to be a “plan” which uses high level knowledge about goals and intentions to direct the behaviors of the low level perception and actuation modules.
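
The "plan as architectural glue" argument can be made concrete as a sequence of links, each naming which perception module should drive and the condition that ends that leg of the mission. A minimal sketch of the idea only, not the SAUSAGES system itself; all names are invented:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Link:
    """One leg of a mission: which low-level module should drive, and
    the condition that finishes the leg. Names are illustrative."""
    module: str                       # e.g. "road_follower"
    done: Callable[[dict], bool]      # predicate over vehicle state

def run_mission(plan: list[Link], states) -> None:
    """Step through the plan, activating one perception module at a time."""
    leg = 0
    for state in states:              # states would stream from the vehicle
        if leg >= len(plan):
            print("mission complete")
            return
        link = plan[leg]
        print(f"x={state['x']:6.1f}: {link.module} active")
        if link.done(state):
            leg += 1                  # hand off to the next module

plan = [
    Link("road_follower", lambda s: s["x"] >= 100.0),       # drive to turn-off
    Link("landmark_recognizer", lambda s: s["localized"]),  # fix map position
]
run_mission(plan, [
    {"x": 50.0, "localized": False},
    {"x": 100.0, "localized": False},
    {"x": 101.0, "localized": True},
    {"x": 101.5, "localized": True},
])
```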


International Conference on Robotics and Automation | 1999

Programming in the architecture for agile assembly

Jay Gowdy; Alfred A. Rizzi

The goal of the architecture for agile assembly (AAA) is to enable rapid deployment and reconfiguration of automated assembly systems through the use of cooperating, modular, robust robotic agents. AAA agent programs must be completely distributed and specify cooperative precision behavior in a structured, well-known environment. Thus, the structure of agent programs is carefully designed to allow packaging of all the information necessary for coordinated execution when downloaded to a physical agent. To make the specification and execution of the potentially complex and fragile cooperative behaviors robust, our programs define ordered sets of control strategies and allow a low-level real-time hybrid control system to sequence the strategies, rather than burdening the agent program with the management of this critical detail. This novel approach to programming automation systems has been tested both in simulation and on prototype hardware.
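
The "ordered set of control strategies, sequenced by a low-level hybrid controller" idea can be shown in one dimension: the executive always runs the most-preferred strategy whose domain contains the current state, so progress hands control over automatically. A toy sketch with invented names and dynamics, not the paper's controllers:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Strategy:
    """One control strategy together with its domain of applicability."""
    name: str
    applies: Callable[[float], bool]   # is the state inside this domain?
    step: Callable[[float], float]     # one control update of the state

def hybrid_executive(strategies, state, goal, tol=1e-3, max_steps=100):
    """Each cycle, run the highest-priority applicable strategy. The
    sequencing is automatic: progress moves the state into the domain
    of a more-preferred strategy, which then takes over."""
    for _ in range(max_steps):
        if abs(state - goal) < tol:
            return state
        active = next(s for s in strategies if s.applies(state))
        state = active.step(state)
    return state

# 1-D toy: "coarse approach" is valid everywhere; "fine positioning"
# only near the goal. The executive hands over without being told to.
goal = 10.0
strategies = [  # ordered from most- to least-preferred
    Strategy("fine", lambda x: abs(x - goal) < 1.0, lambda x: x + 0.5 * (goal - x)),
    Strategy("coarse", lambda x: True, lambda x: x + 0.9),
]
print(round(hybrid_executive(strategies, 0.0, goal), 3))  # converges near 10.0
```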


The International Journal of Robotics Research | 2000

Distributed Programming and Coordination for Agent-Based Modular Automation

Alfred A. Rizzi; Jay Gowdy; Ralph L. Hollis

A promising approach to enabling the rapid deployment and reconfiguration of automated assembly systems is to make use of cooperating, modular, robust robotic agents. Within such an environment, each robotic agent will execute its own program, while coordinating with peers to produce globally cooperative precision behavior. To simplify the problem of agent programming, the structure of those programs is carefully designed to enable the automatic encapsulation of information necessary for execution during distribution. Similarly, the programming model incorporates structures for the compact specification and robust execution of potentially complex and fragile cooperative behaviors. These behaviors utilize a run-time environment that includes tools to automatically sequence the activities of an agent. Taken together, these abstractions enable a programmer to compactly describe the high-level behavior of the agent while relying on a set of formally correct control strategies to properly execute and sequence the necessary continuous behaviors.

Collaboration


Dive into Jay Gowdy's collaborations.

Top Co-Authors

Charles E. Thorpe (Carnegie Mellon University)
Christoph Mertz (Carnegie Mellon University)
Ralph L. Hollis (Carnegie Mellon University)
Arne Suppé (Carnegie Mellon University)
David Duggins (Carnegie Mellon University)
Chieh-Chih Wang (National Taiwan University)
Aaron Steinfeld (Carnegie Mellon University)
Bob Wang (Carnegie Mellon University)