
Publication


Featured research published by Stephen F. Peters.


Intelligent Robots and Systems | 1996

Application of intelligent monitoring for super long distance teleoperation

Yujin Wakita; Shigeoki Hirai; Kazuo Machida; Kenji Ogimoto; Toshiyuki Itoko; Paul G. Backes; Stephen F. Peters

Time delay and limited communication capacity are the primary constraints in super-long-distance telerobotic systems such as space telerobotic systems. Intelligent monitoring addresses these constraints by selecting important scenes from a monitoring camera to assist the operator. We constructed a telerobotic testbed that includes a connection through the international ISDN and typical space structures (a space robot, a truss structure, and an ORU). We conducted trans-Pacific teleoperation experiments using this testbed, with ETL (the Electrotechnical Laboratory) as the remote site and a telerobotic console at JPL (the Jet Propulsion Laboratory in Pasadena, California) as the local site. Experimental results showed intelligent monitoring to be effective for the above problems.


International Conference on Robotics and Automation | 1996

Task lines and motion guides

Paul G. Backes; Stephen F. Peters; Linh Phan; Kam S. Tso

Two new approaches to telerobotics, task lines and motion guides, are described. Motion guides are a new paradigm for teleoperation in which the path, rather than the robot, is teleoperated, and the robot is constrained to follow the path. Continuous commands to the robot are only one-dimensional: forward, back, or halt along the motion guide. Task lines have subtasks attached to motion guides. Task lines and motion guides have been implemented in a virtual reality environment so that task description and execution can be done in a natural virtual reality graphics environment rather than via direct interaction with a command program. Subtasks are represented in the virtual reality environment by icons attached to the motion guides. The combination of task lines and motion guides is valuable for ground control of Space Station robots, the initial application for this technology.
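The one-dimensional command model described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the `MotionGuide` class and its method names are hypothetical, and the path is reduced to a list of waypoints for simplicity.

```python
# Minimal sketch of the motion-guide idea: the operator issues only
# one-dimensional commands (forward, back, halt) and the robot remains
# constrained to a predefined path. Class and method names are hypothetical.
class MotionGuide:
    def __init__(self, waypoints):
        self.waypoints = waypoints  # the teleoperated path
        self.index = 0              # robot's current position along the guide

    def command(self, cmd):
        # continuous commands are one-dimensional along the guide
        if cmd == "forward" and self.index < len(self.waypoints) - 1:
            self.index += 1
        elif cmd == "back" and self.index > 0:
            self.index -= 1
        # "halt" (or any other command) leaves the index unchanged
        return self.waypoints[self.index]
```

In this sketch, subtasks (the "task lines" of the title) could be attached as annotations on individual waypoints, mirroring the icons attached to motion guides in the virtual reality environment.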


Simulation, Modeling, and Programming for Autonomous Robots | 2008

A Lunar Surface Operations Simulator

Hari Nayar; Bob Balaram; Jonathan Cameron; Abhinandan Jain; Christopher Lim; Rudranarayan Mukherjee; Stephen F. Peters; Marc Pomerantz; Leonard Reder; Partha Shakkottai; Stephen D. Wall

The Lunar Surface Operations Simulator (LSOS) is being developed to support planning and design of space missions to return astronauts to the moon. Vehicles, habitats, dynamic and physical processes and related environment systems are modeled and simulated in LSOS to assist in the visualization and design optimization of systems for lunar surface operations. A parametric analysis tool and a data browser were also implemented to provide an intuitive interface to run multiple simulations and review their results. The simulator and parametric analysis capability are described in this paper.


Presence: Teleoperators & Virtual Environments | 1998

Task Lines and Motion Guides

Paul G. Backes; Stephen F. Peters; Linh Phan; Kam S. Tso

Two new approaches to telerobotics, task lines and motion guides, are described. Motion guides are a new paradigm for teleoperation in which the path, rather than the robot, is teleoperated, and the robot is constrained to follow the path. Continuous commands to the robot are only one-dimensional: forward, back, or halt along the motion guide. Task lines have subtasks attached to motion guides. Task lines and motion guides have been implemented in a virtual reality environment to enable task description and execution in a natural virtual reality graphics environment rather than via direct interaction with a command program. Subtasks are represented in the virtual reality environment by icons attached to the motion guides. The combination of task lines and motion guides is valuable for ground control of Space Station robots, the initial application for this technology.


Space Station Automation IV | 1988

Autonomy Through Interaction: The JPL Telerobot Interactive Planning System

Stephen F. Peters

A fully autonomous system with rich interaction with the world will not be realized in the near future. Systems with autonomous capability will require the use of knowledge external to themselves; even human beings frequently consult references and ask for advice. An interface between a partially autonomous system and external sources of knowledge enables the application of technology that is not yet fully autonomous. This is the strategy taken in the development of the Telerobot Interactive Planning System.


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Onboard centralized frame tree database for intelligent space operations of the Mars Science Laboratory Rover

Won S. Kim; Antonio Diaz-Calderon; Stephen F. Peters; Joseph Carsten; Chris Leger

Planetary surface science operations performed by robotic space systems frequently require pointing cameras at various objects and moving a robotic arm end effector tool toward specific targets. Earlier NASA Mars Exploration Rovers did not have the ability to compute actual coordinates for given object coordinate frame names and had to be provided with explicit coordinates. Since it sometimes takes hours to more than a day to get final approval of certain calculated coordinates for command uplink via the Earth-based mission operations procedures, a highly desired enhancement for future rovers was to have the onboard automated capability to compute the coordinates for a given frame name. The Mars Science Laboratory (MSL) rover mission is the first to have a centralized coordinate transform database to maintain the knowledge of spatial relations. This onboard intelligence significantly simplifies communication and control between Earth-based human mission operators and the robotic rover on Mars by supporting higher level abstraction of commands using object and target names instead of coordinates. More specifically, the spatial relations of many object frames are represented hierarchically in a tree data structure, called the frame tree. Individual frame transforms are populated by their respective modules that have specific knowledge of the frames. Through this onboard centralized frame tree database, client modules can query transforms between any two frames and support spacecraft commands that use any frames maintained in the frame tree. Various operational examples in the MSL mission that have greatly benefitted from this onboard centralized frame tree database are presented.
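The core idea of the abstract, a hierarchical tree of coordinate frames through which a transform between any two frames can be queried, can be sketched as follows. This is a simplified illustration, not the MSL flight software: the `FrameTree` class, its methods, and the frame names in the usage note are all hypothetical.

```python
# Sketch of a centralized frame tree database. Each frame stores the 4x4
# homogeneous transform relating it to its parent; a query between any two
# frames chains transforms up to the shared root. Names are hypothetical.
import numpy as np

class FrameTree:
    def __init__(self):
        # frame name -> (parent name, 4x4 transform parent <- frame)
        self.frames = {"ROOT": (None, np.eye(4))}

    def add(self, name, parent, transform):
        self.frames[name] = (parent, np.asarray(transform, dtype=float))

    def update(self, name, transform):
        # individual transforms are populated by the modules that own them
        parent, _ = self.frames[name]
        self.frames[name] = (parent, np.asarray(transform, dtype=float))

    def _to_root(self, name):
        # accumulate the transform ROOT <- name by chaining up the tree
        t = np.eye(4)
        while name is not None:
            parent, x = self.frames[name]
            t = x @ t
            name = parent
        return t

    def query(self, src, dst):
        # transform dst <- src between any two frames in the tree
        return np.linalg.inv(self._to_root(dst)) @ self._to_root(src)
```

With a hypothetical site frame 10 m from the root and a rover frame 2 m from the site, `query("ROVER", "ROOT")` would return a translation of 12 m, so commands can name frames instead of carrying explicit coordinates.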


International Conference on System of Systems Engineering | 2013

Mars science laboratory frame manager for centralized frame tree database and target pointing

Won S. Kim; Chris Leger; Stephen F. Peters; Joseph Carsten; Antonio Diaz-Calderon

The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Each frame transform is updated by its owner module. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients, including ARM and RSM (Remote Sensing Mast), update the rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of the frame tree eliminates cumbersome, error-prone manual calculation of coordinate entries for commands and thus significantly simplifies flight operations.
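The ownership and site-indexing scheme in the abstract can be sketched briefly. This is an illustration only, with a hypothetical `FrameManager` class and invented frame names; the actual flight module is not shown here.

```python
# Sketch of FM-style bookkeeping: FM creates site frames with incrementing
# indices as the rover drives, while client modules (e.g. the arm or the
# Remote Sensing Mast) update only the frames they own. Names hypothetical.
class FrameManager:
    def __init__(self):
        self.transforms = {}  # frame name -> transform, written by its owner
        self.owners = {}      # frame name -> owning module
        self.site_index = 0

    def new_site(self, transform):
        # create SITE_<n+1> when the rover enters a new area; FM owns it
        self.site_index += 1
        name = f"SITE_{self.site_index}"
        self.transforms[name] = transform
        self.owners[name] = "FM"
        return name

    def update(self, module, frame, transform):
        # only the owner module may update a frame it registered
        owner = self.owners.setdefault(frame, module)
        if owner != module:
            raise PermissionError(f"{frame} is owned by {owner}")
        self.transforms[frame] = transform
```

The ownership check reflects the abstract's point that "actual frame transforms are updated by their owner"; centralizing the database is what lets any client query the relation between frames it does not own.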


Telematics and Informatics | 1990

The JPL/KSC telerobotic inspection demonstration

David S. Mittman; Bruce Bon; John Brogdon; Carol E. Collins; Gerry Fleischer; Bob Humeniuk; Alex Ladd; Jose Lago; Todd Litwin; Jack Morrison; Jacquie S. O'Meara; Stephen F. Peters; Mike Sklar; James Spencer; Dan Wegerif

An ASEA IRB90 robotic manipulator with attached inspection cameras was moved through a Space Shuttle Payload Assist Module (PAM) Cradle under computer control. The Operator and Operator Control Station, including graphics simulation, gross-motion spatial planning, and machine vision processing, were located at the Jet Propulsion Laboratory (JPL) in California. The Safety and Support personnel, PAM Cradle, IRB90, and image acquisition system were stationed at the Kennedy Space Center (KSC) in Florida. Images captured at KSC were used both for processing by a machine vision system at JPL and for inspection by the JPL Operator. The system found collision-free paths through the PAM Cradle, demonstrated accurate knowledge of the locations of obstacles and objects of interest, and operated with a communication delay of two seconds. Safe operation of the IRB90 near Shuttle flight hardware was achieved through the use of a gross-motion spatial planner developed at JPL using artificial intelligence techniques, together with infrared beams and pressure-sensitive strips mounted on the critical surfaces of the flight hardware at KSC. The Demonstration showed that ground/remote telerobotics is effective for real tasks, safe for personnel and hardware, and highly productive and reliable for Shuttle payload operations and Space Station external operations.


Archive | 2003

Rover Technology Development and Infusion for the 2009 Mars Science Laboratory Mission

Richard Volpe; Stephen F. Peters


International Joint Conference on Artificial Intelligence | 1991

Planning robot control parameter values with qualitative reasoning

Stephen F. Peters; Shigeoki Hirai; Toru Omata; Tomomasa Sato

Collaboration


Top co-authors of Stephen F. Peters (all at the California Institute of Technology):

Paul G. Backes
Carol E. Collins
David S. Mittman
Kam S. Tso
Linh Phan
Chris Leger
Jacquie S. O'Meara
Joseph Carsten
Won S. Kim