Publication


Featured research published by Thomas C. Bryan.


Defense and Security | 2004

Advanced Video Guidance Sensor (AVGS) Development Testing

Richard T. Howard; Albert S. Johnston; Thomas C. Bryan; Michael L. Book

NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor, an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image-capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing of various sorts at every phase of development. Some of the test efforts included optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long-range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters in both vacuum and ambient conditions, to survive and operate during the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.
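The abstract describes the sensor converting camera video of illuminated retro-reflectors into relative position and attitude. One elementary step in any such pipeline is turning a detected spot centroid in the image into a line-of-sight bearing. The sketch below illustrates that step with a simple pinhole camera model; the principal point and focal length used here are hypothetical, not actual AVGS calibration values.

```python
import math

def centroid_to_bearing(u, v, cx, cy, focal_px):
    """Convert a spot centroid (u, v) in pixels to azimuth/elevation
    angles in radians using a pinhole camera model.

    cx, cy   : principal point in pixels (illustrative values)
    focal_px : focal length expressed in pixels (illustrative value)
    """
    azimuth = math.atan2(u - cx, focal_px)
    elevation = math.atan2(cy - v, focal_px)  # image y-axis grows downward
    return azimuth, elevation

# A spot imaged exactly at the principal point lies on the boresight.
az, el = centroid_to_bearing(320.0, 240.0, 320.0, 240.0, 800.0)  # (0.0, 0.0)
```

With several such bearings to reflectors at known positions on the target, the relative position and attitude can be solved; the actual AVGS performs this in dedicated image-capture electronics and a DSP.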


Laser Radar Technology and Applications II | 1997

Active sensor system for automatic rendezvous and docking

Richard T. Howard; Thomas C. Bryan; Michael L. Book; John Larkin Jackson

NASA's Marshall Space Flight Center has developed an active sensor system, the video guidance sensor (VGS), to provide near-range relative position and attitude data. The VGS will be part of an automatic rendezvous and docking system. The VGS determines the relative positions and attitudes between the active sensor and the passive target. It works by using laser diodes to illuminate the retro-reflectors in the target, a solid-state camera to detect the return from the target retro-reflectors, and a frame grabber and digital signal processor to convert the video information into relative positions and attitudes. The current sensor design is the result of several years of development and testing, and it is being built to fly as an experiment payload on the space shuttle. The VGS system is designed to operate with the target completely visible within a relative azimuth of +/- 10.5 degrees and a relative elevation of +/- 8 degrees. The system will acquire and track a target within that field-of-view anywhere from 1.0 meter to 110 meters range, at any relative roll angle and at relative pitch and yaw attitudes of up to +/- 10 degrees. The data is output from the sensor at 5 Hz, and the target and sensor software have been designed to permit two independent sensors to operate simultaneously for redundancy.
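The acquisition envelope quoted in the abstract (azimuth, elevation, range, and relative attitude limits) can be expressed as a simple predicate. The following sketch encodes exactly the numbers stated above; the function name and interface are illustrative, not part of the VGS flight software.

```python
def within_acquisition_envelope(azimuth_deg, elevation_deg, range_m,
                                pitch_deg, yaw_deg):
    """Check whether a target lies inside the VGS acquisition envelope
    described in the abstract: +/-10.5 deg azimuth, +/-8 deg elevation,
    1.0 to 110 m range, +/-10 deg relative pitch and yaw, any roll."""
    return (abs(azimuth_deg) <= 10.5
            and abs(elevation_deg) <= 8.0
            and 1.0 <= range_m <= 110.0
            and abs(pitch_deg) <= 10.0
            and abs(yaw_deg) <= 10.0)

# An on-boresight target at 50 m is inside the envelope.
assert within_acquisition_envelope(0.0, 0.0, 50.0, 0.0, 0.0)
# A target at 150 m is beyond the 110 m tracking limit.
assert not within_acquisition_envelope(0.0, 0.0, 150.0, 0.0, 0.0)
```

Note that roll does not appear in the check: per the abstract, the sensor acquires at any relative roll angle.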


Proceedings of SPIE, the International Society for Optical Engineering | 2007

DART AVGS flight results

Richard T. Howard; Thomas C. Bryan

The Advanced Video Guidance Sensor (AVGS) was designed to be the proximity operations sensor for the Demonstration of Autonomous Rendezvous Technologies (DART). The DART mission flew in April of 2005 and was a partial success. The AVGS did not get the opportunity to operate in every mode in orbit, but those modes in which it did operate were completely successful. This paper will detail the development, testing, and on-orbit performance of the AVGS.


Laser radar technology and applications. Conference | 1999

On-orbit testing of the video guidance sensor

Richard T. Howard; Thomas C. Bryan; Michael L. Book

The Video Guidance Sensor (VGS), part of NASA's Automated Rendezvous and Capture program, was flown on Shuttle mission STS-95 in October of 1998 to test on orbit the functional characteristics of the VGS. This was the second flight of the VGS (the first flight was in 1997 on STS-87), and this time long-range tracking data was gathered during the experiment. The flight experiment sensor was designed to operate from 1.5 meters range out to 110 meters range, with a field-of-view of 16 by 21 degrees. The VGS tracked its target at a 5 Hz rate and returned 6-degree-of-freedom information on the target's position and attitude relative to the sensor. The VGS was mounted in the Shuttle cargo bay, and its target was mounted on the Spartan spacecraft being carried on this mission. The orbital testing of the VGS included operations with the target on the Shuttle's Remote Manipulator System (RMS) at the start of the 10-day mission, long-range data during the Shuttle rendezvous with the Spartan two days later, and additional RMS operations later in the mission. The data returned from the orbital testing included VGS diagnostics, acquisition, and tracking data, RMS positions, hand-held laser range data, tapes of the data from the VGS video camera, and orbital positioning data from the Spartan and the Shuttle to allow correlation of the VGS data with orbital best-estimate-of-truth data. The Video Guidance Sensor performed well in all phases of the testing, and the VGS is being incorporated into the ground testing of a complete automated rendezvous and docking system. Work on the development of the next-generation VGS is continuing.


Proceedings of SPIE | 1998

Video guidance sensor flight experiment results

Richard T. Howard; Thomas C. Bryan; Michael L. Book

An active video sensor system for determining target range and attitude was flown on STS-87. The Video Guidance Sensor (VGS), developed at NASA's Marshall Space Flight Center, demonstrated its capability in space and collected performance data. The VGS was designed to provide near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target. The VGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and a frame grabber and digital signal processor to convert the video information into the relative positions and attitudes. The system is designed to operate with the target within a relative azimuth of +/- 9.5 degrees and a relative elevation of +/- 7.5 degrees. The system will acquire and track the target within the defined field-of-view between 1.5 meters and 110 meters range, and the VGS is designed to acquire at relative attitudes of +/- 10 degrees in pitch and yaw and at any roll angle. The sensor outputs the data at 5 Hz, and the target and sensor software and hardware have been designed to permit two independent sensors to operate simultaneously, allowing for redundant sensors. The data from the flight experiment includes raw video data from the VGS camera, relative position and attitude measurements from the VGS, solar angle data, and Remote Manipulator System position data to correlate with the VGS data. The experiment was quite successful and returned significant verification of the sensor's capabilities. The experience gained from the design and flight of this experiment will lead to improved video sensors in the future.


SPACE TECHNOLOGY AND APPLICATIONS INTERNATIONAL FORUM‐STAIF 2008: 12th Conference on Thermophysics Applications in Microgravity; 1st Symposium on Space Resource Utilization; 25th Symposium on Space Nuclear Power and Propulsion; 6th Conference on Human/Robotic Technology and the Vision for Space Exploration; 6th Symposium on Space Colonization; 5th Symposium on New Frontiers and Future Concept | 2008

The Advanced Video Guidance Sensor: Orbital Express and the Next Generation

Richard T. Howard; Andrew Heaton; Robin M. Pinson; Connie L. Carrington; James E. Lee; Thomas C. Bryan; Bryan Robertson; Susan H. Spencer; Jimmie E. Johnson

The Orbital Express (OE) mission performed the first autonomous rendezvous and docking in the history of the United States on May 5-6, 2007, with the Advanced Video Guidance Sensor (AVGS) acting as one of the primary docking sensors. Since that event, the OE spacecraft performed four more rendezvous and docking maneuvers, each time using the AVGS as one of the docking sensors. The Marshall Space Flight Center's (MSFC's) AVGS is a near-field proximity operations sensor that was integrated into the Autonomous Rendezvous and Capture Sensor System (ARCSS) on OE. The ARCSS provided the relative state knowledge to allow the OE spacecraft to rendezvous and dock. The AVGS is a mature sensor technology designed to support Automated Rendezvous and Docking (AR&D) operations. It is a video-based, laser-illuminated sensor that can determine the relative position and attitude between itself and its target. Due to parts obsolescence, the AVGS that was flown on OE can no longer be manufactured. MSFC has been working on the next generation of AVGS for application to future Constellation missions. This paper provides an overview of the performance of the AVGS on Orbital Express and discusses the work on the Next Generation AVGS (NGAVGS).


ieee aerospace conference | 2008

Next Generation Advanced Video Guidance Sensor

Thomas C. Bryan; Richard T. Howard; Jimmie E. Johnson; James E. Lee; Lucinda Murphy; Susan H. Spencer

The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express (OE) in May of 2007, using the advanced video guidance sensor (AVGS) as the primary docking sensor. The United States now has a mature and flight-proven sensor technology for supporting crew exploration vehicles (CEV) and commercial orbital transportation services (COTS) automated rendezvous and docking (AR&D).


document analysis systems | 2001

An advanced sensor for automated docking

Richard T. Howard; Thomas C. Bryan; M.L. Book

This paper describes the current developments in video-based sensors at the Marshall Space Flight Center. The Advanced Video Guidance Sensor is the latest in a line of video-based sensors designed for use in automated docking systems. The X-33, X-34, X-38, and X-40 are all spacecraft designed to be unpiloted vehicles; such vehicles will require a sensor system that will provide adequate data for the vehicle to accomplish its mission. One of the primary tasks planned for re-usable launch vehicles is to resupply the space station. In order to approach the space station in a self-guided manner, the vehicle must have a reliable and accurate sensor system to provide relative position and attitude information between the vehicle and the space station. The Advanced Video Guidance Sensor is being designed and built to meet this requirement, particularly for the Demonstration of Autonomous Rendezvous Technology (DART), as well as requirements for other vehicles docking to a variety of target spacecraft.


Atmospheric propagation, adaptive systems, and laser radar technology for remote sensing. Conference | 2001

Video-based sensor for tracking 3-dimensional targets

Richard T. Howard; Michael L. Book; Thomas C. Bryan

The National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has been developing and testing video-based sensors for automated spacecraft guidance for several years. The video sensor currently under development is to have a tracking rate of 50 Hz while delivering full 3-dimensional relative information (X, Y, Z, pitch, yaw, and roll). Prior systems have been developed and tested in both open-loop and closed-loop simulations. The prototype Video Guidance Sensor (VGS) was built for a flight experiment and performed well on two separate Space Shuttle flights. The VGS uses two wavelengths of light to illuminate a target that has a pattern of filtered retro-reflectors. The filters pass only one wavelength of light and absorb the other. Two fast, successive pictures are taken of the target, each picture illuminated by a different wavelength. When the background picture is subtracted from the foreground, a high signal-to-noise ratio is achieved, and the target spots are easy to track. The next-generation VGS will use a CMOS imaging chip for higher-speed target tracking and a Texas Instruments DSP for higher-speed image processing. The system is being designed to have lower weight and power requirements than the previous generation, and it will be suitable for other applications.
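The two-wavelength scheme described above reduces to a frame subtraction followed by spot localization: the ambient scene appears in both frames and cancels, while the filtered retro-reflector return survives only in the foreground frame. A minimal sketch of those two steps, using plain nested lists as grayscale frames (the real VGS does this in frame-grabber hardware and a DSP), might look like this:

```python
def background_subtract(foreground, background):
    """Pixel-wise difference of two equal-size grayscale frames
    (lists of rows); negative residue is clamped to zero."""
    return [[max(f - b, 0) for f, b in zip(fr, br)]
            for fr, br in zip(foreground, background)]

def spot_centroid(image):
    """Intensity-weighted centroid (x, y) of a difference image --
    a stand-in for the per-spot tracking the VGS performs.
    Returns None when no residual intensity is present."""
    total = wx = wy = 0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            total += val
            wx += x * val
            wy += y * val
    return (wx / total, wy / total) if total else None

# The retro-reflector spot is lit only in the foreground frame;
# the uniform ambient background cancels in the subtraction.
fg = [[10, 10, 10],
      [10, 90, 10],
      [10, 10, 10]]
bg = [[10, 10, 10],
      [10, 10, 10],
      [10, 10, 10]]
diff = background_subtract(fg, bg)
centroid = spot_centroid(diff)  # (1.0, 1.0): only the center pixel survives
```

Because everything that is identical across the two frames cancels exactly, the signal-to-noise ratio of the surviving spots is high, which is what makes the tracking step robust.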


ieee aerospace conference | 2009

Proximity operations and docking sensor development

Richard T. Howard; Thomas C. Bryan; Linda L. Brewster; James E. Lee

The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been under development for the last three years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight-proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transportation Services (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998.

Collaboration


Thomas C. Bryan's top co-authors, all at Marshall Space Flight Center:

Richard T. Howard
Fred D. Roe
James E. Lee
Albert S. Johnston
Jimmie E. Johnson
Bryan Robertson
Susan H. Spencer
Andrew Heaton
Linda L. Brewster
Lucinda Murphy