
Publication


Featured research published by Theodore T. Blackmon.


Journal of Electronic Imaging | 2001

Representation of human vision in the brain: How does human perception recognize images?

Lawrence Stark; Claudio M. Privitera; Huiyang Yang; Michela Azzariti; Yeuk Fai Ho; Theodore T. Blackmon; Dimitri A. Chernyak

The repetitive scanpath eye movement (EM) sequence enabled an approach to the representation of visual images in the human brain. We supposed that there were several levels of binding: semantic or symbolic binding; structural binding for the spatial locations of the regions-of-interest; and sequential binding for the dynamic execution program that yields the sequence of EMs. The scanpath sequences enable experimental evaluation of these various bindings, which appear to play independent roles and are likely located in different parts of the modular cortex. EMs play an essential role in top-down control of the flow of visual information. The scanpath theory proposes that an internal spatial-cognitive model controls perception and the active looking EMs. Evidence supporting the scanpath theory includes experiments with ambiguous figures, visual imagery, and dynamic scenes. It is further explicated in a top-down computer vision tracking scheme for telerobots using design elements from the scanpath procedures. We also introduce procedures for studying scanpaths: calibration of EMs, identification of regions-of-interest, and analysis and comparison programs. Although philosophers have long speculated that we see in our mind's eye, until the scanpath theory no strong scientific evidence was available to support these conjectures.


Presence: Teleoperators & Virtual Environments | 1996

Model-based supervisory control in telerobotics

Theodore T. Blackmon; Lawrence Stark

Model-based approaches can be used to confront several of the challenging performance issues in teleoperation. This paper describes a model-based supervisory control technique for telerobotics. A human-machine interface (HMI) was developed for online, interactive task segmentation and planning utilizing a world model of the telerobotic working environment (TRWE). The task model is transferred intermittently over a low bandwidth communication channel for interpretation, planning, and execution of the task segments through the autonomous control capabilities of a telerobot. For the purposes of outlining tasks, a human operator controls a simulation model to generate a task sequence script as a sequential list of desired sub-goals for a telerobot. A graphic user interface (GUI) facilitates the development of the task sequence script with viewing perspectives of the graphic display automatically selected as a function of the operational state and model parameters. Also, because the human operator is specifying discrete model set-points of the TRWE, and allowing the autonomous control capabilities of the telerobot to coordinate the actual trajectory between set-points, a provision is made to preview the proposed trajectory for approval or modification before execution. Preliminary results with a manipulator arm remotely controlled via the Internet demonstrate the utility of the model-based supervisory control technique.


Presence: Teleoperators & Virtual Environments | 1998

Evaluation of the Effects of a Head-mounted Display on Ocular Accommodation

Charles F. Neveu; Theodore T. Blackmon; Lawrence Stark

We evaluated a commercially produced head-mounted display (HMD) to determine its short-term effects on human ocular accommodation. Thirteen subjects (seven men and six women, ranging from 13 to 44 years old) were tested for changes in a number of parameters before and after viewing a full-length movie (approximately two hours) on an HMD. As a control, subjects were also tested before and after viewing a movie on a high-quality NTSC color television, and also before and after a one-hour intermission. Accommodation dynamics and range were measured. Data showed well-known trends due to subject age. Only one statistically significant change was found: a slight increase in the latency of accommodation relaxation after HMD viewing.


international conference on advanced robotics | 1997

Human hand trajectory analysis in point-and-direct telerobotics

Theodore T. Blackmon; Murat Cenk Cavusoglu; Fuji Lai; Lawrence Stark

Human hand trajectories have been recorded and analyzed in an experimental supervisory control interface for a telerobot manipulator. Using a six degree-of-freedom tracking device to control the gripper of a computer graphics simulation model of the telerobot, human operators were instructed to command the telerobot under a variety of visual conditions. Analysis of the human hand trajectories shows considerable distortion and adaptation effects in the virtual environments. Also, the nature of the 3D hand trajectories for the reaching task lends support for a sampled-data model of human neurological control; this has design implications for telerobotic interfaces.


ieee virtual reality conference | 1999

An operator interface for a robot-mounted, 3D camera system: Project Pioneer

Fitzgerald Steele; Geb W. Thomas; Theodore T. Blackmon

The purpose of Project Pioneer is to develop an exploratory robot capable of creating a three-dimensional photo-realistic map of the inside of the damaged Chernobyl nuclear reactor, measuring the environmental and radioactive conditions, and collecting samples of concrete from the physical structure to determine its mechanical stability. This paper describes the virtual reality interface for Pioneer's three-dimensional mapping system. This interface addresses a wide variety of technical challenges, including several that are unique to the hostile Chernobyl environment.


human vision and electronic imaging conference | 1999

Dynamic scanpaths: eye movement analysis methods

Theodore T. Blackmon; Yeuk Fai Ho; Dimitri A. Chernyak; Michela Azzariti; Lawrence Stark

An eye movement sequence, or scanpath, during viewing of a stationary stimulus has been described as a set of fixations onto regions-of-interest, ROIs, and the saccades or transitions between them. Such scanpaths have high similarity for the same subject and stimulus, both in the spatial loci of the ROIs and in their sequence; scanpaths also take place during recollection of a previously viewed stimulus, suggesting that they play a similar role in visual memory and recall.
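One common way to quantify the sequence similarity described above is to label each ROI with a symbol, write a scanpath as a string of ROI labels, and compare strings by edit distance. The sketch below is illustrative only (the abstract does not give the exact metric used); `scanpath_similarity` is a hypothetical name, and the normalization by the longer string length is one conventional choice.

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic program: minimum number of
    # insertions, deletions, and substitutions turning a into b.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def scanpath_similarity(s1, s2):
    """Similarity in [0, 1]; 1.0 means identical ROI sequences."""
    if not s1 and not s2:
        return 1.0
    return 1.0 - edit_distance(s1, s2) / max(len(s1), len(s2))
```

For example, two viewings producing ROI sequences "ABCAB" and "ABDAB" differ by one substitution out of five symbols, giving a similarity of 0.8.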


international conference of the ieee engineering in medicine and biology society | 1997

Eye movements while viewing dynamic and static stimuli

Theodore T. Blackmon; Yeuk Fai Ho; K. Matsunaga; T. Yanagida; Lawrence Stark

Utilizing video camera eye tracking technology, an experimental system has been developed to track the eye of a person viewing a head-mounted display (HMD). The HMD and eye tracking system is well suited for investigations of human scanpaths and visual search strategies in virtual environments. Calibration tests demonstrate the robustness of measurements to head motion and stimulus duration. Preliminary results compare eye movements while viewing the motion of a simulated telerobot manipulator vs. eye movements while viewing a static snapshot of the same scene. Differences in the neurological coordination of eye movements while viewing dynamic vs. static stimuli have motivated the development of an automatic parsing algorithm for classifying the eye movements into phases of saccades, smooth pursuit, and fixations.
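A minimal sketch of such a parsing step, assuming a velocity-threshold scheme (the abstract does not specify the algorithm, and the threshold values and function name below are illustrative, not taken from the paper): each inter-sample interval is labeled by comparing its angular velocity against a high saccade threshold and a lower pursuit threshold.

```python
def classify_eye_samples(positions_deg, dt_s,
                         sacc_thresh=100.0, pursuit_thresh=5.0):
    """Label each inter-sample interval as 'saccade', 'pursuit', or
    'fixation' from its angular velocity in deg/s. Thresholds are
    illustrative placeholders, not values from the paper."""
    labels = []
    for p0, p1 in zip(positions_deg, positions_deg[1:]):
        v = abs(p1 - p0) / dt_s  # angular velocity over this interval
        if v >= sacc_thresh:
            labels.append("saccade")
        elif v >= pursuit_thresh:
            labels.append("pursuit")
        else:
            labels.append("fixation")
    return labels
```

For a 100 Hz trace (dt_s = 0.01) with gaze positions [0.0, 0.01, 2.5, 2.52] degrees, the middle interval (249 deg/s) is labeled a saccade and the surrounding low-velocity intervals are labeled fixations.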


electronic imaging | 1997

Video engraving for virtual environments

Geb W. Thomas; Theodore T. Blackmon; Michael H. Sims; Daryl Rassmussen

Some applications require a user to consider both geometric and image information. Consider, for example, an interface that presents both a three-dimensional model of an object, built from a CAD model or laser-range data, and an image of the same object, gathered from a surveillance camera or a carefully calibrated photograph. The easiest way to provide these information sets to a user is in separate, side-by-side displays. A more effective alternative combines both types of information in a single, integrated display by projecting the image onto the model. A perspective transformation that assigns image coordinates to model vertices can visually engrave the image onto corresponding surfaces of the model. Combining the image and geometric information in this manner provides several advantages. It allows an operator to visually confirm the accuracy of the modeling geometry and also provides realistic textures for the geometric model. We discuss several of our procedural methods to implement the integrated displays and discuss the benefits gained from applying these techniques to projects including robotic hazardous waste remediation, the virtual exploration of Mars, and remote mobile robot control.
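The core of the engraving step described above is the perspective transformation that maps each 3D model vertex to an image coordinate, which then serves as that vertex's texture coordinate. A minimal sketch, assuming a calibrated 3x4 camera projection matrix P (the matrix values and function name here are illustrative, not from the paper):

```python
def project_vertices(P, vertices):
    """Map 3D vertices (x, y, z) to image coordinates (u, v) with a
    3x4 camera matrix P, via homogeneous projection and perspective
    divide. P is assumed to come from camera calibration."""
    uvs = []
    for x, y, z in vertices:
        u_h = P[0][0]*x + P[0][1]*y + P[0][2]*z + P[0][3]
        v_h = P[1][0]*x + P[1][1]*y + P[1][2]*z + P[1][3]
        w   = P[2][0]*x + P[2][1]*y + P[2][2]*z + P[2][3]
        uvs.append((u_h / w, v_h / w))  # perspective divide
    return uvs
```

With a simple pinhole matrix (focal length 500 px, principal point at (320, 320)), a vertex on the optical axis at depth 2 projects to the image center (320, 320), and the resulting (u, v) pairs can be handed to the renderer as texture coordinates for the model's surfaces.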


human vision and electronic imaging conference | 2000

Segmentation of stereo terrain images

Debra A. George; Claudio M. Privitera; Theodore T. Blackmon; Eric Zbinden; Lawrence Stark

We have studied four approaches to segmentation of images: three automatic approaches using image processing algorithms, and a fourth, human manual segmentation. We were motivated by an important NASA Mars rover mission task: replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. Automatic segmentation was first explored with two different methods: one based on pixel luminance, the other based on pixel altitude generated through stereo image processing. The third automatic segmentation combined these two types of image segmentation. Human manual segmentation of Martian terrain images was used to evaluate the effectiveness of the combined automatic segmentation and to determine how different humans segment the same images. Comparisons between two segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation agreed fairly well with the manual segmentation, a positive step toward automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
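The abstract does not state how the luminance- and altitude-based segmentations were fused, so the sketch below shows one conservative option for combining two binary obstacle masks: a pixelwise OR, which flags a cell as an obstacle if either cue does. The function name and fusion rule are assumptions for illustration only.

```python
def combine_obstacle_maps(lum_mask, alt_mask):
    """Fuse luminance- and altitude-based binary obstacle masks
    (lists of rows of 0/1) with a pixelwise OR. This is one plausible
    rule, not necessarily the one used in the paper."""
    return [[1 if (a or b) else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(lum_mask, alt_mask)]
```

A path planner would then treat any 1-cell in the fused map as impassable terrain.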


Presence: Teleoperators & Virtual Environments | 1996

The effects of pictorial realism, delay of visual feedback, and observer interactivity on the subjective sense of presence

Robert B. Welch; Theodore T. Blackmon; Andrew Liu; Barbara A. Mellers; Lawrence Stark

Collaboration

Top co-authors of Theodore T. Blackmon:

Lawrence Stark (University of California)
James Teza (Carnegie Mellon University)
Mark W. Maimone (Carnegie Mellon University)
Martial Hebert (Carnegie Mellon University)