Barbara T. Sweet
Ames Research Center
Publications
Featured research published by Barbara T. Sweet.
Quarterly Journal of Experimental Psychology | 2003
Patricia R. DeLucia; Mary K. Kaiser; Jason M. Bush; Les E. Meyer; Barbara T. Sweet
Time to contact (TTC) is specified optically by tau, and studies suggest that observers are sensitive to this information. However, TTC judgements also are influenced by other sources of information, including pictorial depth cues. Therefore, it is useful to identify these sources of information and to determine whether and how their effects combine when multiple sources are available. We evaluated the effect of five depth cues on TTC judgements. Results indicate that relative size, height in field, occlusion, and motion parallax influence TTC judgements. When multiple cues are available, an integration (rather than selection) strategy is used. Finally, the combined effects of multiple cues are not always consistent with a strict additive model and may be task dependent.
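As an illustrative sketch (not from the paper), the optical variable tau is the ratio of an object's visual angle to its rate of expansion; for a constant-velocity approach it approximates the true time to contact, distance over closing speed. The object size, distance, and speed below are arbitrary example values.

```python
import math

def tau_from_optics(theta: float, theta_dot: float) -> float:
    """Optical time-to-contact estimate: tau = theta / (d theta / dt)."""
    return theta / theta_dot

# Constant-velocity approach: object of size s at distance d, closing speed v.
s, d, v = 1.0, 50.0, 10.0           # metres, metres, metres/second
theta = 2 * math.atan(s / (2 * d))  # visual angle subtended by the object
# Rate of expansion for a constant-velocity approach:
theta_dot = s * v / (d ** 2 + s ** 2 / 4)
print(tau_from_optics(theta, theta_dot))  # close to the true TTC, d/v = 5 s
```

Note that tau requires no knowledge of the object's size or distance individually, which is why other depth cues (relative size, occlusion, height in field) can be studied as separate, combinable sources of TTC information.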
Journal of Vision | 2006
Li Li; Barbara T. Sweet; Leland S. Stone
It has previously been reported that humans can determine their direction of 3D translation (heading) from the 2D velocity field of retinal motion experienced during self-motion through a rigid environment, as is done by current computational models of visual heading estimation from optic flow. However, these claims were supported by studies that used stimuli that contained low rotational flow rates and/or additional visual cues beyond the velocity field or a task in which observers were asked to indicate their future trajectory of self-motion (path). Thus, previous conclusions about heading estimation have been confounded by the presence of other visual factors beyond the velocity field, by the use of a path-estimation task, or both. In particular, path estimation involves an exocentric computation with respect to an environmental reference, whereas heading estimation is an egocentric computation with respect to one's line of sight. Here, we use a heading-adjustment task to demonstrate that humans can precisely estimate their heading from the velocity field, independent of visual information about path, displacement, layout, or acceleration, with accuracy robust to rotation rates at least as high as 20 deg/s. Our findings show that instantaneous velocity-field information about heading is directly available for the visual control of locomotion and steering.
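A minimal sketch of the geometric idea behind heading from the velocity field: for pure translation through a rigid scene, every flow vector points away from the focus of expansion (FOE), whose image position is the heading direction. The least-squares recovery below is an illustration of that principle, not the estimation procedure used in the study; the translation components and scene depths are hypothetical.

```python
import random

def estimate_heading_foe(points):
    """Least-squares focus-of-expansion from a translational velocity field.

    Each element of `points` is (x, y, u, v): image position and image flow.
    For pure translation every flow vector is parallel to (x - fx, y - fy),
    so the FOE (fx, fy) satisfies u*(fy - y) - v*(fx - x) = 0 at each point.
    """
    # Normal equations for rows a = (-v, u), right-hand side r = u*y - v*x.
    S11 = S12 = S22 = b1 = b2 = 0.0
    for x, y, u, v in points:
        a1, a2, r = -v, u, u * y - v * x
        S11 += a1 * a1; S12 += a1 * a2; S22 += a2 * a2
        b1 += a1 * r; b2 += a2 * r
    det = S11 * S22 - S12 * S12
    fx = (S22 * b1 - S12 * b2) / det
    fy = (S11 * b2 - S12 * b1) / det
    return fx, fy

# Simulate an observer translating with heading (Tx/Tz, Ty/Tz) = (0.1, -0.05)
Tx, Ty, Tz = 0.1, -0.05, 1.0
random.seed(0)
flow = []
for _ in range(50):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    Z = random.uniform(2, 10)                    # depth in a rigid environment
    u, v = (x * Tz - Tx) / Z, (y * Tz - Ty) / Z  # translational optic flow
    flow.append((x, y, u, v))
print(estimate_heading_foe(flow))  # recovers (0.1, -0.05)
```

Adding observer rotation displaces the flow field so the FOE no longer marks the heading, which is why robustness to rotation rates up to 20 deg/s is the nontrivial finding here.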
AIAA Modeling and Simulation Technologies Conference | 2011
Peter M.T. Zaal; Barbara T. Sweet
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets, and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
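The windowing idea can be sketched with a toy example: estimate a time-varying parameter by fitting the model repeatedly over a short sliding window of data. The single-gain model, signals, and window length below are hypothetical stand-ins for the multi-parameter pilot models and maximum likelihood fitting used in the paper; ordinary least squares is used for simplicity.

```python
import math

def windowed_gain_estimate(x, y, window):
    """Sliding-window least-squares estimate of a time-varying gain K(t)
    in y(t) = K(t) * x(t) -- a toy stand-in for windowed maximum likelihood."""
    half = window // 2
    est = []
    for t in range(len(x)):
        lo, hi = max(0, t - half), min(len(x), t + half + 1)
        num = sum(xi * yi for xi, yi in zip(x[lo:hi], y[lo:hi]))
        den = sum(xi * xi for xi in x[lo:hi])
        est.append(num / den)
    return est

# Hypothetical pilot gain ramping from 1.0 to 2.0 over the run
N = 1000
x = [math.sin(0.07 * t) + 0.5 * math.sin(0.31 * t) for t in range(N)]
K = [1.0 + t / (N - 1) for t in range(N)]
y = [k * xi for k, xi in zip(K, x)]
est = windowed_gain_estimate(x, y, window=101)
print(est[500])  # tracks the true K(500) = 1.5
```

The window length is the trade-off the abstract describes: long windows average out remnant noise but blur fast parameter changes, which is why the maximum likelihood approach misses rapid adaptation.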
Systems, Man, and Cybernetics | 2006
Li Li; Barbara T. Sweet; Leland S. Stone
Humans perceive isoluminant visual stimuli (i.e., stimuli that show little or no luminance variation across space) to move more slowly than their luminance-defined counterparts. To explore whether impaired motion perception at isoluminance also affects visuomotor control tasks, the authors examined the performance as humans actively controlled a moving line. They tested two types of displays matched for overall salience: a luminant display composed of a luminance-defined Gaussian-blurred horizontal line and an isoluminant display composed of a color-defined line with the same spatial characteristics, but near-zero luminance information. Six subjects were asked to use a joystick to keep the line centered on a cathode ray tube display as its vertical position was perturbed pseudorandomly by a sum of ten sinusoids under two control regimes (velocity and acceleration control). The mean root mean square position error was larger for the isoluminant than for the luminant line (mean across subjects: 22% and 29% larger, for the two regimes, respectively). The describing functions (Bode plots) showed that, compared to the luminant line, the isoluminant line showed a lower open-loop gain (mean decrease: 3.4 and 2.9 dB, respectively) and an increase in phase lag, which can be accounted for by an increase in reaction time (mean increase: 103 and 155 ms, respectively). The performance data are generally well fit by McRuer's classical crossover model. In conclusion, both our model-independent and model-dependent analyses show that the selective loss of luminance information impairs human active control performance, which is consistent with the preferential loss of information from cortical visual motion processing pathways. Display engineers must therefore be mindful of the importance of luminance-contrast per se (not just total stimulus salience) in the design of effective visual displays for closed-loop active control tasks.
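For readers unfamiliar with the crossover model invoked above, a minimal numerical sketch: near the crossover frequency the combined pilot-plus-vehicle open loop behaves like an integrator with a time delay, Y_OL(jw) = (wc/jw)·exp(-jw·tau). The crossover frequency and delay values below are hypothetical, not the fitted values from this study.

```python
import cmath, math

def crossover_open_loop(w, wc, tau):
    """McRuer crossover model: Y_OL(jw) = (wc / jw) * exp(-j * w * tau)."""
    jw = 1j * w
    return (wc / jw) * cmath.exp(-jw * tau)

wc, tau = 3.0, 0.25   # crossover frequency (rad/s), effective delay (s) -- example values
H = crossover_open_loop(wc, wc, tau)       # evaluate at the crossover frequency
gain_db = 20 * math.log10(abs(H))          # 0 dB at crossover by construction
phase_deg = math.degrees(cmath.phase(H))   # -90 deg integrator lag minus delay lag
print(gain_db, phase_deg)

# A 100 ms longer reaction time (the order of the isoluminant deficit measured
# above) adds w * 0.1 rad of extra phase lag at every frequency w:
extra_lag_deg = math.degrees(wc * 0.1)
print(extra_lag_deg)
```

This makes the reported deficits concrete: a lower open-loop gain shifts the 0 dB crossover down in frequency, and a longer reaction time steepens the phase lag, both of which degrade closed-loop tracking.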
AIAA Modeling and Simulation Technologies Conference and Exhibit | 2006
Barbara T. Sweet; Mary K. Kaiser
Humans rely on a plethora of visual cues to inform judgments of the depth or range of a particular object or feature. One source of information is binocular (or stereo) vision, enabled by the slight differences in the images between the two eyes. Another source of information is the size, in visual angle, of the feature or object. Although it is possible to provide binocular disparity in a display, it typically implies increased system cost or lower update rates; therefore it is useful to determine how critical it is to provide binocular disparity to the user. This paper describes the results of several experiments investigating the integration of stereo and size cues in performing manual control tasks. In the first experiment, a visual cue integration model was developed for two types of manual control tasks (rate-control and acceleration-control) and different levels of cue salience. The results of this experiment were that stereo disparity dominated judgments of depth position, while size dominated judgments of depth rate. From this experiment, it was hypothesized that stereo disparity would improve performance more on rate-control tasks than on acceleration-control tasks. Two additional experiments were conducted, with and without stereo display, to test this hypothesis at different update rates. The results confirmed that while stereo disparity improved performance on rate-control tasks, it did not improve performance on acceleration-control tasks; in fact, performance was reduced with stereo disparity when the display method reduced update rate below a particular threshold.
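As a sketch of what a linear cue-integration model computes, the combined estimate is a normalized weighted sum of the single-cue estimates, with the weights depending on the judged quantity (position vs. rate). The weights and single-cue values below are hypothetical illustrations, not the fitted model from the paper.

```python
def integrate_cues(estimates, weights):
    """Weighted-average cue integration: the combined estimate is the
    normalized weighted sum of the single-cue estimates (an integration
    strategy, as opposed to selecting one cue and ignoring the rest)."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Hypothetical example: stereo dominates position, size dominates rate.
d_stereo, d_size = 2.0, 2.4  # single-cue depth estimates (m)
position = integrate_cues([d_stereo, d_size], [0.8, 0.2])  # stereo-weighted
rate = integrate_cues([d_stereo, d_size], [0.2, 0.8])      # size-weighted
print(position, rate)  # position pulled toward stereo, rate toward size
```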
AIAA Modeling and Simulation Technologies Conference and Exhibit | 2005
Barbara T. Sweet; Mary K. Kaiser
Pilots rely upon visual information conveyed by computer-generated perspective displays in cockpits, and also by out-the-window (OTW) scenes rendered in simulators, to perform control tasks. Because even the most advanced graphics system cannot recreate the full complexity of a natural perspective scene (due to limitations in spatial and temporal resolution, dynamic range, field-of-view, and scene detail), it is necessary to understand which characteristics of the perspective scene are used by the pilot to accomplish vehicular control. In the current study we investigated the effects of simulated OTW visual scene content on an operator’s ability to control pitch attitude of a simulated vehicle in the presence of both pitch disturbances and longitudinal position disturbances. Two types of vehicle dynamics were simulated, first-order and second-order; two types of ground textures were assessed, and the task was performed with and without a visible horizon. Data from 12 participants were analyzed to determine task performance, and to estimate describing functions of the human operator characteristics. Task performance with both types of vehicle dynamics was significantly better with a visible horizon. The type of ground texture did not affect performance. Parameterized models were fit to the describing function measurements. The resulting models exhibited excellent correlation with the describing functions. Identified parameters related to the visual cue usage showed excellent correlation with the available scene features.
AIAA Modeling and Simulation Technologies Conference | 2012
Peter Zaal; Barbara T. Sweet
Recent developments in fly-by-wire control architectures for rotorcraft have introduced new interest in the identification of time-varying pilot control behavior in multi-axis control tasks. In this paper, a maximum likelihood estimation method is used to estimate the parameters of a pilot model with time-dependent sigmoid functions to characterize time-varying human control behavior. An experiment was conducted in which 9 general aviation pilots performed a simultaneous roll and pitch control task with time-varying aircraft dynamics. In 8 different conditions, the axis containing the time-varying dynamics and the growth factor of the dynamics were varied, allowing for an analysis of the performance of the estimation method when estimating time-dependent parameter functions. In addition, a detailed analysis of pilots' adaptation to the time-varying aircraft dynamics in both the roll and pitch axes could be performed. Pilot control behavior in both axes was significantly affected by the time-varying aircraft dynamics in roll and pitch, and by the growth factor. The main effect was found in the axis that contained the time-varying dynamics. However, pilot control behavior also changed over time in the axis not containing the time-varying aircraft dynamics. This indicates that some cross coupling exists in the perception and control processes between the roll and pitch axes.
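A time-dependent sigmoid parameter function of the kind described above can be sketched as follows: the pilot-model parameter transitions smoothly between an initial and a final value around a midpoint time, with a growth factor setting how abrupt the transition is. All numerical values below are hypothetical examples.

```python
import math

def sigmoid_parameter(t, p0, p1, growth, t_mid):
    """Time-dependent sigmoid parameter function: the parameter moves
    smoothly from p0 to p1 around t = t_mid; the growth factor `growth`
    controls how fast the transition occurs."""
    return p0 + (p1 - p0) / (1.0 + math.exp(-growth * (t - t_mid)))

# Hypothetical visual gain adapting as the aircraft dynamics change near t = 45 s
p0, p1, growth, t_mid = 1.0, 2.0, 0.5, 45.0
print(sigmoid_parameter(0.0, p0, p1, growth, t_mid))   # ~p0 well before t_mid
print(sigmoid_parameter(45.0, p0, p1, growth, t_mid))  # (p0 + p1) / 2 at t_mid
print(sigmoid_parameter(90.0, p0, p1, growth, t_mid))  # ~p1 well after t_mid
```

Fitting p0, p1, growth, and t_mid for each pilot-model parameter is what turns the time-varying identification problem into a finite-dimensional estimation problem amenable to maximum likelihood.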
SID Symposium Digest of Technical Papers | 2008
Marc Winterbottom; James Gaska; George A. Geri; Barbara T. Sweet
An evaluation of a prototype grating light valve laser projector indicates it has properties well-suited to flight-simulation applications. Full-field luminance and contrast, spatial resolution, temporal resolution, and color stability were equal to or better than those of CRT projectors typically used in flight-simulator applications. In addition, this projector is capable of providing refresh rates greater than 60 Hz. The higher refresh rates eliminate perceived flicker, and greatly reduce (120 Hz) or eliminate (240 Hz) motion artifacts over the range of target speeds tested.
AIAA Modeling and Simulation Technologies Conference | 2011
Barbara T. Sweet; Mary K. Kaiser
Humans rely on a variety of visual cues to inform them of the depth or range of a particular object or feature. Some cues are provided by physiological mechanisms, others from pictorial cues that are interpreted psychologically, and still others by the relative motions of objects or features induced by observer (or vehicle) motions. These cues provide different levels of information (ordinal, relative, absolute) and saliency depending upon depth, task, and interaction with other cues. Display technologies used for head-down and head-up displays, as well as out-the-window displays, have differing capabilities for providing depth cueing information to the observer/operator. In addition to technologies, display content and the source (camera/sensor versus computer rendering) provide varying degrees of cue information. Additionally, most displays create some degree of cue conflict. In this paper, visual depth cues and their interactions will be discussed, as well as display technology and content and related artifacts. Lastly, the role of depth cueing in performing closed-loop control tasks will be discussed.
AIAA Modeling and Simulation Technologies Conference | 2010
Peter M.T. Zaal; Barbara T. Sweet
Visual display systems such as the out-the-window or head-down displays of a simulator present a visual scene that is sampled in both the spatial domain (by the display resolution) and the time domain (by the display refresh rate). For a given human visual temporal sensitivity, spatial-frequency content of the scene, and speed of the image motion, spatio-temporal aliasing can occur when the image is sampled at a rate that is too low. The effects of spatio-temporal aliasing on visual perception are understood to some extent. However, not much is known about the effects on pilot performance in active control tasks. This paper presents the results of an experiment to determine the effects of spatio-temporal aliasing on pilot performance and control behavior in a target-tracking task. To induce different levels of spatio-temporal aliasing, the refresh rate of the experimental display was varied among five different levels. The results indicate that pilots adopt a different control strategy when the display refresh rate is increased from 60 to 120 Hz. The visual gain and neuromuscular frequency of the identified pilot model increase, while the visual time delay decreases. This change in control strategy allows for a higher tracking performance at higher display refresh rates as indicated by a decrease in root mean square of the error signal and an increase in crossover frequency.
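The sampling condition behind spatio-temporal aliasing can be sketched with the Nyquist criterion: a pattern of spatial frequency f (cycles/deg) drifting at v (deg/s) produces a temporal frequency of f·v Hz at any fixed point, and sampling it at the display refresh rate aliases once f·v exceeds half that rate. The spatial frequency below is an arbitrary example value.

```python
def max_unaliased_speed(refresh_hz, spatial_freq_cpd):
    """Nyquist limit on image motion: a pattern of spatial frequency f
    (cycles/deg) moving at v (deg/s) has temporal frequency f * v (Hz);
    aliasing occurs once f * v exceeds refresh_hz / 2."""
    return (refresh_hz / 2.0) / spatial_freq_cpd

f = 2.0  # cycles per degree of scene texture (example value)
for rate in (60, 120, 240):
    print(rate, max_unaliased_speed(rate, f))  # 60 -> 15, 120 -> 30, 240 -> 60 deg/s
```

This is one way to read the refresh-rate manipulation: doubling the rate from 60 to 120 Hz doubles the range of image speeds that remain free of temporal aliasing for a given scene texture.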