Joseph L. Cooper
University of Texas at Austin
Publication
Featured research published by Joseph L. Cooper.
Journal of Field Robotics | 2008
Michael A. Goodrich; Bryan S. Morse; Damon Gerhardt; Joseph L. Cooper; Morgan Quigley; Julie A. Adams; Curtis M. Humphrey
Wilderness Search and Rescue (WiSAR) entails searching over large regions in often rugged, remote areas. Because of the large regions and potentially limited mobility of ground searchers, WiSAR is an ideal application for using small (human-packable) unmanned aerial vehicles (UAVs) to provide aerial imagery of the search region. This paper presents a brief analysis of the WiSAR problem with emphasis on practical aspects of visual-based aerial search. As part of this analysis, we present and analyze a generalized contour search algorithm, and relate this search to existing coverage searches. Extending beyond laboratory analysis, lessons from field trials with search and rescue personnel indicated the immediate need to improve two aspects of UAV-enabled search: how video information is presented to searchers and how UAV technology is integrated into existing WiSAR teams. In response to the first need, three computer vision algorithms for improving video display presentation are compared; results indicate that constructing temporally localized image mosaics is more useful than stabilizing video imagery. In response to the second need, a goal-directed task analysis of the WiSAR domain was conducted and combined with field observations to identify operational paradigms and field tactics for coordinating the UAV operator, the payload operator, the mission manager, and ground searchers.
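The mosaic-based presentation can be illustrated with a short sketch. The code below is a generic stand-in, not the paper's implementation: it chains ORB-feature homographies (assuming roughly planar terrain) to warp the last few video frames into the newest frame's coordinate system with OpenCV, so searchers see a temporally localized mosaic rather than isolated stabilized frames.

```python
# Generic sketch of a temporally localized mosaic; not the paper's code.
# Assumes roughly planar terrain so frame-to-frame homographies are valid.
import cv2
import numpy as np

def pairwise_homography(src, dst):
    """Estimate the homography mapping src onto dst from ORB matches + RANSAC."""
    orb = cv2.ORB_create(1000)
    gray_src = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
    gray_dst = cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY)
    k1, d1 = orb.detectAndCompute(gray_src, None)
    k2, d2 = orb.detectAndCompute(gray_dst, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    return H

def local_mosaic(frames, canvas_size=(1280, 960)):
    """Blend the last few frames into the coordinate frame of the newest one."""
    w, h = canvas_size
    mosaic = np.zeros((h, w, 3), np.uint8)
    reference = frames[-1]
    mosaic[:reference.shape[0], :reference.shape[1]] = reference
    H_to_ref = np.eye(3)
    # Walk backwards in time, chaining each older->newer homography to reach
    # the reference frame, and let older imagery fill only the empty regions.
    for older, newer in zip(reversed(frames[:-1]), reversed(frames[1:])):
        H_to_ref = H_to_ref @ pairwise_homography(older, newer)
        warped = cv2.warpPerspective(older, H_to_ref, (w, h))
        empty = mosaic.sum(axis=2, keepdims=True) == 0
        mosaic = np.where(empty, warped, mosaic)
    return mosaic
```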
Journal of Vision | 2013
Gabriel Diaz; Joseph L. Cooper; Constantin A. Rothkopf; Mary Hayhoe
Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their pre-bounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.
human-robot interaction | 2008
Joseph L. Cooper; Michael A. Goodrich
Wilderness search and rescue (WiSAR) is a challenging problem because of the large areas and often rough terrain that must be searched. Using mini-UAVs to deliver aerial video to searchers has potential to support WiSAR efforts, but a number of technology and human factors problems must be overcome to make this practical. At the source of many of these problems is a desire to manage the UAV using as few people as possible, so that more people can be used in ground-based search efforts. This paper uses observations from two informal studies and one formal experiment to identify what human operators may be unaware of as a function of autonomy and information display. Results suggest that progress is being made on designing autonomy and information displays that may make it possible for a single human to simultaneously manage the UAV and its camera in WiSAR, but that adaptable displays that support systematic navigation are probably needed.
Philosophical Transactions of the Royal Society B | 2013
Gabriel Diaz; Joseph L. Cooper; Mary Hayhoe
In addition to stimulus properties and task factors, memory is an important determinant of the allocation of attention and gaze in the natural world. One way that the role of memory is revealed is by predictive eye movements. Both smooth pursuit and saccadic eye movements demonstrate predictive effects based on previous experience. We have previously shown that unskilled subjects make highly accurate predictive saccades to the anticipated location of a ball prior to a bounce in a virtual racquetball setting. In this experiment, we examined this predictive behaviour. We asked whether the period after the bounce provides subjects with visual information about the ball trajectory that is used to programme the pursuit movement initiated when the ball passes through the fixation point. We occluded a 100 ms period of the ball's trajectory immediately after the bounce, and found very little effect on the subsequent pursuit movement. Subjects did not appear to modify their strategy to prolong the fixation. Neither were we able to find an effect on interception performance. Thus, it is possible that the occluded trajectory information is not critical for subsequent pursuit, and subjects may use an estimate of the ball's trajectory to programme pursuit. These results provide further support for the role of memory in eye movements.
Journal of Vision | 2013
Gabriel Diaz; Joseph L. Cooper; Dmitry Kit; Mary Hayhoe
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
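For readers who want a concrete starting point, a minimal sketch of the gaze-angle and angular-distance computations the primer covers is given below. It uses NumPy and illustrative function and parameter names, not the primer's own code: a monocular eye-in-head gaze vector is rotated into world coordinates by the head pose, then compared against the direction from the eye to a virtual object.

```python
# Minimal sketch of gaze-angle computation in a virtual world; names are
# illustrative assumptions, not the primer's API.
import numpy as np

def gaze_direction_world(head_rotation, eye_azimuth_deg, eye_elevation_deg):
    """Rotate an eye-in-head gaze vector into world coordinates.

    head_rotation: 3x3 rotation matrix of the head in world coordinates.
    eye_azimuth_deg, eye_elevation_deg: monocular eye-tracker angles.
    """
    az = np.radians(eye_azimuth_deg)
    el = np.radians(eye_elevation_deg)
    # Gaze vector in head coordinates (x right, y up, z forward).
    gaze_head = np.array([np.cos(el) * np.sin(az),
                          np.sin(el),
                          np.cos(el) * np.cos(az)])
    return head_rotation @ gaze_head

def angular_distance_deg(gaze_world, eye_position, object_position):
    """Angle between the gaze ray and the direction from the eye to an object."""
    to_object = np.asarray(object_position) - np.asarray(eye_position)
    cos_theta = np.dot(gaze_world, to_object) / (
        np.linalg.norm(gaze_world) * np.linalg.norm(to_object))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

Thresholding the frame-to-frame change of this angular distance (and of the gaze direction itself) is one simple way to label fixations, saccades, and pursuit, which is the classification task the primer goes on to address.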
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Joseph L. Cooper; Michael A. Goodrich
Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by the search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.
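One building block of such a georeferenced display is placing the camera view into the terrain model. The sketch below illustrates that step under simplifying assumptions (pinhole camera, locally flat ground) and with hypothetical parameter names; it is not the interface's actual code. It casts a ray through an image pixel and intersects it with the ground plane to find the world point the pixel depicts.

```python
# Illustrative sketch of georeferencing a video pixel onto flat ground.
# All names and the flat-ground assumption are simplifications, not the
# paper's implementation.
import numpy as np

def pixel_to_ground(pixel, camera_matrix, R_world_from_cam, cam_position,
                    ground_z=0.0):
    """Intersect the viewing ray through `pixel` with the plane z = ground_z.

    camera_matrix:     3x3 intrinsic matrix K.
    R_world_from_cam:  3x3 rotation taking camera-frame vectors into the world frame.
    cam_position:      UAV camera position in world coordinates (x, y, z).
    """
    u, v = pixel
    ray_cam = np.linalg.inv(camera_matrix) @ np.array([u, v, 1.0])
    ray_world = R_world_from_cam @ ray_cam
    if abs(ray_world[2]) < 1e-9:
        return None  # ray is parallel to the ground plane
    t = (ground_z - cam_position[2]) / ray_world[2]
    if t <= 0:
        return None  # ground plane is behind the camera
    return np.asarray(cam_position) + t * ray_world
```

Projecting the four frame corners this way gives the quadrilateral on the terrain map into which the video can be warped, which is how the imagery "integrates cleanly into the terrain model" described above.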
motion in games | 2012
Joseph L. Cooper; Dana H. Ballard
Finding torques that control a physical model to follow motion capture marker data can be complicated. We present a simple method for first constraining a physical model to follow marker data using forward simulation with intuitive parametrization. Essentially, the markers attach to the model through joint constraints and drag the body into position. We then use forward simulation to compute joint torques that produce the same movement. This is accomplished by adding constraints on the relative angular velocities of the body parts to the physical model. Framing the movement in terms of constraints on the model allows us to use the Open Dynamics Engine (ODE) to find torques that simultaneously account for joint limits, body momentum, and ground/contact constraints. Although balanced movement still requires some external stabilizing torques, these torques are generally quite small and can potentially be addressed by minor changes in foot placement.
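The second pass of this idea, recovering a torque from a velocity constraint, can be sketched on a single hinge joint. The code below is an assumption-laden stand-in using the PyODE bindings rather than the paper's full-body model: the hinge is driven toward a target relative angular velocity by the constraint solver, and the torque the solver applied is read back from the joint feedback.

```python
# Simplified two-body sketch (not the paper's code) of recovering a joint
# torque from a relative angular velocity constraint in ODE via PyODE.
import ode

world = ode.World()
world.setGravity((0.0, 0.0, -9.81))

def make_body(mass_kg=1.0):
    body = ode.Body(world)
    m = ode.Mass()
    m.setBox(1000.0, 0.1, 0.1, 0.4)   # density and box dimensions (m)
    m.adjust(mass_kg)                 # rescale to the desired total mass
    body.setMass(m)
    return body

upper = make_body()
lower = make_body()
lower.setPosition((0.0, 0.0, -0.4))

hinge = ode.HingeJoint(world)
hinge.attach(upper, lower)
hinge.setAnchor((0.0, 0.0, -0.2))
hinge.setAxis((1.0, 0.0, 0.0))
hinge.setFeedback(True)               # record the forces/torques ODE applies

dt = 1.0 / 120.0
target_relative_velocity = 2.0        # rad/s, e.g. taken from tracked marker motion
for _ in range(240):
    # Velocity "motor": the solver applies whatever torque (up to ParamFMax)
    # is needed to reach the requested relative angular velocity this step.
    hinge.setParam(ode.ParamVel, target_relative_velocity)
    hinge.setParam(ode.ParamFMax, 1000.0)
    world.step(dt)
    force1, torque1, force2, torque2 = hinge.getFeedback()
    # torque1/torque2 include the constraint torque applied to each body.
```

In the paper's setting this torque readout happens for every joint of an articulated humanoid, with the target velocities supplied by the marker-constrained first pass.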
international symposium on neural networks | 2013
Leif Johnson; Joseph L. Cooper; Dana H. Ballard
Sparsity and redundancy reduction have been shown to be useful in machine learning, but empirical evaluation has been performed primarily on classification tasks using datasets of natural images and sounds. Similarly, the performance of unsupervised feature learning followed by supervised fine-tuning has primarily focused on classification tasks. In comparison, relatively little work has investigated the use of sparse codes for representing human movements and poses, or for using these codes in regression tasks with movement data. This paper defines a basic coding and regression architecture for evaluating the impact of sparsity when coding human pose information, and tests the performance of several coding methods within this framework for the task of mapping from a kinematic (joint angle) modality to a dynamic (joint torque) one. In addition, we evaluate the performance of unified loss functions defined on the same class of models. We show that, while sparse codes are useful for effective mappings between modalities, their primary benefit for this task seems to be in admitting overcomplete codebooks. We make use of the proposed architecture to examine in detail the sources of error for each stage in the model under various coding strategies. Furthermore, we show that using a unified loss function that passes gradient information between stages of the coding and regression architecture provides substantial reductions in overall error.
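A schematic version of such a coding-plus-regression pipeline can be built from scikit-learn stand-ins; the sketch below uses MiniBatchDictionaryLearning for an overcomplete sparse codebook and Ridge for the regression stage, with random placeholder arrays in place of the paper's motion data, and does not reproduce the unified loss function.

```python
# Schematic sparse-coding + regression pipeline from a kinematic modality
# (joint angles) to a dynamic one (joint torques). Data are random stand-ins;
# this is not the paper's architecture or loss.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
joint_angles = rng.standard_normal((2000, 30))    # stand-in kinematic data
joint_torques = rng.standard_normal((2000, 30))   # stand-in dynamic data

# Overcomplete sparse codebook on the kinematic modality (more atoms than dims).
coder = MiniBatchDictionaryLearning(n_components=120, alpha=1.0,
                                    transform_algorithm="lasso_lars",
                                    random_state=0)
codes = coder.fit_transform(joint_angles)

# Linear regression from sparse codes to the dynamic modality.
regressor = Ridge(alpha=1e-2).fit(codes, joint_torques)

# Mapping a new pose: encode, then regress.
new_codes = coder.transform(rng.standard_normal((10, 30)))
predicted_torques = regressor.predict(new_codes)
```

The paper's unified-loss variant differs from this two-stage version by backpropagating the regression error into the codebook; the sketch only shows the separately trained baseline.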
international symposium on safety, security, and rescue robotics | 2007
Michael A. Goodrich; Joseph L. Cooper; Julie A. Adams; Curtis M. Humphrey; Ron Zeeman; Brian G. Buss