Robert W. Lindeman
University of Canterbury
Publications
Featured research published by Robert W. Lindeman.
Human Factors in Computing Systems | 1999
Robert W. Lindeman; John L. Sibert; James K. Hahn
This paper reports empirical results from a study into the use of 2D widgets in 3D immersive virtual environments. Several researchers have proposed the use of 2D interaction techniques in 3D environments; however, little empirical work has been done to test the usability of such approaches. We present the results of two experiments conducted on low-level 2D manipulation tasks within an immersive virtual environment. We empirically show that the addition of passive-haptic feedback for use in precise UI manipulation tasks can significantly increase user performance. Furthermore, users prefer interfaces that provide a physical surface, and that allow them to work with interface widgets in the same visual field of view as the objects they are modifying.
IEEE Virtual Reality Conference | 1999
Robert W. Lindeman; John L. Sibert; James K. Hahn
The study of human-computer interaction within immersive virtual environments requires us to balance what we have learned from the design and use of desktop interfaces with novel approaches to allow us to work effectively in three dimensions. While some researchers have called for revolutionary interfaces for these new environments, devoid of two-dimensional (2D) desktop widgets, others have taken a more evolutionary approach. Windowing within immersive virtual environments is an attempt to apply 2D interface techniques to three-dimensional (3D) worlds. 2D techniques are attractive because of their proven acceptance and widespread use on the desktop. With current methods, however, it is difficult for users of 3D worlds to perform precise manipulations, such as dragging sliders, or precisely positioning or orienting objects. We have developed a testbed designed to take advantage of bimanual interaction, proprioception, and passive-haptic feedback. We present preliminary results from an empirical study of 2D interaction in 3D environments using this system. We use a window registered with a tracked physical surface to provide support for precise manipulation of interface widgets displayed in the virtual environment.
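A minimal sketch of the registration step this testbed implies: expressing the tracked stylus tip in the physical paddle's local frame, so a touch on the real surface lands on the corresponding point of the virtual window. The pose format, paddle dimensions, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def world_to_surface(point_world, surface_pos, surface_rot):
    """Express a world-space point in the tracked surface's local frame.

    surface_pos: 3-vector, tracker-reported origin of the physical paddle.
    surface_rot: 3x3 rotation matrix for the paddle's orientation.
    """
    R = np.asarray(surface_rot)
    return R.T @ (np.asarray(point_world) - np.asarray(surface_pos))

def stylus_to_widget_uv(stylus_tip, surface_pos, surface_rot,
                        width=0.30, height=0.20, contact_eps=0.005):
    """Map a stylus tip to normalized (u, v) window coordinates,
    or None if the tip is off the paddle or not in contact with it."""
    x, y, z = world_to_surface(stylus_tip, surface_pos, surface_rot)
    if abs(z) > contact_eps:        # z = distance above the surface plane
        return None                 # not touching the physical surface
    u, v = x / width + 0.5, y / height + 0.5
    if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
        return (u, v)               # hit: route to the widget at (u, v)
    return None

# Example: paddle at the origin, unrotated; stylus touching upper-right area.
print(stylus_to_widget_uv([0.10, 0.05, 0.002], [0, 0, 0], np.eye(3)))
```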
Human Factors in Computing Systems | 2005
Robert W. Lindeman; John L. Sibert; Erick Mendez-Mendez; Sachin Patil; Daniel Phifer
This paper presents empirical results to support the use of vibrotactile cues as a means of improving user performance on a spatial task. In a building-clearing exercise, directional vibrotactile cues were employed to alert subjects to areas of the building that they had not yet cleared, but were currently exposed to. Compared with performing the task without vibrotactile cues, subjects were exposed to uncleared areas a smaller percentage of time, and cleared more of the overall space, when given the added vibrotactile stimulus. The average length of each exposure was also significantly less when vibrotactile cues were present.
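The abstract does not give the cueing logic, but the basic idea of directional vibrotactile cueing can be sketched as follows: compute the body-relative bearing from the user to an exposed, uncleared area and fire the tactor whose direction is angularly closest. The eight-tactor belt layout and all names here are hypothetical.

```python
import math

# Hypothetical belt of 8 tactors, evenly spaced around the torso
# (degrees, 0 = straight ahead, increasing clockwise).
TACTOR_BEARINGS = [i * 45.0 for i in range(8)]

def cue_tactor(user_pos, user_heading_deg, threat_pos):
    """Return the index of the tactor pointing toward an uncleared area."""
    dx = threat_pos[0] - user_pos[0]
    dy = threat_pos[1] - user_pos[1]
    world_bearing = math.degrees(math.atan2(dx, dy))       # 0 deg = +y axis
    relative = (world_bearing - user_heading_deg) % 360.0  # body-relative
    # Pick the tactor whose bearing is angularly closest (wrap-around aware).
    return min(range(len(TACTOR_BEARINGS)),
               key=lambda i: min(abs(relative - TACTOR_BEARINGS[i]),
                                 360.0 - abs(relative - TACTOR_BEARINGS[i])))

# Uncleared doorway directly to the user's right -> tactor 2 (90 degrees).
print(cue_tactor((0.0, 0.0), 0.0, (5.0, 0.0)))
```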
IEEE International Conference on Automatic Face and Gesture Recognition | 2004
Jose L. Hernandez-Rebollar; Nicholas Kyriakopoulos; Robert W. Lindeman
This work discusses an approach for capturing and translating isolated gestures of American Sign Language into spoken and written words. The instrumented part of the system combines an AcceleGlove and a two-link arm skeleton. Gestures of the American Sign Language are broken down into unique sequences of phonemes called poses and movements, recognized by software modules trained and tested independently on volunteers with different hand sizes and signing ability. Recognition rates of independent modules reached up to 100% for 42 postures, orientations, 11 locations and 7 movements using linear classification. The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprising 30 one-handed signs, achieving 98% accuracy. The system proved to be scalable: when the lexicon was extended to 176 signs and tested without retraining, the accuracy was 95%. This represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs).
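As a rough illustration of the linear-classification stage described above (not the authors' actual feature set or trained weights), a posture can be classified by scoring each class with a linear discriminant over glove-sensor features and taking the argmax.

```python
import numpy as np

class LinearPostureClassifier:
    """Minimal linear classifier: score_c = w_c . x + b_c, pick argmax.

    x stands in for a feature vector from the glove (e.g., finger-mounted
    accelerometer readings); weights would come from offline training.
    """
    def __init__(self, weights, biases, labels):
        self.W = np.asarray(weights)   # shape: (num_classes, num_features)
        self.b = np.asarray(biases)    # shape: (num_classes,)
        self.labels = labels

    def classify(self, features):
        scores = self.W @ np.asarray(features) + self.b
        return self.labels[int(np.argmax(scores))]

# Toy 2-class example with a 3-sensor feature vector.
clf = LinearPostureClassifier(
    weights=[[1.0, 0.0, -0.5], [-1.0, 0.2, 0.5]],
    biases=[0.0, 0.1],
    labels=["A", "B"],
)
print(clf.classify([0.9, 0.1, 0.2]))   # -> "A"
```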
Virtual Reality Software and Technology | 2004
Robert W. Lindeman; Robert C. Page; Yasuyuki Yanagida; John L. Sibert
This paper presents work we have done on the design and implementation of an untethered system to deliver haptic cues for use in immersive virtual environments through a body-worn garment. Our system can control a large number of body-worn vibration units, each with individually controllable vibration intensity. Several design iterations have helped us to refine the system and improve such aspects as robustness, ease of donning and doffing, weight, power consumption, cable management, and support for many different types of feedback units, such as pager motors, solenoids, and muffin fans. In addition, experience integrating the system into an advanced virtual reality system has helped define some of the design constraints for creating wearable solutions and further refine our implementation.
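The garment's control protocol is not given in this abstract; one plausible minimal sketch is a host-side routine that packs one intensity byte per tactor into a frame for a body-worn driver board. The frame layout, header value, and tactor count are assumptions.

```python
HEADER = 0xFF           # assumed frame delimiter
NUM_TACTORS = 16        # assumed garment size

def build_frame(intensities):
    """Pack one intensity byte (0-254) per tactor into a frame.

    0xFF is reserved for the header, so intensities are clamped to 254.
    """
    assert len(intensities) == NUM_TACTORS
    body = bytes(min(max(int(v), 0), 254) for v in intensities)
    return bytes([HEADER]) + body

# Drive tactor 3 at full intensity, tactor 4 at half, all others off.
levels = [0] * NUM_TACTORS
levels[3], levels[4] = 254, 127
frame = build_frame(levels)
print(frame.hex())
# A real system would stream frames over a serial or wireless link, e.g.:
#   port.write(frame)   # where port = serial.Serial("/dev/ttyUSB0", 115200)
```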
International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2004
Yasuyuki Yanagida; Mitsuhiro Kakita; Robert W. Lindeman; Yuichiro Kume; Nobuji Tetsutani
Vibrotactile displays have been studied for several decades in the context of sensory substitution. Recently, a number of vibrotactile displays have been developed to extend sensory modalities in virtual reality. Some of these target the whole body as the stimulation region, but existing systems are only designed for discrete stimulation points at specific parts of the body. However, since human tactile sensation has higher resolution, a higher density might be required in tactor alignment in order to realize general-purpose vibrotactile displays. One problem with this approach is that it might result in an impractically high number of required tactors. Our current focus is to explore ways of simplifying the system while maintaining an acceptable level of expressive ability. As a first step, we chose a well-studied task: tactile letter reading. We examined the possibility of distinguishing alphanumeric letters by using only a 3-by-3 array of vibrating motors on the back of a chair. The tactors are driven sequentially in the same sequence as if someone were tracing the letter on the chair's back. The results showed 87% successful letter recognition in some cases, which was close to the results in previous research with much larger arrays.
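A sketch of the sequential drive scheme described here: represent each letter as an ordered path over the 3-by-3 grid and pulse each tactor in turn, as if tracing the letter on the chair back. The stroke path and timing below are illustrative guesses, and the motor-driver calls are stand-ins.

```python
import time

# 3x3 tactor grid, indexed row-major:
#   0 1 2
#   3 4 5
#   6 7 8
# Illustrative stroke path for the letter "L": down the left edge,
# then across the bottom.
STROKES = {"L": [0, 3, 6, 7, 8]}

def activate(i):   print(f"tactor {i} on")    # stand-in for the motor driver
def deactivate(i): print(f"tactor {i} off")

def trace_letter(letter, pulse_s=0.2):
    """Pulse tactors one at a time along the letter's stroke path."""
    for tactor in STROKES[letter]:
        activate(tactor)
        time.sleep(pulse_s)
        deactivate(tactor)

trace_letter("L")
```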
IEEE Virtual Reality Conference | 2001
Robert W. Lindeman; John L. Sibert; James N. Templeman
The paper reports empirical results from two studies of effective user interaction in immersive virtual environments. The use of 2D interaction techniques in 3D environments has received increased attention recently. We introduce two new concepts to these techniques: the use of 3D widget representations, and the imposition of simulated surface constraints. The studies were identical in terms of treatments, but differed in the tasks performed by subjects. In both studies, we compared the use of two-dimensional (2D) versus three-dimensional (3D) interface widget representations, as well as the effect of imposing simulated surface constraints on precise manipulation tasks. The first study entailed a drag-and-drop task, while the second study looked at a slider-bar task. We empirically show that using 3D widget representations can have mixed results on user performance. Furthermore, we show that simulated surface constraints can improve user performance on typical interaction tasks in the absence of a physical manipulation surface. Finally, based on these results, we make some recommendations to aid interface designers in constructing effective interfaces for virtual environments.
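The "simulated surface constraint" idea can be sketched as clamping the tracked pointer so it never penetrates the widget plane: once the tip crosses the plane, its rendered position is projected back onto the surface. The plane representation and names are assumptions, not the paper's implementation.

```python
import numpy as np

def constrain_to_surface(tip, plane_point, plane_normal):
    """Project the tracked tip back onto the widget plane if it has
    penetrated (i.e., moved to the plane's negative side)."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    depth = float(np.dot(np.asarray(tip) - np.asarray(plane_point), n))
    if depth < 0.0:                         # behind the simulated surface
        return np.asarray(tip) - depth * n  # snap back onto the plane
    return np.asarray(tip, dtype=float)

# Plane z = 0 facing +z; a tip at z = -0.02 is rendered on the surface.
print(constrain_to_surface([0.1, 0.2, -0.02], [0, 0, 0], [0, 0, 1]))
```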
International Conference on Computer Graphics and Interactive Techniques | 2002
Jose L. Hernandez-Rebollar; Nicholas Kyriakopoulos; Robert W. Lindeman
We present the AcceleGlove, a novel whole-hand input device used to manipulate three different virtual objects: a virtual hand, icons on a virtual desktop, and a virtual keyboard, using the 26 postures of the American Sign Language (ASL) alphabet.
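A minimal sketch of the virtual-keyboard use: classify the current glove posture as one of the 26 ASL alphabet handshapes and emit the corresponding character. The posture labels and text-buffer interface here are hypothetical; the recognizer itself is stubbed out.

```python
import string

# The 26 ASL fingerspelling postures map one-to-one onto letters.
POSTURE_TO_CHAR = {f"asl_{c}": c for c in string.ascii_lowercase}

def type_from_posture(posture_label, buffer):
    """Append the letter for a recognized ASL posture to the text buffer;
    unrecognized labels are ignored."""
    ch = POSTURE_TO_CHAR.get(posture_label)
    if ch is not None:
        buffer.append(ch)
    return buffer

text = []
for label in ["asl_h", "asl_i", "unknown"]:   # "unknown" is dropped
    type_from_posture(label, text)
print("".join(text))   # -> "hi"
```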
Computers & Education | 1997
Charles W. Kann; Robert W. Lindeman; Rachelle S. Heller
Algorithm animation would seem to be a useful tool for teaching algorithms. However, previous empirical studies of using algorithm animation have produced mixed results. This paper presents an empirical study in which the subjects programmed the algorithm which they had seen animated. The results of the experiment indicate that combining the animation with the implementation of the algorithm was an effective way to teach the algorithm, and also produced transfer effects for general recursion problems.
IEEE Virtual Reality Conference | 2003
Robert W. Lindeman; Yasuyuki Yanagida
This paper presents results from two experiments into the use of vibrotactile cues for near-field haptics in virtual environments. In one experiment, subjects were tested on their ability to identify the location of a one-second vibrotactile stimulus presented to a single tactor of a 3-by-3 array on their back. We recorded an 84% correct identification rate. In a second experiment, subjects were asked to match the intensity of a vibrotactile stimulus presented at one location with the intensity at another location. We found that subjects could match the intensities to within 7 Hz if the reference and adjustable stimuli were presented at the same location, but only to within 18 Hz otherwise.