Lewis L. Chuang
Max Planck Society
Publication
Featured research published by Lewis L. Chuang.
Frontiers in Computational Neuroscience | 2012
Lewis L. Chuang; Quoc C. Vuong; Heinrich H. Bülthoff
There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that depends on viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were tested on their ability to recognize these learned objects when the animation sequence was shown in the same order as during learning or in reverse order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes in both experiments. Learned non-rigid motion continued to contribute to recognition performance, and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint.
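Recognition sensitivity in such old/new judgments is typically quantified with the signal-detection measure d′. Below is a minimal sketch of how a sensitivity drop under sequence reversal could be computed from hit and false-alarm counts; the correction method, function names, and example counts are illustrative assumptions, not values from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a log-linear correction to avoid 0 or 1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one observer, learned vs. reversed sequence order
d_same = d_prime(hits=42, misses=8, false_alarms=10, correct_rejections=40)
d_reversed = d_prime(hits=35, misses=15, false_alarms=14, correct_rejections=36)
print(f"d' same order: {d_same:.2f}, d' reversed order: {d_reversed:.2f}")
```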
Visual Cognition | 2005
Karen Lander; Lewis L. Chuang
Previous work has suggested that seeing a famous face move aids the recognition of identity, especially when viewing conditions are degraded (Knight & Johnston, 1997; Lander, Christie, & Bruce, 1999). Experiment 1 investigated whether the beneficial effects of motion are related to a particular type of facial motion (expressing, talking, or rigid motion). Results showed a significant beneficial effect of both expressive and talking movements, but no advantage for rigid motion, compared with a single static image. Experiment 2 investigated whether the advantage for motion is uniform across identity. Participants rated moving famous faces for distinctiveness of motion. The famous faces (moving and static freeze frame) were then used as stimuli in a recognition task. The advantage for face motion was significant only when the motion displayed was distinctive. Results suggest that one reason moving faces are easier to recognize is that some familiar faces have characteristic motion patterns, which act as an additional cue to identity.
IEEE Transactions on Systems, Man, and Cybernetics | 2013
Hyoung Il Son; Antonio Franchi; Lewis L. Chuang; Junsuk Kim; Heinrich H. Bülthoff; Paolo Robuffo Giordano
In this paper, we investigate the effect of haptic cueing on a human operator's performance in the field of bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of mobile robots and the perceptual sensitivity to the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by Force cue feedback, perceptual sensitivity is best served by Velocity cue feedback. In addition, we show that large haptic feedback gains do not always guarantee an enhancement in the teleoperator's performance.
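The three cue types can be read as simple mappings from the teleoperation state to a feedback force. The sketch below illustrates one way such cues might be combined linearly; the gains, safety radius, and specific formulas are assumptions for illustration, not the controllers used in the paper.

```python
import numpy as np

def force_cue(obstacle_distance, d_safe=2.0, k_f=1.0):
    """Repulsive cue that grows as the commanded trajectory approaches an obstacle."""
    return k_f * max(0.0, d_safe - obstacle_distance) / d_safe

def velocity_cue(v_commanded, v_actual, k_v=1.0):
    """Cue proportional to the mismatch between commanded and actual UAV velocities."""
    return k_v * float(np.linalg.norm(np.asarray(v_commanded) - np.asarray(v_actual)))

def velocity_force_cue(obstacle_distance, v_commanded, v_actual, alpha=0.5):
    """Velocity+Force cue as a linear combination of the two basic cues."""
    return alpha * force_cue(obstacle_distance) + (1.0 - alpha) * velocity_cue(v_commanded, v_actual)

# Example: commanded velocity differs from the actual UAV velocity near an obstacle
print(velocity_force_cue(1.2, v_commanded=[1.0, 0.0, 0.0], v_actual=[0.6, 0.1, 0.0]))
```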
Eye Tracking Research & Applications | 2010
Hans-Joachim Bieg; Lewis L. Chuang; Roland W. Fleming; Harald Reiterer; Heinrich H. Bülthoff
Selecting a graphical item by pointing with a computer mouse is a ubiquitous task in many graphical user interfaces. Several techniques have been suggested to facilitate this task, for instance, by reducing the required movement distance. Here we measure the natural coordination of eye and mouse pointer control across several search and selection tasks. We find that users automatically minimize the distance to likely targets in an intelligent, task-dependent way. When target location is highly predictable, top-down knowledge can enable users to initiate pointer movements prior to target fixation. These findings question the utility of existing assistive pointing techniques and suggest that alternative approaches might be more effective.
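One way to quantify eye-mouse coordination of the kind measured here is the average distance between gaze and pointer positions over time-aligned samples. The sketch below assumes synchronized, equal-length streams of pixel coordinates; it is not the authors' analysis pipeline.

```python
import numpy as np

def mean_gaze_pointer_distance(gaze_xy, pointer_xy):
    """Mean Euclidean distance (pixels) between time-aligned gaze and pointer samples."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    pointer_xy = np.asarray(pointer_xy, dtype=float)
    return float(np.linalg.norm(gaze_xy - pointer_xy, axis=1).mean())

# Hypothetical synchronized samples from an eye tracker and the mouse
gaze = [[512, 300], [530, 310], [610, 340]]
pointer = [[500, 320], [525, 318], [600, 350]]
print(f"mean gaze-pointer distance: {mean_gaze_pointer_distance(gaze, pointer):.1f} px")
```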
Quarterly Journal of Experimental Psychology | 2006
Karen Lander; Lewis L. Chuang; Lee H. V. Wickham
It is easier to identify a degraded familiar face when it is shown moving (smiling, talking; nonrigid motion) than when it is displayed as a static image (Knight & Johnston, 1997; Lander, Christie, & Bruce, 1999). Here we explore the theoretical underpinnings of the moving face recognition advantage. In Experiment 1 we show that identification of personally familiar faces shown naturally smiling is significantly better than when the same faces are shown artificially smiling (morphed motion), as a single static neutral image, or as a single static smiling image. In Experiment 2 we demonstrate that speeding up the motion significantly impairs the recognition of identity from natural smiles, but has little effect on morphed smiles. We conclude that the recognition advantage for face motion does not reflect a general benefit for motion but suggests that, for familiar faces, information about their characteristic motion is stored in memory.
Automotive User Interfaces and Interactive Vehicular Applications | 2016
Shadan Sadeghian Borojeni; Lewis L. Chuang; Wilko Heuten; Susanne Boll
Take-over situations in highly automated driving occur when drivers have to take over vehicle control due to automation shortcomings. Because the driving task places high visual processing demands on the driver and a take-over maneuver must be completed within a limited time, appropriate user interface designs for take-over requests (TORs) are needed. In this paper, we propose applying ambient TORs, which address the peripheral vision of the driver. In a driving simulator experiment, we tested a) ambient displays as TORs, b) whether contextual information could be conveyed through ambient TORs, and c) whether the presentation pattern (static, moving) of the contextual TORs affects take-over behavior. Results showed that conveying contextual information through ambient displays led to shorter reaction times and longer times to collision without increasing workload. The presentation pattern, however, did not affect take-over performance.
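Take-over behavior in such studies is commonly summarized by the reaction time after the request and the time to collision at the moment of response. A minimal sketch of how these measures might be derived from simulator logs is shown below; the signal names and the constant-speed approximation of time to collision are assumptions, not the study's scoring procedure.

```python
def take_over_metrics(tor_time, response_time, ego_position, ego_speed, obstacle_position):
    """Reaction time (s) from the take-over request and time to collision (s) at response."""
    reaction_time = response_time - tor_time
    gap = obstacle_position - ego_position          # remaining distance to the obstacle (m)
    ttc = gap / ego_speed if ego_speed > 0 else float("inf")
    return reaction_time, ttc

# Hypothetical log values: TOR at t=10 s, driver responds 1.8 s later while driving 33.3 m/s
print(take_over_metrics(tor_time=10.0, response_time=11.8,
                        ego_position=120.0, ego_speed=33.3, obstacle_position=220.0))
```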
Intelligent Robots and Systems | 2011
Hyoung Il Son; Lewis L. Chuang; Antonio Franchi; Junsuk Kim; Dongjun Lee; Seong Whan Lee; Heinrich H. Bülthoff; Paolo Robuffo Giordano
In this paper, we investigate the maneuverability performance of human teleoperators of multi-robot systems. First, we propose that maneuverability performance can be assessed by a frequency response function that jointly considers the input force of the operator and the position errors of the multi-robot system being maneuvered. Doing so allows us to evaluate maneuverability performance in terms of the human teleoperator's interaction with the controlled system. This allowed us to effectively determine the suitability of different haptic cue algorithms for improving teleoperation maneuverability. Performance metrics based on the human teleoperator's frequency response function indicate that maneuverability performance is best supported by a haptic feedback algorithm that is based on an obstacle avoidance force.
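A frequency response function of this kind can be estimated from the cross-spectrum between the operator's input force and the system's position error, divided by the auto-spectrum of the force. The sketch below uses Welch-type spectral estimates; the sampling rate, window length, and synthetic signals are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np
from scipy.signal import csd, welch

def frequency_response(force, position_error, fs, nperseg=1024):
    """Estimate H(f) = S_xy(f) / S_xx(f) from operator force to multi-robot position error."""
    f, s_xy = csd(force, position_error, fs=fs, nperseg=nperseg)
    _, s_xx = welch(force, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx

# Hypothetical 100 Hz recordings: a 0.5 Hz operator input and a lagging tracking error
fs = 100.0
t = np.arange(0, 60, 1 / fs)
force = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
error = 0.3 * np.sin(2 * np.pi * 0.5 * t - 0.4) + 0.1 * np.random.randn(t.size)
f, h = frequency_response(force, error, fs)
print(np.abs(h[np.argmin(np.abs(f - 0.5))]))  # gain near the input frequency
```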
Frontiers in Human Neuroscience | 2016
M. Scheer; Heinrich H. Bülthoff; Lewis L. Chuang
The current study investigates the demands that steering places on mental resources. Instead of a conventional dual-task paradigm, participants were only required to perform a steering task while task-irrelevant auditory distractor probes (environmental sounds and beep tones) were intermittently presented. The event-related potentials (ERPs) generated by these probes were analyzed for their sensitivity to the steering task’s demands. The steering task required participants to counteract unpredictable roll disturbances, and difficulty was manipulated either by adjusting the bandwidth of the roll disturbance or by varying the complexity of the control dynamics. A mass univariate analysis revealed that steering selectively diminishes the amplitudes of early P3, late P3, and the re-orientation negativity (RON) to task-irrelevant environmental sounds but not to beep tones. Our findings are in line with a three-stage distraction model, which interprets these ERPs as reflecting the post-sensory detection of the task-irrelevant stimulus, engagement, and re-orientation back to the steering task. This interpretation is consistent with our manipulations of steering difficulty. More participants showed diminished amplitudes for these ERPs in the “hard” steering condition relative to the “easy” condition. To sum up, the current work identifies the spatiotemporal ERP components of task-irrelevant auditory probes that are sensitive to steering demands on mental resources. This provides a non-intrusive method for evaluating mental workload in novel steering environments.
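Probe-evoked ERP amplitudes of the sort analyzed here are typically obtained by epoching the EEG around each probe onset, baseline-correcting, and averaging the voltage in a component window. The sketch below assumes a single-channel continuous recording and a generic late-positivity window; the channel, window boundaries, and sampling rate are illustrative, not those of the study.

```python
import numpy as np

def probe_erp_amplitude(eeg, probe_onsets, fs, window=(0.30, 0.60), baseline=(-0.10, 0.0)):
    """Mean baseline-corrected amplitude in a post-probe window, averaged over probes.

    eeg: 1-D voltage trace from one channel; probe_onsets: onset times in seconds.
    """
    amplitudes = []
    for onset in probe_onsets:
        b0, b1 = int((onset + baseline[0]) * fs), int((onset + baseline[1]) * fs)
        w0, w1 = int((onset + window[0]) * fs), int((onset + window[1]) * fs)
        if b0 < 0 or w1 > eeg.size:
            continue  # skip probes too close to the recording edges
        amplitudes.append(eeg[w0:w1].mean() - eeg[b0:b1].mean())
    return float(np.mean(amplitudes))

# Hypothetical 10-minute recording at 500 Hz with a probe roughly every 5 s
fs = 500.0
eeg = np.random.randn(int(600 * fs))
onsets = np.arange(5.0, 595.0, 5.0)
print(probe_erp_amplitude(eeg, onsets, fs))
```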
PLOS ONE | 2012
Hans-Joachim Bieg; Jean-Pierre Bresciani; Heinrich H. Bülthoff; Lewis L. Chuang
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.
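The main sequence mentioned here is commonly modeled as a saturating relationship between saccade amplitude and peak velocity, e.g. V_peak = V_max * (1 - exp(-A/C)). The sketch below fits this model to hypothetical data; the model form, parameter values, and data points are illustrative assumptions, not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, c):
    """Saturating main-sequence model: peak velocity (deg/s) as a function of amplitude (deg)."""
    return v_max * (1.0 - np.exp(-amplitude / c))

# Hypothetical saccade amplitudes (deg) and peak velocities (deg/s) for one condition
amplitudes = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 15.0])
peak_velocities = np.array([160.0, 270.0, 350.0, 410.0, 450.0, 480.0, 510.0])
(v_max, c), _ = curve_fit(main_sequence, amplitudes, peak_velocities, p0=(550.0, 5.0))
print(f"fitted V_max = {v_max:.0f} deg/s, C = {c:.1f} deg")
```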
Visual Cognition | 2006
Lewis L. Chuang; Quoc C. Vuong; Ian M. Thornton; Heinrich H. Bülthoff
Recognizing novel deforming objects. Visual Cognition, 2006, 14(1), 85–88.