Publication


Featured research published by Heinrich H. Buelthoff.


Zeitschrift für Naturforschung C | 1980

3-D Analysis of the Flight Trajectories of Flies (Drosophila melanogaster)

Heinrich H. Buelthoff; Tomaso Poggio; C Wehrhahn

We have developed a computer system for reconstructing and analyzing three-dimensional flight trajectories of flies. Its application to the study of the free flight behaviour of the fruit fly Drosophila melanogaster is described. The main results are: a) Drosophila males only occasionally track other flies; b) in such cases the fly’s angular velocity is a function of the error angle under which the leading fly is seen; c) body saccades can be demonstrated during cruising flights; d) high angular velocities are strongly correlated with low forward velocities, probably reflecting an aerodynamic constraint of flight. The 3-D technique described may provide an adequate tool for studying the organization of the systems present in flies and for relating the free flight behaviour to previous analyses of tethered flies.
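The reconstruction step above, recovering a 3-D position from multiple camera views, can be illustrated with generic two-view triangulation by the midpoint method. This is a textbook sketch, not the authors' original 1980 system; the camera ray origins and directions below are hypothetical.

```python
# Generic two-view triangulation: each camera defines a viewing ray
# (origin + direction); the 3-D point is taken as the midpoint of the
# shortest segment between the two rays. Illustrative sketch only.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the closest approach between rays p1+t*d1 and p2+s*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero only if the rays are parallel
    t = (b * e - c * d) / denom      # parameter along ray 1
    s = (a * e - b * d) / denom      # parameter along ray 2
    q1 = [p + t * x for p, x in zip(p1, d1)]
    q2 = [p + s * x for p, x in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two hypothetical cameras whose rays intersect at the fly's position (1, 1, 1):
point = triangulate_midpoint([0.0, 0.0, 0.0], [1.0, 1.0, 1.0],
                             [10.0, 0.0, 0.0], [-9.0, 1.0, 1.0])
# point ≈ [1.0, 1.0, 1.0]
```

With synchronized frames from two calibrated views, applying this per frame yields the 3-D trajectory samples that the flight analysis then operates on.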


IEEE Virtual Reality Conference | 2015

Turbulent motions cannot shake VR

F Soyka; Elena Kokkinara; Markus Leyrer; Heinrich H. Buelthoff; Mel Slater; Betty J. Mohler

The International Air Transport Association forecasts that there will be at least a 30% increase in passenger demand for flights over the next five years. In these circumstances the aircraft industry is looking for new ways to keep passengers occupied, entertained and healthy, and one of the methods under consideration is immersive virtual reality. It is therefore becoming important to understand how motion sickness and presence in virtual reality are influenced by physical motion. We were specifically interested in the use of head-mounted displays (HMDs) while experiencing in-flight motions such as turbulence. Fifty people were tested in different virtual environments varying in their context (virtual airplane versus magic carpet ride over tropical islands) and the way the physical motion was incorporated into the virtual world (matching visual and auditory stimuli versus no incorporation). Participants were subjected to three brief periods of turbulent motions realized with a motion simulator. Physiological signals (postural stability, heart rate and skin conductance) as well as subjective experiences (sickness and presence questionnaires) were measured. None of our participants experienced severe motion sickness during the experiment, and although there were only small differences between conditions, we found indications that it is beneficial for both wellbeing and presence to choose a virtual environment in which turbulent motions could be plausible and perceived as part of the scenario. We therefore conclude that brief exposure to turbulent motions does not make participants sick.


1st Workshop on Eye Tracking and Visualization, ETVIS 2015 | 2015

Unsupervised Clustering of EOG as a Viable Substitute for Optical Eye Tracking

Nina Flad; Tatiana Fomina; Heinrich H. Buelthoff; Lewis L. Chuang

Eye-movements are typically measured with video cameras and image recognition algorithms. Unfortunately, these systems are susceptible to changes in illumination during measurements. Electrooculography (EOG) is another approach for measuring eye-movements that does not suffer from the same weakness. Here, we introduce and compare two methods that allow us to extract the dwells of our participants from EOG signals under presentation conditions that are too difficult for optical eye tracking. The first method is unsupervised and utilizes density-based clustering. The second method combines the optical eye-tracker’s methods to determine fixations and saccades with unsupervised clustering. Our results show that EOG can serve as a sufficiently precise and robust substitute for optical eye tracking, especially in studies with changing lighting conditions. Moreover, EOG can be recorded alongside electroencephalography (EEG) without additional effort.
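The density-based clustering the first method relies on can be sketched with a minimal DBSCAN-style pass over one-dimensional EOG samples: dwells show up as dense clusters of similar voltage, while saccade transients and drift fall out as noise. The voltages, `eps`, and `min_pts` values below are illustrative, not the paper's actual parameters or pipeline.

```python
# Minimal DBSCAN-style density clustering of 1-D EOG voltage samples.
# Dense runs of similar voltage (dwells) form clusters; isolated samples
# (saccade transients) are labeled -1 (noise). Hypothetical sketch.

def dbscan_1d(samples, eps, min_pts):
    """Label each sample with a cluster id; -1 marks noise."""
    n = len(samples)
    labels = [None] * n                      # None = not yet visited

    def neighbors(i):
        return [j for j in range(n) if abs(samples[j] - samples[i]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                   # too sparse: noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                         # grow the cluster outward
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster          # noise point on a cluster edge
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:           # core point: keep expanding
                seeds.extend(jn)
    return labels

# Two stable gaze voltages (dwells) plus one stray sample:
eog = [0.0, 0.1, 0.05, 0.12, 5.0, 5.1, 5.05, 9.9]
print(dbscan_1d(eog, eps=0.3, min_pts=3))
# [0, 0, 0, 0, 1, 1, 1, -1]
```

Because no cluster count is fixed in advance, the same pass works regardless of how many distinct dwell targets a participant looks at.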


bioRxiv | 2017

What's Up: An Assessment of Causal Inference in the Perception of Verticality

Ksander N. de Winkel; Mikhail Katliar; Daniel Diers; Heinrich H. Buelthoff

The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum; by combining estimates of the upright provided by the visual system and the body’s inertial sensors with prior knowledge that the upright is usually above the head. Results from a number of recent studies furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of the idea of a vector sum (Forced Fusion, FF). However, findings from a study conducted in partial gravity suggest that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals differently based on inferred signal causality (Causal Inference, CI). We developed a novel Alternative-Reality system to manipulate visual and physical tilt independently, and tasked participants (n=28) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli. Overall, the data appear best explained by the FF model. However, an evaluation of individual data reveals considerable variability, favoring different models in about equal proportions of participants (FF, n=12; CI, n=7; CC, n=9). Given the observed variability, we conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright.
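The reliability-weighted vector sum at the core of the FF model corresponds to standard maximum-likelihood cue integration: each cue is weighted in proportion to its inverse variance. A minimal numeric sketch, with made-up visual and inertial tilt estimates rather than the study's data:

```python
# Forced-fusion (maximum-likelihood) integration of two tilt cues:
# each cue's weight is proportional to its reliability (1 / variance),
# and the fused estimate is more reliable than either cue alone.
# The tilt values and variances below are illustrative only.

def fuse(x_vis, var_vis, x_iner, var_iner):
    r_vis, r_iner = 1.0 / var_vis, 1.0 / var_iner    # reliabilities
    w_vis = r_vis / (r_vis + r_iner)                 # visual weight
    fused = w_vis * x_vis + (1.0 - w_vis) * x_iner   # weighted estimate
    fused_var = 1.0 / (r_vis + r_iner)               # variance of the fusion
    return fused, fused_var

# Visual cue: 10 deg tilt, variance 4; inertial cue: 20 deg tilt, variance 16.
est, var = fuse(10.0, 4.0, 20.0, 16.0)
# est = 12.0 (pulled toward the more reliable visual cue), var = 3.2
```

The CC alternative would instead return one cue's estimate unchanged, and CI would switch between fusion and capture depending on whether the cues are judged to share a common cause, which is what the individual-differences comparison above distinguishes.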


59th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2015 | 2015

On the Cognitive Demands of Different Controller Dynamics: A within-subject P300 Analysis

M Scheer; Heinrich H. Buelthoff; Lewis L. Chuang

The cognitive workload of a steering task could reflect its demand on attentional as well as working memory resources under different conditions. These respective demands could be differentiated by evaluating components of the event-related potential (ERP) response to different types of stimulus probes, which are claimed to reflect the availability of either attention (i.e., novelty-P3) or working memory (i.e., target-P3) resources. Here, a within-subject analysis is employed to evaluate the robustness of ERP measurements in discriminating the cognitive demands of different steering conditions. We find that the amplitude of novelty-P3 ERPs to task-irrelevant environmental sounds is diminished when participants are required to perform a steering task. This indicates that steering places a demand on attentional resources. In addition, target-P3 ERPs to a secondary auditory detection task vary when the controller dynamics in the steering task are manipulated. This indicates that differences in controller dynamics vary in their working memory demands.


Eurographics | 2008

Effect of the size of the field of view on the perceived amplitude of rotations of the visual scene

M. Ogier; Betty J. Mohler; Heinrich H. Buelthoff; Jean-Pierre Bresciani

Efficient navigation requires a good representation of body position/orientation in the environment and an accurate updating of this representation when the body-environment relationship changes. Here we tested whether the visual flow alone (i.e., no landmarks) can be used to update this representation when the visual scene is rotated, and whether having a limited horizontal field of view (30 or 60 degrees), as is the case in most virtual reality applications, degrades the performance as compared to a full field of view. Our results show that (i) the visual flow alone does not allow for accurately estimating the amplitude of rotations of the visual scene, notably giving rise to a systematic underestimation of rotations larger than 30 degrees, and (ii) having more than 30 degrees of horizontal field of view does not substantially improve the performance. Taken together, these results suggest that a 30-degree field of view is enough to (under)estimate the amplitude of visual rotations when only visual flow information is available, and that landmarks should probably be provided if the amplitude of the rotations has to be accurately perceived.


Journal of Vision | 2007

Effects of experience and task type on unsupervised categorization of novel, 3D objects

Theresa Cooke; Christian Wallraven; Heinrich H. Buelthoff

Background: The Importance of Shape for Visual Categorization. Rosch et al. (1976) and others have argued that the shape of objects is a fundamental determinant of category structure. In a previous study [2], we observed that after ten hours of visual exposure to a series of novel, 3D objects, subjects asked to perform free categorization in a sequential presentation task did so primarily on the basis of shape differences as opposed to texture or a combination of both. In contrast, no such preference was found after ten hours of haptic exposure.


Cerebral Cortex | 1994

How Are Three-Dimensional Objects Represented in the Brain?

Heinrich H. Buelthoff; Shimon Edelman; Michael J. Tarr


Journal of Vision | 2010

Is the motor system affected by the hollow face illusion?

Bruce Hartung; Volker H. Franz; Daniel Kersten; Heinrich H. Buelthoff


AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2017 | 2017

Experimental evaluation of haptic support systems for learning a 2-DoF tracking task

Giulia D'Intino; Mario Olivari; Stefano Geluardi; Joost Venrooij; Lorenzo Pollini; Heinrich H. Buelthoff

Collaboration


Dive into Heinrich H. Buelthoff's collaborations.

Top Co-Authors


Tomaso Poggio

Massachusetts Institute of Technology
