Publication


Featured research published by Henrique Galvan Debarba.


Symposium on 3D User Interfaces | 2012

LOP-cursor: Fast and precise interaction with tiled displays using one hand and levels of precision

Henrique Galvan Debarba; Luciana Porcher Nedel; Anderson Maciel

We present the levels of precision (LOP) cursor, a metaphor for high-precision pointing and simultaneous cursor control using commodity mobile devices. The LOP-cursor uses a two-level precision representation whose levels can be combined to access low- and high-resolution input. It provides a constrained area of high-resolution input and a broader area of lower-resolution input, offering the possibility of working with a two-legged cursor using only one hand. The LOP-cursor is designed for interaction with large high-resolution displays, e.g. display walls, and distributed screen/computer scenarios. This paper presents the design of the cursor, the implementation of a prototype, and user evaluation experiments showing that our method allows both the acquisition of small targets and fast interaction while using simultaneous cursors in a comfortable manner. Targets smaller than 0.3 cm can be selected by users at distances over 1.5 m from the screen with minimal effort.
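The core idea of combining a coarse and a fine input level can be sketched as follows. This is a minimal illustration only; the function name, coordinate conventions and the `fine_scale` value are hypothetical and not taken from the published technique:

```python
def lop_cursor_position(coarse_xy, fine_xy, fine_scale=0.05):
    """Combine a coarse (low-resolution) pointer position with a fine
    (high-resolution) offset, in the spirit of a two-level precision cursor.

    coarse_xy: normalized screen coordinates in [0, 1], e.g. from the
               device's orientation sensors.
    fine_xy:   offset in [-1, 1], e.g. from the touchscreen, scaled down
               so it only moves the cursor within a small high-precision
               window around the coarse position.
    """
    cx, cy = coarse_xy
    fx, fy = fine_xy
    # The fine level is confined to a small area; large movements are
    # handled by the coarse level, small corrections by the fine one.
    return (cx + fx * fine_scale, cy + fy * fine_scale)
```

With `fine_scale=0.05`, the full touchscreen range refines the cursor within only 5% of the screen, which is what makes sub-centimeter targets reachable from a distance.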


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

Efficient liver surgery planning in 3D based on functional segment classification and volumetric information

Henrique Galvan Debarba; Dinamar J. Zanchet; Daiane Fracaro; Anderson Maciel; Antonio Nocchi Kalil

Anatomic hepatectomies are resections in which compromised segments or sectors of the liver are extracted according to the topological structure of its vascular elements. Such structure varies considerably among patients, which makes current anatomy-based planning methods often inaccurate. In this work we propose a strategy to efficiently and semi-automatically segment and classify patient-specific liver models in 3D. The method is based on standard CT datasets and allows accurate estimation of the functional remaining liver volume. Experiments showing the effectiveness of the method are presented, and quantitative and qualitative results are discussed.


Symposium on 3D User Interfaces | 2015

Characterizing embodied interaction in First and Third Person Perspective viewpoints

Henrique Galvan Debarba; Eray Molla; Bruno Herbelin; Ronan Boulic

Third Person Perspective (3PP) viewpoints have the potential to expand how one perceives and acts in a virtual environment. They offer increased awareness of the posture and of the surroundings of the virtual body as compared to First Person Perspective (1PP). From another standpoint, however, 3PP can be considered less effective for inducing a strong sense of embodiment in a virtual body. Following an experimental paradigm based on full-body motion capture and immersive interaction, this study investigates the effect of perspective and of visuomotor synchrony on the sense of embodiment. It provides evidence supporting a high sense of embodiment in both 1PP and 3PP during engaging motor tasks, as well as guidelines for choosing the optimal perspective depending on the location of targets.


International Conference on Human-Computer Interaction | 2013

Disambiguation Canvas: A Precise Selection Technique for Virtual Environments

Henrique Galvan Debarba; Jerônimo Gustavo Grandi; Anderson Maciel; Luciana Porcher Nedel; Ronan Boulic

We present the disambiguation canvas, a technique developed for easy, accurate and fast selection of small objects and objects inside cluttered virtual environments. The disambiguation canvas relies on selection by progressive refinement; it uses a mobile device and consists of two steps. During the first, the user defines a subset of objects by means of the orientation sensors of the device and a volume-casting pointing technique. The subsequent step consists of the disambiguation of the desired target among the previously defined subset of objects, and is accomplished using the mobile device touchscreen. By relying on the touchscreen for the last step, the user can disambiguate among hundreds of objects at once. User tests show that our technique performs faster than ray-casting for targets of approximately 0.53 degrees of angular size, and is also much more accurate for all the tested target sizes.
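The first step of progressive refinement, selecting every object that falls inside a cast volume, can be sketched as a cone test. The object representation and parameter values here are hypothetical, chosen only to illustrate the idea:

```python
import math

def cone_cast(objects, ray_dir, half_angle_deg=5.0):
    """Step 1 of progressive refinement: keep the objects inside a
    selection cone cast along ray_dir.

    objects: list of (name, unit direction vector from the user's device).
    ray_dir: unit vector of the pointing direction.
    """
    cos_limit = math.cos(math.radians(half_angle_deg))
    subset = []
    for name, direction in objects:
        # An object lies inside the cone when the angle between its
        # direction and the ray is below the cone's half angle.
        dot = sum(a * b for a, b in zip(ray_dir, direction))
        if dot >= cos_limit:
            subset.append(name)
    return subset
```

The second step would then lay the resulting subset out on the touchscreen so a single tap disambiguates the final target.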


Human Factors in Computing Systems | 2017

Design and Evaluation of a Handheld-based 3D User Interface for Collaborative Object Manipulation

Jerônimo Gustavo Grandi; Henrique Galvan Debarba; Luciana Porcher Nedel; Anderson Maciel

Object manipulation in 3D virtual environments demands the combined coordination of rotations, translations and scales, as well as camera control to change the user's viewpoint. Thus, for many manipulation tasks, it would be advantageous to share the interaction complexity among team members. In this paper we propose a novel 3D manipulation interface based on a collaborative action coordination approach. Our technique explores a smartphone -- the touchscreen and inertial sensors -- as the input interface, enabling several users to collaboratively manipulate the same virtual object with their own devices. We first assessed our interface design on a docking task and an obstacle-crossing task with teams of two users. Then, we conducted a study with 60 users to understand the influence of group size on collaborative 3D manipulation. We evaluated teams in combinations of one, two, three and four participants. Experimental results show that teamwork increases accuracy when compared with a single user. The accuracy increase is correlated with the number of individuals in the team and their work-division strategy.


Symposium on 3D User Interfaces | 2016

Collaborative 3D manipulation using mobile phones

Jerônimo Gustavo Grandi; Iago U. Berndt; Henrique Galvan Debarba; Luciana Porcher Nedel; Anderson Maciel

We present a 3D user interface for the collaborative manipulation of three-dimensional objects in virtual environments. It maps the inertial sensors, touchscreen and physical buttons of a mobile phone onto well-known gestures to alter the position, rotation and scale of virtual objects. As these transformations require the control of multiple degrees of freedom (DOFs), collaboration is proposed as a solution to coordinate the modification of each and all of the available DOFs. Users are free to decide their own manipulation roles. All virtual elements are displayed on a single shared screen, which makes it convenient to gather multiple users in the same physical space.


Virtual Reality Software and Technology | 2015

Embodied interaction using non-planar projections in immersive virtual reality

Henrique Galvan Debarba; Sami Perrin; Bruno Herbelin; Ronan Boulic

In this paper we evaluate the use of non-planar projections as a means to increase the Field of View (FoV) in embodied Virtual Reality (VR). Our main goal is to bring the virtual body into the user's FoV and to understand how this affects the virtual body/environment relation and the quality of interaction. Subjects wore a Head Mounted Display (HMD) and were instructed to perform a selection and docking task while using either Perspective (≈106° vertical FoV), Hammer or Equirectangular (≈180° vertical FoV for both) projection. The increased FoV allowed for a shorter search time as well as fewer head movements. However, the quality of interaction was generally inferior, requiring more time to dock, increasing docking error and producing more body/environment collisions. We also assessed cybersickness and the sense of embodiment toward the virtual body through questionnaires, for which the difference between projections appeared to be less pronounced.


PLOS ONE | 2017

Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality

Henrique Galvan Debarba; Sidney Bovet; Roy Salomon; Olaf Blanke; Bruno Herbelin; Ronan Boulic

Empirical research on the bodily self has shown that body representation is malleable and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR), we assessed the effects of manipulating multisensory feedback (full-body control and visuo-tactile congruence) and visual perspective (first- and third-person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when given the possibility of alternating between first- and third-person perspective at will. Our results support that illusory ownership of a virtual body can be achieved in both first- and third-person perspectives under congruent visuo-motor-tactile conditions. However, subjective body ownership and reaction to threat were generally stronger for the first-person perspective and the alternating condition than for the third-person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences.


Symposium on 3D User Interfaces | 2011

The cube of doom: A bimanual perceptual user experience

Henrique Galvan Debarba; Juliano Franz; Vitor Reus; Anderson Maciel; Luciana Porcher Nedel

This paper presents a 3D user interface for solving a three-dimensional wooden-block puzzle. The interface aims at reproducing the real scenario of puzzle solving, involving devices and techniques for interaction and visualization that include a mobile device, haptics and enhanced stereo vision. The paper describes our interaction approach, the system implementation and user experiments.


IEEE Transactions on Visualization and Computer Graphics | 2018

Egocentric Mapping of Body Surface Constraints

Eray Molla; Henrique Galvan Debarba; Ronan Boulic

The relative location of human body parts often materializes the semantics of ongoing actions, intentions and even emotions expressed, or performed, by a human being. However, traditional methods of performance animation fail to correctly and automatically map the semantics of performer postures involving self-body contacts onto characters with different sizes and proportions. Our method proposes an egocentric normalization of the body-part relative distances to preserve the consistency of self-contacts for a large variety of human-like target characters. Egocentric coordinates are character-independent and encode the whole posture space, i.e., they ensure the continuity of the motion with and without self-contacts. We can transfer classes of complex postures involving multiple interacting limb segments while preserving their spatial order, without depending on temporal coherence. The mapping process exploits a low-cost constraint-relaxation technique relying on analytic inverse kinematics; thus, we can achieve online performance animation. We demonstrate our approach on a variety of characters and compare it with the state of the art in online retargeting in a user study. Overall, our method performs better than the state of the art, especially when the proportions of the animated character deviate from those of the performer.
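The underlying normalization idea, expressing a body-part offset in units of the performer's own body dimensions before rescaling it to the target character, can be sketched as follows. This is a deliberately simplified, hypothetical illustration; the actual method operates on egocentric coordinates over the whole body surface, not a single length:

```python
def retarget_offset(source_offset, source_limb_length, target_limb_length):
    """Sketch of egocentric normalization for motion retargeting.

    source_offset:      (x, y, z) offset of a body part relative to a
                        reference point on the performer, in meters.
    source_limb_length: a body dimension of the performer (e.g. arm length).
    target_limb_length: the same dimension on the target character.

    Normalizing by the performer's dimension and rescaling by the target's
    keeps relative reach consistent, so a self-contact (offset equal to the
    limb length) remains a self-contact on a character of any size.
    """
    normalized = tuple(c / source_limb_length for c in source_offset)
    return tuple(c * target_limb_length for c in normalized)
```

For example, a hand touching the chest at full arm's reach on the performer maps to full arm's reach on a shorter-armed character rather than to a fixed metric distance.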

Collaboration


Dive into Henrique Galvan Debarba's collaborations.

Top Co-Authors

Anderson Maciel
Universidade Federal do Rio Grande do Sul

Ronan Boulic
École Polytechnique Fédérale de Lausanne

Jerônimo Gustavo Grandi
Universidade Federal do Rio Grande do Sul

Luciana Porcher Nedel
Universidade Federal do Rio Grande do Sul

Bruno Herbelin
École Polytechnique Fédérale de Lausanne

Eray Molla
École Polytechnique Fédérale de Lausanne

Dinamar J. Zanchet
Universidade Federal de Ciências da Saúde de Porto Alegre

Olaf Blanke
École Polytechnique Fédérale de Lausanne