
Publication


Featured research published by Ferran Argelaguet.


IEEE Computer Graphics and Applications | 2009

Efficient 3D Pointing Selection in Cluttered Virtual Environments

Ferran Argelaguet; Carlos Andujar

In this article, we study the impact of the eye-hand visibility mismatch on selection tasks performed with hand-rooted pointing techniques. We propose a new mapping for ray control, called Ray Casting from the Eye (RCE), which attempts to overcome this mismatch's negative effects. In essence, RCE combines the benefits of image-plane techniques (the absence of visibility mismatch and continuity of the ray movement in screen space) with the benefits of ray control through hand rotation (requiring less physical hand movement). This article builds on a previous study on the impact of eye-to-hand separation on 3D pointing selection. Here, we provide empirical evidence that RCE clearly outperforms classic ray casting (RC) selection, in both sparse and cluttered scenes.
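
The abstract describes RCE only at a high level; the snippet below is a minimal sketch of one way an eye-rooted pointing ray could be built, under the assumption that the hand ray steers a point at a fixed distance which the eye ray is then cast through. The function name rce_selection_ray and the plane_dist parameter are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def rce_selection_ray(eye_pos, hand_pos, hand_dir, plane_dist=2.0):
    """Sketch of an eye-rooted pointing ray (illustrative, not the exact RCE mapping).

    The hand still steers the ray through its orientation, but the selection
    ray is re-rooted at the eye: the point the hand ray reaches at a fixed
    distance becomes the steering point, and the eye ray is cast through it,
    so whatever the ray hits is visible from the user's viewpoint.
    """
    hand_dir = np.asarray(hand_dir, dtype=float)
    hand_dir /= np.linalg.norm(hand_dir)
    steering_point = np.asarray(hand_pos, dtype=float) + plane_dist * hand_dir
    direction = steering_point - np.asarray(eye_pos, dtype=float)
    return np.asarray(eye_pos, dtype=float), direction / np.linalg.norm(direction)
```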


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2011

See-through techniques for referential awareness in collaborative virtual reality

Ferran Argelaguet; Alexander Kulik; André Kunert; Carlos Andujar; Bernd Froehlich

Multi-user virtual reality systems enable natural collaboration in shared virtual worlds. Users can talk to each other, gesture and point into the virtual scenery as if it were real. As in reality, referring to objects by pointing often results in situations where objects are occluded from the other users' viewpoints. While in reality this problem can only be solved by changing the viewing position, specialized individual views of the shared virtual scene enable various other solutions. As one such solution we propose show-through techniques to make sure that the objects one is pointing to can always be seen by others. We first study the impact of such augmented viewing techniques on the spatial understanding of the scene, the rapidity of mutual information exchange, and the proxemic behavior of users. To this end we conducted a user study in a co-located stereoscopic multi-user setup. Our study revealed advantages for show-through techniques in terms of comfort, user acceptance and compliance with social protocols, while spatial understanding and mutual information exchange are retained. Motivated by these results, we further analyze whether show-through techniques may also be beneficial in distributed virtual environments. We investigated a distributed setup for two users, each participant having their own display screen and a minimalist avatar representation of each participant. In such a configuration there is a lack of mutual awareness, which hinders the understanding of each other's pointing gestures and decreases the relevance of social protocols in terms of proxemic behavior. Nevertheless, we found that show-through techniques can improve collaborative interaction tasks even in such situations.
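
As a rough illustration of the show-through idea (the abstract does not spell out the rendering mechanics), the sketch below identifies the objects blocking another user's line of sight to a referenced object, so a renderer could draw them semi-transparently in that user's view. The bounding-sphere occlusion test and all names are assumptions made here for illustration.

```python
import numpy as np

def occluders_to_show_through(observer_eye, target_pos, objects):
    """Return the objects that block the observer's line of sight to the target.

    'objects' is a list of (center, radius) bounding spheres. A caller could
    render the returned objects semi-transparently (or cut a hole into them)
    so the referenced target stays visible in the observer's view.
    """
    eye = np.asarray(observer_eye, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    ray = target - eye
    length = np.linalg.norm(ray)
    ray /= length

    blockers = []
    for center, radius in objects:
        center = np.asarray(center, dtype=float)
        t = np.dot(center - eye, ray)        # closest point along the sight line
        if 0.0 < t < length:                 # only objects between eye and target
            dist = np.linalg.norm(center - (eye + t * ray))
            if dist < radius:
                blockers.append((center, radius))
    return blockers
```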


Virtual Reality Software and Technology | 2009

Visual feedback techniques for virtual pointing on stereoscopic displays

Ferran Argelaguet; Carlos Andujar

The act of pointing to graphical elements is one of the fundamental tasks in Human-Computer Interaction. In this paper we analyze visual feedback techniques for accurate pointing on stereoscopic displays. Visual feedback techniques must provide precise information about the pointing tool and its spatial relationship with potential targets. We show both analytically and empirically that current approaches provide poor feedback on stereoscopic displays, resulting in low user performance when accurate pointing is required. We propose a new feedback technique following a camera viewfinder metaphor. The key idea is to locally flatten the scene objects around the pointing direction to facilitate their selection. We present the results of a user study comparing cursor-based and ray-based visual feedback techniques with our approach. Our user studies indicate that our viewfinder metaphor clearly outperforms competing techniques in terms of user performance and binocular fusion.
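
A minimal sketch of the local flattening idea: candidate targets inside a small cone around the pointing ray are projected onto a plane at a fixed depth. The cone angle, plane distance and function name are illustrative assumptions, not the technique's actual parameters.

```python
import numpy as np

def flatten_around_ray(ray_origin, ray_dir, positions, cone_deg=5.0, plane_dist=3.0):
    """Sketch of the 'viewfinder' flattening idea (illustrative only).

    Positions that lie within a small cone around the pointing ray are
    projected (from the ray origin) onto a plane perpendicular to the ray at
    a fixed distance, removing depth differences among nearby candidates.
    """
    o = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    cos_limit = np.cos(np.radians(cone_deg))

    flattened = []
    for p in positions:
        v = np.asarray(p, dtype=float) - o
        dist = np.linalg.norm(v)
        if dist > 0 and np.dot(v / dist, d) >= cos_limit:
            # Scale the offset so the point lands on the flattening plane.
            flattened.append(o + v * (plane_dist / np.dot(v, d)))
        else:
            flattened.append(np.asarray(p, dtype=float))
    return flattened
```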


Virtual Reality Software and Technology | 2008

Overcoming eye-hand visibility mismatch in 3D pointing selection

Ferran Argelaguet; Carlos Andujar; Ramon Trueba

Most pointing techniques for 3D selection in virtual environments rely on a ray originating at the user's hand whose direction is controlled by the hand orientation. In this paper we study the potential mismatch between visible objects (those which appear unoccluded from the user's eye position) and selectable objects (those which appear unoccluded from the user's hand position). We study the impact of such eye-hand visibility mismatch on selection performance, and propose a new technique for ray control which attempts to overcome this problem. We present an experiment to compare our ray control technique with classic ray casting in selection tasks with complex 3D scenes. Our user studies show promising results for our technique in terms of speed and accuracy.
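
For illustration only, here is a sketch of how the eye-hand visibility mismatch could be detected for a single target, assuming occluders are approximated by bounding spheres; the helper names are hypothetical and not taken from the paper.

```python
import numpy as np

def _occluded(origin, target, spheres):
    """True if any (center, radius) bounding sphere blocks the segment origin->target."""
    o, t = np.asarray(origin, dtype=float), np.asarray(target, dtype=float)
    ray = t - o
    length = np.linalg.norm(ray)
    ray /= length
    for center, radius in spheres:
        c = np.asarray(center, dtype=float)
        s = np.dot(c - o, ray)
        if 0.0 < s < length and np.linalg.norm(c - (o + s * ray)) < radius:
            return True
    return False

def has_visibility_mismatch(eye_pos, hand_pos, target_pos, occluder_spheres):
    """A target suffers eye-hand mismatch when its visibility differs between
    the eye position and the hand position."""
    return (_occluded(eye_pos, target_pos, occluder_spheres)
            != _occluded(hand_pos, target_pos, occluder_spheres))
```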


Smart Graphics | 2010

Automatic speed graph generation for predefined camera paths

Ferran Argelaguet; Carlos Andujar

Predefined camera paths are a valuable tool for the exploration of complex virtual environments. The speed at which the virtual camera travels along different path segments is key for allowing users to perceive and understand the scene while maintaining their attention. Current tools for adjusting the speed of camera motion along predefined paths, such as keyframing, interpolation types and speed curve editors, provide animators with a great deal of flexibility but offer little support for deciding which speed is best at each point along the path. In this paper we address the problem of computing a suitable speed curve for a predefined camera path through an arbitrary scene. We strive to adapt the speed along the path to produce non-fatiguing, informative, interesting and concise animations. Key elements of our approach include a new metric based on optical flow for quantifying the amount of change between two consecutive frames, the use of perceptual metrics to disregard optical flow in areas with low image saliency, and the incorporation of habituation metrics to keep the user's attention. We also present the results of a preliminary user study comparing user response with alternative approaches for computing speed curves.
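
Below is a minimal sketch of the change metric and speed mapping, assuming grayscale frames and OpenCV's Farneback optical flow. The saliency weighting is a crude stand-in for the perceptual metrics described in the paper, habituation is omitted, and all names and constants are illustrative.

```python
import cv2
import numpy as np

def frame_change(prev_gray, next_gray, saliency=None):
    """Per-frame change metric: mean optical-flow magnitude, optionally weighted
    by an image-saliency map so flow in uninteresting regions counts less."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    if saliency is not None:
        magnitude = magnitude * saliency / max(saliency.mean(), 1e-6)
    return float(magnitude.mean())

def speed_curve(changes, target_change=5.0, v_min=0.1, v_max=3.0):
    """Map per-sample change values to camera speeds: slow down where much is
    happening on screen, speed up where little changes."""
    changes = np.asarray(changes, dtype=float)
    speeds = target_change / np.maximum(changes, 1e-6)
    return np.clip(speeds, v_min, v_max)
```

In this simplified reading, the speed at each path sample is inversely proportional to how much the image changes there, clamped to a comfortable range.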


Smart Graphics | 2009

Complexity and Occlusion Management for the World-in-Miniature Metaphor

Ramon Trueba; Carlos Andujar; Ferran Argelaguet

The World in Miniature (WIM) metaphor allows users to interact and travel efficiently in virtual environments. In addition to the first-person perspective offered by typical VR applications, the WIM offers a second, dynamic viewpoint through a hand-held miniature copy of the environment. In the original WIM paper the miniature was a scaled-down replica of the whole scene, thus limiting its application to simple models manipulated at a single level of scale. Several WIM extensions have been proposed where the replica shows only a part of the environment. In this paper we present a new approach to handle complexity and occlusion in the WIM. We discuss algorithms for selecting the region of the scene that will be covered by the miniature copy and for handling occlusion from an exocentric viewpoint. We also present the results of a user study showing that our technique can greatly improve user performance on spatial tasks in densely occluded scenes.
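
The core of any WIM is a world-to-miniature mapping; the sketch below shows such a mapping for a selected region, assuming a uniform scale into a hand-held cube. It does not reproduce the paper's region-selection or occlusion-handling algorithms, and the names and the default miniature size are assumptions.

```python
import numpy as np

def world_to_wim(point, region_min, region_max, wim_center, wim_size=0.4):
    """Map a world-space point inside the selected region into the hand-held
    miniature: uniform scale plus translation so the region's bounding box
    fits into a cube of edge 'wim_size' centred at 'wim_center'."""
    p = np.asarray(point, dtype=float)
    lo = np.asarray(region_min, dtype=float)
    hi = np.asarray(region_max, dtype=float)
    scale = wim_size / np.max(hi - lo)      # uniform scale preserves proportions
    return np.asarray(wim_center, dtype=float) + (p - (lo + hi) / 2.0) * scale
```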


Symposium on 3D User Interfaces | 2010

Improving co-located collaboration with show-through techniques

Ferran Argelaguet; André Kunert; Alexander Kulik; Bernd Froehlich

Multi-user virtual reality systems enable natural interaction with shared virtual worlds. Users can talk to each other, gesture and point into the virtual scenery as if it were real. As in reality, referring to objects by pointing often results in situations where objects are occluded from the other users' viewpoints. While in reality this problem can only be solved by changing the viewing position, specialized individual views of the shared virtual scene enable various other solutions. As one such solution we propose show-through techniques to make sure that the objects one is pointing to can be seen by others. We analyzed the influence of such augmented viewing techniques on the spatial understanding of the scene, the rapidity of mutual information exchange, and the social behavior of users. The results of our user study revealed that show-through techniques support spatial understanding on a similar level as walking around to achieve a non-occluded view of specified objects. However, advantages in terms of comfort, user acceptance and compliance with social protocols could be shown, which suggests that virtual reality techniques can in fact be better than 3D reality.


Computers & Graphics | 2007

Anisomorphic ray-casting manipulation for interacting with 2D GUIs

Carlos Andujar; Ferran Argelaguet

The accommodation of conventional 2D GUIs with virtual environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets over a virtual window immersed into a 3D world. The main idea is to provide a new interaction mode where hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray providing visual feedback of the orientation of both the input device and the selection ray. We have implemented this technique and evaluated its effectiveness in terms of performance and user preference. Our experiments on a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed into 3D worlds.
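
A minimal sketch of the control-display-ratio idea, assuming the angular offset of the hand ray from the window's direction is simply multiplied by a factor below one; this is an illustrative reading with hypothetical names, not the authors' exact anisomorphic mapping.

```python
import numpy as np

def scaled_ray_direction(hand_dir, window_dir, cd_ratio=0.3):
    """Scale down the angular offset of the hand ray from the direction toward
    the active virtual window, so hand rotations map to smaller ray rotations
    (control-display ratio < 1) for precise 2D widget selection."""
    h = np.asarray(hand_dir, dtype=float)
    h /= np.linalg.norm(h)
    w = np.asarray(window_dir, dtype=float)
    w /= np.linalg.norm(w)

    angle = np.arccos(np.clip(np.dot(w, h), -1.0, 1.0))
    if angle < 1e-8:
        return w
    axis = np.cross(w, h)
    axis /= np.linalg.norm(axis)
    a = cd_ratio * angle                     # scaled-down rotation angle
    # Rodrigues' rotation of 'w' around 'axis' by the scaled angle.
    return (w * np.cos(a) + np.cross(axis, w) * np.sin(a)
            + axis * np.dot(axis, w) * (1 - np.cos(a)))
```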


Eurographics | 2006

Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs

Carlos Andujar; Ferran Argelaguet

The accommodation of conventional 2D GUIs with Virtual Environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets over a virtual window immersed into a 3D world. The main idea is to provide a new interaction mode where hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray providing visual feedback of the orientation of both the input device and the selection ray. The user's feeling is that of controlling a flexible ray that gets curved as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. Our experiments on a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed into 3D worlds.


Smart Graphics | 2008

Improving 3D Selection in VEs through Expanding Targets and Forced Disocclusion

Ferran Argelaguet; Carlos Andujar

In this paper we explore the extension of 2D pointing facilitation techniques to 3D object selection. We discuss the problems that must be faced when adapting such techniques to 3D interaction in VR applications, and we propose two strategies to adapt the expanding targets approach to the 3D realm, either by dynamically scaling potential targets or by using depth sorting to guarantee that potential targets appear completely unoccluded. We also present three experiments evaluating both strategies in 3D selection tasks with multiple targets at varying densities. Our user studies show promising results for 3D expanding targets in terms of error rates and, most importantly, user acceptance.
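
For illustration, here is a sketch of the dynamic-scaling strategy, assuming a target's scale factor grows as its angular distance to the pointing ray shrinks; the falloff shape, constants and function name are arbitrary choices, not taken from the paper.

```python
import numpy as np

def target_scale(ray_origin, ray_dir, target_pos, max_scale=1.8, falloff_deg=10.0):
    """Dynamic 'expanding targets' scale factor: targets close to the pointing
    direction are grown, with the effect fading out over 'falloff_deg'."""
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(target_pos, dtype=float) - np.asarray(ray_origin, dtype=float)
    cos_angle = np.clip(np.dot(v / np.linalg.norm(v), d), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    t = max(0.0, 1.0 - angle / falloff_deg)  # 1 on the ray, 0 beyond the falloff
    return 1.0 + (max_scale - 1.0) * t
```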

Collaboration


Dive into Ferran Argelaguet's collaborations.

Top Co-Authors

Carlos Andujar (Polytechnic University of Catalonia)
Ramon Trueba (Polytechnic University of Catalonia)
Marta Fairén (Polytechnic University of Catalonia)
Marc Wolter (RWTH Aachen University)
Pierre Schroeder (Centre national de la recherche scientifique)