Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patrick Salamin is active.

Publications


Featured research published by Patrick Salamin.


IEEE Transactions on Learning Technologies | 2010

Quantifying Effects of Exposure to the Third and First-Person Perspectives in Virtual-Reality-Based Training

Patrick Salamin; Tej Tadi; Olaf Blanke; Frédéric Vexo; Daniel Thalmann

In recent years, use of the third-person perspective (3PP) in virtual training methods has become increasingly viable, yet despite the growing interest in the virtual reality and graphics technology underlying third-person-perspective usage, few studies have systematically looked at the dynamics of and differences between the third- and first-person perspectives (1PP). The current study was designed to quantify the differences between the effects induced by training participants in the third-person and first-person perspectives on a ball-catching task. Our results show that, for a certain trajectory of the stimulus, the performance of the participants after 3PP training is similar to their performance after normal-perspective training. Performance after 1PP training differs significantly from both the 3PP and the normal perspective.


Virtual Reality Continuum and Its Applications in Industry | 2008

Improved third-person perspective: a solution reducing occlusion of the 3PP?

Patrick Salamin; Daniel Thalmann; Frédéric Vexo

Previous research [Salamin et al. 2006] showed that the Third-Person Perspective (3PP) enhances user navigation in 3D virtual environments by reducing proprioception issues. Nevertheless, this approach has shown drawbacks related to occlusion and adaptation time. The perspective proposed in this paper, our Improved Third-Person Perspective (i-3PP), allows the user to see through his/her body in order to address 3PP limitations such as occlusion. As gamers prefer using the 3PP for movement and the First-Person Perspective (1PP) for fine operations, we verify whether this behavior extends to simulations in augmented and virtual reality. Finally, we check whether the i-3PP would be preferred over the other perspectives for any action.


Human Factors in Computing Systems | 2011

Natural activation for gesture recognition systems

Mathieu Hopmann; Patrick Salamin; Nicolas Chauvin; Frédéric Vexo; Daniel Thalmann

Gesture recognition is becoming a popular means of interaction, but it still suffers from important drawbacks that keep it from being integrated into everyday-life devices. One of these drawbacks is the activation of the recognition system (the trigger gesture), which is generally tiring and unnatural. In this paper, we propose two natural solutions for easily activating gesture interaction. The first requires a single action from the user: grasping a remote control to start interacting. The second is completely transparent to the user: the gesture system is activated only when the user's gaze points at the screen, i.e., when s/he is looking at it. Our first evaluation of the two proposed solutions, alongside a default implementation, suggests that gaze-estimation activation is efficient enough to remove the need for a trigger gesture to activate the recognition system.
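
The gaze-based activation described above amounts to a simple gating rule: hand-tracking frames reach the recognizer only while the estimated gaze stays on the screen. The following sketch is an illustrative reconstruction, not the authors' implementation; the GazeEstimator/GestureRecognizer interfaces and the dwell-time threshold are assumptions.

    import time

    DWELL_SECONDS = 0.3  # assumed dwell time before activation, to filter brief glances

    class GazeGatedRecognizer:
        """Forward gesture frames to a recognizer only while the user looks at the screen."""

        def __init__(self, gaze_estimator, gesture_recognizer):
            # Both collaborators are hypothetical: gaze_estimator.is_on_screen() -> bool,
            # gesture_recognizer.recognize(frame) -> recognized gesture or None.
            self.gaze = gaze_estimator
            self.recognizer = gesture_recognizer
            self._on_screen_since = None

        def process_frame(self, hand_frame):
            if self.gaze.is_on_screen():
                if self._on_screen_since is None:
                    self._on_screen_since = time.monotonic()
                if time.monotonic() - self._on_screen_since >= DWELL_SECONDS:
                    return self.recognizer.recognize(hand_frame)  # system active
            else:
                self._on_screen_since = None  # user looked away: deactivate
            return None  # inactive: no trigger gesture needed, frame ignored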


International Symposium on Computer and Information Sciences | 2006

Advanced mixed reality technologies for surveillance and risk prevention applications

Daniel Thalmann; Patrick Salamin; Renaud Ott; Mario Gutiérrez; Frédéric Vexo

We present a system that exploits advanced Mixed and Virtual Reality technologies to create a surveillance and security system that could also be extended to define emergency prevention plans in crowded environments. Surveillance cameras are carried by a mini blimp which is tele-operated using an innovative Virtual Reality interface with haptic feedback. An interactive control room (CAVE) receives multiple video streams from airborne and fixed cameras. Eye-tracking technology turns the user's gaze into the main interaction mechanism; the user in charge can examine, zoom, and select specific views by looking at them. Video streams selected in the control room can be redirected to agents equipped with a PDA. On-field agents can examine the video sent by the control center and locate the actual position of the airborne cameras on a GPS-driven map. The aerial video would be augmented with a real-time 3D crowd to create more realistic risk and emergency prevention plans. The prototype we present shows the added value of integrating AR/VR technologies into a complex application and opens up several research directions in the areas of tele-operation, multimodal interfaces, simulation, and risk and emergency prevention plans.
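
The control-room workflow, where gaze selects a view that can then be redirected to an agent's PDA, reduces to routing video streams by identifier. The sketch below only illustrates that routing idea; the class and method names (StreamRouter, select_by_gaze, redirect_to_agent) and the placeholder URLs are hypothetical, and the real system additionally integrates CAVE rendering, eye tracking, and mobile streaming.

    class StreamRouter:
        """Route camera streams: gaze selects a view, the selection can be pushed to agents."""

        def __init__(self):
            self.streams = {}      # stream_id -> source URL (blimp or fixed camera)
            self.selected = None   # stream currently examined in the control room
            self.agent_feeds = {}  # agent_id -> stream_id shown on that agent's PDA

        def register(self, stream_id, source_url):
            self.streams[stream_id] = source_url

        def select_by_gaze(self, gazed_stream_id):
            # The operator selects a view simply by looking at it (eye tracking).
            if gazed_stream_id in self.streams:
                self.selected = gazed_stream_id

        def redirect_to_agent(self, agent_id):
            # Push the currently selected view to an on-field agent's device.
            if self.selected is not None:
                self.agent_feeds[agent_id] = self.selected
            return self.agent_feeds.get(agent_id)

    router = StreamRouter()
    router.register("blimp-cam", "rtsp://example.local/blimp")   # placeholder URLs
    router.register("fixed-cam-3", "rtsp://example.local/cam3")
    router.select_by_gaze("blimp-cam")
    print(router.redirect_to_agent("agent-7"))  # -> "blimp-cam"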


Virtual Reality Continuum and Its Applications in Industry | 2009

Context aware, multimodal, and semantic rendering engine

Patrick Salamin; Daniel Thalmann; Frédéric Vexo

Several techniques exist nowadays to render digital content such as graphics, audio, haptics, etc. Unfortunately, they require faculties that are not always available; for example, providing a picture to a blind person would be useless. In this paper, we present a new multimodal rendering engine built around a web-connected server linked to other devices to enable ubiquitous computing. In order to take advantage of user capabilities, we defined an ontology populated with the following elements: user, device, and information. With the help of this ontology, our system aims to automatically select and launch a rendering application. Several test-case applications were implemented to render shape, text, and video information via audio, haptic, and visual channels. Validation demonstrates that our system is flexible, easily extensible, and shows promise.
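
The selection step, matching the information type against the user's faculties and the device's output channels, can be pictured as a small rule over the three ontology elements (user, device, information). The sketch below is a loose, hypothetical illustration: the capability names and the preference order are assumptions, not the paper's ontology or engine.

    MODALITY_ORDER = ["sight", "audio", "haptic"]  # assumed preference order

    def select_modality(user_faculties, device_channels, info_type):
        """Pick a rendering channel supported by both the user and the device."""
        usable = [m for m in MODALITY_ORDER
                  if m in user_faculties and m in device_channels]
        if not usable:
            raise ValueError("no usable rendering channel for this user/device pair")
        # e.g. shape or text information for a blind user falls back to a
        # haptic or audio rendering instead of a picture.
        if info_type in ("shape", "text") and "sight" not in user_faculties:
            for fallback in ("haptic", "audio"):
                if fallback in usable:
                    return fallback
        return usable[0]

    # A blind user with an audio/haptic-capable device receiving text information:
    print(select_modality({"audio", "haptic"}, {"sight", "audio", "haptic"}, "text"))
    # -> "haptic"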


Intelligent Technologies for Interactive Entertainment | 2009

Stay Tuned! An Automatic RSS feeds Trans-coder

Patrick Salamin; Alexandre Wetzel; Daniel Thalmann; Frédéric Vexo

News aggregators are widely used to read RSS feeds but they require the user to be in front of a screen. While moving, people usually do not have any display, or very small ones. Moreover, they need to perform actions to get access to the news: download a tool, choose to generate audio files from the news, and send them to e.g. an MP3 player. We propose in this paper a system that automatically detects when the user leaves the computer room and directly sends the trans-coded news onto the user Smartphone. All the aggregated news are then transmitted to the user who can listen to them without any action. We present in this paper such a system and the very promising results we obtained after testing it.
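
The pipeline can be read as three steps: detect that the user has left the room, trans-code the aggregated feed items into audio, and push the resulting files to the phone. The sketch below strings those steps together with hypothetical helpers (user_present, text_to_speech, push_to_phone); the paper's actual detection and transfer mechanisms are not reproduced here, and feedparser merely stands in for the news aggregator.

    import time
    import feedparser  # third-party RSS parser standing in for the news aggregator

    def user_present() -> bool:
        # Placeholder: the real system detects when the user leaves the computer room.
        return False

    def text_to_speech(text: str) -> bytes:
        # Placeholder for the audio trans-coding step (e.g. a TTS engine).
        return b""

    def push_to_phone(filename: str, audio: bytes) -> None:
        # Placeholder for transferring the file to the user's smartphone.
        pass

    def transcode_and_send(feed_urls):
        # Trans-code every aggregated news item and send it to the phone.
        for url in feed_urls:
            feed = feedparser.parse(url)
            for i, entry in enumerate(feed.entries):
                audio = text_to_speech(f"{entry.title}. {entry.get('summary', '')}")
                push_to_phone(f"news_{i}.mp3", audio)

    def main(feed_urls, poll_seconds=10):
        # Poll presence; as soon as the user leaves, the news follows on the phone.
        while user_present():
            time.sleep(poll_seconds)
        transcode_and_send(feed_urls)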


International Conference on E-Learning and Games | 2008

Two-Arm Haptic Force-Feedbacked Aid for the Shoulder and Elbow Telerehabilitation

Patrick Salamin; Daniel Thalmann; Frédéric Vexo; Stéphanie Giroud

In this paper, we present a telerehabilitation system aimed at helping physiotherapists with shoulder and elbow treatment. Our system is based on a two-arm haptic force-feedback device, to avoid excessive effort and discomfort in the spinal column, and is remotely controlled by smartphone. The validation of our system, based on muscular effort measurements (EMG) and supervised by a physiotherapist, provides very promising results.


International Conference on Computer Graphics and Interactive Techniques | 2010

Brain activity underlying third person and first person perspective training in virtual environments

Tej Tadi; Patrick Salamin; Frédéric Vexo; Daniel Thalmann; Olaf Blanke

Over the years, different approaches have been explored to build effective learning methods in virtual reality, but the design of effective 3D manipulation techniques remains an important research problem. To this end, it is important to quantify the behavioral and brain mechanisms underlying the geometrical mappings of the body with the environment and external objects, both within virtual environments (VE) and in the real world, and relative to each other. The successful mapping of such interactions entails the study of their fundamental components, such as the origin of the visuo-spatial perspective (1PP, 3PP) and how they contribute to the user's performance in virtual environments. Here, we report data from a novel set-up exposing participants, during free navigation, to a scene viewed from either the 3PP or the habitual first-person perspective (1PP).


Computer Animation and Social Agents | 2009

Intelligent switch: An algorithm to provide the best third-person perspective in augmented reality

Patrick Salamin; Daniel Thalmann; Frédéric Vexo


International Conference on E-Learning and Games | 2007

Visualization learning for visually impaired people

Patrick Salamin; Daniel Thalmann; Frédéric Vexo

Collaboration


Dive into Patrick Salamin's collaborations.

Top Co-Authors

Frédéric Vexo, École Polytechnique Fédérale de Lausanne
Olaf Blanke, École Polytechnique Fédérale de Lausanne
Tej Tadi, École Polytechnique Fédérale de Lausanne
Alexandre Wetzel, École Polytechnique Fédérale de Lausanne
Mathieu Hopmann, École Polytechnique Fédérale de Lausanne
Renaud Ott, École Polytechnique Fédérale de Lausanne