Gerard Llorach
Pompeu Fabra University
Publications
Featured research published by Gerard Llorach.
International Conference on Games and Virtual Worlds for Serious Applications | 2016
Gerard Llorach; Alun Evans; Josep Blat; Giso Grimm; Volker Hohmann
Virtual characters are an integral part of many games and virtual worlds. The ability to accurately synchronize lip movement to audio speech is an important aspect of the believability of the character. In this paper we propose a simple rule-based lip-syncing algorithm for virtual agents that runs in the web browser. It works in real time with live input, unlike most current lip-syncing proposals, which may require considerable amounts of computation, expertise and time to set up. Our method generates reliable speech animation from live speech using three blend shapes and no training, and it only needs manual adjustment of three parameters for each speaker (sensitivity, smoothness and vocal tract length). Our proposal relies on the limited real-time audio processing functions of the client web browser (thus, the algorithm needs to be simple), but this facilitates the use of web-based embodied conversational agents.
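The sketch below illustrates what such a rule-based mapping can look like in the browser, assuming an AnalyserNode from the Web Audio API and three hypothetical blend-shape names (mouthOpen, kiss, lipsClosed); the band limits, the mapping rules and the way the sensitivity and smoothness parameters are applied are illustrative assumptions, and the vocal tract length correction described in the paper is not sketched here.

    // Sketch: rule-based lip-sync from live microphone input (assumptions noted above).
    async function startLipSync(
      onWeights: (w: { mouthOpen: number; kiss: number; lipsClosed: number }) => void
    ) {
      const ctx = new AudioContext();
      const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
      const analyser = ctx.createAnalyser();
      analyser.fftSize = 1024;
      ctx.createMediaStreamSource(mic).connect(analyser);

      const spectrum = new Float32Array(analyser.frequencyBinCount);
      const sensitivity = 0.5; // per-speaker parameter (assumed to act as a simple gain)
      const smoothness = 0.6;  // exponential smoothing factor
      let prev = { mouthOpen: 0, kiss: 0, lipsClosed: 0 };

      // Average power of a frequency band (AnalyserNode reports dB values).
      function energy(loBin: number, hiBin: number): number {
        let e = 0;
        for (let i = loBin; i < hiBin; i++) e += Math.pow(10, spectrum[i] / 10);
        return e / (hiBin - loBin);
      }

      function update() {
        analyser.getFloatFrequencyData(spectrum);
        // Illustrative rule: low-frequency energy opens the mouth,
        // mid-frequency energy rounds the lips, very little energy closes them.
        const low = energy(1, 30), mid = energy(30, 120);
        const raw = {
          mouthOpen: Math.min(1, low * sensitivity),
          kiss: Math.min(1, mid * sensitivity),
          lipsClosed: low + mid < 0.01 ? 1 : 0,
        };
        prev = {
          mouthOpen: smoothness * prev.mouthOpen + (1 - smoothness) * raw.mouthOpen,
          kiss: smoothness * prev.kiss + (1 - smoothness) * raw.kiss,
          lipsClosed: smoothness * prev.lipsClosed + (1 - smoothness) * raw.lipsClosed,
        };
        onWeights(prev);
        requestAnimationFrame(update);
      }
      update();
    }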
Iberian Conference on Information Systems and Technologies | 2014
Gerard Llorach; Alun Evans; Javi Agenjo; Josep Blat
In this paper the Inertial Measurement Unit (IMU) included inside the Oculus Rift virtual reality headset is considered for head position tracking. While the Oculus is capable of mapping rotational movement to a virtual scene, recovering translational movement is not possible by default. In this study, we extract position data from the IMU for real-time position tracking using double integration, together with a new method of gravity compensation for accelerometers with different axis calibrations. The proposed tracking system is portable, simple and does not require a controlled environment.
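Conceptually, the position recovery can be sketched as follows: each accelerometer sample is rotated into the world frame with the headset's orientation quaternion, gravity is subtracted, and the result is integrated twice. The code below is only a hedged illustration of this general idea under assumed sign and axis conventions, not the paper's full method (which also handles differing axis calibrations).

    // Sketch: position from IMU data by gravity compensation and double integration.
    type Vec3 = { x: number; y: number; z: number };
    type Quat = { w: number; x: number; y: number; z: number };

    // Rotate a vector by a unit quaternion: v' = q * v * q^-1 (standard expansion).
    function rotate(q: Quat, v: Vec3): Vec3 {
      const { w, x, y, z } = q;
      const tx = 2 * (y * v.z - z * v.y);
      const ty = 2 * (z * v.x - x * v.z);
      const tz = 2 * (x * v.y - y * v.x);
      return {
        x: v.x + w * tx + (y * tz - z * ty),
        y: v.y + w * ty + (z * tx - x * tz),
        z: v.z + w * tz + (x * ty - y * tx),
      };
    }

    const G: Vec3 = { x: 0, y: -9.81, z: 0 }; // gravity in the world frame (assumed axis/sign)
    let velocity: Vec3 = { x: 0, y: 0, z: 0 };
    let position: Vec3 = { x: 0, y: 0, z: 0 };

    // Called per IMU sample: accel in the sensor frame, orientation from the headset, dt in seconds.
    function integrateSample(accelSensor: Vec3, orientation: Quat, dt: number) {
      const a = rotate(orientation, accelSensor);
      // Gravity compensation: remove the constant 1 g the accelerometer measures at rest.
      const lin: Vec3 = { x: a.x - G.x, y: a.y - G.y, z: a.z - G.z };
      // Double integration (drift grows quickly without further correction).
      velocity = { x: velocity.x + lin.x * dt, y: velocity.y + lin.y * dt, z: velocity.z + lin.z * dt };
      position = { x: position.x + velocity.x * dt, y: position.y + velocity.y * dt, z: position.z + velocity.z * dt };
    }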
Practical Applications of Agents and Multi-Agent Systems | 2017
Leo Wanner; Elisabeth André; Josep Blat; Stamatia Dasiopoulou; Mireia Farrús; Thiago Fraga; Eleni Kamateri; Florian Lingenfelser; Gerard Llorach; Oriol Martinez; Georgios Meditskos; Simon Mille; Wolfgang Minker; Louisa Pragst; Dominik Schiller; Andries Stam; Ludo Stellingwerff; Federico M. Sukno; Bianca Vieru; Stefanos Vrochidis
We present an intelligent embodied conversation agent with linguistic, social and emotional competence. Unlike the vast majority of state-of-the-art conversation agents, the proposed agent is built around an ontology-based knowledge model that allows for flexible reasoning-driven dialogue planning instead of predefined dialogue scripts. It is further complemented by multimodal communication analysis and generation modules and a search engine for the retrieval of multimedia background content from the web needed for conducting a conversation on a given topic. The evaluation of the first prototype of the agent shows a high degree of user acceptance with respect to its trustworthiness, naturalness, etc. The individual technologies are being further improved in the second prototype.
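To make the contrast with scripted dialogue concrete, here is a toy sketch of reasoning-driven move selection over a small set of facts; the type names, the notion of "required" facts and the selection rule are purely illustrative assumptions, not the agent's actual ontology or planner.

    // Toy sketch: choose the next dialogue move by reasoning over known/missing facts
    // instead of following a fixed script (names and rules are illustrative only).
    interface Fact { subject: string; predicate: string; value?: string }
    interface DialogueMove { type: "ask" | "inform" | "offerContent"; about: Fact }

    function planNextMove(known: Fact[], required: Fact[], topic: string): DialogueMove {
      // 1. If a required slot of the current topic is still unknown, ask about it.
      const missing = required.find(r =>
        !known.some(k => k.subject === r.subject && k.predicate === r.predicate));
      if (missing) return { type: "ask", about: missing };
      // 2. Otherwise, offer background content retrieved for the topic.
      return { type: "offerContent", about: { subject: topic, predicate: "relatedMedia" } };
    }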
Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction | 2016
Leo Wanner; Josep Blat; Stamatia Dasiopoulou; Mónica Domínguez; Gerard Llorach; Simon Mille; Federico M. Sukno; Eleni Kamateri; Stefanos Vrochidis; Ioannis Kompatsiaris; Elisabeth André; Florian Lingenfelser; Gregor Mehlmann; Andries Stam; Ludo Stellingwerff; Bianca Vieru; Lori Lamel; Wolfgang Minker; Louisa Pragst; Stefan Ultes
We present work in progress on an intelligent embodied conversation agent in the basic care and healthcare domain. In contrast to most existing agents, the presented agent is designed to have the linguistic, cultural, social and emotional competence needed to interact with elderly people and migrants. It is composed of an ontology-based and reasoning-driven dialogue manager, multimodal communication analysis and generation modules, and a search engine for the retrieval of multimedia background content from the web needed for conducting a conversation on a given topic.
Speech Communication | 2018
Maartje M.E. Hendrikse; Gerard Llorach; Giso Grimm; Volker Hohmann
Recent studies of hearing aid benefits indicate that head movement behavior influences performance. To systematically assess these effects, movement behavior must be measured in realistic communication conditions. For this, the use of virtual audiovisual environments with animated characters as visual stimuli has been proposed. It is unclear, however, how these animations influence the head- and eye-movement behavior of subjects. Here, two listening tasks were carried out with a group of 14 young normal hearing subjects to investigate the influence of visual cues on head- and eye-movement behavior; on combined localization and speech intelligibility task performance; as well as on perceived speech intelligibility, perceived listening effort and the general impression of the audiovisual environments. Animated characters with different lip-syncing and gaze patterns were compared to an audio-only condition and to a video of real persons. Results show that movement behavior, task performance, and perception were all influenced by visual cues. The movement behavior of young normal hearing listeners in animation conditions with lip-syncing was similar to that in the video condition. These results in young normal hearing listeners are a first step towards using the animated characters to assess the influence of head movement behavior on hearing aid performance.
Proceedings of the 2018 Workshop on Audio-Visual Scene Understanding for Immersive Multimedia - AVSU'18 | 2018
Gerard Llorach; Giso Grimm; Maartje M.E. Hendrikse; Volker Hohmann
Most current hearing research laboratories and hearing aid evaluation setups are not sufficient to simulate real-life situations and to evaluate future generations of hearing aids that might include gaze information and brain signals. Thus, new methodologies and technologies might need to be implemented in hearing laboratories and clinics in order to generate realistic audiovisual testing environments. The aim of this work is to provide a comprehensive review of the currently available approaches and future directions for creating realistic, immersive audiovisual simulations for hearing research. Additionally, we present the technologies and use cases of our laboratory, as well as the pros and cons of such technologies: from creating 3D virtual simulations with computer graphics and virtual acoustic simulations, to 360° videos and Ambisonic recordings.
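For reference, the Ambisonic recordings mentioned above build on the standard first-order (B-format) encoding; the sketch below applies that textbook encoding to a mono sample block and is not specific to the laboratory's actual pipeline.

    // Sketch: encode a mono block into first-order Ambisonics (traditional B-format).
    // W = s / sqrt(2), X = s*cos(az)*cos(el), Y = s*sin(az)*cos(el), Z = s*sin(el).
    function encodeFOA(s: Float32Array, azimuth: number, elevation: number) {
      const n = s.length;
      const W = new Float32Array(n), X = new Float32Array(n),
            Y = new Float32Array(n), Z = new Float32Array(n);
      const cosEl = Math.cos(elevation);
      for (let i = 0; i < n; i++) {
        W[i] = Math.SQRT1_2 * s[i];
        X[i] = s[i] * Math.cos(azimuth) * cosEl;
        Y[i] = s[i] * Math.sin(azimuth) * cosEl;
        Z[i] = s[i] * Math.sin(elevation);
      }
      return { W, X, Y, Z };
    }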
Journal of the Acoustical Society of America | 2018
Maartje M.E. Hendrikse; Gerard Llorach; Giso Grimm; Volker Hohmann
With the increased complexity of hearing device algorithms, a strong interaction between the motion behavior of the user and hearing device benefit is likely to be found. To be able to assess this interaction experimentally, more realistic evaluation methods are required that mark a transition from conventional (audio-only) lab experiments to the field. In this presentation, we describe our methodology for acquiring ecologically valid behavioral data in realistic virtual audiovisual testing environments. The methods are based on tools to present interactive audiovisual environments while recording subject behavior with gaze and motion tracking systems. The results of a study that evaluated the effect of different types of visual information (e.g., video recordings vs. animated characters) on behavior and subjective user experience are presented. It was found that visual information can have a significant influence on behavior and that it is possible to systematically assess this. Furthermore, first results are presented of two studies that observed head and eye movement behavior: (1) in typical everyday listening situations that were replicated with virtual audiovisual environments in the lab (e.g., cafeteria) and (2) when visual cues were presented via a head-mounted display or projected onto a panoramic cylindrical screen in front of the subject.
Intelligent Virtual Agents | 2017
Gerard Llorach; Josep Blat
The creation and support of Embodied Conversational Agents (ECAs) has been quite challenging, as the required features might not be straightforward to implement and to integrate in a single application. Furthermore, ECAs as desktop applications present drawbacks for both developers and users: the former have to develop for each device and operating system, and the latter must install additional software, limiting their widespread use. In this paper we demonstrate how recent advances in web technologies represent promising steps towards capable web-based ECAs, through off-the-shelf technologies, in particular the Web Speech API, Web Audio API, WebGL and Web Workers. We describe their integration into a simple, fully functional web-based 3D ECA accessible from any modern device, with special attention to our novel work on the creation and support of the embodiment aspects.
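A hedged sketch of how such off-the-shelf APIs can be wired together in the browser is given below; the recognition and synthesis calls are the standard Web Speech API, while the respond() callback is a hypothetical placeholder for the agent's dialogue back end, and the animation step is only indicated in a comment.

    // Sketch: minimal browser-side loop for a web-based ECA (dialogue back end is a placeholder).
    declare const webkitSpeechRecognition: any; // Web Speech API recognition (prefixed in Chromium)

    function startAgent(respond: (userText: string) => Promise<string>) {
      const recognition = new webkitSpeechRecognition();
      recognition.continuous = true;
      recognition.interimResults = false;

      recognition.onresult = async (event: any) => {
        const userText = event.results[event.results.length - 1][0].transcript;
        const reply = await respond(userText); // hypothetical dialogue/NLG back end
        const utterance = new SpeechSynthesisUtterance(reply);
        // Lip-sync and facial animation would be driven here (e.g., Web Audio analysis + WebGL),
        // while heavy processing can be moved off the main thread with Web Workers.
        speechSynthesis.speak(utterance);
      };
      recognition.start();
    }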
International Conference on 3D Web Technology | 2015
Gerard Llorach; Javi Agenjo; Alun Evans; Josep Blat
In this paper we present the methods and techniques used to visualize the trajectories of the participants of a massive virtual regatta on a virtual globe in the web browser. The emergence of new web technologies, such as HTML5 and WebGL, has opened new avenues for visualizing and sharing 3D data. However, web-based visualization of big data is still challenging, as the power of the web browser for 3D visualization has still not reached the level of desktop applications. In this work, we use WebGL to visualize the paths of the 17,000 virtual boats that participated in the MMO game of the Barcelona World Race 2015, and present optimization strategies for the rendering of this big data (which is otherwise impossible to render in a web context on standard consumer hardware). We also combine this optimization with a render-to-texture approach to visualize the density of the boat routes, rendering and visualizing the data progressively, and using web workers for processing and managing the data.
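The render-to-texture idea for route density can be sketched roughly as follows: all trajectory points are drawn into an offscreen texture with additive blending, so overlapping routes accumulate into brighter pixels. The WebGL calls below are only an illustrative sketch; the shader program, the vertex buffer, progressive loading and the Web Worker data management of the actual system are assumed to exist elsewhere.

    // Sketch: accumulate trajectory points into a density texture with additive blending.
    function renderDensity(gl: WebGLRenderingContext, pointProgram: WebGLProgram,
                           pointBuffer: WebGLBuffer, nPoints: number, size: number): WebGLTexture {
      // Offscreen texture that will hold the accumulated density
      // (a float texture extension would give finer resolution in practice).
      const tex = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

      const fbo = gl.createFramebuffer()!;
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

      gl.viewport(0, 0, size, size);
      gl.clearColor(0, 0, 0, 1);
      gl.clear(gl.COLOR_BUFFER_BIT);

      // Each point adds a small constant; overlapping routes sum up to brighter pixels.
      gl.enable(gl.BLEND);
      gl.blendFunc(gl.ONE, gl.ONE);

      gl.useProgram(pointProgram);                 // vertex shader projects lon/lat to screen
      gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer); // packed boat positions
      const loc = gl.getAttribLocation(pointProgram, "position");
      gl.enableVertexAttribArray(loc);
      gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.POINTS, 0, nPoints);

      gl.bindFramebuffer(gl.FRAMEBUFFER, null);    // the texture can now be tone-mapped on screen
      return tex;
    }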
Virtual Reality Software and Technology | 2014
Gerard Llorach; Alun Evans; Josep Blat