
Publication


Featured research published by Pontus Larsson.


The Engineering of Mixed Reality Systems (Dubois, E., Nigay, L. and Gray, P., eds.), Springer | 2010

Auditory-Induced Presence in Mixed Reality Environments and Related Technology

Pontus Larsson; Aleksander Väljamäe; Daniel Västfjäll; Ana Tajadura-Jiménez; Mendel Kleiner

Presence, the “perceptual illusion of non-mediation,” is often a central goal in mediated and mixed environments, and sound is believed to be crucial for inducing high-presence experiences. This chapter provides a review of the state of the art within presence research related to auditory environments. Various sound parameters, such as externalization, spaciousness, and consistency within and across modalities, are discussed in relation to their presence-inducing effects. Moreover, these parameters are related to the use of audio in mixed realities, and example applications are discussed. Finally, we give an account of the technological possibilities and challenges within the area of presence-inducing sound rendering and presentation for mixed realities and outline future research aims.


IEEE MultiMedia | 2008

Handheld Experiences: Using Audio To Enhance the Illusion of Self-Motion

Aleksander Väljamäe; Ana Tajadura-Jiménez; Pontus Larsson; Daniel Västfjäll; Mendel Kleiner

Handheld multimedia devices could benefit from multisensory technologies. The authors discuss audio, visual, and tactile cues designed to maximize presence and the illusion of self-motion.


International Conference on Auditory Display | 2009

Tools for designing emotional auditory driver-vehicle interfaces

Pontus Larsson

Auditory interfaces are often used in vehicles to inform and alert the driver of various events and hazards. When designed properly, such interfaces can, for example, reduce reaction times and increase the impression of quality of the vehicle. In this paper it is argued that emotional response is an important aspect to consider when designing auditory driver-vehicle interfaces. The paper discusses two applications developed to investigate the emotional dimensions of auditory interfaces. EarconSampler is a tool for designing and modifying earcons. It allows for creating melodic patterns of wav snippets and adjusting parameters such as tempo and pitch. It also contains an analysis section where sound quality parameters, urgency, and emotional response to the sound are calculated or predicted. SoundMoulder is another tool, which offers extended temporal and frequency modifications of earcons. The primary purpose of this application is to study how users design sounds given a desired emotional response.
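The paper's tools themselves are not publicly documented here. Purely as an illustration of the kind of parametric earcon rendering the abstract describes (a melodic pattern with adjustable tempo and pitch), a minimal numpy sketch might look like the following; the function name and parameters are my own, not from EarconSampler:

```python
import numpy as np

def synth_earcon(midi_notes, tempo_bpm=120, sr=16000, pitch_shift=0):
    """Render a melodic earcon as a sequence of enveloped sine tones.

    tempo_bpm sets note duration (eighth notes); pitch_shift transposes
    the whole pattern in semitones.
    """
    note_dur = 60.0 / tempo_bpm / 2            # eighth-note duration in seconds
    n = int(sr * note_dur)
    t = np.arange(n) / sr
    env = np.hanning(n)                        # click-free attack/decay envelope
    notes = []
    for m in midi_notes:
        f = 440.0 * 2 ** ((m + pitch_shift - 69) / 12)   # MIDI note -> Hz
        notes.append(np.sin(2 * np.pi * f * t) * env)
    return np.concatenate(notes)

# Rising three-note pattern; doubling the tempo halves the total duration
slow = synth_earcon([60, 64, 67], tempo_bpm=100)
fast = synth_earcon([60, 64, 67], tempo_bpm=200)
print(len(slow), len(fast))
```

In a tool like the one described, parameters such as `tempo_bpm` and `pitch_shift` would be the knobs whose settings are then related to listeners' emotional responses.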


Traffic Injury Prevention | 2015

Using Sound to Reduce Visual Distraction from In-vehicle Human–Machine Interfaces

Pontus Larsson; Mathias Niemand

Objective: Driver distraction and inattention are the main causes of accidents. The fact that devices such as navigation displays and media players are part of the distraction problem has led to the formulation of guidelines advocating various means for minimizing the visual distraction from such interfaces. However, even when design guidelines and recommendations are followed, certain interface interactions, such as menu browsing, still require off-road visual attention that increases crash risk. In this article, we investigate whether adding sound to an in-vehicle user interface can provide the support necessary to create a significant reduction in glances toward a visual display when browsing menus. Methods: Two sound concepts were developed and studied: spearcons (time-compressed speech sounds) and earcons (musical sounds). A simulator study was conducted in which 14 participants between the ages of 36 and 59 took part. Participants performed 6 different interface tasks while driving along a highway route. A 3 × 6 within-group factorial design was employed with sound (no sound/earcons/spearcons) and task (6 different task types) as factors. Eye glances and corresponding measures were recorded using a head-mounted eye tracker. Participants’ self-assessed driving performance was also collected after each task with a 10-point scale ranging from 1 = very bad to 10 = very good. Separate analyses of variance (ANOVAs) were conducted for the eye glance measures and self-rated driving performance. Results: The added spearcon sounds significantly reduced total glance time as well as number of glances while retaining task time, compared to the baseline (no sound) condition (total glance time M = 4.15 for spearcons vs. M = 7.56 for baseline, p = .03). The earcon sounds did not result in such distraction-reducing effects.
Furthermore, participants’ ratings of their driving performance were significantly higher in the spearcon conditions compared to the baseline and earcon conditions (M = 7.08 vs. M = 6.05 and M = 5.99, respectively; p = .035 and p = .002). Conclusions: The spearcon sounds seem to efficiently reduce visual distraction, whereas the earcon sounds did not reduce distraction measures or increase subjective driving performance. An aspect that must be further investigated is how well spearcons and other types of auditory displays are accepted by drivers in general, and how they work in real traffic.
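Spearcons are produced by time-compressing recorded speech while (ideally) preserving its pitch. The study's actual processing chain is not described in the abstract; purely as an illustrative sketch under my own assumptions, a naive overlap-add (OLA) time compression in numpy, applied here to a synthetic tone standing in for a speech recording, could look like:

```python
import numpy as np

def time_compress(signal, factor, frame=1024, hop_out=256):
    """Naive overlap-add time compression.

    Frames are read from the input at factor * hop_out samples apart and
    written hop_out apart, shortening duration by roughly `factor` while
    leaving the per-frame spectrum (and thus pitch) largely intact.
    Real spearcon pipelines use more refined time-scale modification.
    """
    hop_in = int(hop_out * factor)
    window = np.hanning(frame)
    n_frames = max(1, (len(signal) - frame) // hop_in + 1)
    out = np.zeros(n_frames * hop_out + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        seg = signal[i * hop_in : i * hop_in + frame]
        if len(seg) < frame:
            seg = np.pad(seg, (0, frame - len(seg)))
        out[i * hop_out : i * hop_out + frame] += seg * window
        norm[i * hop_out : i * hop_out + frame] += window
    return out / np.maximum(norm, 1e-8)

# 1-second stand-in signal compressed to well under half its length
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spearcon = time_compress(tone, factor=2.5)
print(len(spearcon) / len(tone))
```

Plain OLA without frame synchronization introduces phasing artifacts on real speech; it is shown only to make the "time-compressed speech" idea concrete.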


International Journal of Vehicle Noise and Vibration | 2013

Emotional and behavioural responses to auditory interfaces in commercial vehicles

Pontus Larsson; Daniel Västfjäll

In this paper, we argue that emotional reactions may guide the design of in-vehicle auditory displays since emotion has strong consequences for behaviour and information processing. A simulator study with 30 participants was conducted in which auditory icons were contrasted to abstract earcon sounds in more or less imminent collision scenarios and 3D sounds were tested against monophonic sounds in lane change scenarios. Self-report and physiological measures of emotional response as well as behavioural measures (e.g., brake response time) were used. It was found that auditory icons gave up to 600 ms faster brake response times than abstract sounds in imminent collision scenarios and that 3D sound gave a stronger emotional response in lane change scenarios. Moreover, the results indicate that emotion can predict behaviour, e.g., sounds rated as being more activating and negative also gave quicker response times.


Journal of the Acoustical Society of America | 2004

Multimodal interaction in real and virtual concert halls

Pontus Larsson; Daniel Västfjäll; Mendel Kleiner

Recently, researchers within the field of room acoustics have shown increased interest in understanding how different modalities, especially vision and audition, interact in the concert hall experience. Computer auralization and virtual reality technology have provided the means to efficiently study such auditory‐visual interaction phenomena in concert halls. However, an important question to address is to what extent the results from such studies agree with real, unmediated situations. In this paper, we discuss some of the auditory–visual cross‐modal effects discovered in previous experiments, and an account of cross‐modal phenomena in room acoustic perception is proposed. Moreover, the importance of measuring simulation fidelity when performing cross‐modal experiments in virtual concert halls is discussed. The conclusions are that one can expect auditory–visual interaction effects to occur in both real and virtual rooms, but that simulation fidelity might affect the results when performing experiments.


Audio Mostly Conference | 2016

Speech Feedback Reduces Driver Distraction Caused by In-vehicle Visual Interfaces

Pontus Larsson

Driver distraction and inattention are the main causes of accidents today, and one way for vehicle manufacturers to address this problem may be to replace or complement visual information in in-vehicle interfaces with auditory displays. In this paper, we address the specific problem of giving text input to an interface while driving. We test whether the handwriting input method, which has previously been shown to be promising in terms of reducing distraction, can be further improved by adding speech feedback. A driving simulator study was carried out in which 11 participants (3 female) drove in two different scenarios (curvy road and straight motorway) while performing three different handwriting text input tasks. Glance behavior was measured using a head-mounted eye tracker, and subjective responses were also acquired. An ANOVA revealed that speech feedback resulted in less distraction, as measured by total glance time, compared to the baseline condition (no speech). There were, however, also interaction effects indicating that the positive effect of speech feedback was not as prominent for the curvy road scenario. Post-experiment interviews nonetheless showed that the participants felt that the speech feedback made the text input task safer, and that they preferred speech feedback over no speech.


European Conference on Cognitive Ergonomics | 2011

Evaluating a vehicle auditory display: comparing a designer's expectations with listeners' experiences

Iain McGregor; Pontus Larsson; Phil Turner

This paper illustrates a method for the early evaluation of auditory displays in context. A designer was questioned about his expectations of an auditory display for heavy goods vehicles, and the results were compared to the experiences of 10 listeners. Sound design is essentially an isolated practice, and by involving listeners the process can become collaborative. A review of the level of agreement allowed the identification of attributes that might be meaningful for the design of future auditory displays. Results suggest that traditional auditory display design guidelines that focus on the acoustical properties of sound might not be suitable.


Frontiers in Psychology | 2018

Communicating Intent of Automated Vehicles to Pedestrians

Azra Habibovic; Victor Malmsten Lundgren; Jonas Andersson; Maria Klingegård; Tobias Lagström; Anna Sirkka; Johan Fagerlönn; Claes Edgren; Rikard Fredriksson; Stas Krupenia; Dennis Saluäär; Pontus Larsson

While traffic signals, signs, and road markings provide explicit guidelines for those operating in and around the roadways, some decisions, such as determinations of “who will go first,” are made by implicit negotiations between road users. In such situations, pedestrians today are often dependent on cues in drivers’ behavior such as eye contact, postures, and gestures. With the introduction of more automated functions and the transfer of control from the driver to the vehicle, pedestrians can no longer rely on such non-verbal cues. To study how the interaction between pedestrians and automated vehicles (AVs) might look in the future, and how this might be affected if AVs were to communicate their intent to pedestrians, we designed an external vehicle interface called the automated vehicle interaction principle (AVIP) that communicates the vehicle’s mode and intent to pedestrians. The interaction was explored in two experiments using a Wizard of Oz approach to simulate automated driving. The first experiment was carried out at a zebra crossing and involved nine pedestrians. While it focused mainly on assessing the usability of the interface, it also revealed initial indications related to pedestrians’ emotions and perceived safety when encountering an AV with/without the interface. The second experiment was carried out in a parking lot and involved 24 pedestrians, which enabled a more detailed assessment of pedestrians’ perceived safety when encountering an AV, both with and without the interface. For comparison purposes, these pedestrians also encountered a conventional vehicle. After a short training course, the interface was deemed easy for the pedestrians to interpret. The pedestrians stated that they felt significantly less safe when they encountered the AV without the interface, compared to the conventional vehicle and the AV with the interface.
This suggests that the interface could contribute to a positive experience and improved perceived safety in pedestrian encounters with AVs – something that might be important for general acceptance of AVs. As such, this topic should be further investigated in future studies involving a larger sample and more dynamic conditions.


Journal of the Acoustical Society of America | 2002

Perceptual effects in auralization of virtual rooms

Mendel Kleiner; Pontus Larsson; Daniel Västfjäll; Rendell R. Torres

By using various types of binaural simulation (or ‘‘auralization’’) of physical environments, it is now possible to study basic perceptual issues relevant to room acoustics, as well as to simulate the acoustic conditions found in concert halls and other auditoria. Binaural simulation of physical spaces in general is also important to virtual reality systems. This presentation will begin with an overview of the issues encountered in the auralization of rooms and other environments. We will then discuss the influence of various approximations in room modeling, in particular edge and surface scattering, on the perceived room response. Finally, we will discuss cross‐modal effects, such as the influence of visual cues on the perception of auditory cues, and the influence of cross‐modal effects on the judgement of ‘‘perceived presence’’ and the rating of room acoustic quality.

Collaboration

Pontus Larsson's most frequent co-authors:

Mendel Kleiner, Chalmers University of Technology
Aleksander Väljamäe, Chalmers University of Technology
Agneta Agge, University of Gothenburg
Anders Lindvall, Chalmers University of Technology
Anna Sirkka, Research Institutes of Sweden
Azra Habibovic, Research Institutes of Sweden