Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where György Wersényi is active.

Publication


Featured research published by György Wersényi.


Journal on Multimodal User Interfaces | 2015

A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research

Adam Csapo; György Wersényi; Hunor Nagy; Tony Stockman

This paper summarizes recent developments in audio and tactile feedback based assistive technologies targeting the blind community. Current technology allows applications to be efficiently distributed and run on mobile and handheld devices, even in cases where computational requirements are significant. As a result, electronic travel aids, navigational assistance modules, text-to-speech applications, as well as virtual audio displays which combine audio with haptic channels are becoming integrated into standard mobile devices. This trend, combined with the appearance of increasingly user-friendly interfaces and modes of interaction, has opened a variety of new perspectives for the rehabilitation and training of users with visual impairments. The goal of this paper is to provide an overview of these developments based on recent advances in basic research and application development. Using this overview as a foundation, an agenda is outlined for future research in mobile interaction design with respect to users with special needs, as well as ultimately in relation to sensor-bridging applications in general.


ACM Computing Surveys | 2013

Overview of auditory representations in human-machine interfaces

Adam Csapo; György Wersényi

In recent years, a large number of research projects have focused on the use of auditory representations in a broadened scope of application scenarios. Results in such projects have shown that auditory elements can effectively complement other modalities not only in the traditional desktop computer environment but also in virtual and augmented reality, mobile platforms, and other kinds of novel computing environments. The successful use of auditory representations in this growing number of application scenarios has in turn prompted researchers to rediscover the more basic auditory representations and extend them in various directions. The goal of this article is to survey both classical auditory representations (e.g., auditory icons and earcons) and those auditory representations that have been created as extensions to earlier approaches, including speech-based sounds (e.g., spearcons and spindex representations), emotionally grounded sounds (e.g., auditory emoticons and spemoticons), and various other sound types used to provide sonifications in practical scenarios. The article concludes by outlining the latest trends in auditory interface design and providing examples of these trends.
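As an aside, a spearcon of the kind surveyed above is simply a speech label (typically a TTS rendering of a menu item) time-compressed until it is no longer perceived as ordinary speech. The following is a minimal Python sketch of that idea, assuming a pre-recorded speech label on disk and the librosa and soundfile packages; the file names and the 2.5x compression rate are illustrative, not values from the article.

# Minimal spearcon sketch: time-compress a spoken label without
# shifting its pitch. File names and the rate are illustrative.
import librosa
import soundfile as sf

def make_spearcon(speech_wav: str, out_wav: str, rate: float = 2.5) -> None:
    """Time-compress a recorded speech label into a spearcon."""
    y, sr = librosa.load(speech_wav, sr=None)             # keep original sample rate
    y_fast = librosa.effects.time_stretch(y, rate=rate)   # phase-vocoder time-scale modification
    sf.write(out_wav, y_fast, sr)

# e.g. make_spearcon("save_file.wav", "save_file_spearcon.wav")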


IEEE Transactions on Audio, Speech, and Language Processing | 2009

Effect of Emulated Head-Tracking for Reducing Localization Errors in Virtual Audio Simulation

György Wersényi

Virtual audio simulation uses head-related transfer function (HRTF) synthesis and headphone playback to create a sound field similar to real-life environments. Localization performance is influenced by parameters such as the recording method and the spatial resolution of the HRTFs, the equalization of the measurement chain, as well as common headphone playback errors. The most important errors are in-the-head localization and front-back reversals. Among other cues, small movements of the head are considered important for avoiding these phenomena. This study uses the BEACHTRON sound card and its HRTFs to emulate small head movements by randomly moving the virtual sound source instead; the method needs no additional equipment, sensors, or feedback. Fifty untrained subjects participated in listening tests using different stimuli and presentation speeds. A virtual target source was rendered in front of the listener with random movements of 1-7 degrees. Experiments showed that this kind of simulation can help resolve in-the-head localization, but there is no clear benefit for resolving front-back errors. Emulating small head movements of 2 degrees increased externalization rates in about 21% of the subjects, while presentation speed had no significant effect.
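The core idea, jittering the virtual source itself rather than tracking the listener's head, is easy to sketch. Below is a minimal Python sketch assuming a hypothetical block-based HRTF renderer; the 2-degree jitter range follows the abstract, everything else is illustrative.

# Minimal sketch of emulated head-tracking: jitter the source azimuth
# by a few degrees per rendering block instead of tracking the head.
import random

def jittered_azimuths(nominal_az: float, n_blocks: int,
                      max_jitter_deg: float = 2.0) -> list[float]:
    """Return one randomly jittered azimuth (degrees) per rendering block."""
    return [nominal_az + random.uniform(-max_jitter_deg, max_jitter_deg)
            for _ in range(n_blocks)]

# for az in jittered_azimuths(0.0, n_blocks=100):
#     render_block_with_hrtf(signal_block, azimuth=az)  # hypothetical renderer call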


international conference on auditory display | 2009

Auditory representations of a graphical user interface for a better human-computer interaction

György Wersényi

As part of a project to improve human-computer interaction, primarily for blind users, a survey of 50 blind and 100 sighted users included a questionnaire about their habits during everyday use of personal computers. Based on their answers, the most important functions and applications were selected and the results of the two groups were compared. Special user habits and needs of blind users are described. The second part of the investigation included the collection of auditory representations (auditory icons, spearcons, etc.), their mapping to visual information, and their evaluation with the target groups. Furthermore, a new class of auditory events, called "auditory emoticons", was introduced; these use non-verbal human voice samples to convey additional emotional content. Blind and sighted users evaluated different auditory representations for the selected events, including spearcons in different languages. Auditory icons using familiar environmental sounds, as well as the emoticons, were received very well, whilst spearcons seemed redundant except for menu navigation by blind users.


international conference on computers helping people with special needs | 2016

Sound of Vision - Spatial Audio Output and Sonification Approaches

Michal Bujacz; Karol Kropidlowski; Gabriel Ivanica; Alin Moldoveanu; Charalampos Saitis; Adam B. Csapo; György Wersényi; Simone Spagnol; Ómar I. Jóhannesson; Runar Unnthorsson; Mikolai Rotnicki; Piotr Witek

The paper summarizes a number of audio-related studies conducted by the Sound of Vision consortium, which focuses on the construction of a new prototype electronic travel aid for the blind. Different solutions for spatial audio were compared by testing sound localization accuracy in a number of setups, comparing plain stereo panning with generic and individual HRTFs, as well as testing different types of stereo headphones against custom-designed quadraphonic proximaural headphones. A number of proposed sonification approaches were tested by sighted and blind volunteers for accuracy and efficiency in representing simple virtual environments.
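For reference, the HRTF-based rendering compared against plain panning above amounts to convolving a mono signal with a measured left/right head-related impulse response (HRIR) pair for the desired direction. A minimal Python sketch follows, assuming equal-length HRIR arrays are already loaded; no particular HRTF database is implied.

# Minimal binaural rendering sketch: mono signal convolved with a
# left/right HRIR pair for one direction; returns (N, 2) stereo.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with an HRIR pair (assumed equal length)."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)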


Wireless Communications and Mobile Computing | 2018

Current Use and Future Perspectives of Spatial Audio Technologies in Electronic Travel Aids

Simone Spagnol; György Wersényi; Michal Bujacz; Oana Bălan; Marcelo Herrera Martínez; Alin Moldoveanu; Runar Unnthorsson

Electronic travel aids (ETAs) have been in focus since technology has allowed the design of relatively small, light, and mobile devices for assisting the visually impaired. Since visually impaired persons rely on spatial audio cues as their primary sense of orientation, providing an accurate virtual auditory representation of the environment is essential. This paper gives an overview of the current state of spatial audio technologies that can be incorporated in ETAs, with a focus on user requirements. Most currently available ETAs either fail to address user requirements or underestimate the potential of spatial sound itself, which may explain, among other reasons, why no single ETA has gained widespread acceptance in the blind community. We believe there is ample space for applying the technologies presented in this paper, with the aim of progressively bridging the gap between accessibility and accuracy of spatial audio in ETAs.


international conference on speech and computer | 2016

Evaluation of Response Times on a Touch Screen Using Stereo Panned Speech Command Auditory Feedback

Hunor Nagy; György Wersényi

User interfaces on mobile and handheld devices usually incorporate touch screens. Fast user responses are generally not critical; however, some applications require fast and accurate reactions from users. Errors and response times depend on many factors such as the user's abilities, feedback types and latencies from the device, sizes of the buttons to press, etc. We conducted an experiment with 17 subjects to test response time and accuracy to different kinds of speech-based auditory stimuli over headphones. Speech signals were spatialized based on stereo amplitude panning. Results show significantly better response times for 3 directions than for 5, as well as for the native language compared to English, and more accurate judgements based on the meaning of the speech sounds than on their direction.
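Stereo amplitude panning of the kind used for these stimuli is commonly implemented with a constant-power pan law. The following is a minimal Python sketch; the mapping from the pan parameter to the tested directions is an assumption for illustration.

# Minimal constant-power stereo panning sketch: distribute a mono
# signal between left and right channels with cos/sin gains so that
# total power stays constant across pan positions.
import numpy as np

def pan_stereo(mono: np.ndarray, pan: float) -> np.ndarray:
    """Constant-power pan; pan in [-1, 1], -1 = hard left, +1 = hard right."""
    theta = (pan + 1.0) * np.pi / 4.0        # map [-1, 1] -> [0, pi/2]
    return np.stack([np.cos(theta) * mono,   # left-channel gain
                     np.sin(theta) * mono],  # right-channel gain
                    axis=-1)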


Journal of The Audio Engineering Society | 2012

Virtual Localization by Blind Persons

György Wersényi


Journal of The Audio Engineering Society | 2003

Localization in a HRTF-based Minimum Audible Angle Listening Test on a 2D Sound Screen for GUIB Applications

György Wersényi


Archive | 2008

Evaluation of User Habits for Creating Auditory Representations of Different Software Applications for Blind Persons

György Wersényi

Collaboration


Dive into György Wersényi's collaborations.

Top Co-Authors

Hunor Nagy | Széchenyi István University
Alin Moldoveanu | Politehnica University of Bucharest
Adam B. Csapo | Széchenyi István University
Adam Csapo | Hungarian Academy of Sciences
Michal Bujacz | Lodz University of Technology
Gabriel Ivanica | Politehnica University of Bucharest
Oana Bălan | Politehnica University of Bucharest