Publication


Featured research published by Hansjörg Hofmann.


Intelligent Environments | 2012

Speech Interaction with the Internet -- A User Study

Hansjörg Hofmann; Ute Ehrlich; André Berton; Wolfgang Minker

The arrival of smartphones significantly impacts the automotive environment. People tend to use their mobile Internet connection manually while driving, which distracts them and endangers the driver's safety. In order to reduce driver distraction, a speech-based interface to Internet services is essential. However, before developing a speech dialog system in a new domain, a data collection from real users is needed. In this report, a web-based user study is conducted which aims at understanding how users would interact with Internet services by speech. The user study was divided into a questionnaire and audio recordings based on graphically depicted scenarios which the subjects had to solve orally. The results show that users are willing to use and trust speech dialog systems. The speaking styles occurring in the audio data were classified into natural, command and keyword style. Their frequency of occurrence differed depending on the web task category, with the natural speaking style being used most frequently by the subjects.


Natural Interaction with Robots, Knowbots and Smartphones, Putting Spoken Dialog Systems into Practice | 2014

Development of Speech-Based In-Car HMI Concepts for Information Exchange Internet Apps

Hansjörg Hofmann; Anna Silberstein; Ute Ehrlich; André Berton; Christian A. Müller; Angela Mahr

The pervasive use of smartphones impacts the automotive environment. People tend to use their smartphone's Internet capabilities manually while driving, which endangers the driver's safety. Therefore, an intuitive in-car speech interface to the Internet is crucial in order to reduce driver distraction. Before developing an in-car speech dialog system for a new domain, developers have to examine which speech-based human-machine interface concept is the most intuitive. This work-in-progress report describes the design of several human-machine interface concepts which use speech as the main input and output modality. These concepts are based on two different dialog strategies: a command-based and a conversational speech dialog. Different graphical user interfaces, one including an avatar, have been designed in order to best support the speech dialog strategies and to raise the level of naturalness in the interaction. For each human-machine interface concept a prototype which allows for an online hotel booking has been developed. These prototypes will be evaluated in driving simulator experiments with respect to usability and driving performance.
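To make the distinction between the two dialog strategies concrete, the following is a purely hypothetical illustration, not taken from the paper: a command-based exchange expects terse, menu-like input, while a conversational exchange accepts a free natural-language request and clarifies remaining details in follow-up questions. The utterances and the hotel-booking slots are invented.

```python
# Hypothetical contrast between a command-based and a conversational dialog
# strategy for an in-car hotel booking task (illustrative only).

COMMAND_BASED = [
    ("User",   "Hotel booking."),
    ("System", "Please say the city."),
    ("User",   "Stuttgart."),
    ("System", "Please say the check-in date."),
]

CONVERSATIONAL = [
    ("User",   "I need a hotel in Stuttgart for two nights from Friday."),
    ("System", "Okay, a hotel in Stuttgart from Friday to Sunday. "
               "Do you have a price limit?"),
]

def print_dialog(name, turns):
    """Print one example exchange turn by turn."""
    print(f"--- {name} ---")
    for speaker, utterance in turns:
        print(f"{speaker}: {utterance}")

if __name__ == "__main__":
    print_dialog("Command-based strategy", COMMAND_BASED)
    print_dialog("Conversational strategy", CONVERSATIONAL)
```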


Computer Speech & Language | 2015

Evaluation of speech-based HMI concepts for information exchange tasks

Hansjörg Hofmann; Vanessa Tobisch; Ute Ehrlich; André Berton

Highlights:
- We compare different speech-based in-car HMI concepts in a driving simulator study.
- The HMI concepts are evaluated in terms of usability and driver distraction.
- The comparison of speech dialog strategies revealed only differences in usability.
- The use of a GUI impaired the driving performance and raised gaze-based distraction.
- An avatar does not additionally raise driver distraction but is not accepted by users.

Due to the mobile Internet revolution, people tend to browse the Web while driving their car, which puts the driver's safety at risk. Therefore, an intuitive and non-distracting in-car speech interface to the Web needs to be developed. Before developing a speech dialog system (SDS) in a new domain, developers have to examine the users' preferred interaction style and its influence on driving safety.

This paper reports on a driving simulation study which was conducted to compare different speech-based in-car human-machine interface concepts concerning usability and driver distraction. The applied SDS prototypes were developed to perform an online hotel booking by speech while driving. The speech dialog prototypes were based on different speech dialog strategies: a command-based and a conversational dialog. Different graphical user interface (GUI) concepts (one including a human-like avatar) were designed in order to best support the respective dialog strategy and to evaluate the effect of the GUI on usability and driver distraction.

The results show that only few differences concerning speech dialog quality were found when comparing the speech dialog strategies. The command-based dialog was slightly better accepted than the conversational dialog, which seems to be due to the high concept error rate of the conversational dialog. An SDS without a GUI also seems to be feasible for the driving environment and was accepted by the users. The comparison of speech dialog strategies did not reveal differences in driver distraction. However, the use of a GUI impaired the driving performance and increased gaze-based distraction. The presence of an avatar was not appreciated by participants and did not affect the dialog performance. Concerning driver distraction, the virtual agent neither negatively affected the driving performance nor increased visual distraction.

The results imply that in-car SDS developers should take both speaking styles into consideration when designing an SDS for information exchange tasks. Furthermore, developers should consider reducing the content presented on the screen in order to reduce driver distraction. A human-like avatar was not appreciated by users while driving. Research should further investigate whether other kinds of avatars might achieve different results.


Intelligent User Interfaces | 2014

Comparison of speech-based in-car HMI concepts in a driving simulation study

Hansjörg Hofmann; Vanessa Tobisch; Ute Ehrlich; André Berton; Angela Mahr

This paper reports experimental results from a driving simulation study conducted to compare different speech-based in-car human-machine interface concepts. The effects of a command-based and a conversational in-car speech dialog system on usability and driver distraction are evaluated. Different graphical user interface concepts have been designed in order to investigate their potentially supportive or distracting effects. The results show that only few differences concerning speech dialog quality were found when comparing the speech dialog strategies. The command-based dialog was slightly better accepted than the conversational dialog, which can be attributed to the limited performance of the system's language understanding component. No differences in driver distraction were revealed. Moreover, the study showed that speech dialog systems without a graphical user interface were accepted by participants in the driving environment, and that the use of a graphical user interface impaired the driving performance and increased gaze-based distraction. In the driving scenario, the choice of speech dialog strategy does not have a strong influence on usability and no influence on driver distraction. Instead, when designing the graphical user interface of an in-car speech dialog system, developers should consider reducing the content presented on the display device in order to reduce driver distraction.


Archive | 2016

Evaluation of In-Car SDS Notification Concepts for Incoming Proactive Events

Hansjörg Hofmann; Mario Hermanutz; Vanessa Tobisch; Ute Ehrlich; André Berton; Wolfgang Minker

Due to the mobile Internet revolution, people communicate increasingly via social networks and instant messaging applications on their smartphones. In order to stay “always connected” they even use their smartphone while driving their car, which puts the driver's safety at risk. In order to reduce driver distraction, an intuitive speech interface which presents proactively incoming events to the driver needs to be developed. Before developing a new speech dialog system, developers have to examine what the user's preferred interaction style is. This paper reports on a recent driving simulation study in which several speech-based proactive notification concepts for incoming events in different contextual situations are evaluated. Four different speech dialog concepts and two graphical user interface concepts, one including an avatar, were designed and evaluated with respect to usability and driving performance. The results show that there are significant differences between the speech dialog concepts. Informing the user verbally achieves the best result concerning usability, while earcons are perceived to be the least distracting. The presence of an avatar was not accepted by the participants and led to an impaired steering performance.


International Universal Communication Symposium | 2010

Improving spontaneous English ASR using a joint-sequence pronunciation model

Hansjörg Hofmann; Sakriani Sakti; Ryosuke Isotani; Hisashi Kawai; Satoshi Nakamura; Wolfgang Minker

The performance of English automatic speech recognition systems decreases when recognizing spontaneous speech, mainly due to the multiple pronunciation variants occurring in the utterances. Previous approaches address the multiple pronunciation problem by modeling the alteration of the pronunciation on a phoneme-to-phoneme level. However, the phonetic transformation effects induced by the pronunciation of the whole sentence have not been considered yet. In this paper we attempt to recover the original word sequence from the spontaneous phoneme sequence by applying a joint-sequence pronunciation model. Thereby, the whole word sequence and its effect on the alteration of the phonemes is taken into consideration. Moreover, the system not only learns the phoneme transformation but also the mapping from phonemes to words directly. In this preliminary study, the phonemes are first recognized with the existing recognition system, and afterwards the pronunciation variation model based on the joint-sequence approach maps from the phoneme to the word level. Our experiments use the Buckeye corpus of spontaneous speech. The results show that the proposed method improves the word accuracy consistently over the conventional recognition system. The best system achieves up to 12.1% relative improvement over the baseline speech recognizer.
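The two-stage idea described in the abstract (phoneme recognition first, then a pronunciation-variation model that maps the phoneme sequence back to words while scoring whole-sequence hypotheses) can be sketched roughly as follows. This is a much-simplified stand-in, not the authors' implementation: the toy lexicon, pronunciation probabilities and bigram language model are invented, and the paper's joint-sequence model operates on jointly learned phoneme-word units rather than a whole-word variant table.

```python
# Minimal sketch of a noisy-channel style recovery of words from spontaneous
# phoneme sequences (illustrative only; all numbers and entries are invented).

from itertools import product

# P(observed phoneme string | word): hypothetical pronunciation variants
PRON = {
    "going": {"g ow ih ng": 0.6, "g ow n": 0.4},   # reduced, "gonna"-like form
    "to":    {"t uw": 0.7, "t ax": 0.3},
    "want":  {"w aa n t": 0.5, "w aa n": 0.5},
}

# P(word | previous word): hypothetical bigram language model
BIGRAM = {
    ("<s>", "want"): 0.5, ("<s>", "going"): 0.5,
    ("want", "to"): 0.9, ("going", "to"): 0.9,
}

def score(words, phoneme_chunks):
    """Joint score of a word hypothesis against the observed phoneme chunks."""
    p = 1.0
    prev = "<s>"
    for w, ph in zip(words, phoneme_chunks):
        p *= PRON.get(w, {}).get(ph, 0.0) * BIGRAM.get((prev, w), 1e-4)
        prev = w
    return p

def decode(phoneme_chunks):
    """Brute-force search over the toy lexicon for the best word sequence."""
    vocab = list(PRON)
    best = max(product(vocab, repeat=len(phoneme_chunks)),
               key=lambda ws: score(ws, phoneme_chunks))
    return best, score(best, phoneme_chunks)

if __name__ == "__main__":
    # A reduced, spontaneous pronunciation of "going to"
    observed = ["g ow n", "t ax"]
    print(decode(observed))   # (('going', 'to'), ...)
```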


Archive | 2011

Accessing Web Resources in Different Languages Using a Multilingual Speech Dialog System

Hansjörg Hofmann; Andreas Eberhardt; Ute Ehrlich

While travelling through a foreign country by car, drivers would like to retrieve instant local information. However, accessing local web sites while driving causes two problems: first, browsing the Web while driving puts the driver's safety at risk; second, the information may only be available in the respective foreign language. We present a multilingual speech dialog system which enables the user to extract topic-related information from web resources in different languages. The system extracts and understands information from web sites in various languages by parsing their HTML code against a predefined semantic net in which specific topics are modelled. After extraction, the content is available in a meta-language representation which makes speech interaction in different languages possible. To minimize driver distraction, an intuitive and driver-convenient generic speech dialog has been designed.
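The extraction step described above, parsing HTML against a semantic net and normalizing the result into a language-neutral meta representation, might look roughly like the sketch below. It is not the authors' system: the slot names, the label-to-slot mapping standing in for the semantic net, and the example HTML snippet are all invented for illustration.

```python
# Minimal sketch: pull topic-related fields out of HTML in any language and
# normalize them into language-neutral "meta" slots that a dialog layer can
# verbalize in the user's language (illustrative only).

from html.parser import HTMLParser

# Hypothetical semantic-net fragment: language-specific labels -> meta slots
LABEL_TO_SLOT = {
    "öffnungszeiten": "opening_hours",   # German
    "horaires": "opening_hours",         # French
    "opening hours": "opening_hours",    # English
    "adresse": "address",
    "address": "address",
}

class TopicExtractor(HTMLParser):
    """Collects (meta_slot, value) pairs from simple label/value table rows."""
    def __init__(self):
        super().__init__()
        self.slots = {}
        self._pending_slot = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        slot = LABEL_TO_SLOT.get(text.lower())
        if slot:                      # this text node is a known label
            self._pending_slot = slot
        elif self._pending_slot:      # the next text node is its value
            self.slots[self._pending_slot] = text
            self._pending_slot = None

if __name__ == "__main__":
    html = "<tr><td>Öffnungszeiten</td><td>Mo-Fr 9-18 Uhr</td></tr>"
    extractor = TopicExtractor()
    extractor.feed(html)
    print(extractor.slots)   # {'opening_hours': 'Mo-Fr 9-18 Uhr'}
```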


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2013

Evaluation of Speech Dialog Strategies for Internet Applications in the Car

Hansjörg Hofmann; Ute Ehrlich; André Berton; Angela Mahr; Rafael Math; Christian A. Müller


Archive | 2013

Development of a Conversational Speech Interface Using Linguistic Grammars

Hansjörg Hofmann; Ute Ehrlich; Sven Reichel; André Berton


IEICE Transactions on Information and Systems | 2012

Sequence-Based Pronunciation Variation Modeling for Spontaneous ASR Using a Noisy Channel Approach

Hansjörg Hofmann; Sakriani Sakti; Chiori Hori; Hideki Kashioka; Satoshi Nakamura; Wolfgang Minker

Collaboration


Dive into Hansjörg Hofmann's collaborations.

Top Co-Authors

Sakriani Sakti
Nara Institute of Science and Technology

Satoshi Nakamura
Nara Institute of Science and Technology

Hisashi Kawai
National Institute of Information and Communications Technology

Ryosuke Isotani
National Institute of Information and Communications Technology