
Publication


Featured research published by Stefan Radomski.


Journal on Multimodal User Interfaces | 2013

JVoiceXML as a modality component in the W3C multimodal architecture

Dirk Schnelle-Walka; Stefan Radomski; Max Mühlhäuser

Research regarding multimodal interaction led to a multitude of proposals for suitable software architectures. With all architectures describing multimodal systems differently, interoperability is severely hindered. The W3C MMI architecture is a proposed recommendation for a common architecture. In this article, we describe our experiences integrating JVoiceXML into the W3C MMI architecture and identify general limitations with regard to the available design space.
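
For context: in the W3C MMI architecture, the interaction manager and the modality components communicate solely through standardized life-cycle events such as NewContextRequest, StartRequest and DoneNotification. The following is a minimal, hypothetical sketch (not taken from the article) of emitting a StartRequest that points a voice modality component such as JVoiceXML at a VoiceXML document; the source/target identifiers and the URL are invented for illustration.

    #include <stdio.h>

    /* Illustrative only: print a W3C MMI StartRequest life-cycle event that
     * hands a VoiceXML document to a voice modality component.  The
     * identifiers and the URL are made up for this example. */
    static void emit_start_request(const char *context, const char *request_id,
                                   const char *vxml_url)
    {
        printf("<mmi:mmi xmlns:mmi=\"http://www.w3.org/2008/04/mmi-arch\" version=\"1.0\">\n"
               "  <mmi:StartRequest mmi:Context=\"%s\" mmi:RequestID=\"%s\"\n"
               "                    mmi:Source=\"IM-1\" mmi:Target=\"VoiceMC-1\">\n"
               "    <mmi:ContentURL mmi:href=\"%s\"/>\n"
               "  </mmi:StartRequest>\n"
               "</mmi:mmi>\n",
               context, request_id, vxml_url);
    }

    int main(void)
    {
        emit_start_request("ctx-1", "req-1", "http://example.org/dialog.vxml");
        return 0;
    }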


Engineering Interactive Computing Systems | 2013

Supporting elastic collaboration: integration of collaboration components in dynamic contexts

Jordan Janeiro; Stephan Lukosch; Stefan Radomski; Mathias Johanson; Massimo Mecella; Jonas Larsson

In dynamic problem-solving situations, groups and organizations have to become more flexible to adapt collaborative workspaces to their needs. New paradigms propose to bridge the two opposing perspectives, process-driven and ad hoc, to achieve such flexibility. However, a key challenge lies in the dynamic integration of groupware tools into the same collaborative workspace. This paper proposes a collaborative workspace (Elgar) that supports the Elastic Collaboration concept, together with a standard interface for integrating groupware tools, named Elastic Collaboration Components. The paper illustrates the use of such a flexible collaborative workspace and of groupware tools in a machine diagnosis scenario that requires collaboration.


Text, Speech and Dialogue | 2015

Open Source German Distant Speech Recognition: Corpus and Acoustic Model

Stephan Radeck-Arneth; Benjamin Milde; Arvid Lange; Evandro Gouvêa; Stefan Radomski; Max Mühlhäuser; Chris Biemann

We present a new freely available corpus for German distant speech recognition and report speaker-independent word error rate (WER) results for two open source speech recognizers trained on this corpus. The corpus has been recorded in a controlled environment with three different microphones at a distance of one meter. It comprises 180 different speakers with a total of 36 hours of audio recordings. We show recognition results with the open source toolkits Kaldi (20.5% WER) and PocketSphinx (39.6% WER) and make a complete open source solution for German distant speech recognition possible.
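
As a reminder of how the reported figures are computed: WER is the number of word substitutions, deletions and insertions needed to turn the recognizer output into the reference transcript, divided by the number of reference words. A minimal sketch of the arithmetic in C, with invented counts:

    #include <stdio.h>

    /* Word error rate: (substitutions + deletions + insertions) divided by the
     * number of words in the reference transcript.  The counts below are
     * invented for illustration and are not taken from the corpus paper. */
    static double wer(int substitutions, int deletions, int insertions,
                      int reference_words)
    {
        return (double)(substitutions + deletions + insertions)
               / (double)reference_words;
    }

    int main(void)
    {
        /* e.g. 120 substitutions, 40 deletions, 20 insertions over 900 words */
        printf("WER = %.1f%%\n", 100.0 * wer(120, 40, 20, 900));
        return 0;
    }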


International Conference on Computers for Handicapped Persons | 2014

Multimodal Fusion and Fission within W3C Standards for Nonverbal Communication with Blind Persons

Dirk Schnelle-Walka; Stefan Radomski; Max Mühlhäuser

Multimodal fusion and multimodal fission are well-known concepts for multimodal systems but have not been well integrated into current architectures to support collaboration between blind and sighted people. In this paper, we describe our initial thoughts on multimodal dialog modeling in multiuser dialog settings employing multiple modalities, based on W3C standards such as the Multimodal Architecture and Interfaces.


Intelligent User Interfaces | 2013

SmartObjects: Fourth Workshop on Interacting with Smart Objects

Dirk Schnelle-Walka; Max Mühlhäuser; Stefan Radomski; Oliver Brdiczka; Jochen Huber; Kris Luyten; Tobias Grosse-Puppendahl

Smart objects are everyday objects that have computing capabilities and give rise to new ways of interacting with our environment. The increasing number of smart objects in our lives shapes how we interact beyond the desktop. In this workshop, we explore various aspects of the design, development, and deployment of smart objects, including how one can interact with them.


World of Wireless, Mobile and Multimedia Networks | 2010

Pervasive speech API demo

Stefan Radomski; Dirk Schnelle-Walka

Current approaches for voice-based interaction do not meet the special requirements of pervasive environments. While there is an increasing trend towards distributed systems, the classical client/server paradigm still prevails. We describe a framework wherein functional components are orchestrated dynamically, taking into account the user's context and the changing availability and suitability of services in pervasive environments.


Proceedings of the 2017 ACM Workshop on Interacting with Smart Objects | 2017

The W3C MMI Architecture in the Context of the Smart Car

Dirk Schnelle-Walka; Stefan Radomski

With the GENIVI project, an open source approach to ease the development of scalable in-vehicle infotainment systems is available. However, in its current state it only provides limited capabilities when it comes to the addition of new modalities in the smart car. A possible solution is available in the form of the W3C MMI architectural pattern. In this paper, we analyze a potential combination of these two efforts.


Archive | 2017

Multimodal Fusion and Fission within the W3C MMI Architectural Pattern

Dirk Schnelle-Walka; Carlos Duarte; Stefan Radomski

The current W3C recommendation for multimodal interfaces provides a standard for the message exchange and overall structure of modality components in multimodal applications. However, the details of multimodal fusion, which combines inputs coming from modality components, and of multimodal fission, which prepares multimodal presentations, are left unspecified. This chapter provides a first analysis of possible integrations of several approaches to fusion and fission and of their implications with regard to the standard.
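
To make the two terms concrete: fusion merges semantically related inputs arriving from different modality components (for instance a spoken "delete that" and a pointing gesture), while fission distributes a single presentation across several output modalities. The sketch below is a deliberately simplified, hypothetical late-fusion rule in C; it is not the integration analyzed in the chapter.

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical late-fusion rule, purely for illustration: pair a spoken
     * utterance that contains a deictic reference ("that") with a pointing
     * event, provided both arrive within a small time window. */

    typedef struct {
        const char *utterance;  /* recognized speech, e.g. "delete that" */
        double      timestamp;  /* seconds */
    } SpeechEvent;

    typedef struct {
        const char *target_id;  /* identifier of the object pointed at */
        double      timestamp;  /* seconds */
    } PointingEvent;

    static int fuse(const SpeechEvent *s, const PointingEvent *p, double window,
                    char *out, size_t out_len)
    {
        if (strstr(s->utterance, "that") == NULL)
            return 0;  /* no deictic reference to resolve */
        if (fabs(s->timestamp - p->timestamp) > window)
            return 0;  /* events too far apart to belong together */
        snprintf(out, out_len, "%s [target=%s]", s->utterance, p->target_id);
        return 1;
    }

    int main(void)
    {
        SpeechEvent   s = { "delete that", 12.3 };
        PointingEvent p = { "photo-7", 12.1 };
        char fused[64];

        if (fuse(&s, &p, 0.5, fused, sizeof fused))
            printf("fused input: %s\n", fused);
        return 0;
    }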


Archive | 2017

SCXML on Resource Constrained Devices

Stefan Radomski; Jens Heuschkel; Dirk Schnelle-Walka; Max Mühlhäuser

Ever since their introduction as a visual formalism by Harel et al. in 1987, state-charts have played an important role in formally specifying the behavior of reactive systems. However, various shortcomings in their original formalization led to a plethora of formal semantics for their interpretation in the subsequent years. In 2005, the W3C Voice Browser Working Group started an attempt to specify SCXML as an XML dialect and corresponding semantics for state-charts and their interpretation; it was promoted to W3C Recommendation status in 2015. In the context of multimodal interaction, SCXML is of special relevance as the markup language proposed for expressing dialog models, i.e. descriptions of interaction, in the multimodal dialog system specified by the W3C Multimodal Interaction Working Group. However, corresponding SCXML interpreters are oftentimes embedded in elaborate host environments, are very simplified, or require significant resources when interpreting a document. In this chapter, we present a more compact, equivalent representation of SCXML documents as native data structures, with the respective syntactical transformation and their interpretation by an implementation in ANSI C. We discuss the characteristics of the approach in terms of binary size, memory requirements, and processing speed. Ultimately, this will enable us to gain the insights needed to transform SCXML state-charts for embedded systems with very limited processing capabilities, and even for integrated circuits.
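
The general idea can be illustrated with a toy example: a state-chart flattened into constant C tables and driven by a small table-based step function. This is only a sketch of the concept with an invented three-state chart; the transformation described in the chapter also handles compound states, executable content and the datamodel, and its actual data layout will differ.

    #include <stdio.h>
    #include <string.h>

    /* Minimal illustration of representing a state-chart as native C data
     * structures: constant tables plus a table-driven step function. */

    typedef enum { S_IDLE, S_LISTENING, S_SPEAKING } state_t;

    typedef struct {
        state_t     source;
        const char *event;
        state_t     target;
    } transition_t;

    static const transition_t transitions[] = {
        { S_IDLE,      "start",  S_LISTENING },
        { S_LISTENING, "result", S_SPEAKING  },
        { S_SPEAKING,  "done",   S_IDLE      },
    };

    static state_t step(state_t current, const char *event)
    {
        size_t i;
        for (i = 0; i < sizeof transitions / sizeof transitions[0]; i++) {
            if (transitions[i].source == current &&
                strcmp(transitions[i].event, event) == 0)
                return transitions[i].target;
        }
        return current;  /* no matching transition: stay in the current state */
    }

    int main(void)
    {
        const char *events[] = { "start", "result", "done" };
        state_t s = S_IDLE;
        size_t i;

        for (i = 0; i < 3; i++) {
            s = step(s, events[i]);
            printf("after '%s': state %d\n", events[i], (int)s);
        }
        return 0;
    }

Keeping the chart in constant tables lets it reside in read-only memory, which is one reason a native representation is attractive for devices with very little RAM.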


International Conference on Computers for Handicapped Persons | 2014

Towards an Information State Update Model Approach for Nonverbal Communication

Dirk Schnelle-Walka; Stefan Radomski; Stephan Radeck-Arneth; Max Mühlhäuser

The Information State Update (ISU) Model describes an approach to dialog management that has predominantly been applied to single-user scenarios using voice as the only modality. Extensions to multimodal interaction with multiple users are rarely considered and, when presented, hard to operationalize. In this paper, we describe our approach to dialog modeling based on the ISU model in multiuser dialog settings employing multiple modalities, including nonverbal communication.
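
For readers unfamiliar with the ISU approach: the dialog state is kept as an explicit information state, and every observed dialog move, verbal or nonverbal, is processed by update rules that modify it. The C sketch below is a toy illustration of that idea with invented fields and rules; it is not the model proposed in the paper.

    #include <stdio.h>
    #include <string.h>

    /* Toy information-state update: the dialog state is a small record, and
     * each incoming dialog move is handled by an update rule that modifies it.
     * Fields and rules are invented for illustration. */

    typedef struct {
        char last_speaker[16];
        char open_question[64];  /* question under discussion, empty if none */
        int  grounded;           /* 1 once the addressee has acknowledged */
    } InfoState;

    /* Update rule for an "ask" move: record the question, mark it ungrounded. */
    static void update_on_ask(InfoState *is, const char *speaker,
                              const char *question)
    {
        strncpy(is->last_speaker, speaker, sizeof(is->last_speaker) - 1);
        strncpy(is->open_question, question, sizeof(is->open_question) - 1);
        is->grounded = 0;
    }

    /* Update rule for a nonverbal acknowledgement, e.g. a nod seen by vision. */
    static void update_on_nod(InfoState *is, const char *speaker)
    {
        strncpy(is->last_speaker, speaker, sizeof(is->last_speaker) - 1);
        is->grounded = 1;
    }

    int main(void)
    {
        InfoState is = { "", "", 0 };
        update_on_ask(&is, "user-A", "which valve is leaking?");
        update_on_nod(&is, "user-B");
        printf("question: %s (grounded: %d)\n", is.open_question, is.grounded);
        return 0;
    }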

Collaboration


Dive into Stefan Radomski's collaborations.

Top Co-Authors

Dirk Schnelle-Walka (Technische Universität Darmstadt)
Max Mühlhäuser (Technische Universität Darmstadt)
Stephan Radeck-Arneth (Technische Universität Darmstadt)
Benjamin Milde (Technische Universität Darmstadt)
Jens Heuschkel (Technische Universität Darmstadt)
Arvid Lange (Technische Universität Darmstadt)