
Publication


Featured research published by Ladislav Seredi.


International Conference on Multimodal Interfaces | 2002

CATCH-2004 multi-modal browser: overview description with usability analysis

Jan Kleindienst; Ladislav Seredi; Pekka Kapanen; Janne Bergman

This paper takes a closer look at the user interface issues in our research multi-modal browser architecture. The browser framework, also briefly introduced in this paper, reuses single-modal browser technologies available for VoiceXML, WML, and HTML browsing. User interface actions on a particular browser are captured, converted to events, and distributed to the other browsers participating in the multi-modal framework (possibly on different hosts). We have defined a synchronization protocol that distributes such events with the help of a central component called the Virtual Proxy. The choice of architecture and synchronization primitives has profound consequences for handling certain interesting UI use cases. We particularly address those specified by the W3C MultiModal Requirements, which relate to the design of possible strategies for dealing with simultaneous input, resolving input inconsistencies, and defining synchronization points. The proposed approaches are illustrated by examples.
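The event-distribution scheme the abstract describes can be illustrated with a minimal sketch. All names here (`VirtualProxy`, `UIEvent`, the `"voice"`/`"gui"` browser identifiers) are hypothetical stand-ins for illustration, not the paper's actual API: each single-modal browser registers a handler with a central proxy, and a UI action on one browser is converted to an event and rebroadcast to all the others.

```python
# Hypothetical sketch of the Virtual Proxy idea: a UI action on one
# component browser becomes an event that the proxy forwards to every
# other registered browser, keeping the modalities in sync.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UIEvent:
    source: str          # which browser emitted the event ("voice", "gui", ...)
    name: str            # e.g. "field-filled", "focus-changed"
    payload: dict = field(default_factory=dict)

class VirtualProxy:
    """Central coordinator that keeps component browsers synchronized."""
    def __init__(self):
        self._wrappers: dict[str, Callable[[UIEvent], None]] = {}

    def register(self, browser_id: str, handler: Callable[[UIEvent], None]):
        self._wrappers[browser_id] = handler

    def dispatch(self, event: UIEvent):
        # Forward the event to every browser except the one that produced it.
        for browser_id, handler in self._wrappers.items():
            if browser_id != event.source:
                handler(event)

# Usage: a voice browser fills a form field; the GUI browser is notified.
seen = []
proxy = VirtualProxy()
proxy.register("voice", lambda e: None)
proxy.register("gui", lambda e: seen.append((e.name, e.payload)))
proxy.dispatch(UIEvent("voice", "field-filled", {"field": "city", "value": "Prague"}))
```

A real implementation would additionally have to handle the cases the paper discusses, such as simultaneous input from two browsers and inconsistent field values, which a simple broadcast loop does not resolve.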


Universal Access in the Information Society | 2003

Loosely-coupled approach towards multi-modal browsing

Jan Kleindienst; Ladislav Seredi; Pekka Kapanen; Janne Bergman

Universal-access multi-modal browsing is emerging as one of the "killer" technologies that promise broader and more flexible access to information, faster task completion, and an advanced user experience. Inheriting the best of GUI and speech, depending on the circumstances, hardware capabilities, and environment, multi-modality's great advantage is that it provides application developers with a scalable blend of input and output channels that can accommodate any user, device, and platform. This article describes a flexible multi-modal browser architecture, named Ferda the Ant, which reuses uni-modal browser technologies available for VoiceXML, WML, and HTML browsing. A central component, the Virtual Proxy, acts as a synchronization coordinator. This browser architecture can be implemented either in a single-client configuration or by distributing the browser components across the network. We have defined and implemented a synchronization protocol to communicate the changes occurring in the context of a component browser to the other browsers participating in the multi-modal browser framework. Browser wrappers implement the required synchronization protocol functionality at each of the component browsers. The component browsers comply with existing content authoring standards, and we have designed a set of markup-level authoring conventions that facilitate maintaining browser synchronization.


International Conference on Software Maintenance | 2001

Aspects of design and implementation of a multi-channel and multi-modal information system

Vasiliki Demesticha; Jaroslav Gergic; Jan Kleindienst; Marion Mast; Lazaros Polymenakos; Henrik Schulz; Ladislav Seredi

The paper describes an architecture for multi-channel and multi-modal applications. First, the design problem is explored and a system that can handle multi-modal interaction and the delivery of Internet content is proposed. The focus is on selected development aspects and on how they are addressed using state-of-the-art tools. The various components are defined and described in detail. Finally, conclusions and a view of future work on the evolution of such systems are given.


International Conference on Multimodal Interfaces | 2006

CarDialer: multi-modal in-vehicle cellphone control application

Vladimir Bergl; Martin Cmejrek; Martin Fanta; Martin Labský; Ladislav Seredi; Jan Šedivý; Lubos Ures

This demo presents CarDialer, an in-car cellphone control application. Its multi-modal user interface blends state-of-the-art speech recognition technology (including text-to-speech synthesis) with existing, well-proven elements of a vehicle information system GUI (buttons mounted on the steering wheel and a touch-screen LCD). This conversational system provides access to name dialing, unconstrained dictation of numbers, adding new names, operations with lists of calls and messages, notification of presence, etc. The application is fully functional from the first start; no prerequisite steps (such as configuration or speech recognition enrollment) are required. The presentation of the proposed multi-modal architecture goes beyond the specific application and presents a modular platform for integrating application logic with various incarnations of UI modalities.
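The modular platform the abstract mentions, where application logic is decoupled from concrete UI modalities, can be sketched with an abstract modality interface. All class and method names here (`Modality`, `SpeechOutput`, `DialogManager`) are hypothetical illustrations of the design pattern, not the system's actual components.

```python
# Hypothetical sketch: the dialog logic talks to an abstract Modality
# interface, so the same prompt can be rendered by speech, a touch
# screen, or any other output channel interchangeably.
from abc import ABC, abstractmethod

class Modality(ABC):
    @abstractmethod
    def render(self, prompt: str) -> None:
        """Present a prompt to the user through this channel."""

class SpeechOutput(Modality):
    def __init__(self):
        self.spoken = []
    def render(self, prompt):
        self.spoken.append(f"TTS: {prompt}")   # stand-in for a TTS engine

class ScreenOutput(Modality):
    def __init__(self):
        self.shown = []
    def render(self, prompt):
        self.shown.append(f"LCD: {prompt}")    # stand-in for the in-car LCD

class DialogManager:
    """Application logic that is unaware of concrete modalities."""
    def __init__(self, modalities):
        self.modalities = modalities
    def prompt(self, text):
        for m in self.modalities:
            m.render(text)

# Usage: the same dialog turn is rendered on both output channels.
speech, screen = SpeechOutput(), ScreenOutput()
DialogManager([speech, screen]).prompt("Call whom?")
```

The point of the indirection is that adding or removing a channel (e.g. steering-wheel button feedback) requires a new `Modality` subclass but no change to the dialog logic itself.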


European Conference on Computer Vision | 2004

Djinn: Interaction Framework for Home Environment Using Speech and Vision

Jan Kleindienst; Tomáš Macek; Ladislav Seredi; Jan Šedivý

In this paper we describe an interaction framework that uses speech recognition and computer vision to model a new generation of interfaces for the residential environment. We outline the blueprint of the architecture and describe the main building blocks. We present a concrete prototype platform on which this novel architecture has been deployed and will be tested in user field trials. The EC co-funds this work as part of the HomeTalk IST-2001-33507 project.


Archive | 2001

Systems and methods for implementing modular DOM (Document Object Model)-based multi-modal browsers

David Boloker; Rafah A. Hosn; Photina Jaeyun Jang; Jan Kleindienst; Tomáš Macek; Stephane Herman Maes; Thiruvilwamalai V. Raman; Ladislav Seredi


Archive | 2001

Reusable voiceXML dialog components, subdialogs and beans

Jaroslav Gergic; Rafah A. Hosn; Jan Kleindienst; Stephane Herman Maes; Thiruvilwamalai V. Raman; Jan Sedivy; Ladislav Seredi


Hands-Free Speech Communication and Microphone Arrays | 2008

Far-Field Multimodal Speech Processing and Conversational Interaction in Smart Spaces

Gerasimos Potamianos; Jing Huang; Etienne Marcheret; Vit Libal; Rajesh Balchandran; Mark E. Epstein; Ladislav Seredi; Martin Labsky; Lubos Ures; Matthew P. Black; Patrick Lucey


Image and Vision Computing | 2007

Interaction framework for home environment using speech and vision

Jan Kleindienst; Tomáš Macek; Ladislav Seredi; Jan Šedivý


Archive | 2007

System, method and architecture for control and multi-modal synchronization of speech browsers

Frantisek Bachleda; Jan Kleindienst; Martin Labsky; Jan Sedivy; Ladislav Seredi; Lubos Ures; Keith Grueneberg
