Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Daniel Sonntag is active.

Publication


Featured research published by Daniel Sonntag.


Journal of Web Semantics | 2007

DOLCE ergo SUMO: On foundational and domain models in the SmartWeb Integrated Ontology (SWIntO)

Daniel Oberle; Anupriya Ankolekar; Pascal Hitzler; Philipp Cimiano; Michael Sintek; Malte Kiesel; Babak Mougouie; Stephan Baumann; Shankar Vembu; Massimo Romanelli; Paul Buitelaar; Ralf Engel; Daniel Sonntag; Norbert Reithinger; Berenike Loos; Hans-Peter Zorn; Vanessa Micelli; Robert Porzel; Christian Schmidt; Moritz Weiten; Felix Burkhardt; Jianshen Zhou

Increased availability of mobile computing devices, such as personal digital assistants (PDAs), creates the potential for constant and intelligent access to up-to-date, integrated and detailed information from the Web, regardless of one's actual geographical position. Intelligent question-answering requires the representation of knowledge from various domains, such as the navigational and discourse context of the user, potential user questions, the information provided by Web services and so on, for example in the form of ontologies. Within the context of the SmartWeb project, we have developed a number of domain-specific ontologies that are relevant for mobile and intelligent user interfaces to open-domain question-answering and information services on the Web. To integrate the various domain-specific ontologies, we have developed a foundational ontology, the SmartSUMO ontology, on the basis of the DOLCE and SUMO ontologies. This allows us to combine all the developed ontologies into a single SmartWeb Integrated Ontology (SWIntO) having a common modeling basis with conceptual clarity and the provision of ontology design patterns for modeling consistency. In this paper, we present SWIntO, describe the design choices we made in its construction, illustrate the use of the ontology through a number of applications, and discuss some of the lessons learned from our experiences.
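
A minimal sketch of the kind of alignment SWIntO performs, using the Python rdflib library: a domain-ontology class is subsumed under a foundational class so that all domain ontologies share one modeling basis. The namespace URIs and class names below are illustrative placeholders, not the actual SWIntO identifiers.

# Hedged sketch: aligning a domain concept under a foundational concept,
# roughly in the spirit of SWIntO's DOLCE/SUMO-based integration.
# Namespaces and class names are illustrative placeholders.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

SMARTSUMO = Namespace("http://example.org/smartsumo#")   # placeholder URI
SPORTS    = Namespace("http://example.org/sportevent#")  # placeholder URI

g = Graph()
g.bind("ssumo", SMARTSUMO)
g.bind("sport", SPORTS)

# Declare a domain-ontology class and subsume it under a foundational class.
g.add((SPORTS.FootballMatch, RDF.type, OWL.Class))
g.add((SPORTS.FootballMatch, RDFS.subClassOf, SMARTSUMO.Process))

print(g.serialize(format="turtle"))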


International Joint Conference on Artificial Intelligence | 2007

SmartWeb handheld: multimodal interaction with ontological knowledge bases and semantic web services

Daniel Sonntag; Ralf Engel; Gerd Herzog; Alexander Pfalzgraf; Norbert Pfleger; Massimo Romanelli; Norbert Reithinger

SMARTWEB aims to provide intuitive multimodal access to a rich selection of Web-based information services. We report on the current prototype with a smartphone client interface to the Semantic Web. An advanced ontology-based representation of facts and media structures serves as the central description for rich media content. Underlying content is accessed through conventional web service middleware to connect the ontological knowledge base and an intelligent web service composition module for external web services, which is able to translate between ordinary XML-based data structures and explicit semantic representations for user queries and system responses. The presentation module renders the media content and the results generated from the services and provides a detailed description of the content and its layout to the fusion module. The user is then able to employ multiple modalities, like speech and gestures, to interact with the presented multimedia material.
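
A rough illustration of the XML-to-semantics translation step described above: an ordinary web-service response is mapped onto ontology instances. The XML layout, the service domain, and the target vocabulary are invented for illustration and are not the project's actual data formats.

# Hedged sketch: mapping an ordinary XML web-service response onto RDF triples,
# in the spirit of the composition module's translation between service data
# and semantic representations. All names and the vocabulary are placeholders.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, Literal, RDF

SWINTO = Namespace("http://example.org/swinto#")  # placeholder vocabulary

xml_response = "<weather><city>Berlin</city><tempC>21</tempC></weather>"
root = ET.fromstring(xml_response)

g = Graph()
report = SWINTO.report1
g.add((report, RDF.type, SWINTO.WeatherReport))
g.add((report, SWINTO.location, Literal(root.findtext("city"))))
g.add((report, SWINTO.temperatureCelsius, Literal(int(root.findtext("tempC")))))

print(g.serialize(format="turtle"))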


International Conference on Multimodal Interfaces | 2005

A look under the hood: design and development of the first SmartWeb system demonstrator

Norbert Reithinger; Simon Bergweiler; Ralf Engel; Gerd Herzog; Norbert Pfleger; Massimo Romanelli; Daniel Sonntag

Experience shows that decisions in the early phases of the development of a multimodal system prevail throughout the life-cycle of a project. The distributed architecture and the requirement for robust multimodal interaction in our project SmartWeb resulted in an approach that uses and extends W3C standards like EMMA and RDFS. These standards for the interface structure and content allowed us to integrate available tools and techniques. However, the requirements in our system called for various extensions, e.g., to introduce result feedback tags for an extended version of EMMA. The interconnection framework depends on a commercial telephone voice dialog system platform for the dialog-centric components, while the information access processes are linked using web service technology. Enhancements and extensions were also necessary in this underlying infrastructure. The first demonstration system is now operable and will be presented at the Football World Cup 2006 in Germany.
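
To make the EMMA-based interface structure concrete, the following sketch builds an EMMA 1.0 annotation for a spoken query with Python's standard library. The result-feedback element is a hypothetical stand-in for the project-specific extension mentioned above; its name, namespace, and attributes are assumptions.

# Hedged sketch: constructing a W3C EMMA interpretation for a spoken query.
# The <sw:result-feedback> element stands in for SmartWeb's project-specific
# EMMA extension; its name, namespace, and attributes are invented here.
import xml.etree.ElementTree as ET

EMMA = "http://www.w3.org/2003/04/emma"
SW = "http://example.org/smartweb-ext"  # placeholder extension namespace
ET.register_namespace("emma", EMMA)
ET.register_namespace("sw", SW)

emma = ET.Element(f"{{{EMMA}}}emma", {"version": "1.0"})
interp = ET.SubElement(emma, f"{{{EMMA}}}interpretation", {
    f"{{{EMMA}}}confidence": "0.8",
    f"{{{EMMA}}}medium": "acoustic",
    f"{{{EMMA}}}mode": "voice",
})
ET.SubElement(interp, "query").text = "Who won the World Cup in 1990?"
ET.SubElement(interp, f"{{{SW}}}result-feedback", {"status": "pending"})

print(ET.tostring(emma, encoding="unicode"))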


Intelligent User Interfaces | 2014

A mixed reality head-mounted text translation system using eye gaze input

Takumi Toyama; Daniel Sonntag; Andreas Dengel; Takahiro Matsuda; Masakazu Iwamura; Koichi Kise

Efficient text recognition has recently been a challenge for augmented reality systems. In this paper, we propose a system with the ability to provide translations to the user in real-time. We use eye gaze as a more intuitive and efficient input for ubiquitous text reading and translation in head-mounted displays (HMDs). The eyes can be used to indicate regions of interest in text documents and activate optical character recognition (OCR) and translation functions. Visual feedback and navigation help in the interaction process, and text snippets with translations from Japanese to English are presented in a see-through HMD. We focus on travelers who go to Japan and need to read signs, and propose two different gaze gestures for activating the OCR text reading and translation function. We evaluate which type of gesture suits our OCR scenario best. We also show that our gaze-based OCR method on the extracted gaze regions provides faster access times to information than traditional OCR approaches. Other benefits include that visual feedback of the extracted text region can be given in real-time, that the Japanese-to-English translation can be presented in real-time, and that the augmentations of the synchronized and calibrated HMD in this mixed reality application are presented at exact locations in the augmented user view, allowing for dynamic text translation management in head-up display systems.
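
A simplified sketch of the gaze-triggered reading pipeline described above, assuming pytesseract for OCR: the scene image is cropped around the current gaze point, the Japanese text is recognized, and the result is handed to a translation backend. The crop size, gaze coordinates, and the translate_ja_en() helper are placeholders, not the paper's actual components.

# Hedged sketch of a gaze-triggered OCR-and-translate step: crop the scene image
# around the gaze point, OCR the Japanese text, then translate it.
# translate_ja_en() is a placeholder for whatever translation backend is used.
from PIL import Image
import pytesseract

def read_at_gaze(scene_path, gaze_xy, half_size=120):
    scene = Image.open(scene_path)
    x, y = gaze_xy
    # Region of interest centered on the gaze point.
    roi = scene.crop((x - half_size, y - half_size, x + half_size, y + half_size))
    return pytesseract.image_to_string(roi, lang="jpn")

def translate_ja_en(text):
    # Placeholder: call the application's translation service here.
    raise NotImplementedError

if __name__ == "__main__":
    japanese_text = read_at_gaze("scene.png", gaze_xy=(640, 360))
    print(japanese_text)  # would then be translated and shown in the see-through HMD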


International Conference of Design, User Experience, and Usability | 2013

Towards Medical Cyber-Physical Systems: Multimodal Augmented Reality for Doctors and Knowledge Discovery about Patients

Daniel Sonntag; Sonja Zillner; Christian Schulz; Markus Weber; Takumi Toyama

In the medical domain, which is becoming more and more digital, every improvement in efficiency and effectiveness really counts. Doctors must be able to retrieve data easily and provide their input in the most convenient way. With new technologies towards medical cyber-physical systems, such as networked head-mounted displays (HMDs) and eye trackers, new interaction opportunities arise. With our medical demo in the context of a cancer screening programme, we combine active speech-based input, passive/active eye tracker user input, and HMD output (all devices are on-body and hands-free) in a way that is convenient for both the patient and the doctor.


International Workshop on Spoken Dialogue Systems Technology | 2010

A discourse and dialogue infrastructure for industrial dissemination

Daniel Sonntag; Norbert Reithinger; Gerd Herzog; Tilman Becker

We think that modern speech dialogue systems need a prior usability analysis to identify the requirements for industrial applications. In addition, work from the area of the Semantic Web should be integrated. These requirements can then be met by multimodal semantic processing, semantic navigation, interactive semantic mediation, user adaptation/personalisation, interactive service composition, and semantic output representation, which we explain in this paper. We also describe the discourse and dialogue infrastructure that these components form and provide two examples of disseminated industrial prototypes.


Journal of Cases on Information Technology | 2009

Pillars of Ontology Treatment in the Medical Domain

Daniel Sonntag; Pinar Wennerberg; Paul Buitelaar; Sonja Zillner

In this chapter the authors describe the three pillars of ontology treatment in the medical domain in a comprehensive case study within the large-scale THESEUS MEDICO project. MEDICO addresses the need for advanced semantic technologies in medical image and patient data search. The objective is to enable a seamless integration of medical images and different user applications by providing direct access to image semantics. Semantic image retrieval should provide the basis for clinical decision support and computer-aided diagnosis. During the course of lymphoma diagnosis and continual treatment, image data is produced several times using different image modalities. After semantic annotation, the images need to be integrated with medical (textual) data repositories and ontologies. The authors build upon the three pillars of knowledge engineering, ontology mediation and alignment, and ontology population and learning to achieve the objectives of the MEDICO project.


IEEE Transactions on Visualization and Computer Graphics | 2015

ModulAR: Eye-Controlled Vision Augmentations for Head Mounted Displays

Jason Orlosky; Takumi Toyama; Kiyoshi Kiyokawa; Daniel Sonntag

In the last few years, the advancement of head-mounted display technology and optics has opened up many new possibilities for the field of Augmented Reality. However, many commercial and prototype systems often have a single display modality, fixed field of view, or inflexible form factor. In this paper, we introduce Modular Augmented Reality (ModulAR), a hardware and software framework designed to improve flexibility and hands-free control of video see-through augmented reality displays and augmentative functionality. To accomplish this goal, we introduce the use of integrated eye tracking for on-demand control of vision augmentations such as optical zoom or field of view expansion. Physical modification of the device's configuration can be accomplished on the fly using interchangeable camera-lens modules that provide different types of vision enhancements. We implement and test functionality for several primary configurations using telescopic and fisheye camera-lens systems, though many other customizations are possible. We also implement a number of eye-based interactions in order to engage and control the vision augmentations in real time, and explore different methods for merging streams of augmented vision into the user's normal field of view. In a series of experiments, we conduct an in-depth analysis of visual acuity and head and eye movement during search and recognition tasks. Results show that methods with a larger field of view that utilize binary on/off and gradual zoom mechanisms outperform snapshot and sub-windowed methods, and that the type of eye engagement has little effect on performance.
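
A small sketch of the kind of eye-based engagement logic the paper explores (a binary on/off augmentation toggled by dwelling on an activation region). The region layout, dwell threshold, and class structure are illustrative assumptions, not the ModulAR implementation.

# Hedged sketch: toggle a vision augmentation (e.g. optical zoom) when the user's
# gaze dwells inside an activation region long enough. All numbers are
# illustrative; ModulAR's actual gestures and thresholds differ.
import time

ACTIVATION_REGION = (0.85, 0.0, 1.0, 0.15)  # normalized (x1, y1, x2, y2), top-right corner
DWELL_SECONDS = 0.8

class AugmentationToggle:
    def __init__(self):
        self.zoom_enabled = False
        self._dwell_start = None

    def update(self, gaze_x, gaze_y):
        x1, y1, x2, y2 = ACTIVATION_REGION
        inside = x1 <= gaze_x <= x2 and y1 <= gaze_y <= y2
        now = time.monotonic()
        if inside:
            self._dwell_start = self._dwell_start or now
            if now - self._dwell_start >= DWELL_SECONDS:
                self.zoom_enabled = not self.zoom_enabled  # binary on/off engagement
                self._dwell_start = None
        else:
            self._dwell_start = None
        return self.zoom_enabled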


Advanced Visual Interfaces | 2014

A natural interface for multi-focal plane head mounted displays using 3D gaze

Takumi Toyama; Jason Orlosky; Daniel Sonntag; Kiyoshi Kiyokawa

In mobile augmented reality (AR), it is important to develop interfaces for wearable displays that not only reduce distraction, but that can be used quickly and in a natural manner. In this paper, we propose a focal-plane based interaction approach with several advantages over traditional methods designed for head mounted displays (HMDs) with only one focal plane. Using a novel prototype that combines a monoscopic multi-focal plane HMD and eye tracker, we facilitate interaction with virtual elements such as text or buttons by measuring eye convergence on objects at different depths. This can prevent virtual information from being unnecessarily overlaid onto real world objects that are at a different range, but in the same line of sight. We then use our prototype in a series of experiments testing the feasibility of interaction. Despite only being presented with monocular depth cues, users have the ability to correctly select virtual icons in near, mid, and far planes in 98.6% of cases.
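
A minimal sketch of how eye convergence can be mapped to a focal plane: the vergence angle between the two eyes' gaze rays yields an estimated fixation depth, which is then binned into the display's near, mid, or far plane. The inter-pupillary distance and plane boundaries are illustrative values, not the paper's calibration.

# Hedged sketch: estimate fixation depth from the vergence angle between the two
# eyes' gaze directions and map it to a focal plane of the display.
# IPD and plane boundaries are illustrative assumptions.
import math

IPD_M = 0.063  # inter-pupillary distance in metres (typical adult value)

def vergence_depth(left_dir, right_dir):
    """Fixation depth in metres from two unit gaze direction vectors (x, y, z)."""
    dot = sum(l * r for l, r in zip(left_dir, right_dir))
    angle = math.acos(max(-1.0, min(1.0, dot)))  # vergence angle in radians
    if angle < 1e-6:
        return float("inf")  # gaze rays effectively parallel -> looking far away
    return (IPD_M / 2.0) / math.tan(angle / 2.0)

def focal_plane(depth_m, near_max=0.5, mid_max=2.0):
    if depth_m < near_max:
        return "near"
    return "mid" if depth_m < mid_max else "far"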


Intelligent User Interfaces | 2015

An Interactive Pedestrian Environment Simulator for Cognitive Monitoring and Evaluation

Jason Orlosky; Markus Weber; Yecheng Gu; Daniel Sonntag; Sergey A. Sosnovsky

Recent advances in virtual and augmented reality have led to the development of a number of simulations for different applications. In particular, simulations for monitoring, evaluation, training, and education have started to emerge for the consumer market due to the availability and affordability of immersive display technology. In this work, we introduce a virtual reality environment that provides an immersive traffic simulation designed to observe behavior and monitor relevant skills and abilities of pedestrians who may be at risk, such as elderly persons with cognitive impairments. The system provides basic reactive functionality, such as the display of navigation instructions and notifications of dangerous obstacles during navigation tasks. Methods for interaction using hand and arm gestures are also implemented to allow users to explore the environment in a more natural manner.
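
The reactive obstacle notifications mentioned above could look roughly like the following proximity check; the distance threshold, obstacle representation, and notify() hook are assumptions for illustration only.

# Hedged sketch: warn the simulated pedestrian when a dangerous obstacle is near.
# Threshold, data layout, and the notify() hook are illustrative placeholders.
import math

WARNING_DISTANCE_M = 3.0

def check_obstacles(pedestrian_pos, obstacles, notify=print):
    px, py = pedestrian_pos
    for name, (ox, oy) in obstacles.items():
        if math.hypot(ox - px, oy - py) < WARNING_DISTANCE_M:
            notify(f"Warning: {name} ahead")

check_obstacles((0.0, 0.0), {"approaching car": (2.0, 1.5), "open manhole": (10.0, 0.0)})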

Collaboration


Dive into Daniel Sonntag's collaborations.

Top Co-Authors

Alexander Cavallaro
University of Erlangen-Nuremberg

Matthias Hammon
University of Erlangen-Nuremberg

András Lörincz
Eötvös Loránd University

Paul Buitelaar
National University of Ireland