Dominik Ertl
Vienna University of Technology
Publication
Featured research published by Dominik Ertl.
hawaii international conference on system sciences | 2009
Roman Popp; Jürgen Falb; Edin Arnautovic; Hermann Kaindl; Sevan Kavaldjian; Dominik Ertl; Helmut Horacek; Cristian Bogdan
In addition to the structure and “look” of a user interface (UI), its behavior needs to be defined. For fully automated UI generation, this behavior will, of course, have to be generated fully automatically as well. We avoid having a UI designer manually create finite-state machines or similar artifacts. Instead, we start from a largely declarative high-level discourse model that includes a few procedural constructs. Based on our definitions of the procedural semantics of all parts of such a discourse model, we are able to automatically generate a finite-state machine that fully defines the behavior of the generated UI. In this way, we show how the behavior of a user interface can be generated automatically from a high-level discourse model.
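The following Python sketch illustrates the general idea of such a derivation; the discourse node types, event names, and the state-machine representation are simplified assumptions for illustration, not the authors' actual tool.

    # Illustrative sketch (not the paper's implementation): deriving a
    # finite-state machine from a tiny, hypothetical discourse model whose
    # nodes carry procedural semantics such as "sequence".
    from dataclasses import dataclass, field

    @dataclass
    class CommunicativeAct:          # hypothetical leaf of a discourse model
        name: str

    @dataclass
    class Sequence:                  # procedural construct: execute parts in order
        parts: list

    @dataclass
    class StateMachine:
        states: list = field(default_factory=list)
        transitions: list = field(default_factory=list)   # (src, event, dst)

        def add_state(self, name):
            self.states.append(name)
            return name

    def generate_fsm(node, fsm, entry):
        """Return the exit state reached after executing 'node' from 'entry'."""
        if isinstance(node, CommunicativeAct):
            done = fsm.add_state(f"{node.name}_done")
            # the UI waits in 'entry' until the act's response event arrives
            fsm.transitions.append((entry, f"{node.name}_answered", done))
            return done
        if isinstance(node, Sequence):
            current = entry
            for part in node.parts:            # chain the parts' sub-machines
                current = generate_fsm(part, fsm, current)
            return current
        raise TypeError(f"unsupported discourse node: {node!r}")

    # usage: a question followed by an informing act
    model = Sequence([CommunicativeAct("ask_destination"),
                      CommunicativeAct("inform_route")])
    fsm = StateMachine()
    start = fsm.add_state("start")
    generate_fsm(model, fsm, start)
    print(fsm.transitions)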
engineering interactive computing system | 2011
David Raneburger; Roman Popp; Hermann Kaindl; Jürgen Falb; Dominik Ertl
Any graphical user interface needs to have a defined structure and behavior. So, in particular, models of Window / Icon / Menu / Pointing Device (WIMP) UIs need to represent structure and behavior at some level of abstraction, possibly in separate models. High-level conceptual models such as Task or Discourse Models do not model the UI per se. Therefore, in the course of automated generation of (WIMP) UIs from such models, the structure and behavior of the UI need to be generated, and they need to fit together. To achieve this, we devised a new approach to weaving structural and behavioral models on different levels of abstraction.
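As a rough illustration of what weaving structure and behavior can mean, the following Python fragment checks that every behavioral state is joined to a screen of a structural model; the screen, widget, and state names are invented for this sketch and do not come from the paper.

    # Minimal sketch, not the paper's tool chain: "weaving" a structural model
    # (screens and widgets) with a behavioral model (states and transitions) by
    # resolving, for every state, which screen the generated WIMP UI must show.
    structure = {                       # hypothetical structural model
        "AskDestinationScreen": ["destination_list", "ok_button"],
        "ShowRouteScreen": ["route_map", "back_button"],
    }

    behavior = [                        # hypothetical behavioral model
        # (state, event, next_state)
        ("ask_destination", "ok_pressed", "show_route"),
        ("show_route", "back_pressed", "ask_destination"),
    ]

    state_to_screen = {                 # the weaving: join points between models
        "ask_destination": "AskDestinationScreen",
        "show_route": "ShowRouteScreen",
    }

    def check_weaving():
        """Every state must be woven to an existing screen, or generation fails."""
        for state, _, nxt in behavior:
            for s in (state, nxt):
                screen = state_to_screen.get(s)
                assert screen in structure, f"state {s!r} has no screen"
        print("structural and behavioral models fit together")

    check_weaving()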
systems, man and cybernetics | 2009
Sevan Kavaldjian; David Raneburger; Jürgen Falb; Hermann Kaindl; Dominik Ertl
Development of GUIs (graphical user interfaces) for multiple devices is still a time-consuming and error-prone task. Each class of physical devices - and in addition each application-tailored set of physical devices - has different properties and thus needs a specifically tailored GUI. Current model-driven GUI generation approaches take only a few properties into account, such as screen resolution. Additional device properties, especially pointing granularity, allow generating GUIs suited to certain classes of devices such as touch screens. This paper builds on a model-driven UI development approach for multiple devices that starts from a discourse model providing an interaction design. Our approach generates UIs using an extended device specification and applying model transformation rules that take these properties into account. In particular, we show how to semi-automatically generate finger-based touch screen UIs and compare them with usual mouse-operated UIs that have also been generated semi-automatically.
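A minimal sketch of a device-aware transformation rule might look as follows; the device-specification fields, thresholds, and widget names are illustrative assumptions, not the paper's actual rule set.

    # Hedged sketch of a device-aware transformation rule, assuming a simple
    # device specification with 'resolution' and 'pointing_granularity_mm'
    # fields; widget names and thresholds are illustrative only.
    def choose_selection_widget(n_options, device):
        """Pick a concrete widget for a 'select one of n' interaction."""
        finger_sized = device["pointing_granularity_mm"] >= 8   # touch screens
        if finger_sized:
            # large buttons if they fit, otherwise a scrollable list
            return "button_grid" if n_options <= 6 else "scroll_list"
        # fine-grained pointing (mouse): compact widgets are acceptable
        return "radio_group" if n_options <= 10 else "dropdown"

    touch_kiosk = {"resolution": (1280, 800), "pointing_granularity_mm": 10}
    desktop_pc  = {"resolution": (1920, 1080), "pointing_granularity_mm": 1}

    print(choose_selection_widget(4, touch_kiosk))   # button_grid
    print(choose_selection_widget(4, desktop_pc))    # radio_group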
engineering interactive computing system | 2009
Dominik Ertl
Multimodal applications are typically developed together with their user interfaces, leading to a tight coupling. Additionally, human-computer interaction design often receives too little consideration. This can result in a poor user interface when additional modalities have to be integrated and/or the application is to be developed for a different device. A promising way of creating multimodal user interfaces with less effort for applications running on several devices is semi-automatic generation. This work shows the generation of multimodal interfaces where a discourse model is transformed into different automatically rendered modalities. It supports loose coupling of the design of human-computer interaction and the integration of specific modalities. The presented communication platform utilizes this transformation process. It allows for high-level integration of input modalities such as speech, hand gestures and a WIMP UI. Output can be generated with the modalities speech and GUI. Integration of other input and output modalities is supported as well. Furthermore, the platform is applicable to several applications as well as different devices, e.g., PDAs and PCs.
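The kind of loose coupling described here can be pictured as a small publish/subscribe layer between modality plug-ins and the application; the class and act names below are hypothetical and only illustrate the idea, not the platform's actual API.

    # Illustrative sketch: modality plug-ins (speech, gesture, WIMP UI) map
    # their raw input onto the same abstract communicative acts, so the
    # application never depends on a concrete modality.
    from typing import Callable

    class InteractionPlatform:
        def __init__(self):
            self._handlers: dict[str, list[Callable]] = {}

        def on(self, act: str, handler: Callable):
            self._handlers.setdefault(act, []).append(handler)

        def publish(self, act: str, **payload):
            for handler in self._handlers.get(act, []):
                handler(**payload)

    platform = InteractionPlatform()
    platform.on("select_item", lambda item: print(f"application got: {item}"))

    # each modality plug-in translates its raw input into the same act
    platform.publish("select_item", item="coffee")   # GUI click
    platform.publish("select_item", item="coffee")   # speech: "take the coffee"
    platform.publish("select_item", item="coffee")   # pointing gesture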
hawaii international conference on system sciences | 2011
Hermann Kaindl; Roman Popp; David Raneburger; Dominik Ertl; Jürgen Falb; Alexander Szep; Cristian Bogdan
Computer-supported cooperative work has been studied extensively and has led to widely useful applications. However, little emphasis has been placed on supporting cooperative work through robots. Hence, there is no deep understanding of what is needed to support tasks that involve individually moving communication partners and for which the physical context is relevant. This work shows an example of robot-supported cooperative work, where two robots communicate with each other to indirectly support communication between their human users. This example is a shared-shopping scenario. For its realization, we make use of high-level discourse models both for specifying communication between a robot shopping cart and its human user, and between two robot carts. The communication emerging from such intertwined discourses supports a shared (shopping) task of their two human users, who collaborate based on the shopping list shared in this way. Such support is important in a setting where the physical context is relevant, e.g., the vicinity to products.
international multi-conference on computing in global information technology | 2009
Harald Krapfenbauer; Dominik Ertl; Alois Zoitl; Friederich Kupzog
Industrial automation systems are nowadays tested mainly via system tests at a very late stage of development. These tests are conducted manually and are time-consuming and cost-intensive. Earlier testing of automation software, e.g., component testing, is therefore desirable in order to reduce the effort for system testing by detecting errors sooner. In this paper we present an improved concept for a test environment that enables developers of industrial control electronics to test the functionality of IEC 61499 software components. Components can be tested on any hardware with an IEC 61499 runtime environment, even on the target hardware. There is no need to change the automation software for testing. We propose using dynamically typed languages to implement tests because such languages have inherent properties that are useful for this task. We provide example code of a typical test case.
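Since the abstract only refers to example code of a typical test case, the following is a hedged sketch of what such a test might look like in a dynamically typed language (Python here; the paper does not mandate a specific one). The runtime-connection interface, function-block name, and port names are assumptions for illustration.

    # Sketch of a component test against a (faked) IEC 61499 runtime
    # connection; in a real setup the fake would be replaced by a client
    # talking to the runtime environment on the target hardware.
    import unittest

    class FakeRuntimeConnection:
        """Stand-in for a connection to an IEC 61499 runtime environment."""
        def write_input(self, fb, port, value): ...
        def trigger_event(self, fb, event): ...
        def read_output(self, fb, port):
            return 25                      # canned value for this sketch

    class HysteresisControllerTest(unittest.TestCase):
        def setUp(self):
            self.rt = FakeRuntimeConnection()

        def test_output_follows_setpoint(self):
            self.rt.write_input("E_HYST", "SETPOINT", 25)
            self.rt.trigger_event("E_HYST", "REQ")
            self.assertEqual(self.rt.read_output("E_HYST", "OUT"), 25)

    if __name__ == "__main__":
        unittest.main()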
advances in computer-human interaction | 2010
Dominik Ertl; Jürgen Falb; Hermann Kaindl
Fission of several output modalities poses hard problems, and (semi-)automatically configuring it is even more difficult. However, it is important to address the latter in order to broaden the scope of providing user interfaces semi-automatically. Our approach starts from a high-level discourse model created by a human interaction designer. It is modality-independent, so a modality-annotated discourse model is semi-automatically generated from it. Based on this annotated model, our fission is semi-automatically configured. It currently supports the output modalities graphical user interface, (canned) speech output, and a new modality that we call movement as communication. The latter involves movements of a semi-autonomous robot in 2D space for reinforcing the communication of the other modalities.
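A very reduced picture of such a fission step is given below; the annotation format and renderer names are invented for illustration and are not the authors' configuration mechanism.

    # Minimal sketch: a modality-annotated output act is distributed to every
    # channel it is annotated with (GUI, canned speech, robot movement).
    renderers = {
        "gui":      lambda text: print(f"[GUI panel ] {text}"),
        "speech":   lambda text: print(f"[TTS canned] {text}"),
        "movement": lambda text: print(f"[robot move] nod towards user: {text}"),
    }

    def fission(annotated_act):
        """Distribute one output act to every modality it is annotated with."""
        for modality in annotated_act["modalities"]:
            renderers[modality](annotated_act["content"])

    fission({"content": "Please take the red package.",
             "modalities": ["gui", "speech", "movement"]})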
hawaii international conference on system sciences | 2015
Ralph Hoch; Hermann Kaindl; Roman Popp; Dominik Ertl; Helmut Horacek
Semantic specification of services based on formal logic can be used for automated verification of service composition. In order to make such verifications consistent with validations of service compositions in the context of business processes, more and more knowledge needs to be included in the related specifications. Using a simple example, we show that adding such additional knowledge directly to the semantic specifications of services may leave them over-specified. We found that this additional knowledge can be a special kind of business rules. Therefore, we propose to specify them separately, but also based on formal logic. More precisely, the use of the Fluent Calculus and the related FLUX tool enabled automated and guaranteed verification of composed services against the specifications of the single services. Adding the formalized business rules to such verifications made them consistent with validations of service compositions in the context of business processes. Overall, both verification and validation (V&V) are essential for service composition and business processes. Consequently, this novel approach to V&V should support a comprehensive approach to service design.
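FLUX itself is a Prolog-based tool, so the following Python fragment only illustrates the underlying idea of checking a composition against single-service specifications plus a separately stated business rule; all service names, fluents, and the rule itself are invented for this sketch.

    # Conceptual sketch: services as precondition/effect specifications,
    # with a business rule kept separate and checked after every step.
    services = {
        "reserve_item": {"pre": {"item_in_stock"}, "add": {"item_reserved"},
                         "del": {"item_in_stock"}},
        "charge_card":  {"pre": {"item_reserved"}, "add": {"payment_done"},
                         "del": set()},
        "ship_item":    {"pre": {"item_reserved", "payment_done"},
                         "add": {"item_shipped"}, "del": {"item_reserved"}},
    }

    # business rule, stated separately from the service specifications:
    # an item must never be shipped before payment is done
    def business_rule_ok(state):
        return "item_shipped" not in state or "payment_done" in state

    def verify(composition, state):
        for name in composition:
            spec = services[name]
            if not spec["pre"] <= state:
                return False, f"precondition of {name} violated"
            state = (state - spec["del"]) | spec["add"]
            if not business_rule_ok(state):
                return False, f"business rule violated after {name}"
        return True, "composition verified"

    print(verify(["reserve_item", "charge_card", "ship_item"], {"item_in_stock"}))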
international conference on systems | 2009
Hermann Kaindl; Edin Arnautovic; Dominik Ertl; Jürgen Falb
For the (distributed) development of certain highly innovative software-intensive systems such as semi-autonomous robots, it is not clear which life cycle approach is best to follow. Especially in a (local) research environment, development typically happens in some bottom-up form of prototyping. In contrast, standard systems engineering would (still) prescribe a waterfall life cycle, while software engineering would strongly suggest some form of iterative and incremental development. Iterative development has high potential for improvements in general systems engineering, but in contrast to pure software development it also has inherent limits. We investigate these issues and propose a new iterative but not incremental life cycle approach. It involves iterations of requirements engineering and architecting, but not of low-level design, implementation and testing. The reason for the latter lies in the costs and time inherently required for hardware development.
systems, man and cybernetics | 2013
Dominik Ertl; Sven Dominka; Hermann Kaindl
In state-of-the-art automotive systems, many automated driving features such as adaptive cruise control are usually integrated in various combinations. These combinations may lead to undesired feature interaction, where one such feature creates conditions that interfere with the proper execution of one or more of the other features. In such a situation, the usability of the vehicle or even the safety of humans may be compromised. We propose to improve Systems Engineering for coordinating feature implementations so as to avoid undesired feature interaction. In particular, we reuse knowledge on addressing a certain recurring problem with a proven (generalized) solution in the form of a so-called Pattern, as devised in the field of object-oriented Software Engineering. The so-called Mediator Pattern organizes the work of software parts that would otherwise have many interfaces among each other, leading to high coupling. We devise such a Mediator at design time, so that known undesired feature interactions can be avoided by the resulting system at runtime. A prototypical implementation in a real automotive system and its test demonstrate the feasibility of this approach in the sense that known feature interactions can be avoided in a systematic way.
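A simplified sketch of the Mediator idea in this setting is shown below; the features, the request interface, and the priority-based arbitration are illustrative assumptions rather than the paper's actual automotive implementation.

    # Sketch: features never talk to each other; a single Mediator, fixed at
    # design time, arbitrates conflicting requests at runtime.
    class FeatureMediator:
        """Single coordination point for automated driving features."""
        # lower number = higher priority, a design-time decision
        PRIORITY = {"emergency_brake_assist": 0,
                    "adaptive_cruise_control": 1,
                    "parking_assist": 2}

        def __init__(self):
            self._requests = {}     # feature -> requested acceleration [m/s^2]

        def request_acceleration(self, feature, value):
            self._requests[feature] = value
            return self._arbitrate()

        def _arbitrate(self):
            # known undesired interactions are resolved by priority
            winner = min(self._requests, key=self.PRIORITY.__getitem__)
            return winner, self._requests[winner]

    mediator = FeatureMediator()
    print(mediator.request_acceleration("adaptive_cruise_control", 0.8))
    print(mediator.request_acceleration("emergency_brake_assist", -6.0))
    # -> the brake assist's request wins over cruise control's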