David Raneburger
Vienna University of Technology
Publications
Featured research published by David Raneburger.
Model-Driven Development of Advanced User Interfaces | 2011
David Raneburger; Roman Popp; Sevan Kavaldjian; Hermann Kaindl; Jürgen Falb
More and more devices with small screens are used to run the same application. To reduce usability problems, user interfaces (UIs) specific to screen size (and the related resolution) are needed, but it is time-consuming and costly to implement all the different UIs manually.
Engineering Interactive Computing Systems | 2013
Roman Popp; David Raneburger; Hermann Kaindl
Automated generation of graphical user interfaces (GUIs) from models is possible, but the usability of the generated GUIs is often not good enough for real-world use, in particular not on small devices. Automated tailoring of GUIs for different devices is also still an issue. Our tools provide such tailoring through automatic optimization of corresponding objectives under given constraints. Currently, two different optimization strategies are implemented, focusing on tapping and on vertical scrolling on touchscreens, respectively. The constraints (relevant properties such as screen size and resolution) are provided by the users of our tools in device specifications. Through our tool support, WIMP (window, icon, menu, pointer) GUIs can be generated at a decent level of usability nearly automatically, in particular for small devices. This is important given the increasingly widespread use of smartphones.
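As a rough illustration of the ideas in this abstract, the following is a minimal sketch of what a device specification and the choice between the two tailoring strategies could look like. All class names, fields, and the threshold heuristic are assumptions made for illustration, not the actual API or decision logic of the tools described.

```java
// Illustrative sketch only: names, fields, and the heuristic below are
// assumptions, not the actual API of the tools described in the paper.
public class DeviceTailoringSketch {

    // A device specification carrying the constraint-relevant properties
    // mentioned in the abstract (screen size and resolution).
    record DeviceSpec(int widthPx, int heightPx, double diagonalInches) {
        double pixelDensity() {
            double diagPx = Math.hypot(widthPx, heightPx);
            return diagPx / diagonalInches; // pixels per inch
        }
    }

    // The two strategies the abstract names: one focused on tapping,
    // one on vertical scrolling on touchscreens.
    enum Strategy { MINIMIZE_TAPPING, MINIMIZE_VERTICAL_SCROLLING }

    // Hypothetical heuristic: on small touchscreens, accept taps and keep
    // pages short; on larger screens, keep navigation (tapping) minimal.
    static Strategy chooseStrategy(DeviceSpec spec) {
        boolean smallScreen = spec.diagonalInches() < 7.0; // assumed cutoff
        return smallScreen ? Strategy.MINIMIZE_VERTICAL_SCROLLING
                           : Strategy.MINIMIZE_TAPPING;
    }

    public static void main(String[] args) {
        DeviceSpec phone = new DeviceSpec(1080, 1920, 5.0);
        System.out.println(chooseStrategy(phone)); // MINIMIZE_VERTICAL_SCROLLING
    }
}
```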
Intelligent User Interfaces | 2009
Jürgen Falb; Sevan Kavaldjian; Roman Popp; David Raneburger; Edin Arnautovic; Hermann Kaindl
Automatic generation of user interfaces (UIs) has made some progress, but it still faces many challenges, especially when starting from high-level models. We developed an approach and a supporting tool for modeling discourses, from which the tool can generate WIMP (window, icon, menu, pointer) UIs automatically. This involves several complex steps, most of which we have been able to implement using model-driven transformations. When given specific target platform specifications, UIs for a variety of devices such as PCs, mobile phones and PDAs can be generated automatically.
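To make the idea of generating UIs from discourse models concrete, here is a minimal sketch of one model-driven transformation step that maps discourse-level communicative acts to abstract widgets. The act and widget taxonomy below is a simplified assumption for illustration, not the authors' actual metamodel or rule set.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch: a rule-based model-to-model transformation from
// discourse-level communicative acts to abstract WIMP widgets. The
// taxonomy here is an assumption, not the tool's actual metamodel.
public class DiscourseToUiSketch {

    enum CommunicativeAct { QUESTION, ANSWER, REQUEST, INFORMING }

    record AbstractWidget(String type, String label) {}

    // One transformation rule per act type.
    static final Map<CommunicativeAct, String> RULES = Map.of(
            CommunicativeAct.QUESTION,  "InputField",
            CommunicativeAct.ANSWER,    "OutputText",
            CommunicativeAct.REQUEST,   "Button",
            CommunicativeAct.INFORMING, "Label");

    static AbstractWidget transform(CommunicativeAct act, String content) {
        return new AbstractWidget(RULES.get(act), content);
    }

    public static void main(String[] args) {
        List<AbstractWidget> ui = List.of(
                transform(CommunicativeAct.QUESTION, "Destination?"),
                transform(CommunicativeAct.REQUEST, "Search flights"));
        ui.forEach(w -> System.out.println(w.type() + ": " + w.label()));
    }
}
```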
Engineering Interactive Computing Systems | 2011
David Raneburger; Roman Popp; Hermann Kaindl; Jürgen Falb; Dominik Ertl
Any graphical user interface needs to have a defined structure and behavior. In particular, models of Window/Icon/Menu/Pointing device (WIMP) UIs need to represent structure and behavior at some level of abstraction, possibly in separate models. High-level conceptual models such as Task or Discourse Models do not model the UI per se. Therefore, in the course of automated generation of (WIMP) UIs from such models, both the structure and the behavior of the UI need to be generated, and they need to fit together. To achieve that, we devised a new approach to weaving structural and behavioral models on different levels of abstraction.
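A minimal sketch of what "fitting together" could mean operationally: a weaving step that checks each behavioral transition against the structural model it references. All names and the consistency check itself are assumptions for illustration, not the authors' weaving mechanism.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of weaving a structural model (widgets on pages)
// with a behavioral model (transitions between pages). All names are
// assumptions, not the authors' metamodels.
public class ModelWeavingSketch {

    record Widget(String id, String type) {}
    record Page(String name, List<Widget> widgets) {}

    // Behavioral model: an event on a widget triggers a page change.
    record Transition(String sourcePage, String widgetId, String targetPage) {}

    // The weaving step accepts a transition only if it is anchored at a
    // widget that actually exists in the structural model.
    static List<Transition> weave(List<Page> structure, List<Transition> behavior) {
        List<Transition> woven = new ArrayList<>();
        for (Transition t : behavior) {
            boolean anchored = structure.stream()
                    .filter(p -> p.name().equals(t.sourcePage()))
                    .flatMap(p -> p.widgets().stream())
                    .anyMatch(w -> w.id().equals(t.widgetId()));
            if (!anchored) {
                throw new IllegalStateException(
                        "Behavior references missing widget: " + t.widgetId());
            }
            woven.add(t);
        }
        return woven;
    }
}
```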
International Conference on Electronic Commerce | 2011
Roman Popp; David Raneburger
Electronic commerce (eCommerce) environments have been emerging together with the Internet over the past decades. This has led to a heterogeneous eCommerce landscape, resulting in interoperability problems between interacting agents. Interaction protocols like FIPA-ACL support the definition of the format of the exchanged messages and therefore improve interoperability. However, they support neither the specification of the format of the exchanged data nor of how these data shall be processed, which leads to further interoperability problems. We propose the use of an interaction ontology, the Communication Ontology, as an agent interaction protocol. A Communication Ontology combines a domain ontology, a discourse ontology and an action ontology to specify the flow of interaction as well as the format of the exchanged messages and data, and how they shall be processed. The combination of these three ontologies into one improves the interoperability between the interacting agents and supports the quick adaptations that become necessary due to rapidly evolving markets and technological advances.
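A minimal sketch of how the three ontologies could be combined into one structure that acts as an interaction protocol, checking both the flow of interaction and the exchanged data format. The field names and the acceptance check are assumptions for illustration, not the paper's actual ontology design.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of a Communication Ontology as the abstract
// describes it: one structure combining a domain, a discourse, and an
// action ontology. Names and methods are assumptions.
public class CommunicationOntologySketch {

    // Domain ontology: concepts and their properties (exchanged data format).
    record DomainOntology(Map<String, Set<String>> conceptProperties) {}

    // Discourse ontology: which communicative act may follow which.
    record DiscourseOntology(Map<String, Set<String>> allowedFollowUps) {}

    // Action ontology: how received data shall be processed per act.
    record ActionOntology(Map<String, String> actToAction) {}

    record CommunicationOntology(DomainOntology domain,
                                 DiscourseOntology discourse,
                                 ActionOntology actions) {

        // A message is acceptable if its act may follow the previous act
        // and its payload concept is defined in the domain ontology.
        boolean accepts(String previousAct, String act, String concept) {
            return discourse.allowedFollowUps()
                            .getOrDefault(previousAct, Set.of())
                            .contains(act)
                    && domain.conceptProperties().containsKey(concept);
        }
    }
}
```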
Systems, Man and Cybernetics | 2009
Sevan Kavaldjian; David Raneburger; Jürgen Falb; Hermann Kaindl; Dominik Ertl
Development of GUIs (graphical user interfaces) for multiple devices is still a time-consuming and error-prone task. Each class of physical devices, and in addition each application-tailored set of physical devices, has different properties and thus needs a specifically tailored GUI. Current model-driven GUI generation approaches take only a few properties into account, such as screen resolution. Additional device properties, especially pointing granularity, allow generating GUIs suited for certain classes of devices such as touchscreens. This paper builds on a model-driven UI development approach for multiple devices that uses a discourse model as the interaction design. Our approach generates UIs using an extended device specification and applies model transformation rules that take these properties into account. In particular, we show how to semi-automatically generate finger-based touchscreen UIs and compare them with conventional mouse-based UIs that have also been generated semi-automatically.
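To make "pointing granularity" concrete, here is a minimal sketch of a transformation rule that sizes interaction targets according to the smallest target the pointing device can hit reliably. The millimetre values and all names are assumptions for illustration, not the paper's actual rules or thresholds.

```java
// Illustrative sketch: a transformation rule driven by pointing
// granularity, as the abstract describes. The mm values and names are
// assumptions for illustration only.
public class PointingGranularitySketch {

    // Extended device specification: resolution plus pointing granularity,
    // i.e. the smallest target the pointing device can hit reliably.
    record DeviceSpec(double pixelsPerMm, double pointingGranularityMm) {}

    record ButtonSpec(String label, int minWidthPx, int minHeightPx) {}

    // Rule: render targets at least as large as the pointing granularity.
    // A finger (assumed ~9 mm) yields larger buttons than a mouse (~1 mm).
    static ButtonSpec sizeButton(String label, DeviceSpec device) {
        int minSidePx = (int) Math.ceil(
                device.pointingGranularityMm() * device.pixelsPerMm());
        return new ButtonSpec(label, minSidePx, minSidePx);
    }

    public static void main(String[] args) {
        DeviceSpec touch = new DeviceSpec(6.0, 9.0); // finger-based touchscreen
        DeviceSpec mouse = new DeviceSpec(4.0, 1.0); // desktop with mouse
        System.out.println(sizeButton("OK", touch).minWidthPx()); // 54 px
        System.out.println(sizeButton("OK", mouse).minWidthPx()); // 4 px
    }
}
```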
Hawaii International Conference on System Sciences | 2015
David Raneburger; Hermann Kaindl; Roman Popp
When the same graphical user interface (GUI) is used on multiple devices with different properties, usability problems arise, especially when GUI pages are too large for a (small) screen. Scrolling is a usual approach in such a situation, but its acceptability also depends on device properties. For desktop PCs used with a mouse, it is well known that scrolling should be avoided. In contrast, for touch-based devices like tablet PCs or smartphones used with fingers, scrolling, especially in the vertical direction, is widely used and accepted. Providing several GUIs tailored for multiple devices is therefore desirable, but expensive and time-consuming. Automated GUI generation may help when different GUIs can be generated from the same high-level interaction design model. This is still an issue, however, since adaptations to high-level models usually have to be made manually. Our approach requires just a device specification with a few parameters for automated GUI tailoring, which employs heuristic optimization techniques. Our fully implemented approach even offers different tailoring strategies for automated GUI generation.
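As a sketch of what heuristic optimization over such a device specification could look like: score layout candidates against the device and pick the cheapest. The cost model, its weights, and all names below are invented for illustration; the paper's actual heuristics are not reproduced here.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of heuristic GUI tailoring as an optimization
// problem: score candidates against a device specification and pick
// the best. The cost model and weights are assumptions.
public class GuiTailoringSketch {

    record DeviceSpec(int screenHeightPx, boolean touchBased) {}

    // A candidate splits the same content into pages of a given height.
    record LayoutCandidate(int pages, int pageHeightPx) {}

    // Hypothetical cost: navigation taps between pages, plus scrolling
    // overflow per page, weighted lower on touch devices where vertical
    // scrolling is accepted (as the abstract notes).
    static double cost(LayoutCandidate c, DeviceSpec d) {
        double tapCost = (c.pages() - 1) * 1.0;
        double overflowPx = Math.max(0, c.pageHeightPx() - d.screenHeightPx());
        double scrollWeight = d.touchBased() ? 0.001 : 0.01;
        return tapCost + overflowPx * scrollWeight * c.pages();
    }

    static LayoutCandidate tailor(List<LayoutCandidate> candidates, DeviceSpec d) {
        return candidates.stream()
                .min(Comparator.comparingDouble(c -> cost(c, d)))
                .orElseThrow();
    }
}
```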
Engineering Interactive Computing Systems | 2010
David Raneburger
The current multitude of devices with different screen resolutions or graphic toolkits requires different user interfaces (UIs) for the same application. Model-driven UI development solves this problem by transforming one target-device-independent specification into several target-device-dependent UIs. However, the established Model Driven Architecture (MDA) transformation process is not flexible enough to fully support all requirements of UI development. The vision of this thesis is to bridge the gap between the capabilities of model-driven software engineering and the requirements of UI development. This work introduces an interactive model-driven UI development approach that gives the designer control over the UI during the development process. Additional interactive support enables the designer to make informed design decisions, which will ultimately lead to more satisfying UIs.
ACM Symposium on Applied Computing | 2014
David Raneburger; Hermann Kaindl; Roman Popp; Vedran Šajatović; Alexander Armbruster
A model representing an interaction design is a prerequisite for model-driven generation of graphical user interfaces (GUIs). Related state-of-the-art methodologies typically assume that a suitable interaction model is already available; they neither support the exploration of design alternatives nor focus on how a high-quality interaction model can be developed. Our tool-supported process facilitates the exploration and evaluation of interaction design alternatives in an iterative manner, using automated GUI generation to arrive at a running application more quickly and with less effort than manual (prototype) development. This allows the designer to find a suitable alternative quickly. In general, this approach facilitates the development of high-quality interaction models through automated GUI generation.
Asia-Pacific Software Engineering Conference | 2013
Roman Popp; Hermann Kaindl; David Raneburger
In the research on automated (design-time) generation of graphical user interfaces (GUIs), the focus is on how such a generation works and on the appearance of the resulting GUIs. However, the software implementing the resulting GUI is typically not integrated with the application logic within a well-defined software architecture. We propose an integration based on the well-known Model-View-Controller (MVC) architecture, with its decoupling of different concerns and the resulting software properties. We show that most of its components can be generated automatically from Discourse-based Communication Models, and we show how these models connect to an application logic. In this way, we present an implemented and tested approach to connecting a high-level interaction model, and the Web-based GUI software generated from it, with an application logic and its implementation in the context of model-driven GUI generation.
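A minimal sketch of the MVC decoupling the abstract argues for: a generated view and controller wired to a hand-written application logic behind a model interface. The flight-search domain and all names are hypothetical; only the architectural split follows the abstract.

```java
// Illustrative sketch of the MVC integration described in the abstract:
// generated view and controller, hand-written application logic behind
// a model interface. All names are assumptions for illustration.
public class MvcIntegrationSketch {

    // Application logic: the only hand-written part in this sketch.
    interface FlightModel {
        java.util.List<String> searchFlights(String destination);
    }

    // Conceptually generated from the communication model: forwards UI
    // events to the application logic and pushes results to the view.
    static class GeneratedController {
        private final FlightModel model;
        private final GeneratedView view;

        GeneratedController(FlightModel model, GeneratedView view) {
            this.model = model;
            this.view = view;
        }

        void onSearchRequested(String destination) {
            view.showResults(model.searchFlights(destination));
        }
    }

    // Conceptually generated view: rendering only, no application logic.
    static class GeneratedView {
        void showResults(java.util.List<String> flights) {
            flights.forEach(System.out::println);
        }
    }
}
```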