Lode Hoste
Vrije Universiteit Brussel
Publications
Featured research published by Lode Hoste.
tangible and embedded interaction | 2011
Christophe Scholliers; Lode Hoste; Beat Signer; Wolfgang De Meuter
Over the past few years, multi-touch user interfaces have evolved from research prototypes into mass-market products. This evolution has been driven mainly by innovative devices such as Apple's iPhone or Microsoft's Surface tabletop computer. Unfortunately, existing multi-touch development frameworks lack software engineering abstractions: many multi-touch applications are based on hard-coded, procedural low-level event processing, which leads to proprietary solutions with poor gesture extensibility and cross-application reusability. We present Midas, a declarative model for the definition and detection of multi-touch gestures in which gestures are expressed via logical rules over a set of input facts. We highlight how our rule-based language approach improves gesture extensibility and reusability. Last but not least, we introduce JMidas, an instantiation of Midas for the Java programming language, and describe how JMidas has been applied to implement a number of innovative multi-touch gestures.
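The core idea of expressing a gesture as logical rules over a fact base of touch events can be sketched as follows. This is an illustrative mini rule engine in Python, not the actual Midas/JMidas API (which is Java-based); the `Touch` fact, the `two_finger_pinch` rule, and its thresholds are all hypothetical.

```python
# Minimal sketch of rule-based gesture detection in the spirit of Midas:
# a gesture is a predicate over a fact base of touch events, rather than
# a hard-coded event handler. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Touch:
    """An input fact: one finger contact at a point in time."""
    finger_id: int
    x: float
    y: float
    t: float

def two_finger_pinch(facts, max_dt=0.1, min_shrink=20.0):
    """Rule: two roughly concurrent touch pairs whose distance decreases."""
    for a1 in facts:
        for a2 in facts:
            for b1 in facts:
                for b2 in facts:
                    if (a1.finger_id == a2.finger_id != b1.finger_id == b2.finger_id
                            and a1.t < a2.t and b1.t < b2.t
                            and abs(a1.t - b1.t) <= max_dt
                            and abs(a2.t - b2.t) <= max_dt):
                        d_start = ((a1.x - b1.x) ** 2 + (a1.y - b1.y) ** 2) ** 0.5
                        d_end = ((a2.x - b2.x) ** 2 + (a2.y - b2.y) ** 2) ** 0.5
                        if d_start - d_end >= min_shrink:
                            return True
    return False

facts = [Touch(1, 0, 0, 0.00), Touch(2, 100, 0, 0.02),
         Touch(1, 30, 0, 0.50), Touch(2, 70, 0, 0.52)]
print(two_finger_pinch(facts))  # True: the two fingers moved towards each other
```

Because the gesture is a declarative rule over facts rather than code entangled with event callbacks, it can be reused across applications and extended without touching the event-dispatch logic.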
international conference on multimodal interfaces | 2011
Lode Hoste; Bruno Dumas; Beat Signer
In recent years, multimodal interfaces have gained momentum as an alternative to traditional WIMP interaction styles. Existing multimodal fusion engines and frameworks range from low-level data stream-oriented approaches to high-level semantic inference-based solutions. However, there is a lack of multimodal interaction engines offering native fusion support across different levels of abstraction to fully exploit the power of multimodal interactions. We present Mudra, a unified multimodal interaction framework supporting the integrated processing of low-level data streams as well as high-level semantic inferences. Our solution is based on a central fact base in combination with a declarative rule-based language to derive new facts at different abstraction levels. Our innovative architecture for multimodal interaction encourages the use of software engineering principles such as modularisation and composition to support a growing set of input modalities as well as to enable the integration of existing or novel multimodal fusion engines.
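The "central fact base plus declarative rules" architecture can be sketched as a forward-chaining loop that derives higher-level facts from lower-level ones. This is an illustrative simplification, not Mudra's actual API; the fact encoding and the single fusion rule are hypothetical.

```python
# Illustrative sketch of a central fact base with declarative rules that
# derive higher-level facts from lower-level ones, so stream-level and
# semantic-level fusion share one inference mechanism. Not the Mudra API.

facts = {("speech", "put"), ("speech", "that"), ("point", "lamp"), ("point", "table")}

# A rule maps matching facts in the fact base to new, more abstract facts.
rules = [
    # "put that there"-style fusion: a deictic verb plus two pointing facts
    lambda fs: {("command", ("move", "lamp", "table"))}
               if {("speech", "put"), ("point", "lamp"), ("point", "table")} <= fs
               else set(),
]

def infer(facts, rules):
    """Run all rules to a fixpoint, adding derived facts to the fact base."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(facts) - facts
            if new:
                facts |= new
                changed = True
    return facts

derived = infer(set(facts), rules)
print(("command", ("move", "lamp", "table")) in derived)  # True
```

Low-level stream facts and high-level semantic facts live in the same base, so a rule at any abstraction level can match on the output of rules below it.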
Cognitive Neuropsychiatry | 2015
Lise Docx; Javier de la Asuncion; Bernard Sabbe; Lode Hoste; Robin Baeten; Nattapon Warnaerts; Manuel Morrens
Introduction. Deficits in the initiation and persistence of goal-directed behaviour are key aspects of schizophrenia. In this study, the association between these motivational deficits and discounting of reward value as a function of increasing physical effort costs was investigated. Methods. Effort-based decision-making was investigated in 40 patients with a DSM-IV diagnosis of schizophrenia and 30 age- and sex-matched healthy control subjects by means of an effort discounting task. To assess negative symptom severity, we made use of the Scale for the Assessment of Negative Symptoms as well as objective measurements of hedonic response to stimuli and motor activity levels. Results. Patients as well as control subjects discounted the subjective value of rewards significantly with increasing physical effort costs. However, we failed to find a difference in the discounting curves between patients and controls. Furthermore, effort discounting was not associated with any of the negative symptom measures. Conclusions. Physical effort discounting was not found to be associated with motivational symptoms in schizophrenia when other decision costs are held constant. However, recent findings show that more cognitive effort and/or an interaction between effort and other decision costs (e.g. temporal delay or uncertainty) are associated with negative symptoms in schizophrenia. This should be investigated further in future research.
advanced visual interfaces | 2012
Lode Hoste; Bruno Dumas; Beat Signer
We present SpeeG, a multimodal speech- and body gesture-based text input system targeting media centres, set-top boxes and game consoles. Our controller-free zoomable user interface combines speech input with a gesture-based real-time correction of the recognised voice input. While the open source CMU Sphinx voice recogniser transforms speech input into written text, Microsoft's Kinect sensor is used for the hand gesture tracking. A modified version of the zoomable Dasher interface combines the input from Sphinx and the Kinect sensor. In contrast to existing speech error correction solutions with a clear distinction between a detection and a correction phase, our innovative SpeeG text input system enables continuous real-time error correction. An evaluation of the SpeeG prototype has revealed that a text input speed of about six words per minute can be achieved with low error rates after a minimal learning phase. Moreover, in a user study SpeeG was perceived as the fastest of all evaluated user interfaces and therefore represents a promising candidate for future controller-free text input.
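The interplay between the recogniser and the gesture-based correction can be sketched as follows. This is a heavily simplified, hypothetical illustration of the idea (ranked candidates steered by a hand-tracking cursor), not SpeeG's actual Dasher-based interface.

```python
# Hedged sketch of continuous correction in the spirit of SpeeG: the speech
# recogniser yields ranked candidates per word, and a hand-tracking cursor
# steers the selection while new words keep arriving. The candidate lists
# and the selection function are illustrative, not the SpeeG internals.

def select(candidates, cursor_y):
    """Map a vertical hand position in [0, 1] to one of the ranked candidates."""
    index = min(int(cursor_y * len(candidates)), len(candidates) - 1)
    return candidates[index]

# Each inner list is an n-best hypothesis list for one spoken phrase.
hypotheses = [["wreck", "recognise"], ["a nice beach", "speech"]]
cursor_positions = [0.9, 0.9]  # the user steers towards the intended words

sentence = [select(c, y) for c, y in zip(hypotheses, cursor_positions)]
print(" ".join(sentence))  # recognise speech
```

The key property is that correction happens inline, candidate by candidate, instead of in a separate pass after recognition has finished.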
international conference on software engineering | 2010
Lode Hoste
Multi-touch interfaces allow users to use multiple fingers to provide input to a graphical user interface. The idea of allowing users to touch and manipulate digital information with their hands has been subject of research for more than 25 years [5, 4]. Recently several of these research artifacts have found their way to industry, with examples like the iPhone and the Microsoft Surface. Mainstream programming languages do not offer support to deal with the complexity of these new devices. Unlike the evolution in the hardware technology, the complexity of these new devices has not yet been addressed by adequate software engineering abstractions.
international conference on multimodal interfaces | 2013
Lode Hoste; Beat Signer
With the emergence of smart TVs, set-top boxes and public information screens over the last few years, there is an increasing demand to use these appliances for more than passive output. These devices can also be used for text-based web search as well as other tasks which require some form of text input. However, the design of text entry interfaces for efficient input on such appliances represents a major challenge. With current virtual keyboard solutions we only achieve an average text input rate of 5.79 words per minute (WPM), while the average typing speed on a traditional keyboard is 38 WPM. Furthermore, so-called controller-free appliances such as Samsung's Smart TV or Microsoft's Xbox Kinect result in even lower average text input rates. We present SpeeG2, a multimodal text entry solution combining speech recognition with gesture-based error correction. Four innovative prototypes for efficient controller-free text entry have been developed and evaluated. A quantitative evaluation of our SpeeG2 text entry solution revealed that the best of our four prototypes achieves an average input rate of 21.04 WPM (without errors), outperforming current state-of-the-art solutions for controller-free text input.
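The WPM figures quoted above can be put in relation by simple arithmetic. Note that the five-characters-per-word convention used below is the standard text-entry convention, an assumption on my part rather than something stated in the abstract.

```python
# Relating the quoted text-entry rates. The standard convention (assumed
# here, not stated in the abstract) counts one "word" as five characters.

def wpm(num_chars, seconds):
    """Words per minute with the 5-characters-per-word convention."""
    return (num_chars / 5) / (seconds / 60)

# Typing 105 characters in one minute corresponds to 21 WPM:
print(wpm(105, 60))  # 21.0

# Relative speedup of SpeeG2's best prototype over the 5.79 WPM virtual
# keyboard baseline quoted above:
print(round(21.04 / 5.79, 1))  # 3.6
```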
advanced visual interfaces | 2012
Bruno Dumas; Tim Broché; Lode Hoste; Beat Signer
We present the Visual Data Explorer (ViDaX), a tool for visualising and exploring large RDF data sets. ViDaX enables the extraction of information from RDF data sources and offers functionality for the analysis of various data characteristics as well as the exploration of the corresponding ontology graph structure. In addition to some basic data mining features, our interactive semantic data visualisation and exploration tool offers various types of visualisations based on the type of data. In contrast to existing semantic data visualisation solutions, ViDaX also offers non-expert users the possibility to explore semantic data based on powerful automatic visualisation and interaction techniques without the need for any low-level programming. To illustrate some of ViDaX's functionality, we present a use case based on semantic data retrieved from DBpedia, a semantic version of the well-known Wikipedia online encyclopedia, which forms a major component of the emerging linked data initiative.
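The idea of automatically choosing a visualisation from the type of the data can be sketched as follows. This is an illustrative simplification, not ViDaX's implementation; the triples and the two-way type test are hypothetical.

```python
# Illustrative sketch (not ViDaX's actual implementation) of type-driven
# visualisation choice for RDF data: numeric literals suggest a chart,
# resource-valued properties suggest a node-link graph of the ontology.

triples = [
    ("dbpedia:Brussels", "dbo:populationTotal", 1222637),
    ("dbpedia:Brussels", "dbo:country", "dbpedia:Belgium"),
]

def suggest_visualisation(triples, prop):
    values = [o for (_, p, o) in triples if p == prop]
    if all(isinstance(v, (int, float)) for v in values):
        return "bar chart"            # numeric literals: plot magnitudes
    return "node-link graph"          # resources: show the graph structure

print(suggest_visualisation(triples, "dbo:populationTotal"))  # bar chart
print(suggest_visualisation(triples, "dbo:country"))          # node-link graph
```

Because the choice is derived from the data itself, a non-expert user never has to specify a visualisation by hand.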
programming based on actors, agents, and decentralized control | 2013
Janwillem Swalens; Thierry Renaux; Lode Hoste; Stefan Marr; Wolfgang De Meuter
Traffic monitoring or crowd management systems produce large amounts of data in the form of events that need to be processed to detect relevant incidents. Rule-based pattern recognition is a promising approach for these applications; however, increasing amounts of data as well as large and complex rule sets demand ever more processing power and memory. In order to scale such applications, a rule-based pattern detection system needs to be distributable over multiple machines. Today's approaches, however, focus on static distribution of rules or do not support reasoning over the full set of events. We propose Cloud PARTE, a complex event detection system that implements the Rete algorithm on top of mobile actors. These actors can migrate between machines to respond to changes in the workload distribution. Cloud PARTE is an extension of PARTE and offers the first rule engine specifically tailored for continuous complex event detection that is able to benefit from elastic systems as provided by cloud computing platforms. It supports fully automatic load balancing and online rules with access to the entire event pool.
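The load-balancing idea behind migrating actors can be sketched as follows. The balancing policy, the load metric, and all names are illustrative assumptions, not Cloud PARTE's actual mechanism.

```python
# Hedged sketch of actor migration for load balancing: Rete nodes are actors
# with a measured load, and the runtime moves an actor from the busiest
# machine to the least busy one. Policy and names are illustrative.

def rebalance(machines):
    """Move the lightest actor off the busiest machine onto the least busy one."""
    busiest = max(machines, key=lambda m: sum(machines[m].values()))
    idlest = min(machines, key=lambda m: sum(machines[m].values()))
    if busiest == idlest or not machines[busiest]:
        return None
    actor = min(machines[busiest], key=machines[busiest].get)
    load = machines[busiest].pop(actor)
    machines[idlest][actor] = load
    return actor

# machine -> {rete_node_actor: events processed per second}
machines = {"vm-1": {"join-a": 900, "filter-b": 300}, "vm-2": {"filter-c": 100}}
print(rebalance(machines))  # filter-b moves from vm-1 to vm-2
```

In an elastic cloud deployment the same policy also absorbs machines that are added or removed at runtime, since only the `machines` map changes.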
acm conference on systems, programming, languages and applications: software for humanity (splash) | 2015
Jesse Zaman; Lode Hoste; Wolfgang De Meuter
DisCoPar (Distributed Components for Participatory Campaigning) is a framework inspired by flow-based programming (FBP) that enables users to develop and deploy mobile apps for participatory sensing purposes. The high reconfigurability and reusability at different levels of the system ensure that DisCoPar can be used to design a large variety of mobile data gathering apps. In this paper, we focus on the mobile app designer of DisCoPar. Specifically, we discuss how FBP principles such as component-based design enable flexible app-logic composition, and how the visual aspect of FBP provides an intuitive interface for end-users. We demonstrate these principles by presenting a fully functional participatory sensing app designed with DisCoPar.
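The flow-based composition principle can be sketched as wiring reusable components into a pipeline. The component names and the sensing scenario below are hypothetical illustrations, not DisCoPar's actual component library.

```python
# Illustrative sketch of the flow-based programming idea behind DisCoPar:
# an app is a graph of reusable components, and composing app logic means
# wiring outputs to inputs. Component names are hypothetical.

def gps_sensor():
    """Source component: emits location readings."""
    yield {"lat": 50.85, "lon": 4.35}
    yield {"lat": 50.86, "lon": 4.36}

def add_timestamp(stream, t0=1000):
    """Transform component: annotates each reading with a timestamp."""
    for t, reading in enumerate(stream):
        reading["t"] = t0 + t
        yield reading

def upload(stream, store):
    """Sink component: forwards readings to a campaign data store."""
    for reading in stream:
        store.append(reading)

store = []
# Wiring the graph: gps_sensor -> add_timestamp -> upload
upload(add_timestamp(gps_sensor()), store)
print(len(store))  # 2
```

Swapping `gps_sensor` for a noise-level or air-quality component reuses the rest of the pipeline unchanged, which is the reusability the abstract refers to.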
programming based on actors, agents, and decentralized control | 2012
Thierry Renaux; Lode Hoste; Stefan Marr; Wolfgang De Meuter
Applying imperative programming techniques to process event streams, like those generated by multi-touch devices and 3D cameras, has significant engineering drawbacks. Declarative approaches solve these problems but have not been able to scale on multicore systems while providing guaranteed response times. We propose PARTE, a parallel scalable complex event processing engine which allows a declarative definition of event patterns and provides soft real-time guarantees for their recognition. It extends the state-saving Rete algorithm and maps the event matching onto a graph of actor nodes. Using a tiered event matching model, PARTE provides upper bounds on the detection latency. Based on the domain-specific constraints, PARTE's design relies on a combination of 1) lock-free data structures; 2) safe memory management techniques; and 3) message passing between Rete nodes. In our benchmarks, we measured scalability up to 8 cores, outperforming highly optimized sequential implementations.
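Mapping Rete matching onto message-passing nodes can be sketched as follows: an alpha node filters events, a beta node joins partial matches, and nodes communicate only through queues. This is an illustrative, sequential simplification, not PARTE's lock-free actor implementation; the traffic scenario is hypothetical.

```python
# Hedged sketch of Rete matching as message-passing nodes, in the spirit of
# PARTE's actor graph: alpha nodes filter, beta nodes join, and all
# communication goes through queues. Heavily simplified and sequential.

from queue import Queue

def alpha_node(predicate, inbox, outbox):
    """Filter events and forward matches downstream."""
    while not inbox.empty():
        event = inbox.get()
        if predicate(event):
            outbox.put(event)

def beta_node(left, right, key, matches):
    """Join events from two upstream nodes on a shared key."""
    lefts = []
    while not left.empty():
        lefts.append(left.get())
    while not right.empty():
        r = right.get()
        for l in lefts:
            if l[key] == r[key]:
                matches.append((l, r))

inbox_a, inbox_b = Queue(), Queue()
out_a, out_b = Queue(), Queue()
for e in [{"kind": "speed", "car": 7, "v": 140}, {"kind": "speed", "car": 8, "v": 90}]:
    inbox_a.put(e)
inbox_b.put({"kind": "zone", "car": 7, "limit": 120})

alpha_node(lambda e: e["v"] > 120, inbox_a, out_a)   # fast cars only
alpha_node(lambda e: True, inbox_b, out_b)
matches = []
beta_node(out_a, out_b, "car", matches)
print(len(matches))  # 1: car 7 is speeding in a restricted zone
```

Because each node touches only its own queues, the real engine can run the nodes as independent actors on separate cores without shared mutable state.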