Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Raphael Menges is active.

Publications


Featured research published by Raphael Menges.


IEEE MultiMedia | 2016

Eye-Controlled Interfaces for Multimedia Interaction

Chandan Kumar; Raphael Menges; Steffen Staab

The EU-funded MAMEM project (Multimedia Authoring and Management using your Eyes and Mind) aims to propose a framework for natural interaction with multimedia information for users who lack fine motor skills. As part of this project, the authors have developed a gaze-based control paradigm. Here, they outline the challenges of eye-controlled interaction with multimedia information and present initial project results. Their objective is to investigate how eye-based interaction techniques can be made precise and fast enough to let disabled people easily interact with multimedia information.


Nordic Conference on Human-Computer Interaction | 2016

eyeGUI: A Novel Framework for Eye-Controlled User Interfaces

Raphael Menges; Chandan Kumar; Korok Sengupta; Steffen Staab

In generic applications, user interfaces and input events are typically designed around mouse and keyboard interactions. Eye-controlled applications need to translate these interactions into eye gestures, and hence the design and optimization of interface elements becomes a substantial concern. In this work, we propose eyeGUI, a novel framework that supports the development of such interactive eye-controlled applications and covers significant aspects such as rendering, layout, dynamic modification of content, and support for graphics and animation.
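
A common way eye-controlled interfaces replace mouse clicks is dwell-time activation, where an element triggers once gaze rests on it for long enough. The following is a minimal sketch of that pattern in Python; it is not the eyeGUI API, and the class name, callback, and dwell threshold are illustrative assumptions.

```python
import time

class DwellButton:
    """Hypothetical gaze-activated button: fires when gaze dwells on it long enough."""

    def __init__(self, x, y, w, h, on_activate, dwell_ms=800):
        self.rect = (x, y, w, h)          # screen-space bounding box
        self.on_activate = on_activate    # callback to run on activation
        self.dwell_ms = dwell_ms          # required dwell duration in milliseconds
        self._dwell_start = None          # time when gaze entered the button

    def contains(self, gx, gy):
        x, y, w, h = self.rect
        return x <= gx <= x + w and y <= gy <= y + h

    def update(self, gx, gy, now=None):
        """Feed one gaze sample; trigger the callback after continuous dwell."""
        now = time.monotonic() if now is None else now
        if self.contains(gx, gy):
            if self._dwell_start is None:
                self._dwell_start = now
            elif (now - self._dwell_start) * 1000 >= self.dwell_ms:
                self._dwell_start = None
                self.on_activate()
        else:
            self._dwell_start = None      # gaze left the button: reset progress


# Usage: feed gaze samples from an eye tracker into the button each frame.
button = DwellButton(100, 100, 200, 80, on_activate=lambda: print("activated"))
for gx, gy in [(150, 130)] * 100:         # simulated gaze samples inside the button
    button.update(gx, gy)
    time.sleep(0.01)
```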


International World Wide Web Conference | 2017

Chromium based Framework to Include Gaze Interaction in Web Browser

Chandan Kumar; Raphael Menges; Daniel Müller; Steffen Staab

Enabling Web interaction by non-conventional input sources like eyes has great potential to enhance Web accessibility. In this paper, we present a Chromium based inclusive framework to adapt eye gaze events in Web interfaces. The framework provides more utility and control to develop a full-featured interactive browser, compared to the related approaches of gaze-based mouse and keyboard emulation or browser extensions. We demonstrate the framework through a sophisticated gaze driven Web browser, which effectively supports all browsing operations like search, navigation, bookmarks, and tab management.
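
A core step in such a framework is mapping each gaze sample to the interactive page element underneath it, so that gaze events can be dispatched to that element. The sketch below illustrates this hit-testing step under simplified assumptions (elements are given as labelled rectangles in document space); it does not reflect the framework's actual Chromium integration.

```python
from dataclasses import dataclass

@dataclass
class PageElement:
    """Simplified stand-in for an interactive element extracted from the rendered page."""
    node_id: str      # identifier of the DOM node
    kind: str         # e.g. "link", "text_input", "scrollable"
    left: float
    top: float
    width: float
    height: float

def hit_test(elements, gaze_x, gaze_y, scroll_x=0.0, scroll_y=0.0):
    """Map a viewport gaze coordinate to the interactive element under it, if any.

    Element coordinates are in document space, so the current scroll offset
    is added to the gaze point before testing containment.
    """
    doc_x, doc_y = gaze_x + scroll_x, gaze_y + scroll_y
    for el in elements:
        if el.left <= doc_x <= el.left + el.width and el.top <= doc_y <= el.top + el.height:
            return el
    return None

# Usage: dispatch a gaze event to whichever element the user is looking at.
elements = [
    PageElement("search-box", "text_input", 40, 20, 300, 30),
    PageElement("first-result", "link", 40, 120, 500, 40),
]
target = hit_test(elements, gaze_x=200, gaze_y=135)
if target is not None:
    print(f"gaze over {target.kind}: {target.node_id}")
```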


Proceedings of the 14th Web for All Conference on The Future of Accessible Work | 2017

GazeTheWeb: A Gaze-Controlled Web Browser

Raphael Menges; Chandan Kumar; Daniel Müller; Korok Sengupta

The Web is essential for most people, and its accessibility should not be limited to conventional input devices like mouse and keyboard. In recent years, eye tracking systems have greatly improved and are beginning to play an important role as an input medium. In this work, we present GazeTheWeb, a Web browser accessible solely by eye gaze input. It effectively supports all browsing operations like search, navigation and bookmarks. GazeTheWeb is based on a Chromium powered framework, comprising Web page extraction to classify interactive elements and the application of gaze interaction paradigms to represent these elements.
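
Part of the Web extraction described above is deciding which gaze interaction paradigm should represent each extracted element. The following is a hypothetical, heavily simplified classification in Python; the tag and attribute rules are assumptions for illustration, not GazeTheWeb's actual logic.

```python
# Hypothetical classification of extracted page nodes into gaze interaction paradigms.

def classify_node(tag, attributes):
    """Return the gaze paradigm used to represent one extracted DOM node."""
    tag = tag.lower()
    if tag in ("input", "textarea") and attributes.get("type") not in ("button", "submit"):
        return "gaze_keyboard"        # text entry via an eye-typing overlay
    if tag in ("a", "button") or attributes.get("type") in ("button", "submit"):
        return "dwell_click"          # selection by dwelling on the element
    if attributes.get("overflow") in ("auto", "scroll"):
        return "gaze_scroll"          # scrolling driven by gaze position near the edges
    return "not_interactive"

# Usage
nodes = [
    ("a",     {"href": "https://example.org"}),
    ("input", {"type": "text"}),
    ("div",   {"overflow": "auto"}),
    ("p",     {}),
]
for tag, attrs in nodes:
    print(tag, "->", classify_node(tag, attrs))
```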


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Hands-free web browsing: enriching the user experience with gaze and voice modality

Korok Sengupta; Min Ke; Raphael Menges; Chandan Kumar; Steffen Staab

Hands-free browsers provide an effective tool for Web interaction and accessibility, overcoming the need for conventional input devices. Current approaches to hands-free interaction fall primarily into either a voice-based or a gaze-based modality. In this work, we investigate how these two modalities can be integrated to provide a better hands-free experience for end users. We demonstrate a multimodal browsing approach that combines eye gaze and voice input for optimized interaction while still accommodating user preferences for the individual modalities. An initial assessment with five participants indicates improved performance for the multimodal prototype in comparison to the single modalities for hands-free Web browsing.
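
One plausible fusion scheme, consistent with the idea above, is to let voice supply the action while gaze supplies the target. The sketch below illustrates that scheme; the class, command names, and gaze-age threshold are assumptions, not the prototype's actual design.

```python
import time

class MultimodalController:
    """Fuses the latest gaze fixation (target) with a spoken command (action)."""

    def __init__(self, elements):
        # elements: mapping from name to (left, top, width, height) in screen space
        self.elements = elements
        self.last_gaze = None

    def on_gaze(self, x, y):
        self.last_gaze = (x, y, time.monotonic())

    def _element_at(self, x, y):
        for name, (left, top, w, h) in self.elements.items():
            if left <= x <= left + w and top <= y <= top + h:
                return name
        return None

    def on_voice(self, command, max_gaze_age=1.0):
        """Apply a spoken command to the element the user most recently looked at."""
        if self.last_gaze is None:
            return None
        x, y, t = self.last_gaze
        if time.monotonic() - t > max_gaze_age:
            return None                      # gaze sample too old to trust
        target = self._element_at(x, y)
        return (command, target) if target else None


# Usage: "click" acts on whatever the user is currently looking at.
ctrl = MultimodalController({"search-button": (500, 20, 80, 30)})
ctrl.on_gaze(530, 35)
print(ctrl.on_voice("click"))    # -> ('click', 'search-button')
```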


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Enhanced representation of web pages for usability analysis with eye tracking

Raphael Menges; Hanadi Tamimi; Chandan Kumar; Tina Walber; Christoph Schaefer; Steffen Staab

Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web page usability, it has become a prominent measure to assess which sections of a Web page are read, glanced at, or skipped. Such assessments primarily depend on the mapping of gaze data to a Web page representation. However, current representation methods, a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer from either accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to these fixed elements for an enhanced representation of the page. We conducted an experiment with 10 participants, and the results indicate that analysis with our method is more efficient than with a video recording, which is an essential criterion for large-scale Web studies.
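
The core of the mapping can be sketched as follows, under the assumption that fixed elements (such as sticky headers) stay at constant viewport coordinates while the rest of the page scrolls: fixations on fixed elements are kept in viewport space, and fixations on scrolling content are shifted by the scroll offset into document space. The Python below is a minimal illustration of this idea and omits the screenshot stitching itself; the function and names are assumptions.

```python
def map_fixation(fx, fy, scroll_y, fixed_regions):
    """Map a viewport-space fixation to (space, x, y).

    fixed_regions: list of (left, top, width, height) rectangles of fixed elements,
    given in viewport coordinates.
    """
    for left, top, w, h in fixed_regions:
        if left <= fx <= left + w and top <= fy <= top + h:
            return ("fixed", fx, fy)              # attribute to the fixed element itself
    return ("document", fx, fy + scroll_y)        # shift scrolling content by scroll offset

# Usage: two fixations at the same viewport height, before and after scrolling.
header = [(0, 0, 1280, 80)]                        # sticky header spanning the page top
print(map_fixation(200, 40,  scroll_y=0,   fixed_regions=header))   # ('fixed', 200, 40)
print(map_fixation(200, 300, scroll_y=500, fixed_regions=header))   # ('document', 200, 800)
```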


Computer Graphics Forum | 2018

Analyzing Residue Surface Proximity to Interpret Molecular Dynamics

Nils Lichtenberg; Raphael Menges; V. Ageev; A. A. Paul George; Pascal Heimer; Diana Imhof; Kai Lawonn

The surface of a molecule holds important information about its interaction behavior with other molecules. In dynamic folding or docking processes, residues of amino acids with different properties change their position within the molecule over time. The atoms of residues that are accessible to the solvent can directly contribute to binding interactions, while residues buried within the molecular structure contribute to the stability of the molecule. Understanding patterns and causality of structural changes is important for experts in the pharmaceutical domain, e.g., in the process of drug design. We apply an iterative computation of the Solvent Accessible Surface in order to extract virtual layers of a molecule. The extraction allows tracking the movement of residues within the body of the molecule with respect to their distance from the surface or the core during dynamics simulations. We visualize the obtained layer information for the complete time span of the molecular dynamics simulation as a 2D map and for individual time steps as a 3D representation of the molecule. The data acquisition has been implemented, alongside further analysis functionality, in a prototypical application that is publicly available. We underline the feasibility of our approach with a study from the pharmaceutical domain, where it has been used to gain novel insights into the folding behavior of μ-conotoxins.
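
The iterative layer extraction can be summarized as: repeatedly determine which residues are currently solvent accessible, assign them the current layer index, remove them, and continue until no residues remain. The sketch below assumes a helper solvent_accessible() that performs the actual surface computation; that helper and the toy usage are placeholders for illustration, not the paper's implementation.

```python
def extract_layers(residues, solvent_accessible):
    """Assign each residue a layer index: 0 = surface, higher = closer to the core."""
    layers = {}
    remaining = set(residues)
    layer = 0
    while remaining:
        exposed = set(solvent_accessible(remaining))
        if not exposed:                 # safeguard: stop peeling if nothing is removable
            exposed = set(remaining)
        for r in exposed:
            layers[r] = layer
        remaining -= exposed
        layer += 1
    return layers


# Toy usage: residues on a line, where only the two outermost ones count as "exposed".
def toy_accessible(remaining):
    ordered = sorted(remaining)
    return {ordered[0], ordered[-1]} if ordered else set()

print(sorted(extract_layers(range(6), toy_accessible).items()))
# [(0, 0), (1, 1), (2, 2), (3, 2), (4, 1), (5, 0)]
```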


International Symposium on Computer-Based Medical Systems | 2017

Assessing the Usability of Gaze-Adapted Interface against Conventional Eye-Based Input Emulation

Chandan Kumar; Raphael Menges; Steffen Staab

In recent years, eye tracking systems have greatly improved and are beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse and keyboard devices in the traditional graphical user interface, or through customized interfaces for eye gaze events. In this work, we evaluate these two approaches to assess their impact on usability. We present a gaze-adapted Twitter application interface with direct interaction via eye gaze input and compare it to Twitter in a conventional browser interface with gaze-based mouse and keyboard emulation. We conducted an experimental study, which indicates a significantly better subjective user experience for the gaze-adapted approach. Based on the results, we argue the need for user interfaces that interact directly with eye gaze input to provide an improved user experience, more specifically in the field of accessibility.


International Symposium on Computer-Based Medical Systems | 2017

Analyzing the Impact of Cognitive Load in Evaluating Gaze-Based Typing

Korok Sengupta; Jun Sun; Raphael Menges; Chandan Kumar; Steffen Staab

Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, and backspace usage. However, in comparison to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input can lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which implement different designs of word suggestion positioning. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, an STFT (Short-Time Fourier Transform) based analysis of EEG signals indicates variations in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the variation of users' cognition across different typing phases and intervals, which should be considered in order to improve eye typing usability.
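
As an illustration of an STFT-based workload measure, the sketch below computes theta and alpha band power over time from a single EEG channel and uses their ratio as a simple workload proxy. The sampling rate, band limits, and normalization are assumptions and do not reproduce the paper's exact EEG pipeline.

```python
import numpy as np
from scipy.signal import stft

fs = 256                                     # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)           # stand-in for one minute of one EEG channel

f, t, Zxx = stft(eeg, fs=fs, nperseg=fs)     # 1-second analysis windows
power = np.abs(Zxx) ** 2                     # spectrogram: power per frequency bin and time

theta = (f >= 4) & (f < 8)                   # theta band, often linked to mental workload
alpha = (f >= 8) & (f < 13)                  # alpha band, often suppressed under load

theta_power = power[theta].mean(axis=0)      # band power over time
alpha_power = power[alpha].mean(axis=0)
workload_index = theta_power / alpha_power   # simple ratio used as a workload proxy

print(workload_index.shape, float(workload_index.mean()))
```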


Data in Brief | 2017

A multimodal dataset for authoring and editing multimedia content: The MAMEM project

Spiros Nikolopoulos; Panagiotis C. Petrantonakis; Kostas Georgiadis; Fotis P. Kalaganis; Georgios Liaros; Ioulietta Lazarou; Katerina Adam; Anastasios Papazoglou-Chalikias; Elisavet Chatzilari; Vangelis P. Oikonomou; Chandan Kumar; Raphael Menges; Steffen Staab; Daniel Müller; Korok Sengupta; Sevasti Bostantjopoulou; Zoe Katsarou; Gabi Zeilig; Meir Plotnik; Amihai Gotlieb; Racheli Kizoni; Sofia Fountoukidou; Jaap Ham; Dimitrios Athanasiou; Agnes Mariakaki; Dario Comanducci; Edoardo Sabatini; Walter Nistico; Markus Plank; Ioannis Kompatsiaris

We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that foster the integration of people with severe motor impairments back into society.
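
A typical first step when combining such multimodal recordings is aligning signals sampled at different rates onto a common timeline. The sketch below illustrates this with linear interpolation; the sampling rates and signal names are assumptions and do not reflect the dataset's actual file format.

```python
import numpy as np

def align(signal, signal_times, target_times):
    """Linearly interpolate a signal onto a shared time base (seconds)."""
    return np.interp(target_times, signal_times, signal)

duration = 10.0                                   # seconds of recording (toy example)
eeg_t  = np.arange(0, duration, 1 / 256)          # EEG at 256 Hz (assumed)
gaze_t = np.arange(0, duration, 1 / 60)           # eye tracker at 60 Hz (assumed)
gsr_t  = np.arange(0, duration, 1 / 4)            # GSR at 4 Hz (assumed)

eeg  = np.sin(2 * np.pi * 10 * eeg_t)             # stand-in signals
gaze = np.cos(2 * np.pi * 0.5 * gaze_t)
gsr  = np.linspace(0.0, 1.0, gsr_t.size)

common_t = np.arange(0, duration, 1 / 60)         # align everything to the 60 Hz grid
aligned = np.column_stack([
    align(eeg,  eeg_t,  common_t),
    align(gaze, gaze_t, common_t),
    align(gsr,  gsr_t,  common_t),
])
print(aligned.shape)                              # (600, 3): one row per common timestamp
```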

Collaboration


Dive into Raphael Menges's collaborations.

Top Co-Authors

Chandan Kumar (University of Koblenz and Landau)
Steffen Staab (University of Koblenz and Landau)
Korok Sengupta (University of Koblenz and Landau)
Daniel Müller (University of Koblenz and Landau)
Hanadi Tamimi (University of Koblenz and Landau)
Jun Sun (University of Koblenz and Landau)
Kai Lawonn (University of Koblenz and Landau)
Min Ke (University of Koblenz and Landau)