Philipp Mock
University of Tübingen
Publications
Featured research published by Philipp Mock.
Intelligent User Interfaces | 2014
Philipp Mock; Jörg Edelmann; Andreas Schilling; Wolfgang Rosenstiel
Personalized soft keyboards that adapt to a user's individual typing behavior can reduce typing errors on interactive displays. In multi-user scenarios, a personalized model has to be loaded for each participant. In this paper we describe a user identification technique that is based on raw sensor data from an optical touch screen. For classification of users we use a multi-class support vector machine that is trained with grayscale images from the optical sensor. Our implementation can identify a specific user from a set of 12 users with an average accuracy of 97.51% after one keystroke. It can be used to automatically select individual typing models during free-text entry. The resulting authentication process is completely implicit. We furthermore describe how the approach can be extended to automatic loading of personal information and settings.
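The identification pipeline the abstract describes — a multi-class SVM trained on flattened grayscale keystroke images — could be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: the data is synthetic, and image size, kernel, and all parameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for grayscale sensor images of keystrokes:
# each "image" is a flattened 16x16 patch whose intensity pattern
# differs per user (real data would come from the optical sensor).
rng = np.random.default_rng(0)
n_users, per_user, size = 12, 40, 16 * 16

# Give each user a characteristic base pattern plus per-keystroke noise.
bases = rng.normal(0, 1, (n_users, size))
X = np.vstack([bases[u] + rng.normal(0, 0.3, (per_user, size))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), per_user)

# Multi-class SVM on the flattened grayscale images.
clf = SVC(kernel="linear").fit(X, y)

def identify_user(keystroke_image):
    """Predict which user produced a single keystroke image."""
    return int(clf.predict(np.asarray(keystroke_image).reshape(1, -1))[0])
```

In a keyboard application, the predicted user ID would then select that user's personalized typing model before the next keystrokes are decoded.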
Interactive Tabletops and Surfaces | 2012
Jörg Edelmann; Philipp Mock; Andreas Schilling; Peter Gerjets; Wolfgang Rosenstiel; Wolfgang Straßer
Typing on a touchscreen display usually lacks the haptic feedback that is crucial for maintaining finger-to-key assignment, especially for touch typists who are not looking at their keyboard. This makes typing substantially more error-prone on these devices. We present a soft keyboard model that we developed from typing data collected from users with diverging typing behavior. For data acquisition, we used a simulated perfect classifier we refer to as The Keyboard of Oz. To approximate this classifier, we used the complete sensor data of each keystroke and applied supervised machine learning techniques to learn and evaluate an individual keyboard model. The model not only accounts for individual keystroke distributions but also incorporates a classifier based on the images obtained from an optical touch sensor. The resulting highly individual classifier achieves remarkable classification accuracy. Additionally, we present an approach to compensate for hand drift during typing using a Kalman filter. We show that this filter performs significantly better with the keyboard model that takes raw sensor data into account.
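The drift-compensation idea — tracking a slowly changing offset between where the hands rest and where the key centers are — fits a simple constant-position Kalman filter. The sketch below is a minimal interpretation with assumed noise parameters, not the filter tuning from the paper:

```python
import numpy as np

class DriftKalman:
    """Per-axis Kalman filter tracking slow hand drift on a soft keyboard.

    State: the (dx, dy) offset between intended key centres and observed
    touches. Constant-position model with process noise q; every decoded
    keystroke yields a noisy observation of the offset (touch position
    minus decoded key centre). q and r are illustrative values.
    """
    def __init__(self, q=0.01, r=1.0):
        self.x = np.zeros(2)   # estimated (dx, dy) drift
        self.p = np.ones(2)    # estimate variance per axis
        self.q, self.r = q, r  # process / measurement noise

    def update(self, observed_offset):
        self.p = self.p + self.q                       # predict step
        k = self.p / (self.p + self.r)                 # Kalman gain
        self.x = self.x + k * (np.asarray(observed_offset) - self.x)
        self.p = (1.0 - k) * self.p                    # correct step
        return self.x                                  # current drift estimate
```

The keyboard would then shift its key models by the estimated drift before classifying the next touch.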
IEEE International Conference on Emerging Signal Processing Applications | 2012
Jörg Edelmann; Peter Gerjets; Philipp Mock; Andreas Schilling; Wolfgang Strasser
One major benefit of multi-touch interaction is the possibility for passive observers to easily follow interactions of active users. This paper presents a novel system which allows remote users to perform collaborative multi-touch interaction in a remote face-to-face situation with shared virtual material seamlessly integrated in a videoconference application. Each user is captured by a camera located behind the screen and touch interactions are thus made visible to the remote collaborator. In order to enhance the immersion of the setup, the system provides realistic stereoscopic 3D video capture and presentation. We highlight the concept and construction of the system and show sample applications which allow for collaborative interaction with digital 2D and 3D media in an intuitive way.
International Conference on Multimodal Interfaces | 2016
Philipp Mock; Peter Gerjets; Maike Tibus; Ulrich Trautwein; Korbinian Möller; Wolfgang Rosenstiel
Although a great number of today’s learning applications run on devices with an interactive screen, the high-resolution interaction data which these devices provide have not been used for workload-adaptive systems yet. This paper aims at exploring the potential of using touch sensor data to predict user states in learning scenarios. For this purpose, we collected touch interaction patterns of children solving math tasks on a multi-touch device. 30 fourth-grade students from a primary school participated in the study. Based on these data, we investigate how machine learning methods can be applied to predict cognitive workload associated with tasks of varying difficulty. Our results show that interaction patterns from a touchscreen can be used to significantly improve automatic prediction of high levels of cognitive workload (average classification accuracy of 90.67% between the easiest and most difficult tasks). Analyzing an extensive set of features, we discuss which characteristics are most likely to be of high value for future implementations. We furthermore elaborate on design choices made for the used multiple choice interface and discuss critical factors that should be considered for future touch-based learning interfaces.
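Pipelines like this one start by summarizing each touch trajectory into a feature vector. The feature set below (path length, duration, mean speed, straightness) is a guess at the kind of interaction characteristics such a study could use — it is not the paper's actual feature list:

```python
import numpy as np

def trajectory_features(points, timestamps):
    """Summarize one touch trajectory into simple scalar features.

    points: (n, 2) array-like of x/y positions;
    timestamps: (n,) array-like of times in seconds.
    Feature names are illustrative, not taken from the paper.
    """
    p = np.asarray(points, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)  # per-segment lengths
    path_len = float(seg.sum())
    duration = float(t[-1] - t[0])
    direct = float(np.linalg.norm(p[-1] - p[0]))      # start-to-end distance
    return {
        "path_length": path_len,
        "duration": duration,
        "mean_speed": path_len / duration if duration > 0 else 0.0,
        "straightness": direct / path_len if path_len > 0 else 1.0,
    }
```

A classifier for high vs. low workload would then be trained on these per-trajectory vectors.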
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2012
Philipp Mock; Andreas Schilling; Wolfgang Strasser; Jörg Edelmann
In this paper we present a system for video based remote collaboration in stereoscopic 3D that supports multi-touch interaction. The presented system seamlessly integrates shared virtual 3D-material into a video conference application. The system utilizes a calibrated stereo camera setup to capture the collaborating users through a semi-transparent holographic screen. The resulting novel form of remote communication is analyzed for its practicability in varying application scenarios. For this, the impact of preconditions of different scenarios on the overall performance of the system and implications on its design are evaluated. Furthermore, we propose techniques to achieve telepresence in those scenarios by improving image quality and suggest an approach for automatic camera calibration to make the installation and maintenance of Face2Face feasible.
Australasian Computer-Human Interaction Conference | 2015
Philipp Mock; Jonas Jaszkowic; Jörg Edelmann; Yvonne Kammerer; Andreas Schilling; Wolfgang Rosenstiel
We propose a general-purpose framework for the implementation and evaluation of adaptive virtual keyboards based on unprocessed sensory information from an interactive surface. We furthermore describe an implementation on a commercially available optical touchscreen that features real-time visualization of the underlying key classification process. The typing application, which uses support vector machine classifiers and bivariate Gaussian distributions to differentiate between keys, was evaluated in a user study with 24 participants. The adaptive keyboard performed significantly better in terms of typing speed and error rates compared to a standard onscreen keyboard (approximately 40% speedup and 25% reduced error rates). We also performed evaluations with reduced sensor resolutions and additive noise in order to verify the generalizability of the presented approach for other sensing techniques. Our approach showed high robustness in both conditions. Based on these findings, we discuss possible implications for future implementations of virtual keyboards.
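The bivariate-Gaussian component of such a keyboard assigns each key a 2D mean and covariance fitted from a user's recorded touches, then classifies a new touch by maximum likelihood. The sketch below covers only this Gaussian part (the paper combines it with SVM classifiers); the toy two-key layout and its parameters are invented:

```python
import numpy as np

def key_log_likelihood(touch, mean, cov):
    """Log-density of a 2D touch point under one key's bivariate Gaussian."""
    d = np.asarray(touch, dtype=float) - np.asarray(mean, dtype=float)
    inv = np.linalg.inv(cov)
    return -0.5 * (d @ inv @ d
                   + np.log((2 * np.pi) ** 2 * np.linalg.det(cov)))

def classify_touch(touch, key_models):
    """Return the key whose Gaussian touch model best explains the touch.

    key_models: dict mapping key label -> (mean, covariance), fitted
    per user from recorded keystroke positions.
    """
    return max(key_models,
               key=lambda k: key_log_likelihood(touch, *key_models[k]))

# Hypothetical two-key layout with per-key touch distributions.
models = {
    "f": (np.array([0.0, 0.0]), np.array([[4.0, 0.5], [0.5, 3.0]])),
    "g": (np.array([10.0, 0.0]), np.array([[4.0, -0.5], [-0.5, 3.0]])),
}
```

Because the covariances differ per key and per user, an off-center touch can still resolve to the intended key, which is where the reported speed and error-rate gains plausibly come from.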
Cooperative Design, Visualization and Engineering | 2013
Jörg Edelmann; Philipp Mock; Andreas Schilling; Peter Gerjets
Distributed working groups rely on collaboration systems that promote working on a project cooperatively over a distance. However, conventional systems for remote cooperative work do not transport important non-verbal cues of face-to-face communication like eye-contact or gaze awareness that would be available in co-located collaboration. Additionally, reference material and annotation tools should be readily accessible for all users. The screen layout should moreover create awareness for the transmitted video of remote participants and reference material alike and allow users to easily follow both at the same time. This paper describes how the presented system Face2Face meets these requirements and thereby supports the collaborative design process. Furthermore, the performance of the system is evaluated in order to validate its practical applicability.
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction | 2018
Philipp Mock; Maike Tibus; Ann-Christine Ehlis; Harald Baayen; Peter Gerjets
This paper presents a novel approach for automatic prediction of risk of ADHD in schoolchildren based on touch interaction data. We performed a study with 129 fourth-grade students solving math problems on a multiple-choice interface to obtain a large dataset of touch trajectories. Using Support Vector Machines, we analyzed the predictive power of such data for ADHD scales. For regression of overall ADHD scores, we achieve a mean squared error of 0.0962 on a four-point scale (R² = 0.5667). Classification accuracy for increased ADHD risk (upper vs. lower third of collected scores) is 91.1%.
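The regression setup — Support Vector Machines predicting a continuous scale from per-child feature vectors — could look like the following. Everything here is a synthetic stand-in: the feature dimensionality, kernel choice, and target construction are assumptions, and the resulting error has no relation to the paper's reported numbers.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_children, n_features = 129, 6   # feature dimensionality is hypothetical

# Synthetic stand-in for per-child touch-trajectory feature vectors.
X = rng.normal(0, 1, (n_children, n_features))
w = rng.normal(0, 1, n_features)
# Stand-in target squashed onto a four-point-style scale.
y = 2.5 + 1.5 * np.tanh(X @ w / 2)

# Support vector regression of the scale from the features.
model = SVR(kernel="rbf").fit(X, y)
pred = model.predict(X)
mse = float(np.mean((pred - y) ** 2))
```

The binary "increased risk" classification the abstract mentions would follow by thresholding the collected scores (upper vs. lower third) and training a classifier on the same features.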
International Conference on Distributed, Ambient and Pervasive Interactions | 2015
Philipp Mock; Jörg Edelmann; Wolfgang Rosenstiel
We propose an approach for recognizing mobile devices on interactive surfaces that do not support optical markers. Our system requires only an axis-aligned bounding box of the object placed on the touchscreen, combined with motion data from the mobile device's integrated inertial measurement unit (IMU). We put special emphasis on maximum flexibility in terms of compatibility with varying multi-touch sensor techniques and different kinds of mobile devices. A new device can be added to the system after a short training phase, during which the device is moved across the interactive surface. A device model is automatically created from the recorded data using support vector machines. Different devices of the same size are distinguished by analyzing their IMU data streams for transitions into a horizontally resting state. The system has been tested in a museum environment.
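Detecting the "horizontally resting state" from an accelerometer stream amounts to checking that gravity sits on the z axis and that the signal is quiet. The heuristic below is one plausible reading of that step; the thresholds are illustrative, not from the paper:

```python
import numpy as np

def is_resting_flat(accel_window, grav=9.81, tilt_tol=1.0, motion_tol=0.3):
    """Heuristic: does an accelerometer window show the device lying
    flat and still on the surface?

    accel_window: (n_samples, 3) array of accelerations in m/s^2.
    Flat  = mean acceleration close to gravity on z, near zero on x/y.
    Still = low standard deviation on all axes.
    Thresholds (tilt_tol, motion_tol) are illustrative guesses.
    """
    a = np.asarray(accel_window, dtype=float)
    mean, std = a.mean(axis=0), a.std(axis=0)
    flat = (abs(mean[2] - grav) < tilt_tol
            and abs(mean[0]) < tilt_tol
            and abs(mean[1]) < tilt_tol)
    still = (std < motion_tol).all()
    return bool(flat and still)
```

A transition into this state shortly after a new bounding box appears on the surface would associate that IMU stream with the placed object.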
Archive | 2013
Jörg Edelmann; Philipp Mock; Andreas Schilling