Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Jörg Edelmann is active.

Publication


Featured research published by Jörg Edelmann.


Intelligent User Interfaces | 2014

User identification using raw sensor data from typing on interactive displays

Philipp Mock; Jörg Edelmann; Andreas Schilling; Wolfgang Rosenstiel

Personalized soft keyboards which adapt to a user's individual typing behavior can reduce typing errors on interactive displays. In multi-user scenarios, a personalized model has to be loaded for each participant. In this paper we describe a user identification technique that is based on raw sensor data from an optical touch screen. For classification of users we use a multi-class support vector machine that is trained with grayscale images from the optical sensor. Our implementation can identify a specific user from a set of 12 users with an average accuracy of 97.51% after one keystroke. It can be used to automatically select individual typing models during free-text entry. The resulting authentication process is completely implicit. We furthermore describe how the approach can be extended to automatic loading of personal information and settings.
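
Illustrative note: the paper's code is not reproduced here. As a minimal sketch of the classification step the abstract describes (a multi-class SVM trained on grayscale keystroke images), the following Python snippet shows one plausible shape; the function names, preprocessing, and use of scikit-learn are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.svm import SVC

    def train_user_classifier(frames, user_ids):
        """frames: 2D grayscale keystroke images, one per keystroke.
        user_ids: matching integer user labels (hypothetical encoding)."""
        X = np.array([f.ravel() for f in frames], dtype=np.float32) / 255.0
        clf = SVC(kernel="rbf")  # scikit-learn handles multi-class via one-vs-one
        clf.fit(X, user_ids)
        return clf

    def identify_user(clf, frame):
        # Predict the typist from a single keystroke image.
        x = frame.astype(np.float32).ravel()[None, :] / 255.0
        return int(clf.predict(x)[0])

In the paper's setting, a single such prediction is enough to select the per-user typing model during free-text entry.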


Interactive Tabletops and Surfaces | 2012

Towards the Keyboard of Oz: learning individual soft-keyboard models from raw optical sensor data

Jörg Edelmann; Philipp Mock; Andreas Schilling; Peter Gerjets; Wolfgang Rosenstiel; Wolfgang Straßer

Typing on a touchscreen display usually lacks the haptic feedback that is crucial for maintaining finger-to-key assignment, especially for touch typists who are not looking at their keyboard. This makes typing substantially more error-prone on these devices. We present a soft-keyboard model which we developed from typing data collected from users with diverging typing behavior. For data acquisition, we used a simulated perfect classifier we refer to as The Keyboard of Oz. To approximate this classifier, we used the complete sensor data of each keystroke and applied supervised machine learning techniques to learn and evaluate an individual keyboard model. The model not only accounts for individual keystroke distributions but also incorporates a classifier based on the images obtained from an optical touch sensor. The resulting highly individual classifier achieves remarkable classification accuracy. Additionally, we present an approach to compensate for hand drift during typing using a Kalman filter. We show that this filter performs significantly better with the keyboard model that takes raw sensor data into account.
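
Illustrative note: the abstract mentions compensating for hand drift with a Kalman filter but gives no formulation. A minimal sketch, assuming a random-walk drift state observed through the offset between each touch and the center of the key that was ultimately classified (all variable names and noise parameters are hypothetical):

    import numpy as np

    class DriftFilter:
        """Kalman filter tracking a slowly changing 2D hand offset."""
        def __init__(self, process_var=1.0, meas_var=25.0):
            self.x = np.zeros(2)           # estimated drift in pixels
            self.P = np.eye(2) * 100.0     # estimate covariance
            self.Q = np.eye(2) * process_var
            self.R = np.eye(2) * meas_var

        def update(self, touch, key_center):
            z = np.asarray(touch) - np.asarray(key_center)  # observed residual
            self.P = self.P + self.Q                        # random-walk predict
            K = self.P @ np.linalg.inv(self.P + self.R)     # Kalman gain
            self.x = self.x + K @ (z - self.x)
            self.P = (np.eye(2) - K) @ self.P
            return self.x  # subtract from subsequent touches before classifying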


2012 IEEE International Conference on Emerging Signal Processing Applications | 2012

Face2Face — A system for multi-touch collaboration with telepresence

Jörg Edelmann; Peter Gerjets; Philipp Mock; Andreas Schilling; Wolfgang Strasser

One major benefit of multi-touch interaction is that passive observers can easily follow the interactions of active users. This paper presents a novel system which allows remote users to perform collaborative multi-touch interaction in a face-to-face situation, with shared virtual material seamlessly integrated into a videoconference application. Each user is captured by a camera located behind the screen, so touch interactions are made visible to the remote collaborator. To enhance the immersion of the setup, the system provides realistic stereoscopic 3D video capture and presentation. We highlight the concept and construction of the system and show sample applications which allow for collaborative interaction with digital 2D and 3D media in an intuitive way.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2012

Direct 3D-collaboration with Face2Face - implementation details and application concepts

Philipp Mock; Andreas Schilling; Wolfgang Strasser; Jörg Edelmann

In this paper we present a system for video-based remote collaboration in stereoscopic 3D that supports multi-touch interaction. The presented system seamlessly integrates shared virtual 3D material into a video conference application, using a calibrated stereo camera setup to capture the collaborating users through a semi-transparent holographic screen. The resulting novel form of remote communication is analyzed for its practicability in varying application scenarios. For this, we evaluate how the preconditions of different scenarios affect the overall performance of the system and what they imply for its design. Furthermore, we propose techniques to achieve telepresence in those scenarios by improving image quality, and we suggest an approach for automatic camera calibration that makes the installation and maintenance of Face2Face feasible.
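
Illustrative note: the abstract proposes automatic calibration for the stereo camera pair without giving details. A conventional chessboard-based workflow with OpenCV, assumed here purely for illustration rather than taken from the paper, would look roughly like:

    import cv2
    import numpy as np

    def calibrate_stereo(pairs, board=(9, 6), square=0.025):
        """pairs: list of (left_gray, right_gray) chessboard views."""
        objp = np.zeros((board[0] * board[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
        obj_pts, l_pts, r_pts = [], [], []
        for left, right in pairs:
            ok_l, cl = cv2.findChessboardCorners(left, board)
            ok_r, cr = cv2.findChessboardCorners(right, board)
            if ok_l and ok_r:
                obj_pts.append(objp); l_pts.append(cl); r_pts.append(cr)
        size = pairs[0][0].shape[::-1]
        _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)
        # Fix the per-camera intrinsics and recover the rotation R and
        # translation T between the two cameras.
        _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, l_pts, r_pts, K1, d1, K2, d2, size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        return K1, d1, K2, d2, R, T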


Australasian Computer-Human Interaction Conference | 2015

keyValuate: A Framework for Rapid Evaluation of Adaptive Keyboards on Touchscreens

Philipp Mock; Jonas Jaszkowic; Jörg Edelmann; Yvonne Kammerer; Andreas Schilling; Wolfgang Rosenstiel

We propose a general-purpose framework for the implementation and evaluation of adaptive virtual keyboards based on unprocessed sensor information from an interactive surface. We furthermore describe an implementation on a commercially available optical touchscreen that features real-time visualization of the underlying key classification process. The typing application, which uses support vector machine classifiers and bivariate Gaussian distributions to differentiate between keys, was evaluated in a user study with 24 participants. The adaptive keyboard performed significantly better in terms of typing speed and error rates compared to a standard onscreen keyboard (approximately 40% speedup and 25% reduced error rates). We also performed evaluations with reduced sensor resolutions and additive noise in order to verify the generalizability of the presented approach for other sensing techniques. Our approach showed high robustness in both conditions. Based on these findings, we discuss possible implications for future implementations of virtual keyboards.
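
Illustrative note: the keyboard above differentiates keys with support vector machine classifiers combined with bivariate Gaussian distributions. The Gaussian part can be sketched in a few lines of Python (scipy is assumed; the SVM stage and the fusion of the two scores are omitted):

    import numpy as np
    from scipy.stats import multivariate_normal

    def fit_key_models(samples):
        """samples: dict mapping key label -> (n, 2) array of touch points."""
        return {key: multivariate_normal(pts.mean(axis=0), np.cov(pts.T))
                for key, pts in samples.items()}

    def classify_touch(models, point):
        # Pick the key whose fitted bivariate Gaussian assigns the touch
        # the highest likelihood.
        return max(models, key=lambda k: models[k].pdf(point))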


Cooperative Design, Visualization, and Engineering | 2013

Preserving Non-verbal Features of Face-to-Face Communication for Remote Collaboration

Jörg Edelmann; Philipp Mock; Andreas Schilling; Peter Gerjets

Distributed working groups rely on collaboration systems that support working on a project cooperatively over a distance. However, conventional systems for remote cooperative work do not convey important non-verbal cues of face-to-face communication, such as eye contact or gaze awareness, that would be available in co-located collaboration. Additionally, reference material and annotation tools should be readily accessible to all users. The screen layout should moreover create awareness for the transmitted video of remote participants and the reference material alike, allowing users to easily follow both at the same time. This paper describes how the presented system, Face2Face, meets these requirements and thereby supports the collaborative design process. Furthermore, the performance of the system is evaluated in order to validate its practical applicability.


International Conference on Distributed, Ambient and Pervasive Interactions | 2015

Learning Instead of Markers: Flexible Recognition of Mobile Devices on Interactive Surfaces

Philipp Mock; Jörg Edelmann; Wolfgang Rosenstiel

We propose an approach for the recognition of mobile devices on interactive surfaces that do not support optical markers. Our system only requires an axis-aligned bounding box of the object placed on the touchscreen, combined with position data from the mobile device's integrated inertial measurement unit (IMU). We put special emphasis on maximum flexibility in terms of compatibility with varying multi-touch sensor techniques and different kinds of mobile devices. A new device can be added to the system with a short training phase, during which the device is moved across the interactive surface. A device model is automatically created from the recorded data using support vector machines. Different devices of the same size are identified by analyzing their IMU data streams for transitions into a horizontally resting state. The system has been tested in a museum environment.
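
Illustrative note: the final step the abstract describes, finding transitions into a horizontally resting state in an IMU stream, can be sketched as a simple threshold test; all thresholds and units here are hypothetical.

    import numpy as np

    G = 9.81  # gravity, m/s^2

    def is_horizontally_resting(accel, gyro, tol_acc=0.5, tol_gyro=0.05):
        """accel in m/s^2, gyro in rad/s, each a 3-axis sample."""
        a = np.asarray(accel)
        return (abs(np.linalg.norm(a) - G) < tol_acc   # no linear acceleration
                and abs(a[2]) > G - tol_acc            # gravity on the z axis
                and np.linalg.norm(gyro) < tol_gyro)   # negligible rotation

    def rest_transitions(samples):
        """Yield indices where the device comes to rest flat on the surface."""
        resting = False
        for i, (acc, gyr) in enumerate(samples):
            now = is_horizontally_resting(acc, gyr)
            if now and not resting:
                yield i
            resting = now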


International Conference on Human-Computer Interaction | 2011

Tangoscope: a tangible audio device for tabletop interaction

Jörg Edelmann; Yvonne Kammerer; Birgit Imhof; Peter Gerjets; Wolfgang Straßer

Tabletop installations allow multiple users to play back digital media simultaneously. Over shared loudspeakers, however, simultaneous auditory content gets superimposed, leading to a confusing and disturbing user experience. In this paper, we present Tangoscope, a tangible audio output device for tabletop interaction. As we augmented headphones with a visual marker-based identification mechanism, the Tangoscope provides each user with individual auditory content. To allow for an instruction-free and intuitive usage of the audio device, we employed the metaphor of a real stethoscope. A first user study indicated that the use of the Tangoscope is self-explanatory.


Learning and Instruction | 2012

How temporal and spatial aspects of presenting visualizations affect learning about locomotion patterns

Birgit Imhof; Katharina Scheiter; Jörg Edelmann; Peter Gerjets


Computers & Education | 2013

Learning about locomotion patterns: Effective use of multiple pictures and motion-indicating arrows

Birgit Imhof; Katharina Scheiter; Jörg Edelmann; Peter Gerjets

Collaboration


Dive into Jörg Edelmann's collaboration.

Top Co-Authors


Philipp Mock

University of Tübingen
