Publication


Featured research published by Christoph Amma.


International Symposium on Wearable Computers | 2012

Airwriting: Hands-Free Mobile Text Input by Spotting and Continuous Recognition of 3d-Space Handwriting with Inertial Sensors

Christoph Amma; Marcus Georgi; Tanja Schultz

We present an input method which enables complex hands-free interaction through 3D handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMMs) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person-dependent and person-independent setups on 3D-space handwriting data. For the person-independent setup, a word error rate of 11% is achieved; for the person-dependent setup, 3% is achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample-based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.
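The two-stage design lends itself to a compact illustration. Below is a minimal Python sketch of the spotting stage only, assuming windowed 6-axis inertial data (3-axis accelerometer plus 3-axis gyroscope) and simple per-window mean/standard-deviation features; the window size, feature choice, and synthetic data are illustrative stand-ins, not the setup used in the paper.

```python
# Sketch of an SVM-based spotting stage on windowed 6-axis inertial data.
# Window size, features, and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC

def window_features(signal, win=64, hop=32):
    """Slide a window over a (T, 6) inertial signal and compute simple
    per-channel mean/std features for each window."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Synthetic stand-in data: "writing" windows have higher motion energy.
rng = np.random.default_rng(0)
idle = rng.normal(0.0, 0.1, size=(2000, 6))
writing = rng.normal(0.0, 1.0, size=(2000, 6))

X = np.vstack([window_features(idle), window_features(writing)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

# Spotting stage: an SVM decides per window whether it contains handwriting.
spotter = SVC(kernel="rbf").fit(X, y)
print(spotter.predict(window_features(rng.normal(0.0, 1.0, size=(200, 6)))))
```

In the full system, the windows flagged as handwriting would then be passed to the HMM recognition stage.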


Augmented Human International Conference | 2010

Airwriting recognition using wearable motion sensors

Christoph Amma; Dirk Gehrig; Tanja Schultz

In this work we present a wearable input device which enables the user to input text into a computer. The text is written into the air via character gestures, as if on an imaginary blackboard. To allow hands-free operation, we designed and implemented a data glove equipped with three gyroscopes and three accelerometers to measure hand motion. Data is sent wirelessly to the computer via Bluetooth. We use HMMs for character recognition and concatenated character models for word recognition. Normalized raw sensor signals are used as features. Experiments on single character and word recognition are performed to evaluate the end-to-end system. On a character database with 10 writers, we achieve an average writer-dependent character recognition rate of 94.8% and a writer-independent character recognition rate of 81.9%. Based on a small vocabulary of 652 words, we achieve a single-writer word recognition rate of 97.5%, a performance we deem sufficient for many applications. The final system is integrated into an online word recognition demonstration system to showcase its applicability.
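As a rough illustration of the HMM-based character recognition described above, the following sketch trains one Gaussian HMM per character with hmmlearn and classifies an unknown sequence by maximum log-likelihood. The library choice, model sizes, and synthetic 6-axis data are assumptions for illustration; the authors' own toolkit and the word-level concatenation of character models are not reproduced here.

```python
# Per-character Gaussian HMMs with classification by maximum log-likelihood.
# Data and model sizes are synthetic/illustrative, not the paper's setup.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def fake_samples(offset, n=20, length=30, dim=6):
    """Synthetic normalized 6-axis sequences for one character class."""
    return [rng.normal(offset, 0.3, size=(length, dim)) for _ in range(n)]

train = {"a": fake_samples(0.0), "b": fake_samples(1.0)}

models = {}
for char, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
    models[char] = m.fit(X, lengths)

def classify(seq):
    # Pick the character model with the highest log-likelihood.
    return max(models, key=lambda c: models[c].score(seq))

print(classify(rng.normal(1.0, 0.3, size=(30, 6))))  # expected: 'b'
```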


Human Factors in Computing Systems | 2015

Advancing Muscle-Computer Interfaces with High-Density Electromyography

Christoph Amma; Thomas Krings; Jonas Böer; Tanja Schultz

In this paper we present our results on using electromyographic (EMG) sensor arrays for finger gesture recognition. Sensing muscle activity makes it possible to capture finger motion without placing sensors directly on the hand or fingers and may thus be used to build unobtrusive body-worn interfaces. We use an electrode array with 192 electrodes to record high-density EMG of the upper forearm muscles. We recorded 25 sessions from 5 subjects and present in detail a baseline system for gesture recognition on this dataset, using a naive Bayes classifier to discriminate 27 gestures. We report an average accuracy of 90% for the within-session scenario, showing the feasibility of the EMG approach for discriminating a large number of subtle gestures. We analyze the effect of the number of electrodes used on recognition performance and show the benefit of using a high number of electrodes. Cross-session recognition typically suffers from electrode position changes from session to session. We present two methods to estimate the electrode shift between sessions based on a small amount of calibration data and compare them to a baseline system with no shift compensation. The presented methods raise the accuracy from the 59% baseline to 75% after shift compensation. The dataset is publicly available.
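The baseline classifier described above can be sketched with off-the-shelf tools. The example below computes per-electrode RMS features over synthetic high-density EMG windows and trains a Gaussian naive Bayes classifier on 27 classes; the electrode and gesture counts follow the abstract, while the data, window length, and feature choice are illustrative assumptions.

```python
# Per-electrode RMS features from synthetic high-density EMG windows,
# classified with Gaussian naive Bayes. Data and window length are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_gestures, n_electrodes, win = 27, 192, 100

# One RMS feature vector per simulated gesture window.
X, y = [], []
for g in range(n_gestures):
    amp = 0.5 + 0.05 * g                              # class-specific activity level
    for _ in range(25):
        emg = rng.normal(0.0, amp, size=(win, n_electrodes))
        X.append(np.sqrt(np.mean(emg ** 2, axis=0)))  # RMS per electrode
        y.append(g)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("within-session accuracy (synthetic):", clf.score(X_te, y_te))
```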


Ubiquitous Computing | 2014

Airwriting: a wearable handwriting recognition system

Christoph Amma; Marcus Georgi; Tanja Schultz

We present a wearable input system which enables interaction through 3D handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. The handwriting gestures are captured wirelessly by motion sensors, accelerometers and gyroscopes, attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a support vector machine to identify those data segments which contain handwriting. The recognition stage uses hidden Markov models (HMMs) to generate a text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary. A statistical language model is used to enhance recognition performance and to restrict the search space. We show that continuous gesture recognition with inertial sensors is feasible for gesture vocabularies that are several orders of magnitude larger than those of previously known systems. In a first experiment, we evaluate the spotting algorithm on a realistic data set including everyday activities. In a second experiment, we report the results from a nine-user experiment on handwritten sentence recognition. Finally, we evaluate the end-to-end system on a small but realistic data set.
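One ingredient mentioned above, the statistical language model that restricts the search space, can be illustrated with a toy rescoring step: candidate word sequences from the recognizer are re-ranked by adding a weighted bigram language-model score. The probabilities, candidates, and weight below are made up for illustration and do not reflect the paper's decoder.

```python
# Toy bigram language-model rescoring of recognizer hypotheses.
# All scores, candidates, and the weight are illustrative assumptions.
import math

bigram_logp = {  # toy log-probabilities, not trained on real text
    ("<s>", "hello"): math.log(0.4), ("hello", "world"): math.log(0.5),
    ("<s>", "hallo"): math.log(0.1), ("hallo", "world"): math.log(0.2),
}

def lm_score(words, floor=math.log(1e-4)):
    """Sum bigram log-probabilities with a crude floor for unseen pairs."""
    prev, total = "<s>", 0.0
    for w in words:
        total += bigram_logp.get((prev, w), floor)
        prev = w
    return total

# (recognizer score, hypothesis) pairs as they might come out of HMM decoding
candidates = [(-12.0, ["hallo", "world"]), (-12.5, ["hello", "world"])]
lm_weight = 5.0
best = max(candidates, key=lambda c: c[0] + lm_weight * lm_score(c[1]))
print(" ".join(best[1]))  # the language model favors "hello world"
```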


International Conference on Bio-inspired Systems and Signal Processing | 2015

Recognizing Hand and Finger Gestures with IMU based Motion and EMG based Muscle Activity Sensing

Marcus Georgi; Christoph Amma; Tanja Schultz

Session- and person-independent recognition of hand and finger gestures is of utmost importance for the practicality of gesture-based interfaces. In this paper we evaluate the performance of a wearable gesture recognition system that captures arm, hand, and finger motions by measuring movements of, and muscle activity at, the forearm. We fuse the signals of an Inertial Measurement Unit (IMU) worn at the wrist and the Electromyogram (EMG) of muscles in the forearm to infer hand and finger movements. A set of 12 gestures was defined, motivated by their similarity to actual physical manipulations and to gestures known from the interaction with mobile devices. We recorded performances of our gesture set by five subjects in multiple sessions. The resulting data corpus will be made publicly available to build a common ground for future evaluations and benchmarks. Hidden Markov Models (HMMs) are used as classifiers to discriminate between the defined gesture classes. We achieve a recognition rate of 97.8% in session-independent recognition and 74.3% in person-independent recognition. Additionally, we give a detailed analysis of the error characteristics and of the influence of each modality on the results to underline the benefits of using both modalities together.
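A minimal sketch of feature-level fusion for this IMU + EMG setup is shown below: per-window features from each modality are concatenated into a single observation vector. The windowing and feature choices are assumptions for illustration; the paper itself feeds the fused streams to HMM classifiers.

```python
# Feature-level fusion of IMU and EMG windows by concatenation.
# Window lengths, channel counts, and features are illustrative assumptions.
import numpy as np

def imu_features(window):
    """window: (T, 6) accelerometer + gyroscope samples."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def emg_features(window):
    """window: (T, C) EMG channels; RMS per channel."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def fuse(imu_window, emg_window):
    # Concatenate both modality feature vectors into a single observation.
    return np.concatenate([imu_features(imu_window), emg_features(emg_window)])

rng = np.random.default_rng(3)
obs = fuse(rng.normal(size=(50, 6)), rng.normal(size=(100, 8)))
print(obs.shape)  # (6 + 6 + 8,) = (20,)
```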


International Conference on Multimodal Interfaces | 2012

Vision-based handwriting recognition for unrestricted text input in mid-air

Alexander Schick; Daniel Morlock; Christoph Amma; Tanja Schultz; Rainer Stiefelhagen

We propose a vision-based system that recognizes handwriting in mid-air. The system does not depend on sensors or markers attached to the users and allows unrestricted character and word input from any position. It is the result of combining handwriting recognition based on Hidden Markov Models with multi-camera 3D hand tracking. We evaluated the system for both quantitative and qualitative aspects. It achieves recognition rates of 86.15% for character recognition and 97.54% for small-vocabulary isolated word recognition. Limitations are due to slow and low-resolution cameras or physical strain. Overall, the proposed handwriting recognition system provides an easy-to-use and accurate text input modality without placing restrictions on the users.
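One building block of multi-camera 3D hand tracking is triangulating a 3D point from two calibrated views. The sketch below implements plain linear (DLT) triangulation with toy projection matrices; the paper's actual tracker and camera calibration are not reproduced here.

```python
# Linear (DLT) triangulation of a 3D point from two calibrated views.
# The projection matrices and point are toy values for illustration only.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity view and a camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # approximately [0.2, -0.1, 4.0]
```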


KI'10: Proceedings of the 33rd Annual German Conference on Advances in Artificial Intelligence | 2010

BiosignalsStudio: a flexible framework for biosignal capturing and processing

Dominic Heger; Felix Putze; Christoph Amma; Michael Wand; Igor Plotkin; Thomas Wielatt; Tanja Schultz

In this paper we introduce BiosignalsStudio (BSS), a framework for multimodal sensor data acquisition. Due to its flexible architecture, it can be used for large-scale multimodal data collections as well as a multimodal input layer for intelligent systems. The paper describes the software framework and its contributions to our research work and systems.


ACM Crossroads Student Magazine | 2013

Airwriting: bringing text entry to wearable computers

Christoph Amma; Tanja Schultz

It may be possible to enable text entry by writing freely in the air, using only the hand as a stylus.


Human Factors in Computing Systems | 2015

Design and Evaluation of a Self-Correcting Gesture Interface based on Error Potentials from EEG

Felix Putze; Christoph Amma; Tanja Schultz

Any user interface which automatically interprets the user's input using natural modalities like gestures makes mistakes. System behavior based on such mistakes will confuse the user and lead to an erroneous interaction flow. The automatic detection of error potentials in electroencephalographic data recorded from a user allows the system to detect such states of confusion and automatically bring the interaction back on track. In this work, we describe the design of such a self-correcting gesture interface, implement different strategies to deal with detected errors, use a simulation approach to analyze the performance and costs of those strategies, and conduct a user study to evaluate user satisfaction. We show that self-correction significantly improves gesture recognition accuracy at lower cost and with higher acceptance than manual correction.
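The simulation-based analysis of correction strategies can be illustrated with a toy Monte-Carlo model: a recognizer with a fixed accuracy is paired with an error-potential detector characterized by true- and false-positive rates, and a simple "redo once on detected error" strategy is simulated. All rates and the cost measure below are illustrative assumptions, not the paper's numbers.

```python
# Toy Monte-Carlo simulation of a "redo on detected error" self-correction
# strategy. Accuracy, detector rates, and the cost measure are assumptions.
import random

def simulate(n=100_000, acc=0.85, errp_tpr=0.8, errp_fpr=0.1, seed=0):
    rng = random.Random(seed)
    correct, extra_inputs = 0, 0
    for _ in range(n):
        ok = rng.random() < acc                               # first recognition attempt
        detected = rng.random() < (errp_tpr if not ok else errp_fpr)
        if detected:                                          # strategy: redo the gesture once
            extra_inputs += 1
            ok = rng.random() < acc
        correct += ok
    return correct / n, extra_inputs / n

final_acc, cost = simulate()
print(f"accuracy with self-correction: {final_acc:.3f}, "
      f"extra gestures per input: {cost:.3f}")
```

Changing the detector rates or the retry strategy in this sketch shows how recognition accuracy trades off against the cost of additional user input.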


Intelligent User Interfaces | 2012

Airwriting: demonstrating mobile text input by 3D-space handwriting

Christoph Amma; Tanja Schultz

We demonstrate our airwriting interface for mobile hands-free text entry. The interface enables a user to input text into a computer by writing in the air as on an imaginary blackboard. Hand motion is measured by an accelerometer and a gyroscope attached to the back of the hand, and data is sent wirelessly to the processing computer. The system can continuously recognize arbitrary sentences based on a predefined vocabulary in real time. The recognizer uses Hidden Markov Models (HMMs) together with a statistical language model. We achieve a user-independent word error rate of 11% for an 8K-word vocabulary, based on an experiment with nine users.

Collaboration


Dive into Christoph Amma's collaborations.

Top Co-Authors

Marcus Georgi (Karlsruhe Institute of Technology)
Felix Putze (Karlsruhe Institute of Technology)
Dominic Heger (Karlsruhe Institute of Technology)
Dirk Gehrig (Karlsruhe Institute of Technology)
Andreas Fischer (Karlsruhe Institute of Technology)
Christian Herff (Karlsruhe Institute of Technology)
Daniel Morlock (Karlsruhe Institute of Technology)
Dominic Telaar (Karlsruhe Institute of Technology)