

Publication


Featured research published by Bernd Tessendorf.


International Conference on Pervasive Computing | 2011

Recognition of hearing needs from body and eye movements to improve hearing instruments

Bernd Tessendorf; Andreas Bulling; Daniel Roggen; Thomas Stiefmeier; Manuela Feilner; Peter Derleth; Gerhard Tröster

Hearing instruments (HIs) have emerged as true pervasive computers as they continuously adapt the hearing program to the user's context. However, current HIs are not able to distinguish different hearing needs in the same acoustic environment. In this work, we explore how information derived from body and eye movements can be used to improve the recognition of such hearing needs. We conduct an experiment to provoke an acoustic environment in which different hearing needs arise: active conversation and working while colleagues are having a conversation in a noisy office environment. We record body movements at nine body locations, eye movements using electrooculography (EOG), and sound using commercial HIs for eleven participants. Using a support vector machine (SVM) classifier and person-independent training, we improve the accuracy from 77% based on sound alone to 92% using body movements. With a view to a future implementation in an HI, we then perform a detailed analysis of the sensors attached to the head. We achieve the best accuracy of 86% using eye movements, compared to 84% for head movements. Our work demonstrates the potential of additional sensor modalities for future HIs and motivates investigating the wider applicability of this approach to further hearing situations and needs.
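The person-independent evaluation described above can be sketched as leave-one-participant-out cross-validation of an SVM. The feature dimensionality, labels, and data below are synthetic stand-ins, not the paper's dataset:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in: one feature vector per window from body/eye sensors,
# a binary hearing-need label, and a participant id per window.
X = rng.normal(size=(110, 8))
y = rng.integers(0, 2, size=110)
groups = np.repeat(np.arange(11), 10)  # 11 participants, 10 windows each

# Person-independent training: hold out all windows of one participant
# per fold, so the classifier never sees the test person during training.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"person-independent accuracy: {scores.mean():.2f}")
```

With random labels the mean accuracy hovers near chance; on real sensor features the same protocol yields the paper's 77%/92% comparison.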


International Conference on Intelligent Sensors, Sensor Networks and Information Processing | 2011

An IMU-based sensor network to continuously monitor rowing technique on the water

Bernd Tessendorf; Franz Gravenhorst; Bert Arnrich; Gerhard Tröster

In the sport of rowing, athletes and coaches are concerned with optimizing a rower's technique in order to improve rowing performance. In this paper, we present the design and real-world evaluation of a sensor network approach to support improving the rower's performance. In cooperation with professional rowing teams, we found that a network of inertial measurement units (IMUs) is well suited to continuously and unobtrusively monitor important indicators relating to rowing technique. In a feasibility study with 5 participants we first investigated the optimal sensor setup; in the final setup we attached 3 IMUs to the oars and the boat. From 18 participants (including both ambitious amateurs and world-class rowers) we recorded both training and racing sessions, each consisting of 1000 m of rowing. We present 4 rowing technique indicators for all 18 participants. Using the example of two world-class rowers, we demonstrate in detail how sensor networks support the iterative process of optimizing individual rowing technique.
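One basic indicator derivable from an oar-mounted IMU is the stroke rate. A minimal sketch, assuming the oar's angular-rate signal reverses sign at the catch (the exact indicators and processing in the paper are not specified here):

```python
import numpy as np

def stroke_rate(gyro_z, fs):
    """Estimate strokes per minute from an oar-mounted gyroscope trace.

    A stroke is counted at each negative-to-positive zero crossing of
    the angular-rate signal, i.e. the reversal from recovery to drive.
    """
    signs = np.sign(gyro_z)
    crossings = np.where((signs[:-1] < 0) & (signs[1:] > 0))[0]
    duration_min = len(gyro_z) / fs / 60.0
    return len(crossings) / duration_min

# Synthetic oar signal: a 0.5 Hz stroke cycle (30 strokes/min),
# sampled at 100 Hz for one minute (samples offset to avoid exact zeros).
fs = 100
t = np.arange(0, 60, 1 / fs) + 0.005
gyro_z = np.sin(2 * np.pi * 0.5 * t)
print(stroke_rate(gyro_z, fs))  # close to 30 strokes/min
```

Real gyroscope data would need low-pass filtering or hysteresis before the zero-crossing count to avoid chatter near the reversal points.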


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Unsupervised monitoring of sitting behavior

Bernd Tessendorf; Bert Arnrich; Johannes Schumm; Cornelia Setz; Gerhard Tröster

In recent years, the monitoring of sitting postures was discovered to be a promising measure of healthy sitting behavior, comfort, physical wellness and emotions. Most state-of-the-art systems for monitoring sitting behavior are based on supervised methods that are limited to a fixed set of classes. We present a method that does not rely on training but distinguishes between different postures autonomously. We designed and implemented a system to monitor sitting behavior in an unsupervised manner. Based on the pressure distribution acquired from a pressure mat, we generate prototypes of sitting postures. The prototypes are stored in a database and serve as a reference for comparing and classifying incoming pressure data. The system relies on only a few interpretable system parameters and performs in real time. We conducted an experiment with 8 subjects and recorded data for 16 different postures per subject. Our proposed method generates 15.57 posture prototypes on average, which reflects well the 16 postures that actually occurred in the experiment. In 91% of all cases an unambiguous assignment of a posture to exactly one generated prototype was achieved; conversely, an unambiguous assignment of a prototype to a posture was obtained in 86% of cases.
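The prototype mechanism described above can be sketched as nearest-prototype matching with a single distance threshold; the distance metric and threshold value here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

class PosturePrototypes:
    """Unsupervised posture monitoring sketch: each incoming pressure-mat
    frame is matched to the stored prototypes; a frame far from all of
    them spawns a new prototype. `threshold` is the only tuning knob."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.prototypes = []

    def classify(self, frame):
        frame = np.asarray(frame, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(frame - p) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                return best            # matched an existing posture
        self.prototypes.append(frame)  # unseen posture: new prototype
        return len(self.prototypes) - 1

monitor = PosturePrototypes(threshold=1.0)
upright = np.zeros(16)
lean_left = np.zeros(16); lean_left[:4] = 3.0
print(monitor.classify(upright))         # 0: first prototype created
print(monitor.classify(lean_left))       # 1: new posture
print(monitor.classify(upright + 0.01))  # 0: matches first prototype
```

Because no labels are involved, the prototype count can drift from the true posture count, which is exactly the 15.57-vs-16 gap the abstract reports.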


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Ear-worn reference data collection and annotation for multimodal context-aware hearing instruments

Bernd Tessendorf; Peter Derleth; Manuela Feilner; Franz Gravenhorst; Andreas Kettner; Daniel Roggen; Thomas Stiefmeier; Gerhard Tröster

In this work we present a newly developed ear-worn sensing and annotation device to unobtrusively capture head movements in real-life situations. It has been designed in the context of developing multimodal hearing instruments (HIs), but is not limited to this application domain. The ear-worn device captures triaxial acceleration, rate of turn and magnetic field, and features a one-button approach for real-time data annotation by the user. The system runtime is over 5 hours at a sampling rate of 128 Hz. In a user study with 21 participants the device was perceived as comfortable and held robustly at the ear. Using head acceleration data as an example, we perform unsupervised clustering to demonstrate the benefit of head movements for multimodal HIs. We believe this novel technology will help push the boundaries of HI technology.


Computer Science and Information Systems | 2013

Design of a multimodal hearing system

Bernd Tessendorf; Matjaz Debevc; Peter Derleth; Manuela Feilner; Franz Gravenhorst; Daniel Roggen; Thomas Stiefmeier; Gerhard Tröster

Hearing instruments (HIs) have become context-aware devices that analyze the acoustic environment in order to automatically adapt sound processing to the user’s current hearing wish. However, in the same acoustic environment an HI user can have different hearing wishes requiring different behaviors from the hearing instrument. In these cases, the audio signal alone contains too little contextual information to determine the user’s hearing wish. Modalities additional to sound can provide the missing information to improve the adaptation. In this work, we review additional modalities to sound in HIs and present a prototype of a newly developed wireless multimodal hearing system. The platform takes into account additional sensor modalities such as the user’s body movement and location. We characterize the system regarding runtime, latency and reliability of the wireless connection, and point out possibilities arising from the novel approach.


International Conference on Computers Helping People with Special Needs | 2012

Improving game accessibility with vibrotactile-enhanced hearing instruments

Bernd Tessendorf; Peter Derleth; Manuela Feilner; Daniel Roggen; Thomas Stiefmeier; Gerhard Tröster

In this work we present enhanced hearing instruments (HIs) that provide vibrotactile feedback behind the user's ears in parallel to sound. Using this additional feedback modality, we display dedicated vibrotactile patterns to support the user in localizing sound sources. In a study with 4 HI users and 5 normal-hearing participants we deploy the system in a gaming scenario. The open-source availability of the mainstream 3D first-person shooter game used in the study allowed us to add code for accessibility. We evaluate the system qualitatively with user questionnaires and quantitatively with performance metrics calculated from statistics within the game. The system was perceived as beneficial and allowed the HI users to achieve gaming performance closer to that of the normal-hearing participants.


Conference on Computers and Accessibility | 2011

Design of a bilateral vibrotactile feedback system for lateralization

Bernd Tessendorf; Daniel Roggen; Michael Spuhler; Thomas Stiefmeier; Gerhard Tröster; Tobias Grämer; Manuela Feilner; Peter Derleth

We present a bilateral vibrotactile feedback system for accurate lateralization of target angles in the complete 360-degree range. We envision integrating this system into context-aware hearing instruments (HIs) or cochlear implants (CIs) to support users who experience lateralization difficulties. As a foundation for this, it is vital to investigate which kinds of feedback and vibration patterns are optimal to support lateralization. Our system enables evaluating and comparing different encoding schemes with respect to resolution, reaction time, intuitiveness and user dependency. The system supports bilateral vibrotactile feedback to reflect integration into HIs or CIs worn at both ears, and implements two approaches: Quantized Absolute Heading (QAH) and Continuous Guidance Feedback (CGF). We provide a detailed description of our hardware, which was designed to also be applicable to generic vibrotactile feedback applications.
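One plausible reading of the Quantized Absolute Heading encoding is mapping the full 360-degree range onto a small number of discrete sectors, each renderable as a distinct vibration pattern. The sector count and centering below are illustrative assumptions, not the paper's design:

```python
def quantize_heading(angle_deg, n_sectors=8):
    """Map an absolute target heading (0-360 deg) to one of n discrete
    sectors; each sector could drive a distinct vibrotactile pattern on
    the left/right actuators. Sector 0 is centred on 0 degrees."""
    width = 360.0 / n_sectors
    return int(((angle_deg + width / 2) % 360.0) // width)

print(quantize_heading(0))    # 0
print(quantize_heading(95))   # 2 (with 45-degree sectors)
print(quantize_heading(350))  # 0: wraps back into the front sector
```

Continuous Guidance Feedback would instead modulate the signal smoothly (e.g. intensity proportional to the remaining angular error) rather than snapping to sectors.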


International Symposium on Wearable Computers | 2010

Towards multi-modal context recognition for hearing instruments

Bernd Tessendorf; Andreas Bulling; Daniel Roggen; Thomas Stiefmeier; Gerhard Tröster; Manuela Feilner; Peter Derleth

Current hearing instruments (HIs) rely only on auditory scene analysis to adapt to the situation of the user. For this reason, these systems are limited in the number and type of situations they can detect. We investigate how context information derived from eye and head movements can be used to resolve such situations. We focus on two example problems that are challenging for current HIs: distinguishing concentrated work from interaction, and detecting whether a person is walking alone or walking while having a conversation. We collect an eleven-participant (6 male, 5 female, age 24–59) dataset that covers different typical office activities. Using person-independent training and isolated recognition, we achieve an average precision of 71.7% (recall: 70.1%) for recognising concentrated work and 57.2% precision (recall: 81.3%) for detecting walking while conversing.


Ubiquitous Computing | 2014

Exploration of head gesture control for hearing instruments

Bernd Tessendorf; Franz Gravenhorst; Daniel Roggen; Thomas Stiefmeier; Christina Strohrmann; Gerhard Tröster; Peter Derleth; Manuela Feilner

In this work, we investigated the benefit of head gestures as a user interface to control hearing instruments (HIs). We developed a prototype of a head-gesture-controlled HI, based on a customised wireless acceleration sensor for unconstrained and continuous real-time monitoring of the user's head movements. We evaluated the system from a technical point of view and achieved a precision of 96% and a recall of 97% for spotting the two head gestures used: tilting the head to the left and to the right. We further evaluated the system from the user's point of view based on feedback from 6 hearing-impaired HI users (4 men, 2 women, age 27–60). We compared our head-gesture-based control to existing HI user interfaces: HI-integrated buttons and the HI remote control. We found that the benefit of the different HI interaction solutions depends on the user's current situation, and that all participating HI users would appreciate head gesture control as an additional, complementary user interface.
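Spotting the two tilt gestures can be sketched as thresholding the lateral acceleration axis and requiring a minimum excursion duration. The threshold, duration, and segmentation below are illustrative, not the paper's spotting algorithm:

```python
import numpy as np

def spot_tilt_gestures(acc_lateral, threshold=0.4, min_len=10):
    """Spot head tilts in a lateral-acceleration trace (in g).

    A sustained excursion beyond +/- threshold lasting at least
    `min_len` samples is reported as one 'right' or 'left' gesture.
    """
    gestures, run_sign, run_len = [], 0, 0
    for a in acc_lateral:
        s = 1 if a > threshold else (-1 if a < -threshold else 0)
        if s == run_sign and s != 0:
            run_len += 1
        else:
            if run_sign != 0 and run_len >= min_len:
                gestures.append("right" if run_sign > 0 else "left")
            run_sign, run_len = s, (1 if s != 0 else 0)
    if run_sign != 0 and run_len >= min_len:
        gestures.append("right" if run_sign > 0 else "left")
    return gestures

# Synthetic trace: rest, a right tilt, rest, a left tilt, rest.
trace = np.concatenate([np.zeros(20), 0.6 * np.ones(15),
                        np.zeros(20), -0.6 * np.ones(15), np.zeros(5)])
print(spot_tilt_gestures(trace))  # ['right', 'left']
```

The `min_len` requirement is what keeps brief jolts from registering as gestures; real deployments would tune it against natural head motion.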


Ambient Intelligence | 2012

Automatic Power-Off for Binaural Hearing Instruments

Bernd Tessendorf; Peter Derleth; Manuela Feilner; Daniel Roggen; Thomas Stiefmeier; Gerhard Tröster

Users of state-of-the-art hearing instruments (HIs) switch their devices on and off by closing and opening the battery compartment. Switching the HIs off is important for users to preserve the battery life of their HIs. However, users currently need to switch off their devices manually, which is easy to forget or can be difficult, e.g. for elderly users with reduced dexterity. In this work, we propose an approach that avoids the need to manually switch off HIs. We assume that whenever the user's two HIs are not moved in the same way, they cannot both be at the user's ears and are thus not in use. We exploit the binaural communication between the user's HIs available in the latest generation of HIs, together with the concept of multimodal HIs, which integrates sensors such as accelerometers. On a data set of one hour comprising acceleration data of two HIs worn by three male participants (age 26–31), we achieve a precision of 100% and a recall of 93% in detecting power-off events.
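The same-movement assumption above lends itself to a simple sketch: compare the acceleration magnitudes of the two HIs over a window and power off when they are weakly correlated. The Pearson correlation and threshold are illustrative choices, not the paper's detector:

```python
import numpy as np

def hi_not_worn(acc_left, acc_right, corr_threshold=0.8):
    """Decide whether a binaural HI pair can be powered off.

    Two HIs worn on the same head move together, so their acceleration
    magnitudes over a window should be strongly correlated; a weak
    correlation suggests the devices are not both at the ears.
    """
    mag_l = np.linalg.norm(acc_left, axis=1)
    mag_r = np.linalg.norm(acc_right, axis=1)
    if mag_l.std() == 0 or mag_r.std() == 0:
        return True  # no movement at all: treat as not in use
    r = np.corrcoef(mag_l, mag_r)[0, 1]
    return r < corr_threshold

rng = np.random.default_rng(0)
head = rng.normal(size=(256, 3))  # shared head motion seen by both HIs
worn = hi_not_worn(head, head + 0.01 * rng.normal(size=(256, 3)))
off_table = hi_not_worn(head, rng.normal(size=(256, 3)))  # one HI left behind
print(worn, off_table)  # False True
```

Using magnitudes rather than raw axes makes the comparison tolerant of the two devices' differing orientations on the head.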
