Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kai Kunze is active.

Publication


Featured research published by Kai Kunze.


ubiquitous computing | 2008

Dealing with sensor displacement in motion-based on-body activity recognition systems

Kai Kunze; Paul Lukowicz

We present a set of heuristics that significantly increase the robustness of motion sensor-based activity recognition with respect to sensor displacement. In this paper, placement refers to the position within a single body part (e.g., the lower arm). We show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. We first describe the physical principles that lead to our heuristics. We then evaluate them, first on a set of synthetic lower arm motions which are well suited to illustrate the strengths and limits of our approach, then on an extended modes-of-locomotion problem (sensors on the upper leg), and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example our heuristics raise the displaced recognition rate from 24% for a displaced accelerometer, which had 96% recognition when not displaced, to 82%.
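The abstract does not spell out the heuristics themselves, but a common way to make activity recognition less sensitive to sensor displacement is to compute features on the acceleration magnitude, which changes less than per-axis values when the sensor shifts or rotates within a body part. The sketch below illustrates that general idea only; the window length and feature set are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def displacement_robust_features(acc_xyz, window=128):
    """Per-window statistics of the acceleration magnitude.

    acc_xyz: (N, 3) array of raw accelerometer samples.
    Using |a| instead of individual axes discards the sensor's exact
    orientation, one simple way to reduce the effect of small shifts
    within a body part (illustrative only).
    """
    mag = np.linalg.norm(acc_xyz, axis=1)
    feats = []
    for start in range(0, len(mag) - window + 1, window):
        w = mag[start:start + window]
        feats.append([w.mean(), w.std(),
                      np.percentile(w, 75) - np.percentile(w, 25)])
    return np.asarray(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acc = rng.normal(0, 1, size=(1024, 3)) + np.array([0.0, 0.0, 9.81])
    print(displacement_robust_features(acc).shape)  # (8, 3)
```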


computational intelligence and games | 2006

Using Wearable Sensors for Real-Time Recognition Tasks in Games of Martial Arts - An Initial Experiment

Ernst A. Heinz; Kai Kunze; Matthias Gruber; David Bannach; Paul Lukowicz

Besides their stunning graphics, modern entertainment systems feature ever-higher levels of immersive user interaction. Today, this is mostly achieved by virtual reality (VR) and augmented reality (AR) setups. On top of these, we envision adding ambient intelligence and context awareness to gaming applications in general and games of martial arts in particular. To this end, we conducted an initial experiment with inexpensive body-worn gyroscopes and acceleration sensors for the chum kiu motion sequence in wing tsun (a popular form of kung fu). The resulting data confirm the feasibility of our vision. Fine-tuned adaptations of various thresholding and pattern-matching techniques known from the fields of computational intelligence and signal processing should suffice to automate the analysis and recognition of important wing tsun movements in real time. Moreover, the data also seem to allow for the possibility of automatically distinguishing between certain levels of expertise and quality in executing the movements.
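As a minimal sketch of the thresholding half of the techniques named in the abstract, the snippet below segments sustained motion from gyroscope data by thresholding the mean rotational energy per window. The sampling rate, window size, and threshold are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def segment_movements(gyro_xyz, fs=100, win_s=0.25, threshold=1.5):
    """Return (start_sample, end_sample) spans of sustained motion.

    gyro_xyz: (N, 3) angular rates in rad/s. A window counts as active
    when its mean squared angular rate exceeds the threshold; adjacent
    active windows are merged into one segment. All parameter values
    are assumptions for illustration.
    """
    win = int(fs * win_s)
    energy = np.linalg.norm(gyro_xyz, axis=1) ** 2
    active = [energy[s:s + win].mean() > threshold
              for s in range(0, len(energy) - win + 1, win)]
    segments, seg_start = [], None
    for i, a in enumerate(active):
        if a and seg_start is None:
            seg_start = i * win
        elif not a and seg_start is not None:
            segments.append((seg_start, i * win))
            seg_start = None
    if seg_start is not None:
        segments.append((seg_start, len(active) * win))
    return segments
```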


location and context awareness | 2005

Where am I: recognizing on-body positions of wearable sensors

Kai Kunze; Paul Lukowicz; Holger Junker; Gerhard Tröster

The paper describes a method that allows us to derive the location of an acceleration sensor placed on the user's body solely based on the sensor's signal. The approach described here constitutes a first step in our work towards the use of sensors integrated in standard appliances and accessories carried by the user for complex context recognition. It is also motivated by the fact that device location is an important context (e.g., glasses being worn vs. glasses in a jacket pocket). Our method uses a (sensor) location- and orientation-invariant algorithm to identify time periods where the user is walking and then leverages the specific characteristics of walking motion to determine the location of the body-worn sensor. In the paper we outline the relevance of sensor location recognition for appliance-based context awareness and then describe the details of the method. Finally, we present the results of an experimental study with six subjects and 90 walking sections spread over several hours, indicating that reliable recognition is feasible. The results are in the low nineties for frame-by-frame recognition and reach 100% for the more relevant event-based case.
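The first stage of the pipeline, detecting when the user is walking in a location- and orientation-invariant way, can be approximated by checking whether the spectrum of the acceleration magnitude concentrates power in the typical step-frequency band. The sketch below shows that idea under assumed values for the sampling rate, frequency band, and power ratio; it is not the paper's exact detector.

```python
import numpy as np

def looks_like_walking(acc_xyz, fs=50, band=(1.0, 3.0), min_ratio=0.3):
    """Crude, orientation-invariant walking detector (illustrative).

    Returns True when the acceleration-magnitude spectrum places at
    least min_ratio of its power in the assumed step-frequency band.
    fs, band, and min_ratio are assumptions, not values from the paper.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)
    mag = mag - mag.mean()
    spectrum = np.abs(np.fft.rfft(mag)) ** 2
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    total = spectrum[freqs > 0].sum()
    return total > 0 and in_band / total > min_ratio
```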


international symposium on wearable computers | 2009

Which Way Am I Facing: Inferring Horizontal Device Orientation from an Accelerometer Signal

Kai Kunze; Paul Lukowicz; Kurt Partridge; Bo Begole

We present a method to infer the orientation of a mobile device carried in a pocket from the acceleration signal acquired when the user is walking. Whereas previous work has shown how to determine the orientation in the vertical plane (angle towards earth gravity), we demonstrate how to compute the orientation within the horizontal plane. To validate our method we compare its output with GPS heading information when walking in a straight line. Over a total of 16 different orientations and traces we obtain a mean difference of 5 degrees with a standard deviation of 2.5 degrees.
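One way to illustrate the underlying idea: during walking, the acceleration oscillates most strongly along the direction of travel, so after removing the gravity component the dominant horizontal oscillation axis indicates the device's heading relative to the walk (up to a 180-degree ambiguity). The sketch below is an assumed variant built on that principle, not the authors' exact algorithm.

```python
import numpy as np

def horizontal_walk_axis(acc_xyz):
    """Estimate the device axis aligned with the walking direction.

    Gravity is approximated by the mean acceleration over the walking
    segment, the signal is projected onto the plane perpendicular to
    gravity, and the dominant oscillation axis in that plane is found
    via PCA (SVD of the centered data). Result is ambiguous by 180
    degrees; this is an illustrative sketch only.
    """
    g = acc_xyz.mean(axis=0)
    g /= np.linalg.norm(g)
    horiz = acc_xyz - np.outer(acc_xyz @ g, g)   # remove gravity component
    horiz -= horiz.mean(axis=0)
    _, _, vt = np.linalg.svd(horiz, full_matrices=False)
    return vt[0]                                 # unit vector in device coordinates
```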


augmented human international conference | 2014

In the blink of an eye: combining head motion and eye blink frequency for activity recognition with Google Glass

Shoya Ishimaru; Kai Kunze; Koichi Kise; Jens Weppner; Andreas Dengel; Paul Lukowicz; Andreas Bulling

We demonstrate how information about eye blink frequency and head motion patterns derived from Google Glass sensors can be used to distinguish different types of high level activities. While it is well known that eye blink frequency is correlated with user activity, our aim is to show that (1) eye blink frequency data from an unobtrusive, commercial platform which is not a dedicated eye tracker is good enough to be useful and (2) that adding head motion pattern information significantly improves the recognition rates. The method is evaluated on a data set from an experiment with eight participants covering five activity classes (reading, talking, watching TV, mathematical problem solving, and sawing), showing 67% recognition accuracy for eye blinking only and 82% when extended with head motion patterns.
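A simple way to picture the fusion of the two modalities is to build one feature vector per time window that combines blink statistics and head-motion statistics, and feed it to any standard classifier. The concrete feature set below (blink rate plus mean and standard deviation of rotational energy) is an illustrative assumption, not the feature set used in the paper.

```python
import numpy as np

def fused_features(blink_times, gyro_xyz, window_s):
    """Combine blink-frequency and head-motion statistics for one window.

    blink_times: blink timestamps (seconds) falling inside the window.
    gyro_xyz:    (N, 3) head angular rates from the IMU for the window.
    Returns a 3-dimensional feature vector; the choice of features is
    a hypothetical example only.
    """
    blink_rate = len(blink_times) / window_s
    energy = np.linalg.norm(gyro_xyz, axis=1)
    return np.array([blink_rate, energy.mean(), energy.std()])
```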


international conference on document analysis and recognition | 2013

The Wordometer -- Estimating the Number of Words Read Using Document Image Retrieval and Mobile Eye Tracking

Kai Kunze; Hitoshi Kawaichi; Kazuyo Yoshimura; Koichi Kise

We introduce the Wordometer, a novel method to estimate the number of words a user reads using a mobile eye tracker and document image retrieval. We present a reading detection algorithm which works with over 91% accuracy across 10 test subjects using 10-fold cross validation. We implement two algorithms to estimate the words read using a line break detector. A simple version gives an average error rate of 13.5% for 9 users over 10 documents. A more sophisticated word count algorithm based on support vector regression with an RBF kernel reaches an average error rate of only 8.2% (6.5% if one test subject with abnormal behavior is excluded). The achieved error rates are comparable to those of pedometers that count our steps in daily life. Thus, we believe the Wordometer can be used as a step counter for the information we read, to make our knowledge life healthier.
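The abstract names support vector regression with an RBF kernel as the word-count estimator. The sketch below shows what such a regressor looks like in scikit-learn; the input features (detected line breaks, reading duration), the training values, and the hyperparameters are made-up placeholders, not the paper's data or settings.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical per-document features: [number of detected line breaks,
# reading duration in seconds]. The real feature set is not given in the abstract.
X_train = np.array([[12, 40.0], [30, 95.0], [55, 180.0], [80, 260.0]])
y_train = np.array([150.0, 380.0, 700.0, 1020.0])   # made-up word counts

# Support vector regression with an RBF kernel, as named in the abstract;
# C and epsilon are illustrative values, not the paper's hyperparameters.
model = SVR(kernel="rbf", C=100.0, epsilon=5.0)
model.fit(X_train, y_train)

print(model.predict(np.array([[40, 130.0]])))  # estimated number of words read
```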


International Journal of Sensors Wireless Communications and Control | 2012

The OPPORTUNITY Framework and Data Processing Ecosystem for Opportunistic Activity and Context Recognition

Marc Kurz; Gerold Hölzl; Alois Ferscha; Alberto Calatroni; Daniel Roggen; Gerhard Tröster; Hesam Sagha; Ricardo Chavarriaga; José del R. Millán; David Bannach; Kai Kunze; Paul Lukowicz

Opportunistic sensing can be used to obtain data from sensors that just happen to be present in the user's surroundings. By harnessing these opportunistic sensor configurations to infer activity or context, ambient intelligence environments become more robust, offer improved user comfort thanks to reduced requirements on body-worn sensor deployment, and are not limited to a predefined, restricted location defined by sensors specifically deployed for an application. We present the OPPORTUNITY Framework and Data Processing Ecosystem to recognize human activities or contexts in such opportunistic sensor configurations. It addresses the challenge of inferring human activities with limited guarantees about placement, nature and run-time availability of sensors. We realize this by a combination of: (i) a sensing/context framework capable of coordinating sensor recruitment according to a high-level recognition goal, (ii) the corresponding dynamic instantiation of data processing elements to infer activities, and (iii) a tight interaction between the last two elements in an "ecosystem" that autonomously discovers novel knowledge about sensor characteristics, reusable in subsequent recognition queries. This allows the system to operate in open-ended environments. We demonstrate OPPORTUNITY on a large-scale dataset collected to exhibit the sensor richness and related characteristics typical of opportunistic sensing systems. The dataset comprises 25 hours of activities of daily living, collected from 12 subjects. It contains data of 72 sensors covering 10 modalities and 15 networked sensor systems deployed in objects, on the body and in the environment. We show the mapping from a recognition goal to an instantiation of the recognition system. We also show the knowledge acquisition and reuse of the autonomously discovered semantic meaning of a new unknown sensor, the autonomous update of the trust indicator of a sensor due to unforeseen deteriorations, and the autonomous discovery of the on-body sensor placement.
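To make the notion of goal-driven sensor recruitment concrete, the toy sketch below maps a recognition goal (a set of required modalities) to whichever available sensors currently have a sufficient trust indicator. The class names, fields, and threshold are assumptions for illustration and do not reflect the actual OPPORTUNITY framework API.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    modality: str
    trust: float        # 0..1, lowered when deterioration is detected
    available: bool

def recruit(sensors, goal_modalities, min_trust=0.5):
    """Pick the most trusted available sensor for each required modality.

    A toy illustration of goal-driven recruitment; not the real framework.
    """
    plan = {}
    for modality in goal_modalities:
        candidates = [s for s in sensors
                      if s.available and s.modality == modality and s.trust >= min_trust]
        if candidates:
            plan[modality] = max(candidates, key=lambda s: s.trust)
    return plan

sensors = [
    Sensor("wrist-imu", "acceleration", 0.9, True),
    Sensor("pocket-phone", "acceleration", 0.4, True),   # low trust after drift
    Sensor("kitchen-reed", "object-use", 0.8, False),    # currently unavailable
]
print(recruit(sensors, ["acceleration", "object-use"]))
```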


automation, robotics and control systems | 2006

Distributed modular toolbox for multi-modal context recognition

David Bannach; Kai Kunze; Paul Lukowicz; Oliver Amft

We present a GUI-based C++ toolbox that allows for building distributed, multi-modal context recognition systems by plugging together reusable, parameterizable components. The goals of the toolbox are to simplify the steps from prototypes to online implementations on low-power mobile devices, facilitate portability between platforms and foster easy adaptation and extensibility. The main features of the toolbox we focus on here are a set of parameterizable algorithms including different filters, feature computations and classifiers, a runtime environment that supports complex synchronous and asynchronous data flows, encapsulation of hardware-specific aspects including sensors and data types (e.g., int vs. float), and the ability to outsource parts of the computation to remote devices. In addition, components are provided for group-wise, event-based sensor synchronization and data labeling. We describe the architecture of the toolbox and illustrate its functionality on two case studies that are part of the downloadable distribution.
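The toolbox itself is a C++ system, but its core abstraction, a chain of reusable, parameterizable components through which data frames flow, can be sketched in a few lines of Python. The component names and the synchronous-only flow below are simplifications for illustration, not the toolbox's actual interfaces.

```python
import numpy as np

class Component:
    """Minimal stand-in for a reusable, parameterizable processing component."""
    def process(self, frame):
        raise NotImplementedError

class MeanFilter(Component):
    def __init__(self, width=5):
        self.width, self.buf = width, []
    def process(self, frame):
        self.buf = (self.buf + [frame])[-self.width:]
        return np.mean(self.buf, axis=0)

class EnergyFeature(Component):
    def process(self, frame):
        return float(np.sum(np.square(frame)))

class ThresholdClassifier(Component):
    def __init__(self, threshold=1.0):
        self.threshold = threshold
    def process(self, feature):
        return "active" if feature > self.threshold else "idle"

def run_pipeline(components, frames):
    """Push each frame through the chain of components (synchronous flow only)."""
    out = []
    for frame in frames:
        x = frame
        for c in components:
            x = c.process(x)
        out.append(x)
    return out

frames = [np.array([0.1, 0.1, 0.1]), np.array([2.0, 1.5, 0.5])]
print(run_pipeline([MeanFilter(), EnergyFeature(), ThresholdClassifier()], frames))
```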


IEEE Pervasive Computing | 2014

Sensor Placement Variations in Wearable Activity Recognition

Kai Kunze; Paul Lukowicz

This article explores how placement variations in user-carried electronic appliances influence human action recognition and how such influence can be mitigated. The authors categorize possible variations into three classes: placement on different body parts (such as a jacket pocket versus a hip holster versus a trouser pocket), small displacement within a given coarse location (such as a device shifting in a pocket), and different orientations. For each of these variations, they present a systematic evaluation of the impact on human action recognition and give an overview of possible approaches to deal with them. They conclude with a detailed practical example of how to compensate for on-body placement variations that builds on an extension of their previous work. This article is part of a special issue on wearable computing.
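For the orientation class of variations, a frequently used compensation idea is to re-express the signal in gravity-relative terms: the component along gravity and the magnitude perpendicular to it do not change when the device is rotated in a pocket. The sketch below illustrates that idea under assumed conditions (gravity estimated as the mean over a quasi-static window); it is not presented as the authors' compensation method.

```python
import numpy as np

def split_vertical_horizontal(acc_xyz):
    """Split acceleration into gravity-aligned and horizontal channels.

    Gravity is estimated as the mean acceleration over the window; the
    signed component along gravity and the magnitude perpendicular to it
    are both independent of how the device is rotated around that axis.
    Illustrative sketch only.
    """
    g = acc_xyz.mean(axis=0)
    g /= np.linalg.norm(g)
    vertical = acc_xyz @ g                                    # along gravity
    horizontal = np.linalg.norm(acc_xyz - np.outer(vertical, g), axis=1)
    return vertical, horizontal
```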


IEEE Pervasive Computing | 2015

Making Regular Eyeglasses Smart

Oliver Amft; Florian Wahl; Shoya Ishimaru; Kai Kunze

The authors discuss the vast application potential of multipurpose smart eyeglasses that integrate into the form factor of traditional spectacles and provide frequently needed sensing and interaction. In combination with software apps running on smart eyeglasses, the authors develop universal assistance systems that remain unobtrusive and thus can support wearers throughout their daily life. They describe a blueprint of the embedded architecture of smart eyeglasses and identify various software app clusters. They discuss findings from using smart eyeglasses prototypes in three case studies: to recognize cognitive workload, quantify reading habits, and monitor light exposure to estimate the circadian phase. This article is part of a special issue on digitally enhanced reality.

Collaboration


Dive into Kai Kunze's collaborations.

Top Co-Authors

Koichi Kise, Osaka Prefecture University
Shoya Ishimaru, Osaka Prefecture University
Masakazu Iwamura, Osaka Prefecture University
Katsuma Tanaka, Osaka Prefecture University