Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Lukas Köping is active.

Publication


Featured research published by Lukas Köping.


International Conference on Indoor Positioning and Indoor Navigation | 2015

Multi sensor 3D indoor localisation

Frank Ebner; Toni Fetzer; Frank Deinzer; Lukas Köping; Marcin Grzegorzek

We present an indoor localisation system that integrates different sensor modalities, namely Wi-Fi, barometer, iBeacons, step-detection and turn-detection, for the localisation of pedestrians within buildings over multiple floors. To model the pedestrian's movement, which is constrained by walls and other obstacles, we propose a state transition based upon random walks on graphs. This model also frees us from the burden of frequently updating the system. In addition, we make use of barometer information to estimate the current floor. Furthermore, we present a statistical approach to avoid the incorporation of faulty heading information caused by changing the smartphone's position. The evaluation of the system within a 77 m × 55 m building with 4 floors shows that high accuracy can be achieved while also keeping the update rates low.
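The paper itself does not include code, but the graph-based state transition can be illustrated with a minimal sketch: assuming the floorplan has been discretised into a graph of walkable nodes (the node names below are hypothetical), each particle simply moves to a random neighbouring node in the prediction step, so walls and obstacles are respected by construction.

```python
import random

# Hypothetical walkable-area graph: node -> list of adjacent nodes.
# In the paper the graph is derived from the building's floorplan.
FLOOR_GRAPH = {
    "corridor_a": ["corridor_b", "room_101"],
    "corridor_b": ["corridor_a", "stairs"],
    "room_101":   ["corridor_a"],
    "stairs":     ["corridor_b"],
}

def transition(particles):
    """Prediction step: random walk on the floorplan graph.

    Walls and obstacles are respected implicitly, because particles can
    only move along edges of the walkable-area graph (or stay in place).
    """
    return [random.choice(FLOOR_GRAPH[node] + [node]) for node in particles]

# Toy usage: 5 particles, all starting in corridor_a.
particles = ["corridor_a"] * 5
particles = transition(particles)
print(particles)
```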


International Conference on Indoor Positioning and Indoor Navigation | 2016

On Monte Carlo smoothing in multi sensor indoor localisation

Toni Fetzer; Frank Ebner; Frank Deinzer; Lukas Köping; Marcin Grzegorzek

Indoor localisation continues to be a topic of growing importance. Despite the advances made, several profound problems are still present, for example estimating an accurate position from a multimodal distribution or recovering from the influence of faulty measurements. Within this work, we solve such problems with the help of Monte Carlo smoothing methods, namely the forward-backward smoother and backward simulation. In contrast to normal filtering procedures like particle filtering, smoothing methods are able to incorporate future measurements instead of just using current and past data. This opens up many possibilities for further improving the position estimation. Both smoothing techniques are deployed as fixed-lag and fixed-interval smoothers, and a novel approach for incorporating them easily within a conventional localisation system is presented. All of this is evaluated on four floors of our faculty building. The results show that smoothing methods are a great tool for improving the overall localisation. Fixed-lag smoothing in particular provides great runtime support by reducing temporal errors and improving the overall estimate at affordable cost.
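A minimal sketch of the fixed-lag idea, not the authors' implementation: each particle keeps its recent trajectory, and the estimate for time t − L is only reported once the measurements up to time t have re-weighted those trajectories. The one-dimensional Gaussian motion and measurement models below are purely illustrative.

```python
import math
import random
from collections import deque

LAG = 3   # fixed lag L: the state at time t - L is estimated using data up to t
N = 200   # number of particles

def predict(x):
    """Toy 1-D random-walk motion model (illustrative only)."""
    return x + random.gauss(0.0, 0.5)

def likelihood(z, x):
    """Toy Gaussian measurement likelihood (illustrative only)."""
    return math.exp(-0.5 * (z - x) ** 2)

# Each particle stores its recent history, so that newly arrived measurements
# can re-weight (smooth) the state LAG steps in the past.
trajectories = [deque([0.0], maxlen=LAG + 1) for _ in range(N)]
measurements = [0.2, 0.7, 1.1, 1.6, 2.3, 2.9]

for t, z in enumerate(measurements):
    for traj in trajectories:
        traj.append(predict(traj[-1]))                  # filtering: prediction
    weights = [likelihood(z, traj[-1]) for traj in trajectories]
    total = sum(weights)

    if t >= LAG:
        # Fixed-lag smoothed estimate for time t - LAG: weighted mean of the
        # particles' ancestor states, with weights informed by data up to t.
        smoothed = sum(w * traj[0] for w, traj in zip(weights, trajectories)) / total
        print(f"smoothed x[{t - LAG}] = {smoothed:.2f}")

    # Resample whole trajectories so ancestor states stay consistent.
    picks = random.choices(trajectories, weights=[w / total for w in weights], k=N)
    trajectories = [deque(traj, maxlen=LAG + 1) for traj in picks]
```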


Sensors | 2018

Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors

Frédéric Li; Kimiaki Shirahama; Muhammad Adeel Nisar; Lukas Köping; Marcin Grzegorzek

Getting a good feature representation of the data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches, in particular deep-learning based ones, have been proposed to extract an effective feature representation by analysing large amounts of data. However, an objective interpretation of their performance faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we first propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the code and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies, carried out on the OPPORTUNITY and UniMiB-SHAR datasets, highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long Short-Term Memory (LSTM) layers for obtaining features that characterise both short- and long-term time dependencies in the data.
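The authors release their own code with the paper; the snippet below is only a rough sketch of the kind of hybrid convolutional-plus-LSTM architecture reported as most effective, written with the Keras API and with illustrative (not the paper's) layer sizes and window parameters.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative values only; the paper tunes its setup on OPPORTUNITY / UniMiB-SHAR.
WINDOW_LEN = 128    # samples per sliding window
N_CHANNELS = 113    # sensor channels (e.g. OPPORTUNITY)
N_CLASSES  = 18     # activity classes

# Hybrid architecture: convolutions capture short-term patterns within the
# window, the LSTM captures longer-term temporal dependencies across them.
model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(128),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```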


International Conference on Indoor Positioning and Indoor Navigation | 2014

Robust self-localization using Wi-Fi, step/turn-detection and recursive density estimation

Frank Ebner; Frank Deinzer; Lukas Köping; Marcin Grzegorzek

Indoor positioning systems are required for many new applications and, ideally, should provide high accuracy at zero cost for initial setup, maintenance and per user. Many approaches thus use an existing Wi-Fi infrastructure and smartphones for pedestrian location estimation. While the well-known Wi-Fi fingerprinting provides good localization accuracy down to one meter, the necessary time and costs are tremendous. Alternatives, like model-based signal strength estimation, are easy to set up but supply viable results only under line-of-sight conditions. To provide an inexpensive yet accurate solution, we combine Wi-Fi localization using a signal strength prediction model with step/turn-detection based on a smartphone's accelerometer/gyroscope, and incorporate a priori knowledge by utilizing the building's floorplan. Our technique uses a statistical model for both Wi-Fi and step/turn-detection to calculate the probability of the pedestrian residing at some arbitrary position, and leverages the building's floorplan to determine the likelihood of any possible movement between two positions. The latter probabilities are combined with the two densities from Wi-Fi and step/turn-detection by applying recursive density estimation, implemented using well-known particle filtering techniques. While the density estimated through Wi-Fi measurements provides a vague, absolute position with significant uncertainty, the step detector supplies a fine resolution at the downside of a cumulative error due to its estimation relative to previous steps. The fusion of these two sensor densities compensates for this error, and the floorplan further enhances the estimation result. We will show that our statistical approach provides a robust, long-term stable location estimation, much better than Wi-Fi on its own, while requiring only a few (empirical) parameters.
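A minimal sketch of one recursive update in such a filter: particles are propagated by the detected step and re-weighted with a Wi-Fi density. The log-distance path-loss model, its parameters and the access-point layout below are assumptions for illustration, not the paper's calibrated model, and the floorplan constraint is omitted.

```python
import math
import random

# Hypothetical signal-strength prediction (log-distance path-loss model);
# the paper's prediction model and parameters may differ.
def predicted_rssi(pos, ap_pos, tx_power=-40.0, exponent=2.5):
    d = max(math.dist(pos, ap_pos), 0.1)
    return tx_power - 10.0 * exponent * math.log10(d)

def wifi_likelihood(pos, scans, ap_positions, sigma=6.0):
    """Probability of the observed RSSI values given a candidate position."""
    p = 1.0
    for ap, rssi in scans.items():
        err = rssi - predicted_rssi(pos, ap_positions[ap])
        p *= math.exp(-0.5 * (err / sigma) ** 2)
    return p

def propagate(pos, heading, step_length=0.7, noise=0.15):
    """Relative motion from step/turn detection, with per-particle noise."""
    length = step_length + random.gauss(0.0, noise)
    theta = heading + random.gauss(0.0, 0.1)
    return (pos[0] + length * math.cos(theta), pos[1] + length * math.sin(theta))

# One recursive update: move particles by the detected step, then re-weight
# them with the Wi-Fi density (the floorplan check is omitted here).
ap_positions = {"ap1": (0.0, 0.0), "ap2": (20.0, 5.0)}
scans = {"ap1": -55.0, "ap2": -70.0}
particles = [(random.uniform(0, 20), random.uniform(0, 10)) for _ in range(500)]

particles = [propagate(p, heading=0.3) for p in particles]
weights = [wifi_likelihood(p, scans, ap_positions) for p in particles]
estimate = (sum(w * p[0] for w, p in zip(weights, particles)) / sum(weights),
            sum(w * p[1] for w, p in zip(weights, particles)) / sum(weights))
print("estimated position:", estimate)
```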


Ubiquitous Computing | 2016

Codebook approach for sensor-based human activity recognition

Kimiaki Shirahama; Lukas Köping; Marcin Grzegorzek

One crucial problem in sensor-based human activity recognition is how to model features that precisely represent the characteristics of a sequence of sensor values. For this, we study a codebook approach that represents the sequence as a distribution over characteristic subsequences. Extensive experiments on different recognition tasks for physical, mental and eye-based activities validate the effectiveness, generality and usability of the codebook approach, where only a few intuitive parameters need to be tuned.
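A minimal sketch of the codebook idea, with illustrative window and codebook sizes: subsequences are extracted with a sliding window, clustered into codewords, and a sequence is then encoded as the normalised histogram of its nearest codewords.

```python
import numpy as np
from sklearn.cluster import KMeans

WIN, STRIDE, CODEBOOK_SIZE = 16, 4, 32   # illustrative parameters

def subsequences(seq, win=WIN, stride=STRIDE):
    """Slide a window over a 1-D sensor sequence."""
    return np.array([seq[i:i + win] for i in range(0, len(seq) - win + 1, stride)])

# 1) Codebook construction: cluster subsequences from training sequences.
rng = np.random.default_rng(0)
train_seqs = [np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
              for _ in range(10)]                      # toy training data
all_subs = np.vstack([subsequences(s) for s in train_seqs])
codebook = KMeans(n_clusters=CODEBOOK_SIZE, n_init=10, random_state=0).fit(all_subs)

# 2) Encoding: represent a sequence as the distribution of its subsequences
#    over the learned codewords (a normalised histogram feature vector).
def encode(seq):
    words = codebook.predict(subsequences(seq))
    hist = np.bincount(words, minlength=CODEBOOK_SIZE).astype(float)
    return hist / hist.sum()

feature = encode(train_seqs[0])
print(feature.shape)      # 32-dimensional histogram that sums to 1
```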


Computers in Biology and Medicine | 2018

A general framework for sensor-based human activity recognition

Lukas Köping; Kimiaki Shirahama; Marcin Grzegorzek

Today's wearable devices like smartphones, smartwatches and intelligent glasses collect a large amount of data from their built-in sensors, such as accelerometers and gyroscopes. These data can be used to identify a person's current activity, which in turn can be utilised for applications in the field of personal fitness assistants or elderly care. However, developing such systems is subject to certain restrictions: (i) since more and more new sensors will be available in the future, activity recognition systems should be able to integrate these new sensors with a small amount of manual effort, and (ii) such systems should avoid high acquisition costs for computational power. We propose a general framework that achieves an effective data integration based on the following two characteristics: firstly, a smartphone is used to gather and temporarily store data from different sensors and to transfer these data to a central server. Thus, various sensors can be integrated into the system as long as they have programming interfaces to communicate with the smartphone. The second characteristic is a codebook-based feature learning approach that can encode data from each sensor into an effective feature vector by tuning only a few intuitive parameters. In the experiments, the framework is realised as a real-time activity recognition system that integrates eight sensors from a smartphone, smartwatch and smartglasses, and its effectiveness is validated from different perspectives such as accuracies, sensor combinations and sampling rates.
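One way the integration step could look in code, as a hedged sketch: each sensor stream is encoded independently into a fixed-length feature vector and the per-sensor vectors are concatenated, so adding a new sensor only means adding one encoder. The sensor names are hypothetical, and simple statistics stand in for the paper's codebook-based encoding to keep the example short.

```python
import numpy as np

# Hypothetical sensor set gathered via the smartphone; the real system
# integrates eight sensors from a smartphone, smartwatch and smartglasses.
SENSORS = ["phone_acc", "phone_gyro", "watch_acc", "glasses_gyro"]

def encode_sensor(window):
    """Stand-in per-sensor encoder returning a fixed-length feature vector.

    In the paper each sensor stream is encoded with codebook-based feature
    learning; mean and standard deviation are used here only for brevity.
    """
    window = np.asarray(window)
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fuse(windows_by_sensor):
    """Integrate an arbitrary sensor set by concatenating per-sensor features.

    A new sensor only needs its own encoder entry; the rest of the pipeline
    (classifier, central server) stays unchanged.
    """
    return np.concatenate([encode_sensor(windows_by_sensor[s]) for s in SENSORS])

# Toy windows: 100 samples x 3 axes per sensor.
rng = np.random.default_rng(1)
windows = {s: rng.standard_normal((100, 3)) for s in SENSORS}
feature_vector = fuse(windows)
print(feature_vector.shape)   # (24,) = 4 sensors x 6 features each
```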


International Conference on Pattern Recognition Applications and Methods | 2017

Real-Time Gesture Recognition using a Particle Filtering Approach.

Frédéric Li; Lukas Köping; Sebastian Schmitz; Marcin Grzegorzek

In this paper we present an approach for real-time gesture recognition using exclusively 1D sensor data, based on the use of Particle Filters and Dynamic Time Warping Barycenter Averaging (DBA). In a training phase, sensor records of users performing different gestures are acquired. For each gesture, the associated sensor records are then processed by the DBA method to produce one average record called the template gesture. Once trained, our system classifies a gesture performed in real time by computing, using particle filters, an estimation of its probability of belonging to each class, based on the comparison of the sensor values acquired in real time to those of the template gestures. Our method is tested on the accelerometer data of the Multimodal Human Activities Dataset (MHAD) using Leave-One-Out cross-validation, and compared with state-of-the-art approaches (SVM, Neural Networks) adapted for real-time gesture recognition. It achieves an 85.30% average accuracy and outperforms the others, without the need to define hyper-parameters whose choice could be restrained by real-time implementation considerations.
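A minimal sketch of the particle-filter classification step under stated assumptions: each particle hypothesises a gesture class and a phase inside that class's template, is weighted by how well the current sensor value matches the template at that phase, and the class probability is the share of particles per class. The toy 1-D templates below are hand-written placeholders; in the paper they are produced by DBA.

```python
import math
import random

# Toy 1-D template gestures; in the paper these are produced by DTW
# Barycenter Averaging (DBA) over the training records of each class.
TEMPLATES = {
    "swipe":  [0.0, 0.5, 1.0, 0.5, 0.0],
    "circle": [0.0, 1.0, 0.0, -1.0, 0.0],
}
N_PARTICLES = 300

# Each particle hypothesises a gesture class and a phase within that template.
particles = [{"cls": random.choice(list(TEMPLATES)), "phase": 0}
             for _ in range(N_PARTICLES)]

def update(particles, observation, sigma=0.3):
    """One real-time update: advance phases, weight by similarity, resample."""
    weights = []
    for p in particles:
        template = TEMPLATES[p["cls"]]
        p["phase"] = min(p["phase"] + 1, len(template) - 1)
        err = observation - template[p["phase"]]
        weights.append(math.exp(-0.5 * (err / sigma) ** 2))
    total = sum(weights)
    particles = [dict(p) for p in random.choices(
        particles, weights=[w / total for w in weights], k=len(particles))]
    # Class probability = share of particles currently assigned to each class.
    probs = {c: sum(p["cls"] == c for p in particles) / len(particles)
             for c in TEMPLATES}
    return particles, probs

# Feed a stream resembling the "swipe" template.
for obs in [0.1, 0.6, 0.9, 0.4, 0.0]:
    particles, probs = update(particles, obs)
print(probs)
```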


International Conference on Artificial Intelligence and Soft Computing | 2017

Classification of Physiological Data for Emotion Recognition

Philip Gouverneur; Joanna Jaworek-Korjakowska; Lukas Köping; Kimiaki Shirahama; Paweł Kłeczek; Marcin Grzegorzek

Emotion recognition is important not only for computer science or sports applications, but also for enabling elderly and ill people to live independently in their own homes for as long as possible. In this paper the Empatica E4 wristband is used to collect the data and assess the stress level of the user. We describe an algorithm for the classification of physiological data for emotion recognition. The algorithm has been divided into the following steps: data acquisition, signal preprocessing, feature extraction, and classification. The data, acquired during various daily activities, consist of more than 3 h of wristband signals. Through various stress tests we achieve a maximum accuracy of 71% for a stressed/relaxed classification. These results lead to the conclusion that the Empatica E4 wristband can be used as a device for emotion recognition.
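A hedged sketch of the described pipeline stages (feature extraction on sliding windows followed by stressed/relaxed classification), with synthetic stand-in signals, illustrative window sizes and an SVM classifier; none of these choices are taken from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

WIN = 240   # illustrative window length in samples

def window_features(signal, win=WIN):
    """Feature extraction: simple statistics per non-overlapping window."""
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    return np.array([[w.mean(), w.std(), w.min(), w.max()] for w in windows])

# Toy stand-in for wristband recordings; real data would come from the
# Empatica E4 during daily activities, labelled stressed or relaxed.
rng = np.random.default_rng(0)
relaxed  = rng.normal(0.3, 0.05, 20 * WIN)
stressed = rng.normal(0.6, 0.15, 20 * WIN)

X = np.vstack([window_features(relaxed), window_features(stressed)])
y = np.array([0] * 20 + [1] * 20)      # 0 = relaxed, 1 = stressed

# Classification: standardised features fed to an SVM, cross-validated.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```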


International Conference on Indoor Positioning and Indoor Navigation | 2014

Statistical indoor localization using fusion of depth-images and step detection

Toni Fetzer; Frank Deinzer; Lukas Köping; Marcin Grzegorzek

This paper presents a method for the indoor localization of humans. Our new approach combines an image-based position estimation with a given step and turn detection. Estimating the position of an object is not only a question of accuracy, but also one of performance and time required. Our approach uses a fusion of an uncalibrated depth sensor with a smartphone's accelerometer and gyroscope. Neither component relies on time-consuming practices like fingerprinting or calibration techniques. This allows for real-time tracking using well-known methods of recursive density propagation and particle filtering. Unlike other image-based methods in autonomous robotics, the depth sensors are mounted at a fixed position. We therefore show how our new statistical sensor model covers the four main conditions of an image-based approach: a person is either inside or outside the field of view and is either detected or not detected by the depth sensor. This also makes it possible to estimate the positions of multiple persons at each point in time. Finally, the experimental results show how the integration of an image-based approach increases the accuracy of the localization estimate and counteracts the growing error of the given step and turn detection.
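To make the four-case sensor model concrete, here is a minimal sketch of how such a depth-camera likelihood could be written for a particle filter; the field-of-view geometry and the four probabilities are illustrative placeholders, not the values derived in the paper.

```python
import math

# Illustrative constants for the four cases of the depth-camera sensor model;
# the paper derives these probabilities statistically.
P_DETECT_IN_FOV    = 0.9    # person inside the field of view and detected
P_MISS_IN_FOV      = 0.1    # inside the field of view but not detected
P_FALSE_DETECT_OUT = 0.05   # outside the field of view but a detection occurred
P_NO_DETECT_OUT    = 0.95   # outside the field of view, nothing detected

def in_field_of_view(pos, cam_pos=(0.0, 0.0), cam_dir=0.0,
                     fov=math.radians(60), max_range=6.0):
    """Geometric check whether a particle position is visible to the fixed sensor."""
    dx, dy = pos[0] - cam_pos[0], pos[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    angle = abs(math.atan2(dy, dx) - cam_dir)
    return dist <= max_range and angle <= fov / 2

def depth_likelihood(pos, detection):
    """Weight of a particle given whether the depth sensor reported a person.

    `detection` is the measured position or None. The distance to a measured
    detection could further refine the in-FOV case; omitted for brevity.
    """
    if in_field_of_view(pos):
        return P_DETECT_IN_FOV if detection is not None else P_MISS_IN_FOV
    return P_FALSE_DETECT_OUT if detection is not None else P_NO_DETECT_OUT

print(depth_likelihood((2.0, 0.5), detection=(2.1, 0.4)))  # inside FOV, detected
print(depth_likelihood((8.0, 0.0), detection=None))        # outside FOV, no detection
```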


Annual of Navigation | 2012

Indoor Navigation Using Particle Filter and Sensor Fusion

Lukas Köping; Thomas Mühsam; Christian Ofenberg; Bernhard Czech; Michael Bernard; Jens Schmer; Frank Deinzer

In this paper we present an indoor localization system based on a particle filter and multiple sensor data, such as acceleration, angular velocity and compass readings. With this approach we tackle the problem of documentation on large construction sites during the construction phase. Owing to the circumstances of such an environment, we cannot rely on any data from GPS, Wi-Fi or RFID. Moreover, this work serves as a first step towards an all-in-one navigation system for mobile devices. Our experimental results show that we can achieve high accuracy in position estimation.
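A hedged sketch of the kind of sensor processing that can feed such a particle filter when GPS and Wi-Fi are unavailable: steps are detected from the acceleration magnitude and the heading is obtained by fusing the gyroscope with the compass; the thresholds and filter gain are illustrative assumptions.

```python
import math

# Illustrative thresholds and gains; a deployed system would tune these.
STEP_THRESHOLD = 11.0   # acceleration magnitude (m/s^2) marking a step peak
ALPHA = 0.98            # complementary-filter gain: trust the gyro short-term

def detect_steps(acc_samples):
    """Count steps as upward crossings of an acceleration-magnitude threshold."""
    steps, above = 0, False
    for ax, ay, az in acc_samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > STEP_THRESHOLD and not above:
            steps += 1
        above = mag > STEP_THRESHOLD
    return steps

def fuse_heading(heading, gyro_rate, compass_heading, dt):
    """Complementary filter: integrate angular velocity, correct drift with the compass."""
    integrated = heading + gyro_rate * dt
    return ALPHA * integrated + (1.0 - ALPHA) * compass_heading

# Toy data: three acceleration samples containing one step peak, plus one
# gyro/compass fusion update; in the real system these feed the particle filter.
acc = [(0.0, 0.0, 9.8), (1.0, 0.5, 12.5), (0.0, 0.0, 9.7)]
print("steps detected:", detect_steps(acc))
print("fused heading:", fuse_heading(0.50, gyro_rate=0.02, compass_heading=0.60, dt=0.1))
```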

Collaboration


Dive into Lukas Köping's collaborations.

Top Co-Authors


Grzegorz J. Nalepa

AGH University of Science and Technology


Joanna Jaworek-Korjakowska

AGH University of Science and Technology


Mateusz Slazynski

AGH University of Science and Technology


Paweł Kłeczek

AGH University of Science and Technology


Szymon Bobek

AGH University of Science and Technology
