Publications


Featured research published by Thomas Kopinski.


International Conference on Intelligent Transportation Systems | 2014

A real-time applicable 3D gesture recognition system for automobile HMI

Thomas Kopinski; Stefan Geisler; Louis-Charles Caron; Alexander Gepperth; Uwe Handmann

We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real time and describe the setup of the system, the utilised methods, as well as possible application scenarios.
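
As a rough illustration of the two-sensor fusion idea described above, the following sketch trains one MLP per sensor on stand-in descriptors and averages their class confidences. All data, dimensions and hyperparameters are hypothetical; the paper's actual descriptors and network setup are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CLASSES, DESC_DIM = 10, 64            # ten gestures, assumed descriptor size

# Stand-in descriptors for the two ToF sensors; a real system would compute
# point-cloud features per frame.
X_a = rng.normal(size=(500, DESC_DIM))
X_b = rng.normal(size=(500, DESC_DIM))
y = rng.integers(0, N_CLASSES, size=500)

mlp_a = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300).fit(X_a, y)
mlp_b = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300).fit(X_b, y)

def fused_prediction(desc_a, desc_b):
    """Average the two networks' class confidences and take the argmax."""
    p = 0.5 * (mlp_a.predict_proba(desc_a) + mlp_b.predict_proba(desc_b))
    return p.argmax(axis=1)

print(fused_prediction(X_a[:3], X_b[:3]))
```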


International Conference on Artificial Neural Networks | 2014

Neural Network Based Data Fusion for Hand Pose Recognition with Multiple ToF Sensors

Thomas Kopinski; Alexander Gepperth; Stefan Geisler; Uwe Handmann

We present a study on 3D-based hand pose recognition using a new generation of low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. We investigate the performance of different 3D descriptors, as well as the fusion of two ToF sensor streams. By basing a data fusion strategy on the fact that multilayer perceptrons can produce normalized confidences individually for each class, and by designing information-theoretic online measures for assessing the confidence of decisions, we show that appropriately chosen fusion strategies can improve overall performance to a very satisfactory level. Real-time capability is retained, as the 3D descriptors, the fusion strategy and the online confidence measures are all computationally efficient.
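
One plausible reading of the information-theoretic confidence measure is an entropy-based weighting of each sensor's normalized confidences. The sketch below is an assumption-laden illustration, not the paper's exact formula.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a normalized confidence vector."""
    return -np.sum(p * np.log(p + eps))

def entropy_weighted_fusion(p_a, p_b, n_classes=10):
    """Weight each sensor's confidences by how far its entropy is from the
    maximum (uniform) entropy, then renormalize. Purely illustrative."""
    h_max = np.log(n_classes)
    w_a = 1.0 - entropy(p_a) / h_max
    w_b = 1.0 - entropy(p_b) / h_max
    fused = w_a * p_a + w_b * p_b
    return fused / fused.sum()

p_a = np.array([0.70, 0.10, 0.05] + [0.15 / 7] * 7)   # confident sensor
p_b = np.full(10, 0.1)                                # uninformative sensor
print(entropy_weighted_fusion(p_a, p_b).round(3))     # stays close to p_a
```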


International Symposium on Neural Networks | 2015

A pragmatic approach to multi-class classification

Thomas Kopinski; Stephane Magand; Uwe Handmann; Alexander Gepperth

We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons) and makes no explicit assumptions about the probabilistic structure of the problem, as is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to the regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as the classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task.
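
A minimal sketch of such a cascade, assuming two stages and synthetic data: each stage is trained on the raw input concatenated with the previous stage's class confidences.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 20))          # synthetic input features
y = rng.integers(0, 10, size=600)       # ten classes

def train_cascade(X, y, n_stages=2):
    """Each stage is trained on the raw input concatenated with the previous
    stage's class confidences, so later stages can exploit correlations
    between predicted classes."""
    stages, feats = [], X
    for _ in range(n_stages):
        clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300).fit(feats, y)
        stages.append(clf)
        feats = np.hstack([X, clf.predict_proba(feats)])
    return stages

def cascade_predict(stages, X):
    feats = X
    for clf in stages[:-1]:
        feats = np.hstack([X, clf.predict_proba(feats)])
    return stages[-1].predict(feats)

print(cascade_predict(train_cascade(X, y), X[:5]))
```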


IEEE Intelligent Vehicles Symposium | 2015

A light-weight real-time applicable hand gesture recognition system for automotive applications

Thomas Kopinski; Stephane Magand; Alexander Gepperth; Uwe Handmann

We present a novel approach for improved hand gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment. As the sensor's lateral resolution is comparatively low, we employ a learning approach comprising multiple processing steps, including PCA-based cropping, the computation of robust point cloud descriptors, and the training of a multilayer perceptron (MLP) on a large database of samples. A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data from previous classification steps. Overall results are very satisfactory when evaluated on a large benchmark set of ten different hand poses, especially when it comes to generalization to previously unknown persons.
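
The temporal fusion step could, for instance, be realized as exponential smoothing of per-frame confidences; the paper's exact scheme may differ. A toy illustration:

```python
import numpy as np

def temporal_fusion(frame_confidences, alpha=0.7):
    """Exponentially smooth per-frame class confidences so that a single
    noisy frame cannot flip the decision. Illustrative only."""
    fused = np.asarray(frame_confidences[0], dtype=float)
    decisions = [int(fused.argmax())]
    for p in frame_confidences[1:]:
        fused = alpha * fused + (1.0 - alpha) * np.asarray(p, dtype=float)
        decisions.append(int(fused.argmax()))
    return decisions

# Three frames of 10-class confidences: class 2 dominates, frame 2 is an outlier.
frames = [np.eye(10)[2] * 0.6 + 0.04,
          np.eye(10)[7] * 0.5 + 0.05,
          np.eye(10)[2] * 0.6 + 0.04]
print(temporal_fusion(frames))   # -> [2, 2, 2]: the outlier is suppressed
```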


International Conference on Computing, Networking and Communications (ICNC) | 2016

Touchless interaction for future mobile applications

Thomas Kopinski; Uwe Handmann

We present a light-weight, real-time applicable 3D gesture recognition system on mobile devices for improved human-machine interaction. We utilize time-of-flight data from a single sensor and implement the whole gesture recognition pipeline on two different devices, outlining the potential of integrating these sensors into mobile devices. The main components are responsible for cropping the data to the essentials, calculating meaningful features, training and classifying via neural networks, and realizing a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set at frame rates reaching 20 Hz, more than sufficient for real-time applications.
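
A skeleton of such a pipeline with throughput measurement might look as follows; the cropping, feature and classifier functions are placeholders, not the system's actual components.

```python
import time
import numpy as np

def crop(cloud):
    """Keep points inside an assumed interaction volume (0.1-0.6 m depth)."""
    mask = (cloud[:, 2] > 0.1) & (cloud[:, 2] < 0.6)
    return cloud[mask]

def features(cloud):
    """Stand-in descriptor: simple per-axis point statistics."""
    return np.concatenate([cloud.mean(axis=0), cloud.std(axis=0)])

def classify(desc):
    """Placeholder for the trained MLP; returns a fake class label."""
    return int(abs(desc).sum() * 1e3) % 10

rng = np.random.default_rng(0)
t0, n_frames = time.perf_counter(), 200
for _ in range(n_frames):
    cloud = rng.uniform(0.0, 1.0, size=(3000, 3))   # fake ToF frame
    classify(features(crop(cloud)))
print(f"{n_frames / (time.perf_counter() - t0):.1f} Hz")
```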


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014

Multimodal space representation driven by self-evaluation of predictability

Mathieu Lefort; Thomas Kopinski; Alexander Gepperth

PROPRE is a generic and modular neural learning paradigm that autonomously extracts meaningful concepts from multimodal data flows, driven by predictability across modalities, in an unsupervised, incremental and online way. For that purpose, PROPRE combines projection and prediction. First, each data flow is topologically projected with a self-organizing map, largely inspired by the Kohonen model. Second, each projection is predicted from the activities of the other maps by means of linear regression. The main originality of PROPRE is the use of a simple and generic predictability measure that compares predicted and real activities for each modal stream. This measure drives the learning of the corresponding projection so as to favor, at the system level, the mapping of stimuli that are predictable across modalities (i.e., whose predictability measure exceeds some threshold). The predictability measure thus acts as a self-evaluation module that biases the representations extracted by the system so as to improve their correlations across modalities. We have already shown that this modulation mechanism is able to bootstrap representation extraction from previously learned representations with artificial multimodal data related to basic robotic behaviors [1], and that it improves the performance of the system for classification of visual data in a supervised learning context [2]. In this article, we improve the self-evaluation module of PROPRE by introducing a sliding threshold and apply it to the unsupervised classification of gestures captured by two time-of-flight (ToF) cameras. In this context, we illustrate that the modulation mechanism is still useful, although less efficient than purely supervised learning.
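
A compact numpy caricature of the PROPRE loop, under assumed map sizes and a delta-rule regression update; the real model uses full Kohonen-style self-organizing maps and a more careful predictability measure than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_A, DIM_B, UNITS = 8, 8, 25          # two modal streams, 5x5 maps (assumed)
W_a = rng.normal(size=(UNITS, DIM_A))   # map prototypes, modality A
W_b = rng.normal(size=(UNITS, DIM_B))   # map prototypes, modality B
P = np.zeros((UNITS, UNITS))            # linear prediction: A-activity -> B-activity
threshold, lr = 0.0, 0.05

def activity(W, x):
    """Normalized Gaussian activity of each map unit around its prototype."""
    d = np.linalg.norm(W - x, axis=1)
    a = np.exp(-d ** 2)
    return a / a.sum()

for _ in range(1000):
    x_a, x_b = rng.normal(size=DIM_A), rng.normal(size=DIM_B)
    a, b = activity(W_a, x_a), activity(W_b, x_b)
    b_pred = P @ a
    predictability = -np.linalg.norm(b_pred - b)          # higher = more predictable
    P += lr * np.outer(b - b_pred, a)                     # delta-rule regression step
    threshold = 0.99 * threshold + 0.01 * predictability  # sliding threshold
    if predictability > threshold:                        # gate projection learning
        bmu = a.argmax()                                  # best-matching unit, map A
        W_a[bmu] += lr * (x_a - W_a[bmu])
```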


International Conference on Intelligent Transportation Systems | 2015

A Real-Time Applicable Dynamic Hand Gesture Recognition Framework

Thomas Kopinski; Alexander Gepperth; Uwe Handmann

We present a system for efficient dynamic hand gesture recognition based on a single time-of-flight sensor. As opposed to other approaches, we rely on depth data alone to interpret mid-air hand movements. We set up a large database to train multilayer perceptrons (MLPs), which are subsequently used to classify the static hand poses that define the targeted dynamic gestures. In order to remain robust against noise and to compensate for the low sensor resolution, PCA is used for data cropping, and highly descriptive features, obtainable in real time, are presented. Our simple yet efficient definition of a dynamic hand gesture shows how strong results are achievable in an automotive environment, allowing interesting and sophisticated applications to be realized.
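
If a dynamic gesture is defined as an ordered sequence of static poses, detection can reduce to a subsequence match over the classified pose stream. A hypothetical illustration (all pose IDs and gesture names invented):

```python
from itertools import groupby

# Hypothetical templates: a dynamic gesture is an ordered sequence of
# static pose IDs.
GESTURES = {"swipe_right": [0, 3, 5], "grab": [2, 4]}

def collapse(stream):
    """Collapse runs of repeated pose labels in the classified stream."""
    return [label for label, _ in groupby(stream)]

def detect(stream):
    """Report every gesture whose template occurs as a subsequence."""
    hits = []
    for name, template in GESTURES.items():
        it = iter(collapse(stream))
        if all(pose in it for pose in template):   # consumes the iterator
            hits.append(name)
    return hits

print(detect([0, 0, 3, 3, 3, 1, 5, 5]))   # -> ['swipe_right']
```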


International Symposium on Computational Intelligence and Informatics | 2014

Time-of-flight based multi-sensor fusion strategies for hand gesture recognition

Thomas Kopinski; Darius Malysiak; Alexander Gepperth; Uwe Handmann

Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern time-of-flight (ToF) sensors. We demonstrate that it is possible to achieve satisfactory results in spite of low resolution, high sensor noise and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple ToF sensors which observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the 3D descriptors, the fusion strategy and the online confidence measures are all computationally efficient.
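
To illustrate combining the two model families, the sketch below averages normalized confidences from an MLP and an SVM (with Platt scaling) trained on stand-in descriptors; it is not the paper's actual fusion strategy.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 32))          # stand-in point-cloud descriptors
y = rng.integers(0, 10, size=400)       # ten hand-pose classes

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
svm = SVC(probability=True).fit(X, y)   # Platt scaling yields confidences

def fused(x):
    """Average normalized confidences from both model families."""
    p = 0.5 * (mlp.predict_proba(x) + svm.predict_proba(x))
    return p.argmax(axis=1)

print(fused(X[:5]))
```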


International Conference on Artificial Neural Networks | 2018

An Energy-Based Convolutional SOM Model with Self-adaptation Capabilities

Alexander Gepperth; Ayanava Sarkar; Thomas Kopinski

We present a new self-organized neural model that we term ReST (Resilient Self-organizing Tissue). ReST can be run as a convolutional neural network (CNN) and possesses a C∞ energy function as well as a probabilistic interpretation of neural activities, which arises from the constraint of a log-normal activity distribution over time that is enforced during learning. We discuss the advantages of a C∞ energy function and present experiments demonstrating the self-organization and self-adaptation capabilities of ReST. In addition, we provide a performance benchmark for the publicly available TensorFlow implementation.
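
For orientation, a classical Kohonen SOM update step is sketched below; ReST differs in that it derives its updates from a smooth energy function and adds a convolutional formulation, neither of which this sketch covers.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, DIM = 8, 16
W = rng.normal(size=(GRID, GRID, DIM))                  # prototype per grid unit
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)  # unit coordinates

def som_step(W, x, lr=0.1, sigma=1.5):
    """One classical SOM update: find the best-matching unit, then pull
    its grid neighborhood toward the input."""
    d = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)
    g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
    return W + lr * g[..., None] * (x - W)

for _ in range(500):
    W = som_step(W, rng.normal(size=DIM))
```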


International Symposium on Neural Networks | 2017

A large-scale multi-pose 3D-RGB object database

Fabian Sachara; Finn Handmann; Nico Cremer; Thomas Kopinski; Alexander Gepperth; Uwe Handmann

We present a new RGB-D database for multi-pose object recognition tasks. With the help of a multi-axis rotation framework, we are capable of capturing depth and color data of arbitrary small objects from virtually any viewpoint. In addition, recording is performed in a nearly lossless fashion, avoiding the typical bleeding artifacts present in related reference databases. This contribution presents the main advantages of our setup, contrasts it against other reference databases, outlines possible use cases and application scenarios of our data set, and is complemented by experiments with standard machine learning techniques used in, e.g., object recognition tasks within the robotics domain. The experiments demonstrate the validity of our database, as they corroborate that viewpoint variance is indeed an important factor to take into account for object detection, one which, from our perspective, is sometimes not considered at the required level. Detection accuracy is high if training covers data from as many viewpoints as possible.
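
One way to exploit such a viewpoint-indexed database is to hold out entire viewpoints rather than random frames when splitting, which directly tests the viewpoint generalization argued for above. A hypothetical index sketch (all names and angle grids invented):

```python
import numpy as np
from itertools import product

# Hypothetical index of a multi-pose database: every object is recorded from
# a grid of azimuth/elevation angles.
objects = [f"obj_{i}" for i in range(5)]
azimuths = list(range(0, 360, 30))
elevations = list(range(0, 90, 30))
index = list(product(objects, azimuths, elevations))

# Hold out entire azimuths so the test set contains only viewpoints
# never seen during training.
rng = np.random.default_rng(0)
held_out = set(rng.choice(azimuths, size=3, replace=False).tolist())
train = [s for s in index if s[1] not in held_out]
test = [s for s in index if s[1] in held_out]
print(len(train), len(test), sorted(held_out))
```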

Collaboration


Dive into Thomas Kopinski's collaborations.

Top Co-Authors

Fabian Sachara (Université Paris-Saclay)

Ayanava Sarkar (Birla Institute of Technology and Science)

Alexander Gepperth (Superior National School of Advanced Techniques)
