Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexander Gepperth is active.

Publication


Featured research published by Alexander Gepperth.


IEEE International Conference on Technologies for Practical Robot Applications | 2012

RGBD object recognition and visual texture classification for indoor semantic mapping

David Filliat; Emmanuel Battesti; Stéphane Bazeille; Guillaume Duceux; Alexander Gepperth; Lotfi Harrath; Islem Jebari; Rafael Pereira; Adriana Tapus; Cedric Meyer; Sio-Hoi Ieng; Ryad Benosman; Eddy Cizeron; Jean-Charles Mamanna; Benoit Pothier

We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans. This information includes the rooms, their connectivity, the objects they contain, and the material of the walls and ground. The robot was developed to participate in CAROTTE, a French exploration and mapping contest whose goal is to produce easily interpretable maps of an unknown environment. In particular, we present our object detection approach, based on a color+depth camera, which fuses 3D, color and texture information through a neural network for robust object recognition. We also present a material recognition approach based on machine learning applied to vision. We demonstrate the performance of these modules on image databases and provide examples of the full system working in real environments.
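The fusion step described above can be sketched as a simple concatenation of per-modality descriptors into one classifier input. The feature names and dimensions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fuse_features(color_hist, depth_stats, texture_desc):
    """Concatenate per-modality descriptors into a single input
    vector for a neural-network object classifier (illustrative)."""
    v = np.concatenate([color_hist, depth_stats, texture_desc])
    # L2-normalise so no single modality dominates the input scale
    return v / (np.linalg.norm(v) + 1e-12)

color_hist = np.random.rand(64)    # e.g. a color histogram (assumed size)
depth_stats = np.random.rand(16)   # e.g. 3D shape statistics (assumed size)
texture_desc = np.random.rand(32)  # e.g. texture filter responses (assumed size)
x = fuse_features(color_hist, depth_stats, texture_desc)
```

The normalisation choice is one common way to balance modalities of different scales before training.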


IEEE Intelligent Vehicles Symposium | 2008

Towards a human-like vision system for Driver Assistance

Jannik Fritsch; Thomas Michalke; Alexander Gepperth; Sven Bone; Falko Waibel; Marcus Kleinehagenbrock; Jens Gayko; Christian Goerick

Several advanced driver assistance systems realizing elementary perception and analysis tasks have been introduced to market in recent years. For example, collision mitigation brake systems detect the distance and relative velocity of vehicles in front to assess the risk of a rear-end collision in a clearly defined following situation. In order to go beyond such elementary analysis tasks, today's research is focusing more and more on powerful perception systems for driver assistance. We believe computer vision will play a central role in achieving a full understanding of generic traffic situations. Besides individual processing algorithms, general vision architectures enabling integrated and more flexible processing are needed. Here we present the first instantiation of a vision architecture for driver assistance systems inspired by the human visual system and based on task-dependent perception. The core element of our system is a state-of-the-art attention system integrating bottom-up and top-down visual saliency. Combining this task-dependent, tunable visual saliency with object recognition and tracking enables, for instance, warnings according to the context of the scene. We demonstrate the performance of our approach in a construction site setup, where a traffic jam ending within the site is a dangerous situation that the system has to identify in order to warn the driver.
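The combination of bottom-up and top-down saliency can be illustrated as a weighted blend of two maps. The blending weight and the normalisation below are assumptions for illustration, not the paper's actual attention model:

```python
import numpy as np

def combined_saliency(bottom_up, top_down, w_td=0.5):
    """Blend a bottom-up saliency map with a task-dependent top-down
    map; w_td tunes how strongly the current task biases attention
    (the weighting scheme is illustrative, not from the paper)."""
    s = (1.0 - w_td) * bottom_up + w_td * top_down
    return s / s.max()  # normalise peak saliency to 1

bu = np.random.rand(48, 64)          # stand-in bottom-up saliency map
td = np.zeros((48, 64))
td[:, 32:] = 1.0                     # task bias toward the right half
s = combined_saliency(bu, td)
```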


IEEE Intelligent Vehicles Symposium | 2011

Behavior prediction at multiple time-scales in inner-city scenarios

Michaël Garcia Ortiz; Jannik Fritsch; Franz Kummert; Alexander Gepperth

We present a flexible and scalable architecture that can learn to predict the future behavior of a vehicle in inner-city traffic. While behavior prediction studies have mainly focused on lane change events on highways, we apply our approach to a simple inner-city scenario: approaching a traffic light. Our system employs dynamic information about the current ego-vehicle state as well as static information about the scene, in this case the position and state of nearby traffic lights.
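As a rough illustration, the dynamic and static inputs could be assembled into one feature vector like this; the field names and one-hot light encoding are assumptions, not the paper's actual representation:

```python
def build_feature_vector(ego_speed, ego_accel, dist_to_light, light_state):
    """Assemble dynamic ego-vehicle information and static scene
    information (traffic-light distance and state) into one input
    vector for a behavior predictor (illustrative encoding)."""
    states = ["red", "yellow", "green"]
    one_hot = [1.0 if light_state == s else 0.0 for s in states]
    return [ego_speed, ego_accel, dist_to_light] + one_hot

vec = build_feature_vector(8.3, -0.5, 40.0, "red")
```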


International Conference on Intelligent Transportation Systems | 2010

System approach for multi-purpose representations of traffic scene elements

Jens Schmuedderich; Nils Einecke; Stephan Hasler; Alexander Gepperth; Bram Bolder; Robert Kastner; Mathias Franzius; Sven Rebhan; Benjamin Dittes; Heiko Wersing; Julian Eggert; Jannik Fritsch; Christian Goerick

A major step towards intelligent vehicles lies in the acquisition of an environmental representation of sufficient generality to serve as the basis for a multitude of different assistance-relevant tasks. This acquisition process must reliably cope with the variety of environmental changes inherent to traffic environments. As a step towards this goal, we present our most recent integrated system performing object detection in challenging environments (e.g., inner-city or heavy rain). The system integrates unspecific and vehicle-specific methods for the detection of traffic scene elements, thus creating multiple object hypotheses. Each detection method is modulated by optimized models of typical scene context features which are used to enhance and suppress hypotheses. A multi-object tracking and fusion process is applied to make the produced hypotheses spatially and temporally coherent. In extensive evaluations we show that the presented system successfully analyzes scene elements under diverse conditions, including challenging weather and changing scenarios. We demonstrate that the used generic hypothesis representations allow successful application to a variety of tasks including object detection, movement estimation, and risk assessment by time-to-contact evaluation.
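The time-to-contact evaluation mentioned for risk assessment reduces, in its simplest form, to distance divided by closing speed. This sketch deliberately omits the paper's tracking and fusion machinery:

```python
def time_to_contact(distance_m, closing_speed_mps):
    """Simple time-to-contact estimate: TTC = distance / closing speed.
    Returns None when the tracked object is not closing in
    (a simplification, not the paper's full risk-assessment pipeline)."""
    if closing_speed_mps <= 0.0:
        return None
    return distance_m / closing_speed_mps

ttc = time_to_contact(30.0, 10.0)  # 30 m gap closing at 10 m/s
```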


BMC Bioinformatics | 2008

Automatic detection of exonic splicing enhancers (ESEs) using SVMs

Britta Mersch; Alexander Gepperth; Sándor Suhai; Agnes Hotz-Wagenblatt

Background: Exonic splicing enhancers (ESEs) activate nearby splice sites and promote the inclusion (vs. exclusion) of the exons in which they reside, while serving as binding sites for SR proteins. To study the impact of ESEs on alternative splicing, it would be useful to be able to detect them in exons. Identifying SR protein-binding sites in human DNA sequences by machine learning techniques is a formidable task, since the exon sequences are also constrained by their functional role in coding for proteins.

Results: The choice of training examples needed for machine learning approaches is difficult, since only a few exact locations of human ESEs are described in the literature that could serve as positive examples. Additionally, it is unclear which sequences are suitable as negative examples. We therefore developed a motif-oriented data-extraction method that extracts exon sequences around experimentally or theoretically determined ESE patterns. Positive examples are restricted by heuristics based on known properties of ESEs, e.g. location in the vicinity of a splice site, whereas negative examples are taken in the same way from the middle of long exons. We show that a suitably chosen SVM using optimized sequence kernels (e.g., the combined oligo kernel) can extract meaningful properties from these training examples. Once the classifier is trained, every potential ESE sequence can be passed to the SVM for verification. Using SVMs with the combined oligo kernel yields a high accuracy of about 90 percent and well-interpretable parameters.

Conclusion: The motif-oriented data-extraction method appears to produce consistent training and test data, leading to good classification rates, and thus allows verification of potential ESE motifs. The best results were obtained using an SVM with the combined oligo kernel, while oligo kernels with oligomers of a certain length could be used to extract relevant features.
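The flavour of a sequence kernel over oligomers can be sketched as a dot product of k-mer counts summed over several oligomer lengths. Note that the published oligo kernel also encodes oligomer *positions*, which this simplified version omits:

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all length-k substrings (oligomers) of a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def oligo_style_kernel(seq_a, seq_b, ks=(2, 3)):
    """Similarity between two exon sequences as a sum of k-mer count
    dot products over several oligomer lengths -- a position-free
    simplification of an oligo-type kernel."""
    total = 0
    for k in ks:
        ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
        total += sum(ca[m] * cb[m] for m in ca)
    return total
```

An SVM would use such a kernel as its similarity function between training sequences.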


International Conference on Intelligent Transportation Systems | 2013

Real-time pedestrian detection and pose classification on a GPU

Alexander Gepperth; Michael Garcia Ortiz; Bernd Heisele

In this contribution, we present a real-time pedestrian detection and pose classification system that makes use of the computing power of Graphics Processing Units (GPUs). The aim of the pose classification presented here is to determine the orientation, and thus the likely future movement, of the pedestrian. We focus on the evaluation of pose detection performance and show that, without resorting to complex tracking or attention mechanisms, a small number of safety-relevant pedestrian poses can be reliably distinguished during live operation. Additionally, we show that detection and pose classification can share the same visual low-level features, achieving a very high frame rate at high image resolutions using only off-the-shelf hardware.
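The idea of sharing low-level features can be sketched as computing one descriptor per image window and feeding it to both the detector and the pose classifier. The gradient-orientation histogram below is a generic stand-in, not the paper's actual feature set:

```python
import numpy as np

def low_level_features(window):
    """Stand-in for shared visual features (here: a magnitude-weighted
    gradient-orientation histogram). Both the detection head and the
    pose head consume this same vector, so it is computed only once."""
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

window = np.random.rand(32, 16)      # one candidate pedestrian window
feats = low_level_features(window)
# both heads would reuse `feats`: one for the detection score,
# one for the pose class -- avoiding duplicated feature computation
```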


International Conference on Artificial Neural Networks | 2007

Color object recognition in real-world scenes

Alexander Gepperth; Britta Mersch; Jannik Fritsch; Christian Goerick

This work investigates the role of color in object recognition. We approach the problem from a computational perspective by measuring the performance of biologically inspired object recognition methods. As benchmarks, we use image datasets derived from a real-world object detection scenario and compare classification performance on color and gray-scale versions of the same datasets. To make our results as general as possible, we consider object classes with and without intrinsic color, partitioned into four datasets of increasing difficulty and complexity. For the same reason, we use two independent bio-inspired models of object classification which make use of color in different ways. We measure the qualitative dependency of classification performance on classifier type, dataset difficulty, and the color space used, and compare to results on gray-scale images. Thus, we are able to draw conclusions about the role and optimal use of color in classification, and find that our results are in good agreement with recent psychophysical results.


International Conference on Intelligent Transportation Systems | 2014

A real-time applicable 3D gesture recognition system for automobile HMI

Thomas Kopinski; Stefan Geisler; Louis-Charles Caron; Alexander Gepperth; Uwe Handmann

We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors, intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real time, and describe the setup of the system, the methods used, and possible application scenarios.
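The shape of such a classifier (a multilayer perceptron mapping a fused ToF descriptor to ten gesture classes) can be sketched as a forward pass; the layer sizes and the untrained random weights below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh units and a softmax output over the
    ten gesture classes -- the general shape of an MLP classifier,
    shown here with random (untrained) weights."""
    h = np.tanh(x @ w1 + b1)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

n_features, n_hidden, n_classes = 60, 25, 10   # sizes are assumptions
w1, b1 = rng.normal(size=(n_features, n_hidden)), np.zeros(n_hidden)
w2, b2 = rng.normal(size=(n_hidden, n_classes)), np.zeros(n_classes)

x = rng.normal(size=n_features)   # fused descriptor from both ToF sensors
probs = mlp_forward(x, w1, b1, w2, b2)
gesture = int(np.argmax(probs))   # index of the predicted gesture class
```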


International Conference on Intelligent Transportation Systems | 2011

Situation-specific learning for ego-vehicle behavior prediction systems

Michaël Garcia Ortiz; Jens Schmüdderich; Franz Kummert; Alexander Gepperth

We present a system able to predict the future behavior of the ego-vehicle in an inner-city environment. Our system learns the mapping between the current perceived scene (information about the ego-vehicle and the preceding vehicle, as well as information about the possible traffic lights) and the future driving behavior of the ego-vehicle. We improve the prediction accuracy by estimating the prediction confidence and by discarding unconfident samples. The behavior of the driver is represented as a sequence of elementary states termed behavior primitives. These behavior primitives are abstractions from the raw actuator states. Behavior prediction is therefore considered to be a multi-class learning problem. In this contribution, we explore the possibilities of situation-specific learning. We show that decomposing the perceived complex situation into a combination of simpler ones, each of them with a dedicated prediction, allows the system to reach a performance equivalent to a system without situation-specificity. We believe that this is advantageous for the scalability of the approach to the number of possible situations that the driver will encounter. The system is tested on a real world scenario, using streams recorded in inner-city scenes. The prediction is evaluated for a prediction horizon of 3s into the future, and the quality of the prediction is measured using established evaluation methods.
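The "discarding unconfident samples" step can be sketched as thresholding the top class probability over the behavior primitives; the threshold value is an assumption:

```python
def predict_with_rejection(class_probs, threshold=0.7):
    """Return the most likely behavior primitive, or None when the top
    probability falls below the confidence threshold (the threshold
    value 0.7 is an illustrative assumption, not from the paper)."""
    best = max(class_probs, key=class_probs.get)
    return best if class_probs[best] >= threshold else None
```

Rejected samples would simply produce no prediction, trading coverage for accuracy.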


Automatisierungstechnik | 2008

An Attention-based System Approach for Scene Analysis in Driver Assistance (Ein aufmerksamkeitsbasierter Systemansatz zur Szenenanalyse in der Fahrerassistenz)

Thomas Michalke; Robert Kastner; Jürgen Adamy; Sven Bone; Falko Waibel; Marcus Kleinehagenbrock; Jens Gayko; Alexander Gepperth; Jannik Fritsch; Christian Goerick

Research on computer vision systems for driver assistance has resulted in a variety of isolated approaches, mainly performing very specialized tasks such as lane keeping or traffic sign detection. However, for a full understanding of generic traffic situations, integrated and flexible approaches are needed. We present a highly integrated vision architecture for an advanced driver assistance system inspired by human cognitive principles. The system uses an attention system as the flexible and generic front-end for all visual processing, allowing a task-specific scene decomposition and search for known objects (based on a short-term memory) as well as generic object classes (based on a long-term memory). Knowledge fusion, e.g. between an internal 3D representation and a reliable road detection module, improves the system performance. The system relies heavily on top-down links to modulate lower processing levels, resulting in high system robustness.

Collaboration


Dive into Alexander Gepperth's collaborations.

Top Co-Authors

David Filliat (Université Paris-Saclay)

Fabian Sachara (Université Paris-Saclay)