Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mika Fischer is active.

Publication


Featured research published by Mika Fischer.


Advanced Video and Signal Based Surveillance | 2010

Multi-pose Face Recognition for Person Retrieval in Camera Networks

Martin Bäuml; Keni Bernardin; Mika Fischer; Hazim Kemal Ekenel; Rainer Stiefelhagen

In this paper, we study the use of facial appearance features for the re-identification of persons using distributed camera networks in a realistic surveillance scenario. In contrast to features commonly used for person re-identification, such as whole body appearance, facial features offer the advantage of remaining stable over much larger intervals of time. The challenge in using faces for such applications, apart from low captured face resolutions, is that their appearance across camera sightings is largely influenced by lighting and viewing pose. Here, a number of techniques to address these problems are presented and evaluated on a database of surveillance-type recordings. A system for online capture and interactive retrieval is presented that allows searching for sightings of particular persons in the video database. Evaluation results are presented on surveillance data recorded with four cameras over several days. A mean average precision of 0.60 was achieved for inter-camera retrieval using just a single track as query set, and up to 0.86 after relevance feedback by an operator.
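The retrieval results above are reported as mean average precision (mAP). As a point of reference only, the sketch below shows how mAP is typically computed from ranked result lists; the relevance labels in the example are hypothetical and not taken from the paper.

```python
import numpy as np

def average_precision(relevance):
    """Average precision for one ranked result list.

    relevance: sequence of 0/1 flags ordered by decreasing match score,
    where 1 marks a retrieved sighting of the queried person.
    """
    relevance = np.asarray(relevance, dtype=float)
    if relevance.sum() == 0:
        return 0.0
    # Precision at each rank where a relevant item is retrieved.
    cumulative_hits = np.cumsum(relevance)
    ranks = np.arange(1, len(relevance) + 1)
    return float((cumulative_hits / ranks)[relevance == 1].mean())

def mean_average_precision(ranked_relevance_per_query):
    """Mean of the per-query average precisions."""
    return float(np.mean([average_precision(r) for r in ranked_relevance_per_query]))

# Hypothetical example: two queries, each with a ranked list of candidate tracks.
print(mean_average_precision([[1, 0, 1, 0, 0], [0, 1, 1, 1, 0]]))
```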


Computer Vision and Pattern Recognition | 2007

Multi-modal Person Identification in a Smart Environment

Hazim Kemal Ekenel; Mika Fischer; Qin Jin; Rainer Stiefelhagen

In this paper, we present a detailed analysis of multimodal fusion for person identification in a smart environment. The multi-modal system consists of a video-based face recognition system and a speaker identification system. We investigated different score normalization, modality weighting and modality combination schemes during the fusion of the individual modalities. We introduced two new modality weighting schemes, namely, the cumulative ratio of correct matches (CRCM) and distance-to-second-closest (DT2ND) measures. In addition, we also assessed the effects of the well-known score normalization and classifier combination methods on the identification performance. Experimental results obtained on the CLEAR 2007 evaluation corpus, which contains audio-visual recordings from different smart rooms, show that CRCM-based modality weighting improves the correct identification rates significantly.
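The paper's CRCM and DT2ND weighting schemes are defined in the publication itself; as a rough illustration of the overall fusion pipeline only, the sketch below applies min-max score normalization followed by a weighted-sum combination of face and speaker scores. The weights and score values are hypothetical placeholders, not the paper's settings.

```python
import numpy as np

def min_max_normalize(scores):
    """Map a vector of per-identity scores to the range [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    return (scores - scores.min()) / span if span > 0 else np.zeros_like(scores)

def fuse_scores(face_scores, speaker_scores, face_weight=0.6):
    """Weighted-sum fusion of two normalized per-identity score vectors.

    face_weight is a fixed placeholder here; schemes such as CRCM or DT2ND
    would instead derive the modality weights from the score distributions.
    """
    face_n = min_max_normalize(face_scores)
    speaker_n = min_max_normalize(speaker_scores)
    return face_weight * face_n + (1.0 - face_weight) * speaker_n

# Hypothetical similarity scores for three enrolled identities.
face = [0.82, 0.40, 0.55]
speaker = [0.30, 0.75, 0.60]
fused = fuse_scores(face, speaker)
print("identified as identity", int(np.argmax(fused)))
```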


Multimedia Tools and Applications | 2011

Person re-identification in TV series using robust face recognition and user feedback

Mika Fischer; Hazim Kemal Ekenel; Rainer Stiefelhagen

In this paper, we present a system for person re-identification in TV series. In the context of video retrieval, person re-identification refers to the task where a user clicks on a person in a video frame and the system then finds other occurrences of the same person in the same or different videos. The main characteristic of this scenario is that no previously collected training data is available, so no person-specific models can be trained in advance. Additionally, the query data is limited to the image that the user clicks on. These conditions pose a great challenge to the re-identification system, which has to find the same person in other shots despite large variations in the person's appearance. In the study, facial appearance is used as the re-identification cue, since, in contrast to surveillance-oriented re-identification studies, the person can have different clothing in different shots. In order to increase the amount of available face data, the proposed system employs a face tracker that can track faces up to full profile views. This makes it possible to use a profile face image as query image and also to retrieve images with non-frontal poses. It also provides temporal association of the face images in the video, so that instead of using single images for query or target, whole tracks can be used. A fast and robust face recognition algorithm is used to find matching faces. If the match result is highly confident, our system adds the matching face track to the query set. Finally, if the user is not satisfied with the number of returned results, the system can present a small number of candidate face images and let the user confirm the ones that belong to the queried person. These features help to increase the variation in the query set, making it possible to retrieve results with different poses, illumination conditions, etc. The system is extensively evaluated on two episodes of the TV series Coupling, showing very promising results.
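One step that lends itself to a short sketch is the automatic query expansion described above: face tracks whose match is highly confident are added to the query set, and matching is repeated with the enlarged set. The code below is a minimal, hypothetical illustration of that loop; the feature representation, the distance function, and the confidence threshold are assumptions rather than the paper's implementation.

```python
import numpy as np

def track_distance(query_features, track_features):
    """Minimum Euclidean distance between any query face and any track face."""
    diffs = query_features[:, None, :] - track_features[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).min())

def expand_query(query_tracks, all_tracks, confident_threshold=0.5, max_rounds=3):
    """Iteratively add highly confident matching tracks to the query set.

    query_tracks / all_tracks: lists of (track_id, features) pairs, where
    features is an (n_faces, feature_dim) array per face track.
    """
    query = dict(query_tracks)
    for _ in range(max_rounds):
        added = False
        query_features = np.vstack(list(query.values()))
        for track_id, features in all_tracks:
            if track_id in query:
                continue
            if track_distance(query_features, features) < confident_threshold:
                query[track_id] = features  # confident match: expand the query set
                added = True
        if not added:
            break
    return query

# Hypothetical 2-D face descriptors for a clicked track and two gallery tracks.
rng = np.random.default_rng(0)
clicked = [("clicked", rng.normal(0.0, 0.1, (5, 2)))]
gallery = [("t1", rng.normal(0.0, 0.1, (8, 2))), ("t2", rng.normal(3.0, 0.1, (8, 2)))]
print(sorted(expand_query(clicked, gallery).keys()))
```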


Multimodal Technologies for Perception of Humans | 2008

ISL Person Identification Systems in the CLEAR 2007 Evaluations

Hazim Kemal Ekenel; Qin Jin; Mika Fischer; Rainer Stiefelhagen

In this paper, we present ISL person identification systems in the CLEAR 2007 evaluations. The identification systems consist of a face recognition system, a speaker identification system and a multi-modal identification system that combines the individual systems. The experimental results show that the face recognition system outperforms the speaker identification system significantly on the short duration test segments. They perform equally well on the longer duration test segments. Combination of the individual systems improves the performance further.


International Conference on Machine Learning | 2007

Face recognition in smart rooms

Hazim Kemal Ekenel; Mika Fischer; Rainer Stiefelhagen

In this paper, we present a detailed analysis of the face recognition problem in a smart room environment. We first examine the well-known face recognition algorithms in order to observe how they perform on the images collected in such environments. Afterwards, we investigate two aspects of doing face recognition in a smart room. These are: utilizing the images captured by multiple fixed cameras located in the room and handling possible registration errors due to the low resolution of the acquired face images. In addition, we also provide comparisons between frame-based and video-based face recognition and analyze the effect of frame weighting. Experimental results obtained on the CHIL database, which has been collected from different smart rooms, show that benefiting from multi-view video data and handling registration errors reduce the false identification rates significantly.
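For the frame-weighting comparison mentioned above, a common way to turn per-frame recognition scores into a video-level decision is to accumulate them with per-frame weights, for example derived from a frame quality or detection confidence estimate. The sketch below shows only that accumulation step with hypothetical scores and weights; it is not the paper's specific weighting scheme.

```python
import numpy as np

def video_identity(frame_scores, frame_weights=None):
    """Combine per-frame, per-identity scores into one video-level decision.

    frame_scores: (n_frames, n_identities) array of per-frame similarity scores.
    frame_weights: optional (n_frames,) array, e.g. a quality or confidence
    value per frame; unweighted accumulation is the frame-based baseline.
    """
    frame_scores = np.asarray(frame_scores, dtype=float)
    if frame_weights is None:
        frame_weights = np.ones(len(frame_scores))
    frame_weights = np.asarray(frame_weights, dtype=float)
    accumulated = (frame_weights[:, None] * frame_scores).sum(axis=0)
    return int(np.argmax(accumulated)), accumulated

# Hypothetical: 3 frames, 2 enrolled identities; the blurry last frame gets a low weight.
scores = [[0.7, 0.4], [0.6, 0.5], [0.1, 0.9]]
print(video_identity(scores, frame_weights=[1.0, 1.0, 0.2]))
```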


International Conference on Pattern Recognition | 2011

Combined head localization and head pose estimation for video-based advanced driver assistance systems

Andreas Schulz; Naser Damer; Mika Fischer; Rainer Stiefelhagen

This work presents a novel approach for pedestrian head localization and head pose estimation in single images. The presented method addresses an environment of low-resolution gray-value images taken from a moving camera with large variations in illumination and object appearance. The proposed algorithms are based on normalized detection confidence values of separate, pose-associated classifiers. Those classifiers are trained using a modified one-vs-all framework that tolerates outliers appearing in continuous head pose classes. Experiments on a large set of real-world data show very good head localization and head pose estimation results even on the smallest considered head size of 7×7 pixels. These results can be obtained in a probabilistic form, which makes them of great value for pedestrian path prediction and risk assessment systems within video-based driver assistance systems or many other applications.
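The abstract states that pose estimates are obtained in probabilistic form from normalized confidences of pose-associated classifiers. As a minimal, hypothetical sketch of that final step, the code below normalizes per-class confidence values into a distribution and derives an expected yaw angle via a circular mean; the class layout and confidence values are assumptions, not the paper's configuration.

```python
import numpy as np

# Hypothetical discretization: eight head pose classes covering 360 degrees of yaw.
POSE_CLASS_ANGLES = np.arange(0.0, 360.0, 45.0)

def pose_distribution(confidences):
    """Normalize per-class detection confidences into a probability distribution."""
    confidences = np.clip(np.asarray(confidences, dtype=float), 0.0, None)
    total = confidences.sum()
    return confidences / total if total > 0 else np.full(len(confidences), 1.0 / len(confidences))

def expected_yaw(distribution):
    """Circular mean of the pose class angles, weighted by the distribution."""
    angles = np.deg2rad(POSE_CLASS_ANGLES)
    mean_vector = (distribution * np.exp(1j * angles)).sum()
    return float(np.rad2deg(np.angle(mean_vector)) % 360.0)

# Hypothetical confidences of the eight pose-associated classifiers.
conf = [0.1, 0.7, 0.9, 0.2, 0.0, 0.0, 0.0, 0.1]
dist = pose_distribution(conf)
print(dist.round(2), round(expected_yaw(dist), 1))
```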


Pattern Recognition Letters | 2014

An evaluation of the compactness of superpixels

Alexander Schick; Mika Fischer; Rainer Stiefelhagen

Superpixel segmentation is the oversegmentation of an image into a connected set of homogeneous regions. Depending on the algorithm, superpixels have specific properties. One property that almost all authors claim for their superpixels is compactness. However, the compactness of superpixels has not yet been measured and the implications of compactness have not been investigated for superpixels. As our first contribution, we propose a metric to measure the compactness of superpixels. We further discuss implications of compactness and demonstrate the benefits of compact superpixels with an example application. Most importantly, we show that there is a negative correlation between compactness and boundary recall. A second desirable property for superpixel segmentations is conforming to a lattice. This regular structure, similar to the pixel grid of an image, can then be used for more efficient algorithms. As our second contribution, we propose an algorithm that offers a transparent and easy-to-use compactness control together with an optional lattice guarantee. We show in our evaluation with six benchmark algorithms that the proposed algorithm outperforms the state of the art.
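The compactness metric itself is defined in the paper; one plausible instantiation, shown below purely as an assumption, measures each superpixel's compactness by its isoperimetric quotient 4πA/P² and averages these values with area weights over the whole segmentation. The block should be read as a sketch of that variant, not the paper's exact definition.

```python
import numpy as np

def region_perimeter(mask):
    """Count boundary transitions (4-connectivity) of a binary region mask."""
    padded = np.pad(mask.astype(int), 1)
    horizontal = np.abs(np.diff(padded, axis=1)).sum()
    vertical = np.abs(np.diff(padded, axis=0)).sum()
    return horizontal + vertical

def compactness(labels):
    """Area-weighted isoperimetric quotient over all superpixels in a label image."""
    labels = np.asarray(labels)
    total_area = labels.size
    score = 0.0
    for label in np.unique(labels):
        mask = labels == label
        area = mask.sum()
        perimeter = region_perimeter(mask)
        quotient = 4.0 * np.pi * area / perimeter ** 2  # 1.0 for a perfect disc
        score += (area / total_area) * min(quotient, 1.0)
    return score

# Hypothetical 4x4 label image with two superpixels (left and right halves).
labels = np.array([[0, 0, 1, 1]] * 4)
print(round(compactness(labels), 3))
```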


International Conference on Biometrics: Theory, Applications and Systems | 2012

Analysis of partial least squares for pose-invariant face recognition

Mika Fischer; Hazim Kemal Ekenel; Rainer Stiefelhagen

Face recognition across large pose changes is one of the hardest problems for automatic face recognition. Recently, approaches that use partial least squares (PLS) to compute pairwise pose-independent coupled subspaces have achieved good results on this problem. In this paper, we perform a thorough experimental analysis of the PLS approach for pose-invariant face recognition. We find that the use of different alignment methods can have a significant influence on the results. We propose a simple and consistent alignment method that is easily reproducible and uses only few hand-tuned parameters. Further, we find that block-based approaches outperform those using a holistic face representation. However, we note that the size, positioning and selection of the extracted blocks has a large influence on the performance of PLS-based approaches, with the optimal sizes and selections differing significantly for different feature representations. Finally, we show that local PLS using simple intensity values performs almost as well as more sophisticated feature extraction methods like Gabor features for frontal gallery images. However, Gabor features perform significantly better with non-frontal gallery images. The achieved results exceed the previously reported results for the CMU Multi-PIE dataset on this task with an average recognition rate of 90.1% when using frontal images as gallery and 82.0% when considering all pose pairs.
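To make the coupled-subspace idea concrete, the sketch below learns a pairwise latent space between frontal gallery features and non-frontal probe features with partial least squares and compares faces in that space. It uses scikit-learn's PLSCanonical on random placeholder data; the feature representation, dimensions, and number of components are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(0)

# Placeholder training data: paired feature vectors of the same subjects
# under the frontal pose (X) and one non-frontal pose (Y).
n_subjects, feature_dim = 200, 64
latent = rng.normal(size=(n_subjects, 8))
X_frontal = latent @ rng.normal(size=(8, feature_dim)) + 0.1 * rng.normal(size=(n_subjects, feature_dim))
Y_profile = latent @ rng.normal(size=(8, feature_dim)) + 0.1 * rng.normal(size=(n_subjects, feature_dim))

# One PLS model per pose pair couples the two feature spaces.
pls = PLSCanonical(n_components=8)
pls.fit(X_frontal, Y_profile)

# Project a frontal gallery vector and a profile probe vector of the same
# subject into the shared latent space and compare them there.
gallery_latent, probe_latent = pls.transform(X_frontal[:1], Y_profile[:1])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(round(cosine(gallery_latent[0], probe_latent[0]), 3))
```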


Signal Processing and Communications Applications Conference | 2008

Local binary pattern domain local appearance face recognition

Hazim Kemal Ekenel; Mika Fischer; Erkin Tekeli; Rainer Stiefelhagen; Aytül Erçil

This paper presents a fast face recognition algorithm that combines the discrete cosine transform based local appearance face recognition technique with the local binary pattern (LBP) representation of the face images. The underlying idea is to benefit from both the robust image representation capability of local binary patterns, and the compact representation capability of local appearance-based face recognition. In the proposed method, prior to local appearance modeling, the input face image is transformed into the local binary pattern domain. The obtained LBP-representation is then decomposed into non-overlapping blocks and on each local block the discrete cosine transform is applied to extract the local features. The extracted local features are then concatenated to construct the overall feature vector. The proposed algorithm is tested extensively on the face images from the CMU PIE and the FRGC version 2 face databases. The experimental results show that the combined approach improves the performance significantly.
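The feature extraction pipeline described above (LBP transform, non-overlapping blocks, per-block DCT, concatenation) can be sketched with standard library routines. The block size and the number of retained DCT coefficients below are placeholder choices, not necessarily those of the paper, and the coefficients are taken in row-major rather than zig-zag order for brevity.

```python
import numpy as np
from scipy.fftpack import dct
from skimage.feature import local_binary_pattern

def lbp_block_dct_features(face_image, block_size=8, coeffs_per_block=5):
    """LBP-domain local appearance features: LBP map -> non-overlapping blocks
    -> 2-D DCT per block -> keep the first few coefficients -> concatenate."""
    # 8-neighbour LBP on the aligned grey-level face image.
    lbp = local_binary_pattern(face_image, P=8, R=1, method="default")
    h, w = lbp.shape
    features = []
    for y in range(0, h - h % block_size, block_size):
        for x in range(0, w - w % block_size, block_size):
            block = lbp[y:y + block_size, x:x + block_size]
            # Separable 2-D DCT of the block.
            block_dct = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
            # Keep the first low-frequency coefficients of each block.
            features.append(block_dct.flatten()[:coeffs_per_block])
    return np.concatenate(features)

# Hypothetical 64x64 aligned face crop.
face = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
print(lbp_block_dct_features(face).shape)  # 64 blocks * 5 coefficients
```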


Content-Based Multimedia Indexing | 2010

Interactive person re-identification in TV series

Mika Fischer; Hazim Kemal Ekenel; Rainer Stiefelhagen

In this paper, we present a system for person re-identification in TV series. In the context of video retrieval, person re-identification refers to the task where a user clicks on a person in a video frame and the system then finds other occurrences of the same person in the same or different videos. The main characteristic of this scenario is that no previously collected training data is available, so no person-specific models can be trained in advance. Additionally, the query data is limited to the image that the user clicks on. These conditions pose a great challenge to the re-identification system, which has to find the same person in other shots despite large variations in the person's appearance. In the study, facial appearance is used as the re-identification cue. In order to increase the amount of available face data, the proposed system employs a face tracker that can track faces up to full profile views. A fast and robust face recognition algorithm is used to find matching faces. If the match result is highly confident, our system adds the matching face track to the query set. This approach helps to increase the variation in the query set, making it possible to retrieve results with different poses, illumination conditions, etc. Furthermore, if the user is not satisfied with the number of returned results, the system can present a small number of candidate face images and let the user confirm the ones that belong to the queried person. The system is extensively evaluated on two episodes of the TV series Coupling, showing very promising results.

Collaboration


Dive into Mika Fischer's collaborations.

Top Co-Authors

Rainer Stiefelhagen, Karlsruhe Institute of Technology
Hazim Kemal Ekenel, Istanbul Technical University
Hua Gao, Karlsruhe Institute of Technology
Keni Bernardin, Karlsruhe Institute of Technology
Martin Bäuml, Karlsruhe Institute of Technology
Bertram E. Shi, Hong Kong University of Science and Technology
Feijun Jiang, Hong Kong University of Science and Technology
Qin Jin, Renmin University of China