
Publication


Featured research published by Christophe Garcia.


european conference on computer vision | 2016

The Visual Object Tracking VOT2014 Challenge Results

Matej Kristan; Roman P. Pflugfelder; Aleš Leonardis; Jiri Matas; Luka Cehovin; Georg Nebehay; Tomas Vojir; Gustavo Fernández; Alan Lukezic; Aleksandar Dimitriev; Alfredo Petrosino; Amir Saffari; Bo Li; Bohyung Han; CherKeng Heng; Christophe Garcia; Dominik Pangersic; Gustav Häger; Fahad Shahbaz Khan; Franci Oven; Horst Bischof; Hyeonseob Nam; Jianke Zhu; Jijia Li; Jin Young Choi; Jin-Woo Choi; João F. Henriques; Joost van de Weijer; Jorge Batista; Karel Lebeda

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow developments in the field. One of the reasons is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for the evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame with visual attributes indicating occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).


human behavior understanding | 2011

Sequential deep learning for human action recognition

Moez Baccouche; Franck Mamalet; Christian Wolf; Christophe Garcia; Atilla Baskurt

We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.
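The first step rests on extending 2D convolution along the time axis, so that learned kernels respond to motion as well as appearance. A minimal NumPy sketch of a single spatio-temporal ("3D") convolution over a grayscale clip — an illustration of the operation only, not the authors' architecture:

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Spatio-temporal 'valid' correlation over a (T, H, W) clip.
    The kernel spans both space and time, so each response encodes
    local motion as well as local appearance."""
    T, H, W = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(video[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out
```

In the full model, banks of such learned kernels are stacked with sub-sampling layers, and the resulting per-timestep feature vectors are fed to the recurrent classifier.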


international conference on pattern recognition | 2002

A neural architecture for fast and robust face detection

Christophe Garcia; Manolis Delakis

In this paper, we present a connectionist approach for detecting and precisely localizing semi-frontal human faces in complex images, making no assumption about the content or the lighting conditions of the scene, or about the size or the appearance of the faces. We propose a convolutional neural network architecture designed to recognize strongly variable face patterns directly from pixel images with no preprocessing, by automatically synthesizing its own set of feature extractors from a large training set of faces. We present in detail the optimized design of our architecture, our learning strategy and the resulting process of face detection. We also provide experimental results to demonstrate the robustness of our approach and its capability to precisely detect extremely variable faces in uncontrolled environments.


international conference on artificial neural networks | 2012

Simplifying convnets for fast learning

Franck Mamalet; Christophe Garcia

In this paper, we propose different strategies for simplifying the filters used as feature extractors in convolutional neural networks (ConvNets), in order to modify the hypothesis space and to speed up learning and processing times. We study two kinds of filters that are known to be computationally efficient in feed-forward processing: fused convolution/sub-sampling filters, and separable filters. We compare the complexity of the back-propagation algorithm on ConvNets based on these different kinds of filters. We show that using these filters allows us to reach the same level of recognition performance as classical ConvNets for handwritten digit recognition, while being up to 3.3 times faster.
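The separable case relies on a standard identity: if a 2D kernel factors as an outer product k = c rᵀ, convolving with k is equivalent to convolving with the column vector c and then the row vector r, cutting the per-pixel cost from kh·kw to kh+kw multiplies. A NumPy sketch of this equivalence (illustrative, not the paper's code):

```python
import numpy as np

def conv2d_valid(img, k):
    """Direct 2D 'valid' correlation: kh*kw multiplies per output pixel."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def conv2d_separable(img, col, row):
    """Same rank-1 filter as two 1D passes: kh + kw multiplies per pixel."""
    tmp = conv2d_valid(img, col[:, None])    # vertical pass
    return conv2d_valid(tmp, row[None, :])   # horizontal pass
```

The same factorisation applies at training time, which is why back-propagation through the two 1D passes is cheaper than through the full 2D filter.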


ieee international conference on automatic face gesture recognition | 2015

The FG 2015 Kinship Verification in the Wild Evaluation

Jiwen Lu; Junlin Hu; Venice Erin Liong; Xiuzhuang Zhou; Andrea Giuseppe Bottino; Ihtesham Ul Islam; Tiago Figueiredo Vieira; Xiaoqian Qin; Xiaoyang Tan; Songcan Chen; Shahar Mahpod; Yosi Keller; Lilei Zheng; Khalid Idrissi; Christophe Garcia; Stefan Duffner; Atilla Baskurt; Modesto Castrillón-Santana; Javier Lorenzo-Navarro

The aim of the Kinship Verification in the Wild Evaluation (held in conjunction with the 2015 IEEE International Conference on Automatic Face and Gesture Recognition, Ljubljana, Slovenia) was to evaluate different kinship verification algorithms. For this task, two datasets were made available and three possible experimental protocols (unsupervised, image-restricted, and image-unrestricted) were designed. Five institutions submitted their results to the evaluation: (i) Politecnico di Torino, Italy; (ii) LIRIS-University of Lyon, France; (iii) Universidad de Las Palmas de Gran Canaria, Spain; (iv) Nanjing University of Aeronautics and Astronautics, China; and (v) Bar Ilan University, Israel. Most of the participants tackled the image-restricted challenge and experimental results demonstrated better kinship verification performance than the baseline methods provided by the organizers.


british machine vision conference | 2012

Spatio-Temporal Convolutional Sparse Auto-Encoder for Sequence Classification.

Moez Baccouche; Franck Mamalet; Christian Wolf; Christophe Garcia; Atilla Baskurt

We present in this paper a novel learning-based approach for video sequence classification. Contrary to the dominant methodology, which relies on hand-crafted features that are manually engineered to be optimal for a specific task, our neural model automatically learns a sparse shift-invariant representation of the local 2D+t salient information, without any use of prior knowledge. To that aim, a spatio-temporal convolutional sparse auto-encoder is trained to project a given input in a feature space, and to reconstruct it from its projection coordinates. Learning is performed in an unsupervised manner by minimizing a global parametrized objective function. The sparsity is ensured by adding a sparsifying logistic between the encoder and the decoder, while the shift-invariance is handled by including an additional hidden variable to the objective function. The temporal evolution of the obtained sparse features is learned by a long short-term memory recurrent neural network trained to classify each sequence. We show that, since the feature learning process is problem-independent, the model achieves outstanding performances when applied to two different problems, namely human action and facial expression recognition. Obtained results are superior to the state of the art on the GEMEP-FERA dataset and among the very best on the KTH dataset.
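The core objective — encode, reconstruct, and penalise non-sparse codes — can be sketched with a tied-weight linear auto-encoder and an L1 penalty. This is a deliberate simplification: the paper enforces sparsity with a sparsifying logistic between encoder and decoder and adds a shift-invariance variable, neither of which is modelled here.

```python
import numpy as np

def sparse_ae_step(x, W, lam=0.01, lr=0.01):
    """One gradient step on 0.5*||x - W.T @ (W @ x)||^2 + lam*|W @ x|_1
    for a tied-weight linear auto-encoder (L1 stands in for the paper's
    sparsifying logistic)."""
    z = W @ x                  # encoder: projection coordinates
    err = W.T @ z - x          # decoder reconstruction error
    grad = (np.outer(z, err)                   # recon term, tied weights (1)
            + np.outer(W @ err, x)             # recon term, tied weights (2)
            + lam * np.outer(np.sign(z), x))   # L1 sparsity term
    loss = 0.5 * float(err @ err) + lam * float(np.abs(z).sum())
    return W - lr * grad, loss
```

Iterating this step over input patches drives the code toward sparse reconstructions; in the paper, the learned per-timestep features are then fed to the recurrent classifier.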


international conference on document analysis and recognition | 2007

Robust Binarization for Video Text Recognition

Zohra Saidane; Christophe Garcia

This paper presents an automatic binarization method for color text areas in images or videos, which is robust to complex backgrounds, low resolution and video coding artefacts. Based on a specific architecture of convolutional neural networks, the proposed system automatically learns how to perform binarization from a training set of synthesized text images and their corresponding desired binary images, without making any assumptions or using tunable parameters. The proposed method is compared to state-of-the-art binarization techniques with respect to Gaussian noise and contrast variations, demonstrating the robustness and the efficiency of our method. Text recognition experiments on a database of images extracted from video frames and web pages, with two classical OCRs applied to the obtained binary images, show a strong enhancement of the recognition rate, by more than 40%.


international conference on artificial neural networks | 2010

Action classification in soccer videos with long short-term memory recurrent neural networks

Moez Baccouche; Franck Mamalet; Christian Wolf; Christophe Garcia; Atilla Baskurt

In this paper, we propose a novel approach for action classification in soccer videos using a recurrent neural network scheme. To this end, we extract from each video action, at each timestep, a set of features describing both the visual content (by means of a BoW approach) and the dominant motion (with a key-point-based approach). A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each video sequence, considering the temporal evolution of the features at each timestep. Experimental results on the MICC-Soccer-Actions-4 database show that the proposed approach outperforms classification methods from related works (with a classification rate of 77%), and that combining the two features (BoW and dominant motion) leads to a classification rate of 92%.
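The sequence-level classifier can be sketched as a single-layer LSTM unrolled over the per-timestep feature vectors, with a linear read-out on the final hidden state. Gate layout, read-out, and the untrained toy weights below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM timestep. W: (4H, D), U: (4H, H), b: (4H,);
    gates stacked in the order input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g           # memory cell carries long-term context
    h = o * np.tanh(c)          # hidden state exposed to the read-out
    return h, c

def classify_sequence(features, W, U, b, V):
    """features: (T, D) per-timestep descriptors (e.g. BoW + motion).
    Returns the argmax class of a linear read-out V on the last state."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in features:
        h, c = lstm_step(x, h, c, W, U, b)
    return int(np.argmax(V @ h))
```

Because the cell state is carried across timesteps, the decision at the end of the sequence can depend on feature evolution over the whole action, which is the point of using an LSTM here.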


international conference on acoustics, speech, and signal processing | 2014

3D gesture classification with convolutional neural networks

Stefan Duffner; Samuel Berlemont; Grégoire Lefebvre; Christophe Garcia

In this paper, we present an approach that classifies 3D gestures by jointly using accelerometer and gyroscope signals from a mobile device. The proposed method is based on a convolutional neural network with a specific structure involving a combination of 1D convolution, averaging, and max-pooling operations. It directly classifies the fixed-length input matrix, composed of the normalised sensor data, as one of the gestures to be recognised. Experimental results on different datasets with varying training/testing configurations show that our method outperforms or is on par with current state-of-the-art methods for almost all data configurations.
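The described structure — per-channel 1D convolution, averaging, and max-pooling over a fixed-length (T × 6) matrix of normalised accelerometer and gyroscope axes — can be sketched as follows (hand-fixed kernel and single layer for illustration; the real network learns its filters and stacks several such stages before classification):

```python
import numpy as np

def conv1d_valid(x, w):
    """1D 'valid' correlation along the time axis of one sensor channel."""
    K = len(w)
    return np.array([np.dot(x[t:t + K], w) for t in range(len(x) - K + 1)])

def gesture_features(sensors, kernel, pool=4):
    """sensors: (T, 6) normalised accelerometer + gyroscope signals.
    Per channel: 1D convolution, non-overlapping average pooling,
    then a max over time -- yielding one feature per channel."""
    feats = []
    for ch in range(sensors.shape[1]):
        y = conv1d_valid(sensors[:, ch], kernel)
        n = len(y) // pool
        pooled = y[:n * pool].reshape(n, pool).mean(axis=1)  # averaging
        feats.append(pooled.max())                           # max-pooling
    return np.array(feats)
```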


Computer Vision and Image Understanding | 2014

Evaluation of video activity localizations integrating quality and quantity measurements

Christian Wolf; Eric Lombardi; Julien Mille; Oya Celiktutan; Mingyuan Jiu; Emre Dogan; Gonen Eren; Moez Baccouche; Emmanuel Dellandréa; Charles-Edmond Bichot; Christophe Garcia; Bülent Sankur

Evaluating the performance of computer vision algorithms is classically done by reporting classification error or accuracy, if the problem at hand is the classification of an object in an image, the recognition of an activity in a video or the categorization and labeling of the image or video. If in addition the detection of an item in an image or a video, and/or its localization are required, frequently used metrics are Recall and Precision, as well as ROC curves. These metrics give quantitative performance values which are easy to understand and interpret even by non-experts. However, an inherent problem is the dependency of quantitative performance measures on the quality constraints that we need to impose on the detection algorithm. In particular, an important quality parameter of these measures is the spatial or spatio-temporal overlap between a ground-truth item and a detected item, and this needs to be taken into account when interpreting the results. We propose a new performance metric addressing and unifying the qualitative and quantitative aspects of the performance measures. The performance of a detection and recognition algorithm is illustrated intuitively by performance graphs which present quantitative performance values, like Recall, Precision and F-Score, depending on quality constraints of the detection. In order to compare the performance of different computer vision algorithms, a representative single performance measure is computed from the graphs, by integrating out all quality parameters. The evaluation method can be applied to different types of activity detection and recognition algorithms. The performance metric has been tested on several activity recognition algorithms participating in the ICPR 2012 HARL competition.
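The idea of integrating out a quality parameter can be illustrated for a single overlap threshold: compute the F-Score at every threshold, then average the curve. This is a toy single-class version with greedy one-to-one matches assumed; the paper's metric handles the matching problem and several quality parameters:

```python
import numpy as np

def f_score_at(overlaps, n_gt, thr):
    """overlaps: best spatio-temporal overlap of each detection with a
    ground-truth item (0 where unmatched). A detection is a true positive
    only if its overlap meets the quality threshold thr."""
    tp = int(np.sum(overlaps >= thr))
    precision = tp / len(overlaps) if len(overlaps) else 0.0
    recall = tp / n_gt if n_gt else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall > 0 else 0.0)

def integrated_f_score(overlaps, n_gt, thresholds=None):
    """Summary measure: average the F-Score curve over all overlap
    thresholds instead of fixing one arbitrary quality cut-off."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    return float(np.mean([f_score_at(overlaps, n_gt, t) for t in thresholds]))
```

The averaged value rewards detectors whose localizations remain accurate as the overlap requirement tightens, rather than those that barely clear one fixed threshold.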

Collaboration


Dive into Christophe Garcia's collaborations.

Top Co-Authors

Moez Baccouche

Institut national des sciences Appliquées de Lyon
