Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mateusz Budnik is active.

Publication


Featured research published by Mateusz Budnik.


content-based multimedia indexing | 2015

Learned features versus engineered features for semantic video indexing

Mateusz Budnik; Efrain-Leonardo Gutierrez-Gomez; Bahjat Safadi; Georges Quénot

In this paper, we compare “traditional” engineered (hand-crafted) features (or descriptors) and learned features for content-based semantic indexing of video documents. Learned (or semantic) features are obtained by training classifiers for other target concepts on other data. These classifiers are then applied to the current collection. The vector of classification scores is the new feature used for training a classifier for the current target concepts on the current collection. If the classifiers used on the other collection are of the Deep Convolutional Neural Network (DCNN) type, it is possible to use as a new feature not only the scores produced by the last layer but also the intermediate values corresponding to the outputs of the hidden layers. We made an extensive comparison of the performance of such features with traditional engineered ones, as well as with combinations of them, in the context of the TRECVid semantic indexing task. Our results confirm those obtained for still images: features learned from other training data generally outperform engineered features for concept recognition. Additionally, we found that directly training SVM classifiers on these features performs significantly better than partially retraining the DCNN to adapt it to the new data. We also found that, even though the learned features performed better than the engineered ones, the fusion of the two performs significantly better still, indicating that engineered features remain useful, at least in this case.
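As an illustration of the pipeline described above, here is a minimal sketch in Python with scikit-learn. The data, the number of source concepts, and the use of logistic regression as the per-concept source classifier are all assumptions made for the example, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Hypothetical data: low-level descriptors for a source collection annotated
# with "other" concepts, and for the current (destination) collection.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(500, 128))          # source-collection descriptors
y_src = rng.integers(0, 2, size=(500, 10))   # 10 source concepts (multi-label)
X_cur = rng.normal(size=(200, 128))          # current-collection descriptors
y_cur = rng.integers(0, 2, size=200)         # one current target concept

# 1) Train one classifier per source concept on the other collection.
source_models = [
    LogisticRegression(max_iter=1000).fit(X_src, y_src[:, c])
    for c in range(y_src.shape[1])
]

# 2) Apply them to the current collection; the vector of classification
#    scores becomes the new ("semantic") feature for each sample.
F_cur = np.column_stack([m.decision_function(X_cur) for m in source_models])

# 3) Train the classifier for the current target concept on these features.
clf = LinearSVC().fit(F_cur, y_cur)
```

The key point is step 2: the final classifier never sees the original descriptors, only the score vector produced by the classifiers trained on the other collection.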


Multimedia Tools and Applications | 2017

Learned features versus engineered features for multimedia indexing

Mateusz Budnik; Efrain-Leonardo Gutierrez-Gomez; Bahjat Safadi; Denis Pellerin; Georges Quénot

In this paper, we compare “traditional” engineered (hand-crafted) features (or descriptors) and learned features for content-based indexing of image or video documents. Learned (or semantic) features are obtained by training classifiers on a source collection containing samples annotated with concepts. These classifiers are applied to the samples of a destination collection, and the classification scores for each sample are gathered into a vector that becomes a feature for it. These feature vectors are then used for training another classifier for the destination concepts on the destination collection. If the classifiers used on the source collection are Deep Convolutional Neural Networks (DCNNs), the intermediate values corresponding to the outputs of the hidden layers can also serve as feature vectors. We made an extensive comparison of the performance of such features with traditional engineered ones, as well as with combinations of them, in the context of the TRECVid semantic indexing task. Our results confirm those obtained for still images: features learned from other training data generally outperform engineered features for concept recognition. Additionally, we found that directly training KNN and SVM classifiers on these features performs significantly better than partially retraining the DCNN to adapt it to the new data. We also found that, even though the learned features performed better than the engineered ones, fusing the two performs even better, indicating that engineered features are still useful, at least in the considered case. Finally, the combination of DCNN features with KNN and SVM classifiers was applied to the VOC 2012 object classification task, where it currently obtains the best performance with a MAP of 85.4%.
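To make the DCNN variant concrete, the sketch below uses PyTorch forward hooks to collect hidden-layer activations as feature vectors. The torchvision ResNet backbone and the choice of hooked layers are placeholders standing in for whichever DCNN and layers were actually used; this is a sketch of the mechanism, not a reproduction of the paper's setup.

```python
import torch
import torchvision.models as models

# Placeholder backbone; the paper's DCNN would be used instead.
net = models.resnet18(weights=None)
net.eval()

features = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Flatten the activation map into one feature vector per sample.
        features[name] = output.flatten(start_dim=1).detach()
    return hook

# Hook a couple of hidden layers (the layer choice is an assumption).
net.layer3.register_forward_hook(make_hook("layer3"))
net.layer4.register_forward_hook(make_hook("layer4"))

with torch.no_grad():
    scores = net(torch.randn(4, 3, 224, 224))  # dummy batch of 4 images

# The final scores and each flattened hidden activation are now candidate
# feature vectors for the KNN/SVM classifiers discussed above.
print(scores.shape, {k: tuple(v.shape) for k, v in features.items()})
```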


content-based multimedia indexing | 2014

Automatic propagation of manual annotations for multimodal person identification in TV shows

Mateusz Budnik; Johann Poignant; Laurent Besacier; Georges Quénot

In this paper, an approach to propagating human annotations for person identification in a multimodal context is proposed. A system combining speaker diarization and face clustering is used to produce multimodal clusters. Entire multimodal clusters, rather than single tracks, are then annotated, and the labels are spread by propagation. Optical character recognition provides the initial annotation. Four different strategies for selecting candidates for annotation are tested. The initial results of annotation propagation are promising, and with a proper active learning selection strategy the human annotator's involvement could be reduced even further.
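A toy sketch of the cluster-level annotation loop is given below. The clusters and the oracle standing in for the human annotator are hypothetical, and the largest-cluster-first heuristic is just one plausible selection strategy, not necessarily one of the four tested in the paper.

```python
# Hypothetical multimodal clusters: each groups speech and face tracks
# believed to belong to one person (speaker diarization + face clustering).
clusters = {
    "c1": ["speech_01", "face_03", "speech_07"],
    "c2": ["face_02", "speech_04"],
    "c3": ["face_05"],
}
oracle = {"c1": "Alice", "c2": "Bob", "c3": "Carol"}  # stands in for the human

def select_cluster(clusters, labeled):
    # A plausible active-selection heuristic (an assumption, not the paper's
    # exact criterion): label the largest unlabeled cluster first, so each
    # human annotation is propagated to as many tracks as possible.
    unlabeled = [c for c in clusters if c not in labeled]
    return max(unlabeled, key=lambda c: len(clusters[c])) if unlabeled else None

track_labels = {}
labeled_clusters = set()
while (c := select_cluster(clusters, labeled_clusters)) is not None:
    labeled_clusters.add(c)
    for track in clusters[c]:           # propagate the label to every track
        track_labels[track] = oracle[c]

print(track_labels)
```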


Odyssey 2016 | 2016

Deep complementary features for speaker identification in TV broadcast data

Mateusz Budnik; Ali Khodabakhsh; Laurent Besacier

This work investigates the use of a Convolutional Neural Network approach and its fusion with more traditional systems, such as Total Variability Space, for speaker identification in TV broadcast data. The former is trained on spectrograms, while the latter is based on MFCC features. The dataset poses several challenges, such as significant class imbalance and background noise and music. Even though the performance of the Convolutional Neural Network is below the state of the art, it complements it and gives better results through fusion. Different fusion techniques are evaluated, using both early and late fusion.
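Late fusion of this kind can be summarized in a few lines. The sketch below uses random score matrices as stand-ins for the CNN and Total Variability outputs and tunes a single interpolation weight on a development set; this is one common late-fusion scheme, not necessarily the exact one evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_utt, n_spk = 100, 20
scores_cnn = rng.normal(size=(n_utt, n_spk))  # CNN (spectrogram) scores
scores_tv = rng.normal(size=(n_utt, n_spk))   # Total Variability scores
y_dev = rng.integers(0, n_spk, size=n_utt)    # dev-set speaker labels

def accuracy(scores, y):
    # Identification accuracy: the top-scoring speaker must be correct.
    return float((scores.argmax(axis=1) == y).mean())

# Grid-search the interpolation weight on the development set.
best_w, best_acc = max(
    ((w, accuracy(w * scores_cnn + (1 - w) * scores_tv, y_dev))
     for w in np.linspace(0.0, 1.0, 21)),
    key=lambda t: t[1],
)
print(f"best weight {best_w:.2f}, dev accuracy {best_acc:.2f}")
```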


cooperative design visualization and engineering | 2014

Collaborative Annotation of Multimedia Resources

Pierrick Bruneau; Mickaël Stefas; Mateusz Budnik; Johann Poignant; Hervé Bredin; Thomas Tamisier; Benoît Otjacques

Reference multimedia corpora for use in automated indexing algorithms require a large amount of manual work. The Camomile project advocates the joint development of automated annotation methods and of tools for improving the benchmark resources. This paper presents work in progress on the interactive visualization of annotations, and perspectives on harnessing the collaboration between manual annotators, algorithm designers, and benchmark administrators.


Proceedings of TRECVid | 2014

LIG at TRECVid 2014: Semantic Indexing

Bahjat Safadi; Nadia Derbas; Abdelkader Hamadi; Mateusz Budnik; Philippe Mulhem; Georges Quénot


language resources and evaluation | 2016

The CAMOMILE collaborative annotation platform for multi-modal, multi-lingual and multi-media documents

Johann Poignant; Mateusz Budnik; Hervé Bredin; Claude Barras; Mickaël Stefas; Pierrick Bruneau; Gilles Adda; Laurent Besacier; Hazim Kemal Ekenel; Gil Francopoulo; Javier Hernando; Joseph Mariani; Ramon Morros; Georges Quénot; Sophie Rosset; Thomas Tamisier


conference of the international speech communication association | 2014

Active selection with label propagation for minimizing human effort in speaker annotation of TV shows

Mateusz Budnik; Johann Poignant; Laurent Besacier; Georges Quénot


Proceedings of TRECVid | 2015

IRIM at TRECVID 2015: Semantic Indexing

Hervé Le Borgne; Philippe Gosselin; David Picard; Miriam Redi; Bernard Merialdo; Boris Mansencal; Jenny Benois-Pineau; Stéphane Ayache; Abdelkader Hamadi; Bahjat Safadi; Nadia Derbas; Mateusz Budnik; Georges Quénot; Boyang Gao; Chao Zhu; Yuxing Tang; Emmanuel Dellandréa; Charles-Edmond Bichot; Liming Chen; Alexandre Benoit; Patrick Lambert; Tiberius Strat


MediaEval 2015 Workshop | 2015

LIG at MediaEval 2015 Multimodal Person Discovery in Broadcast TV Task

Mateusz Budnik; Bahjat Safadi; Laurent Besacier; Georges Quénot; Ali Khodabakhsh

Collaboration


Dive into Mateusz Budnik's collaborations.

Top Co-Authors

Georges Quénot, Centre national de la recherche scientifique
Johann Poignant, Centre national de la recherche scientifique
Laurent Besacier, Centre national de la recherche scientifique
Bahjat Safadi, Centre national de la recherche scientifique
Philippe Mulhem, Joseph Fourier University
Hervé Bredin, Centre national de la recherche scientifique
Abdelkader Hamadi, Centre national de la recherche scientifique
Claude Barras, Centre national de la recherche scientifique
Efrain-Leonardo Gutierrez-Gomez, Centre national de la recherche scientifique
Nadia Derbas, Centre national de la recherche scientifique