Publication


Featured research published by Nicolas Riche.


International Conference on Image Processing | 2012

Rare: A new bottom-up saliency model

Nicolas Riche; Matei Mancas; Bernard Gosselin; Thierry Dutoit

In this paper, a new bottom-up visual saliency model is proposed. Based on the idea that locally contrasted and globally rare features are salient, the model is called “RARE” in the following. It uses a sequential bottom-up feature extraction: low-level features such as luminance and chrominance are computed first, and medium-level features such as image orientations are then extracted from those results. A qualitative and a quantitative comparison are carried out on a dataset of 120 images. RARE predicts human fixations well compared with most of the freely available saliency models.
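
The core idea, that globally rare feature values are salient, can be sketched as the self-information of quantized feature values. Below is a minimal Python illustration, not the published implementation: the actual RARE model uses multi-scale processing, orientation features, and a more elaborate fusion.

```python
import numpy as np

def rarity_map(feature, bins=16):
    # Self-information of quantized feature values: pixels whose
    # values are globally rare receive high saliency.
    hist, edges = np.histogram(feature, bins=bins)
    p = hist / max(hist.sum(), 1)
    idx = np.clip(np.digitize(feature, edges[1:-1]), 0, bins - 1)
    return -np.log(p[idx] + 1e-12)

def rare_like_saliency(rgb):
    # Minimal sketch of a RARE-style pipeline (assumption: the
    # published feature set and fusion are richer than this).
    rgb = rgb.astype(np.float64)
    lum = rgb.mean(axis=2)                                 # luminance
    rg = rgb[..., 0] - rgb[..., 1]                         # chrominance 1
    by = rgb[..., 2] - 0.5 * (rgb[..., 0] + rgb[..., 1])   # chrominance 2
    maps = [rarity_map(f) for f in (lum, rg, by)]
    sal = sum(m / (m.max() + 1e-12) for m in maps)         # naive fusion
    return sal / (sal.max() + 1e-12)
```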


International Conference on Image Processing | 2011

Abnormal motion selection in crowds using bottom-up saliency

Matei Mancas; Nicolas Riche; Julien Leroy; Bernard Gosselin

This paper addresses the selection of relevant motion among multiple moving objects. The proposed method computes bottom-up saliency maps through a multi-scale approach, using features extracted from optical flow and a global rarity quantification. It performs well on scenes ranging from four objects to dense crowds, with performance increasing with crowd density. The results are convincing on synthetic videos, simple real movements, and a pedestrian database, and appear promising on very complex videos with dense crowds. The algorithm uses only motion features (direction and speed) but can easily be generalized to other dynamic or static features. Video surveillance, social signal processing and, more generally, higher-level scene understanding can benefit from this method.
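
As a rough illustration of motion rarity, the sketch below computes dense optical flow with OpenCV and scores each pixel by the self-information of its speed and direction. This is a single-scale simplification; the paper's multi-scale approach and exact rarity quantification are not reproduced here.

```python
import cv2
import numpy as np

def motion_rarity(prev_gray, next_gray, bins=16):
    # Dense optical flow between two grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)
    direction = np.arctan2(flow[..., 1], flow[..., 0])
    sal = np.zeros_like(speed)
    # Rare speeds and rare directions both raise saliency.
    for feat, rng in ((speed, (0.0, float(speed.max()) + 1e-6)),
                      (direction, (-np.pi, np.pi))):
        hist, edges = np.histogram(feat, bins=bins, range=rng)
        p = hist / max(hist.sum(), 1)
        idx = np.clip(np.digitize(feat, edges[1:-1]), 0, bins - 1)
        sal += -np.log(p[idx] + 1e-12)
    return sal / (sal.max() + 1e-12)
```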


Asian Conference on Computer Vision | 2012

Dynamic saliency models and human attention: a comparative study on videos

Nicolas Riche; Matei Mancas; Dubravko Culibrk; Vladimir S. Crnojevic; Bernard Gosselin; Thierry Dutoit

Significant progress has been made on computational models of bottom-up visual attention (saliency). However, efficient ways of comparing these models, even for still images, remain an open research question. The problem is more challenging still for videos and dynamic saliency. This paper proposes a framework for evaluating dynamic saliency models, based on a new database of diverse videos for which eye-tracking data have been collected. In addition, we present evaluation results for four state-of-the-art dynamic saliency models, two of which had not previously been validated against eye-tracking data.
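
One common way to score a saliency model against eye-tracking data, frame by frame, is the Normalized Scanpath Saliency (NSS). The sketch below is illustrative only; the paper's evaluation protocol and choice of metrics are its own.

```python
import numpy as np

def nss(saliency, fixations):
    # Mean z-scored saliency value at human fixation locations.
    # fixations: binary map of fixated pixels, same shape as saliency.
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    return float(s[fixations.astype(bool)].mean())

def video_nss(saliency_maps, fixation_maps):
    # Average per-frame NSS over a video, skipping unfixated frames.
    scores = [nss(s, f) for s, f in zip(saliency_maps, fixation_maps)
              if f.any()]
    return float(np.mean(scores))
```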


International Conference on Computer Vision Systems | 2011

3D saliency for abnormal motion selection: the role of the depth map

Nicolas Riche; Matei Mancas; Bernard Gosselin; Thierry Dutoit

This paper addresses the selection of relevant motion within a scene. The proposed method extracts 3D features and quantifies their rarity to compute bottom-up saliency maps. We show that using 3D motion features, namely motion direction and velocity, achieves much better results than the same algorithm using only 2D information. This is especially true for close, frontal-view scenes with small groups of people or moving objects. The proposed algorithm uses motion features but can easily be generalized to other dynamic or static features. It is implemented on Max/MSP/Jitter, a platform for real-time signal analysis. Social signal processing, video games, gesture processing and, more generally, higher-level scene understanding can benefit from this method.
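
A hypothetical sketch of 3D motion features: 2D optical flow augmented with a depth velocity channel taken from consecutive depth maps. The names and the exact feature construction are assumptions following the paper's idea (motion direction and velocity in 3D), not its implementation.

```python
import numpy as np

def motion_features_3d(flow, depth_prev, depth_next):
    # flow: (H, W, 2) optical flow in pixels/frame; the depth maps
    # are assumed registered with the color frames (an assumption).
    dz = depth_next - depth_prev      # crude depth velocity; a fuller
    # version would warp depth_next by the flow before differencing.
    v = np.dstack([flow[..., 0], flow[..., 1], dz])    # (H, W, 3)
    speed = np.linalg.norm(v, axis=2)
    direction = v / (speed[..., None] + 1e-12)         # unit vectors
    return speed, direction
```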


International Conference on Image Processing | 2013

Spatio-temporal saliency based on rare model

Marc Decombas; Nicolas Riche; Frederic Dufaux; Béatrice Pesquet-Popescu; Matei Mancas; Bernard Gosselin; Thierry Dutoit

In this paper, a new spatio-temporal saliency model is presented. Based on the idea that both spatial and temporal features are needed to determine the saliency of a video, the model builds on the principle that locally contrasted and globally rare features are salient. The features are both spatial (color and orientations) and temporal (motion amplitude and direction), computed at several scales. To be more robust to camera motion, a module estimates the global motion, and to be more consistent over time, the saliency maps are combined after temporal filtering. The model is evaluated on a dataset of 24 videos split into 5 categories (Abnormal, Surveillance, Crowds, Moving camera, and Noisy) and outperforms several state-of-the-art saliency models.
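
Temporal consistency of the per-frame maps can be illustrated with a simple exponential filter, as below. This is only one plausible choice; the paper's actual temporal filtering and fusion scheme may differ.

```python
import numpy as np

def temporally_filtered(saliency_maps, alpha=0.3):
    # Exponential smoothing across frames: each map is blended with
    # the running state, damping flicker between consecutive frames.
    out, state = [], None
    for s in saliency_maps:
        state = s if state is None else alpha * s + (1 - alpha) * state
        out.append(state / (state.max() + 1e-12))
    return out
```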


arXiv: Computer Vision and Pattern Recognition | 2016

Study of Parameters Affecting Visual Saliency Assessment

Nicolas Riche

The computational modelling of visual attention has developed and expanded considerably during the past ten years. Many saliency models are now available online, for both still images and videos. At the same time, many popular image and video datasets with human gaze data or binary masks have been released to evaluate saliency models with commonly used metrics. The next challenges for the field are therefore to establish evaluation protocols and saliency benchmarks. Although some evaluation studies and online benchmarks have already been proposed and are major contributions, a key underlying issue remains: how can all these models be evaluated fairly? In this chapter, we investigate this question through an evaluation divided into four experiments, leading to the proposition of a new evaluation framework.


European Signal Processing Conference | 2015

A CBIR-based evaluation framework for visual attention models

Dounia Awad; Matei Mancas; Nicolas Riche; Vincent Courboulay; Arnaud Revel

Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front-ends to numerous vision systems such as automatic object recognition. These systems are generally evaluated against eye-tracking data or manually segmented salient objects in images. We previously showed that this comparison can lead to different rankings depending on which of the two ground truths is used. These findings suggest that the ranking of saliency models may differ for each application, and that using eye-tracking rankings to choose a model for a given application is not optimal. Therefore, in this paper, we propose a new saliency evaluation framework optimized for object recognition. The paper aims to answer two questions: 1) Are application-driven saliency model rankings consistent with classical ground truths such as eye-tracking? 2) If not, which saliency models should one use for specific CBIR applications?
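
Question 1 amounts to comparing two orderings of the same models, and a standard statistic for that is Kendall's tau, sketched below. The score dictionaries and names are hypothetical placeholders, not the paper's data or method.

```python
from scipy.stats import kendalltau

def ranking_agreement(eye_tracking_scores, task_scores):
    # Both arguments: dict mapping model name -> score. A tau near 1
    # means the eye-tracking and task-driven rankings agree; a low or
    # negative tau means the two ground truths rank models differently.
    models = sorted(set(eye_tracking_scores) & set(task_scores))
    a = [eye_tracking_scores[m] for m in models]
    b = [task_scores[m] for m in models]
    tau, p_value = kendalltau(a, b)
    return tau, p_value
```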


Archive | 2016

Metrics for Saliency Model Validation

Nicolas Riche

Different scores have been used in the literature to validate saliency models. While reviews of databases and of saliency models exist, reviews of metrics are harder to come by. In this chapter, we explain the standard measures used to evaluate salient object detection and eye-tracking models. While some metrics focus on eye scanpaths, here we deal with approaches involving 2D maps. The metrics are described and compared to show that they are more or less complementary.
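
Two of the standard map-based metrics covered by such reviews, the linear correlation coefficient (CC) and the similarity measure (SIM), are compact enough to sketch here; this is a generic illustration rather than the chapter's own code.

```python
import numpy as np

def cc(saliency, density):
    # Pearson correlation between a saliency map and a continuous
    # ground-truth fixation density map of the same shape.
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    g = (density - density.mean()) / (density.std() + 1e-12)
    return float((s * g).mean())

def sim(saliency, density):
    # Histogram intersection: normalize both maps to sum to 1, then
    # accumulate the pixel-wise minimum (1 = identical distributions).
    s = saliency / (saliency.sum() + 1e-12)
    g = density / (density.sum() + 1e-12)
    return float(np.minimum(s, g).sum())
```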


Audio Mostly Conference | 2014

AudioMetro: directing search for sound designers through content-based cues

Christian Frisson; Stéphane Dupont; Willy Yvart; Nicolas Riche; Xavier Siebert; Thierry Dutoit

Sound designers source sounds in massive collections that are heavily tagged by themselves and by sound librarians. For each query, once successive keywords have reached their limit in filtering down the results, hundreds of sounds are left to review. AudioMetro combines a new content-based information visualization technique with instant audio feedback to facilitate this part of their workflow. Through user evaluations based on known-item search in collections of textural sounds, we show that a default grid layout ordered by filename unexpectedly outperforms content-based similarity layouts produced by a recent dimension reduction technique (Student-t Stochastic Neighbor Embedding), even when complemented with content-based glyphs that emphasize local neighborhoods and cue perceptual features. We propose a solution borrowed from image browsing: a proximity grid, whose density we optimize for nearest-neighbor preservation among the closest cells. Not only does it remove overlap, but a subsequent user evaluation shows that it also helps to direct the search. Our experiments are based on an open dataset (the OLPC sound library) for replicability.
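
The proximity grid idea, snapping a 2D embedding onto unique grid cells so items no longer overlap while neighborhoods are roughly preserved, can be sketched with a greedy assignment. This is a simplified stand-in for the paper's density-optimized grid, not its algorithm.

```python
import numpy as np

def proximity_grid(points, rows, cols):
    # points: (N, 2) embedding coordinates, e.g. from t-SNE.
    pts = np.asarray(points, dtype=float)
    assert rows * cols >= len(pts), "grid needs a cell per point"
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    targets = (pts - lo) / (hi - lo + 1e-12) * [rows - 1, cols - 1]
    free = {(r, c) for r in range(rows) for c in range(cols)}
    assignment = {}
    # Greedily snap each point to the nearest still-free cell.
    for i, (tr, tc) in enumerate(targets):
        best = min(free, key=lambda rc: (rc[0] - tr) ** 2 + (rc[1] - tc) ** 2)
        free.remove(best)
        assignment[i] = best
    return assignment
```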


Archive | 2016

Bottom-Up Saliency Models for Still Images: A Practical Review

Nicolas Riche; Matei Mancas

There is increasing interest in exploiting human visual attention abilities in computational systems. This is especially the case in computer vision, which needs to select the most relevant parts within a large amount of data. Modeling visual attention, particularly its bottom-up part, has therefore been a very active research area over the past 20 years, and many models of bottom-up visual attention are now available online. They take natural images as input and output a saliency map giving the probability of each pixel to grab our attention. In this chapter, a state-of-the-art review of static saliency models is provided. The models are grouped into families depending on whether they predict human gaze or salient objects, and the features of each model are listed to show the main differences between them.

Collaboration


Dive into Nicolas Riche's collaborations.

Top Co-Authors

Loïc Reboursière

Faculté polytechnique de Mons
