Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Matei Mancas is active.

Publication


Featured research published by Matei Mancas.


Electronic Imaging | 2005

Segmentation Using a Region Growing Thresholding

Matei Mancas; Bernard Gosselin; Benoît Macq

Our research deals with a semi-automatic region-growing segmentation technique. The method needs only one seed inside the region of interest (ROI). We applied it to spinal cord segmentation, but it also shows good results on parotid glands and even tumors. Moreover, it appears to be a general segmentation method, as it could be applied in computer vision domains other than medical imaging. We use both the simplicity of thresholding and spatial information. The gray-scale and spatial distances from the seed to every other pixel are computed; by normalizing these distances and subtracting them from 1, we obtain the probability that a pixel belongs to the same region as the seed. We explain the algorithm and show preliminary results, which are encouraging. Our method has a low computational cost and gives very encouraging results in 2D. Future work will consist of a C implementation and a 3D generalization.
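The probability map described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the stated idea (normalized gray-scale and spatial distances from the seed, subtracted from 1), not the authors' implementation; the equal weighting of the two distances and the threshold value are assumptions.

```python
import numpy as np

def seed_probability_map(image, seed):
    """Probability that each pixel belongs to the seed's region,
    combining gray-scale and spatial distances from the seed
    (illustrative sketch; equal weighting is an assumption)."""
    rows, cols = np.indices(image.shape)
    # Spatial distance from the seed, normalized to [0, 1]
    d_spatial = np.hypot(rows - seed[0], cols - seed[1])
    d_spatial /= d_spatial.max()
    # Gray-scale distance from the seed intensity, normalized to [0, 1]
    d_gray = np.abs(image.astype(float) - image[seed])
    if d_gray.max() > 0:
        d_gray /= d_gray.max()
    # Subtract the combined normalized distance from 1
    # to obtain a membership probability
    return 1.0 - 0.5 * (d_spatial + d_gray)

def grow_region(image, seed, threshold=0.8):
    """Threshold the probability map to obtain the segmented ROI."""
    return seed_probability_map(image, seed) >= threshold
```

The seed pixel itself always gets probability 1, and pixels that are both far away and very different in intensity approach 0, which is what makes a simple threshold act like a region-growing criterion.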


International Conference on Image Processing | 2012

Rare: A new bottom-up saliency model

Nicolas Riche; Matei Mancas; Bernard Gosselin; Thierry Dutoit

In this paper, a new bottom-up visual saliency model is proposed. Based on the idea that locally contrasted and globally rare features are salient, this model is called "RARE" in the following sections. It uses sequential bottom-up feature extraction: low-level features such as luminance and chrominance are computed first, and from those results medium-level features such as image orientations are extracted. A qualitative and a quantitative comparison are carried out on a dataset of 120 images. The RARE algorithm predicts human fixations markedly better than most of the freely available saliency models.
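The core "globally rare features are salient" principle can be illustrated with a self-information computation over a feature histogram. This is a minimal sketch of the rarity idea only, not the full multi-feature RARE pipeline; the bin count and the log-based rarity measure are assumptions.

```python
import numpy as np

def rarity_saliency(feature_map, bins=16):
    """Assign high saliency to globally rare feature values via
    self-information of histogram bins (sketch of the rarity principle)."""
    hist, edges = np.histogram(feature_map, bins=bins)
    p = hist / hist.sum()  # probability of each feature bin
    # Self-information -log2(p): rare bins carry more information
    info = -np.log2(p, where=p > 0, out=np.zeros_like(p, dtype=float))
    # Map each pixel back to its bin's rarity score
    idx = np.clip(np.digitize(feature_map, edges[1:-1]), 0, bins - 1)
    sal = info[idx]
    return sal / sal.max() if sal.max() > 0 else sal
```

On a feature map where one value occurs once among many identical pixels, the lone outlier receives the maximum saliency, mirroring the "pop-out" behavior the abstract describes.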


International Conference on Image Processing | 2011

Abnormal motion selection in crowds using bottom-up saliency

Matei Mancas; Nicolas Riche; Julien Leroy; Bernard Gosselin

This paper deals with the selection of relevant motion from multi-object movement. The proposed method is based on a multi-scale approach that uses features extracted from optical flow and a global rarity quantification to compute bottom-up saliency maps. It shows good results on scenes ranging from four objects to dense crowds, with performance increasing with scene complexity. The results are convincing on synthetic videos, simple real video movements, and a pedestrian database, and they seem promising on very complex videos with dense crowds. The algorithm uses only motion features (direction and speed) but can easily be generalized to other dynamic or static features. Video surveillance, social signal processing and, in general, higher-level scene understanding can benefit from this method.


International Conference on Image Processing | 2013

Memorability of natural scenes: The role of attention

Matei Mancas; Olivier Le Meur

Image memorability is the capacity of an image to be recalled after a period of time. Recently, the memorability of an image database was measured and some factors responsible for it were highlighted. In this paper, we investigate the role of visual attention in image memorability along two axes. The first is experimental and uses the results of eye-tracking performed on a set of images with different memorability scores. The second axis is predictive: we show that attention-related features can advantageously replace low-level features in image memorability prediction. From our work, it appears that the role of visual attention is important and should be taken more into account, along with other low-level features.


Asian Conference on Computer Vision | 2012

Dynamic saliency models and human attention: a comparative study on videos

Nicolas Riche; Matei Mancas; Dubravko Culibrk; Vladimir S. Crnojevic; Bernard Gosselin; Thierry Dutoit

Significant progress has been made in terms of computational models of bottom-up visual attention (saliency). However, efficient ways of comparing these models for still images remain an open research question. The problem is even more challenging when dealing with videos and dynamic saliency. The paper proposes a framework for dynamic-saliency model evaluation, based on a new database of diverse videos for which eye-tracking data has been collected. In addition, we present evaluation results obtained for 4 state-of-the-art dynamic-saliency models, two of which have not been verified on eye-tracking data before.


International Conference on Acoustics, Speech, and Signal Processing | 2005

Fast and automatic tumoral area localisation using symmetry

Matei Mancas; Bernard Gosselin; Benoît Macq

Our research deals with a fully automatic and fast localization of possible tumoral areas in computed tomography (CT) images. The aim of this method is not to segment tumors but only to highlight the areas where a tumor has the greatest probability of being located. To achieve this task, we use the bilateral symmetry of the human body and the asymmetry introduced by the presence of tumors. Our work was initially dedicated to the head and neck area, but it should work well for any other body part, and even better for more symmetric areas such as the brain.
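The symmetry idea in the abstract reduces to comparing an image with its mirror about the body's midline. The sketch below assumes the midline is the vertical center of the image and uses a plain absolute difference; the paper's actual comparison and midline estimation may differ.

```python
import numpy as np

def asymmetry_map(image):
    """Highlight regions that break bilateral (left/right) symmetry.
    Illustrative sketch: compare the image with its horizontal mirror;
    large differences flag candidate tumoral areas."""
    mirrored = image[:, ::-1]  # reflect about the vertical midline
    diff = np.abs(image.astype(float) - mirrored)
    return diff / diff.max() if diff.max() > 0 else diff
```

A perfectly symmetric slice produces an all-zero map, while a one-sided lesion lights up at both its own location and the mirrored position, so a further step would be needed to decide which side the anomaly is on.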


EURASIP Journal on Image and Video Processing | 2007

Perceptual image representation

Matei Mancas; Bernard Gosselin; Benoît Macq

This paper describes a rarity-based visual attention model working on both still images and video sequences. Applications of such models are numerous; we focus on a perceptual image representation that enhances the perceptually important areas and uses a lower resolution for perceptually less important regions. Our aim is to provide an approximation of human perception by visualizing its gradual discovery of the visual environment. Comparisons with classical visual attention methods show that the proposed algorithm is well adapted to anisotropic filtering. Moreover, it has a high ability to protect perceptually important areas, such as defects or abnormalities, from a significant loss of information. High accuracy on low-contrast defects and scalable real-time video compression are some possible practical applications of the proposed image representation.


International Conference on Computer Vision Systems | 2011

3D saliency for abnormal motion selection: the role of the depth map

Nicolas Riche; Matei Mancas; Bernard Gosselin; Thierry Dutoit

This paper deals with the selection of relevant motion within a scene. The proposed method is based on 3D feature extraction and rarity quantification of those features to compute bottom-up saliency maps. We show that using 3D motion features, namely motion direction and velocity, achieves much better results than the same algorithm using only 2D information. This is especially true in close, frontal-view scenes with small groups of people or moving objects. The proposed algorithm uses motion features but can easily be generalized to other dynamic or static features. It is implemented on Max/MSP/Jitter, a platform for real-time signal analysis. Social signal processing, video games, gesture processing and, in general, higher-level scene understanding can benefit from this method.


International Conference on Image Processing | 2013

Spatio-temporal saliency based on rare model

Marc Decombas; Nicolas Riche; Frederic Dufaux; Béatrice Pesquet-Popescu; Matei Mancas; Bernard Gosselin; Thierry Dutoit

In this paper, a new spatio-temporal saliency model is presented. Based on the idea that both spatial and temporal features are needed to determine the saliency of a video, this model builds on the fact that locally contrasted and globally rare features are salient. The features used in the model are both spatial (color and orientations) and temporal (motion amplitude and direction), at several scales. To be more robust to a moving camera, a module computes the global motion; to be more consistent over time, the saliency maps are combined after temporal filtering. The model is evaluated on a dataset of 24 videos split into 5 categories (Abnormal, Surveillance, Crowds, Moving camera, and Noisy). It achieves better performance than several state-of-the-art saliency models.
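The temporal-consistency step can be illustrated with a simple recursive filter over the per-frame saliency maps. Exponential smoothing is an assumed stand-in here; the abstract does not specify which temporal filter the authors use, and the smoothing factor is arbitrary.

```python
import numpy as np

def smooth_saliency(maps, alpha=0.7):
    """Temporally filter a sequence of per-frame saliency maps for
    consistency over time (exponential smoothing as an assumed example)."""
    out = []
    prev = None
    for m in maps:
        # Blend each new map with the filtered history
        prev = m if prev is None else alpha * prev + (1 - alpha) * m
        out.append(prev)
    return out
```

With this kind of filter, a saliency blob that appears for a single frame is strongly attenuated, while regions that stay salient across frames keep a high response, which matches the stated goal of temporal consistency.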


Medical Imaging 2004: Image Processing | 2004

Towards an automatic tumor segmentation using iterative watersheds

Matei Mancas; Bernard Gosselin

This paper introduces a simple knowledge model for CT (Computed Tomography) images that provides high-level information. A novel method called iterative watersheds is then used to segment the tumors. Moreover, a fully automatic tumor segmentation method was tested using image registration. Some preliminary results are very encouraging and give us hope of obtaining a useful tool for the clinic. Tests were made on head and neck images; nevertheless, this is a generic method working on all kinds of tumors. The iterative watersheds and our model are first introduced, then the registration of PET (Positron Emission Tomography) images onto CT is described. Some results of iterative watersheds are compared using either the semi-automatic or the fully automatic mode. Finally, we conclude with a discussion of operator interaction and important future work.

Collaboration


Dive into Matei Mancas's collaborations.

Top Co-Authors

Benoît Macq

Université catholique de Louvain
