Bernard Gosselin
University of Mons
Publications
Featured research published by Bernard Gosselin.
Electronic Imaging | 2005
Matei Mancas; Bernard Gosselin; Benoît Macq
Our research deals with a semi-automatic region-growing segmentation technique. This method needs only one seed inside the region of interest (ROI). We applied it to spinal cord segmentation, but it also shows results for parotid glands or even tumors. Moreover, it appears to be a general segmentation method, as it could be applied in computer vision domains other than medical imaging. We use both the simplicity of thresholding and spatial information. The gray-scale and spatial distances from the seed to all other pixels are computed; by normalizing and subtracting from 1, we obtain the probability that a pixel belongs to the same region as the seed. We explain the algorithm and show some preliminary results, which are encouraging. Our method has low computational cost and gives very encouraging results in 2D. Future work will consist of a C implementation and a 3D generalization.
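The abstract describes the core computation plainly enough to sketch. Below is a minimal NumPy illustration of the idea, assuming a 2D gray-scale image and a single (row, col) seed; the equal weighting of the two distances is an assumption of this sketch, since the abstract does not specify how they are combined.

```python
import numpy as np

def seed_probability_map(image, seed):
    """Sketch of the paper's idea: normalize the gray-level and spatial
    distances from the seed to every pixel, then subtract from 1 so that
    pixels similar and close to the seed get a high probability of
    belonging to the seed's region."""
    rows, cols = np.indices(image.shape)
    sr, sc = seed
    # Gray-scale distance from the seed value, normalized to [0, 1].
    gray_dist = np.abs(image.astype(float) - float(image[sr, sc]))
    gray_dist /= gray_dist.max() or 1.0
    # Euclidean spatial distance from the seed, normalized to [0, 1].
    spatial_dist = np.hypot(rows - sr, cols - sc)
    spatial_dist /= spatial_dist.max() or 1.0
    # Equal weighting of the two cues is an assumption of this sketch.
    return 1.0 - 0.5 * (gray_dist + spatial_dist)

# Hypothetical usage: threshold the probability map to obtain the ROI.
image = (np.random.rand(64, 64) * 255).astype(np.uint8)
roi = seed_probability_map(image, seed=(32, 32)) > 0.8
```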
Computer Vision and Image Understanding | 2007
Céline Mancas-Thillou; Bernard Gosselin
Natural scene images usually contain varying colors, which make segmentation more difficult. Without any a priori knowledge of degradations, and based on physical light reflectance, we propose a selective metric-based clustering to extract textual information from real-world images. The proposed method uses several metrics to merge similar colors together for an efficient text-driven segmentation in the RGB color space. However, color information by itself is not sufficient to solve all natural scene issues; hence we complement it with intensity and spatial information obtained using Log-Gabor filters, thus enabling the segmentation of characters into individual components to increase final recognition rates. Our selective metric-based clustering is therefore integrated into a dynamic method suitable for text extraction and character segmentation. Quantitative results on a public database are presented to assess the efficiency and complementarity of the metrics, together with the importance of a dynamic system for natural scene text extraction. Finally, running time is detailed to show the usability of our method.
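As a rough illustration of what merging similar colors under several RGB metrics might look like, here is a hedged sketch; the Euclidean and angular distances and the merge thresholds are assumptions for illustration, not the paper's exact metrics, and the Log-Gabor stage is omitted.

```python
import numpy as np

def color_distances(c1, c2):
    """Two illustrative color metrics in RGB: Euclidean distance, and
    the angle between color vectors, which is less sensitive to
    illumination changes."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    euclidean = np.linalg.norm(c1 - c2)
    cos = np.dot(c1, c2) / ((np.linalg.norm(c1) * np.linalg.norm(c2)) or 1.0)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return euclidean, angle

def merge_similar_colors(centroids, max_euclidean=40.0, max_angle=10.0):
    """Greedy merge: two clusters are fused when *either* metric calls
    them similar; both thresholds are hypothetical."""
    merged = [np.asarray(centroids[0], float)]
    for c in centroids[1:]:
        for i, m in enumerate(merged):
            e, a = color_distances(c, m)
            if e < max_euclidean or a < max_angle:
                merged[i] = (m + np.asarray(c, float)) / 2.0
                break
        else:
            merged.append(np.asarray(c, float))
    return merged
```

In a full pipeline the merged centroids would relabel the pixels before any character-level processing.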
International Conference on Image Processing | 2012
Nicolas Riche; Matei Mancas; Bernard Gosselin; Thierry Dutoit
In this paper, a new bottom-up visual saliency model is proposed. Based on the idea that locally contrasted and globally rare features are salient, this model is called “RARE” in the following sections. It uses sequential bottom-up feature extraction: first, low-level features such as luminance and chrominance are computed, and from those results medium-level features such as image orientations are extracted. A qualitative and a quantitative comparison are carried out on a 120-image dataset. The RARE algorithm predicts human fixations well compared with most freely available saliency models.
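The “globally rare is salient” part of the idea is compact enough to sketch. The following minimal single-feature illustration scores pixels by the self-information of their quantized feature value; the actual RARE model's feature hierarchy (luminance, chrominance, orientations) and its fusion across features are not reproduced here.

```python
import numpy as np

def rarity_saliency(feature_map, bins=16):
    """Quantize a feature map, estimate each bin's probability from the
    global histogram, and score each pixel by its self-information
    -log(p): rare feature values receive high saliency."""
    f = np.asarray(feature_map, dtype=float)
    f = (f - f.min()) / ((f.max() - f.min()) or 1.0)
    q = np.minimum((f * bins).astype(int), bins - 1)
    hist = np.bincount(q.ravel(), minlength=bins).astype(float)
    p = hist / hist.sum()
    saliency = -np.log(p[q] + 1e-12)  # rare bins -> high saliency
    return saliency / saliency.max()
```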
International Conference on Image Processing | 2011
Matei Mancas; Nicolas Riche; Julien Leroy; Bernard Gosselin
This paper deals with the selection of relevant motion from multi-object movement. The proposed method is based on a multi-scale approach that uses features extracted from optical flow and a global rarity quantification to compute bottom-up saliency maps. It shows good results on scenes ranging from four objects to dense crowds, with increasing performance. The results are convincing on synthetic videos, simple real video movements and a pedestrian database, and they seem promising on very complex videos with dense crowds. The algorithm uses only motion features (direction and speed) but can easily be generalized to other dynamic or static features. Video surveillance, social signal processing and, in general, higher-level scene understanding can benefit from this method.
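Here is a hedged single-scale sketch of the motion side of this idea, assuming a dense optical-flow field as input (e.g. from OpenCV's cv2.calcOpticalFlowFarneback): quantize speed and direction jointly and score each pixel by the rarity of its motion bin. The multi-scale pooling of the full method is omitted, and the bin counts are arbitrary.

```python
import numpy as np

def motion_rarity(flow, speed_bins=8, dir_bins=8):
    """`flow` is an (H, W, 2) array of (dx, dy) motion vectors. Pixels
    whose joint (speed, direction) bin is globally rare get a high
    saliency score."""
    dx, dy = flow[..., 0], flow[..., 1]
    speed = np.hypot(dx, dy)
    direction = np.arctan2(dy, dx)  # in [-pi, pi]
    s = np.minimum((speed / (speed.max() or 1.0) * speed_bins).astype(int),
                   speed_bins - 1)
    d = np.minimum(((direction + np.pi) / (2 * np.pi) * dir_bins).astype(int),
                   dir_bins - 1)
    joint = s * dir_bins + d
    hist = np.bincount(joint.ravel(), minlength=speed_bins * dir_bins)
    p = hist.astype(float) / hist.sum()
    return -np.log(p[joint] + 1e-12)
```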
Archive | 2007
Céline Mancas-Thillou; Bernard Gosselin
In a society driven by visual information and with the drastic expansion of low-priced cameras, vision techniques are receiving more and more attention, and text recognition is nowadays a fast-changing field that belongs to a larger spectrum called text understanding. Previously, text recognition dealt only with documents acquired with flatbed, sheet-fed or mounted imaging devices. More recently, handheld scanners such as pen-scanners appeared, used to acquire small parts of text on a fairly planar surface such as that of a business card. With these acquisition methods, the issues affecting image processing are limited to sensor noise, skewed documents and degradations inherent to the document itself. Based on this classical acquisition model, optical character recognition (OCR) systems have been designed for many years to reach a high level of recognition on constrained documents, meaning those with a traditional layout, relatively clean backgrounds (regular letters, forms, faxes, checks and so on) and sufficient resolution (at least 300 dots per inch (dpi)). With the recent explosion of handheld imaging devices (HIDs), i.e. digital cameras, standalone or embedded in cellular phones or personal digital assistants (PDAs), research on document image analysis has entered a new era where breakthroughs are required: traditional document analysis systems fail on this new and promising acquisition mode, and the main differences and reasons for failure will be detailed in this section. Small, light and handy, these devices remove all acquisition constraints, and all kinds of objects, such as natural scenes (NS) in various situations, in streets, at home or in planes, may now be acquired! Moreover, recent studies [Kim, 2005] announced a decline in scanner sales while projecting that sales of HIDs will keep increasing over the next 10 years.
EURASIP Journal on Advances in Signal Processing | 2005
Céline Thillou; Silvio Ferreira; Bernard Gosselin
This paper describes a mobile device that aims to give blind or visually impaired users access to text information. Three key technologies are required for this system: text detection, optical character recognition, and speech synthesis. Blind users and the mobile environment impose two strong constraints. First, pictures will be taken without control over camera settings and without a priori information on the text (font or size) or background. The second issue is linking several techniques together with an optimal compromise between computational constraints and recognition efficiency. We present an overall description of the system, from text detection to OCR error correction.
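For orientation only, here is a minimal stand-in for the three-stage pipeline built from off-the-shelf libraries; the original system used its own text detection, OCR and synthesis components, not these.

```python
# Illustrative modern equivalents of the three components the paper
# names; NOT the paper's implementation.
from PIL import Image
import pytesseract   # OCR engine (Tesseract wrapper)
import pyttsx3       # offline text-to-speech

def read_aloud(image_path):
    # Stages 1+2: detect and recognize text (Tesseract does both here).
    text = pytesseract.image_to_string(Image.open(image_path))
    # Stage 3: speak the recognized text for a visually impaired user.
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```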
International Conference on Image Processing | 2006
Céline Mancas-Thillou; Bernard Gosselin
Natural scene images have brought new challenges in recent years, and one of them is text understanding in images and videos. Text extraction, which consists of segmenting the textual foreground from the background, usually succeeds using color information. Faced with the large diversity of text in daily life and the artistic ways it is displayed, we are convinced that this information alone is no longer enough, and we present a color segmentation algorithm that also uses spatial information. Moreover, a new method is proposed in this paper to handle uneven lighting, blur and complex backgrounds, which are degradations inherent to natural scene images. To merge text pixels together, complementary clustering distances are used to support both clear, well-contrasted images and complex, degraded images. Finally, tests on a public database show the efficiency of the whole proposed method.
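To illustrate what complementary clustering distances selected by image condition could look like, here is a hypothetical selector; the contrast measure, its threshold and the two distances are assumptions for illustration, not the paper's actual criteria.

```python
import numpy as np

def pick_clustering_distance(gray):
    """Well-contrasted images can rely on a plain Euclidean color
    distance; degraded, low-contrast images switch to a metric less
    sensitive to illumination (here, the angle between color vectors).
    The 0.25 contrast threshold is an arbitrary assumption."""
    contrast = gray.std() / 255.0
    if contrast > 0.25:
        return lambda a, b: np.linalg.norm(
            np.asarray(a, float) - np.asarray(b, float))
    def angular(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        cos = np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) or 1.0)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angular
```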
Color Imaging Conference | 2005
Céline Thillou; Bernard Gosselin
This paper describes a new automatic color thresholding method, based on wavelet denoising and color clustering with K-means, to segment text information in a camera-based image. Several parameters bring different information, and this paper explains how to exploit their complementarity. The method is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On the one hand, this separation makes it possible to apply a dedicated algorithm to each case; on the other hand, it decreases computation time for clean cases, for which a faster method can be used. Finally, several experiments are discussed, concluding that discriminating between kinds of backgrounds gives better results in terms of precision and recall.
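A minimal sketch of the color-clustering stage, assuming scikit-learn's KMeans; the wavelet-denoising front end and the clean/complex background discrimination are omitted, and picking the darkest centroid as the text layer is a heuristic assumption for dark-on-light text.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_text_threshold(image_rgb, k=3):
    """Cluster pixel colors with K-means and return a binary text mask.
    `image_rgb` is an (H, W, 3) array; k=3 is an arbitrary choice."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    # Take the darkest centroid as the text layer (assumption).
    text_cluster = np.argmin(km.cluster_centers_.sum(axis=1))
    return (km.labels_ == text_cluster).reshape(image_rgb.shape[:2])
```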
International Conference on Pattern Recognition | 2006
Céline Mancas-Thillou; Bernard Gosselin
Natural scene images, usually coming from low-resolution sensors in embedded contexts, suffer from low text recognition rates. Due to several types of degradation, existing algorithms are not robust enough. To improve recognition, we present in this paper a character segmentation method using log-Gabor filters, taking advantage of both gray-level variation and spatial location. The recognition step is used to determine some of the filter parameters dynamically. Finally, several quantified results are presented to highlight the efficiency of this method against several issues.
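As background, a log-Gabor filter is typically built directly in the frequency domain. The sketch below constructs only the radial component, with hypothetical parameters; the oriented filter bank and the dynamic, recognition-driven parameter selection described in the paper are not reproduced.

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function on an FFT frequency grid:
    G(f) = exp(-(log(f/f0))^2 / (2 * log(sigma_ratio)^2)).
    f0 and sigma_ratio are illustrative values."""
    rows, cols = shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[0, 0] = 1.0  # avoid log(0) at DC
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0       # log-Gabor filters have no DC response
    return g

def filter_image(gray, **kwargs):
    """Apply the filter in the Fourier domain; return the magnitude."""
    G = log_gabor_radial(gray.shape, **kwargs)
    return np.abs(np.fft.ifft2(np.fft.fft2(gray) * G))
```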
Asian Conference on Computer Vision | 2012
Nicolas Riche; Matei Mancas; Dubravko Culibrk; Vladimir S. Crnojevic; Bernard Gosselin; Thierry Dutoit
Significant progress has been made in terms of computational models of bottom-up visual attention (saliency). However, efficient ways of comparing these models for still images remain an open research question, and the problem is even more challenging when dealing with videos and dynamic saliency. The paper proposes a framework for dynamic-saliency model evaluation, based on a new database of diverse videos for which eye-tracking data has been collected. In addition, we present evaluation results obtained for four state-of-the-art dynamic-saliency models, two of which had not previously been validated against eye-tracking data.
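One standard way to score a saliency map against eye-tracking data is Normalized Scanpath Saliency (NSS); whether this paper's protocol uses exactly this metric is not stated here, so treat the sketch as an illustrative choice.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """NSS: z-score the saliency map, then average its values at the
    fixated pixels. `fixations` is an iterable of (row, col) gaze
    positions; higher scores mean better fixation prediction."""
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / (s.std() or 1.0)
    return float(np.mean([s[r, c] for r, c in fixations]))
```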