Publication


Featured research published by Franck Lebourgeois.


International Conference on Pattern Recognition | 2002

Bayesian networks classifiers applied to documents

Souad Souafi-Bensafi; Marc Parizeau; Franck Lebourgeois; Hubert Emptoz

This paper discusses the use of the Bayesian network model for a classification problem in the document image understanding field. Our application focuses on logical labeling in documents, which consists of assigning logical labels to text blocks. The objective is to map a set of logical tags, composing the document's logical structure, onto the physical text components. We build a Bayesian network model that performs this mapping using supervised learning, without imposing a priori constraints on the document structure. The learning strategy is based partly on genetic programming tools. A prototype has been implemented and tested on tables of contents found in periodicals and magazines.
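The simplest Bayesian network classifier is naive Bayes, which treats each block feature as conditionally independent given the label. The sketch below is purely illustrative: the features (font-size bin, indent bin, bold flag) and the labels are hypothetical stand-ins, and the paper's actual network structure is learned with genetic programming rather than fixed.

```python
import numpy as np

# Hypothetical discretized features per text block: (font_size_bin, indent_bin, bold)
# with made-up labels 0 = title, 1 = author, 2 = page-number.
X = np.array([[2, 0, 1], [2, 0, 1], [0, 1, 0], [0, 1, 0], [1, 2, 0], [1, 2, 0]])
y = np.array([0, 0, 1, 1, 2, 2])

def fit_naive_bayes(X, y, n_vals=3, alpha=1.0):
    """Per-class feature-value probabilities with Laplace smoothing."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    # cond[c, f, v] = P(feature f takes value v | class c)
    cond = np.zeros((len(classes), X.shape[1], n_vals))
    for ci, c in enumerate(classes):
        Xc = X[y == c]
        for f in range(X.shape[1]):
            for v in range(n_vals):
                cond[ci, f, v] = ((Xc[:, f] == v).sum() + alpha) / (len(Xc) + alpha * n_vals)
    return classes, priors, cond

def predict(x, classes, priors, cond):
    """Pick the label maximizing log P(class) + sum of log P(feature | class)."""
    logp = np.log(priors).copy()
    for ci in range(len(classes)):
        for f, v in enumerate(x):
            logp[ci] += np.log(cond[ci, f, v])
    return classes[np.argmax(logp)]

classes, priors, cond = fit_naive_bayes(X, y)
pred = predict(np.array([2, 0, 1]), classes, priors, cond)
```

A block with a large font, no indent and bold face is assigned the "title" label here, matching the training examples with those features.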


International Conference on Document Analysis and Recognition | 2013

Multiple Learned Dictionaries Based Clustered Sparse Coding for the Super-Resolution of Single Text Image

Rim Walha; Fadoua Drira; Franck Lebourgeois; Christophe Garcia; Adel M. Alimi

This paper addresses the problem of generating a super-resolved version of a low-resolution textual image by using Sparse Coding (SC) which suggests that image patches can be sparsely represented from a suitable dictionary. In order to enhance the learning performance and improve the reconstruction ability, we propose in this paper a multiple learned dictionaries based clustered SC approach for single text image super resolution. For instance, a large High-Resolution/Low-Resolution (HR/LR) patch pair database is collected from a set of high quality character images and then partitioned into several clusters by performing an intelligent clustering algorithm. Two coupled HR/LR dictionaries are learned from each cluster. Based on SC principle, local patch of a LR image is represented from each LR dictionary generating multiple sparse representations of the same patch. The representation that minimizes the reconstruction error is retained and applied to generate a local HR patch from the corresponding HR dictionary. The performance of the proposed approach is evaluated and compared visually and quantitatively to other existing methods applied to text images. In addition, experimental results on character recognition illustrate that the proposed method outperforms the other methods, involved in this study, by providing better recognition rates.
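The core selection step (code a patch against every per-cluster dictionary, keep the representation with the smallest reconstruction error) can be sketched as follows. This is a minimal illustration with random dictionaries and greedy matching pursuit as the sparse coder; the paper's dictionaries are learned from real character patches.

```python
import numpy as np

rng = np.random.default_rng(0)

def matching_pursuit(x, D, n_atoms=3):
    """Greedy sparse coding: repeatedly pick the atom most correlated with the residual."""
    r, code = x.astype(float).copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r
        k = np.argmax(np.abs(corr))
        code[k] += corr[k]
        r -= corr[k] * D[:, k]          # atoms are unit-norm
    return code, r

def unit_cols(A):
    return A / np.linalg.norm(A, axis=0)

# Two hypothetical per-cluster LR dictionaries with unit-norm atoms.
D1 = unit_cols(rng.standard_normal((16, 32)))
D2 = unit_cols(rng.standard_normal((16, 32)))

# A patch that is exactly 2-sparse in D1 should reconstruct better from D1.
patch = D1[:, 0] * 2.0 + D1[:, 5] * 1.0

codes, residuals = zip(*(matching_pursuit(patch, D) for D in (D1, D2)))
best = int(np.argmin([np.linalg.norm(r) for r in residuals]))
```

The index `best` selects the dictionary whose sparse code is then reused against the coupled HR dictionary to synthesize the HR patch.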


International Conference on Document Analysis and Recognition | 2001

Logical labeling using Bayesian networks

Souad Souafi-Bensafi; Marc Parizeau; Franck Lebourgeois; Hubert Emptoz

This paper discusses logical labeling in documents, a basic step in logical structure recognition. Logical labels have to be attributed to the text blocks composing the layout structure. Our study is based on physical characteristics with a visual aspect: typographic, geometric and/or topological attributes. Our objective is to map a low-level logical structure, consisting of a set of logical labels, onto the extracted layout structure components. We have to build a model that allows this mapping. However, since the documents we consider have various layout and logical structures, we chose to perform this task by supervised learning on a set of training documents. This allows us to define a generic method to solve the problem without imposing any constraint on document structure. We propose a probabilistic model represented by a Bayesian Network (BN), a graphical model used here as a classifier. A prototype has been implemented and applied to tables of contents in periodicals.


International Conference on Pattern Recognition | 2014

Sparse Coding with a Coupled Dictionary Learning Approach for Textual Image Super-resolution

Rim Walha; Fadoua Drira; Franck Lebourgeois; Christophe Garcia; Adel M. Alimi

Sparse coding is widely known as a methodology in which an input signal can be sparsely represented from a suitable dictionary. It has been successfully applied to a wide range of applications, such as textual image Super-Resolution. Nevertheless, its computational complexity severely limits its application. Seeking reduced computational complexity, we propose a coupled dictionary learning approach that generates dual dictionaries representing coupled feature spaces. Under this approach, we optimize the training of a first dictionary for the high-resolution image space; a second dictionary for the low-resolution image space is then simply deduced from it. In contrast to classical dictionary learning approaches, the proposed approach allows a noticeable speedup and a major simplification of the coupled dictionary learning phase, both in terms of algorithm architecture and computational complexity. Furthermore, the resolution enhancement results achieved by applying the proposed approach to poorly resolved textual images lead to image quality improvements.
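The "deduce the LR dictionary from the HR one" idea can be illustrated in closed form: once the HR data is coded over the HR dictionary, the LR dictionary is the least-squares solution of X_lr ≈ D_lr A over the same codes. Everything below is a stand-in (random dictionary, least squares instead of a true sparse coder, a random linear operator P for blur+downsampling); only the deduction step mirrors the paper's idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coupled training data: HR patches and LR counterparts obtained
# by a fixed linear degradation operator P (stand-in for blur + downsample).
n, d_hr, d_lr, k = 200, 36, 9, 40
P = rng.standard_normal((d_lr, d_hr)) / d_hr
X_hr = rng.standard_normal((d_hr, n))
X_lr = P @ X_hr

# Step 1 (stand-in): an HR dictionary; the paper trains this one, here it is
# random with unit-norm atoms. k > d_hr so the codes below are exact.
D_hr = rng.standard_normal((d_hr, k))
D_hr /= np.linalg.norm(D_hr, axis=0)

# Codes of the HR data over D_hr (minimum-norm least squares as a simplified
# stand-in for a true sparse coder).
A = np.linalg.lstsq(D_hr, X_hr, rcond=None)[0]

# Step 2: deduce the LR dictionary so that X_lr ≈ D_lr @ A, in closed form.
D_lr = np.linalg.lstsq(A.T, X_lr.T, rcond=None)[0].T

recon_err = np.linalg.norm(X_lr - D_lr @ A) / np.linalg.norm(X_lr)
```

Because the LR data is a linear image of the HR data here, the deduced `D_lr` reproduces it exactly over the shared codes; no second dictionary-learning run is needed, which is the source of the speedup the abstract describes.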


International Conference on Frontiers in Handwriting Recognition | 2012

Denoising Textual Images Using Local/Non-local Smoothing Filters: A Comparative Study

Fadoua Drira; Franck Lebourgeois

Textual document image denoising is the focus of this work. We present a comparative study of two state-of-the-art denoising frameworks: local and non-local smoothing filters. Both frameworks were chosen for their ability to deal with local data corruption and to process oriented patterns, a major characteristic of textual documents. The local smoothing filters are anisotropic diffusion approaches, whereas the non-local filters are based on non-local means. Experiments conducted on synthetic and real degraded document images illustrate the behaviour of the studied frameworks in terms of both visual quality and optical character recognition accuracy.
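As a concrete instance of the local family, one explicit step of Perona-Malik anisotropic diffusion can be sketched in a few lines: flat regions (small gradients) get high conductivity and are smoothed, while strokes (large gradients) get low conductivity and are preserved. The image and parameters below are illustrative, not from the paper.

```python
import numpy as np

def perona_malik_step(u, dt=0.2, kappa=0.1):
    """One explicit step of u_t = div(g(|grad u|) grad u), g(s) = 1/(1+(s/kappa)^2)."""
    # Differences to the four neighbours (periodic borders via np.roll).
    dN = np.roll(u, 1, axis=0) - u
    dS = np.roll(u, -1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)

# A noisy step edge: the jump across the edge gets a tiny conductivity,
# so smoothing acts mostly in the flat regions.
rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)

den = noisy.copy()
for _ in range(20):
    den = perona_malik_step(den)

noise_before = np.abs(noisy - img).mean()
noise_after = np.abs(den - img).mean()
```

Non-local means, the other framework in the study, instead averages pixels whose surrounding patches look alike anywhere in the image rather than only adjacent pixels.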


International Conference on Image Analysis and Processing | 2013

Single Textual Image Super-Resolution Using Multiple Learned Dictionaries Based Sparse Coding

Rim Walha; Fadoua Drira; Franck Lebourgeois; Christophe Garcia; Adel M. Alimi

In this paper, we propose a new approach based on sparse coding for single textual image Super-Resolution (SR). The proposed approach builds more representative dictionaries learned from a large training Low-Resolution/High-Resolution (LR/HR) patch pair database. An intelligent clustering is employed to partition this database into several clusters, from which multiple coupled LR/HR dictionaries are constructed. Based on the assumption that patches of the same cluster live in the same subspace, we exploit, for each local LR patch, its similarity to the clusters in order to adaptively select the learned dictionary over which that patch can be well represented sparsely. The obtained sparse representation is then used to generate a local HR patch from the corresponding HR dictionary. Experiments on textual images show that the proposed approach outperforms its counterparts in visual fidelity as well as in numerical measures.
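Unlike coding a patch against every dictionary, the adaptive selection here happens before coding: the patch is assigned to the cluster whose centroid it is closest to, and only that cluster's dictionary is used. A minimal sketch with two synthetic clusters (the actual clustering and features are the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical patch clusters (e.g. stroke-edge patches vs background patches).
c0, c1 = np.full(16, -1.0), np.full(16, 1.0)
train = np.vstack([c0 + 0.1 * rng.standard_normal((50, 16)),
                   c1 + 0.1 * rng.standard_normal((50, 16))])

# Per-cluster centroids (the paper obtains the clusters by learned clustering).
centroids = np.vstack([train[:50].mean(0), train[50:].mean(0)])

def select_dictionary(patch, centroids):
    """Index of the per-cluster dictionary whose centroid is nearest to the patch."""
    d = np.linalg.norm(centroids - patch, axis=1)
    return int(np.argmin(d))

idx = select_dictionary(c1 + 0.1 * rng.standard_normal(16), centroids)
```

Selecting a single dictionary per patch keeps the coding cost independent of the number of clusters.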


Document Recognition and Retrieval | 2015

Gaussian Process Style Transfer Mapping for Historical Chinese Character Recognition

Jixiong Feng; Liangrui Peng; Franck Lebourgeois

Historical Chinese character recognition is very important to large-scale historical document digitization, but is a very challenging problem due to the lack of labeled training samples. This paper proposes a novel non-linear transfer learning method, Gaussian Process Style Transfer Mapping (GP-STM). GP-STM extends traditional linear Style Transfer Mapping (STM) with Gaussian processes and kernel methods. With GP-STM, existing printed Chinese character samples are used to help the recognition of historical Chinese characters. To demonstrate this framework, we compare feature extraction methods, train a modified quadratic discriminant function (MQDF) classifier on printed Chinese character samples, and apply the GP-STM model to Dunhuang historical documents. Various kernels and parameters are explored, and the impact of the number of training samples is evaluated. Experimental results show that accuracy increases by nearly 15 percentage points (from 42.8% to 57.5%) using GP-STM, an improvement of more than 8 percentage points (from 49.2% to 57.5%) over the STM approach.
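The linear STM baseline that GP-STM generalizes fits an affine map y ≈ A x + b from target-style features (historical characters) to source-style features (printed characters), so the printed-data classifier can be reused. A minimal least-squares sketch on synthetic features (all data and dimensions below are made up; GP-STM replaces the affine map with a Gaussian-process regressor):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic paired features: x_i in the historical style, y_i the matching
# printed-style features, related by an unknown affine map plus small noise.
d, n = 8, 100
A_true = np.eye(d) + 0.3 * rng.standard_normal((d, d))
b_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
Y = X @ A_true.T + b_true + 0.01 * rng.standard_normal((n, d))

# Linear STM: least-squares fit of the affine map via an augmented design matrix.
Xa = np.hstack([X, np.ones((n, 1))])          # bias column
W = np.linalg.lstsq(Xa, Y, rcond=None)[0]     # (d+1) x d
A_hat, b_hat = W[:d].T, W[d]

fit_err = np.linalg.norm(Xa @ W - Y) / np.linalg.norm(Y)
```

Mapped historical features are then fed to the classifier trained on printed samples; the GP extension lets this mapping be non-linear through kernels.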


International Conference on Pattern Recognition | 1998

A pretopology-based supervised pattern classifier

Carl Frélicot; Franck Lebourgeois

This paper deals with the pretopological approach to the supervised pattern classification problem. An improved learning process is proposed, which results in a reduced number of neighborhoods to be learned. We also present extensions concerning the use of different metrics in the ∈-neighborhood definition in order to improve the class boundaries. Performance in terms of storage requirements, computation time and misclassification is reported and compared with the 1-nearest-neighbor rule.
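A crude picture of neighborhood-based classification versus the 1-NN baseline: vote among training points inside a ball around the query, falling back to the nearest neighbor when the ball is empty. This is only a stand-in for the paper's pretopological neighborhoods; the data, radius, and fallback rule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two well-separated synthetic 2-D classes.
X0 = rng.standard_normal((30, 2)) + [-2, 0]
X1 = rng.standard_normal((30, 2)) + [2, 0]
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

def eps_neighborhood_classify(x, X, y, eps=1.5):
    """Majority vote inside the ball of radius eps around x; 1-NN fallback."""
    d = np.linalg.norm(X - x, axis=1)
    inside = d <= eps
    if inside.any():
        return int(np.bincount(y[inside]).argmax())
    return int(y[np.argmin(d)])

pred = eps_neighborhood_classify(np.array([2.2, 0.1]), X, y)
```

The storage/time trade-off the abstract measures comes from learning a reduced set of such neighborhoods instead of keeping every training point, as 1-NN does.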


International Conference on Document Analysis and Recognition | 2015

Joint denoising and magnification of noisy Low-Resolution textual images

Rim Walha; Fadoua Drira; Franck Lebourgeois; Christophe Garcia; Adel M. Alimi

Work on textual image magnification has so far focused on noise-free low-resolution images. Real circumstances are far from these assumptions, however, and existing systems are generally confronted with noisy images, which limits the efficiency of the magnification process. The scope of this study is to propose a joint denoising and magnification system based on sparse coding to tackle this problem. The underlying idea is to represent an image patch by a linear combination of a few elements from a suitable dictionary. The proposed system uses both online and offline learned dictionaries, selected adaptively for each image patch of the input Low-Resolution (LR) noisy image to generate its corresponding noise-free High-Resolution (HR) version. The online learned dictionaries are trained on a clustered dataset of image patches selected from the input image itself and are used for denoising, in order to benefit from the non-local self-similarity of textual images. The offline learned dictionaries are trained on an external LR/HR image patch pair dataset and are employed for magnification. The performance of the proposed system is evaluated visually and quantitatively on different LR noisy textual images, and promising results are achieved in comparison with existing systems and conventional approaches to such images.
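The non-local self-similarity assumption that feeds the online dictionaries can be shown on a toy example: in a repetitive "textual" pattern, many patches are near-duplicates, so grouping and averaging them already denoises a patch before any dictionary is trained. The image and thresholds below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def extract_patches(img, size=5, step=2):
    """All overlapping size x size patches as flat rows."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, step)
                     for j in range(0, w - size + 1, step)])

# Synthetic repetitive pattern (vertical strokes every 4 pixels) plus noise.
img = np.zeros((32, 32))
img[:, ::4] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)

patches = extract_patches(noisy)

# Group patches similar to a reference patch; their mean is a denoised estimate,
# and such groups seed the image-specific (online) dictionaries.
ref = patches[0]
d = np.linalg.norm(patches - ref, axis=1)
group = patches[d < 1.0]
denoised_ref = group.mean(0)

clean_ref = extract_patches(img)[0]
err_noisy = np.linalg.norm(ref - clean_ref)
err_denoised = np.linalg.norm(denoised_ref - clean_ref)
```

Because character strokes repeat across a page, such groups are large in real textual images, which is why training dictionaries online on the input itself pays off for denoising.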


Proceedings of the 3rd International Workshop on Historical Document Imaging and Processing | 2015

Layout Analysis Algorithm Based on Probabilistic Graphical Model for Dunhuang Historical Documents

Boqiang Fan; Liangrui Peng; Franck Lebourgeois

The Dunhuang historical documents are of great significance to the study of ancient Chinese Buddhist culture and other topics. Full-text information generated by historical document recognition technology would greatly benefit the protection and study of these documents. However, many historical documents from Dunhuang are old and damaged and, to make recognition more challenging, their style and layout are casual as well. Traditional layout analysis algorithms pay little attention to these problems. In this paper, a new layout analysis algorithm based on a Probabilistic Graphical Model is proposed, comprising both rough and fine segmentation. After the input historical document images are pre-processed by Gaussian smoothing and binarization, the rough segmentation step uses projection information to obtain rough text-column regions. In the fine segmentation step, a connected-component analysis algorithm based on a Probabilistic Graphical Model is developed: the extracted connected components are modeled with a Markov Random Field and combined to produce the output text columns. Experiments were conducted on a set of Dunhuang historical documents; the proposed method correctly segmented text columns with a recall of 90.0% and an accuracy of 77.7%, and the segmented text-column regions covered 99.2% of the characters in the historical document images. The results show that the proposed layout analysis algorithm can be successfully applied to degraded historical document images.
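The rough-segmentation step is easy to sketch: sum the binarized ink along each pixel column (the vertical projection profile) and take maximal non-blank runs, tolerating small gaps, as candidate text columns. This is a simplified stand-in for the paper's rough step, with a synthetic page; the MRF-based fine segmentation is not reproduced here.

```python
import numpy as np

def rough_columns(binary, gap=2):
    """Candidate text columns as (start_x, end_x) runs of the vertical
    projection profile, allowing up to `gap` blank pixel columns inside a run."""
    profile = binary.sum(axis=0)
    cols, start, blanks = [], None, 0
    for x, v in enumerate(profile):
        if v > 0:
            if start is None:
                start = x
            blanks = 0
        elif start is not None:
            blanks += 1
            if blanks > gap:            # gap too wide: close the current run
                cols.append((start, x - blanks))
                start, blanks = None, 0
    if start is not None:               # close a run touching the right edge
        cols.append((start, len(profile) - 1 - blanks))
    return cols

# Synthetic page: two ink columns separated by wide whitespace.
page = np.zeros((40, 60), dtype=int)
page[5:35, 5:20] = 1
page[5:35, 35:55] = 1

cols = rough_columns(page)
```

On real degraded pages these runs are only approximate, which is why the fine step then re-assigns connected components with the probabilistic model.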

Collaboration


Dive into Franck Lebourgeois's collaboration.

Top Co-Authors

Hubert Emptoz

Institut national des sciences appliquées

Souad Souafi-Bensafi

Institut national des sciences appliquées de Lyon
