Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Frank Lebourgeois is active.

Publication


Featured research published by Frank Lebourgeois.


Pattern Recognition | 2007

Text search for medieval manuscript images

Yann Leydier; Frank Lebourgeois; Hubert Emptoz

In this article we introduce a text search algorithm designed for ancient manuscripts. Word-spotting is the best alternative to word recognition on this type of document. Our method is based on differential features that are compared using a cohesive elastic matching method driven by zones of interest, so that only the informative parts of the words are matched. This improves both the accuracy and the runtime of the word-spotting process. The proposed method is tested on medieval manuscripts in Latin and Semitic alphabets as well as on more recent manuscripts.
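The abstract does not give implementation details, but the general idea of differential (gradient-based) features compared with an elastic matching can be sketched as follows. The column-wise gradient features and the plain dynamic-time-warping matcher are illustrative assumptions, not the authors' exact cohesive elastic matching or zone-of-interest selection.

```python
import numpy as np

def column_features(word_img):
    """Differential features per image column.

    word_img: 2-D grey-level array of a word image.
    Assumption: mean horizontal and vertical gradient magnitude per column.
    """
    gy, gx = np.gradient(word_img.astype(float))
    return np.stack([np.abs(gx).mean(axis=0), np.abs(gy).mean(axis=0)], axis=1)

def elastic_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences.

    Plain DTW is used purely as a stand-in for the paper's cohesive
    elastic matching.
    """
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m] / (n + m)

def spot(query_img, candidate_imgs, top_k=5):
    """Rank candidate word images by elastic distance to the query."""
    q = column_features(query_img)
    scores = [elastic_distance(q, column_features(c)) for c in candidate_imgs]
    return np.argsort(scores)[:top_k]
```

DTW is chosen here only because it is the simplest form of elastic matching; the paper's cohesive variant additionally restricts the comparison to the informative zones of interest.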


Pattern Recognition | 2009

Towards an omnilingual word retrieval system for ancient manuscripts

Yann Leydier; Asma Ouji; Frank Lebourgeois; Hubert Emptoz

In this article, we introduce the first method that allows the indexing of ancient manuscripts in any language and alphabet. We describe a word retrieval engine inspired by recent word-spotting advances on ancient manuscripts. Our approach does not need any layout segmentation and uses features suited to any type of alphabet (Latin, Arabic, Chinese, etc.) and writing. The engine is tested on numerous documents and in several use cases.


International Conference on Document Analysis and Recognition | 2009

Document Images Restoration by a New Tensor Based Diffusion Process: Application to the Recognition of Old Printed Documents

Fadoua Drira; Frank Lebourgeois; Hubert Emptoz

A modification of the Weickert coherence-enhancing diffusion filter is proposed, to which new constraints formulated from the Perona-Malik equation are added. The new diffusion filter, driven by local tensor fields, benefits from both approaches and avoids problems known to affect them. This filter reinforces character discontinuities and eliminates the inherent problem of corner rounding while smoothing. Experiments conducted on degraded document images illustrate the effectiveness of the proposed method compared to other anisotropic diffusion approaches. A visual quality improvement is thus achieved on these images. This improvement leads to a noticeable gain in OCR accuracy, demonstrated by comparing OCR recognition rates before and after the diffusion process.
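The Perona-Malik equation from which the added constraints are formulated can be sketched in a few lines. Below is a minimal isotropic Perona-Malik diffusion step, not the authors' tensor-driven filter; the exponential diffusivity and the parameter values are illustrative assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Minimal isotropic Perona-Malik diffusion (illustrative only).

    Uses the classical diffusivity g(d) = exp(-(d / kappa)**2) on the four
    nearest-neighbour differences; borders wrap around via np.roll, which is
    acceptable for an illustration. The paper's filter instead steers
    diffusion with a local structure-tensor field.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Differences towards the north, south, east and west neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Explicit update: diffuse more across small differences (flat areas),
        # less across large ones (edges), so edges are preserved.
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```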


International Conference on Pattern Recognition | 1992

A fast and efficient method for extracting text paragraphs and graphics from unconstrained documents

Frank Lebourgeois; Z. Bublinski; H. Emptoz

Outlines a fast and efficient method for extracting graphics and text paragraphs from printed documents. The method presented is based on a bottom-up approach to document analysis and achieves very good performance in most cases. During preprocessing, characters are linked together to form blocks. The resulting blocks are segmented, labelled and merged into paragraphs. Simultaneously, graphics are extracted from the image. Algorithms for each step of processing are presented, and experimental results are included.
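A minimal sketch of the bottom-up idea, linking connected components into blocks by bounding-box proximity, might look as follows. The gap threshold and the greedy merge are illustrative assumptions, not the paper's exact linking and paragraph-merging rules.

```python
import numpy as np
from scipy import ndimage

def link_characters(binary_img, gap=10):
    """Bottom-up grouping sketch: link nearby connected components.

    binary_img: 2-D bool array, True on ink pixels.
    Assumption: components whose bounding boxes come within `gap` pixels of
    each other belong to the same block.
    """
    labels, _ = ndimage.label(binary_img)
    boxes = [(s[0].start, s[0].stop, s[1].start, s[1].stop)
             for s in ndimage.find_objects(labels)]
    # Greedy single-pass merge of overlapping / nearby boxes into blocks.
    merged = []
    for box in sorted(boxes):
        for i, m in enumerate(merged):
            if (box[0] < m[1] + gap and m[0] < box[1] + gap and
                    box[2] < m[3] + gap and m[2] < box[3] + gap):
                merged[i] = (min(m[0], box[0]), max(m[1], box[1]),
                             min(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged  # list of (row0, row1, col0, col1) block boxes
```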


International Journal on Document Analysis and Recognition | 2012

A new PDE-based approach for singularity-preserving regularization: application to degraded characters restoration

Fadoua Drira; Frank Lebourgeois; Hubert Emptoz

The massive digitization of heritage documents has opened new research prospects, such as degraded document image restoration. Degradations harm the legibility of the digitized documents and limit their processing. As a solution, we propose to tackle the problem of degraded text characters with PDE (partial differential equation)-based approaches. Existing PDE approaches do not preserve singularities and edge continuities while smoothing. Hence, we propose a new anisotropic diffusion by adding new constraints to the Weickert coherence-enhancing diffusion filter in order to control the diffusion process and to eliminate the inherent corner rounding. A qualitative improvement in singularity preservation is thus achieved. Experiments conducted on degraded document images illustrate the effectiveness of the proposed method compared with other anisotropic diffusion approaches. We illustrate the performance through a study of optical character recognition accuracy rates.
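The local tensor field that coherence-enhancing diffusion is steered by can be sketched as below: the structure tensor is smoothed at an integration scale and its eigenvalues give a coherence measure that is large along strokes. The formulation follows Weickert's standard construction; the paper's added constraints are not reproduced, and the scale parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_coherence(img, sigma=1.0, rho=3.0):
    """Smoothed structure tensor and coherence measure of an image.

    sigma: pre-smoothing scale for the gradient; rho: integration scale for
    averaging the tensor. Both values are assumptions for illustration.
    """
    u = ndimage.gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(u)
    # Tensor components, averaged over a neighbourhood of scale rho.
    j11 = ndimage.gaussian_filter(gx * gx, rho)
    j12 = ndimage.gaussian_filter(gx * gy, rho)
    j22 = ndimage.gaussian_filter(gy * gy, rho)
    # Eigenvalues of the 2x2 tensor at every pixel.
    tmp = np.sqrt((j11 - j22) ** 2 + 4.0 * j12 ** 2)
    lam1 = 0.5 * (j11 + j22 + tmp)
    lam2 = 0.5 * (j11 + j22 - tmp)
    # Coherence: large where local structure is strongly oriented (along a
    # stroke), small in flat or isotropic regions.
    coherence = (lam1 - lam2) ** 2
    return lam1, lam2, coherence
```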


Document Analysis Systems | 2006

Contribution to the discrimination of the medieval manuscript texts: application in the palaeography

Ikram Moalla; Frank Lebourgeois; Hubert Emptoz; Adel M. Alimi

This work presents our first contribution to the discrimination of medieval manuscript texts, in order to assist palaeographers in dating ancient manuscripts. Our method is based on Spatial Grey-Level Dependence (SGLD), which measures the joint probability between the grey-level values of pixel pairs for each displacement. We use Haralick features to characterise the 15 medieval text styles. The achieved discrimination results are between 50% and 81%, which is encouraging.
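A minimal sketch of the SGLD/Haralick idea, assuming a single displacement and a handful of classical features (contrast, energy, homogeneity), is given below; the exact displacements and feature set used in the paper are not specified in the abstract.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=16):
    """Grey-level co-occurrence matrix for one displacement (dx, dy >= 0).

    The image is quantised to `levels` grey levels; the normalised matrix
    estimates the joint probability of grey-level pairs at that displacement.
    """
    q = (img.astype(int) * levels // 256).clip(0, levels - 1)
    h, w = q.shape
    src = q[0:h - dy, 0:w - dx]
    dst = q[dy:h, dx:w]
    p = np.zeros((levels, levels))
    np.add.at(p, (src.ravel(), dst.ravel()), 1)
    return p / p.sum()

def haralick_features(p):
    """A few classical Haralick features computed from a normalised GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }
```

Feature vectors built this way for several displacements can then feed any standard classifier to discriminate the text styles.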


International Journal on Document Analysis and Recognition | 2015

Resolution enhancement of textual images via multiple coupled dictionaries and adaptive sparse representation selection

Rim Walha; Fadoua Drira; Frank Lebourgeois; Christophe Garcia; Adel M. Alimi

Resolution enhancement has become a valuable research topic due to the rapidly growing need for high-quality images in various applications. Various resolution enhancement approaches have been successfully applied on natural images. Nevertheless, their direct application to textual images is not efficient enough due to the specificities that distinguish these particular images from natural images. The use of insufficient resolution introduces substantial loss of details which can make a text unreadable by humans and unrecognizable by OCR systems. To address these issues, a sparse coding-based approach is proposed to enhance the resolution of a textual image. Three major contributions are presented in this paper: (1) Multiple coupled dictionaries are learned from a clustered database and selected adaptively for a better reconstruction. (2) An automatic process is developed to collect the training database, which contains writing patterns extracted from high-quality character images. (3) A new local feature descriptor well suited for writing specificities is proposed for the clustering of the training database. The performance of these propositions is evaluated qualitatively and quantitatively on various types of low-resolution textual images. Significant improvements in visual quality and character recognition rates are achieved using the proposed approach, confirmed by a detailed comparative study with state-of-the-art upscaling approaches.
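The core reconstruction step of coupled-dictionary sparse coding can be sketched as follows: the sparse code of a low-resolution patch over the low-resolution dictionary is reused with the high-resolution dictionary. Orthogonal matching pursuit and the single dictionary pair are stand-ins; the paper learns multiple coupled dictionaries and selects among them adaptively, which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def upscale_patch(lr_patch, D_lr, D_hr, n_nonzero=3):
    """Reconstruct a high-resolution patch from a low-resolution one.

    D_lr, D_hr: coupled dictionaries (columns are atoms) assumed to have been
    learned jointly on LR/HR patch pairs. lr_patch: the low-resolution patch.
    """
    # Sparse code of the LR patch over the LR dictionary (OMP used as an
    # illustrative sparse solver).
    alpha = orthogonal_mp(D_lr, lr_patch.ravel(), n_nonzero_coefs=n_nonzero)
    # The same sparse code applied to the HR dictionary yields the HR patch
    # (returned flat; the caller reshapes it to the HR patch size).
    return D_hr @ alpha
```

Overlapping reconstructed patches would then be averaged to form the upscaled textual image.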


Second International Conference on Document Image Analysis for Libraries (DIAL'06) | 2006

Image analysis for palaeography inspection

Ikram Moalla; Frank Lebourgeois; Hubert Emptoz; Adel M. Alimi

This paper presents our first contribution to the discrimination of medieval manuscript texts, in order to assist palaeographers in dating ancient manuscripts. Our method is based on spatial grey-level dependence (SGLD), which measures the joint probability between the grey-level values of pixel pairs for each displacement. We use Haralick features to characterise 15 Latin medieval text styles and then 7 Arabic styles. The achieved discrimination results are between 50% and 81% for the medieval Latin styles, and up to 100% for the Arabic ones.


International Conference on Document Analysis and Recognition | 2011

Chromatic / Achromatic Separation in Noisy Document Images

Asma Ouji; Yann Leydier; Frank Lebourgeois

This paper presents a new method to split an image into chromatic and achromatic zones. The proposed algorithm is dedicated to document images. It is robust to the color noise introduced by scanners and image compression. It is also parameter-free since it automatically adapts to the image content.
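A minimal sketch of a chromatic/achromatic split, assuming a fixed saturation threshold, is shown below. The paper's method is parameter-free and robust to colour noise, which this simplified version is not.

```python
import numpy as np

def chromatic_mask(rgb, sat_threshold=0.15):
    """Split pixels into chromatic (True) and achromatic (False) zones.

    rgb: H x W x 3 float array in [0, 1]. The fixed threshold is an
    illustrative assumption; the paper adapts the split to the image content.
    """
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    saturation = (mx - mn) / np.maximum(mx, 1e-6)  # HSV-style saturation
    return saturation > sat_threshold
```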


International Journal on Document Analysis and Recognition | 2008

Improvement of postal mail sorting system

Djamel Gaceb; Véronique Eglin; Frank Lebourgeois; Hubert Emptoz

An efficient mail sorting system relies mainly on accurate optical recognition of the addresses on the envelopes. However, address block localization (ABL) must be performed before the OCR recognition process. This localization step is crucial, as it has a great impact on the overall performance of the system: a good localization step leads to a better recognition rate. The limits of current methods are mainly caused by the modular linear architectures used for ABL and by the lack of cooperation between modules: overall performance depends heavily on that of each independent module. In this paper we present a new approach to ABL based on a pyramidal data organization and on hierarchical graph coloring for the classification process. This new approach has the advantage of guaranteeing good coherence between the different modules, and it also reduces both the computation time and the rejection rate. The proposed method achieves a very satisfactory correct localization rate of 98% on a set of 750 envelope images.
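The graph-coloring classification step can be illustrated with a plain greedy coloring over a conflict graph of blocks; the pyramidal data organization and the hierarchical coloring described in the paper are not reproduced here, and the example graph is hypothetical.

```python
def greedy_coloring(adjacency):
    """Greedy graph colouring: adjacent nodes receive different colours.

    adjacency: dict mapping each node to the set of its neighbours.
    Returns a dict node -> colour index. Only the basic colouring step is
    shown; the paper applies it hierarchically over a pyramidal structure.
    """
    colours = {}
    for node in adjacency:
        used = {colours[n] for n in adjacency[node] if n in colours}
        colour = 0
        while colour in used:
            colour += 1
        colours[node] = colour
    return colours

# Hypothetical example: blocks whose features conflict are linked; each colour
# class then groups mutually compatible blocks (candidate address-block parts).
print(greedy_coloring({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
```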

Collaboration


Dive into Frank Lebourgeois's collaborations.

Top Co-Authors

Hubert Emptoz

Institut national des sciences Appliquées de Lyon

Véronique Eglin

Institut national des sciences Appliquées de Lyon

Djamel Gaceb

Institut national des sciences Appliquées de Lyon
