
Publication


Featured research published by Joseph Chazalon.


International Conference on Document Analysis and Recognition | 2015

ICDAR2015 competition on smartphone document capture and OCR (SmartDoc)

Jean-Christophe Burie; Joseph Chazalon; Mickaël Coustaty; Sébastien Eskenazi; Muhammad Muzzamil Luqman; Maroua Mehri; Nibal Nayef; Jean-Marc Ogier; Sophea Prum; Marçal Rusiñol

Smartphones are enabling new ways of document capture, which raises the need for seamless and reliable acquisition and digitization of documents in order to convert them into editable, searchable, and more human-readable formats. Current state-of-the-art work lacks databases and baseline benchmarks for digitizing mobile-captured documents. We have organized a competition for mobile document capture and OCR in order to address this issue. The competition is structured into two independent challenges: smartphone document capture, and smartphone OCR. This report describes the datasets for both challenges along with their ground truth, details the performance evaluation protocols we used, and presents the final results of the participating methods. In total, we received 13 submissions: 8 for challenge 1 and 5 for challenge 2.
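The evaluation protocols are not detailed in this abstract. For the document capture challenge, segmentation quality of this kind is typically scored with the Jaccard index (intersection over union) between the detected and ground-truth page quadrilaterals; the sketch below illustrates that measure with the shapely library, using hypothetical corner coordinates:

```python
from shapely.geometry import Polygon

def jaccard_index(quad_detected, quad_truth):
    """Intersection over union of two page quadrilaterals (corner lists)."""
    p, q = Polygon(quad_detected), Polygon(quad_truth)
    union = p.union(q).area
    return p.intersection(q).area / union if union > 0 else 0.0

# Hypothetical corners: top-left, top-right, bottom-right, bottom-left.
detected = [(12, 8), (595, 14), (600, 830), (5, 825)]
truth = [(10, 10), (600, 10), (600, 830), (10, 830)]
print(f"Jaccard index: {jaccard_index(detected, truth):.3f}")
```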


Document Analysis Systems | 2014

Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images

Marçal Rusiñol; Joseph Chazalon; Jean-Marc Ogier

Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such a digitization procedure is unreliable and introduces many distortions which must be detected as soon as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document that is only temporarily available cannot be re-captured later. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never before tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures achieves state-of-the-art performance in terms of correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given.
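The 24 measures themselves are specified in the paper, not here; as a hedged illustration of the general approach, the sketch below combines two classic training-free focus measures computed with OpenCV. The file name and combination weights are assumptions, and a real deployment would fit the weights against measured OCR accuracy:

```python
import cv2
import numpy as np

def laplacian_variance(gray):
    """Classic focus measure: variance of the Laplacian response."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad(gray):
    """Gradient-magnitude focus measure built on Sobel operators."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def combined_focus_score(gray, weights=(0.5, 0.5)):
    """Naive linear combination; the paper combines many more measures."""
    return float(np.dot(weights, [laplacian_variance(gray), tenengrad(gray)]))

gray = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical path
print(combined_focus_score(gray))
```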


International Conference on Document Analysis and Recognition | 2015

SmartDoc-QA: A dataset for quality assessment of smartphone captured document images - single and multiple distortions

Nibal Nayef; Muhammad Muzzamil Luqman; Sophea Prum; Sébastien Eskenazi; Joseph Chazalon; Jean-Marc Ogier

Smartphones are enabling new ways of document capture, which raises the need for seamless and reliable acquisition and digitization of documents. The quality assessment step is an important part of both the acquisition and the digitization processes. Assessing document quality could aid users during the capture process or help improve image enhancement methods after a document has been captured. Current state-of-the-art work lacks databases in the field of document image quality assessment. In order to provide a baseline benchmark for quality assessment methods for mobile-captured documents, we present in this paper a dataset for quality assessment that contains both singly- and multiply-distorted document images. The proposed dataset can be used for benchmarking quality assessment methods against the objective measure of OCR accuracy, and can also be used to benchmark quality enhancement methods. There are three types of documents in the dataset: modern documents, old administrative letters, and receipts. The document images of the dataset are captured under varying capture conditions (light, different types of blur, and perspective angles). This causes geometric and photometric distortions that hinder the OCR process. The ground truth of the dataset images consists of the text transcriptions of the documents, the OCR results of the captured documents, and the values of the different capture parameters used for each image. We also present how the dataset can be used for evaluation in the field of no-reference quality assessment. The dataset is freely and publicly available to the research community at http://navidomass.univ-lr.fr/SmartDoc-QA.
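The abstract names OCR accuracy as the objective measure without defining it; one common definition (an assumption here, not necessarily the exact metric of the paper) is character accuracy derived from the edit distance between the OCR output and the ground-truth transcription:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def ocr_accuracy(ocr_output: str, transcription: str) -> float:
    """Character accuracy: 1 minus the normalized edit distance."""
    if not transcription:
        return 0.0
    return max(0.0, 1.0 - levenshtein(ocr_output, transcription) / len(transcription))

print(ocr_accuracy("Smartph0ne captuer", "Smartphone capture"))  # hypothetical strings
```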


International Conference on Document Analysis and Recognition | 2015

A semi-automatic groundtruthing tool for mobile-captured document segmentation

Joseph Chazalon; Marçal Rusiñol; Jean-Marc Ogier; Josep Lladós

This paper presents a novel way to generate ground-truth data for the evaluation of mobile document capture systems, focusing on the first stage of the image processing pipeline involved: document object detection and segmentation in low-quality preview frames. We introduce and describe a simple, robust, and fast technique based on color markers which enables semi-automated annotation of page corners. We also detail a technique for marker removal. The methods and tools presented in the paper were successfully used to annotate, in a few hours, 24,889 frames in 150 video files for the SmartDoc competition at ICDAR 2015.
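The abstract does not spell out how the color markers are located; a minimal sketch of the general idea, detecting colored corner markers by HSV thresholding with OpenCV (the file name and the HSV range are assumptions, and OpenCV 4 return conventions are used):

```python
import cv2
import numpy as np

def detect_corner_markers(frame_bgr, hsv_lo, hsv_hi):
    """Locate colored corner markers by HSV thresholding and return the
    centroid of each detected blob (ideally one per page corner)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

frame = cv2.imread("preview_frame.png")  # hypothetical file
corners = detect_corner_markers(frame, (40, 80, 80), (80, 255, 255))  # assumed green range
```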


Document Analysis Systems | 2014

Efficient Example-Based Super-Resolution of Single Text Images Based on Selective Patch Processing

Nibal Nayef; Joseph Chazalon; Petra Gomez-Krämer; Jean-Marc Ogier

Example-based super-resolution (SR) methods learn the correspondences between low-resolution (LR) and high-resolution (HR) image patches, where the patches are extracted from a training database. To reconstruct a single LR image into an HR one, each LR image patch is processed by the previously trained model to recover its corresponding HR patch. Because every patch must pass through this model, such methods are computationally inefficient. We propose the use of a selective patch processing technique to carry out the super-resolution step more efficiently while maintaining the output quality. In this technique, only patches of high variance are processed by the costly reconstruction steps, while the rest of the patches are processed by fast bicubic interpolation. We have applied the proposed improvement to representative example-based SR methods to super-resolve text images. The results show a significant speed-up for text SR without a drop in OCR accuracy. In order to carry out an extensive and solid performance evaluation, we also present a public database of text images for training and testing example-based SR methods.
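A minimal sketch of the selective routing idea, assuming grayscale input; the patch size, variance threshold, and the `sr_model` callable are hypothetical stand-ins for the paper's trained reconstruction model:

```python
import cv2

def selective_sr(lr_img, sr_model, scale=2, patch=8, var_threshold=100.0):
    """Upscale high-variance patches with the costly SR model and everything
    else with fast bicubic interpolation (the default output below)."""
    h, w = lr_img.shape[:2]
    out = cv2.resize(lr_img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = lr_img[y:y + patch, x:x + patch]
            if p.var() > var_threshold:
                # Only patches with enough detail pay for the reconstruction.
                out[y * scale:(y + patch) * scale,
                    x * scale:(x + patch) * scale] = sr_model(p)
    return out
```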


International Conference on Document Analysis and Recognition | 2015

A comparative study of local detectors and descriptors for mobile document classification

Marçal Rusiñol; Joseph Chazalon; Jean-Marc Ogier; Josep Lladós

In this paper we conduct a comparative study of local key-point detectors and local descriptors for the specific task of mobile document classification. A classification architecture based on direct matching of local descriptors is used as the baseline for the comparative study. A set of four different key-point detectors and four different local descriptors are tested in all possible combinations. The experiments are conducted on a database consisting of 30 model documents acquired on 6 different backgrounds, totaling more than 36,000 test images.
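Which four detectors and four descriptors were tested is stated in the paper, not reproduced here; as a hedged sketch, OpenCV lets the two stages be decoupled so that any detector can feed any descriptor (FAST and ORB below are just examples):

```python
import cv2

img = cv2.imread("test_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

detector = cv2.FastFeatureDetector_create()  # key-point detection only
descriptor = cv2.ORB_create()                # used here only for description

keypoints = detector.detect(img, None)
keypoints, descriptors = descriptor.compute(img, keypoints)

# Baseline classification by direct matching: brute-force with Hamming
# distance, which suits binary descriptors such as ORB.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
```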


International Conference on Document Analysis and Recognition | 2015

Improving document matching performance by local descriptor filtering

Joseph Chazalon; Marçal Rusiñol; Jean-Marc Ogier

In this paper we propose an effective method aimed at reducing the number of local descriptors to be indexed in a document matching framework. In an offline training stage, the matching between the model document and incoming images is computed, retaining only the local descriptors from the model that steadily produce good matches. We have evaluated this approach using the ICDAR2015 SmartDoc dataset, containing nearly 25,000 images of documents to be captured by a mobile device. We have tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show an important gain both in the quality of the final matching and in time and space requirements.
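A minimal sketch of one plausible retention rule, keeping the model descriptors that matched in a large enough fraction of the training images; the bookkeeping and threshold are assumptions, and the paper's exact stability statistic may differ:

```python
import numpy as np

def filter_model_descriptors(match_counts, n_training_images, keep_ratio=0.8):
    """Return indices of model descriptors that produced a good match in at
    least `keep_ratio` of the training images; the rest are not indexed."""
    stable = np.asarray(match_counts) >= keep_ratio * n_training_images
    return np.flatnonzero(stable)

# Hypothetical counts for 6 model descriptors over 10 training images:
keep = filter_model_descriptors([10, 2, 9, 8, 1, 10], n_training_images=10)
print(keep)  # [0 2 3 5]
```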


Multimedia Tools and Applications | 2018

Augmented songbook: an augmented reality educational application for raising music awareness

Marçal Rusiñol; Joseph Chazalon; Katerine Diaz-Chito

This paper presents the development of an Augmented Reality mobile application which aims at introducing young children to abstract concepts of music, such as musical notation or the idea of rhythm. Recent studies in Augmented Reality for education suggest that such technologies have multiple benefits for students, including younger ones. As mobile document image acquisition and processing gains maturity on mobile platforms, we explore how to build a markerless, real-time application that augments physical documents with didactic animations and interactive virtual content. Given a standard image processing pipeline, we compare the performance of different local descriptors at two key stages of the process. Results suggest alternatives to SIFT local descriptors, regarding both result quality and computational efficiency, for document model identification as well as perspective transform estimation. All experiments are performed on an original and public dataset we introduce here.
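The perspective transform estimation stage mentioned above is commonly done by matching local descriptors and fitting a homography with RANSAC; a hedged sketch in OpenCV using SIFT, one of the descriptors the study compares against (file names are placeholders):

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
model = cv2.imread("songbook_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical

kp_m, des_m = sift.detectAndCompute(model, None)
kp_f, des_f = sift.detectAndCompute(frame, None)

# Lowe's ratio test to keep only unambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_m, des_f, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps model onto frame
```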


ACM/IEEE Joint Conference on Digital Libraries | 2017

Touchdoc: a tool to bridge the gap between physical and digital libraries

Nicolas Sidere; Cyrille Suire; Mickaël Coustaty; Joseph Chazalon; Jean-Christophe Burie; Jean-Marc Ogier

In this paper, we explore the concept of the augmented document and present a new user experience for digitizing a document, modifying its layout, and editing its content, by designing specific interfaces on multi-touch devices and using advanced document analysis techniques. This framework exploits image processing tools to facilitate manipulations that are natural with paper documents but complex in their digital versions. In addition, we open a discussion on bridging the gap between physical and digital libraries by improving the user experience with this platform.


International MICCAI Brainlesion Workshop | 2017

White Matter Hyperintensities Segmentation in a Few Seconds Using Fully Convolutional Network and Transfer Learning

Yongchao Xu; Thierry Géraud; Élodie Puybareau; Isabelle Bloch; Joseph Chazalon

In this paper, we propose a fast automatic method that segments white matter hyperintensities (WMH) in 3D brain MR images, using a fully convolutional network (FCN) and transfer learning. This FCN is the Visual Geometry Group neural network (VGG for short), pre-trained on ImageNet for natural image classification and fine-tuned on the training dataset of the MICCAI WMH Challenge. We consider three images for each slice of the volume to segment: the T1 slice, the FLAIR slice, and the result of a morphological operator that emphasizes small bright structures. These three 2D images are assembled to form a 2D color image, which is fed to the FCN to obtain the 2D segmentation of the corresponding slice. We process all slices and stack the results to form the 3D output segmentation. With this technique, the segmentation of WMH on a 3D brain volume takes about 10 s, including pre-processing. Our technique was ranked 6th out of 20 participants in the MICCAI WMH Challenge.
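A minimal sketch of the slice assembly step described above; the choice of morphological operator (a white top-hat) and the normalization are assumptions, and `fcn_segment` is a hypothetical stand-in for the fine-tuned VGG-based FCN:

```python
import numpy as np
from scipy import ndimage

def slice_to_rgb(t1_slice, flair_slice, selem_size=5):
    """Assemble T1, FLAIR, and a white top-hat of FLAIR (which emphasizes
    small bright structures) into one 3-channel image for the FCN."""
    tophat = ndimage.white_tophat(flair_slice, size=selem_size)
    channels = [t1_slice, flair_slice, tophat]
    # Scale each channel to [0, 255], as an ImageNet-pretrained VGG expects.
    rgb = np.stack([255.0 * (c - c.min()) / (np.ptp(c) or 1.0) for c in channels],
                   axis=-1)
    return rgb.astype(np.uint8)

# Segment slice by slice, then stack the 2D results into the 3D output:
# seg3d = np.stack([fcn_segment(slice_to_rgb(t1[z], flair[z]))
#                   for z in range(t1.shape[0])])
```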

Collaboration


Dive into Joseph Chazalon's collaborations.

Top Co-Authors

Jean-Marc Ogier
University of La Rochelle

Marçal Rusiñol
Autonomous University of Barcelona

Nibal Nayef
University of La Rochelle

Sophea Prum
University of La Rochelle