Publication


Featured research published by Lluis Gomez.


International Conference on Document Analysis and Recognition | 2013

Multi-script Text Extraction from Natural Scenes

Lluis Gomez; Dimosthenis Karatzas

Scene text extraction methodologies are usually based on classification of individual regions or patches, using a priori knowledge of a given script or language. Human perception of text, on the other hand, is based on perceptual organisation, through which text emerges as a perceptually significant group of atomic objects. Humans are therefore able to detect text even in languages and scripts never seen before. In this paper, we argue that the text extraction problem can be posed as the detection of meaningful groups of regions. We present a method built around a perceptual organisation framework that exploits the collaboration of proximity and similarity laws to create text-group hypotheses. Experiments demonstrate that our algorithm is competitive with state-of-the-art approaches on a standard dataset covering text in variable orientations and two languages.
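The core idea — regions merge into text-group hypotheses when they are both close (proximity law) and alike (similarity law) — can be illustrated with a minimal union-find sketch. The features (centroid and height) and the thresholds below are illustrative assumptions, not the paper's actual perceptual-organisation framework:

```python
# Minimal sketch of proximity + similarity grouping over atomic regions
# (illustrative features and thresholds, not the paper's actual method).

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def group_regions(regions, max_dist=30.0, max_height_ratio=1.5):
    """regions: list of (cx, cy, height) tuples for atomic objects."""
    dsu = DSU(len(regions))
    for i, (xi, yi, hi) in enumerate(regions):
        for j, (xj, yj, hj) in enumerate(regions[i + 1:], start=i + 1):
            close = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 < max_dist   # proximity law
            similar = max(hi, hj) / min(hi, hj) < max_height_ratio        # similarity law
            if close and similar:
                dsu.union(i, j)
    groups = {}
    for i in range(len(regions)):
        groups.setdefault(dsu.find(i), []).append(i)
    return list(groups.values())

# Two nearby, similar-height regions group together; a distant one stays alone.
regions = [(0, 0, 10), (20, 0, 11), (200, 0, 10)]
print(group_regions(regions))  # [[0, 1], [2]]
```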


Pattern Recognition | 2017

TextProposals: A text-specific selective search algorithm for word spotting in the wild

Lluis Gomez; Dimosthenis Karatzas

Motivated by the success of powerful yet expensive techniques that recognize words in a holistic way, object proposals techniques emerge as an alternative to traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability to produce good-quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused and incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers shows competitive performance in end-to-end word spotting and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset, we surpass the best-performing method from the last ICDAR Robust Reading Competition by more than 10 percent in f-score. Source code of the complete end-to-end system is available at this https URL


International Journal on Document Analysis and Recognition | 2016

A fast hierarchical method for multi-script and arbitrary oriented scene text extraction

Lluis Gomez; Dimosthenis Karatzas

Typography and layout lead to the hierarchical organization of text into words, text lines, and paragraphs. This inherent structure is a key property of text in any script and language, yet it has been minimally leveraged by existing scene text detection methods. This paper addresses the problem of text segmentation in natural scenes from a hierarchical perspective. Contrary to existing methods, we make explicit use of text structure, aiming directly at the detection of region groupings corresponding to text within a hierarchy produced by an agglomerative similarity clustering process over individual regions. We propose an optimal way to construct such a hierarchy, introducing a feature space designed to produce text-group hypotheses with high recall and a novel stopping rule combining a discriminative classifier and a probabilistic measure of group meaningfulness based on perceptual organization. Results obtained over four standard datasets, covering text in variable orientations and different languages, demonstrate that our algorithm, while trained on a single mixed dataset, outperforms state-of-the-art methods in unconstrained scenarios.
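The agglomerative process described above can be sketched in its simplest form: repeatedly merge the two closest clusters of regions, recording every intermediate grouping as a node of the hierarchy, and stop when no merge is close enough. A plain distance threshold stands in here for the paper's learned feature space and its classifier-plus-meaningfulness stopping rule; the feature vectors are toy values:

```python
# Sketch of agglomerative similarity clustering over region features.
# A distance-threshold stopping rule stands in for the paper's learned
# stopping criterion; features are toy 1-D tuples for illustration.

def agglomerate(features, stop_dist=2.0):
    """Returns the final clusters plus every grouping (the hierarchy's nodes)."""
    clusters = [[i] for i in range(len(features))]
    dims = len(features[0])
    centroid = lambda c: [sum(features[i][d] for i in c) / len(c)
                          for d in range(dims)]
    hierarchy = [tuple(c) for c in clusters]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca, cb = centroid(clusters[a]), centroid(clusters[b])
                d = sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        if d > stop_dist:          # stopping rule: groups stop being text-like
            break
        merged = clusters[a] + clusters[b]
        hierarchy.append(tuple(sorted(merged)))
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]
    return clusters, hierarchy

# Regions 0 and 1 are similar and merge; region 2 is too far away.
clusters, hierarchy = agglomerate([(0.0,), (1.0,), (10.0,)])
print(clusters)  # [[2], [0, 1]]
```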


Pattern Recognition | 2017

Improving patch-based scene text script identification with ensembles of conjoined networks

Lluis Gomez; Anguelos Nicolaou; Dimosthenis Karatzas

We present a patch-based classification method for script identification in the wild. We describe a novel method based on the use of ensembles of conjoined networks (ECN). The ECN learns discriminative local features and their relative importance in a global classification rule. Our experiments demonstrate state-of-the-art results in three script identification datasets.

This paper focuses on the problem of script identification in scene text images. Facing this problem with state-of-the-art CNN classifiers is not straightforward, as they fail to address a key characteristic of scene text instances: their extremely variable aspect ratio. Instead of resizing input images to a fixed aspect ratio, as in the typical use of holistic CNN classifiers, we propose a patch-based classification framework that preserves the discriminative parts of an image that are characteristic of its class. We describe a novel method based on ensembles of conjoined networks that jointly learns discriminative stroke-part representations and their relative importance in a patch-based classification scheme. Our experiments with this learning procedure demonstrate state-of-the-art results on two public script identification datasets. In addition, we propose a new public benchmark dataset for the evaluation of multi-lingual scene text end-to-end reading systems. Experiments on this dataset demonstrate the key role of script identification in a complete end-to-end system that combines our script identification method with a previously published text detector and an off-the-shelf OCR engine.
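The global classification rule can be pictured as a weighted vote: each patch (stroke-part) produces per-script scores, and a learned importance weight decides how much each patch counts. In the paper both the scores and the weights come from the conjoined networks; the values below are made up for illustration:

```python
# Sketch of a patch-based global classification rule: per-patch script
# scores combined with learned importance weights (toy values; in the
# paper both come from the ensemble of conjoined networks).

def classify_image(patch_scores, patch_weights, scripts):
    """patch_scores: one {script: score} dict per patch.
    patch_weights: relative importance of each patch."""
    totals = {s: 0.0 for s in scripts}
    wsum = sum(patch_weights)
    for scores, w in zip(patch_scores, patch_weights):
        for s in scripts:
            totals[s] += (w / wsum) * scores[s]
    return max(totals, key=totals.get), totals

scripts = ["Latin", "Arabic", "Hangul"]
patches = [
    {"Latin": 0.7, "Arabic": 0.2, "Hangul": 0.1},   # discriminative stroke-part
    {"Latin": 0.4, "Arabic": 0.4, "Hangul": 0.2},   # ambiguous patch
]
weights = [0.9, 0.1]   # the ambiguous patch is down-weighted
label, totals = classify_image(patches, weights, scripts)
print(label)  # Latin
```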


Document Analysis Systems | 2016

A Fine-Grained Approach to Scene Text Script Identification

Lluis Gomez; Dimosthenis Karatzas

This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments done on this new dataset demonstrate that the proposed method yields state-of-the-art results, while generalizing well to different datasets and a variable number of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.
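The Naive-Bayes Nearest Neighbor (NBNN) decision rule mentioned above is simple to state: for each query descriptor, add its distance to the nearest descriptor of every class, then pick the class with the smallest total. The descriptors below are toy 2-D vectors standing in for the paper's convolutional stroke-part features:

```python
# Sketch of the NBNN decision rule over stroke-part descriptors
# (toy 2-D vectors stand in for the convolutional features).

def nbnn_classify(query_descriptors, class_descriptors):
    def sqdist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    totals = {}
    for label, descs in class_descriptors.items():
        # image-to-class distance: sum of nearest-neighbor distances per class
        totals[label] = sum(min(sqdist(d, c) for c in descs)
                            for d in query_descriptors)
    return min(totals, key=totals.get)

classes = {
    "Latin":  [(0.0, 0.0), (1.0, 0.0)],
    "Hangul": [(5.0, 5.0), (6.0, 5.0)],
}
print(nbnn_classify([(0.2, 0.1), (0.9, 0.2)], classes))  # Latin
```

The design choice NBNN embodies is that descriptors are compared image-to-class rather than image-to-image, which avoids quantization loss and needs no descriptor-level training.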


Asian Conference on Computer Vision | 2014

Scene Text Recognition: No Country for Old Men?

Lluis Gomez; Dimosthenis Karatzas

It is a generally accepted fact that off-the-shelf OCR engines do not perform well in unconstrained scenarios like natural scene imagery, where text appears among the clutter of the scene. However, recent research demonstrates that a conventional shape-based OCR engine can produce competitive results in the end-to-end scene text recognition task when provided with a conveniently preprocessed image. In this paper we confirm this finding with a set of experiments in which two off-the-shelf OCR engines are combined with an open implementation of a state-of-the-art scene text detection framework. The obtained results demonstrate that, in such a pipeline, conventional OCR solutions still perform competitively with solutions specifically designed for scene text recognition.


Document Analysis Systems | 2016

Visual Script and Language Identification

Anguelos Nicolaou; Andrew D. Bagdanov; Lluis Gomez; Dimosthenis Karatzas

In this paper we introduce a script identification method based on hand-crafted texture features and an artificial neural network. The proposed pipeline achieves near state-of-the-art performance for script identification of video text and state-of-the-art performance for visual language identification of handwritten text. Beyond using the deep network as a classifier, using its intermediate activations as a learned metric yields remarkable results and allows discriminative models to be applied to unknown classes. Comparative experiments on video-text and text-in-the-wild datasets provide insights into the internals of the proposed deep network.


International Conference on Document Analysis and Recognition | 2015

Object proposals for text extraction in the wild

Lluis Gomez; Dimosthenis Karatzas

Object Proposals is a recent computer vision technique that is receiving increasing interest from the research community. Its main objective is to generate a relatively small set of bounding box proposals that are most likely to contain objects of interest. The use of Object Proposals techniques in the scene text understanding field is innovative. Motivated by the success of powerful yet expensive techniques that recognize words in a holistic way, Object Proposals techniques emerge as an alternative to traditional text detectors. In this paper we study to what extent existing generic Object Proposals methods may be useful for scene text understanding. We also propose a new Object Proposals algorithm that is specifically designed for text and compare it with other generic methods in the state of the art. Experiments show that our proposal is superior in its ability to produce good-quality word proposals in an efficient way. The source code of our method is made publicly available.
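Proposal quality in this line of work is conventionally scored by recall at an intersection-over-union (IoU) threshold: a ground-truth word counts as recalled if some proposal overlaps it with IoU at or above, say, 0.5. A minimal sketch of that standard metric (the boxes are illustrative, not from the paper's benchmarks):

```python
# Standard IoU-recall metric used to score word-proposal quality.
# Boxes are (x1, y1, x2, y2); the numbers below are illustrative.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(gt_boxes, proposals, thr=0.5):
    hit = sum(1 for g in gt_boxes
              if any(iou(g, p) >= thr for p in proposals))
    return hit / len(gt_boxes)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(1, 1, 11, 11), (100, 100, 110, 110)]
print(recall(gt, props))  # 0.5 — only the first word is recalled
```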


Document Analysis Systems | 2014

An On-line Platform for Ground Truthing and Performance Evaluation of Text Extraction Systems

Dimosthenis Karatzas; Sergi Robles; Lluis Gomez

This paper presents a set of on-line software tools for creating ground truth and calculating performance evaluation metrics for text extraction tasks such as localization, segmentation and recognition. The platform supports the definition of comprehensive ground truth information at different text representation levels while it offers centralised management and quality control of the ground truthing effort. It implements a range of state of the art performance evaluation algorithms and offers functionality for the definition of evaluation scenarios, on-line calculation of various performance metrics and visualisation of the results. The presented platform, which comprises the backbone of the ICDAR 2011 (challenge 1) and 2013 (challenges 1 and 2) Robust Reading competitions, is now made available for public use.


Computer Vision and Pattern Recognition | 2017

Self-Supervised Learning of Visual Features through Embedding Images into Text Topic Spaces

Lluis Gomez; Yash Patel; Marçal Rusiñol; Dimosthenis Karatzas; C. V. Jawahar

End-to-end training from scratch of current deep architectures for new computer vision problems would require ImageNet-scale datasets, which are not always available. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally supervised approaches.
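The self-supervision signal here is a soft label: the topic distribution of the article an image appears in becomes the target the CNN must predict, typically via a soft-label cross-entropy. A minimal sketch of that loss, with a hypothetical topic-model posterior and made-up CNN outputs (the actual topic model and network are described in the paper):

```python
# Sketch of the soft-label cross-entropy used to train a CNN against a
# topic-model posterior (both vectors below are made up for illustration).

import math

def soft_cross_entropy(topic_target, cnn_probs, eps=1e-12):
    """topic_target: topic proportions of the source article (the label).
    cnn_probs: the network's predicted topic distribution for the image."""
    return -sum(t * math.log(p + eps) for t, p in zip(topic_target, cnn_probs))

target = [0.7, 0.2, 0.1]       # e.g. a topic posterior over 3 topics
good   = [0.6, 0.3, 0.1]       # prediction close to the target
bad    = [0.1, 0.1, 0.8]       # prediction far from the target
assert soft_cross_entropy(target, good) < soft_cross_entropy(target, bad)
```

Because the target is a distribution rather than a single class, the network is rewarded for matching the whole semantic context, not just the dominant topic.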

Collaboration


Dive into Lluis Gomez's collaborations.

Top Co-Authors

Dimosthenis Karatzas (Autonomous University of Barcelona)
Marçal Rusiñol (Autonomous University of Barcelona)
Anguelos Nicolaou (Autonomous University of Barcelona)
Raul Gomez (Autonomous University of Barcelona)
Dena Bazazian (Autonomous University of Barcelona)
Masakazu Iwamura (Osaka Prefecture University)
Naoyuki Morimoto (Osaka Prefecture University)
Andrés Mafla (Autonomous University of Barcelona)
Ernest Valveny (Autonomous University of Barcelona)