Publication


Featured research published by Muhammad Muzzamil Luqman.


Pattern Recognition | 2013

Fuzzy multilevel graph embedding

Muhammad Muzzamil Luqman; Jean-Yves Ramel; Josep Lladós; Thierry Brouard

Structural pattern recognition approaches offer the most expressive, convenient, and powerful, but computationally expensive, representations of underlying relational information. To benefit from the mature, less expensive, and efficient state-of-the-art machine learning models of statistical pattern recognition, these representations must be mapped to a low-dimensional vector space. Our method of explicit graph embedding bridges the gap between structural and statistical pattern recognition. We extract topological, structural, and attribute information from a graph and encode numeric details by fuzzy histograms and symbolic details by crisp histograms. The histograms are concatenated to achieve a simple and straightforward embedding of the graph into a low-dimensional numeric feature vector. Experimentation on standard public graph datasets shows that our method outperforms state-of-the-art graph embedding methods for richly attributed graphs.

Highlights:
- We propose an explicit graph embedding method.
- We perform a multilevel analysis of the graph to extract global, topological/structural, and attribute information.
- We use the homogeneity of subgraphs for extracting topological/structural details.
- We encode numeric information by fuzzy histograms and symbolic information by crisp histograms.
- Our method outperforms graph embedding methods for richly attributed graphs.
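The fuzzy-versus-crisp encoding step can be pictured with a small sketch. The following is a minimal, illustrative Python implementation, assuming triangular fuzzy memberships, evenly spaced bin centres, and toy node attributes; the bin placement and attribute names are hypothetical and not taken from the paper.

```python
import numpy as np

def crisp_histogram(symbols, vocabulary):
    """Crisp histogram for symbolic attributes: one exact count per vocabulary entry."""
    return np.array([sum(1 for s in symbols if s == v) for v in vocabulary], dtype=float)

def fuzzy_histogram(values, centers):
    """Fuzzy histogram for numeric attributes: each value contributes to
    neighbouring bins with a triangular membership degree instead of a hard count."""
    centers = np.asarray(centers, dtype=float)
    width = centers[1] - centers[0]          # assumes evenly spaced bin centres
    hist = np.zeros(len(centers))
    for x in values:
        hist += np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return hist

# Toy graph: node degrees (numeric) and node labels (symbolic)
degrees = [1, 2, 2, 3, 5]
labels = ["corner", "corner", "junction", "endpoint", "junction"]

# Concatenate the histograms into one numeric feature vector for the graph
embedding = np.concatenate([
    fuzzy_histogram(degrees, centers=[1.0, 3.0, 5.0]),
    crisp_histogram(labels, vocabulary=["corner", "junction", "endpoint"]),
])
print(embedding)
```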


International Conference on Document Analysis and Recognition | 2009

Graphic Symbol Recognition Using Graph Based Signature and Bayesian Network Classifier

Muhammad Muzzamil Luqman; Thierry Brouard; Jean-Yves Ramel

We present a new approach for the recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well-known challenge in the field of document image analysis and is at the heart of most graphic recognition systems. Our method uses a structural approach for symbol representation and a statistical classifier for symbol recognition. In our system we represent symbols by their graph-based signatures: a graphic symbol is vectorized and converted to an attributed relational graph, which is used for computing a feature vector for the symbol. This signature corresponds to the geometry and topology of the symbol. We learn a Bayesian network to encode the joint probability distribution of symbol signatures and use it in a supervised learning scenario for graphic symbol recognition. We have evaluated our method on synthetically deformed and degraded images of pre-segmented 2D architectural and electronic symbols from the GREC databases and have obtained encouraging recognition rates.
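To make the signature-plus-classifier pipeline concrete, here is a minimal sketch. The paper learns a full Bayesian network over the signature components; the sketch below substitutes a naive Bayes classifier (a Bayesian network with a fixed, star-shaped structure) as a simplified stand-in, and the training signatures are synthetic placeholders rather than real graph-based signatures.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: each row stands in for a graph-based signature of
# a symbol (histogram-like counts describing its geometry and topology), and
# y holds the symbol class labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 16))        # 200 signatures, 16 features each
y_train = rng.integers(0, 5, 200)      # 5 symbol classes

# Naive Bayes is used here only as a stand-in for the learned Bayesian network.
clf = GaussianNB().fit(X_train, y_train)

query_signature = rng.random((1, 16))  # signature of an unknown symbol
print(clf.predict(query_signature))    # predicted symbol class
```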


International Conference on Document Analysis and Recognition | 2015

ICDAR2015 competition on smartphone document capture and OCR (SmartDoc)

Jean-Christophe Burie; Joseph Chazalon; Mickaël Coustaty; Sébastien Eskenazi; Muhammad Muzzamil Luqman; Maroua Mehri; Nibal Nayef; Jean-Marc Ogier; Sophea Prum; Marçal Rusiñol

Smartphones are enabling new ways of capturing documents, hence the need for seamless and reliable acquisition and digitization, in order to convert documents into an editable, searchable, and more human-readable format. The current state of the art lacks databases and baseline benchmarks for digitizing mobile-captured documents. We have organized a competition on mobile document capture and OCR in order to address this issue. The competition is structured into two independent challenges: smartphone document capture and smartphone OCR. This report describes the datasets for both challenges along with their ground truth, details the performance evaluation protocols we used, and presents the final results of the participating methods. In total, we received 13 submissions: 8 for challenge 1 and 5 for challenge 2.


International Workshop on Graph-Based Representations in Pattern Recognition | 2013

A Comparison of Explicit and Implicit Graph Embedding Methods for Pattern Recognition

Donatello Conte; Jean-Yves Ramel; Nicolas Sidère; Muhammad Muzzamil Luqman; Benoit Gaüzère; Jaume Gibert; Luc Brun; Mario Vento

In recent years graph embedding has emerged as a promising solution for enabling expressive, convenient, and powerful but computationally expensive graph-based representations to benefit from the mature, less expensive, and efficient state-of-the-art machine learning models of statistical pattern recognition. In this paper we present a comparison of two implicit and three explicit state-of-the-art graph embedding methodologies. Our preliminary experimentation on different chemoinformatics datasets illustrates that the two implicit and three explicit graph embedding approaches obtain competitive performance for the problem of graph classification.


International Conference on Pattern Recognition | 2010

A Content Spotting System for Line Drawing Graphic Document Images

Muhammad Muzzamil Luqman; Thierry Brouard; Jean-Yves Ramel; Josep Lladós

We present a content spotting system for line drawing graphic document images. The proposed system is largely domain independent and takes keyword-based information retrieval for graphic documents one step further, to query by example (QBE) and focused retrieval. During the offline learning mode, we vectorize the documents in the repository, represent them by attributed relational graphs, extract regions of interest (ROIs) from them, convert each ROI to a fuzzy structural signature, cluster similar signatures to form ROI classes, and build an index for the repository. During the online querying mode, a Bayesian network classifier recognizes the ROIs in the query image and the corresponding documents are fetched by looking up the repository index. Experimental results are presented for synthetic images of architectural and electronic documents.
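The offline indexing and online lookup can be pictured as a small inverted index from ROI classes to the documents containing them. The sketch below assumes the signature clustering and ROI classification steps are already done; the class names and document identifiers are made up for illustration.

```python
from collections import defaultdict

# Offline mode (assumed already done): each document has been vectorized and
# its regions of interest assigned to signature clusters (ROI classes).
documents = {
    "doc_01": ["door", "window", "stairs"],
    "doc_02": ["resistor", "capacitor"],
    "doc_03": ["door", "stairs"],
}

# Build the repository index: ROI class -> set of documents containing it
index = defaultdict(set)
for doc_id, roi_classes in documents.items():
    for roi in roi_classes:
        index[roi].add(doc_id)

# Online mode: the classifier recognises the ROI classes present in the query
# image; matching documents are fetched by intersecting the index entries.
query_rois = ["door", "stairs"]
results = set.intersection(*(index[roi] for roi in query_rois))
print(sorted(results))  # ['doc_01', 'doc_03']
```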


International Conference on Document Analysis and Recognition | 2011

Subgraph Spotting through Explicit Graph Embedding: An Application to Content Spotting in Graphic Document Images

Muhammad Muzzamil Luqman; Jean-Yves Ramel; Josep Lladós; Thierry Brouard

We present a method for spotting a subgraph in a graph repository. Subgraph spotting is a very interesting research problem for various application domains where the use of a relational data structure is mandatory. Our proposed method accomplishes subgraph spotting through graph embedding. We achieve automatic indexing of a graph repository during the offline learning phase, where we (i) break the graphs into 2-node subgraphs (a.k.a. cliques of order 2), which are the primitive building blocks of a graph, (ii) embed the 2-node subgraphs into feature vectors by employing our recently proposed explicit graph embedding technique, (iii) cluster the feature vectors into classes by employing a classic agglomerative clustering technique, (iv) build an index for the graph repository, and (v) learn a Bayesian network classifier. Subgraph spotting is achieved during the online querying phase, where we (i) break the query graph into 2-node subgraphs, (ii) embed them into feature vectors, (iii) employ the Bayesian network classifier for classifying the query 2-node subgraphs, and (iv) retrieve the respective graphs by looking up the index of the graph repository. The graphs containing all query 2-node subgraphs form the set of result graphs for the query. Finally, we employ the adjacency matrix of each result graph, along with a score function, for spotting the query graph in it. The proposed subgraph spotting method is equally applicable to a wide range of domains, offering ease of query by example (QBE) and granularity of focused retrieval. Experimental results are presented for graphs generated from two repositories of electronic and architectural document images.
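A minimal sketch of the decomposition and indexing steps, assuming attributed graphs stored as simple dictionaries; the embedding, clustering, and classifier stages are abstracted into a placeholder function, and the graph contents are invented for illustration.

```python
from collections import defaultdict

def two_node_subgraphs(graph):
    """Break a graph into its 2-node subgraphs (cliques of order 2):
    one (node_attrs, edge_attrs, node_attrs) triple per edge."""
    nodes, edges = graph["nodes"], graph["edges"]
    return [(nodes[u], attrs, nodes[v]) for (u, v, attrs) in edges]

def embed(subgraph):
    """Placeholder for the explicit graph embedding of a 2-node subgraph;
    here we simply use the triple itself as a hashable stand-in."""
    return str(subgraph)

# Toy repository of two attributed graphs
repository = {
    "g1": {"nodes": {0: "corner", 1: "junction"}, "edges": [(0, 1, "line")]},
    "g2": {"nodes": {0: "corner", 1: "corner"}, "edges": [(0, 1, "arc")]},
}

# Offline phase: index which graphs contain which embedded 2-node subgraphs
index = defaultdict(set)
for graph_id, graph in repository.items():
    for sg in two_node_subgraphs(graph):
        index[embed(sg)].add(graph_id)

# Online phase: the query graph is decomposed the same way; result graphs are
# those containing all of the query's 2-node subgraphs.
query = {"nodes": {0: "corner", 1: "junction"}, "edges": [(0, 1, "line")]}
candidates = set.intersection(*(index[embed(sg)] for sg in two_node_subgraphs(query)))
print(candidates)  # {'g1'}
```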


International Conference on Pattern Recognition | 2010

A fuzzy-interval based approach for explicit graph embedding

Muhammad Muzzamil Luqman; Josep Lladós; Jean-Yves Ramel; Thierry Brouard

We present a new method for explicit graph embedding. Our algorithm extracts a feature vector for an undirected attributed graph. The proposed feature vector encodes details about the number of nodes, the number of edges, node degrees, the attributes of nodes, and the attributes of edges in the graph. The first two features are the number of nodes and the number of edges. These are followed by w features for node degrees, m features for k node attributes, and n features for l edge attributes, which represent the distribution of node degrees, node attribute values, and edge attribute values, and are obtained by defining, in an unsupervised fashion, fuzzy intervals over the lists of node degrees, node attributes, and edge attributes. Experimental results are provided for sample data of the ICPR2010 contest GEPR.
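A small illustration of how fuzzy intervals might be derived in an unsupervised way and turned into features: the sketch below uses quantiles of the observed values as interval centres and triangular memberships, which is an assumption made for the example rather than the paper's exact procedure, and the toy attribute lists are invented.

```python
import numpy as np

def fuzzy_interval_features(values, n_intervals=3):
    """Derive n_intervals fuzzy intervals from the data (here: via quantiles of
    the observed values) and return, for each interval, the sum of triangular
    membership degrees over all values."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.0, 1.0, n_intervals))
    width = max(np.diff(centers).max(), 1e-9)  # avoid zero-width intervals
    features = np.zeros(n_intervals)
    for x in values:
        features += np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return features

# Toy undirected attributed graph summarised by simple lists
node_degrees = [1, 2, 2, 3, 4]
node_attr = [0.1, 0.4, 0.4, 0.9, 1.2]   # one numeric node attribute
edge_attr = [5.0, 5.5, 7.0, 9.0]        # one numeric edge attribute

feature_vector = np.concatenate([
    [len(node_degrees), len(edge_attr)],     # number of nodes and edges
    fuzzy_interval_features(node_degrees),   # distribution of node degrees
    fuzzy_interval_features(node_attr),      # distribution of the node attribute
    fuzzy_interval_features(edge_attr),      # distribution of the edge attribute
])
print(feature_vector)
```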


International Conference on Document Analysis and Recognition | 2015

SmartDoc-QA: A dataset for quality assessment of smartphone captured document images - single and multiple distortions

Nibal Nayef; Muhammad Muzzamil Luqman; Sophea Prum; Sébastien Eskenazi; Joseph Chazalon; Jean-Marc Ogier

Smartphones are enabling new ways of capturing documents, hence the need for seamless and reliable acquisition and digitization. The quality assessment step is an important part of both the acquisition and the digitization processes. Assessing document quality can aid users during the capture process or help improve image enhancement methods after a document has been captured. The field of document image quality assessment currently lacks databases. In order to provide a baseline benchmark for quality assessment methods for mobile-captured documents, we present in this paper a dataset for quality assessment that contains both singly- and multiply-distorted document images. The proposed dataset can be used for benchmarking quality assessment methods against the objective measure of OCR accuracy, and can also be used to benchmark quality enhancement methods. There are three types of documents in the dataset: modern documents, old administrative letters, and receipts. The document images of the dataset are captured under varying capture conditions (light, different types of blur, and perspective angles). This causes geometric and photometric distortions that hinder the OCR process. The ground truth of the dataset images consists of the text transcriptions of the documents, the OCR results of the captured documents, and the values of the different capture parameters used for each image. We also describe how the dataset can be used for evaluation in the field of no-reference quality assessment. The dataset is freely and publicly available for use by the research community at http://navidomass.univ-lr.fr/SmartDoc-QA.


International Conference on Document Analysis and Recognition | 2015

SRIF: Scale and Rotation Invariant Features for camera-based document image retrieval

Quoc Bao Dang; Muhammad Muzzamil Luqman; Mickaël Coustaty; Cao De Tran; Jean-Marc Ogier

In this paper, we propose a new feature vector, named Scale and Rotation Invariant Features (SRIF), for real-time camera-based document image retrieval. SRIF is based on Locally Likely Arrangement Hashing (LLAH), which has been widely used and accepted as an efficient real-time method for camera-based retrieval of text-based document images. SRIF is computed from geometrical constraints between pairs of nearest points around a keypoint. It can deal with feature point extraction errors introduced by the camera capture of documents. The experimental results show that SRIF outperforms LLAH in terms of retrieval accuracy and processing time.
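To illustrate the kind of invariance involved, the sketch below computes ratios of distances from a keypoint to its nearest neighbours: distances scale and rotate together, so their ratios are unchanged by a similarity transform. This is a plausible quantity of the same flavour, not the paper's exact SRIF descriptor, and the point coordinates are invented.

```python
import numpy as np

def invariant_descriptor(keypoint, neighbours):
    """Illustrative scale- and rotation-invariant quantities around a keypoint:
    ratios of distances from the keypoint to its nearest neighbours."""
    keypoint = np.asarray(keypoint, dtype=float)
    neighbours = np.asarray(neighbours, dtype=float)
    d = np.sort(np.linalg.norm(neighbours - keypoint, axis=1))
    return d / d[-1]  # normalising by the largest distance removes scale

def transform(points, scale, angle):
    """Apply a similarity transform (uniform scale + rotation) to 2D points."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return scale * np.asarray(points) @ rot.T

kp = [0.0, 0.0]
nn = [[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]]

print(invariant_descriptor(kp, nn))
print(invariant_descriptor(transform([kp], 2.5, 0.7)[0],
                           transform(nn, 2.5, 0.7)))  # same values after the transform
```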


International Conference on Document Analysis and Recognition | 2015

Content-based comic retrieval using multilayer graph representation and frequent graph mining

Thanh-Nam Le; Muhammad Muzzamil Luqman; Jean-Christophe Burie; Jean-Marc Ogier

Comics have a large audience and market throughout the world, yet despite the huge research interest in content-based image retrieval (CBIR) systems, the question of how to effectively retrieve comic images has been little studied. In this paper, we propose a scheme to represent and retrieve comic-page images using attributed region adjacency graphs (RAGs) and their frequent subgraphs. We first extract the graphical structures and local features of each panel of the whole comic volume, then separate different categories of local features into different layers of attributed RAGs. After that, a list of frequent subgraphs for each layer is obtained using a frequent subgraph mining (FSM) technique. For indexing and CBIR purposes, recognition and ranking are done by checking for isomorphism between the graphs representing the query and the discovered frequent subgraphs. Our experimental results show that the proposed approach achieves reliable retrieval of comic images under a query-by-example (QBE) model.
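To make the matching step concrete, here is a small sketch using networkx to represent an attributed region adjacency graph and to test whether a discovered frequent subgraph occurs in it. The node labels and the frequent pattern are invented for illustration, and the frequent subgraph mining step itself is assumed to have been run already.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Attributed region adjacency graph (RAG) of one comic panel: nodes are regions
# with a label attribute, edges connect spatially adjacent regions.
panel = nx.Graph()
panel.add_nodes_from([
    (0, {"label": "face"}),
    (1, {"label": "balloon"}),
    (2, {"label": "background"}),
])
panel.add_edges_from([(0, 1), (0, 2), (1, 2)])

# Hypothetical frequent subgraph discovered by the (already run) FSM step:
# a face region adjacent to a speech balloon.
pattern = nx.Graph()
pattern.add_nodes_from([(0, {"label": "face"}), (1, {"label": "balloon"})])
pattern.add_edge(0, 1)

# Recognition/ranking step: check whether the frequent pattern occurs in the
# panel's RAG, matching node labels as well as structure.
matcher = isomorphism.GraphMatcher(
    panel, pattern, node_match=isomorphism.categorical_node_match("label", None)
)
print(matcher.subgraph_is_isomorphic())  # True: the pattern occurs in this panel
```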

Collaboration


Top co-authors of Muhammad Muzzamil Luqman and their affiliations.

Jean-Marc Ogier

University of La Rochelle

Jean-Yves Ramel

François Rabelais University

Nibal Nayef

University of La Rochelle

Quoc Bao Dang

University of La Rochelle

Joseph Chazalon

University of La Rochelle

Thierry Brouard

François Rabelais University

Josep Lladós

Autonomous University of Barcelona
