Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where K. Pramod Sankar is active.

Publication


Featured research published by K. Pramod Sankar.


Document Analysis Systems | 2010

Nearest neighbor based collection OCR

K. Pramod Sankar; C. V. Jawahar; R. Manmatha

Conventional optical character recognition (OCR) systems operate on individual characters and words, and do not normally exploit document or collection context. We describe a Collection OCR which takes advantage of the fact that multiple examples of the same word (often in the same font) may occur in a document or collection. The idea here is that an OCR or a reCAPTCHA-like process generates a partial set of recognized words. In the second stage, a nearest neighbor algorithm compares the remaining word-images to those already recognized and propagates labels from the nearest neighbors. It is shown that by using an approximate fast nearest neighbor algorithm based on Hierarchical K-Means (HKM), we can do this accurately and efficiently. It is also shown that profile-based features perform much better than SIFT and Pyramid Histogram of Gradient (PHOG) features. We believe that this is because profile features are more robust to word degradations (common in our documents). The approach is applied to a collection of books in Telugu, a language for which no commercial OCR exists. We show, on a selection of 33 Telugu books, that starting with OCR labels for only 30% of the collection, we can recognize the remaining 70% of the words with 70% accuracy using this approach. Since the approach makes no language-specific assumptions, it should be applicable to a large number of languages. In particular, we are interested in its applicability to Indic languages and scripts.
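
The second-stage label propagation can be illustrated with a small sketch. This assumes profile features have already been extracted for every word image, and it replaces the paper's HKM approximate index with an exact scikit-learn nearest-neighbour search; the feature values below are random placeholders, not real word images.

```python
# Sketch of the second-stage label propagation, assuming profile features
# are already extracted for every word image. The HKM approximate index
# from the paper is replaced here by an exact NearestNeighbors search.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate_labels(labeled_feats, labels, unlabeled_feats, k=5):
    """Assign each unlabeled word image the majority label of its
    k nearest labeled neighbours in feature space."""
    nn = NearestNeighbors(n_neighbors=k).fit(labeled_feats)
    _, idx = nn.kneighbors(unlabeled_feats)
    propagated = []
    for neighbour_ids in idx:
        neighbour_labels = [labels[i] for i in neighbour_ids]
        # Majority vote among the k neighbours
        propagated.append(max(set(neighbour_labels), key=neighbour_labels.count))
    return propagated

# Toy usage: a portion of the collection carries OCR labels, the rest is propagated.
rng = np.random.default_rng(0)
known = rng.normal(size=(300, 32))          # features of already-recognized words
known_labels = [f"word_{i % 50}" for i in range(300)]
unknown = rng.normal(size=(700, 32))        # features of unrecognized words
print(propagate_labels(known, known_labels, unknown)[:5])
```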


Document Analysis Systems | 2006

Digitizing a million books: challenges for document analysis

K. Pramod Sankar; Vamshi Ambati; Lakshmi Pratha; C. V. Jawahar

This paper describes the challenges facing the document image analysis community in building large digital libraries with diverse document categories. The challenges are identified from the experience of ongoing activities toward digitizing and archiving one million books. A smooth workflow has been established for archiving large quantities of books, with the help of efficient image processing algorithms. However, much more research is needed to address the challenges arising out of the diversity of content in digital libraries.


British Machine Vision Conference | 2009

Subtitle-free Movie to Script Alignment

K. Pramod Sankar; C. V. Jawahar; Andrew Zisserman

A standard solution for aligning scripts to movies is to use dynamic time warping with the subtitles (Everingham et al., BMVC 2006). We investigate the problem of aligning scripts to TV video/movies in cases where subtitles are not available, e.g. for silent films or for film passages that are non-verbal. To this end we identify a number of “modes of alignment” and train classifiers for each of these. The modes include visual features, such as locations and face recognition, and audio features such as speech. In each case the feature gives some alignment information, but is too noisy when used independently. We show that combining the different features into a single cost function and optimizing it using dynamic programming leads to performance superior to that of each individual feature. The method is assessed on episodes from the situation comedy Seinfeld, and on Charlie Chaplin and Indian movies.
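
As a rough illustration of the dynamic-programming step, the sketch below combines several per-mode cost matrices with fixed weights and finds a monotonic alignment path in the spirit of dynamic time warping. The cost matrices, weights, and transition rules are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of monotonic script-to-video alignment by dynamic programming,
# assuming each "mode" (face recognition, location, speech, ...) has already
# been turned into a cost matrix between script units and video segments.
import numpy as np

def align(cost_matrices, weights):
    """DTW-style alignment minimizing a weighted sum of per-mode costs."""
    combined = sum(w * c for w, c in zip(weights, cost_matrices))
    n, m = combined.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Allow a match, a skipped script unit, or a skipped video segment
            D[i, j] = combined[i - 1, j - 1] + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack to recover the alignment path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

faces = np.random.rand(10, 40)      # noisy face-recognition cost (placeholder)
speech = np.random.rand(10, 40)     # noisy speech/audio cost (placeholder)
print(align([faces, speech], weights=[0.6, 0.4])[:5])
```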


Document Analysis Systems | 2012

Robust Recognition of Degraded Documents Using Character N-Grams

Shrey Dutta; Naveen Sankaran; K. Pramod Sankar; C. V. Jawahar

In this paper we present a novel recognition approach that results in a 15% decrease in word error rate on heavily degraded Indian language document images. OCRs perform reasonably well on good quality documents, but fail easily in the presence of degradations. Classical OCR approaches also perform poorly on complex scripts such as those of Indian languages. We address these issues by proposing to recognize character n-gram images, which are groupings of consecutive character/component segments. Our approach is unique in that we use the character n-grams as a primitive for recognition rather than for post-processing. By exploiting the additional context present in the character n-gram images, we enable better disambiguation between confusing characters in the recognition phase. The labels obtained from recognizing the constituent n-grams are then fused to obtain a label for the word that emitted them. Our method is inherently robust to degradations such as cuts and merges, which are common in digital libraries of scanned documents. We also present a reliable and scalable scheme for recognizing character n-gram images. Tests on English and Malayalam document images show considerable improvement in recognition for heavily degraded documents.
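
A toy sketch of the label-fusion idea: assuming each character n-gram image covering segment positions [start, start+n) has already been recognized as a string, a word label can be obtained by a per-position vote over the overlapping n-gram hypotheses. The recognizer and the segmentation are not shown, and the simple voting rule here is an assumption, not the paper's fusion scheme.

```python
# Sketch of fusing recognized character n-gram hypotheses into a word label.
from collections import Counter

def fuse_ngram_labels(ngram_results, word_len):
    """ngram_results: list of (start_index, recognized_string)."""
    votes = [Counter() for _ in range(word_len)]
    for start, text in ngram_results:
        for offset, ch in enumerate(text):
            if start + offset < word_len:
                votes[start + offset][ch] += 1
    # Pick the most-voted character at each position
    return "".join(v.most_common(1)[0][0] if v else "?" for v in votes)

# Overlapping bigram/trigram hypotheses for a 5-character word
hypotheses = [(0, "ta"), (1, "ab"), (2, "ble"), (0, "tab")]
print(fuse_ngram_labels(hypotheses, word_len=5))   # -> "table"
```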


International Conference on Document Analysis and Recognition | 2015

Adapting off-the-shelf CNNs for word spotting & recognition

Arjun Sharma; K. Pramod Sankar

The word spotting approach is extremely useful for searching and annotating documents for which robust recognizers are unavailable. Traditionally, hand-designed features were used to represent the word images for spotting. In this paper, we learn a data-driven representation for word-images from Convolutional Neural Networks (CNNs). Previous approaches that learn deep neural networks for a particular task/dataset are difficult to design and train for generic word spotting. Instead, by “adapting” a CNN trained for a different problem, we show tremendous speedup in the training phase. Our experiments show that features extracted from an adapted-CNN handsomely outperform hand-designed features on both spotting and recognition tasks for printed (English and Telugu) and handwritten (IAM) document collections.
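
A minimal sketch of reusing an off-the-shelf CNN as a fixed feature extractor for word spotting. It uses an ImageNet-pretrained ResNet-18 from torchvision purely for illustration (not the network or adaptation procedure from the paper), and ranks candidate word images by cosine similarity to a query image.

```python
# Off-the-shelf CNN as a feature extractor for word spotting (illustrative).
# Requires torchvision >= 0.13 for the string-valued weights argument.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torch.nn.functional import cosine_similarity

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()      # drop the classifier, keep 512-d features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

@torch.no_grad()
def embed(word_image):
    """word_image: a PIL.Image of a cropped word (converted to RGB)."""
    return backbone(preprocess(word_image.convert("RGB")).unsqueeze(0)).squeeze(0)

def spot(query_image, candidate_images, top_k=10):
    """Rank candidate word images by similarity to the query image."""
    q = embed(query_image)
    scores = [cosine_similarity(q, embed(c), dim=0).item() for c in candidate_images]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:top_k]
```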


International Journal on Document Analysis and Recognition | 2014

Large scale document image retrieval by automatic word annotation

K. Pramod Sankar; R. Manmatha; C. V. Jawahar

In this paper, we present a practical and scalable retrieval framework for large-scale document image collections, for an Indian language script that does not have a robust OCR. OCR-based methods face difficulties in character segmentation and recognition, especially for the complex Indian language scripts. We realize that character recognition is only an intermediate step toward actually labeling words. Hence, we re-pose the problem as one of directly performing word annotation. This new approach has better recognition performance, as well as easier segmentation requirements. However, the number of classes in word annotation is much larger than that for character recognition, making such a classification scheme expensive to train and test. To address this issue, we present a novel framework that replaces naive classification with a carefully designed mixture of indexing and classification schemes. This enables us to build a search system over a large collection of 1,000 Telugu books, consisting of 120K document images or 36M individual words. This is the largest searchable document image collection for a script without an OCR that we are aware of. Our retrieval system performs well, with a mean average precision of 0.8.


International Conference on Document Analysis and Recognition | 2013

Character N-Gram Spotting on Handwritten Documents Using Weakly-Supervised Segmentation

Udit Roy; Naveen Sankaran; K. Pramod Sankar; C. V. Jawahar

In this paper, we present a solution towards building a retrieval system over handwritten document images that i) is recognition-free, ii) allows text querying, iii) can retrieve at the sub-word level, and iv) can search for out-of-vocabulary words. Unlike previous approaches that operate at either the character or word level, we use character n-gram images (CNG-img) as the retrieval primitive. CNG-img are sequences of character segments that are represented and matched in the image space. Word-images are treated as a bag-of-CNG-img, which can be indexed and matched in the feature space. This allows for recognition-free search (query-by-example), which can retrieve morphologically similar words that have matching sub-words. Further, to enable query-by-keyword, we build an automated scheme to generate labeled exemplars for characters and character n-grams from unconstrained handwritten documents. We pose this as a weakly-supervised learning problem, where character/n-gram labeling is obtained automatically from the word labels. The resulting retrieval system can answer queries from an unlimited vocabulary. The approach is demonstrated on the George Washington collection; results show a major improvement in retrieval performance compared to word-recognition and word-spotting methods.
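
A much-simplified sketch of how word-level labels can yield labeled character n-gram exemplars, assuming a segmenter has already produced one image segment per character of the word. The paper's weakly-supervised alignment between segments and characters is more involved and is not shown here.

```python
# Harvest labeled character n-gram exemplars from word-level labels,
# under the simplifying assumption of one segment per character.
def harvest_ngram_exemplars(word_label, char_segments, max_n=3):
    """Return (ngram_text, list_of_segments) pairs for n = 1..max_n."""
    assert len(word_label) == len(char_segments)
    exemplars = []
    for n in range(1, max_n + 1):
        for i in range(len(word_label) - n + 1):
            exemplars.append((word_label[i:i + n], char_segments[i:i + n]))
    return exemplars

# Toy usage with placeholder segment identifiers
segments = ["seg0", "seg1", "seg2", "seg3"]
for text, segs in harvest_ngram_exemplars("word", segments):
    print(text, segs)
```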


International Conference on Document Analysis and Recognition | 2011

Character n-Gram Spotting in Document Images

M. Sudha Praveen; K. Pramod Sankar; C. V. Jawahar

In this paper, we present a novel approach to search and retrieve from document image collections without explicit recognition. Existing recognition-free approaches such as word spotting cannot scale to arbitrarily large vocabularies and document image collections. We put forth a framework that overcomes three issues of word spotting: i) retrieving word images not labeled during indexing, ii) allowing query and retrieval of morphological variations of words, and iii) scaling the retrieval to large collections. We propose a character n-gram spotting framework, where word-images are considered as a bag of visual n-grams. The character n-grams are represented in a visual-feature space and indexed for quick retrieval. In the retrieval phase, the query word is expanded to its constituent n-grams, which are used to query the previously built index. A ranking mechanism is proposed that combines the retrieval results from the multiple lists corresponding to each n-gram. The approach is demonstrated on a sizeable collection of English and Malayalam books. With a mean AP of 0.64, the performance of the retrieval system is very promising.
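
The retrieval phase can be sketched as follows, assuming an index mapping character n-grams to scored word-image postings has already been built. The index contents, score values, and the simple score-summing fusion are illustrative assumptions, not the ranking mechanism proposed in the paper.

```python
# Query expansion into character n-grams plus score fusion over an index
# that maps n-gram text -> list of (word_image_id, score) postings.
from collections import defaultdict

def char_ngrams(word, n_values=(2, 3)):
    return [word[i:i + n] for n in n_values for i in range(len(word) - n + 1)]

def retrieve(query, index, top_k=10):
    fused = defaultdict(float)
    for ng in char_ngrams(query):
        for image_id, score in index.get(ng, []):
            fused[image_id] += score
    return sorted(fused.items(), key=lambda kv: -kv[1])[:top_k]

# Toy index over three word images
index = {"th": [("img1", 0.9), ("img3", 0.4)],
         "he": [("img1", 0.8)],
         "the": [("img1", 0.7), ("img2", 0.2)]}
print(retrieve("the", index))
```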


Indian Conference on Computer Vision, Graphics and Image Processing | 2006

Enabling search over large collections of Telugu document images – an automatic annotation based approach

K. Pramod Sankar; C. V. Jawahar

For the first time, search is enabled over a massive collection of 21 million word images from digitized document images. This work advances the state-of-the-art on multiple fronts: i) Indian language document images are made searchable by textual queries, ii) interactive content-level access is provided to document images for search and retrieval, iii) a novel recognition-free approach, which does not require an OCR, is adapted and validated, iv) a suite of image processing and pattern classification algorithms is proposed to efficiently automate the process, and v) the scalability of the solution is demonstrated over a large collection of 500 digitised books consisting of 75,000 pages. Character recognition based approaches yield poor results for developing search engines for Indian language document images, due to the complexity of the script and the poor quality of the documents. Recognition-free approaches, based on word spotting, are not directly scalable to large collections, due to the computational complexity of matching images in the feature space. For example, if it takes 1 ms to match two images, retrieving documents for a single query from a collection as large as ours would take close to a day. In this paper we propose a novel automatic annotation based approach to provide textual descriptions of document images. With a one-time, offline computational effort, we are able to build a text-based retrieval system over the annotated images. This system has an interactive response time of about 0.01 seconds. However, we pay the price in the form of massive offline computation, performed on a cluster of 35 computers for about a month. Our procedure is highly automatic, requiring minimal human intervention.


Computer Vision and Pattern Recognition | 2011

Interpolation Based Tracking for Fast Object Detection in Videos

Rahul Jain; K. Pramod Sankar; C. V. Jawahar

Detecting objects in images and videos is very challenging due to i) large intra-class variety and ii) pose/scale variations. It is hard to build strong recognition engines for generic object categories, and applying them to large video collections is computationally infeasible (due to the explosion of frames to test). In this paper, we present a detection-by-interpolation framework, where object tracking is achieved by interpolating between candidate object detections in a subset of the video frames. Given the location of an object in two frames of a video shot, our algorithm identifies the locations of the object in the intermediate frames. We evaluate two tracking solutions based on greedy and dynamic programming approaches, and observe that a hybrid method gives a significant performance boost as well as a speedup in detection. On 6 hours of HD quality video, we were able to cut down the detection time from 10,000 hours to 1,500 hours, while simultaneously improving the detection accuracy from 54% to 68%. As a result of this work, we build a dataset of 100,000 car images, spanning a wide range of viewpoints, scales and makes, about 100 times larger than existing collections [2], [3].
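
A bare-bones sketch of the interpolation hypothesis: given an object's bounding box in two keyframes of a shot, boxes in the intermediate frames are estimated by linear interpolation. The greedy/dynamic-programming refinement of these hypotheses against image evidence, which the paper evaluates, is not shown.

```python
# Linear interpolation of bounding boxes between two keyframe detections.
def interpolate_boxes(box_a, box_b, num_intermediate):
    """box_a, box_b: (x1, y1, x2, y2) boxes in the two keyframes."""
    boxes = []
    for t in range(1, num_intermediate + 1):
        alpha = t / (num_intermediate + 1)
        boxes.append(tuple((1 - alpha) * a + alpha * b for a, b in zip(box_a, box_b)))
    return boxes

# Object detected at frame 0 and frame 10; estimate its box in frames 1..9
print(interpolate_boxes((10, 20, 60, 80), (110, 30, 160, 90), num_intermediate=9))
```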

Collaboration


Dive into K. Pramod Sankar's collaborations.

Top Co-Authors

C. V. Jawahar, International Institute of Information Technology
Naveen Sankaran, International Institute of Information Technology
R. Manmatha, University of Massachusetts Amherst
Lakshmi Pratha, International Institute of Information Technology
M. Sudha Praveen, International Institute of Information Technology
Mihir Jain, International Institute of Information Technology
Rahul Jain, International Institute of Information Technology
Rahul Sharma, International Institute of Information Technology