
Publications


Featured research published by Mukund Narasimhan.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2005

Learning to extract information from semi-structured text using a discriminative context free grammar

Paul A. Viola; Mukund Narasimhan

In recent work, conditional Markov chain models (CMMs) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long-range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new, more powerful features can be introduced. The grammar-based approach also yields semantic information (encoded in the form of a parse tree) which could be used for IR applications such as question answering. The specific problem we consider is that of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).
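As a rough illustration of why a grammar can enforce global structure that a linear chain cannot, the sketch below runs CKY chart parsing over a toy contact-record grammar. The grammar, tokens, and nonterminal names are invented for illustration; they are not the paper's learned, discriminatively trained grammar.

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form: a RECORD is a NAME followed by a
# PHONE, and a NAME is a FIRST followed by a LAST. All rules invented.
UNARY = {                      # preterminal -> surface tokens it covers
    "FIRST": {"mary"},
    "LAST": {"smith"},
    "PHONE": {"555-0100"},
}
BINARY = [                     # (parent, left child, right child)
    ("NAME", "FIRST", "LAST"),
    ("RECORD", "NAME", "PHONE"),
]

def cky(tokens):
    n = len(tokens)
    # chart[(i, j)] = set of nonterminals spanning tokens[i:j]
    chart = defaultdict(set)
    for i, tok in enumerate(tokens):
        for nt, words in UNARY.items():
            if tok in words:
                chart[(i, i + 1)].add(nt)
    for width in range(2, n + 1):          # classic O(n^3) loop structure
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):      # split point
                for parent, left, right in BINARY:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        chart[(i, j)].add(parent)
    return chart

chart = cky(["mary", "smith", "555-0100"])
print("RECORD" in chart[(0, 3)])  # prints True: the whole span is a record
```

Unlike a left-to-right chain, the RECORD rule constrains the entire span at once: swapping the first and last name tokens makes the whole-record parse fail, a global effect a local transition model cannot express directly.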


International Conference on Document Analysis and Recognition | 2005

Efficient geometric algorithms for parsing in two dimensions

Percy Liang; Mukund Narasimhan; Michael Shilman; Paul A. Viola

Grammars are a powerful technique for modeling and extracting the structure of documents. One large challenge, however, is computational complexity. The computational cost of grammatical parsing is related to both the complexity of the input and the ambiguity of the grammar. For programming languages, where the terminals appear in a linear sequence and the grammar is unambiguous, parsing is O(N). For natural languages, which are linear yet have an ambiguous grammar, parsing is O(N³). For documents, where the terminals are arranged in two dimensions and the grammar is ambiguous, parsing time can be exponential in the number of terminals. In this paper we introduce (and unify) several types of geometric data structures which can be used to significantly accelerate parsing. Each data structure embodies a different geometric constraint on the set of possible valid parses. These data structures are very general, in that they can be used by any type of grammatical model, and for a wide variety of document understanding tasks, to limit the set of hypotheses examined and tested. Assuming a clean design for the parsing software, the same parsing framework can be tested with various geometric constraints to determine the most effective combination.
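To make the pruning idea concrete, here is one plausible geometric constraint of the kind described: only consider a set of terminals as a candidate region if no other terminal intrudes into the region's bounding box. The predicate name, the rectangle layout, and the constraint itself are illustrative assumptions, not necessarily the paper's exact data structures.

```python
# Terminals are axis-aligned boxes (x0, y0, x1, y1).

def bbox(boxes):
    """Bounding box of a collection of terminal boxes."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

def overlaps(a, b):
    """True if two boxes intersect with positive area."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def is_valid_region(region, all_boxes):
    """True if no terminal outside `region` intrudes into its bounding box,
    so the region is worth offering to the parser as a hypothesis."""
    hull = bbox([all_boxes[i] for i in region])
    outside = set(range(len(all_boxes))) - set(region)
    return all(not overlaps(hull, all_boxes[i]) for i in outside)

# Two text lines stacked vertically; each line is a valid region,
# but a region mixing one word from each line is not.
boxes = [(0, 0, 10, 10), (12, 0, 22, 10),    # line 1: words 0, 1
         (0, 12, 10, 22), (12, 12, 22, 22)]  # line 2: words 2, 3
print(is_valid_region({0, 1}, boxes))  # prints True  (line 1 alone)
print(is_valid_region({0, 3}, boxes))  # prints False (diagonal pair)
```

A filter like this shrinks the hypothesis space from all 2^N subsets of terminals to the usually much smaller family of geometrically coherent regions, which is where the speedup over naive 2D parsing comes from.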


International Conference on Machine Learning | 2006

Online decoding of Markov models under latency constraints

Mukund Narasimhan; Paul A. Viola; Michael Shilman

The Viterbi algorithm is an efficient and optimal method for decoding linear-chain Markov models. However, the entire input sequence must be observed before the labels for any time step can be generated, so Viterbi cannot be directly applied to online/interactive/streaming scenarios without incurring significant (possibly unbounded) latency. A widely used approach is to break the input stream into fixed-size windows and apply Viterbi to each window. Larger windows lead to higher accuracy, but result in higher latency. We propose several alternatives to the fixed-size window decoding approach. These approaches compute a certainty measure on predicted labels that allows us to trade off latency for expected accuracy dynamically, without having to choose a fixed window size up front. Not surprisingly, this more principled approach gives a substantial improvement over choosing a fixed window. We show the effectiveness of the approach for the task of spotting semi-structured information in large documents. Compared to full Viterbi, the approach suffers only a 0.1 percent error degradation with an average latency of 2.6 time steps (versus the potentially unbounded latency of Viterbi). Compared to fixed-window Viterbi, we achieve a 40x reduction in error and a 6x reduction in latency.
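The sketch below realizes one simple version of the "commit when certain" idea: a label is emitted as soon as every surviving Viterbi path agrees on it. This all-paths-agree criterion is a standard illustration of early commitment, not necessarily the paper's certainty measure, and the toy scores are invented.

```python
def online_viterbi(emit, trans):
    """emit: per-time dicts {state: log-score}; trans: {(prev, cur): log-score}.
    Yields (position, label) as soon as the label can be committed."""
    states = list(emit[0])
    score = {s: emit[0][s] for s in states}   # best log-score per state
    path = {s: [s] for s in states}           # full backpath per state
    committed = 0                             # next position to emit
    for t in range(1, len(emit)):
        new_score, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: score[p] + trans[(p, s)])
            new_score[s] = score[prev] + trans[(prev, s)] + emit[t][s]
            new_path[s] = path[prev] + [s]
        score, path = new_score, new_path
        # commit every prefix position on which all surviving paths agree;
        # latency at time t is t - committed
        while committed <= t and len({path[s][committed] for s in states}) == 1:
            yield committed, path[states[0]][committed]
            committed += 1
    # end of stream: flush the remainder of the single best path
    best = max(states, key=lambda s: score[s])
    for pos in range(committed, len(emit)):
        yield pos, path[best][pos]

emit = [{"A": 0.0, "B": -1.0}, {"A": 0.0, "B": 0.0}, {"A": -3.0, "B": 0.0}]
trans = {("A", "A"): 0.0, ("A", "B"): -1.0, ("B", "A"): -1.0, ("B", "B"): 0.0}
print(list(online_viterbi(emit, trans)))  # → [(0, 'A'), (1, 'A'), (2, 'B')]
```

In the example the first two labels are committed while the stream is still being read (their latency is bounded), whereas full Viterbi would hold all output until the end; keeping full backpaths keeps the sketch short at the cost of memory.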


International Conference on Acoustics, Speech, and Signal Processing | 2007

A Generative-Discriminative Framework using Ensemble Methods for Text-Dependent Speaker Verification

Amarnag Subramanya; Zhengyou Zhang; Arun C. Surendran; Patrick Nguyen; Mukund Narasimhan; Alex Acero

Speaker verification can be treated as a statistical hypothesis testing problem. The most commonly used approach is the likelihood ratio test (LRT), which can be shown to be optimal using the Neyman-Pearson lemma. However, in most practical situations the Neyman-Pearson lemma does not apply. In this paper, we present a more robust approach that makes use of a hybrid generative-discriminative framework for text-dependent speaker verification. Our algorithm uses generative models to learn the characteristics of a speaker and then discriminative models to discriminate between a speaker and an impostor. One advantage of the proposed algorithm is that it does not require us to retrain the generative model. The proposed model yields, on average, a 36.41% relative improvement in EER over an LRT.
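For reference, the LRT baseline the abstract compares against can be sketched in a few lines, with 1-D Gaussians standing in for real speaker and background (impostor) models; all parameters and thresholds below are invented for illustration.

```python
import math

def log_gauss(x, mu, sigma):
    """Log-density of a 1-D Gaussian at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def llr(frames, speaker, background):
    """Average log-likelihood ratio of frames under two (mu, sigma) models."""
    return sum(log_gauss(x, *speaker) - log_gauss(x, *background)
               for x in frames) / len(frames)

def verify(frames, speaker, background, threshold=0.0):
    # Accept the claimed identity when the speaker model explains the
    # frames sufficiently better than the background model.
    return llr(frames, speaker, background) > threshold

speaker, background = (1.0, 0.5), (0.0, 1.0)
print(verify([0.9, 1.1, 1.0], speaker, background))   # prints True
print(verify([-0.2, 0.1, 0.0], speaker, background))  # prints False
```

Sweeping `threshold` trades false accepts against false rejects; the equal error rate (EER) quoted in the abstract is the operating point where the two error rates coincide.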


Uncertainty in Artificial Intelligence | 2005

A submodular-supermodular procedure with applications to discriminative structure learning

Mukund Narasimhan; Jeff A. Bilmes


Archive | 2005

Extracting data from semi-structured information utilizing a discriminative context free grammar

Paul A. Viola; Mukund Narasimhan; Michael Shilman


Uncertainty in Artificial Intelligence | 2004

PAC-learning bounded tree-width graphical models

Mukund Narasimhan; Jeff A. Bilmes


Neural Information Processing Systems | 2005

Q-Clustering

Mukund Narasimhan; Nebojsa Jojic; Jeff A. Bilmes


Knowledge Discovery and Data Mining | 2008

Learning from multi-topic web documents for contextual advertisement

Yi Zhang; Arun C. Surendran; John Platt; Mukund Narasimhan


Archive | 2007

Delivery of contextually relevant web data

Baskaran Dharmarajan; Dennis Takchi Cheung; Eliot Spencer Savarese; Mukund Narasimhan; Imran Khan; Denise K. Ho

Collaboration


Dive into Mukund Narasimhan's collaborations.

Top Co-Authors
Jeff A. Bilmes

University of Washington

Gregory Druck

University of Massachusetts Amherst
