
Publications


Featured research published by Kamal Nigam.


Machine Learning | 2000

Text Classification from Labeled and Unlabeled Documents using EM

Kamal Nigam; Andrew McCallum; Sebastian Thrun; Tom M. Mitchell

This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.
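The basic EM procedure the abstract describes (train on labeled documents, probabilistically label the unlabeled ones, retrain on everything, iterate) can be sketched in NumPy. This is a minimal illustration assuming a multinomial naive Bayes model with Laplace smoothing; the function names and the responsibility-matrix bookkeeping are ours, not the paper's implementation, and the extensions (unlabeled-data weighting, multiple mixture components per class) are omitted.

```python
import numpy as np

def em_naive_bayes(X_lab, y_lab, X_unlab, n_classes, n_iter=10, alpha=1.0):
    """Semi-supervised multinomial naive Bayes via EM (illustrative sketch).
    X_* are document-by-word count matrices; alpha is Laplace smoothing."""
    n_unlab = X_unlab.shape[0]
    # Responsibilities: hard one-hot labels for labeled docs,
    # uniform class membership for unlabeled docs to start.
    R_lab = np.eye(n_classes)[y_lab]
    R_unlab = np.full((n_unlab, n_classes), 1.0 / n_classes)
    for _ in range(n_iter):
        R = np.vstack([R_lab, R_unlab])
        X = np.vstack([X_lab, X_unlab])
        # M-step: class priors and per-class word distributions,
        # weighted by the current responsibilities.
        priors = R.sum(axis=0) / R.sum()
        counts = R.T @ X + alpha              # (n_classes, vocab_size)
        theta = counts / counts.sum(axis=1, keepdims=True)
        # E-step: re-estimate class membership of the unlabeled docs.
        log_p = np.log(priors) + X_unlab @ np.log(theta).T
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(log_p)
        R_unlab = p / p.sum(axis=1, keepdims=True)
    return priors, theta

def nb_predict(priors, theta, X):
    """Classify count vectors with the learned naive Bayes parameters."""
    return np.argmax(np.log(priors) + X @ np.log(theta).T, axis=1)
```

Note that the labeled documents keep their hard labels on every iteration; only the unlabeled documents' soft assignments are re-estimated, which is what lets the unlabeled pool sharpen the word distributions without overriding the supervision.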


knowledge discovery and data mining | 2000

Efficient clustering of high-dimensional data sets with application to reference matching

Andrew McCallum; Kamal Nigam; Lyle H. Ungar

Many important problems involve clustering large datasets. Although naive implementations of clustering are computationally expensive, there are established efficient techniques for clustering when the dataset has either (1) a limited number of clusters, (2) a low feature dimensionality, or (3) a small number of data points. However, there has been much less work on methods of efficiently clustering datasets that are large in all three ways at once, for example, having millions of data points that exist in many thousands of dimensions representing many thousands of clusters. We present a new technique for clustering these large, high-dimensional datasets. The key idea involves using a cheap, approximate distance measure to efficiently divide the data into overlapping subsets we call canopies. Then clustering is performed by measuring exact distances only between points that occur in a common canopy. Using canopies, large clustering problems that were formerly impossible become practical. Under reasonable assumptions about the cheap distance metric, this reduction in computational cost comes without any loss in clustering accuracy. Canopies can be applied to many domains and used with a variety of clustering approaches, including Greedy Agglomerative Clustering, K-means and Expectation-Maximization. We present experimental results on grouping bibliographic citations from the reference sections of research papers. Here the canopy approach reduces computation time over a traditional clustering approach by more than an order of magnitude and decreases error in comparison to a previously used algorithm by 25%.
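The canopy-construction step the abstract describes can be sketched as follows. This is an illustrative reading of the technique, not the paper's code: the cheap approximate metric is passed in as a function, and `T1` and `T2` are the loose and tight distance thresholds (a point within `T2` of a canopy center can never start a new canopy, while points within `T1` may belong to several overlapping canopies).

```python
def build_canopies(points, cheap_dist, T1, T2):
    """Divide data into overlapping canopies using a cheap distance measure.
    Returns a list of canopies, each a list of indices into `points`.
    Requires T1 > T2 (loose threshold strictly larger than tight one)."""
    assert T1 > T2
    remaining = set(range(len(points)))
    canopies = []
    while remaining:
        center = next(iter(remaining))      # pick an arbitrary remaining point
        canopy = []
        for i in list(remaining):
            d = cheap_dist(points[center], points[i])
            if d < T1:
                canopy.append(i)            # loosely within this canopy
            if d < T2:
                remaining.discard(i)        # tightly bound: never a new center
        canopies.append(canopy)
    return canopies
```

The expensive, exact distance metric is then evaluated only between pairs of points that share at least one canopy, which is where the order-of-magnitude savings reported in the abstract comes from.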


conference on information and knowledge management | 2000

Analyzing the effectiveness and applicability of co-training

Kamal Nigam; Rayid Ghani

Recently there has been significant interest in supervised learning algorithms that combine labeled and unlabeled data for text learning tasks. The co-training setting [1] applies to datasets that have a natural separation of their features into two disjoint sets. We demonstrate that when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not. When a natural split does not exist, co-training algorithms that manufacture a feature split may out-perform algorithms not using a split. These results help explain why co-training algorithms are both discriminative in nature and robust to the assumptions of their embedded classifiers.
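The co-training setting referenced above can be sketched as two classifiers, one per feature view, that bootstrap each other: each round, each classifier labels the unlabeled examples it is most confident about and adds them to the shared labeled pool. This is a simplified illustration using scikit-learn's `MultinomialNB`; the round count `n_rounds` and per-round growth `k` are illustrative parameters, not values from the paper.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def co_train(X1, X2, y, X1_un, X2_un, n_rounds=10, k=2):
    """Co-training sketch: X1/X2 are the two disjoint feature views of the
    labeled pool, X1_un/X2_un the same views of the unlabeled pool."""
    X1, X2, y = X1.copy(), X2.copy(), y.copy()
    for _ in range(n_rounds):
        if X1_un.shape[0] == 0:
            break
        c1 = MultinomialNB().fit(X1, y)
        c2 = MultinomialNB().fit(X2, y)
        # Each classifier nominates the k unlabeled examples it is most
        # confident about, labeled with its own prediction.
        new_labels = {}
        for clf, X_un in ((c1, X1_un), (c2, X2_un)):
            probs = clf.predict_proba(X_un)
            for i in np.argsort(probs.max(axis=1))[-k:]:
                new_labels[int(i)] = clf.classes_[probs[i].argmax()]
        idx = np.array(sorted(new_labels))
        # Move the nominated examples into the labeled pool (both views).
        X1 = np.vstack([X1, X1_un[idx]])
        X2 = np.vstack([X2, X2_un[idx]])
        y = np.concatenate([y, [new_labels[int(i)] for i in idx]])
        keep = np.setdiff1d(np.arange(X1_un.shape[0]), idx)
        X1_un, X2_un = X1_un[keep], X2_un[keep]
    return MultinomialNB().fit(X1, y), MultinomialNB().fit(X2, y)
```

Because each view's classifier supplies labels the other view then learns from, the procedure only helps when the two feature sets are (approximately) independent given the class, which is precisely the assumption the paper analyzes.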


Information Retrieval | 2000

Automating the Construction of Internet Portals with Machine Learning

Andrew McCallum; Kamal Nigam; Jason D. M. Rennie; Kristie Seymore

Domain-specific internet portals are growing in popularity because they gather content from the Web and organize it for easy access, retrieval and search. For example, www.campsearch.com allows complex queries by age, location, cost and specialty over summer camps. This functionality is not possible with general, Web-wide search engines. Unfortunately these portals are difficult and time-consuming to maintain. This paper advocates the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific Internet portals. We describe new research in reinforcement learning, information extraction and text classification that enables efficient spidering, the identification of informative text segments, and the population of topic hierarchies. Using these techniques, we have built a demonstration system: a portal for computer science research papers. It already contains over 50,000 papers and is publicly available at www.cora.justresearch.com. These techniques are widely applicable to portal creation in other domains.


Artificial Intelligence | 2000

Learning to construct knowledge bases from the World Wide Web

Mark Craven; Dan DiPasquo; Dayne Freitag; Andrew McCallum; Tom M. Mitchell; Kamal Nigam; Seán Slattery

The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) and relations (e.g., employed_by, produced_by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.


knowledge discovery and data mining | 2005

Deriving marketing intelligence from online discussion

Natalie S. Glance; Matthew Hurst; Kamal Nigam; Matthew Siegler; Robert Stockton; Takashi Tomokiyo

Weblogs and message boards provide online forums for discussion that record the voice of the public. Woven into this mass of discussion is a wide range of opinion and commentary about consumer products. This presents an opportunity for companies to understand and respond to the consumer by analyzing this unsolicited feedback. Given the volume, format and content of the data, the appropriate approach to understand this data is to use large-scale web and text data mining technologies. This paper argues that applications for mining large volumes of textual data for marketing intelligence should provide two key elements: a suite of powerful mining and visualization technologies and an interactive analysis environment which allows for rapid generation and testing of hypotheses. This paper presents such a system that gathers and annotates online discussion relating to consumer products using a wide variety of state-of-the-art techniques, including crawling, wrapping, search, text classification and computational linguistics. Marketing intelligence is derived through an interactive analysis framework uniquely configured to leverage the connectivity and content of annotated online discussion.


international world wide web conferences | 2005

Analyzing online discussion for marketing intelligence

Natalie S. Glance; Matthew Hurst; Kamal Nigam; Matthew Siegler; Robert Stockton; Takashi Tomokiyo

We present a system that gathers and analyzes online discussion as it relates to consumer products. Weblogs and online message boards provide forums that record the voice of the public. Woven into this discussion is a wide range of opinion and commentary about consumer products. Given its volume, format and content, the appropriate approach to understanding this data is large-scale web and text data mining. By using a wide variety of state-of-the-art techniques including crawling, wrapping, text classification and computational linguistics, online discussion is gathered and annotated within a framework that provides for interactive analysis that yields marketing intelligence for our customers.


Archive | 1998

A Comparison of Event Models for Naive Bayes Text Classification

Andrew McCallum; Kamal Nigam


Archive | 1999

Using Maximum Entropy for Text Classification

Kamal Nigam; John D. Lafferty; Andrew McCallum


national conference on artificial intelligence | 1998

Learning to extract symbolic knowledge from the World Wide Web

Mark Craven; Dan DiPasquo; Dayne Freitag; Andrew McCallum; Tom M. Mitchell; Kamal Nigam; Seán Slattery

Collaboration


Dive into Kamal Nigam's collaborations.

Top Co-Authors

Andrew McCallum (University of Massachusetts Amherst)

Tom M. Mitchell (Carnegie Mellon University)

Jason D. M. Rennie (Massachusetts Institute of Technology)

Kristie Seymore (Carnegie Mellon University)

Mark Craven (University of Wisconsin-Madison)

Dayne Freitag (Carnegie Mellon University)

Seán Slattery (Carnegie Mellon University)

Dan DiPasquo (Carnegie Mellon University)