Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Dekang Lin is active.

Publication


Featured research published by Dekang Lin.


Meeting of the Association for Computational Linguistics | 2006

Bootstrapping Path-Based Pronoun Resolution

Shane Bergsma; Dekang Lin

We present an approach to pronoun resolution based on syntactic paths. Through a simple bootstrapping procedure, we learn the likelihood of coreference between a pronoun and a candidate noun based on the path in the parse tree between the two entities. This path information enables us to handle previously challenging resolution instances, and also robustly addresses traditional syntactic coreference constraints. Highly coreferent paths also allow mining of precise probabilistic gender/number information. We combine statistical knowledge with well-known features in a Support Vector Machine pronoun resolution classifier. Significant gains in performance are observed on several datasets.
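
To make the path statistic concrete, here is a minimal sketch (not the authors' code) of estimating the coreference likelihood of a syntactic path from bootstrapped counts; the path strings and the agreement flag are invented placeholders for parser output and the gender/number test:

```python
from collections import defaultdict

def path_coreference_likelihood(path_instances):
    """Estimate P(coreferent | path) from bootstrapped counts: how often
    a noun-to-pronoun path connects mentions whose gender/number agree,
    used as a cheap proxy for coreference."""
    coref = defaultdict(int)
    total = defaultdict(int)
    for path, agrees in path_instances:
        total[path] += 1
        if agrees:
            coref[path] += 1
    return {p: coref[p] / total[p] for p in total}

# Hypothetical instances: (parse-tree path, gender/number agreement flag)
instances = [("N:subj:V->V:obj:PRO", True),
             ("N:subj:V->V:obj:PRO", False),
             ("N:poss:N->N:subj:PRO", True)]
print(path_coreference_likelihood(instances))  # path -> likelihood
```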


Meeting of the Association for Computational Linguistics | 2006

Names and Similarities on the Web: Fact Extraction in the Fast Lane

Marius Pasca; Dekang Lin; Jeffrey P. Bigham; Andrei Lifchits; Alpa Jain

In a new approach to large-scale extraction of facts from unstructured text, distributional similarities become an integral part of both the iterative acquisition of high-coverage contextual extraction patterns, and the validation and ranking of candidate facts. The evaluation measures the quality and coverage of facts extracted from one hundred million Web documents, starting from ten seed facts and using no additional knowledge, lexicons or complex tools.
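
As a loose illustration of the role distributional similarity plays here, the sketch below ranks candidate facts by how similar their extraction-context word is to a seed-derived pattern word; the similarity table is a hard-coded stand-in for corpus-derived distributional scores, and all names are invented:

```python
# Toy stand-in for corpus-derived distributional similarities.
SIM = {("born", "born"): 1.0, ("native", "born"): 0.7, ("made", "born"): 0.1}

def score(context_word, pattern_word="born"):
    """Score a candidate fact by the similarity of its extraction
    context to a pattern acquired from the seed facts."""
    return SIM.get((context_word, pattern_word), 0.0)

# Hypothetical candidates: (entity, year, context word around the match).
candidates = [("Mozart", "1756", "born"),
              ("Chopin", "1810", "native"),
              ("Widget", "1756", "made")]
for entity, year, ctx in sorted(candidates, key=lambda c: -score(c[2])):
    print(entity, year, score(ctx))
```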


Empirical Methods in Natural Language Processing | 2008

Discriminative Learning of Selectional Preference from Unlabeled Text

Shane Bergsma; Dekang Lin; Randy Goebel

We present a discriminative method for learning selectional preferences from unlabeled text. Positive examples are taken from observed predicate-argument pairs, while negatives are constructed from unobserved combinations. We train a Support Vector Machine classifier to distinguish the positive from the negative instances. We show how to partition the examples for efficient training with 57 thousand features and 6.5 million training instances. The model outperforms other recent approaches, achieving excellent correlation with human plausibility judgments. Compared to Mutual Information, it identifies 66% more verb-object pairs in unseen text, and resolves 37% more pronouns correctly in a pronoun resolution experiment.
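
A minimal sketch of the training-data construction the abstract describes, with scikit-learn's LinearSVC standing in for the paper's SVM setup and a deliberately tiny, invented feature set:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Observed verb-object pairs serve as positive examples.
observed = [("eat", "apple"), ("eat", "bread"), ("drive", "car"), ("read", "book")]
verbs = sorted({v for v, _ in observed})
nouns = sorted({n for _, n in observed})

# Unobserved recombinations serve as pseudo-negative examples.
negatives = [(v, n) for v in verbs for n in nouns if (v, n) not in observed]

def features(v, n):
    return {"verb=" + v: 1, "noun=" + n: 1, "pair=%s_%s" % (v, n): 1}

X_dicts = [features(v, n) for v, n in observed + negatives]
y = [1] * len(observed) + [0] * len(negatives)

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X_dicts), y)
print(clf.predict(vec.transform([features("eat", "car")])))  # plausibility guess
```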


International Workshop/Conference on Parsing Technologies | 2005

Strictly Lexical Dependency Parsing

Qin Iris Wang; Dale Schuurmans; Dekang Lin

We present a strictly lexical parsing model where all the parameters are based on the words. This model does not rely on part-of-speech tags or grammatical categories. It maximizes the conditional probability of the parse tree given the sentence. This is in contrast with most previous models, which compute the joint probability of the parse tree and the sentence. Although maximizing the joint probability and maximizing the conditional probability are theoretically equivalent, the conditional model allows us to use distributional word similarity to generalize the observed frequency counts in the training corpus. Our experiments with the Chinese Treebank show that the accuracy of the conditional model is 13.6% higher than that of the joint model and that the strictly lexicalized conditional model outperforms the corresponding unlexicalized model based on part-of-speech tags.
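
The key move the abstract describes, generalizing observed frequency counts with distributional word similarity, can be sketched as similarity-weighted smoothing; both tables below are toy stand-ins for corpus statistics:

```python
# Similarity-smoothed count: generalize an observed frequency count
# count(head, arg) over distributionally similar heads.
COUNT = {("ate", "apple"): 5, ("ate", "pear"): 0, ("devoured", "apple"): 2}
SIM = {("ate", "ate"): 1.0, ("ate", "devoured"): 0.8}

def smoothed_count(head, arg):
    """Sum counts of similar words, weighted by their similarity to head."""
    return sum(s * COUNT.get((h, arg), 0)
               for (w, h), s in SIM.items() if w == head)

print(smoothed_count("ate", "apple"))  # 5*1.0 + 2*0.8 = 6.6
```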


Archive | 2004

Iterative Denoising for Cross-Corpus Discovery

Carey E. Priebe; David J. Marchette; Youngser Park; Edward J. Wegman; Jeffrey L. Solka; Diego A. Socolinsky; Damianos Karakos; Kenneth Ward Church; Roland Guglielmi; Ronald R. Coifman; Dekang Lin; Dennis M. Healy; Marc Q. Jacobs; Anna Tsao

We consider the problem of statistical pattern recognition in a heterogeneous, high-dimensional setting. In particular, we consider the search for meaningful cross-category associations in a heterogeneous text document corpus. Our approach involves “iterative denoising”: iteratively extracting (corpus-dependent) features and partitioning the document collection into sub-corpora. We present an anecdote wherein this methodology discovers a meaningful cross-category association in a heterogeneous collection of scientific documents.
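
A minimal sketch of the iterate-extract-partition loop, using TF-IDF features and k-means as illustrative stand-ins for the paper's corpus-dependent features and partitioning:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def iterative_denoise(docs, depth=2, k=2):
    """Recursively extract features from the current sub-corpus and
    partition it; features are re-derived at every level, so they
    depend on the sub-corpus being split."""
    if depth == 0 or len(docs) < k:
        return docs
    X = TfidfVectorizer().fit_transform(docs)  # corpus-dependent features
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return [iterative_denoise([d for d, l in zip(docs, labels) if l == c],
                              depth - 1, k)
            for c in range(k)]

corpus = ["gene protein expression", "protein folding pathways",
          "stock market index", "market volatility models"]
print(iterative_denoise(corpus))
```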


Conference on Computational Natural Language Learning | 2009

Glen, Glenda or Glendale: Unsupervised and Semi-supervised Learning of English Noun Gender

Shane Bergsma; Dekang Lin; Randy Goebel

English pronouns like he and they reliably reflect the gender and number of the entities to which they refer. Pronoun resolution systems can use this fact to filter noun candidates that do not agree with the pronoun gender. Indeed, broad-coverage models of noun gender have proved to be the most important source of world knowledge in automatic pronoun resolution systems.

Previous approaches predict gender by counting the co-occurrence of nouns with pronouns of each gender class. While this provides useful statistics for frequent nouns, many infrequent nouns cannot be classified using this method. Rather than using co-occurrence information directly, we use it to automatically annotate training examples for a large-scale discriminative gender model. Our model collectively classifies all occurrences of a noun in a document using a wide variety of contextual, morphological, and categorical gender features. By leveraging large volumes of unlabeled data, our full semi-supervised system reduces error by 50% over the existing state-of-the-art in gender classification.
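
A rough sketch of the auto-annotation step: pronoun co-occurrence counts label frequent nouns, which then supply training examples for a discriminative model over morphological features. The counts and features below are invented toys, and LogisticRegression stands in for the paper's far richer large-scale model:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Step 1: auto-label frequent nouns by their dominant pronoun class
# (toy counts standing in for web-scale co-occurrence statistics).
COOC = {"actress": {"she": 40, "he": 2, "it": 1},
        "actor": {"he": 35, "she": 3, "it": 1},
        "table": {"it": 50, "he": 1, "she": 1}}
auto_labels = {n: max(c, key=c.get) for n, c in COOC.items()}

# Step 2: train a discriminative gender model on morphological features.
def feats(noun):
    return {"suffix2=" + noun[-2:]: 1, "suffix3=" + noun[-3:]: 1}

nouns = list(auto_labels)
vec = DictVectorizer()
X = vec.fit_transform([feats(n) for n in nouns])
clf = LogisticRegression(max_iter=1000).fit(X, [auto_labels[n] for n in nouns])

# The model can now guess gender for an infrequent noun the counts miss.
print(clf.predict(vec.transform([feats("waitress")])))
```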


International Conference on Computational Linguistics | 2009

Combining Language Modeling and Discriminative Classification for Word Segmentation

Dekang Lin

Generative language modeling and discriminative classification are two main techniques for Chinese word segmentation. Most previous methods have adopted one of the techniques. We present a hybrid model that combines the disambiguation power of language modeling and the ability of discriminative classifiers to deal with out-of-vocabulary words. We show that the combined model achieves 9% error reduction over the discriminative classifier alone.
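
A minimal sketch of such a combination: a language-model score over candidate segmentations, interpolated with a discriminative score that can still fire on out-of-vocabulary words. Everything here, including the lexicon, the stand-in classifier, and the weight, is a toy assumption:

```python
import math

LM = {"研究": 0.01, "生命": 0.008, "研究生": 0.005, "命": 0.001}  # toy unigram LM
OOV_PENALTY = 1e-6  # probability floor for out-of-vocabulary words

def lm_score(seg):
    return sum(math.log(LM.get(w, OOV_PENALTY)) for w in seg)

def classifier_score(seg):
    # Stand-in for a discriminative boundary classifier; here it just
    # mildly prefers two-character words, a common heuristic.
    return sum(0.5 if len(w) == 2 else 0.0 for w in seg)

def combined(seg, lam=0.3):
    return lm_score(seg) + lam * classifier_score(seg)

cands = [["研究", "生命"], ["研究生", "命"]]
print(max(cands, key=combined))
```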


Trends in Parsing Technology | 2010

Strictly Lexicalised Dependency Parsing

Qin Iris Wang; Dale Schuurmans; Dekang Lin

There has been a great deal of progress in statistical parsing in the past decade (Collins, 1996, 1997; Charniak, 2000). A common characteristic of these previous generative parsers is their use of lexical statistics. However, it was subsequently discovered that bi-lexical statistics (parameters that involve two words) actually play a much smaller role than previously believed. Gildea (2001) found that removing bi-lexical statistics from a state-of-the-art PCFG parser changed its output very little. Bikel (2004) observed that only 1.49% of the bi-lexical statistics needed in parsing were found in the training corpus; when considering only the bigram statistics involved in the highest-probability parse, this percentage rises to 28.8%. However, even when bi-lexical statistics do get used, they are remarkably similar to their back-off values based on part-of-speech tags. The utility of bi-lexical statistics therefore becomes rather questionable. Klein and Manning (2003) present an unlexicalized parser that eliminates all lexical parameters, with performance close to that of state-of-the-art lexicalized parsers.


Conference on Artificial Intelligence for Applications | 1990

A minimal connection model of abductive diagnostic reasoning

Dekang Lin; Randy Goebel

A minimal connection model of abductive diagnostic reasoning is presented. The domain knowledge is represented by a causal network. An explanation of a set of observations is a chain of causation events; these causation events constitute a scenario in which all the observations can be observed. The authors define the best explanation to be the most probable explanation. The underlying causal model makes it possible to compute the probabilities of explanations from the conditional probabilities of the participating causation events. An algorithm for finding the most probable explanations is presented. Although probabilistic inference using belief networks is NP-hard in general, this algorithm is polynomial in the number of nodes in the network and exponential only in the number of observations to be explained, which, in any single case, is usually small.
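
A rough sketch of the search the abstract describes: score candidate explanations by the product of the conditional probabilities of their causation events and keep the most probable one covering all observations. The tiny causal network and its probabilities are invented, and cause priors are omitted for brevity:

```python
from itertools import product

# Toy causal network: cause -> {observation: P(observation | cause)}.
CAUSES = {"flu":  {"fever": 0.9, "cough": 0.6},
          "cold": {"fever": 0.3, "cough": 0.8}}

def best_explanation(observations):
    """Exhaustively assign one cause per observation and rank the
    resulting explanations by probability; the loop is exponential
    only in the number of observations, polynomial in network size."""
    best, best_p = None, 0.0
    for assignment in product(CAUSES, repeat=len(observations)):
        p = 1.0
        for cause, obs in zip(assignment, observations):
            p *= CAUSES[cause].get(obs, 0.0)
        if p > best_p:
            best, best_p = dict(zip(observations, assignment)), p
    return best, best_p

print(best_explanation(["fever", "cough"]))
```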


National Conference on Artificial Intelligence | 2006

Organizing and Searching the World Wide Web of Facts - Step One: The One-Million Fact Extraction Challenge

Marius Pasca; Dekang Lin; Jeffrey P. Bigham; Andrei Lifchits; Alpa Jain

Collaboration


Dive into Dekang Lin's collaboration.

Top Co-Authors

Emily Pitler

University of Pennsylvania

Aravind K. Joshi

University of Pennsylvania

Jeffrey P. Bigham

Carnegie Mellon University