Publication


Featured research published by Ni Lao.


European Conference on Machine Learning | 2010

Relational retrieval using a combination of path-constrained random walks

Ni Lao; William W. Cohen

Scientific literature with rich metadata can be represented as a labeled directed graph. This graph representation enables a number of scientific tasks, such as ad hoc retrieval or named entity recognition (NER), to be formulated as typed proximity queries in the graph. One popular proximity measure is called Random Walk with Restart (RWR), and much work has been done on the supervised learning of RWR measures by associating each edge label with a parameter. In this paper, we describe a novel learnable proximity measure which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple “path experts”, each corresponding to following a particular sequence of labeled edges. Experiments on eight tasks in two subdomains of biology show that the new learning method significantly outperforms the RWR model (both trained and untrained). We also extend the method to support two additional types of experts that model intrinsic properties of entities: query-independent experts, which generalize the PageRank measure, and popular entity experts, which allow rankings to be adjusted for particular entities that are especially important.
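
To make the path-expert idea concrete, the following minimal Python sketch scores candidates by a weighted combination of path-constrained walks. It assumes small dense relation matrices; all names here are illustrative rather than the paper's implementation.

import numpy as np

def path_expert_distribution(query_vec, path, adjacency):
    # Follow one labeled-edge sequence (a "path expert") starting from the
    # query distribution. adjacency maps each edge label to a
    # row-normalized transition matrix (illustrative representation).
    dist = query_vec
    for label in path:
        dist = dist @ adjacency[label]  # one random-walk step along this label
    return dist

def pcrw_score(query_vec, paths, weights, adjacency):
    # Proximity is a learned weighted combination of per-path distributions.
    return sum(w * path_expert_distribution(query_vec, p, adjacency)
               for p, w in zip(paths, weights))

Learning then reduces to fitting the per-path weights against labeled queries, which is the supervised component the abstract describes.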


Knowledge Discovery and Data Mining | 2010

Fast query execution for retrieval models based on path-constrained random walks

Ni Lao; William W. Cohen

Many recommendation and retrieval tasks can be represented as proximity queries on a labeled directed graph, with typed nodes representing documents, terms, and metadata, and labeled edges representing the relationships between them. Recent work has shown that the accuracy of the widely used random-walk-based proximity measures can be improved by supervised learning; in particular, one especially effective learning technique is based on Path-Constrained Random Walks (PCRW), in which similarity is defined by a learned combination of constrained random walkers, each constrained to follow only a particular sequence of edge labels away from the query nodes. The PCRW-based method significantly outperformed unsupervised random-walk-based queries and models with learned edge weights. Unfortunately, PCRW query systems are expensive to evaluate. In this study we evaluate approximations to the computation of the PCRW distributions, including fingerprinting, particle filtering, and truncation strategies. In experiments on several recommendation and retrieval problems using two large scientific publication corpora, we show speedups of factors of 2 to 100 with little loss in accuracy.
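
Of the approximation strategies mentioned, truncation is the simplest to sketch: probability mass below a threshold is discarded after each walk step, keeping the frontier small. Below is a minimal sketch, assuming sparse distributions stored as {node: probability} dicts; the neighbors callable and the eps value are illustrative assumptions.

def truncated_step(dist, neighbors, eps=1e-4):
    # One uniform random-walk step that drops entries below eps,
    # trading a little probability mass for much cheaper queries.
    out = {}
    for node, p in dist.items():
        succ = neighbors(node)  # nodes reachable via the current edge label
        if not succ:
            continue
        share = p / len(succ)
        for s in succ:
            out[s] = out.get(s, 0.0) + share
    return {n: p for n, p in out.items() if p >= eps}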


Knowledge Discovery and Data Mining | 2010

Grafting-light: fast, incremental feature selection and structure learning of Markov random fields

Jun Zhu; Ni Lao; Eric P. Xing

Feature selection is important for achieving good generalizability in high-dimensional learning, and structure learning of Markov random fields (MRFs) can automatically discover the inherent structures underlying complex data. Both problems can be cast as solving an l1-norm regularized parameter estimation problem. The existing Grafting method can avoid inference on dense graphs during structure learning by incrementally selecting new features. However, Grafting performs a greedy step to optimize over the free parameters whenever new features are included. This greedy strategy is inefficient when parameter learning is itself non-trivial, as in MRFs, where parameter learning depends on an expensive subroutine for calculating gradients; the complexity of calculating gradients in MRFs is typically exponential in the size of the maximal cliques. In this paper, we present a fast algorithm called Grafting-Light to solve the l1-norm regularized maximum likelihood estimation of MRFs for efficient feature selection and structure learning. Grafting-Light iteratively performs one step of orthant-wise gradient descent over the free parameters and then selects new features. This lazy strategy is guaranteed to converge to the global optimum and can effectively select significant features. On both synthetic and real data sets, we show that Grafting-Light is much more efficient than Grafting for both feature selection and structure learning; compared with the optimal batch method that directly optimizes over all features, it performs comparably for feature selection and is much more efficient and accurate for structure learning of MRFs.
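
The core of the lazy strategy can be sketched as follows: under l1 regularization, a zero-weight feature can improve the objective only if the magnitude of its loss gradient exceeds the regularization strength, and active weights take a single orthant-wise step rather than being fully optimized. This is a minimal illustration under those assumptions, with a plain gradient step standing in for the expensive MRF gradient subroutine; it is not the authors' code.

import numpy as np

def pseudo_gradient(w, g, lam):
    # Orthant-wise pseudo-gradient of loss + lam*|w| at one coordinate.
    if w > 0:
        return g + lam
    if w < 0:
        return g - lam
    if g + lam < 0:   # objective decreases by making w positive
        return g + lam
    if g - lam > 0:   # objective decreases by making w negative
        return g - lam
    return 0.0        # w = 0 is optimal here: feature stays unselected

def grafting_light_step(w, grad, lam, lr=0.1):
    # One lazy sweep: features with |gradient| > lam become active, and
    # each active weight takes one projected gradient step (a sign flip
    # is projected back to zero, keeping the iterate in its orthant).
    for j in range(len(w)):
        v = pseudo_gradient(w[j], grad[j], lam)
        if v == 0.0:
            continue
        orthant = np.sign(w[j]) if w[j] != 0 else -np.sign(v)
        w_new = w[j] - lr * v
        w[j] = w_new if np.sign(w_new) == orthant else 0.0
    return w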


Machine Learning | 2015

Efficient inference and learning in a large knowledge base

William Yang Wang; Kathryn Mazaitis; Ni Lao; William W. Cohen

One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.
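
The "local grounding" rests on a push-style approximation of personalized PageRank; a minimal sketch of that primitive, in the spirit of the PageRank-Nibble algorithm, is below. The graph-access callable and the constants are illustrative assumptions, and this shows only the grounding primitive, not ProPPR's proof graphs.

def approx_ppr(seed, neighbors, alpha=0.15, eps=1e-5):
    # Push-style approximate personalized PageRank: only nodes whose
    # residual is large relative to their degree are ever touched, so
    # the work done is independent of the total graph size.
    p, r = {}, {seed: 1.0}
    frontier = [seed]
    while frontier:
        u = frontier.pop()
        succ = neighbors(u)
        deg = max(len(succ), 1)
        if r.get(u, 0.0) < eps * deg:
            continue
        ru = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * ru   # keep alpha of the mass here
        share = (1 - alpha) * ru / deg      # push the rest to successors
        for v in succ:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * max(len(neighbors(v)), 1):
                frontier.append(v)
    return p  # sparse: nonzero only on a small neighborhood of the seed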


International Joint Conference on Natural Language Processing | 2015

Learning Relational Features with Backward Random Walks

Ni Lao; Einat Minkov; William W. Cohen

The path ranking algorithm (PRA) has recently been proposed to address relational classification and retrieval tasks at large scale. We describe Cor-PRA, an enhanced system that can model a larger space of relational rules, including longer relational rules and a class of first-order rules with constants, while maintaining scalability. We describe and test faster algorithms for searching for these features. A key contribution is leveraging backward random walks to efficiently discover these types of rules. An empirical study is conducted on the tasks of graph-based knowledge base inference and person named entity extraction from parsed text. Our results show that learning paths with constants improves performance on both tasks, and that modeling longer paths dramatically improves performance for the named entity extraction task.
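
The backward-walk trick is easy to state: to find rules whose paths end at a known target node (for example, a constant), walk from the target over the inverses of the relations. A minimal sketch under that reading; the inverse_neighbors callable is an assumed interface, not the Cor-PRA code.

def backward_reachable(target, inverse_neighbors, path):
    # Nodes that can reach `target` by following `path`, found by
    # walking backward from the target along each relation's inverse.
    frontier = {target}
    for label in reversed(path):
        frontier = {u for v in frontier
                    for u in inverse_neighbors(label, v)}
    return frontier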


Knowledge Discovery and Data Mining | 2011

Conditional topical coding: an efficient topic model conditioned on rich features

Jun Zhu; Ni Lao; Ning Chen; Eric P. Xing

Probabilistic topic models have shown remarkable success in many application domains. However, a probabilistic conditional topic model can be extremely inefficient when considering a rich set of features because it needs to define a normalized distribution, which usually involves a hard-to-compute partition function. This paper presents conditional topical coding (CTC), a novel non-probabilistic formulation of conditional topic models. CTC relaxes the normalization constraints imposed by probabilistic models and learns non-negative document codes and word codes. CTC does not need to define a normalized distribution and can efficiently incorporate a rich set of features for improved topic discovery and prediction tasks. Moreover, CTC can directly control the sparsity of inferred representations through appropriate regularization. We develop an efficient and easy-to-implement coordinate descent learning algorithm in which each coding substep has a closed-form solution. Finally, we demonstrate the advantages of CTC on online review analysis datasets. Our results show that conditional topical coding achieves state-of-the-art prediction performance and is much more efficient in training (one order of magnitude faster) and testing (two orders of magnitude faster) than probabilistic conditional topic models.
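
The appeal of the coordinate descent scheme is that each one-dimensional coding subproblem is solvable in closed form. The sketch below illustrates that style of update for a non-negative code with an l1 penalty; the least-squares objective here is a stand-in assumption, not CTC's actual loss.

import numpy as np

def coordinate_update(residual, basis_k, lam):
    # Minimize 0.5*||residual - c*basis_k||^2 + lam*c subject to c >= 0.
    # Setting the derivative to zero and clamping gives a closed-form,
    # non-negative soft-threshold update (basis_k must be nonzero).
    num = basis_k @ residual - lam
    den = basis_k @ basis_k
    return max(num / den, 0.0)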


Empirical Methods in Natural Language Processing | 2011

Random Walk Inference and Learning in A Large Scale Knowledge Base

Ni Lao; Tom M. Mitchell; William W. Cohen


National Conference on Artificial Intelligence | 2015

Never-ending learning

Tom M. Mitchell; William W. Cohen; E. Hruschka; Partha Pratim Talukdar; Justin Betteridge; Andrew Carlson; Bhavana Dalvi; Matt Gardner; Bryan Kisiel; Jayant Krishnamurthy; Ni Lao; Kathryn Mazaitis; T. Mohamed; Ndapandula Nakashole; Emmanouil Antonios Platanios; Alan Ritter; Mehdi Samadi; Burr Settles; Richard C. Wang; Derry Tanti Wijaya; Abhinav Gupta; Xi Chen; A. Saparov; M. Greaves; J. Welling


Empirical Methods in Natural Language Processing | 2012

Reading The Web with Learned Syntactic-Semantic Inference Rules

Ni Lao; Amarnag Subramanya; Fernando Pereira; William W. Cohen


Neural Information Processing Systems | 2010

Efficient Relational Learning with Hidden Variable Detection

Ni Lao; Jun Zhu; Liu Xinwang; Yandong Liu; William W. Cohen

Collaboration


Dive into Ni Lao's collaborations.

Top Co-Authors

William W. Cohen (Carnegie Mellon University)
Eric P. Xing (Carnegie Mellon University)
Fan Yang (Carnegie Mellon University)
Kathryn Mazaitis (Carnegie Mellon University)
Tom M. Mitchell (Carnegie Mellon University)
A. Saparov (Carnegie Mellon University)
Abhinav Gupta (Carnegie Mellon University)