Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kathryn Mazaitis is active.

Publication


Featured research published by Kathryn Mazaitis.


Machine Learning | 2015

Efficient inference and learning in a large knowledge base

William Yang Wang; Kathryn Mazaitis; Ni Lao; William W. Cohen

One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.
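The local grounding construction is the technical heart of this paper, so a small illustration may help. Below is a minimal sketch of a push-style approximate personalized PageRank in the spirit of PageRank-Nibble; the graph representation, parameter names, and stopping rule are illustrative assumptions, not ProPPR's actual implementation, which operates on a proof graph derived from the logic program.

```python
from collections import defaultdict

def pagerank_nibble(graph, seed, alpha=0.15, eps=1e-4):
    """Push-style approximate personalized PageRank (a sketch).

    graph: dict mapping each node to a list of successor nodes.
    Pushing stops once every residual satisfies r[u] / deg(u) < eps,
    which bounds the number of pushes by about 1 / (alpha * eps)
    regardless of graph size: the property that lets a query be
    grounded in time independent of the size of the KB.
    """
    p = defaultdict(float)   # accumulated PageRank approximation
    r = defaultdict(float)   # residual probability mass
    r[seed] = 1.0
    frontier = [seed]
    while frontier:
        u = frontier.pop()
        deg_u = len(graph.get(u, [])) or 1   # treat a dangling node as degree 1
        if r[u] / deg_u < eps:
            continue                         # residual too small to push
        mass, r[u] = r[u], 0.0
        p[u] += alpha * mass                 # keep alpha of the mass here
        spread = (1.0 - alpha) * mass / deg_u
        for v in graph.get(u, []):           # spread the rest downstream
            r[v] += spread
            if r[v] / (len(graph.get(v, [])) or 1) >= eps:
                frontier.append(v)
    return dict(p)
```

The nodes with nonzero mass in `p` form the approximate local grounding: each push deposits at least `alpha * eps * deg(u)` probability into `p`, and the total mass is at most 1, so the grounding touches at most roughly `1 / (alpha * eps)` nodes no matter how large the database is.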


Conference on Information and Knowledge Management | 2014

Structure Learning via Parameter Learning

William Yang Wang; Kathryn Mazaitis; William W. Cohen

A key challenge in information and knowledge management is to automatically discover the underlying structures and patterns from large collections of extracted information. This paper presents a novel structure-learning method for a new, scalable probabilistic logic called ProPPR. Our approach builds on the recent success of meta-interpretive learning methods in Inductive Logic Programming (ILP), and we further extend it to a framework that enables robust and efficient structure learning of logic programs on graphs: using an abductive second-order probabilistic logic, we show how first-order theories can be automatically generated via parameter learning. To learn better theories, we then propose an iterated structural gradient approach that incrementally refines the hypothesized space of learned first-order structures. In experiments, we show that the proposed method outperforms competitive baselines such as Markov Logic Networks (MLNs) and FOIL on multiple datasets under various settings, and that it can learn structures in a large knowledge base in a tractable fashion.
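Since the abstract compresses the method into a sentence, here is a schematic sketch of the iterated structural gradient loop as described: second-order templates are instantiated into candidate first-order clauses, parameter learning scores them, and clauses whose gradient indicates they reduce the loss are kept for the next round. The templates, the `learn` callback, and the keep criterion are all hypothetical stand-ins for ProPPR's actual machinery.

```python
from itertools import product

# Hypothetical second-order templates; each instantiation yields one
# candidate first-order clause for the target predicate `head`.
TEMPLATES = [
    lambda head, q, r: f"{head}(X,Y) :- {q}(X,Y)",           # copy
    lambda head, q, r: f"{head}(X,Y) :- {q}(Y,X)",           # inverse
    lambda head, q, r: f"{head}(X,Y) :- {q}(X,Z), {r}(Z,Y)", # chain
]

def iterated_structural_gradient(head, predicates, learn, rounds=3):
    """Sketch of the iterated structural gradient (ISG) loop.

    `learn(clauses)` is an assumed callback that fits clause weights
    on the training queries and returns {clause: d(loss)/d(weight)};
    a negative gradient means up-weighting the clause lowers the
    loss, so the clause is worth keeping.
    """
    theory = set()
    for _ in range(rounds):
        candidates = {t(head, q, r)
                      for t in TEMPLATES
                      for q, r in product(predicates, repeat=2)}
        gradients = learn(sorted(theory | candidates))
        kept = {c for c in candidates if gradients.get(c, 0.0) < 0.0}
        if kept <= theory:   # no new structure found; stop early
            break
        theory |= kept       # refine the hypothesis space and iterate
    return theory
```

Enumerating template instantiations turns structure search into parameter learning over a fixed candidate set, which is what makes the search tractable on large knowledge bases.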


Empirical Methods in Natural Language Processing | 2014

Dependency Parsing for Weibo: An Efficient Probabilistic Logic Programming Approach

William Yang Wang; Lingpeng Kong; Kathryn Mazaitis; William W. Cohen

Dependency parsing is a core task in NLP, and it is widely used by many applications such as information extraction, question answering, and machine translation. In the era of social media, a major challenge is that parsers trained on traditional newswire corpora typically suffer from domain mismatch and thus perform poorly on social media data. We present a new GFL/FUDG-annotated Chinese treebank with more than 18K tokens from Sina Weibo (the Chinese equivalent of Twitter). We formulate the dependency parsing problem as many small and parallelizable arc prediction tasks: for each task, we use a programmable probabilistic first-order logic to infer the dependency arc of a token in the sentence. In experiments, we show that the proposed model outperforms an off-the-shelf Stanford Chinese parser, as well as a strong MaltParser baseline trained on the same in-domain data.
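To make the "many small arc prediction tasks" formulation concrete, here is a minimal sketch: each token's head is chosen independently by scoring all candidate heads, so the tasks share no state and are trivially parallelizable. The `score` callback is an assumed stand-in for the paper's probabilistic first-order logic inference, and this simplification enforces no global tree constraint.

```python
def predict_head(sentence, child, score):
    """One arc-prediction task: choose the head of token `child`.

    `score(sentence, head, child)` is an assumed callback standing in
    for per-arc inference with the probabilistic logic; index 0 is
    the artificial root, tokens are numbered 1..len(sentence).
    """
    candidates = [h for h in range(len(sentence) + 1) if h != child]
    return max(candidates, key=lambda h: score(sentence, h, child))

def parse(sentence, score):
    # One independent task per token; since the tasks are isolated
    # they can be distributed across workers. A greedy per-token
    # argmax is used here, with no global tree constraint.
    return {child: predict_head(sentence, child, score)
            for child in range(1, len(sentence) + 1)}

# Toy usage with a trivial scorer that prefers attaching each token
# to the immediately preceding one (purely illustrative).
heads = parse(["我", "爱", "自然", "语言"],
              lambda s, h, c: -abs(h - (c - 1)))
```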


National Conference on Artificial Intelligence | 2015

Never-ending learning

Tom M. Mitchell; William W. Cohen; E. Hruschka; Partha Pratim Talukdar; Justin Betteridge; Andrew Carlson; Bhavana Dalvi; Matt Gardner; Bryan Kisiel; Jayant Krishnamurthy; Ni Lao; Kathryn Mazaitis; T. Mohamed; Ndapandula Nakashole; Emmanouil Antonios Platanios; Alan Ritter; Mehdi Samadi; Burr Settles; Richard C. Wang; Derry Tanti Wijaya; Abhinav Gupta; Xi Chen; A. Saparov; M. Greaves; J. Welling


Conference on Information and Knowledge Management | 2013

Programming with personalized PageRank: a locally groundable first-order probabilistic logic

William Yang Wang; Kathryn Mazaitis; William W. Cohen


arXiv: Computation and Language | 2017

Quasar: Datasets for Question Answering by Search and Reading

Bhuwan Dhingra; Kathryn Mazaitis; William W. Cohen


International Conference on Artificial Intelligence | 2015

A soft version of predicate invention based on structured sparsity

William Yang Wang; Kathryn Mazaitis; William W. Cohen


Archive | 2014

A Tale of Two Entity Linking and Discovery Systems

Kathryn Mazaitis; Richard C. Wang; Frank Lin; Bhavana Dalvi; Jakob Bauer; William W. Cohen


National Conference on Artificial Intelligence | 2016

Bootstrapping Distantly Supervised IE Using Joint Learning and Small Well-Structured Corpora

Lidong Bing; Bhuwan Dhingra; Kathryn Mazaitis; Jong Hyuk Park; William W. Cohen


National Conference on Artificial Intelligence | 2014

ProPPR: efficient first-order probabilistic logic programming for structure discovery, parameter learning, and scalable inference

William Yang Wang; Kathryn Mazaitis; William W. Cohen

Collaboration


Dive into Kathryn Mazaitis's collaborations.

Top Co-Authors

William W. Cohen, Carnegie Mellon University
William Yang Wang, Carnegie Mellon University
Bhuwan Dhingra, Carnegie Mellon University
Bhavana Dalvi, Carnegie Mellon University
Ni Lao, Carnegie Mellon University
Richard C. Wang, Carnegie Mellon University
A. Saparov, Carnegie Mellon University
Abhinav Gupta, Carnegie Mellon University
Andrew Carlson, Carnegie Mellon University