Zhengzhong Liu
Carnegie Mellon University
Publications
Featured research published by Zhengzhong Liu.
Meeting of the Association for Computational Linguistics | 2016
Zhiting Hu; Xuezhe Ma; Zhengzhong Liu; Eduard H. Hovy; Eric P. Xing
Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems.
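The iterative distillation idea above can be sketched in a few lines: a rule-aware "teacher" distribution is built by reweighting the student network's predictions toward labels the logic rules prefer, and the student is then trained to imitate both the gold label and the teacher. This is a minimal numpy illustration, not the authors' code; the names `teacher_distribution`, `imitation_loss`, and the constants `C` and `pi` are assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def teacher_distribution(p, rule_truth, C=6.0):
    # Project the student's predicted distribution p into a
    # rule-regularized subspace: q(y) is proportional to
    # p(y) * exp(-C * (1 - r(x, y))), where r(x, y) in [0, 1] is the
    # rule's truth value for label y. Labels that violate the rule
    # (r close to 0) are exponentially down-weighted.
    q = p * np.exp(-C * (1.0 - rule_truth))
    return q / q.sum()

def imitation_loss(p, q, gold, pi=0.6):
    # Student objective: a mix of gold-label cross-entropy and
    # cross-entropy against the rule-aware teacher q.
    return -((1.0 - pi) * np.log(p[gold]) + pi * float(np.dot(q, np.log(p))))
```

With a rule that fully supports label 1 and rejects label 0, the teacher concentrates nearly all its mass on label 1 even when the student is unsure, which is how the rule's structure flows into the student's weights over iterations.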
Workshop on Events: Definition, Detection, Coreference, and Representation | 2015
Zhengzhong Liu; Teruko Mitamura; Eduard H. Hovy
Event mention detection is the first step in textual event understanding, and proper evaluation is important for modern natural language processing tasks. In this paper, we present our evaluation algorithm and results from the Event Mention Evaluation pilot study. We analyze the problems of evaluating multiple event mention attributes and discontinuous event mention spans. In addition, we identify a few limitations in the evaluation algorithm used for the pilot task and propose some potential improvements.
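The two evaluation problems the abstract names — discontinuous spans and multiple attributes — can be illustrated with a minimal sketch. This is not the pilot study's actual scorer; the function names (`span_dice`, `mention_score`), the attribute set, and the 50/50 weighting are illustrative assumptions:

```python
def span_dice(gold_tokens, sys_tokens):
    # Token-set Dice overlap. Discontinuous mention spans are handled
    # naturally because a span is a set of token indices, not an interval.
    g, s = set(gold_tokens), set(sys_tokens)
    return 2.0 * len(g & s) / (len(g) + len(s)) if g and s else 0.0

def mention_score(gold, sys, attrs=("type", "realis"), attr_weight=0.5):
    # Blend span overlap with agreement on mention attributes; a system
    # mention with no token overlap gets no credit at all.
    overlap = span_dice(gold["tokens"], sys["tokens"])
    if overlap == 0.0:
        return 0.0
    agree = sum(gold[a] == sys[a] for a in attrs) / len(attrs)
    return (1.0 - attr_weight) * overlap + attr_weight * agree
```

For example, a system mention covering two of three gold tokens with the right type but wrong realis scores 0.5 · 0.8 + 0.5 · 0.5 = 0.65 under these assumed weights.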
North American Chapter of the Association for Computational Linguistics | 2016
Xuezhe Ma; Zhengzhong Liu; Eduard H. Hovy
Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our unsupervised system achieves 58.44% F1 score of the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012), outperforming the Stanford deterministic system (Lee et al., 2013) by 3.01%.
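The role of resolution mode variables can be sketched with a toy ranker: each (mention, antecedent) pair is assigned a mode, and candidates are ranked by a mode-dependent preference. This is an illustrative simplification of the paper's generative model, not its implementation; the mode names, the `MODE_PRIOR` values, and the tie-breaking rule are assumptions:

```python
def resolution_mode(mention, antecedent):
    # Toy mode assignment: exact string match, head-word match,
    # or fall back to attribute-based resolution.
    if mention["text"].lower() == antecedent["text"].lower():
        return "str_match"
    if mention["head"].lower() == antecedent["head"].lower():
        return "head_match"
    return "attribute"

# Assumed preference for each mode; in the paper these preferences are
# learned generatively, not fixed by hand.
MODE_PRIOR = {"str_match": 0.9, "head_match": 0.6, "attribute": 0.2}

def best_antecedent(mention, candidates):
    # Rank candidate antecedents by mode preference, breaking ties in
    # favor of the closer (later-position) candidate.
    return max(candidates,
               key=lambda a: (MODE_PRIOR[resolution_mode(mention, a)],
                              a["position"]))
```

Separating *how* a mention resolves (its mode) from *which* antecedent it resolves to is the key modeling choice that lets the system stay unsupervised.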
Conference on Information and Knowledge Management | 2017
Chenyan Xiong; Zhengzhong Liu; Jamie Callan; Eduard H. Hovy
Entity-based ranking systems often employ entity linking to align entities with queries and documents. Previously, entity linking systems were not designed specifically for search engines and were mostly used as a preprocessing step. This work presents JointSem, a joint semantic ranking system that combines query entity linking and entity-based document ranking. In JointSem, the spotting and linking signals are used to describe the importance of candidate entities in the query, and the linked entities are utilized to provide additional ranking features for the documents. The linking signals and the ranking signals are combined by a joint learning-to-rank model, and the whole system is fully optimized towards end-to-end ranking performance. Experiments on TREC Web Track datasets demonstrate the effectiveness of joint learning of entity linking and entity-based ranking.
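The joint scoring idea — letting each candidate entity's linking confidence gate its contribution to the document score, so one learning-to-rank model sees both signal families — can be sketched as follows. This is a schematic linear version under assumed feature names (`entity_tf`, `link_conf`, the weight vectors), not JointSem's actual feature set:

```python
import numpy as np

def joint_score(doc_text_feats, query_entities, doc, w_text, w_entity):
    # Final document score = text-matching features plus entity-based
    # features, with each candidate entity's contribution weighted by its
    # spotting/linking confidence. Because link_conf appears inside the
    # ranking score, linking and ranking can be optimized jointly
    # end-to-end by the same learning-to-rank objective.
    score = float(np.dot(w_text, doc_text_feats))
    for ent in query_entities:
        ent_feats = np.array([doc["entity_tf"].get(ent["id"], 0.0)])
        score += ent["link_conf"] * float(np.dot(w_entity, ent_feats))
    return score
```

A document that actually mentions a confidently linked query entity should outrank a textually similar document that does not, which is the behavior the toy weights below exhibit.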
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2018
Chenyan Xiong; Zhengzhong Liu; Jamie Callan; Tie-Yan Liu
This paper presents a Kernel Entity Salience Model (KESM) that improves text understanding and retrieval by better estimating entity salience (importance) in documents. KESM represents entities by knowledge enriched distributed representations, models the interactions between entities and words by kernels, and combines the kernel scores to estimate entity salience. The whole model is learned end-to-end using entity salience labels. The salience model also improves ad hoc search accuracy, providing effective ranking features by modeling the salience of query entities in candidate documents. Our experiments on two entity salience corpora and two TREC ad hoc search datasets demonstrate the effectiveness of KESM over frequency-based and feature-based methods. We also provide examples showing how KESM conveys its text understanding ability learned from entity salience to search.
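The kernel step — pooling entity-word interactions with RBF kernels and combining the kernel scores linearly — can be sketched as below. This is a minimal KNRM-style illustration, not the KESM implementation; the kernel centers `mus`, the width `sigma`, and the weight vector `w` are assumed to be learned end-to-end from salience labels as the abstract describes:

```python
import numpy as np

def kernel_pool(entity_vec, word_vecs, mus, sigma=0.1):
    # Cosine similarities between the entity embedding and each document
    # word embedding, pooled by RBF kernels centered at mus. Each kernel
    # soft-counts how many words interact with the entity at that
    # similarity level.
    sims = word_vecs @ entity_vec / (
        np.linalg.norm(word_vecs, axis=1) * np.linalg.norm(entity_vec) + 1e-9)
    return np.array([np.exp(-(sims - mu) ** 2 / (2 * sigma ** 2)).sum()
                     for mu in mus])

def entity_salience(entity_vec, word_vecs, w, mus):
    # Salience estimate = learned linear combination of kernel scores.
    return float(np.dot(w, kernel_pool(entity_vec, word_vecs, mus)))
```

The same pooled scores double as ranking features when the "entity" is a query entity and the words come from a candidate document, which is how the model carries its salience training over to ad hoc search.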
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017
Keyang Xu; Zhengzhong Liu; Jamie Callan
Many URLs on the Internet point to identical content, which increases the burden on web crawlers. Techniques that detect such URLs (known as URL de-duping) can save crawlers substantial resources such as bandwidth and storage. Traditional de-duping methods are usually limited to heavily engineered rule-matching strategies. In this work, we propose a novel URL de-duping framework based on sequence-to-sequence (Seq2Seq) neural networks. A single concise translation model can take the place of thousands of explicit rules. Experiments indicate that a vanilla Seq2Seq architecture yields robust and accurate results in detecting duplicate URLs. Furthermore, we demonstrate the efficiency of this framework in a real large-scale web environment.
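The de-duping framework around the translation model can be sketched as follows: the trained Seq2Seq model maps each URL to a canonical form, and URLs that translate to the same output are grouped as duplicates. The `toy_canonicalize` stand-in below (stripping an assumed `sid=` session parameter) merely mimics one rule the real model would learn from data; it is not part of the paper's system:

```python
def dedupe(urls, canonicalize):
    # Group URLs by the canonical form the (trained) translation model
    # produces; URLs sharing an output are treated as duplicates.
    groups = {}
    for url in urls:
        groups.setdefault(canonicalize(url), []).append(url)
    return groups

def toy_canonicalize(url):
    # Hypothetical stand-in for the Seq2Seq model: drop a session-id
    # query parameter, keep everything else.
    base, _, query = url.partition("?")
    kept = [p for p in query.split("&") if p and not p.startswith("sid=")]
    return base + ("?" + "&".join(kept) if kept else "")
```

Framing de-duping as translation is what lets one model replace thousands of hand-written rewrite rules: adding a new duplication pattern means adding training pairs, not code.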
Language Resources and Evaluation | 2014
Zhengzhong Liu; Jun Araki; Eduard H. Hovy; Teruko Mitamura
Language Resources and Evaluation | 2014
Jun Araki; Zhengzhong Liu; Eduard H. Hovy; Teruko Mitamura
Text Analysis Conference (TAC) | 2015
Teruko Mitamura; Zhengzhong Liu; Eduard H. Hovy
NTCIR | 2014
Di Wang; Leonid Boytsov; Jun Araki; Alkesh Patel; Jeff Gee; Zhengzhong Liu; Eric Nyberg; Teruko Mitamura