Publications


Featured research published by Lifu Huang.


Meeting of the Association for Computational Linguistics | 2016

Liberal Event Extraction and Event Schema Induction

Lifu Huang; Taylor Cassidy; Xiaocheng Feng; Heng Ji; Clare R. Voss; Jiawei Han; Avirup Sil

We propose a brand new “Liberal” Event Extraction paradigm to extract events and discover event schemas from any input corpus simultaneously. We incorporate symbolic (e.g., Abstract Meaning Representation) and distributional semantics to detect and represent event structures and adopt a joint typing framework to simultaneously extract event types and argument roles and discover an event schema. Experiments on general and specific domains demonstrate that this framework can construct high-quality schemas with many event and argument role types, covering a high proportion of event types and argument roles in manually defined schemas. We show that extraction performance using discovered schemas is comparable to supervised models trained from a large amount of data labeled according to predefined event types. The extraction quality of new event types is also promising.
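
To make the paradigm concrete, here is a minimal Python sketch of the schema-induction step, not the paper's implementation: each candidate trigger is represented by concatenating a distributional word embedding with a feature vector derived from its symbolic (AMR) relations, and the joint representations are clustered so that each cluster stands for one discovered event type. The function names, vector sizes, and toy data are all hypothetical.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def joint_representation(word_vec, amr_vec):
    # Combine distributional semantics (a word embedding) with symbolic
    # semantics (features derived from the trigger's AMR relations).
    return np.concatenate([word_vec, amr_vec])

def induce_event_types(trigger_vectors, n_types):
    # Cluster the joint trigger representations; each cluster is treated
    # as one event type of the induced schema.
    return AgglomerativeClustering(n_clusters=n_types).fit_predict(trigger_vectors)

# Toy usage: 12 triggers, 300-d word vectors, 50-d AMR feature vectors.
rng = np.random.default_rng(0)
triggers = np.stack([joint_representation(rng.normal(size=300), rng.normal(size=50))
                     for _ in range(12)])
print(induce_event_types(triggers, n_types=3))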


Empirical Methods in Natural Language Processing | 2016

AFET: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding

Xiang Ren; Wenqi He; Meng Qu; Lifu Huang; Heng Ji; Jiawei Han

Distant supervision has been widely used in current systems of fine-grained entity typing to automatically assign categories (entity types) to entity mentions. However, the types so obtained from knowledge bases are often incorrect for the entity mention’s local context. This paper proposes a novel embedding method to separately model “clean” and “noisy” mentions, and incorporates the given type hierarchy to induce loss functions. We formulate a joint optimization problem to learn embeddings for mentions and type-paths, and develop an iterative algorithm to solve the problem. Experiments on three public datasets demonstrate the effectiveness and robustness of the proposed method, with an average 15% improvement in accuracy over the next best compared method.
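
For intuition, the following Python sketch shows one way the “clean” versus “noisy” distinction can translate into loss functions; it is a simplification under assumed dot-product scoring and hinge margins, not the exact objective from the paper. A clean mention, whose single true type is known, pushes that type above every other type, while a noisy mention only requires its best-scoring candidate type (from distant supervision) to outrank the non-candidate types.

import numpy as np

def type_scores(mention_vec, type_embeddings):
    # Score every type against the mention embedding (dot product).
    return type_embeddings @ mention_vec

def clean_mention_loss(scores, true_type, margin=1.0):
    # Hinge loss: the single true type should outscore all other types.
    negatives = np.delete(scores, true_type)
    return np.maximum(0.0, margin - scores[true_type] + negatives).mean()

def noisy_mention_loss(scores, candidate_types, margin=1.0):
    # Only require the best candidate type to outscore the types that are
    # not candidates at all, leaving the noisy candidate set unresolved.
    mask = np.zeros(scores.shape[0], dtype=bool)
    mask[list(candidate_types)] = True
    best_candidate = scores[mask].max()
    return np.maximum(0.0, margin - best_candidate + scores[~mask]).mean()

# Example: 5 types scored for one mention; type 2 is the clean label.
s = type_scores(np.ones(4), np.arange(20.0).reshape(5, 4))
print(clean_mention_loss(s, true_type=2), noisy_mention_loss(s, candidate_types=[2, 3]))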


Meeting of the Association for Computational Linguistics | 2017

Bridge Text and Knowledge by Learning Multi-Prototype Entity Mention Embedding.

Yixin Cao; Lifu Huang; Heng Ji; Xu Chen; Juanzi Li

Integrating text and knowledge into a unified semantic space has attracted significant research interest recently. However, the ambiguity in the common space remains a challenge, namely that the same mention phrase usually refers to various entities. In this paper, to deal with the ambiguity of entity mentions, we propose a novel Multi-Prototype Mention Embedding model, which learns multiple sense embeddings for each mention by jointly modeling words from textual contexts and entities derived from a knowledge base. In addition, we further design an efficient language model based approach to disambiguate each mention to a specific sense. In experiments, both qualitative and quantitative analyses demonstrate the high quality of the word, entity and multi-prototype mention embeddings. Using entity linking as a study case, we apply our disambiguation method as well as the multi-prototype mention embeddings on the benchmark dataset, and achieve state-of-the-art performance.
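
As a toy illustration of the disambiguation step only, here is a Python sketch; the paper uses a language-model-based approach, for which cosine similarity against an averaged context vector is substituted here, and all names are hypothetical.

import numpy as np

def disambiguate_mention(context_vecs, sense_vecs):
    # Average the context word embeddings and pick the mention sense whose
    # prototype embedding is most similar (cosine) to that context.
    context = np.mean(context_vecs, axis=0)
    context = context / np.linalg.norm(context)
    sense_norms = sense_vecs / np.linalg.norm(sense_vecs, axis=1, keepdims=True)
    return int(np.argmax(sense_norms @ context))

# Example: a mention with three learned sense prototypes and a 5-word context.
rng = np.random.default_rng(1)
print(disambiguate_mention(rng.normal(size=(5, 100)), rng.normal(size=(3, 100))))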


Big Data | 2017

Liberal entity extraction: Rapid construction of fine-grained entity typing systems

Lifu Huang; Jonathan May; Xiaoman Pan; Heng Ji; Xiang Ren; Jiawei Han; Lin Zhao; James A. Hendler

The ability to automatically recognize and type entities in natural language without prior knowledge (e.g., predefined entity types) is a major challenge in processing such data. Most existing entity typing systems are limited to certain domains, genres, and languages. In this article, we propose a novel unsupervised entity-typing framework that combines symbolic and distributional semantics. We start by learning three types of representations for each entity mention: a general semantic representation, a specific context representation, and a knowledge representation based on knowledge bases. We then develop a novel joint hierarchical clustering and linking algorithm to type all mentions using these representations. This framework does not rely on any annotated data, predefined typing schema, or handcrafted features; therefore, it can be quickly adapted to a new domain, genre, and/or language. Experiments on two genres (news and discussion forums) show performance comparable to state-of-the-art supervised typing systems trained on large amounts of labeled data. Results on various languages (English, Chinese, Japanese, Hausa, and Yoruba) and domains (general and biomedical) demonstrate the portability of our framework.
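
The Python sketch below mirrors the pipeline shape described above under simplifying assumptions: one vector per mention is built from the three representations, mentions are clustered hierarchically, and each cluster is then linked to its nearest knowledge-base type embedding. The actual framework performs clustering and linking jointly; the names, the threshold, and the assumption that KB type embeddings live in the same space as mention vectors are all hypothetical.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mention_vector(general_vec, context_vec, kb_vec):
    # Concatenate the general semantic, specific context, and knowledge-base
    # representations of a single entity mention.
    return np.concatenate([general_vec, context_vec, kb_vec])

def cluster_then_link(mention_vecs, kb_type_vecs, distance_threshold=5.0):
    # Hierarchical (average-link) clustering over mention vectors, followed by
    # a nearest-centroid link from each cluster to a KB type embedding.
    labels = fcluster(linkage(mention_vecs, method="average"),
                      t=distance_threshold, criterion="distance")
    links = {}
    for c in np.unique(labels):
        centroid = mention_vecs[labels == c].mean(axis=0)
        links[int(c)] = int(np.argmin(np.linalg.norm(kb_type_vecs - centroid, axis=1)))
    return labels, links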


Meeting of the Association for Computational Linguistics | 2016

A Language-Independent Neural Network for Event Detection.

Xiaocheng Feng; Lifu Huang; Duyu Tang; Heng Ji; Bing Qin; Ting Liu


Meeting of the Association for Computational Linguistics | 2018

Zero-Shot Transfer Learning for Event Extraction

Lifu Huang; Heng Ji; Kyunghyun Cho; Ido Dagan; Sebastian Riedel; Clare R. Voss


Empirical Methods in Natural Language Processing | 2017

Improving Slot Filling Performance with Attentive Neural Networks on Dependency Structures

Lifu Huang; Avirup Sil; Heng Ji; Radu Florian


Text Analysis Conference | 2017

TinkerBell: Cross-lingual Cold-Start Knowledge Base Construction.

Mohamed Al-Badrashiny; Jason Bolton; Arun Tejasvi Chaganty; Kevin Clark; Craig Harman; Lifu Huang; Matthew Lamm; Jinhao Lei; Di Lu; Xiaoman Pan; Ashwin Paranjape; Ellie Pavlick; Haoruo Peng; Peng Qi; Pushpendre Rastogi; Abigail See; Kai Sun; Max Thomas; Chen-Tse Tsai; Hao Wu; Boliang Zhang; Chris Callison-Burch; Claire Cardie; Heng Ji; Christopher D. Manning; Smaranda Muresan; Owen Rambow; Dan Roth; Mark Sammons; Benjamin Van Durme


North American Chapter of the Association for Computational Linguistics | 2018

Chengyu Cloze Test.

Zhiying Jiang; Boliang Zhang; Lifu Huang; Heng Ji


Meeting of the Association for Computational Linguistics | 2018

Paper Abstract Writing through Editing Mechanism

Qingyun Wang; Zhihao Zhou; Lifu Huang; Spencer Whitehead; Boliang Zhang; Heng Ji; Kevin Knight

Collaboration


Dive into Lifu Huang's collaborations.

Top Co-Authors

Heng Ji (Rensselaer Polytechnic Institute)
Boliang Zhang (Rensselaer Polytechnic Institute)
Xiaoman Pan (Rensselaer Polytechnic Institute)
Kevin Knight (University of Southern California)
Spencer Whitehead (Rensselaer Polytechnic Institute)
Xiang Ren (University of Southern California)
Xiaocheng Feng (Harbin Institute of Technology)
Di Lu (Rensselaer Polytechnic Institute)
Jonathan May (University of Southern California)