Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mo Yu is active.

Publication


Featured research published by Mo Yu.


Meeting of the Association for Computational Linguistics | 2017

Improved Neural Relation Detection for Knowledge Base Question Answering

Mo Yu; Wenpeng Yin; Kazi Saidul Hasan; Cícero Nogueira dos Santos; Bing Xiang; Bowen Zhou

Relation detection is a core component of many NLP applications, including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning that detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names at different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our relation detector so that each enhances the other. Experimental results show that our approach not only achieves outstanding relation detection performance but, more importantly, helps our KBQA system reach state-of-the-art accuracy on both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks.
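As a rough sketch of the residual matching idea described above, the following PyTorch snippet stacks two BiLSTM layers, sums their outputs as a shortcut connection, max-pools over time, and scores a question against a relation name by cosine similarity. The class name, dimensions, pooling, and scoring here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBiLSTMMatcher(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Two stacked BiLSTM layers whose outputs are later summed (shortcut).
        self.lstm1 = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.lstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def encode(self, tokens):
        x = self.emb(tokens)           # (B, T, E)
        h1, _ = self.lstm1(x)          # (B, T, 2H): lower level of abstraction
        h2, _ = self.lstm2(h1)         # (B, T, 2H): higher level of abstraction
        h = h1 + h2                    # residual connection across the hierarchy
        return h.max(dim=1).values     # max-pool over time -> (B, 2H)

    def forward(self, question, relation):
        return F.cosine_similarity(self.encode(question),
                                   self.encode(relation), dim=1)

model = ResidualBiLSTMMatcher()
q = torch.randint(0, 1000, (2, 8))   # toy question token ids
r = torch.randint(0, 1000, (2, 4))   # toy relation-name token ids
print(model(q, r))                   # one ranking score per question-relation pair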


Empirical Methods in Natural Language Processing | 2016

Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling

Gakuto Kurata; Bing Xiang; Bowen Zhou; Mo Yu

Recurrent Neural Networks (RNNs), and in particular the Long Short-Term Memory (LSTM) architecture, have been widely used for sequence labeling. In this paper, we first enhance LSTM-based sequence labeling to explicitly model label dependencies. We then propose another enhancement that incorporates global information spanning the whole input sequence: the encoder-labeler LSTM first encodes the entire input sequence into a fixed-length vector with an encoder LSTM, and then uses this encoded vector as the initial state of a second LSTM that performs sequence labeling. Combining these methods, we can predict the label sequence while taking into account both label dependencies and information from the whole input sequence. In experiments on slot filling, an essential component of natural language understanding, using the standard ATIS corpus, we achieved a state-of-the-art F1-score of 95.66%.
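A minimal sketch of the encoder-labeler idea, assuming PyTorch: one LSTM encodes the whole sentence into its final state, and that state initializes a second LSTM that emits per-token label scores. Names and dimensions are illustrative, and details such as feeding previous label predictions back in are omitted here.

import torch
import torch.nn as nn

class EncoderLabelerLSTM(nn.Module):
    def __init__(self, vocab=1000, labels=20, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.labeler = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, labels)

    def forward(self, tokens):
        x = self.emb(tokens)            # (B, T, E)
        _, state = self.encoder(x)      # fixed-length summary of the whole input
        y, _ = self.labeler(x, state)   # labeler starts from that summary
        return self.out(y)              # (B, T, labels): per-token label scores

model = EncoderLabelerLSTM()
tokens = torch.randint(0, 1000, (2, 10))
print(model(tokens).shape)              # torch.Size([2, 10, 20])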


International Joint Conference on Artificial Intelligence | 2018

Scheduled Policy Optimization for Natural Language Communication with Intelligent Agents

Wenhan Xiong; Xiaoxiao Guo; Mo Yu; Shiyu Chang; Bowen Zhou; William Yang Wang

We investigate the task of learning to follow natural language instructions by jointly reasoning over visual observations and language inputs. In contrast to existing methods, which start with learning from demonstrations (LfD) and then use reinforcement learning (RL) to fine-tune the model parameters, we propose a novel policy optimization algorithm that dynamically schedules demonstration learning and RL. The proposed training paradigm provides efficient exploration and better generalization than existing methods. Compared to existing ensemble models, the best single model trained with our method reduces the execution error by over 50% in a block-world environment. To further illustrate the exploration strategy of our RL algorithm, we also include systematic studies of the evolution of policy entropy during training.
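To illustrate what dynamically scheduling demonstration learning and RL can look like, here is a toy PyTorch training loop that falls back to an imitation step whenever a moving average of recent reward is low, and otherwise takes a REINFORCE step. The environment, the "expert" labels, the threshold, and all hyperparameters are invented for illustration; the paper's scheduler is more sophisticated.

import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Linear(4, 3)   # toy policy: 4-dim state -> logits over 3 actions
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
baseline = 0.0             # moving average of recent reward

def toy_env(state, action):
    # Dummy environment: reward 1 if the action matches an arbitrary rule.
    return 1.0 if action == int(state.abs().sum().item()) % 3 else 0.0

for step in range(300):
    state = torch.randn(4)
    logits = policy(state)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = toy_env(state, action.item())
    baseline = 0.95 * baseline + 0.05 * reward   # the scheduling signal
    if baseline < 0.5:
        # Policy is still weak: take a demonstration (imitation) step.
        demo = torch.tensor([int(state.abs().sum().item()) % 3])
        loss = F.cross_entropy(logits.unsqueeze(0), demo)
    else:
        # Policy is doing well: take a policy-gradient (REINFORCE) step.
        loss = -dist.log_prob(action) * (reward - baseline)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"moving-average reward after training: {baseline:.2f}")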


arXiv: Artificial Intelligence | 2018

Knowledge Base Relation Detection via Multi-View Matching

Yang Yu; Kazi Saidul Hasan; Mo Yu; Wei Zhang; Zhiguo Wang

Relation detection is a core component of Knowledge Base Question Answering (KBQA). In this paper, we propose a KB relation detection model based on multi-view matching, which exploits additional useful information extracted from the question and the KB. Matching within each view is performed from multiple perspectives so that the two input texts are compared thoroughly. All components are combined in an end-to-end trainable neural network model. Experiments on SimpleQuestions and WebQSP yield state-of-the-art results.
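As a hedged sketch of matching "from multiple perspectives", the PyTorch snippet below gives each perspective a learned weight vector that re-weights the hidden dimensions of the two pooled text encodings before a cosine comparison, yielding one score per perspective. All names and shapes are assumptions for illustration, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPerspectiveMatch(nn.Module):
    def __init__(self, hidden=128, perspectives=8):
        super().__init__()
        # One learnable re-weighting vector per perspective.
        self.w = nn.Parameter(torch.randn(perspectives, hidden))

    def forward(self, q_vec, r_vec):
        # q_vec, r_vec: (B, H) pooled encodings of the two texts.
        q = q_vec.unsqueeze(1) * self.w           # (B, P, H)
        r = r_vec.unsqueeze(1) * self.w           # (B, P, H)
        return F.cosine_similarity(q, r, dim=2)   # (B, P): one score per view

match = MultiPerspectiveMatch()
q, r = torch.randn(2, 128), torch.randn(2, 128)
print(match(q, r).shape)   # torch.Size([2, 8]); feed these into a final scorer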


International Conference on Learning Representations | 2017

A Structured Self-Attentive Sentence Embedding

Zhouhan Lin; Minwei Feng; Cícero Nogueira dos Santos; Mo Yu; Bing Xiang; Bowen Zhou; Yoshua Bengio


arXiv: Computation and Language | 2017

Comparative Study of CNN and RNN for Natural Language Processing

Wenpeng Yin; Katharina Kann; Mo Yu; Hinrich Schütze


arXiv: Computation and Language | 2017

End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension

Yang Yu; Wei Zhang; Bowen Zhou; Kazi Saidul Hasan; Mo Yu; Bing Xiang


International Conference on Computational Linguistics | 2016

Simple Question Answering by Attentive Convolutional Neural Network

Wenpeng Yin; Mo Yu; Bing Xiang; Bowen Zhou; Hinrich Schütze


Neural Information Processing Systems | 2017

Dilated Recurrent Neural Networks

Shiyu Chang; Yang Zhang; Wei Han; Mo Yu; Xiaoxiao Guo; Wei Tan; Xiaodong Cui; Michael J. Witbrock; Mark Hasegawa-Johnson; Thomas S. Huang


arXiv: Computation and Language | 2017

R^3: Reinforced Ranker-Reader for Open-Domain Question Answering

Shuohang Wang; Mo Yu; Xiaoxiao Guo; Zhiguo Wang; Tim Klinger; Wei Zhang; Shiyu Chang; Gerald Tesauro; Bowen Zhou; Jing Jiang

Collaboration


Dive into Mo Yu's collaborations.

Top Co-Authors

Kazi Saidul Hasan (University of Texas at Dallas)
Jing Jiang (Singapore Management University)
Shuohang Wang (Singapore Management University)