
Publications


Featured research published by Phong Le.


Empirical Methods in Natural Language Processing | 2014

The Inside-Outside Recursive Neural Network model for Dependency Parsing

Phong Le; Willem H. Zuidema

We propose the first implementation of an infinite-order generative dependency model. The model is based on a new recursive neural network architecture, the Inside-Outside Recursive Neural Network. This architecture allows information to flow not only bottom-up, as in traditional recursive neural networks, but also top-down. This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.
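
A minimal numpy sketch of the inside-outside idea described in the abstract. The dimensions, composition functions, and names are illustrative assumptions, not the paper's exact architecture: each node gets an "inside" (content) vector built bottom-up from its children and an "outside" (context) vector built top-down from its parent and siblings, and the two interact when scoring a node.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                  # embedding dimension (assumed)
W_in = rng.normal(size=(d, 2 * d))     # composes two child inside vectors
W_out = rng.normal(size=(d, 2 * d))    # combines parent outside + sibling inside

def f(x):
    return np.tanh(x)

# Tiny tree: a root with two leaves; leaves carry word embeddings.
left, right = rng.normal(size=d), rng.normal(size=d)

# Bottom-up: inside (content) representation of the root.
inside_root = f(W_in @ np.concatenate([left, right]))

# Top-down: outside (context) representation of the left leaf,
# built from the root's outside vector and the sibling's inside vector.
outside_root = np.zeros(d)             # empty context at the root (assumed)
outside_left = f(W_out @ np.concatenate([outside_root, right]))

# The context vector can then score how well the left word fits its
# context, e.g. via a dot product with the word's content vector.
print("context/content score:", outside_left @ left)
```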


Empirical Methods in Natural Language Processing | 2015

The Forest Convolutional Network: Compositional Distributional Semantics with a Neural Chart and without Binarization

Phong Le; Willem H. Zuidema

According to the principle of compositionality, the meaning of a sentence is computed from the meaning of its parts and the way they are syntactically combined. In practice, however, the syntactic structure is computed by automatic parsers which are far from perfect and not tuned to the specifics of the task. Current recursive neural network (RNN) approaches for computing sentence meaning therefore run into a number of practical difficulties, including the need to carefully select a parser appropriate for the task, to decide how and to what extent syntactic context modifies the semantic composition function, and to transform parse trees to conform to the branching settings (typically, binary branching) of the RNN. This paper introduces a new model, the Forest Convolutional Network, that avoids all of these challenges, by taking a parse forest as input, rather than a single tree, and by allowing arbitrary branching factors. We report improvements over the state-of-the-art in sentiment analysis and question classification.
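
A rough numpy sketch of the "neural chart" idea from the abstract: instead of composing along a single tree, each chart cell pools over all ways of splitting its span, so the network effectively sees a parse forest. The max-pooling choice and the composition function here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
W = rng.normal(size=(d, 2 * d))
words = rng.normal(size=(3, d))        # embeddings for a 3-word sentence

cell = {}                              # (i, j) -> representation of span [i, j)
for i in range(3):
    cell[(i, i + 1)] = words[i]

for width in range(2, 4):
    for i in range(0, 3 - width + 1):
        j = i + width
        # Compose every split point of [i, j) and max-pool element-wise,
        # so the model never commits to a single (possibly wrong) tree.
        candidates = [np.tanh(W @ np.concatenate([cell[(i, k)], cell[(k, j)]]))
                      for k in range(i + 1, j)]
        cell[(i, j)] = np.max(np.stack(candidates), axis=0)

print("sentence representation:", cell[(0, 3)])
```

Pooling over split points is what lets the chart accept arbitrary branching: every analysis contributes candidates, and the cell keeps the strongest features from any of them.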


North American Chapter of the Association for Computational Linguistics | 2015

Unsupervised Dependency Parsing: Let's Use Supervised Parsers

Phong Le; Willem H. Zuidema

We present a self-training approach to unsupervised dependency parsing that reuses existing supervised and unsupervised parsing algorithms. Our approach, called ‘iterated reranking’ (IR), starts with dependency trees generated by an unsupervised parser, and iteratively improves these trees using the richer probability models of supervised parsing, which are in turn trained on these trees. Our system achieves accuracy 1.8% higher than the state-of-the-art parser of Spitkovsky et al. (2013) on the WSJ corpus.
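
A schematic sketch of the iterated-reranking loop described above. All three functions are trivial stand-ins (assumptions) so the control flow runs; in the paper they would be an unsupervised parser and a supervised model retrained on the current trees at each iteration.

```python
def unsupervised_parse(sentences):
    # Stand-in: a trivial "tree" per sentence, as a list of head indices.
    return [list(range(len(s))) for s in sentences]

def train_supervised_model(trees):
    # Stand-in: a "model" that just remembers its training trees.
    return {"trees": trees}

def rerank(model, sentences):
    # Stand-in: return the model's best-scoring tree per sentence.
    return model["trees"]

sentences = [["a", "b", "c"], ["d", "e"]]
trees = unsupervised_parse(sentences)   # step 0: seed with unsupervised output
for iteration in range(3):              # iterate: retrain on trees, then rerank
    model = train_supervised_model(trees)
    trees = rerank(model, sentences)
print(trees)
```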


Meeting of the Association for Computational Linguistics | 2016

LSTM-based Mixture-of-Experts for Knowledge-Aware Dialogues

Phong Le; Marc Dymetman; Jean-Michel Renders

We introduce an LSTM-based method for dynamically integrating several word-prediction experts to obtain a conditional language model which can simultaneously be good at several subtasks. We illustrate this general approach with an application to dialogue where we integrate a neural chat model, good at conversational aspects, with a neural question-answering model, good at retrieving precise information from a knowledge base, and show how the integration combines the strengths of the independent components. We hope that this focused contribution will attract attention to the benefits of using such mixtures of experts in NLP.
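
A minimal numpy sketch of the mixture idea in the abstract: a controller state (standing in for the dialogue LSTM's hidden state) produces gate weights over two expert next-word distributions, and the mixture is their convex combination. The dimensions, gate parameterization, and stand-in experts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab, d = 5, 4
h = rng.normal(size=d)                    # hidden state of the dialogue LSTM (assumed)
W_gate = rng.normal(size=(2, d))          # maps state to 2 expert logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p_chat = softmax(rng.normal(size=vocab))  # expert 1: conversational model (stand-in)
p_qa = softmax(rng.normal(size=vocab))    # expert 2: knowledge-base QA model (stand-in)

gate = softmax(W_gate @ h)                # how much to trust each expert right now
p_mix = gate[0] * p_chat + gate[1] * p_qa # still a valid distribution over words
print("gate:", gate, "mixture sums to", p_mix.sum())
```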


Meeting of the Association for Computational Linguistics | 2016

Quantifying the Vanishing Gradient and Long Distance Dependency Problem in Recursive Neural Networks and Recursive LSTMs

Phong Le; Willem H. Zuidema

Recursive neural networks (RNN) and their recently proposed extension, recursive long short-term memory networks (RLSTM), are models that compute representations for sentences by recursively combining word embeddings according to an externally provided parse tree. Both models thus, unlike recurrent networks, explicitly make use of the hierarchical structure of a sentence. In this paper, we demonstrate that RNNs nevertheless suffer from the vanishing gradient and long-distance dependency problems, and that RLSTMs greatly improve over RNNs on these problems. We present an artificial learning task that allows us to quantify the severity of these problems for both models. We further show that a ratio of gradients (at the root node and a focal leaf node) is highly indicative of the success of backpropagation at optimizing the relevant weights low in the tree. This paper thus provides an explanation for existing, superior results of RLSTMs on tasks such as sentiment analysis, and suggests that the benefits of including hierarchical structure and of including LSTM-style gating are complementary.
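
A small numpy experiment in the spirit of the gradient-ratio diagnostic above: backpropagate through a deep chain of tanh compositions (a degenerate tree) and compare the gradient norm at the top with what reaches the bottom leaf. The depth and weight scale are arbitrary assumptions chosen to make the vanishing visible.

```python
import numpy as np

rng = np.random.default_rng(3)
d, depth = 4, 20
W = 0.5 * rng.normal(size=(d, d))     # small weights make vanishing visible

# Forward: repeatedly compose, keeping pre-activations for backprop.
x = rng.normal(size=d)
pre, h = [], x
for _ in range(depth):
    z = W @ h
    pre.append(z)
    h = np.tanh(z)

# Backward: start with a unit error at the root, push it down to the leaf.
g = np.ones(d)
g_root = np.linalg.norm(g)
for z in reversed(pre):
    g = W.T @ (g * (1.0 - np.tanh(z) ** 2))   # tanh'(z) = 1 - tanh(z)^2
g_leaf = np.linalg.norm(g)

print(f"|grad| at root: {g_root:.3e}, at leaf: {g_leaf:.3e}, "
      f"ratio: {g_root / g_leaf:.3e}")
```

A large root-to-leaf ratio means the error signal barely reaches the lowest nodes, which is exactly the regime where LSTM-style gating helps.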


Conference on Computational Natural Language Learning | 2017

Optimizing Differentiable Relaxations of Coreference Evaluation Metrics

Phong Le; Ivan Titov

Coreference evaluation metrics are hard to optimize directly as they are non-differentiable functions, not easily decomposable into elementary decisions. Consequently, most approaches optimize objectives only indirectly related to the end goal, resulting in suboptimal performance. Instead, we propose a differentiable relaxation that lends itself to gradient-based optimization, thus bypassing the need for reinforcement learning or heuristic modification of cross-entropy. We show that by modifying the training objective of a competitive neural coreference system, we obtain a substantial gain in performance. This suggests that our approach can be regarded as a viable alternative to using reinforcement learning or more computationally expensive imitation learning.
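
A toy numpy sketch of the relaxation idea: replace a mention's hard (argmax) antecedent choice with a softmax distribution over candidate scores, so a link-based credit becomes a differentiable expectation. The scores, gold links, and the simple expected-correct-links surrogate are assumptions, not the paper's actual relaxed metrics.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Scores of one mention linking to candidate antecedents 0..2 (assumed values).
scores = np.array([0.2, 1.5, -0.3])
gold = np.array([0.0, 1.0, 0.0])       # antecedent 1 is the correct link

hard = np.zeros(3)
hard[np.argmax(scores)] = 1.0          # non-differentiable decision
soft = softmax(scores)                 # differentiable relaxation

# Expected correct-link credit: smooth in the scores, so gradients flow.
print("hard credit:", hard @ gold, " soft credit:", float(soft @ gold))
```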


International Workshop/Conference on Parsing Technologies | 2015

Enhancing the Inside-Outside Recursive Neural Network Reranker for Dependency Parsing

Phong Le

We propose solutions to enhance the Inside-Outside Recursive Neural Network (IORNN) reranker of Le and Zuidema (2014). By replacing the original softmax function with a hierarchical softmax that uses a binary tree constructed by combining the output of the Brown clustering algorithm with frequency-based Huffman codes, we significantly reduce the reranker’s computational complexity. In addition, by enriching the contexts used in the reranker with subtrees rooted at (ancestors’) cousin nodes, we increase its accuracy.
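
A small numpy sketch of the hierarchical-softmax speedup mentioned above: a word's probability is a product of sigmoid decisions along its path in a binary tree, so scoring one word costs O(tree depth) rather than O(vocabulary). The toy tree, codes, and vectors here are illustrative assumptions, not the Brown/Huffman tree from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
h = rng.normal(size=d)                 # context vector from the reranker (assumed)

# Toy binary codes: word -> (internal-node ids, left/right bits along the path).
codes = {"dog": ([0, 1], [0, 1]), "cat": ([0, 1], [0, 0]), "the": ([0], [1])}
node_vecs = rng.normal(size=(2, d))    # one parameter vector per internal node

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def word_prob(word):
    p = 1.0
    for node, bit in zip(*codes[word]):
        q = sigmoid(node_vecs[node] @ h)     # P(go right) at this internal node
        p *= q if bit == 1 else 1.0 - q
    return p

print({w: round(word_prob(w), 4) for w in codes})
# Under a proper binary tree, the probabilities of all leaves sum to 1.
```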


Joint Conference on Lexical and Computational Semantics | 2015

Compositional Distributional Semantics with Long Short Term Memory

Phong Le; Willem H. Zuidema


International Conference on Computational Linguistics | 2012

Learning Compositional Semantics for Open Domain Semantic Parsing

Phong Le; Willem H. Zuidema


Meeting of the Association for Computational Linguistics | 2013

Learning from errors: Using vector-based compositional semantics for parse reranking

Phong Le; Willem H. Zuidema; Remko Scha

Collaboration


Dive into Phong Le's collaborations.

Top Co-Authors

Ivan Titov

University of Amsterdam


Remko Scha

University of Amsterdam
