Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Phil Blunsom is active.

Publication


Featured research published by Phil Blunsom.


empirical methods in natural language processing | 2017

Reference-Aware Language Models

Zichao Yang; Phil Blunsom; Chris Dyer; Wang Ling

We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show that our model variants outperform models based on deterministic attention.
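The core mechanism can be sketched compactly. Below is a minimal, illustrative sketch (not the authors' implementation): a Bernoulli latent variable gates, at each step, between the ordinary vocabulary softmax and copying an entry from an external entity table via attention. All names, shapes, and toy data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy vocabulary and a hypothetical external "database" of entity mentions.
vocab = ["the", "restaurant", "serves", "good", "food"]
entities = ["Pizza Hut", "Burger King"]

def reference_aware_step(h, W_vocab, W_ref, w_gate):
    """One decoding step with an explicit stochastic reference variable.

    A Bernoulli latent variable z ~ P(z | h) decides whether the next
    token is generated from the vocabulary softmax (z = 0) or copied
    from the external entity table via attention (z = 1).
    """
    p_ref = 1.0 / (1.0 + np.exp(-h @ w_gate))  # P(z = 1 | h)
    if rng.random() < p_ref:                   # sample z
        attn = softmax(h @ W_ref)              # attention over table entries
        return entities[int(attn.argmax())], p_ref
    probs = softmax(h @ W_vocab)
    return vocab[int(probs.argmax())], p_ref

# Random toy parameters (hidden size 8), standing in for a trained RNN state.
h = rng.standard_normal(8)
W_v = rng.standard_normal((8, len(vocab)))
W_r = rng.standard_normal((8, len(entities)))
w_g = rng.standard_normal(8)

token, p_ref = reference_aware_step(h, W_v, W_r, w_g)
print(f"P(reference) = {p_ref:.2f}; emitted: {token}")
```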


Dagstuhl Reports | 2017

From Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP (Dagstuhl Seminar 17042)

Phil Blunsom; Kyunghyun Cho; Chris Dyer; Hinrich Schütze

This report documents the program and the outcomes of Dagstuhl Seminar 17042 From Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP. The seminar brought together researchers from different fields, including natural language processing, computational linguistics, deep learning and general machine learning. 31 participants from 22 academic and industrial institutions discussed advantages and challenges of using characters, i.e., raw text, as input for deep learning models instead of language-specific tokens. Eight talks provided overviews of different topics, approaches and challenges in current natural language processing research. In five working groups, the participants discussed current natural language processing/understanding topics in the context of character-based modeling, namely, morphology, machine translation, representation learning, end-to-end systems and dialogue. In most of the discussions, the need for a more detailed model analysis was pointed out. Especially for character-based input, it is important to analyze what a deep learning model is able to learn about language: about tokens, morphology, or syntax in general. For an efficient and effective understanding of language, it might furthermore be beneficial to share representations learned from multiple objectives to enable the models to focus on their specific understanding task instead of needing to learn syntactic regularities of language first. Therefore, benefits and challenges of transfer learning were an important topic of the working groups as well as of the panel discussion and the final plenary discussion.
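To make the seminar's central contrast concrete, here is a minimal sketch (my illustration, not from the report) of the two input regimes: token-level input needs a language-specific tokenizer and an open-ended vocabulary, while character-level input uses a tiny closed vocabulary at the cost of longer sequences.

```python
# Illustrative contrast: raw characters vs. language-specific tokens as
# model input. The example sentence and vocabularies are toy assumptions.
sentence = "Characters beat tokens?"

# Token-level input: needs a language-specific tokenizer and grows an
# open vocabulary (rare and unseen words become a problem).
tokens = sentence.split()
token_vocab = {t: i for i, t in enumerate(sorted(set(tokens)))}
token_ids = [token_vocab[t] for t in tokens]

# Character-level input: no tokenizer, a tiny closed vocabulary, but a
# much longer sequence for the model to process.
char_vocab = {c: i for i, c in enumerate(sorted(set(sentence)))}
char_ids = [char_vocab[c] for c in sentence]

print(len(token_vocab), token_ids)     # few ids, but vocabulary is open-ended
print(len(char_vocab), len(char_ids))  # tiny closed vocabulary, longer sequence
```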


international conference on learning representations | 2016

Reasoning about Entailment with Neural Attention

Tim Rocktäschel; Edward Grefenstette; Karl Moritz Hermann; Tomáš Kočiský; Phil Blunsom


international conference on learning representations | 2017

Learning to Compose Words into Sentences with Reinforcement Learning

Dani Yogatama; Phil Blunsom; Chris Dyer; Edward Grefenstette; Wang Ling


international conference on learning representations | 2017

The Neural Noisy Channel

Lei Yu; Phil Blunsom; Chris Dyer; Edward Grefenstette; Tomáš Kočiský


arXiv: Computation and Language | 2018

Understanding Grounded Language Learning Agents

Felix Hill; Karl Moritz Hermann; Phil Blunsom; Stephen Nowland Clark


arXiv | 2013

A Simple Model for Learning Multilingual Compositional Semantics

Karl Moritz Hermann; Phil Blunsom


international conference on learning representations | 2018

Memory Architectures in Recurrent Neural Network Language Models

Dani Yogatama; Yishu Miao; Gábor Melis; Wang Ling; Adhiguna Kuncoro; Chris Dyer; Phil Blunsom


Proceedings of the 1st Workshop on Representation Learning for NLP | 2016

Proceedings of the 1st Workshop on Representation Learning for NLP

Phil Blunsom; Kyunghyun Cho; Shay Cohen; Edward Grefenstette; Karl Moritz Hermann; Laura Rimell; Jason Weston; Scott Wen-tau Yih


Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality | 2013

“Not not bad” is not “bad”: A distributional account of negation

Karl Moritz Hermann; Edward Grefenstette; Phil Blunsom

Collaboration


Dive into Phil Blunsom's collaborations.
