Chris Dyer
Publications
Featured research published by Chris Dyer.
Empirical Methods in Natural Language Processing | 2017
Zichao Yang; Phil Blunsom; Chris Dyer; Wang Ling
We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show that our model variants outperform models based on deterministic attention.
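The core mechanism this abstract describes is a per-token latent decision between generating a word from the vocabulary and referring to an entry in an external table, with the two distributions mixed according to the latent variable's probability. Below is a minimal numpy sketch of that mixture; the sizes, parameter names, and the sigmoid gate are illustrative assumptions of mine, not the paper's actual architecture.

```python
# Sketch: marginalize a latent binary variable z that chooses between
# generating from the vocabulary (z = 0) and referring to an external
# table (z = 1). All parameters are random stand-ins for a trained model.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<unk>", "the", "restaurant", "serves", "food"]
TABLE = ["Pizzeria Uno", "Thai Basil"]       # stand-in external database entries
H = 8                                        # hidden size (assumed)

W_vocab = rng.normal(size=(H, len(VOCAB)))   # hidden state -> vocabulary logits
E_table = rng.normal(size=(len(TABLE), H))   # embeddings of the table entries
w_gate = rng.normal(size=H)                  # hidden state -> p(z = refer)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step_distribution(h):
    """Distribution over VOCAB + TABLE for one step, marginalizing the
    latent binary choice z between generating and referring."""
    p_refer = 1.0 / (1.0 + np.exp(-h @ w_gate))  # p(z = 1 | h), a sigmoid gate
    p_vocab = softmax(h @ W_vocab)               # generate: softmax over vocabulary
    p_copy = softmax(E_table @ h)                # refer: attention over table entries
    # Marginalize z: vocabulary mass weighted by (1 - p_refer),
    # table mass weighted by p_refer.
    return np.concatenate([(1.0 - p_refer) * p_vocab, p_refer * p_copy])

h = rng.normal(size=H)                           # a stand-in hidden state
dist = step_distribution(h)
for token, p in zip(VOCAB + TABLE, dist):
    print(f"{token:>14s}  {p:.3f}")
print("total:", dist.sum())                      # 1.0 by construction
```

Because the gate probability scales two normalized distributions, the combined distribution over vocabulary tokens and table entries still sums to one, which is what allows the latent choice to be marginalized out exactly during training.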
Dagstuhl Reports | 2017
Phil Blunsom; Kyunghyun Cho; Chris Dyer; Hinrich Schütze
This report documents the program and the outcomes of Dagstuhl Seminar 17042, From Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP. The seminar brought together researchers from different fields, including natural language processing, computational linguistics, deep learning, and general machine learning. Thirty-one participants from 22 academic and industrial institutions discussed the advantages and challenges of using characters, i.e., raw text, as input for deep learning models instead of language-specific tokens. Eight talks provided overviews of different topics, approaches, and challenges in current natural language processing research. In five working groups, the participants discussed current natural language processing/understanding topics in the context of character-based modeling, namely morphology, machine translation, representation learning, end-to-end systems, and dialogue. In most of the discussions, the need for more detailed model analysis was pointed out: especially for character-based input, it is important to analyze what a deep learning model is able to learn about language, whether about tokens, morphology, or syntax in general. For efficient and effective language understanding, it might furthermore be beneficial to share representations learned from multiple objectives, enabling models to focus on their specific understanding task instead of first needing to learn the syntactic regularities of language. The benefits and challenges of transfer learning were therefore an important topic of the working groups as well as of the panel discussion and the final plenary discussion.
Archive | 2009
Shankar Kumar; Wolfgang Macherey; Chris Dyer; Franz Josef Och
International Conference on Learning Representations | 2017
Dani Yogatama; Phil Blunsom; Chris Dyer; Edward Grefenstette; Wang Ling
International Conference on Learning Representations | 2018
Yujia Li; Oriol Vinyals; Chris Dyer; Razvan Pascanu; Peter Battaglia
International Conference on Learning Representations | 2017
Lei Yu; Phil Blunsom; Chris Dyer; Edward Grefenstette; Tomas Kocisky
Archive | 2016
Waleed Ammar; George Mulcaire; Miguel Ballesteros; Chris Dyer; Noah A. Smith
arXiv: Computation and Language | 2018
Dirk Weissenborn; Tomas Kocisky; Chris Dyer
International Conference on Learning Representations | 2018
Dani Yogatama; Yishu Miao; Gábor Melis; Wang Ling; Adhiguna Kuncoro; Chris Dyer; Phil Blunsom
Archive | 2018
Lei Yu; Chris Dyer; Tomas Kocisky; Philip Blunsom