
Publication


Featured research published by Oriol Vinyals.


Conference on Computational Natural Language Learning | 2016

Generating Sentences from a Continuous Space

Samuel R. Bowman; Luke Vilnis; Oriol Vinyals; Andrew M. Dai; Rafal Jozefowicz; Samy Bengio

The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
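The latent-space interpolation the abstract describes reduces to moving along a straight line between two sentence codes and decoding each intermediate point. A minimal sketch, assuming 2-D toy vectors stand in for the real encoder's sentence codes (the decoder itself is omitted):

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=5):
    """Linearly interpolate between two sentence codes in latent space.

    In the paper, decoding each intermediate z with the RNN decoder
    (simple deterministic decoding) yields coherent sentences that
    morph between the two endpoints.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

# Toy 2-D latent codes standing in for real sentence encodings.
z_a = np.array([0.0, 0.0])
z_b = np.array([1.0, 2.0])
path = interpolate_latents(z_a, z_b, steps=3)
```

The endpoints of `path` recover the original codes, and intermediate points lie between them, which is what makes the decoded interpolations smooth.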


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017

Show and Tell: Lessons Learned from the 2015 MSCOCO Image Captioning Challenge

Oriol Vinyals; Alexander Toshev; Samy Bengio; Dumitru Erhan

Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. Finally, given the recent surge of interest in this task, a competition was organized in 2015 using the newly released COCO dataset. We describe and analyze the various improvements we applied to our own baseline and show the resulting performance in the competition, which we won ex-aequo with a team from Microsoft Research.
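The training objective stated above ("maximize the likelihood of the target description sentence given the training image") is the per-step negative log-likelihood of the caption tokens. A minimal sketch with hand-written toy distributions in place of the real decoder's softmax outputs:

```python
import math

def caption_nll(step_probs, target_ids):
    """Negative log-likelihood of a target caption under per-step
    next-word distributions p(w_t | image, w_<t); the model is trained
    to minimize this (equivalently, maximize caption likelihood)."""
    return -sum(math.log(probs[w]) for probs, w in zip(step_probs, target_ids))

# Toy 3-word vocabulary; each row is the decoder's distribution at step t.
step_probs = [
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
]
target = [0, 1]  # ground-truth caption token ids
loss = caption_nll(step_probs, target)
```

Lower loss means the decoder assigns higher probability to the reference caption, word by word.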


Empirical Methods in Natural Language Processing | 2015

Sentence Compression by Deletion with LSTMs

Katja Filippova; Enrique Alfonseca; Carlos A. Colmenares; Lukasz Kaiser; Oriol Vinyals

We present an LSTM approach to deletion-based sentence compression where the task is to translate a sentence into a sequence of zeros and ones, corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters, the LSTM-based model outperforms the baseline, achieving 4.5 in readability and 3.8 in informativeness.
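The output format described above — a sequence of zeros and ones over the input tokens — turns compression into masking. A minimal sketch of applying such a predicted mask (the sentence and mask are illustrative, not from the paper):

```python
def compress(tokens, keep_mask):
    """Apply a per-token keep/delete decision sequence (1 = keep,
    0 = delete), the label format the LSTM predicts in
    deletion-based sentence compression."""
    return [tok for tok, keep in zip(tokens, keep_mask) if keep == 1]

tokens = ["The", "quick", "brown", "fox", "jumps"]
mask   = [1, 0, 0, 1, 1]
result = compress(tokens, mask)  # ['The', 'fox', 'jumps']
```

Since the model emits exactly one decision per input token, the compression is always a subsequence of the original sentence.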


North American Chapter of the Association for Computational Linguistics | 2016

Multilingual Language Processing From Bytes

Daniel Gillick; Cliff Brunk; Oriol Vinyals; Amarnag Subramanya

We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label], where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than language-specific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-of-the-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models learn from scratch in the sense that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.
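The [start, length, label] output format above indexes directly into the raw byte stream. A minimal sketch of materializing such spans as labeled surface strings, with toy spans over an illustrative sentence (offsets hand-computed for this example, not model output):

```python
def apply_spans(text_bytes, spans):
    """Materialize [start, length, label] span annotations (the BTS
    output format) as (label, surface string) pairs over raw bytes."""
    return [(label, text_bytes[start:start + length].decode("utf-8"))
            for start, length, label in spans]

text = "Oriol Vinyals works at DeepMind".encode("utf-8")
spans = [(0, 13, "PER"), (23, 8, "ORG")]
entities = apply_spans(text, spans)
```

Because offsets are in bytes rather than characters or tokens, the same representation works unchanged across scripts and languages, which is what lets a single model cover many languages.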


arXiv: Machine Learning | 2015

Distilling the Knowledge in a Neural Network

Geoffrey E. Hinton; Oriol Vinyals; Jeffrey Dean


arXiv: Computation and Language | 2016

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

Yonghui Wu; Mike Schuster; Zhifeng Chen; Quoc V. Le; Mohammad Norouzi; Wolfgang Macherey; Maxim Krikun; Yuan Cao; Qin Gao; Klaus Macherey; Jeff Klingner; Apurva Shah; Melvin Johnson; Xiaobing Liu; Łukasz Kaiser; Stephan Gouws; Yoshikiyo Kato; Taku Kudo; Hideto Kazawa; Keith Stevens; George Kurian; Nishant Patil; Wei Wang; Cliff Young; Jason Smith; Jason Riesa; Alex Rudnick; Oriol Vinyals; Greg Corrado; Macduff Hughes


arXiv: Sound | 2016

WaveNet: A Generative Model for Raw Audio

Aäron van den Oord; Sander Dieleman; Heiga Zen; Karen Simonyan; Oriol Vinyals; Alex Graves; Nal Kalchbrenner; Andrew W. Senior; Koray Kavukcuoglu


arXiv: Computation and Language | 2016

Exploring the limits of language modeling

Rafal Jozefowicz; Oriol Vinyals; Mike Schuster; Noam Shazeer; Yonghui Wu


arXiv: Computation and Language | 2016

Contextual LSTM (CLSTM) models for Large scale NLP tasks

Shalini Ghosh; Oriol Vinyals; Brian Strope; Scott Roy; Tom Dean; Larry P. Heck


arXiv: Computation and Language | 2016

Adversarial Evaluation of Dialogue Models

Anjuli Kannan; Oriol Vinyals

