Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rafal Jozefowicz is active.

Publication


Featured research published by Rafal Jozefowicz.


Conference on Computational Natural Language Learning | 2016

Generating Sentences from a Continuous Space

Samuel R. Bowman; Luke Vilnis; Oriol Vinyals; Andrew M. Dai; Rafal Jozefowicz; Samy Bengio

The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
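
A minimal PyTorch sketch of the kind of RNN-based sentence VAE the abstract describes: an LSTM encoder maps a token sequence to a Gaussian posterior over a latent code, and an LSTM decoder reconstructs the sequence from a sample of that code. The kl_weight argument stands in for the KL-cost annealing the paper uses to ease training; the dimensions, vocabulary size, and toy batch are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    def __init__(self, vocab=1000, emb=64, hid=128, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        self.z_to_h = nn.Linear(z_dim, hid)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        _, (h, _) = self.encoder(x)              # final hidden state summarizes the sentence
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        dec, _ = self.decoder(x[:, :-1], (h0, torch.zeros_like(h0)))  # teacher forcing
        return self.out(dec), mu, logvar

def vae_loss(logits, tokens, mu, logvar, kl_weight=1.0):
    # Reconstruction of the shifted sequence plus a (annealable) KL term to the prior.
    rec = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl

tokens = torch.randint(0, 1000, (4, 12))         # toy batch: 4 "sentences" of 12 token ids
logits, mu, logvar = SentenceVAE()(tokens)
print(vae_loss(logits, tokens, mu, logvar, kl_weight=0.1))
```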


Nature Methods | 2018

Inferring single-trial neural population dynamics using sequential auto-encoders

Chethan Pandarinath; Daniel J. O’Shea; Jasmine Collins; Rafal Jozefowicz; Sergey D. Stavisky; Jonathan C. Kao; Eric Trautmann; Matthew T. Kaufman; Stephen I. Ryu; Leigh R. Hochberg; Jaimie M. Henderson; Krishna V. Shenoy; L. F. Abbott; David Sussillo

Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems (LFADS), a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, LFADS accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics. In short, LFADS can extract neural dynamics from single-trial recordings, stitch separate datasets into a single model, and infer perturbations to these dynamics, for example from behavioral choices.
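
A rough PyTorch sketch of a sequential autoencoder in this spirit (assuming a single session, Poisson spike counts, and no controller network or session stitching; the dimensions and toy data are illustrative): a bidirectional encoder compresses each single-trial spike-count tensor into an initial condition for a generator RNN, whose low-dimensional factors parameterize per-bin Poisson firing rates.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, n_neurons=50, enc=64, gen=64, n_factors=8):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, enc, batch_first=True, bidirectional=True)
        self.to_g0 = nn.Linear(2 * enc, gen)
        self.generator = nn.GRU(1, gen, batch_first=True)   # runs autonomously from g0
        self.to_factors = nn.Linear(gen, n_factors)
        self.to_lograte = nn.Linear(n_factors, n_neurons)

    def forward(self, spikes):                              # spikes: (batch, time, neurons)
        _, h = self.encoder(spikes)
        g0 = torch.tanh(self.to_g0(torch.cat([h[0], h[1]], dim=-1)))
        B, T, _ = spikes.shape
        zeros = spikes.new_zeros(B, T, 1)                   # no external input to the generator
        g, _ = self.generator(zeros, g0.unsqueeze(0))
        factors = self.to_factors(g)                        # low-dimensional latent factors
        return self.to_lograte(factors).exp(), factors      # Poisson rates per time bin

spikes = torch.poisson(torch.full((16, 100, 50), 2.0))      # toy single-trial spike counts
rates, factors = SeqAutoencoder()(spikes)
loss = nn.PoissonNLLLoss(log_input=False)(rates, spikes)    # Poisson reconstruction loss
print(loss)
```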


European Conference on Machine Learning | 2015

Maximum Entropy Linear Manifold for learning discriminative low-dimensional representation

Wojciech Marian Czarnecki; Rafal Jozefowicz; Jacek Tabor

Representation learning is currently a very active topic in modern machine learning, mostly due to the great success of deep learning methods. In particular, a low-dimensional representation which discriminates classes can not only enhance the classification procedure, but also make it faster, and, contrary to high-dimensional embeddings, can be used efficiently for visual exploratory data analysis. In this paper we propose Maximum Entropy Linear Manifold (MELM), a multidimensional generalization of the Multithreshold Entropy Linear Classifier model which is able to find a low-dimensional linear data projection maximizing the discriminativeness of the projected classes. As a result we obtain a linear embedding which can be used for classification, class-aware dimensionality reduction, and data visualization. MELM provides highly discriminative 2D projections of the data which can be used as a method for constructing robust classifiers. We provide both an empirical evaluation and some interesting theoretical properties of our objective function, such as scale and affine transformation invariance, connections with PCA, and bounding of the expected balanced accuracy error.
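
One way to score the discriminativeness of a projection, in the spirit of the entropy-based objective described above, is the Cauchy-Schwarz divergence between Gaussian kernel density estimates of the projected classes. Below is a hedged PyTorch sketch of that idea; the bandwidth, optimizer, toy Gaussian classes, and the QR re-orthonormalization of the projection are illustrative choices, not the paper's exact procedure.

```python
import torch

def gaussian_ip(a, b, sigma):
    # Cross information potential of two Gaussian KDEs with bandwidth sigma:
    # the mean of N(a_i - b_j; 0, 2*sigma^2 I) over all pairs of projected points.
    d = a.shape[1]
    sq = torch.cdist(a, b).pow(2)
    norm = (4 * torch.pi * sigma ** 2) ** (d / 2)
    return torch.exp(-sq / (4 * sigma ** 2)).mean() / norm

def cs_divergence(a, b, sigma=0.5):
    # Cauchy-Schwarz divergence between the two projected class densities.
    return -torch.log(gaussian_ip(a, b, sigma) ** 2
                      / (gaussian_ip(a, a, sigma) * gaussian_ip(b, b, sigma)))

torch.manual_seed(0)
X0 = torch.randn(100, 10) + 1.0              # toy class 0 in 10 dimensions
X1 = torch.randn(100, 10) - 1.0              # toy class 1
W = torch.randn(10, 2, requires_grad=True)   # linear projection onto a 2D manifold
opt = torch.optim.Adam([W], lr=0.05)
for _ in range(200):
    Q, _ = torch.linalg.qr(W)                # keep the projection orthonormal
    loss = -cs_divergence(X0 @ Q, X1 @ Q)    # maximize class discriminativeness
    opt.zero_grad(); loss.backward(); opt.step()
Q = torch.linalg.qr(W.detach())[0]
print(cs_divergence(X0 @ Q, X1 @ Q))
```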


NC'14: Proceedings of the 2014 International Conference on Neural Connectomics - Volume 46 | 2014

Neural connectivity reconstruction from calcium imaging signal using random forest with topological features

Wojciech Marian Czarnecki; Rafal Jozefowicz

Connectomics is becoming an increasingly popular area of research. With recent advances in optical imaging of neural activity, tens of thousands of neurons can be monitored simultaneously. In this paper we present a method of incorporating topological knowledge into the data representation of a Random Forest classifier in order to reconstruct neural connections from the patterns of their activities. The proposed technique yields a model competitive with state-of-the-art methods such as Deep Convolutional Neural Networks and Graph Decomposition techniques. This claim is supported by the results (5th place, within 0.003 AUC ROC of the top contestant) obtained in the connectomics competition organized on the Kaggle platform.
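
A hedged sketch of the pipeline's overall shape, with a stand-in feature set rather than the authors' actual topological features: for every directed neuron pair, assemble lagged correlations of the traces plus crude graph features (degrees in a thresholded correlation graph), then train a random forest on pair labels. The synthetic traces and ground-truth connectome below are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_neurons, T = 20, 500
traces = rng.normal(size=(n_neurons, T))        # stand-in for calcium signals
adj = rng.random((n_neurons, n_neurons)) < 0.1  # stand-in ground-truth connectome

corr = np.corrcoef(traces)
degree = (np.abs(corr) > 0.1).sum(axis=1)       # crude topological feature per neuron

def pair_features(i, j):
    # Same-time and lagged correlations of the two traces, plus both degrees.
    lagged = [np.corrcoef(traces[i, :-k], traces[j, k:])[0, 1] for k in (1, 2, 3)]
    return [corr[i, j], *lagged, degree[i], degree[j]]

pairs = [(i, j) for i in range(n_neurons) for j in range(n_neurons) if i != j]
X = np.array([pair_features(i, j) for i, j in pairs])
y = np.array([adj[i, j] for i, j in pairs])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:300], y[:300])
print("AUC ROC:", roc_auc_score(y[300:], clf.predict_proba(X[300:])[:, 1]))
```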


arXiv: Distributed, Parallel, and Cluster Computing | 2015

TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

Martín Abadi; Ashish Agarwal; Paul Barham; Eugene Brevdo; Zhifeng Chen; Craig Citro; Gregory S. Corrado; Andy Davis; Jeffrey Dean; Matthieu Devin; Sanjay Ghemawat; Ian J. Goodfellow; Andrew Harp; Geoffrey Irving; Michael Isard; Yangqing Jia; Rafal Jozefowicz; Lukasz Kaiser; Manjunath Kudlur; Josh Levenberg; Dan Mané; Rajat Monga; Sherry Moore; Derek Gordon Murray; Chris Olah; Mike Schuster; Jonathon Shlens; Benoit Steiner; Ilya Sutskever; Kunal Talwar


International Conference on Machine Learning | 2015

An Empirical Exploration of Recurrent Network Architectures

Rafal Jozefowicz; Wojciech Zaremba; Ilya Sutskever


arXiv: Computation and Language | 2016

Exploring the limits of language modeling

Rafal Jozefowicz; Oriol Vinyals; Mike Schuster; Noam Shazeer; Yonghui Wu


arXiv: Learning | 2015

Towards Principled Unsupervised Learning

Ilya Sutskever; Rafal Jozefowicz; Karol Gregor; Danilo Jimenez Rezende; Timothy P. Lillicrap; Oriol Vinyals


arXiv: Learning | 2016

LFADS - Latent Factor Analysis via Dynamical Systems

David Sussillo; Rafal Jozefowicz; L. F. Abbott; Chethan Pandarinath


Neural Information Processing Systems | 2016

Improved Variational Inference with Inverse Autoregressive Flow

Diederik P. Kingma; Tim Salimans; Rafal Jozefowicz; Xi Chen; Ilya Sutskever; Max Welling

Collaboration


Dive into Rafal Jozefowicz's collaborations.

Top Co-Authors

Max Welling

University of Amsterdam
