Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ian J. Goodfellow is active.

Publication


Featured research published by Ian J. Goodfellow.


Computer and Communications Security | 2016

Deep Learning with Differential Privacy

Martín Abadi; Andy Chu; Ian J. Goodfellow; H. Brendan McMahan; Ilya Mironov; Kunal Talwar; Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
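The core mechanism behind this result is differentially private SGD: clip each example's gradient to a fixed norm, then add Gaussian noise calibrated to that clipping bound before applying the update. Below is a minimal NumPy sketch of that idea for a toy logistic model; the data, `clip_norm`, and `noise_multiplier` values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and linear model: illustrative stand-ins, not the paper's setup.
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
w = np.zeros(10)

clip_norm = 1.0         # per-example gradient norm bound C (assumed value)
noise_multiplier = 1.1  # noise scale sigma relative to C (assumed value)
lr = 0.1

for step in range(100):
    batch = rng.choice(len(X), size=32, replace=False)
    per_example_grads = []
    for i in batch:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))   # logistic prediction
        g = (p - y[i]) * X[i]                  # per-example gradient
        # Clip the gradient so no single example can dominate the update.
        g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        per_example_grads.append(g)
    # Sum clipped gradients, add Gaussian noise scaled to the clipping bound,
    # then average: this is the differentially private gradient estimate.
    noisy_sum = np.sum(per_example_grads, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    w -= lr * noisy_sum / len(batch)
```

The refined privacy analysis the abstract mentions is what makes this practical: it tracks the cumulative privacy cost of many such noisy steps far more tightly than earlier composition bounds.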


International Conference on Neural Information Processing | 2013

Challenges in Representation Learning: A Report on Three Machine Learning Contests

Ian J. Goodfellow; Dumitru Erhan; Pierre Carrier; Aaron C. Courville; Mehdi Mirza; Ben Hamner; Will Cukierski; Yichuan Tang; David Thaler; Dong-Hyun Lee; Yingbo Zhou; Chetan Ramaiah; Fangxiang Feng; Ruifan Li; Xiaojie Wang; Dimitris Athanasakis; John Shawe-Taylor; Maxim Milakov; John Park; Radu Tudor Ionescu; Marius Popescu; Cristian Grozea; James Bergstra; Jingjing Xie; Lukasz Romaszko; Bing Xu; Zhang Chuang; Yoshua Bengio

The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.


Computer and Communications Security | 2017

Practical Black-Box Attacks against Machine Learning

Nicolas Papernot; Patrick D. McDaniel; Ian J. Goodfellow; Somesh Jha; Z. Berkay Celik; Ananthram Swami

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists of training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%, respectively. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
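A minimal NumPy sketch of the substitute-model strategy the abstract describes, using a logistic-regression substitute like the ones mentioned for the Amazon and Google attacks. The `query_target` oracle here is a hypothetical stand-in for the remote API (simulated by a hidden linear rule), and the sign-based perturbation is one standard way (FGSM-style) to craft the transferable adversarial example.

```python
import numpy as np

rng = np.random.default_rng(0)
secret_w = rng.normal(size=10)  # stands in for the remote model's internals

def query_target(x):
    """Black-box oracle: the attacker only observes the predicted label."""
    return int(x @ secret_w > 0)

# 1. Label a small synthetic dataset by querying the target.
X = rng.normal(size=(200, 10))
y = np.array([query_target(x) for x in X])

# 2. Train a local logistic-regression substitute on the oracle's labels.
w = np.zeros(10)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(X)

# 3. Craft an adversarial example with the substitute's weights: step against
#    the substitute's decision and hope the perturbation transfers.
x = X[0]
eps = 0.5                                        # assumed perturbation budget
direction = w if query_target(x) == 0 else -w    # push toward the other class
x_adv = x + eps * np.sign(direction)

print("target label before:", query_target(x), "after:", query_target(x_adv))
```

The attacker never sees `secret_w`; everything is derived from chosen-input queries, which is exactly the threat model the paper demonstrates.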


Computer Vision and Pattern Recognition | 2016

Improving the Robustness of Deep Neural Networks via Stability Training

Stephan Zheng; Yang Song; Thomas Leung; Ian J. Goodfellow

In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture [11] against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
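The core of the method is an auxiliary loss that penalizes the distance between the network's outputs on a clean input and on a slightly distorted copy, added to the usual task loss. A minimal NumPy sketch of that objective, assuming a toy linear "network" as the embedding, Gaussian pixel noise as the distortion, and an illustrative stability weight `alpha`.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, W):
    """Stand-in feature embedding: a single linear layer."""
    return W @ x

W = rng.normal(size=(4, 16))
x = rng.normal(size=16)                        # clean input
x_distorted = x + 0.01 * rng.normal(size=16)   # e.g. compression/rescaling noise

y_target = rng.normal(size=4)
alpha = 0.1  # stability weight (assumed value)

# Task loss is computed on the clean input only; the stability term ties the
# embedding of the distorted copy to the embedding of the clean input.
task_loss = np.sum((f(x, W) - y_target) ** 2)
stability_loss = np.sum((f(x, W) - f(x_distorted, W)) ** 2)
total_loss = task_loss + alpha * stability_loss
print(f"task={task_loss:.4f}  stability={stability_loss:.6f}  total={total_loss:.4f}")
```

Training then minimizes `total_loss`, sampling a fresh distortion per example, so the network learns embeddings that are insensitive to these small input changes.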


Neural Networks | 2015

Challenges in representation learning

Ian J. Goodfellow; Dumitru Erhan; Pierre Carrier; Aaron C. Courville; Mehdi Mirza; Benjamin Hamner; William Cukierski; Yichuan Tang; David Thaler; Dong-Hyun Lee; Yingbo Zhou; Chetan Ramaiah; Fangxiang Feng; Ruifan Li; Xiaojie Wang; Dimitris Athanasakis; John Shawe-Taylor; Maxim Milakov; John Park; Radu Tudor Ionescu; Marius Popescu; Cristian Grozea; James Bergstra; Jingjing Xie; Lukasz Romaszko; Bing Xu; Zhang Chuang; Yoshua Bengio

The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Scaling Up Spike-and-Slab Models for Unsupervised Feature Learning

Ian J. Goodfellow; Aaron C. Courville; Yoshua Bengio

We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM), and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedentedly large problem sizes, and demonstrate that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.
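For orientation, a spike-and-slab prior pairs each binary "spike" with a real-valued "slab" and models the input as a noisy linear combination of their products, so a unit is either exactly off or takes a real value. A minimal NumPy sketch of sampling from that generative structure, with all sizes and hyperparameters chosen arbitrarily for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden, n_visible = 32, 16
W = rng.normal(size=(n_visible, n_hidden))  # dictionary of basis vectors

# Spike-and-slab prior: a binary spike gates each real-valued slab.
spike_prob = 0.1
h = rng.random(n_hidden) < spike_prob               # spikes: h_i ~ Bernoulli(p)
s = rng.normal(loc=0.0, scale=1.0, size=n_hidden)   # slabs:  s_i ~ N(0, 1)

# Visible units: noisy linear combination of the active (spiked) slabs.
noise_std = 0.1
v = W @ (h * s) + noise_std * rng.normal(size=n_visible)
```

Inference in S3C amounts to recovering which spikes are on and what the corresponding slab values are for a given `v`; the fast approximate inference procedure is what lets the paper scale this to large problem sizes.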


arXiv: Computer Vision and Pattern Recognition | 2018

Adversarial Attacks and Defences Competition

Alexey Kurakin; Ian J. Goodfellow; Samy Bengio; Yinpeng Dong; Fangzhou Liao; Ming Liang; Tianyu Pang; Jun Zhu; Xiaolin Hu; Cihang Xie; Jianyu Wang; Zhishuai Zhang; Zhou Ren; Alan L. Yuille; Sangxia Huang; Yao Zhao; Yuzhe Zhao; Zhonglin Han; Junjiajia Long; Yerkebulan Berdibekov; Takuya Akiba; Seiya Tokui; Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them. In this chapter, we describe the structure and organization of the competition and the solutions developed by several of the top-placing teams.


IEEE Computer Security Foundations Symposium | 2017

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

Martín Abadi; Úlfar Erlingsson; Ian J. Goodfellow; H. Brendan McMahan; Ilya Mironov; Nicolas Papernot; Kunal Talwar; Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful. This note reviews two recent works on privacy in the light of the wisdom of some of the early literature, in particular the principles distilled by Saltzer and Schroeder in the 1970s.


Archive | 2018

Introduction to NIPS 2017 Competition Track

Sergio Escalera; Markus Weimer; Mikhail Burtsev; Valentin Malykh; Varvara Logacheva; Ryan Lowe; Iulian Vlad Serban; Yoshua Bengio; Alexander I. Rudnicky; Alan W. Black; Shrimai Prabhumoye; Łukasz Kidziński; Sharada Prasanna Mohanty; Carmichael F. Ong; Jennifer L. Hicks; Sergey Levine; Marcel Salathé; Scott L. Delp; Iker Huerga; Alexander Grigorenko; Leifur Thorbergsson; Anasuya Das; Kyla Nemitz; Jenna Sandker; Stephen King; Alexander S. Ecker; Leon A. Gatys; Matthias Bethge; Jordan L. Boyd-Graber; Shi Feng

Competitions have become a popular tool in the data science community to solve hard problems, assess the state of the art and spur new research directions. Companies like Kaggle and open source platforms like Codalab connect people with data and a data science problem to those with the skills and means to solve it. Hence, the question arises: What, if anything, could NIPS add to this rich ecosystem?


Communications of the ACM | 2018

Making machine learning robust against adversarial inputs

Ian J. Goodfellow; Patrick D. McDaniel; Nicolas Papernot

Such inputs distort how machine-learning-based systems are able to function in the world as it is.

Collaboration


Dive into Ian J. Goodfellow's collaborations.

Top Co-Authors

Yoshua Bengio, Université de Montréal
Nicolas Papernot, Pennsylvania State University
Patrick D. McDaniel, Pennsylvania State University
James Bergstra, Université de Montréal
Mehdi Mirza, Université de Montréal