Publication


Featured research published by Léon Bottou.


Computer Vision and Pattern Recognition | 2014

Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks

Maxime Oquab; Léon Bottou; Ivan Laptev; Josef Sivic

Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with a limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representations for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on the Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.
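The transfer recipe described above (keep the pretrained layers frozen, train only a new classifier on their outputs) can be illustrated with a toy sketch. This is not the paper's implementation: the frozen CNN layers are replaced by a fixed hand-chosen ReLU feature map, and the adaptation layer is a small logistic-regression head; all names and data here are illustrative.

```python
import math

def pretrained_features(x, proj):
    """Stand-in for frozen mid-level CNN layers: a fixed ReLU feature map."""
    return [max(0.0, sum(p * xi for p, xi in zip(row, x))) for row in proj]

def train_head(examples, proj, lr=0.5, epochs=200):
    """Train only a new logistic-regression head on the frozen features."""
    w = [0.0] * len(proj)
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            f = pretrained_features(x, proj)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the logistic loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, proj, w, b):
    f = pretrained_features(x, proj)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Toy "pretrained" feature directions and a tiny target task.
proj = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
examples = [((1.0, 1.0), 1), ((2.0, -1.0), 1), ((-1.0, -1.0), 0), ((-2.0, 1.0), 0)]
w, b = train_head(examples, proj)
preds = [predict(x, proj, w, b) for x, _ in examples]
```

Only `w` and `b` are updated; the feature map never changes, which mirrors why transfer works with little target-task data.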


Neural Networks: Tricks of the Trade (2nd ed.) | 2012

Stochastic Gradient Descent Tricks

Léon Bottou

Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations.
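As a toy illustration of the SGD principle the chapter discusses (one randomly chosen example per parameter update), and not code from the chapter itself, here is plain SGD fitting a one-dimensional least-squares model; the learning rate and epoch count are illustrative choices:

```python
import random

def sgd_linear(data, lr=0.05, epochs=100, seed=0):
    """Fit y ≈ w*x + b by SGD on squared error, one sample per update."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)                  # visit examples in random order
        for x, y in data:
            err = (w * x + b) - y          # residual on this single sample
            w -= lr * err * x              # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err                  # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Recover y = 2x + 1 from a handful of noiseless points.
pts = [(float(x), 2.0 * x + 1.0) for x in [-2, -1, 0, 1, 2]]
w, b = sgd_linear(pts)
```

Each update costs O(1) in the dataset size, which is the reason SGD scales to large training sets.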


Computer Vision and Pattern Recognition | 2015

Is object localization for free? - Weakly-supervised learning with convolutional neural networks

Maxime Oquab; Léon Bottou; Ivan Laptev; Josef Sivic

Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.
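The mechanism behind finding (ii) — image-level labels yielding approximate locations — can be sketched in miniature. The paper operates on CNN score maps; this toy version uses a 1-D list of per-location scores for one class, where taking the max gives the image-level score and its argmax doubles as a coarse localization. The data is illustrative.

```python
def image_level_score(score_map):
    """Weakly supervised trick: the image-level class score is the max over
    spatial locations, so the arg-max location falls near the object."""
    best = max(range(len(score_map)), key=lambda i: score_map[i])
    return score_map[best], best

scores = [0.1, 0.3, 2.0, -0.5]   # toy per-location scores for one class
s, loc = image_level_score(scores)
```

Training on the max-pooled score needs only an image-level label, yet gradients flow through the highest-scoring location, which is what makes approximate localization come "for free".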


Machine Learning | 2014

Introduction to the special issue on learning semantics

Antoine Bordes; Léon Bottou; Ronan Collobert; Dan Roth; Jason Weston; Luke Zettlemoyer

A key ambition of AI is to render computers able to evolve and interact with the real world. This can be made possible only if the machine is able to produce an interpretation of its available modalities (image, audio, text, etc.) which can be used to support reasoning and taking appropriate actions. Computational linguists use the term …


Archive | 2015

Making Vapnik–Chervonenkis Bounds Accurate

Léon Bottou

This chapter shows how returning to the combinatorial nature of the Vapnik–Chervonenkis bounds provides simple ways to increase their accuracy, take into account properties of the data and of the learning algorithm, and provide empirically accurate estimates of the deviation between training error and test error.
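For context, one standard statement of the bound the chapter sets out to sharpen (following Vapnik's formulation, not reproduced from the chapter) is: for a hypothesis class of VC dimension $h$ and a training set of $n$ samples, with probability at least $1-\eta$,

```latex
R(f) \;\le\; R_n(f) + \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) + \ln\frac{4}{\eta}}{n}}
```

where $R(f)$ is the expected (test) error and $R_n(f)$ the empirical (training) error. The chapter argues that working directly with the underlying combinatorial quantities yields much tighter, data-dependent versions of this deviation.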


Archive | 2015

Rejoinder: Making VC Bounds Accurate

Léon Bottou

I am very grateful to my colleagues Olivier Catoni and Vladimir Vovk because their insightful comments add considerable value to my article.


Empirical Inference | 2013

In Hindsight: Doklady Akademii Nauk SSSR, 181(4), 1968

Léon Bottou

This short contribution presents the first paper in which Vapnik and Chervonenkis describe the foundations of Statistical Learning Theory (Vapnik, Chervonenkis (1968) Proc USSR Acad Sci 181(4): 781–783).


Journal of Machine Learning Research | 2013

Counterfactual reasoning and learning systems: the example of computational advertising

Léon Bottou; Jonas Peters; Joaquin Quiñonero-Candela; Denis X. Charles; D. Max Chickering; Elon Portugaly; Dipankar Ray; Patrice Y. Simard; Ed Snelson


Machine Learning | 2014

From machine learning to machine reasoning

Léon Bottou


Archive | 2007

Large-Scale Learning with String Kernels

Léon Bottou; Olivier Chapelle; Dennis DeCoste; Jason Weston
