
Publication


Featured research published by Alex Krizhevsky.


The International Journal of Robotics Research | 2018

Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection

Sergey Levine; Peter Pastor; Alex Krizhevsky; Julian Ibarz; Deirdre Quillen

We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images independent of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. We describe two large-scale experiments that we conducted on two separate robotic platforms. In the first experiment, about 800,000 grasp attempts were collected over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and gripper wear and tear. In the second experiment, we used a different robotic platform and 8 robots to collect a dataset consisting of over 900,000 grasp attempts. The second robotic platform was used to test transfer between robots, and the degree to which data from a different set of robots can be used to aid learning. Our experimental results demonstrate that our approach achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing. Our transfer experiment also illustrates that data from different robots can be combined to learn more reliable and effective grasping.
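
A minimal sketch (not the authors' code) of the core idea in this abstract: a convolutional network that takes a monocular camera image and a candidate task-space gripper motion and predicts the probability of a successful grasp. The class name, layer sizes, and the way the motion vector is fused with the image features are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    def __init__(self, motion_dim: int = 5):
        super().__init__()
        # Convolutional trunk over the monocular camera image.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Small MLP over the candidate task-space motion command.
        self.motion_mlp = nn.Sequential(nn.Linear(motion_dim, 64), nn.ReLU())
        # Head that scores whether this motion ends in a successful grasp.
        self.head = nn.Sequential(nn.Linear(64 + 64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        img_feat = self.conv(image).flatten(1)      # (B, 64) image features
        mot_feat = self.motion_mlp(motion)          # (B, 64) motion features
        logit = self.head(torch.cat([img_feat, mot_feat], dim=1))
        return torch.sigmoid(logit).squeeze(1)      # grasp success probability

# Training would minimize binary cross-entropy against the grasp outcomes logged
# by the robots, e.g. nn.BCELoss()(net(images, motions), success_labels).
```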


International Conference on Artificial Neural Networks | 2011

Transforming auto-encoders

Geoffrey E. Hinton; Alex Krizhevsky; Sida D. Wang

The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the hand-engineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.
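
A minimal sketch of a translation-only transforming auto-encoder in the spirit of this abstract: each capsule's recognition units output a presence probability and an (x, y) position, the known shift is added to that position, and the capsule's generation units render a contribution to the shifted image. Hidden sizes and the class names are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn

class Capsule(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.recognise = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_pose = nn.Linear(hidden, 2)      # predicted (x, y) instantiation parameters
        self.to_prob = nn.Linear(hidden, 1)      # probability the feature is present
        self.generate = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                      nn.Linear(hidden, in_dim))

    def forward(self, image: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
        h = self.recognise(image)                # image: (B, in_dim) flattened pixels
        pose = self.to_pose(h)
        prob = torch.sigmoid(self.to_prob(h))
        out = self.generate(pose + shift)        # render the feature at the shifted pose
        return prob * out                        # contribution gated by presence

class TransformingAutoencoder(nn.Module):
    def __init__(self, in_dim: int = 784, n_capsules: int = 10):
        super().__init__()
        self.capsules = nn.ModuleList([Capsule(in_dim) for _ in range(n_capsules)])

    def forward(self, image: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
        # The reconstruction target is the input image translated by `shift` (B, 2).
        return torch.stack([c(image, shift) for c in self.capsules]).sum(0)
```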


British Machine Vision Conference | 2015

Real-Time Pedestrian Detection With Deep Network Cascades

Anelia Angelova; Alex Krizhevsky; Vincent Vanhoucke; Abhijit Ogale; Dave Ferguson

We present a new real-time approach to object detection that combines the efficiency of cascade classifiers with the accuracy of deep neural networks. Deep networks have been shown to excel at classification tasks, and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features and is both very fast and very accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real time at 15 frames per second. The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is competitive with the very best reported results. It is the first work we are aware of that achieves very high accuracy while running in real time.
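
A minimal sketch of the cascade idea described above: a cheap classifier over fast features rejects most candidate windows, and only the survivors are passed to the expensive deep network. The two models, the feature function, and the thresholds are placeholders, not the authors' implementation.

```python
def detect_pedestrians(windows, fast_features, fast_clf, deep_net,
                       fast_threshold=0.1, deep_threshold=0.5):
    """windows: list of candidate image crops; returns crops accepted by both stages."""
    detections = []
    for crop in windows:
        # Stage 1: fast features + cheap classifier prune the vast majority of windows.
        if fast_clf(fast_features(crop)) < fast_threshold:
            continue
        # Stage 2: the deep network only runs on the few windows that survive stage 1,
        # which is what keeps the overall detector real-time.
        if deep_net(crop) >= deep_threshold:
            detections.append(crop)
    return detections
```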


International Conference on Robotics and Automation | 2015

Pedestrian detection with a Large-Field-Of-View deep network

Anelia Angelova; Alex Krizhevsky; Vincent Vanhoucke

Pedestrian detection is of crucial importance to autonomous driving applications. Methods based on deep learning have shown significant improvements in accuracy, which makes them particularly suitable for applications, such as pedestrian detection, where reducing the miss rate is very important. Although they are accurate, their runtime has been at best in seconds per image, which makes them impractical for onboard applications. We present a Large-Field-Of-View (LFOV) deep network for pedestrian detection that achieves high accuracy and is designed to make deep networks work faster for detection problems. The idea of the proposed Large-Field-of-View deep network is to learn to make classification decisions simultaneously and accurately at multiple locations. The LFOV network processes larger image areas at much faster speeds than typical deep networks can, and can intrinsically reuse computations. Our pedestrian detection solution, which is a combination of an LFOV network and a standard deep network, runs at 280 ms per image on a GPU and achieves a 35.85% average miss rate on the Caltech Pedestrian Detection Benchmark.
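
A minimal sketch of the Large-Field-Of-View idea: instead of classifying one window at a time, a single network looks at a larger image area and emits a grid of per-location pedestrian probabilities in one forward pass, so convolutional computation is shared across locations. The layer sizes and the 4x4 output grid are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LFOVNet(nn.Module):
    def __init__(self, grid: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
        )
        # One pedestrian score per cell of the grid covering the large input region.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(grid),
                                  nn.Conv2d(64, 1, kernel_size=1))

    def forward(self, region: torch.Tensor) -> torch.Tensor:
        # region: (B, 3, H, W) large image area; returns (B, grid, grid) probabilities,
        # i.e. classification decisions at multiple locations computed simultaneously.
        return torch.sigmoid(self.head(self.trunk(region))).squeeze(1)
```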


International Symposium on Experimental Robotics | 2016

Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection

Sergey Levine; Alex Krizhevsky; Deirdre Quillen

We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
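
A minimal sketch of the continuous servoing loop this abstract refers to: at each control step, candidate task-space motions are sampled, scored by a grasp-success network of the kind sketched earlier, and the best-scoring motion is executed before re-observing. The simple sample-and-rank optimizer and the numeric constants are illustrative assumptions, and `grasp_net`, `get_camera_image`, and `execute_motion` are hypothetical stand-ins for the robot's interfaces.

```python
import torch

def servo_step(grasp_net, get_camera_image, execute_motion,
               motion_dim=5, n_samples=64, step_scale=0.05):
    image = get_camera_image()                    # (1, 3, H, W) monocular camera image
    # Sample candidate task-space gripper motions around zero.
    candidates = step_scale * torch.randn(n_samples, motion_dim)
    with torch.no_grad():
        scores = grasp_net(image.expand(n_samples, -1, -1, -1), candidates)
    best = candidates[scores.argmax()]
    execute_motion(best)                          # move a small step, then repeat
    return scores.max().item()                    # predicted grasp success probability
```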


Communications of the ACM | 2017

ImageNet classification with deep convolutional neural networks

Alex Krizhevsky; Ilya Sutskever; Geoffrey E. Hinton


Neural Information Processing Systems | 2012

ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky; Ilya Sutskever; Geoffrey E. Hinton


Journal of Machine Learning Research | 2014

Dropout: a simple way to prevent neural networks from overfitting

Nitish Srivastava; Geoffrey E. Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov


arXiv: Neural and Evolutionary Computing | 2012

Improving neural networks by preventing co-adaptation of feature detectors

Geoffrey E. Hinton; Nitish Srivastava; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov


European Symposium on Artificial Neural Networks | 2011

Using very deep autoencoders for content-based image retrieval

Alex Krizhevsky; Geoffrey E. Hinton

Collaboration


Dive into Alex Krizhevsky's collaborations.

Top Co-Authors

Sergey Levine

University of California
