Justin Johnson
Stanford University
Publications
Featured research published by Justin Johnson.
European Conference on Computer Vision | 2016
Justin Johnson; Alexandre Alahi; Li Fei-Fei
We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
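The loss described above compares images in the feature space of a fixed, pretrained network rather than pixel by pixel. Below is a minimal, hypothetical PyTorch sketch of such a feature reconstruction loss; the class name, the choice of VGG-16 as the loss network, and the layer index are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
from torchvision import models

# Minimal sketch of a feature-reconstruction (perceptual) loss, assuming a pretrained
# VGG-16 as the fixed loss network. Layer index and class name are illustrative only.
class PerceptualLoss(nn.Module):
    def __init__(self, feature_layer: int = 8):  # index 8 is an early ReLU in torchvision's VGG-16
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[: feature_layer + 1]
        for p in vgg.parameters():
            p.requires_grad = False  # the loss network is never updated
        self.features = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, output_img, target_img):
        # Distance is measured between high-level feature activations, not raw pixels.
        return self.mse(self.features(output_img), self.features(target_img))
```

During training, the feed-forward transformation network is optimized against a loss of this kind (for style transfer, combined with a style term built from the same features), so at test time a single forward pass replaces the iterative optimization of Gatys et al.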
International Journal of Computer Vision | 2017
Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia Li; David A. Shamma; Michael S. Bernstein; Li Fei-Fei
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that “the person is riding a horse-drawn carriage.” In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and …
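As a rough illustration of the structured annotations the abstract describes, here is a small, hypothetical Python sketch of objects, attributes, and pairwise relationships; the class and field names are assumptions made for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative-only data structures for scene-graph-style annotations; the names here
# are assumptions, not Visual Genome's real schema.
@dataclass
class SceneObject:
    name: str                                   # e.g. "man", "horse", "carriage"
    attributes: List[str] = field(default_factory=list)

@dataclass
class Relationship:
    subject: SceneObject
    predicate: str                              # e.g. "riding", "pulling"
    obj: SceneObject

# The example from the abstract: riding(man, carriage) and pulling(horse, carriage).
man, horse, carriage = SceneObject("man"), SceneObject("horse"), SceneObject("carriage")
scene_graph = [Relationship(man, "riding", carriage),
               Relationship(horse, "pulling", carriage)]
```

A question-answering model can then ground a query such as “What vehicle is the person riding?” by traversing these relationships rather than relying on image-level statistics alone.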
Computer Vision and Pattern Recognition | 2016
Justin Johnson; Andrej Karpathy; Li Fei-Fei
Computer Vision and Pattern Recognition | 2015
Justin Johnson; Ranjay Krishna; Michael Stark; Li-Jia Li; David A. Shamma; Michael S. Bernstein; Li Fei-Fei
Computer Vision and Pattern Recognition | 2017
Justin Johnson; Bharath Hariharan; Laurens van der Maaten; Li Fei-Fei; C. Lawrence Zitnick; Ross B. Girshick
International Conference on Computer Vision | 2015
Justin Johnson; Lamberto Ballan; Li Fei-Fei
Computer Vision and Pattern Recognition | 2017
Jonathan Krause; Justin Johnson; Ranjay Krishna; Li Fei-Fei
European Conference on Computer Vision | 2018
Jiren Zhu; Russell Kaplan; Justin Johnson; Li Fei-Fei
arXiv: Learning | 2015
Andrej Karpathy; Justin Johnson; Fei-Fei Li
International Conference on Computer Vision | 2017
Justin Johnson; Bharath Hariharan; Laurens van der Maaten; Judy Hoffman; Li Fei-Fei; C. Lawrence Zitnick; Ross B. Girshick