Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jonathan Tompson is active.

Publication


Featured research published by Jonathan Tompson.


ACM Transactions on Graphics | 2014

Real-Time Continuous Pose Recovery of Human Hands Using Convolutional Networks

Jonathan Tompson; Murphy Stein; Yann LeCun; Ken Perlin

We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.
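The staged pipeline described above can be sketched as a simple function composition. The stage bodies below are illustrative placeholders, not the paper's actual models: the thresholding stands in for the randomized decision forest, the centroid for the ConvNet features, and the scaling for inverse kinematics.

```python
import numpy as np

def segment_hand(depth_img):
    """Stage 1 (placeholder): per-pixel hand/background mask.

    Stands in for the paper's randomized-decision-forest classifier;
    here we simply threshold depth.
    """
    return depth_img < 0.5

def extract_features(depth_img, mask):
    """Stage 2 (placeholder): dense feature extraction.

    Stands in for the convolutional network; here we return the
    centroid of the masked region as a toy feature vector.
    """
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def inverse_kinematics(features):
    """Stage 3 (placeholder): map features to a stable pose vector."""
    return features * 0.1

def recover_pose(depth_img):
    mask = segment_hand(depth_img)
    feats = extract_features(depth_img, mask)
    return inverse_kinematics(feats)

depth = np.ones((4, 4))
depth[1:3, 1:3] = 0.2   # a small nearby "hand" region
pose = recover_pose(depth)
```

The point of the sketch is the dataflow: each stage consumes the previous stage's output, so any stage can be swapped out (e.g. a better segmenter) without touching the rest of the pipeline.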


Asian Conference on Computer Vision | 2014

MoDeep: A Deep Learning Framework Using Motion Features for Human Pose Estimation

Arjun Jain; Jonathan Tompson; Yann LeCun; Christoph Bregler

In this work, we propose a novel and efficient method for articulated human pose estimation in videos using a convolutional network architecture, which incorporates both color and motion features. We propose a new human body pose dataset, FLIC-motion (available for download at http://cs.nyu.edu/~ajain/accv2014/), that extends the FLIC dataset [1] with additional motion features. We apply our architecture to this dataset and report significantly better performance than current state-of-the-art pose detection systems.
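One minimal way to feed motion features alongside color, in the spirit of the abstract above, is to stack an extra motion channel onto the RGB input. The temporal-difference channel below is a crude stand-in for the optical-flow features MoDeep actually uses; the function name is hypothetical.

```python
import numpy as np

def with_motion_channel(frame_t, frame_prev):
    """Stack a simple motion feature onto an RGB frame.

    The motion channel is the absolute temporal difference averaged
    over color channels, giving a 4-channel (H, W, 4) network input.
    A simplified stand-in for warped optical-flow features.
    """
    motion = np.abs(frame_t - frame_prev).mean(axis=-1, keepdims=True)
    return np.concatenate([frame_t, motion], axis=-1)

prev = np.zeros((2, 2, 3))
cur = np.ones((2, 2, 3))
x = with_motion_channel(cur, prev)
```

Because the motion signal enters as just another input channel, the first convolutional layer can learn to weight color and motion cues jointly rather than requiring a separate network branch.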


Computer Vision and Pattern Recognition | 2015

Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras

Ahmed Elhayek; E. de Aguiar; Arjun Jain; Jonathan Tompson; Leonid Pishchulin; Mykhaylo Andriluka; Christoph Bregler; Bernt Schiele; Christian Theobalt

We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy. The discriminative part-based pose detection method, implemented using Convolutional Networks (ConvNet), estimates unary potentials for each joint of a kinematic skeleton model. These unary potentials are used to probabilistically extract pose constraints for tracking by using weighted sampling from a pose posterior guided by the model. In the final energy, these constraints are combined with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, as ConvNet detection is fast, and our formulation yields a combined pose estimation energy with analytic derivatives. In combination, this enables tracking of full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras.
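The key structural idea above, a combined energy with analytic derivatives minimized by iterative local optimization, can be illustrated on a toy objective. The two terms below are placeholders: a quadratic "detection constraint" term and a quadratic regularizer standing in for the appearance-based similarity term; they are not the paper's actual energy.

```python
import numpy as np

def combined_energy(pose, joint_targets, lam=0.5):
    """Toy combined energy: a detection-constraint term pulling the
    pose toward ConvNet-derived joint targets, plus a weighted
    regularizer standing in for the appearance term."""
    e_det = np.sum((pose - joint_targets) ** 2)
    e_app = lam * np.sum(pose ** 2)
    return e_det + e_app

def optimize(joint_targets, lam=0.5, steps=200, lr=0.1):
    """Iterative local optimization using the analytic gradient
    of the toy energy (gradient descent)."""
    pose = np.zeros_like(joint_targets)
    for _ in range(steps):
        grad = 2 * (pose - joint_targets) + 2 * lam * pose
        pose = pose - lr * grad
    return pose

targets = np.array([1.0, 2.0])
pose = optimize(targets)  # converges to targets / (1 + lam)
```

Having closed-form gradients is what makes per-frame local optimization cheap enough for the real-time setting the paper targets: no numerical differentiation or sampling is needed inside the inner loop.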


Computer Vision and Pattern Recognition | 2017

Towards Accurate Multi-person Pose Estimation in the Wild

George Papandreou; Tyler Zhu; Nori Kanazawa; Alexander Toshev; Jonathan Tompson; Chris Bregler; Kevin P. Murphy

We propose a method for multi-person detection and 2-D pose estimation that achieves state-of-the-art results on the challenging COCO keypoints task. It is a simple, yet powerful, top-down approach consisting of two stages. In the first stage, we predict the location and scale of boxes which are likely to contain people; for this we use the Faster R-CNN detector. In the second stage, we estimate the keypoints of the person potentially contained in each proposed bounding box. For each keypoint type we predict dense heatmaps and offsets using a fully convolutional ResNet. To combine these outputs we introduce a novel aggregation procedure to obtain highly localized keypoint predictions. We also use a novel form of keypoint-based Non-Maximum Suppression (NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based confidence score estimation, instead of box-level scoring. Trained on COCO data alone, our final system achieves an average precision of 0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming the winner of the 2016 COCO keypoints challenge and other recent state-of-the-art systems. Further, by using additional in-house labeled data we obtain an even higher average precision of 0.685 on the test-dev set and 0.673 on the test-standard set, more than 5% absolute improvement over the previous best performing method on the same dataset.
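The heatmap-plus-offset decoding in the second stage can be illustrated with a minimal numpy sketch. The voting scheme below is a simplified stand-in for the paper's aggregation procedure: each pixel votes at its own position plus its predicted offset, weighted by the heatmap probability.

```python
import numpy as np

def aggregate_keypoint(heatmap, offsets):
    """Combine a dense probability heatmap with per-pixel 2-D offset
    vectors into one sub-pixel keypoint estimate.

    heatmap: (H, W) probabilities; offsets: (H, W, 2) as (dy, dx).
    Each pixel casts a vote at (its position + its offset), weighted
    by the heatmap value; the result is the weighted mean vote.
    Simplified stand-in for the paper's aggregation procedure.
    """
    h, w = heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    votes = np.stack([ys + offsets[..., 0], xs + offsets[..., 1]], axis=-1)
    weights = heatmap[..., None]
    return (votes * weights).sum(axis=(0, 1)) / heatmap.sum()

# Toy example: all probability mass at pixel (2, 3), whose offset
# points half a pixel down and to the right.
heat = np.zeros((5, 5))
heat[2, 3] = 1.0
offs = np.zeros((5, 5, 2))
offs[2, 3] = [0.5, 0.5]
kp = aggregate_keypoint(heat, offs)
```

The offsets are what lift localization beyond the heatmap's pixel grid: the coarse heatmap says roughly where the keypoint is, and the offsets refine that to sub-pixel precision.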


arXiv: Computer Vision and Pattern Recognition | 2018

PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model

George Papandreou; Tyler Zhu; Liang-Chieh Chen; Spyros Gidaris; Jonathan Tompson; Kevin P. Murphy

We present a box-free bottom-up approach for the tasks of pose estimation and instance segmentation of people in multi-person images using an efficient single-shot model. The proposed PersonLab model tackles both semantic-level reasoning and object-part associations using part-based modeling. Our model employs a convolutional network which learns to detect individual keypoints and predict their relative displacements, allowing us to group keypoints into person pose instances. Further, we propose a part-induced geometric embedding descriptor which allows us to associate semantic person pixels with their corresponding person instance, delivering instance-level person segmentations. Our system is based on a fully-convolutional architecture and allows for efficient inference, with runtime essentially independent of the number of people present in the scene. Trained on COCO data alone, our system achieves COCO test-dev keypoint average precision of 0.665 using single-scale inference and 0.687 using multi-scale inference, significantly outperforming all previous bottom-up pose estimation systems. We are also the first bottom-up method to report competitive results for the person class in the COCO instance segmentation task, achieving a person category average precision of 0.417.
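The bottom-up grouping step described above, associating detected keypoints into person instances by following predicted relative displacements, can be sketched as a greedy matcher. The keypoint names, data layout, and single displacement edge below are hypothetical simplifications of PersonLab's mid-range displacement fields.

```python
import numpy as np

def group_keypoints(detections, displacements, tol=1.0):
    """Greedily group keypoint detections into person instances.

    detections: dict keypoint_type -> list of (y, x) positions.
    displacements: dict (type_a, type_b) -> predicted offset from a
    type-a keypoint to the same person's type-b keypoint.
    Each seed ("nose") starts an instance; a "wrist" detection is
    attached if it lies within `tol` pixels of the seed's predicted
    wrist location. Simplified stand-in for PersonLab's grouping.
    """
    instances = [{"nose": pos} for pos in detections["nose"]]
    for inst in instances:
        predicted = np.array(inst["nose"]) + displacements[("nose", "wrist")]
        for cand in detections["wrist"]:
            if np.linalg.norm(predicted - np.array(cand)) <= tol:
                inst["wrist"] = cand
    return instances

dets = {"nose": [(1.0, 1.0), (5.0, 5.0)],
        "wrist": [(2.0, 2.0), (6.0, 6.0)]}
disp = {("nose", "wrist"): np.array([1.0, 1.0])}
people = group_keypoints(dets, disp)
```

Because grouping walks learned displacement fields rather than cropping per-person boxes, the cost of this step barely grows with the number of people in the image, which is the runtime property the abstract highlights.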


Neural Information Processing Systems | 2014

Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation

Jonathan Tompson; Arjun Jain; Yann LeCun; Christoph Bregler


Computer Vision and Pattern Recognition | 2015

Efficient object localization using Convolutional Networks

Jonathan Tompson; Ross Goroshin; Arjun Jain; Yann LeCun; Christopher Bregler


International Conference on Learning Representations | 2014

Learning Human Pose Estimation Features with Convolutional Networks

Arjun Jain; Jonathan Tompson; Mykhaylo Andriluka; Graham W. Taylor; Christoph Bregler


International Conference on Computer Vision | 2015

Unsupervised Learning of Spatiotemporally Coherent Metrics

Ross Goroshin; Joan Bruna; Jonathan Tompson; David Eigen; Yann LeCun


International Conference on Machine Learning | 2016

Accelerating Eulerian Fluid Simulation With Convolutional Networks

Jonathan Tompson; Kristofer Schlachter; Pablo Sprechmann; Ken Perlin

Collaboration


Dive into Jonathan Tompson's collaborations.

Top Co-Authors

Sergey Levine

University of California


Connor Schenck

University of Washington


Dieter Fox

University of Washington
