Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dinesh Jayaraman is active.

Publication


Featured research published by Dinesh Jayaraman.


Asian Conference on Computer Vision | 2016

Object-Centric Representation Learning from Unlabeled Videos

Ruohan Gao; Dinesh Jayaraman; Kristen Grauman

Supervised (pre-)training currently yields state-of-the-art performance for representation learning for visual recognition, yet it comes at the cost of (1) intensive manual annotations and (2) an inherent restriction in the scope of data relevant for learning. In this work, we explore unsupervised feature learning from unlabeled video. We introduce a novel object-centric approach to temporal coherence that encourages similar representations to be learned for object-like regions segmented from nearby frames. Our framework relies on a Siamese-triplet network to train a deep convolutional neural network (CNN) representation. Compared to existing temporal coherence methods, our idea has the advantage of lightweight preprocessing of the unlabeled video (no tracking required) while still being able to extract object-level regions from which to learn invariances. Furthermore, as we show in results on several standard datasets, our method typically achieves substantial accuracy gains over competing unsupervised methods for image classification and retrieval tasks.
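
The training signal described here is a triplet-style temporal coherence objective over object-like regions: regions segmented from nearby frames should embed close together, while a region from an unrelated frame should be pushed at least a margin away. As a rough illustration only (not the authors' released code), a minimal PyTorch sketch of such a loss might look as follows; the toy embedding CNN, the dummy region crops, and the margin value are all assumptions.

```python
# Minimal sketch of a Siamese-triplet temporal-coherence loss.
# Assumptions: PyTorch; the CNN, region crops, and margin are
# illustrative stand-ins, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingCNN(nn.Module):
    """Toy stand-in for the deep CNN that embeds object-like regions."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)  # unit-length embeddings

def temporal_coherence_triplet_loss(net, anchor, positive, negative, margin=0.5):
    """anchor/positive: object-like regions from nearby frames;
    negative: a region from an unrelated frame. Pulls the first pair
    together, pushes the negative at least `margin` further away."""
    za, zp, zn = net(anchor), net(positive), net(negative)
    d_pos = (za - zp).pow(2).sum(1)
    d_neg = (za - zn).pow(2).sum(1)
    return F.relu(d_pos - d_neg + margin).mean()

net = EmbeddingCNN()
regions = [torch.randn(8, 3, 64, 64) for _ in range(3)]  # dummy region crops
loss = temporal_coherence_triplet_loss(net, *regions)
loss.backward()
```

Per the abstract, the positive and negative regions come from lightweight unsupervised segmentation of the unlabeled video rather than tracking, which is what keeps the preprocessing cheap relative to earlier temporal coherence methods.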


International Journal of Computer Vision | 2017

Learning Image Representations Tied to Egomotion from Unlabeled Video

Dinesh Jayaraman; Kristen Grauman

Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance, i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.
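
Concretely, equivariance here means that for each (discretized) egomotion g there is a map M_g carrying the features of a frame onto the features of the frame seen after that motion. The sketch below is a hypothetical PyTorch rendering of that idea; the toy feature network, the number of motion classes, and the choice of one linear map per class are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of an egomotion-equivariance loss. Assumptions: PyTorch, a small
# set of discretized egomotion classes, one learned linear map per class.
import torch
import torch.nn as nn

NUM_MOTIONS, DIM = 6, 128  # e.g., coarse bins of turn/advance motions

feat = nn.Sequential(  # toy feature extractor standing in for the CNN
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, DIM),
)
# One linear "equivariance map" M_g per egomotion class g.
motion_maps = nn.ModuleList(nn.Linear(DIM, DIM, bias=False)
                            for _ in range(NUM_MOTIONS))

def equivariance_loss(frame_t, frame_t1, motion_ids):
    """Encourage M_g(feat(frame_t)) ~= feat(frame_t1) when the camera
    underwent egomotion g between the two frames."""
    z_t, z_t1 = feat(frame_t), feat(frame_t1)
    # Apply each sample's own motion map to its features.
    mapped = torch.stack([motion_maps[g](z) for z, g
                          in zip(z_t, motion_ids.tolist())])
    return (mapped - z_t1).pow(2).sum(1).mean()

x0, x1 = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)  # frame pairs
g = torch.randint(0, NUM_MOTIONS, (4,))  # egomotion class per pair
loss = equivariance_loss(x0, x1, g)
loss.backward()
```

The motion labels g come for free from the proprioceptive motor signals recorded alongside the egocentric video, which is why no manual supervision is needed.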


Archive | 2017

Divide, Share, and Conquer: Multi-task Attribute Learning with Selective Sharing

Chao-Yeh Chen; Dinesh Jayaraman; Fei Sha; Kristen Grauman

Existing methods to learn visual attributes are plagued by two common issues: (i) they are prone to confusion by properties that are correlated with the attribute of interest among training samples and (ii) they often learn generic, imprecise “lowest common denominator” attribute models in an attempt to generalize across classes where a single attribute may have very different visual manifestations. Yet, many proposed applications of attributes rely on being able to learn the precise and correct semantic concept corresponding to each attribute. We argue that these issues are both largely due to indiscriminate “oversharing” amongst attribute classifiers along two axes—(i) visual features and (ii) classifier parameters. To address both these issues, we introduce the general idea of selective sharing during multi-task learning of attributes. First, we show how selective sharing helps learn decorrelated models for each attribute in a vocabulary. Second, we show how selective sharing permits a new form of transfer learning between attributes, yielding a specialized attribute model for each individual object category. We validate both these instantiations of our selective sharing idea through extensive experiments on multiple datasets. We show how they help preserve semantics in learned attribute models, benefitting various downstream applications such as image retrieval or zero-shot learning.
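
One way to make the selective-sharing idea concrete is multi-task training of linear attribute classifiers with a cross-attribute competition penalty, so that correlated attributes are discouraged from leaning on the same features. The exclusive-lasso-style regularizer in the sketch below is a stand-in chosen for illustration, not the paper's exact formulation; the feature dimension, attribute count, and hyperparameters are assumptions.

```python
# Hypothetical sketch of "selective sharing" for multi-task attribute
# learning. Assumptions: PyTorch, linear per-attribute classifiers, and an
# exclusive-lasso-style penalty standing in for the paper's regularizer.
import torch

NUM_ATTRS, NUM_FEATS = 5, 256
W = torch.zeros(NUM_ATTRS, NUM_FEATS, requires_grad=True)  # one row per attribute
b = torch.zeros(NUM_ATTRS, requires_grad=True)

def loss(x, y, lam=1e-3):
    """x: (batch, NUM_FEATS) image features; y: (batch, NUM_ATTRS) in {0,1}.
    Per-attribute logistic losses plus a competition term: for each feature f,
    penalize (sum_a |W[a, f]|)^2, so that multiple (possibly correlated)
    attributes are discouraged from reusing the same feature."""
    logits = x @ W.t() + b
    fit = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    compete = (W.abs().sum(0) ** 2).sum()
    return fit + lam * compete

x = torch.randn(32, NUM_FEATS)
y = (torch.rand(32, NUM_ATTRS) > 0.5).float()
opt = torch.optim.SGD([W, b], lr=0.1)
for _ in range(10):  # a few illustrative gradient steps
    opt.zero_grad()
    l = loss(x, y)
    l.backward()
    opt.step()
```

The second instantiation in the abstract, transfer between attributes to get category-specific attribute models, would replace this competition term with selective parameter sharing across related tasks; the sketch covers only the decorrelation side.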


Computer Vision and Pattern Recognition | 2018

Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks

Dinesh Jayaraman; Kristen Grauman


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

End-to-End Policy Learning for Active Visual Categorization

Dinesh Jayaraman; Kristen Grauman


arXiv: Computer Vision and Pattern Recognition | 2017

Unsupervised Learning Through One-Shot Image-Based Shape Reconstruction

Dinesh Jayaraman; Ruohan Gao; Kristen Grauman


Archive | 2017

Learning to Look Around

Dinesh Jayaraman; Kristen Grauman


International Conference on Robotics and Automation | 2018

More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch

Roberto Calandra; Andrew Owens; Dinesh Jayaraman; Justin Lin; Wenzhen Yuan; Jitendra Malik; Edward H. Adelson; Sergey Levine


arXiv: Computer Vision and Pattern Recognition | 2018

Time-Agnostic Prediction: Predicting Predictable Video Frames

Dinesh Jayaraman; Frederik Ebert; Alexei A. Efros; Sergey Levine


arXiv: Computer Vision and Pattern Recognition | 2017

ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids

Dinesh Jayaraman; Ruohan Gao; Kristen Grauman

Collaboration


Dive into Dinesh Jayaraman's collaborations.

Top Co-Authors

Kristen Grauman, University of Texas at Austin
Ruohan Gao, The Chinese University of Hong Kong
Sergey Levine, University of California
Andrew Owens, Massachusetts Institute of Technology
Chao-Yeh Chen, University of Texas at Austin
Edward H. Adelson, Massachusetts Institute of Technology
Fei Sha, University of Texas at Austin
Frederik Ebert, University of California
Jitendra Malik, University of California