Publication


Featured research published by Ramakrishna Vedantam.


Computer Vision and Pattern Recognition | 2015

CIDEr: Consensus-based image description evaluation

Ramakrishna Vedantam; C. Lawrence Zitnick; Devi Parikh

Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets, PASCAL-50S and ABSTRACT-50S, that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as part of the MS COCO evaluation server to enable systematic evaluation and benchmarking.
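
At its core, the metric compares TF-IDF weighted n-gram vectors of a candidate sentence against those of the reference sentences and averages cosine similarities over several n-gram orders. The Python sketch below is a simplified illustration of that idea; the helper names, the raw-count term weighting, and the uniform combination over n = 1..4 are illustrative and do not reproduce the released CIDEr-D implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def tfidf_vector(counts, doc_freq, num_images):
    """Weight each n-gram count by its inverse document frequency.

    doc_freq maps an n-gram to the number of images whose reference
    captions contain it; num_images is the corpus size.
    """
    return {g: c * math.log(num_images / max(1.0, doc_freq.get(g, 0.0)))
            for g, c in counts.items()}

def cosine(u, v):
    dot = sum(u[g] * v.get(g, 0.0) for g in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cider_n(candidate, references, doc_freq, num_images, n):
    """Average cosine similarity between the candidate and each reference."""
    c_vec = tfidf_vector(ngrams(candidate, n), doc_freq, num_images)
    sims = [cosine(c_vec, tfidf_vector(ngrams(r, n), doc_freq, num_images))
            for r in references]
    return sum(sims) / len(sims)

def cider(candidate, references, doc_freq, num_images, max_n=4):
    """Uniformly weighted combination over n-gram orders 1..max_n."""
    return sum(cider_n(candidate, references, doc_freq, num_images, n)
               for n in range(1, max_n + 1)) / max_n
```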


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Adopting Abstract Images for Semantic Scene Understanding

C. Lawrence Zitnick; Ramakrishna Vedantam; Devi Parikh

Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and as yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages over real images. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of real images. Importantly, abstract images also make it possible to generate sets of semantically similar scenes. Finding analogous sets of real images that are semantically similar would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract images with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity. Finally, we study the relation between the saliency and memorability of objects and their semantic importance.


International Conference on Computer Vision | 2015

Learning Common Sense through Visual Abstraction

Ramakrishna Vedantam; Xiao Lin; Tanmay Batra; C. Lawrence Zitnick; Devi Parikh

Common sense is essential for building intelligent machines. While some commonsense knowledge is explicitly stated in human-generated text and can be learnt by mining the web, much of it is unwritten. It is often unnecessary and even unnatural to write about commonsense facts. While unwritten, this commonsense knowledge is not unseen! The visual world around us is full of structure modeled by commonsense knowledge. Can machines learn common sense simply by observing our visual world? Unfortunately, this requires automatic and accurate detection of objects, their attributes, poses, and interactions between objects, which remain challenging problems. Our key insight is that while visual common sense is depicted in visual content, it is the semantic features that are relevant and not low-level pixel information. In other words, photorealism is not necessary to learn common sense. We explore the use of human-generated abstract scenes made from clipart for learning common sense. In particular, we reason about the plausibility of an interaction or relation between a pair of nouns by measuring the similarity of the relation and nouns with other relations and nouns we have seen in abstract scenes. We show that the commonsense knowledge we learn is complementary to what can be learnt from sources of text.
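
Read literally, the plausibility test scores a query (noun, relation, noun) tuple by how similar it is to tuples observed in abstract scenes. The sketch below is a minimal, text-only version of that idea; the pre-trained word embeddings, the product-of-cosines tuple similarity, and the max-over-observed-tuples aggregation are assumptions for illustration, and the paper's model additionally uses visual features extracted from the abstract scenes.

```python
import numpy as np

def tuple_similarity(query, seen, embed):
    """Similarity of two (noun, relation, noun) tuples: the product of cosine
    similarities between corresponding elements (noun-noun, relation-relation)."""
    sims = []
    for q_word, s_word in zip(query, seen):
        q, s = embed[q_word], embed[s_word]
        sims.append(float(q @ s / (np.linalg.norm(q) * np.linalg.norm(s))))
    return float(np.prod(sims))

def plausibility(query, seen_tuples, embed):
    """Score a query tuple by its best match among tuples seen in abstract scenes."""
    return max(tuple_similarity(query, seen, embed) for seen in seen_tuples)

# Usage (embed is a dict mapping words to vectors, seen_tuples a list of tuples):
# plausibility(("boy", "eats", "cake"), seen_tuples, embed)
```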


Computer Vision and Pattern Recognition | 2017

Counting Everyday Objects in Everyday Scenes

Prithvijit Chattopadhyay; Ramakrishna Vedantam; Ramprasaath R. Selvaraju; Dhruv Batra; Devi Parikh

We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide-and-conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof-of-concept application of our counting methods to the task of Visual Question Answering, by studying the "how many?" questions in the VQA and COCO-QA datasets.
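
One way to read the divide-and-conquer strategy: split the image into cells small enough that each holds a subitizable number of instances, predict a (possibly fractional) per-cell count for every class, and sum the cell counts. The sketch below assumes a hypothetical per-cell predictor, predict_cell_counts, and a fixed non-overlapping grid; the paper's models additionally incorporate context across cells.

```python
import numpy as np

def count_by_subitizing(image, predict_cell_counts, grid=3, num_classes=80):
    """Divide-and-conquer counting: predict small per-cell counts and sum them.

    predict_cell_counts(cell) is a hypothetical model that returns a
    length-num_classes vector of (possibly fractional) instance counts
    for a single image cell.
    """
    h, w = image.shape[:2]
    totals = np.zeros(num_classes)
    for i in range(grid):
        for j in range(grid):
            cell = image[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            totals += predict_cell_counts(cell)
    # Round the accumulated real-valued counts to integer per-class predictions.
    return np.rint(totals).astype(int)
```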


Computer Vision and Pattern Recognition | 2017

Context-Aware Captions from Context-Agnostic Supervision

Ramakrishna Vedantam; Samy Bengio; Kevin P. Murphy; Devi Parikh; Gal Chechik

We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of Siamese cat and tiger cat, we generate language that describes the Siamese cat in a way that distinguishes it from tiger cat. Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener that distinguishes closely-related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely-related category of the CUB-200-2011 dataset. We then study discriminative image captioning to generate language that uniquely refers to one of two semantically-similar images in the COCO dataset. Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination.
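
The joint inference can be approximated by trading off how likely a caption is for the target image against how likely it is for the distractor. The sketch below shows a simple re-ranking version of that trade-off over candidates from a context-agnostic captioner; the mixing weight lam, the function names, and the use of re-ranking rather than per-step joint decoding are illustrative assumptions, not the paper's exact procedure.

```python
def discriminative_score(log_p_target, log_p_distractor, lam=0.7):
    """Combine a context-agnostic speaker score with a listener-style contrast.

    log_p_target / log_p_distractor: log-probability of a candidate caption
    under the captioning model conditioned on the target and on the
    distractor image. lam trades off fluency against discriminativeness.
    """
    return lam * log_p_target + (1.0 - lam) * (log_p_target - log_p_distractor)

def rerank(candidates, lam=0.7):
    """Pick the caption that is fluent for the target yet unlikely for the distractor.

    candidates: list of (caption, log_p_target, log_p_distractor) triples,
    e.g. produced by beam search with a standard context-agnostic captioner.
    """
    return max(candidates,
               key=lambda c: discriminative_score(c[1], c[2], lam))[0]
```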


arXiv: Computer Vision and Pattern Recognition | 2015

Microsoft COCO Captions: Data Collection and Evaluation Server

Xinlei Chen; Hao Fang; Tsung-Yi Lin; Ramakrishna Vedantam; Saurabh Gupta; Piotr Dollár; C. Lawrence Zitnick


International Conference on Computer Vision | 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

Ramprasaath R. Selvaraju; Michael Cogswell; Abhishek Das; Ramakrishna Vedantam; Devi Parikh; Dhruv Batra
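
The gradient-based localization in the title weights a convolutional layer's feature maps by the spatially averaged gradients of the target class score and passes the weighted sum through a ReLU. A minimal PyTorch-style sketch follows, assuming a standard image classifier and a chosen feature_layer; the layer choice and hook mechanics here are illustrative rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, feature_layer):
    """Class-discriminative heatmap from gradients flowing into a conv layer."""
    activations, gradients = [], []
    fwd = feature_layer.register_forward_hook(
        lambda module, inputs, output: activations.append(output))
    bwd = feature_layer.register_full_backward_hook(
        lambda module, grad_in, grad_out: gradients.append(grad_out[0]))

    scores = model(image.unsqueeze(0))      # shape: 1 x num_classes
    scores[0, target_class].backward()      # gradients of the class score
    fwd.remove()
    bwd.remove()

    acts = activations[0].detach()                   # 1 x C x H x W
    grads = gradients[0].detach()                    # 1 x C x H x W
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = F.relu((weights * acts).sum(dim=1))        # weighted sum over channels
    cam = cam / (cam.max() + 1e-8)                   # normalize to [0, 1]
    return cam.squeeze(0)                            # H x W heatmap
```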


Salon des Refusés | 2016

Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization

Ramprasaath R. Selvaraju; Abhishek Das; Ramakrishna Vedantam; Michael Cogswell; Devi Parikh; Dhruv Batra


Computer Vision and Pattern Recognition | 2016

Visual Word2Vec (Vis-W2V): Learning Visually Grounded Word Embeddings Using Abstract Scenes

Satwik Kottur; Ramakrishna Vedantam; José M. F. Moura; Devi Parikh


International Conference on Learning Representations | 2018

Generative Models of Visually Grounded Imagination

Ramakrishna Vedantam; Ian Fischer; Jonathan Huang; Kevin P. Murphy

Collaboration


Dive into Ramakrishna Vedantam's collaborations.

Top Co-Authors

Devi Parikh (Georgia Institute of Technology)
Dhruv Batra (Georgia Institute of Technology)
Abhishek Das (Georgia Institute of Technology)