Publications


Featured research published by Ameesh Makadia.


European Conference on Computer Vision | 2008

A New Baseline for Image Annotation

Ameesh Makadia; Vladimir Pavlovic; Sanjiv Kumar

Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.


International Journal of Computer Vision | 2010

Baselines for Image Annotation

Ameesh Makadia; Vladimir Pavlovic; Sanjiv Kumar

Automatically assigning keywords to images is of great interest as it allows one to retrieve, index, organize and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new and simple baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes global low-level image features and a simple combination of basic distance measures to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline method outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.
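The retrieval-plus-greedy-transfer idea described in the abstract can be caricatured in a few lines. The following is a toy illustration, not the authors' implementation: the features, the single L1 distance, and the two-stage transfer rule are simplified stand-ins for the paper's combined basic distances and label transfer mechanism.

```python
from collections import Counter

def l1(a, b):
    # basic per-dimension distance between two feature vectors
    return sum(abs(x - y) for x, y in zip(a, b))

def annotate(query, train, k=3, n_labels=3):
    """Toy nearest-neighbor annotation with greedy label transfer."""
    # rank training images by a basic distance to the query
    ranked = sorted(train, key=lambda t: l1(query, t["feat"]))[:k]
    labels = []
    # stage 1: transfer labels from the single nearest neighbor first
    for lab in ranked[0]["labels"]:
        if len(labels) < n_labels and lab not in labels:
            labels.append(lab)
    # stage 2: fill remaining slots by label frequency among the other neighbors
    freq = Counter(lab for t in ranked[1:] for lab in t["labels"])
    for lab, _ in freq.most_common():
        if len(labels) < n_labels and lab not in labels:
            labels.append(lab)
    return labels

train = [
    {"feat": [0.1, 0.2], "labels": ["sky", "sea"]},
    {"feat": [0.9, 0.8], "labels": ["car", "road"]},
    {"feat": [0.2, 0.1], "labels": ["sky", "sand"]},
]
print(annotate([0.15, 0.15], train))  # → ['sky', 'sea', 'sand']
```

The point of the baseline is precisely this simplicity: no model is trained, and all of the work is in the distance computation and the greedy transfer.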


Computer Vision and Pattern Recognition | 2009

Shape-based object recognition in videos using 3D synthetic object models

Alexander Toshev; Ameesh Makadia; Kostas Daniilidis

In this paper we address the problem of recognizing moving objects in videos by utilizing synthetic 3D models. We use only the silhouette space of the synthetic models, thus making our approach independent of appearance. To deal with the decrease in discriminability in the absence of appearance, we align sequences of object masks from video frames to paths in silhouette space. We extract object silhouettes from video by an integration of feature tracking, motion grouping of tracks, and co-segmentation of successive frames. Subsequently, the object masks from the video are matched to 3D model silhouettes in a robust matching and alignment phase. The result is a matching score for every 3D model to the video, along with a pose alignment of the model to the video. Promising experimental results indicate that a purely shape-based matching scheme driven by synthetic 3D models can be successfully applied for object recognition in videos.


International Journal of Computer Vision | 2010

Spherical Correlation of Visual Representations for 3D Model Retrieval

Ameesh Makadia; Kostas Daniilidis

In recent years we have seen a tremendous growth in the amount of freely available 3D content, in part due to breakthroughs in 3D model design and acquisition. For example, advances in range sensor technology and design software have dramatically reduced the manual labor required to construct 3D models. As collections of 3D content continue to grow rapidly, the ability to perform fast and accurate retrieval from a database of models has become a necessity. At the core of this retrieval task is the fundamental challenge of defining and evaluating similarity between 3D shapes. Some effective methods dealing with this challenge consider similarity measures based on the visual appearance of models. While collections of rendered images are discriminative for retrieval tasks, such representations come with a few inherent limitations such as restrictions in the image viewpoint sampling and high computational costs. In this paper we present a novel algorithm for model similarity that addresses these issues. Our proposed method exploits techniques from spherical signal processing to efficiently evaluate a visual similarity measure between models. Extensive evaluations on multiple datasets are provided.


European Conference on Computer Vision | 2010

Feature tracking for wide-baseline image retrieval

Ameesh Makadia

We address the problem of large scale image retrieval in a wide-baseline setting, where for any query image all the matching database images will come from very different viewpoints. In such settings traditional bag-of-visual-words approaches are not equipped to handle the significant feature descriptor transformations that occur under large camera motions. In this paper we present a novel approach that includes an offline step of feature matching which allows us to observe how local descriptors transform under large camera motions. These observations are encoded in a graph in the quantized feature space. This graph can be used directly within a soft-assignment feature quantization scheme for image retrieval.
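The graph-driven soft assignment described in the abstract can be sketched at a toy scale. This is a minimal illustration under invented assumptions, not the paper's pipeline: `centers` stands in for a learned visual vocabulary, and `graph` for the offline-learned record of which words descriptors drift into under large camera motions.

```python
def sq_dist(a, b):
    # squared Euclidean distance between two descriptors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def soft_assign(desc, centers, graph, link_weight=0.5):
    """Assign a descriptor to its nearest visual word, then spread partial
    weight to words linked to it in the transformation graph."""
    word = min(range(len(centers)), key=lambda i: sq_dist(desc, centers[i]))
    weights = {word: 1.0}
    for nbr in graph.get(word, ()):
        weights[nbr] = max(weights.get(nbr, 0.0), link_weight)
    return weights

centers = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
graph = {0: [2]}  # offline matching observed word 0 transforming into word 2
print(soft_assign([0.1, 0.1], centers, graph))  # → {0: 1.0, 2: 0.5}
```

At retrieval time, the extra activated words let a query match database images whose descriptors quantized differently under a large viewpoint change.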


International Conference on 3D Vision | 2014

Learning 3D Part Detection from Sparsely Labeled Data

Ameesh Makadia; Mehmet Ersin Yumer

For large collections of 3D models, the ability to detect and localize parts of interest is necessary to provide search and visualization enhancements beyond simple high-level categorization. While current 3D labeling approaches rely on learning from fully labeled meshes, such training data is difficult to acquire at scale. In this work we explore learning to detect object parts from sparsely labeled data, i.e. we operate under the assumption that for any object part we have only one labeled vertex rather than a full region segmentation. Similarly, we also learn to output a single representative vertex for each detected part. Such localized predictions are useful for applications where visualization is important. Our approach relies heavily on exploiting the spatial configuration of parts on a model to drive the detection. Inspired by structured multi-class object detection models for images, we develop an algorithm that combines independently trained part classifiers with a structured SVM model, and show promising results on real-world textured 3D data.
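The combination of independently trained part classifiers with a spatial-configuration term can be caricatured as a tiny structured search. A hedged sketch, not the paper's method: exhaustive enumeration stands in for structured SVM inference, and the classifier scores, coordinates, and expected distances below are invented toy values.

```python
import itertools

def best_assignment(unary, expected_dist, coords):
    """Pick one representative vertex per part by maximizing independent
    classifier scores plus a pairwise spatial-layout consistency term."""
    parts = list(unary)
    best, best_score = None, float("-inf")
    # enumerate every assignment of parts to candidate vertices
    for choice in itertools.product(coords, repeat=len(parts)):
        # unary term: per-part classifier score at the chosen vertex
        score = sum(unary[p][v] for p, v in zip(parts, choice))
        # pairwise term: penalize deviation from the expected part distances
        for (p1, v1), (p2, v2) in itertools.combinations(zip(parts, choice), 2):
            d = sum((a - b) ** 2 for a, b in zip(coords[v1], coords[v2])) ** 0.5
            score -= abs(d - expected_dist[(p1, p2)])
        if score > best_score:
            best, best_score = dict(zip(parts, choice)), score
    return best

coords = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (5.0, 0.0)}
unary = {"seat": {"a": 1.0, "b": 0.2, "c": 0.2},
         "back": {"a": 0.1, "b": 0.9, "c": 0.3}}
expected_dist = {("seat", "back"): 1.0}
print(best_assignment(unary, expected_dist, coords))
```

The pairwise term is what lets a weak per-part classifier be rescued by the spatial configuration of the other parts, which is the intuition the abstract describes.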


Archive | 2009

Video content analysis for automatic demographics recognition of users and videos

Corinna Cortes; Sanjiv Kumar; Ameesh Makadia; Gideon S. Mann; Jay Yagnik; Ming Zhao


International Conference on Machine Learning | 2013

Label Partitioning for Sublinear Ranking

Jason Weston; Ameesh Makadia; Hector Yee


Archive | 2010

Search with joint image-audio queries

Ameesh Makadia; Jason Weston


Archive | 2014

Content-based image ranking

Sanjiv Kumar; Henry A. Rowley; Ameesh Makadia

Collaboration


Dive into Ameesh Makadia's collaborations.

Top Co-Authors


Kostas Daniilidis

University of Pennsylvania

Alexander M. Bronstein

Technion – Israel Institute of Technology
