Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nima Razavi is active.

Publication


Featured research published by Nima Razavi.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Hough Forests for Object Detection, Tracking, and Action Recognition

Juergen Gall; Angela Yao; Nima Razavi; Luc Van Gool; Victor S. Lempitsky

The paper introduces Hough forests, which are random forests adapted to perform a generalized Hough transform in an efficient way. Compared to previous Hough-based systems such as implicit shape models, Hough forests improve the performance of the generalized Hough transform for object detection on a categorical level. At the same time, their flexibility permits extensions of the Hough transform to new domains such as object tracking and action recognition. Hough forests can be regarded as task-adapted codebooks of local appearance that allow fast supervised training and fast matching at test time. They achieve high detection accuracy since the entries of such codebooks are optimized to cast Hough votes with small variance and since their efficiency permits dense sampling of local image patches or video cuboids during detection. The efficacy of Hough forests for a set of computer vision tasks is validated through experiments on a large set of publicly available benchmark data sets and comparisons with the state-of-the-art.
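
As a rough illustration of the voting mechanism described above, the sketch below accumulates probabilistic votes from local patches into a 2-D Hough image whose maxima serve as object hypotheses. The forest interface (a hypothetical forest.leaf lookup returning a foreground probability and stored offsets) is a simplifying assumption, not the authors' implementation.

import numpy as np

# Minimal Hough-voting sketch. Assumes each leaf of a trained forest
# stores p(object) and displacement vectors from patch centers to the
# object center; `forest.leaf` is a hypothetical lookup.
def hough_votes(image_shape, patches, forest):
    acc = np.zeros(image_shape)
    for y, x, descriptor in patches:              # densely sampled patches
        p_obj, offsets = forest.leaf(descriptor)
        if p_obj == 0.0 or len(offsets) == 0:
            continue
        w = p_obj / len(offsets)                  # spread the vote mass
        for dy, dx in offsets:
            cy, cx = y + dy, x + dx
            if 0 <= cy < image_shape[0] and 0 <= cx < image_shape[1]:
                acc[cy, cx] += w                  # accumulate probabilistic votes
    return acc                                    # peaks = detection hypotheses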


European Conference on Computer Vision | 2012

Latent Hough transform for object detection

Nima Razavi; Juergen Gall; Pushmeet Kohli; Luc Van Gool

Hough transform based methods for object detection work by allowing image features to vote for the location of the object. While this representation allows for parts observed in different training instances to support a single object hypothesis, it also produces false positives by accumulating votes that are consistent in location but inconsistent in other properties like pose, color, shape or type. In this work, we propose to augment the Hough transform with latent variables in order to enforce consistency among votes. To this end, only votes that agree on the assignment of the latent variable are allowed to support a single hypothesis. For training a Latent Hough Transform (LHT) model, we propose a learning scheme that exploits the linearity of the Hough transform based methods. Our experiments on two datasets including the challenging PASCAL VOC 2007 benchmark show that our method outperforms traditional Hough transform based methods leading to state-of-the-art performance on some categories.
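
The consistency constraint can be pictured as one accumulator per latent value, so that votes disagreeing on the latent assignment can never reinforce the same hypothesis. The sketch below is illustrative only: the vote representation and the latent labels (e.g. pose clusters) are assumptions, and the paper's learning scheme is not shown.

import numpy as np

# One Hough accumulator per latent value: a vote (cy, cx, weight, z)
# only supports hypotheses that share its latent label z.
def latent_hough(image_shape, votes, n_latent):
    acc = np.zeros((n_latent,) + tuple(image_shape))
    for cy, cx, w, z in votes:
        acc[z, cy, cx] += w       # votes with different z never mix
    return acc.max(axis=0)        # score each location by its best latent value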


British Machine Vision Conference | 2010

On-line adaption of class-specific codebooks for instance tracking

Juergen Gall; Nima Razavi; Luc Van Gool

In this work, we demonstrate that an off-line trained class-specific detector can be transformed into an instance-specific detector on the fly. To this end, we make use of a codebook-based detector [1] that is trained on an object class. Codebooks model the spatial distribution and appearance of object parts. When matching an image against a codebook, a certain set of codebook entries is activated to cast probabilistic votes for the object. For a given object hypothesis, one can collect the entries that voted for the object. In our case, these entries can be regarded as a signature for the target of interest. Since a change of pose and appearance can lead to an activation of very different codebook entries, we learn the statistics for the target and the background over time, i.e. we learn on-line the probability of each part in the codebook belonging to the target. By taking the target-specific statistics into account for voting, the target can be distinguished from other instances in the background, yielding a higher detection confidence for the target (see Fig. 1). A class-specific codebook as in [1, 2, 3, 4, 5] is trained off-line to identify any instance of the class in any image. It models the probability of the patches belonging to the object class, p(c=1 | L), and the local spatial distribution of the patches with respect to the object center, p(x | c=1, L). For detection, patches are sampled from an image and matched against the codebook, i.e. each patch P(y) sampled from image location y ends at a leaf L(y). The probability for an instance of the class centered at location x is then obtained by accumulating the votes of all sampled patches, p(x) ∝ ∑_y p(x | c=1, L(y)) · p(c=1 | L(y)).
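
A minimal sketch of the on-line reweighting idea follows: keep a running estimate, per codebook entry, of the probability that the entry belongs to the target, and use it to scale the class-specific votes. The update rule and all names are illustrative assumptions, not the paper's code.

import numpy as np

# Per-entry target statistics, learned on-line while tracking.
class InstanceWeights:
    def __init__(self, n_entries, prior=0.5, lr=0.05):
        self.p_target = np.full(n_entries, prior)   # p(entry belongs to target)
        self.lr = lr

    def update(self, target_entries, background_entries):
        # entries that voted for the confirmed target move towards 1,
        # entries that fired on the background move towards 0
        self.p_target[target_entries] += self.lr * (1.0 - self.p_target[target_entries])
        self.p_target[background_entries] *= (1.0 - self.lr)

    def weight(self, entry_ids):
        return self.p_target[entry_ids]   # multiplied into the votes at detection time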


International Conference on Computer Vision | 2011

An introduction to random forests for multi-class object detection

Juergen Gall; Nima Razavi; Luc Van Gool

Object detection in large-scale real-world scenes requires efficient multi-class detection approaches. Random forests have been shown to handle large training datasets and many classes for object detection efficiently. The most prominent example is the commercial application of random forests for gaming [37]. In this paper, we describe the general framework of random forests for multi-class object detection in images and give an overview of recent developments and implementation details that are relevant for practitioners.
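
For practitioners, the multi-class classification core that such detectors build on is available off the shelf; the snippet below trains a generic multi-class random forest with scikit-learn on a toy 10-class problem. The Hough-voting layer and patch features discussed in the paper are not part of this sketch.

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                     # 10-class toy data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_tr, y_tr)
print(forest.score(X_te, y_te))                         # held-out accuracy
print(forest.predict_proba(X_te[:1]))                   # per-class probabilities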


British Machine Vision Conference | 2012

Sparsity Potentials for Detecting Objects with the Hough Transform

Nima Razavi; Nima Sedaghat Alvar; Juergen Gall; Luc Van Gool

Hough transform based object detectors divide an object into a number of patches and combine them using a shape model. For efficient combination of patches into the shape model, the individual patches are assumed to be independent of one another. Although this independence assumption is key for fast inference, it requires the individual patches to have a high discriminative power in predicting the class and location of objects. In this paper, we argue that the sparsity of the appearance of a patch in its neighborhood can be a very powerful measure to increase the discriminative power of a local patch, and we incorporate it as a sparsity potential for object detection. Further, we show that this potential should depend on the appearance of the patch, so as to adapt to the statistics of the neighborhood specific to the type of appearance (e.g. texture or structure) it represents. We have evaluated our method on challenging datasets including the PASCAL VOC 2007 dataset and show that using the proposed sparsity potential results in a substantial improvement in detection accuracy.
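
One way to picture such a potential: a patch whose appearance has few close matches in its spatial neighborhood is rarer, hence more discriminative, and its votes can be up-weighted. The distance threshold and the weighting function below are illustrative assumptions, not the paper's formulation.

import numpy as np

# Toy sparsity weight: count near-duplicate descriptors in the patch's
# neighborhood and down-weight patches with many similar neighbors.
def sparsity_weight(descriptor, neighbor_descriptors, tau=0.5):
    d = np.linalg.norm(neighbor_descriptors - descriptor, axis=1)
    n_similar = int(np.count_nonzero(d < tau))   # near-duplicates nearby
    return 1.0 / (1.0 + n_similar)               # rare appearance -> weight near 1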


International Journal of Imaging Systems and Technology | 2006

An application of linear predictive coding and computational geometry to iris recognition

Masoud Alipour; Ali Farhadi; Nima Razavi

The aim of this work is to present a computer vision method for person identification via iris recognition. The method makes essential use of computational geometry and linear predictive coding (LPC).
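
The abstract does not detail how LPC is applied to iris signatures, but the coefficients themselves are standard; the sketch below computes them with the autocorrelation method and the Levinson-Durbin recursion, purely as background for the tool the paper names.

import numpy as np

# Linear predictive coding via autocorrelation + Levinson-Durbin.
def lpc(signal, order):
    n = len(signal)
    r = np.array([signal[:n - k] @ signal[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]           # update prediction filter
        err *= 1.0 - k * k                           # remaining prediction error
    return a[1:]                                     # `order` LPC coefficients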


International Journal of Imaging Systems and Technology | 2006

How to tell the difference between a cat and a dog

Nima Razavi; Golnoosh Samei; Masoud Alipour; Ali Farhadi

The aim of this work is to present a computer vision method for model-based object recognition by integrating texture, color, and partial silhouette matching cues. As a test problem, the challenging task of distinguishing between cats and dogs is proposed. No conditions are imposed on the images. In spite of the high intra-class variability of the two classes (especially dogs), an accuracy of over 92% is achieved. A novel method for integrating color information with textures is also presented.
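
The integration step can be read as a late fusion of per-cue scores; a toy version is sketched below. The cue scorers and weights are placeholders (hypothetical), since the abstract does not specify the combination rule.

# Toy late fusion of the three cues named in the abstract.
def classify(image, cues, weights=(0.4, 0.3, 0.3)):
    # `cues` maps "texture" / "color" / "silhouette" to hypothetical
    # scorers, each returning a score in [0, 1], where 1 means "cat".
    s = sum(w * cues[name](image)
            for w, name in zip(weights, ("texture", "color", "silhouette")))
    return "cat" if s >= 0.5 else "dog"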


Computer Vision and Pattern Recognition | 2011

Scalable multi-class object detection

Nima Razavi; Juergen Gall; Luc Van Gool


Archive | 2013

Object identification device

Nima Razavi; Juergen Gall; Luc Van Gool; Funayama Ryuji


Lecture Notes in Computer Science | 2010

Backprojection revisited: scalable multi-view object detection and similarity metrics for detections

Nima Razavi; Juergen Gall; Luc Van Gool

Collaboration


Dive into Nima Razavi's collaborations.

Top Co-Authors

Ali Farhadi

University of Washington

Victor S. Lempitsky

Skolkovo Institute of Science and Technology
