Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Vishnu Naresh Boddeti is active.

Publication


Featured research published by Vishnu Naresh Boddeti.


Computer Vision and Pattern Recognition | 2013

Correlation Filters for Object Alignment

Vishnu Naresh Boddeti; Takeo Kanade; B. V. K. Vijaya Kumar

Alignment of 3D objects from 2D images is one of the most important and well-studied problems in computer vision. A typical object alignment system consists of a landmark appearance model, which is used to obtain an initial shape, and a shape model, which refines this initial shape by correcting the initialization errors. Since errors in landmark initialization from the appearance model propagate through the shape model, it is critical to have a robust landmark appearance model. While there has been much progress in designing sophisticated and robust shape models, there has been relatively less progress in designing robust landmark detection models. In this paper, we present an efficient and robust landmark detection model designed specifically to minimize localization errors, thereby leading to state-of-the-art object alignment performance. We demonstrate the efficacy and speed of the proposed approach on the challenging task of multi-view car alignment.
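The core operation behind correlation-filter landmark detection is a frequency-domain cross-correlation whose peak marks the landmark location. As a rough illustration (not the paper's actual filter design, which optimizes the template to minimize localization error), a minimal NumPy sketch:

```python
import numpy as np

def correlation_response(image, template):
    """Cross-correlate a template with an image via the FFT.

    The peak of the response map is the most likely landmark
    location -- the core operation behind correlation filters.
    """
    H = np.fft.fft2(template, s=image.shape)
    G = np.fft.fft2(image) * np.conj(H)
    return np.real(np.fft.ifft2(G))

# Toy example: a bright 3x3 patch hidden in a noisy 32x32 image.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, (32, 32))
patch = np.ones((3, 3))
image[10:13, 20:23] += patch  # place the "landmark" at (10, 20)

response = correlation_response(image, patch)
peak = np.unravel_index(np.argmax(response), response.shape)
```

The response peak recovers the planted landmark location; a learned filter would additionally suppress spurious peaks from background clutter.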


IEEE Transactions on Image Processing | 2013

Maximum Margin Correlation Filter: A New Approach for Localization and Classification

A. Rodriguez; Vishnu Naresh Boddeti; B. V. K. V. Kumar; Abhijit Mahalanobis

Support vector machine (SVM) classifiers are popular in many computer vision tasks. In most of them, the SVM classifier assumes that the object to be classified is centered in the query image, which might not always be valid, e.g., when locating and classifying a particular class of vehicles in a large scene. In this paper, we introduce a new classifier called the Maximum Margin Correlation Filter (MMCF), which, while exhibiting the good generalization capabilities of SVM classifiers, is also capable of localizing objects of interest, thereby avoiding the need for image centering as is usually required by SVM classifiers. In other words, MMCF can simultaneously localize and classify objects of interest. We test the efficacy of the proposed classifier on three different tasks: vehicle recognition, eye localization, and face classification. We demonstrate that MMCF outperforms SVM classifiers as well as well-known correlation filters.
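MMCF rests on the observation that a linear classifier evaluated at every window position is exactly a cross-correlation, which is what lets a single template both localize and classify. A small sketch of that equivalence, with a random template standing in for a trained SVM weight vector (an illustrative assumption, not the paper's training procedure):

```python
import numpy as np

def sliding_scores(image, w):
    """Score every valid window of `image` with a linear classifier `w`."""
    kh, kw = w.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
image = rng.normal(size=(12, 12))
w = rng.normal(size=(4, 4))  # stand-in for an SVM weight template

naive = sliding_scores(image, w)

# The same scores, computed as a frequency-domain cross-correlation.
F = np.fft.fft2(image)
Wf = np.fft.fft2(w, s=image.shape)
full = np.real(np.fft.ifft2(F * np.conj(Wf)))
fast = full[: naive.shape[0], : naive.shape[1]]
```

Because the two score maps are identical, any linear classifier can be deployed as a correlation filter; MMCF goes further by training the template so the peak is also sharp.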


Computer Vision and Pattern Recognition | 2015

Learning scene-specific pedestrian detectors without real data

Hironori Hattori; Vishnu Naresh Boddeti; Kris M. Kitani; Takeo Kanade

We consider the problem of designing a scene-specific pedestrian detector in a scenario where we have zero instances of real pedestrian data (i.e., no labeled real data or unsupervised real data). This scenario may arise when a new surveillance system is installed in a novel location and a scene-specific pedestrian detector must be trained prior to any observations of pedestrians. The key idea of our approach is to infer the potential appearance of pedestrians using geometric scene data and a customizable database of virtual simulations of pedestrian motion. We propose an efficient discriminative learning method that generates a spatially-varying pedestrian appearance model that takes into account the perspective geometry of the scene. As a result, our method is able to learn a unique pedestrian classifier customized for every possible location in the scene. Our experimental results show that our proposed approach outperforms classical pedestrian detection models and hybrid synthetic-real models. Our results also yield a surprising finding: our method, using purely synthetic data, is able to outperform models trained on real scene-specific data when that data is limited.


International Conference on Biometrics | 2012

Matching highly non-ideal ocular images: An information fusion approach

Arun Ross; Raghavender R. Jillela; Jonathon M. Smereka; Vishnu Naresh Boddeti; B. V. K. Vijaya Kumar; Ryan T. Barnard; Xiaofei Hu; Paul Pauca; Robert J. Plemmons

We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations. To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition.
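The fusion step is the simple sum rule over the three matchers' scores. A minimal sketch with made-up scores; the min-max normalization shown here is an assumption for illustration, since the abstract does not specify how scores are scaled before summing:

```python
import numpy as np

def minmax_normalize(scores):
    """Rescale one matcher's scores to [0, 1] so they are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def sum_rule_fusion(score_lists):
    """Fuse match scores from several matchers with the simple sum rule."""
    return sum(minmax_normalize(s) for s in score_lists)

# Hypothetical scores from three matchers (GOH, SIFT, PDM) on 4 candidates.
goh  = [0.2, 0.9, 0.4, 0.1]
sift = [10., 80., 30., 5.]
pdm  = [0.5, 0.7, 0.65, 0.3]

fused = sum_rule_fusion([goh, sift, pdm])
best = int(np.argmax(fused))  # candidate 1 wins under all three matchers
```

Normalizing first matters: without it, the SIFT scores (on a 0-80 scale here) would dominate the sum.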


Systems, Man, and Cybernetics | 2010

Extended-Depth-of-Field Iris Recognition Using Unrestored Wavefront-Coded Imagery

Vishnu Naresh Boddeti; B. V. K. Vijaya Kumar

Iris recognition can offer high-accuracy person recognition, particularly when the acquired iris image is well focused. However, in some practical scenarios, user cooperation may not be sufficient to acquire iris images in focus; therefore, iris recognition using camera systems with a large depth of field is very desirable. One approach to achieving an extended depth of field is to use a wavefront-coding system, as proposed by Dowski and Cathey, which uses a phase modulation mask. The conventional approach when using a camera system with such a phase mask is to restore the raw images acquired from the camera before feeding them into the iris recognition module. In this paper, we investigate the feasibility of skipping the image restoration step with minimal degradation in recognition performance while still increasing the depth of field of the whole system compared to an imaging system without a phase mask. Using simulated wavefront-coded imagery, we present the results of two different iris recognition algorithms, namely, Daugman's iris code and correlation-filter-based iris recognition, using more than 1000 iris images taken from the Iris Challenge Evaluation database. We carefully study the effect of an off-the-shelf phase mask on iris segmentation and iris matching, and finally, to better enable the use of unrestored wavefront-coded images, we design a custom phase mask by formulating an optimization problem. Our results suggest that, in exchange for some degradation in recognition performance at best focus, we can increase the depth of field by a factor of about four (over a conventional camera system without a phase mask) by carefully designing the phase mask.
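A rough NumPy sketch of the wavefront-coding idea: adding a cubic phase term to the pupil makes the point-spread function (PSF) far less sensitive to defocus than a clear aperture, which is why the unrestored images remain usable. The grid size, the three waves of defocus, and the cubic strength of 20 waves are illustrative choices, not values from the paper:

```python
import numpy as np

def psf(defocus_waves, cubic_alpha, n=64, pad=4):
    """Incoherent PSF for a circular pupil with defocus and an
    optional cubic phase mask (the wavefront-coding element)."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0
    phase = 2 * np.pi * (defocus_waves * (X**2 + Y**2)
                         + cubic_alpha * (X**3 + Y**3))
    pupil = aperture * np.exp(1j * phase)
    p = np.abs(np.fft.fft2(pupil, s=(pad * n, pad * n)))**2
    return p / p.sum()

def similarity(a, b):
    """Normalized inner product of two PSFs (1.0 means identical shape)."""
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Three waves of defocus reshape a conventional PSF drastically,
# but barely change a cubic-phase (wavefront-coded) PSF.
sim_clear = similarity(psf(0.0, 0.0), psf(3.0, 0.0))
sim_coded = similarity(psf(0.0, 20.0), psf(3.0, 20.0))
```

The coded PSF is blurrier at best focus (the trade-off the abstract notes) but nearly defocus-invariant, so a single matcher can work across the extended depth of field.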


International Joint Conference on Biometrics | 2011

A comparative evaluation of iris and ocular recognition methods on challenging ocular images

Vishnu Naresh Boddeti; Jonathon M. Smereka; B. V. K. Vijaya Kumar

Iris recognition is believed to offer excellent recognition rates for iris images acquired under controlled conditions. However, recognition rates degrade considerably when images exhibit impairments such as off-axis gaze, partial occlusions, specular reflections, and out-of-focus and motion-induced blur. In this paper, we use the recently available Face and Ocular Challenge Series (FOCS) dataset to investigate the comparative recognition performance gains of using ocular images (i.e., iris regions as well as the surrounding periocular regions) instead of just the iris regions. A new method for ocular recognition is presented, and it is shown that the use of ocular regions leads to better recognition rates than iris recognition on the FOCS dataset. Another advantage of using ocular images for recognition is that it avoids the need for segmenting the iris regions from their surroundings.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Zero-Aliasing Correlation Filters for Object Recognition

Joseph A. Fernandez; Vishnu Naresh Boddeti; Andres Rodriguez; B. V. K. Vijaya Kumar

Correlation filters (CFs) are a class of classifiers that are attractive for object localization and tracking applications. Traditionally, CFs have been designed in the frequency domain using the discrete Fourier transform (DFT), where correlation is efficiently implemented. However, existing CF designs do not account for the fact that the multiplication of two DFTs in the frequency domain corresponds to a circular correlation in the time/spatial domain. Because this was previously unaccounted for, prior CF designs are not truly optimal, as their optimization criteria do not accurately quantify their optimization intention. In this paper, we introduce new zero-aliasing constraints that completely eliminate this aliasing problem by ensuring that the optimization criterion for a given CF corresponds to a linear correlation rather than a circular correlation. This means that previous CF designs can be significantly improved by this reformulation. We demonstrate the benefits of this new CF design approach with several important CFs. We present experimental results on diverse data sets and present solutions to the computational challenges associated with computing these CFs. Code for the CFs described in this paper and their respective zero-aliasing versions is available at http://vishnu.boddeti.net/projects/correlation-filters.html
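The aliasing issue is easy to demonstrate: multiplying two same-size DFTs computes a circular correlation, which differs from the linear correlation at every lag where the signal wraps around. A small one-dimensional sketch (the paper's zero-aliasing constraints address this inside the filter optimization; the zero-padding here merely exposes the discrepancy):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=8)  # signal
h = rng.normal(size=8)  # filter template

# Multiplying same-size DFTs yields a CIRCULAR correlation (wrap-around).
circular = np.real(np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(h))))

# Zero-padding both signals before the DFT removes the wrap-around,
# so the result matches a LINEAR correlation at every lag.
N = 2 * len(f)
linear = np.real(np.fft.ifft(np.fft.fft(f, n=N) * np.conj(np.fft.fft(h, n=N))))
```

The two agree only at lag zero; every other lag of the circular result is corrupted by wrapped-around terms, which is exactly the mismatch between the optimization criterion and the intended linear correlation that the paper eliminates.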


International Conference on Computer Vision | 2012

Coupled marginal fisher analysis for low-resolution face recognition

Stephen Siena; Vishnu Naresh Boddeti; B. V. K. Vijaya Kumar

Many scenarios require that face recognition be performed under conditions that are not optimal. Traditional face recognition algorithms are not well suited for matching images captured at low resolution against a set of high-resolution gallery images. To perform matching between images of different resolutions, this work proposes a method of learning two sets of projections, one for high-resolution images and one for low-resolution images, based on local relationships in the data. Subsequent matching is done in a common subspace. Experiments show that our algorithm yields higher recognition rates than other similar methods.
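To illustrate the coupled-projection idea, here is a sketch using classical canonical correlation analysis (CCA) as a stand-in: one projection per domain, chosen so paired high- and low-resolution samples land close together in a common subspace. CCA is not the paper's method, which learns the projections via marginal Fisher analysis on local neighborhood relationships:

```python
import numpy as np

def inv_sqrt(C, eps=1e-8):
    """Symmetric inverse square root of a covariance matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def coupled_projections(X, Y):
    """Learn one projection per domain so paired samples become maximally
    correlated in a common subspace (classical CCA, used as a stand-in)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx, Cyy, Cxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    U, S, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy))
    return inv_sqrt(Cxx) @ U, inv_sqrt(Cyy) @ Vt.T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # e.g. high-resolution features
A = rng.normal(size=(5, 3))
Y = X @ A + 0.01 * rng.normal(size=(200, 3))   # degraded low-resolution view

Wx, Wy = coupled_projections(X, Y)
u = (X - X.mean(0)) @ Wx[:, 0]
v = (Y - Y.mean(0)) @ Wy[:, 0]
corr = float(np.corrcoef(u, v)[0, 1])          # near 1: matched in common space
```

After projection, a simple nearest-neighbor match in the common subspace suffices, even though the raw feature spaces have different dimensions.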


Computer Vision and Pattern Recognition | 2017

Local Binary Convolutional Neural Networks

Felix Juefei-Xu; Vishnu Naresh Boddeti; Marios Savvides

We propose local binary convolution (LBC), an efficient alternative to convolutional layers in standard convolutional neural networks (CNNs). The design principles of LBC are motivated by local binary patterns (LBP). The LBC layer comprises a set of fixed, sparse, pre-defined binary convolutional filters that are not updated during the training process, a non-linear activation function, and a set of learnable linear weights. The linear weights combine the activated filter responses to approximate the corresponding activated filter responses of a standard convolutional layer. The LBC layer affords significant parameter savings, 9x to 169x in the number of learnable parameters compared to a standard convolutional layer. Furthermore, the sparse and binary nature of the weights also results in 9x to 169x savings in model size compared to a standard convolutional layer. We demonstrate both theoretically and experimentally that our local binary convolution layer is a good approximation of a standard convolutional layer. Empirically, CNNs with LBC layers, called local binary convolutional neural networks (LBCNN), achieve performance parity with regular CNNs on a range of visual datasets (MNIST, SVHN, CIFAR-10, and ImageNet) while enjoying significant computational savings.
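A minimal NumPy sketch of an LBC layer as the abstract describes it: fixed sparse {-1, 0, +1} filters, a ReLU, then a learnable 1x1 linear combination of the activation maps. The filter count, sparsity level, and input size are arbitrary illustrative choices:

```python
import numpy as np

def lbc_layer(image, binary_filters, linear_weights):
    """Local binary convolution: fixed sparse {-1, 0, +1} filters, ReLU,
    then a learnable 1x1 linear combination of the activation maps."""
    kh, kw = binary_filters.shape[1:]
    H, W = image.shape
    maps = []
    for B in binary_filters:  # fixed filters: never updated in training
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * B)
        maps.append(np.maximum(out, 0.0))  # ReLU activation
    maps = np.stack(maps)  # shape (m, H', W')
    # Only this 1x1 combination is learned during training.
    return np.tensordot(linear_weights, maps, axes=1)

rng = np.random.default_rng(0)
m = 8  # number of fixed binary filters
filters = rng.choice([-1, 0, 1], size=(m, 3, 3), p=[0.25, 0.5, 0.25])
weights = rng.normal(size=m)  # the only learnable parameters here
y = lbc_layer(rng.normal(size=(16, 16)), filters, weights)
```

The savings come from the split: per output channel this layer learns only m scalars (8 here), while a standard 3x3 convolution learns 9 dense weights, and the fixed filters cost almost nothing to store.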


Knowledge Discovery and Data Mining | 2012

RainMon: an integrated approach to mining bursty timeseries monitoring data

Ilari Shafer; Kai Ren; Vishnu Naresh Boddeti; Yoshihisa Abe; Gregory R. Ganger; Christos Faloutsos

Metrics like disk activity and network traffic are widespread sources of diagnosis and monitoring information in datacenters and networks. However, as the scale of these systems increases, examining the raw data yields diminishing insight. We present RainMon, a novel end-to-end approach for mining timeseries monitoring data designed to handle its size and unique characteristics. Our system is able to (a) mine large, bursty, real-world monitoring data, (b) find significant trends and anomalies in the data, (c) compress the raw data effectively, and (d) estimate trends to make forecasts. Furthermore, RainMon integrates the full analysis process from data storage to the user interface to provide accessible long-term diagnosis. We apply RainMon to three real-world datasets from production systems and show its utility in discovering anomalous machines and time periods.
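One way to see the trend-mining and compression idea is a low-rank decomposition of the machines-by-time matrix; this is a simplified stand-in for RainMon's multi-stage pipeline, not its actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
trend = np.sin(2 * np.pi * t / 50)
# Ten machines share a common periodic trend plus small per-machine noise.
data = np.outer(np.ones(10), trend) + 0.05 * rng.normal(size=(10, 200))

# A rank-1 reconstruction captures the shared trend, and storing just
# one left vector, one singular value, and one right vector compresses
# the 10 x 200 matrix into 211 numbers.
U, S, Vt = np.linalg.svd(data, full_matrices=False)
approx = S[0] * np.outer(U[:, 0], Vt[0])
err = np.linalg.norm(data - approx) / np.linalg.norm(data)
```

Machines whose streams deviate strongly from the low-rank reconstruction are natural anomaly candidates, which is the spirit of the diagnosis use case described above.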

Collaboration


Dive into Vishnu Naresh Boddeti's collaborations.

Top Co-Authors

Kris M. Kitani, Carnegie Mellon University
Takeo Kanade, Carnegie Mellon University
Marios Savvides, Carnegie Mellon University
Andres Rodriguez, Air Force Research Laboratory
B. V. K. V. Kumar, Carnegie Mellon University
Hideki Koike, Tokyo Institute of Technology
Ryohei Funakoshi, Tokyo Institute of Technology
Anil K. Jain, Michigan State University