Network

Latest external collaborations at the country level.

Hotspot

Research topics where Santhoshkumar Sunderrajan is active.

Publication


Featured research published by Santhoshkumar Sunderrajan.


IEEE Transactions on Multimedia | 2016

Context-Aware Hypergraph Modeling for Re-identification and Summarization

Santhoshkumar Sunderrajan; B. S. Manjunath

Tracking and re-identification in wide-area camera networks is a challenging problem due to non-overlapping visual fields, varying imaging conditions, and appearance changes. We consider the problem of person re-identification and tracking, and propose a novel clothing context-aware color extraction method that is robust to such changes. Annotated samples are used to learn color drift patterns in a non-parametric manner using the random forest distance (RFD) function. The color drift patterns are automatically transferred to associate objects across different views using a unified graph matching framework. A hypergraph representation is used to link related objects for search and re-identification. A diverse hypergraph ranking technique is proposed for person-focused network summarization. The proposed algorithm is validated on a wide-area camera network consisting of ten cameras on bike paths, and is compared with state-of-the-art person re-identification algorithms on the VIPeR dataset.
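As an illustration of the hypergraph idea only (not the authors' code), a hyperedge can group all detections believed to belong to the same person, so search can follow group structure rather than just pairwise matches. Everything below, including the detection and person names, is a hypothetical toy:

```python
import numpy as np

# Hypothetical hypergraph over tracked objects: each hyperedge groups
# detections believed to be the same person across camera views.
objects = ["camA_det1", "camB_det4", "camC_det2", "camB_det7"]
hyperedges = {"person_0": {0, 1, 2},   # same person seen in three views
              "person_1": {3}}

# Incidence matrix H[i, e] = 1 if object i belongs to hyperedge e.
H = np.zeros((len(objects), len(hyperedges)))
for e, members in enumerate(hyperedges.values()):
    for i in members:
        H[i, e] = 1.0

# Object-object links via shared hyperedges: A = H H^T, zero diagonal.
A = H @ H.T
np.fill_diagonal(A, 0)
print(A[0])   # object 0 links to objects 1 and 2, but not 3
```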


International Conference on Image Processing | 2010

Distributed particle filter tracking with online multiple instance learning in a camera sensor network

Zefeng Ni; Santhoshkumar Sunderrajan; Amir M. Rahimi; B. S. Manjunath

This paper proposes a distributed algorithm for object tracking in a camera sensor network. At each camera node, an efficient online multiple instance learning algorithm is used to model each object's appearance. This is integrated with a particle filter for tracking on the camera's image plane. To improve tracking accuracy, each camera node shares its particle states with the others and fuses multi-camera information locally. In particular, particle weights are updated according to the fused information, and the appearance model is then updated with the re-weighted particles. The effectiveness of the proposed algorithm is demonstrated on human tracking in challenging environments.
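The fusion step can be sketched in a simplified 1-D form (a hypothetical toy, not the paper's implementation): each camera contributes a likelihood of the object position, particle weights are the fused product, and the re-weighted particles are resampled before the appearance model update.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_reweight(particles, observations, sigma=1.0):
    """Local fusion: multiply per-camera Gaussian likelihoods."""
    weights = np.ones(len(particles))
    for z in observations:
        weights *= np.exp(-0.5 * ((particles - z) / sigma) ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    """Resample so the appearance model sees the re-weighted particles."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal(0.0, 5.0, size=500)   # broad prior over 1-D position
cameras = [2.1, 1.8, 2.3]                    # three noisy views of ~2.0
w = fuse_and_reweight(particles, cameras)
particles = resample(particles, w)
print(round(particles.mean(), 1))            # posterior mass concentrates near 2.0
```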


International Conference on Pattern Recognition | 2010

Particle Filter Tracking with Online Multiple Instance Learning

Zefeng Ni; Santhoshkumar Sunderrajan; Amir M. Rahimi; B. S. Manjunath

This paper addresses the problem of object tracking by learning a discriminative classifier to separate the object from its background. The online-learned classifier is used to adaptively model the object's appearance and its background. To address the erroneous training examples typically generated during tracking, an online multiple instance learning (MIL) algorithm is used that tolerates false positive examples. In addition, a particle filter is applied to make the best use of the learned classifier and to help generate a more representative set of training examples for online MIL learning. The effectiveness of the proposed algorithm is demonstrated on human tracking in challenging environments.
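The tolerance to labeling noise comes from scoring bags of instances rather than single instances: a positive bag only needs one true instance. A minimal sketch, assuming a noisy-OR bag model (a common MIL choice, not necessarily the exact formulation used here):

```python
import numpy as np

def bag_probability(instance_probs):
    """Noisy-OR bag model: P(bag positive) = 1 - prod(1 - p_i)."""
    p = np.asarray(instance_probs)
    return 1.0 - np.prod(1.0 - p)

# Bag of patches cropped around the tracked object: most are off-target
# (low instance probability), one is on-target, yet the bag scores high.
noisy_positive_bag = [0.05, 0.10, 0.95, 0.08]
background_bag = [0.05, 0.10, 0.08, 0.07]
print(bag_probability(noisy_positive_bag) > 0.9)   # True
print(bag_probability(background_bag) < 0.3)       # True
```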


IEEE Transactions on Multimedia | 2013

Graph-Based Topic-Focused Retrieval in Distributed Camera Network

Jiejun Xu; Vignesh Jagadeesh; Zefeng Ni; Santhoshkumar Sunderrajan; B. S. Manjunath

Wide-area wireless camera networks are being increasingly deployed in many urban scenarios. The large amount of data generated by these cameras poses significant information processing challenges. In this work, we focus on the representation, search, and retrieval of moving objects in the scene, with emphasis on local camera-node video analysis. We develop a graph model that captures the relationships among objects without the need to identify global trajectories. Specifically, two types of edges are defined in the graph: object edges linking the same object across the whole network, and context edges linking different objects within spatio-temporal proximity. We propose a manifold ranking method with a greedy diversification step to order the relevant items based on similarity as well as diversity within the database. Detailed experimental results using video data from a ten-camera network covering bike paths are presented.
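A minimal manifold-ranking sketch on a hypothetical five-item chain graph (the affinity matrix, parameters, and iteration count are illustrative, not from the paper): ranking scores diffuse over the graph via f = alpha * S * f + (1 - alpha) * y.

```python
import numpy as np

def manifold_rank(W, query_idx, alpha=0.9, iters=200):
    """Iterate f = alpha*S*f + (1-alpha)*y on the normalized affinity S."""
    D = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    S = D @ W @ D                        # symmetric normalization
    y = np.zeros(len(W))
    y[query_idx] = 1.0                   # query indicator vector
    f = y.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y
    return f

# Chain graph 0-1-2-3-4: querying item 0 should rank nearer items higher.
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = manifold_rank(W, query_idx=0)
print(np.argsort(-scores))   # items ordered by relevance to the query
```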


IEEE Computer | 2015

People Tracking in Camera Networks: Three Open Questions

Ninad Thakoor; Le An; Bir Bhanu; Santhoshkumar Sunderrajan; B. S. Manjunath

Camera networks provide opportunities for practical video surveillance and monitoring, but tracking people across the network presents many computational and modeling hurdles that researchers have yet to surmount.


ACM Transactions on Sensor Networks | 2014

Calibrating a wide-area camera network with non-overlapping views using mobile devices

Thomas Kuo; Zefeng Ni; Santhoshkumar Sunderrajan; B. S. Manjunath

In a wide-area camera network, cameras are often placed such that their views do not overlap. Collaborative tasks such as tracking and activity analysis still require discovering the network topology, including the extrinsic calibration of the cameras. This work addresses the problem of calibrating a fixed camera in a wide-area camera network in a global coordinate system so that the results can be shared across calibrations. We achieve this by using commonly available mobile devices such as smartphones. At least one mobile device takes images that overlap with a fixed camera's view and records the GPS position and 3D orientation of the device when an image is captured. These sensor measurements (including the image, GPS position, and device orientation) are fused in order to calibrate the fixed camera. This article derives a novel maximum likelihood estimation formulation for finding the most probable location and orientation of a fixed camera. This formulation is solved in a distributed manner using a consensus algorithm. We evaluate the efficacy of the proposed methodology with several simulated and real-world datasets.
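The distributed solving step can be illustrated with a basic averaging-consensus iteration (a generic sketch only; the paper's consensus algorithm operates on the MLE objective, not on raw scalars as here):

```python
import numpy as np

def consensus_step(x, A, eps=0.2):
    """x_i <- x_i + eps * sum_j A_ij * (x_j - x_i) for all nodes i."""
    return x + eps * (A @ x - A.sum(axis=1, keepdims=True) * x)

# Hypothetical 3-node line topology; each node holds a noisy local
# estimate of the same quantity (e.g., one camera coordinate).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([[10.2], [9.7], [10.4]])
for _ in range(100):
    x = consensus_step(x, A)
print(np.round(x.ravel(), 2))   # all nodes agree near the average, 10.1
```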


International Conference on Distributed Smart Cameras | 2013

Multiple view discriminative appearance modeling with IMCMC for distributed tracking

Santhoshkumar Sunderrajan; B. S. Manjunath

This paper proposes a distributed multi-camera tracking algorithm with interacting particle filters. A robust multi-view appearance model is obtained by sharing training samples between views. Motivated by incremental learning and [1], we create an intermediate data representation between two camera views with generative subspaces as points on a Grassmann manifold, and sample along the geodesic between the training data from the two views to uncover a meaningful description of the viewpoint changes. Finally, a boosted appearance model is trained using the training samples projected onto these generative subspaces. For each object, a pair of particle filters, one local and one global, is used. The local particle filter models the object's motion in the image plane; the global particle filter models its motion in the ground plane. These particle filters are integrated into a unified Interacting Markov Chain Monte Carlo (IMCMC) framework. We show how priors on scene-specific information are induced into the global particle filter to improve tracking accuracy. The proposed algorithm is validated with extensive experimentation on challenging camera network data, and compares favorably with state-of-the-art object trackers.


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Search Tracker: Human-Derived Object Tracking in the Wild Through Large-Scale Search and Retrieval

Archith J. Bency; Shri Karthikeyan; Carter De Leo; Santhoshkumar Sunderrajan; B. S. Manjunath

Humans use context and scene knowledge to easily localize moving objects in conditions of complex illumination changes, scene clutter, and occlusions. In this paper, we present a method to leverage human knowledge in the form of annotated video libraries in a novel search and retrieval-based setting to track objects in unseen video sequences. For every video sequence, a document that represents motion information is generated. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. This provides us with coarse localization of objects in the unseen video. We further adapt these retrieved object locations to the new video using an efficient warping scheme. The proposed method is validated on in-the-wild video surveillance data sets where we outperform state-of-the-art appearance-based trackers. We also introduce a new challenging data set with complex object appearance changes.
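The retrieval step can be caricatured as nearest-neighbor search over motion "documents" (the histograms, bin semantics, and video names below are hypothetical; the paper's descriptors and multi-scale matching are more involved):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two motion-histogram documents."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical library of annotated videos, each summarized by a
# 4-bin motion-direction histogram.
library = {"vid_a": np.array([8.0, 1.0, 0.0, 1.0]),   # mostly rightward motion
           "vid_b": np.array([1.0, 1.0, 8.0, 1.0]),   # mostly leftward motion
           "vid_c": np.array([7.0, 2.0, 1.0, 1.0])}
query = np.array([9.0, 1.0, 1.0, 0.0])                # unseen video, rightward

# Retrieve the library video with the most similar motion profile; its
# annotated object locations would then seed localization in the query.
best = max(library, key=lambda k: cosine(query, library[k]))
print(best)
```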


International Conference on Computer Vision | 2013

Camera Alignment Using Trajectory Intersections in Unsynchronized Videos

Thomas Kuo; Santhoshkumar Sunderrajan; B. S. Manjunath

This paper addresses the novel and challenging problem of aligning camera views that are unsynchronized by low and/or variable frame rates using object trajectories. Unlike existing trajectory-based alignment methods, our method does not require frame-to-frame synchronization. Instead, we propose using the intersections of corresponding object trajectories to match views. To find these intersections, we introduce a novel trajectory matching algorithm based on matching Spatio-Temporal Context Graphs (STCGs). These graphs represent the distances between trajectories in time and space within a view, and are matched to an STCG from another view to find the corresponding trajectories. To the best of our knowledge, this is one of the first attempts to align views that are unsynchronized with variable frame rates. The results on simulated and real-world datasets show trajectory intersections are a viable feature for camera alignment, and that the trajectory matching method performs well in real-world scenarios.
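The underlying geometric primitive, finding where two object trajectories cross, can be sketched with a standard segment-intersection test (illustrative code, not the authors' implementation). Because the crossing point is a purely spatial feature, it survives unsynchronized and variable frame rates:

```python
def seg_intersect(p1, p2, p3, p4):
    """Return the intersection of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None                      # parallel or degenerate segments
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0 <= t <= 1 and 0 <= u <= 1:      # intersection within both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

# Two crossing trajectory segments in the image plane.
print(seg_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # (2.0, 2.0)
```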


2014 ICPR Workshop on Computer Vision for Analysis of Underwater Imagery | 2014

Marine Biodiversity Classification Using Dropout Regularization

Amir M. Rahimi; Robert J. Miller; Dmitri G. Fedorov; Santhoshkumar Sunderrajan; Brandon Doheny; Henry M. Page; B. S. Manjunath

Collaboration


Dive into Santhoshkumar Sunderrajan's collaborations.

Top Co-Authors

Zefeng Ni, University of California
Amir M. Rahimi, University of California
Thomas Kuo, University of California
Bir Bhanu, University of California
Brandon Doheny, University of California
Carter De Leo, University of California
Henry M. Page, University of California