Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vijay Chandrasekhar is active.

Publication


Featured research published by Vijay Chandrasekhar.


ACM/IEEE International Conference on Mobile Computing and Networking | 2006

Localization in underwater sensor networks: survey and challenges

Vijay Chandrasekhar; Winston Khoon Guan Seah; Yoo Sang Choo; How Voon Ee

In underwater sensor networks (UWSNs), determining the location of every sensor is important; the process of estimating the location of each node in a sensor network is known as localization. While various localization algorithms have been proposed for terrestrial sensor networks, there are relatively few localization schemes for UWSNs. The characteristics of underwater sensor networks are fundamentally different from those of terrestrial networks. Underwater acoustic channels are characterized by harsh physical-layer environments with stringent bandwidth limitations. The variable speed of sound and the long propagation delays under water pose a unique set of challenges for localization in UWSNs. This paper explores the different localization algorithms that are relevant to underwater sensor networks, and the challenges in meeting the requirements posed by emerging applications for such networks, e.g., offshore engineering.
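
As a concrete illustration of the range-based schemes such surveys cover, here is a minimal sketch of acoustic time-of-flight ranging followed by least-squares trilateration. The nominal sound speed, function names, and linearization are illustrative, not taken from the paper:

```python
import numpy as np

SOUND_SPEED = 1500.0  # m/s; a nominal value. Underwater sound speed actually
                      # varies with depth, temperature, and salinity, which is
                      # one of the challenges the survey highlights.

def tof_range(one_way_delay_s, sound_speed=SOUND_SPEED):
    """Convert a one-way acoustic propagation delay into a range estimate."""
    return sound_speed * one_way_delay_s

def trilaterate(anchors, ranges):
    """Least-squares position from >= 3 anchor positions and range estimates.

    Linearizes ||x - a_i||^2 = r_i^2 by subtracting the last equation from
    the others, leaving a linear system A x = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a_ref, r_ref = anchors[-1], ranges[-1]
    A = 2.0 * (anchors[:-1] - a_ref)
    b = (r_ref**2 - ranges[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(a_ref**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

In practice the variable sound speed and long propagation delays noted above make the range estimates themselves noisy, which is where UWSN-specific schemes depart from their terrestrial counterparts.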


OCEANS 2006 - Asia Pacific | 2006

An Area Localization Scheme for Underwater Sensor Networks

Vijay Chandrasekhar; Winston Khoon Guan Seah

For large wireless sensor networks, identifying the exact location of every sensor may not be feasible, and the cost may be very high. A coarse estimate of the sensors' locations is usually sufficient for many applications. In this paper, we propose an efficient Area Localization Scheme (ALS) for underwater sensor networks. The scheme estimates the position of every sensor within a certain area rather than its exact location. The granularity of the areas estimated for each node can be easily adjusted by varying system parameters. All the complex calculations are handled by the powerful sinks instead of the sensors, which reduces the energy consumed by the sensors and helps extend the lifetime of the network.
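
The abstract does not spell out how areas are formed, so the following is only a hypothetical sketch of one way such a scheme could work: sinks broadcast beacons at several power levels, each sensor reports the lowest level it heard per sink, and the sink-side solver intersects the implied annuli into a coarse area. All names and parameters are illustrative:

```python
import numpy as np

def coarse_area(sinks, lowest_level_heard, level_range, grid):
    """Return the grid cells consistent with the reported beacon levels.

    sinks              : (S, 2) sink coordinates
    lowest_level_heard : length-S ints, lowest power level heard per sink
    level_range        : list mapping power level -> nominal transmission
                         range, monotonically increasing
    grid               : (N, 2) candidate cell centres; grid density is the
                         granularity knob the abstract mentions
    """
    grid = np.asarray(grid, dtype=float)
    feasible = np.ones(len(grid), dtype=bool)
    for sink, lvl in zip(np.asarray(sinks, dtype=float), lowest_level_heard):
        d = np.linalg.norm(grid - sink, axis=1)
        outer = level_range[lvl]                          # heard at this level...
        inner = level_range[lvl - 1] if lvl > 0 else 0.0  # ...but not below it
        feasible &= (d > inner) & (d <= outer)
    return grid[feasible]
```

Note how the structure matches the abstract's design goals: sensors only listen and report, while all geometry runs at the sink.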


Signal Processing | 2016

A practical guide to CNNs and Fisher Vectors for image instance retrieval

Vijay Chandrasekhar; Jie Lin; Olivier Morère; Hanlin Goh; Antoine Veillard

With deep learning becoming the dominant approach in computer vision, representations extracted from Convolutional Neural Nets (CNNs) are quickly gaining ground on Fisher Vectors (FVs) as the favoured state-of-the-art global image descriptors for image instance retrieval. While the good performance of CNNs for image classification is unambiguously recognised, which of the two has the upper hand in the image retrieval context is not yet entirely clear. We propose a comprehensive study that systematically evaluates FVs and CNNs for image instance retrieval. The first part compares the performance of FVs and CNNs on multiple publicly available data sets and for multiple criteria. We show that neither descriptor is systematically better than the other and that performance gains can usually be obtained by using both types together. The second part of the study focuses on the impact of geometrical transformations. We show that the performance of CNNs can quickly degrade in the presence of certain transformations, and we propose a number of ways to incorporate the required invariances into the CNN pipeline. Our findings are organised as a reference guide offering practical, simply implementable guidelines for anyone looking for the state-of-the-art global descriptors best suited to their specific image instance retrieval problem.

Highlights:
- CNNs exhibit very limited invariance to rotation changes compared to FV+DoG.
- CNNs are more robust to scale changes than any variant of FV.
- Max-pooling across rotated/scaled database images gains rotation/scale invariance.
- Combining FV with CNN can improve retrieval accuracy by a significant margin.
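
The headline recommendation, using FVs and CNNs together, amounts to simple late fusion at query time. A minimal numpy sketch, assuming precomputed descriptors and an illustrative equal weighting:

```python
import numpy as np

def l2_normalize(X, axis=-1):
    return X / (np.linalg.norm(X, axis=axis, keepdims=True) + 1e-12)

def fused_scores(query_fv, query_cnn, db_fv, db_cnn, weight=0.5):
    """Weighted sum of cosine similarities from the two descriptor types.

    query_* : 1-D descriptors for the query image
    db_*    : (N, d) matrices of database descriptors
    """
    s_fv = l2_normalize(db_fv) @ l2_normalize(query_fv)
    s_cnn = l2_normalize(db_cnn) @ l2_normalize(query_cnn)
    return weight * s_fv + (1.0 - weight) * s_cnn

# ranking = np.argsort(-fused_scores(q_fv, q_cnn, db_fv, db_cnn))  # best first
```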


Asian Conference on Computer Vision | 2014

A Wearable Face Recognition System on Google Glass for Assisting Social Interactions

Bappaditya Mandal; Shue-Ching Chia; Liyuan Li; Vijay Chandrasekhar; Cheston Tan; Joo-Hwee Lim

In this paper, we present a wearable face recognition (FR) system on Google Glass (GG) to assist users in social interactions. FR is the first step towards face-to-face social interactions. We propose a wearable system on GG that acts as a social interaction assistant; the application includes face detection, eye localization, face recognition, and a user interface for displaying personal information. To be useful in natural social interaction scenarios, the system should be robust to changes in face pose, scale, and lighting conditions. OpenCV face detection is implemented on GG. We exploit both the OpenCV and ISG (Integration of Sketch and Graph patterns) eye detectors to locate a pair of eyes on the face; the former is stable for frontal-view faces while the latter performs better for oblique-view faces. We extend the eigenfeature regularization and extraction (ERE) face recognition approach by introducing subclass discriminant analysis (SDA) to perform within-subclass discriminant analysis for face feature extraction. The new approach improves the accuracy of FR over varying face pose, expression, and lighting conditions. A simple user interface (UI) is designed to present relevant personal information about the recognized person to assist in the social interaction. A standalone system on GG and a Client-Server (CS) system that connects GG with a smartphone via Bluetooth are implemented, for different levels of privacy protection. Performance is evaluated on a database created using GG, and comparisons with baseline approaches are performed. Extensive experiments show that our proposed system on GG performs real-time FR better than other methods.
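
A rough sketch of the detection and eye-localization front end using the Haar cascades that ship with OpenCV (the abstract confirms OpenCV detection is used; the ISG eye detector and the ERE/SDA features are not public here, so a plain resized-crop descriptor with cosine matching stands in for them):

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def recognize_faces(frame_gray, gallery_feats, gallery_names):
    """gallery_feats: (N, 4096) L2-normalized features; gallery_names: N labels."""
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(frame_gray, 1.1, 5):
        roi = frame_gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi)) < 2:
            continue  # need both eyes before attempting recognition
        feat = cv2.resize(roi, (64, 64)).astype(np.float32).ravel()
        feat /= np.linalg.norm(feat) + 1e-12
        scores = gallery_feats @ feat            # cosine similarity vs. gallery
        best = int(np.argmax(scores))
        results.append((gallery_names[best], float(scores[best])))
    return results
```

In the paper's Client-Server configuration, a loop like this would run on the phone with GG streaming frames over Bluetooth; in the standalone configuration it runs on the device itself.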


Mobile Data Management | 2006

Selective Iterative Multilateration for Hop Count-Based Localization in Wireless Sensor Networks

Jeffrey Tay; Vijay Chandrasekhar; Winston Khoon Guan Seah

Iterative multilateration techniques have been proposed to improve position estimates in localization schemes for sensor networks. However, these techniques are hampered by problems such as propagation of errors, which results in inferior estimates, and high communication overheads leading to poor scalability. In view of this, a novel Selective Iterative Multilateration (SIM) algorithm is described in this paper to improve the accuracy of location estimation in hop count-based localization schemes without incurring unnecessary overhead costs. New anchor nodes are selected judiciously such that their initial position estimates are sufficiently accurate. Also, such new anchor nodes are prevented from appearing in the same regions so that unnecessary overhead is kept to a minimum.
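
The selection rule the abstract describes (promote only accurate, well-separated nodes to new anchors) could look like the following hypothetical sketch; the thresholds, the error model, and the function names are illustrative, not the paper's:

```python
def select_new_anchors(estimates, anchors, dist, err_threshold, min_separation):
    """Promote nodes to anchors after a hop count-based first pass.

    estimates : {node_id: (position, estimated_error)} from the initial pass
    anchors   : positions of existing anchor nodes
    dist      : distance function between two positions
    """
    promoted = []
    # Consider the most accurate estimates first.
    for node, (pos, err) in sorted(estimates.items(), key=lambda kv: kv[1][1]):
        if err > err_threshold:
            break  # remaining candidates are even less accurate
        if all(dist(pos, a) >= min_separation for a in list(anchors) + promoted):
            promoted.append(pos)  # accurate and not crowding an existing anchor
    return promoted
```

The two guards map directly onto the two problems cited above: the error threshold limits error propagation, and the separation check keeps new anchors out of regions that are already covered, bounding the communication overhead.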


International Conference on Image Processing | 2015

Whole space subclass discriminant analysis for face recognition

Bappaditya Mandal; Liyuan Li; Vijay Chandrasekhar; Joo Hwee Lim

In this work, we propose to divide each class (a person) into subclasses using spatial partition trees, which helps to better capture the intra-personal variances arising from the appearances of the same individual. We perform a comprehensive analysis of the within-class and within-subclass eigen-spectrums of face images and propose a novel method of eigen-spectrum modeling which extracts discriminative features of faces from both the within-subclass and the total (between-subclass) scatter matrices. Effective low-dimensional discriminative face features are extracted for face recognition (FR) after performing discriminant evaluation in the entire eigenspace. Experimental results on popular face databases (AR, FERET) and the challenging unconstrained YouTube Face database show the superiority of our proposed approach on all three databases.
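
The within-subclass scatter at the heart of the method is standard and easy to write down. A minimal numpy sketch; the subclass partitioning via spatial trees and the eigen-spectrum model itself are beyond this snippet:

```python
import numpy as np

def within_subclass_scatter(X, subclass_labels):
    """Scatter of samples around their subclass means.

    X               : (n, d) matrix of face feature vectors
    subclass_labels : length-n subclass id per sample, e.g. from partitioning
                      each person's images with a spatial partition tree
    """
    subclass_labels = np.asarray(subclass_labels)
    S = np.zeros((X.shape[1], X.shape[1]))
    for s in np.unique(subclass_labels):
        Xs = X[subclass_labels == s]
        Xc = Xs - Xs.mean(axis=0)
        S += Xc.T @ Xc
    return S

# The eigen-spectrum that the paper models and regularizes:
# eigvals, eigvecs = np.linalg.eigh(within_subclass_scatter(X, labels))
```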


International Conference on Acoustics, Speech, and Signal Processing | 2016

Egocentric activity recognition with multimodal fisher vector

Sibo Song; Ngai-Man Cheung; Vijay Chandrasekhar; Bappaditya Mandal; Jie Lin

With the increasing availability of wearable devices, research on egocentric activity recognition has received much attention recently. In this paper, we build a Multimodal Egocentric Activity dataset which includes egocentric videos and sensor data for 20 fine-grained and diverse activity categories. We present a novel strategy to extract temporal trajectory-like features from sensor data and propose to apply the Fisher Kernel framework to fuse video and temporally enhanced sensor features. Experimental results show that, with careful design of the feature extraction and fusion algorithm, sensor data can enhance information-rich video data. We make the Multimodal Egocentric Activity dataset publicly available to facilitate future research.
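
Fisher Vector encoding under a diagonal-covariance GMM is standard enough to sketch. This is the textbook formulation (mean and variance gradients with power and L2 normalisation), not the paper's exact pipeline; the fusion comment reflects the Fisher Kernel idea in the abstract:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """FV of local descriptors X under a GaussianMixture(covariance_type='diag')."""
    N = X.shape[0]
    q = gmm.predict_proba(X)                              # (N, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    d = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]  # (N, K, D)
    g_mu = (q[:, :, None] * d).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (q[:, :, None] * (d**2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalisation

# Multimodal fusion in the spirit of the paper: fit one GMM on video features
# and one on the temporal sensor features, then concatenate the two FVs.
```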


System Analysis and Modeling | 2014

Recovering Social Interaction Spatial Structure from Multiple First-Person Views

Tian Gan; Yongkang Wong; Bappaditya Mandal; Vijay Chandrasekhar; Liyuan Li; Joo-Hwee Lim; Mohan S. Kankanhalli

In a typical multi-person social interaction, spatial information plays an important role in analyzing the structure of the interaction. Previous studies, which analyze the spatial structure of social interactions using one or more third-person view cameras, suffer from the occlusion problem. With the increasing popularity of wearable computing devices, we can now obtain natural first-person observations with limited occlusion. However, such observations have a limited field of view and can capture only a portion of the social interaction. To overcome this limitation, we propose a search-based structure recovery method for small-group conversational social interactions that reconstructs the interaction structure from multiple first-person views, each of which contributes to a multifaceted understanding of the interaction. We first map each first-person view to a local coordinate system; a set of constraints and spatial relationships is then extracted from these local coordinate systems. Finally, the human spatial configuration is searched under the constraints to "best match" the extracted relationships. The proposed method is much simpler than full 3D reconstruction and suffices for capturing the spatial structure of the social interaction. Experiments on both simulated and real-world data show the efficacy of the proposed method.
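
The search step lends itself to a brute-force sketch: enumerate candidate placements per person and keep the configuration that best satisfies the extracted pairwise relationships. Everything below (the scoring callback, the grid of candidates) is hypothetical, not the paper's implementation:

```python
import itertools
import numpy as np

def best_configuration(candidates_per_person, pairwise_score):
    """Exhaustive search over candidate positions for each person.

    candidates_per_person : list of (M_i, 2) arrays of candidate 2-D positions
    pairwise_score(i, j, p_i, p_j) : agreement of a placement with the
        relationships observed between persons i and j (higher is better)
    """
    best_config, best_score = None, -np.inf
    for config in itertools.product(*candidates_per_person):
        score = sum(pairwise_score(i, j, config[i], config[j])
                    for i in range(len(config))
                    for j in range(i + 1, len(config)))
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

For the small-group scenarios the paper targets, the candidate sets stay small enough that exhaustive search of this kind remains far cheaper than full 3D reconstruction.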


International Conference on Multimedia Retrieval | 2017

DeepHash for Image Instance Retrieval: Getting Regularization, Depth and Fine-Tuning Right

Jie Lin; Olivier Morère; Antoine Veillard; Ling-Yu Duan; Hanlin Goh; Vijay Chandrasekhar

This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash, a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations -- regularization, depth, and fine-tuning -- each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme outperforms state-of-the-art methods by up to 8.5% over several benchmark datasets, for both Fisher Vector and deep convolutional neural network features. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating-point features -- a remarkable 512x compression.
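
A learned deep hash is out of scope for a snippet, but the retrieval mechanics it plugs into are easy to show. Below, a random sign projection stands in for the trained DeepHash network (purely illustrative); ranking is by Hamming distance over the binary codes:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(in_dim, n_bits=256):
    """Random-projection stand-in for a trained hashing network."""
    W = rng.standard_normal((in_dim, n_bits)) / np.sqrt(in_dim)
    return lambda X: np.asarray(X) @ W > 0      # boolean (..., n_bits) codes

def hamming_rank(query_code, db_codes):
    """Indices of database items, most similar (fewest differing bits) first."""
    return np.argsort(np.count_nonzero(db_codes != query_code, axis=1))

# 256-bit codes vs. a 4096-dim float32 descriptor: 32 bytes vs. 16 KB,
# which is the 512x compression figure quoted above.
```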


IEEE Transactions on Multimedia | 2017

HNIP: Compact Deep Invariant Representations for Video Matching, Localization, and Retrieval

Jie Lin; Ling-Yu Duan; Shiqi Wang; Yan Bai; Yihang Lou; Vijay Chandrasekhar; Tiejun Huang; Alex ChiChung Kot; Wen Gao

With emerging demand for large-scale video analysis, MPEG initiated the Compact Descriptors for Video Analysis (CDVA) standardization in 2014. Going beyond the handcrafted descriptors adopted by the current MPEG-CDVA reference model, we study the problem of deep learned global descriptors for video matching, localization, and retrieval. First, inspired by a recent invariance theory, we propose a nested invariance pooling (NIP) method to derive compact deep global descriptors from convolutional neural networks (CNNs) by progressively encoding translation, scale, and rotation invariances into the pooled descriptors. Second, our empirical studies show that the sequence of pooling moments (e.g., max or average) can drastically impact video matching performance, which motivates us to design hybrid pooling operations via NIP (HNIP). HNIP further improves the discriminability of deep global descriptors. Third, we investigate the complementary effects of combining deep and handcrafted descriptors and report the resulting performance improvements. We evaluate the effectiveness of HNIP within the well-established MPEG-CDVA evaluation framework. Extensive experiments demonstrate that HNIP outperforms the state-of-the-art deep and canonical handcrafted descriptors with significant mAP gains of 5.5% and 4.7%, respectively. In particular, the combination of HNIP-based CNN descriptors and handcrafted global descriptors significantly boosts the performance of CDVA core techniques with comparable descriptor size.
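
The nesting idea (pool translations, then scales, then rotations, with a possibly different moment at each level) can be sketched directly on a stack of CNN activations. The shapes and the particular moment sequence below are illustrative assumptions, not taken from the standard or the paper:

```python
import numpy as np

def moment_pool(stack, moment):
    """Pool a stack along axis 0 with the given moment."""
    return {"avg": stack.mean, "max": stack.max, "std": stack.std}[moment](axis=0)

def nested_invariance_pool(feats, moments=("avg", "std", "max")):
    """feats: (R, S, H, W, C) CNN activations over R rotated and S scaled
    copies of the input. Pools translations (H, W), then scales, then
    rotations, applying one moment per level."""
    R, S, H, W, C = feats.shape
    x = feats.reshape(R, S, H * W, C)
    x = moment_pool(np.moveaxis(x, 2, 0), moments[0])  # translations -> (R, S, C)
    x = moment_pool(np.moveaxis(x, 1, 0), moments[1])  # scales       -> (R, C)
    x = moment_pool(x, moments[2])                     # rotations    -> (C,)
    return x / (np.linalg.norm(x) + 1e-12)
```

The point of the hybrid design is exactly this freedom: the abstract reports that which moment is applied at which level measurably changes matching performance.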

Collaboration


Dive into Vijay Chandrasekhar's collaborations.

Top Co-Authors

Winston Khoon Guan Seah

Victoria University of Wellington

Antoine Veillard

National University of Singapore
