Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vasudev Bhaskaran is active.

Publication


Featured research published by Vasudev Bhaskaran.


Computer Vision and Pattern Recognition | 2017

LCDet: Low-Complexity Fully-Convolutional Neural Networks for Object Detection in Embedded Systems

Subarna Tripathi; Gokce Dane; Byeongkeun Kang; Vasudev Bhaskaran; Truong Q. Nguyen

Deep Convolutional Neural Networks (CNN) are the state-of-the-art performers for the object detection task. It is well known that object detection requires more computation and memory than image classification. In this work, we propose LCDet, a fully-convolutional neural network for generic object detection that aims to work in embedded systems. We design and develop an end-to-end TensorFlow (TF)-based model. The detection works by a single forward pass through the network. Additionally, we employ 8-bit quantization on the learned weights. As a use case, we choose face detection and train the proposed model on images containing a varying number of faces of different sizes. We evaluate the face detection performance on the publicly available FDDB and Widerface datasets. Our experimental results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based face detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO [23]. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as that of the floating-point model, and achieves a 20× performance gain over the floating-point model. Thus the proposed model is amenable to embedded implementation and is generic enough to be extended to any number of object categories.
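The 8-bit weight quantization mentioned in the abstract can be illustrated with a minimal sketch, assuming a simple symmetric per-tensor scheme; the paper's exact fixed-point format and per-layer scaling are not given here, and the function names are illustrative.

```python
import numpy as np

def quantize_weights_int8(w):
    """Symmetric per-tensor 8-bit quantization of a float weight array.

    Illustrative only: LCDet applies 8-bit fixed-point quantization to the
    learned TF weights; the exact rounding mode and scale granularity are
    assumptions here.
    """
    scale = np.max(np.abs(w)) / 127.0                     # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_weights(q, scale):
    """Recover approximate float weights for inference or accuracy checks."""
    return q.astype(np.float32) * scale

# Example: a toy conv kernel shrinks from 4 bytes/weight to 1 byte/weight,
# consistent with the reported ~4x memory reduction of the fixed-point model.
w = np.random.randn(3, 3, 16, 32).astype(np.float32)
q, s = quantize_weights_int8(w)
print(q.nbytes / w.nbytes)    # 0.25
```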


Proceedings of SPIE | 2013

A post-alignment method for stereoscopic movie

Xin Du; Xiaoyu Chen; Vasudev Bhaskaran; Fan Ling; Yunfang Zhu; Huiliang Shen

In this paper, we propose a novel post-alignment method that is both simple and effective for stereo video post-production. A low-distortion algorithm for rectifying the epipolar lines is first introduced. Unlike traditional methods, which map the epipoles to (1,0,0)^T directly, our method proceeds in two steps: 1) mapping the epipoles to points at infinity; 2) aligning the epipolar lines with the x-axis. More specifically, by taking advantage of the fact that commonly available stereoscopic movies are nearly aligned, our method keeps one of the stereo images unchanged, and the rectification is applied only to the other image. Besides epipolar non-parallelism distortion, disparity distortion is also an important issue to consider for stereoscopic movies. We propose a new constraint for stereoscopic video alignment such that the variation of disparities is also minimized. Experimental results demonstrate that our method obtains better visual effects than state-of-the-art methods.
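The underlying rectification idea can be sketched with a classical Hartley-style construction: build a homography that sends the epipole to infinity so the epipolar lines become horizontal. This is only an illustration of that idea, not the authors' low-distortion algorithm (which keeps one view fixed and constrains disparity variation); all names are illustrative.

```python
import numpy as np

def rectifying_homography(epipole):
    """Homography sending the epipole to a point at infinity on the x-axis.

    A minimal sketch, assuming the classical construction: rotate the image
    so the epipole lies on the x-axis, then map it to infinity, which makes
    all epipolar lines horizontal.
    """
    ex, ey = epipole[0] / epipole[2], epipole[1] / epipole[2]

    # Rotate so the epipole lands on the positive x-axis at (d, 0, 1).
    theta = np.arctan2(ey, ex)
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])

    # Send (d, 0, 1) to (d, 0, 0), i.e. to infinity along the x-axis.
    d = np.hypot(ex, ey)
    G = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [-1.0 / d, 0.0, 1.0]])
    return G @ R

# Example: after applying H, the epipole becomes a direction (third coordinate ~ 0).
e = np.array([800.0, 60.0, 1.0])
H = rectifying_homography(e)
print(H @ e)
```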


Proceedings of SPIE | 2013

Multiview synthesis for autostereoscopic displays

Gokce Dane; Vasudev Bhaskaran

Autostereoscopic (AS) displays spatially multiplex multiple views, providing a more immersive experience by enabling users to view content from different angles without the need for 3D glasses. Multiple views could be captured by multiple cameras at different orientations; however, this can be expensive, time-consuming, and impractical for some applications. The goal of multiview synthesis in this paper is to generate multiple views from a stereo image pair and a disparity map using various video processing techniques, including depth/disparity map processing, initial view interpolation, inpainting, and post-processing. We specifically emphasize the need for disparity processing when no depth information associated with the 2D data is available, and we propose a segmentation-based disparity processing algorithm to improve the disparity map. Furthermore, we extend a texture-based 2D inpainting algorithm to 3D and further improve the hole-filling performance of view synthesis. The benefit of each step of the proposed algorithm is demonstrated by comparison with state-of-the-art algorithms in terms of visual quality and the PSNR metric. Our system is evaluated in an end-to-end multiview synthesis framework in which only a stereo image pair is provided as input and eight views are generated and displayed on an 8-view Alioscopy AS display.
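The initial view-interpolation stage can be sketched as disparity-scaled forward warping with hole marking. This is a simplified illustration, not the paper's full pipeline: the disparity processing, 3D inpainting, and post-processing steps are omitted, and the function names and disparity-sign convention are assumptions.

```python
import numpy as np

def synthesize_view(left_img, disparity, alpha):
    """Forward-warp the left image toward an intermediate viewpoint.

    Each pixel is shifted by a fraction `alpha` (0 = left view, 1 = right
    view) of its disparity; destination pixels that receive no sample are
    marked as holes for a later inpainting / hole-filling pass.
    """
    h, w = disparity.shape
    view = np.zeros_like(left_img)
    hole_mask = np.ones((h, w), dtype=bool)

    xs = np.arange(w)
    for y in range(h):
        # Target column for every source pixel in this row (last write wins;
        # a real implementation would resolve occlusions by depth ordering).
        x_dst = np.round(xs - alpha * disparity[y]).astype(int)
        valid = (x_dst >= 0) & (x_dst < w)
        view[y, x_dst[valid]] = left_img[y, xs[valid]]
        hole_mask[y, x_dst[valid]] = False

    return view, hole_mask

# Example: generate 8 evenly spaced views between the stereo pair, mirroring
# the 8-view autostereoscopic display setup described above.
# left, disp = load_stereo_pair(...)   # hypothetical loader
# views = [synthesize_view(left, disp, a) for a in np.linspace(0.0, 1.0, 8)]
```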


Archive | 2009

Systems and Methods for Perceptually Lossless Video Compression

Vasudev Bhaskaran; Nikhil Balram


Archive | 2010

System and methods for gamut bounded saturation adaptive color enhancement

Vasudev Bhaskaran; Sujith Srinivasan; Nikhil Balram


Archive | 2010

Automatic adjustments for video post-processor based on estimated quality of internet video content

Vasudev Bhaskaran; Mainak Biswas; Nikhil Balram


Archive | 2009

BIT RESOLUTION ENHANCEMENT

Vasudev Bhaskaran; Nikhil Balram; Sujith Srinivasan; Sanjay Garg


Archive | 2013

METHOD AND APPARATUS OF REDUCING RANDOM NOISE IN DIGITAL VIDEO STREAMS

Mainak Biswas; Vasudev Bhaskaran; Sujith Srinivasan; Shilpi Sahu


Archive | 2014

MULTIVIEW SYNTHESIS AND PROCESSING SYSTEMS AND METHODS

Gokce Dane; Vasudev Bhaskaran


Archive | 2012

CROSSTALK REDUCTION WITH LOCATION-BASED ADJUSTMENT IN MULTIVIEW VIDEO PROCESSING

Gokce Dane; Vasudev Bhaskaran

Collaboration


Dive into Vasudev Bhaskaran's collaborations.

Top Co-Authors

Mainak Biswas

Marvell Technology Group