Rajesh Narasimha
Texas Instruments
Publications
Featured research published by Rajesh Narasimha.
IEEE Signal Processing Magazine | 2012
Fa-Long Luo; Ward Williams; Raghuveer M. Rao; Rajesh Narasimha; Marie-José Montpetit
As one of the technical committees of the IEEE Signal Processing Society (SPS), the Industry Digital Signal Processing (DSP) Standing Committee (IDSP-SC) focuses on emerging signal processing applications and technologies from industry and practice perspectives. This article summarizes our presentation at one of the expert sessions of ICASSP 2011; the emphasis is on signal processing applications and technologies related to digital and software radio frequency (RF) processing, single-chip solutions, nanoscale technology, cognitive and reconfigurable radar, the smart Internet of Things, cloud and service computing, new-generation television (TV) [smart TV, 3D-TV, 4K-TV, and ultrahigh-definition TV (UHD-TV)], and autonomous system perception.
Proceedings of SPIE | 2011
Vikram V. Appia; Rajesh Narasimha
In this paper we describe a low-complexity image orientation detection algorithm that can be implemented in real time on embedded devices such as low-cost digital cameras, mobile phone cameras, and video surveillance cameras. Providing orientation information to tamper detection algorithms in surveillance cameras, color enhancement algorithms, and various scene classifiers can help improve their performance. Various image orientation detection algorithms have been developed in recent years for image management systems as post-processing tools. However, these techniques rely on high-level features and object classification to detect orientation, so they are not suitable for real-time implementation on a capturing device. Our algorithm uses low-level features such as texture, lines, and the source of illumination to detect orientation. We implemented the algorithm on a mobile phone camera device with a 180 MHz ARM926 processor. Orientation detection takes 10 ms per frame, which makes the algorithm suitable for both image capture and video mode, and it can run efficiently in parallel with the other processes in the device's imaging pipeline. On hardware, the algorithm achieved an accuracy of 92% with a rejection rate of 4% and a false detection rate of 8% on outdoor images.
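The abstract does not detail how the low-level cues are combined, so the following is only an illustrative sketch of one such cue: assuming an outdoor scene, the brightest image border (a stand-in for the illumination-source feature) is taken as "up", and the function returns the clockwise rotation that would move it there. The feature choice and the 4-way decision rule here are hypothetical simplifications, not the paper's actual method.

```python
def detect_orientation(img):
    """Guess image orientation from a coarse illumination cue.

    img: 2-D list of grayscale pixel values (list of rows).
    Returns 0, 90, 180, or 270 -- the clockwise rotation in degrees
    that would bring the brightest border (assumed to be sky) to the top.
    """
    h, w = len(img), len(img[0])
    top    = sum(img[0]) / w
    bottom = sum(img[-1]) / w
    left   = sum(row[0] for row in img) / h
    right  = sum(row[-1] for row in img) / h
    # Map each border to the clockwise rotation that moves it to the top:
    # left border -> 90 deg CW, right border -> 270 deg CW, etc.
    cues = {0: top, 180: bottom, 90: left, 270: right}
    return max(cues, key=cues.get)
```

Because every feature is a simple border average, the cost per frame is linear in the image perimeter, which is consistent with the kind of budget a 10 ms real-time constraint implies.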
international conference on acoustics, speech, and signal processing | 2013
Rajesh Narasimha; Karthik Raghuram; Jesse Villarreal; Joel Pacheco
In the recent past, a vast number of stereo- and augmented-reality-based applications have been developed for hand-held devices. In most of these applications, the depth map is a key ingredient for an acceptable user experience: accuracy and high density of the depth map are important, along with meeting real-time constraints on an embedded system. There is an inherent tradeoff between depth map quality and speed, and performance is invariably important for competing in today's high-definition video marketplace. In this paper we present a method that addresses depth map quality while maintaining performance at video frame rates. Specifically, we discuss a technique to enhance a low-quality depth map for 3D point cloud generation on an embedded platform. We provide performance metrics and estimates on a Texas Instruments (TI) OMAP embedded platform and show that with simple pre- and post-processing techniques one can achieve both quality and performance. A preliminary version of our point cloud application runs at about 15 fps, with the majority of the time spent on display- and rendering-related overheads. The core algorithms, including pre- and post-processing, run at a much higher frame rate of about 23-25 fps. We estimate that with adequate mapping of the algorithms to the various cores and accelerated kernels, the frame rate could reach real-time performance of 30 fps.
international conference on acoustics, speech, and signal processing | 2013
Rajesh Narasimha; Aziz Umit Batur
Brightness and contrast heavily influence image visual quality; therefore, modern digital camera image processing pipelines typically include a brightness and contrast enhancement (BCE) algorithm that improves visual quality by applying tone mapping to the image. Many BCE methods published in the literature are variations of histogram equalization (HE) and contrast stretching (CS). When tested on large image databases, there are always images on which these algorithms fail, because image content is very diverse and a fixed method cannot adapt to this large variation. Our paper addresses this problem. We have developed an example-based BCE algorithm that can adapt its behavior to different scene types by using training examples that are hand-tuned by human observers for optimal visual quality. Our algorithm models the optimal enhancement function from these training images using Principal Component Analysis (PCA). Then, given a new image, the algorithm predicts the best amount of enhancement by extrapolating from the closest training images. We have performed perceptual evaluations concluding that our algorithm effectively enhances brightness and contrast as judged by human observers.
Archive | 2011
Aziz Umit Batur; Rajesh Narasimha
Archive | 2011
Vikram V. Appia; Rajesh Narasimha; Aziz Umit Batur
Archive | 2013
Rajesh Narasimha; Karthik Raghuram; Jesse Villarreal; Roman Joel Pacheco
Archive | 2013
Rajesh Narasimha; Aziz Umit Batur
Archive | 2013
Rajesh Narasimha; Aziz Umit Batur
Archive | 2011
Shalini Gupta; Rajesh Narasimha; Aziz Umit Batur