Rengarajan Pelapur
University of Missouri
Publications
Featured research published by Rengarajan Pelapur.
IEEE Transactions on Image Processing | 2015
V. B. Surya Prasath; Dmitry Vorotnikov; Rengarajan Pelapur; Shani Jose; Kannappan Palaniappan
Edge-preserving regularization using partial differential equation (PDE)-based methods, although extensively studied and widely used for image restoration, still has limitations in adapting to local structures. We propose a spatially adaptive multiscale variable exponent-based anisotropic variational PDE method that overcomes current shortcomings, such as over-smoothing and staircasing artifacts, while still retaining and enhancing edge structures across scale. Our model automatically balances between Tikhonov and total variation (TV) regularization effects using scene content information, by incorporating a spatially varying edge coherence exponent map constructed from the eigenvalues of the filtered structure tensor. The multiscale exponent model we develop leads to a novel restoration method that preserves edges better and provides selective denoising without generating artifacts for both additive and multiplicative noise models. Mathematical analysis of our proposed method in variable exponent space establishes the existence of a minimizer and its properties. The discretization method we use satisfies the maximum-minimum principle, which guarantees that artificial edge regions are not created. Extensive experimental results on synthetic and natural images indicate that the proposed multiscale Tikhonov-TV (MTTV) and dynamical MTTV methods perform better than many contemporary denoising algorithms in terms of several metrics, including signal-to-noise ratio improvement and structure preservation. Promising extensions to handle multiplicative noise models and multichannel imagery are also discussed.
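The core idea of a spatially varying exponent can be illustrated with a small sketch. The exact coherence map and scale coupling used in the paper differ; the functional form below (exponent interpolating between 2, Tikhonov-like, in flat regions and 1, TV-like, on coherent edges) is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def exponent_map(image, sigma=1.5):
    """Illustrative variable-exponent map p(x) in [1, 2] from the
    structure tensor: p -> 1 (TV-like) at coherent edges,
    p -> 2 (Tikhonov-like) in flat regions."""
    ix = sobel(gaussian_filter(image, sigma), axis=1)
    iy = sobel(gaussian_filter(image, sigma), axis=0)
    # Smoothed (filtered) structure-tensor entries.
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    # Eigenvalues of the 2x2 tensor at every pixel.
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    # Edge coherence in [0, 1]; eps avoids division by zero.
    coh = ((lam1 - lam2) / (lam1 + lam2 + 1e-12)) ** 2
    return 2.0 - coh  # p = 2 in flat areas, p -> 1 on strong edges

img = np.zeros((32, 32))
img[:, 16:] = 1.0  # vertical step edge
p = exponent_map(img)
```

A map like this can then drive a p(x)-Laplacian-type diffusion, so smoothing behaves isotropically in homogeneous areas and TV-like along edges.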
2011 15th International Conference on Information Visualisation | 2011
Anoop Haridas; Rengarajan Pelapur; Joshua Fraser; Filiz Bunyak; Kannappan Palaniappan
The task of automated object tracking and performance assessment in low frame rate, persistent, wide spatial coverage motion imagery is an emerging research domain. The hundreds to tens of thousands of dense trajectories produced by such automatic algorithms, along with the subset of manually verified tracks across several coordinate systems, require new tools for effective human-computer interfaces and exploratory trajectory visualization. We describe an interactive visualization system that supports very large gigapixel-per-frame video; facilitates rapid, intuitive monitoring and analysis of tracking algorithm execution; provides visual methods for the intercomparison of very long manual tracks with multi-segmented automatic tracker outputs; and includes a flexible KOLAM Tracking Simulator (KOLAM-TS) middleware that generates visualization data by automating the object tracker performance testing and benchmarking process.
IEEE Transactions on Circuits and Systems for Video Technology | 2017
Rasha Gargees; Brittany Morago; Rengarajan Pelapur; Dmitrii Chemodanov; Prasad Calyam; Zakariya A. Oraibi; Ye Duan; Kannappan Palaniappan
In the event of natural or man-made disasters, providing rapid situational awareness through video/image data collected at salient incident scenes is often critical to the first responders. However, computer vision techniques that can process the media-rich and data-intensive content obtained from civilian smartphones or surveillance cameras require large amounts of computational resources or ancillary data sources that may not be available at the geographical location of the incident. In this paper, we propose an incident-supporting visual cloud computing solution by defining a collection, computation, and consumption (3C) architecture supporting fog computing at the network edge close to the collection/consumption sites, coupled with offloading to core cloud computation utilizing software-defined networking (SDN). We evaluate our 3C architecture and algorithms using realistic virtual environment test beds. We also describe our insights in preparing the cloud provisioning and thin-client desktop fogs to handle the elasticity and user mobility demands in a theater-scale application. In addition, we demonstrate the use of SDN for on-demand compute offload with congestion-avoiding traffic steering to enhance remote user quality of experience in a regional-scale application. Balancing fog computing at the network edge against core cloud computing for managing visual analytics reduces latency and congestion and increases throughput.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Yasmin M. Kassim; V. B. Surya Prasath; Rengarajan Pelapur; Olga V. Glinskii; Richard J. Maude; Vladislav V. Glinsky; Virginia H. Huxley; Kannappan Palaniappan
Automatic segmentation of microvascular structures is a critical step in quantitatively characterizing vessel remodeling and other physiological changes in the dura mater or other tissues. We developed a supervised random forest (RF) classifier for segmenting thin vessel structures using multiscale features based on Hessian, oriented second derivatives, Laplacian of Gaussian and line features. The latter multiscale line detector feature helps in detecting and connecting faint vessel structures that would otherwise be missed. Experimental results on epifluorescence imagery show that the RF approach produces foreground vessel regions that are almost 20 and 25 percent better than Niblack and Otsu threshold-based segmentations respectively.
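The feature-plus-classifier pipeline can be sketched as follows. This is a simplified stand-in, not the paper's method: the feature set here is only scale-normalized Hessian entries and eigenvalues (the paper additionally uses oriented second derivatives, Laplacian of Gaussian, and multiscale line-detector features), and the toy image and labels are fabricated for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def multiscale_hessian_features(image, scales=(1.0, 2.0, 4.0)):
    """Per-pixel feature vectors: scale-normalized Hessian entries and
    eigenvalues at several scales (a reduced stand-in for the fuller
    multiscale feature set described in the paper)."""
    feats = []
    for s in scales:
        gxx = s**2 * gaussian_filter(image, s, order=(0, 2))
        gyy = s**2 * gaussian_filter(image, s, order=(2, 0))
        gxy = s**2 * gaussian_filter(image, s, order=(1, 1))
        disc = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy**2)
        feats += [gxx, gyy, gxy, (gxx + gyy + disc) / 2, (gxx + gyy - disc) / 2]
    return np.stack(feats, axis=-1).reshape(image.size, -1)

# Toy example: a thin bright "vessel" in noise, with pixel-wise labels.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))
img[30:33, 8:56] += 1.0
labels = np.zeros((64, 64), dtype=int)
labels[30:33, 8:56] = 1

X = multiscale_hessian_features(img)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels.ravel())
pred = rf.predict(X).reshape(img.shape)
```

In practice the classifier would be trained on annotated images and evaluated on held-out data; here the same toy image is reused only to show the data flow.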
International Symposium on Biomedical Imaging | 2015
V. B. S. Prasath; Rengarajan Pelapur; Olga V. Glinskii; Vladislav V. Glinsky; Virginia H. Huxley; Kannappan Palaniappan
Fluorescence microscopy images are contaminated by noise, and improving image quality by filtering without blurring vascular structures is an important step in automatic image analysis. The application of interest here is to automatically and accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherence-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods, yielding better microvasculature segmentation.
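Edge-preserving diffusion of this family can be conveyed with a minimal scalar sketch. This is classic Perona-Malik-style edge-stopping diffusion, far simpler than the paper's tensor-driven, multiscale, confidence-weighted flow; parameters and the toy image are illustrative choices:

```python
import numpy as np

def edge_stopping_diffusion(image, n_iter=20, kappa=0.5, dt=0.2):
    """Minimal scalar edge-stopping (Perona-Malik-style) diffusion:
    conductivity shrinks where local differences are large, so noise is
    smoothed while strong boundaries are preserved."""
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic borders for brevity).
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        c = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u

# Toy example: a noisy step edge ("vessel boundary").
rng = np.random.default_rng(1)
noisy = np.zeros((40, 40))
noisy[:, 20:] = 1.0
noisy += rng.normal(0.0, 0.1, noisy.shape)
smooth = edge_stopping_diffusion(noisy)
```

The tensor variant in the paper replaces the scalar conductivity with a smoothing tensor aligned to local structure, which is what allows coherence enhancement along thin vessels.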
Proceedings of SPIE | 2013
Rengarajan Pelapur; Filiz Bunyak; Kannappan Palaniappan; Gunasekaran Seetharaman
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using a template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
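The profile variance peak idea can be sketched in a few lines. The implementation below uses explicit image rotation rather than a true Radon transform, and the binary bar mask is a fabricated stand-in for a refined vehicle segmentation:

```python
import numpy as np
from scipy.ndimage import rotate

def orientation_by_profile_variance(mask, angles=range(0, 180, 5)):
    """Estimate a long-axis orientation from a binary mask: rotate,
    project row-wise, and pick the angle whose projection profile has
    the largest variance (an illustrative stand-in for the paper's
    Radon-transform profile variance peak analysis)."""
    best_angle, best_var = 0, -1.0
    for a in angles:
        r = rotate(mask.astype(float), a, reshape=False, order=1)
        profile = r.sum(axis=1)  # projection onto the vertical axis
        v = profile.var()
        if v > best_var:
            best_angle, best_var = a, v
    return best_angle

# Toy vehicle mask: an elongated bar aligned with 0 degrees.
mask = np.zeros((64, 64))
mask[30:34, 10:54] = 1.0
est = orientation_by_profile_variance(mask)
```

The variance peaks when the projection direction is perpendicular to the object's long axis, because the mass then concentrates in few projection bins.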
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
Rengarajan Pelapur; V. B. Surya Prasath; Filiz Bunyak; Olga V. Glinskii; Vladislav V. Glinsky; Virginia H. Huxley; Kannappan Palaniappan
Automatic segmentation of three-dimensional microvascular structures is needed for quantifying morphological changes to blood vessels during development, disease, and treatment processes. Single-focus two-dimensional epifluorescent imagery leads to unsatisfactory segmentations due to multiple out-of-focus vessel regions that have blurred edge structures and lack detail. Additional segmentation challenges include varying contrast levels due to diffusivity of the lectin stain, leakage out of vessels, and fine morphological vessel structure. We propose an approach for vessel segmentation that combines multi-focus image fusion with robust adaptive filtering. The robust adaptive filtering scheme handles noise without destroying small structures, while multi-focus image fusion considerably improves segmentation quality by deblurring out-of-focus regions through incorporating 3D structure information from multiple focus steps. Experiments using epifluorescence images of mice dura mater show an average of 30.4% improvement compared to single-focus microvasculature segmentation.
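A minimal sketch of multi-focus fusion: per pixel, keep the focal slice that is locally sharpest. The Laplacian-energy focus measure and hard slice selection here are common textbook choices, not the paper's specific fusion scheme, and the two-slice focal stack is synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def fuse_focus_stack(stack, sigma=2.0):
    """Fuse a focal stack by choosing, per pixel, the slice with the
    highest local Laplacian energy (a standard focus measure; the
    paper's fusion and robust adaptive filtering are more involved)."""
    stack = np.asarray(stack, dtype=float)
    # Local sharpness: smoothed squared Laplacian response per slice.
    energy = np.stack([gaussian_filter(laplace(s) ** 2, sigma) for s in stack])
    best = energy.argmax(axis=0)  # index of the sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Synthetic stack: each slice is sharp in one half, blurred in the other.
rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, (64, 64))
blur = gaussian_filter(base, 3.0)
s0 = base.copy(); s0[:, 32:] = blur[:, 32:]
s1 = base.copy(); s1[:, :32] = blur[:, :32]
fused = fuse_focus_stack([s0, s1])
```

Because each pixel is taken from its in-focus slice, the fused result is closer to the underlying sharp image than either input slice alone.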
Proceedings of SPIE | 2014
Amadou Gning; W. T. L. Teacy; Rengarajan Pelapur; Hadi Aliakbarpour; Kannappan Palaniappan; Gunasekaran Seetharaman; Simon J. Julier
Through its ability to create situation awareness, multi-target tracking is an extremely important capability for almost any kind of surveillance and tracking system. Many approaches have been proposed to address its inherent challenges, but the majority make two assumptions: that the probability of detection and the clutter rate are constant. Neither is likely to be true in practice. For example, as the projected size of a target becomes smaller when it moves further from the sensor, the probability of detection will decline. When target detection is carried out using templates, the clutter rate will depend on how much the environment resembles the current target of interest. In this paper, we begin to investigate the impact of these effects. Using a simulation environment inspired by the challenges of Wide Area Surveillance (WAS), we develop a state-dependent formulation for the probability of detection and clutter. The impacts of these models are compared in a simulated urban environment populated by multiple vehicles and subject to occlusions. The results show that by accurately modelling the effects of occlusion and degradation in detection, significant improvements in performance can be obtained.
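The notion of a state-dependent probability of detection can be illustrated with a toy model. The inverse-square range dependence, the constants, and the function name are all assumptions for illustration; the paper's actual formulation (driven by projected target size and template similarity) is not reproduced here:

```python
import numpy as np

def detection_probability(target_range, p0=0.95, r0=100.0):
    """Toy range-dependent probability of detection: Pd equals a
    sensor-limited ceiling p0 out to a reference range r0, then falls
    off as the projected target size shrinks (inverse-square form
    chosen purely for illustration)."""
    r = np.asarray(target_range, dtype=float)
    return float(np.clip(p0 * (r0 / r) ** 2, 0.0, p0))

pd_near = detection_probability(100.0)  # at the reference range
pd_far = detection_probability(200.0)   # twice as far away
```

Feeding such a state-dependent Pd (and an analogous state-dependent clutter rate) into a multi-target filter is what lets the tracker discount missed detections at long range instead of treating them as track terminations.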
European Conference on Computer Vision | 2016
Michael Felsberg; Matej Kristan; Aleš Leonardis; Roman P. Pflugfelder; Gustav Häger; Amanda Berg; Abdelrahman Eldesokey; Jörgen Ahlberg; Luka Cehovin; Tomáš Vojír̃; Alan Lukežič; Gustavo Fernández; Alfredo Petrosino; Álvaro García-Martín; Andres Solis Montero; Anton Varfolomieiev; Aykut Erdem; Bohyung Han; Chang-Ming Chang; Dawei Du; Erkut Erdem; Fahad Shahbaz Khan; Fatih Porikli; Fei Zhao; Filiz Bunyak; Francesco Battistone; Gao Zhu; Hongdong Li; Honggang Qi; Horst Bischof
The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.
Proceedings of SPIE | 2014
V. B. Surya Prasath; Rengarajan Pelapur; Kannappan Palaniappan; Gunasekaran Seetharaman
We study an efficient texture segmentation model for multichannel videos using a local feature fitting based active contour scheme. We propose a flexible motion segmentation approach using fused features computed from texture and intensity components in a globally convex continuous optimization and fusion framework. A fast numerical implementation is demonstrated using an efficient dual minimization formulation. The novel contributions include the fusion of local feature density functions including luminance-chromaticity and local texture in a globally convex active contour variational method, combined with label propagation in scale space using noisy sparse object labels initialized from long term optical flow-based point trajectories. We provide a proof-of-concept demonstration of this novel multi-scale label propagation approach to video object segmentation using synthetic textured video objects embedded in a noisy background and starting with sparse label set trajectories for each object.
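The fit-then-regularize structure of such variational segmentation models can be conveyed with a minimal two-phase sketch. This is a crude piecewise-constant (Chan-Vese-flavoured) iteration, not the paper's local feature density fitting or its convex dual-minimization solver, and the synthetic feature image is fabricated:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_phase_segment(feature, n_iter=10, sigma=1.0):
    """Minimal two-phase piecewise-constant segmentation: alternate
    region-mean fitting with a smoothed (regularized) relabeling step.
    A rough sketch of the fit/regularize loop only; the paper instead
    fits local feature densities in a globally convex framework."""
    f = np.asarray(feature, dtype=float)
    u = (f > f.mean()).astype(float)  # initial labeling
    for _ in range(n_iter):
        inside, outside = u > 0.5, u <= 0.5
        c1 = f[inside].mean() if inside.any() else f.max()
        c0 = f[outside].mean() if outside.any() else f.min()
        # Data term: positive where f is closer to c1 than to c0;
        # Gaussian smoothing stands in for the regularity term.
        fit = (f - c0) ** 2 - (f - c1) ** 2
        u = (gaussian_filter(fit, sigma) > 0).astype(float)
    return u

# Synthetic two-region feature image with additive noise.
rng = np.random.default_rng(0)
gt = np.zeros((48, 48)); gt[:, 24:] = 1.0
seg = two_phase_segment(gt + rng.normal(0.0, 0.2, gt.shape))
```

In the full model, the thresholding step is replaced by minimizing a convex relaxation of the labeling energy, which is what guarantees a global minimizer independent of initialization.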