
Publication


Featured researches published by Palghat S. Ramesh.


Acta Mechanica | 1997

Nodal sensitivities as error estimates in computational mechanics

Glaucio H. Paulino; F. Shi; Subrata Mukherjee; Palghat S. Ramesh

Summary: This paper proposes the use of special sensitivities, called nodal sensitivities, as error indicators and estimators for numerical analysis in mechanics. Nodal sensitivities are defined as rates of change of response quantities with respect to nodal positions. Direct analytical differentiation is used to obtain the sensitivities, and the infinitesimal perturbations of the nodes are forced to lie along the elements. The idea proposed here can be used in conjunction with general-purpose computational methods such as the Finite Element Method (FEM), the Boundary Element Method (BEM), or the Finite Difference Method (FDM); however, the BEM is the method of choice in this paper. The performance of the error indicators is evaluated through two numerical examples in linear elasticity.
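The paper derives nodal sensitivities analytically in a BEM setting; as a rough illustration of the underlying idea only, the toy sketch below (an assumed 1D linear-FEM Poisson problem, not taken from the paper) estimates the rate of change of the discrete potential energy with respect to an interior node position by central finite differences. A near-zero rate indicates a locally stationary, i.e. well-placed, node.

```python
# Toy 1D FEM: -u'' = f on (0, 1), u(0) = u(1) = 0, two linear elements
# with nodes [0, s, 1]. Only the interior node (at x = s) is free.

def solve(s, f=lambda x: x):
    """Return (u1, energy): free nodal value and discrete potential energy."""
    h1, h2 = s, 1.0 - s
    k11 = 1.0 / h1 + 1.0 / h2              # assembled stiffness at the free node
    # consistent load by midpoint quadrature (hat function equals 1/2 at midpoints)
    f1 = 0.5 * h1 * f(h1 / 2.0) + 0.5 * h2 * f(s + h2 / 2.0)
    u1 = f1 / k11
    energy = 0.5 * k11 * u1 ** 2 - f1 * u1  # discrete potential energy
    return u1, energy

def nodal_sensitivity(s, eps=1e-6):
    """Central-difference rate of change of the energy w.r.t. node position s."""
    return (solve(s + eps)[1] - solve(s - eps)[1]) / (2.0 * eps)
```

For this self-adjoint problem the discrete energy is stationary at the best node placement, so `nodal_sensitivity` can serve as a simple error indicator; the paper obtains the analogous rates analytically rather than by perturbation.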


IEEE Transactions on Intelligent Transportation Systems | 2017

Segmentation- and Annotation-Free License Plate Recognition With Deep Localization and Failure Identification

Orhan Bulan; Vladimir Kozitsky; Palghat S. Ramesh; Matthew Shreve

Automated license plate recognition (ALPR) is essential in several roadway imaging applications. For ALPR systems deployed in the United States, variation across jurisdictions in character width and spacing, together with noise sources present in LP images (e.g., heavy shadows, non-uniform illumination, varying optical geometries, and poor contrast), challenges the recognition accuracy and scalability of these systems. Font and plate-layout variation across jurisdictions further complicates proper character segmentation and increases the manual annotation required to train classifiers for each state, which can result in excessive operational overhead and cost. In this paper, we propose a new ALPR workflow that includes novel methods for segmentation- and annotation-free ALPR, as well as improved plate localization and automated failure identification. The proposed workflow begins by localizing the LP region in the captured image with a two-stage approach that first extracts a set of candidate regions using a weak sparse network of winnows classifier and then filters them with a strong convolutional neural network (CNN) classifier in the second stage. Images that fail a primary confidence test for plate localization are further classified to identify the cause of failure, such as LP not present, LP too bright, LP too dark, or no vehicle found. In the localized plate region, we perform segmentation and optical character recognition (OCR) jointly, using a probabilistic inference method based on hidden Markov models (HMMs) in which the most likely code sequence is determined by applying the Viterbi algorithm. To reduce the manual annotation required for training OCR classifiers, we propose using either artificially generated synthetic LP images or character samples acquired by trained ALPR systems already operating at other sites. The performance gap due to differences between the training and target domain distributions is minimized using unsupervised domain adaptation. We evaluated the performance of our proposed methods on LP images captured in several US jurisdictions under realistic conditions.
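The joint segmentation/OCR step rests on Viterbi decoding over an HMM. The minimal sketch below (toy scores, not the paper's trained model) shows the dynamic program that recovers the most likely character sequence from per-window character log-likelihoods and transition log-probabilities.

```python
import math

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path for an HMM.

    log_emit : T x K per-window log-likelihoods for each character class
    log_trans: K x K log transition probabilities
    log_init : K initial log probabilities
    """
    T, K = len(log_emit), len(log_emit[0])
    score = [log_init[k] + log_emit[0][k] for k in range(K)]
    back = []
    for t in range(1, T):
        prev, score = score, []
        back.append([])
        for k in range(K):
            # best predecessor state for state k at window t
            best = max(range(K), key=lambda j: prev[j] + log_trans[j][k])
            back[-1].append(best)
            score.append(prev[best] + log_trans[best][k] + log_emit[t][k])
    # backtrack from the best final state
    path = [max(range(K), key=lambda k: score[k])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))
```

With toy emissions that favor characters 0, 1, and 2 in successive windows and uniform transitions, the decoder returns the path `[0, 1, 2]`.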


Computer Vision and Pattern Recognition | 2017

Deep Multimodal Representation Learning from Temporal Data

Xitong Yang; Palghat S. Ramesh; Radha Chitta; Sriganesh Madhvanath; Edgar A. Bernal; Jiebo Luo

In recent years, Deep Learning has been successfully applied to multimodal learning problems, with the aim of learning useful joint representations in data fusion applications. When the available modalities consist of time series data such as video, audio and sensor signals, it becomes imperative to consider their temporal structure during the fusion process. In this paper, we propose the Correlational Recurrent Neural Network (CorrRNN), a novel temporal fusion model for fusing multiple input modalities that are inherently temporal in nature. Key features of our proposed model include: (i) simultaneous learning of the joint representation and temporal dependencies between modalities, (ii) use of multiple loss terms in the objective function, including a maximum correlation loss term to enhance learning of cross-modal information, and (iii) the use of an attention model to dynamically adjust the contribution of different input modalities to the joint representation. We validate our model via experimentation on two different tasks: video- and sensor-based activity classification, and audio-visual speech recognition. We empirically analyze the contributions of different components of the proposed CorrRNN model, and demonstrate its robustness, effectiveness and state-of-the-art performance on multiple datasets.
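The maximum-correlation loss term can be illustrated compactly. The NumPy sketch below is a simplified stand-in, not the CorrRNN objective itself: it negates the mean per-dimension Pearson correlation between two modalities' batch embeddings, so minimizing it pushes the two representations to be maximally correlated.

```python
import numpy as np

def correlation_loss(X, Y, eps=1e-8):
    """Negative mean Pearson correlation across embedding dimensions.

    X, Y: (batch, dim) embeddings from two modalities.
    """
    Xc = X - X.mean(axis=0)                 # center each dimension
    Yc = Y - Y.mean(axis=0)
    num = (Xc * Yc).sum(axis=0)             # per-dimension covariance (unnormalized)
    den = np.sqrt((Xc ** 2).sum(axis=0) * (Yc ** 2).sum(axis=0)) + eps
    return -np.mean(num / den)
```

Identical embeddings give a loss of approximately -1 (the minimum), anti-correlated embeddings a loss of approximately +1; in a full model this term would be combined with reconstruction and task losses, as the abstract describes.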


IEEE Transactions on Multimedia | 2018

Deep Temporal Multimodal Fusion for Medical Procedure Monitoring Using Wearable Sensors

Edgar A. Bernal; Xitong Yang; Qun Li; Jayant Kumar; Sriganesh Madhvanath; Palghat S. Ramesh; Raja Bala

Process monitoring and verification have a wide range of uses in the medical and healthcare fields. Currently, such tasks are often carried out by a trained specialist, which makes them expensive, inefficient, and time-consuming. Recent advances in automated video- and multimodal-data-based action and activity recognition have made it possible to reduce the extent of manual intervention required to effectively carry out process supervision tasks. In this paper, we propose algorithms for automated egocentric human action and activity recognition from multimodal data, with a target application of monitoring and assisting a user perform a multistep medical procedure. We propose a supervised deep multimodal fusion framework that relies on concurrent processing of motion data acquired with wearable sensors and video data acquired with an egocentric or body-mounted camera. We demonstrate the effectiveness of the algorithm on a public multimodal dataset and conclude that automated process monitoring via the use of multiple heterogeneous sensors is a viable alternative to its manual counterpart. Furthermore, we demonstrate that the application of previously proposed adaptive sampling schemes to the video processing branch of the multimodal framework results in significant performance improvements.
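The framework fuses concurrently processed motion and video streams. As a schematic of feature-level fusion only (not the paper's deep architecture), the sketch below pools each modality's per-timestep features over time and concatenates them into a joint vector that a downstream activity classifier could consume; the stream lengths and feature dimensions are illustrative assumptions.

```python
import numpy as np

def fuse_modalities(video_feats, motion_feats):
    """Feature-level fusion of two temporal modalities.

    video_feats : (T_v, D_v) per-frame video features
    motion_feats: (T_m, D_m) per-sample wearable-sensor features
    The streams may run at different rates; temporal mean pooling aligns them
    before concatenation into a single joint representation.
    """
    pooled_video = video_feats.mean(axis=0)
    pooled_motion = motion_feats.mean(axis=0)
    return np.concatenate([pooled_video, pooled_motion])
```

A learned fusion model (as in the paper) would replace the fixed mean pooling with recurrent or attention-based temporal modeling, but the interface, i.e. two unsynchronized feature streams in, one joint vector out, is the same.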


International Journal for Numerical Methods in Engineering | 1996

Dynamic analysis of micro-electro-mechanical systems

F. Shi; Palghat S. Ramesh; Subrata Mukherjee


Communications in Numerical Methods in Engineering | 1995

On the application of 2D potential theory to electrostatic simulation

F. Shi; Palghat S. Ramesh; Subrata Mukherjee


Archive | 2006

Photoconductor life through active control of charger settings

Aaron Michael Burry; Christopher Auguste DiRubio; Michael F. Zona; Paul C. Julien; Eric S. Hamby; Palghat S. Ramesh; William C. Dean


Archive | 2005

Full-width array sensing of two-dimensional residual mass structure to enable mitigation of specific defects

Aaron Michael Burry; Christopher A. DiRubio; Gerald M. Fletcher; Eric S. Hamby; Martin Krucinski; Robert J. Mead; Bruce J. Parks; Peter Paul; Palghat S. Ramesh; Eliud Robles Flores; Fei Xiao


Archive | 2014

Transfix surface member coating

Anthony S. Condello; Chu-heng Liu; David J. Gervasi; Jeffrey J. Folkins; Santokh S. Badesha; Mandakini Kanungo; Palghat S. Ramesh; Paul J. McConville; Phillip J. Wantuck; Lifeng Chen


Archive | 2009

Least squares based exposure modulation for banding compensation

Palghat S. Ramesh; Peter Paul
