Publication


Featured research published by A. Ravishankar Rao.


NeuroImage | 2009

Prediction and interpretation of distributed neural activity with sparse models

Melissa K. Carroll; Guillermo A. Cecchi; Irina Rish; Rahul Garg; A. Ravishankar Rao

We explore to what extent the combination of predictive and interpretable modeling can provide new insights for functional brain imaging. For this, we apply a recently introduced regularized regression technique, the Elastic Net, to the analysis of the PBAIC 2007 competition data. Elastic Net regression controls via one parameter the number of voxels in the resulting model, and via another the degree to which correlated voxels are included. We find that this method produces highly predictive models of fMRI data that provide evidence for the distributed nature of neural function. We also use the flexibility of Elastic Net to demonstrate that model robustness can be improved without compromising predictability, in turn revealing the importance of localized clusters of activity. Our findings highlight the functional significance of patterns of distributed clusters of localized activity, and underscore the importance of models that are both predictive and interpretable.
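The two-parameter control described above maps directly onto scikit-learn's ElasticNet, where alpha sets the overall amount of shrinkage (fewer voxels as it grows) and l1_ratio trades L1 sparsity against L2 inclusion of correlated voxels. A minimal sketch on synthetic data (the dimensions, parameter values, and variable names are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_trs, n_voxels = 200, 500                  # time points x voxels (synthetic)
X = rng.standard_normal((n_trs, n_voxels))
# the response depends on a small cluster of voxels, mimicking localized activity
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n_trs)

# alpha: overall shrinkage, controlling how many voxels enter the model;
# l1_ratio: balance between L1 sparsity and L2 grouping of correlated voxels
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(len(selected), "voxels selected out of", n_voxels)
```

Sweeping l1_ratio toward 0 pulls correlated neighbors of a predictive voxel into the model, which is the mechanism the paper exploits to probe the robustness of distributed clusters.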


Machine Vision and Applications | 1997

Automatic defect classification for semiconductor manufacturing

Paul B. Chou; A. Ravishankar Rao; Martin C. Sturzenbecker; Frederick Y. Wu; Virginia H. Brecher

Visual defect inspection and classification are important parts of most manufacturing processes in the semiconductor and electronics industries. Defect classification provides relevant information to correct process problems, thereby enhancing the yield and quality of the product. This paper describes an automated defect classification (ADC) system that classifies defects on semiconductor chips at various manufacturing steps. The ADC system uses a golden template method for defect re-detection, and measures several features of the defect, such as size, shape, location and color. A rule-based system classifies the defects into pre-defined categories that are learnt from training samples. The system has been deployed in the IBM Burlington 16M DRAM manufacturing line for more than a year. The system has examined over 100,000 defects, and has met the design criteria of over 80% classification rate and 80% classification accuracy. Issues involving system design tradeoffs, implementation, performance, and deployment are closely examined.
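The golden-template re-detection step can be sketched as subtracting a defect-free reference image and thresholding the residue; the measured features then feed a rule-based classifier. The injected defect, threshold, and features below are illustrative, not the deployed ADC system's:

```python
import numpy as np

rng = np.random.default_rng(8)
template = rng.random((64, 64))                            # defect-free "golden" reference
test = template + 0.02 * rng.standard_normal((64, 64))     # scanned chip image with sensor noise
test[30:34, 40:43] += 0.5                                  # injected defect

# re-detection: threshold the absolute difference against the template
residue = np.abs(test - template)
defect_mask = residue > 0.2

# example features for the rule-based classifier: size and bounding box
ys, xs = np.nonzero(defect_mask)
size_px = int(defect_mask.sum())
bbox = (ys.min(), xs.min(), ys.max(), xs.max())
print(size_px, bbox)
```

In practice the template itself is built from neighboring dies or design data, and registration between test image and template is the hard part; this sketch assumes the two are already aligned.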


BMC Cell Biology | 2007

Identifying directed links in large scale functional networks: application to brain fMRI

Guillermo A. Cecchi; A. Ravishankar Rao; Maria Virginia Centeno; Marwan N. Baliki; A. Vania Apkarian; Dante R. Chialvo

Background: Biological experiments increasingly yield data representing large ensembles of interacting variables, making the application of advanced analytical tools a forbidding task. We present a method to extract networks of correlated activity, specifically from functional MRI data, such that: (a) network nodes represent voxels, and (b) the network links can be directed or undirected, representing temporal relationships between the nodes. The method provides a snapshot of the ongoing dynamics of the brain without sacrificing resolution, as the analysis is tractable even for very large numbers of voxels. Results: We find that, based on topological properties of the networks, the method provides enough information about the dynamics to discriminate between subtly different brain states. Moreover, the statistical regularities previously reported are qualitatively preserved, i.e. the resulting networks display scale-free and small-world topologies. Conclusion: Our method expands previous approaches to render large scale functional networks, and creates the basis for an extensive and, due to the presence of mixtures of directed and undirected links, richer motif analysis of functional relationships.
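One way to obtain a mix of directed and undirected links is to compare simultaneous with time-lagged correlations between voxel time series: when a lagged correlation dominates, the link is directed from the leading voxel to the lagging one. A toy sketch under that assumption (the lag-one rule and thresholds are illustrative, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 300, 6
ts = rng.standard_normal((T, N))
ts[1:, 3] += 0.9 * ts[:-1, 0]          # voxel 0 drives voxel 3 with a one-step lag

def corr(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

thr = 0.3
directed, undirected = [], []
for i in range(N):
    for j in range(i + 1, N):
        c0 = corr(ts[:, i], ts[:, j])        # simultaneous correlation
        c1 = corr(ts[:-1, i], ts[1:, j])     # i leads j by one step
        c1r = corr(ts[:-1, j], ts[1:, i])    # j leads i by one step
        if abs(c1) > thr and abs(c1) > abs(c0):
            directed.append((i, j))
        elif abs(c1r) > thr and abs(c1r) > abs(c0):
            directed.append((j, i))
        elif abs(c0) > thr:
            undirected.append((i, j))

print(directed)
```

The resulting edge lists define the graph whose topological properties (degree distribution, clustering, motifs) are then analyzed.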


Electronic Imaging | 1998

Segmentation and automatic descreening of scanned documents

Alejandro Jaimes; Frederick Cole Mintzer; A. Ravishankar Rao; Gerry Thompson

One of the major challenges in scanning and printing documents in a digital library is preserving the quality of the documents, and in particular of the images they contain. When photographs are offset-printed, a process of screening usually takes place: a continuous tone image is converted into a bi-level image by applying a screen to replace each color in the original image. When high-resolution scanning of screened images is performed, it is very common to observe in the digital version of the document the screen patterns used during the original printing. In addition, when printing the digital document, further artifacts tend to appear because printing requires halftoning. In order to automatically suppress these moiré patterns, it is necessary to detect the image areas of the document and remove the screen pattern present in those areas. In this paper, we present efficient and robust techniques to segment a grayscale document into halftone image areas, detect the presence and frequency of screen patterns in halftone areas, and suppress the detected screens. We present novel techniques to perform fast segmentation based on α-crossings, detection of screen frequencies using a fast accumulator function, and suppression of detected screens by low-pass filtering.
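In one dimension, screen detection and suppression amount to locating a dominant high-frequency spectral peak and low-pass filtering below it. A minimal sketch on a synthetic scanline (the frequencies, cutoff, and accumulator-free FFT peak search are illustrative simplifications of the paper's method):

```python
import numpy as np

# synthetic halftone scanline: slowly varying tone modulated by a screen pattern
n = 256
x = np.arange(n)
base = 0.5 + 0.3 * np.sin(2 * np.pi * x / 128.0)   # slowly varying image content
screen = 0.2 * np.sin(2 * np.pi * x * 40 / n)      # screen at 40 cycles per scanline
signal = base + screen

# detect the screen frequency as the dominant high-frequency spectral peak
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
spectrum[:10] = 0.0                                 # ignore low-frequency image content
f_screen = int(np.argmax(spectrum))
print(f_screen)  # → 40

# suppress the screen by low-pass filtering below the detected frequency
fft = np.fft.rfft(signal)
fft[f_screen - 5:] = 0.0
descreened = np.fft.irfft(fft, n)
```

The real system works in two dimensions and localizes the operation to segmented halftone regions, but the detect-then-low-pass structure is the same.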


IEEE Transactions on Neural Networks | 2008

Unsupervised Segmentation With Dynamical Units

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.
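The phase-similarity mechanism can be illustrated with phase units that share a natural frequency and synchronize through coupling, while an uncoupled unit drifts; segmentation then reads off which units end up phase-aligned. A toy sketch (the Kuramoto-style coupling below is a simplification of the paper's amplitude-phase dynamics):

```python
import numpy as np

# units 0 and 1 are mutually coupled with identical natural frequency;
# unit 2 is uncoupled and has a different frequency, so it drifts
steps, dt, K = 2000, 0.01, 2.0
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 3)
omega = np.array([1.0, 1.0, 1.3])

for _ in range(steps):
    coupling = np.array([np.sin(theta[1] - theta[0]),   # pull of unit 1 on unit 0
                         np.sin(theta[0] - theta[1]),   # pull of unit 0 on unit 1
                         0.0])                           # unit 2: no coupling
    theta += dt * (omega + K * coupling)

# phase difference of the coupled pair, wrapped to (-pi, pi]
phase_diff = np.angle(np.exp(1j * (theta[0] - theta[1])))
print(abs(phase_diff))
```

The coupled pair locks (phase difference near zero), which is the signature used for segmentation: input and output units belonging to the same object share a phase.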


NeuroImage | 2011

Full-brain auto-regressive modeling (FARM) using fMRI

Rahul Garg; Guillermo A. Cecchi; A. Ravishankar Rao

In order to fully uncover the information potentially available in the fMRI signal, we model it as a multivariate auto-regressive process. To infer the model, we do not apply any form of clustering or dimensionality reduction, and solve the problem of under-determinacy using sparse regression. We find that only a few small clusters (with average size of 3-4 voxels) are useful in predicting the activity of other voxels, and demonstrate remarkable consistency within a subject as well as across multiple subjects. Moreover, we find that: (a) the areas that can predict activity of other voxels are consistent with previous results related to networks activated by the specific somatosensory task, as well as networks related to the default mode activity; (b) there is a global dynamical state dominated by two prominent (although not unique) streams, originating in the posterior parietal cortex and the posterior cingulate/precuneus cortex; (c) these streams span default mode and task-specific networks, and interact in several regions, notably the insula; and (d) the posterior cingulate is a central node of the default mode network, in terms of its ability to determine the future evolution of the rest of the nodes.
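The core regression step, predicting each voxel's next sample from all voxels at the previous time point with sparsity resolving the under-determinacy, can be sketched with a Lasso fit (the use of scikit-learn's Lasso and the synthetic dimensions are assumptions, not the paper's solver or data):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
T, N = 400, 50
ts = rng.standard_normal((T, N))
ts[1:, 7] += 0.8 * ts[:-1, 2]      # voxel 2 predicts voxel 7 one step ahead

# auto-regressive design: all voxels at time t predict voxel 7 at time t+1
X, y = ts[:-1], ts[1:, 7]
model = Lasso(alpha=0.1).fit(X, y)
predictors = np.flatnonzero(model.coef_)
print(predictors)
```

Repeating this fit for every voxel yields the sparse full-brain model; the few voxels that appear as predictors across many fits correspond to the small predictive clusters the abstract describes.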


Frontiers in Neural Circuits | 2014

Attributed graph distance measure for automatic detection of attention deficit hyperactive disordered subjects

Soumyabrata Dey; A. Ravishankar Rao; Mubarak Shah

Attention Deficit Hyperactivity Disorder (ADHD) is receiving considerable attention for two reasons: first, it is one of the most commonly found childhood disorders, and second, the root cause of the problem is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed from clusters of highly active voxels, and the edges between any pair of nodes represent the correlation between their average fMRI time series. The activity level of the voxels is measured by the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph space to a low dimensional space based on their inter-graph distance measures. Finally, a Support Vector Machine (SVM) classifier is used on the low dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the dataset released for the ADHD-200 competition. Our method shows promise, achieving classification accuracies of 70.49% on the training set and 73.55% on the test set. Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
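The projection-and-classification stage can be sketched as: build a symmetric matrix of inter-graph distances, embed it with metric MDS, and train an SVM in the embedded space. The distance matrix below is simulated from separable point clouds (the attributed-graph distance itself is not reproduced):

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# hypothetical 20 subjects (10 control, 10 ADHD) living in an unknown graph space;
# the points below merely stand in for that space to generate distances
labels = np.array([0] * 10 + [1] * 10)
pts = rng.standard_normal((20, 5)) + labels[:, None] * 3.0
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # symmetric distance matrix

# MDS projects the precomputed distances into a low-dimensional Euclidean space
embed = MDS(n_components=2, dissimilarity="precomputed",
            random_state=0).fit_transform(D)

# SVM classification in the projected space
clf = SVC(kernel="linear").fit(embed, labels)
print(clf.score(embed, labels))
```

Only pairwise distances are needed as input, which is the point of the construction: any graph distance measure can be dropped in without defining explicit graph features.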


BMC Cell Biology | 2007

High performance computing environment for multidimensional image analysis

A. Ravishankar Rao; Guillermo A. Cecchi; Marcelo O. Magnasco

Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
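The property the decomposition exploits is that a neighborhood filter only needs a halo of ghost voxels as wide as the filter radius at each segment boundary; with that halo, per-segment results stitch together exactly. A single-process sketch of the idea (the Blue Gene nearest-neighbor communication is of course not reproduced):

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(5)
vol = rng.random((8, 32, 32))          # small stand-in for a 3D microscopy volume
r = 1                                  # filter radius -> required halo width

full = median_filter(vol, size=2 * r + 1)

# split along z into two segments with a one-voxel halo, filter each
# independently (as two nodes would), then discard the halos and stitch
mid = vol.shape[0] // 2
top = median_filter(vol[:mid + r], size=2 * r + 1)[:mid]
bot = median_filter(vol[mid - r:], size=2 * r + 1)[r:]
stitched = np.concatenate([top, bot], axis=0)

print(np.array_equal(stitched, full))  # → True
```

Because each segment only ever needs its neighbors' boundary voxels, communication maps naturally onto a torus network where each node talks only to its nearest neighbors.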


Proceedings of SPIE | 2012

ADHD classification using bag of words approach on network features

Berkan Solmaz; Soumyabrata Dey; A. Ravishankar Rao; Mubarak Shah

Attention Deficit Hyperactivity Disorder (ADHD) is receiving considerable attention, mainly because it is one of the most common brain disorders among children and little is known about its cause. In this study, we propose a novel approach for the automatic classification of ADHD subjects and control subjects using functional Magnetic Resonance Imaging (fMRI) data of resting state brains. For this purpose, we compute the correlation between every possible voxel pair within a subject over the time frame of the experimental protocol. A network of voxels is constructed by representing a high correlation value between any two voxels as an edge. A Bag-of-Words (BoW) approach is used to represent each subject as a histogram of network features, such as the number of degrees per voxel. The classification is done using a Support Vector Machine (SVM). We also investigate the use of raw intensity values in the time series for each voxel. Here, every subject is represented as a combined histogram of network and raw intensity features. Experimental results verified that the classification accuracy improves when the combined histogram is used. We tested our approach on a highly challenging dataset released by NITRC for the ADHD-200 competition and obtained promising results. The dataset not only has a large size but also includes subjects from different demographic and age groups. To the best of our knowledge, this is the first paper to propose the BoW approach for any functional brain disorder classification, and we believe that this approach will be useful in the analysis of many brain-related conditions.
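The BoW representation can be sketched as a normalized histogram of per-voxel degrees fed to a linear SVM; the random networks below stand in for the thresholded correlation networks (the densities, bin edges, and group sizes are illustrative, not the study's):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def degree_histogram(adjacency, bins):
    # bag-of-words style feature: histogram of per-voxel degrees
    degrees = adjacency.sum(axis=1)
    hist, _ = np.histogram(degrees, bins=bins)
    return hist / hist.sum()

def random_network(n, p):
    # symmetric random graph standing in for a thresholded correlation network
    a = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return a + a.T

bins = np.arange(0, 61, 5)
# hypothetical cohort: one group with denser networks than the other
X = np.array([degree_histogram(random_network(100, 0.4), bins) for _ in range(20)]
             + [degree_histogram(random_network(100, 0.2), bins) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

Concatenating a second histogram of raw intensity features to each row of X gives the combined representation the abstract reports as more accurate.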


Proceedings of SPIE | 1998

Automatic visible watermarking of images

A. Ravishankar Rao; Gordon W. Braudaway; Frederick Cole Mintzer

Visible image watermarking has become an important and widely used technique to identify ownership and protect copyrights to images. A visible image watermark immediately identifies the owner of an image, and if properly constructed, can deter subsequent unscrupulous use of the image. The insertion of a visible watermark should satisfy two conflicting conditions: the intensity of the watermark should be strong enough to be perceptible, yet it should be light enough to be unobtrusive and not mar the beauty of the original image. Typically such an adjustment is made manually, and human intervention is required to set the intensity of the watermark at the right level. This is fine for a few images, but is unsuitable for a large collection of images. Thus, it is desirable to have a technique to automatically adjust the intensity of the watermark based on some underlying property of each image. This allows a large number of images to be automatically watermarked, thus increasing the throughput of the watermarking stage. In this paper we show that the measurement of image texture can be successfully used to automate the adjustment of watermark intensity. A linear regression model is used to predict subjective assessments of correct watermark intensity based on image texture measurements.
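The automation step reduces to two pieces: a texture measurement and a linear regression mapping texture to the subjectively preferred watermark intensity. A sketch using mean gradient magnitude as the texture measure (an assumption for illustration; the paper's actual texture features and human-rated training data are not reproduced, so the intensities below are simulated):

```python
import numpy as np

rng = np.random.default_rng(7)

def texture_energy(img):
    # simple stand-in texture measure: mean gradient magnitude
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# hypothetical training set: images of varying texture, each paired with a
# manually chosen watermark intensity (simulated here with a known trend)
texture = np.array([texture_energy(rng.random((64, 64)) * s)
                    for s in np.linspace(0.2, 1.0, 9)])
chosen_intensity = 0.1 + 0.5 * texture + 0.01 * rng.standard_normal(9)

# fit the linear model mapping texture to watermark intensity
slope, intercept = np.polyfit(texture, chosen_intensity, 1)

def predict_intensity(t):
    return slope * t + intercept
```

Once fitted, the model replaces the human in the loop: each new image's texture score is measured and fed through predict_intensity to set the watermark strength automatically.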
