Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jayaraman J. Thiagarajan is active.

Publication


Featured research published by Jayaraman J. Thiagarajan.


Proceedings of SPIE | 2016

Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

Rushil Anirudh; Jayaraman J. Thiagarajan; Timo Bremer; Hyojin Kim

Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen considerable focus on computer-aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as benign or malignant. Effective detection of such nodules remains a challenge due to their variability in shape, size, and texture. In this paper, we propose to employ 3D convolutional neural networks (CNNs) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools to model the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive to obtain compared to 2D labels. Existing CAD methods rely on detailed nodule labels to train models, which is unrealistic and time consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is then used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that a network trained using these weak labels can produce reasonably low false positive rates with high sensitivity, even in the absence of accurate 3D labels.
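The weak-label setup described above (a point label plus a maximum expected size, grown into a 3D training region) can be illustrated with a minimal sketch. This is not the paper's segmentation method, only a simple intensity-based flood fill driven by the same two inputs; the toy volume, tolerance, and function name are invented for illustration:

```python
from collections import deque
import numpy as np

def grow_region(volume, seed, max_radius, tol=0.1):
    """Grow a 3D region from a point label by intensity-based flood fill.

    A simplified stand-in for the unsupervised segmentation step: voxels
    are added while they stay within `tol` of the seed intensity and
    within `max_radius` (the expert's largest expected size).
    """
    seed = tuple(seed)
    seed_val = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]):
                continue
            if mask[nz, ny, nx]:
                continue
            dist = ((nz - seed[0])**2 + (ny - seed[1])**2 + (nx - seed[2])**2) ** 0.5
            if dist > max_radius:
                continue
            if abs(volume[nz, ny, nx] - seed_val) <= tol:
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Toy volume: a bright cube (the "nodule") inside a dark background.
vol = np.zeros((20, 20, 20))
vol[8:12, 8:12, 8:12] = 1.0
mask = grow_region(vol, seed=(10, 10, 10), max_radius=6, tol=0.1)
print(mask.sum())  # 64 voxels: the 4x4x4 bright region
```

The resulting mask would then serve as the (weak) training target for the 3D CNN.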


International Parallel and Distributed Processing Symposium | 2015

Identifying the Culprits Behind Network Congestion

Abhinav Bhatele; Andrew R. Titus; Jayaraman J. Thiagarajan; Nikhil Jain; Todd Gamblin; Peer-Timo Bremer; Martin Schulz; Laxmikant V. Kalé

Network congestion is one of the primary causes of performance degradation, performance variability, and poor scaling in communication-heavy parallel applications. However, the causes and mechanisms of network congestion on modern interconnection networks are not well understood. We need new approaches to analyze, model, and predict this critical behaviour in order to improve the performance of large-scale parallel applications. This paper applies supervised learning algorithms, such as forests of extremely randomized trees and gradient boosted regression trees, to perform regression analysis on communication data and application execution time. Using data derived from multiple executions, we create models to predict the execution time of communication-heavy parallel applications. This analysis also identifies the features and associated hardware components that have the most impact on network congestion and, in turn, on execution time. The ideas presented in this paper have wide applicability: predicting the execution time on a different number of nodes, on different input datasets, or even for an unknown code; identifying the best configuration parameters for an application; and finding the root causes of network congestion on different architectures.
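The regression step can be sketched with a hand-rolled gradient-boosting loop over decision stumps. This is a toy stand-in for the gradient boosted regression trees the paper uses (in practice one would reach for a library implementation), with split counts serving as a crude proxy for the feature-importance analysis; the synthetic "communication" features are invented:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on residuals r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = r[X[:, j] <= t], r[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, left.mean(), right.mean())
    return best[1:]  # (feature, threshold, left value, right value)

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    """Gradient boosting with stumps; counts how often each feature is split on."""
    pred = np.full(len(y), y.mean())
    importance = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        j, t, lv, rv = fit_stump(X, y - pred)        # fit a stump to the residuals
        pred += lr * np.where(X[:, j] <= t, lv, rv)  # shrink and add its prediction
        importance[j] += 1
    return pred, importance / importance.sum()

# Toy data: feature 0 (e.g. a link-load statistic) drives run time, feature 1 is noise.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = 3.0 * X[:, 0] + 0.05 * rng.normal(size=200)
pred, imp = gradient_boost(X, y)
print(imp)  # feature 0 dominates the splits
```

The same importance idea, applied to real per-link counters, is what lets the analysis point at the hardware components behind congestion.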


Eurographics | 2015

Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections

Shusen Liu; Bei Wang; Jayaraman J. Thiagarajan; Peer-Timo Bremer; Valerio Pascucci

We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
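The dynamic-projection idea, smoothly animating between two 2D linear views, can be sketched as follows. The paper constructs its transitions more carefully; this simplified version just interpolates two orthonormal bases and re-orthonormalizes each frame, which already guarantees that every intermediate view is itself a valid linear projection (all names here are illustrative):

```python
import numpy as np

def orthonormalize(B):
    """Orthonormal basis for the column span of B, via QR."""
    q, _ = np.linalg.qr(B)
    return q

def dynamic_projection(A, B, X, n_frames=10):
    """Animate data X (n x d) between two 2D linear projections A and B (d x 2)."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        P = orthonormalize((1 - t) * A + t * B)  # interpolated d x 2 basis
        frames.append(X @ P)                     # n x 2 view for this frame
    return frames

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
A = orthonormalize(rng.normal(size=(5, 2)))
B = orthonormalize(rng.normal(size=(5, 2)))
frames = dynamic_projection(A, B, X, n_frames=5)
print(len(frames), frames[0].shape)  # 5 frames of 100 x 2 views
```

One caveat of this shortcut: if the two bases span nearly opposite directions the interpolated matrix can become ill-conditioned, which is one reason principled systems interpolate along the Grassmannian instead.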


IEEE Symposium on Large Data Analysis and Visualization | 2014

Multivariate volume visualization through dynamic projections

Shusen Liu; Bei Wang; Jayaraman J. Thiagarajan; Peer-Timo Bremer; Valerio Pascucci

We propose a multivariate volume visualization framework that tightly couples dynamic projections with a high-dimensional transfer function design for interactive volume visualization. We assume that the complex, high-dimensional data in the attribute space can be well-represented through a collection of low-dimensional linear subspaces, and embed the data points in a variety of 2D views created as projections onto these subspaces. Through dynamic projections, we present animated transitions between different views to help the user navigate and explore the attribute space for effective transfer function design. Our framework not only provides a more intuitive understanding of the attribute space but also allows the design of the transfer function under multiple dynamic views, which is more flexible than being restricted to a single static view of the data. For large volumetric datasets, we maintain interactivity during the transfer function design via intelligent sampling and scalable clustering. Using examples in combustion and climate simulations, we demonstrate how our framework can be used to visualize interesting structures in the volumetric space.


Synthesis Lectures on Image, Video, and Multimedia Processing | 2014

Image Understanding using Sparse Representations

Jayaraman J. Thiagarajan; Karthikeyan Natesan Ramamurthy; Pavan K. Turaga; Andreas Spanias

Image understanding has been playing an increasingly crucial role in several inverse problems and in computer vision. Sparse models form an important component in image understanding, since they emulate the activity of neural receptors in the primary visual cortex of the human brain. Sparse methods have been utilized in several learning problems because of their ability to provide parsimonious, interpretable, and efficient models. Exploiting the sparsity of natural signals has led to advances in several application areas including image compression, denoising, inpainting, compressed sensing, blind source separation, super-resolution, and classification. The primary goal of this book is to present the theory and algorithmic considerations in using sparse models for image understanding and computer vision applications. To this end, algorithms for obtaining sparse representations and their performance guarantees are discussed in the initial chapters. Furthermore, approaches for designing overcomplete, data-adapted dictionaries to model natural images are described. The development of the theory behind dictionary learning involves exploring its connection to unsupervised clustering and analyzing its generalization characteristics using principles from statistical learning theory. An exciting application area that has benefited extensively from the theory of sparse representations is compressed sensing of image and video data. Theory and algorithms pertinent to measurement design, recovery, and model-based compressed sensing are presented. The paradigm of sparse models, when suitably integrated with powerful machine learning frameworks, can lead to advances in computer vision applications such as object recognition, clustering, segmentation, and activity recognition. Also presented are frameworks that enhance the performance of sparse models in such applications by imposing constraints based on prior discriminatory information and the underlying geometric structure, and by kernelizing the sparse coding and dictionary learning methods. In addition to presenting theoretical fundamentals in sparse learning, this book provides a platform for interested readers to explore the vastly growing application domains of sparse representations.
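A minimal example of the sparse-coding algorithms the book's early chapters cover is Orthogonal Matching Pursuit, sketched here on a toy orthonormal dictionary (real dictionaries are typically overcomplete and learned from data, as the later chapters discuss; the setup below is invented for illustration):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse coding of y in dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Toy dictionary: a random orthonormal basis, so recovery is exact by construction.
rng = np.random.default_rng(2)
D, _ = np.linalg.qr(rng.normal(size=(20, 20)))
x_true = np.zeros(20)
x_true[[7, 13]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, k=2)
print(np.allclose(x_hat, x_true))  # True: exact recovery of the 2-sparse code
```

The performance guarantees discussed in the book characterize when this greedy selection provably recovers the true support for overcomplete dictionaries as well.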


International Conference on Image Processing | 2014

Image segmentation using consensus from hierarchical segmentation ensembles

Hyojin Kim; Jayaraman J. Thiagarajan; Peer-Timo Bremer

Unsupervised, automatic image segmentation without contextual knowledge or user intervention is a challenging problem. The key to robust segmentation is an appropriate selection of local features and metrics. However, a single aggregation of the local features using a greedy merging order often results in incorrect segmentation. This paper presents an unsupervised approach, which uses the consensus inferred from hierarchical segmentation ensembles, for partitioning images into foreground and background regions. By exploring an expanded set of possible aggregations of the local features, the proposed method generates meaningful segmentations that are often not revealed when only the optimal hierarchy is considered. A graph-cuts-based approach is employed to combine the consensus with a foreground-background model estimate, obtained using the ensemble, for effective segmentation. Experiments with a standard dataset show promising results when compared to several existing methods, including state-of-the-art weakly supervised techniques that use co-segmentation.
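The consensus idea at the heart of the method can be sketched in a few lines. The paper combines the consensus with a foreground/background model via graph cuts; this sketch keeps only the voting step (the function name and toy masks are invented):

```python
import numpy as np

def consensus_mask(masks, tau=0.5):
    """Majority-vote consensus over an ensemble of binary segmentations.

    A pixel is labeled foreground when at least a fraction `tau` of the
    ensemble members agree on it.
    """
    votes = np.mean(np.stack(masks, axis=0), axis=0)
    return votes >= tau

# Three toy 4x4 segmentations that mostly agree on a top-left block.
m1 = np.zeros((4, 4), dtype=bool); m1[:2, :2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[:2, :2] = True
m3 = np.zeros((4, 4), dtype=bool); m3[:2, :3] = True   # one noisy extra column
c = consensus_mask([m1, m2, m3], tau=0.5)
print(c.sum())  # 4: only pixels where at least half of the masks agree survive
```

In the full method the vote fractions also feed the graph-cut energy rather than being thresholded outright.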


International Conference on Acoustics, Speech, and Signal Processing | 2016

Consensus inference on mobile phone sensors for activity recognition

Huan Song; Jayaraman J. Thiagarajan; Karthikeyan Natesan Ramamurthy; Andreas Spanias; Pavan K. Turaga

The pervasive use of wearable sensors in activity and health monitoring presents a huge potential for building novel data analysis and prediction frameworks. In particular, approaches that can harness data from a diverse set of low-cost sensors for recognition are needed. Many of the existing approaches rely heavily on elaborate feature engineering to build robust recognition systems, and their performance is often limited by the inaccuracies in the data. In this paper, we develop a novel two-stage recognition system that enables a systematic fusion of complementary information from multiple sensors in a linear graph embedding setting, while employing an ensemble classifier phase that leverages the discriminative power of different feature extraction strategies. Experimental results on a challenging dataset show that our framework greatly improves the recognition performance when compared to using any single sensor.
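The two-stage flavor of the system, per-sensor models followed by a fusion stage, can be sketched with a deliberately simple stand-in: nearest-centroid base classifiers per sensor and a majority vote across them. This is not the paper's graph-embedding fusion; all names and the toy sensor data are invented:

```python
from collections import Counter
import numpy as np

def nearest_centroid_fit(X, y):
    """Per-sensor base classifier: class centroids in that sensor's feature space."""
    classes = sorted(set(y))
    return {c: X[np.array(y) == c].mean(axis=0) for c in classes}

def nearest_centroid_predict(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

def fused_predict(models, sample_per_sensor):
    """Stage two: majority vote across the per-sensor predictions."""
    votes = [nearest_centroid_predict(m, x) for m, x in zip(models, sample_per_sensor)]
    return Counter(votes).most_common(1)[0][0]

# Toy data: two "sensors" observing two activities in separate feature spaces.
rng = np.random.default_rng(3)
labels = [0] * 50 + [1] * 50
sensor1 = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
sensor2 = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(-3, 1, (50, 3))])
models = [nearest_centroid_fit(s, labels) for s in (sensor1, sensor2)]
test_sample = [np.array([3.1, 2.9, 3.0]), np.array([-3.0, -2.8, -3.2])]
print(fused_predict(models, test_sample))  # activity 1
```

The fusion stage is what lets complementary sensors outvote a noisy one, which is the effect the experiments quantify.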


IEEE Transactions on Visualization and Computer Graphics | 2018

Visual Exploration of Semantic Relationships in Neural Word Embeddings

Shusen Liu; Peer-Timo Bremer; Jayaraman J. Thiagarajan; Vivek Srikumar; Bei Wang; Yarden Livnat; Valerio Pascucci

Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.
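The linear relationships mentioned above (word analogies) are conventionally probed with the vector-offset test, shown here on a hand-built two-dimensional toy embedding rather than a real trained one:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(emb, a, b, c):
    """Answer "a is to b as c is to ?" via the standard vector-offset test."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

# Toy embedding: dimension 0 encodes "royalty", dimension 1 encodes "gender".
emb = {
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
    "apple": np.array([-1.0, 0.0]),
}
print(analogy(emb, "man", "woman", "king"))  # queen
```

The paper's analogy views visualize exactly such offsets and how consistently they hold across many word pairs in a real embedding space.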


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

Performance modeling under resource constraints using deep transfer learning

Aniruddha Marathe; Rushil Anirudh; Nikhil Jain; Abhinav Bhatele; Jayaraman J. Thiagarajan; Bhavya Kailkhura; Jae-Seung Yeom; Barry Rountree; Todd Gamblin

Tuning application parameters for optimal performance is a challenging combinatorial problem. Hence, techniques for modeling the functional relationships between various input features in the parameter space and application performance are important. We show that simple statistical inference techniques are inadequate to capture these relationships. Even with more complex ensembles of models, the minimum coverage of the parameter space required via experimental observations is still quite large. We propose a deep learning based approach that can combine information from exhaustive observations collected at a smaller scale with limited observations collected at a larger target scale. The proposed approach is able to accurately predict performance in the regimes of interest to performance analysts while outperforming many traditional techniques. In particular, our approach can identify the best performing configurations even when trained using as few as 1% of observations at the target scale.
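The transfer idea, reusing a model trained on exhaustive small-scale runs and adapting it with only a handful of target-scale runs, can be illustrated with a drastically simplified linear sketch. The paper uses deep networks; here the source model is an ordinary least-squares fit and the "transfer" is a 1D affine correction, with all data synthetic:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with a bias column."""
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w

# Source domain: exhaustive runs at a small scale.
rng = np.random.default_rng(4)
X_src = rng.uniform(0, 1, size=(500, 2))
y_src = 2.0 * X_src[:, 0] + 1.0 * X_src[:, 1]

# Target domain: same trend, shifted and scaled (e.g. more nodes); only 5 runs.
X_tgt = rng.uniform(0, 1, size=(5, 2))
y_tgt = 3.0 * (2.0 * X_tgt[:, 0] + 1.0 * X_tgt[:, 1]) + 4.0

# Transfer: treat the source model's output as a learned feature and fit
# only an affine correction on the few target samples.
w_src = fit_linear(X_src, y_src)
src_feature = predict(w_src, X_tgt)
a, b = np.polyfit(src_feature, y_tgt, 1)

X_new = np.array([[0.5, 0.5]])
print(a * predict(w_src, X_new) + b)  # ~ 3 * 1.5 + 4 = 8.5
```

The point of the sketch is the sample budget: the expensive target scale contributes only five observations, mirroring the paper's "as few as 1% of observations at the target scale" regime.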


International Conference on Computer Graphics and Interactive Techniques | 2016

Stair blue noise sampling

Bhavya Kailkhura; Jayaraman J. Thiagarajan; Peer-Timo Bremer; Pramod K. Varshney

A common solution to reducing visible aliasing artifacts in image reconstruction is to employ sampling patterns with a blue noise power spectrum. These sampling patterns can prevent discernible artifacts by replacing them with incoherent noise. Here, we propose a new family of blue noise distributions, Stair blue noise, which is mathematically tractable and enables parameter optimization to obtain the optimal sampling distribution. Furthermore, for a given sample budget, the proposed blue noise distribution achieves a significantly larger alias-free low-frequency region compared to existing approaches, without introducing visible artifacts in the mid-frequencies. We also develop a new sample synthesis algorithm that benefits from the use of an unbiased spatial statistics estimator and efficient optimization strategies.
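For context on blue noise point sets, the classic baseline is dart throwing with a minimum-distance constraint. Stair blue noise is defined through a parameterized power spectrum and an optimized synthesis algorithm; this sketch only illustrates the baseline idea that enforcing a minimum distance pushes energy out of the low frequencies (the function and parameters are invented for illustration):

```python
import numpy as np

def dart_throwing(n_target, r_min, n_attempts=20000, seed=5):
    """Accept uniformly random points in the unit square that keep a
    minimum pairwise distance r_min; stop at n_target points."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_attempts):
        p = rng.uniform(0, 1, size=2)
        if all(np.linalg.norm(p - q) >= r_min for q in points):
            points.append(p)
            if len(points) == n_target:
                break
    return np.array(points)

pts = dart_throwing(n_target=100, r_min=0.05)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
print(len(pts), d.min() >= 0.05)  # all accepted pairs respect the minimum distance
```

Unlike this rejection scheme, the stair construction lets one optimize the spectrum's parameters directly for a given sample budget.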

Collaboration


Dive into Jayaraman J. Thiagarajan's collaborations.

Top Co-Authors

Peer-Timo Bremer (Lawrence Livermore National Laboratory)
Hyojin Kim (Lawrence Livermore National Laboratory)
Abhinav Bhatele (Lawrence Livermore National Laboratory)
Rushil Anirudh (Arizona State University)
Todd Gamblin (Lawrence Livermore National Laboratory)