Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Amlan Chatterjee is active.

Publication


Featured research published by Amlan Chatterjee.


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2013

On Analyzing Large Graphs Using GPUs

Amlan Chatterjee; Sridhar Radhakrishnan; John K. Antonio

Studying properties of graphs is essential to various applications, and the recent growth of online social networks has spurred interest in analyzing their structures using Graphics Processing Units (GPUs). Utilizing the faster shared memory available on GPUs has provided tremendous speed-ups for solving many general-purpose problems. However, when the data required for processing is large and must be stored in global memory instead of shared memory, simultaneous memory accesses by executing threads become the bottleneck for achieving higher throughput. In this paper, for storing large graphs, we propose and evaluate techniques to efficiently utilize the different levels of the GPU memory hierarchy, with the focus on the larger global memory. Given a graph G = (V, E), we provide an algorithm to count the number of triangles in G while storing the adjacency information in global memory. Our computation techniques and the data structure for retrieving the adjacency information are derived from processing the breadth-first search tree of the input graph. Techniques to generate combinations of nodes for testing the properties of the graphs they induce are also discussed in detail. Our methods can be extended to solve other combinatorial counting problems on graphs, such as finding the number of connected subgraphs of size k, the number of cliques (resp. independent sets) of size k, and related problems for large data sets. In the context of the triangle counting algorithm, we analyze and utilize primitives such as memory access coalescing and avoiding partition camping, which offset the increased access latency of using the slower but larger global memory. Our experimental results for the GPU implementation show at least a 10x speedup for triangle counting over the CPU counterpart. A further 6-8% increase in performance is obtained by utilizing the above-mentioned primitives, compared to a naive implementation of the program on the GPU.
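
The counting step at the heart of this abstract can be illustrated with a small CPU-side sketch. The edge-list input format and variable names below are illustrative assumptions; the paper's BFS-tree storage scheme, coalescing, and CUDA kernels are not reproduced here.

```python
# Minimal CPU sketch of triangle counting over an edge list, for illustration
# only; it does not reproduce the paper's GPU data structure or CUDA kernels.
def count_triangles(edges):
    """Count triangles in an undirected graph given as (u, v) pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in edges:
        # Each triangle {u, v, w} is counted once per edge, so divide by 3.
        count += len(adj[u] & adj[v])
    return count // 3

# Example: a 4-cycle with one chord contains exactly 2 triangles.
print(count_triangles([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # -> 2
```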


International Parallel and Distributed Processing Symposium | 2012

Counting Problems on Graphs: GPU Storage and Parallel Computing Techniques

Amlan Chatterjee; Sridhar Radhakrishnan; John K. Antonio

The availability and utility of large numbers of Graphics Processing Units (GPUs) have enabled parallel computations using extensive multi-threading. Sequential access to global memory and contention at the size-limited shared memory have been the main impediments to fully exploiting potential performance in such massively multi-threaded architectures. We propose novel memory storage and retrieval techniques that enable parallel graph computations to overcome the above issues. More specifically, given a graph G = (V, E) and an integer k ≤ |V|, we provide both storage techniques and algorithms to count the number of: a) connected subgraphs of size k; b) cliques of size k; and c) independent sets of size k, all of which can be exponential in number. Our storage technique is based on creating a breadth-first search tree and storing it, along with the non-tree edges, in a novel way. The counting problems mentioned above have many uses, including the analysis of social networks.
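
As a reference point for what is being counted, a brute-force CPU sketch is shown below. It enumerates all size-k vertex subsets and checks each property directly; the function and variable names are assumptions for illustration, and the paper's storage and GPU parallelization are not reproduced.

```python
# Brute-force CPU reference for the three counting problems in the abstract:
# k-cliques, k-independent sets, and connected subgraphs of size k.
# Illustrative only; it does not use the paper's BFS-tree storage or GPUs.
from itertools import combinations

def count_size_k(vertices, edges, k):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def is_clique(s):
        return all(v in adj[u] for u, v in combinations(s, 2))

    def is_independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    def is_connected(s):
        s = set(s)
        seen, stack = set(), [next(iter(s))]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u] & s - seen)   # unvisited neighbors inside s
        return seen == s

    cliques = ind_sets = connected = 0
    for subset in combinations(vertices, k):
        cliques += is_clique(subset)
        ind_sets += is_independent(subset)
        connected += is_connected(subset)
    return cliques, ind_sets, connected

# Triangle on {0, 1, 2} plus isolated vertex 3, k = 3:
print(count_size_k([0, 1, 2, 3], [(0, 1), (1, 2), (0, 2)], 3))  # -> (1, 0, 1)
```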


2016 Second International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN) | 2016

Classification of wearable computing: A survey of electronic assistive technology and future design

Amlan Chatterjee; A. Aceves; R. Dungca; Hugo Flores; K. Giddens

In the past decade there have been significant advancements in computer technology that have reduced hardware form factors and enabled more energy-efficient computing. Using network protocols for near-field communication, such as Body Area Networks (BANs), smaller and lighter computing units with attached sensors have been transformed into wearable devices. These devices serve a plethora of purposes, including providing assistance to people with disabilities, gathering data, acting as sensors, and enhancing human capabilities. Depending on their usage and infrastructure, the devices can be classified into respective domains. In this paper we survey wearable computing devices and classify them based on the form of assistance delivered to the person wearing the device. We also introduce a framework for futuristic devices that can operate in harsh environments.


2016 Second International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN) | 2016

Exploiting topological structures for graph compression based on quadtrees

Amlan Chatterjee; M. Levan; C. Lanham; M. Zerrudo; Michael Nelson; Sridhar Radhakrishnan

In the age of big data, the need for efficient data processing and computation has been at the forefront of research endeavors. Extracting information from huge data sets requires novel storage techniques that aid computing devices in performing the necessary computation. With the pervasive use of heterogeneous systems and the advent of non-traditional computing units such as GPUs, which have limited memory, efficient data storage has become especially important for utilizing such devices. Graphs contain a plethora of information and can represent data from a broad range of domains; real-world big data sets are effectively represented by graphs. Efficient graph compression is therefore essential for performing computations on large data sets. Quadtrees, generally used to represent images, can serve as an effective compression technique. Using additional topological information that captures certain patterns in the data sets, further improvements can be made to the space complexity of storing graph data. In this paper we describe algorithms that take the properties of graphs into consideration and perform compression based on quadtrees. The introduced techniques achieve up to 70% compression compared to the adjacency matrix representation; compared to an existing quadtree-based compression method, the proposed algorithms achieve an additional 50% improvement. Techniques both to compress the data and to perform queries on the compressed data itself are introduced and discussed in detail.
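
The basic quadtree idea behind this line of work can be sketched as follows: recursively split the adjacency matrix into quadrants and collapse all-zero quadrants into single leaves, which is where the savings for sparse graphs come from. The representation and function names below are assumptions for illustration; the paper's topology-aware optimizations are not reproduced.

```python
# Minimal sketch of quadtree compression of an adjacency matrix, with a query
# that runs on the compressed form directly. Illustrative only.

def build_quadtree(matrix, r, c, size):
    """Return a nested-tuple quadtree over the size x size block at (r, c)."""
    if all(matrix[r + i][c + j] == 0 for i in range(size) for j in range(size)):
        return 0                      # all-zero block collapses to one leaf
    if size == 1:
        return 1                      # single present edge
    h = size // 2
    return (build_quadtree(matrix, r,     c,     h),
            build_quadtree(matrix, r,     c + h, h),
            build_quadtree(matrix, r + h, c,     h),
            build_quadtree(matrix, r + h, c + h, h))

def has_edge(tree, r, c, size):
    """Answer an edge query without decompressing the whole matrix."""
    if tree in (0, 1):
        return bool(tree)
    h = size // 2
    child = tree[(r >= h) * 2 + (c >= h)]
    return has_edge(child, r % h, c % h, h)

# 4x4 adjacency matrix (dimension assumed to be a power of two for simplicity).
A = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
t = build_quadtree(A, 0, 0, 4)
print(t)                     # ((0, 1, 1, 0), 0, 0, 0)
print(has_edge(t, 0, 1, 4))  # True
print(has_edge(t, 2, 3, 4))  # False
```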


Network and Parallel Computing | 2014

Performance Prediction Model and Analysis for Compute-Intensive Tasks on GPUs

Khondker S. Hasan; Amlan Chatterjee; Sridhar Radhakrishnan; John K. Antonio

Using Graphics Processing Units (GPUs) to solve general-purpose problems has received significant attention in both academia and industry. Harnessing the power of these devices, however, requires knowledge of the underlying architecture and the programming model. In this paper, we develop analytical models to predict the performance of GPUs for computationally intensive tasks. Our models are based on varying the relevant parameters, including the total number of threads, the number of blocks, and the number of streaming multiprocessors, and predicting the performance of a program for a specified instance of these parameters. The approach can be used in the context of heterogeneous environments where distinct types of GPU devices with different hardware configurations are employed.
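
To give a flavor of what such an analytical model looks like, here is a deliberately simple sketch that estimates kernel time by counting "waves" of blocks across the streaming multiprocessors. The formula, parameters, and example numbers are assumptions for illustration only and are not the model derived in the paper.

```python
# Illustrative analytical model of the kind the abstract describes: predicted
# runtime as a function of thread/block counts and the number of streaming
# multiprocessors (SMs). Formula and parameters are assumed, not the paper's.
import math

def predict_runtime(total_threads, threads_per_block, num_sms,
                    blocks_per_sm, time_per_wave):
    """Estimate kernel time by counting waves of blocks across the SMs."""
    num_blocks = math.ceil(total_threads / threads_per_block)
    concurrent_blocks = num_sms * blocks_per_sm
    waves = math.ceil(num_blocks / concurrent_blocks)
    return waves * time_per_wave

# Hypothetical device: 16 SMs, 8 resident blocks per SM, 2 ms per wave.
print(predict_runtime(total_threads=1_000_000, threads_per_block=256,
                      num_sms=16, blocks_per_sm=8, time_per_wave=2.0))
```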


International Conference on Big Data | 2014

Connecting the dots: Triangle completion and related problems on large data sets using GPUs

Amlan Chatterjee; Sridhar Radhakrishnan; Chandra N. Sekharan

Studying the properties of Online Social Networks (OSNs) and other real-world graphs has gained importance due to the large amount of information available from them. These large graphs contain data that can be analyzed and effectively used in advertising, security, and improving the overall experience of the users of these networks. However, analyzing these graphs for specific properties requires a combinatorially explosive number of computations. Compute Unified Device Architecture (CUDA) is a programming model available from Nvidia for solving general-purpose problems using the massively parallel and highly multi-threaded Graphics Processing Units (GPUs); GPUs are therefore an appropriate platform for these types of problems. In addition, due to the properties of real-world data, the graphs being considered are sparse and have irregular data dependencies. Hence, using efficient techniques to store the graph data for initial preprocessing and final computation, taking advantage of heterogeneous CPU-GPU systems, can address these issues. In this paper, we are interested in studying different properties of these real-world entities that translate into the following graph problems: a) identifying a missing edge which, when added, would result in the maximum increase in the number of triangles; b) identifying an existing edge whose removal would result in the maximum decrease in the number of triangles; c) identifying an existing edge whose removal would increase the number of connected components in the graph. We develop and implement algorithms to solve the above problems using both the CPU and the GPU. Specifically, given a graph G = (V, E), we provide algorithms for the following: a) find (v_i, v_j) ∉ E such that Δ_f - Δ_c is maximized, where Δ_f and Δ_c are the number of triangles in G_m = (V, E ∪ {(v_i, v_j)}) and G, respectively; b) find (v_i, v_j) ∈ E such that Δ_c - Δ_f is maximized, where Δ_f and Δ_c are the number of triangles in G_m = (V, E \ {(v_i, v_j)}) and G = (V, E), respectively; c) find (v_i, v_j) ∈ E such that Φ_f > Φ_c, where Φ_f and Φ_c are the number of connected components in G_m = (V, E \ {(v_i, v_j)}) and G = (V, E), respectively. We implement the algorithms using a GPU and achieve a 10x speedup compared to a sequential implementation. Thereafter, we design a heuristic for finding a missing edge whose addition would result in the maximum increase in the number of triangles. The heuristic is implemented and the results are reported and compared to those of the regular algorithm on the GPU.
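
Problem (a) can be illustrated with a small CPU sketch: adding a missing edge (u, v) creates exactly |N(u) ∩ N(v)| new triangles, so a brute-force scan over non-edges suffices at this scale. The function and variable names are illustrative assumptions; the paper's GPU algorithms and heuristic are not reproduced.

```python
# CPU sketch of problem (a): among all missing edges, find one whose
# insertion creates the most new triangles. Illustrative only.
from itertools import combinations

def best_missing_edge(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best, best_gain = None, -1
    for u, v in combinations(vertices, 2):
        if v in adj[u]:
            continue                        # already an edge
        gain = len(adj[u] & adj[v])         # triangles the new edge would close
        if gain > best_gain:
            best, best_gain = (u, v), gain
    return best, best_gain

# 4-cycle 0-1-3-2-0: each missing chord would close two triangles.
print(best_missing_edge([0, 1, 2, 3], [(0, 1), (0, 2), (1, 3), (2, 3)]))
# -> ((0, 3), 2)
```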


International Conference on Big Data | 2015

On compressing massive streaming graphs with Quadtrees

Michael Nelson; Sridhar Radhakrishnan; Amlan Chatterjee; Chandra N. Sekharan


International Journal of Networking and Computing | 2013

Data Structures and Algorithms for Counting Problems on Graphs using GPU

Amlan Chatterjee; Sridhar Radhakrishnan; John K. Antonio


Archive | 2017

Queryable Compression for Massively Streaming Social Networks

Chandra N. Sekharan; Sridhar Radhakrishnan; Ben Nelson; Amlan Chatterjee


International Conference on Big Data | 2017

Queryable compression on streaming social networks

Michael Nelson; Sridhar Radhakrishnan; Amlan Chatterjee; Chandra N. Sekharan

Collaboration


Dive into Amlan Chatterjee's collaborations.

Top Co-Authors

Hugo Flores
California State University

Khondker S. Hasan
University of Houston–Clear Lake

A. Aceves
California State University

Bin Tang
California State University

C. Lanham
California State University