Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sanjeev Mohindra is active.

Publications


Featured research published by Sanjeev Mohindra.


IEEE High Performance Extreme Computing Conference | 2017

Static graph challenge: Subgraph isomorphism

Siddharth Samsi; Vijay Gadepally; Michael B. Hurley; Michael Jones; Edward K. Kao; Sanjeev Mohindra; Paul Monticciolo; Albert Reuther; Steven Smith; William S. Song; Diane Staheli; Jeremy Kepner

The rise of graph analytic systems has created a need for ways to measure and compare the capabilities of these systems. Graph analytics present unique scalability difficulties. The machine learning, high performance computing, and visual analytics communities have wrestled with these difficulties for decades and developed methodologies for creating challenges to move these communities forward. The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems. The Subgraph Isomorphism Graph Challenge is a holistic specification with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. Subgraph isomorphism is amenable to both vertex-centric implementations and array-based implementations (e.g., using the Graph-BLAS.org standard). The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context for each kernel that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed graph challenge is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present day and future systems. Serial implementations in C++, Python, Python with Pandas, Matlab, Octave, and Julia have been implemented and their single threaded performance have been measured. Specifications, data, and software are publicly available at GraphChallenge.org.
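The array-based formulation mentioned above is easy to illustrate on the challenge's triangle-counting kernel: for a symmetric 0/1 adjacency matrix A, every triangle contributes six entries to the elementwise product of A·A with A. The following NumPy sketch is illustrative only, not the reference code from GraphChallenge.org:

```python
import numpy as np

def count_triangles(adj):
    """Array-based triangle count for an undirected graph:
    triangles = sum((A @ A) * A) / 6 for a symmetric 0/1 adjacency
    matrix, since each triangle is counted once per edge and direction."""
    a = np.asarray(adj)
    return int(((a @ a) * a).sum() // 6)

# 4 vertices, edges 0-1, 0-2, 1-2, 1-3, 2-3: triangles {0,1,2} and {1,2,3}
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
print(count_triangles(A))  # → 2
```

The same formula maps directly onto GraphBLAS-style sparse matrix primitives, which is what makes the kernel amenable to array-based implementations.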


IEEE High Performance Extreme Computing Conference | 2017

Streaming graph challenge: Stochastic block partition

Edward K. Kao; Vijay Gadepally; Michael B. Hurley; Michael Jones; Jeremy Kepner; Sanjeev Mohindra; Paul Monticciolo; Albert Reuther; Siddharth Samsi; William S. Song; Diane Staheli; Steven Smith

An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled for large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of the real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets and source code for the algorithm as well as metrics, with detailed documentation are available at GraphChallenge.org.
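One of the partition-correctness metrics the abstract refers to can be illustrated with pairwise co-clustering precision and recall between a ground-truth partition and an inferred one. This is a minimal sketch under that assumption; the challenge's official metrics and reference code are at GraphChallenge.org:

```python
from itertools import combinations

def pairwise_precision_recall(truth, pred):
    """Pairwise co-clustering precision/recall between two partitions,
    given as lists mapping node index -> block label.
    precision: of node pairs the inferred partition groups together,
               the fraction the truth also groups together.
    recall:    of node pairs the truth groups together, the fraction
               the inferred partition also groups together."""
    same_truth, same_pred = set(), set()
    for i, j in combinations(range(len(truth)), 2):
        if truth[i] == truth[j]:
            same_truth.add((i, j))
        if pred[i] == pred[j]:
            same_pred.add((i, j))
    tp = len(same_truth & same_pred)
    precision = tp / len(same_pred) if same_pred else 1.0
    recall = tp / len(same_truth) if same_truth else 1.0
    return precision, recall

# truth groups {0,1} and {2,3}; prediction over-merges node 2
p, r = pairwise_precision_recall([0, 0, 1, 1], [0, 0, 0, 1])
```

Here the prediction proposes three co-clustered pairs but only one is correct (precision 1/3), and it recovers one of the two true pairs (recall 1/2).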


Parallel Architectures and Bioinspired Algorithms | 2012

A Knowledge-Based Operator for a Genetic Algorithm which Optimizes the Distribution of Sparse Matrix Data

Una-May O’Reilly; Nadya T. Bliss; Sanjeev Mohindra; Julie Mullen; Eric Robinson

We present the Hogs and Slackers genetic algorithm (GA), which addresses the problem of improving the parallelization efficiency of sparse matrix computations by optimally distributing blocks of matrix data. The performance of a distribution is sensitive to the non-zero patterns in the data, the algorithm, and the hardware architecture. In a candidate distribution, the Hogs and Slackers GA identifies processors with many operations (hogs) and processors with fewer operations (slackers). Its intelligent operation-balancing mutation operator then swaps data blocks between hogs and slackers to explore a new data distribution. We show that the Hogs and Slackers GA performs better than a baseline GA. We demonstrate the Hogs and Slackers GA's optimization capability with an architecture study of varied network and memory bandwidth and latency.


Parallel Computing | 2010

Hogs and slackers: Using operations balance in a genetic algorithm to optimize sparse algebra computation on distributed architectures

Una-May O'Reilly; Eric Robinson; Sanjeev Mohindra; Julie Mullen; Nadya T. Bliss

We present a framework for optimizing the distributed performance of sparse matrix computations. These computations are optimally parallelized by distributing their operations across processors in a subtly uneven balance. Because the optimal balance point depends on the non-zero patterns in the data, the algorithm, and the underlying hardware architecture, it is difficult to determine. The Hogs and Slackers genetic algorithm (GA) identifies processors with many operations (hogs) and processors with few operations (slackers). Its intelligent operation-balancing mutation operator swaps data blocks between hogs and slackers to explore new balance points. We show that this operator is integral to the performance of the genetic algorithm and use the framework to conduct an architecture study that varies network specifications. The Hogs and Slackers GA is itself a parallel algorithm with near-linear speedup on a large computing cluster.
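The operation-balancing mutation operator described above can be sketched as follows. The data structures and names here are illustrative assumptions, not the authors' implementation:

```python
import random

def hogs_slackers_mutation(assignment, op_counts, rng=random):
    """Operation-balancing mutation in the spirit of Hogs and Slackers:
    swap one data block from the busiest processor (hog) with one from
    the least busy processor (slacker) to explore a new balance point.
    assignment: dict mapping block id -> processor id
    op_counts:  dict mapping block id -> operations that block incurs
    """
    # total operations per processor under the current assignment
    load = {}
    for blk, proc in assignment.items():
        load[proc] = load.get(proc, 0) + op_counts[blk]
    hog = max(load, key=load.get)
    slacker = min(load, key=load.get)
    if hog == slacker:
        return assignment  # already balanced across one processor
    child = dict(assignment)
    hog_blk = rng.choice([b for b, p in assignment.items() if p == hog])
    slk_blk = rng.choice([b for b, p in assignment.items() if p == slacker])
    child[hog_blk], child[slk_blk] = slacker, hog
    return child

rng = random.Random(0)
assignment = {0: "A", 1: "A", 2: "B"}   # two heavy blocks on A, one light on B
op_counts = {0: 10, 1: 10, 2: 1}
child = hogs_slackers_mutation(assignment, op_counts, rng)
```

In a full GA this operator would be applied alongside ordinary crossover and mutation, with candidate distributions scored by a performance model of the target architecture.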


DoD HPCMP Users Group Conference | 2008

PVTOL: Providing Productivity, Performance and Portability to DoD Signal Processing Applications on Multicore Processors

Hahn Kim; Edward Rutledge; Sharon Sacco; Sanjeev Mohindra; Matthew Marzilli; Jeremy Kepner; Ryan Haney; Jim Daly; Nadya T. Bliss


DoD HPCMP Users Group Conference | 2008

Task and Conduit Framework for Multi-core Systems

Sanjeev Mohindra; James Daly; Ryan Haney; Glenn Schrader


International Parallel and Distributed Processing Symposium | 2013

P-sync: A Photonically Enabled Architecture for Efficient Non-local Data Access

David Whelihan; Jeffrey J. Hughes; Scott M. Sawyer; Eric Robinson; Michael M. Wolf; Sanjeev Mohindra; Julie Mullen; Anna Klein; Michelle S. Beard; Nadya T. Bliss; Johnnie Chan; Robert Hendry; Keren Bergman; Luca P. Carloni


DoD HPCMP Users Group Conference | 2008

Performance Modeling and Mapping of Sparse Computations

Nadya T. Bliss; Sanjeev Mohindra; Una-May O'Reilly


arXiv: Distributed, Parallel, and Cluster Computing | 2018

GraphChallenge.org: Raising the Bar on Graph Analytic Performance.

Siddharth Samsi; Vijay Gadepally; Michael B. Hurley; Michael Jones; Edward K. Kao; Sanjeev Mohindra; Paul Monticciolo; Albert Reuther; Steven Smith; William S. Song; Diane Staheli; Jeremy Kepner


Archive | 2016

MIT CSAIL and Lincoln Laboratory Task Force Report

Robert Bond; Kenneth L. Gregson; Srini Devadas; Hamed Okhravi; Michael T Boulet; Michael Vai; Julie Shah; Robert K. Cunningham; Daniela Rus; Arvind; Jeremy Kepner; Regina Barzilay; Howard E. Shrobe; David Whelihan; Sanjeev Mohindra; Beijia Zhang

Collaboration


Dive into Sanjeev Mohindra's collaborations.

Top Co-Authors

Nadya T. Bliss, Massachusetts Institute of Technology
Jeremy Kepner, Massachusetts Institute of Technology
Eric Robinson, Massachusetts Institute of Technology
Albert Reuther, Massachusetts Institute of Technology
Diane Staheli, Massachusetts Institute of Technology
Edward K. Kao, Massachusetts Institute of Technology
Julie Mullen, Massachusetts Institute of Technology
Michael B. Hurley, Massachusetts Institute of Technology
Michael Jones, Massachusetts Institute of Technology
Paul Monticciolo, Massachusetts Institute of Technology