Iman Pouya
Royal Institute of Technology
Publications
Featured research published by Iman Pouya.
Journal of Chemical Theory and Computation | 2015
Sander Pronk; Iman Pouya; Magnus Lundborg; Grant M. Rotskoff; Björn Wesén; Peter M. Kasson; Erik Lindahl
Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
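The abstract's central point is that once an algorithm is written as a dataflow program with explicit dependencies, execution becomes maximally parallel automatically. The following is a minimal illustrative sketch of that idea, not the Copernicus API; all names (`run_dataflow`, the toy tasks) are hypothetical:

```python
# Hypothetical sketch (not the Copernicus API): a tiny dataflow executor.
# Tasks declare their inputs explicitly; at each step, every task whose
# inputs are ready is dispatched at once, so a conceptual description of
# the algorithm yields maximally parallel execution.
from concurrent.futures import ThreadPoolExecutor


def run_dataflow(tasks, deps):
    """tasks: name -> callable(list of dependency results); deps: name -> list of names."""
    results, remaining = {}, dict(deps)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Tasks whose dependencies are all satisfied are mutually
            # independent, so the whole ready set runs concurrently.
            ready = [t for t, d in remaining.items()
                     if all(x in results for x in d)]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            futures = {t: pool.submit(tasks[t],
                                      [results[x] for x in remaining[t]])
                       for t in ready}
            for t, f in futures.items():
                results[t] = f.result()
                del remaining[t]
    return results


# Three independent "simulations" feed one combining analysis step,
# mirroring how ensemble methods merge many runs into one observation:
tasks = {
    "sim_a": lambda _: 1.0,
    "sim_b": lambda _: 2.0,
    "sim_c": lambda _: 3.0,
    "combine": lambda xs: sum(xs) / len(xs),
}
deps = {"sim_a": [], "sim_b": [], "sim_c": [],
        "combine": ["sim_a", "sim_b", "sim_c"]}
print(run_dataflow(tasks, deps)["combine"])  # 2.0
```

Because the three simulation tasks share no dependencies, the executor launches them in the same round; the combine step waits only on the edges actually declared in the graph.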
Future Generation Computer Systems | 2017
Iman Pouya; Sander Pronk; Magnus Lundborg; Erik Lindahl
Compute-intensive applications have gradually changed focus from massively parallel supercomputers to capacity as a resource obtained on-demand. This is particularly true for the large-scale adoption of cloud computing and MapReduce in industry, while it has been difficult for traditional high-performance computing (HPC) usage in scientific and engineering computing to exploit this type of resource. However, with the strong trend of increasing parallelism rather than faster processors, a growing number of applications target parallelism already at the algorithm level with loosely coupled approaches based on sampling and ensembles. While these cannot trivially be formulated as MapReduce, they are highly amenable to throughput computing. There are many general and powerful frameworks, but for sampling-based algorithms in scientific computing in particular, there are clear advantages to having a platform and scheduler that are highly aware of the underlying physical problem. Here, we present how these challenges are addressed with combinations of dataflow programming and peer-to-peer techniques in the Copernicus platform. This allows automation of sampling-focused workflows, task generation, dependency tracking, and, not least, distributing these to a diverse set of compute resources ranging from supercomputers to clouds and distributed computing (across firewalls and fragile networks). Workflows are defined from modules using existing programs, which makes them reusable without programming requirements. The system achieves resiliency by handling node failures transparently with minimal loss of computing time due to checkpointing, and a single server can manage hundreds of thousands of cores, e.g. for computational chemistry applications.
Highlights:
- Hybrid dataflow and peer-to-peer computing to fully automate ensemble sampling.
- The platform automatically distributes workloads and manages them resiliently.
- Problems are defined as workflows by reusing existing software and scripts.
- Portability in networks where parts are behind firewalls.
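Both abstracts mention adaptive sampling: after each batch of simulations, undersampled regions are detected and new runs are started there without user intervention. The loop below is a toy illustration of that idea under stated assumptions (a random walk over discrete states stands in for a real simulation; `simulate` and `adaptive_sampling` are invented names, not Copernicus code):

```python
# Hypothetical sketch of adaptive sampling: run a batch of independent
# "simulations", count how often each state was visited, then restart the
# next batch from the least-visited states. The random walk is a stand-in
# for a real molecular dynamics run.
import random
from collections import Counter


def simulate(start_state, n_steps=100, n_states=5, seed=0):
    """Toy 'simulation': a bounded random walk over discrete states."""
    rng = random.Random(seed)
    state, visited = start_state, []
    for _ in range(n_steps):
        state = max(0, min(n_states - 1, state + rng.choice((-1, 1))))
        visited.append(state)
    return visited


def adaptive_sampling(rounds=4, sims_per_round=8, n_states=5):
    counts = Counter()
    starts = [0] * sims_per_round              # first round: all from state 0
    for r in range(rounds):
        for i, s in enumerate(starts):         # runs are independent, so in a
            counts.update(simulate(s, seed=r * 100 + i))  # real system they
        # Target the least-sampled states in the next round,             # run
        # with no user intervention.                                # in parallel
        undersampled = sorted(range(n_states), key=lambda s: counts[s])
        starts = [undersampled[i % len(undersampled)]
                  for i in range(sims_per_round)]
    return counts
```

In a platform like the one described, each round's runs would be dispatched as independent dataflow tasks across distributed resources; the sampling logic itself stays this simple because the scheduler handles distribution, dependencies, and failures.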
IEEE International Conference on High Performance Computing, Data, and Analytics | 2011
Sander Pronk; Gregory R. Bowman; Berk Hess; Per Larsson; Imran S. Haque; Vijay S. Pande; Iman Pouya; Kyle A. Beauchamp; Peter M. Kasson; Erik Lindahl
Alcohol and Alcoholism | 2015
T. B. Voigt; S. Heusser; Göran Klement; Iman Pouya; A. R. Mola; T. M. D. Ruel; James R. Trudell; Erik Lindahl; Rebecca J. Howard
Biophysical Journal | 2016
Stephanie A. Heusser; Rebecca J. Howard; Iman Pouya; Göran Klement; Cecilia M. Borghese; R. Adron Harris; Erik Lindahl
Biophysical Journal | 2015
Ozge Yoluk; Stephanie A. Heusser; Iman Pouya; Rebecca J. Howard; Göran Klement; Erik Lindahl
The FASEB Journal | 2014
Jody-Ann Facey; Laura Venner; Michael Hyde; Iman Pouya; Erik Lindahl; Rebecca J. Howard
Israel Journal of Chemistry | 2014
Per A. Larsson; Iman Pouya; Erik Lindahl
Biophysical Journal | 2014
Göran Klement; Iman Pouya; Ozge Yoluk; Rebecca J. Howard; Erik Lindahl
Biophysical Journal | 2013
Iman Pouya; Sander Pronk; Grant M. Rotskoff; Peter M. Kasson; Erik Lindahl