Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jason R. Mick is active.

Publications


Featured research published by Jason R. Mick.


Computer Physics Communications | 2013

GPU-accelerated Gibbs ensemble Monte Carlo simulations of Lennard-Jonesium

Jason R. Mick; Eyad Hailat; Vincent Russo; Kamel Rushaidat; Loren Schwiebert; Jeffrey J. Potoff

This work describes an implementation of canonical and Gibbs ensemble Monte Carlo simulations on graphics processing units (GPUs). The pair-wise energy calculations, which consume the majority of the computational effort, are parallelized using the energetic decomposition algorithm. While energetic decomposition is relatively inefficient for traditional CPU-bound codes, the algorithm is ideally suited to the architecture of the GPU. The performance of the CPU and GPU codes is assessed for a variety of CPU and GPU combinations for systems containing between 512 and 131,072 particles. For a system of 131,072 particles, the GPU-enabled canonical and Gibbs ensemble codes were 10.3 and 29.1 times faster (GTX 480 GPU vs. i5-2500K CPU), respectively, than an optimized serial CPU-bound code. Due to overhead from memory transfers from system RAM to the GPU, the CPU code was slightly faster than the GPU code for simulations containing fewer than 600 particles. The critical temperature Tc* = 1.312(2) and density ρc* = 0.316(3) were determined for the tail-corrected Lennard-Jones potential from simulations of 10,000 particle systems, and found to be in exact agreement with prior mixed-field finite-size scaling calculations [J.J. Potoff, A.Z. Panagiotopoulos, J. Chem. Phys. 109 (1998) 10914].
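
The energetic decomposition described here maps naturally onto one GPU thread per row of the pair-interaction sum. Below is a minimal CUDA sketch of that idea, not the authors' code: names are illustrative, and the cutoff, periodic boundaries, and block-level reduction a production code would use are omitted.

#include <cstdio>
#include <cuda_runtime.h>

// Lennard-Jones pair energy, u(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6).
__device__ double lj_pair(double r2, double eps, double sigma)
{
    double s2 = sigma * sigma / r2;
    double s6 = s2 * s2 * s2;
    return 4.0 * eps * (s6 * s6 - s6);
}

// Energetic decomposition: thread i accumulates the interactions of particle i
// with all particles j > i; partial sums are combined with atomicAdd.
// (double-precision atomicAdd needs compute capability >= 6.0.)
__global__ void total_energy(const double3 *pos, int n, double eps,
                             double sigma, double *energy)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    double e = 0.0;
    for (int j = i + 1; j < n; ++j) {
        double dx = pos[j].x - pos[i].x;
        double dy = pos[j].y - pos[i].y;
        double dz = pos[j].z - pos[i].z;
        e += lj_pair(dx * dx + dy * dy + dz * dz, eps, sigma);
    }
    atomicAdd(energy, e);
}

int main()
{
    const int n = 512;                       // 8 x 8 x 8 lattice
    double3 *pos; double *energy;
    cudaMallocManaged(&pos, n * sizeof(double3));
    cudaMallocManaged(&energy, sizeof(double));
    for (int i = 0; i < n; ++i)
        pos[i] = make_double3(1.5 * (i % 8), 1.5 * ((i / 8) % 8), 1.5 * (i / 64));
    *energy = 0.0;
    total_energy<<<(n + 127) / 128, 128>>>(pos, n, 1.0, 1.0, energy);
    cudaDeviceSynchronize();
    printf("total LJ energy: %f\n", *energy);
    return 0;
}

The triangular loop gives the later threads less work; the abstract's point is that even this simple decomposition, inefficient on a CPU, already saturates a GPU for large systems.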


Journal of Chemical Physics | 2015

Optimized Mie potentials for phase equilibria: Application to noble gases and their mixtures with n-alkanes

Jason R. Mick; Mohammad Soroush Barhaghi; Brock Jackman; Kamel Rushaidat; Loren Schwiebert; Jeffrey J. Potoff

Transferable force fields, based on n-6 Mie potentials, are presented for noble gases. By tuning the repulsive exponent, n_i, it is possible to simultaneously reproduce experimental saturated liquid densities and vapor pressures with high accuracy, from the normal boiling point to the critical point. Vapor-liquid coexistence curves for pure fluids are calculated using histogram reweighting Monte Carlo simulations in the grand canonical ensemble. For all noble gases, saturated liquid densities and vapor pressures are reproduced to within 1% and 4% of experiment, respectively. Radial distribution functions, extracted from NVT and NPT Monte Carlo simulations, are in similarly excellent agreement with experimental data. The transferability of the optimized force fields is assessed through calculations of binary mixture vapor-liquid equilibria. These mixtures include argon + krypton, krypton + xenon, methane + krypton, methane + xenon, krypton + ethane, and xenon + ethane. For all mixtures, excellent agreement with experiment is achieved without the introduction of any binary interaction parameters or multi-body interactions.
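
The n-6 Mie potential underlying these force fields is U(r) = C(n)·eps·((sigma/r)^n − (sigma/r)^6), with the prefactor C(n) = (n/(n−6))·(n/6)^(6/(n−6)) chosen so the well depth is exactly eps for any n. A small self-checking sketch, using illustrative reduced-unit parameters rather than the fitted values from the paper:

#include <cstdio>
#include <cmath>

// n-6 Mie potential: U(r) = C(n) * eps * ((sigma/r)^n - (sigma/r)^6), with
// C(n) = (n/(n-6)) * (n/6)^(6/(n-6)), so the well depth is exactly eps.
__host__ __device__ double mie(double r, double eps, double sigma, double n)
{
    double C = (n / (n - 6.0)) * pow(n / 6.0, 6.0 / (n - 6.0));
    double sr = sigma / r;
    return C * eps * (pow(sr, n) - pow(sr, 6.0));
}

int main()
{
    // Illustrative parameters only, not fitted values from the paper.
    double eps = 1.0, sigma = 1.0, n = 14.0;
    double rmin = pow(n / 6.0, 1.0 / (n - 6.0)) * sigma;  // potential minimum
    printf("U(sigma) = %f (zero crossing)\n", mie(sigma, eps, sigma, n));
    printf("U(rmin)  = %f (should equal -eps)\n", mie(rmin, eps, sigma, n));
    return 0;
}

Raising n above the Lennard-Jones value of 12 steepens the repulsive wall without changing the well depth, which is the single knob the abstract describes tuning.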


International Journal of Parallel, Emergent and Distributed Systems | 2014

Parallel Monte Carlo simulation in the canonical ensemble on the graphics processing unit

Eyad Hailat; Vincent Russo; Kamel Rushaidat; Jason R. Mick; Loren Schwiebert; Jeffrey J. Potoff

Graphics processing units (GPUs) offer parallel computing power that would otherwise require a cluster of networked computers or a supercomputer. While writing kernel code is fairly straightforward, achieving efficiency and performance requires very careful optimisation decisions and changes to the original serial algorithm. We introduce a parallel canonical ensemble Monte Carlo (MC) simulation that runs entirely on the GPU. In this paper, we describe two MC simulation codes for Lennard-Jones particles in the canonical ensemble: a single-core CPU implementation and a parallel GPU implementation. Using the Compute Unified Device Architecture (CUDA), the parallel implementation enables the simulation of systems containing over 200,000 particles in a reasonable amount of time, which allows researchers to obtain more accurate simulation results. A remapping algorithm is introduced to balance the load on the device resources, and experimental results demonstrate that the efficiency of this algorithm is bounded by the available GPU resources. Our parallel implementation achieves a speedup of up to 15 times on a commodity GPU over our efficient single-core implementation for a system of 256k particles, with the speedup increasing with problem size. Furthermore, we describe our methods and strategies for optimising our implementation in detail.
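
The core of a canonical-ensemble MC step on the GPU is the parallel evaluation of the energy change for a trial displacement, followed by the standard Metropolis test. A minimal sketch of that step, not the paper's remapped implementation; all names and parameters are illustrative, and cutoffs and periodic boundaries are omitted:

#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

__device__ double lj(double r2, double eps, double sigma)
{
    double s6 = pow(sigma * sigma / r2, 3.0);
    return 4.0 * eps * (s6 * s6 - s6);
}

// One thread per partner particle j: accumulate the change in interaction
// energy when particle `moved` is displaced to `trial`. (double-precision
// atomicAdd needs compute capability >= 6.0.)
__global__ void delta_energy(const double3 *pos, int n, int moved,
                             double3 trial, double eps, double sigma, double *dE)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= n || j == moved) return;
    double3 o = pos[moved];
    double ox = pos[j].x - o.x, oy = pos[j].y - o.y, oz = pos[j].z - o.z;
    double nx = pos[j].x - trial.x, ny = pos[j].y - trial.y, nz = pos[j].z - trial.z;
    atomicAdd(dE, lj(nx * nx + ny * ny + nz * nz, eps, sigma)
                - lj(ox * ox + oy * oy + oz * oz, eps, sigma));
}

int main()
{
    const int n = 256;
    double3 *pos; double *dE;
    cudaMallocManaged(&pos, n * sizeof(double3));
    cudaMallocManaged(&dE, sizeof(double));
    for (int i = 0; i < n; ++i)
        pos[i] = make_double3(1.5 * (i % 8), 1.5 * ((i / 8) % 8), 1.5 * (i / 64));

    double beta = 1.0;                              // 1/kT in reduced units
    int moved = rand() % n;
    double3 trial = pos[moved];
    trial.x += 0.2 * (rand() / (double)RAND_MAX - 0.5);  // small trial displacement
    *dE = 0.0;
    delta_energy<<<(n + 127) / 128, 128>>>(pos, n, moved, trial, 1.0, 1.0, dE);
    cudaDeviceSynchronize();
    if (*dE <= 0.0 || rand() / (double)RAND_MAX < exp(-beta * *dE))
        pos[moved] = trial;                         // Metropolis acceptance
    printf("dE = %f\n", *dE);
    return 0;
}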


Molecular Physics | 2017

Optimised Mie potentials for phase equilibria: application to alkynes

Mohammad Soroush Barhaghi; Jason R. Mick; Jeffrey J. Potoff

A transferable united-atom force field, based on Mie potentials, is presented for alkynes. The performance of the optimised Mie potential parameters is assessed for 1-alkynes and 2-alkynes using grand canonical histogram-reweighting Monte Carlo simulations. For each compound, vapour–liquid coexistence curves, vapour pressures, heats of vapourisation, critical properties and normal boiling points are predicted and compared to experiment. Experimental saturated liquid densities are reproduced to within 2% average absolute deviation (AAD), except for 1-hexyne, which is reproduced with 3% AAD. Experimental saturated vapour pressures are reproduced to within 3% AAD, except for 1-pentyne, 2-pentyne and 2-hexyne, where deviations from experiment of up to 20% AAD were observed. Binary phase diagrams, predicted from Gibbs ensemble Monte Carlo simulations, for propane + propyne, propene + propyne and propadiene + propyne, are in close agreement with experiment.
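
Grand canonical histogram reweighting rests on the identity ln P(N, E; mu', beta') = ln P(N, E; mu, beta) + (beta'·mu' − beta·mu)·N − (beta' − beta)·E + const, so a histogram collected at one state point can be shifted to nearby ones. A sketch of that shift with one GPU thread per bin, using toy data; the paper's actual workflow, which stitches and normalises many runs, is more involved:

#include <cstdio>
#include <cuda_runtime.h>

// One thread per (N, E) histogram bin: shift ln P(N, E) collected at state
// point (mu, beta) to a new point (muP, betaP); the additive constant is
// fixed afterwards by normalising on the host.
__global__ void reweight(const double *lnP, double *lnPnew, const int *Nval,
                         const double *Eval, int bins,
                         double beta, double mu, double betaP, double muP)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= bins) return;
    lnPnew[k] = lnP[k] + (betaP * muP - beta * mu) * Nval[k]
                       - (betaP - beta) * Eval[k];
}

int main()
{
    const int bins = 3;                      // toy histogram, illustration only
    double *lnP, *lnPnew, *Eval; int *Nval;
    cudaMallocManaged(&lnP, bins * sizeof(double));
    cudaMallocManaged(&lnPnew, bins * sizeof(double));
    cudaMallocManaged(&Eval, bins * sizeof(double));
    cudaMallocManaged(&Nval, bins * sizeof(int));
    double lnP0[bins] = {-2.0, -0.5, -1.5}, E0[bins] = {-50.0, -80.0, -120.0};
    int N0[bins] = {10, 20, 30};
    for (int k = 0; k < bins; ++k) { lnP[k] = lnP0[k]; Eval[k] = E0[k]; Nval[k] = N0[k]; }
    reweight<<<1, bins>>>(lnP, lnPnew, Nval, Eval, bins, 1.0, -3.0, 1.05, -3.1);
    cudaDeviceSynchronize();
    for (int k = 0; k < bins; ++k)
        printf("bin %d: lnP %.3f -> %.3f\n", k, lnP[k], lnPnew[k]);
    return 0;
}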


Concurrency and Computation: Practice and Experience | 2016

Improving performance of GPU code using novel features of the NVIDIA Kepler architecture

Yuanzhe Li; Loren Schwiebert; Eyad Hailat; Jason R. Mick; Jeffrey J. Potoff

Graphics processing unit (GPU) computing is a popular approach to simulating complex models and performing massive calculations. GPUs have attracted a great deal of interest because they offer both high performance and energy efficiency. Efficient general-purpose computation on GPUs requires good parallelism, memory coalescing, regular memory access, small overhead on data exchange between the CPU and the GPU, and few explicit global synchronizations, which are usually gained by optimizing the algorithms. Beyond these, the proper use of some novel features provided on NVIDIA GPUs can offer further improvement. In this paper, we modify an existing optimized GPU application to illustrate the potential performance gains of these features and to demonstrate the performance trade-offs of different implementation choices. The paper focuses on the challenges of reducing interactions between the CPU and GPU and reducing the use of explicit synchronization. We explain how to achieve these objectives using two features of the Kepler architecture: warp shuffle and dynamic parallelism. We find that a judicious use of these two techniques, eliminating repeated operations and synchronizations, results in significantly better performance. We describe various pitfalls encountered in optimizing our code to use these two features and how these were addressed. In particular, we identify a subtle data race condition when using dynamic parallelism under certain circumstances, and present our solution. Finally, we present a technique to trade off the allocation of various device resources to find the parameters that offer the best performance.
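
Warp shuffle lets threads in a warp exchange register values directly, so a warp-level reduction needs neither shared memory nor __syncthreads(). A minimal sketch of that pattern; note the paper's Kepler-era code would have used the original __shfl_down intrinsic, while __shfl_down_sync shown here is the current CUDA form:

#include <cstdio>
#include <cuda_runtime.h>

// Warp-level sum with shuffles: each step halves the number of active lanes,
// and lane 0 ends up holding the warp's total. No shared memory and no
// __syncthreads() are needed inside the warp.
__device__ double warp_sum(double v)
{
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    return v;
}

__global__ void sum32(const double *in, double *out)
{
    double v = warp_sum(in[threadIdx.x]);
    if (threadIdx.x == 0) *out = v;          // single warp launched below
}

int main()
{
    double *in, *out;
    cudaMallocManaged(&in, 32 * sizeof(double));
    cudaMallocManaged(&out, sizeof(double));
    for (int i = 0; i < 32; ++i) in[i] = i + 1.0;   // 1 + 2 + ... + 32 = 528
    sum32<<<1, 32>>>(in, out);
    cudaDeviceSynchronize();
    printf("warp sum = %f (expect 528)\n", *out);
    return 0;
}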


High Performance Computing and Communications | 2015

Evaluation of Hybrid Parallel Cell List Algorithms for Monte Carlo Simulation

Kamel Rushaidat; Loren Schwiebert; Brock Jackman; Jason R. Mick; Jeffrey J. Potoff

This paper describes efficient, scalable parallel implementations of the conventional cell list method and a modified cell list method to calculate the total system intermolecular Lennard-Jones force interactions in the Monte Carlo Gibbs ensemble. We targeted this part of the Gibbs ensemble for optimization because it is the most computationally demanding part of the force interactions in the simulation, as it involves all the molecules in the system. The modified cell list implementation reduces the number of particles that are outside the interaction range by making the cells smaller, thus reducing the number of unnecessary distance evaluations. Evaluation of the two cell list methods is done using a hybrid MPI+OpenMP approach and a hybrid MPI+CUDA approach. The cell list methods are evaluated on a small cluster of multicore CPUs, Intel Xeon Phi coprocessors, and GPUs. The performance results are evaluated using different combinations of MPI processes, threads, and problem sizes.
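
A conventional cell list bins particles into cells at least one cutoff wide, so a neighbour search only has to visit the 27 surrounding cells rather than all N particles. A minimal GPU build step for such a list, a sketch of the conventional method only, not the paper's smaller-cell variant; the capacity and names are illustrative:

#include <cstdio>
#include <cuda_runtime.h>

#define MAX_PER_CELL 32   // fixed cell capacity, enough for this illustration

// Bin particles into cells; atomicAdd on the per-cell counter hands out slots.
__global__ void build_cells(const double3 *pos, int n, double box,
                            int side, int *count, int *cells)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    double len = box / side;
    int cx = (int)(pos[i].x / len) % side;
    int cy = (int)(pos[i].y / len) % side;
    int cz = (int)(pos[i].z / len) % side;
    int c = (cz * side + cy) * side + cx;
    int slot = atomicAdd(&count[c], 1);
    if (slot < MAX_PER_CELL)
        cells[c * MAX_PER_CELL + slot] = i;  // store the particle index
}

int main()
{
    const int n = 64, side = 2;              // 2 x 2 x 2 cells, box length 8
    double3 *pos; int *count, *cells;
    cudaMallocManaged(&pos, n * sizeof(double3));
    cudaMallocManaged(&count, side * side * side * sizeof(int));
    cudaMallocManaged(&cells, side * side * side * MAX_PER_CELL * sizeof(int));
    for (int i = 0; i < n; ++i)              // 4 x 4 x 4 grid of particles
        pos[i] = make_double3(2.0 * (i % 4), 2.0 * ((i / 4) % 4), 2.0 * (i / 16));
    cudaMemset(count, 0, side * side * side * sizeof(int));
    build_cells<<<1, n>>>(pos, n, 8.0, side, count, cells);
    cudaDeviceSynchronize();
    for (int c = 0; c < side * side * side; ++c)
        printf("cell %d: %d particles\n", c, count[c]);
    return 0;
}

Shrinking the cells below the cutoff, as the paper's modified method does, trades more cells to visit for fewer wasted distance evaluations within each cell.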


High Performance Computing Symposium | 2013

GPU-based Monte Carlo simulation for the Gibbs ensemble

Eyad Hailat; Kamel Rushaidat; Loren Schwiebert; Jason R. Mick; Jeffrey J. Potoff


Journal of Chemical & Engineering Data | 2016

Prediction of Radon-222 Phase Behavior by Monte Carlo Simulation

Jason R. Mick; Mohammad Soroush Barhaghi; Jeffrey J. Potoff


Journal of Chemical & Engineering Data | 2017

Optimized Mie Potentials for Phase Equilibria: Application to Branched Alkanes

Jason R. Mick; Mohammad Soroush Barhaghi; Brock Jackman; Loren Schwiebert; Jeffrey J. Potoff


arXiv: Distributed, Parallel, and Cluster Computing | 2014

An Efficient Cell List Implementation for Monte Carlo Simulation on GPUs

Loren Schwiebert; Eyad Hailat; Kamel Rushaidat; Jason R. Mick; Jeffrey J. Potoff

Collaboration


Dive into Jason R. Mick's collaborations.

Top Co-Authors

Eyad Hailat

Wayne State University


Yuanzhe Li

Wayne State University
