Publications


Featured research published by Amanda Randles.


Cell Reports | 2014

Inference of Tumor Evolution during Chemotherapy by Computational Modeling and In Situ Analysis of Genetic and Phenotypic Cellular Diversity

Vanessa Almendro; Yu Kang Cheng; Amanda Randles; Shalev Itzkovitz; Andriy Marusyk; Elisabet Ametller; Xavier Gonzalez-Farre; Montse Muñoz; Hege G. Russnes; Åslaug Helland; Inga H. Rye; Anne Lise Børresen-Dale; Reo Maruyama; Alexander van Oudenaarden; M. Dowsett; Robin L. Jones; Jorge S. Reis-Filho; Pere Gascón; Mithat Gonen; Franziska Michor; Kornelia Polyak

Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and posttreatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.
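The abstract does not describe the internals of the stochastic model; as a hedged illustration only, the sketch below shows a minimal birth-death model of clonal populations under differential fitness, together with the Shannon index commonly used to quantify intratumor diversity. All function names and parameter values here are hypothetical, not taken from the paper.

```python
import math
import random

def simulate_clones(n_clones=5, steps=50, death_prob=0.1, seed=42):
    """Toy stochastic birth-death model: each clone grows with a
    clone-specific (hypothetical) fitness and dies at a shared rate,
    mimicking selection pressure acting on cellular diversity."""
    rng = random.Random(seed)
    fitness = [rng.uniform(0.05, 0.2) for _ in range(n_clones)]
    counts = [100] * n_clones
    for _ in range(steps):
        for i in range(n_clones):
            births = sum(rng.random() < fitness[i] for _ in range(counts[i]))
            deaths = sum(rng.random() < death_prob for _ in range(counts[i]))
            counts[i] = max(0, counts[i] + births - deaths)
    return counts

def shannon_diversity(counts):
    """Shannon index, a standard measure of population diversity."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c > 0)
```

Fitting such a model against observed pre- and post-treatment diversity is the kind of inference step the paper's full model performs at far greater fidelity.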


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

Multiphysics simulations: Challenges and opportunities

David E. Keyes; Lois Curfman McInnes; Carol S. Woodward; William Gropp; Eric Myra; Michael Pernice; John B. Bell; Jed Brown; Alain Clo; Jeffrey M. Connors; Emil M. Constantinescu; Donald Estep; Kate Evans; Charbel Farhat; Ammar Hakim; Glenn E. Hammond; Glen A. Hansen; Judith C. Hill; Tobin Isaac; Kirk E. Jordan; Dinesh K. Kaushik; Efthimios Kaxiras; Alice Koniges; Kihwan Lee; Aaron Lott; Qiming Lu; John Harold Magerlein; Reed M. Maxwell; Michael McCourt; Miriam Mehl

We consider multiphysics applications from algorithmic and architectural perspectives, where “algorithmic” includes both mathematical analysis and computational complexity, and “architectural” includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
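In its simplest partitioned form, the common algebraic coupling paradigm the authors describe is a block Gauss-Seidel fixed-point iteration between single-physics solvers. A toy sketch (the two scalar "physics" updates below are invented purely for illustration; each line stands in for a full single-physics solve):

```python
def solve_coupled(tol=1e-10, max_iter=100):
    """Partitioned (block Gauss-Seidel) coupling of two toy 'physics':
        u1 = 0.5 * u2 + 1.0   # stand-in for, e.g., a fluid solve
        u2 = 0.25 * u1        # stand-in for, e.g., a structure solve
    Alternate the single-physics solves until the coupled state stops
    changing; contraction of the cross-coupling drives convergence."""
    u1, u2 = 0.0, 0.0
    for it in range(1, max_iter + 1):
        u1_new = 0.5 * u2 + 1.0      # solve physics 1 with u2 frozen
        u2_new = 0.25 * u1_new       # solve physics 2 with fresh u1
        if abs(u1_new - u1) < tol and abs(u2_new - u2) < tol:
            return u1_new, u2_new, it
        u1, u2 = u1_new, u2_new
    return u1, u2, max_iter
```

The mathematical analysis the paper refers to asks when such iterations contract; tightly coupled systems may instead require a fully implicit (monolithic Newton) treatment.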


International Parallel and Distributed Processing Symposium | 2014

MIC-SVM: Designing a Highly Efficient Support Vector Machine for Advanced Modern Multi-core and Many-Core Architectures

Yang You; Shuaiwen Leon Song; Haohuan Fu; Andres Marquez; Maryam Mehri Dehnavi; Kevin J. Barker; Kirk W. Cameron; Amanda Randles; Guangwen Yang

Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. Advanced multi- and many-core architectures offer massive parallelism with complex memory hierarchies which can make runtime training possible, but form a barrier to efficient parallel SVM design. To address the challenges above, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools. MIC-SVM achieves 4.4-84x and 18-47x speedups against the popular LIBSVM, on MIC and Ivy Bridge CPUs respectively, for several real-world data-mining datasets. Even compared with GPUSVM, running on a top-of-the-line NVIDIA K20X GPU, the performance of our MIC-SVM is competitive. We also conduct a cross-platform performance comparison analysis, focusing on Ivy Bridge CPUs, MIC and GPUs, and provide insights on how to select the most suitable advanced architectures for specific algorithms and input data patterns.
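MIC-SVM's specific optimizations are not reproduced in this abstract; as a hedged sketch, the dominant data-parallel cost in SMO-style SVM trainers is computing rows of the kernel matrix, shown below in vectorized NumPy (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel_row(X, i, gamma=0.5):
    """One row K[i, :] of the RBF kernel matrix, computed in a single
    vectorized sweep over all samples -- the hot loop that SMO-style
    SVM trainers map onto SIMD lanes and cores on CPUs and many-core
    devices such as the Xeon Phi."""
    diff = X - X[i]                           # broadcast against all samples
    sq_dist = np.einsum('ij,ij->i', diff, diff)
    return np.exp(-gamma * sq_dist)
```

Each SMO working-set update touches two such rows, so caching them and exploiting multilevel parallelism in their computation is where architecture-aware tuning pays off.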


Journal of Parallel and Distributed Computing | 2015

Scaling Support Vector Machines on modern HPC platforms

Yang You; Haohuan Fu; Shuaiwen Leon Song; Amanda Randles; Darren J. Kerbyson; Andres Marquez; Guangwen Yang; Adolfy Hoisie

Support Vector Machines (SVM) have been widely used in data-mining and Big Data applications as modern commercial databases attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. Advanced multi- and many-core architectures offer massive parallelism with complex memory hierarchies which can make runtime training possible, but form a barrier to efficient parallel SVM design. To address the challenges above, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools. MIC-SVM achieves 4.4-84x and 18-47x speedups against the popular LIBSVM, on MIC and Ivy Bridge CPUs respectively, for several real-world data-mining datasets. Even compared with GPUSVM, running on the NVIDIA K20X GPU, the performance of our MIC-SVM is competitive. We also conduct a cross-platform performance comparison analysis, focusing on Ivy Bridge CPUs, MIC and GPUs, and provide insights on how to select the most suitable advanced architectures for specific algorithms and input data patterns.
Highlights:
- An efficient parallel support vector machine for x86-based multi-core platforms.
- Novel optimization techniques to fully utilize multi-level parallelism.
- Improvements addressing the deficiencies of current SVM tools.
- Selection of the best architectures for given input data patterns to achieve the best performance.
- A large-scale distributed algorithm and a power-efficient approach.


Journal of Computational Science | 2015

Massively Parallel Simulations of Hemodynamics in the Primary Large Arteries of the Human Vasculature

Amanda Randles; Erik W. Draeger; Peter E. Bailey

We present a computational model of three-dimensional and unsteady hemodynamics within the primary large arteries in the human on 1,572,864 cores of the IBM Blue Gene/Q. Models of large regions of the circulatory system are needed to study the impact of local factors on global hemodynamics and to inform next generation drug delivery mechanisms. The HARVEY code successfully addresses key challenges that can hinder effective solution of image-based hemodynamics on contemporary supercomputers, such as limited memory capacity and bandwidth, flexible load balancing, and scalability. This work is the first demonstration of large fluid dynamics simulations of the aortofemoral region of the circulatory system at resolutions as small as 10 μm.
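HARVEY itself is a large-scale production code; as a hedged illustration of its core kernel, the sketch below is a minimal single-node D2Q9 lattice Boltzmann collide-and-stream step (the standard textbook method, not HARVEY's implementation, and with periodic rather than vessel-wall boundaries):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for D2Q9."""
    feq = np.empty((9,) + rho.shape)
    usqr = ux**2 + uy**2
    for k in range(9):
        cu = c[k, 0] * ux + c[k, 1] * uy
        feq[k] = w[k] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usqr)
    return feq

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step on a fully periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau        # collision
    for k in range(9):                                  # streaming
        f[k] = np.roll(f[k], shift=(c[k, 0], c[k, 1]), axis=(0, 1))
    return f
```

The locality of this update (collision is pointwise, streaming touches only neighbors) is what makes the method amenable to the million-core scaling reported above.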


International Parallel and Distributed Processing Symposium | 2013

Performance Analysis of the Lattice Boltzmann Model Beyond Navier-Stokes

Amanda Randles; Vivek Kale; Jeff R. Hammond; William Gropp; Efthimios Kaxiras

The lattice Boltzmann method is increasingly important in facilitating large-scale fluid dynamics simulations. To date, these simulations have been built on discretized velocity models of up to 27 neighbors. Recent work has shown that higher-order approximations of the continuum Boltzmann equation enable not only recovery of the Navier-Stokes hydrodynamics, but also simulations for a wider range of Knudsen numbers, which is especially important in micro- and nanoscale flows. These higher-order models have significant impact on both the communication and computational complexity of the application. We present a performance study of the higher-order models as compared to the traditional ones, on both the IBM Blue Gene/P and Blue Gene/Q architectures. We study the tradeoffs of several optimization methods, such as the use of deep halo-level ghost cells, that, alongside hybrid programming models, reduce the impact of extended models and enable efficient modeling of extreme regimes of computational fluid dynamics.
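The deep-halo idea can be sketched in one dimension: a rank gathers its local block plus a halo of ghost cells, and a depth-d halo lets it advance a nearest-neighbor stencil d steps before the ghosts go stale, trading redundant computation for fewer, larger messages. Wider velocity sets (the higher-order models above) likewise just require a larger `depth`. This single-process sketch (the function name is illustrative) stands in for the MPI exchange:

```python
import numpy as np

def halo_exchange(global_field, i0, i1, depth):
    """Gather the local block [i0, i1) plus `depth` ghost cells on each
    side from a 1-D periodic global field. In a real code this data
    arrives by message passing from neighboring ranks; here we index
    the global array directly to show which cells are needed."""
    n = global_field.shape[0]
    idx = np.arange(i0 - depth, i1 + depth) % n   # periodic wrap-around
    return global_field[idx]
```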


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Massively parallel models of the human circulatory system

Amanda Randles; Erik W. Draeger; Tomas Oppelstrup; Liam Krauss; John A. Gunnels

The potential impact of blood flow simulations on the diagnosis and treatment of patients suffering from vascular disease is tremendous. Models of the full arterial tree can provide insight into diseases such as arterial hypertension and enable the study of the influence of local factors on global hemodynamics. We present a new, highly scalable implementation of the lattice Boltzmann method which addresses key challenges such as multiscale coupling, limited memory capacity and bandwidth, and robust load balancing in complex geometries. We demonstrate the strong scaling of a three-dimensional, high-resolution simulation of hemodynamics in the systemic arterial tree on 1,572,864 cores of Blue Gene/Q. Faster calculation of flow in full arterial networks enables unprecedented risk stratification on a per-patient basis. In pursuit of this goal, we have introduced computational advances that significantly reduce time-to-solution for biofluidic simulations.


Journal of Computational Physics | 2014

Parallel in time approximation of the lattice Boltzmann method for laminar flows

Amanda Randles; Efthimios Kaxiras

Fluid dynamics simulations using grid-based methods, such as the lattice Boltzmann equation, can benefit from parallel-in-space computation. However, for a fixed-size simulation of this type, the efficiency of larger processor counts will saturate when the number of grid points per core becomes too small. To overcome this fundamental strong-scaling limit of space-parallel approaches, we present a novel time-parallel version of the lattice Boltzmann method using the parareal algorithm. This method is based on a predictor-corrector scheme combined with mesh refinement to enable the simulation of a larger number of time steps. We present results of up to a 32× increase in speed for a model system consisting of a cylinder with conditions for laminar flow. The parallel gain obtainable is predicted with high accuracy, providing a quantitative understanding of the potential impact of this method.
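The parareal predictor-corrector can be sketched on a scalar model ODE, du/dt = λu, with explicit Euler as both propagators (the paper applies the same structure to the lattice Boltzmann method with coarse/fine meshes; this minimal version is standard parareal, not the paper's code):

```python
import numpy as np

def euler(u0, lam, t0, t1, nsteps):
    """Explicit Euler for du/dt = lam * u (a stand-in time stepper)."""
    dt = (t1 - t0) / nsteps
    u = u0
    for _ in range(nsteps):
        u += dt * lam * u
    return u

def parareal(u0, lam, T, n_slices, n_iter, fine_steps=100):
    """Parareal: a cheap coarse propagator G (one Euler step per time
    slice) predicts serially; the expensive fine propagator F runs
    independently on each slice (the parallel part) and corrects G."""
    ts = np.linspace(0.0, T, n_slices + 1)
    G = lambda u, a, b: euler(u, lam, a, b, 1)
    F = lambda u, a, b: euler(u, lam, a, b, fine_steps)
    U = [u0]                                  # initial coarse prediction
    for n in range(n_slices):
        U.append(G(U[-1], ts[n], ts[n + 1]))
    for _ in range(n_iter):
        Fv = [F(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        Gv = [G(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        U_new = [u0]
        for n in range(n_slices):             # serial corrector sweep
            U_new.append(G(U_new[-1], ts[n], ts[n + 1]) + Fv[n] - Gv[n])
        U = U_new
    return U
```

After k iterations the first k slice endpoints match the serial fine solution exactly; the speedup comes from needing far fewer iterations than slices, which is the parallel gain the paper quantifies.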


STACOM'12: Proceedings of the Third International Conference on Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges | 2012

A Lattice Boltzmann Simulation of Hemodynamics in a Patient-Specific Aortic Coarctation Model

Amanda Randles; Moritz Bächer; Hanspeter Pfister; Efthimios Kaxiras

In this paper, we propose a system to determine the pressure gradient at rest in the aorta. We developed a technique to efficiently initialize a regular simulation grid from a patient-specific aortic triangulated model. On this grid we employ the lattice Boltzmann method to resolve the characteristic fluid flow through the vessel. The inflow rates, as measured physiologically, are imposed providing accurate pulsatile flow. The simulation required a resolution of at least 20 microns to ensure a convergence of the pressure calculation. HARVEY, a large-scale parallel code, was run on the IBM Blue Gene/Q supercomputer to model the flow at this high resolution. We analyze and evaluate the strengths and weaknesses of our system.


International Conference on Conceptual Structures | 2017

Numerical simulation of a compound capsule in a constricted microchannel

John Gounley; Erik W. Draeger; Amanda Randles

Simulations of the passage of eukaryotic cells through a constricted channel aid in studying the properties of cancer cells and their transport in the bloodstream. Compound capsules, which explicitly model the outer cell membrane and nuclear lamina, have the potential to improve computational model fidelity. However, general simulations of compound capsules transiting a constricted microchannel have not been conducted, and the influence of the compound capsule model on computational performance is not well known. In this study, we extend a parallel hemodynamics application to simulate the fluid-structure interaction between compound capsules and fluid. With this framework, we compare the deformation of simple and compound capsules in constricted microchannels, and explore how deformation depends on the capillary number and on the volume fraction of the inner membrane. The computational framework's parallel performance in this setting is evaluated, and future development lessons are discussed.

Collaboration


Dive into Amanda Randles's collaborations.

Top Co-Authors

Erik W. Draeger

Lawrence Livermore National Laboratory


Payman Jalali

Lappeenranta University of Technology


Andres Marquez

Pacific Northwest National Laboratory
