
Publications


Featured research published by Cyrus Harrison.


Eurographics Workshop on Parallel Graphics and Visualization | 2011

Data-parallel mesh connected components labeling and analysis

Cyrus Harrison; Hank Childs; Kelly P. Gaither

We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2,197 cores and analyze meshes containing up to 68 billion cells.
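
The serial building block is easy to sketch. A minimal Python illustration of Union-find connected-components labeling over a cell adjacency list follows; it shows only the per-processor labeling pass, not the paper's distributed merge, and the names are illustrative.

    # Minimal sketch of the Union-find core used for per-processor labeling.
    # The cross-processor merge is the paper's contribution and is not shown.
    def connected_components(n_cells, adjacency):
        """adjacency: iterable of (i, j) pairs of cells sharing a face."""
        parent = list(range(n_cells))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving keeps trees shallow
                x = parent[x]
            return x

        for a, b in adjacency:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra                # union the two sub-meshes

        # Compact the roots into dense sub-mesh labels 0..k-1.
        labels, out = {}, []
        for c in range(n_cells):
            out.append(labels.setdefault(find(c), len(labels)))
        return out

    # Cells 0-1-2 form one connected sub-mesh; cell 3 is isolated.
    print(connected_components(4, [(0, 1), (1, 2)]))  # [0, 0, 0, 1]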


Proceedings of the First Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization | 2015

Strawman: A Batch In Situ Visualization and Analysis Infrastructure for Multi-Physics Simulation Codes

Matthew Larsen; Eric Brugger; Hank Childs; Jim Eliot; Kevin S. Griffin; Cyrus Harrison

We present Strawman, a system designed to explore the in situ visualization and analysis needs of simulation code teams planning for multi-physics calculations on exascale architectures. Strawman's design derives from key requirements from a diverse set of simulation code teams, including lightweight usage of shared resources, batch processing, ability to leverage modern architectures, and ease-of-use both for software integration and for usage during simulation runs. We describe the Strawman system, the key technologies it depends on, and our experiences integrating Strawman into three proxy simulations. Our findings show that Strawman's design meets our target requirements, and that some of its concepts may be worthy of integration into our community's in situ implementations.
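
As a hedged illustration of the batch in situ pattern described here, the toy loop below has a simulation publish its data each cycle and trigger pre-declared actions on a fixed cadence. The publish/execute names and the stub classes are assumptions for illustration, not Strawman's confirmed API.

    # Toy sketch of batch in situ coupling: the simulation hands its data to
    # the visualization layer and triggers pre-declared actions on a cadence.
    # All names are illustrative assumptions, not Strawman's actual API.
    class BatchViz:
        def publish(self, mesh):
            self.mesh = mesh                  # a real system shares zero-copy
        def execute(self, actions):
            for act in actions:               # batch: no interactive control
                print(f"action '{act}' on {len(self.mesh)} values")

    def advance(mesh):
        return [v + 0.1 for v in mesh]        # stand-in for the physics step

    viz, mesh = BatchViz(), [0.0] * 8
    for step in range(100):
        mesh = advance(mesh)
        if step % 25 == 0:                    # lightweight, fixed cadence
            viz.publish(mesh)
            viz.execute(["render_pseudocolor"])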


IEEE Computer Graphics and Applications | 2012

Visual Analytics for Finding Critical Structures in Massive Time-Varying Turbulent-Flow Simulations

Kelly P. Gaither; Hank Childs; Karl W. Schulz; Cyrus Harrison; William L. Barth; Diego Donzis; Pui-Kuen Yeung

Visualization and data analysis are crucial in analyzing and understanding a turbulent-flow simulation of size 4,096³ cells per time slice (68 billion cells) and 17 time slices (one trillion total cells). The visualization techniques used help scientists investigate the dynamics of intense events individually and as these events form clusters.
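
The quoted sizes are consistent, as a quick check shows:

    cells_per_slice = 4096 ** 3          # 68,719,476,736, i.e. ~68 billion
    total_cells = cells_per_slice * 17   # ~1.17 trillion across all slices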


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

Performance modeling of in situ rendering

Matthew Larsen; Cyrus Harrison; James Kress; David Pugmire; Jeremy S. Meredith; Hank Childs

With the push to exascale, in situ visualization and analysis will continue to play an important role in high performance computing. Tightly coupling in situ visualization with simulations constrains resources for both, and these constraints force a complex balance of trade-offs. A performance model that provides an a priori answer for the cost of using an in situ approach for a given task would assist in managing the trade-offs between simulation and visualization resources. In this work, we present new statistical performance models, based on algorithmic complexity, that accurately predict the run-time cost of a set of representative rendering algorithms, an essential in situ visualization task. To train and validate the models, we conduct a performance study of an MPI+X rendering infrastructure used in situ with three HPC simulation applications. We then explore feasibility issues using the model for selected in situ rendering questions.
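
As a hedged sketch of the modeling approach, one could regress measured frame times against complexity-derived terms; the feature terms and timing numbers below are toy assumptions for illustration, not the paper's model or data.

    # Hypothetical sketch: fit a complexity-based cost model for rendering.
    import numpy as np

    # (n_cells, n_pixels, seconds) from made-up calibration runs.
    runs = np.array([
        (1e6, 1024**2, 0.021),
        (8e6, 1024**2, 0.115),
        (1e6, 2048**2, 0.055),
        (8e6, 2048**2, 0.190),
    ])
    cells, pixels, t = runs[:, 0], runs[:, 1], runs[:, 2]

    # Design matrix: constant + linear in pixels + n log n in cells.
    X = np.column_stack([np.ones_like(t), pixels, cells * np.log(cells)])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)

    def predict_seconds(n_cells, n_pixels):
        """A priori runtime estimate for a proposed in situ rendering task."""
        return coef @ [1.0, n_pixels, n_cells * np.log(n_cells)]

    print(predict_seconds(4e6, 1024**2))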


Topological and Statistical Methods for Complex Data: Tackling Large-Scale, High-Dimensional, and Multivariate Data Spaces | 2015

A Distributed-Memory Algorithm for Connected Components Labeling of Simulation Data

Cyrus Harrison; Jordan Weiler; Ryan Bleile; Kelly P. Gaither; Hank Childs

This chapter describes a data-parallel, distributed-memory algorithm for identifying and labeling the connected sub-meshes within a three-dimensional mesh. The identification task is challenging in a distributed-memory setting because connectivity is transitive and the cells composing a sub-mesh may span many processors. The algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and to produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows mesh features to be isolated based on topology, enabling important analysis capabilities. The algorithm performs well in parallel; results are presented from a weak scaling study with concurrency levels up to 2,197 cores and meshes containing over two billion cells. This chapter is an extension of previous work by Harrison et al. (Data-parallel mesh connected components labeling and analysis. In: EuroGraphics Symposium on Parallel Graphics and Visualization (EGPGV), pp. 131–140, April 2011). It contains significant algorithmic improvements over the previous version, improved exploration of key bottlenecks in the algorithm, and improved clarity of presentation.
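
The merge stage can be sketched as follows, assuming each rank has collected (label, label) equivalence pairs along its ghost boundaries. The mpi4py all-gather here is a simplification shown only to make the logical merge concrete; the names and pair representation are assumptions.

    # Hypothetical sketch of the cross-processor merge: gather boundary label
    # equivalences from every rank, then replay Union-find to unify them.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Toy stand-in: pretend each rank found one equivalence on its boundary.
    local_pairs = [(comm.rank, (comm.rank + 1) % comm.size)]
    all_pairs = [p for pairs in comm.allgather(local_pairs) for p in pairs]

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    for a, b in all_pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # smallest label wins globally

    # find(label) now maps every local label to one global representative.

A production implementation avoids gathering all pairs on every rank; the chapter's multi-stage partitioning scheme exists precisely to keep this step scalable.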


IEEE Symposium on Large Data Analysis and Visualization | 2014

Multi-threaded streamline tracing for data-intensive architectures

Ming Jiang; Brian Van Essen; Cyrus Harrison; Maya Gokhale

Streamline tracing is an important tool used in many scientific domains for visualizing and analyzing flow fields. In this work, we examine a shared memory multi-threaded approach to streamline tracing that targets emerging data-intensive architectures. We take an in-depth look at data management strategies for streamline tracing in terms of issues, such as memory latency, bandwidth, and capacity limitations, that are applicable to future HPC platforms. We present two data management strategies for streamline tracing and evaluate their effectiveness for data-intensive architectures with locally attached Flash. We provide a comprehensive evaluation of both strategies by examining the strong and weak scaling implications of a variety of parameters. We also characterize the relationship between I/O concurrency and I/O efficiency to guide the selection of strategy based on use case. From our experiments, we find that using a kernel-managed memory map for out-of-core streamline tracing can outperform an optimized user-managed cache.
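
As an illustration of the kernel-managed strategy, the sketch below traces a streamline through a memory-mapped velocity field so the OS pages data in from storage on demand. The file name, array layout, nearest-neighbor sampling, and in-bounds seed are all assumptions for illustration.

    # Hypothetical sketch of the kernel-managed strategy: memory-map the
    # vector field and let the OS page cache fault data in from Flash.
    import numpy as np

    nx = ny = nz = 256  # assumed grid size; "velocity.raw" is a placeholder
    field = np.memmap("velocity.raw", dtype=np.float32, mode="r",
                      shape=(nz, ny, nx, 3))

    def trace(seed, steps=1000, h=0.25):
        """Forward-Euler advection with nearest-neighbor sampling; only the
        pages of `field` actually touched are ever read from storage."""
        p = np.array(seed, dtype=np.float64)
        path = [p.copy()]
        for _ in range(steps):
            i, j, k = int(p[0]), int(p[1]), int(p[2])
            v = field[k, j, i].astype(np.float64)
            p = p + h * v
            if not (0 <= p[0] < nx and 0 <= p[1] < ny and 0 <= p[2] < nz):
                break                          # particle exited the domain
            path.append(p.copy())
        return np.array(path)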


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

Efficient Dynamic Derived Field Generation on Many-Core Architectures Using Python

Cyrus Harrison; Paul A. Navrátil; Maysam Moussalem; Ming Jiang; Hank Childs

Derived field generation is a critical aspect of many visualization and analysis systems. This capability is frequently implemented by providing users with a language to create new fields and then translating their “programs” into a pipeline of filters that are combined in sequential fashion. Although this design is highly extensible and practical for development, the runtime characteristics of the typical implementation are poor, since it iterates over large arrays many times. As we reconsider visualization and analysis systems for many-core architectures, we must re-think the best way to implement derived fields while being cognizant of data movement. In this paper, we describe a flexible Python-based framework that realizes efficient derived field generation on many-core architectures using OpenCL. Our framework supports the development of different execution strategies for composing operations using a common library of building blocks. We present an evaluation of our framework by testing three execution strategies to explore tradeoffs between runtime performance and memory constraints. We successfully demonstrate our framework in an HPC environment using the vortex detection application on a large-scale simulation.
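
The data-movement argument can be made concrete with a hedged PyOpenCL sketch: rather than a chain of filters that each sweep the arrays, the full derived-field expression is compiled into one kernel, so the data is traversed once. The kernel and names below illustrate the fusion idea only; they are not the paper's framework.

    # Hypothetical sketch: fuse a derived-field expression into one OpenCL
    # kernel (a single pass over the data) instead of chained filters.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    kernel_src = """
    __kernel void magnitude(__global const float *x, __global const float *y,
                            __global const float *z, __global float *out)
    {
        int i = get_global_id(0);
        out[i] = sqrt(x[i]*x[i] + y[i]*y[i] + z[i]*z[i]);  // fused expression
    }
    """
    prg = cl.Program(ctx, kernel_src).build()

    n = 1_000_000
    x, y, z = (np.random.rand(n).astype(np.float32) for _ in range(3))

    mf = cl.mem_flags
    d_in = [cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
            for a in (x, y, z)]
    d_out = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

    prg.magnitude(queue, (n,), None, *d_in, d_out)
    result = np.empty_like(x)
    cl.enqueue_copy(queue, result, d_out)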


Proceedings of the In Situ Infrastructures on Enabling Extreme-Scale Analysis and Visualization | 2017

The ALPINE In Situ Infrastructure: Ascending from the Ashes of Strawman

Matthew Larsen; James P. Ahrens; Utkarsh Ayachit; Eric Brugger; Hank Childs; Berk Geveci; Cyrus Harrison

This paper introduces ALPINE, a flyweight in situ infrastructure. The infrastructure is designed for leading-edge supercomputers, and has support for both distributed-memory and shared-memory parallelism. It can take advantage of computing power on both conventional CPU architectures and on many-core architectures such as NVIDIA GPUs or the Intel Xeon Phi. Further, it has a flexible design that supports integration of new visualization and analysis routines and libraries. The paper describes ALPINE's interface choices and architecture, and also reports on initial experiments performed using the infrastructure.
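
As a hedged illustration of a flyweight, declarative interface of this kind, visualization work can be described as plain data handed to the infrastructure each cycle. The structure and key names below are assumptions, not necessarily ALPINE's exact schema.

    # Hypothetical illustration of a declarative in situ actions description;
    # the structure and key names are assumptions, not ALPINE's exact schema.
    actions = [
        {"action": "add_pipelines",
         "pipelines": {"pl1": {"f1": {"type": "contour",
                                      "params": {"field": "energy",
                                                 "iso_values": [0.5]}}}}},
        {"action": "add_scenes",
         "scenes": {"s1": {"plots": {"p1": {"type": "pseudocolor",
                                            "pipeline": "pl1",
                                            "field": "energy"}}}}},
    ]
    # The simulation would publish its mesh and pass `actions` once per
    # cycle; the infrastructure maps the work onto CPU or many-core back ends.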


Proceedings of the 11th Python in Science Conference | 2012

Python's Role in VisIt

Cyrus Harrison; Harinarayan Krishnan
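
No abstract accompanies this entry. For context, VisIt exposes its plotting pipeline to Python through a scripting interface along these lines; the database and variable names here are placeholders.

    # Illustration of VisIt-style Python scripting; "example.silo" and
    # "pressure" are placeholder names.
    import visit

    visit.Launch()                            # start the VisIt viewer
    visit.OpenDatabase("example.silo")
    visit.AddPlot("Pseudocolor", "pressure")  # build a plot of one field
    visit.DrawPlots()
    visit.SaveWindow()                        # render to an image file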


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

PyHPC 2016: 6th Workshop on Python for High-Performance and Scientific Computing

Andreas Schreiber; William Scullin; Bill Spotz; Andy R. Terrel; Achim Basermann; Yung Yu Chen; Samantha S. Foley; Cyrus Harrison; Konrad Hinsen; Michael Klemm; Andreas Kloeckner; Maurice Ling; Mike Muller

Collaboration


Dive into Cyrus Harrison's collaborations.

Top Co-Authors

Eric Brugger, Lawrence Livermore National Laboratory
Kelly P. Gaither, University of Texas at Austin
David Pugmire, Oak Ridge National Laboratory
James P. Ahrens, Los Alamos National Laboratory
Ming Jiang, Lawrence Livermore National Laboratory
Becky Springmeyer, Lawrence Livermore National Laboratory
Bill Spotz, Sandia National Laboratories
Brad Whitlock, Lawrence Livermore National Laboratory