
Publication


Featured research published by Michael L. Norman.


teragrid conference | 2010

Accelerating data-intensive science with Gordon and Dash

Michael L. Norman; Allan Snavely

In 2011 SDSC will deploy Gordon, an HPC architecture specifically designed for data-intensive applications. We describe the Gordon architecture and the thinking behind the design choices by considering the needs of two targeted application classes: massive database/data mining and data-intensive predictive science simulations. Gordon employs two technologies that have not been incorporated into HPC systems heretofore: flash SSD memory, and virtual shared memory software. We report on application speedups obtained with a working prototype of Gordon in production at SDSC called Dash, currently available as a TeraGrid resource.
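
The role of the flash layer can be pictured with a small out-of-core access pattern: keep a large array in a file on the SSD tier and memory-map it so that only the pages actually touched are read. The sketch below is illustrative only; the file path, array shape, and sizes are made-up examples, not Gordon's actual software stack.

```python
# Sketch: out-of-core analysis over a flash/SSD-resident array via a
# memory map, so only the pages actually touched are read from disk.
# The path and array shape are made-up examples, not Gordon specifics.
import numpy as np

SSD_PATH = "/ssd/scratch/particles.f32"   # hypothetical flash-backed file
N_ROWS, N_COLS = 1_000_000_000, 4         # ~16 GB of float32, larger than node RAM

def column_mean(path=SSD_PATH, col=0, chunk=10_000_000):
    """Mean of one column, streamed in chunks through the memory map."""
    data = np.memmap(path, dtype=np.float32, mode="r", shape=(N_ROWS, N_COLS))
    total, count = 0.0, 0
    for start in range(0, N_ROWS, chunk):
        block = data[start:start + chunk, col]
        total += float(block.sum())
        count += block.size
    return total / count
```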


ieee symposium on large data analysis and visualization | 2011

Exploring large data over wide area networks

Mark Hereld; Joseph A. Insley; Eric C. Olson; Michael E. Papka; Venkatram Vishwanath; Michael L. Norman; Rick Wagner

Simulations running on the top supercomputers are routinely producing multi-terabyte data sets. Enabling scientists, at their home institutions, to analyze, visualize and interact with these data sets as they are produced is imperative to the scientific discovery process. We report on interactive visualizations of large simulations performed on Kraken at the National Institute for Computational Sciences using the parallel cosmology code Enzo, with grid sizes ranging from 1024³ to 6400³. In addition to the asynchronous rendering of over 570 timesteps of a 4096³ simulation (150 TB in total), we developed the ability to stream the rendering result to multi-panel display walls, with full interactive control of the renderer(s).
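
As a rough illustration of the asynchronous-rendering-plus-streaming pattern described above (and not the actual Argonne/SDSC pipeline), the sketch below renders timesteps in a pool of worker processes and streams each finished frame, length-prefixed, to a remote display client; the host name and the stand-in renderer are hypothetical.

```python
# Sketch of the general pattern: render timesteps asynchronously in a
# worker pool and stream finished frames to a remote display client.
# Host, port, and the stand-in renderer are hypothetical placeholders.
import socket
import struct
from multiprocessing import Pool

def render_timestep(step):
    """Stand-in renderer: fabricate a small dummy frame so the sketch runs.
    A real pipeline would produce an encoded image of the volume here."""
    return step, bytes([step % 256]) * 1024

def stream_frames(steps, host="displaywall.example.org", port=9090):
    with socket.create_connection((host, port)) as sock, Pool(processes=8) as pool:
        # imap_unordered lets slow timesteps finish and ship out of order
        for step, frame in pool.imap_unordered(render_timestep, steps):
            header = struct.pack("!II", step, len(frame))   # step id + payload size
            sock.sendall(header + frame)

if __name__ == "__main__":
    stream_frames(range(570))
```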


extreme science and engineering discovery environment | 2013

Using Gordon to accelerate LHC science

Rick Wagner; Mahidhar Tatineni; Eva Hocks; Kenneth Yoshimoto; Scott Sakai; Michael L. Norman; Brian Bockelman; I. Sfiligoi; M. Tadel; J. Letts; F. Würthwein; L. A. T. Bauerdick

The discovery of the Higgs boson by the Large Hadron Collider (LHC) has garnered international attention. In addition to this singular result, the LHC may also uncover other fundamental particles, including dark matter. Much of this research is being done on data from one of the LHC experiments, the Compact Muon Solenoid (CMS). The CMS experiment was able to capture data at higher sampling frequencies than planned during the 2012 LHC operational period. The resulting data had been parked, waiting to be processed on CMS computers. While CMS has significant compute resources, by partnering with SDSC to incorporate Gordon into the CMS workflow, analysis of the parked data was completed months ahead of schedule. This allows scientists to review the results more quickly, and could guide future plans for the LHC.


Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact | 2017

SeedMe: Data Sharing Building Blocks

Amit Chourasia; David R. Nadeau; Michael L. Norman

Data sharing is essential and pervasive in scientific research. The requirements for data sharing vary as research projects mature, iterating through early designs and prototypes with a small number of collaborators and developing into publishable results and larger collaborator teams. Along the way, preliminary and transient results often need to be shared, discussed, and visualized with a quick turn-around time in order to guide the next steps of the project. Data sharing throughout this process requires that the data itself be shared, along with essential context such as descriptions, provenance, scripts, visualizations, and threaded discussions. However, current consumer-oriented data sharing solutions mainly rely on local or cloud file systems or web-based drop boxes. These mechanisms are rather basic and are largely focused on data storage for individual use rather than data collaboration; using them for scientific data sharing is cumbersome. SeedMe is a platform that enables easy sharing of transient and preliminary data for a broad research computing community by offering cyberinfrastructure as a service and a modular software stack that can be customized. SeedMe is based on the Drupal content management system as a set of building blocks, with additional PHP modules and web services clients. In this poster we present our progress on implementing a web-based modular data sharing platform that collocates shared data along with the data's context, including descriptions, discussion, light-weight visualizations, and support files. This project is an evolution of the earlier SeedMe [1, 2] project, which created prototype data sharing tools and garnered user feedback from real-world use. The new SeedMe platform is developing modular components for data sharing, light-weight visualization, collaboration, DOI registration, video encoding and playback, REST APIs, command-line data import/export tools, and more. These modules may be added to any web site based upon the widely used open-source Drupal content management system. The new SeedMe modules allow extensive customization, enabling sites to select and enhance functionality to provide features specific to a research community or a project. The SeedMe modules are widely applicable to a broad research community and will be released as a suite of open-source, extensible building blocks. With this poster we showcase current progress, along with an interactive demonstration of the project, and engage with the HPC community to get feedback.
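
To make the REST/command-line building blocks concrete, a scripted client might push a preliminary plot and its description in a single request. The sketch below is a minimal illustration only: the endpoint URL, field names, and token handling are hypothetical placeholders, not the actual SeedMe API.

```python
# Sketch: push a preliminary result to a data-sharing REST service.
# URL, field names, and the auth scheme are hypothetical, not SeedMe's API.
import requests

API_URL = "https://seedme.example.org/api/collections"   # hypothetical endpoint
API_KEY = "user-api-key"                                  # hypothetical credential

def share_result(title, description, image_path):
    """Create a collection entry and attach one visualization image."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            data={"title": title, "description": description},
            files={"file": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    info = share_result(
        title="Run 42: density slice at t=1.2",
        description="Preliminary check of shock position.",
        image_path="density_slice_0042.png",
    )
    print("Shared:", info)
```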


ieee symposium on large data analysis and visualization | 2014

SeedMe: A cyberinfrastructure for sharing results

Amit Chourasia; Mona Wong-Barnum; David R. Nadeau; Michael L. Norman

Computational simulations have become an indispensable tool in a wide variety of science and engineering investigations. Nearly all scientific computation and analyses create transient data and preliminary results, which may consist of text, binary files, and visualization images. Quick and effective assessment of these data is necessary for efficient use of the computation resources, but this is complicated when a large collaborating team is geographically dispersed and/or some team members do not have direct access to the computation resource and output data. Current methods for sharing and assessing transient data and preliminary results are cumbersome, labor intensive, and largely unsupported by useful tools and procedures. Each research team is forced to create its own scripts and ad hoc procedures to push data from system to system, and user to user, and to make quick plots, images, and videos to guide the next step in their research. These custom efforts often rely on email, ftp, and scp, despite the ubiquity of much more flexible dynamic web-based technologies and the impressive display and interaction abilities of today's mobile devices. To fill this critical gap we have developed SeedMe (Swiftly Encode, Explore and Disseminate My Experiments), a web-based architecture that enables the rapid sharing of content directly from applications running on High Performance Computing or cloud resources. SeedMe converts a slow, manual, serial, error-prone, repetitive, and redundant sharing and assessment process into a streamlined, automatic, web-accessible cyberinfrastructure. We provide an easy-to-use sharing model with granular access control and mobile device support. SeedMe provides secure input and output complementing HPC resources without compromising their security model, thereby expanding and extending their capabilities. SeedMe aims to foster rapid assessment, iteration, communication, and dissemination of transient data and preliminary results by seeding content that can be accessed via a simple collaborative web interface.
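
The "automatic sharing from a running job" idea amounts to pushing each new artifact as soon as it is written, instead of emailing or scp-ing it later. A minimal sketch of that loop follows; the "share-result" upload command is a hypothetical stand-in, not the actual SeedMe client.

```python
# Sketch: push newly written plots from an HPC job's output directory.
# "share-result" is a hypothetical upload client, not the real SeedMe CLI.
import subprocess
from pathlib import Path

def push_new_plots(output_dir, collection, already_shared):
    """Upload any plot files in output_dir that have not been shared yet."""
    for plot in sorted(Path(output_dir).glob("*.png")):
        if plot.name in already_shared:
            continue
        subprocess.run(
            ["share-result", "upload", "--collection", collection, str(plot)],
            check=True,
        )
        already_shared.add(plot.name)

if __name__ == "__main__":
    shared = set()
    push_new_plots("plots/", collection="run-042", already_shared=shared)
```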


teragrid conference | 2011

Interactive large data exploration over the wide area

Mark Hereld; Michael E. Papka; Joseph A. Insley; Michael L. Norman; Eric C. Olson; Rick Wagner

The top supercomputers typically have aggregate memories in excess of 100 TB, with simulations running on these systems producing datasets of comparable size. The size of these datasets and the speed with which they are produced define the minimum performance that modern analysis and visualization must achieve. We report on interactive visualizations of large simulations performed on Kraken at the National Institute for Computational Sciences using the parallel cosmology code Enzo, with grid sizes ranging from 1024³ to 6400³. In addition to the asynchronous rendering of over 570 timesteps of a 4096³ simulation (150 TB in total), we developed the ability to stream the rendering result to multi-panel display walls, with full interactive control of the renderer(s).
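
One piece of bookkeeping implied by multi-panel streaming is splitting each full frame into per-panel tiles. The sketch below shows that step with made-up panel counts and resolutions; it is not the pipeline's actual tiling code.

```python
# Sketch: split a full rendered frame into per-panel tiles for a display
# wall. Panel counts and per-panel resolution are made-up examples.
import numpy as np

PANEL_COLS, PANEL_ROWS = 4, 3      # e.g. a 4x3 wall of displays
PANEL_W, PANEL_H = 1920, 1080      # per-panel resolution in pixels

def tile_frame(frame):
    """Yield (col, row, tile) for each panel; frame is an (H, W, 3) RGB array."""
    for row in range(PANEL_ROWS):
        for col in range(PANEL_COLS):
            y0, x0 = row * PANEL_H, col * PANEL_W
            yield col, row, frame[y0:y0 + PANEL_H, x0:x0 + PANEL_W]

if __name__ == "__main__":
    full = np.zeros((PANEL_ROWS * PANEL_H, PANEL_COLS * PANEL_W, 3), np.uint8)
    for col, row, tile in tile_frame(full):
        print(f"panel ({col},{row}): {tile.shape}")
```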


ieee international conference on high performance computing data and analytics | 2011

Modeling early galaxies using radiation hydrodynamics

Joseph A. Insley; Rick Wagner; Robert Harkness; Daniel R. Reynolds; Michael L. Norman; Mark Hereld; Eric C. Olson; Michael E. Papka; Venkatram Vishwanath

This simulation uses a flux-limited diffusion solver to explore the radiation hydrodynamics of early galaxies, in particular the ionizing radiation created by Population III stars. At the time of this rendering, the simulation has evolved to a redshift of 3.5. The simulation volume is 11.2 comoving megaparsecs and has a uniform grid of 1024³ cells, with over 1 billion dark matter and star particles. This animation shows a combined view of the baryon density, dark matter density, radiation energy, and emissivity from this simulation. The multi-variate rendering is particularly useful because it shows both the baryonic (normal) matter and the dark matter, while the pressure and temperature variables are properties of only the baryonic matter. Visible in the gas density are bubbles, or shells, created by the radiation feedback from young stars. Seeing the bubbles from feedback provides confirmation of the physics model implemented. Features such as these are difficult to identify algorithmically, but are easily found when viewing the visualization.
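
For readers unfamiliar with the method, flux-limited diffusion (FLD) evolves the radiation energy density with a diffusion equation whose coefficient is throttled by a flux limiter so that the flux never exceeds the free-streaming limit. A generic textbook form is sketched below; the solver used here works in a cosmological (comoving) formulation with additional coupling terms, so this shows only the general shape, not the exact system solved.

```latex
% Generic flux-limited diffusion sketch for the radiation energy density E.
% kappa is the opacity, eta the emissivity, lambda(R) a commonly used
% rational flux limiter; comoving and chemistry coupling terms are omitted.
\[
\frac{\partial E}{\partial t}
  = \nabla \cdot \left( D \, \nabla E \right) - c\,\kappa\,E + \eta ,
\qquad
D = \frac{c\,\lambda(R)}{\kappa},
\qquad
R = \frac{|\nabla E|}{\kappa\,E},
\qquad
\lambda(R) = \frac{2 + R}{6 + 3R + R^{2}} .
\]
```

In optically thick gas (small R) the limiter tends to 1/3 and ordinary diffusion with coefficient c/(3κ) is recovered; in optically thin gas it falls off as 1/R, capping the flux at roughly cE.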


Archive | 2013

Direct numerical simulation of reionization in large cosmological volumes I: Numerical methods and tests

Michael L. Norman; Daniel R. Reynolds; Geoffrey C. So; Robert P. Harkness


Archive | 2012

Block preconditioning of stiff implicit models for radiative ionization in the early universe

Daniel R. Reynolds; Robert Harkness; Geoffrey C. So; Michael L. Norman


Archive | 2010

AMR MHD Simulations of Self-Gravitating Super-Alfvénic Turbulence

David C. Collins; Paolo Padoan; Michael L. Norman; Hao Xu

Collaboration


Dive into Michael L. Norman's collaborations.

Top Co-Authors

Eric J. Hallman
University of Colorado Boulder

Jack O. Burns
University of Colorado Boulder

Rick Wagner
University of California

Daniel R. Reynolds
Southern Methodist University

Joseph A. Insley
Argonne National Laboratory

Mark Hereld
Argonne National Laboratory

Michael E. Papka
Northern Illinois University