Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Brian Friesen is active.

Publication


Featured research published by Brian Friesen.


The Astrophysical Journal | 2014

Near-Infrared Line Identification In Type Ia Supernovae During The Transitional Phase

Brian Friesen; E. Baron; John P. Wisniewski; Jerod T. Parrent; R. C. Thomas; Timothy R. Miller; G. H. Marion

We present near-infrared synthetic spectra of a delayed-detonation hydrodynamical model and compare them to observed spectra of four normal Type Ia supernovae ranging from day +56.5 to day +85. This is the epoch during which supernovae are believed to be undergoing the transition from the photospheric phase, where spectra are characterized by line scattering above an optically thick photosphere, to the nebular phase, where spectra consist of optically thin emission from forbidden lines. We find that most spectral features in the near-infrared can be accounted for by permitted lines of Fe II and Co II. In addition, we find that [Ni II] fits the emission feature near 1.98 μm, suggesting that a substantial mass of 58Ni exists near the center of the ejecta in these objects, arising from nuclear burning at high density.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

Evaluating and optimizing the NERSC workload on Knights Landing

Taylor Barnes; Brandon Cook; Jack Deslippe; Douglas W. Doerfler; Brian Friesen; Yun He; Thorsten Kurth; Tuomas Koskela; Mathieu Lobet; Tareq M. Malas; Leonid Oliker; Andrey Ovsyannikov; Abhinav Sarje; Jean-Luc Vay; Henri Vincenti; Samuel Williams; Pierre Carrier; Nathan Wichmann; Marcus Wagner; Paul R. C. Kent; Christopher Kerr; John M. Dennis

NERSC has partnered with 20 representative application teams to evaluate performance on the Xeon-Phi Knights Landing architecture and develop an application-optimization strategy for the greater NERSC workload on the recently installed Cori system. In this article, we present early case studies and summarized results from a subset of the 20 applications highlighting the impact of important architecture differences between the Xeon-Phi and traditional Xeon processors. We summarize the status of the applications and describe the greater optimization strategy that has formed.
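A recurring theme in optimizing for Knights Landing is exposing enough data parallelism for its wide vector units. As a language-agnostic illustration of that kind of transformation (a generic example, not code from the paper), the same computation written element-by-element versus in whole-array form:

```python
import numpy as np

def axpy_scalar(a, x, y):
    # Element-by-element loop: little opportunity for SIMD execution.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def axpy_vector(a, x, y):
    # Whole-array form: maps naturally onto wide vector units (or BLAS).
    return a * np.asarray(x) + np.asarray(y)

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert np.allclose(axpy_scalar(2.0, x, y), axpy_vector(2.0, x, y))
```

The two functions compute the same result; the array form is the shape compilers and numerical libraries can vectorize, which is one of the optimization patterns the NESAP effort emphasized.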


Monthly Notices of the Royal Astronomical Society | 2015

Spectral models for early time SN 2011fe observations

E. Baron; Peter A. Hoeflich; Brian Friesen; M. Sullivan; E. Y. Hsiao; Richard S. Ellis; Avishay Gal-Yam; D. A. Howell; P. Nugent; Inma Dominguez; Kevin Krisciunas; Mark M. Phillips; Nicholas B. Suntzeff; L. Wang; Richard C. Thomas

We use observed UV through near-IR spectra to examine whether SN 2011fe can be understood in the framework of Branch-normal Type Ia supernovae (SNe Ia) and to examine its individual peculiarities. As a benchmark, we use a delayed-detonation model with a progenitor metallicity of Z_⊙/20. We study the sensitivity of features to variations in progenitor metallicity, the outer density profile, and the distribution of radioactive nickel. Metallicity variations in the progenitor have a relatively small effect on the synthetic spectra. We also find that the abundance stratification of SN 2011fe closely resembles that of a delayed-detonation model with a transition density that has been fit to other Branch-normal SNe Ia. At early times, the model photosphere is formed in material with velocities that are too high, indicating that the photosphere recedes too slowly or that SN 2011fe has a lower specific energy in the outer ≈0.1 M_⊙ than does the model. We discuss several explanations for the discrepancies. Finally, we examine variations in both the spectral energy distribution and in the colours due to variations in the progenitor metallicity, which suggests that colours are only weak indicators of progenitor metallicity in the particular explosion model that we have studied. We do find that the flux in the U band at maximum light is significantly higher in the solar-metallicity model than in the lower-metallicity model, and that the lower-metallicity model matches the observed spectrum much better.


Monthly Notices of the Royal Astronomical Society | 2016

Comparative analysis of SN 2012dn optical spectra: Days -14 to +114

Jerod T. Parrent; D. A. Howell; Robert A. Fesen; S. Parker; Federica B. Bianco; Benjamin E. P. Dilday; David J. Sand; S. Valenti; Jozsef Vinko; P. Berlind; Peter M. Challis; D. Milisavljevic; Nathan Edward Sanders; G. H. Marion; J. C. Wheeler; Peter J. Brown; M. L. Calkins; Brian Friesen; Robert P. Kirshner; Tyler A. Pritchard; Robert Michael Quimby; P. W. A. Roming

SN 2012dn is a super-Chandrasekhar-mass candidate in a purportedly normal spiral (SAcd) galaxy, and poses a challenge for theories of Type Ia supernova diversity. Here we utilize the fast and highly parametrized spectrum synthesis tool, SYNAPPS, to estimate relative expansion velocities of species inferred from optical spectra obtained with six facilities. As with previous studies of normal SNe Ia, we find that both unburned carbon and intermediate-mass elements are spatially coincident within the ejecta near and below 14,000 km s⁻¹. Although the upper limit on SN 2012dn's peak luminosity is comparable to that of some of the most luminous normal SNe Ia, we find that a progenitor mass exceeding ∼1.6 M_⊙ is not strongly favoured by leading merger models, since these models do not accurately predict spectroscopic observations of SN 2012dn and more normal events. In addition, a comparison of light curves and host-galaxy masses for a sample of literature and Palomar Transient Factory SNe Ia reveals a diverse distribution of SN Ia subtypes in which carbon-rich material remains unburned in some instances. Such events include SNe 1991T, 1997br, and 1999aa, where trace signatures of C III at optical wavelengths are presumably detected.


Computational Astrophysics and Cosmology | 2016

In situ and in-transit analysis of cosmological simulations

Brian Friesen; Ann S. Almgren; Zarija Lukić; Gunther H. Weber; Dmitriy Morozov; Vincent E. Beckner; Marcus S. Day

Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent ‘offline’ analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own (‘in situ’). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other ‘sidecar’ group, which post-processes it while the simulation continues (‘in-transit’). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
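The in-transit workflow hinges on splitting the MPI ranks into a simulation group and a ‘sidecar’ analysis group. A minimal sketch of that partitioning logic in plain Python (the group sizes and the helper name `split_ranks` are illustrative, not from the paper):

```python
def split_ranks(n_ranks, n_sidecar):
    """Assign each MPI rank a role: the last n_sidecar ranks form the
    'sidecar' analysis group, the rest run the simulation."""
    if not 0 < n_sidecar < n_ranks:
        raise ValueError("need at least one rank in each group")
    roles = {}
    for rank in range(n_ranks):
        roles[rank] = "sidecar" if rank >= n_ranks - n_sidecar else "simulation"
    return roles

# Example: 8 ranks total, 2 reserved for in-transit analysis.
roles = split_ranks(8, 2)
sim_group = [r for r, role in roles.items() if role == "simulation"]
sidecar_group = [r for r, role in roles.items() if role == "sidecar"]
```

With mpi4py, each rank would pass its role as the `color` argument to `Comm.Split` to obtain the two disjoint communicators that then proceed asynchronously.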


Concurrency and Computation: Practice and Experience | 2018

Preparing NERSC users for Cori, a Cray XC40 system with Intel many integrated cores

Yun He; Brandon Cook; Jack Deslippe; Brian Friesen; Richard A. Gerber; Rebecca Hartman-Baker; Alice Koniges; Thorsten Kurth; Stephen Leak; Woo-Sun Yang; Zhengji Zhao; E. Baron; Peter H. Hauschildt

The newest NERSC supercomputer Cori is a Cray XC40 system consisting of 2,388 Intel Xeon Haswell nodes and 9,688 Intel Xeon Phi "Knights Landing" (KNL) nodes. Compared to the Xeon-based clusters NERSC users are familiar with, optimal performance on Cori requires consideration of KNL mode settings; process, thread, and memory affinity; fine-grain parallelization; vectorization; and use of the high-bandwidth MCDRAM memory. This paper describes our efforts preparing NERSC users for KNL through the NERSC Exascale Science Application Program, Web documentation, and user training. We discuss how we configured the Cori system for usability and productivity, addressing programming concerns, batch system configurations, and default KNL cluster and memory modes. System usage data, job completion analysis, programming and running jobs issues, and a few successful user stories on KNL are presented.
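Affinity tuning for a hybrid MPI+OpenMP code usually starts from simple arithmetic: how many ranks and threads per rank fill one node. A small illustrative helper (the 68-core, 4-hardware-thread figures match Cori's KNL nodes; the function itself is a sketch, not from the paper):

```python
def knl_layout(ranks_per_node, cores_per_node=68, hw_threads_per_core=4,
               use_hyperthreads=False):
    """Return (threads_per_rank, idle_logical_cpus) for a pure
    MPI+OpenMP layout on one node."""
    threads_per_core = hw_threads_per_core if use_hyperthreads else 1
    logical_cpus = cores_per_node * threads_per_core
    threads_per_rank = logical_cpus // ranks_per_node
    idle = logical_cpus - threads_per_rank * ranks_per_node
    return threads_per_rank, idle

# 4 ranks per node, 1 thread per core: 17 OpenMP threads per rank.
print(knl_layout(4))  # -> (17, 0)
```

In practice one would set OMP_NUM_THREADS from this figure and pin processes with the batch system's CPU-binding options, which is part of the affinity guidance the paper describes.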


Journal of Physics: Conference Series | 2017

Extreme I/O on HPC for HEP using the Burst Buffer at NERSC

Wahid Bhimji; Debbie Bard; Kaylan Burleigh; Chris Daley; Steve Farrell; Markus Fasel; Brian Friesen; L. Gerhardt; Jialin Liu; Peter E. Nugent; Dave Paul; Jeff Porter; Vakho Tsulaia

In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O-intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

Analyzing Performance of Selected NESAP Applications on the Cori HPC System

Thorsten Kurth; William Arndt; Taylor A. Barnes; Brandon Cook; Jack Deslippe; Douglas W. Doerfler; Brian Friesen; Yun He; Tuomas Koskela; Mathieu Lobet; Tareq M. Malas; Leonid Oliker; Andrey Ovsyannikov; Samuel Williams; Woo-Sun Yang; Zhengji Zhao

NERSC has partnered with over 20 representative application developer teams to evaluate and optimize their workloads on the Intel® Xeon Phi™ Knights Landing processor. In this paper, we present a summary of this two-year effort and the lessons we learned in that process. We analyze the overall performance improvements of these codes, quantifying the impacts of both Xeon Phi™ architectural features and code optimization on application performance. We show that the architectural advantage, i.e. the average speedup of optimized code on KNL vs. optimized code on Haswell, is about 1.1×. The average speedup obtained through application optimization, i.e. comparing optimized vs. original codes on KNL, is about 5×.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

Galactos: computing the anisotropic 3-point correlation function for 2 billion galaxies

Brian Friesen; Md. Mostofa Ali Patwary; Brian Austin; Nadathur Satish; Zachary Slepian; Narayanan Sundaram; Deborah Bard; Daniel J. Eisenstein; Jack Deslippe; Pradeep Dubey; Prabhat

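Galactos computes the anisotropic 3-point correlation function for billions of galaxies. As a toy illustration of what a 3PCF estimator counts (a brute-force triplet histogram over pairwise-separation bins, nothing like the fast spherical-harmonic algorithm the paper scales up; all names here are illustrative):

```python
import itertools
import math

def triplet_histogram(points, bins):
    """Count point triplets by their sorted pairwise-separation bin indices.

    points: iterable of (x, y, z) tuples; bins: ascending bin edges.
    Brute force O(N^3) -- only for tiny illustrative data sets.
    """
    def bin_of(r):
        for i in range(len(bins) - 1):
            if bins[i] <= r < bins[i + 1]:
                return i
        return None  # separation falls outside the binning

    hist = {}
    for a, b, c in itertools.combinations(points, 3):
        idx = (bin_of(math.dist(a, b)),
               bin_of(math.dist(b, c)),
               bin_of(math.dist(a, c)))
        if None in idx:
            continue  # skip triplets with any separation out of range
        key = tuple(sorted(idx))
        hist[key] = hist.get(key, 0) + 1
    return hist
```

The real difficulty, which Galactos addresses, is that this naive triple loop is hopeless at the 2-billion-galaxy scale, motivating the reformulation into pair counts with spherical-harmonic expansions.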


Proceedings of the Second Annual PGAS Applications Workshop | 2017

Performance portability of an intermediate-complexity atmospheric research model in coarray Fortran

Damian W. I. Rouson; Ethan D. Gutmann; Alessandro Fanfarillo; Brian Friesen


Collaboration


Dive into Brian Friesen's collaborations.

Top Co-Authors

E. Baron (University of Oklahoma)
Jack Deslippe (Lawrence Berkeley National Laboratory)
R. C. Thomas (Lawrence Berkeley National Laboratory)
Brandon Cook (Lawrence Berkeley National Laboratory)
D. A. Howell (University of California)
Peter E. Nugent (Lawrence Berkeley National Laboratory)
Thorsten Kurth (Lawrence Berkeley National Laboratory)
Tuomas Koskela (Lawrence Berkeley National Laboratory)
Yun He (Lawrence Berkeley National Laboratory)