Publication


Featured research published by Donald J. Holmgren.


The Astronomical Journal | 1998

The Sloan Digital Sky Survey Photometric Camera

James E. Gunn; Michael A. Carr; C. Rockosi; M. Sekiguchi; K. Berry; Brian R. Elms; E. de Haas; Željko Ivezić; Gillian R. Knapp; Robert H. Lupton; George Pauls; R. Simcoe; R. Hirsch; D. Sanford; Shu I. Wang; D. G. York; Frederick H. Harris; J. Annis; L. Bartozek; William N. Boroski; Jon Bakken; M. Haldeman; Stephen M. Kent; Scott Holm; Donald J. Holmgren; D. Petravick; Angela Prosapio; Ron Rechenmacher; Mamoru Doi; Masataka Fukugita

We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays: a photometric array that uses 30 2048 × 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm², and an astrometric array that uses 24 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground in the time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each such that two scans cover a filled stripe 2.5° wide. This paper presents engineering and technical details of the camera.
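
As a quick consistency check on the numbers quoted above (plain arithmetic, not a figure taken from the paper), the effective imaging area follows from the chip geometry:

```latex
% Each photometric CCD: 2048 x 2048 pixels at 24 micrometers per pixel.
% Side length: 2048 * 24 um ~ 49.2 mm, so each chip images ~24.2 cm^2.
A_{\mathrm{chip}} = (2048 \times 24\,\mu\mathrm{m})^2 \approx 24.2\ \mathrm{cm}^2,
\qquad
A_{\mathrm{array}} = 30\, A_{\mathrm{chip}} \approx 725\ \mathrm{cm}^2
```

which agrees with the quoted effective imaging area of roughly 720 cm².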


The Astronomical Journal | 1999

High-Redshift Quasars Found in Sloan Digital Sky Survey Commissioning Data

Xiaohui Fan; Michael A. Strauss; Donald P. Schneider; James E. Gunn; Robert H. Lupton; Brian Yanny; Scott F. Anderson; John Anderson; James Annis; Neta A. Bahcall; Jon Bakken; Steven Bastian; Eileen Berman; William N. Boroski; Charlie Briegel; John W. Briggs; J. Brinkmann; Michael A. Carr; Patrick L. Colestock; A. J. Connolly; James H. Crocker; István Csabai; Paul C. Czarapata; John Eric Davis; Mamoru Doi; Brian R. Elms; Michael L. Evans; Glenn R. Federwitz; Joshua A. Frieman; Masataka Fukugita

We present photometric and spectroscopic observations of 15 high-redshift quasars (z > 3.6) discovered from ~140 deg² of five-color (u, g, r, i, and z) imaging data taken by the Sloan Digital Sky Survey (SDSS) during its commissioning phase. The quasars are selected by their distinctive colors in SDSS multicolor space. Four of the quasars have redshifts higher than 4.6 (z = 4.63, 4.75, 4.90, and 5.00, the latter being the highest redshift quasar yet known). In addition, two previously known z > 4 objects were recovered from the data. The quasars all have i* < 20 and have luminosities comparable to that of 3C 273. The spectra of the quasars show features similar to those of previously known high-redshift quasars: strong, broad emission lines and substantial absorption blueward of the Lyα emission line. Although the photometric accuracy and image quality fail to meet the final survey requirements, our success rate for identifying high-redshift quasars (17 quasars from 27 candidates) is much higher than that of previous multicolor surveys. Moreover, the number of high-redshift quasars found is in close accord with the number density inferred from previous surveys.
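
To illustrate the kind of multicolor selection the abstract describes, here is a minimal sketch; the function name and every threshold are invented placeholders, not the actual SDSS commissioning cuts:

```python
# Hypothetical sketch of color-cut candidate selection. The thresholds
# below are illustrative placeholders, NOT the SDSS selection criteria.
def is_highz_candidate(u: float, g: float, r: float, i: float) -> bool:
    """Flag objects whose colors resemble z > 3.6 quasars.

    High-redshift quasars are very red in u-g and g-r because the
    Lyman-alpha forest suppresses flux blueward of the Lya line, yet
    they stay relatively blue in r-i, separating them from most stars.
    """
    return (u - g) > 2.0 and (g - r) > 1.0 and (r - i) < 0.6 and i < 20.0
```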


arXiv: High Energy Physics - Lattice | 2005

PC Clusters for Lattice QCD

Donald J. Holmgren

In the last several years, tightly coupled PC clusters have become widely applied, cost-effective resources for lattice gauge computations. This paper discusses the practice of building such clusters, in particular balanced design requirements. I review and quantify the improvements over time of key performance parameters and of the overall price-to-performance ratio. Applying these trends, together with technology forecasts from computer equipment manufacturers, I predict the range of price-to-performance for lattice codes expected in the next several years.
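
One standard way to state the "balanced design" requirement (a generic roofline-style bound, not a formula taken from the paper) is that sustained performance is capped by whichever of compute and data movement runs out first:

```latex
% F = peak floating-point rate (flop/s), B = memory or network
% bandwidth (byte/s), I = arithmetic intensity of the lattice
% kernel (flop/byte). A balanced cluster keeps F and B*I comparable.
P_{\mathrm{sustained}} \;\le\; \min\bigl(F,\; B \cdot I\bigr)
```

Lattice Dirac-operator kernels have low arithmetic intensity, so memory and network bandwidth, rather than raw flops, typically dominate price-to-performance, consistent with the balanced-design emphasis above.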


IEEE International Conference on eScience | 2008

Lattice QCD Workflows: A Case Study

Luciano Piccoli; James B. Kowalkowski; James N. Simone; Xian-He Sun; Hui Jin; Donald J. Holmgren; Nirmal Seenu; Amitoj Singh

This paper discusses the application of existing workflow management systems to a real-world science application, lattice QCD (LQCD). Typical workflows and the execution environment used in production are described. Requirements for the LQCD production system are discussed. The workflow management systems Askalon and Swift were tested by implementing the LQCD workflows and were evaluated against the requirements. We report our findings and outline future work.
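
For concreteness, here is a minimal sketch of the dependency structure in a typical LQCD campaign; the stage names are generic illustrations, and real production workflows (as implemented in Askalon or Swift) carry many more tasks and parameters:

```python
# Illustrative LQCD workflow as a dependency graph (stage -> prerequisites).
workflow = {
    "generate_gauge_configs": [],                        # Monte Carlo ensemble
    "compute_propagators":    ["generate_gauge_configs"],
    "contract_correlators":   ["compute_propagators"],
    "analyze_results":        ["contract_correlators"],
}

def topological_order(dag):
    """Return the stages in an order that respects every dependency."""
    done, order = set(), []

    def visit(stage):
        if stage in done:
            return
        for dep in dag[stage]:
            visit(dep)
        done.add(stage)
        order.append(stage)

    for stage in dag:
        visit(stage)
    return order

print(topological_order(workflow))
# ['generate_gauge_configs', 'compute_propagators',
#  'contract_correlators', 'analyze_results']
```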


IEEE Conference on Local Computer Networks | 2011

G-NetMon: A GPU-accelerated network performance monitoring system for large scale scientific collaborations

Wenji Wu; Phil DeMar; Donald J. Holmgren; Amitoj Singh; R. Pordes

We have prototyped a GPU-accelerated network performance monitoring system, called G-NetMon, to support large-scale scientific collaborations at Fermilab. The system exploits the data parallelism that exists within network flow data to provide fast analysis of bulk data movement between Fermilab and collaboration sites. Experiments demonstrate that G-NetMon can rapidly detect sub-optimal bulk data movements.
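
The data parallelism mentioned above is per-record independence of flow data; the sketch below shows the core per-site aggregation, with vectorized NumPy standing in for the GPU kernel (the record fields are assumptions, not G-NetMon's actual schema):

```python
import numpy as np

# Sketch of the flow-record aggregation that G-NetMon parallelizes.
# The structured dtype and field names are illustrative assumptions.
flows = np.zeros(1_000_000, dtype=[("site_id", "i4"),
                                   ("bytes", "i8"),
                                   ("duration_s", "f8")])

def per_site_rate(flows: np.ndarray, n_sites: int) -> np.ndarray:
    """Aggregate transfer rate per collaboration site.

    Every flow record contributes independently, which is why the
    analysis maps naturally onto thousands of GPU threads.
    """
    total_bytes = np.bincount(flows["site_id"], weights=flows["bytes"],
                              minlength=n_sites)
    total_time = np.bincount(flows["site_id"], weights=flows["duration_s"],
                             minlength=n_sites)
    return total_bytes / np.maximum(total_time, 1e-9)   # bytes per second
```

A site whose aggregated rate falls well below its historical baseline would be a candidate sub-optimal bulk data movement.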


Proceedings of the Second Workshop on Scalable Algorithms for Large-Scale Systems | 2011

Layout-aware scientific computing: a case study using MILC

Jun He; Jim Kowalkowski; Marc Paterno; Donald J. Holmgren; James N. Simone; Xian-He Sun

Nowadays, high performance computers have more cores and nodes than ever before. Computation is spread out among them, leading to more inter-node communication. For this reason, communication can easily become the bottleneck of a system and limit its scalability. The layout of an application on a computer is the key factor in preserving communication locality and reducing its cost. In this paper, we propose a simple model to optimize the layout for scientific applications by minimizing inter-node communication cost. The model takes into account the latency and bandwidth of the network and associates them with the dominant layout variables of the application. We take MILC as an example and analyze its communication patterns. According to our experimental results, the model developed for MILC predicts performance with satisfactory accuracy, leading to up to a 31% performance improvement.
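
The cost model described above can be summarized in the standard latency-bandwidth (alpha-beta) form; the paper's exact parameterization may differ:

```latex
% alpha = per-message latency (s), beta = inverse bandwidth (s/byte),
% m_k = size of message k; msgs(L) = inter-node messages induced by
% layout L. The optimal layout minimizes total communication cost.
T_{\mathrm{comm}}(L) = \sum_{k \in \mathrm{msgs}(L)} \left(\alpha + \beta\, m_k\right),
\qquad
L^{*} = \arg\min_{L}\, T_{\mathrm{comm}}(L)
```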


IEEE NPSS Real Time Conference | 1999

Consumer-server/logger system for the CDF experiment

M. Shimojima; Ben Kilminster; K. S. McFarland; A. Vaiciulis; Donald J. Holmgren

The level-3 trigger system for the CDF experiment runs on a farm of Linux PCs. Events that pass this trigger system are collected and logged to disk by a logger process running on an SGI Origin 200 GIGAchannel server. The system must support data logging at 75 Hz for 250 KB events. In addition, some fraction of events are sent to remote consumer processes for online monitoring. The Consumer-Server/Logger system behaves like an intelligent hardware hub, maintaining connectivity among the large number (~200) of Level-3 farm nodes, the consumer processes, and the logging hardware. The farm nodes and the consumer processes are networked using multiple Fast Ethernet interfaces, while the disk subsystem of 0.5-1 TB capacity is connected via dual Fibre Channel arbitrated loops. Different functions of the Consumer-Server/Logger are implemented as distinct state machines to keep each section of the code as simple and robust as possible. In this paper we describe the first implementation of the system and present initial performance results from tests conducted at CDF.
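
The quoted requirements fix the sustained logging bandwidth directly (plain arithmetic on the numbers above):

```latex
75\ \mathrm{Hz} \times 250\ \mathrm{KB/event} = 18.75\ \mathrm{MB/s}
```

a rate the dual Fibre Channel loops must sustain to disk while the same data fans in from ~200 farm nodes over Fast Ethernet.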


Journal of Computational Science | 2013

Layout-aware scientific computing: A case study using the MILC code

Jun He; Jim Kowalkowski; Marc Paterno; Donald J. Holmgren; James N. Simone; Xian-He Sun

Nowadays, high performance computers have more cores and nodes than ever before. Computation is spread out among them, leading to higher communication cost than before. For this reason, communication can easily become the bottleneck of a system and limit its scalability. The layout of an application on a computer is the key factor in preserving communication locality and reducing its cost. In this paper, we propose a straightforward model to optimize the layout for scientific applications by minimizing inter-node communication cost. The model takes into account the latency and bandwidth of the network and associates them with the dominant layout variables of the application. We take the MILC code as an example and analyze its communication patterns. According to our experimental results, the model developed for the MILC code predicts performance with satisfactory accuracy, leading to up to a 31% performance improvement.


Archive | 2004

Lattice QCD clusters at Fermilab

Donald J. Holmgren; Paul B. Mackenzie; Amitoj Singh; Jim Simone

As part of the DOE SciDAC National Infrastructure for Lattice Gauge Computing project, Fermilab builds and operates production clusters for lattice QCD simulations. This paper will describe these clusters. The design of lattice QCD clusters requires careful attention to balancing memory bandwidth, floating point throughput, and network performance. We will discuss our investigations of various commodity processors, including Pentium 4E, Xeon, Opteron, and PPC970. We will also discuss our early experiences with the emerging Infiniband and PCI Express architectures. Finally, we will present our predictions and plans for future clusters.


Journal of Physics: Conference Series | 2012

Fermilab multicore and GPU-accelerated clusters for lattice QCD

Donald J. Holmgren; N Seenu; James N. Simone; Amitoj Singh

As part of the DOE LQCD-ext project, Fermilab designs, deploys, and operates dedicated high performance clusters for lattice QCD (LQCD) computations. We describe the design of these clusters, their performance, the benchmarking processes used to select the hardware, and the techniques used to handle their NUMA architecture. We also discuss the design and performance of a GPU-accelerated cluster that Fermilab deployed in January 2012. On these clusters, the use of multicore processors with increasing numbers of cores has contributed to a steady decrease in price/performance for these calculations over the last decade. In the last several years, GPU acceleration has led to further decreases in price/performance for ported applications.
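
One common way to handle NUMA on such multicore nodes is explicit core pinning, so that each process allocates from its local memory domain; below is a minimal sketch under that assumption (the abstract does not specify the exact technique Fermilab used):

```python
import os

# Minimal sketch of NUMA-aware core pinning (Linux). The binding layout
# is an assumption for illustration, not the documented Fermilab scheme.
def pin_rank_to_cores(rank: int, cores_per_rank: int = 4) -> None:
    """Bind this process to a contiguous block of cores so that its
    memory allocations stay on the local NUMA domain."""
    first = rank * cores_per_rank
    os.sched_setaffinity(0, range(first, first + cores_per_rank))
```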

Collaboration


Dive into Donald J. Holmgren's collaboration.

Top Co-Authors


Xian-He Sun

Illinois Institute of Technology


Christoph Paus

Massachusetts Institute of Technology
