Publication


Featured research published by Philip Heidelberger.


Operations Research | 1983

Simulation Run Length Control in the Presence of an Initial Transient

Philip Heidelberger; Peter D. Welch

This paper studies the estimation of the steady state mean of an output sequence from a discrete event simulation. It considers the problem of the automatic generation of a confidence interval of prespecified width when there is an initial transient present. It explores a procedure based on Schruben's Brownian bridge model for the detection of nonstationarity and a spectral method for estimating the variance of the sample mean. The procedure is evaluated empirically for a variety of output sequences. The performance measures considered are bias, confidence interval coverage, mean confidence interval width, mean run length, and mean amount of deleted data. If the output sequence contains a strong transient, then inclusion of a test for stationarity in the run length control procedure results in point estimates with lower bias, narrower confidence intervals, and shorter run lengths than when no check for stationarity is performed. If the output sequence contains no initial transient, then the performance measures of the procedure with a stationarity test are only slightly degraded from those of the procedure without such a test. If the run length is short relative to the extent of the initial transient, the stationarity tests may not be powerful enough to detect the transient, resulting in a procedure with unreliable point and interval estimates.
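
The run-length control idea can be sketched compactly. The code below is a simplified, hypothetical rendering rather than the procedure from the paper: simulate(k) is assumed to be a user-supplied callable returning the next k observations, batch means stand in for the spectral variance estimator, the stationarity check is a Cramér-von Mises statistic of Schruben's standardized time series with an approximate 0.46 critical value, and the deletion schedule is a crude 10% prefix drop.

```python
import numpy as np

def batch_means_ci(y, n_batches=20, z=1.96):
    """Confidence interval for the mean from non-overlapping batch means."""
    n = len(y) // n_batches * n_batches
    batches = y[:n].reshape(n_batches, -1).mean(axis=1)
    half = z * batches.std(ddof=1) / np.sqrt(n_batches)
    return y[:n].mean(), half

def schruben_cvm_stat(y, n_batches=20):
    """Cramer-von Mises statistic of Schruben's standardized time series.
    Values well above ~0.46 (approximate 95% point of the limit law)
    suggest the sequence still contains an initial transient."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    running = np.cumsum(y) / np.arange(1, n + 1)        # running means Y_bar_k
    k = np.arange(1, n + 1)
    # sigma2 estimates the variance parameter lim n*Var(mean); batch means
    # stand in for the spectral estimator used in the paper.
    batch = n // n_batches
    bm = y[: batch * n_batches].reshape(n_batches, batch).mean(axis=1)
    sigma2 = batch * bm.var(ddof=1)
    b = k * (y.mean() - running) / np.sqrt(n * sigma2)  # standardized time series
    return float(np.mean(b ** 2))

def run_length_control(simulate, increment=5000, rel_width=0.05, max_obs=500_000):
    """Grow the run until the relative half-width target is met; while the
    stationarity check rejects, delete the oldest 10% of the data and extend."""
    data = np.array([])
    generated = 0
    while generated < max_obs:
        data = np.concatenate([data, simulate(increment)])
        generated += increment
        if schruben_cvm_stat(data) > 0.46:   # transient still detected
            data = data[len(data) // 10:]
            continue
        mean, half = batch_means_ci(data)
        if half <= rel_width * abs(mean):    # relative-width stopping rule
            return mean, half, generated
    mean, half = batch_means_ci(data)
    return mean, half, generated
```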


ACM Transactions on Modeling and Computer Simulation | 1995

Fast simulation of rare events in queueing and reliability models

Philip Heidelberger

This paper surveys efficient techniques for estimating, via simulation, the probabilities of certain rare events in queueing and reliability models. The rare events of interest are long waiting times or buffer overflows in queueing systems, and system failure events in reliability models of highly dependable computing systems. The general approach to speeding up such simulations is to accelerate the occurrence of the rare events by using importance sampling. In importance sampling, the system is simulated using a new set of input probability distributions, and unbiased estimates are recovered by multiplying the simulation output by a likelihood ratio. Our focus is on describing asymptotically optimal importance sampling techniques. Using asymptotically optimal importance sampling, the number of samples required to get accurate estimates grows slowly compared to the rate at which the probability of the rare event approaches zero. In practice, this means that run lengths can be reduced by many orders of magnitude, compared to standard simulation. In certain cases, asymptotically optimal importance sampling results in estimates having bounded relative error. With bounded relative error, only a fixed number of samples are required to get accurate estimates, no matter how rare the event of interest is. The queueing systems studied include simple queues (e.g., GI/GI/1), Jackson networks, discrete time queues with multiple autocorrelated arrival processes that arise in the analysis of Asynchronous Transfer Mode communications switches, and tree structured networks of such switches. Both Markovian and non-Markovian reliability models are treated.
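
A minimal, self-contained illustration of the importance-sampling mechanism described above (a textbook example, not one of the estimators surveyed in the paper): estimating the probability that a random walk with negative drift ever crosses a level b, which by duality is the tail of the steady-state waiting time in a single-server queue. The exponential tilt flips the drift sign, so the rare crossing becomes certain, and the likelihood ratio at the first passage time reduces to exp(-2*mu*S_tau).

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_prob_is(b, mu=0.5, n_paths=10_000):
    """Estimate P(sup_n S_n >= b) for a random walk with N(-mu, 1) increments,
    using importance sampling: simulate with drift +mu (exponential tilt with
    parameter 2*mu) and weight each path by the likelihood ratio at first passage."""
    estimates = np.empty(n_paths)
    for i in range(n_paths):
        s = 0.0
        while s < b:
            s += rng.normal(mu, 1.0)          # tilted increments: positive drift
        estimates[i] = np.exp(-2.0 * mu * s)  # likelihood ratio at first passage
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_paths)

est, se = first_passage_prob_is(b=10.0)
print(f"P(max >= 10) ~= {est:.3e} +/- {1.96 * se:.1e}")
```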


Ibm Journal of Research and Development | 2005

Overview of the Blue Gene/L system architecture

Alan Gara; Matthias A. Blumrich; Dong Chen; George Liang-Tai Chiu; Paul W. Coteus; Mark E. Giampapa; Ruud A. Haring; Philip Heidelberger; Dirk Hoenicke; Gerard V. Kopcsay; Thomas A. Liebsch; Martin Ohmacht; Burkhard Steinmacher-Burow; Todd E. Takken; Pavlos M. Vranas

The Blue Gene®/L computer is a massively parallel supercomputer based on IBM system-on-a-chip technology. It is designed to scale to 65,536 dual-processor nodes, with a peak performance of 360 teraflops. This paper describes the project objectives and provides an overview of the system architecture that resulted. We discuss our application-based approach and rationale for a low-power, highly integrated design. The key architectural features of Blue Gene/L are introduced in this paper: the link chip component and five Blue Gene/L networks, the PowerPC® 440 core and floating-point enhancements, the on-chip and off-chip distributed memory system, the node- and system-level design for high reliability, and the comprehensive approach to fault isolation.
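
As a back-of-the-envelope check on the quoted peak (a sketch using the widely reported 700 MHz clock and four floating-point operations per cycle per core from the double FPU; those two figures are assumptions not stated in this abstract):

```python
# Hypothetical sanity check of the Blue Gene/L peak performance figure.
nodes = 65536            # dual-processor compute nodes (from the abstract)
cores_per_node = 2
clock_hz = 700e6         # assumed PowerPC 440 clock, not stated in the abstract
flops_per_cycle = 4      # assumed: double FPU, two fused multiply-adds per cycle
peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"peak ~ {peak / 1e12:.0f} TF")   # ~367 TF, consistent with the quoted 360 TF
```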


Communications of The ACM | 1981

A spectral method for confidence interval generation and run length control in simulations

Philip Heidelberger; Peter D. Welch

This paper discusses a method for placing confidence limits on the steady state mean of an output sequence generated by a discrete event simulation. An estimate of the variance is obtained by estimating the spectral density at zero frequency. This estimation is accomplished through a regression analysis of the logarithm of the averaged periodogram. By batching the output sequence the storage and computational requirements of the method remain low. A run length control procedure is developed that uses the relative width of the generated confidence interval as a stopping criterion. Experimental results for several queueing models of an interactive computer system are reported.
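
In rough outline, and heavily simplified relative to the paper (the specific regression scheme and bias corrections of the actual method are omitted, and the function below is made up for illustration), the spectral estimate can be sketched as: average adjacent periodogram ordinates, fit a quadratic to the logarithm of the averages near zero frequency, and exponentiate the extrapolated value to obtain p(0), so that Var(sample mean) is approximately p(0)/n.

```python
import numpy as np

def spectral_ci(y, n_regression_points=25, z=1.96):
    """Simplified spectral confidence interval for the steady-state mean.

    Estimates p(0), the spectral density at zero frequency, by fitting a
    quadratic to the log of pair-averaged periodogram ordinates near f = 0.
    The bias corrections used in the paper are omitted in this sketch.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    centered = y - y.mean()
    per = np.abs(np.fft.rfft(centered)) ** 2 / n   # periodogram at f_j = j/n
    per = per[1:]                                  # drop f = 0 (zero after centering)
    m = len(per) // 2
    avg = 0.5 * (per[0:2 * m:2] + per[1:2 * m:2])  # average adjacent pairs
    freqs = (np.arange(m) * 2 + 1.5) / n           # midpoint frequency of each pair
    k = min(n_regression_points, m)
    coeffs = np.polyfit(freqs[:k], np.log(avg[:k]), deg=2)
    p0 = np.exp(np.polyval(coeffs, 0.0))           # extrapolated density at f = 0
    half = z * np.sqrt(p0 / n)
    return y.mean(), half
```

In the paper this estimate is applied to batched output, which is what keeps the storage and computational requirements low.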


Ibm Journal of Research and Development | 2005

Blue Gene/L torus interconnection network

Narasimha R. Adiga; Matthias A. Blumrich; Dong Chen; Paul W. Coteus; Alan Gara; Mark E. Giampapa; Philip Heidelberger; Sarabjeet Singh; Burkhard Steinmacher-Burow; Todd E. Takken; Mickey Tsao; Pavlos M. Vranas

The main interconnect of the massively parallel Blue Gene®/L is a three-dimensional torus network with dynamic virtual cut-through routing. This paper describes both the architecture and the microarchitecture of the torus and a network performance simulator. Both simulation results and hardware measurements are presented.
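
Not taken from the paper, but as a small illustration of the topology: the minimal hop count between two nodes of a torus is computed per dimension, with the wraparound link allowing travel in either direction.

```python
def torus_hops(a, b, dims):
    """Minimal hop count between nodes a and b on a torus with the given dimensions.
    Each dimension can be traversed either way thanks to the wraparound link."""
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

# Example: a 32 x 32 x 64 torus (65,536 nodes, the full Blue Gene/L configuration)
print(torus_hops((0, 0, 0), (20, 31, 40), dims=(32, 32, 64)))  # 12 + 1 + 24 = 37
```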


international symposium on microarchitecture | 2012

The IBM Blue Gene/Q Compute Chip

Ruud A. Haring; Martin Ohmacht; Thomas W. Fox; Michael Karl Gschwind; David L. Satterfield; Krishnan Sugavanam; Paul W. Coteus; Philip Heidelberger; Matthias A. Blumrich; Robert W. Wisniewski; Alan Gara; George Liang-Tai Chiu; Peter A. Boyle; Norman H. Christ; Changhoan Kim

Blue Gene/Q aims to build a massively parallel high-performance computing system out of power-efficient processor chips, resulting in power-efficient, cost-efficient, and floor-space-efficient systems. Focusing on reliability during design helps with scaling to large systems and lowers the total cost of ownership. This article examines the architecture and design of the Compute chip, which combines processors, memory, and communication functions on a single chip.


Mathematical Finance | 2002

Portfolio Value-at-Risk with Heavy-Tailed Risk Factors

Paul Glasserman; Philip Heidelberger; Perwez Shahabuddin

This paper develops efficient methods for computing portfolio value-at-risk (VAR) when the underlying risk factors have a heavy-tailed distribution. In modeling heavy tails, we focus on multivariate t distributions and some extensions thereof. We develop two methods for VAR calculation that exploit a quadratic approximation to the portfolio loss, such as the delta-gamma approximation. In the first method, we derive the characteristic function of the quadratic approximation and then use numerical transform inversion to approximate the portfolio loss distribution. Because the quadratic approximation may not always yield accurate VAR estimates, we also develop a low variance Monte Carlo method. This method uses the quadratic approximation to guide the selection of an effective importance sampling distribution that samples risk factors so that large losses occur more often. Variance is further reduced by combining the importance sampling with stratified sampling. Numerical results on a variety of test portfolios indicate that large variance reductions are typically obtained. Both methods developed in this paper overcome difficulties associated with VAR calculation with heavy-tailed risk factors. The Monte Carlo method also extends to the problem of estimating the conditional excess, sometimes known as the conditional VAR.
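
The sketch below shows only the plain Monte Carlo baseline implied by this setup, not the paper's transform inversion or importance/stratified sampling: sample risk-factor changes from a multivariate t distribution, evaluate the delta-gamma quadratic approximation to the loss, and read VaR off the empirical quantile. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_gamma_var_mc(delta, gamma, sigma, nu, alpha=0.99, n_samples=200_000):
    """Plain Monte Carlo VaR under a delta-gamma loss approximation with
    multivariate-t risk factors (illustrative baseline only).

    delta : first-order sensitivities (vector)
    gamma : second-order sensitivities (symmetric matrix)
    sigma : scale matrix of the risk-factor changes
    nu    : degrees of freedom of the multivariate t
    """
    d = len(delta)
    z = rng.multivariate_normal(np.zeros(d), sigma, size=n_samples)
    chi2 = rng.chisquare(nu, size=n_samples)
    ds = z / np.sqrt(chi2 / nu)[:, None]               # multivariate t samples
    change = ds @ delta + 0.5 * np.einsum("ni,ij,nj->n", ds, gamma, ds)
    loss = -change                                      # delta-gamma portfolio loss
    return np.quantile(loss, alpha)

# Hypothetical two-factor portfolio
delta = np.array([1.0, -0.5])
gamma = np.array([[-0.2, 0.05], [0.05, -0.1]])
sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
print(f"99% VaR ~= {delta_gamma_var_mc(delta, gamma, sigma, nu=5):.3f}")
```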


IEEE Transactions on Computers | 1992

A unified framework for simulating Markovian models of highly dependable systems

Ambuj Goyal; Perwez Shahabuddin; Philip Heidelberger; Victor F. Nicola; Peter W. Glynn

The authors present a unified framework for simulating Markovian models of highly dependable systems. It is shown that a variance reduction technique called importance sampling can be used to speed up the simulation by many orders of magnitude over standard simulation. This technique can be combined very effectively with regenerative simulation to estimate measures such as steady-state availability and mean time to failure. Moreover, it can be combined with conditional Monte Carlo methods to quickly estimate transient measures such as reliability, expected interval availability, and the distribution of interval availability. The authors show the effectiveness of these methods by using them to simulate large dependability models. They discuss how these methods can be implemented in a software package to compute both transient and steady-state measures simultaneously from the same sample run.
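
A toy version of the failure-biasing idea (an illustration, not the general framework of the paper): three identical components with failure rate lam and a single repairman with rate mu, where the quantity of interest is the probability that, starting just after the first failure, all three components go down before the system returns to the all-up state. Importance sampling forces each transition of the embedded chain to be a failure with probability 0.5 and corrects with a likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(2)

def fail_before_repair_is(lam=1e-3, mu=1.0, bias=0.5, n_paths=100_000):
    """Toy failure-biasing estimator (not the paper's general framework).

    Three identical components, failure rate lam each, one repairman with rate mu.
    Starting just after the first failure, estimate the probability that all three
    components are down before the system returns to the all-up state.  Under the
    sampling measure every embedded transition is a failure with probability
    `bias`; the likelihood ratio restores unbiasedness.
    """
    estimates = np.empty(n_paths)
    for i in range(n_paths):
        k, lr = 1, 1.0                        # k = number of failed components
        while 0 < k < 3:
            p_fail = (3 - k) * lam / ((3 - k) * lam + mu)   # true failure probability
            if rng.random() < bias:           # biased step: failure branch
                lr *= p_fail / bias
                k += 1
            else:                             # biased step: repair branch
                lr *= (1 - p_fail) / (1 - bias)
                k -= 1
        estimates[i] = lr if k == 3 else 0.0
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_paths)

est, se = fail_before_repair_is()
print(f"P(all fail before full repair) ~= {est:.3e} +/- {1.96 * se:.1e}")
```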


ieee international conference on high performance computing data and analytics | 2011

The IBM Blue Gene/Q interconnection network and message unit

Dong Chen; Noel A. Eisley; Philip Heidelberger; Robert M. Senger; Yutaka Sugawara; Sameer Kumar; Valentina Salapura; David L. Satterfield; Burkhard Steinmacher-Burow; Jeffrey J. Parker

This is the first paper describing the IBM Blue Gene/Q interconnection network and message unit. The Blue Gene/Q system is the third generation in the IBM Blue Gene line of massively parallel supercomputers. The Blue Gene/Q architecture can be scaled to 20 PF/s and beyond. The network and the highly parallel message unit, which provides the functionality of a network interface, are integrated onto the same chip as the processors and cache memory, and consume 8% of the chip's area. For better application scalability and performance, we describe new routing algorithms and new techniques to parallelize the injection and reception of packets in the network interface. Measured hardware performance results are also presented.


international conference on supercomputing | 2005

Optimization of MPI collective communication on BlueGene/L systems

George S. Almasi; Philip Heidelberger; Charles J. Archer; Xavier Martorell; C. Christopher Erway; José E. Moreira; Burkhard Steinmacher-Burow; Yili Zheng

BlueGene/L is currently the world's fastest supercomputer. It consists of a large number of low-power dual-processor compute nodes interconnected by high-speed torus and collective networks. Because compute nodes do not have shared memory, MPI is the natural programming model for this machine. The BlueGene/L MPI library is a port of MPICH2. In this paper we discuss the implementation of MPI collectives on BlueGene/L. The MPICH2 implementation of MPI collectives is based on point-to-point communication primitives. This turns out to be suboptimal for a number of reasons. Machine-optimized MPI collectives are necessary to harness the performance of BlueGene/L. We discuss these optimized MPI collectives, describing the algorithms and presenting performance results measured with targeted micro-benchmarks on real BlueGene/L hardware with up to 4096 compute nodes.
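
For contrast with the machine-optimized collectives discussed in the paper, the sketch below shows the kind of point-to-point schedule a generic MPI library builds: a binomial-tree broadcast that completes in ceil(log2(P)) rounds. It is purely illustrative and unrelated to the actual BlueGene/L implementation.

```python
def binomial_broadcast_schedule(num_ranks, root=0):
    """Return the (sender, receiver) pairs per round for a binomial-tree
    broadcast, the classic point-to-point algorithm used by generic MPI
    libraries.  Completes in ceil(log2(num_ranks)) rounds."""
    have = {root}           # ranks that already hold the data
    rounds = []
    while len(have) < num_ranks:
        targets = (r for r in range(num_ranks) if r not in have)
        sends = list(zip(sorted(have), targets))   # every holder sends to one new rank
        have.update(r for _, r in sends)
        rounds.append(sends)
    return rounds

for i, sends in enumerate(binomial_broadcast_schedule(8)):
    print(f"round {i}: {sends}")
# round 0: [(0, 1)]
# round 1: [(0, 2), (1, 3)]
# round 2: [(0, 4), (1, 5), (2, 6), (3, 7)]
```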
