
Publications


Featured research published by Daniel P. Heyman.


Operations Research | 1985

Regenerative Analysis and Steady State Distributions for Markov Chains

Winfried K. Grassmann; Michael I. Taksar; Daniel P. Heyman

We apply regenerative theory to derive certain relations between steady state probabilities of a Markov chain. These relations are then used to develop a numerical algorithm to find these probabilities. The algorithm is a modification of the Gauss-Jordan method, in which all elements used in numerical computations are nonnegative; as a consequence, the algorithm is numerically stable.
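The subtraction-free elimination described above is now widely known as the GTH (Grassmann-Taksar-Heyman) algorithm. A minimal sketch, assuming a finite irreducible chain given by its transition matrix (variable names are ours, not the paper's):

```python
import numpy as np

def gth_steady_state(P):
    """Stationary distribution of a finite irreducible Markov chain via
    GTH state reduction; no subtractions occur, so every intermediate
    quantity stays nonnegative and the computation is numerically stable."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    # Elimination sweep: censor states n-1, n-2, ..., 1 out of the chain.
    for k in range(n - 1, 0, -1):
        s = P[k, :k].sum()                 # probability of leaving state k downward
        P[:k, k] /= s                      # scale the column entering state k
        P[:k, :k] += np.outer(P[:k, k], P[k, :k])   # fold k's paths into the rest
    # Back-substitution: build an unnormalized stationary vector.
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[:k] @ P[:k, k]
    return pi / pi.sum()                   # normalize so probabilities sum to 1

# Quick check on a small birth-death chain; prints (0.25, 0.5, 0.25).
print(gth_steady_state([[0.5, 0.5, 0.0],
                        [0.25, 0.5, 0.25],
                        [0.0, 0.5, 0.5]]))
```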


Operations Research | 1968

Optimal Operating Policies for M/G/1 Queuing Systems

Daniel P. Heyman

We consider the economic behavior of an M/G/1 queuing system operating with the following cost structure: a server start-up cost, a server shut-down cost, a cost per unit time when the server is turned on, and a holding cost per unit time spent in the system for each customer. We prove that for the single-server queue there is a stationary optimal policy of the form: turn the server on when n customers are present, and turn it off when the system is empty. For the undiscounted, infinite-horizon problem, an exact expression for the cost rate as a function of n and a closed-form expression for the optimal value of n are derived. When future costs are discounted, we obtain an equation for the expected discounted cost as a function of n and the interest rate, and prove that for small interest rates the optimal discounted policy is approximately the optimal undiscounted policy. We conclude by establishing the recursion relation to find the optimal (nonstationary) policy for finite-horizon problems.
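To make the undiscounted result concrete, here is a hedged sketch that evaluates the average cost rate of turning the server on at n customers and scans for the minimizer. It uses the textbook Pollaczek-Khinchine mean together with the standard observations that an n-policy adds (n-1)/2 customers on average and pays the start-up plus shut-down cost once per cycle of expected length n/(lambda(1-rho)); these are our assumptions, not formulas quoted from the paper:

```python
import math

def n_policy_cost_rate(n, lam, ES, ES2, K, h):
    """Long-run average cost of an n-policy M/G/1 queue (hedged sketch).
    lam: arrival rate; ES, ES2: first two moments of service time;
    K: start-up plus shut-down cost per cycle; h: holding cost rate."""
    rho = lam * ES                                   # utilization, assumed < 1
    L = rho + lam ** 2 * ES2 / (2 * (1 - rho))       # Pollaczek-Khinchine mean in system
    # The server-on running cost is rho per unit time regardless of n,
    # so it is a constant and omitted from the comparison.
    return h * (L + (n - 1) / 2) + K * lam * (1 - rho) / n

def optimal_n(lam, ES, ES2, K, h):
    """Square-root rule plus an integer check of its two neighbors."""
    rho = lam * ES
    n_star = math.sqrt(2 * K * lam * (1 - rho) / h)
    candidates = {max(1, math.floor(n_star)), max(1, math.ceil(n_star))}
    return min(candidates, key=lambda n: n_policy_cost_rate(n, lam, ES, ES2, K, h))

# Example: Poisson(0.8) arrivals, exponential(1) service, K = 20, h = 1.
print(optimal_n(0.8, 1.0, 2.0, 20.0, 1.0))
```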


Journal of Applied Probability | 1990

Equilibrium distribution of block-structured Markov chains with repeating rows

Winfried K. Grassmann; Daniel P. Heyman

In this paper we consider two-dimensional Markov chains with the property that, except for some boundary conditions, when the transition matrix is written in block form, the rows are identical except for a shift to the right. We provide a general theory for finding the equilibrium distribution for this class of chains. We illustrate the theory by showing how our results unify the analysis of the M/G/1 and GI/M/1 paradigms introduced by M. F. Neuts.
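To picture the repeating-row property, a transition matrix of the M/G/1 type can be written in block form as below; the block symbols B_i and A_i are illustrative names, not necessarily the paper's notation:

```latex
P =
\begin{pmatrix}
B_0 & B_1 & B_2 & B_3 & \cdots \\
A_0 & A_1 & A_2 & A_3 & \cdots \\
0   & A_0 & A_1 & A_2 & \cdots \\
0   & 0   & A_0 & A_1 & \cdots \\
\vdots &  &  & \ddots & \ddots
\end{pmatrix}
```

Away from the boundary row, each row is the previous one shifted one block to the right, which is exactly the structure the theory exploits.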


INFORMS Journal on Computing | 1993

Computation of Steady-State Probabilities for Infinite-State Markov Chains with Repeating Rows

Winfried K. Grassmann; Daniel P. Heyman

In this paper we consider Markov chains with the following properties: the transition matrix is banded and, except for some boundary conditions, when the transition matrix is written in block form, the rows are identical except for a shift to the right. Some authors have used variants of the state reduction method to solve special cases. We present a general algorithm and some of the theoretical underpinnings of these methods. In particular, we give a rigorous proof of convergence. We also provide a simple method to normalize the probabilities so that their sum is unity. We describe the connection between this new technique and the matrix-iterative methods of M. F. Neuts. The paper concludes with some numerical examples.


SIAM Journal on Algebraic and Discrete Methods | 1987

Further comparisons of direct methods for computing stationary distributions of Markov chains

Daniel P. Heyman

An algorithm for computing the stationary distribution of an irreducible Markov chain consisting of ergodic states is described in Grassmann et al. [Oper. Res., 33 (1985), pp. 1107–1116]. In this algorithm, all the arithmetic operations use only nonnegative numbers and there are no subtractions. In this paper we present numerical evidence to show that this algorithm achieves significantly greater accuracy than other algorithms described in the literature. We also describe our computational experience with large block-tridiagonal matrices.


Journal of Applied Probability | 1991

Approximating the stationary distribution of an infinite stochastic matrix

Daniel P. Heyman

We are given a Markov chain with states 0, 1, 2, …. We want a numerical approximation to the solution of the steady-state balance equations. To do this, we truncate the chain, keeping the first n states, make the resulting matrix stochastic in some convenient way, and solve the finite system. The purpose of this paper is to provide some sufficient conditions that imply that as n tends to infinity, the stationary distributions of the truncated chains converge to the stationary distribution of the given chain. Our approach is completely probabilistic, and our conditions are given in probabilistic terms. We illustrate how to verify these conditions with five examples.
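A minimal sketch of the truncation procedure, assuming a hypothetical random-walk chain (up-probability p, down-probability q) chosen only so convergence can be observed; dumping the lost mass onto the diagonal is one of the "convenient ways" of making the truncated matrix stochastic:

```python
import numpy as np

def truncated_stationary(p, q, n):
    """Stationary distribution of the n-state truncation of an infinite
    birth-death chain with P[i, i+1] = p and P[i, i-1] = q (hedged sketch)."""
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = q
        if i < n - 1:
            P[i, i + 1] = p
        P[i, i] = 1.0 - P[i].sum()       # absorb the truncated mass on the diagonal
    # Solve pi P = pi together with sum(pi) = 1 as a least-squares system.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# As n grows, the truncations approach the geometric stationary law
# pi_k = (1 - p/q) * (p/q)**k of the untruncated chain (here p/q = 0.6).
for n in (10, 20, 40):
    print(n, truncated_stationary(0.3, 0.5, n)[:3])
```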


Operations Research | 1995

On the Choice of Alternative Measures in Importance Sampling with Markov Chains

Sigrún Andradóttir; Daniel P. Heyman; Teunis J. Ott

In the simulation of Markov chains, importance sampling involves replacing the original transition matrix, say P, with a suitably chosen transition matrix Q that tends to visit the states of interest more frequently. The likelihood ratio of P relative to Q is an important random variable in the importance sampling method. It always has expectation one, and for any interesting pair of transition matrices P and Q, there is a sample path length that causes the likelihood ratio to be close to zero with probability close to one. This may cause the variance of the importance sampling estimator to be larger than the variance of the traditional estimator. We develop sufficient conditions for ensuring the tightness of the distribution of the logarithm of the likelihood ratio for all sample path lengths, and we show that when these conditions are satisfied, the likelihood ratio is approximately lognormally distributed with expected value one. These conditions can be used to eliminate some choices of the alternative transition matrix Q that are likely to result in a variance increase. We also show that if the likelihood ratio is to remain well behaved for all sample path lengths, the alternative transition matrix Q has to approach the original transition matrix P as the sample path length increases. The practical significance of this result is that importance sampling can be difficult to apply successfully in simulations that involve long sample paths.
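A minimal sketch of the setup, assuming small illustrative matrices P and Q (ours, not the paper's): paths are simulated under Q and each is reweighted by the likelihood ratio of P relative to Q.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_estimate(P, Q, start, horizon, reward, n_paths=10_000):
    """Importance-sampling estimate of E_P[reward(path)] using paths drawn
    under the alternative transition matrix Q (hedged sketch)."""
    n = P.shape[0]
    total = 0.0
    for _ in range(n_paths):
        x, lr = start, 1.0
        path = [x]
        for _ in range(horizon):
            y = rng.choice(n, p=Q[x])
            lr *= P[x, y] / Q[x, y]     # running likelihood ratio; E_Q[lr] = 1
            x = y
            path.append(x)
        total += reward(path) * lr
    return total / n_paths

# Illustrative chains: Q pushes the walk toward the higher states so the
# event {path ends in state 2} is seen more often than under P.
P = np.array([[0.9, 0.1, 0.0],
              [0.5, 0.4, 0.1],
              [0.0, 0.5, 0.5]])
Q = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.4, 0.4],
              [0.0, 0.3, 0.7]])
print(is_estimate(P, Q, start=0, horizon=20, reward=lambda path: path[-1] == 2))
```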


ACM Transactions on Modeling and Computer Simulation | 1993

Variance reduction through smoothing and control variates for Markov chain simulations

Sigrún Andradóttir; Daniel P. Heyman; Teunis J. Ott

We consider the simulation of a discrete Markov chain that is so large that numerical solution of the steady-state balance equations cannot be done with available computers. We propose smoothing methods to obtain variance reduction when simulation is used to estimate a function of a subset of the steady-state probabilities. These methods attempt to make each transition provide information about the probabilities of interest. We give an algorithm that converges to the optimal smoothing operator, and some guidelines for picking the parameters of this algorithm. Analytical arguments are used to justify our procedures, and they are buttressed by the results of a numerical example.
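One way to read the smoothing idea is as conditional Monte Carlo: when estimating a stationary probability pi_j, replace the indicator that the next state is j by its conditional expectation P[x, j], so every transition contributes information about j. This is a simplified reading under our assumptions, not the paper's full optimal-smoothing algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_pi_j(P, j, steps=100_000, x=0):
    """Crude vs. smoothed estimators of the stationary probability of state j
    from one simulated trajectory (hedged sketch)."""
    crude = smoothed = 0.0
    for _ in range(steps):
        y = rng.choice(P.shape[0], p=P[x])
        crude += (y == j)           # raw indicator estimator
        smoothed += P[x, j]         # conditioned estimator: uses the whole row
        x = y
    return crude / steps, smoothed / steps

P = np.array([[0.5, 0.5], [0.25, 0.75]])   # illustrative two-state chain
print(estimate_pi_j(P, j=0))               # both estimates approach 1/3
```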


INFORMS Journal on Computing | 1989

Numerical Solution of Linear Equations Arising in Markov Chain Models

Daniel P. Heyman; Alyson Reeves

We examine several methods for numerically solving linear equations that arise in the study of Markov chains. These methods are Gaussian elimination, state-reduction, closed-form matrix solutions, and some hybrid methods. The emphasis is on moments of first-passage times and times to absorption. We compare the methods on the basis of accuracy and computation time. We conclude that state-reduction is the most accurate and that the matrix solutions have the least computation time.
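As a small example of the closed-form matrix route the paper compares, mean times to absorption follow from the fundamental matrix N = (I - T)^{-1}, where T is the transient-to-transient block of an absorbing chain; the 3-state T below is a made-up illustration:

```python
import numpy as np

# Transient-to-transient block of a hypothetical absorbing chain; each row
# sums to less than 1, the deficit being the one-step absorption probability.
T = np.array([[0.5, 0.3, 0.0],
              [0.2, 0.5, 0.2],
              [0.0, 0.3, 0.4]])

N = np.linalg.inv(np.eye(3) - T)   # fundamental matrix: expected visit counts
m = N @ np.ones(3)                 # mean steps to absorption from each state
print(m)
```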


Journal of Applied Probability | 1995

A decomposition theorem for infinite stochastic matrices

Daniel P. Heyman

We prove that every infinite-state stochastic matrix P, say, that is irreducible and consists of positive-recurrent states can be represented in the form I - P = (A - I)(B - S), where A is strictly upper-triangular, B is strictly lower-triangular, and S is diagonal. Moreover, the elements of A are expected values of random variables that we will specify, and the elements of B and S are probabilities of events that we will specify. The decomposition can be used to obtain steady-state probabilities, mean first-passage times and the fundamental matrix.

Collaboration


Dive into Daniel P. Heyman's collaborations.

Top Co-Authors


Sigrún Andradóttir

Georgia Institute of Technology


T. V. Lakshman

Telcordia Technologies
