
Publication


Featured research published by Linn I. Sennott.


Technometrics | 1998

Stochastic dynamic programming and the control of queueing systems

Linn I. Sennott

Contents: Optimization Criteria; Finite Horizon Optimization; Infinite Horizon Discounted Cost Optimization; An Inventory Model; Average Cost Optimization for Finite State Spaces; Average Cost Optimization Theory for Countable State Spaces; Computation of Average Cost Optimal Policies for Infinite State Spaces; Optimization Under Actions at Selected Epochs; Average Cost Optimization of Continuous Time Processes; Appendices; Bibliography; Index.
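As a sketch of the finite horizon technique covered in the book, the following backward induction solves a toy admission-control queue. The model and all parameters (arrival and service probabilities, costs, truncation level) are hypothetical, chosen only to illustrate the recursion, and are not an example from the book.

```python
# Finite horizon value iteration for a toy queue. State i = queue length
# (truncated at N). Each stage: an arrival occurs with prob. p and, if the
# queue is nonempty, a service completes with prob. q. The controller admits
# or rejects the arrival; holding cost h per customer per stage, rejection
# penalty r per rejected arrival.

N, T = 20, 50        # truncated state space 0..N, horizon T
p, q = 0.4, 0.5      # arrival prob., service-completion prob. per stage
h, r = 1.0, 5.0      # holding cost, rejection penalty

def expected_next(V, i, admit):
    """Expected continuation value over (arrival, service) outcomes."""
    total = 0.0
    for arr, pa in ((1, p), (0, 1.0 - p)):
        for srv, ps in ((1, q), (0, 1.0 - q)):
            j = i + (arr if admit else 0) - (srv if i > 0 else 0)
            total += pa * ps * V[min(max(j, 0), N)]
    return total

V = [0.0] * (N + 1)  # terminal cost V_T = 0
for t in range(T):   # backward induction; RHS uses the previous V
    V = [min(h * i + expected_next(V, i, True),            # admit
             h * i + r * p + expected_next(V, i, False))   # reject
         for i in range(N + 1)]
```

Since the holding cost grows with the queue length and longer queues lead to stochastically longer queues, the resulting value function is nondecreasing in the state.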


Operations Research | 1989

Average Cost Optimal Stationary Policies in Infinite State Markov Decision Processes with Unbounded Costs

Linn I. Sennott

We deal with infinite state Markov decision processes with unbounded costs. Three simple conditions, based on the optimal discounted value function, guarantee the existence of an expected average cost optimal stationary policy. Sufficient conditions are the existence of a distinguished state of smallest discounted value and a single stationary policy inducing an irreducible, ergodic Markov chain for which the average cost of a first passage from any state to the distinguished state is finite. A result to verify this is also given. Two examples illustrate the ease of applying the criteria.


Operations Research Letters | 1992

Comparing recent assumptions for the existence of average optimal stationary policies

Rolando Cavazos-Cadena; Linn I. Sennott

We consider discrete time average cost Markov decision processes with countable state space and finite action sets. Conditions recently proposed by Borkar, Cavazos-Cadena, Weber and Stidham, and Sennott for the existence of an expected average cost optimal stationary policy are compared. The conclusion is that the Sennott conditions are the weakest. We also give an example for which the Sennott conditions hold but the others fail.


Probability in the Engineering and Informational Sciences | 1993

Constrained Average Cost Markov Decision Chains

Linn I. Sennott

A Markov decision chain with denumerable state space incurs two types of costs: for example, an operating cost and a holding cost. The objective is to minimize the expected average operating cost, subject to a constraint on the expected average holding cost. We prove the existence of an optimal constrained randomized stationary policy that randomizes between two stationary policies differing in at most one state. The examples treated are a packet communication system with reject option and a single-server queue with service rate control.
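The structure of the optimal policy, randomization in a single state between two stationary policies, can be illustrated on a made-up two-state chain. Everything below (transition rows, operating and holding costs, the constraint bound) is hypothetical; the code simply mixes the two actions available in state 0 and bisects on the mixing probability until the long-run average holding cost meets the constraint.

```python
# Two states; state 0 has two actions, state 1 has one. Mixing action 1 into
# state 0 with probability alpha lowers the average holding cost (at a higher
# operating cost), so we bisect on alpha to hit the holding-cost bound.

P = {(0, 0): [0.8, 0.2], (0, 1): [0.3, 0.7],   # transition rows P[(i, a)]
     (1, 0): [0.6, 0.4]}
c_op   = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 2.0}
c_hold = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 2.0}

def averages(alpha):
    """Long-run average (operating, holding) cost of the mixed policy."""
    row0 = [(1 - alpha) * P[(0, 0)][j] + alpha * P[(0, 1)][j] for j in (0, 1)]
    row1 = P[(1, 0)]
    pi = [0.5, 0.5]
    for _ in range(2000):                       # power-iterate to stationarity
        pi = [pi[0] * row0[0] + pi[1] * row1[0],
              pi[0] * row0[1] + pi[1] * row1[1]]
    op0 = (1 - alpha) * c_op[(0, 0)] + alpha * c_op[(0, 1)]
    hd0 = (1 - alpha) * c_hold[(0, 0)] + alpha * c_hold[(0, 1)]
    return (pi[0] * op0 + pi[1] * c_op[(1, 0)],
            pi[0] * hd0 + pi[1] * c_hold[(1, 0)])

BOUND = 2.5                                     # holding-cost constraint
lo, hi = 0.0, 1.0
for _ in range(60):                             # holding cost decreases in alpha
    mid = (lo + hi) / 2.0
    if averages(mid)[1] <= BOUND:
        hi = mid    # constraint met: try a smaller mixing probability
    else:
        lo = mid
alpha = hi
```

The bisection relies on the average holding cost being monotone in the mixing probability, which holds for these made-up numbers; the pure policies straddle the bound, so the constrained optimum genuinely randomizes in state 0.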


European Journal of Operational Research | 2006

Optimal dynamic assignment of a flexible worker on an open production line with specialists

Linn I. Sennott; Mark P. Van Oyen; Seyed M. R. Iravani

This paper models and analyzes serial production lines with specialists at each station and a single, cross-trained floating worker who can work at any station. We formulate Markov decision process models of K-station production lines in which (1) workers do not collaborate on the same job, and (2) two workers can work at the same task/workstation on different jobs at the same time. Our model includes holding costs, set-up costs, and set-up times at each station. We rigorously compute finite state regions of an optimal policy that are valid with an infinite state space, as well as an optimal average cost and the worker utilizations. We also perform a numerical study for lines with two and three stations. Computations and bounds insightfully expose the performance opportunity gained through capacity balancing and variability buffering.


Operations Research Letters | 1986

A new condition for the existence of optimal stationary policies in average cost Markov decision processes

Linn I. Sennott

Discrete time countable state Markov decision processes with finite decision sets and bounded costs are considered. Conditions are given under which an unbounded solution to the average cost optimality equation exists and yields an optimal stationary policy. A new form of the optimality equation is derived for the case in which every stationary policy gives rise to an ergodic Markov chain.
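On a small finite ergodic example, the average cost optimality equation can be solved numerically by relative value iteration, which subtracts the value at a reference state each sweep so the iterates stay bounded. The two-state, two-action MDP below is entirely hypothetical and serves only to show the mechanics.

```python
# Relative value iteration on a hypothetical 2-state, 2-action ergodic MDP.
# At convergence, g and h satisfy the average cost optimality equation
#   g + h(i) = min_a [ c(i,a) + sum_j P(j|i,a) h(j) ],  with h(0) = 0.

C = [[1.0, 2.0], [3.0, 0.5]]              # C[i][a] = one-stage cost
P = [[[0.9, 0.1], [0.2, 0.8]],            # P[i][a][j] = transition prob.
     [[0.5, 0.5], [0.7, 0.3]]]

h = [0.0, 0.0]
g = 0.0
for _ in range(1000):
    Th = [min(C[i][a] + sum(P[i][a][j] * h[j] for j in range(2))
              for a in range(2)) for i in range(2)]
    g = Th[0]                             # value at the reference state 0
    h = [v - Th[0] for v in Th]           # renormalize so h(0) = 0
```

Since every transition probability here is positive, every stationary policy induces an ergodic chain, which is exactly the regime where this normalized fixed-point scheme behaves well.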


Probability in the Engineering and Informational Sciences | 1991

Constrained Discounted Markov Decision Chains

Linn I. Sennott

A Markov decision chain with countable state space incurs two types of costs: an operating cost and a holding cost. The objective is to minimize the expected discounted operating cost, subject to a constraint on the expected discounted holding cost. The existence of an optimal randomized simple policy is proved. This is a policy that randomizes between two stationary policies, that differ in at most one state. Several examples from the control of discrete time queueing systems are discussed.


Annals of Operations Research | 1991

Value iteration in countable state average cost Markov decision processes with unbounded costs

Linn I. Sennott

We deal with countable state Markov decision processes with finite action sets and (possibly) unbounded costs. Assuming the existence of an expected average cost optimal stationary policy f, with expected average cost g, when can f and g be found using undiscounted value iteration? We give assumptions guaranteeing the convergence of a quantity related to ng − V_n(i), where V_n(i) is the minimum expected n-stage cost when the process starts in state i. The theory is applied to a queueing system with variable service rates and to a queueing system with variable arrival parameter.
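As a toy illustration of the undiscounted scheme (a hypothetical two-state MDP, not the paper's queueing systems): the successive differences V_{n+1}(i) − V_n(i) of value iteration settle down to the same constant in every state, and that constant estimates the optimal average cost g.

```python
# Undiscounted value iteration V_{n+1}(i) = min_a [c(i,a) + sum_j P(j|i,a) V_n(j)]
# on a made-up 2-state, 2-action ergodic MDP. The iterates V_n grow roughly
# like n*g, so the one-step differences approximate g.

C = [[2.0, 0.5], [1.0, 3.0]]              # C[i][a] = one-stage cost
P = [[[0.7, 0.3], [0.1, 0.9]],            # P[i][a][j] = transition prob.
     [[0.4, 0.6], [0.8, 0.2]]]

V = [0.0, 0.0]
for n in range(3000):
    prev = V
    V = [min(C[i][a] + sum(P[i][a][j] * prev[j] for j in range(2))
             for a in range(2)) for i in range(2)]

g_est = V[0] - prev[0]    # V_{n+1}(0) - V_n(0) approximates g
```

In the unbounded-cost countable-state setting of the paper, this convergence is exactly what needs extra assumptions; on a finite ergodic example like this one it holds without further conditions.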


Mathematics of Operations Research | 1992

Optimal Stationary Policies in General State Space Markov Decision Chains with Finite Action Sets

Robert K. Ritt; Linn I. Sennott

The result of Sennott [9] on the existence of optimal stationary policies in countable state Markov decision chains with finite action sets is generalized to arbitrary state space Markov decision chains. The assumption of finite action sets occurring in a global countable action space allows a particularly simple theoretical structure for the general state space Markov decision chain. Two examples illustrate the results. Example 1 is a system of parallel queues with stochastic work requirements, a movable server with controllable service rate, and a reject option. Example 2 is a system of parallel queues with stochastic controllable inputs, a movable server with fixed service rates, and a reject option.


Archive | 2002

Average Reward Optimization Theory for Denumerable State Spaces

Linn I. Sennott

In this chapter we deal with certain aspects of average reward optimality. It is assumed that the state space X is denumerably infinite, and that for each x ∈ X, the set A(x) of available actions is finite. It is possible to extend the theory to compact action sets, but at the expense of increased mathematical complexity. Finite action sets are sufficient for digitally implemented controls, and so we restrict our attention to this case.

Collaboration


Dive into Linn I. Sennott's collaborations.

Top Co-Authors

Pierre A. Humblet

Massachusetts Institute of Technology


Robert K. Ritt

Illinois State University


Rolando Cavazos-Cadena

Universidad Autónoma Agraria Antonio Narro
