Publications


Featured research published by Mark E. Lewis.


IEEE Transactions on Intelligent Transportation Systems | 2005

Optimal vehicle routing with real-time traffic information

Seongmoon Kim; Mark E. Lewis; Chelsea C. White

This paper examines the value of real-time traffic information to optimal vehicle routing in a nonstationary stochastic network. We present a systematic approach to aid in the implementation of transportation systems integrated with real-time information technology. We develop decision-making procedures for determining the optimal driver attendance time, optimal departure times, and optimal routing policies under time-varying traffic flows based on a Markov decision process formulation. With a numerical study carried out on an urban road network in Southeast Michigan, we demonstrate significant advantages when using this information in terms of total cost savings and vehicle usage reduction while satisfying or improving service levels for just-in-time delivery.
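
To make the formulation concrete, the sketch below solves a small finite-horizon routing MDP by backward induction over a time-expanded network. The network, horizon, and travel-time distributions are hypothetical illustrations, and the paper's model is richer (it also optimizes driver attendance and departure times and conditions on observed traffic states):

    # Minimal sketch: routing under time-varying, stochastic travel times as a
    # finite-horizon MDP solved by backward induction. All data are hypothetical.
    T = 12                     # decision epochs (e.g., 5-minute slots)
    nodes = ["A", "B", "C", "D"]
    dest = "D"
    # arcs[(u, v)](t) = list of (probability, travel time in epochs) at epoch t
    arcs = {
        ("A", "B"): lambda t: [(0.7, 1), (0.3, 3)] if t < 6 else [(1.0, 1)],
        ("A", "C"): lambda t: [(1.0, 2)],
        ("B", "D"): lambda t: [(1.0, 2)],
        ("C", "D"): lambda t: [(0.5, 1), (0.5, 4)],
    }
    LATE = 100.0               # penalty if the trip is not finished by epoch T

    # V[t][u] = minimal expected remaining travel time from node u at epoch t
    V = [{u: (0.0 if u == dest else LATE) for u in nodes} for _ in range(T + 1)]
    policy = [dict() for _ in range(T)]
    for t in reversed(range(T)):
        for u in nodes:
            if u == dest:
                continue
            best, best_arc = LATE, None
            for (a, b), dist in arcs.items():
                if a != u:
                    continue
                # expected cost = E[travel time + value at the arrival epoch]
                cost = sum(p * (d + V[min(t + d, T)][b]) for p, d in dist(t))
                if cost < best:
                    best, best_arc = cost, (a, b)
            V[t][u], policy[t][u] = best, best_arc

    print("expected time from A at t=0:", V[0]["A"], "first arc:", policy[0]["A"])

Backward induction works here because time only moves forward in the expanded state (node, epoch), so values at later epochs are already available when earlier epochs are computed.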


TOP | 2006

A Survey of Recent Results on Continuous-Time Markov Decision Processes

Xianping Guo; Onésimo Hernández-Lerma; Tomás Prieto-Rumeau; Xi-Ren Cao; Junyu Zhang; Qiying Hu; Mark E. Lewis; Ricardo Vélez

This paper is a survey of recent results on continuous-time Markov decision processes (MDPs) with unbounded transition rates, and reward rates that may be unbounded from above and from below. These results pertain to discounted and average reward optimality criteria, which are the most commonly used criteria, and also to more selective concepts, such as bias optimality and sensitive discount criteria. For concreteness, we consider only MDPs with a countable state space, but we indicate how the results can be extended to more general MDPs or to Markov games.
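
For reference, the two most common criteria discussed in the survey take the following form for a controlled Markov process x_t with action process a_t, reward rate r, and discount rate α > 0 (the notation here is generic rather than the survey's own):

    V_\alpha^\pi(x) = \mathbb{E}_x^\pi \left[ \int_0^\infty e^{-\alpha t}\, r(x_t, a_t)\, dt \right],
    \qquad
    J^\pi(x) = \liminf_{T \to \infty} \frac{1}{T}\, \mathbb{E}_x^\pi \left[ \int_0^T r(x_t, a_t)\, dt \right].

With reward rates unbounded from above and below, even making these expressions well defined requires the kinds of conditions on the transition and reward rates that the survey catalogs.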


IEEE Transactions on Intelligent Transportation Systems | 2005

State space reduction for nonstationary stochastic shortest path problems with real-time traffic information

Seongmoon Kim; Mark E. Lewis; Chelsea C. White

Routing vehicles based on real-time traffic conditions has been shown to significantly reduce travel time, and hence cost, in high-volume traffic situations. However, taking real-time traffic data and transforming them into optimal route decisions is a computational challenge, in large part due to the amount of data available that could be valuable in the route selection. The authors model the dynamic route determination problem as a Markov decision process (MDP) and present procedures for identifying traffic data having no decision-making value. Such identification can be used to reduce the state space of the MDP, thereby improving its computational tractability. This reduction is achieved by a two-step process. The first is an a priori reduction that may be performed using a stationary deterministic network with upper and lower bounds on the cost functions before the trip begins. The second step reduces the state space further on the nonstationary stochastic road network as the trip progresses. The authors demonstrate the potential computational advantages of the introduced methods based on actual data collected on a road network in southeast Michigan.
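
The a priori step can be illustrated as follows: compute optimistic and pessimistic route costs from lower and upper bounds on the arc costs, and discard any node that cannot lie on an optimal route even under the most optimistic bounds. The network and bounds below are hypothetical, and the paper's reduction operates on richer time-expanded states:

    # Sketch of a priori pruning with lower (lo) and upper (hi) bound arc costs.
    import heapq

    def dijkstra(adj, src):
        """Shortest distances from src; adj[u] = [(v, cost), ...]."""
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, c in adj.get(u, []):
                if d + c < dist.get(v, float("inf")):
                    dist[v] = d + c
                    heapq.heappush(pq, (d + c, v))
        return dist

    def reverse(adj):
        r = {}
        for u, edges in adj.items():
            for v, c in edges:
                r.setdefault(v, []).append((u, c))
        return r

    # hypothetical bounds on a stationary deterministic network from s to t
    lo = {"s": [("a", 1), ("b", 4), ("c", 5)], "a": [("b", 1), ("t", 5)],
          "b": [("t", 1)], "c": [("t", 5)]}
    hi = {"s": [("a", 2), ("b", 6), ("c", 6)], "a": [("b", 3), ("t", 9)],
          "b": [("t", 2)], "c": [("t", 6)]}

    lo_from_s = dijkstra(lo, "s")            # optimistic cost to reach each node
    lo_to_t = dijkstra(reverse(lo), "t")     # optimistic cost-to-go to t
    hi_best = dijkstra(hi, "s")["t"]         # pessimistic bound on the optimum
    kept = [u for u in lo_from_s
            if lo_from_s[u] + lo_to_t.get(u, float("inf")) <= hi_best]
    print("nodes retained for the online MDP:", kept)   # 'c' is pruned

A dominance test in this spirit, applied en route with updated information, is the idea behind the second reduction step.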


Probability in the Engineering and Informational Sciences | 2002

OPTIMAL CONTROL OF A TWO-STAGE TANDEM QUEUING SYSTEM WITH FLEXIBLE SERVERS

Hyun Soo Ahn; Izak Duenyas; Mark E. Lewis

We consider the optimal control of a two-stage tandem queueing system with two flexible servers. New jobs arrive at station 1, after which a series of two operations must be performed before they leave the system. Holding costs are incurred at rate h1 per unit time for each job at station 1 and at rate h2 per unit time for each job at station 2. The system is considered under two scenarios: the collaborative case and the noncollaborative case. In the former, the servers can collaborate to work on the same job, whereas in the latter each server must work on a different job, although the servers may work on separate jobs at the same station. We provide simple conditions under which it is optimal to allocate both servers to station 1 or 2 in the collaborative case. In the noncollaborative case, we show that the same condition as in the collaborative case guarantees the existence of an optimal policy that is exhaustive at station 1. However, the condition for exhaustive service at station 2 to be optimal does not carry over. This case is examined via a numerical study.
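
As a rough illustration of the collaborative model, the sketch below computes the minimal expected holding cost for a clearing version of the system (no further arrivals) by exact dynamic programming, assuming collaborating servers combine their rates additively; all rates and costs are hypothetical:

    # Sketch: exact DP for the collaborative clearing system (no arrivals).
    # State (x1, x2) = jobs at stations 1 and 2; action k = servers at station 1.
    from functools import lru_cache

    mu1, mu2 = 1.0, 1.5        # per-server service rates at stations 1 and 2
    h1, h2 = 2.0, 1.0          # holding cost rates
    N = 6                      # initial number of jobs at station 1

    @lru_cache(maxsize=None)
    def V(x1, x2):
        if x1 == 0 and x2 == 0:
            return 0.0
        best = float("inf")
        for k in range(3):                       # 0, 1, or 2 servers at station 1
            r1 = k * mu1 if x1 > 0 else 0.0      # station 1 completion rate
            r2 = (2 - k) * mu2 if x2 > 0 else 0.0
            R = r1 + r2
            if R == 0.0:                         # action makes no progress
                continue
            cost = (h1 * x1 + h2 * x2) / R       # holding cost until next event
            if r1:
                cost += (r1 / R) * V(x1 - 1, x2 + 1)
            if r2:
                cost += (r2 / R) * V(x1, x2 - 1)
            best = min(best, cost)
        return best

    print("minimal expected holding cost from (%d, 0): %.3f" % (N, V(N, 0)))

Recording the minimizing k across states indicates when pooling both servers at one station is optimal, which is the kind of structure the paper's conditions characterize.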


Mathematics of Operations Research | 2007

Optimality Inequalities for Average Cost Markov Decision Processes and the Stochastic Cash Balance Problem

Eugene A. Feinberg; Mark E. Lewis

For general state and action space Markov decision processes, we present sufficient conditions for the existence of solutions of the average cost optimality inequalities. These conditions also imply the convergence of both the optimal discounted cost value function and policies to the corresponding objects for the average costs per unit time case. Inventory models are natural applications of our results. We describe structural properties of average cost optimal policies for the cash balance problem: an inventory control problem where the demand may be negative and the decision-maker can produce or scrap inventory. We also show that the optimal thresholds for the finite horizon problem converge to those under the expected discounted cost criterion, and that the thresholds under the expected discounted cost criterion converge to those under the average costs per unit time criterion.
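
The average cost optimality inequality at the center of such results has the following standard form (generic notation): there exist a constant g, interpreted as the optimal average cost, and a function u such that

    g + u(x) \ge \min_{a \in A(x)} \left[ c(x,a) + \int_{\mathbb{X}} u(y)\, p(dy \mid x, a) \right] \quad \text{for all } x \in \mathbb{X},

and any stationary policy choosing a minimizing action in every state is then average cost optimal. Sufficient conditions of the kind presented here are typically verified via the vanishing discount approach, with u obtained as a limit of normalized discounted value functions.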


Queueing Systems | 2004

Optimal Pricing and Admission Control in a Queueing System with Periodically Varying Parameters

Seunghwan Yoon; Mark E. Lewis

We consider congestion control in a nonstationary queueing system. Assuming that the arrival and service rates are bounded, periodic functions of time, a Markov decision process (MDP) formulation is developed. We show that, under the infinite horizon discounted and average reward optimality criteria, the optimal pricing and admission control strategies are, for each fixed time, nondecreasing in the number of customers in the system. This extends stationary results to the nonstationary setting. Despite this structural result, the problem remains computationally intractable. We propose an easily implementable pointwise stationary approximation (PSA) to approximate the optimal policies, suggest a heuristic to improve the implementation of the PSA, and verify its usefulness via a numerical study.
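
A minimal sketch of the PSA idea for the admission-control part (pricing is omitted for brevity): at each time t, freeze the periodic rates at λ(t) and μ(t), solve the resulting stationary MDP, and use its optimal threshold at time t. The rates, reward, and costs below are hypothetical:

    # Sketch: pointwise stationary approximation for admission control.
    import math

    K, R, h = 20, 5.0, 1.0     # buffer size, admission reward, holding cost rate

    def stationary_threshold(lam, mu, iters=2000):
        """Average-reward optimal admission threshold for the frozen M/M/1/K
        admission MDP, via relative value iteration on the uniformized chain."""
        Lam = lam + mu
        v = [0.0] * (K + 1)
        for _ in range(iters):
            w = [0.0] * (K + 1)
            for x in range(K + 1):
                accept = (R + v[x + 1]) if x < K else -math.inf
                arrival = max(accept, v[x])            # accept vs. reject
                service = v[x - 1] if x > 0 else v[0]  # self-loop at empty
                w[x] = -h * x / Lam + (lam / Lam) * arrival + (mu / Lam) * service
            v = [wx - w[0] for wx in w]                # keep values relative
        for x in range(K):
            if v[x] >= R + v[x + 1]:                   # rejecting weakly better
                return x
        return K

    mu = 4.0
    for t in range(0, 24, 6):                          # a 24-hour cycle
        lam_t = 3.0 + 2.0 * math.sin(2 * math.pi * t / 24)
        print("t=%2d h: lambda=%.2f, PSA admission threshold=%d"
              % (t, lam_t, stationary_threshold(lam_t, mu)))

The paper's heuristic improves on this naive implementation; the sketch simply re-solves the frozen problem at each sampled time.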


Queueing Systems | 2011

Dynamic control of a single-server system with abandonments

Douglas G. Down; Ger Koole; Mark E. Lewis

In this paper, we discuss dynamic server control in a two-class service system with abandonments. Two models are considered. In the first case, rewards are received upon service completion, and there are no abandonment costs (other than the lost opportunity to gain rewards). In the second, holding costs per customer per unit time are accrued, and each abandonment involves a fixed cost. Both cases are considered under the discounted or average reward/cost criterion. These are extensions of the classic scheduling question (without abandonments) where it is well known that simple priority rules hold. The contributions of this paper are twofold. First, we show that the classic c–μ rule does not hold in general. An added condition on the ordering of the abandonment rates is sufficient to recover the priority rule. Counterexamples show that this condition is not necessary, but when it is violated, significant loss can occur. In the reward case, we show that the decision involves an intuitive tradeoff between gaining more rewards and avoiding idling. Second, we note that traditional solution techniques are not directly applicable. Since customers may leave between services, an interchange argument cannot be applied. Since the abandonment rates are unbounded, we cannot apply uniformization, and thus cannot use the usual discrete-time Markov decision process techniques. After formulating the problem as a continuous-time Markov decision process (CTMDP), we use sample path arguments in the reward case and a careful use of truncation in the holding cost case to yield the results. As far as we know, this is the first time that either technique has been used in conjunction with the CTMDP to show structure in a queueing control problem. The insights gained in each model are supported by a detailed numerical study.
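
The c–μ rule referred to above serves, among nonempty classes, the one with the largest c_i μ_i. A small simulation in the spirit of the paper's numerical study can compare static priority rules in a two-class system with abandonments; all rates and costs below are hypothetical, and for simplicity every customer in the system (including the one in service) is allowed to abandon:

    # Sketch: CTMC simulation of a two-class single-server queue with
    # exponential services and abandonments under a static priority rule.
    import random

    def avg_cost(priority, lam, mu, theta, hold, T=20000.0, seed=1):
        """Long-run average holding cost; priority = class served first."""
        rng = random.Random(seed)
        x = [0, 0]                 # customers of each class in the system
        t, cost = 0.0, 0.0
        while t < T:
            serve = priority if x[priority] > 0 else 1 - priority
            rates = [lam[0], lam[1],                      # arrivals
                     mu[serve] if x[serve] > 0 else 0.0,  # service completion
                     theta[0] * x[0], theta[1] * x[1]]    # abandonments
            R = sum(rates)
            dt = rng.expovariate(R)
            cost += (hold[0] * x[0] + hold[1] * x[1]) * dt
            t += dt
            u, acc = rng.random() * R, 0.0
            for i, r in enumerate(rates):
                acc += r
                if u <= acc:
                    if i < 2:
                        x[i] += 1          # arrival of class i
                    elif i == 2:
                        x[serve] -= 1      # service completion
                    else:
                        x[i - 3] -= 1      # abandonment of class i-3
                    break
        return cost / t

    lam, mu, theta, hold = (1.0, 1.0), (3.0, 2.0), (0.2, 2.0), (1.0, 1.0)
    for p in (0, 1):
        print("priority to class", p,
              "-> avg holding cost %.3f" % avg_cost(p, lam, mu, theta, hold))

Running both priorities indicates whether, for the chosen rates, the ranking suggested by c_i μ_i alone survives the abandonment dynamics.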


IEEE Transactions on Automatic Control | 2006

Dynamic allocation of reconfigurable resources in a two-stage tandem queueing system with reliability considerations

Cheng-Hung Wu; Mark E. Lewis; Michael H. Veatch

Consider a two-stage tandem queueing system with dedicated machines at each stage. Additional reconfigurable resources can be assigned to one of these two stations without setup cost or time. In a clearing system (one without external arrivals), both with and without machine failures, we show the existence of an optimal monotone policy. Moreover, when all of the machines are reliable, the switching curve defined by this policy has slope greater than or equal to -1. This continues to hold when the holding cost rate is higher at the first stage and machine failures are considered.
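
In the reliable (no failures) case, the monotone policy can be visualized by solving a small clearing instance exactly and reading off, for each queue length at station 1, the point at which the reconfigurable resource switches to station 2. The rates and costs below are hypothetical, and the flexible resource's rate is assumed to add to the dedicated machine's:

    # Sketch: optimal assignment of one reconfigurable resource in a reliable
    # two-stage tandem clearing system, and the resulting switching curve.
    from functools import lru_cache

    d1, d2 = 1.0, 1.0          # dedicated machine rates at stations 1 and 2
    f1, f2 = 1.5, 1.5          # flexible resource rate at each station
    h1, h2 = 2.0, 1.0          # holding cost rates
    N = 8

    @lru_cache(maxsize=None)
    def V(x1, x2):
        """(minimal expected holding cost to empty, optimal station for the
        flexible resource) from state (x1, x2)."""
        if x1 == 0 and x2 == 0:
            return 0.0, None
        best = (float("inf"), None)
        for a in (1, 2):
            r1 = (d1 + (f1 if a == 1 else 0.0)) if x1 > 0 else 0.0
            r2 = (d2 + (f2 if a == 2 else 0.0)) if x2 > 0 else 0.0
            R = r1 + r2
            if R == 0.0:
                continue
            c = (h1 * x1 + h2 * x2) / R
            if r1:
                c += (r1 / R) * V(x1 - 1, x2 + 1)[0]
            if r2:
                c += (r2 / R) * V(x1, x2 - 1)[0]
            best = min(best, (c, a))
        return best

    for x1 in range(1, N + 1):   # smallest x2 at which the resource moves to 2
        x2 = next((x2 for x2 in range(N + 1) if V(x1, x2)[1] == 2), None)
        print("x1=%d: flexible resource switches to station 2 at x2=%s" % (x1, x2))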


European Journal of Operational Research | 2006

Dynamic load balancing in parallel queueing systems: stability and optimal control

Douglas G. Down; Mark E. Lewis

We consider a system of parallel queues with dedicated arrival streams. At each decision epoch a decision-maker can move customers from one queue to another. The cost for moving customers consists of a fixed cost and a linear, variable cost dependent on the number of customers moved. There are also linear holding costs that may depend on the queue in which customers are stored. Under very mild assumptions, we develop stability (and instability) conditions for this system via a fluid model. Under the assumption of stability, we consider minimizing the long-run average cost. In the case of two servers, the optimal control policy is shown to prefer to store customers in the lowest-cost queue. When the interarrival and service times are assumed to be exponential, we use a Markov decision process formulation to show that, for a fixed number of customers in the system, there exists a level S such that whenever customers are moved from the high-cost queue to the low-cost queue, the number of customers moved brings the number in the low-cost queue up to S. These results lead to the development of a heuristic for the model with more than two servers.
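
The threshold structure described above suggests a simple decision rule, sketched below: if customers are moved at all, move enough to bring the low-cost queue up to level S. The trigger test comparing the moving cost with a crude holding-cost saving (via a hypothetical 'horizon' parameter) is a stand-in for the optimal trigger, which the MDP determines:

    def rebalance(x_high, x_low, S, K, c, h_high, h_low, horizon):
        """Number of customers to move from the high- to the low-cost queue.
        K = fixed moving cost, c = variable cost per customer moved,
        h_high/h_low = holding cost rates, horizon = hypothetical time over
        which the holding-cost saving is credited."""
        m = min(x_high, max(0, S - x_low))      # move up to level S, never past it
        if m == 0:
            return 0
        saving = (h_high - h_low) * m * horizon  # crude holding-cost saving
        return m if saving > K + c * m else 0

    # Example: 10 customers in the high-cost queue, 2 in the low-cost queue.
    print(rebalance(x_high=10, x_low=2, S=6, K=5.0, c=0.5,
                    h_high=3.0, h_low=1.0, horizon=2.0))   # moves 4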


IEEE Transactions on Automatic Control | 2001

A probabilistic analysis of bias optimality in unichain Markov decision processes

Mark E. Lewis; Martin L. Puterman

This paper focuses on bias optimality in unichain, finite state and action space Markov decision processes. Using relative value functions, we present methods for evaluating optimal bias; this leads to a probabilistic analysis that transforms the original reward problem into a minimum average cost problem. The result is an explanation of how and why bias implicitly discounts future rewards.
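
In standard notation (not necessarily the paper's), for a stationary policy π with gain g^π, the bias is the total deviation of expected rewards from the gain:

    h^\pi(x) = \lim_{N \to \infty} \mathbb{E}_x^\pi \left[ \sum_{t=0}^{N-1} \big( r(X_t, A_t) - g^\pi \big) \right],

with the limit taken in the Cesàro sense when the chain is periodic. A bias optimal policy maximizes h^π over the set of gain optimal policies, and the analysis described above explains the sense in which this criterion implicitly discounts rewards earned later.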

Collaboration


Dive into Mark E. Lewis's collaborations.

Top Co-Authors

Martin L. Puterman

University of British Columbia


Chelsea C. White

Georgia Institute of Technology


Hayriye Ayhan

Georgia Institute of Technology
