Publication


Featured research published by Adam Shwartz.


Advances in Applied Probability | 1989

The fork-join queue and related systems with synchronization constraints: stochastic ordering and computable bounds

François Baccelli; Armand M. Makowski; Adam Shwartz

A simple queueing system, known as the fork-join queue, is considered, with the basic performance measure defined as the delay between the fork and join dates. Simple lower and upper bounds are derived for some of the statistics of this quantity. They are obtained, in both the transient and steady-state regimes, by stochastically comparing the original system to other queueing systems with a simpler structure than the original, yet with identical stability characteristics. In steady state, under renewal assumptions, the computation reduces to standard GI/GI/1 calculations and the bounds constitute a first sizing-up of system performance. These bounds can also be used to show that, for the homogeneous fork-join queue under appropriate assumptions, the moments of the system response time grow logarithmically in the number of parallel processors, provided the service time distribution has a rational Laplace–Stieltjes transform. The bounding arguments combine ideas from the theory of stochastic ordering with the notion of associated random variables, and are of independent interest for studying various other queueing systems with synchronization constraints. The paper is an abridged version of a more complete report on the matter [6].
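For illustration only (not from the paper): a minimal Python sketch that estimates the fork-to-join delay of a homogeneous fork-join queue by simulation, assuming Poisson arrivals and i.i.d. exponential service times at each parallel queue. The function name and parameters are hypothetical; the paper derives analytic bounds rather than simulating.

import random

def simulate_fork_join_delay(num_jobs=200000, num_queues=4,
                             arrival_rate=0.5, service_rate=1.0, seed=1):
    # Each arriving job forks into num_queues tasks, one per single-server
    # FIFO queue; the job's delay is the time from its arrival (fork) until
    # its slowest task finishes (join).  Waiting times per queue follow the
    # Lindley recursion W <- max(0, W + S - A).
    rng = random.Random(seed)
    waits = [0.0] * num_queues
    total_delay = 0.0
    for _ in range(num_jobs):
        interarrival = rng.expovariate(arrival_rate)   # gap to the next arrival
        slowest = 0.0
        for q in range(num_queues):
            service = rng.expovariate(service_rate)
            slowest = max(slowest, waits[q] + service)              # task sojourn time
            waits[q] = max(0.0, waits[q] + service - interarrival)  # Lindley update
        total_delay += slowest
    return total_delay / num_jobs

print(simulate_fork_join_delay())   # estimated mean fork-to-join delay

Running this for increasing num_queues gives a rough empirical check of the logarithmic growth of the mean response time mentioned in the abstract.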


Mathematics of Operations Research | 1996

Constrained Discounted Dynamic Programming

Eugene A. Feinberg; Adam Shwartz

This paper deals with constrained optimization of Markov Decision Processes with a countable state space, compact action sets, continuous transition probabilities, and upper semicontinuous reward functions. The objective is to maximize the expected total discounted reward for one reward function, under several inequality constraints on similar criteria with other reward functions. Suppose a feasible policy exists for a problem with M constraints. We prove two results on the existence and structure of optimal policies. First, we show that there exists a randomized stationary optimal policy which requires at most M actions more than a nonrandomized stationary one. This result is known for several particular cases. Second, we prove that there exists an optimal policy which is (i) stationary nonrandomized from some step onward, (ii) randomized Markov before this step, but the total number of actions which are added by randomization is at most M, and (iii) the total number of actions that are added by nonstationarity is at most M. We also establish Pareto optimality of policies from the two classes described above for multi-criteria problems. We describe an algorithm to compute optimal policies with properties (i)-(iii) for constrained problems. The policies that satisfy properties (i)-(iii) have the pleasing aesthetic property that the amount of randomization they require over any trajectory is restricted by the number of constraints. In contrast, a randomized stationary policy may require an infinite number of randomizations over time.
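As a concrete, hedged illustration of the finite-state special case (not the paper's construction, which covers countable state spaces): the standard occupation-measure linear program for a discounted MDP with one cost constraint, written with numpy and scipy. All names and parameters are illustrative. A basic optimal solution of this LP randomizes in at most as many states as there are constraints, in line with the structural result quoted above.

import numpy as np
from scipy.optimize import linprog

def solve_constrained_discounted_mdp(P, r, c, d, beta, mu):
    # P[a] is the (n_states x n_states) transition matrix of action a,
    # r[x, a] the reward to maximize, c[x, a] a cost whose expected total
    # discounted value must not exceed d, beta the discount factor and
    # mu the initial distribution.  Decision variables are the discounted
    # occupation measures rho(x, a).
    n_states, n_actions = r.shape
    n_vars = n_states * n_actions

    # Balance constraints: sum_a rho(y,a) - beta * sum_{x,a} P(y|x,a) rho(x,a) = mu(y)
    A_eq = np.zeros((n_states, n_vars))
    for y in range(n_states):
        for x in range(n_states):
            for a in range(n_actions):
                j = x * n_actions + a
                A_eq[y, j] = (1.0 if x == y else 0.0) - beta * P[a][x, y]
    res = linprog(-r.reshape(n_vars),                   # maximize the discounted reward
                  A_ub=c.reshape(1, n_vars), b_ub=[d],  # discounted cost constraint
                  A_eq=A_eq, b_eq=mu,
                  bounds=(0, None), method="highs")
    rho = res.x.reshape(n_states, n_actions)
    totals = rho.sum(axis=1, keepdims=True)
    # Randomized stationary policy pi(a|x) proportional to rho(x, a);
    # states with zero occupation measure get an arbitrary uniform choice.
    pi = np.where(totals > 0, rho / np.maximum(totals, 1e-12), 1.0 / n_actions)
    return pi, -res.fun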


Mathematics of Operations Research | 1995

Constrained Markov decision models with weighted discounted rewards

Eugene A. Feinberg; Adam Shwartz

This paper deals with constrained optimization of Markov Decision Processes. Both the objective function and the constraints are sums of standard discounted rewards, each with a different discount factor. Such models arise, e.g., in production and in applications involving multiple time scales. We prove that if a feasible policy exists, then there exists an optimal policy which is (i) stationary (nonrandomized) from some step onward, and (ii) randomized Markov before this step, but the total number of actions which are added by randomization is bounded by the number of constraints. Optimality of such policies for multi-criteria problems is also established. These new policies have the pleasing aesthetic property that the amount of randomization they require over any trajectory is restricted by the number of constraints. This result is new even for constrained optimization with a single discount factor, where the optimality of randomized stationary policies is known. However, a randomized stationary policy may require an infinite number of randomizations over time.
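For concreteness, a minimal Python sketch (not from the paper) of how the weighted discounted criterion is evaluated for a fixed stationary policy: each reward stream is evaluated with its own discount factor and the results are added. The example chain, names and numbers are purely illustrative; the paper's contribution concerns the structure of policies that optimize this criterion under constraints, which this sketch does not attempt.

import numpy as np

def weighted_discounted_value(P_pi, rewards, betas, mu):
    # Evaluates sum_k E_mu[ sum_t beta_k^t r_k(X_t) ] for a fixed stationary
    # policy with induced transition matrix P_pi, using the standard identity
    # v_k = (I - beta_k * P_pi)^{-1} r_k for each discount factor beta_k.
    n = P_pi.shape[0]
    total = 0.0
    for r_k, beta_k in zip(rewards, betas):
        v_k = np.linalg.solve(np.eye(n) - beta_k * P_pi, r_k)
        total += mu @ v_k
    return total

# Two-state chain, two reward streams discounted at different rates.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(weighted_discounted_value(P, [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                                [0.9, 0.5], np.array([1.0, 0.0])))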


Mathematics of Operations Research | 1994

Markov Decision Models with Weighted Discounted Criteria

Eugene A. Feinberg; Adam Shwartz

We consider a discrete-time Markov Decision Process with infinite horizon. The criterion to be maximized is the sum of a number of standard discounted rewards, each with a different discount factor. Situations in which such criteria arise include modeling investments, production, projects of different durations, systems with multiple criteria, and some axiomatic formulations of multi-attribute preference theory. We show that for this criterion, for some positive ε, there need not exist an ε-optimal randomized stationary strategy, even when the state and action sets are finite. However, ε-optimal Markov nonrandomized strategies and optimal Markov strategies exist under weak conditions. We exhibit ε-optimal Markov strategies which are stationary from some time onward. When both state and action spaces are finite, there exists an optimal Markov strategy with this property. We provide an explicit algorithm for the computation of such strategies and give a description of the set of optimal strategies.


SIAM Journal on Control and Optimization | 1984

An Invariant Measure Approach to the Convergence of Stochastic Approximations with State Dependent Noise

Harold J. Kushner; Adam Shwartz

A new method is presented for quickly getting the ODE (ordinary differential equation) associated with the asymptotic properties of the stochastic approximation X_{n+1} = X_n + a_n f(X_n, \zeta_n) (or the projected algorithm for the constrained problem). Either a_n \to 0, or a_n ...
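As a toy illustration of this iteration (with independent rather than state-dependent noise, so it does not exercise the invariant-measure machinery of the paper), a minimal Robbins-Monro sketch in Python; all names and parameters are hypothetical.

import random

def robbins_monro(theta=2.0, num_steps=100000, seed=0):
    # Iterates X_{n+1} = X_n + a_n f(X_n, zeta_n) with gains a_n = 1/(n+1)
    # and f(x, zeta) = zeta - x, where zeta_n is a noisy observation with
    # mean theta.  The associated mean ODE is xdot = theta - x, whose unique
    # stable equilibrium theta is the limit of the iterates.
    rng = random.Random(seed)
    x = 0.0
    for n in range(num_steps):
        a_n = 1.0 / (n + 1)
        zeta = rng.gauss(theta, 1.0)
        x += a_n * (zeta - x)
    return x

print(robbins_monro())   # close to theta = 2.0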


Conference on Decision and Control | 1986

Estimation and optimal control for constrained Markov chains

Dye-Jyun Ma; Armand M. Makowski; Adam Shwartz


Archive | 2000

Constrained Markov Games: Nash Equilibria

Eitan Altman; Adam Shwartz


IEEE Transactions on Automatic Control | 1999

Constrained dynamic programming with two discount factors: applications and an algorithm

Eugene A. Feinberg; Adam Shwartz


Annals of Operations Research | 1991

Sensitivity of constrained Markov decision processes

Eitan Altman; Adam Shwartz


Annals of Operations Research | 1991

Adaptive control of constrained Markov chains: criteria and policies

Eitan Altman; Adam Shwartz

Collaboration


Dive into Adam Shwartz's collaborations.

Top Co-Authors

Nahum Shimkin
Technion – Israel Institute of Technology

Rami Atar
Technion – Israel Institute of Technology

Eitan Altman
French Institute for Research in Computer Science and Automation

Ad Ridder
VU University Amsterdam

Moshe Sidi
Technion – Israel Institute of Technology

Yair Carmon
Technion – Israel Institute of Technology