Héctor Jasso-Fuentes
CINVESTAV
Publications
Featured research published by Héctor Jasso-Fuentes.
Stochastic Analysis and Applications | 2009
Héctor Jasso-Fuentes; Onésimo Hernández-Lerma
This article deals with the expected average reward (a.k.a. ergodic reward) and sensitive discount criteria for a general class of Markov diffusion processes. We give conditions under which: (1) average reward optimality and strong −1-discount optimality are equivalent; (2) strong 0-discount optimality implies bias optimality; and (3) bias optimality implies 0-discount optimality. Moreover, under additional hypotheses, (4) bias optimality also implies strong 0-discount optimality. Thus, combined with previous results that guarantee average and bias optimality, we ensure that strong −1-discount and strong 0-discount optimality hold.
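For orientation, the objects behind these criteria can be sketched as follows (the notation is ours, not quoted from the paper): with discounted reward and long-run average reward
\[
V_\alpha(x,\pi)=E_x^\pi\!\int_0^\infty e^{-\alpha t}\,r(x(t),\pi)\,dt,
\qquad
J(x,\pi)=\liminf_{T\to\infty}\frac{1}{T}\,E_x^\pi\!\int_0^T r(x(t),\pi)\,dt,
\]
a policy \(\pi^*\) is m-discount optimal (m = −1, 0) if \(\liminf_{\alpha\downarrow 0}\alpha^{-m}\,[V_\alpha(x,\pi^*)-V_\alpha(x,\pi)]\ge 0\) for every admissible policy \(\pi\) and every state x; the "strong" variants treated in the paper refine this comparison.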
International Journal of Control | 2014
Héctor Jasso-Fuentes; José Daniel López-Barrientos
In this paper, we propose an application of the so-called games against nature to solve an ergodic control problem governed by a general class of Markov diffusion processes whose coefficients depend on an unknown, non-observable parameter. To this end, we assume that the values of the parameter are chosen through 'actions' of a player opposing the controller (nature). The problem then reduces to finding an optimal control for the controller given that nature has chosen its best strategy; such a control is also known as the worst-case optimal control. Our analysis is based on the dynamic programming technique, showing, among other facts, the existence of classical (twice differentiable) solutions of the so-called Hamilton–Jacobi–Bellman equation. We also provide an example on economic welfare to illustrate our results.
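As a rough sketch of the dynamic programming equation involved (notation assumed here, not taken from the paper), with controls u, nature's parameter values \(\theta\), reward rate r, drift b and diffusion matrix a, the ergodic worst-case problem leads to a max–min HJB (Isaacs-type) equation of the form
\[
\rho=\sup_{u}\,\inf_{\theta}\Big\{\tfrac{1}{2}\,\mathrm{tr}\big(a(x,u,\theta)\,D^2h(x)\big)+b(x,u,\theta)\cdot\nabla h(x)+r(x,u,\theta)\Big\},
\]
where the constant \(\rho\) plays the role of the optimal worst-case average reward and h is a bias-type function; the paper establishes existence of classical (twice differentiable) solutions for its version of this equation.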
Optimization | 2015
Armando F. Mendoza-Pérez; Héctor Jasso-Fuentes; Onésimo Hernández-Lerma
This article concerns n-dimensional controlled diffusion processes. The main problem is to maximize a certain long-run average reward (also known as an ergodic reward) in such a way that a given long-run average cost is bounded above by a constant. Under suitable assumptions, the existence of optimal controls for such constrained control problems is a well-known fact. In this article we go a step further: our goal is to introduce a technique to compute optimal controls. To this end, we follow the Lagrange multipliers approach. An example on a linear-quadratic system illustrates our results.
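A minimal sketch of the Lagrange multipliers approach mentioned above, in notation we introduce here: with average reward J(\(\pi\)), average cost C(\(\pi\)) and constraint level \(\theta\),
\[
\text{maximize } J(\pi)\ \text{ subject to } C(\pi)\le\theta
\quad\leadsto\quad
J_\lambda(\pi)=J(\pi)-\lambda\big(C(\pi)-\theta\big),\qquad \lambda\ge 0,
\]
and one looks for a multiplier \(\lambda^*\) and a policy \(\pi^*\) such that \(\pi^*\) maximizes the unconstrained reward \(J_{\lambda^*}\) and satisfies the constraint with complementary slackness, \(\lambda^*\big(C(\pi^*)-\theta\big)=0\).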
Asymptotic Analysis | 2012
Alain Bensoussan; Héctor Jasso-Fuentes; Stéphane Menozzi; Laurent Mertz
In a previous work by the first author with J. Turi (AMO, 08), a stochastic variational inequality was introduced to model an elasto-plastic oscillator with noise. A major advantage of the stochastic variational inequality is that it overcomes the need to describe the trajectory by phases (elastic or plastic). This is useful, since the sequence of phases cannot be characterized easily. In particular, there are numerous small elastic phases which may appear as an artefact of the Wiener process. However, it remains important to have information on these phases. In order to reconcile these contradictory issues, we introduce an approximation of stochastic variational inequalities by imposing artificial small jumps between phases, allowing a clear separation of the phases. In this work, we prove that the approximate solution converges on any finite time interval when the size of the jumps tends to 0.
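A commonly cited form of the Bensoussan–Turi model (the notation here is assumed, not quoted from this paper) tracks the velocity \(y_t\) and the elastic deflection \(z_t\) of the oscillator:
\[
dy_t=-(c_0\,y_t+k\,z_t)\,dt+dw_t,
\qquad
(dz_t-y_t\,dt)\,(\zeta-z_t)\ge 0\ \ \forall\,|\zeta|\le Y,\quad |z_t|\le Y,
\]
so the process is in an elastic phase while \(|z_t|<Y\) and in a plastic phase while \(z_t\) sits on the boundary \(\pm Y\); the approximation discussed above inserts small artificial jumps when the trajectory switches between these regimes.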
Archive | 2018
Beatris Adriana Escobedo-Trujillo; Héctor Jasso-Fuentes; José Daniel López-Barrientos
Advanced-type equilibria for a general class of zero-sum stochastic differential games were studied in part by Escobedo-Trujillo et al. (J Optim Theory Appl 153:662–687, 2012), in which a comprehensive study of the so-called bias and overtaking equilibria was provided. On the other hand, a complete analysis of advanced optimality criteria in the context of optimal control theory, such as bias, overtaking, sensitive discount, and Blackwell optimality, was developed independently by Jasso-Fuentes and Hernández-Lerma (Appl Math Optim 57:349–369, 2008; J Appl Probab 46:372–391, 2009; Stoch Anal Appl 27:363–385, 2009). In this work we try to fill the gap between the aforementioned references. Namely, the aim is to analyze Blackwell–Nash equilibria for a general class of zero-sum stochastic differential games. Our approach is based on the use of dynamic programming, the Laurent series, and the study of sensitive discount optimality.
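For context, the sensitive discount machinery mentioned above typically rests on a Laurent-type expansion of the discounted payoff in the discount rate \(\alpha\) (a sketch in our notation, not the paper's):
\[
V_\alpha(x,\pi^1,\pi^2)=\frac{1}{\alpha}\,J(x,\pi^1,\pi^2)+h_0(x,\pi^1,\pi^2)+\sum_{n\ge 1}\alpha^{n}h_n(x,\pi^1,\pi^2),
\]
valid for sufficiently small \(\alpha>0\); a pair of strategies is then called a Blackwell–Nash equilibrium if it is a Nash equilibrium of \(V_\alpha\) for every sufficiently small discount rate \(\alpha\).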
Mathematical Methods of Operations Research | 2016
Armando F. Mendoza-Pérez; Héctor Jasso-Fuentes; Omar A. De-la-Cruz Courtois
In this paper we study discrete-time Markov decision processes in Borel spaces with a finite number of constraints and with unbounded rewards and costs. Our aim is to provide a simple method to compute constrained optimal control policies when the payoff functions and the constraints are either of infinite-horizon discounted type or of average (a.k.a. ergodic) type. To deduce optimality results for the discounted case, we use the Lagrange multipliers method, which rewrites the original problem (with constraints) as a parametric family of discounted unconstrained problems. Based on the dynamic programming technique, along with a simple use of elementary differential calculus, we obtain both suitable Lagrange multipliers and a family of control policies associated with these multipliers; this family turns out to be optimal for the original problem with constraints. We next apply the vanishing discount factor method in order to obtain, in a straightforward way, optimal control policies associated with the average problem with constraints. Finally, to illustrate our results, we provide a simple application to linear–quadratic systems (LQ systems).
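A minimal sketch of the discounted constrained formulation and the vanishing discount step (notation assumed here; the paper's setup is more general): with discount factor \(\alpha\in(0,1)\), reward r, costs \(c^i\) and constraint levels \(k_i\),
\[
\text{maximize } V_\alpha(x,\pi)=E_x^\pi\sum_{t=0}^{\infty}\alpha^{t}r(x_t,a_t)
\quad\text{s.t.}\quad
C_\alpha^{i}(x,\pi)=E_x^\pi\sum_{t=0}^{\infty}\alpha^{t}c^{i}(x_t,a_t)\le k_i,
\]
which is relaxed to an unconstrained problem with payoff \(V_\alpha(x,\pi)-\sum_i\lambda_i\big(C_\alpha^{i}(x,\pi)-k_i\big)\), \(\lambda_i\ge 0\); the average (ergodic) criterion is then recovered, under suitable conditions, through vanishing discount limits of the form \(\lim_{\alpha\uparrow 1}(1-\alpha)V_\alpha(x,\pi)\).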
Stochastics An International Journal of Probability and Stochastic Processes | 2013
Héctor Jasso-Fuentes; Onésimo Hernández-Lerma
In this paper, we study the optimal ergodic control problem with minimum variance for a general class of controlled Markov diffusion processes. To this end, we follow a lexicographical approach. Namely, we first identify the class of average optimal control policies, and then, within this class, we search for policies that minimize the limiting average variance. To do this, a key intermediate step is to show that the limiting average variance is a constant independent of the initial state. Our proof of this latter fact gives a result stronger than the central limit theorem for diffusions. An application to manufacturing systems illustrates our results.
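As a sketch of the lexicographical criterion (in notation we assume here): if \(\rho^*\) denotes the optimal long-run average reward, one first restricts attention to the average optimal policies \(\{\pi: J(x,\pi)=\rho^*\}\) and then, within that class, minimizes the limiting average variance
\[
\sigma^2(\pi)=\lim_{T\to\infty}\frac{1}{T}\,\mathrm{Var}_x^\pi\!\left(\int_0^T r(x(t),\pi)\,dt\right),
\]
which, as the paper shows, does not depend on the initial state x.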
Archive | 2012
Héctor Jasso-Fuentes; G. Yin
This chapter provides a summary of some of the recent progress on controlled switching diffusions over an infinite horizon. Although both basic and advanced criteria are examined, the emphasis is placed on the advanced criteria. The chapter is mainly concerned with the existence and characterizations of optimal control policies associated with different types of infinite-horizon control objectives. The conditions needed are given and the results are stated, while for the detailed proofs the reader is referred to the paper [29].
Applied Mathematics and Optimization | 2008
Héctor Jasso-Fuentes; Onésimo Hernández-Lerma
Journal of Applied Probability | 2009
Héctor Jasso-Fuentes; Onésimo Hernández-Lerma