Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yinlam Chow is active.

Publication


Featured research published by Yinlam Chow.


IEEE Transactions on Power Systems | 2016

Online Modified Greedy Algorithm for Storage Control Under Uncertainty

Junjie Qin; Yinlam Chow; Jiyan Yang; Ram Rajagopal

This paper studies the general problem of operating energy storage under uncertainty. Two fundamental sources of uncertainty are considered, namely the uncertainty in the unexpected fluctuation of the net demand process and the uncertainty in the locational marginal prices. We propose a very simple algorithm termed the Online Modified Greedy (OMG) algorithm for this problem. A stylized analysis of the algorithm is performed, which shows that, compared with the optimal cost of the corresponding stochastic control problem, the sub-optimality of OMG is controlled by an easily computable bound. This suggests that, albeit simple, OMG is guaranteed to perform well whenever the bound is small. Moreover, OMG together with the sub-optimality bound can be used to provide a lower bound on the optimal cost, which can be valuable in evaluating other heuristic algorithms. For cases where the bound is not small, a semidefinite program is derived to minimize the sub-optimality bound of OMG. Numerical experiments are conducted to verify our theoretical analysis and to demonstrate the use of the algorithm.
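As a rough illustration of the idea, and not the paper's actual algorithm, the sketch below greedily picks a charging rate that trades off the instantaneous energy cost against a penalty keeping the state of charge centered. The function names, the penalty term, and the coarse rate grid are all invented for the example.

```python
def omg_step(soc, price, capacity, max_rate, weight=1.0):
    """Return a charging rate (positive = charge) for one step of a
    modified-greedy storage policy. Illustrative sketch only."""
    best_rate, best_cost = 0.0, float("inf")
    # Enumerate a small grid of feasible rates; a real implementation
    # would minimize the convex one-step objective exactly.
    for k in range(-10, 11):
        rate = max_rate * k / 10.0
        next_soc = soc + rate
        if not (0.0 <= next_soc <= capacity):
            continue
        # price * rate is the energy cost of this step; the penalty
        # term nudges the state of charge toward half capacity.
        cost = price * rate + weight * abs(next_soc - capacity / 2.0)
        if cost < best_cost:
            best_rate, best_cost = rate, cost
    return best_rate

def simulate(prices, capacity=10.0, max_rate=2.0):
    """Run the greedy policy over a price sequence; return final
    state of charge and total energy cost (negative = revenue)."""
    soc, total_cost = capacity / 2.0, 0.0
    for p in prices:
        r = omg_step(soc, p, capacity, max_rate)
        soc += r
        total_cost += p * r
    return soc, total_cost
```

The appeal of such a rule is exactly what the abstract emphasizes: each step needs only the current state and price, with no forecast of future uncertainty.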


Advances in Computing and Communications | 2014

A framework for time-consistent, risk-averse model predictive control: Theory and algorithms

Yinlam Chow; Marco Pavone

In this paper we present a framework for risk-averse model predictive control (MPC) of linear systems affected by multiplicative uncertainty. Our key innovation is to consider time-consistent, dynamic risk metrics as objective functions to be minimized. This framework is axiomatically justified in terms of time-consistency of risk preferences, is amenable to dynamic optimization, and is unifying in the sense that it captures a full range of risk assessments from risk-neutral to worst case. Within this framework, we propose and analyze an online risk-averse MPC algorithm that is provably stabilizing. Furthermore, by exploiting the dual representation of time-consistent, dynamic risk metrics, we cast the computation of the MPC control law as a convex optimization problem amenable to implementation on embedded systems. Simulation results are presented and discussed.
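To make the notion of a time-consistent, nested risk metric concrete, here is a toy evaluator, not taken from the paper, that folds a one-step CVaR map backwards over a scenario tree of stage costs. The tree encoding and the equal child probabilities are illustrative assumptions.

```python
import math

def cvar(values, alpha):
    """CVaR_alpha of equally likely outcomes: the mean of the worst
    ceil(alpha * n) of them (alpha = 1 recovers the plain mean)."""
    worst = sorted(values, reverse=True)
    k = max(1, math.ceil(alpha * len(values)))
    return sum(worst[:k]) / k

def nested_risk(node, alpha):
    """node = (stage_cost, children); a leaf has an empty child list.
    The dynamic risk of the tree is the stage cost plus the one-step
    CVaR of the children's risks, applied recursively, which is what
    makes the metric time-consistent."""
    cost, children = node
    if not children:
        return cost
    return cost + cvar([nested_risk(c, alpha) for c in children], alpha)
```

With alpha = 1 this collapses to the expected total cost, and smaller alpha interpolates toward the worst case, matching the "risk-neutral to worst case" range the abstract mentions.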


International Conference on Future Energy Systems | 2014

Modeling and online control of generalized energy storage networks

Junjie Qin; Yinlam Chow; Jiyan Yang; Ram Rajagopal

The integration of intermittent and volatile renewable energy resources requires increased flexibility in the operation of the electric grid. Storage, broadly speaking, provides the flexibility of shifting energy over time; the network, on the other hand, provides the flexibility of shifting energy across geographical locations. The optimal control of general storage networks in uncertain environments is an important open problem. The key challenge is that, even in small networks, the corresponding constrained stochastic control problems over continuous spaces suffer from the curse of dimensionality and are intractable in general settings. For large networks, no efficient algorithm is known to give optimal or near-optimal performance. This paper provides an efficient and provably near-optimal algorithm for this problem in a very general setting. We study the optimal control of generalized storage networks, i.e., electric networks connected to distributed generalized storage. Here, generalized storage is a unifying dynamic model for the many components of the grid that shift energy over time, ranging from standard energy storage devices to deferrable or thermostatically controlled loads. An online algorithm is devised for the corresponding constrained stochastic control problem based on the theory of Lyapunov optimization. We prove that the algorithm is near-optimal and construct a semidefinite program to minimize the sub-optimality bound. The resulting bound is a constant that depends only on the parameters of the storage network and cost functions, and is independent of uncertainty realizations. Numerical examples demonstrate the effectiveness of the algorithm.
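The Lyapunov-optimization recipe behind such online algorithms can be caricatured in a few lines: at each step, pick the action minimizing an instantaneous "drift-plus-penalty" expression, where a shifted storage level plays the role of a virtual queue. The weight and action grid below are illustrative assumptions, not the paper's construction.

```python
def drift_plus_penalty_action(q, price, actions, v=1.0):
    """Choose the action minimizing V * (instantaneous cost) + Q * action.
    Here q is the storage level measured relative to its target: a larger
    V weights cost more heavily, while the q * action term pushes the
    storage level back toward the target."""
    return min(actions, key=lambda a: v * price * a + q * a)
```

Because the rule minimizes a myopic expression, it needs no distributional model of future demand or prices, which is why bounds like the one in the abstract can hold independently of uncertainty realizations.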


Journal of Dynamic Systems, Measurement, and Control, Transactions of the ASME | 2014

Trading Safety Versus Performance: Rapid Deployment of Robotic Swarms With Robust Performance Constraints

Yinlam Chow; Marco Pavone; Brian M. Sadler; Stefano Carpin

In this paper we consider a stochastic deployment problem, where a robotic swarm is tasked with the objective of positioning at least one robot at each of a set of pre-assigned targets while meeting a temporal deadline. Travel times and failure rates are stochastic but related, inasmuch as failure rates increase with speed. To maximize the chance of success while meeting the deadline, a control strategy therefore has to balance safety and performance. Our approach is to cast the problem within the theory of constrained Markov decision processes, whereby we seek policies that maximize the probability of successful deployment while ensuring that the expected duration of the task is bounded by a given deadline. To account for uncertainties in the problem parameters, we consider a robust formulation and propose efficient solution algorithms, which are of independent interest. Numerical experiments confirming our theoretical results are presented and discussed.
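A stripped-down, one-shot caricature of the safety-versus-performance trade-off, not the paper's constrained-MDP machinery: search over randomized choices between a fast, failure-prone move and a slow, safe one, subject to an expected-duration budget. All numbers and names are invented.

```python
def best_policy(t_fast, t_slow, p_fast, p_slow, deadline, grid=101):
    """Grid search over randomized policies: p is the probability of
    choosing the fast move. Maximize success probability subject to
    the expected duration staying within the deadline."""
    best = None
    for i in range(grid):
        p = i / (grid - 1)
        exp_time = p * t_fast + (1 - p) * t_slow
        success = p * p_fast + (1 - p) * p_slow
        if exp_time <= deadline and (best is None or success > best[1]):
            best = (p, success)
    return best
```

Randomization matters here: with fast taking 1 unit (success 0.7), slow taking 3 units (success 0.95), and a deadline of 2, the only feasible deterministic choice is "always fast" with success 0.7, while the best mixed policy (p = 0.5) achieves 0.825.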


IEEE Transactions on Smart Grid | 2016

Distributed Online Modified Greedy Algorithm for Networked Storage Operation Under Uncertainty

Junjie Qin; Yinlam Chow; Jiyan Yang; Ram Rajagopal

The integration of intermittent and stochastic renewable energy resources requires increased flexibility in the operation of the electric grid. Storage, broadly speaking, provides the flexibility of shifting energy over time; the network, on the other hand, provides the flexibility of shifting energy across geographical locations. The optimal control of storage networks in stochastic environments is an important open problem. The key challenge is that, even in small networks, the corresponding constrained stochastic control problems on continuous spaces suffer from the curse of dimensionality and are intractable in general settings. For large networks, no efficient algorithm is known to give optimal or provably near-optimal performance for this problem. This paper provides an efficient algorithm with performance guarantees. We study the operation of storage networks, i.e., storage systems interconnected via a power network. An online algorithm, termed the online modified greedy algorithm, is developed for the corresponding constrained stochastic control problem. A sub-optimality bound for the algorithm is derived, and a semidefinite program is constructed to minimize the bound. In many cases the bound approaches zero, so that the algorithm is near-optimal. A task-based distributed implementation of the online algorithm, relying only on local information and neighborhood communication, is then developed based on the alternating direction method of multipliers. Numerical examples verify the established theoretical performance bounds and demonstrate the scalability of the algorithm.
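The alternating direction method of multipliers (ADMM) underlying the distributed implementation can be shown on the smallest possible example, illustrative rather than the paper's storage formulation: several agents agree on a common value while each minimizes its own quadratic cost, exchanging only local variables.

```python
def admm_consensus(targets, rho=1.0, iters=100):
    """Consensus ADMM: each agent i minimizes (x_i - targets[i])^2
    subject to all x_i being equal. The local x-update has a closed
    form; agents only share their x_i + u_i with the averaging step."""
    n = len(targets)
    x = [0.0] * n          # local primal variables
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(iters):
        # Local update: argmin_x (x - t)^2 + (rho/2)(x - z + u)^2.
        x = [(2 * t + rho * (z - ui)) / (2 + rho)
             for t, ui in zip(targets, u)]
        # Consensus update: average of x_i + u_i.
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # Dual update.
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

For quadratic costs the iterates converge geometrically to the minimizer of the sum, here the mean of the targets, with each agent using only its own data plus the shared average, the same local-information pattern the abstract describes.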


American Control Conference | 2013

Stochastic optimal control with dynamic, time-consistent risk constraints

Yinlam Chow; Marco Pavone

In this paper we present a dynamic programming approach to stochastic optimal control problems with dynamic, time-consistent risk constraints. Constrained stochastic optimal control problems, which naturally arise when one has to consider multiple objectives, have been extensively investigated in the past 20 years; however, in most formulations the constraints are either risk-neutral (i.e., formulated in terms of an expected cost) or based on static, single-period risk metrics, with limited attention to "time consistency" (i.e., to whether such metrics ensure rational consistency of risk preferences across multiple periods). Recently, significant strides have been made in the development of a rigorous theory of dynamic, time-consistent risk metrics for multi-period (risk-sensitive) decision processes; however, their integration within constrained stochastic optimal control problems has received little attention. The goal of this paper is to bridge this gap. First, we formulate the stochastic optimal control problem with dynamic, time-consistent risk constraints and characterize the tail subproblems (which requires the addition of a Markovian structure to the risk metrics). Second, we develop a dynamic programming approach for its solution, which allows the optimal costs to be computed by value iteration. Finally, we present a procedure to construct optimal policies.
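For readers unfamiliar with value iteration, here is the unconstrained backbone on a tiny finite MDP. The paper's risk-constrained version works on an augmented state space; this sketch omits that and is purely illustrative.

```python
def value_iteration(P, c, gamma=0.9, iters=200):
    """Minimal value iteration for a finite cost-minimizing MDP.
    P[s][a] = list of (prob, next_state); c[s][a] = stage cost.
    Repeatedly applies the Bellman operator until (numerical)
    convergence and returns the optimal value function."""
    V = [0.0] * len(P)
    for _ in range(iters):
        V = [min(c[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                 for a in range(len(P[s])))
             for s in range(len(P))]
    return V
```

The risk-constrained variant in the paper replaces this operator with a "constrained" Bellman recursion over the augmented state, but the fixed-point iteration pattern is the same.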


IEEE Transactions on Automatic Control | 2017

Sequential Decision Making With Coherent Risk

Aviv Tamar; Yinlam Chow; Mohammad Ghavamzadeh; Shie Mannor

We provide sampling-based algorithms for optimization under a coherent-risk objective. The class of coherent risk measures is widely accepted in finance and operations research, among other fields, and encompasses popular risk measures such as conditional value at risk and mean semi-deviation. Our approach is suitable for problems in which tunable parameters control the distribution of the cost, such as in reinforcement learning or approximate dynamic programming with a parameterized policy. Such problems cannot be solved using previous approaches. We consider both static risk measures and time-consistent dynamic risk measures. For static risk measures, our approach is in the spirit of policy gradient methods, while for dynamic risk measures we use actor-critic type algorithms.
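The policy-gradient flavor of these sampling-based methods rests on the score-function (likelihood-ratio) trick. Below is a minimal, self-contained instance for a Bernoulli-parameterized action, with a plain expectation rather than a coherent risk measure, purely for illustration; names and constants are assumptions.

```python
import random

def lr_gradient(theta, cost, n=20000, seed=0):
    """Score-function estimate of d/dtheta E[cost(A)], A ~ Bernoulli(theta).
    Uses d/dtheta log p(a) = (a - theta) / (theta * (1 - theta)), so the
    gradient is estimated from samples of cost alone, without
    differentiating through the cost function."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < theta else 0
        total += cost(a) * (a - theta) / (theta * (1 - theta))
    return total / n
```

For cost(1) = 3 and cost(0) = 1 the true gradient is 3 - 1 = 2, and the estimate concentrates around it. The paper's algorithms apply this sampling idea to coherent-risk objectives rather than the plain expectation shown here.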


International Conference on Robotics and Automation | 2016

Risk aversion in finite Markov Decision Processes using total cost criteria and average value at risk

Stefano Carpin; Yinlam Chow; Marco Pavone

In this paper we present an algorithm to compute risk-averse policies in Markov decision processes (MDPs) when the total cost criterion is used together with the average value at risk (AVaR) metric. Risk-averse policies are needed when large deviations from the expected behavior may have detrimental effects, and conventional MDP algorithms usually ignore this aspect. We provide conditions on the structure of the underlying MDP ensuring that approximations for the exact problem can be derived and solved efficiently. Our findings are novel inasmuch as average value at risk has not previously been considered in association with the total cost criterion. Our method is demonstrated in a rapid deployment scenario, whereby a robot is tasked with reaching a target location within a temporal deadline, where increased speed is associated with increased probability of failure. We demonstrate that the proposed algorithm not only produces a risk-averse policy reducing the probability of exceeding the expected temporal deadline, but also provides the statistical distribution of costs, thus offering a valuable analysis tool.
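As a reminder of the risk metric itself (just the definition, not the paper's algorithm): average value at risk at level alpha is the expected cost over the worst alpha-fraction of outcomes, and for a discrete cost distribution it can be computed by walking down the tail.

```python
def avar(dist, alpha):
    """Average value at risk of a discrete cost distribution
    dist = [(cost, prob), ...] at level alpha in (0, 1]: the expected
    cost conditional on being in the worst alpha tail."""
    tail = alpha
    total = 0.0
    # Consume probability mass starting from the largest cost.
    for cost, prob in sorted(dist, reverse=True):
        take = min(prob, tail)
        total += cost * take
        tail -= take
        if tail <= 0:
            break
    return total / alpha
```

At alpha = 1 this is the plain expected cost; as alpha shrinks it approaches the worst-case cost, which is why AVaR captures the large deviations the abstract is concerned with.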


Conference on Decision and Control | 2013

A uniform-grid discretization algorithm for stochastic optimal control with risk constraints

Yinlam Chow; Marco Pavone

In this paper, we present a discretization algorithm for the solution of stochastic optimal control problems with dynamic, time-consistent risk constraints. Previous works have shown that such problems can be cast as Markov decision problems (MDPs) on an augmented state space where a "constrained" form of Bellman's recursion can be applied. However, even if both the state and action spaces of the original optimization problem are finite, the augmented state in the induced MDP contains state variables that are continuous. Our approach is to apply a uniform-grid discretization scheme to the augmented state. To prove the correctness of this approach, we develop novel Lipschitz bounds for "constrained" dynamic programming operators. We show that convergence to the optimal value functions is linear in the step size, matching the convergence rate of discretization algorithms for unconstrained dynamic programming operators. Simulation experiments are presented and discussed.
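The core of a uniform-grid scheme is easy to state; the paper's contribution is the Lipschitz analysis for constrained Bellman operators, not a snippet like this generic sketch: tabulate the function at evenly spaced points and answer queries from the nearest grid point, so that for an L-Lipschitz function the error is at most L * step / 2.

```python
def grid_values(f, lo, hi, n):
    """Tabulate f at n + 1 evenly spaced points on [lo, hi]."""
    step = (hi - lo) / n
    return {lo + i * step: f(lo + i * step) for i in range(n + 1)}

def lookup(table, x):
    """Piecewise-constant approximation: the value at the grid point
    nearest to x (error at most L * step / 2 for L-Lipschitz f)."""
    nearest = min(table, key=lambda g: abs(g - x))
    return table[nearest]
```

Halving the step halves the worst-case error, which is the linear convergence in the step size that the abstract refers to.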


Power and Energy Society General Meeting | 2016

Online Modified Greedy algorithm for storage control under uncertainty

Junjie Qin; Yinlam Chow; Jiyan Yang; Ram Rajagopal

Summary form only given. The abstract is identical to that of the journal version in IEEE Transactions on Power Systems (2016) above.

Collaboration


Dive into Yinlam Chow's collaborations.

Top Co-Authors

Aviv Tamar, University of California
Sumeet Katariya, University of Wisconsin-Madison
Shie Mannor, Technion – Israel Institute of Technology
Alan Malek, University of California