Publication


Featured research published by Amarjit Budhiraja.


IEEE Transactions on Automatic Control | 2000

A nonlinear filtering algorithm based on an approximation of the conditional distribution

Harold J. Kushner; Amarjit Budhiraja

An effective form of the Gaussian or moment approximation method for approximating optimal nonlinear filters with a diffusion signal process and discrete-time observations is presented. Various computational simplifications reduce the dimensionality of the numerical integrations that need to be done. This process, combined with an iterative Gaussian quadrature method, makes the filter effective for real-time use. The advantages are illustrated by a model that captures the general flavour of modeling the highly uncertain behavior of a ship near obstacles such as a shore line into which it cannot go, and must manoeuvre away in some unknown fashion. The observations are of very poor quality, yet the filter behaves well and is quite stable. The procedure does not rely on linearization, but attempts to compute the conditional moments directly by approximating the integrations used by the optimal filter.
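For orientation only, here is a minimal Python sketch of a Gaussian (assumed-density) filter that computes approximate conditional moments directly by Gauss-Hermite quadrature, in the spirit of the approach described above; the scalar model, the functions f and h, and all parameter values are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's algorithm): a scalar assumed-density
# filter whose predict/update moments are computed by Gauss-Hermite quadrature.
import numpy as np

NODES, WEIGHTS = np.polynomial.hermite.hermgauss(10)  # 10-point Gauss-Hermite rule

def gh_expect(func, mean, var):
    """E[func(X)] for X ~ N(mean, var), via Gauss-Hermite quadrature.
    `func` must accept numpy arrays elementwise."""
    x = mean + np.sqrt(2.0 * var) * NODES
    return np.sum(WEIGHTS * func(x)) / np.sqrt(np.pi)

def gaussian_filter_step(m, P, y, f, h, Q, R):
    """One predict/update step; (m, P) are the current approximate
    conditional mean and variance."""
    # Predict: moments of x_{k+1} = f(x_k) + w_k,  w_k ~ N(0, Q)
    m_pred = gh_expect(f, m, P)
    P_pred = gh_expect(lambda x: (f(x) - m_pred) ** 2, m, P) + Q

    # Update: weight the predicted Gaussian by the likelihood of y_k = h(x_k) + v_k
    lik = lambda x: np.exp(-0.5 * (y - h(x)) ** 2 / R)
    Z = gh_expect(lik, m_pred, P_pred)                     # normalizing constant
    m_post = gh_expect(lambda x: x * lik(x), m_pred, P_pred) / Z
    P_post = gh_expect(lambda x: (x - m_post) ** 2 * lik(x), m_pred, P_pred) / Z
    return m_post, P_post

# Hypothetical usage with a mildly nonlinear scalar model:
m, P = 0.0, 1.0
for y in [0.3, -0.1, 0.5]:
    m, P = gaussian_filter_step(m, P, y, f=np.tanh, h=lambda x: x, Q=0.1, R=0.5)
```

Because the likelihood-weighted integrals are evaluated directly by quadrature, no linearization of f or h is needed, which mirrors the property highlighted in the abstract.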


Mathematics of Operations Research | 2009

Stationary Distribution Convergence for Generalized Jackson Networks in Heavy Traffic

Amarjit Budhiraja; Chihoon Lee

In a recent paper, Gamarnik and Zeevi [Gamarnik, D., A. Zeevi. 2006. Validity of heavy traffic steady-state approximations in open queueing networks. Ann. Appl. Probab. 16(1) 56-90], it was shown that under suitable conditions stationary distributions of the (scaled) queue-lengths process for a generalized Jackson network converge to the stationary distribution of the associated reflected Brownian motion in the heavy traffic limit. The proof relied on certain exponential integrability assumptions on the primitives of the network. In this note we show that the above result holds under much weaker integrability conditions. We provide an alternative proof of this result assuming (in addition to natural heavy traffic and stability assumptions) only standard independence and square integrability conditions on the network primitives that are commonly used in heavy traffic analysis. Furthermore, under additional integrability conditions we establish convergence of moments of stationary distributions.
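The interchange-of-limits statement can be summarized schematically as follows (generic notation, assumed rather than quoted from the paper): Q^n is the queue-length process of the n-th network and Z the limiting reflected Brownian motion (RBM).

```latex
\[
  \hat Q^{n}(t) \;=\; \frac{Q^{n}(nt)}{\sqrt{n}}, \qquad
  \hat Q^{n} \Rightarrow Z \ \ \text{(an RBM in } \mathbb{R}_{+}^{d}\text{)},
\]
\[
  \pi^{n} = \text{stationary law of } \hat Q^{n}, \quad
  \pi = \text{stationary law of } Z, \qquad
  \pi^{n} \Rightarrow \pi \ \text{ as } n \to \infty .
\]
```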


Systems & Control Letters | 1997

Exponential stability of discrete-time filters for bounded observation noise

Amarjit Budhiraja; Daniel Ocone

This paper proves exponential asymptotic stability of discrete-time filters for the estimation of solutions to stochastic difference equations, when the observation noise is bounded. No assumption is made on the ergodicity of the signal. The proof uses the Hilbert projective metric, introduced into filter stability analysis by Atar and Zeitouni [1,2]. It is shown that when the signal noise is sufficiently regular, boundedness of the observation noise implies that the filter update operation is, on average, a strict contraction with respect to the Hilbert metric. Asymptotic stability then follows.
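For reference, the two standard notions underlying the contraction argument are, schematically (standard definitions, not copied from the paper):

```latex
% Hilbert projective metric between comparable finite positive measures mu, nu:
\[
  h(\mu,\nu) \;=\; \log\Bigl(\sup_{A:\,\nu(A)>0}\frac{\mu(A)}{\nu(A)}\Bigr)
              \;+\; \log\Bigl(\sup_{A:\,\mu(A)>0}\frac{\nu(A)}{\mu(A)}\Bigr),
\]
% and Birkhoff's contraction theorem: a positive kernel K with
% H(K) = \sup_{x,y} h(K(x,\cdot),K(y,\cdot)) < \infty satisfies
\[
  h(\mu K, \nu K) \;\le\; \tanh\!\Bigl(\tfrac{H(K)}{4}\Bigr)\, h(\mu,\nu).
\]
```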


Annals of Probability | 2012

Large deviation properties of weakly interacting processes via weak convergence methods

Amarjit Budhiraja; Paul Dupuis; Markus Fischer

We study large deviation properties of systems of weakly interacting particles modeled by Ito stochastic differential equations (SDEs). It is known under certain conditions that the corresponding sequence of empirical measures converges, as the number of particles tends to infinity, to the weak solution of an associated McKean–Vlasov equation. We derive a large deviation principle via the weak convergence approach. The proof, which avoids discretization arguments, is based on a representation theorem, weak convergence and ideas from stochastic optimal control. The method works under rather mild assumptions and also for models described by SDEs not of diffusion type. To illustrate this, we treat the case of SDEs with delay.
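Schematically, and in generic notation assumed here rather than quoted from the paper, the setting is an N-particle system coupled through its empirical measure:

```latex
\[
  dX^{i,N}_t \;=\; b\bigl(X^{i,N}_t,\mu^{N}_t\bigr)\,dt
             \;+\; \sigma\bigl(X^{i,N}_t,\mu^{N}_t\bigr)\,dW^{i}_t,
  \qquad i=1,\dots,N,
\]
\[
  \mu^{N}_t \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{X^{i,N}_t},
  \qquad \mu^{N} \Rightarrow \mu \ \ \text{(McKean--Vlasov limit)},
\]
% and the large deviation principle gives, for suitable sets of trajectories A,
\[
  \mathbb{P}\bigl(\mu^{N} \in A\bigr) \;\approx\;
  \exp\Bigl(-N \inf_{\theta \in A} I(\theta)\Bigr).
\]
```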


Stochastic Processes and their Applications | 1999

Exponential stability in discrete-time filtering for non-ergodic signals

Amarjit Budhiraja; Daniel Ocone

In this paper we prove exponential asymptotic stability for discrete-time filters for signals arising as solutions of d-dimensional stochastic difference equations. The observation process is the signal corrupted by an additive white noise of sufficiently small variance. The model for the signal admits non-ergodic processes. We show that almost surely, the total variation distance between the optimal filter and an incorrectly initialized filter converges to 0 exponentially fast as time approaches infinity.
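In schematic form (notation assumed), with pi_n the optimal filter and bar pi_n the filter started from a wrong initial law, the stability statement reads:

```latex
\[
  \limsup_{n\to\infty}\;\frac{1}{n}\,
  \log \bigl\| \pi_n - \bar\pi_n \bigr\|_{\mathrm{TV}} \;<\; 0
  \qquad \text{almost surely.}
\]
```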


SIAM Journal on Control and Optimization | 2007

Convergent Numerical Scheme for Singular Stochastic Control with State Constraints in a Portfolio Selection Problem

Amarjit Budhiraja; Kevin Ross

We consider a singular stochastic control problem with state constraints that arises in problems of optimal consumption and investment under transaction costs. Numerical approximations for the value function using the Markov chain approximation method of Kushner and Dupuis are studied. The main result of the paper shows that the value function of the Markov decision problem (MDP) corresponding to the approximating controlled Markov chain converges to that of the original stochastic control problem as various parameters in the approximation approach suitable limits. All our convergence arguments are probabilistic; the main assumption that we make is that the value function be finite and continuous. In particular, uniqueness of the solutions of the associated HJB equations is neither needed nor available (in the generality under which the problem is considered). Specific features of the problem that make the convergence analysis nontrivial include unboundedness of the state and control space and the cost function; degeneracies in the dynamics; mixed boundary (Dirichlet-Neumann) conditions; and presence of both singular and absolutely continuous controls in the dynamics. Finally, schemes for computing the value function and optimal control policies for the MDP are presented and illustrated with a numerical study.
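As a schematic reminder of the type of approximation involved (generic notation, not the paper's; the actual scheme also accounts for singular control directions and the state constraints), the approximating MDP satisfies a discounted dynamic programming equation of roughly the following form, and the main result asserts convergence of its value function:

```latex
% Generic discounted dynamic programming equation for a Markov chain
% approximation with discretization parameter h:
\[
  V^{h}(x) \;=\; \min_{u}\Bigl\{\, c^{h}(x,u)
    \;+\; e^{-\beta\,\Delta t^{h}(x,u)} \sum_{y} p^{h}(y \mid x,u)\, V^{h}(y) \Bigr\},
\]
\[
  V^{h}(x) \;\longrightarrow\; V(x) \qquad \text{as } h \to 0,
\]
% where V is the value function of the original singular control problem.
```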


SIAM Journal on Control and Optimization | 1998

Robustness of Nonlinear Filters Over the Infinite Time Interval

Amarjit Budhiraja; Harold J. Kushner

Nonlinear filtering is one of the classical areas of stochastic control. From the point of view of practical usefulness, it is important that the filter not be too sensitive to the assumptions made on the initial distribution, the transition function of the underlying signal process and the model for the observation. This is particularly acute if the filter is of interest over a very long or potentially infinite time interval. Then the effects of small errors in the model which is used to construct the filter might accumulate to make the output useless for large time. The problem of asymptotic sensitivity to the initial condition has been treated in several papers. We are concerned with this as well as with the sensitivity to the signal model, uniformly over the infinite time interval. It is conceivable that the effects of even small errors in the model will accumulate so that the filter will eventually be useless. The robustness is shown for three classes of problems. For the first two cases, the signal model is Markov and the observations are taken in discrete time, and the observation is the usual function of the signal plus noise. The last class treated is a continuous time Markov process, with a point process observation.


Annals of Applied Probability | 2006

Diffusion approximations for controlled stochastic networks: An asymptotic bound for the value function

Amarjit Budhiraja; Arka P. Ghosh

We consider the scheduling control problem for a family of unitary networks under heavy traffic, with general interarrival and service times, probabilistic routing and infinite horizon discounted linear holding cost. A natural nonanticipativity condition for admissibility of control policies is introduced. The condition is seen to hold for a broad class of problems. Using this formulation of admissible controls and a time-transformation technique, we establish that the infimum of the cost for the network control problem over all admissible sequencing control policies is asymptotically bounded below by the value function of an associated diffusion control problem (the Brownian control problem). This result provides a useful bound on the best achievable performance for any admissible control policy for a wide class of networks.
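The asymptotic bound can be written schematically as follows (notation assumed): J^n(T^n) denotes the discounted holding cost under an admissible sequencing policy T^n for the n-th network in the heavy-traffic sequence, and J* the value of the associated Brownian control problem.

```latex
\[
  \liminf_{n\to\infty} J^{n}\bigl(T^{n}\bigr) \;\ge\; J^{*}
  \qquad \text{for every admissible sequence of policies } \{T^{n}\}.
\]
```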


SIAM Journal on Control and Optimization | 1999

Approximation and Limit Results for Nonlinear Filters Over an Infinite Time Interval

Amarjit Budhiraja; Harold J. Kushner

This paper is concerned with approximations to nonlinear filtering problems that are of interest over a very long time interval. Since the optimal filter can rarely be constructed, one needs to compute with numerically feasible approximations. The signal model can be a jump-diffusion or just a process that is approximated by a jump-diffusion. The observation noise can be either white or of wide bandwidth. The observations can be taken in either discrete or continuous time. The cost of interest is the pathwise error per unit time over a long time interval. It is shown, under quite reasonable conditions on the approximating filter and the signal and noise processes, that as time, bandwidth, process and filter approximation, etc., go to their limit in any way at all, the limit of the pathwise average costs per unit time is just what one would get if the approximating processes were replaced by their ideal values and the optimal filter were used. Analogous results are obtained (with appropriate scaling) if the observations are taken in discrete time and the sampling interval also goes to zero. For these cases, the approximating filter is a numerical approximation to the optimal filter for the presumed limit (signal, observation noise) problem.
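A schematic form of the cost criterion (generic notation, assumed rather than quoted): with signal x(.), a bounded test function phi, and an approximating filter hat pi_s, the pathwise average error per unit time over [0, T] is of the type

```latex
\[
  \frac{1}{T}\int_{0}^{T}
    \bigl|\,\phi\bigl(x(s)\bigr) - \langle \hat\pi_{s}, \phi \rangle \bigr|^{2}\,ds ,
\]
% whose limit, as T and the approximation parameters go to their limits in any
% order, is identified with the value obtained from the optimal filter for the
% ideal model.
```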


Annals of Probability | 2016

Moderate deviation principles for stochastic differential equations with jumps

Amarjit Budhiraja; Paul Dupuis; Arnab Ganguly

Moderate deviation principles for stochastic differential equations driven by a Poisson random measure (PRM) in finite and infinite dimensions are obtained. Proofs are based on a variational representation for expected values of positive functionals of a PRM.
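The kind of variational representation referred to above has, schematically, the following form (generic notation; the precise conditions are in the paper): for a Poisson random measure N with intensity measure nu and a bounded measurable functional F,

```latex
\[
  -\log \mathbb{E}\bigl[e^{-F(N)}\bigr]
  \;=\; \inf_{\varphi}\,
  \mathbb{E}\Bigl[\int \ell\bigl(\varphi(s,x)\bigr)\,\nu(ds\,dx)
    \;+\; F\bigl(N^{\varphi}\bigr)\Bigr],
  \qquad \ell(r) = r\log r - r + 1,
\]
% where the infimum is over predictable controls phi and N^phi is a counting
% measure whose intensity is modulated by phi.
```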

Collaboration


Dive into Amarjit Budhiraja's collaboration network.

Top Co-Authors

Rami Atar, Technion – Israel Institute of Technology
Shankar Bhamidi, University of North Carolina at Chapel Hill
Xin Liu, University of Minnesota
Chihoon Lee, Colorado State University
Xuan Wang, University of North Carolina at Chapel Hill