
Publication


Featured research published by Dev G. Rajnarayan.


The 23rd Digital Avionics Systems Conference | 2004

The Stanford testbed of autonomous rotorcraft for multi agent control (STARMAC)

Gabe Hoffmann; Dev G. Rajnarayan; Steven Lake Waslander; David Dostal; Jung Soon Jang; Claire J. Tomlin

As an alternative to cumbersome aerial vehicles with considerable maintenance requirements and flight envelope restrictions, the X4 flyer is chosen as the basis for the Stanford testbed of autonomous rotorcraft for multi-agent control (STARMAC). This paper outlines the design and development of a miniature autonomous waypoint tracker flight control system, and the creation of a multi-vehicle platform for experimentation and validation of multi-agent control algorithms. This testbed development paves the way for real-world implementation of recent work in the fields of autonomous collision and obstacle avoidance, task assignment, and formation flight, using both centralized and decentralized techniques.


12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2008

A Multifidelity Gradient-Free Optimization Method and Application to Aerodynamic Design

Dev G. Rajnarayan; Alex Haas; Ilan Kroo

The use of expensive simulations in engineering design optimization often rules out conventional techniques for design optimization for a variety of reasons, such as lack of smoothness, unavailability of gradient information, presence of multiple local optima, and most importantly, limits on available computing resources and time. Often, the designer also has access to lower-fidelity simulations that may suffer from poor accuracy in some regions of the design space, but are much cheaper to evaluate than the original expensive simulation. We can accelerate the design process by efficiently managing these models of various fidelities. There has been previous research in this area: some algorithms in the literature first estimate the relationships between these models, and then perform optimization on the corrected low-fidelity models. Others adaptively select new high-fidelity designs, but these usually require gradient information; those that relax this requirement use a trust-region-based local search method. In contrast, most global optimization methods in the literature require smoothness, and do not incorporate multifidelity analyses. We would like to combine the advantages of all these techniques, and in this paper, we describe a method to incorporate models of two fidelities and perform a gradient-free global search on expensive functions that are not necessarily smooth everywhere. The main contribution of this paper is an extension of the well-known technique of maximization of expected improvement to the two-fidelity case. We demonstrate this improved technique on some academic problems with an artificially constructed ‘low-fidelity’ approximation, and also on a simple application problem in supersonic design optimization.
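For reference, the standard single-fidelity expected-improvement criterion that the paper extends (this sketch is the classical closed form for a Gaussian predictive model, not the paper's two-fidelity variant; the function and parameter names are illustrative assumptions):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement for minimization, assuming the
    surrogate's prediction at a candidate design is Gaussian N(mu, sigma^2)
    and f_best is the best objective value observed so far."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))           # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf
```

Maximizing this quantity over candidate designs balances exploitation (low predicted mean) against exploration (high predictive uncertainty), which is the trade-off the multifidelity extension generalizes.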


48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2007

Probability Collectives for Optimization of Computer Simulations

Dev G. Rajnarayan; Ilan Kroo; David H. Wolpert

A significant body of research under the rubric of blackbox optimization addresses the problem of optimization in a design space where the designs are evaluated by a computer simulation. For many such simulations, conventional local optimization methods prove inadequate, owing to the presence of local minima, local non-smoothness, and so on. A more effective approach would learn the global characteristics of the design space, and sample from disparate regions as it progresses. Several probabilistic approaches to this problem are based on the following idea: during the course of any search process, our partial knowledge of the design space grows as we evaluate a growing number of designs, and this knowledge can be elegantly expressed in terms of probability distributions. This is usually done using surrogate models, or data fits. We would like to use this partial knowledge to lead us to promising new designs, and this involves searching the data fit for points where the uncertainty of the fit, in conjunction with the fit itself, indicates a good chance of finding an improved design. This is the basis for the well-known Efficient Global Optimization (EGO) algorithm. This task can be regarded as an auxiliary optimization problem, but one that has some undesirable characteristics, including large flat regions and multiple local optima. Previous solution approaches have included branch and bound (the EGO algorithm), and local optimization with multi-start. In this paper, we present a new approach, itself based on probability distributions. Probability Collectives (PC) is an optimization framework in which a given optimization problem is transformed to one over probability distributions. This transformation often enables us to overcome problems such as local non-smoothness and multiple local optima. One particular PC approach is closely related to function smoothing and filtering.
Our formulation is a type of Monte Carlo Optimization, often encountered in machine learning algorithms. We briefly introduce this approach, and apply it to the auxiliary optimization problem described above. Finally, we compare this technique with other algorithms, including a heuristic sampling algorithm, and a genetic algorithm.
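The connection to function smoothing noted in the abstract can be made concrete: replacing an objective f(x) by its expectation under a distribution centered at x yields an auxiliary objective that is smooth even where f is not. A minimal Monte Carlo sketch (the names `smoothed` and `sigma` are illustrative assumptions, not the paper's notation):

```python
import random

def smoothed(f, sigma, n=2000, seed=0):
    """Gaussian smoothing: returns F with F(m) ~= E[f(m + sigma * Z)],
    Z standard normal, estimated by Monte Carlo with n samples."""
    rng = random.Random(seed)

    def F(m):
        return sum(f(m + rng.gauss(0.0, sigma)) for _ in range(n)) / n

    return F
```

For example, a step function is flat almost everywhere and offers a local optimizer no slope, but its smoothed version rises gradually through the step, so an optimizer applied to F can make progress where one applied to f cannot.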


11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2006

Optimization Under Uncertainty Using Probability Collectives

Dev G. Rajnarayan; David H. Wolpert; Ilan Kroo

In this paper, we review an optimization technique called Probability Collectives (PC), and compare its approach with that of quadratic Response Surface Methods (RSM), on some standard continuous test functions for unconstrained minimization, both in the presence and absence of uncertainty in the function evaluations. Probability Collectives (PC) is an optimization framework where optimization is performed over probability distributions over the variables of interest, rather than the variables themselves. In order to find a solution to the original optimization problem, we sample the final solution of the transformed problem, which is a probability distribution. In other words, PC is a transform technique with the special property that the inverse transform is performed stochastically. This transformation yields algorithms that are relatively insensitive to structural properties of the underlying objective function, like continuity, convexity, and differentiability; optimization under uncertainty becomes natural and straightforward; finally, the distributions yield sensitivity information about the original problem. Some PC algorithms bear a strong resemblance to RSM, and in this paper, we investigate these similarities, and compare performances of these techniques on some simple test functions.
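The general pattern described above, optimizing a distribution over the variables and then sampling it, can be illustrated with a cross-entropy-style update on a one-dimensional Gaussian. This is a minimal sketch of the distribution-optimization idea, not the specific PC algorithm from the paper; all names and constants are assumptions:

```python
import random
import statistics

def distribution_minimize(f, mean=0.0, std=5.0, n_samples=200,
                          elite_frac=0.1, iters=40, seed=0):
    """Minimize f by iteratively refitting a Gaussian sampling distribution:
    sample candidates, keep the best elite_frac, refit mean and std to the
    elite set. The 'solution' is the final distribution; here we return its
    mean as a point estimate. Note f never needs gradients or smoothness."""
    rng = random.Random(seed)
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(iters):
        xs = [rng.gauss(mean, std) for _ in range(n_samples)]
        xs.sort(key=f)
        elite = xs[:n_elite]
        mean = statistics.fmean(elite)
        std = statistics.stdev(elite) + 1e-12  # keep std strictly positive
    return mean
```

Because only function values at sampled points are used, the same loop runs unchanged on noisy or non-differentiable objectives, which is the property the paper's comparison with RSM exploits.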


Handbook of Statistics | 2013

Chapter 4 - Probability Collectives in Optimization

David H. Wolpert; Stefan Bieniawski; Dev G. Rajnarayan

This article concerns “blackbox optimization” algorithms in which one iterates the following procedure: choose a value x ∈ X, getting statistical information about an associated value G(x), then use the set of all pairs {(x, G(x))} found so far to choose a next x value at which to sample G, the goal being to find x's with as small G(x) as possible, and to do so as fast as possible. Examples of conventional blackbox optimization algorithms are genetic algorithms, simulated annealing, etc. These conventional algorithms work directly with values x, stochastically mapping the set {(x, G(x))} to the next x. The distribution over new x's that gets sampled is never explicitly optimized. In contrast, in the Probability Collectives (PC) approach, one explicitly uses the set {(x, G(x))} to optimize the probability distribution over x that will be sampled. This article reviews some of the work that has been done on Probability Collectives, in particular presenting some of the many experiments that have demonstrated its power.
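The iterated procedure in the abstract can be sketched as a generic loop over the accumulated data set of (x, G(x)) pairs. The `hill_propose` rule below stands in for a conventional algorithm that maps the data directly to the next x (it is an illustrative assumption); a PC method would instead plug in a rule that fits and optimizes an explicit distribution over x:

```python
import random

def blackbox_optimize(G, propose, x0, iters=200, seed=0):
    """Generic blackbox loop: keep every (x, G(x)) pair seen so far, and let
    a propose rule map that data set (plus an RNG) to the next x to sample.
    Returns the best (x, G(x)) pair found."""
    rng = random.Random(seed)
    data = [(x0, G(x0))]
    for _ in range(iters):
        x = propose(data, rng)
        data.append((x, G(x)))
    return min(data, key=lambda p: p[1])

def hill_propose(data, rng):
    """A conventional rule: perturb the best x seen so far. The distribution
    over new x's (a fixed Gaussian around the incumbent) is never itself
    optimized, which is exactly the contrast the abstract draws with PC."""
    best_x, _ = min(data, key=lambda p: p[1])
    return best_x + rng.gauss(0.0, 0.5)
```

Swapping `propose` is the only change needed to move between algorithm families in this framing, which is why the abstract can compare genetic algorithms, simulated annealing, and PC within one template.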


Advances in Complex Systems | 2006

Advances in Distributed Optimization Using Probability Collectives

David H. Wolpert; Charlie E. M. Strauss; Dev G. Rajnarayan


arXiv: Learning | 2007

Parametric Learning and Monte Carlo Optimization

David H. Wolpert; Dev G. Rajnarayan


Encyclopedia of Machine Learning | 2010

Bias-Variance Trade-offs: Novel Applications

Dev G. Rajnarayan; David H. Wolpert


arXiv: Numerical Analysis | 2008

Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method

Dev G. Rajnarayan; David H. Wolpert


National Conference on Artificial Intelligence | 2013

Using machine learning to improve stochastic optimization

David H. Wolpert; Dev G. Rajnarayan

Collaboration

Dive into Dev G. Rajnarayan's collaboration.

Top Co-Authors

Andrew March (Massachusetts Institute of Technology)

Charlie E. M. Strauss (Los Alamos National Laboratory)

Karen Willcox (Massachusetts Institute of Technology)