Publication


Featured research published by Devika Subramanian.


Languages, Compilers, and Tools for Embedded Systems | 1999

Optimizing for reduced code space using genetic algorithms

Keith D. Cooper; Philip J. Schielke; Devika Subramanian

Code space is a critical issue facing designers of software for embedded systems. Many traditional compiler optimizations are designed to reduce the execution time of compiled code, but not necessarily the size of the compiled code. Further, different results can be achieved by running some optimizations more than once and changing the order in which optimizations are applied. Register allocation only complicates matters, as the interactions between different optimizations can cause more spill code to be generated. The compiler for embedded systems, then, must take care to use the best sequence of optimizations to minimize code space.

Since much of the code for embedded systems is compiled once and then burned into ROM, the software designer will often tolerate much longer compile times in the hope of reducing the size of the compiled code. We take advantage of this by using a genetic algorithm to find optimization sequences that generate small object codes. The solutions generated by this algorithm are compared to solutions found using a fixed optimization sequence and solutions found by testing random optimization sequences. Based on the results found by the genetic algorithm, a new fixed sequence is developed to reduce code size. Finally, we explore the idea of using different optimization sequences for different modules and functions of the same program.
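To make the search concrete, here is a minimal genetic-algorithm sketch over pass sequences. The pass names, population parameters, and the toy `code_size` fitness are all invented stand-ins; the paper's system scores a sequence by actually compiling with it and measuring the resulting object code.

```python
import random

PASSES = ["dce", "gvn", "licm", "ccp", "inline", "peel"]  # hypothetical pass names
SEQ_LEN, POP, GENS, MUT = 10, 20, 50, 0.1

def code_size(seq):
    # Toy stand-in for "compile with this sequence and measure bytes".
    # Fakes pass interactions: a gvn->dce pair helps, inlining costs space.
    size = 1000
    size -= 15 * sum((a, b) == ("gvn", "dce") for a, b in zip(seq, seq[1:]))
    size += 5 * seq.count("inline")
    return size

def evolve():
    pop = [[random.choice(PASSES) for _ in range(SEQ_LEN)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=code_size)                    # smaller code = fitter
        survivors = pop[: POP // 2]                # truncation selection
        while len(survivors) < POP:
            a, b = random.sample(survivors[: POP // 2], 2)
            cut = random.randrange(1, SEQ_LEN)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(SEQ_LEN):               # pointwise mutation
                if random.random() < MUT:
                    child[i] = random.choice(PASSES)
            survivors.append(child)
        pop = survivors
    return min(pop, key=code_size)

print(evolve())
```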


Journal of Artificial Intelligence Research | 1994

Provably bounded-optimal agents

Stuart J. Russell; Devika Subramanian

Since its inception, artificial intelligence has relied upon a theoretical foundation centred around perfect rationality as the desired property of intelligent systems. We argue, as others have done, that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a result, there has arisen a wide gap between theory and practice in AI, hindering progress in the field. We propose instead a property called bounded optimality. Roughly speaking, an agent is bounded-optimal if its program is a solution to the constrained optimization problem presented by its architecture and the task environment. We show how to construct agents with this property for a simple class of machine architectures in a broad class of real-time environments. We illustrate these results using a simple model of an automated mail sorting facility. We also define a weaker property, asymptotic bounded optimality (ABO), that generalizes the notion of optimality in classical complexity theory. We then construct universal ABO programs, i.e., programs that are ABO no matter what real-time constraints are applied. Universal ABO programs can be used as building blocks for more complex systems. We conclude with a discussion of the prospects for bounded optimality as a theoretical basis for AI, and relate it to similar trends in philosophy, economics, and game theory.
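In symbols, paraphrasing the paper's central definition (notation may differ from the original): a program l drawn from the set L_M of programs runnable on machine M induces an agent function Agent(l, M), and the bounded-optimal program maximizes the value V of that agent function over the environment class E under utility function U.

```latex
% Bounded optimality, paraphrased; notation may differ from the paper.
l_{\mathrm{opt}} \;=\; \operatorname*{arg\,max}_{l \,\in\, \mathcal{L}_M}
    V\bigl(\mathrm{Agent}(l, M),\, \mathbf{E},\, U\bigr)
```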


The Journal of Supercomputing | 2002

Adaptive Optimizing Compilers for the 21st Century

Keith D. Cooper; Devika Subramanian; Linda Torczon

Historically, compilers have operated by applying a fixed set of optimizations in a predetermined order. We call such an ordered list of optimizations a compilation sequence. This paper describes a prototype system that uses biased random search to discover a program-specific compilation sequence that minimizes an explicit, external objective function. The result is a compiler framework that adapts its behavior to the application being compiled, to the pool of available transformations, to the objective function, and to the target machine.

This paper describes experiments that attempt to characterize the space that the adaptive compiler must search. The preliminary results suggest that optimal solutions are rare and that local minima are frequent. If this holds true, biased random searches, such as a genetic algorithm, should find good solutions more quickly than simpler strategies, such as hill climbing.
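For contrast with the genetic-algorithm sketch above, here is the simpler strategy the paper compares against: hill climbing over the same kind of sequence space. The neighborhood (single-pass substitution) and step budget are invented for illustration.

```python
import random

def hill_climb(seq, code_size, passes, steps=200):
    """Greedy local search: try single-pass substitutions, keeping any
    that shrink the measured code size. Where local minima are frequent,
    this stalls while biased random search keeps exploring."""
    best = code_size(seq)
    for _ in range(steps):
        trial = list(seq)
        trial[random.randrange(len(trial))] = random.choice(passes)
        s = code_size(trial)
        if s < best:                       # accept strict improvements only
            seq, best = trial, s
    return seq, best

# e.g., with the toy code_size and PASSES from the earlier sketch:
# hill_climb([random.choice(PASSES) for _ in range(10)], code_size, PASSES)
```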


Reliability Engineering & System Safety | 2010

Performance assessment of topologically diverse power systems subjected to hurricane events

James Winkler; Leonardo Dueñas-Osorio; Robert M. Stein; Devika Subramanian

Large tropical cyclones cause severe damage to major cities along the United States Gulf Coast annually. A diverse collection of engineering and statistical models are currently used to estimate the geographical distribution of power outage probabilities stemming from these hurricanes to aid in storm preparedness and recovery efforts. Graph theoretic studies of power networks have separately attempted to link abstract network topology to transmission and distribution system reliability. However, few works have employed both techniques to unravel the intimate connection between network damage arising from storms, topology, and system reliability. This investigation presents a new methodology combining hurricane damage predictions and topological assessment to characterize the impact of hurricanes upon power system reliability. Component fragility models are applied to predict failure probability for individual transmission and distribution power network elements simultaneously. The damage model is calibrated using power network component failure data for Harris County, TX, USA caused by Hurricane Ike in September of 2008, resulting in a mean outage prediction error of 15.59% and a low standard deviation. Simulated hurricane events are then applied to measure the hurricane reliability of three topologically distinct transmission networks. The rate of system performance decline is shown to depend on the networks' topological structure. Reliability is found to correlate directly with topological features, such as network meshedness, centrality, and clustering, and the compact irregular ring mesh topology is identified as particularly favorable, which can influence regional lifeline policy for retrofit and hardening activities to withstand hurricane events.
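A Monte Carlo sketch of the fragility-plus-simulation idea. The lognormal fragility form, its parameters, and the wind speeds below are illustrative assumptions, not the paper's calibrated Harris County model.

```python
import math, random

def pole_failure_prob(wind_mps, median=45.0, beta=0.25):
    """Lognormal fragility curve: P(failure | wind speed). The lognormal
    form and parameters are invented placeholders."""
    return 0.5 * (1.0 + math.erf(math.log(wind_mps / median) / (beta * math.sqrt(2))))

def simulate_outages(component_winds, trials=10_000):
    """Monte Carlo estimate of the expected fraction of failed components
    under one simulated hurricane wind field."""
    n = len(component_winds)
    failed = 0
    for _ in range(trials):
        failed += sum(random.random() < pole_failure_prob(w) for w in component_winds)
    return failed / (trials * n)

# e.g. ten components seeing 35-60 m/s gusts
print(simulate_outages([35, 40, 42, 45, 48, 50, 52, 55, 58, 60]))
```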


Languages, Compilers, and Tools for Embedded Systems | 2005

ACME: adaptive compilation made efficient

Keith D. Cooper; Alexander Grosul; Timothy J. Harvey; Steven W. Reeves; Devika Subramanian; Linda Torczon; Todd Waterman

Research over the past five years has shown significant performance improvements using a technique called adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to find the combination of optimizations and parameters that minimizes some performance goal, such as code size or execution time.

Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the large amounts of time that such systems have used to perform the many compilations and executions prohibit most users from adopting these systems, and the complexity inherent in a feedback-driven adaptive system has made it difficult to build and hard to use.

A significant portion of the adaptive compilation process is devoted to multiple executions of the code being compiled. We have developed a technique called virtual execution to address this problem. Virtual execution runs the program a single time and preserves information that allows us to accurately predict the performance of different optimization sequences without running the code again. Our prototype implementation of this technique significantly reduces the time required by our adaptive compiler.

In conjunction with this performance boost, we have developed a graphical user interface (GUI) that provides a controlled view of the compilation process. By providing appropriate defaults, the interface limits the amount of information that the user must provide to get started. At the same time, it lets the experienced user exert fine-grained control over the parameters that control the system.
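Virtual execution in miniature, under invented numbers: one instrumented run yields basic-block execution counts, and candidate optimization sequences are then scored from per-block cost estimates instead of re-running the program. The real system's bookkeeping is considerably more involved.

```python
# Toy profile: basic-block execution counts from one instrumented run.
counts = {"entry": 1, "loop_body": 10_000, "exit": 1}

# Per-block cycle estimates for two candidate optimization sequences.
# These numbers are invented for illustration.
cost_baseline = {"entry": 12, "loop_body": 9, "exit": 5}
cost_unrolled = {"entry": 12, "loop_body": 6, "exit": 5}

def predict_cycles(counts, costs):
    """Reuse one run's block counts to score a re-optimized version
    without executing it again."""
    return sum(counts[b] * costs[b] for b in counts)

print(predict_cycles(counts, cost_baseline))  # score sequence A
print(predict_cycles(counts, cost_unrolled))  # score sequence B
```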


International Conference on Computer Communications | 1998

An efficient multipath forwarding method

Johnny Chen; Peter Druschel; Devika Subramanian

We motivate and formally define dynamic multipath routing and present the problem of packet forwarding in the multipath routing context. We demonstrate that for multipath sets that are suffix matched, forwarding can be efficiently implemented with (1) a per-packet overhead of a small, fixed-length path identifier, and (2) router space overhead linear in K, the number of alternate paths between a source and a destination. We derive multipath forwarding schemes for suffix-matched path sets computed by both decentralized (link-state) and distributed (distance-vector) routing algorithms. We also prove that (1) distributed multipath routing algorithms compute suffix-matched multipath sets, and (2) for the criterion of ranked k-shortest paths, decentralized routing algorithms also yield suffix-matched multipath sets.
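A toy rendering of the forwarding scheme, with invented table contents and identifier arithmetic: the packet carries a small fixed-length path identifier, and each router keeps K alternate next hops per destination, so per-packet work is constant and table space is linear in K.

```python
# Toy multipath forwarding table: for each destination prefix, K alternate
# next hops indexed by the short path identifier carried in the packet.
# The paper's suffix-matched property is what lets downstream routers
# reuse such a short identifier instead of a full source route.
K = 3
table = {
    "10.0.0.0/8": ["routerA", "routerB", "routerC"],  # K alternate paths
}

def forward(dest_prefix, path_id):
    next_hops = table[dest_prefix]          # space linear in K per destination
    return next_hops[path_id % K]           # constant per-packet work

print(forward("10.0.0.0/8", 1))  # -> routerB
```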


International Conference on Robotics and Automation | 2001

Robust localization algorithms for an autonomous campus tour guide

Richard Thrapp; Christian Westbrook; Devika Subramanian

This paper describes a robust localization method for an outdoor robot that gives tours of the Rice University campus. The robot fuses odometry and GPS data using extended Kalman filtering. We propose and experimentally test a technique for handling two types of nonstationarity in GPS data quality: abrupt changes in GPS position readings caused by sudden obstructions to line of sight access to satellites, and more gradual changes caused by disparities in atmospheric conditions. We construct measurement error covariance matrices indexed by number of visible satellites and switch them into the localization computation automatically. The matrices are built by sampling GPS data repeatedly along the route and are updated continuously to handle drift in GPS data quality. We demonstrate that our approach performs better than extended Kalman filters that use only a single error covariance matrix. With a GPS receiver that delivers 1 meter accuracy, we have been able to localize to 40 cm through a challenging route in the Engineering Quadrangle of Rice University.
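A sketch of the covariance-switching mechanism inside one EKF measurement update. The covariance values and satellite-count buckets below are placeholders, not the matrices the paper builds by sampling GPS along the route.

```python
import numpy as np

# Measurement error covariances indexed by visible-satellite count.
# The numbers here are invented for illustration (units: m^2).
R_BY_SATS = {
    4: np.diag([25.0, 25.0]),   # few satellites: noisy fix
    6: np.diag([4.0, 4.0]),
    8: np.diag([1.0, 1.0]),     # good geometry: ~1 m accuracy
}

def ekf_gps_update(x, P, z, n_sats, H=np.eye(2)):
    """One EKF measurement update with the covariance switched in by
    satellite count; a sketch of the mechanism, not the paper's code."""
    R = R_BY_SATS[min(R_BY_SATS, key=lambda k: abs(k - n_sats))]
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

x, P = ekf_gps_update(np.zeros(2), 9 * np.eye(2), np.array([1.2, -0.5]), n_sats=7)
```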


ACM Transactions on Information Systems | 1997

Customizing information capture and access

Daniela Rus; Devika Subramanian

This article presents a customizable architecture for software agents that capture and access information in large, heterogeneous, distributed electronic repositories. The key idea is to exploit underlying structure at various levels of granularity to build high-level indices with task-specific interpretations. Information agents construct such indices and are configured as a network of reusable modules called structure detectors and segmenters. We illustrate our architecture with the design and implementation of smart information filters in two contexts: retrieving stock market data from Internet newsgroups and retrieving technical reports from Internet FTP sites.
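The module idea in miniature, with invented detectors: a segmenter splits raw text into units, a structure detector tags units of interest, and composing the two yields a task-specific index (here, a crude stock-quote filter in the spirit of the newsgroup example).

```python
import re

def paragraph_segmenter(text):
    """Segmenter: split a raw document into paragraph units."""
    return [p for p in text.split("\n\n") if p.strip()]

def stock_quote_detector(segment):
    """Structure detector: flag segments containing a ticker-and-price
    pattern. The regex is a hypothetical stand-in."""
    return re.search(r"\b[A-Z]{1,5}\s+\$?\d+(\.\d+)?\b", segment) is not None

def build_index(docs, segmenter, detector):
    """Chain reusable modules into a task-specific index:
    doc id -> matching segments."""
    return {i: [s for s in segmenter(d) if detector(s)] for i, d in enumerate(docs)}

posts = ["Buy recommendation.\n\nAAPL 182.50 looks cheap.", "Off-topic chatter."]
print(build_index(posts, paragraph_segmenter, stock_quote_detector))
```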


International Conference on Machine Learning | 1989

A theory of justified reformulations

Michael Genesereth; Devika Subramanian

Present-day systems, intelligent or otherwise, are limited by the conceptualizations of the world given to them by their designers. This thesis explores issues in the construction of adaptive systems that can incrementally reformulate their conceptualizations to achieve computational efficiency or descriptional adequacy. A detailed account of a special case of the reformulation problem is presented: we reconceptualize a knowledge base in terms of new abstract objects and relations in order to make the computation of a given class of queries more efficient. Automatic reformulation will not be possible unless a reformulator can justify a shift in conceptualization. We present a new class of meta-theoretical justifications for a reformulation, called irrelevance explanations. A logical irrelevance explanation proves that certain distinctions made in the formulation are not necessary for the computation of a given class of problems. A computational irrelevance explanation proves that some distinctions are not useful with respect to a given problem solver for a given class of problems. Inefficient formulations make irrelevant distinctions, and the irrelevance principle logically minimizes a formulation by removing all facts and distinctions in it that are not needed for the specified goals. The automation of the irrelevance principle is demonstrated with the generation of abstractions from first principles. We also describe the implementation of an irrelevance reformulator and outline experimental results that confirm our theory.
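A crude, hypothetical rendering of the irrelevance principle as backward reachability over Horn rules: keep only predicates that can matter to the goal queries and drop the rest. The paper's irrelevance explanations are proofs, not this reachability check.

```python
# Rules map a head predicate to the bodies that can derive it.
# The rule set is invented for illustration.
rules = {
    "route": [("road", "open")],
    "open": [("season",)],
    "decor": [("paint",)],     # never needed for "route" queries
}

def relevant(goals):
    """Walk rules backward from the goal predicates; everything not
    reached is irrelevant to this query class and can be dropped."""
    keep, stack = set(), list(goals)
    while stack:
        p = stack.pop()
        if p in keep:
            continue
        keep.add(p)
        for body in rules.get(p, []):
            stack.extend(body)
    return keep

print(relevant({"route"}))     # {'route', 'road', 'open', 'season'}
```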


Research in Engineering Design | 1995

Kinematic synthesis with configuration spaces

Devika Subramanian; Cheuk-San (Edward) Wang

This paper introduces a new approach to the conceptual design of mechanical systems from qualitative specifications of behavior. The power of the approach stems from the integration of techniques in qualitative physics and constraint programming. We illustrate the approach with an effective kinematic synthesis method that reasons with qualitative representations of configuration spaces using constraint programming.
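A generate-and-test miniature of the synthesis idea, with an invented component library and a one-bit qualitative "sense" algebra standing in for the paper's configuration-space reasoning and its constraint-programming machinery.

```python
from itertools import product

# Each primitive maps an input motion type to an output motion type and
# either preserves or reverses the sense of motion. All entries invented.
LIBRARY = {
    "gear_pair":   ("rotation", "rotation", "reverses"),
    "rack_pinion": ("rotation", "translation", "same"),
    "belt":        ("rotation", "rotation", "same"),
}

def synthesize(spec_in, spec_out, spec_sense, depth=2):
    """Enumerate component chains whose composed qualitative behavior
    matches the specification."""
    for chain in product(LIBRARY, repeat=depth):
        motion, sense = spec_in, "same"
        ok = True
        for name in chain:
            m_in, m_out, s = LIBRARY[name]
            if m_in != motion:                 # motion types must chain up
                ok = False
                break
            motion = m_out
            sense = "same" if sense == s else "reverses"  # compose parity
        if ok and motion == spec_out and sense == spec_sense:
            yield chain

print(next(synthesize("rotation", "translation", "reverses")))
```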

Collaboration


Dive into Devika Subramanian's collaborations.

Top Co-Authors

Daniela Rus

Massachusetts Institute of Technology


Bradley M. Broom

University of Texas MD Anderson Cancer Center
