
Publication


Featured research published by Ashwin Ram.


International Conference on Case-Based Reasoning | 2007

Case-Based Planning and Execution for Real-Time Strategy Games

Santiago Ontañón; Kinshuk Mishra; Neha Sugandh; Ashwin Ram

Artificial Intelligence techniques have been successfully applied to several computer games. However, in some kinds of computer games, such as real-time strategy (RTS) games, traditional artificial intelligence techniques fail to play at a human level because of the vast search spaces they entail. In this paper we present a real-time case-based planning and execution approach designed to deal with RTS games. We propose to extract behavioral knowledge from expert demonstrations in the form of individual cases. This knowledge can be reused via a case-based behavior generator that proposes behaviors to achieve the specific open goals in the current plan. Specifically, we applied our technique to the WARGUS domain with promising results.
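The case-based behavior generation described above can be sketched as a retrieve-by-goal, rank-by-state-similarity step. Everything below (the Case structure, the similarity metric, the function names) is an illustrative assumption, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    goal: str        # open goal the behavior achieves, e.g. "build-base"
    features: list   # game-state features seen in the expert trace
    behavior: str    # behavior extracted from the expert demonstration

def similarity(a, b):
    # Inverse Euclidean distance: closer game states score higher.
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def retrieve_behavior(library, open_goal, state) -> Optional[str]:
    # Keep only cases whose goal matches the open goal, then pick the
    # case whose recorded game state is most similar to the current one.
    candidates = [c for c in library if c.goal == open_goal]
    if not candidates:
        return None
    return max(candidates, key=lambda c: similarity(c.features, state)).behavior
```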


Artificial Intelligence | 1997

Continuous case-based reasoning

Ashwin Ram; Juan Carlos Santamaria

Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as on-line sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task. This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (self-improving navigation system). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research.
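The continuous reuse of stored experience could look roughly like the following distance-weighted blend of behavior parameters; the representation and the blending rule are assumptions for illustration, not SINS itself.

```python
# Illustrative sketch only: select behavior parameters continuously from
# stored sensorimotor cases rather than picking one discrete case.

def blend_parameters(cases, sensors):
    # cases: list of (sensor_vector, parameter_vector) pairs.
    # Weight each stored case by proximity to the current sensor reading
    # and return a distance-weighted average of its parameters.
    weights, blended = [], None
    for sv, pv in cases:
        d = sum((s - x) ** 2 for s, x in zip(sensors, sv)) ** 0.5
        w = 1.0 / (1.0 + d)
        weights.append(w)
        if blended is None:
            blended = [w * p for p in pv]
        else:
            blended = [b + w * p for b, p in zip(blended, pv)]
    total = sum(weights)
    return [b / total for b in blended]
```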


The Journal of the Learning Sciences | 1991

A Theory of Questions and Question Asking

Ashwin Ram

This article focuses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner's model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question asking, motivated both by cognitive and computational considerations, and I discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.


Artificial Intelligence | 1999

Introspective multistrategy learning: on the construction of learning strategies

Michael T. Cox; Ashwin Ram

A central problem in multistrategy learning systems is the selection and sequencing of machine learning algorithms for particular situations. This is typically done by the system designer who analyzes the learning task and implements the appropriate algorithm or sequence of algorithms for that task. We propose a solution to this problem which enables an AI system with a library of machine learning algorithms to select and sequence appropriate algorithms autonomously. Furthermore, instead of relying on the system designer or user to provide a learning goal or target concept to the learning system, our method enables the system to determine its learning goals based on analysis of its successes and failures at the performance task. The method involves three steps: Given a performance failure, the learner examines a trace of its reasoning prior to the failure to diagnose what went wrong (blame assignment); given the resultant explanation of the reasoning failure, the learner posts explicitly represented learning goals to change its background knowledge (deciding what to learn); and given a set of learning goals, the learner uses nonlinear planning techniques to assemble a sequence of machine learning algorithms, represented as planning operators, to achieve the learning goals (learning-strategy construction). In support of these operations, we define the types of reasoning failures, a taxonomy of failure causes, a second-order formalism to represent reasoning traces, a taxonomy of learning goals that specify desired change to the background knowledge of a system, and a declarative task-formalism representation of learning algorithms. We present the Meta-AQUA system, an implemented multistrategy learner that operates in the domain of story understanding.
Extensive empirical evaluations of Meta-AQUA show that it performs significantly better in a deliberative, planful mode than in a reflexive mode in which learning goals are ablated and, furthermore, that the arbitrary ordering of learning algorithms can lead to worse performance than no learning at all. We conclude that explicit representation and sequencing of learning goals is necessary for avoiding negative interactions between learning algorithms that can lead to less effective learning.
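The three-step loop (blame assignment, deciding what to learn, learning-strategy construction) can be caricatured as a pair of table lookups plus operator selection. The failure causes, learning goals, and algorithm names below are invented placeholders, not Meta-AQUA's actual taxonomies.

```python
# Step 1: blame assignment -- map a failure symptom to an explanation.
FAILURE_CAUSES = {"wrong-prediction": "incorrect-domain-rule",
                  "impasse": "missing-knowledge"}

# Step 2: deciding what to learn -- which learning goals a cause posts.
GOALS_FOR_CAUSE = {"incorrect-domain-rule": ["revise-rule"],
                   "missing-knowledge": ["acquire-concept"]}

# Step 3: strategy construction -- which algorithm achieves which goal.
OPERATORS = {"revise-rule": "explanation-based-refinement",
             "acquire-concept": "inductive-concept-learning"}

def learn_from_failure(symptom: str) -> list:
    cause = FAILURE_CAUSES[symptom]          # blame assignment
    goals = GOALS_FOR_CAUSE[cause]           # deciding what to learn
    return [OPERATORS[g] for g in goals]     # learning-strategy construction
```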


Machine Learning | 1993

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

Ashwin Ram

This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good “lessons” to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
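The incremental cycle of learning, indexing, and refining cases might be sketched as follows; the memory structure and all names are invented for illustration, not the program described in the article.

```python
# Purely illustrative sketch of the incremental cycle: (a) learn a new
# case when none applies, (b) index it, (c) refine it with each new
# situation it is used to understand.

class CaseMemory:
    def __init__(self):
        self.cases = {}          # index (a hashable cue) -> case body

    def retrieve(self, cue):
        return self.cases.get(cue)

    def understand(self, cue, situation):
        case = self.retrieve(cue)
        if case is None:
            # (a) + (b): no applicable case -- learn and index a new one.
            self.cases[cue] = {"examples": [situation]}
            return self.cases[cue]
        # (c): refine the existing case with the new situation.
        case["examples"].append(situation)
        return case
```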


Computational Intelligence | 2010

On-Line Case-Based Planning

Santi Ontañón; Kinshuk Mishra; Neha Sugandh; Ashwin Ram

Some domains, such as real-time strategy (RTS) games, pose several challenges to traditional planning and machine learning techniques. In this article, we present a novel on-line case-based planning architecture that addresses some of these problems. Our architecture addresses issues of plan acquisition, on-line plan execution, interleaved planning and execution, and on-line plan adaptation. We also introduce the Darmok system, which implements this architecture to play Wargus (an open source clone of the well-known RTS game Warcraft II). We present an empirical evaluation of the performance of Darmok and show that it successfully learns to play the Wargus game.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2008

Exploring question subjectivity prediction in community QA

Baoli Li; Yandong Liu; Ashwin Ram; Ernest V. Garcia; Eugene Agichtein

In this paper we begin to investigate how to automatically determine the subjectivity orientation of questions posted by real users in community question answering (CQA) portals. Subjective questions seek answers containing private states, such as personal opinion and experience. In contrast, objective questions request objective, verifiable information, often with support from reliable sources. Knowing the question orientation would be helpful not only for evaluating answers provided by users, but also for guiding the CQA engine to process questions more intelligently. Our experiments on Yahoo! Answers data show that our method exhibits promising performance.
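As a toy illustration of the classification task only (not the paper's trained classifier), a naive keyword heuristic for subjectivity orientation might look like this; the cue list is an invented assumption.

```python
# Deliberately simplistic: flag a question as subjective when it
# contains cue words that typically solicit opinions or experience.
SUBJECTIVE_CUES = {"best", "favorite", "recommend", "opinion", "should"}

def predict_orientation(question: str) -> str:
    words = {w.strip("?,.!").lower() for w in question.split()}
    return "subjective" if words & SUBJECTIVE_CUES else "objective"
```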


International Conference on Robotics and Automation | 1992

Learning momentum: online performance enhancement for reactive systems

Russell J. Clark; Ronald C. Arkin; Ashwin Ram

The authors describe a reactive robotic control system which incorporates aspects of machine learning to improve the system's ability to navigate successfully in unfamiliar environments. This system overcomes limitations of completely reactive systems by exercising online performance enhancement without the need for high-level planning. The goal of the learning system is to give the autonomous robot the ability to adjust the schema control parameters in an unstructured dynamic environment. The results of a successful implementation that learns to navigate out of a box canyon are presented. This system never resorts to a high-level planner, but instead learns continuously by adjusting gains based on the progress made so far. The system is successful because it is able to improve its performance in reaching a goal in a previously unfamiliar and dynamic world.
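The gain-adjustment idea (reinforce parameters while progress toward the goal is being made, back them off when progress stalls, e.g. in a box canyon) can be sketched as a one-line update rule; the step size and bounds below are illustrative assumptions, not the paper's values.

```python
def adjust_gain(gain: float, progress: float,
                step: float = 0.1, lo: float = 0.0, hi: float = 2.0) -> float:
    # progress > 0: the current gains are working, so reinforce them.
    # progress <= 0: stalled, so back the gain off and let other
    # behaviors dominate. Clamp to keep the gain in a sane range.
    if progress > 0:
        gain += step
    else:
        gain -= step
    return max(lo, min(hi, gain))
```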


Applied Intelligence | 1992

The use of explicit goals for knowledge to guide inference and learning

Ashwin Ram; Lawrence Hunter

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: how can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general, and where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies: a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
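A minimal sketch of the control idea: use explicit knowledge goals to allocate a limited inference budget to the most valuable candidate inferences. The utility scores and the budget are invented for illustration.

```python
import heapq

def controlled_inference(candidates, knowledge_goals, budget):
    # candidates: list of (inference_name, goal_it_would_answer) pairs.
    # Score each candidate higher when it serves an active knowledge
    # goal, then spend the limited inference budget on the best ones.
    scored = [(-1.0 if goal in knowledge_goals else -0.1, name)
              for name, goal in candidates]
    heapq.heapify(scored)  # min-heap on negated utility
    return [heapq.heappop(scored)[1]
            for _ in range(min(budget, len(scored)))]
```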


Computational Intelligence | 2010

Drama Management and Player Modeling for Interactive Fiction Games

Manu Sharma; Santiago Ontañón; Manish Mehta; Ashwin Ram

A growing research community is working toward employing drama management components in story-based games. These components gently guide the story toward a narrative arc that improves the player's gaming experience. In this article we evaluate a novel drama management approach deployed in an interactive fiction game called Anchorhead. This approach uses players' feedback as the basis for guiding the personalization of the interaction. The results indicate that adding our Case-based Drama manaGer (C-DraGer) to the game guides the players through the interaction and provides a better overall player experience. Unlike previous approaches to drama management, this article focuses on exhibiting the success of our approach by evaluating results using human players in a real game implementation. Based on this work, we report several insights on drama management which were possible only due to an evaluation with real players.

Collaboration


Dive into Ashwin Ram's collaborations.

Top co-authors:

Manish Mehta, Georgia Institute of Technology
Juan Carlos Santamaria, Georgia Institute of Technology
John T. Stasko, Georgia Institute of Technology
Mark Guzdial, Georgia Institute of Technology
Richard Catrambone, Georgia Institute of Technology
Ronald C. Arkin, Georgia Institute of Technology