Vadim Bulitko
University of Alberta
Publications
Featured research published by Vadim Bulitko.
Journal of Artificial Intelligence Research | 2006
Vadim Bulitko; Greg Lee
Real-time search methods are suited for tasks in which the agent interacts with an initially unknown environment in real time. In such simultaneous planning and learning problems, the agent has to select its actions in a limited amount of time, while sensing only a local part of the environment centered at the agent's current location. Real-time heuristic search agents select actions using a limited lookahead search, evaluating the frontier states with a heuristic function. Over repeated experiences, they refine heuristic values of states to avoid infinite loops and to converge to better solutions. The wide spread of such settings in autonomous software and hardware agents has led to an explosion of real-time search algorithms over the last two decades. Not only is a potential user confronted with a hodgepodge of algorithms, but he also faces the choice of the control parameters they use. In this paper we address both problems. The first contribution is the introduction of a simple three-parameter framework (named LRTS) which extracts the core ideas behind many existing algorithms. We then prove that the LRTA*, ε-LRTA*, SLA*, and γ-Trap algorithms are special cases of our framework. Thus, they are unified and extended with additional features. Second, we prove completeness and convergence of any algorithm covered by the LRTS framework. Third, we prove several upper bounds relating the control parameters and solution quality. Finally, we analyze the influence of the three control parameters empirically in the realistic scalable domains of real-time navigation on initially unknown maps from a commercial role-playing game as well as routing in ad hoc sensor networks.
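The core loop that LRTS generalizes can be sketched as a depth-1 LRTA* agent on a grid: move greedily under the heuristic, and raise the heuristic value of each visited state so the agent cannot loop forever. This is a minimal illustrative sketch, not the paper's framework; the grid, `neighbors`, and `manhattan` helpers are assumptions for the example.

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def lrta_star(start, goal, neighbors, max_steps=10_000):
    """Depth-1 LRTA*: act greedily, updating the heuristic of each
    visited state (the learning rule that avoids infinite loops)."""
    h = {}  # learned heuristic values, defaulting to Manhattan distance

    def H(s):
        return h.get(s, manhattan(s, goal))

    s = start
    path = [s]
    for _ in range(max_steps):
        if s == goal:
            return path
        # depth-1 lookahead: evaluate the frontier with unit move costs
        best = min(neighbors(s), key=lambda n: 1 + H(n))
        h[s] = max(H(s), 1 + H(best))  # heuristic update ("learning")
        s = best
        path.append(s)
    return None  # step budget exhausted

# 4-connected empty 3x3 grid as a toy environment
def neighbors(s):
    x, y = s
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

route = lrta_star((0, 0), (2, 2), neighbors)
```

In LRTS terms, the lookahead depth, the amount of learning allowed per move, and the optimality weight on the heuristic are the three control parameters; the sketch above fixes all three at their simplest settings.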
Journal of Artificial Intelligence Research | 2007
Vadim Bulitko; Nathan R. Sturtevant; Jieshan Lu; Timothy Yau
Real-time heuristic search methods are used by situated agents in applications that require the amount of planning per move to be independent of the problem size. Such agents plan only a few actions at a time in a local search space and avoid getting trapped in local minima by improving their heuristic function over time. We extend a wide class of real-time search algorithms with automatically built state abstraction and prove completeness and convergence of the resulting family of algorithms. We then analyze the impact of abstraction in an extensive empirical study in real-time pathfinding. Abstraction is found to improve efficiency by providing better trade-offs between planning time, learning speed and other negatively correlated performance measures.
Journal of Artificial Intelligence Research | 2008
Vadim Bulitko; Mitja Luštrek; Jonathan Schaeffer; Yngvi Björnsson; Sverrir Sigmundarson
Real-time heuristic search is a challenging type of agent-centered search because the agent's planning time per action is bounded by a constant independent of problem size. A common problem that imposes such restrictions is pathfinding in modern computer games where a large number of units must plan their paths simultaneously over large maps. Common search algorithms (e.g., A*, IDA*, D*, ARA*, AD*) are inherently not real-time and may lose completeness when a constant bound is imposed on per-action planning time. Real-time search algorithms retain completeness but frequently produce unacceptably suboptimal solutions. In this paper, we extend classic and modern real-time search algorithms with an automated mechanism for dynamic depth and subgoal selection. The new algorithms remain real-time and complete. On large computer game maps, they find paths within 7% of optimal while on average expanding roughly a single state per action. This is nearly a three-fold improvement in suboptimality over the existing state-of-the-art algorithms and, at the same time, a 15-fold improvement in the amount of planning per action.
Computational Intelligence and Games | 2008
Stephen Hladky; Vadim Bulitko
A well-known Artificial Intelligence (AI) problem in video games is designing AI-controlled humanoid characters. It is desirable for these characters to appear both skillful and believably human-like. Many games address the former objective by providing their agents with unfair advantages. Although challenging to play against, these agents are frustrating to humans who perceive the AI to be cheating. In this paper we evaluate hidden semi-Markov models and particle filters as a means for predicting opponent positions. Our results show that these models can perform with similar or better accuracy than the average human expert in the game Counter-Strike: Source. Furthermore, the mistakes these models make are more human-like than perfect predictions.
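The particle-filter approach to position prediction can be illustrated with a toy example: a hidden opponent on a 1-D corridor, a random-walk motion model, and noisy sightings as observations. The motion and observation models here are illustrative stand-ins, not the game-specific models evaluated in the paper.

```python
import random

def predict(particles, corridor_len):
    """Motion update: each particle takes a random step in the corridor."""
    return [max(0, min(corridor_len - 1, p + random.choice((-1, 0, 1))))
            for p in particles]

def weight(particles, observation):
    """Observation update: weight particles by closeness to a noisy sighting."""
    return [1.0 / (1 + abs(p - observation)) for p in particles]

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
corridor = 20
particles = [random.randrange(corridor) for _ in range(500)]
for sighting in (5, 6, 7):  # opponent glimpsed moving right
    particles = predict(particles, corridor)
    particles = resample(particles, weight(particles, sighting))

estimate = sum(particles) / len(particles)  # predicted opponent position
```

Because the estimate is a distribution over positions rather than a single point, the model's mistakes degrade gracefully, which is one reason such predictions can look more human-like than perfect ones.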
Journal of Artificial Intelligence Research | 2010
Vadim Bulitko; Yngvi Björnsson; Ramon Lawrence
Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of problem size. As a result, they scale up well as problems become larger. This property would make them well suited for video games where Artificial Intelligence controlled agents must react quickly to user commands and to other agents' actions. On the downside, real-time search algorithms employ learning methods that frequently lead to poor solution quality and cause the agent to appear irrational by re-visiting the same problem states repeatedly. The situation changed recently with a new algorithm, D LRTA*, which attempted to eliminate learning by automatically selecting subgoals. D LRTA* is well poised for video games, except it has a complex and memory-demanding pre-computation phase during which it builds a database of subgoals. In this paper, we propose a simpler and more memory-efficient way of pre-computing subgoals, thereby eliminating the main obstacle to applying state-of-the-art real-time search methods in video games. The new algorithm solves a number of randomly chosen problems off-line, compresses the solutions into a series of subgoals and stores them in a database. When presented with a novel problem on-line, it queries the database for the most similar previously solved case and uses its subgoals to solve the problem. In the domain of pathfinding on four large video game maps, the new algorithm delivers solutions eight times better while using 57 times less memory and requiring 14% less pre-computation time.
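The off-line pipeline the abstract describes (solve sample problems, compress each solution into sparse subgoals, answer new queries from the most similar stored case) can be sketched as follows. All names are illustrative; where the paper's compression keeps only subgoals the agent can reach without learning, this sketch simply keeps every k-th state.

```python
def compress(path, k=3):
    """Compress a solved path into sparse subgoals (every k-th state plus
    the goal); a stand-in for the paper's compression criterion."""
    subgoals = path[k::k]
    if not subgoals or subgoals[-1] != path[-1]:
        subgoals.append(path[-1])
    return subgoals

def build_database(solved_cases, k=3):
    """Off-line phase: store (start, goal) -> subgoals for each solved case."""
    return [((p[0], p[-1]), compress(p, k)) for p in solved_cases]

def query(database, start, goal):
    """On-line phase: return subgoals of the most similar stored case."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    _, subgoals = min(
        database,
        key=lambda case: dist(case[0][0], start) + dist(case[0][1], goal))
    return subgoals

cases = [[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)],
         [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]]
db = build_database(cases)
subgoals = query(db, (0, 0), (4, 1))  # nearest stored goal is (4, 0)
```

The agent then runs its real-time search toward each subgoal in turn, which keeps the heuristic informative locally and avoids the state re-visitation that plain learning causes.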
Archive | 2011
Mark O. Riedl; David Thue; Vadim Bulitko
Much research on artificial intelligence in games has been devoted to creating opponents that play competently against human players. We argue that the traditional goal of AI in games, namely to win the game, is but one of several interesting goals to pursue. We promote the alternative goal of making the human player's play experience "better": AI systems in games should reason about how to deliver the best possible experience within the context of the game. The key insight we offer is that approaching AI reasoning for games as "storytelling reasoning" makes this goal much more attainable. We present a framework for creating interactive narratives for entertainment purposes based on a type of agent called an experience manager. An experience manager is an intelligent computer agent that manipulates a virtual world to dynamically adapt the narrative content the player experiences, based on his or her actions and inferences about his or her preferred style of play. Following a theoretical perspective on game AI as a form of storytelling, we discuss the implications of such a perspective in the context of several AI technological approaches.
International Conference on Interactive Digital Storytelling | 2008
David Thue; Vadim Bulitko; Marcia L. Spetch
Of all forms of storytelling, interactive storytelling presents authors with a unique opportunity: while most traditional stories must rely on having general high appeal, the nature of interactive stories to encourage audience interaction allows aspects of each individual's state to be automatically inferred. Given such information, an author's decisions would become more informed, and his ability to affect the audience would be improved. In this paper, we present an analysis of the decision-making process in interactive storytelling, and construct a method for characterizing storytelling systems based on features of their design. We demonstrate our method by comparing four recently published systems, and review related literature on inferring player information. Finally, we present Delayed Authoring, a new perspective on the design of interactive storytelling systems which takes advantage of their opportunity to make stories player-specific.
Artificial Intelligence | 2003
Vadim Bulitko; David C. Wilkins
This paper presents a formalism called Time Interval Petri Nets (TIPNs), which are designed to support a qualitative simulation of temporal concurrent processes. One of the key features of TIPNs is a uniform use of time intervals throughout the model. This enables a natural and efficient representation of temporal uncertainty in inputs, outputs, and intermediate states of the qualitative simulation. This is required because the exact time of key events, such as the start time of a fire crisis, is typically not known with certainty. Likewise, output conclusions of the qualitative simulation include the earliest time and guaranteed time of key events that can be used by a decision maker to select the most appropriate action. Results are described of a TIPN-based qualitative simulator constructed in the domain of ship damage control. The simulator was created to replace an existing quantitative simulator which was too slow to support envisionment-based real-time decision making in this domain. The experimental results showed a speedup of four to five orders of magnitude, which enables hyper-real-time qualitative prediction of the consequences of multiple competing actions. An automated shipboard damage control decision-making system incorporating a TIPN-based qualitative simulator achieved a 318% improvement over human subject matter experts in a large-scale simulated exercise of over 500 scenarios.
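The uniform use of time intervals can be illustrated with a minimal sketch in the spirit of TIPNs (not the paper's formalism itself): event times are [earliest, latest] intervals, and firing a transition adds its delay interval, so earliest and latest bounds propagate independently. The scenario values below are invented for the example.

```python
def add_intervals(a, b):
    """Propagate temporal uncertainty: earliest times add with earliest
    times, latest with latest."""
    return (a[0] + b[0], a[1] + b[1])

# Fire start known only to within [0, 5] minutes; the fire spreads to the
# next compartment after a delay of [2, 4] minutes; smoke then reaches a
# sensor after a further [1, 3] minutes.
fire_start = (0, 5)
spread = add_intervals(fire_start, (2, 4))   # fire in next compartment
alarm = add_intervals(spread, (1, 3))        # sensor triggers
```

The resulting interval on `alarm` gives the decision maker exactly the two quantities the abstract mentions: the earliest possible time of the event and the time by which it is guaranteed to have occurred.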
International Joint Conference on Artificial Intelligence | 2011
Nathan R. Sturtevant; Vadim Bulitko
Real-time agent-centric algorithms have been used for learning and solving problems since the introduction of the LRTA* algorithm in 1990. In this time period, numerous variants have been produced; however, they have generally followed the same approach, varying parameters to learn a heuristic which estimates the remaining cost to arrive at a goal state. Recently, a different approach, RIBS, was suggested which, instead of learning costs to the goal, learns costs from the start state. RIBS can solve some problems faster, but in other problems has poor performance. We present a new algorithm, f-cost Learning Real-Time A* (f-LRTA*), which combines both approaches, simultaneously learning distances from the start and heuristics to the goal. An empirical evaluation demonstrates that f-LRTA* outperforms both RIBS and LRTA*-style approaches in a range of scenarios.
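The combination of the two kinds of learning can be sketched with two tables: a g-table learned outward from the start (RIBS-style) and an h-table learned toward the goal (LRTA*-style). A state whose f = g + h exceeds the cost of the best solution found so far cannot lie on a better path and may be pruned. The tables and update rule below are illustrative assumptions, not the paper's algorithm.

```python
def update_g(g, state, parent, edge_cost=1):
    """Propagate learned cost-from-start forward along the agent's moves."""
    g[state] = min(g.get(state, float("inf")), g[parent] + edge_cost)

def prune(state, g, h, best_solution_cost):
    """A state is provably off any improving path if f = g + h exceeds
    the best known solution cost."""
    return g.get(state, 0) + h.get(state, 0) > best_solution_cost

g = {(0, 0): 0}          # start state has cost 0 from the start
update_g(g, (1, 0), (0, 0))
h = {(1, 0): 3}          # learned heuristic toward the goal
pruned = prune((1, 0), g, h, best_solution_cost=3)  # f = 1 + 3 = 4 > 3
```

Learning g and h together is what lets f-LRTA* discard states neither RIBS nor LRTA* alone could rule out.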
IEEE Spectrum | 2008
Jonathan Schaeffer; Vadim Bulitko; Michael Buro
The main challenge in making video games is to make computer-generated characters, dubbed bots, act realistically. They must, of course, look good and move naturally. But, ideally, they should also be able to engage in believable conversations, plan their actions, find their way around virtual worlds, and learn from their mistakes. That is, they need to be smart. Today many video games create only an illusion of intelligence, using a few programming tricks. But in the not-so-distant future, game bots will routinely use sophisticated AI techniques to shape their behavior. We and our colleagues in the University of Alberta GAMES (game-playing, analytical methods, minimax search and empirical studies) research group, in Edmonton, Canada, have been working to help bring about such a revolution.