Publication


Featured research published by Yaxin Liu.


Annals of Mathematics and Artificial Intelligence | 2001

Efficient and inefficient ant coverage methods

Sven Koenig; Boleslaw K. Szymanski; Yaxin Liu

Ant robots are simple creatures with limited sensing and computational capabilities. They have the advantage that they are easy to program and cheap to build. This makes it feasible to deploy groups of ant robots and take advantage of the resulting fault tolerance and parallelism. We study, both theoretically and in simulation, the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance. Ant robots cannot use conventional planning methods due to their limited sensing and computational capabilities. To overcome these limitations, we study navigation methods that are based on real-time (heuristic) search and leave markings in the terrain, similar to what real ants do. These markings can be sensed by all ant robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, and cannot plan complete paths. The ant robots do not even need to be localized, which completely eliminates the need to solve difficult and time-consuming localization problems. We study two simple real-time search methods that differ only in how the markings are updated. We show experimentally that both real-time search methods robustly cover terrain even if the ant robots are moved without realizing this (say, by people running into them), some ant robots fail, and some markings get destroyed. Both real-time search methods are algorithmically similar, and our experimental results indicate that their cover time is similar in some terrains. Our analysis is therefore surprising. We show that the cover time of ant robots that use one of the real-time search methods is guaranteed to be polynomial in the number of locations, whereas the cover time of ant robots that use the other real-time search method can be exponential in (the square root of) the number of locations even in simple terrains that correspond to (planar) undirected trees.
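
The abstract does not name the two marking-update rules, so the sketch below assumes they are the two rules commonly studied in this line of work: a visit-count increment (node counting) and an LRTA*-style value update. It is an illustrative Python sketch of a single ant robot on a grid, not the authors' implementation.

    import random

    def neighbors(cell):
        """4-connected grid neighborhood."""
        x, y = cell
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def cover(free_cells, start, rule, max_steps=1_000_000):
        """One ant robot covers the terrain by always moving to the adjacent cell
        with the smallest marking; 'rule' selects how the marking of the cell it
        leaves is updated."""
        mark = {c: 0 for c in free_cells}              # markings left in the terrain
        covered, s = {start}, start
        for step in range(max_steps):
            nbrs = [n for n in neighbors(s) if n in free_cells]
            nxt = min(nbrs, key=lambda n: (mark[n], random.random()))  # break ties randomly
            if rule == "node_counting":                # count visits to the departed cell
                mark[s] += 1
            else:                                      # "lrta_star": one plus smallest neighbor marking
                mark[s] = 1 + min(mark[n] for n in nbrs)
            s = nxt
            covered.add(s)
            if len(covered) == len(free_cells):
                return step + 1                        # cover time in moves
        return None                                    # terrain not covered within max_steps

Under this reading, the value-update rule would be the method with the polynomial cover-time guarantee, and the counting rule the one that admits the exponential worst case.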


Adaptive Agents and Multi-Agent Systems | 2001

Terrain coverage with ant robots: a simulation study

Sven Koenig; Yaxin Liu

In this paper, we study a simple means for coordinating teams of simple agents. In particular, we study ant robots and how they can cover terrain once or repeatedly by leaving markings in the terrain, similar to what ants do. These markings can be sensed by all robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, and cannot plan complete paths. The robots do not even need to be localized, which completely eliminates the need to solve difficult and time-consuming localization problems. In this paper, we implement ant robots with real-time heuristic search methods and present a simulation study of several such methods to examine their properties for terrain coverage. Our experiments show that all of the real-time heuristic search methods robustly cover terrain even if the robots are moved without realizing this, some robots fail, and some markings get destroyed. These results demonstrate that terrain coverage with real-time heuristic search methods is an interesting alternative to more conventional terrain coverage methods.
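
As a companion to the single-robot sketch above, here is a hypothetical multi-robot simulation loop in the same spirit: all robots read and write one shared table of markings (standing in for marks sensed in the terrain), and with a small probability a robot is teleported to model being moved without realizing it. The update rule, probabilities, and loop structure are illustrative assumptions, not the setup used in the paper.

    import random

    def neighbors(cell):
        x, y = cell
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def simulate(free_cells, starts, steps=100_000, p_displace=0.01):
        """Several ant robots cover the terrain together; they never communicate
        directly and interact only through the shared markings."""
        mark = {c: 0 for c in free_cells}
        robots = list(starts)
        covered = set(starts)
        for _ in range(steps):
            for i, s in enumerate(robots):
                nbrs = [n for n in neighbors(s) if n in free_cells]
                mark[s] = 1 + min(mark[n] for n in nbrs)    # LRTA*-style marking update (assumed)
                s = min(nbrs, key=lambda n: (mark[n], random.random()))
                if random.random() < p_displace:            # robot moved by an outside force
                    s = random.choice(list(free_cells))
                robots[i] = s
                covered.add(s)
        return len(covered) / len(free_cells)               # fraction of the terrain covered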


AI Magazine | 2004

Incremental heuristic search in AI

Sven Koenig; Maxim Likhachev; Yaxin Liu; David Furcy

Incremental search reuses information from previous searches to find solutions to a series of similar search problems potentially faster than is possible by solving each search problem from scratch. This is important because many AI systems have to adapt their plans continuously to changes in (their knowledge of) the world. In this article, we give an overview of incremental search, focusing on LIFELONG PLANNING A*, and outline some of its possible applications in AI.
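
A minimal Python sketch of the Lifelong Planning A* idea the article surveys: each vertex keeps a g-value and a one-step lookahead rhs-value, only locally inconsistent vertices sit on the priority queue, and after an edge cost changes only the affected vertices are updated before replanning. The graph encoding, class layout, and lazy queue handling are implementation choices of this sketch, not taken from the article.

    import heapq
    from math import inf

    class LPAStar:
        """Lifelong Planning A* on an explicit undirected graph: graph[u] maps each
        neighbor v to the edge cost c(u, v); h(s) is a consistent heuristic estimate
        of the cost from s to the goal."""

        def __init__(self, graph, start, goal, h):
            self.graph, self.start, self.goal, self.h = graph, start, goal, h
            self.g = {s: inf for s in graph}        # best cost found so far
            self.rhs = {s: inf for s in graph}      # one-step lookahead cost
            self.rhs[start] = 0
            self.queue = []                         # lazy priority queue of (key, vertex)
            heapq.heappush(self.queue, (self.key(start), start))

        def key(self, s):
            k = min(self.g[s], self.rhs[s])
            return (k + self.h(s), k)

        def update_vertex(self, u):
            if u != self.start:
                self.rhs[u] = min((self.g[v] + c for v, c in self.graph[u].items()),
                                  default=inf)
            if self.g[u] != self.rhs[u]:            # locally inconsistent: (re)queue it
                heapq.heappush(self.queue, (self.key(u), u))

        def compute_shortest_path(self):
            while self.queue:
                k, u = self.queue[0]
                if k >= self.key(self.goal) and self.g[self.goal] == self.rhs[self.goal]:
                    break                           # goal is consistent and cannot improve
                heapq.heappop(self.queue)
                if k != self.key(u) or self.g[u] == self.rhs[u]:
                    continue                        # stale queue entry
                if self.g[u] > self.rhs[u]:         # overconsistent: settle u
                    self.g[u] = self.rhs[u]
                    for v in self.graph[u]:
                        self.update_vertex(v)
                else:                               # underconsistent: invalidate and repair
                    self.g[u] = inf
                    for v in list(self.graph[u]) + [u]:
                        self.update_vertex(v)
            return self.g[self.goal]

        def change_edge_cost(self, u, v, cost):
            """Report a changed edge cost, then replan reusing previous search results."""
            self.graph[u][v] = self.graph[v][u] = cost
            self.update_vertex(u)
            self.update_vertex(v)
            return self.compute_shortest_path()

    # Illustrative use on a hypothetical three-vertex graph:
    # lpa = LPAStar({'A': {'B': 1}, 'B': {'A': 1, 'G': 1}, 'G': {'B': 1}}, 'A', 'G', h=lambda s: 0)
    # lpa.compute_shortest_path()        # -> 2
    # lpa.change_edge_cost('B', 'G', 5)  # only affected vertices are reprocessed -> 6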


Journal of Experimental and Theoretical Artificial Intelligence | 2002

The interaction of representations and planning objectives for decision-theoretic planning tasks

Sven Koenig; Yaxin Liu

This article studies decision-theoretic planning or reinforcement learning in the presence of traps such as steep slopes for outdoor robots or staircases for indoor robots. In this case, achieving the goal from the start is often the primary objective, while minimizing the travel time is only of secondary importance. This article studies how this planning objective interacts with possible representations of the planning tasks, namely whether to use a discount factor that is one or smaller than one and whether to use the action-penalty or the goal-reward representation. It is shown that the action-penalty representation without discounting guarantees that the plan that maximizes the expected reward also achieves the goal from the start (provided that this is possible), but neither the action-penalty representation with discounting nor the goal-reward representation with discounting has this property. The article then shows exactly when this trapping phenomenon occurs, using a novel interpretation of discounting, namely that it models agents that use convex exponential utility functions and thus are optimistic in the face of uncertainty. Finally, it is shown how the selective state-deletion method can be used in conjunction with standard decision-theoretic planners to eliminate the trapping phenomenon.
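
A tiny worked example (hypothetical numbers, not taken from the article) of the trapping phenomenon described here: with the goal-reward representation and discounting, a plan that risks an absorbing trap can look better than a safe plan that always reaches the goal, whereas the undiscounted action-penalty representation makes the trap infinitely costly and so prefers the safe plan.

    # Hypothetical example: start S, goal G, absorbing trap T.
    # Safe plan:  10 deterministic steps from S to G.
    # Risky plan: a single step that reaches G with probability 0.9
    #             and falls into the trap T with probability 0.1.

    gamma = 0.8   # discount factor (illustrative choice)

    # Goal-reward representation with discounting: reward 1 on reaching the goal.
    v_safe_goal_reward = gamma ** 10               # ~0.107
    v_risky_goal_reward = 0.9 * gamma ** 1         # 0.72 -> the risky plan is preferred,
                                                   # although it misses the goal 10% of the time

    # Action-penalty representation without discounting: every action costs 1,
    # and an agent caught in the trap keeps acting (and paying) forever.
    v_safe_action_penalty = -10.0
    v_risky_action_penalty = float("-inf")         # unbounded expected cost -> the safe plan wins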


Lecture Notes in Computer Science | 1999

Sensor Planning with Non-linear Utility Functions

Sven Koenig; Yaxin Liu

Sensor planning is concerned with when to sense and what to sense. We study sensor planning in the context of planning objectives that trade off between minimizing the worst-case, expected, and best-case plan-execution costs. Sensor planning with these planning objectives is interesting because they are realistic and the frequency of sensing changes with the planning objective: more pessimistic decision makers tend to sense more frequently. We perform sensor planning by combining one of our techniques for planning with non-linear utility functions with an existing sensor-planning method. The resulting sensor-planning method is not only as easy to implement as the sensor-planning method that it extends but also (almost) as efficient. We demonstrate empirically how sensor plans change as the planning objective changes, using a common testbed for sensor planning.
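
One standard way to realize a worst-case/expected/best-case trade-off of this kind is an exponential (entropic) utility over plan-execution cost; the sketch below uses that formulation for illustration and may differ from the exact utility functions used in the paper. A large positive risk parameter approaches the worst-case cost, a large negative one the best-case cost, and the limit at zero recovers the expected cost.

    import math

    def certainty_equivalent(costs, probs, lam):
        """Exponential-utility (entropic) certainty equivalent of a discrete cost
        distribution: lam > 0 is pessimistic, lam < 0 optimistic, and lam -> 0
        recovers the expected cost."""
        if abs(lam) < 1e-9:
            return sum(p * c for p, c in zip(probs, costs))
        return math.log(sum(p * math.exp(lam * c) for p, c in zip(probs, costs))) / lam

    costs, probs = [2.0, 10.0], [0.5, 0.5]            # hypothetical plan-execution costs
    print(certainty_equivalent(costs, probs, 5.0))    # ~9.86, close to the worst case (10)
    print(certainty_equivalent(costs, probs, 0.0))    # 6.0, the expected cost
    print(certainty_equivalent(costs, probs, -5.0))   # ~2.14, close to the best case (2)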


National Conference on Artificial Intelligence | 2002

Speeding up the calculation of heuristics for heuristic search-based planning

Yaxin Liu; Sven Koenig; David Furcy


Adaptive Agents and Multi-Agent Systems | 2003

Risk-averse auction agents

Yaxin Liu; Richard Goodwin; Sven Koenig


Archive | 2005

Decision-theoretic planning under risk-sensitive planning objectives

Yaxin Liu; Sven Koenig


International Conference on Artificial Intelligence Planning Systems | 2000

Representations of decision-theoretic planning tasks

Sven Koenig; Yaxin Liu

Collaboration


Dive into Yaxin Liu's collaborations.

Top Co-Authors

Sven Koenig
University of Southern California

David Furcy
University of Wisconsin–Oshkosh

Maxim Likhachev
Carnegie Mellon University

Boleslaw K. Szymanski
Rensselaer Polytechnic Institute