Ryan Luna
Rice University
Publications
Featured research published by Ryan Luna.
intelligent robots and systems | 2011
Ryan Luna; Kostas E. Bekris
Multi-robot path planning is abstracted as the problem of computing a set of non-colliding paths on a graph for multiple robots. A naive search of the composite search space, although complete, has exponential complexity and becomes computationally prohibitive for problems with just a few robots. This paper proposes an efficient and complete algorithm for solving a general class of multi-robot path planning problems, specifically those with at most n-2 robots on a connected graph of n vertices, and provides a full proof of completeness. The algorithm employs two primitives: “push”, where a robot moves toward its goal until no progress can be made, and “swap”, which allows two robots to exchange positions without altering the position of any other robot. Additionally, this paper provides a smoothing procedure for improving solution quality. Simulated experiments compare the proposed approach with several other centralized and decoupled planners, showing that the proposed technique improves computation time and solution quality while scaling to problems with hundreds of robots, solving them in under 5 seconds.
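To illustrate the first primitive, here is a minimal Python sketch of a "push" step on an adjacency-dict graph. It is not the authors' implementation; the names (bfs_path, push, positions) are hypothetical. A robot advances along a shortest path until it reaches its goal or the next vertex is occupied, at which point the paper's "swap" primitive would take over.

```python
# Minimal sketch of a "push" primitive; hypothetical names throughout.
from collections import deque

def bfs_path(graph, start, goal):
    """Shortest path from start to goal via breadth-first search."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = parents[v]
            return path[::-1]
        for u in graph[v]:
            if u not in parents:
                parents[u] = v
                queue.append(u)
    return None  # goal unreachable

def push(graph, positions, robot, goal):
    """Advance `robot` along its shortest path toward `goal` until it
    arrives or the next vertex is occupied (where "swap" would kick in)."""
    path = bfs_path(graph, positions[robot], goal)
    if path is None:
        return False
    occupied = set(positions.values())
    for nxt in path[1:]:
        if nxt in occupied:
            return positions[robot] == goal  # blocked before the goal
        occupied.discard(positions[robot])
        positions[robot] = nxt
        occupied.add(nxt)
    return True

# One robot pushing across a path graph a-b-c-d:
g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
pos = {"r1": "a"}
print(push(g, pos, "r1", "d"), pos)  # True {'r1': 'd'}
```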
international conference on robotics and automation | 2013
Ryan Luna; Ioan Alexandru Sucan; Mark Moll; Lydia E. Kavraki
Recent work in sampling-based motion planning has yielded several different approaches for computing good-quality paths in high-degree-of-freedom systems: path-shortcutting methods that attempt to shorten a single solution path by connecting non-consecutive configurations, a path-hybridization technique that combines portions of two or more solutions to form a shorter path, and asymptotically optimal algorithms that converge to the shortest path over time. This paper presents an extensible meta-algorithm that combines a traditional sampling-based planning algorithm with offline path-shortening techniques to form an anytime algorithm whose solution lengths are competitive with the best known methods and optimizers. A series of experiments involving rigid-body motion and complex manipulation, together with a comparison against asymptotically optimal methods, shows the efficacy of the proposed scheme, particularly in high-dimensional spaces.
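The anytime loop described here can be pictured as repeated planning plus shortcutting within a time budget. The sketch below is a simplified illustration, not OMPL's actual API; plan_once, collision_free, and path_length are hypothetical callables supplied by the caller.

```python
# Simplified "plan, then shorten" anytime loop; hypothetical interface.
import random
import time

def shortcut(path, collision_free, tries=100):
    """Classic shortcutting: try to connect two non-consecutive waypoints
    directly and drop everything in between."""
    for _ in range(tries):
        if len(path) < 3:
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i > 1 and collision_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]
    return path

def anytime_plan(plan_once, collision_free, path_length, budget_s=5.0):
    """Repeatedly plan and shorten until the budget expires, keeping the
    shortest solution found so far (the anytime property)."""
    best, best_len = None, float("inf")
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        path = plan_once()  # one run of the sampling-based planner
        if path is None:
            continue
        path = shortcut(path, collision_free)
        length = path_length(path)
        if length < best_len:
            best, best_len = path, length
    return best
```

The paper additionally hybridizes pairs of solutions; this loop shows only the shortcutting half of the scheme.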
WAFR | 2015
Ryan Luna; Morteza Lahijanian; Mark Moll; Lydia E. Kavraki
This work presents a planning framework that allows a robot with stochastic action uncertainty to achieve a high-level task given in the form of a temporal logic formula. The objective is to quickly compute a feedback control policy that satisfies the task specification with maximum probability. A top-down framework is proposed that abstracts the motion of a continuous stochastic system to a discrete, bounded-parameter Markov decision process (BMDP), and then computes a control policy over the product of the BMDP abstraction and a deterministic finite automaton (DFA) representing the temporal logic specification. Analysis of the framework reveals that as the resolution of the BMDP abstraction becomes finer, the policy obtained converges to the optimal one. Simulations show that high-quality policies satisfying complex temporal logic specifications can be obtained in seconds, orders of magnitude faster than with existing methods.
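The core numerical step in such a framework is interval value iteration over the BMDP: for each state-action pair, an adversarial "nature" picks transition probabilities within the given intervals. The sketch below computes a pessimistic bound on the probability of reaching an accepting set; it omits the product with the DFA, and all names are hypothetical rather than taken from the paper.

```python
# Pessimistic interval value iteration on a BMDP sketch, assuming
# trans[s][a] lists (successor, (lo, hi)) interval transitions.

def worst_case_expectation(intervals, values):
    """Adversarial nature: start every successor at its lower bound, then
    pour the remaining probability mass onto the lowest-valued successors."""
    idx = sorted(range(len(values)), key=lambda i: values[i])
    probs = [lo for lo, _ in intervals]
    slack = 1.0 - sum(probs)
    for i in idx:
        give = min(slack, intervals[i][1] - probs[i])
        probs[i] += give
        slack -= give
    return sum(p * v for p, v in zip(probs, values))

def interval_value_iteration(states, actions, trans, accepting, iters=100):
    """Lower bound on the probability of reaching an accepting state,
    with the maximizing action recorded as the policy."""
    V = {s: (1.0 if s in accepting else 0.0) for s in states}
    policy = {}
    for _ in range(iters):
        for s in states:
            if s in accepting:
                continue
            best, best_a = 0.0, None
            for a in actions(s):
                succs = trans[s][a]
                q = worst_case_expectation([iv for _, iv in succs],
                                           [V[t] for t, _ in succs])
                if q > best:
                    best, best_a = q, a
            V[s], policy[s] = best, best_a
    return V, policy

# Two abstract states: s0 reaches the accepting state with probability
# in [0.6, 0.8] and self-loops otherwise.
trans = {"s0": {"a": [("acc", (0.6, 0.8)), ("s0", (0.2, 0.4))]}}
V, pi = interval_value_iteration({"s0", "acc"}, lambda s: ["a"], trans, {"acc"})
print(round(V["s0"], 3), pi)  # 1.0 {'s0': 'a'}
```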
international conference on robotics and automation | 2014
Ryan Luna; Morteza Lahijanian; Mark Moll; Lydia E. Kavraki
This work presents a framework for fast reconfiguration of local control policies for a stochastic system to satisfy a high-level task specification. The motion of the system is abstracted to a class of uncertain Markov models known as bounded-parameter Markov decision processes (BMDPs). During the abstraction, an efficient sampling-based method for stochastic optimal control is used to construct several policies within a discrete region of the state space so that the system can transition between neighboring regions. A BMDP is then used to find an optimal strategy over the local policies by maximizing a continuous reward function; a new policy can be computed quickly if the reward function changes. The efficacy of the framework is demonstrated using a sequence of online tasks, showing that highly desirable policies can be obtained by reconfiguring existing local policies in just a few seconds.
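The reconfiguration idea can be sketched as a two-level computation: the expensive pieces (regions, local policies, interval transitions) are built once, and only a small discrete solve reruns when the reward changes. The Python below is a simplified illustration under those assumptions; it uses interval midpoints instead of a full BMDP optimization, and every name is hypothetical.

```python
# Sketch of policy reconfiguration over a fixed abstraction; a new task
# only swaps in a new reward function, so re-solving is fast.

def solve_abstraction(regions, policies, trans, reward, gamma=0.95, iters=200):
    """Choose one precomputed local policy per region to maximize
    discounted reward over the abstract model."""
    V = {r: 0.0 for r in regions}
    choice = {}
    for _ in range(iters):
        for r in regions:
            best, best_p = float("-inf"), None
            for p in policies[r]:
                q = reward(r, p) + gamma * sum(
                    0.5 * (lo + hi) * V[t]  # interval midpoint, for brevity
                    for t, (lo, hi) in trans[r][p])
                if q > best:
                    best, best_p = q, p
            V[r], choice[r] = best, best_p
    return choice

# Two regions; the goal region r1 carries all the reward.
regions = ["r0", "r1"]
policies = {"r0": ["stay", "go"], "r1": ["stay"]}
trans = {
    "r0": {"stay": [("r0", (0.9, 1.0))],
           "go": [("r1", (0.7, 0.9)), ("r0", (0.1, 0.3))]},
    "r1": {"stay": [("r1", (1.0, 1.0))]},
}
print(solve_abstraction(regions, policies, trans,
                        lambda r, p: float(r == "r1")))
# {'r0': 'go', 'r1': 'stay'}
```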
international joint conference on artificial intelligence | 2011
Ryan Luna; Kostas E. Bekris
SOCS | 2012
Qandeel Sajid; Ryan Luna; Kostas E. Bekris
annual symposium on combinatorial search | 2011
Ryan Luna; Kostas E. Bekris
annual symposium on combinatorial search | 2013
Athanasios Krontiris; Ryan Luna; Kostas E. Bekris
intelligent robots and systems | 2010
Ryan Luna; Kostas E. Bekris
national conference on artificial intelligence | 2014
Ryan Luna; Morteza Lahijanian; Mark Moll; Lydia E. Kavraki