Marcel Steinmetz
Saarland University
Publications
Featured research published by Marcel Steinmetz.
Journal of Artificial Intelligence Research | 2016
Marcel Steinmetz; Joerg Hoffmann; Olivier Buffet
Unavoidable dead-ends are common in many probabilistic planning problems, e.g. when actions may fail or when operating under resource constraints. An important objective in such settings is MaxProb, determining the maximal probability with which the goal can be reached, and a policy achieving that probability. Yet algorithms for MaxProb probabilistic planning are severely under-explored, to the extent that there is scant evidence of what the empirical state of the art actually is. We close this gap with a comprehensive empirical analysis. We design and explore a large space of heuristic search algorithms, systematizing known algorithms and contributing several new algorithm variants. We consider MaxProb, as well as weaker objectives that we baptize AtLeastProb (requiring to achieve a given goal probability threshold) and ApproxProb (requiring to compute the maximum goal probability up to a given accuracy). We explore both the general case where there may be 0-reward cycles, and the practically relevant special case of acyclic planning, such as planning with a limited action-cost budget. We design suitable termination criteria, search algorithm variants, dead-end pruning methods using classical planning heuristics, and node selection strategies. We design a benchmark suite comprising more than 1000 instances adapted from the IPPC, resource-constrained planning, and simulated penetration testing. Our evaluation clarifies the state of the art, characterizes the behavior of a wide range of heuristic search algorithms, and demonstrates significant benefits of our new algorithm variants.
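The MaxProb objective the abstract describes can be illustrated with plain value iteration on a toy MDP: V(s) converges to the maximal probability of reaching the goal from s. This is a minimal sketch only; the toy task, state names, and the helper function are illustrative assumptions, not the paper's heuristic search algorithms.

```python
def maxprob_value_iteration(states, actions, transition, goal, eps=1e-9):
    """V(s) converges to the maximal goal-reachability probability."""
    V = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal:
                continue
            best = 0.0
            for a in actions.get(s, []):
                # expected goal probability when applying a in s
                q = sum(p * V[t] for t, p in transition[(s, a)].items())
                best = max(best, q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Toy problem: action "try" succeeds with 0.8, else falls into a dead-end.
states = {"s0", "goal", "dead"}
actions = {"s0": ["try"], "dead": []}
transition = {("s0", "try"): {"goal": 0.8, "dead": 0.2}}
V = maxprob_value_iteration(states, actions, transition, goal={"goal"})
print(V["s0"])  # 0.8: the maximal goal probability from s0
```

The unavoidable dead-end shows up as a state whose value stays 0; AtLeastProb and ApproxProb would simply relax the convergence requirement on V.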
Journal of Artificial Intelligence Research | 2016
Maximilian Fickert; Joerg Hoffmann; Marcel Steinmetz
Recent work has shown how to improve delete relaxation heuristics by computing relaxed plans, i.e., the hFF heuristic, in a compiled planning task ΠC which represents a given set C of fact conjunctions explicitly. While this compilation view of such partial delete relaxation is simple and elegant, its meaning with respect to the original planning task is opaque, and the size of ΠC grows exponentially in |C|. We herein provide a direct characterization, without compilation, making explicit how the approach arises from a combination of the delete-relaxation with critical-path heuristics. Designing equations characterizing a novel view on h+ on the one hand, and a generalized version hC of hm on the other hand, we show that h+(ΠC) can be characterized in terms of a combined hC+ equation. This naturally generalizes the standard delete-relaxation framework: understanding that framework as a relaxation over singleton facts as atomic subgoals, one can refine the relaxation by using the conjunctions C as atomic subgoals instead. Thanks to this explicit view, we identify the precise source of complexity in hFF(ΠC), namely maximization of sets of supported atomic subgoals during relaxed plan extraction, which is easy for singleton-fact subgoals but is NP-complete in the general case. Approximating that problem greedily, we obtain a polynomial-time hCFF version of hFF(ΠC), superseding the ΠC compilation, and superseding the modified ΠceC compilation which achieves the same complexity reduction but at an information loss. Experiments on IPC benchmarks show that these theoretical advantages can translate into empirical ones.
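The critical-path idea underlying hC is easiest to see in the special case where C contains only singleton facts, i.e. h1 (hmax): each subgoal set is broken into its atomic elements, and the hardest element determines the cost. The STRIPS encoding and helper names below are illustrative assumptions; the paper generalizes the atomic subgoals from single facts to the conjunctions in C.

```python
import math

def hmax(facts, actions, init, goal):
    """h^1 / hmax. actions: list of (name, preconditions, adds, cost)."""
    h = {f: (0 if f in init else math.inf) for f in facts}
    changed = True
    while changed:
        changed = False
        for _, pre, add, cost in actions:
            # cost of the hardest atomic precondition, plus the action cost
            pre_cost = max((h[p] for p in pre), default=0)
            if pre_cost == math.inf:
                continue
            for f in add:
                if pre_cost + cost < h[f]:
                    h[f] = pre_cost + cost
                    changed = True
    return max((h[g] for g in goal), default=0)

facts = {"a", "b", "g"}
actions = [("op1", {"a"}, {"b"}, 1), ("op2", {"b"}, {"g"}, 1)]
print(hmax(facts, actions, init={"a"}, goal={"g"}))  # 2
```

A returned value of math.inf corresponds to hC(s) = ∞, i.e. a recognized dead end, which is exactly the property exploited in the learning papers below.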
International Joint Conference on Artificial Intelligence | 2018
Marcel Steinmetz; Joerg Hoffmann
Two strands of research in classical planning are LP heuristics and conjunctions to improve approximations. Combinations of the two have also been explored. Here, we focus on convergence properties, forcing the LP heuristic to equal the perfect heuristic h∗ in the limit. We show that, under reasonable assumptions, partial variable merges are strictly dominated by the compilation ΠC of explicit conjunctions, and that both render the state equation heuristic equal to h∗ for a suitable set C of conjunctions. We show that consistent potential heuristics can be computed from a variant of ΠC, and that such heuristics can represent h∗ for suitable C. As an application of these convergence properties, we consider sound nogood learning in state space search, via refining the set C. We design a suitable refinement method to this end. Experiments on IPC benchmarks show significant performance improvements in several domains.
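The state equation heuristic whose convergence is studied here is an LP lower bound on plan cost: one nonnegative count variable per operator, one net-change constraint per fact. A tiny instance can be written down directly; the toy task and encoding are illustrative assumptions, and the paper's contribution (conjunction sets C forcing the bound to equal h∗) is not reproduced.

```python
from scipy.optimize import linprog

# Operators: op1 consumes a / produces b, op2 consumes b / produces g,
# both of cost 1. Initial state {a}, goal {g}.
cost = [1, 1]  # objective: minimize total cost of the operator counts Y1, Y2
# One >= net-change constraint per fact, rewritten as <= for linprog:
#   a: -Y1      >= -1   (a can be consumed at most once)
#   b:  Y1 - Y2 >=  0   (b must be produced before it is consumed)
#   g:       Y2 >=  1   (g must be produced)
A_ub = [[1, 0], [-1, 1], [0, -1]]
b_ub = [1, 0, -1]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.fun)  # 2.0: here the LP bound coincides with the real plan cost
```

On this toy task the bound is already perfect; the convergence results concern tasks where the plain state equation undercounts and conjunctions must be added to close the gap.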
International Joint Conference on Artificial Intelligence | 2017
Marcel Steinmetz; Jörg Hoffmann
A key technique for proving unsolvability in classical planning is the use of dead-end detectors ∆: effectively testable criteria sufficient for unsolvability, pruning (some) unsolvable states during search. Related to this, a recent proposal is the identification of traps prior to search, compact representations of non-goal state sets T that cannot be escaped. Here, we create new synergy across these ideas. We define a generalized concept of traps, relative to a given dead-end detector ∆, where T can be escaped, but only into dead-end states detected by ∆. We show how to learn compact representations of such T during search, extending the reach of ∆. Our experiments show that this can be quite beneficial. It improves coverage for many unsolvable benchmark planning domains and dead-end detectors ∆, in particular on resource-constrained domains where it outperforms the state of the art.
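The role a dead-end detector ∆ plays during search can be sketched as a pluggable pruning predicate: any state ∆ recognizes is cut off. The learned traps of the paper would extend the reach of this predicate; the graph encoding and function names below are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def forward_search(init, successors, is_goal, delta):
    """BFS that prunes every state the dead-end detector Δ recognizes."""
    frontier, seen = deque([init]), {init}
    while frontier:
        s = frontier.popleft()
        if is_goal(s):
            return s
        for t in successors(s):
            if t in seen or delta(t):  # prune detected dead ends
                continue
            seen.add(t)
            frontier.append(t)
    return None  # frontier exhausted: the task is unsolvable from init

# Toy graph with an inescapable trap state that Δ recognizes.
graph = {"s0": ["trap", "s1"], "s1": ["goal"], "trap": ["trap"]}
result = forward_search("s0", lambda s: graph.get(s, []),
                        lambda s: s == "goal",
                        lambda s: s == "trap")
print(result)  # goal
```

If every path to the goal were pruned by ∆, the search would return None, which is exactly how such detectors contribute to proofs of unsolvability.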
Artificial Intelligence | 2017
Marcel Steinmetz; Jörg Hoffmann
Conflict-directed learning is ubiquitous in constraint satisfaction problems like SAT, but has been elusive for state space search on reachability problems like classical planning. Almost all existing approaches learn nogoods relative to a fixed solution-length bound, in which case planning/reachability reduces to a constraint satisfaction problem. Here we introduce an approach to learning more powerful nogoods, that are sound regardless of solution length, i.e., that identify dead-end states for which no solution exists. The key technique we build on are critical-path heuristics hC, relative to a set C of conjunctions. These recognize a dead-end state s, returning hC(s) = ∞, if s has no solution even when allowing to break up conjunctive subgoals into the elements of C. Our key idea is to learn C during search. Whenever forward search has identified an unrecognized dead-end s, where hC(s) < ∞, we analyze the situation at s, and add new conjunctions into C in a way guaranteeing to obtain hC(s) = ∞. Thus we learn to recognize s, as well as similar dead-ends search may encounter in the future. We furthermore learn clauses ϕ where s′ ⊭ ϕ implies hC(s′) = ∞, to avoid the overhead of computing hC on every search state. Arranging these techniques in a depth-first search, we obtain an algorithm approaching the elegance of nogood learning in constraint satisfaction, learning to refute search subtrees. We run comprehensive experiments on solvable and unsolvable planning benchmarks. In cases where forward search can identify dead-ends, and where hC dead-end detection is effective, our techniques reduce the depth-first search space size by several orders of magnitude, and often result in state-of-the-art performance.
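The learning loop described above can be rendered schematically: a depth-first search that, whenever it fully refutes a subtree, calls a learning hook so the refuted state (and, in the paper, similar states via refined conjunctions C and clauses ϕ) is pruned later. Both hooks are abstracted into callbacks here; everything in this sketch is an illustrative assumption rather than the paper's algorithm.

```python
def dfs_with_learning(s, successors, is_goal, recognized, learn, path=None):
    path = path if path is not None else set()
    if is_goal(s):
        return True
    if s in path or recognized(s):  # cycle, or an already-learned dead end
        return False
    path.add(s)
    for t in successors(s):
        if dfs_with_learning(t, successors, is_goal, recognized, learn, path):
            return True
    path.remove(s)
    learn(s)  # every successor refuted: s is a dead end; refine the recognizer
    return False

# Toy task: the d1/d2 branch is a dead end, the s1 branch reaches the goal.
nogoods = set()
graph = {"s0": ["d1", "s1"], "d1": ["d2"], "d2": [], "s1": ["goal"]}
ok = dfs_with_learning("s0", lambda s: graph.get(s, []),
                       lambda s: s == "goal",
                       recognized=lambda s: s in nogoods,
                       learn=nogoods.add)
print(ok, sorted(nogoods))  # True ['d1', 'd2']
```

In the paper the `learn` step does not memorize single states but refines C until hC(s) = ∞, so the learned nogood generalizes to dead ends never visited before.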
National Conference on Artificial Intelligence | 2016
Marcel Steinmetz; Jörg Hoffmann
arXiv: Cryptography and Security | 2017
Michael Backes; Jörg Hoffmann; Robert Künnemann; Patrick Speicher; Marcel Steinmetz
International Conference on Automated Planning and Scheduling | 2016
Marcel Steinmetz; Joerg Hoffmann; Olivier Buffet
SOCS | 2016
Daniel Gnad; Marcel Steinmetz; Mathäus Jany; Jörg Hoffmann; Ivan Serina; Alfonso Gerevini
National Conference on Artificial Intelligence | 2018
Patrick Speicher; Marcel Steinmetz; Michael Backes; Joerg Hoffmann; Robert Künnemann