John K. Slaney
Australian National University
Publications
Featured research published by John K. Slaney.
Artificial Intelligence | 2001
John K. Slaney; Sylvie Thiébaux
Contemporary AI shows a healthy trend away from artificial problems towards real-world applications. Less healthy, however, is the fashionable disparagement of “toy” domains: when properly approached, these domains can at the very least support meaningful systematic experiments, and allow features relevant to many kinds of reasoning to be abstracted and studied. A major reason why they have fallen into disrepute is that superficial understanding of them has resulted in poor experimental methodology and consequent failure to extract useful information. This paper presents a sustained investigation of one such toy: the (in)famous Blocks World planning problem, and provides the level of understanding required for its effective use as a benchmark. Our results include methods for generating random problems for systematic experimentation, the best domain-specific planning algorithms against which AI planners can be compared, and observations establishing the average plan quality of near-optimal methods. We also study the distribution of hard/easy instances, and identify the structure that AI planners must be able to exploit in order to approach Blocks World successfully.
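One of the paper's contributions is a generator of random Blocks World instances. As a rough illustration of what such a generator does, here is a minimal Python sketch that builds a random state by placing blocks one at a time; unlike the counting-based sampler the paper actually describes, this naive scheme does not draw states uniformly, and the function name is our own invention.

```python
import random

def random_blocks_state(n):
    """Place n blocks one at a time, each either starting a new tower
    on the table or landing on top of an existing tower.

    Illustration only: unlike the counting-based sampler described in
    the paper, this naive scheme is NOT uniform over states."""
    towers = []                    # each tower is a list, bottom block first
    for block in range(n):
        choice = random.randint(0, len(towers))
        if choice == 0:
            towers.append([block])              # new tower on the table
        else:
            towers[choice - 1].append(block)    # stack on an existing tower
    return towers

print(random_blocks_state(6))      # one random configuration of 6 blocks
```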
international conference on logic programming | 2005
Peter J. Stuckey; Maria J. García de la Banda; Michael J. Maher; Kim Marriott; John K. Slaney; Zoltan Somogyi; Mark Wallace; Toby Walsh
The G12 project, recently started by National ICT Australia (NICTA), is an ambitious project to develop a software platform for solving large scale industrial combinatorial optimisation problems. The core design involves three languages: Zinc, Cadmium and Mercury (Group 12 of the periodic table). Zinc is a declarative modelling language for expressing problems, independent of any solving methodology. Cadmium is a mapping language for mapping Zinc models to underlying solvers and/or search strategies, including hybrid approaches. Finally, the existing Mercury language will be extended for building extensible and hybridizable solvers. The same Zinc model, used with different Cadmium mappings, will allow us to experiment with different complete, local, or hybrid search approaches for the same problem. This talk will explain the G12 global design, the final G12 objectives, and our progress so far.
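The central idea, a solver-independent model that different mappings can hand to different solving strategies, can be illustrated outside Zinc itself. The Python sketch below is our own analogue, not actual Zinc or Cadmium syntax: it separates a toy constraint model from one possible complete-search backend.

```python
from itertools import product

# A toy "model": variables with finite domains plus constraints, with no
# commitment to how it is solved (a Python stand-in for a Zinc model).
model = {
    "vars": {"x": range(1, 10), "y": range(1, 10)},
    "constraints": [lambda a: a["x"] + a["y"] == 10,
                    lambda a: a["x"] < a["y"]],
}

def solve_enumerate(model):
    """One 'mapping' of the model onto a solving strategy: complete search
    by brute-force enumeration. A local or hybrid backend could consume
    the very same model dictionary."""
    names = list(model["vars"])
    for values in product(*model["vars"].values()):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in model["constraints"]):
            yield assignment

print(next(solve_enumerate(model)))  # {'x': 1, 'y': 9}
```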
Archive | 2000
Riichiro Mizoguchi; John K. Slaney
Computer systems in which autonomous software agents negotiate with one another in order to come to mutually acceptable agreements are likely to become pervasive in the next generation of networked systems. In such systems, the agents will be required to participate in a range of negotiation scenarios and exhibit a range of negotiation behaviours (depending on the context). To this end, this talk explores the issues involved in designing and implementing a number of automated negotiators for real-world electronic commerce applications.
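For a flavour of the kind of negotiation behaviour involved, here is a minimal Python sketch of an alternating-offers exchange with a linear time-dependent concession tactic; the tactic, the price parameters, and the function names are illustrative inventions, not anything specified in the talk.

```python
def concession_offer(t, deadline, reserve, aspiration):
    """Time-dependent concession: start at the aspiration price and
    concede linearly towards the reservation price as the deadline nears."""
    frac = min(t / deadline, 1.0)
    return aspiration + frac * (reserve - aspiration)

def negotiate(deadline=10):
    """Alternating-offers haggling between a buyer and a seller."""
    for t in range(deadline + 1):
        seller = concession_offer(t, deadline, reserve=60, aspiration=100)
        buyer = concession_offer(t, deadline, reserve=80, aspiration=40)
        if buyer >= seller:            # offers have crossed: strike a deal
            return (buyer + seller) / 2, t
    return None                        # no agreement by the deadline

print(negotiate())                     # deal price and round, e.g. (70.0, 8)
```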
Journal of Artificial Intelligence Research | 2006
Sylvie Thiébaux; Charles Gretton; John K. Slaney; David Price; Froduald Kabanza
A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic planning, where many desirable behaviours are more naturally expressed as properties of execution sequences rather than as properties of states, NMRDPs form a more natural model than the commonly adopted fully Markovian decision process (MDP) model. While the more tractable solution methods developed for MDPs do not directly apply in the presence of non-Markovian rewards, a number of solution methods for NMRDPs have been proposed in the literature. These all exploit a compact specification of the non-Markovian reward function in temporal logic, to automatically translate the NMRDP into an equivalent MDP which is solved using efficient MDP solution methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process Planner), a software platform for the development and experimentation of methods for decision-theoretic planning with non-Markovian rewards. The current version of NMRDPP implements, under a single interface, a family of methods based on existing as well as new approaches which we describe in detail. These include dynamic programming, heuristic search, and structured methods. Using NMRDPP, we compare the methods and identify certain problem features that affect their performance. NMRDPP's treatment of non-Markovian rewards is inspired by the treatment of domain-specific search control knowledge in the TLPlan planner, which it incorporates as a special case. In the First International Probabilistic Planning Competition, NMRDPP was able to compete and perform well in both the domain-independent and hand-coded tracks, using search control knowledge in the latter.
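The translation at the heart of these methods can be seen in miniature: compile the history-dependence of the reward into extra automaton state, then solve the resulting product, which is an ordinary MDP, with any standard method such as value iteration. The Python sketch below uses a two-state domain and a one-bit reward automaton invented for illustration; it is not the NMRDPP code or its temporal-logic input language.

```python
GAMMA = 0.9
STATES = ["a", "b"]
ACTIONS = ["stay", "move"]

def step(s, act):
    """Deterministic toy dynamics: 'move' toggles between the two states."""
    return s if act == "stay" else ("b" if s == "a" else "a")

# Non-Markovian reward spec: reward 1 for being in "b", but only on
# histories that have already visited "a". One bit of automaton memory q
# records whether "a" has been seen, making the reward Markovian again.
def next_q(q, s):
    return q or (s == "a")

def reward(q, s):
    return 1.0 if q and s == "b" else 0.0

def backup(s, q, V):
    """One Bellman backup in the product space (s, q) -- an ordinary MDP."""
    best = float("-inf")
    for act in ACTIONS:
        s2 = step(s, act)
        q2 = next_q(q, s2)
        best = max(best, reward(q2, s2) + GAMMA * V[(s2, q2)])
    return best

V = {(s, q): 0.0 for s in STATES for q in (False, True)}
for _ in range(200):                     # value iteration to convergence
    V = {(s, q): backup(s, q, V) for (s, q) in V}
print(round(V[("a", False)], 3))         # ~9.0 for this toy spec
```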
conference on automated deduction | 1992
Ewing L. Lusk; William McCune; John K. Slaney
We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.
conference on automated deduction | 1990
John K. Slaney; Ewing L. Lusk
In this paper we present a parallel algorithm for computing the closure of a set under an operation. This particular type of computation appears in a variety of disguises, and has been used in automated theorem proving, abstract algebra, and formal logic. The algorithm we give here is particularly suited for shared-memory parallel computers, where it makes possible economies of space. Implementations of the algorithm in two application contexts are described and experimental results given.
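The core of the computation is a standard worklist loop, which is what the paper parallelises. Here is a minimal sequential Python sketch, our own illustration using a binary operation for concreteness:

```python
def closure(seed, op):
    """Closure of `seed` under a binary operation `op`, via the standard
    worklist loop: combine each newly found element with everything seen
    so far, and enqueue genuinely new results. (Sequential sketch; the
    paper's contribution is a shared-memory parallel version of this
    loop, with workers drawing from a shared worklist.)"""
    found = set(seed)
    worklist = list(seed)
    while worklist:
        x = worklist.pop()
        for y in list(found):                  # snapshot: found may grow
            for z in (op(x, y), op(y, x)):
                if z not in found:
                    found.add(z)
                    worklist.append(z)
    return found

# Example: closure of {2} under multiplication modulo 7
print(sorted(closure({2}, lambda a, b: (a * b) % 7)))  # [1, 2, 4]
```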
conference on automated deduction | 1994
John K. Slaney; Ewing L. Lusk; William McCune
The theorem prover SCOTT, early work on which was reported in [3], is the result of tying together the existing prover OTTER [1] and the existing model generator FINDER [4] to make a new system of significantly greater power than either of its parents. The functionality of SCOTT is broadly similar to that of OTTER, but its behaviour is sufficiently different that we regard it as a separate system.
Inconsistency Tolerance | 2004
John K. Slaney
This is an account of the approach to paraconsistency associated with relevant logic. The logic fde of first-degree entailments is shown to arise naturally out of the deeper concerns of relevant logic. The relationship between relevant logic and resolution, and especially the disjunctive syllogism, is then examined. The relevant refusal to validate these inferences is defended, and finally it is suggested that more needs to be done towards a satisfactory theory of when they may nonetheless safely be used.
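The failure of the disjunctive syllogism in fde can be checked mechanically. The Python sketch below encodes the usual four-valued semantics (truth values as subsets of {True, False}) and searches for countermodels to A, ¬A ∨ B ⊢ B; the encoding is standard, though the helper names are ours.

```python
# Truth values in fde are subsets of {True, False}: "neither",
# "just true", "just false", and "both".
VALUES = [frozenset(), frozenset({True}), frozenset({False}),
          frozenset({True, False})]

def neg(v):
    return frozenset({not x for x in v})

def disj(v, w):
    out = set()
    if True in v or True in w:
        out.add(True)
    if False in v and False in w:
        out.add(False)
    return frozenset(out)

def designated(v):          # valid inference preserves "at least true"
    return True in v

# Search for countermodels to disjunctive syllogism: A, ~A v B |= B
for A in VALUES:
    for B in VALUES:
        if designated(A) and designated(disj(neg(A), B)) and not designated(B):
            print("countermodel: A =", set(A), " B =", set(B))
```

The search prints the countermodels in which A takes the value "both": then ¬A ∨ B is designated regardless of B, so both premises hold while B fails.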
international joint conference on automated reasoning | 2001
Kahlil Hodgson; John K. Slaney
This paper reports recent experimental work in the development and refinement of the first-order theorem prover SCOTT-5. It is descended from the SCOTT (Semantically Constrained OTTER) prover (see Proc. IJCAI 1993, pp. 109-114) and uses the same combination of a saturation-based theorem prover and a finite domain constraint solver, but the architecture of SCOTT-5 is radically different from that of its ancestor. Here we briefly outline semantic guidance as it occurs in SCOTT-5, and give experimental evidence of an improvement in efficiency that we attribute to the guidance strategy.
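The guidance strategy can be caricatured in a few lines: keep a candidate interpretation and steer clause selection towards clauses it falsifies. The propositional Python sketch below is only such a caricature; SCOTT-5 maintains models found by a finite-domain constraint solver over first-order clauses, not fixed propositional valuations, and all names here are invented.

```python
def eval_clause(clause, interp):
    """A ground clause as a set of signed atoms, e.g. {('p', True)};
    it is satisfied if some literal agrees with the interpretation."""
    return any(interp[atom] == sign for atom, sign in clause)

def pick_given_clause(unprocessed, interp):
    """Semantic selection: prefer clauses the candidate model falsifies
    (they carry information the model lacks), breaking ties by length."""
    falsified = [c for c in unprocessed if not eval_clause(c, interp)]
    pool = falsified or unprocessed
    return min(pool, key=len)

clauses = [frozenset({("p", True), ("q", True)}),
           frozenset({("q", False)}),
           frozenset({("p", False), ("r", True)})]
interp = {"p": True, "q": True, "r": False}   # candidate interpretation
print(pick_given_clause(clauses, interp))     # frozenset({('q', False)})
```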
european conference on artificial intelligence | 2014
John K. Slaney
The duality between conflicts and diagnoses in the field of diagnosis, or between plans and landmarks in the field of planning, or between unsatisfiable cores and minimal co-satisfiable sets in SAT or CSP solving, has been known for many years. Recent work in these communities (Davies and Bacchus, CP 2011; Bonet and Helmert, ECAI 2010; Haslum et al., ICAPS 2012; Stern et al., AAAI 2012) has brought it to the fore as a topic of current interest. The present paper lays out the set-theoretic basis of the concept, and introduces a generic implementation of an algorithm based on it. This algorithm provides a method for converting decision procedures into optimisation ones across a wide range of applications without the need to rewrite the decision procedure implementations. Initial experimental validation shows good performance on a number of benchmark problems from AI planning.
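The general scheme behind this duality is the implicit hitting set loop: repeatedly ask a decision procedure for a conflict that the current candidate fails to hit, and recompute a minimum hitting set of all conflicts seen so far; when no new conflict appears, the candidate is optimal. Below is a minimal Python sketch of that scheme with an invented toy oracle and brute-force hitting sets; the paper's own implementation is generic over real decision procedures, so treat this only as a sketch of the idea.

```python
from itertools import combinations

UNIVERSE = {"a", "b", "c", "d"}
CONFLICTS = [{"a", "b"}, {"b", "c"}, {"a", "c", "d"}]  # hidden from the loop

def oracle(hitting_set):
    """Stand-in decision procedure: return a conflict the candidate
    fails to hit, or None if every conflict is hit."""
    for conflict in CONFLICTS:
        if not (conflict & hitting_set):
            return conflict
    return None

def min_hitting_set(conflicts):
    """Smallest subset of UNIVERSE meeting every conflict (brute force;
    a real implementation would use an optimising solver here)."""
    for k in range(len(UNIVERSE) + 1):
        for cand in combinations(sorted(UNIVERSE), k):
            if all(set(cand) & c for c in conflicts):
                return set(cand)

seen = []
hs = set()
while (conflict := oracle(hs)) is not None:
    seen.append(conflict)            # grow the collection of conflicts
    hs = min_hitting_set(seen)       # re-optimise against all of them
print(hs)                            # {'a', 'b'} (set print order may vary)
```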