Publication


Featured research published by Barry Richards.


international symposium on methodologies for intelligent systems | 1994

parcPlan: A Planning Architecture with Parallel Actions, Resources and Constraints

Jonathan M. Lever; Barry Richards

We describe the generic planning architecture parcPLAN, which uses constraint-solving to perform both temporal and non-temporal reasoning. The architecture allows considerable temporal sophistication in the specification of actions, integrity constraints and planning problems, and produces plans with a high degree of concurrency in action execution. A feature of parcPLAN is the capability to represent and minimise costs associated with plans. parcPLAN has been implemented in the Constraint Logic Programming language ECLiPSe, and performs well on large-scale planning and resource allocation problems.
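The parallel-actions-with-resources aspect can be illustrated with a small, self-contained check (an illustrative sketch only, not parcPLAN's constraint solver; the action data below is invented): a set of actions with time windows is feasible only if their overlapping resource demands never exceed capacity.

```python
def max_concurrent_usage(actions, capacity):
    """Sweep-line check that temporally overlapping actions never
    exceed a shared resource capacity (e.g. a pool of robot arms).
    `actions` is a list of (start, end, demand) triples; intervals
    are half-open [start, end).  Returns (feasible, peak_usage)."""
    events = []
    for start, end, demand in actions:
        events.append((start, demand))   # acquire resource at start
        events.append((end, -demand))    # release resource at end
    events.sort(key=lambda e: (e[0], e[1]))  # releases before acquisitions at ties
    usage = peak = 0
    for _, delta in events:
        usage += delta
        peak = max(peak, usage)
    return peak <= capacity, peak

# Three one-arm actions, two arms available:
actions = [(0, 4, 1), (2, 6, 1), (5, 8, 1)]  # (start, end, arms needed)
ok, peak = max_concurrent_usage(actions, capacity=2)
```

A planner in this style must *choose* the start times subject to such a constraint; here they are given, so only the checking side of the resource constraint is shown.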


Annals of Operations Research | 1998

Towards a closer integration of finite domain propagation and simplex-based algorithms

Mozafar T. Hajian; Hani El-Sakkout; Mark Wallace; Jonathan M. Lever; Barry Richards

This paper describes our experience in implementing an industrial application using the finite domain solver of the ECLiPSe constraint logic programming (CLP) system, in conjunction with the mathematical programming (MP) system, FortMP. In this technique, the ECLiPSe system generates a feasible solution that is adapted to construct a starting point (basic solution) for the MP solver. The basic solution is then used as an input to the FortMP system to warm-start the simplex (SX) algorithm, hastening the solution of the linear programming relaxation (LPR). SX proceeds as normal to find the optimal integer solution. Preliminary results indicate that the integration of the two environments is suitable for this application in particular, and may generally yield significant benefits. We describe adaptations required in the hybrid method, and report encouraging experimental results for this problem.
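The two-phase idea, a constraint solver finds a feasible point which then warm-starts an optimizer, can be caricatured in a few lines. This is a toy sketch under invented names, not the ECLiPSe/FortMP interface: phase 1 stands in for the finite-domain solver, phase 2 for the warm-started optimization.

```python
from itertools import product

def fd_feasible(domains, constraints):
    """Phase 1: finite-domain search for any feasible assignment
    (stand-in for the CLP solver's role)."""
    for point in product(*domains):
        if all(c(point) for c in constraints):
            return point
    return None

def improve(point, domains, constraints, objective):
    """Phase 2: greedy coordinate improvement from the warm-start
    point (stand-in for the optimizer that starts from the basic
    solution built out of the CLP answer)."""
    best = point
    improved = True
    while improved:
        improved = False
        for i, dom in enumerate(domains):
            for v in dom:
                cand = best[:i] + (v,) + best[i + 1:]
                if all(c(cand) for c in constraints) and objective(cand) > objective(best):
                    best, improved = cand, True
    return best

# maximise 2x + y subject to x + y <= 5, with x, y in 0..4
domains = [range(5), range(5)]
cons = [lambda p: p[0] + p[1] <= 5]
start = fd_feasible(domains, cons)
opt = improve(start, domains, cons, lambda p: 2 * p[0] + p[1])
```

The point of the hybrid is precisely that phase 2 does not start from scratch: it inherits a feasible basis from phase 1.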


Networks | 2004

A hybrid multicommodity routing algorithm for traffic engineering

Wided Ouaja; Barry Richards

Traffic engineering seeks to route traffic demands in data networks to guarantee certain quality of service (QoS) parameters while efficiently utilizing network resources. MPLS, for example, provides the essential capabilities to achieve this with explicit routing. Finding paths for all the demands which meet QoS requirements is a nontrivial task. Indeed, guaranteeing just bandwidth is known to be NP-hard. In this paper, we propose a new complete (exact) algorithm for solving this problem, which is a hybrid that tightly integrates Lagrangian optimization and Constraint Programming search. We evaluate its performance on a set of benchmark tests, based on a large well-provisioned commercial backbone. The tests involve demand sets of varying size, mostly between 100 and 600 demands. We compare the results with those achieved by several other well-known algorithms, some complete and some heuristic. This reveals that the hybrid algorithm typically yields the most informative results in the most effective way. It resolves most of the test cases either by finding a solution or by proving infeasibility, each taking only a few seconds. Moreover, the solutions found for solvable problems are provably near-optimal. The results show, perhaps surprisingly, that the routing task can be difficult even in a very well-provisioned network.
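The Lagrangian side of such a hybrid can be sketched as a subgradient loop: price overloaded edges, reroute every demand on a cheapest path under those prices, repeat. This is a toy on an invented four-node network, not the authors' algorithm (which also integrates Constraint Programming search and can prove infeasibility):

```python
import heapq

def shortest_path(nodes, adj, src, dst, price):
    """Dijkstra under Lagrangian edge costs: base cost + price[edge]."""
    dist = {n: float("inf") for n in nodes}
    dist[src], prev, heap = 0.0, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, cost in adj[u]:
            nd = d + cost + price[(u, v)]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, n = [], dst
    while n != src:                       # walk predecessors back to src
        path.append((prev[n], n))
        n = prev[n]
    return list(reversed(path))

def lagrangian_route(nodes, adj, capacity, demands, bw, steps=50, step_size=0.5):
    """Subgradient loop: reroute all demands at current prices, then
    raise prices on overloaded edges and decay them on slack ones."""
    price = {e: 0.0 for e in capacity}
    for _ in range(steps):
        routes = {d: shortest_path(nodes, adj, s, t, price)
                  for d, (s, t) in demands.items()}
        load = {e: 0.0 for e in capacity}
        for d, path in routes.items():
            for e in path:
                load[e] += bw[d]
        if all(load[e] <= capacity[e] for e in capacity):
            return routes                 # feasible routing found
        for e in capacity:
            price[e] = max(0.0, price[e] + step_size * (load[e] - capacity[e]))
    return None

nodes = ["A", "B", "C", "D"]
adj = {"A": [("B", 1.0), ("C", 1.0)], "B": [("D", 1.0), ("C", 1.0)],
       "C": [("D", 1.0)], "D": []}
capacity = {("A", "B"): 1.0, ("A", "C"): 1.0, ("B", "C"): 1.0,
            ("B", "D"): 1.0, ("C", "D"): 1.0}
demands = {"d1": ("A", "D"), "d2": ("B", "D")}
bw = {"d1": 1.0, "d2": 1.0}
routes = lagrangian_route(nodes, adj, capacity, demands, bw)
```

In this tiny instance both demands initially compete for the edge (B, D); after one price update, d1 is pushed onto the path through C and the routing becomes feasible.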


Journal of Automated Reasoning | 2000

Nonsystematic Search and No-Good Learning

E. Thomas Richards; Barry Richards

Nonsystematic search algorithms seem, in general, to be well suited to large-scale problems with many solutions. However, they tend to perform badly for problems with few solutions, and they cannot be used for insoluble problems, since they are incomplete. Here we present a new algorithm, learn-SAT, that, although based on nonsystematic search, is complete. Completeness is realized through a process of no-good learning, learning-by-merging. This requires exponential space in the worst case. We show, nevertheless, that learn-SAT performs very well on certain SAT problems that are tightly constrained or insoluble. Indeed, its performance generally approximates the best SAT algorithms and does much better at lower clause densities. Learn-SAT also maintains much of the efficient performance of nonsystematic search for large-scale problems with many solutions, at least relative to backtrack search algorithms. These results indicate that the burden on memory, imposed by no-good learning, is not generally a problem for learn-SAT. This is perhaps surprising in view of previous work. What is even more surprising is the scalability of learn-SAT. For some types of problem it scales very much better than the nearest competitive algorithm. There are other types, however, for which this is not the case. The performance profile of learn-SAT emerges from an experimental methodology related to the one outlined by Mammen and Hogg in 1997.
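The completeness-via-no-goods idea can be shown in miniature. This sketch is not learn-SAT itself (real learning-by-merging stores *general* no-goods, not whole assignments), but it shows the mechanism: a flip-based search records every failed assignment, so it either finds a model or, by exhausting the space, proves insolubility.

```python
import random

def sat(clauses, n_vars, max_flips=100, rng=None):
    """Flip-based (nonsystematic) SAT search made complete by no-good
    recording.  Clauses use DIMACS-style literals: 3 means x3, -3
    means not-x3.  Returns a satisfying assignment, or None once
    every assignment has been ruled out (a proof of insolubility)."""
    rng = rng or random.Random(0)
    nogoods = set()
    while len(nogoods) < 2 ** n_vars:        # some assignment is still open
        assign = tuple(rng.choice((False, True)) for _ in range(n_vars))
        for _ in range(max_flips):
            if assign in nogoods:
                break                        # already ruled out: restart
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit) - 1] for lit in c)]
            if not unsat:
                return assign
            nogoods.add(assign)              # learn: never revisit this label
            lit = rng.choice(rng.choice(unsat))  # flip a var of a false clause
            i = abs(lit) - 1
            assign = assign[:i] + (not assign[i],) + assign[i + 1:]
    return None

clauses = [[1, 2], [-1, 2], [-2, 3]]
model = sat(clauses, 3)
```

Storing full assignments makes the space cost explicit (here, up to 2^n no-goods); merging them into shorter, more general no-goods is exactly what keeps learn-SAT's memory burden manageable in practice.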


principles and practice of constraint programming | 1994

Nogood Backmarking with Min-Conflict Repair in Constraint Satisfaction and Optimization

Yuejun Jiang; Thomas Richards; Barry Richards

There are generally three approaches to constraint satisfaction and optimization: domain-filtering, tree-search labelling and solution repair. The main attractions of repair-based algorithms over domain-filtering and/or tree-search algorithms seem to be their scalability, reactivity and applicability to optimization problems. Their main drawback appears to be their failure to guarantee optimality. In this paper, we present a repair-based algorithm that is guaranteed to find an optimal solution if one exists. The search space of the algorithm is controlled by no-good backmarking, a learning process of polynomial complexity that records generic patterns of no-good partial labels. These no-goods serve to avoid repeatedly traversing the failed paths of a search graph and to force the search process to jump out of a local optimum. Unlike some similar repair-based methods, which usually work on complete (but possibly inconsistent) labels, the proposed algorithm works on partial (possibly inconsistent) labels by repairing those variables that contribute to the violation of constraints, in the spirit of dependency-directed backjumping. In addition, the algorithm will accept a repair if it minimises the conflicts of a label even if it does not eliminate them. To control the space of no-good patterns, we propose to generate the most generic no-good pattern as early as possible. To support dynamic constraint satisfaction, we introduce several strategies for maintaining no-good patterns, trading off space, efficiency and overheads. Finally, through comparisons with other work such as Dynamic Backtracking, weighted GSAT and Breakout, we suggest possible strategies to improve the proposed method.
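The repair step this line of work builds on, min-conflict reassignment of a variable in violation, looks like this on n-queens. This is the plain repair loop only, for illustration; the paper's contribution, no-good backmarking over partial labels, is not reproduced here.

```python
import random

def min_conflicts_queens(n, max_steps=20000, seed=0):
    """Min-conflict repair: start from a complete (possibly
    inconsistent) label and repeatedly move some conflicted queen to
    a row that minimises its conflicts (ties broken at random, which
    permits the sideways moves that help escape plateaus)."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]  # rows[c] = row of queen in column c

    def conflicts(c, r):
        return sum(1 for c2 in range(n) if c2 != c and
                   (rows[c2] == r or abs(rows[c2] - r) == abs(c2 - c)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                          # consistent label found
        c = rng.choice(conflicted)
        scores = [conflicts(c, r) for r in range(n)]
        best = min(scores)
        rows[c] = rng.choice([r for r in range(n) if scores[r] == best])
    return None                                  # repair limit exhausted

solution = min_conflicts_queens(8)
```

Without learning, this loop can cycle through the same failed labels; recording no-good patterns is what rules those revisits out and restores a guarantee of optimality.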


acm symposium on applied computing | 2005

Hybrid Lagrangian relaxation for bandwidth-constrained routing: knapsack decomposition

Wided Ouaja; Barry Richards

To deliver quality of service, internet service providers are seeking effective solutions to optimize their networks. One of the main tasks is to optimally route a set of traffic demands, each along a single path, while satisfying their bandwidth requirements and without exceeding edge capacities. This is an integer multicommodity flow problem, which is known to be NP-hard. To solve this problem efficiently, a new complete and scalable hybrid solver (HLR) integrating Lagrangian relaxation and constraint programming has been proposed. It exploits the shortest path decomposition of the problem and has been shown to yield significant benefits over several other algorithms, such as CPLEX and well-known routing heuristics. In this paper we explore an alternative dualization within the same hybrid. We present a variant of HLR, adapted to the knapsack decomposition of the problem. Although this relaxation seems less natural, experimental results show that it has some advantages. The paper provides an interesting insight into where the benefits may lie, in particular for larger and harder cases where the ratio of total demand to available capacity is higher.
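Under a knapsack decomposition, dualizing the path-choice coupling leaves one independent subproblem per edge: which demands to admit on that edge, within its bandwidth, maximising multiplier-derived profits. A minimal per-edge subproblem follows (the bandwidths and profits are invented stand-ins for the Lagrangian quantities):

```python
def edge_knapsack(capacity, demands):
    """Per-edge subproblem as a 0/1 knapsack, solved by dynamic
    programming over integer bandwidth units.
    demands: list of (bandwidth, profit) pairs competing for one edge.
    Returns (best total profit, indices of admitted demands)."""
    n = len(demands)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i, (bw, profit) in enumerate(demands, 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if bw <= c and dp[i - 1][c - bw] + profit > dp[i][c]:
                dp[i][c] = dp[i - 1][c - bw] + profit
    chosen, c = [], capacity
    for i in range(n, 0, -1):            # recover which demands were admitted
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= demands[i - 1][0]
    return dp[n][capacity], sorted(chosen)

value, chosen = edge_knapsack(10, [(4, 5.0), (3, 4.0), (5, 6.0)])
```

The full method must of course coordinate these per-edge choices with the per-demand single-path requirement; that coupling is what the multipliers and the CP search handle.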


Lecture Notes in Computer Science | 1999

Scaleability in Planning

Vassilis Liatsos; Barry Richards

This paper explores the performance of three planners, viz. parcPLAN, IPP and Blackbox, on a variant of the standard blocks-world problem. The variant problem has a restricted number of table positions, and the number of arms can vary (from 1 upwards). This type of problem is typical of many real world planning problems, where resources form a significant component. The empirical studies reveal that least commitment planning, as implemented in parcPLAN, is far more effective than the strategies in IPP and Blackbox. But the studies also reveal a serious limitation on the scaleability of parcPLAN’s algorithm.


international symposium on methodologies for intelligent systems | 1991

On Interval-based Temporal Planning: An IQ Strategy

Barry Richards; Yuejun Jiang; Ho-Jin Choi

Allen & Koomen's interval planner and Dean & McDermott's time map manager (TMM) offer different approaches to temporal database management in planning. In this paper we present a temporal planning system that integrates ideas from both methods, and at the same time develops several new ideas. In particular, we treat time points and intervals within a common structure, and adopt an alternative method for handling temporal constraints based on constraint logic programming. To provide a proper characterization of actions within an interval environment, we invoke the notion of noninterference conditions to handle action interaction and the qualification problem. To deal with the persistence problem, we adopt a spectrum of methods based on the TMM's stretching and clipping rules. We show that our approach allows a temporally minimum specification for preconditions, which not only improves the clarity of the specification of an action, but perhaps also reduces the computational cost of constraint satisfaction. The formal aspects of our temporal approach are encapsulated in an interval temporal logic called IQ which is functionally more expressive than first-order logic. The specification of our temporal reasoning is modelled in IQ-Prolog — a computation-oriented subsidiary language of IQ.
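The point-and-interval constraint handling can be illustrated with a generic simple-temporal-network consistency check (not the CLP machinery or IQ itself): intervals become pairs of points, each constraint bounds a difference between points, and Floyd-Warshall detects inconsistency as a negative cycle.

```python
def stn_consistent(n, constraints):
    """Simple temporal network over points 0..n-1 with difference
    constraints t[j] - t[i] <= w, given as (i, j, w) triples.
    Floyd-Warshall shortest paths: the network is consistent iff
    no negative cycle appears (no d[i][i] goes below zero)."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        d[i][j] = min(d[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# points: 0 = origin, 1 = start(A), 2 = end(A).
# Interval A lasts at least 3 (start - end <= -3), must end within 5
# of the origin (end - origin <= 5), and starts no earlier than the
# origin (origin - start <= 0).
consistent = stn_consistent(3, [(2, 1, -3), (0, 2, 5), (1, 0, 0)])
```

Tightening the deadline to 2 makes the cycle origin → end → start → origin sum to -1, so the same check reports inconsistency.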


international conference on tools with artificial intelligence | 2007

Approaches to the Subnet Generation Problem

Cheuk Fun Bede Leung; Olli Kamarainen; Barry Richards

This paper introduces the subnet generation problem (SGP) which is a new type of network routing problem that is found, for example, in some peer-to-peer applications. We explore two algorithms to solve the SGP by exploiting its special structure. The first algorithm, the tree search algorithm (TS), is an adaptation of an existing algorithm. TS decomposes the SGP into a master problem solved by a systematic tree search, and a subproblem solved by incomplete but efficient heuristics. Our question is what happens if we sacrifice the complete search in the master problem in exchange for better exploration of the search space. In order to answer this, we present a second algorithm, the local search algorithm (LS), which replaces the tree search with non-systematic search in the master problem. The two algorithms are compared in an experimental study.


principles and practice of constraint programming | 2005

Subnet generation problem: a new network routing problem

Cheuk Fun Bede Leung; Barry Richards; Olli Kamarainen

We introduce a new type of network routing problem, the subnet generation problem (SGP), which is a special case of the traffic placement problem (TPP). In the TPP, we are given (1) a network consisting of routers and links, and (2) a set of point-to-point traffic demands to be placed, which includes finding feasible routes for them; the objective is to minimize the sum of the costs of the unplaced demands subject to the Quality-of-Service routing constraints. The SGP is a TPP with an extra set of constraints that restricts the combinations of demands to be placed. In our SGP, each router has a fixed amount of information-gain that is to be transmitted to every other router in the subnet. A subnet is defined as any subset of the routers in the network. This means that every router in the subnet will have exactly the same total information-gain. The objective is to find a subnet that maximizes the total information-gain: there must be a demand with a valid path between every pair of routers in the subnet. The reason for creating demands among the routers is that every node in a selected group of routers is required to act as both a client and a server. The figure below shows an example of a subnet and a path solution. I would like to thank Quanshi Xia and Christophe Guettier for their participation.
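The combinatorial core of the SGP objective can be restated as a toy brute force: pick the subset of routers with maximum total information-gain such that every pair in the subset is mutually routable. This sketch is illustrative only; the real problem must also find bandwidth-feasible paths for all the pairwise demands simultaneously, and the gains and pairs below are invented.

```python
from itertools import combinations

def best_subnet(gain, reachable):
    """gain: dict router -> information-gain.
    reachable: set of frozenset({u, v}) for mutually routable pairs.
    Enumerate all subsets (exponential, so tiny inputs only) and keep
    the pairwise-routable one with the largest total gain."""
    routers = list(gain)
    best, best_gain = [], 0
    for r in range(1, len(routers) + 1):
        for subset in combinations(routers, r):
            if all(frozenset((u, v)) in reachable
                   for u, v in combinations(subset, 2)):
                total = sum(gain[x] for x in subset)
                if total > best_gain:
                    best, best_gain = list(subset), total
    return best, best_gain

gain = {"a": 3, "b": 2, "c": 4, "d": 1}
reachable = {frozenset(p) for p in [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]}
subnet, total = best_subnet(gain, reachable)
```

The tree-search and local-search algorithms studied in the companion paper exist precisely because this enumeration does not scale.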

Collaboration


Dive into Barry Richards's collaborations.

Top Co-Authors

Yuejun Jiang

Imperial College London

Ho-Jin Choi

Imperial College London

Wided Ouaja

Imperial College London