Mark Jerrum
Queen Mary University of London
Publications
Featured research published by Mark Jerrum.
Theoretical Computer Science | 1986
Mark Jerrum; Leslie G. Valiant; Vijay V. Vazirani
The class of problems involving the random generation of combinatorial structures from a uniform distribution is considered. Uniform generation problems are, in computational difficulty, intermediate between classical existence and counting problems. It is shown that exactly uniform generation of ‘efficiently verifiable’ combinatorial structures is reducible to approximate counting (and hence, is within the third level of the polynomial hierarchy). Natural combinatorial problems are presented which exhibit complexity gaps between their existence and generation, and between their generation and counting versions. It is further shown that for self-reducible problems, almost uniform generation and randomized approximate counting are inter-reducible, and hence, of similar complexity.
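As a concrete illustration of the sampling-to-counting direction for a self-reducible problem, here is a minimal Python sketch that estimates the number of independent sets of a small graph by telescoping over its edges; a brute-force enumerator stands in for the almost uniform generator the reduction assumes, and the function names and toy instance are illustrative, not taken from the paper.

```python
import itertools
import random

def independent_sets(graph_edges, n):
    """Brute-force list of all independent sets of a graph on vertices 0..n-1.
    Stands in for the 'almost uniform generator' assumed by the reduction."""
    sets = []
    for bits in itertools.product([0, 1], repeat=n):
        S = {v for v in range(n) if bits[v]}
        if all(not (u in S and v in S) for u, v in graph_edges):
            sets.append(frozenset(S))
    return sets

def estimate_count(edges, n, samples=2000, seed=0):
    """Estimate the number of independent sets of G via the telescoping product
    Z(G) = 2^n * prod_i Z(G_i)/Z(G_{i-1}), adding one edge at a time.
    Each ratio is the probability that a uniform independent set of G_{i-1}
    remains independent after edge i is added; it is estimated by sampling."""
    rng = random.Random(seed)
    estimate = 2.0 ** n                      # Z of the edgeless graph
    current_edges = []
    for (u, v) in edges:
        population = independent_sets(current_edges, n)  # exact sampler as stand-in
        hits = sum(1 for _ in range(samples)
                   if not {u, v} <= rng.choice(population))
        estimate *= hits / samples
        current_edges.append((u, v))
    return estimate

# Toy check: a 4-cycle has exactly 7 independent sets.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(estimate_count(edges, 4), "vs exact", len(independent_sets(edges, 4)))
```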
SIAM Journal on Computing | 1989
Mark Jerrum; Alistair Sinclair
A randomised approximation scheme for the permanent of a 0–1 matrix is presented. The task of estimating a permanent is reduced to that of almost uniformly generating perfect matchings in a graph; the latter is accomplished by simulating a Markov chain whose states are the matchings in the graph. For a wide class of 0–1 matrices the approximation scheme is fully-polynomial, i.e., runs in time polynomial in the size of the matrix and a parameter that controls the accuracy of the output. This class includes all dense matrices (those that contain sufficiently many 1’s) and almost all sparse matrices in some reasonable probabilistic model for 0–1 matrices of given density. For the approach sketched above to be computationally efficient, the Markov chain must be rapidly mixing: informally, it must converge in a short time to its stationary distribution. A major portion of the paper is devoted to demonstrating that the matchings chain is rapidly mixing, apparently the first such result for a Markov chain with genuinely c...
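A minimal sketch of the kind of Markov chain involved: states are matchings of a graph, and a step either adds, deletes, or slides an edge. The transitions are symmetric, so the uniform distribution over all matchings is stationary. The paper itself works with perfect and near-perfect matchings to estimate the permanent; the code below (illustrative names, toy graph) only shows the basic moves.

```python
import random

def matchings_chain_step(edges, matching, rng):
    """One step of an add/delete/slide chain on the matchings of a graph.
    `matching` is a frozenset of edges; transitions are symmetric, so the
    uniform distribution over all matchings is stationary."""
    e = rng.choice(edges)
    u, v = e
    matched = {x for a, b in matching for x in (a, b)}
    if e in matching:                             # delete e
        return matching - {e}
    if u not in matched and v not in matched:     # add e
        return matching | {e}
    if (u in matched) != (v in matched):          # slide: swap e for the edge at the matched end
        x = u if u in matched else v
        f = next(f for f in matching if x in f)
        return (matching - {f}) | {e}
    return matching                               # both endpoints matched: stay put

def sample_matching(edges, steps=10000, seed=0):
    """Run the chain for a fixed number of steps from the empty matching."""
    rng = random.Random(seed)
    M = frozenset()
    for _ in range(steps):
        M = matchings_chain_step(edges, M, rng)
    return M

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small test graph
print(sample_matching(edges))
```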
Information & Computation | 1989
Alistair Sinclair; Mark Jerrum
The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ ℝ or for no β ∈ ℝ. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good …
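The structural characterisation of rapid convergence referred to is in terms of the conductance of the chain's underlying graph. A small numerical illustration (assuming numpy; function names are illustrative): for the lazy random walk on a 6-cycle, the spectral gap, which governs the convergence rate, is bounded below by Φ²/2, where Φ is the conductance.

```python
import itertools
import numpy as np

def lazy_walk_matrix(adj):
    """Transition matrix of the lazy simple random walk on an undirected graph
    with adjacency matrix `adj` (stay put with probability 1/2)."""
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]
    return 0.5 * np.eye(len(adj)) + 0.5 * P

def conductance(P, pi):
    """Conductance Phi = min over cuts S of Q(S, S^c) / min(pi(S), pi(S^c)),
    where Q(x, y) = pi(x) P(x, y); brute force over all non-trivial cuts."""
    n = len(pi)
    Q = pi[:, None] * P
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            Sc = [v for v in range(n) if v not in S]
            flow = Q[np.ix_(S, Sc)].sum()
            best = min(best, flow / min(pi[S].sum(), pi[Sc].sum()))
    return best

# Lazy walk on a 6-cycle; pi is uniform because the graph is regular.
n = 6
adj = np.zeros((n, n))
for v in range(n):
    adj[v, (v + 1) % n] = adj[(v + 1) % n, v] = 1
P = lazy_walk_matrix(adj)
pi = np.full(n, 1 / n)

gap = 1 - sorted(np.linalg.eigvalsh(P))[-2]   # spectral gap (P is symmetric here)
phi = conductance(P, pi)
print(f"conductance {phi:.3f}, spectral gap {gap:.3f}, lower bound phi^2/2 = {phi**2/2:.3f}")
```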
Algorithmica | 1997
Alan M. Frieze; Mark Jerrum
Polynomial-time approximation algorithms with nontrivial performance guarantees are presented for the problems of (a) partitioning the vertices of a weighted graph into k blocks so as to maximize the weight of crossing edges, and (b) partitioning the vertices of a weighted graph into two blocks of equal cardinality, again so as to maximize the weight of crossing edges. The approach, pioneered by Goemans and Williamson, is via a semidefinite programming relaxation.
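A hedged sketch of the basic k = 2 ingredient of this approach, the Goemans–Williamson MAX CUT relaxation with random-hyperplane rounding; the paper extends the technique to k blocks and to bisections. The sketch assumes the cvxpy package (with its bundled SDP solver) and numpy are available; the function name and toy instance are illustrative.

```python
import cvxpy as cp
import numpy as np

def max_cut_sdp_rounding(weights, trials=50, seed=0):
    """Goemans-Williamson-style relaxation for MAX CUT (the k = 2 ingredient):
    solve the SDP  max sum_{i<j} w_ij (1 - X_ij)/2  s.t.  X PSD, diag(X) = 1,
    then round the vector solution with a random hyperplane."""
    n = len(weights)
    X = cp.Variable((n, n), PSD=True)
    objective = cp.sum(cp.multiply(weights, 1 - X)) / 4   # each edge counted twice
    cp.Problem(cp.Maximize(objective), [cp.diag(X) == 1]).solve()

    # Recover unit vectors: X = V^T V up to numerical error.
    eigvals, eigvecs = np.linalg.eigh(X.value)
    V = np.diag(np.sqrt(np.clip(eigvals, 0, None))) @ eigvecs.T

    rng = np.random.default_rng(seed)
    best_cut, best_sides = -np.inf, None
    for _ in range(trials):
        r = rng.standard_normal(n)
        sides = np.sign(V.T @ r)                          # side of the random hyperplane
        cut = sum(weights[i, j] for i in range(n) for j in range(i + 1, n)
                  if sides[i] != sides[j])
        if cut > best_cut:
            best_cut, best_sides = cut, sides
    return best_cut, best_sides

# 5-cycle with unit weights: the optimum cut value is 4.
n = 5
W = np.zeros((n, n))
for v in range(n):
    W[v, (v + 1) % n] = W[(v + 1) % n, v] = 1
print(max_cut_sdp_rounding(W))
```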
Archive | 2003
Mark Jerrum
Foreword
1 Two good counting algorithms
1.1 Spanning trees
1.2 Perfect matchings in a planar graph
2 #P-completeness
2.1 The class #P
2.2 A primal #P-complete problem
2.3 Computing the permanent is hard on average
3 Sampling and counting
3.1 Preliminaries
3.2 Reducing approximate counting to almost uniform sampling
3.3 Markov chains
4 Coupling and colourings
4.1 Colourings of a low-degree graph
4.2 Bounding mixing time using coupling
4.3 Path coupling
5 Canonical paths and matchings
5.1 Matchings in a graph
5.2 Canonical paths
5.3 Back to matchings
5.4 Extensions and further applications
5.5 Continuous time
6 Volume of a convex body
6.1 A few remarks on Markov chains with continuous state space
6.2 Invariant measure of the ball walk
6.3 Mixing rate of the ball walk
6.4 Proof of the Poincaré inequality (Theorem 6.7)
6.5 Proofs of the geometric lemmas
6.6 Relaxing the curvature condition
6.7 Using samples to estimate volume
6.8 Appendix: a proof of Corollary 6.8
7 Inapproximability
7.1 Independent sets in a low-degree graph
Random Structures and Algorithms | 1995
Mark Jerrum
A fully polynomial randomized approximation scheme is presented for estimating the number of (vertex) k‐colorings of a graph of maximum degree Δ, when k ≥ 2Δ + 1.
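The sampling engine behind an FPRAS of this kind is a single-site update chain on proper colourings. Below is a heat-bath (Glauber) variant, in the same spirit as the chain analysed in the paper, whose stationary distribution is uniform over proper k-colourings; the function name and toy instance are illustrative.

```python
import random

def glauber_colouring(adj, k, steps=20000, seed=0):
    """Glauber dynamics for proper k-colourings: repeatedly pick a vertex and
    recolour it uniformly among colours not used by its neighbours.  For
    k >= 2*Delta + 1 a chain of this kind mixes rapidly, which is what makes
    the sampling (and hence counting) efficient."""
    rng = random.Random(seed)
    n = len(adj)
    # Greedy initial proper colouring (needs only Delta + 1 <= k colours).
    colour = [0] * n
    for v in range(n):
        used = {colour[u] for u in adj[v] if u < v}
        colour[v] = min(c for c in range(k) if c not in used)
    for _ in range(steps):
        v = rng.randrange(n)
        forbidden = {colour[u] for u in adj[v]}
        allowed = [c for c in range(k) if c not in forbidden]
        colour[v] = rng.choice(allowed)
    return colour

# 5-cycle (Delta = 2), so k = 5 satisfies k >= 2*Delta + 1.
adj = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]
print(glauber_colouring(adj, k=5))
```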
Random Structures and Algorithms | 1992
Mark Jerrum
In a random graph on n vertices, the maximum clique is likely to be of size very close to 2 lg n. However, the clique produced by applying the naive “greedy” heuristic to a random graph is unlikely to have size much exceeding lg n. The factor of two separating these estimates motivates the search for more effective heuristics. This article analyzes a heuristic search strategy, the Metropolis process, which is just one step above the greedy one in its level of sophistication. It is shown that the Metropolis process takes super-polynomial time to locate a clique that is only slightly bigger than that produced by the greedy heuristic.
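A sketch of a Metropolis process of this shape on the cliques of a graph: adding a vertex is always accepted when the result is still a clique, and deleting a vertex is accepted with probability 1/λ, so the stationary distribution is proportional to λ^{|clique|}. The code (illustrative names, toy G(n, 1/2) instance) merely simulates the process; it does not reproduce the paper's negative result, which concerns how long such a process must run.

```python
import random

def metropolis_clique(adj, n, lam=2.0, steps=20000, seed=0):
    """Metropolis process on cliques: pick a vertex u.a.r.; add it if the
    result is still a clique, delete it with probability 1/lam if it is
    already in the clique.  Stationary distribution ~ lam^{|clique|}."""
    rng = random.Random(seed)
    clique, best = set(), set()
    for _ in range(steps):
        v = rng.randrange(n)
        if v in clique:
            if rng.random() < 1.0 / lam:
                clique.remove(v)
        elif all(u in adj[v] for u in clique):
            clique.add(v)
            if len(clique) > len(best):
                best = set(clique)
    return best

# Erdos-Renyi G(n, 1/2) test instance.
rng = random.Random(1)
n = 60
adj = [set() for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.5:
            adj[i].add(j)
            adj[j].add(i)
print(len(metropolis_clique(adj, n)))
```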
integer programming and combinatorial optimization | 1995
Alan M. Frieze; Mark Jerrum
Polynomial-time approximation algorithms with non-trivial performance guarantees are presented for the problems of (a) partitioning the vertices of a weighted graph into k blocks so as to maximise the weight of crossing edges, and (b) partitioning the vertices of a weighted graph into two blocks of equal cardinality, again so as to maximise the weight of crossing edges. The approach, pioneered by Goemans and Williamson, is via a semidefinite programming relaxation.
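Complementing the MAX CUT sketch given under the 1997 Algorithmica entry above, here is a hedged sketch of a k-way rounding step in the same spirit: given unit vectors from a semidefinite relaxation, draw k random Gaussian directions and assign each vertex to the block whose direction it correlates with most. This is an illustration only, not a transcription of the paper's rounding or its analysis; the function name and the stand-in vectors are illustrative.

```python
import numpy as np

def k_way_rounding(V, k, seed=0):
    """Given unit vectors v_1..v_n (columns of V) from a semidefinite
    relaxation, draw k independent Gaussian directions and assign each
    vertex to the one with the largest inner product -- a k-way analogue
    of random-hyperplane rounding."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((k, V.shape[0]))   # k random directions
    return np.argmax(G @ V, axis=0)            # block index for each vertex

# Toy use: random orthonormal vectors standing in for an SDP solution, k = 3 blocks.
n = 8
V = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))[0]
print(k_way_rounding(V, k=3))
```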
Algorithmica | 2004
Martin E. Dyer; Leslie Ann Goldberg; Catherine S. Greenhill; Mark Jerrum
Two natural classes of counting problems that are interreducible under approximation-preserving reductions are: (i) those that admit a particular kind of efficient approximation algorithm known as an “FPRAS”, and (ii) those that are complete for #P with respect to approximation-preserving reducibility. We describe and investigate not only these two classes but also a third class, of intermediate complexity, that is not known to be identical to (i) or (ii). The third class can be characterised as the hardest problems in a logically defined subclass of #P.
conference on learning theory | 1993
Paul W. Goldberg; Mark Jerrum
The Vapnik-Chervonenkis (V-C) dimension is an important combinatorial tool in the analysis of learning problems in the PAC framework. For polynomial learnability, we seek upper bounds on the V-C dimension that are polynomial in the syntactic complexity of concepts. Such upper bounds are automatic for discrete concept classes, but hitherto little has been known about what general conditions guarantee polynomial bounds on V-C dimension for classes in which concepts and examples are represented by tuples of real numbers. In this paper, we show that for two general kinds of concept class the V-C dimension is polynomially bounded in the number of real numbers used to define a problem instance. One is classes where the criterion for membership of an instance in a concept can be expressed as a formula (in the first-order theory of the reals) with fixed quantification depth and exponentially-bounded length, whose atomic predicates are polynomial inequalities of exponentially-bounded degree. The other is classes where containment of an instance in a concept is testable in polynomial time, assuming we may compute standard arithmetic operations on reals exactly in constant time. Our results show that in the continuous case, as in the discrete, the real barrier to efficient learning in the Occam sense is complexity-theoretic and not information-theoretic. We present examples to show how these results apply to concept classes defined by geometrical figures and neural nets, and derive polynomial bounds on the V-C dimension for these classes.
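As a toy illustration of the combinatorial quantity being bounded (not of the paper's technique): closed intervals on the real line have V-C dimension 2, which the brute-force shattering check below confirms; the function name is illustrative.

```python
def intervals_shatter(points):
    """Can closed intervals [a, b] realise every +/- labelling of `points`?
    Intervals pick out exactly the contiguous runs of the sorted points, so
    this is a direct brute-force test of shattering."""
    pts = sorted(points)
    candidates = [(a, b) for a in pts + [min(pts) - 1] for b in pts + [max(pts) + 1]]
    realisable = {tuple(a <= x <= b for x in pts) for a, b in candidates}
    return len(realisable) == 2 ** len(pts)

# Any 2 points are shattered, but no 3 points are: V-C dimension of intervals is 2.
print(intervals_shatter([0.0, 1.0]))        # True
print(intervals_shatter([0.0, 1.0, 2.0]))   # False (the middle point cannot be excluded alone)
```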