
Publication


Featured research published by Dimitris Achlioptas.


Journal of Computer and System Sciences | 2003

Database-friendly random projections: Johnson-Lindenstrauss with binary coins

Dimitris Achlioptas

A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space, where k is logarithmic in n and independent of d, so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a spherically random k-dimensional hyperplane through the origin. We give two constructions of such embeddings with the property that all elements of the projection matrix belong to {-1, 0, +1}. Such constructions are particularly well suited for database environments, as the computation of the embedding reduces to evaluating a single aggregate over k random partitions of the attributes.
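The sparse {-1, 0, +1} projection described above is easy to sketch. In the minimal NumPy version below (function and parameter names are illustrative, not from the paper), each matrix entry is √3 with probability 1/6, 0 with probability 2/3, and -√3 with probability 1/6, so roughly two thirds of the arithmetic can be skipped:

```python
import numpy as np

def achlioptas_projection(X, k, seed=0):
    """Project rows of X (n x d) down to k dimensions using a sparse
    matrix whose entries lie in {-sqrt(3), 0, +sqrt(3)}."""
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    # +sqrt(3) w.p. 1/6, 0 w.p. 2/3, -sqrt(3) w.p. 1/6 -> unit variance entries
    R = rng.choice([np.sqrt(3.0), 0.0, -np.sqrt(3.0)],
                   size=(d, k), p=[1 / 6, 2 / 3, 1 / 6])
    return X @ R / np.sqrt(k)

# Example: pairwise distances are approximately preserved.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 1000))
Y = achlioptas_projection(X, k=400)
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig  # concentrates near 1 for large enough k
```

Since two thirds of the entries of R are zero, the matrix product reduces to a signed sum over a random third of the attributes, which is the "single aggregate" the abstract alludes to.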


Symposium on Principles of Database Systems | 2001

Database-friendly random projections

Dimitris Achlioptas

A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space where k is logarithmic in n and independent of d so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a random k-dimensional hyperplane. We give a novel construction of the embedding, suitable for database applications, which amounts to computing a simple aggregate over k random attribute partitions.


Journal of the ACM | 2007

Fast computation of low-rank matrix approximations

Dimitris Achlioptas; Frank McSherry

Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and/or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and/or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero-mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A + N is computed. We give high probability bounds on the quality of our approximation both in the Frobenius and the 2-norm.
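A hedged sketch of the sampling idea above: keep each entry of A independently with probability p, rescale by 1/p (so the sparse matrix equals A plus a zero-mean random matrix N), and take a truncated SVD of the sparser matrix. The rank-2 "signal" matrix and the value of p below are illustrative choices, not parameters from the paper:

```python
import numpy as np

def sparsify(A, p, seed=0):
    """Keep each entry of A with probability p, rescaled by 1/p; the
    result is A plus a zero-mean, bounded-variance random matrix N."""
    rng = np.random.default_rng(seed)
    mask = rng.random(A.shape) < p
    return np.where(mask, A / p, 0.0)

def rank_r_approx(M, r):
    """Best rank-r approximation of M via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# A matrix with strong spectral features: rank-2 signal plus small noise.
rng = np.random.default_rng(0)
n = 200
signal = (50 * np.outer(rng.standard_normal(n), rng.standard_normal(n))
          + 30 * np.outer(rng.standard_normal(n), rng.standard_normal(n)))
A = signal + 0.1 * rng.standard_normal((n, n))

# Rank-2 approximation computed from a matrix with ~70% of entries zeroed.
A_hat = rank_r_approx(sparsify(A, p=0.3), r=2)
err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

Because N has weak spectral features, the top singular directions of the sparsified matrix still track those of A, and the approximation error stays modest even though most entries were discarded.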


Science | 2009

Explosive Percolation in Random Networks

Dimitris Achlioptas; Raissa M. D'Souza; Joel Spencer

Networks in which the formation of connections is governed by a random process often undergo a percolation transition, wherein around a critical point, the addition of a small number of connections causes a sizable fraction of the network to suddenly become linked together. Typically such transitions are continuous, so that the percentage of the network linked together tends to zero right above the transition point. Whether percolation transitions could be discontinuous has been an open question. Here, we show that incorporating a limited amount of choice in the classic Erdős-Rényi network formation model causes its percolation transition to become discontinuous.
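The choice mechanism can be simulated directly with a union-find structure. The sketch below implements the product rule studied in this line of work: of two random candidate edges, keep the one whose endpoint components have the smaller size product, which delays the transition and makes it abrupt. The graph sizes and edge counts are illustrative:

```python
import random

class DSU:
    """Union-find with path halving and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def comp_size(self, x):
        return self.size[self.find(x)]
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def product_rule_process(n, m, seed=0):
    """Add m edges; at each step propose two random edges (loops allowed,
    harmlessly ignored) and keep the one minimizing the product of its
    endpoint component sizes. Returns the largest component fraction."""
    rng = random.Random(seed)
    dsu = DSU(n)
    for _ in range(m):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))
        p1 = dsu.comp_size(e1[0]) * dsu.comp_size(e1[1])
        p2 = dsu.comp_size(e2[0]) * dsu.comp_size(e2[1])
        a, b = e1 if p1 <= p2 else e2
        dsu.union(a, b)
    return max(dsu.size[dsu.find(v)] for v in range(n)) / n

frac_sub = product_rule_process(2000, 1400)  # below the delayed critical point
frac_sup = product_rule_process(2000, 2200)  # above it: a giant component
```

Comparing the two fractions shows the signature of the explosive transition: essentially no giant component just below the critical edge density, and a large one just above it.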


Nature | 2005

Rigorous location of phase transitions in hard optimization problems

Dimitris Achlioptas; Assaf Naor; Yuval Peres

It is widely believed that for many optimization problems, no algorithm is substantially more efficient than exhaustive search. This means that finding optimal solutions for many practical problems is completely beyond any current or projected computational capacity. To understand the origin of this extreme ‘hardness’, computer scientists, mathematicians and physicists have been investigating for two decades a connection between computational complexity and phase transitions in random instances of constraint satisfaction problems. Here we present a mathematically rigorous method for locating such phase transitions. Our method works by analysing the distribution of distances between pairs of solutions as constraints are added. By identifying critical behaviour in the evolution of this distribution, we can pinpoint the threshold location for a number of problems, including the two most-studied ones: random k-SAT and random graph colouring. Our results prove that the heuristic predictions of statistical physics in this context are essentially correct. Moreover, we establish that random instances of constraint satisfaction problems have solutions well beyond the reach of any analysed algorithm.
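The qualitative phenomenon, a sharp drop in the probability of satisfiability as the clauses-to-variables ratio r crosses a threshold, can be observed even at toy sizes by brute force. The sketch below is purely illustrative (the rigorous results above are asymptotic, and nothing here reproduces the paper's distance-distribution method):

```python
import itertools
import random

def random_ksat(n, m, k=3, rng=random):
    """m random k-clauses over n variables: k distinct variables each,
    with independent random signs. Literal v > 0 means x_v, -v means not x_v."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n + 1), k)]
            for _ in range(m)]

def satisfiable(n, clauses):
    """Exhaustive search over all 2^n assignments (toy sizes only)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def sat_fraction(n, ratio, trials=8, seed=0):
    """Empirical probability that a random 3-SAT instance at the given
    clauses-to-variables ratio is satisfiable."""
    rng = random.Random(seed)
    m = int(ratio * n)
    return sum(satisfiable(n, random_ksat(n, m, rng=rng))
               for _ in range(trials)) / trials

low = sat_fraction(12, 2.0)   # well below the 3-SAT threshold (~4.27)
high = sat_fraction(12, 7.0)  # well above it
```

Even at n = 12 the contrast is stark: instances at ratio 2 are almost always satisfiable, while at ratio 7 they almost never are.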


Symposium on the Theory of Computing | 2005

On the bias of traceroute sampling: or, power-law degree distributions in regular graphs

Dimitris Achlioptas; Aaron Clauset; David Kempe; Cristopher Moore

Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees sample only a fraction of the network's edges, and a recent paper by Lakhina et al. found empirically that the resulting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson. In this paper, we study the bias of traceroute sampling systematically and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of Lakhina et al. on a rigorous footing, and extends them to nearly arbitrary degree distributions.
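The bias is easy to reproduce in simulation: even when every vertex of the underlying graph has (nearly) the same degree, the single-source BFS tree that traceroute-style sampling observes is dominated by degree-1 leaves. The crude configuration-model generator below is a rough stand-in for a δ-regular graph (all names and sizes are illustrative):

```python
import random
from collections import deque

def near_regular_graph(n, d, seed=0):
    """Crude configuration-model pairing: each vertex gets d stubs, stubs
    are matched at random; self-loops and duplicate edges are dropped, so
    degrees are d or slightly less. Good enough for this illustration."""
    rng = random.Random(seed)
    stubs = [v for v in range(n) for _ in range(d)]
    rng.shuffle(stubs)
    adj = {v: set() for v in range(n)}
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def bfs_tree_degrees(adj, root=0):
    """Degrees of the reached vertices within the single-source BFS
    (shortest-path) tree -- what traceroute-style sampling observes."""
    parent = {root: None}
    deg = {root: 0}
    q = deque([root])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                deg[w] = 1   # edge to parent
                deg[v] += 1  # edge to child
                q.append(w)
    return deg

adj = near_regular_graph(2000, 6)
tree_deg = bfs_tree_degrees(adj)
leaf_frac = sum(1 for v in tree_deg.values() if v == 1) / len(tree_deg)
```

Although every underlying degree is about 6, the observed tree degrees are highly skewed, with a large fraction of apparent degree-1 vertices; this is the sampling artifact the paper analyzes rigorously.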


Symposium on the Theory of Computing | 2003

The threshold for random k-SAT is 2^k ln 2 - O(k)

Dimitris Achlioptas; Yuval Peres

Let F_k(n,m) be a random k-SAT formula on n variables formed by selecting uniformly and independently m out of all possible k-clauses. It is well known that for r ≥ 2^k ln 2, F_k(n,rn) is unsatisfiable with probability 1-o(1). We prove that there exists a sequence t_k = O(k) such that for r ≤ 2^k ln 2 - t_k, F_k(n,rn) is satisfiable with probability 1-o(1). Our technique yields an explicit lower bound for every k which for k > 3 improves upon all previously known bounds. For example, when k=10 our lower bound is 704.94 while the upper bound is 708.94.


Symposium on the Theory of Computing | 2001

Fast computation of low rank matrix approximations

Dimitris Achlioptas; Frank McSherry

Given a matrix A, it is often desirable to find an approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral structure, i.e., when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and/or quantizing the entries of A, thus speeding up computation by reducing the number of non-zero entries and/or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix E to A, whose entries are independent random variables with zero mean and bounded variance. Since, with high probability, E has very weak spectral structure, we can prove that the effect of sampling and quantization nearly vanishes when a low rank approximation to A+E is computed. In fact, the stronger the spectral structure of A, the more of its entries we can afford to discard and, ultimately, the faster we can discover that structure. We give bounds on the quality of our approximation both in the L2 and in the Frobenius norm.


Conference on Learning Theory | 2005

On spectral learning of mixtures of distributions

Dimitris Achlioptas; Frank McSherry

We consider the problem of learning mixtures of distributions via spectral methods and derive a characterization of when such methods are useful. Specifically, given a mixture sample, let μ_i, C_i, w_i denote the empirical mean, covariance matrix, and mixing weight of the samples from the i-th component. We prove that a very simple algorithm, namely spectral projection followed by single-linkage clustering, properly classifies every point in the sample provided that each pair of means μ_i, μ_j is well separated, in the sense that ‖μ_i - μ_j‖^2 is at least ‖C_i‖_2 (1/w_i + 1/w_j) plus a term that depends on the concentration properties of the distributions in the mixture. This second term is very small for many distributions, including Gaussians, log-concave distributions, and many others. As a result, we get the best known bounds for learning mixtures of arbitrary Gaussians in terms of the required mean separation. At the same time, we prove that there are many Gaussian mixtures {(μ_i, C_i, w_i)} such that each pair of means is separated by ‖C_i‖_2 (1/w_i + 1/w_j), yet upon spectral projection the mixture collapses completely, i.e., all means and covariance matrices in the projected mixture are identical.
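A minimal sketch of the spectral-projection step, assuming two well-separated spherical Gaussians. For brevity, a median split along the first projected coordinate stands in for the single-linkage clustering used in the paper, and all dimensions and separations are illustrative:

```python
import numpy as np

def spectral_project(X, k):
    """Project samples (rows of X) onto the span of the top-k right
    singular vectors of the (mean-centered) sample matrix."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return X @ Vt[:k].T

# Two spherical Gaussians in d = 200, means separated along one direction.
rng = np.random.default_rng(0)
d, m = 200, 150
mu = np.zeros(d)
mu[0] = 6.0
X = np.vstack([rng.standard_normal((m, d)),        # component 1, mean 0
               rng.standard_normal((m, d)) + mu])  # component 2, mean mu

P = spectral_project(X, k=2)

# After projection, the mean-separation direction dominates the top
# singular vector, so a simple split along it recovers the components.
labels = (P[:, 0] > np.median(P[:, 0])).astype(int)
sep = abs(labels[:m].mean() - labels[m:].mean())  # near 1 when recovery works
```

The projection concentrates the between-component variance into the retained coordinates while averaging away most of the ambient noise, which is why a crude one-dimensional split suffices in this easy regime.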


Constraints - An International Journal | 2001

Random Constraint Satisfaction: A More Accurate Picture

Dimitris Achlioptas; Michael Molloy; Lefteris M. Kirousis; Yannis C. Stamatiou; Evangelos Kranakis; Danny Krizanc

In the last few years there has been a great amount of interest in Random Constraint Satisfaction Problems, both from an experimental and a theoretical point of view. Quite intriguingly, experimental results with various models for generating random CSP instances suggest that the probability of such problems having a solution exhibits a "threshold-like" behavior. In this spirit, some preliminary theoretical work has been done in analyzing these models asymptotically, i.e., as the number of variables grows. In this paper we prove that, contrary to beliefs based on experimental evidence, the models commonly used for generating random CSP instances do not have an asymptotic threshold. In particular, we prove that asymptotically almost all instances they generate are overconstrained, suffering from trivial, local inconsistencies. To complement this result we present an alternative, single-parameter model for generating random CSP instances and prove that, unlike current models, it exhibits non-trivial asymptotic behavior. Moreover, for this new model we derive explicit bounds for the narrow region within which the probability of having a solution changes dramatically.
