Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Leonard J. Schulman is active.

Publication


Featured research published by Leonard J. Schulman.


Foundations of Computer Science | 1995

Splitters and near-optimal derandomization

Moni Naor; Leonard J. Schulman; Aravind Srinivasan

We present a fairly general method for finding deterministic constructions obeying what we call k-restrictions; this yields structures of size not much larger than the probabilistic bound. The structures constructed by our method include (n,k)-universal sets (a collection of binary vectors of length n such that for any subset of size k of the indices, all 2^k configurations appear) and families of perfect hash functions. The near-optimal constructions of these objects imply the very efficient derandomization of algorithms in learning, of fixed-subgraph finding algorithms, and of near-optimal ΣΠΣ threshold formulae. In addition, they derandomize the reduction showing the hardness of approximation of set cover. They also yield deterministic constructions for a local-coloring protocol, and for exhaustive testing of circuits.
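
To make the central definition concrete, here is a small brute-force checker (an illustration only, not the paper's construction) that verifies whether a family of binary vectors is an (n,k)-universal set in the sense defined above.

    from itertools import combinations, product

    def is_universal(vectors, n, k):
        """Brute-force check that `vectors` (length-n tuples of 0/1) form an
        (n, k)-universal set: every choice of k coordinates exhibits all 2^k patterns."""
        for idx in combinations(range(n), k):
            seen = {tuple(v[i] for i in idx) for v in vectors}
            if len(seen) < 2 ** k:
                return False
        return True

    # Tiny sanity check: the family of all 2^n vectors is trivially (n, k)-universal;
    # the point of the paper is to construct much smaller families deterministically.
    n, k = 4, 2
    all_vectors = list(product((0, 1), repeat=n))
    print(is_universal(all_vectors, n, k))            # True
    print(is_universal(all_vectors[:2 ** k], n, k))   # False: the first two coordinates never vary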


Foundations of Computer Science | 2006

The Effectiveness of Lloyd-Type Methods for the k-Means Problem

Rafail Ostrovsky; Yuval Rabani; Leonard J. Schulman; Chaitanya Swamy

We investigate variants of Lloyd's heuristic for clustering high dimensional data in an attempt to explain its popularity (a half century after its introduction) among practitioners, and in order to suggest improvements in its application. We propose and justify a clusterability criterion for data sets. We present variants of Lloyd's heuristic that quickly lead to provably near-optimal clustering solutions when applied to well-clusterable instances. This is the first performance guarantee for a variant of Lloyd's heuristic. The provision of a guarantee on output quality does not come at the expense of speed: some of our algorithms are candidates for being faster in practice than currently used variants of Lloyd's method. In addition, our other algorithms are faster on well-clusterable instances than recently proposed approximation algorithms, while maintaining similar guarantees on clustering quality. Our main algorithmic contribution is a novel probabilistic seeding process for the starting configuration of a Lloyd-type iteration.
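
As a rough illustration of a Lloyd-type method with probabilistic seeding, the sketch below picks each new starting center with probability proportional to its squared distance from the centers chosen so far and then runs standard Lloyd updates. It is a simplified stand-in for, not a transcription of, the seeding process analyzed in the paper.

    import numpy as np

    def seed_centers(points, k, rng):
        """Pick k starting centers; each new center is a data point chosen with
        probability proportional to its squared distance from the centers so far."""
        centers = [points[rng.integers(len(points))]]
        for _ in range(k - 1):
            d2 = np.min(((points[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1), axis=1)
            centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
        return np.array(centers)

    def lloyd(points, k, iters=20, seed=0):
        """Distance-squared seeding followed by standard Lloyd updates."""
        rng = np.random.default_rng(seed)
        centers = seed_centers(points, k, rng)
        for _ in range(iters):
            labels = np.argmin(((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
            centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return centers, labels

    # A well-clusterable instance: three well-separated Gaussian blobs in the plane.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(c, 0.1, size=(100, 2)) for c in [(0, 0), (5, 5), (0, 5)]])
    centers, labels = lloyd(data, k=3)
    print(np.round(centers, 2))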


Symposium on the Theory of Computing | 2001

Quantum mechanical algorithms for the nonabelian hidden subgroup problem

Michelangelo Grigni; Leonard J. Schulman; Monica Vazirani; Umesh V. Vazirani

We provide positive and negative results concerning the “standard method” of identifying a hidden subgroup of a nonabelian group using a quantum computer.


International Symposium on Information Theory | 1995

Coding for interactive communication

Leonard J. Schulman

Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol π be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably? Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability, and expense. We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slowdown. This is an analog for general, interactive protocols of Shannon's coding theorem, which deals only with data transmission, i.e., one-way protocols. We cannot use Shannon's block coding method because the bits exchanged in the protocol are determined only one at a time, dynamically, in the course of the interaction. Instead, we describe a simulation protocol using a new kind of code, explicit tree codes.
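
The tree codes mentioned at the end of the abstract are labelings of the edges of a tree with the property that any two equal-length root paths, once they diverge, carry label sequences that disagree in at least a constant fraction α of the positions after the divergence point. The brute-force checker below tests this distance property on a small finite tree; it illustrates the definition as paraphrased here, not the paper's simulation protocol.

    from itertools import product

    def is_tree_code(label, depth, arity, alpha):
        """Check the tree-code distance property on a finite tree.

        `label(path)` is the symbol on the edge entering the node reached by `path`
        (a tuple of child indices).  For every pair of equal-length paths diverging
        at depth h, the labels after position h must disagree in at least
        alpha * (length - h) places."""
        for length in range(1, depth + 1):
            for p in product(range(arity), repeat=length):
                for q in product(range(arity), repeat=length):
                    if p == q:
                        continue
                    h = next(i for i in range(length) if p[i] != q[i])   # divergence depth
                    dist = sum(label(p[:i + 1]) != label(q[:i + 1]) for i in range(h, length))
                    if dist < alpha * (length - h):
                        return False
        return True

    # Toy labeling (not a good tree code): label each edge by its child index.
    # Two paths that diverge once and then agree accumulate distance 1 over a long
    # tail, so the property fails once the depth is large enough.
    print(is_tree_code(lambda path: path[-1], depth=4, arity=2, alpha=0.5))   # False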


IEEE Transactions on Antennas and Propagation | 2004

A random walk model of wave propagation

Massimo Franceschetti; Jehoshua Bruck; Leonard J. Schulman

This paper shows that a reasonably accurate description of propagation loss in small urban cells can be obtained with a simple stochastic model based on the theory of random walks, which accounts for only two parameters: the amount of clutter and the amount of absorption in the environment. Despite the simplifications of the model, the derived analytical solution correctly describes the smooth transition of power attenuation from an inverse square law with the distance to the transmitter, to an exponential attenuation as this distance increases, as is observed in practice. Our analysis suggests using a simple exponential path loss formula as an alternative to the empirical formulas that are often used for prediction. Results are validated by comparison with experimental data collected in a small urban cell.
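
A crude Monte Carlo sketch of a model in this spirit: photons leave the transmitter, travel exponentially distributed free paths between scatterers, and are absorbed at each scattering event with a fixed probability; the distance at which absorption occurs is histogrammed as a rough proxy for where power is delivered. The parameter names and sampling choices below are assumptions made for illustration, not the paper's analytical model.

    import math
    import random

    def absorption_radius(eta, absorb_prob, rng):
        """Follow one photon in the plane: free paths are exponential with mean 1/eta
        (eta plays the role of clutter density); each scattering event absorbs the
        photon with probability absorb_prob, otherwise it is re-emitted in a
        uniformly random direction."""
        x = y = 0.0
        angle = rng.uniform(0.0, 2.0 * math.pi)
        while True:
            step = rng.expovariate(eta)
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            if rng.random() < absorb_prob:
                return math.hypot(x, y)
            angle = rng.uniform(0.0, 2.0 * math.pi)

    # Histogram of absorption distances from the transmitter.
    rng = random.Random(0)
    radii = [absorption_radius(eta=1.0, absorb_prob=0.1, rng=rng) for _ in range(50_000)]
    bins = [0] * 30
    for r in radii:
        if int(r) < len(bins):
            bins[int(r)] += 1
    print(bins)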


Foundations of Computer Science | 2007

Quantum Algorithms for Hidden Nonlinear Structures

Andrew M. Childs; Leonard J. Schulman; Umesh V. Vazirani

Attempts to find new quantum algorithms that outperform classical computation have focused primarily on the non-Abelian hidden subgroup problem, which generalizes the central problem solved by Shor's factoring algorithm. We suggest an alternative generalization, namely to problems of finding hidden nonlinear structures over finite fields. We give examples of two such problems that can be solved efficiently by a quantum computer, but not by a classical computer. We also give some positive results on the quantum query complexity of finding hidden nonlinear structures.


Symposium on the Theory of Computing | 1993

Deterministic coding for interactive communication

Leonard J. Schulman

Two factors are prominent among those contributing to the increases in speed and storage capacity in current generations of computers. The first is increasing parallelism — whether in actual parallel and distributed computers, or among the steadily more numerous components of a sequential machine. The second is the dramatic miniaturization of logical devices and wires. The first of these factors greatly magnifies the number of interprocessor communications performed during any computation, while the second increases the noise level affecting transmissions. For these reasons, and on the basis that the role of noise should be understood in a model of a physical process, the following concern was recently identified as basic [10]. Consider a problem whose input is split between two processors connected by a communication link, and for which an interactive protocol exists which solves the problem in T transmissions on any input, provided the channel is noiseless. If in fact there is some noise on the channel, what is the effect upon the number of transmissions needed in order to solve the communication problem reliably? We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slow-down. This is an analog for general interactive protocols of Shannon's coding theorem, which dealt only with data transmission, i.e. one-way protocols [11]. This result improves on recent work which provided a randomized simulation method for interactive protocols. The Shannon theorem is thus reproduced for the general interactive case, in all but the constant factor. The randomized method was fundamentally unsuited to further derandomization, and the deterministic solution is entirely different. A key role in the present work is played by tree codes, originally considered by Wozencraft [13] for the sake of computationally efficient decoding of noisy data transmissions. In their new setting, tree codes are reinterpreted as a way of transforming a highly interactive protocol into one that behaves like a pair of one-way protocols, and which therefore can be implemented at both high rate and reliability.


Foundations of Computer Science | 1998

Pattern matching for spatial point sets

David E. Cardoze; Leonard J. Schulman

Two sets of points in d-dimensional space are given: a data set D consisting of N points, and a pattern set or probe P consisting of k points. We address the problem of determining whether there is a transformation, among a specified group of transformations of the space, carrying P into or near (meaning at a small directed Hausdorff distance of) D. The groups we consider are translations and rigid motions. Runtimes of approximately O(n log n) and O(n^d log n) respectively are obtained (letting n = max{N,k} and omitting the effects of several secondary parameters). For translations, a runtime of approximately O(n(αk+1) log^2 n) is obtained for the case that a constant fraction α < 1 of the points of the probe is allowed to fail to match.
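
For reference, here is a brute-force baseline for the translation case (not the paper's near-linear-time algorithm): try every translation that aligns some probe point with some data point and test the directed Hausdorff distance. For δ > 0 this examines only aligning translations, which is a standard approximate strategy; for exact matching (δ = 0) it is a correct, if slow, decision procedure.

    import math

    def directed_hausdorff(P, D):
        """max over p in P of the distance from p to its nearest point of D."""
        return max(min(math.dist(p, d) for d in D) for p in P)

    def matches_under_translation(P, D, delta):
        """Brute force over the O(N*k) translations aligning a probe point with a
        data point; the paper obtains roughly O(n log n) for this problem instead."""
        for (px, py) in P:
            for (dx, dy) in D:
                tx, ty = dx - px, dy - py
                shifted = [(qx + tx, qy + ty) for (qx, qy) in P]
                if directed_hausdorff(shifted, D) <= delta:
                    return True
        return False

    # Example: the probe is a translated copy of part of the data set.
    D = [(0, 0), (1, 0), (0, 1), (3, 3)]
    P = [(10, 10), (11, 10), (10, 11)]   # D's first three points shifted by (10, 10)
    print(matches_under_translation(P, D, delta=0.0))   # True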


IEEE Transactions on Information Theory | 1999

Signal propagation and noisy circuits

William S. Evans; Leonard J. Schulman

The information carried by a signal decays when the signal is corrupted by random noise. This occurs when a message is transmitted over a noisy channel, as well as when a noisy component performs computation. We first study this signal decay in the context of communication and obtain a tight bound on the rate at which information decreases as a signal crosses a noisy channel. We then use this information theoretic result to obtain depth lower bounds in the noisy circuit model of computation defined by von Neumann. In this model, each component fails (produces 1 instead of 0 or vice-versa) independently with a fixed probability, and yet the output of the circuit is required to be correct with high probability. Von Neumann showed how to construct circuits in this model that reliably compute a function and are no more than a constant factor deeper than noiseless circuits for the function. We provide a lower bound on the multiplicative increase in circuit depth necessary for reliable computation, and an upper bound on the maximum level of noise at which reliable computation is possible.
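
A small numerical check of the signal decay described above: passing a bit through a binary symmetric channel with crossover probability ε shrinks its mutual information with any upstream variable by a factor of at most (1 − 2ε)^2, which is the tight contraction constant for this channel and the kind of bound the paper establishes. The sketch below verifies the inequality numerically in the simplest case, where Y = X.

    import math

    def h2(p):
        """Binary entropy in bits."""
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def mutual_info_through_bsc(q, eps):
        """I(X;Z) where X ~ Bernoulli(q) and Z is X flipped independently with prob. eps."""
        pz1 = q * (1 - eps) + (1 - q) * eps   # P(Z = 1)
        return h2(pz1) - h2(eps)              # H(Z) - H(Z|X)

    # Compare the decayed information with (1 - 2*eps)^2 * I(X;X) = (1 - 2*eps)^2 * H(q).
    for q in (0.5, 0.2):
        for eps in (0.05, 0.2, 0.4):
            lhs = mutual_info_through_bsc(q, eps)
            rhs = (1 - 2 * eps) ** 2 * h2(q)
            print(f"q={q} eps={eps}: I(X;Z)={lhs:.4f} <= bound {rhs:.4f}: {lhs <= rhs}")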


Symposium on the Theory of Computing | 1999

Molecular scale heat engines and scalable quantum computation

Leonard J. Schulman; Umesh V. Vazirani

We describe a quantum mechanical heat engine. Like its classical counterpart introduced by Carnot, this engine carries out a reversible process in which an input of energy to the system results in a separation of cold and hot regions. The method begins with a reinterpretation in thermodynamic terms of a simple step introduced by von Neumann to extract fair coin flips from sequences of biased coin flips. Some of the experimental set-ups proposed for implementation of quantum computers begin with the quantum bits of the computer initially in a mixed state. Each qubit is polarized in the state |0> with probability (1+ε)/2, and in the state |1> with probability (1−ε)/2, independently (or nearly so) of all other bits. The heat engine may be used to transform this initial collection of n qubits into a state in which a near-optimal m = n[(1+ε)/2 lg(1+ε) + (1−ε)/2 lg(1−ε) − o(1)] qubits are in the joint state |0^m>. These qubits can then be used as the register for a quantum computation. The heat engine is described at the level of an algorithm implementable in any quantum system capable of massive coherent states. A particular implementation is also described for a system of nuclear spins arranged in a chain. The temperature the cold qubits reach is inverse polynomial in n.

Introduction. A sequence of results over the last decade [11, 1, 24, 23] has provided the first credible challenge to the Extended (complexity theoretic) Church-Turing thesis. At issue is the ability of computers based on quantum physics to perform certain computations (such as factorization [23]) exponentially faster than classical computers. However, realizing quantum computation in the laboratory has proved to be a formidable challenge, since it requires the isolation of the computer from the effects of environmentally induced decoherence, while being able to operate upon its state to perform elementary operations. In this paper, we concentrate on one aspect of this problem, namely the initialization of the state of the quantum computer. The task here is that in some experimental set-ups, the quantum bits in the computer are initially in a mixed state (where each qubit is polarized in the state |0> with probability (1+ε)/2, and |1> with probability (1−ε)/2), independently of all other bits, whereas we would like the bits to be in the joint state |0^n>. Since quantum mechanics is time reversible, any initialization algorithm is necessarily constrained to extract at most (1 − H(ε))n purified bits, where H(ε) = (1+ε)/2 lg 2/(1+ε) + (1−ε)/2 lg 2/(1−ε); i.e., the optimal performance is extraction of the state |0^((1−H(ε))n)>. The challenge is to extract as close as possible to this number of purified bits, without access to any scratch bits, as efficiently as possible, and using only very simple elementary device operations. In this paper, we give a procedure that extracts the asymptotically optimal fraction of purified bits. We give a quasi-linear time implementation of our procedure in a model motivated by NMR quantum computing. Recently, small scale quantum computation (on two to three qubits) has been successfully demonstrated in the laboratory (see e.g. [4, 9, 5]), using conventional NMR systems. The qubits in question are the nuclear spins associated with the atoms of a molecule. A major difficulty is that conventional NMR systems operate upon a macroscopic sample, rather than a single molecule. At room temperature, and in thermal equilibrium, such a sample must be regarded as constituting a statistical mixture of pure states (rather than the state |0^n> in which we would like to start the computation). A major breakthrough in the use of NMR techniques in quantum computation came about in [15, 8], where a scheme was given for embedding a small dimensional 'virtual' pure state within the density matrix describing the bulk sample, by exploiting the structure present in thermal equilibrium. (Some of the experimental results cited are based on this scheme.) Although this approach provides a very important proof of concept demonstration for quantum computation, it does not scale: the strength of signal output by the NMR quantum computer degrades exponentially in the number of quantum bits n in the system. Thus the exponential speedup promised by quantum computation is offset by an exponential increase in the effort required to detect the output.
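
Reading the purification bound numerically: the near-optimal number of purified qubits is m ≈ n(1 − H((1+ε)/2)), where H is the binary entropy function, and this equals the n[(1+ε)/2 lg(1+ε) + (1−ε)/2 lg(1−ε)] expression in the abstract (up to the o(1) term). The sketch below simply evaluates both forms to show they agree.

    import math

    def extractable_qubits(n, eps):
        """n * (1 - H((1+eps)/2)): the asymptotically optimal number of purified
        qubits from n qubits, each |0> with probability (1+eps)/2 (o(1) term ignored)."""
        p = (1 + eps) / 2
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy
        return n * (1 - h)

    def extractable_qubits_alt(n, eps):
        """Equivalent form quoted in the abstract."""
        return n * ((1 + eps) / 2 * math.log2(1 + eps) + (1 - eps) / 2 * math.log2(1 - eps))

    for eps in (0.01, 0.1, 0.5):
        print(eps, round(extractable_qubits(10**6, eps)), round(extractable_qubits_alt(10**6, eps)))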

Collaboration


Dive into Leonard J. Schulman's collaborations.

Top Co-Authors

Yuval Rabani

Hebrew University of Jerusalem

Xiaojie Gao

California Institute of Technology

Rafail Ostrovsky

Hebrew University of Jerusalem

William S. Evans

University of British Columbia
