
Publication


Featured research published by Don Coppersmith.


Mathematics of Computation | 1994

Solving homogeneous linear equations over GF (2) via block Wiedemann algorithm

Don Coppersmith

We propose a method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements. We modify an algorithm due to Wiedemann. A block version of the algorithm allows us to perform 32 matrix-vector operations for the cost of one. The resulting algorithm is competitive with structured Gaussian elimination in terms of time and has much lower space requirements. It may be useful in the last stage of integer factorization. We address here the problem of solving a large sparse system of homogeneous linear equations over GF(2), the field with two elements. One important application, which motivates the present work, arises in integer factorization. During the last stage of most integer factorization algorithms, we are presented with a large sparse integer matrix and are asked to find linear combinations of the columns of this matrix which vanish modulo 2. For example [7], the matrix may have 100,000 columns, with an average of 15 nonzero entries per column. For this application we would like to obtain several solutions, because a given solution will lead to a nontrivial factorization with probability 1/2; with n independent solutions, our probability of finding a factorization rises to 1 - 2^{-n}. Structured Gaussian elimination can be used [7], but as problems get larger, it may become infeasible to store the matrices obtained in the intermediate stages of Gaussian elimination. The Wiedemann algorithm [9, 7] has smaller storage requirements (one need only store a few vectors and an encoding of a sparse matrix, not a dense matrix as occurs in Gaussian elimination after fill-in), and it may have fewer computational steps (since one takes advantage of the sparseness of the matrix). But its efficiency is hampered by the fact that the algorithm acts on only one bit at a time. In the present paper we work with blocks of vectors at a single time. By treating 32 vectors at a time (on a machine with 32-bit words), we can perform 32 matrix-vector products at once, thus considerably decreasing the cost of indexing. This can be viewed as a block Wiedemann algorithm. The main technical difficulty is in obtaining the correct generalization of the Berlekamp-Massey algorithm to a block version, namely, a multidimensional version of the extended Euclidean algorithm.
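The central trick the abstract describes is bit packing: 32 vectors over GF(2) are stored as one machine word per coordinate, so a single XOR advances all 32 matrix-vector products at once. A minimal sketch of that idea, with hypothetical names and not the paper's implementation:

```python
# Sketch of the bit-packing idea: a sparse matrix over GF(2) is applied to
# 32 right-hand-side vectors at once by storing those 32 vectors as one
# 32-bit word per coordinate, so each sparse entry costs a single XOR
# instead of 32 separate bit operations.

import random

def sparse_times_block(rows, x):
    """rows[i] lists the column indices of the nonzero entries in row i.
    x[j] is a 32-bit word holding, in bit k, coordinate j of vector k.
    Returns y with y[i] = XOR of x[j] over the nonzeros j in row i,
    i.e. 32 matrix-vector products over GF(2) computed simultaneously."""
    y = []
    for nz in rows:
        acc = 0
        for j in nz:
            acc ^= x[j]          # one word-wide XOR updates all 32 products
        y.append(acc)
    return y

if __name__ == "__main__":
    n = 8
    # random sparse 0/1 matrix, about 3 nonzeros per row
    rows = [sorted(random.sample(range(n), 3)) for _ in range(n)]
    # 32 random vectors packed column-wise into 32-bit words
    x = [random.getrandbits(32) for _ in range(n)]
    print(sparse_times_block(rows, x))
```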


Machine Learning | 2008

Robust reductions from ranking to classification

Maria-Florina Balcan; Nikhil Bansal; Alina Beygelzimer; Don Coppersmith; John Langford; Gregory B. Sorkin

We reduce ranking, as measured by the Area Under the Receiver Operating Characteristic Curve (AUC), to binary classification. The core theorem shows that a binary classification regret of r on the induced binary problem implies an AUC regret of at most 2r. This is a large improvement over approaches such as ordering according to regressed scores, which have a regret transform of r ↦ nr, where n is the number of elements.
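At a high level, the reduction turns a learned pairwise "is x preferred to y?" classifier into a ranking by ordering items by their number of predicted pairwise wins. A minimal sketch of that ordering step, with hypothetical names and a toy classifier rather than the paper's code:

```python
# Sketch: rank items by the number of pairwise wins predicted by a binary
# preference classifier. `prefers` stands in for the learned classifier.

def rank_by_pairwise_wins(items, prefers):
    """prefers(x, y) -> True if the classifier predicts x above y.
    Returns items sorted so that items with more predicted wins come first."""
    wins = {x: 0 for x in items}
    for i, x in enumerate(items):
        for y in items[i + 1:]:
            if prefers(x, y):
                wins[x] += 1
            else:
                wins[y] += 1
    return sorted(items, key=lambda x: wins[x], reverse=True)

if __name__ == "__main__":
    # toy "classifier": prefer the numerically larger item
    print(rank_by_pairwise_wins([3, 1, 4, 5, 9, 2, 6], lambda a, b: a > b))
```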


international workshop and international workshop on approximation randomization and combinatorial optimization algorithms and techniques | 2006

Minimizing setup and beam-on times in radiation therapy

Nikhil Bansal; Don Coppersmith; Baruch Schieber

Radiation therapy is one of the commonly used cancer therapies. The radiation treatment poses a tuning problem: it needs to be effective enough to destroy the tumor, but it should maintain the functionality of the organs close to the tumor. Towards this goal the design of a radiation treatment has to be customized for each patient. Part of this design are intensity matrices that define the radiation dosage in a discretization of the beam head. To minimize the treatment time of a patient, the beam-on time and the setup time need to be minimized. For a given row of the intensity matrix, the minimum beam-on time is equivalent to the minimum number of binary vectors with the consecutive "1"s property that sum to this row, and the minimum setup time is equivalent to the minimum number of distinct vectors in a set of binary vectors with the consecutive "1"s property that sum to this row. We give a simple linear time algorithm to compute the minimum beam-on time. We prove that the minimum setup time problem is APX-hard and give approximation algorithms for it using a duality property. For the general case, we give a 24/13 approximation algorithm. For unimodal rows, we give a 9/7 approximation algorithm. We also consider other variants for which better approximation ratios exist.
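For a single intensity row, the minimum beam-on time has a well-known closed form: the sum of the positive increases along the row, since a new segment must open wherever the intensity rises. A minimal sketch of that linear-time computation, offered as an illustration of the quantity the abstract discusses rather than as the paper's algorithm:

```python
# Sketch: minimum beam-on time of one intensity-matrix row, computed as the
# sum of positive rises along the row (assumed to be what the linear-time
# algorithm in the abstract computes; see the paper for the proof).

def min_beam_on_time(row):
    """row: nonnegative integer intensities. Returns the sum of positive rises."""
    total, prev = 0, 0
    for a in row:
        if a > prev:
            total += a - prev   # new consecutive-ones segments must open here
        prev = a
    return total

if __name__ == "__main__":
    # e.g. [2, 3, 1, 4] -> 2 + 1 + 3 = 6
    print(min_beam_on_time([2, 3, 1, 4]))
```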


ACM Transactions on Algorithms | 2010

Ordering by weighted number of wins gives a good ranking for weighted tournaments

Don Coppersmith; Lisa Fleischer; Atri Rudra

We consider the following simple algorithm for the feedback arc set problem in weighted tournaments: order the vertices by their weighted indegrees. We show that this algorithm has an approximation guarantee of 5 if the weights satisfy probability constraints (for any pair of vertices u and v, w_uv + w_vu = 1). Special cases of the feedback arc set problem in such weighted tournaments include the feedback arc set problem in unweighted tournaments and rank aggregation. Finally, for any constant ε > 0, we exhibit an infinite family of (unweighted) tournaments for which the above algorithm (irrespective of how ties are broken) has an approximation ratio of 5 - ε.
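The algorithm analyzed here is simple in spirit: compute each vertex's weighted indegree and sort. A minimal sketch under the abstract's probability constraints, with hypothetical names and not the authors' code:

```python
# Sketch: rank the vertices of a weighted tournament by weighted indegree
# (fewest weighted losses first), the ordering shown in the paper to be a
# 5-approximation for feedback arc set when w[(u,v)] + w[(v,u)] = 1.

def rank_by_weighted_indegree(vertices, w):
    """w[(u, v)] is the weight of arc u -> v; returns vertices sorted by
    increasing weighted indegree."""
    indeg = {v: 0.0 for v in vertices}
    for u in vertices:
        for v in vertices:
            if u != v:
                indeg[v] += w[(u, v)]
    return sorted(vertices, key=lambda v: indeg[v])

if __name__ == "__main__":
    V = ["a", "b", "c"]
    w = {("a", "b"): 0.7, ("b", "a"): 0.3,
         ("a", "c"): 0.6, ("c", "a"): 0.4,
         ("b", "c"): 0.8, ("c", "b"): 0.2}
    print(rank_by_weighted_indegree(V, w))   # -> ['a', 'b', 'c']
```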


Mathematics of Computation | 2008

Divisors in residue classes, constructively

Don Coppersmith; Nick Howgrave-Graham; S. V. Nagaraj



Mathematics of Computation | 1990

Fermat’s last theorem (case 1) and the Wieferich criterion

Don Coppersmith



Algorithmica | 2011

Shape Rectangularization Problems in Intensity-Modulated Radiation Therapy

Nikhil Bansal; Danny Z. Chen; Don Coppersmith; Xiaobo Sharon Hu; Shuang Luan; Ewa Misiołek; Baruch Schieber; Chao Wang



Random Structures and Algorithms | 2008

Non-Abelian homomorphism testing, and distributions close to their self-convolutions

Michael Ben-Or; Don Coppersmith; Michael Luby; Ronitt Rubinfeld





Statistics & Probability Letters | 2008

Conditions for weak ergodicity of inhomogeneous Markov chains

Don Coppersmith; Chai Wah Wu


Collaboration


Dive into Don Coppersmith's collaborations.

Top Co-Authors

Nikhil Bansal
Eindhoven University of Technology

Atri Rudra
State University of New York System

David Gamarnik
Massachusetts Institute of Technology

Michael Luby
International Computer Science Institute

Shuang Luan
University of New Mexico