Publication


Featured research published by Carl Pomerance.


Journal of Number Theory | 1983

On a problem of Oppenheim concerning “factorisatio numerorum”

E. R. Canfield; Paul Erdős; Carl Pomerance

Let f(n) denote the number of factorizations of the natural number n into factors larger than 1, where the order of the factors does not count. We say n is “highly factorable” if f(m) < f(n) for all m < n. We prove that f(n) = n·L(n)^(−1+o(1)) for n highly factorable, where L(n) = exp(log n · logloglog n / loglog n). This result corrects the 1926 paper of Oppenheim, where it is asserted that f(n) = n·L(n)^(−2+o(1)). Some results on the multiplicative structure of highly factorable numbers are proved, and a table of them up to 10^9 is provided. Of independent interest, a new lower bound is established for the function Ψ(x, y), the number of n ≤ x free of prime factors exceeding y.
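For concreteness, here is a small brute-force sketch of f(n) and of the highly factorable numbers it defines (my own illustration in Python, not the paper's method); the recursion chooses the largest factor first so that the order of factors does not count.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, largest=None):
    """Number of factorizations of n into factors > 1 (each <= largest), order ignored."""
    if largest is None:
        largest = n
    if n == 1:
        return 1                      # the empty factorization
    count = 0
    for d in range(2, min(largest, n) + 1):
        if n % d == 0:
            count += f(n // d, d)     # d is the largest factor; the rest are <= d
    return count

# Highly factorable numbers up to 500: n with f(m) < f(n) for all m < n.
best = 0
for n in range(2, 501):
    if f(n) > best:
        best = f(n)
        print(n, best)
```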


Theory and Application of Cryptographic Techniques | 1985

The quadratic sieve factoring algorithm

Carl Pomerance

The quadratic sieve algorithm is currently the method of choice for factoring very large composite numbers with no small factors. In the hands of the Sandia National Laboratories team of James Davis and Diane Holdridge, it has held the record for the largest hard number factored since mid-1983. As of this writing, the largest number it has cracked is the 71-digit number (10^71 − 1)/9, taking 9.5 hours on the Cray X-MP computer at Los Alamos, New Mexico. In this paper I shall give some of the history of the algorithm and also describe some of the improvements that have been suggested for it.
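To make the underlying idea concrete, the following toy sketch (my own illustration with an assumed ten-prime factor base and brute-force relation combining, not the optimized sieve the paper describes) collects x with x^2 − n smooth, combines relations into a congruence of squares, and extracts a factor with a gcd.

```python
import math
from itertools import combinations

def smooth_exponents(m, base):
    """Exponent vector of m over `base`, or None if m is not base-smooth."""
    exps = [0] * len(base)
    for i, p in enumerate(base):
        while m % p == 0:
            m //= p
            exps[i] += 1
    return exps if m == 1 else None

def congruence_of_squares(n, base=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)):
    """Toy factoring of a small odd composite n via relations x^2 - n, x near sqrt(n)."""
    start = math.isqrt(n) + 1
    relations = []                              # pairs (x, exponent vector of x^2 - n)
    for x in range(start, start + 10000):
        exps = smooth_exponents(x * x - n, base)
        if exps is not None:
            relations.append((x, exps))
        if len(relations) > len(base):          # enough relations to force a dependency
            break
    # Brute-force search for a subset whose exponent vectors sum to an all-even vector.
    for r in range(1, len(relations) + 1):
        for combo in combinations(relations, r):
            sums = [sum(col) for col in zip(*(e for _, e in combo))]
            if any(s % 2 for s in sums):
                continue
            x_prod, y = 1, 1
            for x, _ in combo:
                x_prod = x_prod * x % n
            for p, s in zip(base, sums):
                y = y * pow(p, s // 2, n) % n   # square root of the smooth product, mod n
            g = math.gcd(x_prod - y, n)
            if 1 < g < n:
                return g
    return None

print(congruence_of_squares(8051))              # 8051 = 83 * 97
```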


Mathematics of Computation | 1997

A search for Wieferich and Wilson primes

Richard E. Crandall; Karl Dilcher; Carl Pomerance

An odd prime p is called a Wieferich prime if 2^(p−1) ≡ 1 (mod p^2), and a Wilson prime if (p − 1)! ≡ −1 (mod p^2). To date, the only known Wieferich primes are p = 1093 and 3511, while the only known Wilson primes are p = 5, 13, and 563. We report that there exist no new Wieferich primes p < 4 × 10^12 and no new Wilson primes p < 5 × 10^8. It is elementary that both defining congruences above hold merely (mod p), and it is sometimes estimated on heuristic grounds that the probability that p is Wieferich (independently: that p is Wilson) is about 1/p. We provide some statistical data relevant to occurrences of small values of the pertinent Fermat and Wilson quotients (mod p).
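Both defining congruences are easy to test directly. A small brute-force sketch (an illustration of the definitions only, not the paper's large-scale search) recovers the known examples below 4000:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, flag in enumerate(sieve) if flag]

def is_wieferich(p):
    """Wieferich: 2^(p-1) == 1 (mod p^2)."""
    return pow(2, p - 1, p * p) == 1

def is_wilson(p):
    """Wilson: (p-1)! == -1 (mod p^2), computed by a direct product."""
    m = p * p
    fact = 1
    for k in range(2, p):
        fact = fact * k % m
    return fact == m - 1

for p in primes_up_to(4000):
    kinds = []
    if is_wieferich(p):
        kinds.append("Wieferich")
    if is_wilson(p):
        kinds.append("Wilson")
    if kinds:
        print(p, ", ".join(kinds))   # expect 5, 13, 563 (Wilson); 1093, 3511 (Wieferich)
```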


Journal of the American Mathematical Society | 1992

A rigorous time bound for factoring integers

H.W. Lenstra; Carl Pomerance

In this paper a probabilistic algorithm is exhibited that factors any positive integer n into prime factors in expected time at most L_n[1/2, 1 + o(1)] as n → ∞, where L_x[a, b] = exp(b (log x)^a (loglog x)^(1−a)). Many practical factoring algorithms, including the quadratic sieve and the elliptic curve method, are conjectured to have an expected running time that satisfies the same bound, but this is the first algorithm for which the bound can be rigorously proved. Nevertheless, this does not close the gap between rigorously established time bounds and merely conjectural ones for factoring algorithms. This is due to the advent of a new factoring algorithm, the number field sieve, which is conjectured to factor any positive integer n in time L_n[1/3, O(1)]. The algorithm analyzed in this paper is a variant of the class group relations method, which makes use of class groups of binary quadratic forms of negative discriminant. This algorithm was first suggested by Seysen, and later improved by A. K. Lenstra, who showed that the algorithm runs in expected time at most L_n[1/2, 1 + o(1)] if one assumes the generalized Riemann hypothesis. The main device for removing the use of the generalized Riemann hypothesis from the proof is the use of multipliers. In addition, a character sum estimate for algebraic number fields is used, with an explicit dependence on possible exceptional zeros of the corresponding L-functions. Another factoring algorithm using class groups that has been proposed is the random class groups method. It is shown that there is a fairly large set of numbers that this algorithm cannot be expected to factor as efficiently as had previously been thought.
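For readers unfamiliar with the L-notation, here is a tiny numerical sketch of L_x[a, b] (my own illustration; the number field sieve constant 1.92 ≈ (64/9)^(1/3) is quoted from the standard heuristic, not from this paper):

```python
import math

def L(x, a, b):
    """Subexponential scale L_x[a, b] = exp(b * (log x)^a * (loglog x)^(1 - a))."""
    lx = math.log(x)
    return math.exp(b * lx ** a * math.log(lx) ** (1 - a))

x = 10 ** 100
print(f"L_x[1/2, 1]    ~ {L(x, 0.5, 1.0):.2e}")   # shape of the rigorous bound above
print(f"L_x[1/3, 1.92] ~ {L(x, 1/3, 1.92):.2e}")  # conjectured NFS shape (constant assumed)
print(f"x^(1/4)        ~ {x ** 0.25:.2e}")        # a fully exponential bound, for contrast
```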


Discrete Algorithms and Complexity: Proceedings of the Japan–US Joint Seminar, June 4–6, 1986, Kyoto, Japan | 1987

Fast, Rigorous Factorization and Discrete Logarithm Algorithms

Carl Pomerance

Publisher Summary: This chapter discusses two similar random algorithms, one for factoring and one for computing discrete logarithms in GF(q), where q is prime or a power of 2. The factoring algorithm has expected worst-case running time L(N)^(√2 + o(1)); the discrete logarithm algorithm has expected worst-case running time L(q)^(√2 + o(1)) for the preprocessing stage and expected worst-case running time L(q)^(√(1/2) + o(1)) for the actual discrete logarithm calculation. Both methods are quite similar to previously considered algorithms. In particular, the factoring algorithm is a variant of Dixon's random squares method, which is based on ideas of Morrison, Brillhart, and earlier writers. The random squares method is augmented with Lenstra's elliptic curve factoring method and Wiedemann's coordinate recurrence method for solving a sparse system of linear equations over a finite field. The discrete logarithm algorithm is based on the index calculus method of Western and Miller. Again, the new ingredients are the elliptic curve method and the coordinate recurrence method. It is perhaps paradoxical that the elliptic curve factoring method can be used as a subroutine in a rigorously analyzed algorithm even though it has not itself been completely rigorously analyzed. The point is that the algorithms described in the chapter are random, so a subroutine need not work on all inputs. The chapter shows that a somewhat weakened form of the elliptic curve method works fairly rapidly for most inputs. The argument uses a new result of Friedlander and Lagarias in analytic number theory.


Mathematics of Computation | 2001

Period of the power generator and small values of Carmichael's function

John B. Friedlander; Carl Pomerance; Igor E. Shparlinski

Consider the pseudorandom number generator u_n ≡ u_{n−1}^e (mod m), 0 ≤ u_n ≤ m − 1, n = 1, 2, ..., where we are given the modulus m, the initial value u_0, and the exponent e. One case of particular interest is when the modulus m is of the form pl, where p, l are different primes of the same magnitude. It is known from work of the first and third authors that for moduli m = pl, if the period of the sequence (u_n) exceeds m^(3/4+ε), then the sequence is uniformly distributed. We show rigorously that for almost all choices of p, l it is the case that for almost all choices of u_0, e, the period of the power generator exceeds (pl)^(1−ε). And so, in this case, the power generator is uniformly distributed. We also give some other cryptographic applications, namely, to ruling out the cycling attack on the RSA cryptosystem and to so-called time-release crypto. The principal tool is an estimate related to the Carmichael function λ(m), the size of the largest cyclic subgroup of the multiplicative group of residues modulo m. In particular, we show that for any Δ ≥ (loglog N)^3, we have λ(m) ≥ N exp(−Δ) for all integers m with 1 ≤ m ≤ N, apart from at most N exp(−0.69(Δ log Δ)^(1/3)) exceptions.
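As a concrete illustration (a toy sketch with small assumed parameters, not the paper's analysis), the following computes the Carmichael function λ(m) from the factorization of m and measures the period of the power generator by brute-force cycle detection:

```python
from functools import reduce
from math import gcd

def factorize(n):
    """Trial-division factorization: dict prime -> exponent (fine for small n)."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def carmichael(m):
    """lambda(m): exponent of the multiplicative group of residues modulo m."""
    def lam_pk(p, k):
        if p == 2 and k >= 3:
            return 2 ** (k - 2)
        return (p - 1) * p ** (k - 1)
    return reduce(lambda a, b: a * b // gcd(a, b),
                  (lam_pk(p, k) for p, k in factorize(m).items()), 1)

def power_generator_period(u0, e, m):
    """Length of the eventual cycle of u -> u^e (mod m), by brute force."""
    seen, u = {}, u0 % m
    for i in range(m + 1):           # a repeat must occur within m + 1 steps
        if u in seen:
            return i - seen[u]
        seen[u] = i
        u = pow(u, e, m)

m = 101 * 103                        # m = pl with p, l primes of the same magnitude
print(carmichael(m))                 # lcm(100, 102) = 5100
print(power_generator_period(7, 3, m))
```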


SIAM Journal on Computing | 1988

A pipeline architecture for factoring large integers with the quadratic sieve algorithm

Carl Pomerance; Jeffrey W. Smith; Randy Tuler

We describe the quadratic sieve factoring algorithm and a pipeline architecture on which it could be efficiently implemented. Such a device would be of moderate cost to build and would be able to factor 100-digit numbers in less than a month. This represents an order-of-magnitude speed-up over current implementations on supercomputers. Using a distributed network of many such devices, it is predicted that much larger numbers could be practically factored.


Experimental Mathematics | 1992

Reduction of Huge, Sparse Matrices over Finite Fields Via Created Catastrophes

Carl Pomerance; Jeffrey W. Smith

We present a heuristic method for the reduction modulo 2 of a large, sparse bit matrix to a smaller, dense bit matrix that can then be solved by conventional means, such as Gaussian elimination. This method worked effectively for us in reducing matrices as large as 100,000 × 100,000 to much smaller, but denser square matrices.
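As an illustration of the “conventional means” that the dense stage relies on, here is a minimal sketch of Gaussian elimination over GF(2) with rows packed into Python integers (my own sketch of the generic dense solver, not the paper's created-catastrophes reduction heuristic):

```python
def gf2_dependency(rows):
    """Find a set of rows of a GF(2) matrix that XOR to zero; rows are int bitmasks.

    Returns an int whose set bits index the dependent original rows, or None.
    """
    rows = list(rows)
    marker = [1 << i for i in range(len(rows))]   # tracks which original rows were combined
    pivot_of_col = {}
    for i in range(len(rows)):
        for col, j in pivot_of_col.items():       # reduce row i by the existing pivots
            if rows[i] >> col & 1:
                rows[i] ^= rows[j]
                marker[i] ^= marker[j]
        if rows[i] == 0:
            return marker[i]                      # these original rows XOR to the zero row
        pivot_of_col[rows[i].bit_length() - 1] = i
    return None

# Rows of a 4x3 bit matrix, least-significant bit = column 0.
print(bin(gf2_dependency([0b011, 0b110, 0b101, 0b010])))   # rows 0, 1, 2 are dependent
```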


The Mathematical Intelligencer | 1981

Recent developments in primality testing

Carl Pomerance

Conclusion: Is there an algorithm that can decide whether n is prime or composite and that runs in polynomial time? The Adleman-Rumely algorithm and the Lenstra variations come so close that it would seem that almost any improvement would give the final breakthrough. Is factoring inherently more difficult than distinguishing between primes and composites? Most people feel that this is so, but perhaps this problem too will soon yield. In his “Disquisitiones Arithmeticae” Gauss [9], p. 396, wrote: “The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that even for numbers that do not exceed the limits of tables constructed by estimable men, i.e., for numbers that do not yield to artificial methods, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers.” The struggle continues!


Transactions of the American Mathematical Society | 1990

Unusually large gaps between consecutive primes

Helmut Maier; Carl Pomerance

Let G(x) denote the largest gap between consecutive primes below x. In a series of papers from 1935 to 1963, Erdős, Rankin, and Schönhage showed that G(x) > (c + o(1)) log x · loglog x · loglogloglog x · (logloglog x)^(−2), where c = e^γ and γ is Euler's constant. Here, this result is shown with c = c_0 e^γ, where c_0 = 1.31256... is the solution of the equation 4/c_0 − e^(−4/c_0) = 3. The principal new tool used is a result of independent interest, namely, a mean value theorem for generalized twin primes lying in a residue class with a large modulus.
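The constant c_0 is easy to check numerically; a short bisection sketch (my own, not from the paper) solving 4/c − e^(−4/c) = 3:

```python
import math

def f(c):
    """Defining equation for c_0, written as f(c) = 0; f is decreasing in c."""
    return 4 / c - math.exp(-4 / c) - 3

lo, hi = 1.0, 2.0            # f(1) > 0 > f(2), so the root is bracketed
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)         # approximately 1.31256
```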

Collaboration


Dive into Carl Pomerance's collaborations.

Top Co-Authors

Florian Luca
University of the Witwatersrand

Bruce Landman
University of North Carolina at Greensboro

Igor E. Shparlinski
University of New South Wales

András Sárközy
Eötvös Loránd University