On algorithms to calculate integer complexity
Katherine Cordwell, Alyssa Epstein, Anand Hemmady, Steven J. Miller, Eyvindur A. Palsson, Aaditya Sharma, Stefan Steinerberger, Yen Nhi Truong Vu
Abstract. We consider a problem first proposed by Mahler and Popken in 1953 and later developed by Coppersmith, Erdős, Guy, Isbell, Selfridge, and others. Let f(n) be the complexity of n ∈ Z⁺, where f(n) is defined as the least number of 1's needed to represent n in conjunction with an arbitrary number of +'s, *'s, and parentheses. Several algorithms have been developed to calculate the complexity of all integers up to n. Currently, the fastest known algorithm runs in time O(n^{1.230}) and was given by J. Arias de Reyna and J. van de Lune in 2014. This algorithm makes use of a recursive definition given by Guy and iterates through products, f(d) + f(n/d) for d | n, and sums, f(a) + f(n − a) for a up to some function of n. The rate-limiting factor is iterating through the sums. We discuss potential improvements to this algorithm via a method that provides a strong uniform bound on the number of summands that must be calculated for almost all n. We also develop code to run J. Arias de Reyna and J. van de Lune's analysis in higher bases and thus reduce their runtime of O(n^{1.230}) to O(n^{1.222}). All of our code can be found online at https://github.com/kcordwel/Integer-Complexity.
1. INTRODUCTION

1.1. Introduction.
In this paper, log denotes ln, and log_b denotes the logarithm in base b. Given n ∈ N, the complexity of n, which we denote f(n), is defined as the least number of 1's needed to represent n using an arbitrary number of additions, multiplications, and parentheses. For example, because 6 may be represented as (1 + 1)(1 + 1 + 1), f(6) ≤ 5. Calculating f(n) for arbitrary n is a problem that was posed in 1953 by Mahler and Popken [MP]. Guy [G] drew attention to this problem in 1986 when he discussed it and several other simply stated problems in an Amer. Math. Monthly article. The following recursive expression for integer complexity highlights the interplay of additive and multiplicative structures:

f(n) = min_{d | n, 2 ≤ d ≤ √n; 1 ≤ a ≤ n/2} { f(d) + f(n/d), f(a) + f(n − a) }.   (1.1)

Some unconditional bounds on f(n) are known. In particular, [G] attributes a lower bound of f(n) ≥ 3 log_3(n) to Selfridge. Also, an upper bound of f(n) ≤ 3 log_2(n) is attributed to Coppersmith. Extensive numerical investigation (see [IBCOOP]) suggests that f(n) ∼ 3.3 log_3(n) for n large, but it is not even known whether f(n) ≥ (3 + ε) log_3(n) for some ε > 0. As a step towards understanding these problems, Altman and Zelinsky [AZ] introduced the discrepancy δ(n) = f(n) − 3 log_3(n) and provided a way to classify those numbers with a small discrepancy. This classification was taken further by Altman [A1, A2], where he obtained a finite set of polynomials that represent precisely the numbers with small defects. As a consequence, Altman [A3] was able to calculate the integer complexity of certain classes of numbers. Any progress on these difficult questions likely requires a substantial new idea; the main difficulty, the interplay between additive and multiplicative structures, is at the core of a variety of different open problems, which we believe adds to its allure.

1.2. Algorithms.
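For concreteness, the recursion (1.1) translates directly into a brute-force dynamic program. The sketch below is our own illustration (the function name is hypothetical): it is the O(n²) baseline in which the summand loop dominates, and it is exactly that loop which the algorithms discussed in this paper try to shorten.

```python
def complexities(N):
    """Table of f(1), ..., f(N) computed directly from recursion (1.1)."""
    f = [0] * (N + 1)
    if N >= 1:
        f[1] = 1
    for n in range(2, N + 1):
        best = n  # n = 1 + 1 + ... + 1 always works, so f(n) <= n
        # products: f(d) + f(n/d) over divisors 2 <= d <= sqrt(n)
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
            d += 1
        # sums: f(a) + f(n - a) for 1 <= a <= n/2 (the rate-limiting step)
        for a in range(1, n // 2 + 1):
            best = min(best, f[a] + f[n - a])
        f[n] = best
    return f
```

For example, `complexities(6)[6]` returns 5, matching f(6) ≤ 5 from (1 + 1)(1 + 1 + 1).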
Much of the progress on this problem has been algorithmic. Using the above recursive definition, it is possible to write algorithms to calculate f(n) for large values of n, where the rate-limiting step of the algorithm is iterating through the summands, f(a) + f(n − a), for many values of a. In particular, the brute-force algorithm that iterates over all a such that 1 ≤ a ≤ n/2 runs in time O(n²), but there are ways to bound the number of summands that must be checked so as to significantly decrease the computational complexity. Srinivas and Shankar [SS] used the unconditional upper and lower bounds on f(n) to bound the number of summands, obtaining an algorithm that runs in time O(n^{log_2(3)}) < O(n^{1.585}). The fastest known algorithm runs in time O(n^{1.230}) and is due to J. Arias de Reyna and J. van de Lune [AV]. Also, the experimental data in [IBCOOP] is based on an algorithm that calculates f(n) for n up to n = 10^{12}. They derive many interesting results from their data, but they do not analyze the runtime of their algorithm. We obtain both an overall improvement on the runtime of the J. Arias de Reyna and J. van de Lune algorithm and a potential internal improvement to the workings of the algorithm. The overall improvement is derived from running the analysis of [AV] in much higher bases, while the internal improvement gives a strong uniform bound on the number of summands f(a) + f(n − a) that must be calculated for almost all n. We detail the overall improvement in Section 2. We introduce the potential internal improvement in Section 3 and test it in Section 4. We end the paper by proposing a new approach for improving the current unconditional upper bound on f(n).

Date: December 19, 2018.
2010 Mathematics Subject Classification.
Key words and phrases. Integer complexity.

2. ALGORITHMIC ASPECTS
The de Reyna & van de Lune algorithm.
J. Arias de Reyna and J. van de Lune [AV] developed code in Python to perform the analysis of their algorithm, which they have generously shared with us. Additionally, Fuller has published open-source code [F], written in C, to calculate integer complexities. Using these, we have developed code in C that is comparable to J. Arias de Reyna and J. van de Lune's Python code. The heart of the code is the calc_count method, which calculates D(b, r) for varying values of b and r, where D(b, r) is an upper bound on how much multiplying by b and adding r increases the complexity of any given number. More precisely, we define D(b, r) to be the smallest integer such that

f(r + bn) ≤ f(n) + D(b, r)   (2.1)

for all n. As an example, notice that D(b, 0) ≤ f(b), because we can always represent b with f(b) ones and n with f(n) ones, and thus f(bn) ≤ f(n) + f(b). Similarly, D(1, r) ≤ f(r), because we can represent r with f(r) ones and n with f(n) ones, giving a representation of n + r that uses f(n) + f(r) ones. The values D(b, r) are useful for bounding f(n) in the following way: [AV] defined C_avg as the infimum of all C such that f(n) ≤ C log(n) for a set of natural numbers of density 1 and showed that

C_avg ≤ (1/(b log(b))) sum_{r=0}^{b−1} D(b, r).   (2.2)

In this calculation, we refer to b as the base in which we are working. Our code closely follows the logic of J. Arias de Reyna and J. van de Lune's program, making the following slight optimization.

Theorem 2.1.
Take b = 2^i 3^j with i + j > 0, where b lies in the range for which the formula f(2^v 3^w) = 2v + 3w of [IBCOOP] has been verified. If r | b for 1 ≤ r < b, then D(b, r) = f(b) + 1.

Proof. From the equality r + bn = r(1 + n · (b/r)), notice that

f(bn + r) ≤ f(r) + f(1 + n · (b/r)) ≤ f(r) + 1 + f(n) + f(b/r).   (2.3)

From [IBCOOP], we know that f(2^v 3^w) = 2v + 3w in this range for v + w > 0. We know that b = 2^i 3^j, and since r divides b, r is of the form 2^x 3^y for x ≤ i, y ≤ j. This means that b/r is of the form 2^{i−x} 3^{j−y}. Since r ≥ 2 we have x + y > 0, and since r < b we have (i − x) + (j − y) > 0. (For r = 1 one argues directly: f(bn + 1) ≤ f(bn) + 1 ≤ f(b) + f(n) + 1.) Then, applying the result of [IBCOOP] to both r and b/r, we obtain

f(bn + r) ≤ f(r) + 1 + f(n) + f(b/r)
         = 2x + 3y + 1 + f(n) + 2(i − x) + 3(j − y)
         = 1 + 2i + 3j + f(n)
         = 1 + f(b) + f(n).   (2.4)

See the "calculate_complexities.c" file at https://github.com/kcordwel/Integer-Complexity
This shows that D(b, r) ≤ f(b) + 1.

Now we wish to argue that D(b, r) ≥ f(b) + 1. While this makes sense intuitively, in order to be rigorous we give a computer-aided proof. Our computer calculations work as follows: First, we calculate the complexities of b + r for all b, r as in the theorem. For the vast majority of the pairs b, r we find f(b + r) = f(b) + 1, meaning that D(b, r) ≥ f(b) + 1 from the definition of D(b, r). However, there are 372 pairs b, r such that f(b + r) ≠ f(b) + 1. For these pairs, we do a second pass and calculate the complexities of 2b + r. For all of these pairs, f(2b + r) = f(b) + 3 = f(b) + 1 + f(2), meaning that D(b, r) ≥ f(b) + 1. □

J. Arias de Reyna and J. van de Lune [AV] suggest that their algorithms will be more powerful when implemented in C and Pascal. [AV] proved that their algorithm has running time O(N^α), where α is an explicit expression in b and the values D(b, 0), ..., D(b, b − 1) (see [AV]). They calculated the runtime of their algorithm for bases 2^n 3^m up to 3188646 and found the best value of α, α ≈ 1.230175, in base 2^{10} · 3^7 = 2239488. Using C is advantageous because it runs much faster than Python, and so we are able to calculate values for higher bases. In base 2^{13} · 3^8 = 53747712, we find a smaller exponent α, so that the runtime improves to O(n^{1.222}); see the "calculate_complexities.txt" file on GitHub for the exact numbers involved in this calculation.

Improved asymptotic results.
Probably Guy [G] was the first who remarked that while pointwise bounds seem difficult, it is possible to establish bounds that are true for almost all (in the sense of asymptotic density 1) numbers. His method showed that f(n) ≤ 3.816 log n for a subset of integers with density 1. Using their definition of C_avg as the infimum of all C where f(n) ≤ C log(n) for a set of natural numbers of density 1, [AV] showed that for any base b ≥ 2,

C_avg ≤ (1/(b log(b))) sum_{r=0}^{b−1} D(b, r).   (2.6)

In base b = 2^{10} · 3^7, they evaluate this sum to obtain a concrete constant C with f(n) ≤ C log(n) for a set of natural numbers of density 1. We find that evaluating (2.6) in the larger base 2^{13} · 3^8 yields a smaller constant, and hence a better density-1 bound on f(n).

See the code in the "Thm2.1" folder at https://github.com/kcordwel/Integer-Complexity. After submission of this paper, we ran the code even longer, for still larger bases of the form 2^n 3^m; see the "calculate_complexities.txt" file on GitHub for our data. In the best such base we obtain a further reduced runtime exponent and a further improved density-1 bound; see the "calculate_complexities.txt" file for the exact numbers involved in these calculations.
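To make the quantity D(b, r) concrete, one can estimate it from a brute-force complexity table: the maximum of f(bn + r) − f(n) over tabulated n is a lower estimate of D(b, r) (the true D(b, r) is defined by a bound over all n). The sketch below, with hypothetical helper names of our own, recovers D(2, 0) = 2 and D(2, 1) = 3:

```python
def complexity_table(N):
    """Brute-force table of f(1..N) via the recursion (1.1)."""
    f = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        best = n  # n ones always suffice
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
            d += 1
        for a in range(1, n // 2 + 1):
            best = min(best, f[a] + f[n - a])
        f[n] = best
    return f

def empirical_D(f, b):
    """Lower estimates of D(b, r): max of f(b*n + r) - f(n) over the table."""
    N = len(f) - 1
    return {r: max(f[b * n + r] - f[n] for n in range(1, (N - r) // b + 1))
            for r in range(b)}

f = complexity_table(2000)
print(empirical_D(f, 2))  # expect {0: 2, 1: 3}: D(2,0) = 2, D(2,1) = 3
```

Plugging these values into (2.6) with b = 2 reproduces the binary average-case constant: C_avg ≤ (2 + 3)/(2 log 2) ≈ 3.61 (with log = ln), i.e. f(n) ≤ 2.5 log_2(n) on a set of density 1.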
3. POSSIBLE IMPROVEMENTS VIA BALANCING DIGITS

3.1. Balancing Digits.
Our goal is to improve the algorithm for calculating complexity given in [AV]. The rate-limiting factor in this algorithm is checking, for all n ≤ N, the sums f(a) + f(n − a) for all 1 ≤ a ≤ kMax, for some kMax that is specially calculated for each n. We will show that we can give a strong uniform bound on the number of summands that must be checked for almost all n.

We say that n ∈ Z is digit-balanced in base b if each of the digits 0, ..., b − 1 occurs roughly 1/b of the time in the base b representation of n, and digit-unbalanced if some digits occur significantly more often than others. We will show that almost all numbers are digit-balanced, although the exact threshold of variation that we allow will depend on the base b. Finally, assuming that we have a set S of digit-balanced numbers in base b, we will use Guy's method to find that for any n ∈ S, f(n) ≤ c log_3(n) for some c. Then, using this bound on f(n) and assuming that f(n) = f(a) + f(n − a), we are able to bound a, which, in turn, narrows the search space that a reasonable algorithm has to cover.

3.2. Bounds on Digit-Balanced Numbers.
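The digit-balance condition itself is straightforward to state in code; this small sketch (helper names are our own) computes the digit frequencies of n ≥ 1 in base b and tests the deviation threshold:

```python
def digit_frequencies(n, b):
    """Frequencies of each digit 0..b-1 in the base-b expansion of n >= 1."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    k = len(digits)
    return [digits.count(i) / k for i in range(b)]

def is_digit_balanced(n, b, eps):
    """True if every digit's frequency is within eps of 1/b."""
    return all(abs(p - 1.0 / b) < eps for p in digit_frequencies(n, b))
```

For instance, n = 10 = 1010 in binary is perfectly balanced, while n = 7 = 111 is maximally unbalanced in base 2.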
Our main result is as follows.
Proposition 3.1.
There exists a constant c_b > 0, depending only on the base b, such that

#{ 1 ≤ n ≤ N : max_{0 ≤ i ≤ b−1} | (number of digits of n in base b that are i)/(number of digits of n in base b) − 1/b | ≥ ε } ≤ N^{1 − c_b ε²}.
The main idea behind the argument is to replace a combinatorial counting argument by probabilistic large deviation theory. Let N = b^k, and consider all k-digit numbers in base b. Fix a digit 0 ≤ d < b, and let X_i be a random variable such that X_i = 1 with probability 1/b and 0 otherwise, for 1 ≤ i ≤ k; each X_i models whether the digit d appears in the fixed position i of the base b representation of a number. Since we are considering k-digit numbers, we need to understand the average value of X_1 + ··· + X_k and to analyze how close this average is to 1/b. Let X̄ = (1/k)(X_1 + ··· + X_k). Next, we can use Hoeffding's inequality, which gives

P( X̄ − 1/b ≥ ε ) ≤ e^{−2kε²}.   (3.1)

We know that k ≈ log_b(N) = log(N)/log(b), so

e^{−2kε²} = e^{−2ε² log(N)/log(b)} = (e^{log(N)})^{−2ε²/log(b)} = N^{−2ε²/log(b)}.   (3.2)

So the probability that a number with k digits in its base b representation has some digit that appears more often than the average (by a margin of ε) is at most N^{−2ε²/log(b)}, meaning that the number of such n up to N is at most N · N^{−2ε²/log(b)} = N^{1 − 2ε²/log(b)}. □

3.3. Bound on Number of Summands.
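The large-deviation mechanism behind Proposition 3.1 is easy to check numerically. The sketch below (our own illustration, in base 2) compares the Monte Carlo frequency of k-digit strings whose share of 1's deviates from 1/2 by at least ε against the two-sided Hoeffding bound 2e^{−2kε²}:

```python
import math
import random

def unbalanced_fraction(k, eps, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a random k-bit
    string has a share of 1's deviating from 1/2 by at least eps."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        ones = sum(rng.getrandbits(1) for _ in range(k))
        if abs(ones / k - 0.5) >= eps:
            bad += 1
    return bad / trials

k, eps = 60, 0.1
empirical = unbalanced_fraction(k, eps)
bound = 2 * math.exp(-2 * k * eps * eps)  # two-sided Hoeffding bound
print(empirical, "<=", round(bound, 3))
```

As k grows with fixed ε, the empirical fraction decays exponentially, which is the source of the N^{1 − c_b ε²} count in the proposition.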
Assume now that f(n) = f(n − a) + f(a) and that this is the optimal representation using the least number of 1's. We assume that f(n) = c log_3(n) for some c > 0. Our goal is to derive a bound on a. The main idea is to show that the logarithmic growth implies that a cannot be very large (otherwise the growth of f(n) would be closer to linear). Using the lower bound due to Selfridge [G], we attain

c log_3(n) ≥ 3 (log_3(n − a) + log_3(a)).   (3.3)

This is equivalent to

log_3(n^{c/3}) ≥ log_3(n − a) + log_3(a).   (3.4)

Say that a = qn, where necessarily q ≤ 1/2. Then we have

log_3(n^{c/3}) ≥ log_3((1 − q) n · a).   (3.5)

Exponentiating both sides and simplifying gives

n^{c/3 − 1} / (1 − q) ≥ a.   (3.6)

Since q ≤ 1/2, we have 1 − q ≥ 1/2, and so

n^{c/3 − 1} / (1/2) ≥ n^{c/3 − 1} / (1 − q) ≥ a,   (3.7)

or

2 n^{c/3 − 1} ≥ a.   (3.8)

Thus, we need only check values of a up to 2 n^{c/3 − 1}.

3.4. Binary Analysis.
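In code, the conclusion of this derivation is a one-line cutoff; the function name below is ours, and the constant factor 2 is the one appearing in (3.8). For a number n known to satisfy f(n) ≤ c log_3(n), only summands a up to this cutoff need to be examined:

```python
def summand_cutoff(n, c):
    """Largest summand a that can occur in an optimal f(n) = f(a) + f(n - a)
    when f(n) <= c * log_3(n): a <= 2 * n**(c/3 - 1), via Selfridge's
    lower bound f(m) >= 3 * log_3(m)."""
    return 2 * n ** (c / 3.0 - 1.0)

# With c = 4.02581 (the constant for numbers with at most 54% ones in
# binary), the search space is roughly n^0.342 instead of roughly n/2:
for n in (10**4, 10**6, 10**8):
    print(n, int(summand_cutoff(n, 4.02581)))
```

The closer c is to the Selfridge constant 3, the smaller the exponent c/3 − 1 and the shorter the summand loop.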
To see how our result works, we analyze it in the simplest possible base, binary. Consider k-digit numbers less than N (so that k ≈ log_2(N)). The average case in Guy's method, illustrated in [G] and based on Horner's scheme of representing binary numbers, gives f(n) ≤ (5/2) log_2(n), or f(n) < 3.962407 log_3(n). "Bad" numbers in base 2 are those that have many 1's, as that is when the representation is rather inefficient. If we move away from the average case to numbers which have, say, 75% 1's and 25% 0's, Guy's method gives a constant of

(3 · 0.75 + 2 · 0.25) log_2(3) < 4.36.   (3.9)

This is already much worse than the original average case constant of 3.962407, and so we need to stay much closer to the average case. In particular, the following percentages of 1's and 0's give the following values for the constant in Guy's method:

Percent 0's   Percent 1's   Constant
46            54            4.02581
47            53            4.00997
48            52            3.99411
49            51            3.97826
49.9          50.1          3.96399
49.99         50.01         3.962565

Consider numbers with at least 46% 0's, i.e., at most 54% 1's. The previous section affords a bound of a ≤ n^{4.02581/3 − 1} ≤ n^{0.342} (ignoring the constant factor) for such numbers. We want to understand how often this case occurs. Recall that we are considering k-digit numbers. We need to bound the number of numbers in which 0 occurs at most 0.46k times, or equivalently in which 1 occurs at least 0.54k times. Say that X_i is the Bernoulli variable corresponding to digit i, 1 ≤ i ≤ k; then P(X_i = 1) = 1/2. Let S_k = X_1 + ··· + X_k, so that S_k represents the total number of 1's in our number. Since 1/2 < 0.54 < 1, we may apply Theorem 1 from [AG] to achieve the following bound:

P( S_k ≥ 0.54 k ) ≤ e^{−k D(0.54 || 0.5)}   (3.10)

where

D(0.54 || 0.5) = (54/100) log((54/100)/(1/2)) + (1 − 54/100) log((1 − 54/100)/(1/2)).   (3.11)

Because k ≈ log_2(N), we get that

P( S_k ≥ 0.54 k ) ≤ N^{−D(0.54 || 0.5)/log(2)}.   (3.12)

In particular, then, there are at most N^{1 − D(0.54 || 0.5)/log(2)} < N^{0.9954} "bad" numbers, i.e. we have the desired bound a ≤ n^{0.342} for the other > N − N^{0.9954} numbers, which is significant as N grows large. Call this set of numbers for which we have this bound U.

Following the analysis in [AV], Arias de Reyna and van de Lune's algorithm has a runtime of n^α in base 2, where α is computed from D(2, 0) = 2 and D(2, 1) = 3. Recall that in their complexity proof, Arias de Reyna and van de Lune denote by kMax the number of summands that must be checked for each n. Our bound on the numbers in U compares well to [AV]'s bound in that if kMax were uniform for all numbers in [AV], our bound would be lower on all u ∈ U: a uniform kMax in binary must be built from the unconditional bounds, requiring summands up to ≈ n^{0.585} to be checked, whereas we require checking summands only up to ≈ n^{0.342} for numbers in U.

Unfortunately, kMax is not uniform in this way, and so we cannot claim a definitive improvement with our uniform bound on a. It is possible that some of the u ∈ U have a low value of kMax to begin with, and for such numbers our bound may not afford an improvement. Conversely, it is possible that our bound will improve some numbers that are not in U. Overall, since kMax is not uniform, it is not easy to theoretically compare our bound to [AV]. Given this, and given that the ideal bases are much larger than binary (which significantly complicates theoretical analysis), we performed a number of empirical tests to understand how our method compares to [AV] in the general case.

4. EMPIRICAL CALCULATIONS
To see whether our method improves J. Arias de Reyna and J. van de Lune's algorithm in practice, we modified their code by adding various precomputations and calculating how many numbers would be improved with these precomputations.

The first precomputation uses a greedy algorithm due to Steinerberger [St], which gives f(n) ≤ 3.66 log(n) for most n. The recursive algorithm works as follows: if n ≡ 0 or n ≡ 3 (mod 6), take n = 3(n/3) and run the algorithm on n/3. If n ≡ 2 or n ≡ 4 (mod 6), take n = 2(n/2) and run the algorithm on n/2. If n ≡ 1 (mod 6), take n = 1 + 3((n − 1)/3) and run the algorithm on (n − 1)/3. If n ≡ 5 (mod 6), take n = 1 + 2((n − 1)/2) and run the algorithm on (n − 1)/2.

The method is as follows: First, run the greedy algorithm on all of the numbers up to some limit and store the results in a dictionary. Then, use these values to compute a bound on the number of summands for each number (using the formula derived in Section 3.3). Store a counter that is initialized to 0. Next, run J. Arias de Reyna and J. van de Lune's algorithm. For each number, test whether the precomputed summand bound is better than the summand bound in the original algorithm. If an improvement is found, increment the counter. When we use this algorithm to precompute summands, we improve 7153 numbers out of the first 200000, or less than 3.6% of numbers. If we compute complexities further, up to 2000000, we improve 60864 numbers, or less than 3.05% of numbers.

We can also combine Steinerberger's algorithm with a stronger algorithm due to Shriver [Sh], who developed a greedy algorithm in base 2310. If we use the best upper bound on complexities from Shriver's and Steinerberger's greedy algorithms, we improve 11188 numbers out of 200000, or about 5.6% of numbers. If we compute complexities up to 2000000, we improve 107077 numbers, or less than 5.36% of numbers.

Shriver conjectures that his best algorithm, which uses simulated annealing, produces a bound of f(n) ≤ 3.529 log(n) for generic integers. In fact, only 824 numbers up to 2000000 would be improved by assuming a uniform bound of f(n) ≤ 3.529 log(n). Of course, this is a purely theoretical result: if we were to actually introduce a uniform bound, then we would not be able to accurately calculate complexities. Assuming an even slightly more optimistic uniform constant, we would only potentially improve 4978 numbers out of the first 2000000; a somewhat smaller constant still would improve 124707 numbers of 2000000, which is about 6.23%. Only if we venture significantly below Shriver's conjecture of 3.529 log(n) do we start to see a significant difference: there we would improve 726756 numbers of 2000000, or about 36%.

Overall, it seems that Arias de Reyna and van de Lune's algorithm already has a strong bound on the number of summands that are computed. It is possible that we are encountering difficulties because kMax is not uniform, or it is possible that the complexity of J. Arias de Reyna and J. van de Lune's algorithm is significantly lower than O(n^{1.230}). Thus, while summand precomputing improves the complexity computation for some numbers, given the overhead for performing precomputations and the current speed of J. Arias de Reyna and J. van de Lune's algorithm, introducing a precomputation does not seem to yield an overall improvement to the algorithm.

See the "ExperimentalResults" folder at https://github.com/kcordwel/Integer-Complexity
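The greedy scheme of [St] described above is short enough to give in full. The following is our own rendering (the function name is hypothetical); it returns the number of 1's the greedy representation uses, hence an upper bound on f(n):

```python
def greedy_upper_bound(n):
    """Upper bound on f(n) from the mod-6 greedy algorithm of [St]."""
    cost = 0
    while n > 5:
        r = n % 6
        if r in (0, 3):          # n divisible by 3: n = (1+1+1) * (n/3)
            cost += 3
            n //= 3
        elif r in (2, 4):        # n divisible by 2: n = (1+1) * (n/2)
            cost += 2
            n //= 2
        elif r == 1:             # n = 1 + (1+1+1) * ((n-1)/3)
            cost += 4
            n = (n - 1) // 3
        else:                    # r == 5: n = 1 + (1+1) * ((n-1)/2)
            cost += 3
            n = (n - 1) // 2
    return cost + n              # f(m) = m for m <= 5
```

For example, `greedy_upper_bound(6)` returns 5, which here equals f(6); in general the greedy value only bounds f(n) from above.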
5. PROGRESS TOWARDS AN UNCONDITIONAL UPPER BOUND
The current unconditional upper bound on complexity, f(n) ≤ 3 log_2(n), is derived from applying Guy's method in base 2 to n. In particular, the most complex numbers for this method have binary expansions of the form 11···1, so that at each step Guy's method requires three 1's; the resulting representation is of the form 1 + (1 + 1)(1 + (1 + 1)(1 + ···)).

Say that n mod 3 ≡ k. Instead of applying Guy's method to n, what if we write n = k + (1 + 1 + 1)(n − k)/3 and then apply Guy's method to (n − k)/3? Then in the case where n = 11···1 in binary, (n − k)/3 has an alternating binary expansion of the form 1010···1 or 0101···1, and applying Guy's method to (n − k)/3 gives roughly f((n − k)/3) ≤ (5/2) log_2(n). Using this, we find that f(n) is bounded by roughly (5/2) log_2(n), which is a significant improvement over f(n) ≤ 3 log_2(n).

This suggests the following method: If the binary representation of n contains more than a certain percentage of 1's, then write n as k + (1 + 1 + 1)(n − k)/3 and apply Guy's method instead to (n − k)/3. Empirically, in most cases, when the binary expansion of n contains a high percentage of 1's, (n − k)/3 has a significantly lower percentage of 1's. However, there are some examples where this fails. For example, for certain n of the form 2^a − 2^b − 1, both the binary expansion of n and the binary expansion of (n − k)/3 have a high percentage of 1's. Notably, if we repeat this division process and consider ((n − k)/3 − k')/3 (with k' the appropriate residue), then we obtain a number with a nice binary expansion; accordingly, we say that such an n requires two iterations of division by 3.

Some numbers require numerous iterations of division by 3 before their binary expansions are nice; there are examples of the form 2^a − 2^b − 2^c − 1 requiring nine iterations. These sorts of counterexamples seem to follow some interesting patterns. Let n_i denote the number obtained after i iterations of division by 3, so that n_0 = n, n_1 = (n − (n mod 3))/3, etc.
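These iterations are easy to experiment with; the sketch below (our own names, for illustration only, with a hypothetical "niceness" cutoff on the share of 1 bits) divides out the residue mod 3 repeatedly and counts the iterations:

```python
def ones_share(n):
    """Fraction of 1's in the binary expansion of n >= 1."""
    s = bin(n)[2:]
    return s.count("1") / len(s)

def iterations_until_nice(n, threshold=0.6, limit=50):
    """Number of iterations n -> (n - n % 3) // 3 until the share of
    1 bits drops below `threshold` (a hypothetical cutoff)."""
    i = 0
    while ones_share(n) >= threshold and i < limit:
        n = (n - n % 3) // 3
        i += 1
    return i
```

For example, n = 2^{20} − 1 is all 1's in binary, but a single division by 3 yields (2^{20} − 1)/3 = 349525, whose binary expansion 1010101010101010101 is already balanced.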
In general, it seems that the number of iterations necessary to produce a "nice" binary expansion is tied to the number of iterations for which n_i keeps the same residue mod 3. For the nine-iteration example just mentioned, n_0, n_1, ..., n_8 all share the same residue mod 3, while n_9 does not, and n_9 has the first "nice" binary expansion.

It should be noted that there is no reason to employ only division by 3. There are examples n for which (n − (n mod 5))/5 has a nice binary expansion while the corresponding quotients obtained by dividing by 3 still contain a large percentage of 1's.

In general, then, performing this process of division by appropriate numbers before applying Guy's method is a promising strategy for obtaining an improvement on the unconditional upper bound on f(n). We believe that it could be an interesting problem to make these vague heuristics precise and to understand whether this could give rise to a new effective method of giving explicit constructions of n with sums and products that use few 1's.

6. ACKNOWLEDGMENTS
We would like to thank Professor Arias de Reyna for generously sharing the code that he developed with Professor van de Lune. Thank you to the SMALL REU program, Williams College, and the Williams College Science Center, where the bulk of this work took place. We would like to thank Professor Amanda Folsom for funding from NSF Grant DMS1449679, as well as the SMALL REU for funding from NSF Grant DMS1347804, the Williams College Finnerty Fund, and the Clare Boothe Luce Program. The fourth listed author was supported by NSF grants DMS1265673 and DMS1561945; the fifth listed author was supported by a Simons Foundation Grant.

REFERENCES

[A1] H. Altman, Integer complexity and well-ordering, Michigan Math. Journal (2015), no. 3, 509-538.
[A2] H. Altman, Integer complexity: Representing numbers of bounded defect, Theoretical Computer Science (2016), 64-85.
[A3] H. Altman, Integer complexity: Algorithms and computational results, 2016, arXiv:1606.03635.
[AZ] H. Altman and J. Zelinsky, Numbers with integer complexity close to the lower bound, Integers (2012), no. 6, 1093-1125.
[AV] J. Arias de Reyna and J. van de Lune, Algorithms for determining integer complexity, arXiv:1404.2183 [math.NT].
[AG] R. Arratia and L. Gordon, Tutorial on large deviations for the binomial distribution, Bulletin of Mathematical Biology (1989), no. 1, 125-131.
[F] M. N. Fuller, C-Program to Compute A005245, February 2008, http://oeis.org/A005245/a005245.c.txt.
[G] R. K. Guy, Unsolved problems: Some suspiciously simple sequences, Amer. Math. Monthly (1986), no. 3, 186-190.
[IBCOOP] J. Iraids, K. Balodis, J. Čerņenoks, M. Opmanis, R. Opmanis, and K. Podnieks, Integer complexity: Experimental and analytical results, Scientific Papers University of Latvia, Computer Science and Information Technologies (2012), 153-179.
[MP] K. Mahler and J. Popken, On a maximum problem in arithmetic (Dutch), Nieuw Arch. Wiskd. (1953), no. 1, 1-15.
[Sh] C. Shriver, Applications of Markov chain analysis to integer complexity, 2016, arXiv:1511.07842 [math.NT].
[St] S. Steinerberger, A short note on integer complexity, Contributions to Discrete Mathematics (2014), no. 1.
[SS] V. V. Srinivas and B. R. Shankar, Integer complexity: Breaking the Θ(n²) barrier, World Academy of Science, Engineering and Technology (2008), no. 5, 454-455.

E-mail address: [email protected]
Department of Mathematics, University of Maryland, College Park, MD 20742

E-mail address: [email protected]
Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267

E-mail address: [email protected]
Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267

E-mail address: [email protected], [email protected]
Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267

E-mail address: [email protected]
Department of Mathematics, Virginia Polytechnic Institute and State University, VA 24061

E-mail address: [email protected]
Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267

E-mail address: [email protected]
Department of Mathematics, Yale University, CT 06510

E-mail address: [email protected]
Department of Mathematics, Amherst College, Amherst, MA