How inefficient can a sort algorithm be?
Miguel A. Lerma

June 5, 2014
Abstract
Here we find large lower bounds for a certain family of algorithms, and prove that such bounds are limited only by natural computability arguments.
Here we study algorithms intended to sort a list of n integers. It is well known that optimal sort algorithms such as mergesort have a run time Θ(n log n) (see [2]). Bublesort, with a worst-case run time of Θ(n²), is considered "inefficient". But, are there any sort algorithms that perform even worse?

This paper is inspired by a discussion found on the Internet about inefficient sort algorithms. A summary of that discussion can be found (at the time of this writing) at the following page:

http://home.tiac.net/~cri_d/cri/2001/badsort.html

That discussion contains details on how to design sort algorithms with larger than quadratic run time. The record holder for this kind of inefficient algorithm among the ones mentioned on that page is called EvilSort, with a run time Ω((n²)!). Here we show how to break that record and produce basically boundlessly inefficient sort algorithms.

1 A hierarchy of inefficient sort algorithms
Before we start stepping up the slope of inefficiency we want to make sure that we don't do it in a trivial way, such as inserting useless loops with the sole purpose of "wasting" time by adding artificial delays. The sort algorithms described here will always contain only steps directed to the final goal of obtaining a sorted list of elements.

The basic task of our algorithms will be to sort a list of integers L = [a_1, a_2, ..., a_n] in increasing order. The size of the input will be given by the number n of elements in the list, and time will be measured by the number of integer comparisons performed.

A particularly inefficient way to sort the given list of integers consists of generating a random permutation of it and checking whether that permutation contains the elements correctly sorted. That is the so-called bogosort algorithm (see e.g. [1]), performing asymptotically (e − 1) · n! integer comparisons and (n − 1) · n! swaps on average. This kind of algorithm however has several problems. First, it requires a random generator. Also, the best-case run time is very low, just n − 1 comparisons. A variation of bogosort that eliminates randomness consists of generating all n! permutations of the given list and then searching for the one that contains the elements correctly sorted. This keeps the average run time in Ω(n!) integer comparisons, but still produces a low n − 1 best-case run time. A way to avoid that is to sort all n! permutations in lexicographical order, and return the first one of them. For instance, if the given list is L = [2, 3, 1], the list of its permutations is

P = [[2, 3, 1], [2, 1, 3], [3, 2, 1], [3, 1, 2], [1, 2, 3], [1, 3, 2]],

and after sorting them lexicographically we get

P_sorted = [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]].

The first element of this list of integer lists is the sorted integer list [1, 2, 3]. The sorting of the list of permutations can be performed with bublesort, which runs in Θ(n²) time. The lexicographical order of integer lists is defined so that L1 <_lex L2 precisely when the first index k ∈ [1, ..., n] for which they differ verifies L1[k] < L2[k].
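As an illustration, bogosort and the single-level "sort all permutations and take the first" scheme just described can be sketched in a few lines of Python; the names is_sorted, bogosort and permusort are ours, and Python's built-in tuple ordering stands in for the lexicographic comparison and for bublesort:

```python
import random
from itertools import permutations

def is_sorted(lst):
    # n - 1 integer comparisons in the best case
    return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))

def bogosort(lst):
    # shuffle until sorted: asymptotically (e - 1) * n! comparisons on average
    while not is_sorted(lst):
        random.shuffle(lst)
    return lst

def permusort(lst):
    # deterministic variant: generate all n! permutations,
    # sort them lexicographically, and return the first one
    return list(sorted(permutations(lst))[0])
```

For L = [2, 3, 1], permusort builds the six permutations and, after sorting them lexicographically, [1, 2, 3] comes first, as in the example above.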
So, comparing two integer lists requires at least one integer comparison, and the total time (number of integer comparisons) required to sort the n! permutations of n elements in lexicographical order using bublesort will be Ω((n!)²). That is still less than the run time of the EvilSort algorithm mentioned above, but soon we will see how to do better (I mean, worse).

One obvious way consists of replacing bublesort with another instance of the algorithm just described, i.e., instead of using bublesort to sort the n! permutations of the original list of n integers, generate the (n!)! permutations of the list of n! permutations, and then sort lexicographically the list of permutations of permutations. The first element will be a list of permutations of integer lists, and the first element of it will be the original list of n integers sorted in increasing order. The number of integer comparisons performed will now be Ω(((n!)!)²).

This finally breaks the record held by EvilSort, but we want to go further, break our own record, and in fact any record ever set by anybody in the past or in the future. To do so we can repeat what we just did, i.e., replace the final application of bublesort with an instance of the latest version of the kind of algorithm described here, so that the run time will keep growing to Ω((((n!)!)!)²), Ω(((((n!)!)!)!)²), and so on. But how far can we go?

In the next section we will develop these ideas in a more precise way, and also look at what the limit of this strategy might be. In particular we will answer the following question: given any (rapidly) increasing computable function f : N → N, is there a sort algorithm with run time Ω(f(n))?

2 Implementation

As stated, our algorithm will take as its input a list L with n integer elements and return the same list with its elements sorted in increasing order.
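The multilevel strategy of iterating "generate all permutations, sort them lexicographically, take the first" can be sketched in Python before we turn to precise pseudocode; multilevel_sort and worst_sort are our names, Python's built-in list ordering plays the role of the lexicographic comparison, and anything beyond tiny inputs and small k is of course hopeless by design:

```python
from itertools import permutations

def multilevel_sort(lst, k):
    # k = 0: plain comparison sort (stands in for bublesort)
    if k == 0:
        return sorted(lst)
    # k > 0: build the list of all permutations, sort that list
    # recursively at level k - 1, and take its first element,
    # i.e. the lexicographically smallest permutation
    perms = [list(p) for p in permutations(lst)]
    return multilevel_sort(perms, k - 1)[0]

def worst_sort(lst, f):
    # run time grows at least as fast as f(n)
    return multilevel_sort(lst, f(len(lst)))
```

With k = 2 and a three-element list this already sorts a list of 720 permutations of permutations; with k = 3 it would have to sort 720! of them.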
In the intermediate steps we will be handling general lists whose elements can be of any type; in particular, the elements of a list can also be lists. The following is assumed about lists:

1. The number of elements of a list L is available and represented as length(L).
2. Elements are indexed with an index that runs from 1 to length(L).
3. It is possible to access/retrieve/modify the element at a particular index without affecting any other elements. In particular it is possible to swap two elements of a list.
4. It is possible to insert an element at a particular index. The indices of the elements at higher positions are increased by 1.
5. It is possible to remove an element at a particular index. The indices of the elements at higher positions are decreased by 1.
6. It is possible to append two lists. Here we write L1 + L2 for the result of appending lists L1 and L2, e.g. [a, b, c] + [d, e] = [a, b, c, d, e].

The usual assignment operator ':=' between lists makes the list on the left-hand side identical to the list on the right-hand side, i.e., L1 := L2 makes L1 into another name for list L2. If after the assignment list L2 is modified, list L1 is also modified, because they in fact represent the same list. We can make a copy of a list in such a way that the original list and its copy have the same elements but remain different lists, so that changes in the copy do not affect the original list. The following is an implementation of a list copy function (using Pascal-like pseudocode):

procedure copy(A,B)
  for i := 1 to length(A) do
    B[i] := A[i]
  od
end copy

The length and indexing of B are adjusted to fit those of A. Variables are supposed to be local to the procedure where they occur, and are created as needed if they do not exist. The types of variables will be 'integer', 'list of integers', 'list of lists of integers', and so on. The type of a variable is determined by context. Integer arguments are passed by value, and lists are passed by reference.

Since the algorithms to be precisely defined here will require not only integer comparisons, but also lexicographical comparisons of lists of integers, of lists of lists of integers, etc., we need a function lt that is able to perform that operation at any level. The following code fulfills this requirement:

procedure lt(A,B)                // is A less than B?
  if type(A) = integer then      // the arguments are integers
    return (A < B)
  else                           // the arguments are lists
    for k := 1 to length(A) do
      if lt(A[k],B[k]) then      // A[k] < B[k], hence A < B
        return (true)
      elif lt(B[k],A[k]) then    // A[k] > B[k], hence A > B
        return (false)
      else                       // otherwise A[k] = B[k], keep going
      fi
    od
    return (false)               // all elements are equal, hence A = B
  fi
end lt

The following is the version of bublesort that we will be using here. The algorithm modifies the original list L, and performs Θ(n²) 'lt' comparisons:

procedure bublesort(L)
  for i := 2 to length(L) do
    for j := 1 to length(L)-i+1 do
      if lt(L[j+1],L[j]) then
        swap(L[j],L[j+1])
      fi
    od
  od
end bublesort

The function permutations takes a list as its argument and returns a list of lists with all permutations of the elements of the original list. The following code is one among many possible ways of generating all the permutations of a list L:

procedure permutations(L)
  if length(L) <= 1 then       // in this case there is only one permutation
    copy(L,L0)                 // this is to preserve the original list
    return ([L0])              // return the only permutation
  else
    P := []                    // the list of permutations is initially empty
    for i := 1 to length(L) do
      copy(L,L1)               // make a copy of the original list
      remove(i,L1)             // remove the i-th element from the copy
      P0 := permutations(L1)   // generate its permutations
                               // put the removed element at the beginning
                               // of each permutation of L1 and add the
                               // result to the list of permutations
      for j := 1 to length(P0) do
        P := P + [[L[i]] + P0[j]]
      od
    od
    return (P)
  fi
end permutations

The following is the code for the multilevel version of the sort algorithm described in section 2:

procedure multilevelsort(L,k)
  if k = 0 then                // last level, just perform bublesort
    bublesort(L)
  else
    P := permutations(L)       // generate permutations
    multilevelsort(P,k-1)      // sort them lexicographically
    copy(P[1],L)               // copy the first element into L
  fi
end multilevelsort

For k = 0, multilevelsort just performs bublesort on the given list of elements, with run time Ω(n²). For k > 0, multilevelsort performs k recursive self-calls before using bublesort. Its run time is Ω(((···(n!)···!)!)²), with k nested factorials. Using the multifactorial notation n!^(k) for the result of taking the factorial of n k times (so n!^(1) = n!, n!^(2) = (n!)!, etc.), the lower bound for the run time of multilevelsort will be Ω((n!^(k))²).

We finally answer the question of how inefficient a sort algorithm can be. To do so we define the following sort algorithm, which takes a list of integers L and an increasing computable function f : N → N as its arguments:

procedure worstsort(L,f)
  multilevelsort(L,f(length(L)))
end worstsort

The run time for this algorithm is now Ω((n!^(f(n)))²) ≥ Ω(f(n)), showing that a sort algorithm can be made as inefficient as we wish, with its run time growing at least as fast as any given fixed computable function. Since worstsort is itself computable, the growth rate of its run time will still be asymptotically bounded above by rapidly growing uncomputable functions such as the busy beaver function (which is known to grow faster than any computable function; see [3]). But given any fixed rapidly growing computable function, we can make the run time of worstsort grow faster just by feeding that function as its second argument.

3 Conclusion

We have shown that there is no computable limit to the inefficiency of a sort algorithm, even when respecting the rule of not using useless loops and delays unrelated to the sorting task. The run time of such an algorithm can grow at least as fast as any given fixed computable function.

References

[1] H. Gruber, M. Holzer, and O. Ruepp. Sorting the slow way: an analysis of perversely awful randomized sorting algorithms.
Lecture Notes in Computer Science, 4475:183–197, 2007.

[2] Donald Knuth. Sorting and Searching, volume 3 of The Art of Computer Programming. Addison-Wesley, 2nd edition, 1998.

[3] Tibor Radó. On non-computable functions. Bell System Technical Journal, 41(3):877–884, 1962.