Constructing Segmented Differentiable Quadratics to Determine Algorithmic Run Times and Model Non-Polynomial Functions
Ananth Goyal∗
Dougherty Valley High School
Abstract
We propose an approach to determine the continual progression of algorithmic efficiency, as an alternative to standard calculations of time complexity, intended primarily, but not exclusively, for data structures with unknown maximum indexes and for algorithms whose run time depends on multiple variables beyond input size alone. The proposed method can effectively determine the run time behavior $F$ at any given index $x$, as well as $\frac{\partial F}{\partial x}$, as a function of one or multiple arguments, by combining $n$ quadratic segments based upon the principles of Lagrangian polynomials and their respective secant lines. Although the approach is designed for analyzing the efficacy of computational algorithms, it can also be used within pure mathematics as a novel way to construct non-polynomial functions, such as $\log n$, as a series of segmented differentiable quadratics that model functional behavior and recurring natural patterns. After testing, our method had an average accuracy above 99% with regard to functional resemblance.

Keywords: Time Complexity, Algorithmic Run Time, Polynomials, Lagrangian Interpolation
1. Introduction
Runtime, and its theoretical subset, time complexity, are imperative to understanding the speed and continual efficiency of all algorithms [1][2], particularly because runtime information allows for thorough comparisons between the performance of competing approaches. Due to the varying environments in which algorithms are executed, time complexity is expressed as a function of the inputted arguments [3] rather than as a situational execution time [4]; this removes the need to address every extraneous factor that affects the speed of such algorithms [5]. There are countless methods for determining the formulaic runtime complexity [6][7], particularly because, from a theoretical perspective, the true runtime can never be determined without thoroughly examining the algorithm itself [8]; however, this does not mean that the process cannot be expedited or simplified.

The goal is to produce a function $O(T(n))$ that models the time complexity of any given algorithm [9], primarily one whose runtime is defined as a function of more than a single variable. We define $E(\text{foo}(\text{args}))$, where $\text{foo}(\text{args})$ is any given algorithm and $E$ denotes its execution in a controlled environment. The following method can be used to determine run time with respect to several variables (not just element count) by evaluating CPU time against input size. Confounding variables such as CPU type, computing power, and programming language are bypassed because they remain controlled during testing. The constructed polynomial series, a piece-wise function of segmented quadratics, will then exhibit the same asymptotic behavior as the true time complexity $O(T(n))$, which can then be independently determined through correlation with the respective parent functions.
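As a concrete illustration of $E(\text{foo}(\text{args}))$, the controlled-environment measurement can be sketched as follows; the algorithm, input sizes, and trial count below are illustrative assumptions, not part of the method itself.

```python
import time

def binary_search(arr, target):
    """Simple O(log n) algorithm used here as the test subject."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def execution_time(algorithm, *args, trials=100):
    """E(foo(args)): mean CPU time over repeated runs in a fixed environment."""
    start = time.process_time()
    for _ in range(trials):
        algorithm(*args)
    return (time.process_time() - start) / trials

# Sample E at several input sizes n; these (n, time) pairs become the
# data points that the segmented quadratics are later fitted through.
samples = [(n, execution_time(binary_search, list(range(n)), -1))
           for n in (1024, 2048, 4096)]
```

Because CPU type and language stay fixed across the sweep, the resulting pairs differ from the true complexity only by the proportionality constant discussed in the Methods.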
In addition, the method developed for computing such runtimes has profound mathematical implications for representing various non-polynomial functions as differentiable quadratic segments, similar, but not identical, to the outcome of evaluating Taylor series [10][11]. In short, we do this by taking reference points of any given non-polynomial and developing a quadratic (using polynomial interpolation [12]) over a particular segment that accurately matches the true functional behavior.

∗ Corresponding author
URL: [email protected] (Ananth Goyal)
arXiv [cs.CC]
2. Methods
Our primary condition is the following:

$$\exists\, x \in \mathbb{R}\ \left[\, F(x+c) - F(x) = \int_{x}^{x+c} \frac{\partial F}{\partial x}\, dx \,\right]$$

Additionally,

$$\forall\,(n \in \mathbb{R} : n > 0)\ \exists\ \frac{\partial}{\partial n}\, O(T(n))$$

This ensures that the targeted time-complexity function is constructed of only real numbers and is differentiable throughout, except at segmented bounds. It is important to note that $F(x) \neq O(T(n))$ and $F(x) \not\approx O(T(n))$; the two are related only through a constant. We also define $E(\text{foo}(\text{args})) = k\, O(T(n))$, where $k$ is any constant of proportionality that converts the predicted time complexity into execution time or vice-versa.

We first construct a single line of intersection amongst every consecutive ordered pair of segmented indexes and their respective computing times (or any alternative performance-modeling metric), using the standard point-slope formula:

$$y = \frac{y_i - y_{i-1}}{x_i - x_{i-1}}\,(x - x_i) + y_i \tag{2.1}$$

The polynomial of any given segment can be constructed using the explicit formula below [13][14]. Here the first three indexes within a data set are used; however, this applies for any 3-point segment within the data set, defined $\forall\,(x \in (x_j, x_k)) \mid (k = j + 2)$. Note: the proof for the following formulas is shown in Section 2.5.

$$\forall\,(x \in (x_0, x_2)):\ f(x) = y_0\,\frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} + y_1\,\frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} + y_2\,\frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} \tag{2.2}$$

We then factor in the polynomial model above and the respective secant-line equation to construct the explicit average form of the initial 3-point segment, in which each output combines the secant line and the original polynomial.
$$\forall\,(x \in (x_0, x_2)):\ f(x) = \frac{1}{2}\left[\, y_0\,\frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} + y_1\,\frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} + y_2\,\frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} + \frac{y_i - y_{i-1}}{x_i - x_{i-1}}\,(x - x_i) + y_i \,\right] \ \Big|\ (i = 1 \lor 2) \tag{2.3}$$

Before we implement this method we must account for any given segment, and to do so we must simplify the method of polynomial construction. First we define $F(x)$ to be dependent on our $f_j$ outputs:

$$F(x) := \sum_{j=0}^{k} y_j\, f_j(x) \tag{2.4}$$

These outputs are determined accordingly (note: each segment here uses three points, $k = j + 2$; however, the model works for any number of points):

$$f_j(x) := \prod_{\substack{0 \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m} = \frac{(x - x_0)}{(x_j - x_0)} \cdots \frac{(x - x_{j-1})}{(x_j - x_{j-1})} \cdot \frac{(x - x_{j+1})}{(x_j - x_{j+1})} \cdots \frac{(x - x_k)}{(x_j - x_k)} \tag{2.5}$$

Such that,

$$\forall\,(j \neq i):\ f_j(x_i) = \prod_{m \neq j} \frac{x_i - x_m}{x_j - x_m} = 0 \tag{2.6}$$

$O(T(n))$ as a Function of Quadratic Segments

We can then average this with the constructed Lagrangian polynomial to get our model for any given 3-point segment.
Note: $\because\ \big(\lim_{x \to x_k^-} F(x) = \lim_{x \to x_k^+} F(x)\big) \wedge \big(\lim_{x \to x_k^-} \frac{\partial F}{\partial x} \neq \lim_{x \to x_k^+} \frac{\partial F}{\partial x}\big)\ \therefore\ \nexists\, \frac{\partial F}{\partial x}\big|_{x = x_k}$. We can simplify the given expression to [15]:

$$\forall\,(x \in (x_j, x_k)):\ F(x) = \frac{1}{2}\left[\, \sum_{j}^{k = j+2} y_j \prod_{\substack{j \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\,(x - x_k) + y_k \,\right] \tag{2.7}$$

Such that,

$$\frac{\partial}{\partial x}\, \frac{1}{2}\left[\, \sum_{j}^{k = j+2} y_j \prod_{\substack{j \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\,(x - x_k) + y_k \,\right] \Bigg|_{x = x_{j+1}} \approx\ \frac{1}{k}\,\frac{\partial}{\partial x}\, E(\text{foo}(x)) \Bigg|_{x = x_{j+1}} \tag{2.8}$$

as well as the segmented average [16],

$$\frac{1}{2\,(x_k - x_j)} \int_{x_j}^{x_k} \left[\, \sum_{j}^{k = j+2} y_j \prod_{\substack{j \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\,(x - x_k) + y_k \,\right] dx\ \approx\ \frac{1}{x_k - x_j}\int_{x_j}^{x_k} \frac{1}{k}\, E(\text{foo}(n))\, dn \tag{2.9}$$

We then apply the proposed method to each selected segment to construct the function for every iteration of natural numbers, redefining $F(x)$ from a single constructed polynomial to a multi-layered, piece-wise construction of the primary segments of such polynomials:

$$\forall\,(x \in \mathbb{R} : x > 0):\ F(x) = \begin{cases} \frac{1}{2}\Big[\sum_{0}^{2} y_j \prod_{\substack{0 \le m \le 2 \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_2 - y_1}{x_2 - x_1}(x - x_2) + y_2\Big] & x_0 \le x \le x_2 \\ \frac{1}{2}\Big[\sum_{2}^{4} y_j \prod_{\substack{2 \le m \le 4 \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_4 - y_3}{x_4 - x_3}(x - x_4) + y_4\Big] & x_2 \le x \le x_4 \\ \qquad \cdots & \cdots \\ \frac{1}{2}\Big[\sum_{n-2}^{n} y_j \prod_{\substack{n-2 \le m \le n \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_n - y_{n-1}}{x_n - x_{n-1}}(x - x_n) + y_n\Big] & x_{n-2} \le x \le x_n \end{cases} \tag{2.10}$$

In order to retrieve the complexity of the algorithm at a particular index $i$, we can now simply compute $F(i)$. Note: $\nexists\, \frac{\partial F}{\partial x}\big|_{x = x_j \lor x_k}$, but $\forall\,(x \in (x_j, x_k))\ \exists\, \frac{\partial F}{\partial x}\big|_{x}$.
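The single-variable construction above can be sketched as follows: each 3-point window yields a Lagrange quadratic (eq. 2.2), which is blended with the secant through the window's last two points (eqs. 2.1, 2.7) and assembled into the piece-wise model of eq. (2.10). The sample points (taken here from $\log_2 x$) are illustrative.

```python
def lagrange_quadratic(window):
    """Quadratic through a 3-point window (eq. 2.2)."""
    (x0, y0), (x1, y1), (x2, y2) = window
    return lambda x: (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                    + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                    + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

def build_F(points):
    """Piece-wise model in the spirit of eq. (2.10): on each window
    [x_j, x_{j+2}], average the Lagrange quadratic with the secant
    through the window's last two points (eq. 2.7)."""
    segments = []
    for j in range(0, len(points) - 2, 2):
        w = points[j:j + 3]
        quad = lagrange_quadratic(w)
        (xa, ya), (xb, yb) = w[1], w[2]
        # Point-slope secant of eq. (2.1), bound per-iteration via defaults.
        sec = lambda x, m=(yb - ya) / (xb - xa), xb=xb, yb=yb: m * (x - xb) + yb
        segments.append((w[0][0], w[2][0], quad, sec))
    def F(x):
        for lo, hi, quad, sec in segments:
            if lo <= x <= hi:
                return 0.5 * (quad(x) + sec(x))
        raise ValueError("x outside modeled range")
    return F

# Five sampled points of log2(x) give two quadratic segments.
pts = [(1, 0.0), (2, 1.0), (4, 2.0), (8, 3.0), (16, 4.0)]
F = build_F(pts)
```

At the window's last two sample points both the quadratic and the secant pass through the data, so the average reproduces the measured values there exactly.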
Additionally, the proposed method, when graphed, constructs a continuous function, making it easy to determine the true runtime of the function as $O(T(n))$.

The following method suffices if, and only if, the arguments are not directly correlated through any mathematical operation other than addition, subtraction, or any non-composite operation. For example, suppose the unknown time complexity of algorithm $\text{foo}(x, b)$ is $O(\log(x) + b)$. We must first evaluate the execution time with respect to a single variable. We use $E(\text{foo}())$ to denote the execution time of the given function; this can be determined by implementing a computing timer into the algorithm. In this case we evaluate the algorithm accordingly:

$$Y = E(\text{foo}(x, b)) \mid \{ (x \in \mathbb{N} : x > 0) \wedge (b = 0) \} \tag{2.11}$$

Such that,

$$Y = y_1 \lor E(\text{foo}(x_1, 0)),\ y_2 \lor E(\text{foo}(x_2, 0)),\ \cdots,\ y_n \lor E(\text{foo}(x_n, 0)) \tag{2.12}$$

$$X = E(\text{foo}(x, b)) \mid \{ (b \in \mathbb{N} : b > 0) \wedge (x = 1) \} \tag{2.13}$$

Such that,

$$X = \chi_1 \lor E(\text{foo}(1, b_1)),\ \chi_2 \lor E(\text{foo}(1, b_2)),\ \cdots,\ \chi_n \lor E(\text{foo}(1, b_n)) \tag{2.14}$$

In this particular case, we first isolate $F(x, b)$ in terms of $x$. To do so we must first ensure that $x$ and $b$ are independent of each other. Since, in our sample scenario,

$$E(\text{foo}(x, b)) = \log(x) + b \tag{2.15}$$

we can conclude that

$$E(\text{foo}(x, 0)) = \log(x) + 0 = \log(x) \tag{2.16}$$

And, fixing the first argument to 1 (or to any constant $c \in \mathbb{R}_{>0}$),

$$E(\text{foo}(1, b)) = \log(1) + b = b \tag{2.17}$$

Now, we can evaluate $E(\text{foo}(x, b))$ over a set of fixed data points.
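The isolation step of eqs. (2.11)–(2.17) can be sketched as follows; the runtime stand-in `foo_runtime` with true cost $\log(x) + b$ is purely a hypothetical for illustration.

```python
import math

def foo_runtime(x, b):
    """Hypothetical stand-in for E(foo(x, b)); a true cost of log(x) + b
    is assumed here only to make the isolation visible."""
    return math.log(x) + b

# Eq. (2.11): sweep x with the second argument pinned to 0 ...
Y = [(x, foo_runtime(x, 0)) for x in (2, 4, 8, 16)]
# Eq. (2.13): ... then sweep b with the first argument pinned to 1,
# so the log(1) = 0 term drops out (eq. 2.17).
X = [(b, foo_runtime(1, b)) for b in (1, 2, 3, 4)]
```

Because the two arguments combine non-compositely (by addition), each sweep recovers one term of the complexity in isolation, ready to be fitted by its own segmented quadratic.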
First with respect to $x$:

$$F(x, 0) \lor F_x(x, b) = \frac{1}{2}\left[\, \sum_{j}^{k = j+2} y_j \prod_{\substack{j \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\,(x - x_k) + y_k \,\right] \tag{2.18}$$

Then with respect to $b$:

$$F(b, 0) \lor F_b(x, b) = \frac{1}{2}\left[\, \sum_{j}^{k = j+2} \chi_j \prod_{\substack{j \le m \le k \\ m \neq j}} \frac{b - b_m}{b_j - b_m} + \frac{\chi_k - \chi_{k-1}}{b_k - b_{k-1}}\,(b - b_k) + \chi_k \,\right] \tag{2.19}$$

Once we have computed our segmented quadratics with respect to a particular index group, we can construct our piece-wise model of $E(\text{foo}(x, b)) = \log(x) + b$ as two independent graphical representations:

$$F(x, b) = \begin{cases} \forall\,(x > 0):\ F_x = \begin{cases} \frac{1}{2}\Big[\sum_{0}^{2} y_j \prod_{\substack{0 \le m \le 2 \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_2 - y_1}{x_2 - x_1}(x - x_2) + y_2\Big] & x_0 \le x \le x_2 \\ \qquad \cdots & \cdots \\ \frac{1}{2}\Big[\sum_{n-2}^{n} y_j \prod_{\substack{n-2 \le m \le n \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_n - y_{n-1}}{x_n - x_{n-1}}(x - x_n) + y_n\Big] & x_{n-2} \le x \le x_n \end{cases} \\ \forall\,(b > 0):\ F_b = \begin{cases} \frac{1}{2}\Big[\sum_{0}^{2} \chi_j \prod_{\substack{0 \le m \le 2 \\ m \neq j}} \frac{b - b_m}{b_j - b_m} + \frac{\chi_2 - \chi_1}{b_2 - b_1}(b - b_2) + \chi_2\Big] & b_0 \le b \le b_2 \\ \qquad \cdots & \cdots \\ \frac{1}{2}\Big[\sum_{n-2}^{n} \chi_j \prod_{\substack{n-2 \le m \le n \\ m \neq j}} \frac{b - b_m}{b_j - b_m} + \frac{\chi_n - \chi_{n-1}}{b_n - b_{n-1}}(b - b_n) + \chi_n\Big] & b_{n-2} \le b \le b_n \end{cases} \end{cases} \tag{2.20}$$

Although our method produces non-differentiable points at segmented bounds, we can still compute partial derivatives at all other points $\forall\,(x \in \mathbb{R} : x > 0)$, such as:

$$\frac{\partial}{\partial x}\, \frac{1}{2}\left[\, \sum_{j}^{k = j+2} y_j \prod_{\substack{j \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\,(x - x_k) + y_k \,\right] \Bigg|_{x = x_{j+1}} \approx\ \frac{\partial}{\partial x}\,(\log x + b) \Bigg|_{x = x_{j+1}} \tag{2.21}$$

We can justify the accuracy of $F_x$ against $E(\text{foo}(x))$, assuming only one inputted argument, accordingly:

$$\because\ \nexists\Big(\tfrac{\partial F}{\partial x}\big|_{x = x_j}\Big) \wedge \nexists\Big(\tfrac{\partial F}{\partial x}\big|_{x = x_k}\Big)\ \wedge\ \Big(\lim_{x \to x_k^-} F_x(x) = \lim_{x \to x_k^+} F_x(x) = F_x(x_k)\Big) \tag{2.22}$$

$$\frac{1}{n} \sum_{\forall(x \in \mathbb{N} : x > x_k)}^{n} E(\text{foo}(x)) \approx \frac{1}{x_2 - x_0}\int_{x_0}^{x_2} F_x(x)\,dx + \frac{1}{x_4 - x_2}\int_{x_2}^{x_4} F_x(x)\,dx + \cdots + \frac{1}{x_n - x_{n-2}}\int_{x_{n-2}}^{x_n} F_x(x)\,dx \tag{2.23}$$

Alternatively,

$$\frac{1}{n} \sum_{\forall(x \in \mathbb{N} : x > x_k)}^{n} E(\text{foo}(x)) \approx \sum_{\forall(x \in \mathbb{N} : x > x_k)}^{n} \frac{1}{x_k - x_j}\int_{x_j}^{x_k} F_x(x)\,dx \tag{2.24}$$

In order to explain the approach used in cases where the unknown runtime function consists of composite operations, we must implement the following proof.
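Before the formal proof, the composite/non-composite distinction can be checked numerically: varying one argument of a non-composite runtime only translates the curve vertically, while a composite form produces an offset that depends on the other argument. The functions below are illustrative stand-ins.

```python
import math

def is_translation_family(E, xs, b_values, tol=1e-9):
    """Numeric form of the upcoming Theorem 2.1: return True when varying
    the second argument only shifts E(., b) vertically, i.e. the arguments
    combine non-compositely."""
    for b in b_values:
        offsets = [E(x, b) - E(x, 0) for x in xs]
        if max(offsets) - min(offsets) > tol:
            return False
    return True

G = lambda x, b: math.log(x) + b    # non-composite: offset is constant in x
M = lambda x, b: b * math.log(x)    # composite: offset b*log(x) depends on x
```

For `G` the offset $E(x, b) - E(x, 0) = b$ is the same at every $x$, so the check passes; for `M` the offset is $b\log(x)$, which varies with $x$, so the check fails.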
Theorem 2.1.
If the unknown run time function consists of composite operations, as in $M(x, b) = b\log(x)$, this can be instantly determined because the functional difference across a set of input values is not just a graphical translation.

Proof of Theorem 2.1. If,

$$G(x, b) = \log(x) + b \ \wedge\ M(x, b) = b\log(x) \tag{2.25}$$

Then,

$$G(x, 0) = \log(x) + 0 = \log(x) = G_x(x, b) \lor G(x) \tag{2.26}$$

Additionally,

$$G(x, 0) = G_x(x, b) \lor G(x) \tag{2.27}$$

But,

$$M(x, 0) \neq M_x(x, b) \lor M(x) \tag{2.28}$$

Due to the non-composite operations of $G$, the value of $b$ does not directly impact the value of $x$, only the output of the multivariable function. The same can be done conversely with other variables; however, if they are directly correlated, as in $M(x, b)$, the difference is prevented from being just a translation.

$$M(x,\ 0 \lor (\forall\,(b \in \mathbb{R} : b > 0))) \neq \log(x) \tag{2.29}$$

Above, it is clear that both independent variables cannot be isolated by inputting a constant of 0, due to the non-linear intervariable relationship.

In order to construct the primary segmented function with behavior equivalent to the multivariable runtime $E(\text{foo}(x, b))$, or one with any number of arguments, we must run execution tests with respect to each variable while the remaining variables are treated as constants. If, as in the example stated earlier, the unknown runtime function were $b\log x$, then when graphically modeled over $n$ tests in terms of $x$, a set of scaled logarithmic curves would be constructed, whereas with respect to $b$, a set of scaled linear functions would be produced. By treating each temporary, non-functional, constant argument as $k_n$, the graphical differences can be factored in when creating the single time-complexity formula. Although there are several $k$ values that could be used, to keep the methodology consistent we take, as the chosen constant, the input value that produces the average functional value over a given segment.
Although the true function is unknown, we can use the constructed segmented quadratic to do so:

$$\frac{1}{\delta_k - \delta_j} \int_{\delta_j}^{\delta_k} F_\delta(\delta_x, k_b, \cdots, k_z)\, d\delta = a\delta^2 + b\delta + c \tag{2.30}$$

Solving for $\delta$ accordingly [17], such that $(\delta \in \mathbb{R}) \wedge (\delta > 0)$:

$$\delta = \frac{-b \pm \sqrt{\,b^2 - 4a\Big(c - \frac{1}{\delta_k - \delta_j}\int_{\delta_j}^{\delta_k} F_\delta(\delta_x, k_b, \cdots, k_z)\, d\delta\Big)}}{2a} \tag{2.31}$$

While this process is still comprehensible, once we introduce functions with more than two arguments, we must test their values in planes with multiple dimensional layers rather than just one or two dimensions. To do so, we determine the intervariable relationships between every potential pair of arguments and construct a potential runtime formula accordingly. Note: the higher the dimensional order of the function, the more convoluted the formulaic determination becomes.

Suppose the intervariable runtime function $E(\text{foo}(x, b, c, \cdots, z))$ has the corresponding segmented quadratic function $F(x, b, c, \cdots, z)$. We would evaluate the unknown $E(\text{foo}(x, b, \cdots, z))$ with respect to a single variable while the remaining variables are treated as constants. Using the example above, we would first plug constant values into $x$ and $b$ while graphically modeling the rate of change of $c$ as an independent function. Then we adjust $b$ in increments of $i$ to determine their respective transformational relationship. We repeat this process for every potential pair $(x, b)$, $(x, c)$, $(b, c)$, and so forth.
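Returning to eqs. (2.30)–(2.31), the constant-selection step, finding the input whose quadratic output equals the segment average, can be sketched as follows; the root-selection convention is an assumption, since the text only requires $\delta > 0$.

```python
import math

def delta_from_average(a, b, c, avg):
    """Recover the input delta whose quadratic output a*d^2 + b*d + c equals
    the segment average `avg`, via the quadratic formula of eq. (2.31).
    The smallest positive root is chosen here as a tie-breaking convention."""
    disc = b * b - 4 * a * (c - avg)
    if disc < 0:
        raise ValueError("segment average is never attained")
    roots = ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a))
    positive = [r for r in roots if r > 0]
    if not positive:
        raise ValueError("no positive root")
    return min(positive)
```

For example, with the segment quadratic $\delta^2$ and a segment average of 9, the recovered constant is $\delta = 3$.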
$$F_{x,b}(x, b, \cdots, z) = F_x(x,\ \forall\,(b \in \mathbb{R} : b > 0 : b = b + i),\ \cdots,\ k_z) \tag{2.32}$$

$$F_{b,c}(x, b, \cdots, z) = F_b(k_x,\ b,\ \forall\,(c \in \mathbb{R} : c > 0 : c = c + i),\ \cdots,\ k_z) \tag{2.33}$$

$$\cdots$$

$$F_{c,z}(x, b, \cdots, z) = F_c(k_x,\ k_b,\ c,\ \forall\,(z \in \mathbb{R} : z > 0 : z = z + i)) \tag{2.34}$$

From there we can use the graphical model to help deduce the formulaic runtime with respect to all variables.

Similar to the analysis method with respect to a single variable, we can justify the accuracy by approximately equating the average integrated value of each independent segment with its true algorithmic counterpart. Note: we define $l$ as the total number of input arguments.

$$\frac{1}{nl} \sum_{\forall(x \in \mathbb{N} : x > x_k)}^{nl} E(\text{foo}(x, b, \cdots, z)) \approx \frac{1}{x_2 - x_0}\int_{x_0}^{x_2} F_x(x, b, \cdots, z)\,dx + \cdots + \frac{1}{x_n - x_{n-2}}\int_{x_{n-2}}^{x_n} F_x(x, b, \cdots, z)\,dx + \frac{1}{b_2 - b_0}\int_{b_0}^{b_2} F_b(x, b, \cdots, z)\,db + \cdots + \frac{1}{b_n - b_{n-2}}\int_{b_{n-2}}^{b_n} F_b(x, b, \cdots, z)\,db + \cdots + \frac{1}{z_2 - z_0}\int_{z_0}^{z_2} F_z(x, b, \cdots, z)\,dz + \cdots + \frac{1}{z_n - z_{n-2}}\int_{z_{n-2}}^{z_n} F_z(x, b, \cdots, z)\,dz \tag{2.35}$$

This method can be simplified accordingly:

$$\frac{1}{nl} \sum_{\forall(x \in \mathbb{N} : x > x_k)}^{nl} E(\text{foo}(x, b, \cdots, z)) \approx \sum^{z_n} \cdots \sum_{\forall(\delta \in \mathbb{R} : x > \delta_k)}^{x_n} \frac{1}{\delta_k - \delta_j}\int_{\delta_j}^{\delta_k} F(x, b, \cdots, z)\,d\delta \tag{2.36}$$

The following subsection discusses the mathematical applications of this method, focusing on the proofs behind constructing non-polynomials as piece-wise functions built upon segmented quadratics.
Lemma 2.2.
Given $n$ values of $x \in \mathbb{R}$ with corresponding $n$ values of $y \in \mathbb{R}$, a representative polynomial $P$ can be constructed such that $\deg(P) < n\ \wedge\ P(x_k) = y_k$.

Proof of Lemma 2.2.
Let,

$$P_1(x) = \frac{(x - x_2)(x - x_3) \cdots (x - x_n)}{(x_1 - x_2)(x_1 - x_3) \cdots (x_1 - x_n)} \tag{2.37}$$

Therefore,

$$P_1(x_1) = 1 \ \wedge\ P_1(x_2) = P_1(x_3) = \cdots = P_1(x_n) = 0 \tag{2.38}$$

Constructing $P_1, P_2, \cdots, P_n$ in the same manner,

$$P_j(x_j) = 1 \ \wedge\ P_j(x_i) = 0 \quad \forall\,(i \neq j) \tag{2.39}$$

Therefore, $P(x) = \sum y_i P_i(x)$ is a constructed polynomial such that $P(x_i) = y_i$ for every node. It is built upon subsidiary polynomials of degree $n - 1$, $\therefore \deg(P) < n$. $\square$

Theorem 2.3.
Referencing Lemma 2.2: given any real non-polynomial, an approximate quadratic piece-wise function $F(x)$ can be constructed using $n$ segments, each produced by 3 values of $x \in \mathbb{R}$ and defined over 2 of them, together with their corresponding outputs, such that $F(x)$ is continuous at all $x$ values including the respective transition points, but not necessarily differentiable at those values.

Proof of Theorem 2.3. Since the initial portion of the polynomial is based upon Lemma 2.2, it is clear that 3 base points construct a quadratic polynomial, unless their pairwise slopes are equivalent, which would instead produce a sloped line. The following method is shown:

$$F(x) = \begin{cases} \frac{1}{2}\Big[\sum_{0}^{2} y_j \prod_{\substack{0 \le m \le 2 \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_2 - y_1}{x_2 - x_1}(x - x_2) + y_2\Big] & x_0 \le x \le x_2 \\ \frac{1}{2}\Big[\sum_{2}^{4} y_j \prod_{\substack{2 \le m \le 4 \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_4 - y_3}{x_4 - x_3}(x - x_4) + y_4\Big] & x_2 \le x \le x_4 \\ \qquad \cdots & \cdots \\ \frac{1}{2}\Big[\sum_{n-2}^{n} y_j \prod_{\substack{n-2 \le m \le n \\ m \neq j}} \frac{x - x_m}{x_j - x_m} + \frac{y_n - y_{n-1}}{x_n - x_{n-1}}(x - x_n) + y_n\Big] & x_{n-2} \le x \le x_n \end{cases} \tag{2.40}$$

When simplified, the function is defined accordingly:

$$F(x) = \begin{cases} a_1 x^2 + b_1 x + c_1 & x_0 \le x \le x_2 \\ a_2 x^2 + b_2 x + c_2 & x_2 \le x \le x_4 \\ \qquad \cdots & \cdots \\ a_n x^2 + b_n x + c_n & x_{n-2} \le x \le x_n \end{cases} \tag{2.41}$$

By definition, any polynomial is continuous throughout its designated bounds [18]; therefore, for all values within each segment, $F(x)$ is continuous. And, since the bounded values of each segment are equivalent, we can conclude that the produced function is continuous everywhere. Formally, $\lim_{x \to t^-} F(x) = \lim_{x \to t^+} F(x) = F(t)$, where $t$ is any bounded point. However, this does not guarantee that $\lim_{x \to t^-} \frac{\partial F(x)}{\partial x} = \lim_{x \to t^+} \frac{\partial F(x)}{\partial x}$; therefore its derivative at the bounded point is likely undefined. $\square$

Theorem 2.4.
Given any segmented quadratic $F(x) = ax^2 + bx + c$ constructed through averaging Lagrangian interpolation with its respective secant, the graphical concavity can be determined from the sign of the variable $a$, where $(a \in \mathbb{R}) \wedge (b \in \mathbb{R}) \wedge (c \in \mathbb{R})$.

Proof of Theorem 2.4. Since $F(x)$ is constructed with three base points, the only polynomial functions produced are segmented quadratics. Upward concavity exists $\forall\,(x \in (x_j, x_k)) \mid \frac{\partial^2 F}{\partial x^2} > 0$, while downward concavity exists $\forall\,(x \in (x_j, x_k)) \mid \frac{\partial^2 F}{\partial x^2} < 0$. However, this process can be expedited without the need to compute second derivatives. Since our segmented polynomial is constructed using only three points, we can conclude that

$$\forall\,(x \in (x_j, x_k)):\ \{ F_x(x) = ax^2 + bx + c \} \mid \{ (a \in \mathbb{R}) \wedge (b \in \mathbb{R}) \wedge (c \in \mathbb{R}) \} \tag{2.42}$$

$$\frac{\partial^2 F}{\partial x^2}\bigg|_{\forall(x \in (x_j, x_k))} = 2a\ \therefore\ \int_{x_j}^{x_k} \frac{\partial^2 F}{\partial x^2}\,dx = \frac{2a}{|2a|}\left|\int_{x_j}^{x_k} \frac{\partial^2 F}{\partial x^2}\,dx\right| \tag{2.43}$$

Thus, the sign of $a$ is the only value needed to determine the segmented concavity of $F_x(x)$. Determining the functional concavity of the Lagrangian construction is important because, for certain functions, primarily those with upward concavity, it may not be necessary to compute secant-line averages. $\square$

When testing the mathematical accuracy $A$ of our approach, we use the segmented average value and compare it to that of the original function $G(x)$. In cases where $\int_a^b G(x) > \int_a^b F(x)$, we simply compute the reciprocal of the following function. We use $a$ and $b$ as placeholder variables to represent the segmented bounds.

$$A = \frac{\sum^{n} \int_a^b G(x)\,dx}{\sum^{n} \int_a^b F(x)\,dx} \tag{2.44}$$
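The accuracy metric of eq. (2.44), with the reciprocal convention so that $A \le 1$, can be sketched on a single segment as follows; the trapezoidal integration and the 1% model error below are illustrative assumptions.

```python
import math

def segment_average_ratio(G, F, a, b, steps=10_000):
    """Accuracy A of eq. (2.44) on one segment [a, b]: ratio of the integral
    of the true function G to that of the constructed model F, computed by
    the trapezoidal rule; the reciprocal is taken whenever the ratio exceeds
    1, so the result is always at most 1."""
    def integral(h):
        dx = (b - a) / steps
        total = 0.5 * (h(a) + h(b)) + sum(h(a + i * dx) for i in range(1, steps))
        return total * dx
    ratio = integral(G) / integral(F)
    return min(ratio, 1 / ratio)

# Example: a model that overshoots log(x) by 1% scores roughly 0.99 on [1, 4].
A = segment_average_ratio(math.log, lambda x: 1.01 * math.log(x), 1.0, 4.0)
```

Summing the per-segment integrals before dividing, as eq. (2.44) does, yields the collective score reported in the Results.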
3. Results
We divide the results section into the primary perspective (algorithmic implications) and the secondaryperspective (pure mathematical implications).
We tested our method on four algorithms (two single-variable and two multivariable) and compared the produced time-complexity formulas to the true, known complexities to see how accurate our formulations were and how extreme the deviations were, if any. The first algorithm (single variable) was a binary search of $x$ elements, with a complexity of $O(\log x)$. The second (single variable) was a sort of $x$ elements, with a complexity of $O(x \log x)$. The third (multivariable) was a combined search-sort algorithm of $x$ unsorted elements and $b$ select sorted elements, with a complexity of $O(b + \log x)$. The fourth (multivariable) was a custom algorithm, with a complexity of $O(mx + \log \log x)$. Although coefficients and additional constants appear in the predicted complexity, because time is the only output variable, the only relevant component is the contents of the big-$O$, as it represents the asymptotic behavior of the algorithmic runtime regardless of any confounding variables. For multivariable algorithms, as stated in the Methods, runtime complexities were computed with respect to each variable and combined into the final predicted complexity.
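The kind of single-variable check used here can be illustrated with a deterministic stand-in for measured time: counting binary-search probes shows the logarithmic signature directly, since doubling the input adds a roughly constant increment. This harness is a hypothetical sketch, not the paper's exact test setup.

```python
def binary_search_steps(n, target=-1):
    """Count probes of binary search over a sorted range of n elements;
    a deterministic stand-in for measured execution time."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == target:
            return steps
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# O(log n) signature: each doubling of n adds about one probe.
counts = [binary_search_steps(2 ** k) for k in (8, 9, 10, 11)]
increments = [b - a for a, b in zip(counts, counts[1:])]
```

A constant increment per doubling is exactly the pattern the segmented quadratics reproduce for the logarithm family, which is how the predicted big-$O$ is matched to its parent function.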
Complexity | Constructed Polynomial | Predicted Complexity
O(log x) | three-segment piece-wise quadratic F(x) | O(log x)
O(x log x) | three-segment piece-wise quadratic F(x) | O(x log x)
O(mx log x) | three-segment piece-wise quadratics F_x(x, m) and F_m(x, m) | O(mx log x)
O(mx + log log b) | three-segment piece-wise quadratics F_x(x, m, b), F_m(x, m, b), and F_b(x, m, b) | O(mx log x + log log b)

We then tested our method against various non-polynomial functions, selecting one example of each common type of non-polynomial to be representative of the accuracy within its functional family. We made sure to remove any non-composite components, such as constants, as they would only adjust the function by a graphical translation. The functions used were $\log x$ (logarithm family), a rational function, $2^x$ (exponential family), and $4\cos(x)$ (trigonometric family); although we could have chosen more convoluted functions, we wanted to showcase performance for functions similar to their parent functions to attain a holistic perspective. In most cases, the Lagrangian constructions tend to exceed the vertical level of their respective non-polynomials, making secant-line averages most useful with downward concavity. We defined each piece-wise function up to some relatively close, arbitrary whole value that keeps the numbers simple, to stay consistent; the accuracy is still indicative of potential performance regardless of the final bound, due to the natural progression of such functions.
Function | Constructed Polynomial | Calculated Accuracy
log x (0 ≤ x ≤ 64) | three-segment piece-wise quadratic F(x) | 99.964%
cos(πx) | three-segment piece-wise quadratic F(x) |
2^x | three-segment piece-wise quadratic F(x) |
rational-family function (0 ≤ x ≤ 16) | three-segment piece-wise quadratic F(x) | 99.34%
Figure 1: Graphical Representations of log x and F(x): (a) without partitions; (b) with partitions.
Figure 2: Graphical Representations of cos(πx) and F(x): (a) without partitions; (b) with partitions.
Figure 3: Graphical Representations of 2^x and F(x): (a) without partitions; (b) with partitions.
Figure 4: Graphical Representations of the rational-family function and F(x): (a) without partitions; (b) with partitions.
4. Discussion and Implications
After testing the proposed approach against several known algorithms, we were able to swiftly determine the runtime functions that correspond with their true time complexities. We tested the approach on two single-variable algorithms and two multivariable algorithms so that we could compare the produced complexity behavior with the true, known complexity. In practice, this method will be used on algorithms whose complexities are unknown, to help determine their runtime functions; experimentally, however, we needed to know the true complexity beforehand to deduce the comparative accuracy. Regardless, in all cases our method was able to produce the correct big-$O$ runtime function. This was determined through the automated construction of segmented polynomial models given a set of input data. Treating each variable independently and graphing their grouped correlation made it easy to deduce the respective time complexity. To reiterate, any external coefficients and constants are a result of the particular test environment and of time being the output value; the only relevant component in determining the accuracy of the method is the contents of the big-$O$ function. Most of the predicted runtime complexity functions followed the format $k\,O(T(n)) + C$, where $k$ is the constant of proportionality between execution time and standard time complexity and $C$ is a factor of translation that matches the produced graphical curve or line with its true counterpart. While these values help us overlay our constructions with their parent functions, they are not important in determining the accuracy of our approach, as the asymptotic behavior of our construction is the same regardless. We are confident that the proposed method can significantly expedite the process of determining functional time complexities in all cases, including both single-variable and multivariable algorithms.
After reviewing the results, we were able to confirm the accuracy of the proposed approach in constructing matching segmented differentiable quadratics for any given non-polynomial, including logarithmic, exponential, trigonometric, and rational functions. To determine the approach's accuracy with select functions, we calculated the average value of the formulated function over a particular segment. After doing so, and after reviewing the formulaic relationships between computed segments, we found a collective functional resemblance score greater than 99% and began to notice profound mathematical implications. After testing just a few data points, we can produce a rule that constructs the next consecutive segmented polynomial based upon the functional patterns that surface. For example, with regard to $\log x$, we determined that every consecutive segment was equivalent to the previous segment's quadratic $ax^2 + bx + c$ with only its constant term shifted by a fixed amount, and that the accuracy of any additional segments would remain identical. Not only is this a powerful method for accurate polynomial replication; its sheer simplicity also combats flaws found in leading methods (primarily with non-sinusoidal functions), most notably Taylor series. Using these methods, mathematicians and scientists can construct accurate, differentiable functions to represent patterned data, non-polynomials, and functions found in higher theoretical dimensions. Additionally, a similar approach can be used to determine the natural progression of repetitious systems, such as natural disasters, planetary orbits, or pandemic-related death tolls, leading to a better understanding of their nature, as in theory their physical attributes and properties are built upon recurring natural functions.
In this paper we proposed an approach that uses segmented quadratic construction, based upon the principles of Lagrangian interpolation, to help determine algorithmic runtimes and to model non-polynomials, with foreseeable applications in pure mathematics and in pattern modeling and recognition in science and nature. We hope to build upon this approach by improving it and by determining new ways to apply this research across computational and mathematical fields.
Acknowledgments
I would like to thank Professor Jeffery Ullman, Mr. Sudhir Kamath, Mr. Robert Gendron, Mr. Phillip Nho, and Ms. Katie MacDougall for their continual support with my research work.
References

[1] Nasar, A. A. (2016). The history of algorithmic complexity. The Mathematics Enthusiast, 13(3), 217-242.
[2] Aho, A., Lam, M., Sethi, R., & Ullman, J. (2007). Compilers: Principles, Techniques and Tools (2nd ed.).
[3] Sipser, M. (1996). Introduction to the Theory of Computation. ACM Sigact News, 27(1), 27-29.
[4] Puschner, P., & Koza, C. (1989). Calculating the maximum execution time of real-time programs. Real-Time Systems, 1(2), 159-176.
[5] Dean, W. (2015). Computational complexity theory.
[6] Qi, Q., Weise, T., & Li, B. (2017, July). Modeling optimization algorithm runtime behavior and its applications. In Proceedings of the Genetic and Evolutionary Computation Conference Companion (pp. 115-116).
[7] Guzman, J. P., & Limoanco, T. (2017). An empirical approach to algorithm analysis resulting in approximations to big theta time complexity. JSW, 12(12), 946-976.
[8] Aho, A. V., & Ullman, J. D. (1994). Foundations of Computer Science. WH Freeman & Co.
[9] Mohr, A. (2014). Quantum computing in complexity theory and theory of computation. Carbondale, IL.
[10] Computers & Mathematics with Applications, 51(9-10), 1367-1376.
[11] Corliss, G., & Chang, Y. F. (1982). Solving ordinary differential equations using Taylor series. ACM Transactions on Mathematical Software (TOMS), 8(2), 114-144.
[12] De Boor, C., & Ron, A. (1990). On multivariate polynomial interpolation. Constructive Approximation, 6(3), 287-302.
[13] Sauer, T., & Xu, Y. (1995). On multivariate Lagrange interpolation. Mathematics of Computation, 64(211), 1147-1170.
[14] Rashed, M. T. (2004). Lagrange interpolation to compute the numerical solutions of differential, integral and integro-differential equations. Applied Mathematics and Computation, 151(3), 869-878.
[15] Berrut, J. P., & Trefethen, L. N. (2004). Barycentric Lagrange interpolation. SIAM Review, 46(3), 501-517.
[16] Comenetz, M. (2002). Calculus: The Elements. World Scientific Publishing Company.
[17] Irving, R. (2020). Beyond the Quadratic Formula (Vol. 62). American Mathematical Soc.
[18] Cucker, F., & Corbalan, A. G. (1989). An alternate proof of the continuity of the roots of a polynomial.