An Experiment Combining Specialization with Abstract Interpretation
L. Fribourg and M. Heizmann (Eds.): VPT/HCVS 2020, EPTCS 320, 2020, pp. 155–158, doi:10.4204/EPTCS.320.11. © J. P. Gallagher and R. Glück. This work is licensed under the Creative Commons Attribution License.
John P. Gallagher
Roskilde University, Denmark
IMDEA Software Institute, Spain [email protected]
Robert Glück
Copenhagen University, Denmark [email protected]
Introduction and motivation.
It was previously shown that control-flow refinement can be achieved by a program specializer incorporating property-based abstraction, as described in [6] and applied in [2] to improve termination and complexity analysis tools. We now show that this purpose-built specializer can be reconstructed in a more modular way, and that the previous results can be achieved using an off-the-shelf partial evaluation tool applied to an abstract interpreter. The key feature of the abstract interpreter is its abstract domain, which is the product of the property-based abstract domain with the concrete domain. This language-independent framework provides a practical approach to implementing a variety of powerful specializers, and contributes to a stream of research on using interpreters and specialization to achieve program transformations.
Abstract interpreters.
Let L be a programming language. We consider a program p ∈ L to be a partial function of one argument (possibly an n-tuple), denoted ⟦p⟧, and assume that both argument and result are elements of a set S. An interpreter for L is a program I such that for all p ∈ L, v ∈ S, ⟦I⟧(p, v) = ⟦p⟧ v (or both are undefined). An abstract interpreter computes a safe approximation of an interpreter I. For the present discussion, we say that an abstract interpreter A takes a program p ∈ L with a set φ of input values, and computes a set of values as output. A is a safe approximation of I if for all p ∈ L and φ ∈ ℘(S), {⟦I⟧(p, v) | v ∈ φ} ⊆ ⟦A⟧(p, φ). In other words, A over-approximates the set of results that I computes on elements of φ.

In practice, abstract interpreters represent elements of ℘(S) by descriptions in some abstract domain D. If D is finite, an abstract interpreter using D is a total function ⟦A⟧ : (L × D) → D, that is, ⟦A⟧(p, φ) terminates for all p ∈ L and φ ∈ D. Although abstract interpreters are typically designed to terminate, domains that are infinite and abstract interpretations that do not guarantee termination are also useful; we can still gain interesting information from running them. In what follows, we define a mixed abstract interpreter that combines the concrete domain ℘(S) with a finite domain D. In the theory of abstract interpretation [1], the Cartesian product ℘(S) × D is an abstract domain. The abstract interpreter A used in our experiment has type (L × (℘(S) × D)) → (℘(S) × D).

Structure of an interpreter.
Let us assume that an (abstract) interpreter operates as a transition system, though this is not essential. A state consists of a point in the program being interpreted together with the values of variables in the domain of interpretation at that point. For the standard interpreter, let q and q′ be program points, and v, v′ ∈ S be the respective values of the variables at those points. A transition is ⟨q, v⟩ --δ(v,q)=v′--> ⟨q′, v′⟩, where δ is a function relating v and v′ at the point q. A transition in an abstract interpreter over domain D uses a mapping δ_D(φ, q) = φ′ where φ, φ′ ∈ D (δ_D is sometimes called the abstract "transfer function").

Transitions in an abstract interpreter over the product domain ℘(S) × D have two components corresponding to ℘(S) and D respectively: they have the form ⟨q, {v}, φ⟩ --δ(v,q)=v′; δ_D(φ,q)=φ′--> ⟨q′, {v′}, φ′⟩, where δ and δ_D are the transfer functions for ℘(S) and D respectively. Assuming that δ is the standard interpreter transfer function, this computes both the standard result and an abstract result in D. We will see that we can exploit the separate components during specialization. We assume that the initial call to the interpreter contains a singleton set {v} ∈ ℘(S), and thus only singleton concrete states are reachable.

Specialization.

A specializer for L is a program S that transforms a program p ∈ L with respect to partially specified input. We assume that p's argument is a pair (v₁, v₂) and that S is provided with v₁. The result is a program p′ ∈ L, i.e. ⟦S⟧(p, v₁) = p′, which satisfies the property ⟦p′⟧ v₂ = ⟦p⟧(v₁, v₂). Specialization of an interpreter with respect to a program in L is known as the first Futamura projection [4, 11].
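As a toy illustration of the first Futamura projection, a Python closure can play the role of the residual program. This is only a sketch under assumed names (a minimal instruction set invented for the example); the experiment in this paper uses LOGEN on a Prolog interpreter, not Python.

```python
# A toy first Futamura projection: fixing the static program argument of
# an interpreter yields a residual program I_p. The instruction set and
# all names here are illustrative assumptions.

def interpreter(program, v):
    """Interpret a tiny program: a list of (op, arg) pairs acting on
    a single accumulator v."""
    for op, arg in program:
        if op == 'add':
            v = v + arg
        elif op == 'mul':
            v = v * arg
    return v

def specialize(interp, program):
    """'Specializer': fixes the static argument (the program), leaving
    the dynamic argument v free. The closure plays the role of I_p."""
    return lambda v: interp(program, v)

p = [('add', 3), ('mul', 2)]
i_p = specialize(interpreter, p)
# [[I_p]] v = [[I]](p, v) = [[p]] v
assert i_p(5) == interpreter(p, 5) == 16
```

A real specializer produces a standalone program text rather than a closure, but the defining equation is the same.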
We have ⟦S⟧(I, p) = I_p, where, according to the properties of interpreters and specializers, ⟦I_p⟧ v = ⟦I⟧(p, v) = ⟦p⟧ v. The program I_p can be seen as the compilation or transformation of p into the language of I. Values encountered during specialization are static or dynamic, in the terminology of partial evaluation. Functions with static arguments can be evaluated during specialization, while functions with dynamic arguments are not, and are retained in the specialised program. A binding-time analysis [11] can determine which parts of the program to be specialised are guaranteed to be static.

Abstract interpreter specialization.
Consider the specialization of an abstract interpreter with domain ℘(S) × D. In a state ⟨q, {v}, φ⟩, we can determine that the program point q and the abstract state φ are static, while v is dynamic. This is because the initial abstract state is static (even if it is the "top" element of D) and in a transition from ⟨q, {v}, φ⟩, where q and φ are static and v dynamic, δ_D(φ, q) = φ′ can be evaluated while δ(v, q) = v′ cannot; thus the computation δ(v, q) = v′ is retained in the residual specialised interpreter. In the next state ⟨q′, {v′}, φ′⟩, q′ and φ′ are again static, while v′ is dynamic. Furthermore, if D is finite, then the static values have bounded static variation, which means that only a finite number of different values of the static arguments arise during specialization. This leads to a so-called polyvariant specialization. A transition of the form ⟨q, {v}, φ⟩ --δ(v,q)=v′; δ_D(φ,q)=φ′--> ⟨q′, {v′}, φ′⟩ is specialised into a finite number of transitions of the form ⟨q_φ, v⟩ --δ(v,q)=v′--> ⟨q′_φ′, v′⟩, one for each pair (q, φ) encountered during specialization.

Control-flow refinement by mixed interpreter specialization.
An abstract interpreter for constrained Horn clauses was written as a Prolog program. The abstract domain is a product domain as described above, where D is a set ℘(Ψ) and Ψ is a finite set of properties. The main interpreter predicate is solve(Q,A,Phi,Psi,Prog), representing a state of the interpreter with a call to predicate Q with concrete arguments A, abstract state Phi (the set of properties from Psi that A entails), Psi and Prog, the last two being the set of all properties and the set of Horn clauses respectively. A transition of the interpreter evaluates two calls delta and delta_D, corresponding to δ and δ_D above: delta evaluates the constraints of the clauses, and delta_D computes the properties for the body calls in the clause. When run normally, the interpreter mirrors the standard semantics, but in addition carries around the set of properties that hold. Consider the following example clauses considered in [6].

while0(X,Y,M) ← X>0, if0(X,Y,M).
while0(X,Y,M) ← X=<0.
if0(X,Y,M) ← Y
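The interplay of the two transfer functions can be sketched in Python. This is an illustrative stand-in for the Prolog implementation: the toy transfer functions and the property names below are assumptions for the sketch, not the properties used in the experiment.

```python
# One mixed transition over P(S) x D with a property-based abstract
# domain: phi is a subset of the finite property set Psi. All names and
# transfer functions here are toy assumptions.

PSI = {
    'x_pos':  lambda x, y, m: x > 0,
    'y_lt_m': lambda x, y, m: y < m,
}

def alpha(v):
    """Initial abstraction: the properties of PSI that v entails."""
    return frozenset(name for name, holds in PSI.items() if holds(*v))

def delta(v):
    """Toy concrete transfer for one loop iteration: x and y increase."""
    x, y, m = v
    return (x + 1, y + 1, m)

def delta_D(phi):
    """Toy abstract transfer on property sets: x > 0 is preserved by the
    increment (x+1 > 0), while y < m may be lost, so it is dropped.
    Dropping a property is a safe over-approximation."""
    return phi & {'x_pos'}

def mixed_step(q, v, phi):
    """One transition over P(S) x D: both components step in lockstep.
    Note delta_D uses only phi, so it can be evaluated statically."""
    return (q, delta(v), delta_D(phi))

q, v, phi = mixed_step('while0', (5, 3, 10), alpha((5, 3, 10)))
assert v == (6, 4, 10) and phi == frozenset({'x_pos'})
```

The safety condition is that the concrete result entails every retained property, which holds here since alpha(v) ⊇ phi after the step.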
The interpreter applied to these clauses can run a goal of the form solve(while0(5,3,10),....) and terminate.

The offline partial evaluator LOGEN [14] was used to partially evaluate the interpreter with respect to a set of clauses and a fixed finite set of properties. To use LOGEN, each call in the interpreter is annotated as unfold or memo, and each argument of memoed calls is annotated as static, dynamic or nonvar (meaning that everything below the top level of the term is dynamic). In the interpreter sketched above (available at https://github.com/jpgallagher/absint4pe), the calls to solve and delta are memoed, while all other calls, including delta_D, are unfolded. The arguments Q, Phi, Psi and Prog are static, while A is nonvar. The specialised program thus consists solely of specialised clauses for solve and the concrete constraints linking one concrete state with the next.

Example result.
The result of specialization of the clauses above, using the same set of properties in the Psi argument as were used in [6], is as follows.

solve__2(A,B,C) :- A>0, solve__3(A,B,C).
solve__2(A,B,C) :- A=<0.
solve__3(A,B,C) :- B

This result is identical, apart from predicate names, to the result obtained in [6]. Polyvariance is exemplified by solve__2, solve__4 and solve__5, these being three versions of calls to solve when interpreting a call to while0 in the input clauses, corresponding to different values of the static arguments (the properties that hold) arising during partial evaluation. Similarly, solve__3 and solve__6 are versions of the interpretation of if0.

The implementation of delta_D reused code from the specializer described in [6], but the abstract interpreter has a simpler structure than the specializer used in that work. This is because the operations handling unfolding, memoing, generalization and polyvariance are performed by LOGEN and do not need to be included in the interpreter. Furthermore, it would be simple to replace the code for delta_D with an implementation of an abstract transfer function for some other domain.
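The polyvariance behind these specialised predicates can be sketched as a reachability computation over the static pairs (q, φ): each distinct pair reached during specialization yields its own residual program point q_φ. The sketch below uses a toy two-point loop and a parity domain; none of these names come from the implementation.

```python
# Worklist enumeration of the pairs (q, phi) arising during
# specialization; finiteness of D bounds the result (bounded static
# variation). Toy parity domain D = {'even', 'odd', 'top'}.

def delta_D(phi, q):
    """Abstract transfer for a toy two-point loop: the step at q=0
    adds 1 (flipping parity); the step at q=1 leaves phi unchanged."""
    if q == 0:
        return {'even': 'odd', 'odd': 'even'}.get(phi, 'top')
    return phi

def succ(q):
    """Control-flow successors of q in the toy program (a 0-1 loop)."""
    return [1] if q == 0 else [0]

def reachable_pairs(q0, phi0):
    """Collect all reachable (q, phi) pairs; each becomes a residual
    program point q_phi in the specialised program."""
    seen, work = set(), [(q0, phi0)]
    while work:
        q, phi = work.pop()
        if (q, phi) in seen:
            continue
        seen.add((q, phi))
        for q2 in succ(q):
            work.append((q2, delta_D(phi, q)))
    return seen

# Two versions of each program point arise, one per reachable parity.
assert reachable_pairs(0, 'even') == {
    (0, 'even'), (0, 'odd'), (1, 'even'), (1, 'odd')}
```

In the experiment this bookkeeping is performed entirely by LOGEN's memoisation of solve calls; the interpreter itself contains no such machinery.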
Related and future work.
The transformation of programs by specialising interpreters goes back to the Futamura projections [4]. The projections can be exploited by inserting more sophisticated interpreters between a program and the specializer (e.g. [5, 16, 9, 12]). The power of the overall program transformation has been improved by combining specialization with abstraction [10, 15, 13, 3]. The main contrast with previous work on combining specialization with abstract interpretation is that we choose not to integrate abstract interpretation into the specializer, but into the interpreter. Thus a simple partial evaluator (in our case LOGEN) can achieve the same results as the more elaborate specializers incorporating abstract interpretation. We argue that combining the interpretive approach with an abstract interpreter has practical advantages such as modularity and ease of implementation. The same transformation power of a sophisticated specializer can be achieved by interpreter specialization provided the underlying specializer is Jones-optimal and performs static expression reduction [8]. It is often easier to modify an interpreter than the specialization tool. Also, only the correctness of the interpreter needs to be established, provided the underlying specializer is correct.

The approach presented here needs further research. For instance, interpreters may be parameterised by abstract interpretation domains; the 'binding-time improvement' of the interpreter is thereby done only once. The approach is not limited to offline specialization; other specialization tools such as online specializers and supercompilers may be used. Clearly, the interpretive approach lends itself to generating specializers by the specializer projections [7]. These will be challenges for further investigations.
References

[1] P. Cousot & R. Cousot (1977): Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL, pp. 238–252.
[2] J. J. Doménech, J. P. Gallagher & S. Genaim (2019): Control-Flow Refinement by Partial Evaluation, and its Application to Termination and Cost Analysis. TPLP 19(5-6), pp. 990–1005.
[3] F. Fioravanti, A. Pettorossi, M. Proietti & V. Senni (2013): Controlling Polyvariance for Specialization-based Verification. Fundam. Inform. 124(4), pp. 483–502.
[4] Y. Futamura (1971): Partial Evaluation of Computation Process – An Approach to a Compiler-Compiler. Systems, Computers, Controls 2(5), pp. 45–50.
[5] J. P. Gallagher (1986): Transforming Logic Programs by Specialising Interpreters. In: Proceedings of the 7th European Conference on Artificial Intelligence (ECAI-86), Brighton, pp. 109–122.
[6] J. P. Gallagher (2019): Polyvariant program specialisation with property-based abstraction. In A. Lisitsa & A. P. Nemytykh, editors: VPT-19, EPTCS 299.
[7] R. Glück (1994): On the generation of specializers. Journal of Functional Programming 4(4), pp. 499–514.
[8] R. Glück (2002): Jones Optimality, Binding-Time Improvements, and the Strength of Program Specializers. In: Proc. Asian Symposium on Partial Evaluation and Semantics-Based Program Manipulation, ACM, pp. 9–19.
[9] R. Glück & J. Jørgensen (1994): Generating transformers for deforestation and supercompilation. In B. Le Charlier, editor: Static Analysis. Proceedings, LNCS 864, Springer-Verlag, pp. 432–448.
[10] J. Hatcliff, M. Dwyer & S. Laubach (1998): Staging static analyses using abstraction-based program specialization. In C. Palamidessi et al., editors: Principles of Declarative Programming, LNCS 1490, Springer, pp. 134–151.
[11] N. D. Jones, C. Gomard & P. Sestoft (1993): Partial Evaluation and Automatic Program Generation. Prentice Hall.
[12] N. D. Jones (2004): Transformation by interpreter specialization. SCP 52(1-3), pp. 307–339.
[13] M. Leuschel (2004): A framework for the integration of partial evaluation and abstract interpretation of logic programs. ACM TOPLAS 26(3), pp. 413–463.
[14] M. Leuschel, D. Elphick, M. Varea, S. Craig & M. Fontaine (2006): The Ecce and Logen partial evaluators and their web interfaces. In J. Hatcliff & F. Tip, editors: PEPM, ACM, pp. 88–94.
[15] G. Puebla, M. Hermenegildo & J. P. Gallagher (1999): An integration of partial evaluation in a generic abstract interpretation framework. In O. Danvy, editor: PEPM'99, San Antonio, Texas, pp. 75–84.
[16] V. F. Turchin (1993): Program transformation with metasystem transitions. Journal of Functional Programming 3(3), pp. 283–313, doi:10.1017/S0956796800000757.