Accelerating Parametric Probabilistic Verification
Nils Jansen, Florian Corzilius, Matthias Volk, Ralf Wimmer, Erika Ábrahám, Joost-Pieter Katoen, Bernd Becker
RWTH Aachen University, Germany
{nils.jansen | corzilius | volk | abraham | katoen}@cs.rwth-aachen.de
Albert-Ludwigs-University Freiburg, Germany
{wimmer | becker}@informatik.uni-freiburg.de

Abstract.
We present a novel method for computing reachability probabilities of parametric discrete-time Markov chains whose transition probabilities are fractions of polynomials over a set of parameters. Our algorithm is based on two key ingredients: a graph decomposition into strongly connected subgraphs combined with a novel factorization strategy for polynomials. Experimental evaluations show that these approaches can lead to a speed-up of up to several orders of magnitude in comparison to existing approaches.
Discrete-time Markov chains (DTMCs) are a widely used modeling formalism for systems exhibiting probabilistic behavior. Their applicability ranges from distributed computing to security and systems biology. Efficient algorithms exist to compute measures like: "What is the probability that our communication protocol terminates successfully if messages are lost with probability 0.05?". However, often actual system parameters like costs, faultiness, reliability, and so on are not given explicitly. For the design of systems incorporating random behavior, this might not even be possible at an early design stage. In model-based performance analysis, the research field of fitting [1], where, intuitively, probability distributions are generated from experimental measurements, mirrors the difficulties in obtaining such concrete values.

This calls for treating probabilities as parameters and motivates the consideration of parametric
DTMCs, PDTMCs for short, where transition probabilities are (rational) functions in terms of the system's parameters. Using these functions one can, e.g., find appropriate values of the parameters such that certain properties are satisfied, or analyze the sensitivity of reachability probabilities to small changes in the parameters. Computing reachability probabilities for DTMCs is typically done by solving a linear equation system. This is not feasible for PDTMCs, since the resulting equation system is non-linear. Instead, approaches based on state elimination have been proposed [2,3]. The idea is to replace states and their incident transitions by direct transitions from each predecessor to each successor state. Eliminating states this way iteratively leads to a model having only initial and absorbing states, where the transitions from the initial states to the absorbing states carry, as rational functions over the model parameters, the probability of reaching the absorbing states from the initial states. The efficiency of such elimination methods strongly depends on the order in which states are eliminated and on the representation of the rational functions.

⋆ This work was partly supported by the German Research Council (DFG) as part of the Research Training Group AlgoSyn (1298) and the Transregional Collaborative Research Center AVACS (SFB/TR 14), the EU FP7-project MoVeS, the FP7-IRSES project MEALS, and by the Excellence Initiative of the German federal and state government.
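For the non-parametric case, the linear equation system mentioned above can be solved directly. The following sketch is illustrative only (the chain, its probabilities, and the tiny solver are invented, not taken from the paper); it computes reachability probabilities with exact arithmetic:

```python
from fractions import Fraction as F

def solve(A, b):
    """Gaussian elimination over exact fractions (no pivoting; demo only)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    x = [F(0)] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

# Hypothetical DTMC with transient states s0, s1, target t, and a trap:
#   s0 -> s1 (1/2), s0 -> t (1/2);  s1 -> s0 (3/10), s1 -> t (1/2), s1 -> trap (1/5)
# The unknowns x_s = sum_{s'} P(s,s') * x_{s'} with x_t = 1, x_trap = 0
# give the system (I - P_transient) x = b with b_s = P(s, t).
A = [[F(1), F(-1, 2)], [F(-3, 10), F(1)]]
b = [F(1, 2), F(1, 2)]
x = solve(A, b)
print(x)  # [Fraction(15, 17), Fraction(13, 17)]
```

With a parameter in place of the concrete probabilities, the same system becomes non-linear in the unknowns and parameters together, which is why the paper resorts to state elimination instead.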
Related work
The idea of constructing a regular expression representing a DTMC's behavior originates from Daws [2]. He uses state elimination to generate regular expressions describing the paths from the initial states to the absorbing states of a DTMC. Hahn et al. [3] apply this idea to PDTMCs to obtain rational functions for reachability and expected reward properties. They improve the efficiency of the construction by using heuristics for the transformation of finite automata to regular expressions [4] to guide the elimination of states. Additionally, they reduce the polynomials to simplify the rational functions. These ideas have been extended to Markov decision processes [5]. The main problem there is that the reachability probabilities depend on the scheduler chosen to resolve the nondeterminism. When maximizing or minimizing these probabilities, the optimal scheduler generally depends on the values of the parameters. Their algorithms are implemented in PARAM [6], the, to the best of our knowledge, only available tool for computing reachability probabilities of PDTMCs.

Several authors have considered the related problem of parameter synthesis: for which parameter instances does a given (LTL or PCTL) formula hold? To mention a few, Han et al. [7] considered this problem for timed reachability in continuous-time Markov chains, Pugelli et al. [8] for Markov decision processes, and Benedikt et al. [9] for ω-regular properties of interval Markov chains.

Contributions of this paper
In this paper we improve the computation of reachability probabilities for PDTMCs [2,3] in two important ways. We introduce a state elimination strategy based on a recursive graph decomposition of the PDTMC into strongly connected subgraphs, and we give a novel method to efficiently factorize polynomials. Although presented in the context of parametric Markov chains, the latter constitutes a generic method for representing and manipulating rational functions and is well-suited for other applications as well. Our experiments show that using these techniques yields a speed-up of up to three orders of magnitude compared to [3] on many benchmarks.
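The state-elimination step that both [2,3] and our strategy build on replaces an eliminated state s by direct edges carrying P(u,w) + P(u,s)·P(s,w)/(1 − P(s,s)). A minimal numeric sketch (the chain and its probabilities are invented for illustration):

```python
from fractions import Fraction as F

def eliminate(P, s):
    """Remove state s from transition map P, redirecting its probability mass."""
    loop = P.get(s, {}).get(s, F(0))
    preds = [u for u in P if u != s and s in P[u]]
    for u in preds:
        w_in = P[u].pop(s)
        for w, w_out in P[s].items():
            if w != s:
                # accumulate: direct edge + detour through s (loop summed out)
                P[u][w] = P[u].get(w, F(0)) + w_in * w_out / (1 - loop)
    P.pop(s, None)

# s0 -> s1 (1/2), s0 -> t (1/2);  s1 -> s1 (1/4), s1 -> t (1/2), s1 -> u (1/4)
P = {
    "s0": {"s1": F(1, 2), "t": F(1, 2)},
    "s1": {"s1": F(1, 4), "t": F(1, 2), "u": F(1, 4)},
}
eliminate(P, "s1")
print(P["s0"])  # {'t': Fraction(5, 6), 'u': Fraction(1, 6)}
```

With parametric transition probabilities, the same update is performed on rational functions instead of fractions; the order of eliminations and the function representation then dominate the cost, which is exactly what the two contributions address.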
Definition 1 (Discrete-time Markov chain). A discrete-time Markov chain (DTMC) is a tuple D = (S, I, P) with a non-empty finite set S of states, an initial distribution I : S → [0,1] ⊆ R with Σ_{s∈S} I(s) = 1, and a transition probability matrix P : S × S → [0,1] ⊆ R with Σ_{s'∈S} P(s,s') = 1 for all s ∈ S.

The states S_I = {s_I ∈ S | I(s_I) > 0} are called initial states. A transition leads from a state s ∈ S to a state s' ∈ S iff P(s,s') >
0. The set of successor states of s ∈ S is succ(s) = {s' ∈ S | P(s,s') > 0}. A path of D is a finite sequence π = s_0 s_1 ... s_n of states s_i ∈ S such that P(s_i, s_{i+1}) > 0 for all 0 ≤ i < n. The set Paths^D contains all paths of D, Paths^D(s) those starting in s ∈ S, and Paths^D(s,t) those starting in s and ending in t. We generalize this to sets S', S'' ⊆ S of states by Paths^D(S', S'') = ∪_{s'∈S'} ∪_{s''∈S''} Paths^D(s', s''). A state t is reachable from s iff Paths^D(s,t) ≠ ∅.

The probability measure Pr^D for paths satisfies Pr^D(s_0 ... s_n) = Π_{i=0}^{n−1} P(s_i, s_{i+1}) and Pr^D({π_1, π_2}) = Pr^D(π_1) + Pr^D(π_2) for all π_1, π_2 ∈ Paths^D not being the prefix of each other. In general, for R ⊆ Paths^D we have Pr^D(R) = Σ_{π∈R'} Pr^D(π) with R' = {π ∈ R | ∀π' ∈ R. π' is not a proper prefix of π}. We often omit the superscript D if it is clear from the context. For more details see, e.g., [10].

For a DTMC D = (S, I, P) and some K ⊆ S we define the set of input states of K by Inp(K) = {s ∈ K | I(s) > 0 ∨ ∃s' ∈ S \ K. P(s',s) > 0}, i.e., the states inside K that have an incoming transition from outside K. Analogously, we define the set of output states of K by Out(K) = {s ∈ S \ K | ∃s' ∈ K. P(s',s) > 0}, i.e., the states outside K that have an incoming transition from a state inside K. The set of inner states of K is given by K \ Inp(K).

We call a state set S' ⊆ S absorbing iff there is a state s' ∈ S' from which no state outside S' is reachable in D, i.e., iff Paths^D({s'}, S \ S') = ∅. A state s ∈ S is absorbing if {s} is absorbing.

A set S' ⊆ S induces a strongly connected subgraph (SCS) of D iff for all s, t ∈ S' there is a path from s to t visiting only states from S'. A strongly connected component (SCC) of D is a maximal (w.r.t. ⊆) SCS of S. An SCC S' is called bottom if Out(S') = ∅ holds.
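The path measure above is simply the product of the traversed transition probabilities. A toy check (chain and probabilities invented):

```python
from fractions import Fraction as F

# Transition probabilities of a small invented DTMC, keyed by edge.
P = {("s0", "s1"): F(1, 2), ("s1", "s1"): F(1, 4), ("s1", "s2"): F(3, 4)}

def pr_path(path):
    """Pr(s0 ... sn) = product of P(si, si+1) along the path."""
    out = F(1)
    for a, b in zip(path, path[1:]):
        out *= P[(a, b)]
    return out

print(pr_path(["s0", "s1", "s1", "s2"]))  # 3/32
```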
The probability of eventually reaching a bottom SCC in a finite DTMC is always 1 [10, Chap. 10.1].

We consider probabilistic reachability properties, putting bounds on the probability Σ_{s_I∈S_I} I(s_I) · Pr^D(Paths^D(s_I, T)) to eventually reach a set T ⊆ S of states from the initial states. It is well known that this suffices for checking arbitrary ω-regular properties, see [10, Chap. 10.3] for the details.

Note that the probability of reaching a state in a bottom SCC equals the probability of reaching one of the input states of the bottom SCC. Therefore, we can make all input states of bottom SCCs absorbing without loss of information. Furthermore, if we are interested in the probability to reach a given state, this state can also be made absorbing without modifying the reachability probability of interest. Therefore, in the following we consider only models whose bottom SCCs are single absorbing states forming the set T of target states, whose reachability probabilities are of interest.

2.1 Parametric Markov Chains

To add parameters to DTMCs, we follow [6] by allowing arbitrary rational functions in the definition of probability distributions.
Definition 2 (Polynomial and rational function).
Let V = {x_1, ..., x_n} be a finite set of variables with domain R. A polynomial g over V is a sum of monomials, which are products of variables in V and a coefficient in Z:

g = a_1 · x_1^{e_{1,1}} · ... · x_n^{e_{1,n}} + ... + a_m · x_1^{e_{m,1}} · ... · x_n^{e_{m,n}},

where e_{i,j} ∈ N ∪ {0} and a_i ∈ Z for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. Z[x_1, ..., x_n] denotes the set of polynomials over V = {x_1, ..., x_n}. A rational function over V is a quotient f = g_1/g_2 of two polynomials g_1, g_2 over V with g_2 ≠ 0. We use F_V = {g_1/g_2 | g_1, g_2 ∈ Z[x_1, ..., x_n] ∧ g_2 ≠ 0} to denote the set of rational functions over V.

Definition 3 (PDTMC). A parametric discrete-time Markov chain (PDTMC) is a tuple M = (S, V, I, P) with a finite set of states S, a finite set of parameters V = {x_1, ..., x_n} with domain R, an initial distribution I : S → F_V, and a parametric transition probability matrix P : S × S → F_V.

The underlying graph G_M = (S, D_P) of a (P)DTMC M = (S, V, I, P) is given by D_P = {(s,s') ∈ S × S | P(s,s') ≠ 0}. As for DTMCs, we assume that all bottom SCCs of considered PDTMCs are single absorbing states.

Definition 4 (Evaluated PDTMC). An evaluation u of V is a function u : V → R. The evaluation g[u] of a polynomial g ∈ Z[x_1, ..., x_n] under u : V → R substitutes each x ∈ V by u(x), using the standard semantics for + and ·. For f = g_1/g_2 ∈ F_V we define f[u] = g_1[u]/g_2[u] ∈ R if g_2[u] ≠ 0.

For a PDTMC M = (S, V, I, P) and an evaluation u, the evaluated PDTMC is the DTMC D = (S_u, I_u, P_u) given by S_u = S and, for all s, s' ∈ S_u, I_u(s) = I(s)[u] and P_u(s,s') = P(s,s')[u] if the evaluations are defined, and 0 otherwise.

An evaluation u substitutes each parameter by a real number. This induces a well-defined probability measure on the evaluated PDTMC under the following conditions.

Definition 5 (Well-defined evaluation).
An evaluation u is well-defined for a PDTMC M = (S, V, I, P) if for the evaluated PDTMC D = (S_u, I_u, P_u) it holds that
– I_u : S_u → [0,1] with Σ_{s∈S_u} I_u(s) = 1, and
– P_u : S_u × S_u → [0,1] with Σ_{s'∈S_u} P_u(s,s') = 1 for all s ∈ S_u.
An evaluation u is called graph preserving if it is well-defined and it holds that
∀s, s' ∈ S : P(s,s') ≠ 0 ⟹ P(s,s')[u] > 0.
Here, g ≠ 0 means that g cannot be simplified to 0.

Fig. 1. The concept of PDTMC abstraction: (a) initial PDTMC, (b) abstraction of K with abstract loop, (c) abstraction of K [figure omitted]
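For a concrete evaluation, Definition 5 can be checked mechanically. A minimal sketch (the one-parameter PDTMC and the helper name are invented; transition rows are given as functions of the parameter p):

```python
from fractions import Fraction as F

# Hypothetical PDTMC over one parameter p: each row of P maps successors
# to functions of p; s1 and s2 are absorbing.
P = {
    "s0": {"s1": lambda p: p, "s2": lambda p: 1 - p},
    "s1": {"s1": lambda p: F(1)},
    "s2": {"s2": lambda p: F(1)},
}

def graph_preserving(P, p):
    """Well-defined (rows sum to 1, entries in [0,1]) and no symbolic edge vanishes."""
    for row in P.values():
        vals = [f(p) for f in row.values()]
        if sum(vals) != 1 or any(not (0 <= v <= 1) for v in vals):
            return False
        if any(v == 0 for v in vals):  # edge present in G_M but absent in G_{M_u}
            return False
    return True

print(graph_preserving(P, F(1, 3)))  # True
print(graph_preserving(P, F(1)))     # False: the edge s0 -> s2 vanishes
```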
Note that P(s,s')[u] > 0 is required for u to be graph preserving, i.e., G_M = G_{M_u}. This is necessary because altering the graph could otherwise make reachable states unreachable, thereby changing reachability probabilities.

Definition 6.
Given a PDTMC M = (S, V, I, P) with absorbing states T ⊆ S, the parametric probabilistic model checking problem is to find for each initial state s_I ∈ S_I and each t ∈ T a rational function f_{s_I,t} ∈ F_V such that for all graph-preserving evaluations u : V → R and the evaluated PDTMC D = (S_u, I_u, P_u) it holds that f_{s_I,t}[u] = Pr^{M_u}(Paths^{M_u}(s_I, t)).

Given the functions f_{s_I,t} for s_I ∈ S_I and t ∈ T, the probability of reaching a state in T from an initial state is Σ_{s_I∈S_I} I(s_I) · (Σ_{t∈T} f_{s_I,t}).

In this section we present our algorithmic approach to apply model checking to PDTMCs. In the following let M = (S, V, I, P) be a PDTMC with absorbing state set T ⊆ S. For each initial state s_I ∈ S_I and each target state t ∈ T we compute a rational function f_{s_I,t} over the set of parameters V which describes the probability of reaching t from s_I as in [3]. We do this using hierarchical graph decomposition, inspired by a former method for computing reachability probabilities in the non-parametric case [11].

The basic concept of our model checking approach is to replace a non-absorbing subset K ⊆ S of states and all transitions between them by transitions directly leading from the input states Inp(K) of K to the output states Out(K) of K, carrying the accumulated probabilities of all paths between the given input and output states inside K. This concept is illustrated in Figure 1: In Figure 1(a), K has one input state s_I and two output states. The abstraction in Figure 1(c) hides every state of K except for s_I; all transitions lead directly to the output states.

Fig. 2. Example PDTMC and its SCC decomposition [figure omitted]

As we need a probability measure for arbitrary subsets of states, we first define sub-PDTMCs induced by such subsets.

Definition 7 (Induced PDTMC).
Given a PDTMC M = (S, V, I, P) and a non-absorbing subset K ⊆ S of states, the PDTMC induced by M and K is given by M_K = (S_K, V_K, I_K, P_K) with S_K = K ∪ Out(K), V_K = V, and for all s, s' ∈ S_K, I_K(s) ≠ 0 ⟺ s ∈ Inp(K) and

P_K(s,s') = P(s,s'), if s ∈ K and s' ∈ S_K,
          = 1,       if s = s' ∈ Out(K),
          = 0,       otherwise.

Intuitively, all incoming and outgoing transitions are preserved for inner states of K while the output states are made absorbing. We allow an arbitrary input distribution I_K with the only constraint that I_K(s) ≠ 0 iff s is an input state of K.

Example 1.
Consider the PDTMC M in Figure 2 and a non-absorbing state set K with a single input state s, Inp(K) = {s}, and three output states Out(K) = {t_1, t_2, t_3}. The PDTMC M_K = (S_K, V_K, I_K, P_K) induced by M and K is shown in Figure 3(a).

Note that, since K is non-absorbing, the probability of eventually reaching one of the output states is 1. The probability of reaching an output state t from the input state s is determined by the accumulated probability of all paths Paths(s,t) from s to t. Those paths are composed of a (possibly empty) prefix looping on s and a postfix leading from s to t without returning to s. In our abstraction this is reflected by abstracting the prefixes by an abstract self-loop on s with probability f_{s,s} and the postfixes by abstract transitions from the input state s to the output states t with probability f_{s,t} (see Figure 1(b)). If all loops in K are loops on s, then f_{s,t} can be easily computed as the sum of the probabilities of all loop-free paths from s to t. In the final abstraction shown in Figure 1(c), we make use of the fact that all paths from s to t can be extended with the same loops on s as a prefix. Therefore we do not need to compute the probability of looping on s, but can scale the probabilities f_{s,t} such that they sum up to 1.

Fig. 3. PDTMC abstraction: (a) induced PDTMC, (b) abstracted PDTMC, (c) scaled functions [figure omitted]

Definition 8 (Abstract PDTMC).
Let M = (S, V, I, P) be a PDTMC with absorbing states T ⊆ S. The abstract PDTMC M_abs = (S_abs, V_abs, I_abs, P_abs) is given by S_abs = {s ∈ S | I(s) ≠ 0 ∨ s ∈ T}, V_abs = V, and for all s, s' ∈ S_abs we define I_abs(s) = I(s) and

P_abs(s,s') = p_abs^M(s,s') / Σ_{s''∈T} p_abs^M(s,s''), if I(s) > 0 ∧ s' ∈ T,
            = 1, if s = s' ∈ T,
            = 0, otherwise,

with p_abs^M(s,s') = Pr^M({π = s_0 ... s_n ∈ Paths^M(s,s') | s_i ≠ s ∧ s_i ≠ s' for 0 < i < n}).

Example 2.
Consider the PDTMC M' = (S', V', I', P') of Figure 3(a) with initial state s and target states T' = {t_1, t_2, t_3}. The first abstraction step regarding the probabilities p_abs^{M'}(s,s') is depicted in Figure 3(b) and refers to the self-loop probability f_{s,s} = p_abs^{M'}(s,s) and the transition probabilities f_{s,t_i} = p_abs^{M'}(s,t_i) for i = 1, 2, 3.

The total probabilities of reaching the output states in M'_abs are given by paths which first use the loop on s arbitrarily many times (including zero times) and then take a transition to an output state. For example, using the geometric series, the probability of the set of paths leading from s to t_1 is given by

Σ_{i=0}^∞ (f_{s,s})^i · f_{s,t_1} = 1/(1 − f_{s,s}) · f_{s,t_1}.

As the probability of finally reaching the set of absorbing states in M' is 1, we can directly scale the probabilities of the outgoing edges such that their sum equals 1. This is achieved by dividing each of these probabilities by the sum of all probabilities of outgoing edges, f_out = f_{s,t_1} + f_{s,t_2} + f_{s,t_3}. Thus the abstract PDTMC M'_abs = (S'_abs, V'_abs, I'_abs, P'_abs) depicted in Figure 3(c) has states S'_abs = {s, t_1, t_2, t_3} and edges from s to all other states with probabilities f̂_{s,t_i} = f_{s,t_i} / f_out.

Theorem 1.
Assume a PDTMC M = (S, V, I, P) with absorbing states T ⊆ S, and let M_abs be the abstraction of M. Then for all s_I ∈ S_I and t ∈ T it holds that

Pr^M(Paths^M(s_I, t)) = Pr^{M_abs}(Paths^{M_abs}(s_I, t)).

The proof of this theorem can be found in the appendix. It remains to define the substitution of subsets of states by their abstractions. Intuitively, a subset of states is replaced by the abstraction as in Definition 8, while the incoming transitions of the initial states of the abstraction as well as the outgoing transitions of the absorbing states of the abstraction remain unmodified.
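The scaling used in Example 2 works precisely because f_out = 1 − f_{s,s}, so dividing by f_out coincides with the geometric-series closed form. A numeric check with invented probabilities:

```python
from fractions import Fraction as F

f_loop = F(1, 4)                            # abstract self-loop on s
f_edges = [F(1, 4), F(1, 3), F(1, 6)]       # abstract edges to t1, t2, t3
f_out = sum(f_edges)
assert f_loop + f_out == 1                  # K is non-absorbing: all mass leaves s

for f in f_edges:
    geometric = f / (1 - f_loop)            # sum_{i>=0} f_loop^i * f
    scaled = f / f_out                      # the scaling of Example 2
    assert geometric == scaled              # identical, no loop probability needed

print([f / f_out for f in f_edges])  # [Fraction(1, 3), Fraction(4, 9), Fraction(2, 9)]
```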
Definition 9 (Substitution).
Assume a PDTMC M = (S, V, I, P), a non-absorbing set K ⊆ S of states, the induced PDTMC M_K = (S_K, V_K, I_K, P_K), and the abstraction M_K^abs = (S_K^abs, V_K^abs, I_K^abs, P_K^abs). The substitution of M_K by its abstraction M_K^abs in M is given by M_{K_abs} = (S_{K_abs}, V_{K_abs}, I_{K_abs}, P_{K_abs}) with S_{K_abs} = (S \ K) ∪ S_K^abs, V_{K_abs} = V, and for all s, s' ∈ S_{K_abs}, I_{K_abs}(s) = I(s) and

P_{K_abs}(s,s') = P(s,s'),       if s ∉ K,
               = P_K^abs(s,s'), if s ∈ K ∧ s' ∈ Out(K),
               = 0,             otherwise.

Algorithm 1: Model checking PDTMCs

abstract(PDTMC M) begin
  for all non-bottom SCCs K in M restricted to S \ Inp(M) do  (1)
    M_K^abs := abstract(M_K)                                  (2)
    M := M_{K_abs}                                            (3)
  end for                                                     (4)
  K := {non-absorbing states in M}                            (5)
  M := M_{K_abs}                                              (6)
  return M                                                    (7)
end

model_check(PDTMC M = (S, V, I, P), T ⊆ {t ∈ S | P(t,t) = 1}) begin
  M_abs = (S_abs, V_abs, I_abs, P_abs) := abstract(M)         (8)
  return Σ_{s_I∈S_I} I(s_I) · (Σ_{t∈T} P_abs(s_I, t))         (9)
end

Due to Theorem 1, it directly follows that this substitution does not change reachability properties from the initial states to the absorbing states of a PDTMC.
Corollary 1.
Given a PDTMC M and a non-absorbing subset K ⊆ S of states, it holds for all initial states s_I ∈ S_I and absorbing states t ∈ T that

Pr^M(Paths^M(s_I, t)) = Pr^{M_{K_abs}}(Paths^{M_{K_abs}}(s_I, t)).

In the previous section we gave the theoretical background for our model checking algorithm. Now we describe how to compute the abstractions efficiently. As a heuristic for forming the sets of states to be abstracted, we choose an SCC-based decomposition of the graph. Algorithmically, Tarjan's algorithm [12] is used to determine the SCC structure of the graph, while we do not consider bottom SCCs. We also hierarchically determine sub-SCCs inside the SCCs without their input states, until no non-trivial sub-SCCs exist any more.
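Tarjan's algorithm [12] computes all SCCs in linear time. A compact recursive sketch (the example graph is invented; real implementations typically use an iterative stack for large models):

```python
def tarjan_sccs(graph):
    """Return the SCCs of a directed graph given as adjacency lists."""
    index, low, stack, on_stack, sccs = {}, {}, [], set(), []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Two non-trivial SCCs, {1, 2} and {3, 4}, plus the trivial SCC {0}.
graph = {0: [1], 1: [2], 2: [1, 3], 3: [4], 4: [3]}
sccs = tarjan_sccs(graph)
assert {1, 2} in sccs and {3, 4} in sccs and {0} in sccs
```

For the hierarchical decomposition, the same routine would be re-run on each SCC with its input states removed, until only trivial sub-SCCs remain.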
Example 3.
In Figure 2, the dashed rectangles indicate the decomposition into an SCC S_1 and its nested sub-SCSs S_{1.1}, S_{1.2}, and S_{1.1.1}, with S_{1.1} ⊂ S_1 and S_{1.1.1} ⊂ S_{1.1} ⊂ S_1.

The general model checking algorithm is depicted in Algorithm 1. The recursive method abstract(PDTMC M) computes the abstraction M_abs by iterating over all SCCs of the graph without the input states of M (line 1). For each SCC K, the abstraction M_K^abs of the induced PDTMC M_K is computed by a recursive call of the method (line 2, Definitions 7, 8). Afterwards, M_K is substituted by its abstraction inside M (line 3, Definition 9). Finally, the abstraction M_abs is computed and returned (line 7, Definition 8). This method is called by the model checking method (line 8), which yields the abstract system M_abs, in which transitions lead only from the initial states to the absorbing states. All transitions are labeled with a rational function for the reachability probability, as in Definition 6. Then the whole reachability probability is computed by building the sum over these transitions (line 9).

What remains to be explained is the computation of the abstract probabilities p_abs^M. We distinguish the cases where the set K has one or multiple input states.

One input state
Consider a PDTMC M_K induced by K with one initial state s_I and the set of absorbing states T = {t_1, ..., t_n}, such that K \ {s_I} has no non-trivial SCCs. If there is only one absorbing state, i.e., n = 1, we have p_abs^{M_K}(s_I, t_1) = 1. This is directly exploited without further computations. Otherwise we determine the probabilities p_abs^{M_K}(s_I, t_i) for all 1 ≤ i ≤ n. As K \ {s_I} has no non-trivial SCSs, the set of those paths from s_I to t_i that do not return to s_I consists of finitely many loop-free paths. The probability is computed recursively for all s ∈ S_K by:

p_abs^{M_K}(s, t_i) = 1, if s = t_i,
                    = Σ_{s' ∈ succ(s) \ Inp(K)} P_K(s,s') · p_abs^{M_K}(s', t_i), otherwise.   (1)

These probabilities can also be computed by direct or indirect methods for solving linear equation systems, see, e.g., [13, Chapters 3, 4]. Note that state elimination as in [3] can be applied here, too.

The probabilities of the abstract PDTMC M_K^abs = (S_abs, V_abs, I_abs, P_abs) as in Definition 8 can now be computed directly, while an additional constraint is added in order to avoid divisions by zero:

P_abs^{M_K}(s_I, t_i) = p_abs^{M_K}(s_I, t_i) / Σ_{j=1}^n p_abs^{M_K}(s_I, t_j), if Σ_{j=1}^n p_abs^{M_K}(s_I, t_j) ≠ 0,
                      = 0, otherwise.   (2)

Multiple input states
Given a PDTMC M_K with initial states S_I = {s_I^1, ..., s_I^m}, m > 1, such that I_K(s_I^i) > 0 for 1 ≤ i ≤ m, and absorbing states T = {t_1, ..., t_n}. The intuitive idea would be to maintain a copy of M_K for each initial state and handle the other initial states as inner states in this copy. Then the method described in the previous paragraph can be used. However, this would be expensive in terms of both time and memory. Therefore, we first formulate the linear equation system as in Equation (1). All variables p_abs^{M_K}(s, t_i) with s ∈ K \ Inp(K) are eliminated from the equation system. Then for each initial state s_I^i the equation system is solved separately by eliminating all variables p_abs^{M_K}(s_I^j, t_k), j ≠ i.

Algorithm 1 returns the rational functions P_abs^{M_K}(s_I, t) for all s_I ∈ S_I and t ∈ T as in Equation (2). To allow only graph-preserving evaluations of the parameters, we perform a preprocessing where conditions are collected according to Definition 5 as well as the ones from Equation (2). These constraints can be evaluated by a SAT-modulo-theories (SMT) solver for non-linear real arithmetic [14]. In case the solver returns an evaluation which satisfies the resulting constraint set, the reachability property is satisfied. Otherwise, the property is violated.
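In both cases, the core of the computation is the recursion of Equation (1). A sketch on a small invented induced PDTMC with a single input state, where K \ {s_I} is acyclic so the recursion terminates:

```python
from fractions import Fraction as F

# Induced PDTMC M_K: input state sI, inner state a, output states t1, t2.
P_K = {
    "sI": {"a": F(1, 2), "t1": F(1, 4), "t2": F(1, 4)},
    "a": {"t1": F(2, 3), "t2": F(1, 3)},
}
Inp = {"sI"}

def p_abs(s, t):
    """Eq. (1): probability of reaching t from s without re-entering an input state."""
    if s == t:
        return F(1)
    return sum(pr * p_abs(s2, t)
               for s2, pr in P_K.get(s, {}).items() if s2 not in Inp)

probs = {t: p_abs("sI", t) for t in ("t1", "t2")}
print(probs)  # {'t1': Fraction(7, 12), 't2': Fraction(5, 12)}
assert sum(probs.values()) == 1  # so the scaling of Eq. (2) is the identity here
```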
Both the SCC-based procedure introduced in the last section and mere state elimination [3] build rational functions representing reachability probabilities. These rational functions might grow rapidly in both algorithms and thereby form one of the major bottlenecks of this methodology. As already argued in [3], the best way to stem this blow-up is the cancellation of the rational functions in every computation step, which involves, apart from addition, multiplication, and division of rational functions, the rather expensive calculation of the greatest common divisor (gcd) of two polynomials.

In this section we present a new way of handling this problem: an additional maintenance and storage of (partial) polynomial factorizations can lead to remarkable speed-ups in the gcd computation, especially when dealing with symmetrically structured benchmarks where many similar polynomials occur. We present an optimized algorithm called gcd which operates on the (partial) factorizations of the polynomials to compute their gcd. During the calculations, the factorizations are also refined. On this account we reformulate the arithmetic operations on rational functions such that they preserve their numerators' and denominators' factorizations, if this is possible with reasonable effort.

Factorizations.
In the following we assume that polynomials are normalized, that is, they are of the form

g = a_1 · x_1^{e_{1,1}} · ... · x_n^{e_{1,n}} + ... + a_m · x_1^{e_{m,1}} · ... · x_n^{e_{m,n}}

with (e_{j,1}, ..., e_{j,n}) ≠ (e_{k,1}, ..., e_{k,n}) for all j, k ∈ {1, ..., m} with j ≠ k, and the monomials are ordered, e.g., according to the reverse lexicographical ordering.

Definition 10 (Factorization). A factorization F_g = {g_1^{e_1}, ..., g_n^{e_n}} of a polynomial g ≠ 0 is a non-empty set of factors g_i^{e_i}, where the bases g_i are pairwise different polynomials and the exponents are e_i ∈ N such that g = Π_{i=1}^n g_i^{e_i}. We additionally set F_0 = ∅.

For polynomials g, h and a factorization F_g = {g_1^{e_1}, ..., g_n^{e_n}} of g, let bases(F_g) = {g_1, ..., g_n} and exp(h, F_g) be e_i if g_i = h and 0 if h ∉ bases(F_g). As the bases are not required to be irreducible, factorizations are not unique.

We assume that bases and exponents are non-zero, F_1 = {1^1}, and 1^k ∉ F_g for g ≠ 1. For F_g = {g_1^{e_1}, ..., g_n^{e_n}}, this is expressed by the reduction F_g^red = {1^1} if g_i = 1 or e_i = 0 for all 1 ≤ i ≤ n, and F_g^red = F_g \ {g_i^{e_i} | g_i = 1 ∨ e_i = 0} otherwise.

We represent a factorization of a polynomial as a set; however, in the implementation we use a more efficient binary search tree instead.

Operations on factorizations.
Instead of applying arithmetic operations to two polynomials g_1 and g_2 directly, we operate on their factorizations F_{g_1} and F_{g_2}. We use the following operations on factorizations: F_{g_1} ∪_F F_{g_2} factorizes a (not necessarily least) common multiple of g_1 and g_2, F_{g_1} ∩_F F_{g_2} a (not necessarily greatest) common divisor, whereas the binary operations ·_F, :_F, and +_F correspond to multiplication, division, and addition, respectively. Due to space limitations, we omit in the remainder of this paper the trivial cases involving F_0. Therefore we define

F_{g_1} ∪_F F_{g_2} = {h^{max(exp(h,F_{g_1}), exp(h,F_{g_2}))} | h ∈ bases(F_{g_1}) ∪ bases(F_{g_2})}^red
F_{g_1} ∩_F F_{g_2} = {h^{min(exp(h,F_{g_1}), exp(h,F_{g_2}))} | h = 1 ∨ h ∈ bases(F_{g_1}) ∩ bases(F_{g_2})}^red
F_{g_1} ·_F F_{g_2} = {h^{exp(h,F_{g_1}) + exp(h,F_{g_2})} | h ∈ bases(F_{g_1}) ∪ bases(F_{g_2})}^red
F_{g_1} :_F F_{g_2} = {h^{max(0, e − exp(h,F_{g_2}))} | h^e ∈ F_{g_1}}^red
F_{g_1} +_F F_{g_2} = D ·_F {(Π_{g'∈(F_{g_1} :_F D)} g') + (Π_{g'∈(F_{g_2} :_F D)} g')}^red

where D = F_{g_1} ∩_F F_{g_2}, and max(a,b) (min(a,b)) equals a if a ≥ b (a ≤ b) and b otherwise. Example 4 illustrates the application of the above operations.

Operations on rational functions.
We represent a rational function g_1/g_2 by separate factorizations F_{g_1} and F_{g_2} for the numerator g_1 and the denominator g_2, respectively. For multiplication g_1/g_2 = h_1/h_2 · q_1/q_2, we compute F_{g_1} = F_{h_1} ·_F F_{q_1} and F_{g_2} = F_{h_2} ·_F F_{q_2}. Division is reduced to multiplication according to (h_1/h_2) : (q_1/q_2) = h_1/h_2 · q_2/q_1.

For the addition g_1/g_2 = h_1/h_2 + q_1/q_2, we compute g_2 with F_{g_2} = F_{h_2} ∪_F F_{q_2} as a common multiple of h_2 and q_2, such that g_2 = h_2 · h_2' with F_{h_2'} = F_{g_2} :_F F_{h_2}, and g_2 = q_2 · q_2' with F_{q_2'} = F_{g_2} :_F F_{q_2}. For the numerator g_1 we first determine a common divisor d of h_1 and q_1 by F_d = F_{h_1} ∩_F F_{q_1}, such that h_1 = d · h_1' with F_{h_1'} = F_{h_1} :_F F_d, and q_1 = d · q_1' with F_{q_1'} = F_{q_1} :_F F_d. The numerator g_1 is d · (h_1' · h_2' + q_1' · q_2') with factorization F_d ·_F (F_{h_1'} ·_F F_{h_2'} +_F F_{q_1'} ·_F F_{q_2'}).

The rational function g_1/g_2 resulting from the addition is further simplified by cancellation, i.e., dividing g_1 and g_2 by their greatest common divisor (gcd) g. Given the factorizations F_{g_1} and F_{g_2}, Algorithm 2 calculates the factorizations F_g, F_{g_1/g}, and F_{g_2/g}.

Intuitively, the algorithm maintains the invariant that G ·_F F_1 ·_F F_1' is a factorization of g_1, where G contains common factors of g_1 and g_2, F_1 is still to be checked for further common factors, and F_1' does not contain any common factors. In the outer while-loop, an element r_1^{e_1} to be checked is taken from F_1. In the inner while-loop, a factorization G ·_F F_2 ·_F F_2' of g_2 is maintained such that F_2' does not contain any common factors with r_1, and F_2 is still to be checked.

Now we explain the algorithm in more detail. Initially, a factorization G of a common divisor of g_1 and g_2 is set to F_{g_1} ∩_F F_{g_2} (line 2). (Note that F_{g_1} :_F F_{g_2} is a factorization of g_1/g_2 only if F_{g_1} and F_{g_2} are sufficiently refined and g_2 divides g_1.) The remaining
Algorithm 2: gcd computation with factorization refinement

gcd(factorization F_{g_1}, factorization F_{g_2}) begin
  G := F_{g_1} ∩_F F_{g_2}                                             (1)
  F_i := F_{g_i} :_F G and F_i' := {1^1} for i = 1, 2                  (2)
  while exists r_1^{e_1} ∈ F_1 with r_1 ≠ 1 do                         (3)
    F_1 := F_1 \ {r_1^{e_1}}                                           (4)
    while r_1 ≠ 1 and exists r_2^{e_2} ∈ F_2 with r_2 ≠ 1 do           (5)
      F_2 := F_2 \ {r_2^{e_2}}                                         (6)
      if ¬irreducible(r_1) ∨ ¬irreducible(r_2) then g := gcd(r_1, r_2) (7)
      else g := 1                                                      (8)
      if g = 1 then                                                    (9)
        F_2' := F_2' ·_F {r_2^{e_2}}                                   (10)
      else                                                             (11)
        r_1 := r_1/g                                                   (12)
        F_i := F_i ·_F {g^{e_i − min(e_1,e_2)}} for i = 1, 2           (13)
        F_2' := F_2' ·_F {(r_2/g)^{e_2}}                               (14)
        G := G ·_F {g^{min(e_1,e_2)}}                                  (15)
      end if                                                           (16)
    end while                                                          (17)
    F_1' := F_1' ·_F {r_1^{e_1}}                                       (18)
    F_2 := F_2 ·_F F_2'                                                (19)
    F_2' := {1^1}                                                      (20)
  end while                                                            (21)
  return (F_1', F_2, G)                                                (22)
end

factors of g_1 and g_2 are stored in F_1 resp. F_2. The sets F_1' and F_2' contain factors of g_1 resp. g_2 whose greatest common divisor is 1 (line 4). The algorithm now iteratively adds further common divisors of g_1 and g_2 to G until it is a factorization of their gcd. For this purpose, we consider for each factor in F_1 all factors in F_2 and calculate the gcd of their bases using standard gcd computation for polynomials (line 14). Note that the main concern of Algorithm 2 is to avoid the application of this expensive operation as far as possible, and otherwise to apply it to preferably simple polynomials. Where the latter is entailed by the idea of using factorizations, the former can be achieved by excluding pairs of factors for which we can cheaply decide that both are irreducible, i.e., that they have no non-trivial divisors. If factors r_1^{e_1} ∈ F_1 and r_2^{e_2} ∈ F_2 with g := gcd(r_1, r_2) = 1 are found, we just shift r_2^{e_2} from F_2 to F_2' (line 17). Otherwise, we can add g^{min(e_1,e_2)}, which is the gcd of r_1^{e_1} and r_2^{e_2}, to G and extend the factors F_1 resp. F_2, which could still contain common divisors, by g^{e_1−min(e_1,e_2)} resp. g^{e_2−min(e_1,e_2)} (line 21). Furthermore, F_2' obtains the new factor (r_2/g)^{e_2}, which certainly has no common divisor with any factor in F_2'.
Finally, we set the basis r_1 to r_1/g, excluding the just found common divisor. If all factors in F_2 have been considered for common divisors with r_1, we can add it to F_1′ and continue with the next factor in F_1, for which we must reconsider all factors in F_2′ and therefore shift them back to F_2 (lines 35–39). The algorithm terminates when the last factor of F_1 has been processed, returning the factorizations F_g, F_{g_1/g}, and F_{g_2/g}, which we can use to refine the factorizations of g_1 and g_2 via F_{g_1} := F_{g_1/g} · G and F_{g_2} := F_{g_2/g} · G.

Example 4.
Assume we want to apply Algorithm 2 to the factorizations F_{xyz} = {(xyz)^1} and F_{xy} = {(x)^1, (y)^1}. We initialize G = F_1′ = F_2′ = ∅, F_1 = F_{xyz}, and F_2 = F_{xy}. First, we choose the factors r_1^{e_1} = (xyz)^1 and r_2^{e_2} = (x)^1 and remove them from F_1 resp. F_2. The gcd of their bases is x, hence we only update r_1 to yz and G to {(x)^1}. Then we remove the next and last element (y)^1 from F_2. Its basis and r_1 have the gcd y, and we therefore update r_1 to z and G to {(x)^1, (y)^1}. Finally, we add (z)^1 to F_1′ and return the expected result ({(z)^1}, ∅, {(x)^1, (y)^1}). Using these results, we can also refine F_{xyz} = F_1′ · G = {(x)^1, (y)^1, (z)^1} and F_{xy} = F_2 · G = {(x)^1, (y)^1}. Theorem 2.
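Algorithm 2 can be exercised on a toy domain. In the following Python sketch (an illustration, not the paper's C++/GiNaC prototype), integers stand in for polynomials: a factorization is a dict from base to exponent, primality plays the role of irreducibility, and math.gcd replaces the expensive polynomial gcd of line 14. One deliberate deviation: when both bases are irreducible, we still record equal bases as a common factor, since such pairs can reappear after refinement.

```python
from math import gcd, isqrt

def is_irreducible(n):
    """Primality as the stand-in for polynomial irreducibility."""
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def mul_in(F, base, exp):
    """Multiply factorization F by base^exp in place, skipping trivial factors."""
    if base != 1 and exp > 0:
        F[base] = F.get(base, 0) + exp

def refine_gcd(Fp1, Fp2):
    """Sketch of Algorithm 2: returns (F_{p1/g}, F_{p2/g}, F_g)."""
    F1, F2, G = dict(Fp1), dict(Fp2), {}
    for b in list(F1):                      # line 2: syntactically common factors
        if b in F2:
            m = min(F1[b], F2[b])
            mul_in(G, b, m)
            for F in (F1, F2):
                F[b] -= m
                if F[b] == 0:
                    del F[b]
    F1p = {}                                # F_1'; F_2' lives per outer iteration
    while F1:                               # outer loop, lines 6-41
        r1, e1 = F1.popitem()
        F2p = {}
        while r1 != 1 and F2:               # inner loop, lines 10-33
            r2, e2 = F2.popitem()
            if is_irreducible(r1) and is_irreducible(r2):
                g = r1 if r1 == r2 else 1   # cheap case, avoiding the gcd call
            else:
                g = gcd(r1, r2)             # line 14: the expensive operation
            if g == 1:
                mul_in(F2p, r2, e2)         # line 19: r2 shares nothing with r1
            else:
                m = min(e1, e2)
                r1 //= g                    # line 23
                mul_in(F1, g, e1 - m)       # line 25: residual powers of g
                mul_in(F2, g, e2 - m)
                mul_in(F2p, r2 // g, e2)    # line 27
                mul_in(G, g, m)             # line 29
        mul_in(F1p, r1, e1)                 # line 35
        for b, e in F2p.items():            # lines 37-39: shift F_2' back to F_2
            mul_in(F2, b, e)
    return F1p, F2, G
```

On Example 4's input, with x = 2, y = 3, z = 5 (so xyz = 30 and xy = 6), refine_gcd({30: 1}, {2: 1, 3: 1}) yields ({5: 1}, {}, {2: 1, 3: 1}), i.e., factorizations of z, of 1, and of the gcd xy.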
Let p_1 and p_2 be two polynomials with factorizations F_{p_1} resp. F_{p_2}. Applying Algorithm 2 to these factorizations results in GCD(F_{p_1}, F_{p_2}) = (F_{r_1}, F_{r_2}, G) with G being a factorization of the greatest common divisor g of p_1 and p_2, and F_{r_1} and F_{r_2} being factorizations of p_1/g resp. p_2/g.

The proof of this theorem can be found in the appendix.
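Cancellation by a gcd is what keeps the rational functions small during state elimination, and the loop-elimination identity behind the abstraction of Theorem 1 can be checked numerically. The following sketch uses an invented four-state chain (not one of the paper's benchmarks) and Python's exact fractions: replacing the loop through an inner state by a self-loop on the initial state and summing the geometric series gives the same reachability probability as solving the chain directly.

```python
from fractions import Fraction as Fr

# Hypothetical chain: sI -> s with prob 3/5, sI -> u with prob 2/5;
# s -> sI with prob 1/2, s -> t with prob 1/2; t and u are absorbing.
p_loop  = Fr(3, 5) * Fr(1, 2)   # Pr(R_loop): sI -> s -> sI
p_out_t = Fr(3, 5) * Fr(1, 2)   # Pr(R_out):  sI -> s -> t
p_out_u = Fr(2, 5)              # Pr(R_out):  sI -> u

# Self-loop probability equals 1 minus the summed exit probabilities.
assert p_loop == 1 - (p_out_t + p_out_u)

# Abstraction: reachability of t via the geometric series over the self-loop.
reach_t_abs = p_out_t / (1 - p_loop)

# Original chain: x = Pr(reach t from sI) solves x = 3/5 * (1/2 * x + 1/2).
x = Fr(3, 10) / (1 - Fr(3, 10))
assert reach_t_abs == x == Fr(3, 7)
```

With parameters instead of concrete numbers, the same computation runs over rational functions, which is exactly where the factorized representation pays off.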
We developed a C++ prototype implementation of our approach using the arithmetic library GiNaC [15]. The prototype is available on the project homepage. Moreover, we implemented the state-elimination approach used by PARAM [6] using our optimized factorization approach to provide a more direct comparison. All experiments were run on an Intel Core 2 Quad CPU 2.66 GHz with 4 GB of memory. We defined a timeout (TO) of 14 hours (50400 seconds) and a memory bound (MO) of 4 GB. We report on three case studies; a more detailed description and the specific instances we used are available at our homepage.

The bounded retransmission protocol (BRP) [16] models the sending of files via an unreliable network, manifested in two lossy channels for sending and acknowledging the reception. This model is parametrized in the reliability of those channels. The crowds protocol (CROWDS) [17] is designed for anonymous network communication using random routing; it is parametrized in the number of "good" and "bad" members and in the probability that a good member delivers a message instead of randomly routing it to another member.
NAND multiplexing (NAND) [18] models how reliable computations can be obtained from unreliable hardware by having a certain number of copies of a NAND unit all performing the same job. Parameters are the probabilities of faultiness of the units and of erroneous inputs.

The experimental setting includes our SCC-based approach as described in Section 3 using the optimized factorization of polynomials as in Section 4 (SCC MC; the prototype and the benchmark instances are available at http://goo.gl/nS378q), the state elimination as in PARAM but also using the approach of Section 4 (STATE ELIM), and the PARAM tool itself. For all instances we list the number of states and transitions; for each tool we give the running time in seconds and the memory consumption in MB; the best time is boldfaced. Moreover, for our approaches we list the number of polynomials which are intermediately stored.
[Table: experimental results. Columns: model, states, transitions; time (s), stored polynomials, and memory (MB) for SCC MC and STATE ELIM; time (s) and memory (MB) for PARAM. Recoverable first row: BRP, 3528 states, 4611 transitions; SCC MC: 29.05 s, 3283 polynomials, 48.10 MB.]
For BRP, STATE ELIM always outperforms PARAM and SCC MC by up to two orders of magnitude. On larger instances, SCC MC is faster than PARAM, while on smaller ones PARAM is faster and has a smaller memory consumption. In contrast, the crowds protocol always induces a nested SCC structure, which is very hard for PARAM since many divisions of polynomials have to be carried out. On larger benchmarks, it is therefore outperformed by more than three orders of magnitude, while SCC MC performs best. Note that this factor is measured against the timeout: in fact, we were not able to retrieve results for PARAM on the larger crowds instances at all.

To give an example where PARAM mostly performs better than our approaches, we consider NAND. Its graph consists of single paths, inducing a high number of polynomials that we store. Our implementation offers the possibility to limit the number of stored polynomials, which decreases the memory consumption at the price of losing information about the factorizations. However, an efficient strategy to manage this bounded pool of polynomials is not yet implemented. Therefore, we refrain from presenting experimental results for this scenario. Note that no bisimulation reduction was applied to any of the input models; this would improve the feasibility of all approaches alike.
Conclusion and Future Work
We presented a new approach to verify parametric Markov chains, together with an improved factorization of polynomials. This significantly improves scalability in comparison to existing approaches. Future work will be dedicated to the actual parameter synthesis. First, we want to incorporate interval constraint propagation [19] in order to provide reasonable intervals for the parameters within which properties are satisfied or violated. Moreover, we are going to investigate the possibility of extending our approaches to models with costs.
References
1. Su, G., Rosenblum, D.S.: Asymptotic bounds for quantitative verification of perturbed probabilistic systems. In: Proc. of ICFEM. Volume 8144 of LNCS, Springer (2013) 297–312
2. Daws, C.: Symbolic and parametric model checking of discrete-time Markov chains. In: Proc. of ICTAC. Volume 3407 of LNCS, Springer (2004) 280–294
3. Hahn, E.M., Hermanns, H., Zhang, L.: Probabilistic reachability for parametric Markov models. Software Tools for Technology Transfer (1) (2010) 3–19
4. Gruber, H., Johannsen, J.: Optimal lower bounds on regular expression size using communication complexity. In: Proc. of FOSSACS. Volume 4962 of LNCS, Springer (2008) 273–286
5. Hahn, E.M., Han, T., Zhang, L.: Synthesis for PCTL in parametric Markov decision processes. In: Proc. of NFM. Volume 6617 of LNCS, Springer (2011) 146–161
6. Hahn, E.M., Hermanns, H., Wachter, B., Zhang, L.: PARAM: A model checker for parametric Markov models. In: Proc. of CAV. Volume 6174 of LNCS, Springer (2010) 660–664
7. Han, T., Katoen, J.P., Mereacre, A.: Approximate parameter synthesis for probabilistic time-bounded reachability. In: Proc. of RTSS, IEEE CS (2008) 173–182
8. Puggelli, A., Li, W., Sangiovanni-Vincentelli, A.L., Seshia, S.A.: Polynomial-time verification of PCTL properties of MDPs with convex uncertainties. In: Proc. of CAV. Volume 8044 of LNCS, Springer (2013) 527–542
9. Benedikt, M., Lenhardt, R., Worrell, J.: LTL model checking of interval Markov chains. In: Proc. of TACAS. Volume 7795 of LNCS, Springer (2013) 32–46
10. Baier, C., Katoen, J.P.: Principles of Model Checking. The MIT Press (2008)
11. Ábrahám, E., Jansen, N., Wimmer, R., Katoen, J.P., Becker, B.: DTMC model checking by SCC reduction. In: Proc. of QEST, IEEE CS (2010) 37–46
12. Tarjan, R.E.: Depth-first search and linear graph algorithms. SIAM Journal on Computing (2) (1972) 146–160
13. Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics. Springer (2000)
14. Jovanović, D., de Moura, L.M.: Solving non-linear arithmetic. In: Proc. of IJCAR. Volume 7364 of LNCS, Springer (2012) 339–354
15. Bauer, C., Frink, A., Kreckel, R.: Introduction to the GiNaC framework for symbolic computation within the C++ programming language. J. Symb. Comput. (1) (2002) 1–12
16. Helmink, L., Sellink, M., Vaandrager, F.: Proof-checking a data link protocol. In: Proc. of TYPES. Volume 806 of LNCS, Springer (1994) 127–165
17. Reiter, M.K., Rubin, A.D.: Crowds: Anonymity for web transactions. ACM Trans. on Information and System Security (1) (1998) 66–92
18. Han, J., Jonker, P.: A system architecture solution for unreliable nanoelectronic devices. IEEE Transactions on Nanotechnology (2002) 201–208
19. Fränzle, M., Herde, C., Teige, T., Ratschan, S., Schubert, T.: Efficient solving of large non-linear arithmetic constraint systems with complex boolean structure. Journal on Satisfiability, Boolean Modeling, and Computation (3-4) (2007) 209–236

Appendix

Theorem 1.
Assume a PDTMC M = (S, V, I, P) with absorbing states T ⊆ S, and let M_abs be the abstraction of M. Then for all s_I ∈ S_I and t ∈ T it holds that

Pr^M(Paths^M(s_I, t)) = Pr^{M_abs}(Paths^{M_abs}(s_I, t)).

Proof.
First note that all initial states and absorbing states in M are also states of the abstraction. As the bottom SCCs are the absorbing states in T, the probability of reaching a state in T is 1. The probability p^{M_abs}(s_I, s_I) can therefore be expressed w.r.t. the probabilities of reaching an absorbing state without revisiting s_I:

p^{M_abs}(s_I, s_I) = 1 − Σ_{t ∈ T} p^{M_abs}(s_I, t).    (3)

To reduce notation, we define, for the given target t ∈ T, the set of paths R_loop looping on s_I and the set of paths R_out going to t without revisiting s_I:

R_loop = { s_I s_1 … s_n s_I ∈ Paths^M | s_i ∉ {s_I} ∪ T, 1 ≤ i ≤ n }    (4)
R_out = { s_I s_1 … s_n t ∈ Paths^M | s_i ∉ {s_I} ∪ T, 1 ≤ i ≤ n }    (5)

As the self-loop on s_I represents the paths of R_loop, it holds that

p^{M_abs}(s_I, s_I) = Pr(R_loop).    (6)

We now have:

Pr^M(Paths^M(s_I, t))
= Pr^M( ⋃_{i=0}^{∞} { π_1 · … · π_i · π_out | π_j ∈ R_loop, 1 ≤ j ≤ i; π_out ∈ R_out } )
= Σ_{i=0}^{∞} Pr^M( { π_1 · … · π_i · π_out | π_j ∈ R_loop, 1 ≤ j ≤ i; π_out ∈ R_out } )
= Σ_{i=0}^{∞} ( Pr^M(R_loop) )^i · Pr^M(R_out)
= Σ_{i=0}^{∞} ( p^{M_abs}(s_I, s_I) )^i · Pr^M(R_out)    (Equation (6))
= 1/(1 − p^{M_abs}(s_I, s_I)) · Pr^M(R_out)    (geometric series)
= 1/(Σ_{s_out ∈ T} p^{M_abs}(s_I, s_out)) · Pr^M(R_out)    (Equation (3))
= 1/(Σ_{s_out ∈ T} p^{M_abs}(s_I, s_out)) · p^{M_abs}(s_I, t)    (Definition 8)
= Pr^{M_abs}(Paths^{M_abs}(s_I, t)).    (Definition 8)

As the probabilities of reaching the absorbing states from initial states coincide in M and M_abs, our abstraction is valid. ⊓⊔

Theorem 2.
Let p_1 and p_2 be two polynomials with factorizations F_{p_1} resp. F_{p_2}. Applying Algorithm 2 to these factorizations results in GCD(F_{p_1}, F_{p_2}) = (F_{r_1}, F_{r_2}, G) with G being a factorization of the greatest common divisor g of p_1 and p_2, and F_{r_1} and F_{r_2} being factorizations of p_1/g resp. p_2/g.

Proof. We denote the product of a factorization F_p by P(F_p) = ∏_{q^e ∈ F_p} q^e, and the standard greatest common divisor computation for polynomials by gcd. We define the following Hoare-style assertion network:

GCD(factorization F_{g_1}, factorization F_{g_2})
begin
(1)  { true }
(2)  G := F_{g_1} ∩ F_{g_2}
(3)  { G = F_{g_1} ∩ F_{g_2} }
(4)  F_i := F_{g_i} : G and F_i′ := ∅ for i = 1, 2
(5)  { F_{g_1} = G · F_1 · F_1′ ∧ F_{g_2} = G · F_2 · F_2′ ∧ P(F_1′) = 1 ∧ P(F_2′) = 1 }
(6)  while exists r_1^{e_1} ∈ F_1 with r_1 ≠ 1 do
(7)  { F_{g_1} = G · F_1 · F_1′ ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 ∧ r_1^{e_1} ∈ F_1 }
(8)  F_1 := F_1 \ {r_1^{e_1}}
(9)  { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 }
(10) while r_1 ≠ 1 and exists r_2^{e_2} ∈ F_2 with r_2 ≠ 1 do
(11) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 ∧ r_2^{e_2} ∈ F_2 }
(12) F_2 := F_2 \ {r_2^{e_2}}
(13) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ · {r_2^{e_2}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {r_2^{e_2}})) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 }
(14) if ¬irreducible(r_1) ∨ ¬irreducible(r_2) then g := gcd(r_1, r_2)
(15) else g := 1
(16) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ · {r_2^{e_2}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {r_2^{e_2}})) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 ∧ g = gcd(r_1, r_2) }
(17) if g = 1 then
(18) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ · {r_2^{e_2}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {r_2^{e_2}})) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 ∧ gcd(r_1, r_2) = 1 }
(19) F_2′ := F_2′ · {r_2^{e_2}}
(20) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 }
(21) else
(22) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ · {r_2^{e_2}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {r_2^{e_2}})) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 ∧ g = gcd(r_1, r_2) }
(23) r_1 := r_1/g
(24) { F_{g_1} = G · F_1 · F_1′ · {(r_1 · g)^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ · {r_2^{e_2}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {r_2^{e_2}})) = 1 ∧ gcd((r_1 · g)^{e_1}, P(F_2′)) = 1 ∧ g = gcd(r_1 · g, r_2) }
(25) F_i := F_i · {g^{e_i − min(e_1,e_2)}} for i = 1, 2
(26) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}, g^{min(e_1,e_2)}} ∧ F_{g_2} = G · F_2 · F_2′ · {(r_2/g)^{e_2}, g^{min(e_1,e_2)}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {(r_2/g)^{e_2}, g^{min(e_1,e_2)}})) = 1 ∧ gcd((r_1 · g)^{e_1}, P(F_2′)) = 1 ∧ g = gcd(r_1 · g, r_2) }
(27) F_2′ := F_2′ · {(r_2/g)^{e_2}}
(28) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}, g^{min(e_1,e_2)}} ∧ F_{g_2} = G · F_2 · F_2′ · {g^{min(e_1,e_2)}} ∧ gcd(P(F_1′), P(F_2 · F_2′ · {g^{min(e_1,e_2)}})) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 }
(29) G := G · {g^{min(e_1,e_2)}}
(30) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 }
(31) end if
(32) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 }
(33) end while
(34) { F_{g_1} = G · F_1 · F_1′ · {r_1^{e_1}} ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 ∧ gcd(r_1^{e_1}, P(F_2′)) = 1 ∧ (r_1 = 1 ∨ P(F_2) = 1) }
(35) F_1′ := F_1′ · {r_1^{e_1}}
(36) { F_{g_1} = G · F_1 · F_1′ ∧ F_{g_2} = G · F_2 · F_2′ ∧ gcd(P(F_1′), P(F_2 · F_2′)) = 1 }
(37) F_2 := F_2 · F_2′
(38) { F_{g_1} = G · F_1 · F_1′ ∧ F_{g_2} = G · F_2 ∧ gcd(P(F_1′), P(F_2)) = 1 }
(39) F_2′ := ∅
(40) { F_{g_1} = G · F_1 · F_1′ ∧ F_{g_2} = G · F_2 ∧ gcd(P(F_1′), P(F_2)) = 1 ∧ P(F_2′) = 1 }
(41) end while
(42) { F_{g_1} = G · F_1′ ∧ F_{g_2} = G · F_2 ∧ gcd(P(F_1′), P(F_2)) = 1 }
(43) return (F_1′, F_2, G)
end

The above assertion network is inductive.
– For the assignments, their preconditions imply their postconditions after substituting the assigned expressions for the assigned variables. (For simplicity, we treat the first if-then-else statement in lines (14)–(15) as an atomic assignment.)
– For the if-then-else statement in lines (17)–(31), its precondition (16) implies the precondition (18) of the if-branch if the branching condition holds, and the precondition (22) of the else-branch if the condition does not hold. The postconditions (20) and (30) of both branches imply the postcondition (32) of the if-then-else statement.
– For the outer while-loop (6)–(41), its precondition (5) as well as the postcondition (40) of its body imply the precondition (7) of the body if the loop condition holds, and they both imply the postcondition (42) of the while-loop if the loop condition does not hold.
– The inner while-loop's inductivity can be shown similarly.

That means that assertion (42) always holds before returning, implying the correctness of the algorithm. The algorithm is also complete, since it always terminates: as a ranking function we can use the sum of the degrees of all polynomials in F_1 for the outer loop and in F_2 for the inner loop. ⊓⊔