Multi-dimensional Long-Run Average Problems for Vector Addition Systems with States
Krishnendu Chatterjee
IST Austria, [email protected]
Thomas A. Henzinger
IST Austria, [email protected]
Jan Otop
University of Wrocław, [email protected]
Abstract
A vector addition system with states (VASS) consists of a finite set of states and counters. A transition changes the current state to the next state, and every counter is either incremented, or decremented, or left unchanged. A state and a value for each counter is a configuration; and a computation is an infinite sequence of configurations with transitions between successive configurations. A probabilistic VASS consists of a VASS along with a probability distribution over the transitions for each state. Qualitative properties such as state and configuration reachability have been widely studied for VASS. In this work we consider multi-dimensional long-run average objectives for VASS and probabilistic VASS. For a counter, the cost of a configuration is the value of the counter; and the long-run average value of a computation for the counter is the long-run average of the costs of the configurations in the computation. The multi-dimensional long-run average problem, given a VASS and a threshold value for each counter, asks whether there is a computation such that for each counter the long-run average value for the counter does not exceed the respective threshold. For probabilistic VASS, instead of the existence of a computation, we consider whether the expected long-run average value for each counter does not exceed the respective threshold. Our main results are as follows: we show that the multi-dimensional long-run average problem (a) is NP-complete for integer-valued VASS; (b) is undecidable for natural-valued VASS (i.e., nonnegative counters); and (c) can be solved in polynomial time for probabilistic integer-valued VASS, and for probabilistic natural-valued VASS when all computations are non-terminating.
Theory of computation → Automata over infinite objects; Theory of computation → Quantitative automata
Keywords and phrases vector addition systems, mean-payoff, multidimension, probabilistic semantics
Funding
Krishnendu Chatterjee: The Austrian Science Fund (FWF) NFN grant S11407-N23 (RiSE/SHiNE).
Thomas A. Henzinger: The Austrian Science Fund (FWF) grants S11402-N23 (RiSE/SHiNE) and Z211-N23 (Wittgenstein Award).
Jan Otop: The National Science Centre (NCN), Poland, under grant 2017/27/B/ST6/00299.
Acknowledgements
We want to thank the anonymous reviewers of CONCUR 2020 for their helpful comments, which contributed to the final version of this paper.
Vector Addition System with States (VASS) and probabilistic VASS.
Vector Addition Systems (VASs) provide a powerful framework for the analysis of parallel processes [16]. They are equivalent to the well-studied model of Petri Nets [25]. The generalization of VASs with a finite-state transition system gives Vector Addition Systems with States (VASS). The model of VASS is as follows: there is a finite set of control states with transitions between them, and a set of k counters, where at every transition between the control states each counter is either incremented, decremented, or remains unchanged. For a VASS, a configuration is a control state and a valuation of each counter, and the transitions of the VASS determine the transitions between the configurations. Thus a VASS is a finite description of an infinite-state transition system between the configurations. The class of VASS where the counters can hold all possible integer values is referred to as integer-valued VASS; and the class of VASS where the counters can hold only non-negative values is referred to as natural-valued VASS. A probabilistic VASS consists of a VASS along with a probability distribution over the transitions for every state.
VASS Framework in Verification.
VASS are an elegant mathematical framework for concurrent processes [16], and have been widely studied in performance analysis of concurrent processes [14, 20, 23, 24]. They have also been used in several other contexts, such as: (a) analysis of parametrized systems [3], (b) abstract models for programs in bounds analysis [34], and (c) interactions between components of an API in component-based synthesis [18]. Probabilistic VASS provide a natural model for the problems mentioned above with stochasticity in the system [6]. Thus VASS and probabilistic VASS provide a rich framework for many problems in verification and program analysis.
Previous results for VASS.
A computation (run) in a VASS is an infinite sequence of configurations with transitions between successive configurations. The classical problems studied for VASS are as follows: (a) control-state reachability, where given a set of target control states a computation satisfies the objective if a target state is reached; and (b) configuration reachability, where given a set of target configurations a computation satisfies the objective if a target configuration is reached. For natural-valued VASS, (a) the control-state reachability problem is ExpSpace-complete: the ExpSpace-hardness is shown in [15, 30] and the upper bound follows from [33]; and (b) the configuration reachability problem is decidable [26, 27, 28, 31], and a recent breakthrough result establishes non-elementary hardness [13]. For integer-valued VASS, (a) the control-state reachability problem is NLogSpace-complete (by reduction to graph reachability); and (b) the configuration reachability problem is NP-complete. In probabilistic VASS, for the natural-valued class, even defining the probability measure over infinite computations is a challenging and complex problem [6], as computations that violate the non-negativity condition terminate as finite computations.
Long-run average objective and multi-dimensional long-run average problem.
The classical problems for VASS consider qualitative (or Boolean) objectives, where each computation is either satisfactory or not. In this work we consider multi-dimensional long-run average objectives. For a counter, we consider the cost of a configuration to be the value of the counter. For a computation, the long-run average of the costs of the configurations of the computation is the long-run average value for the respective counter. The multi-dimensional long-run average problem, given a VASS and a threshold value for each counter, asks whether there is a computation such that for each counter the long-run average value for the counter does not exceed the respective threshold. For integer-valued probabilistic VASS, instead of the existence of a computation, we consider whether the expected long-run average value for each counter does not exceed the respective threshold. For natural-valued probabilistic VASS, the presence of terminating runs makes even defining the probability measure complex. We consider two variants: (a) strict semantics, which requires all computations to be non-terminating; and (b) relaxed semantics, where we consider the conditional probability with respect to non-terminating runs.
Motivating examples.
We present some motivating examples for the problems we consider. First, consider a VASS where the counters represent different queue lengths, and each queue consumes a resource type (e.g., energy, memory, or time delay) proportional to its length. The multi-dimensional long-run average problem asks that the average consumption of each resource does not exceed a desired threshold. Second, consider a system that uses two different batteries, where the counters represent the charge levels. At different states, different batteries are used, and we are interested in the long-run average charge of each battery. This is again modeled as the multi-dimensional long-run average problem.
Our contributions.
Our main contributions are as follows. For non-probabilistic VASS we show that the multi-dimensional long-run average problem (a) is NP-complete for integer-valued VASS, and (b) is undecidable for natural-valued VASS. For probabilistic integer-valued VASS, we show that the multi-dimensional long-run average problem can be solved in polynomial time. For probabilistic natural-valued VASS, we show that the multi-dimensional problem can be solved in polynomial time for (a) the strict semantics, and (b) the relaxed semantics for strongly connected VASS such that the expected multi-dimensional long-run average is finite. For the relaxed semantics and general natural-valued VASS, we show ExpSpace-hardness, and the exact decidability and complexity remain open.
Related works.
For probabilistic VASS the long-run average behavior problem has been studied [6], as well as for other infinite-state models such as pushdown automata and games [1, 11, 12]. However, these works consider costs that are associated with the transitions of the finite-state system and do not depend on the counter values; moreover, they do not consider the multi-dimensional problem. In contrast, we consider costs that depend on the counter values, and hence on the configurations. Costs based on configurations, specifically the content of the stack in pushdown automata, have been considered in [32]. Quantitative asymptotic bounds for polynomial-time termination in VASS have also been studied [5, 29]; however, these works do not consider long-run average properties. A related model of automata with monitor counters with a long-run average property has been considered in [7, 8]. However, there is a crucial difference: in automata with monitor counters, counters are reset once their value is used. Moreover, the complexity results for automata with monitor counters are quite different from the results we establish. Finally, a recent work considers the long-run average problem for VASS [9]. However, there the cost is always single-dimensional, given by a linear combination of the counter values, and moreover, probabilistic VASS are not considered in [9].
For a sequence w, we define w[i] as the (i+1)-th element of w (we start with 0) and w[i, j] as the subsequence w[i] w[i+1] ... w[j]. We allow j to be ∞ for infinite sequences. For a finite sequence w, we denote by |w| its length; for an infinite sequence the length is ∞. We use the same notation for vectors. For a vector ~x ∈ R^k (resp., Q^k, Z^k or N^k), we define ~x[i] as the i-th component of ~x.

A k-dimensional vector addition system with states (VASS) over Z (resp., over N), referred to as VASS(Z, k) (resp., VASS(N, k)), is a tuple A = ⟨Q, Q_0, δ⟩, where (1) Q is a finite set of states, (2) Q_0 ⊆ Q is a set of initial states, and (3) δ ⊆ Q × Q × Z^k is a transition relation. In a transition (q, q', ~y), the vector ~y is called a counter update, as we refer to the k dimensions of a VASS as counters. We often omit the dimension in VASS and write VASS(Z), VASS(N) if a definition or an argument is uniform w.r.t. the dimension.

We define the size of a VASS in a standard way, assuming binary encoding of the counter updates. Formally, the size of a VASS ⟨Q, Q_0, δ⟩ is defined as |Q| + Σ_{(q,q',~y)∈δ} len(~y), where len(~y) is the length of the binary representation of ~y.

Configurations and computations.
A configuration of a VASS(Z, k) A is a pair from Q × Z^k, which consists of a state and a valuation of the counters. A computation of A is an infinite sequence π of configurations such that (a) π[0] ∈ Q_0 × {~0}, and (b) for every i ≥ 0 there exists (q, q', ~y) ∈ δ such that π[i] = (q, ~x) and π[i+1] = (q', ~x + ~y). Note that, without loss of generality, we assume that the initial counter valuation is ~0; we can encode any initial configuration in the VASS itself.

A computation of a VASS(N, k) A is a computation π of A considered as a VASS(Z, k) such that the values of all counters are non-negative, i.e., for all i we have π[i] ∈ Q × N^k. Transitions of a VASS(N) that make the value of some counter negative are disabled. We call a finite sequence ρ a subcomputation of a VASS(Z, k) (resp., VASS(N, k)) A if it satisfies condition (b), i.e., all configurations are consistent with some transitions of A, and all configurations belong to Q × Z^k (resp., Q × N^k).

Paths and cycles.
A path p = (q_0, q_1, ~y_1), (q_1, q_2, ~y_2), ... in a VASS(Z) (resp., VASS(N)) A is a (finite or infinite) sequence of transitions (from δ) in which the second state of each transition coincides with the first state of the next one. A finite path p = (q_0, q_1, ~y_1), ..., (q_{m-1}, q_m, ~y_m) is a cycle if q_0 = q_m. Every computation in a VASS(Z) (resp., VASS(N)) corresponds to a unique infinite path. Conversely, every infinite path in a VASS(Z) A starting in some q_0 ∈ Q_0 defines a computation in A. However, if A is a VASS(N, k), some paths do not correspond to valid computations due to the non-negativity restriction posed on the counters.

Cycle characteristics.
For a path p we define Gain(p) as the vector of the total counter change along p. Formally, for p of length n with counter updates ~y_1, ..., ~y_n, we define Gain(p) = Σ_{i=1}^{n} ~y_i.

Markov chains.
A Markov chain is a tuple ⟨Σ, Q, Q_0, δ, P, µ⟩ such that (1) Σ is a (finite) set of labels, (2) Q is a (finite) set of states, (3) Q_0 is a set of initial states, (4) δ ⊆ Q × Q × Σ is a transition relation, (5) P : δ → (0, 1] is a probability distribution over transitions such that for every q ∈ Q we have Σ_{(q,q',a)∈δ} P(q, q', a) = 1, and (6) µ : Q_0 → [0, 1] is an initial distribution such that Σ_{q∈Q_0} µ(q) = 1.

Probability measures defined by Markov chains.
For a finite path p in a Markov chain M, we define the probability of p, denoted by P_M(p), as the product of the probabilities of the transitions along p. For every n > 0, P_M(·) is indeed a probability measure over paths of length n. We extend this probability measure to infinite paths in the standard fashion. Let X be the set of all infinite paths in M. For a basic open set p · X, which is the set of all infinite paths with the common prefix p, we define P_M(p · X) = P_M(p), and then the probability measure over infinite paths defined by M is the unique extension of the above measure (by Carathéodory's extension theorem [17]). We denote the unique probability measure defined by M by P_M.

Probabilistic VASS.
Probabilistic VASS generalize both VASS and Markov chains. A probabilistic VASS is a VASS in which transitions are labeled with probabilities. It can also be considered as an infinite-state Markov chain over the set of states Q × Z^k (resp., Q × N^k), where Σ is a singleton. Formally, a probabilistic VASS is a tuple A = ⟨Q, Q_0, δ, P, µ⟩ such that (1) ⟨Q, Q_0, δ⟩ is a VASS (VASS(Z) or VASS(N)), (2) P : δ → (0, 1] is the probability distribution over transitions, which for every q ∈ Q satisfies Σ_{(q,q',~y)∈δ} P(q, q', ~y) = 1, and (3) µ : Q_0 → [0, 1] is the initial distribution, which satisfies Σ_{q∈Q_0} µ(q) = 1.

Probability measures defined by probabilistic VASS.
A probabilistic VASS(Z) (resp., VASS(N)) defines a probability measure over its computations. First, a probabilistic VASS(Z) (resp., VASS(N)) A defines the probability measure over its infinite paths in the same way as a Markov chain does. In a VASS(Z), every path corresponds to a computation, and hence the probability measure over infinite paths carries over to computations; we define P_A as the probability measure on computations carried over from infinite paths. However, in a VASS(N) some paths may not correspond to valid computations. For that reason, defining the probability measure over computations poses difficulties [6]. We consider two possible solutions: the strict and the relaxed semantics.
Under the strict semantics, we require that all paths correspond to valid computations, and then we define the probability measure P^s_A over computations as in the VASS(Z) case.
Under the relaxed semantics, we require the set of paths corresponding to valid computations to have non-zero probability, and we define the probability measure P^r_A over computations as the conditional probability given the set of all paths that correspond to valid computations.

Random computations.
To indicate that we consider a computation picked at random, we denote by ξ computations considered as random events.

▶ Remark 1.
Under the strict semantics we require that every path corresponds to a valid computation, i.e., no counter ever gets a negative value. Note that relaxing all to almost all (i.e., with probability 1) gives the same notion: being a valid computation is a safety property, and hence if the set of paths corresponding to valid computations has probability 1, then it is the set of all paths.

In this section, we define the multi-dimensional average problem and the expected multi-dimensional average problem, which we study in this paper. We define the averages over selected positions; the averages are parametrized by a set of states S, called selected states, which determines the meaningful configurations over which we compute the average, while skipping other configurations. This allows us to specify properties based on desired events (from S) rather than steps.
Averages and limit-averages over selecting states.
Let A = ⟨Q, Q_0, δ⟩ be a VASS(Z, k) (resp., VASS(N, k)) and let S ⊆ Q be a set of selecting states. Fix a counter i ∈ {1, ..., k}. For a finite subcomputation ρ of A which contains at least one configuration from S × Z^k (resp., S × N^k), we define the average value of counter i (over S), denoted by Avg^i_S(ρ), as the average over the values of counter i in configurations with the state belonging to S, i.e., we first pick the subsequence (s_1, ~x_1), ..., (s_m, ~x_m) consisting of all configurations (s, ~x) such that s ∈ S, and then take the average of the values of counter i:

Avg^i_S(ρ) = (1/m) Σ_{j=1}^{m} ~x_j[i].

If ρ has no configurations with states from S, then Avg^i_S(ρ) is undefined. For an infinite sequence π of configurations which contains infinitely many configurations from S × Z^k (resp., S × N^k), we define the limit-average value of counter i (over S), denoted by LimAvg^i_S(π), as

LimAvg^i_S(π) = liminf_{n→∞} Avg^i_S(π[0, n−1]).

If π does not contain infinitely many configurations from S × Z^k (resp., S × N^k), then LimAvg^i_S(π) is undefined.
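As a concrete illustration (a sketch of our own, not from the paper), configurations can be encoded as (state, counter-tuple) pairs; then Avg^i_S and the prefix averages whose liminf defines LimAvg^i_S follow directly from the definitions. All names below are ours.

```python
# Sketch (our encoding, not from the paper): a configuration is a pair
# (state, counters) with counters a tuple of integers.

def avg_S(rho, S, i):
    """Avg^i_S over a finite subcomputation rho: the mean of counter i
    over configurations whose state lies in the selecting set S."""
    vals = [x[i] for (q, x) in rho if q in S]
    if not vals:
        return None  # undefined: no configuration with a state from S
    return sum(vals) / len(vals)

def prefix_averages(pi, S, i):
    """Avg^i_S(pi[0, n-1]) for n = 1..len(pi); the liminf of this
    sequence (for an infinite pi) is LimAvg^i_S(pi)."""
    return [avg_S(pi[:n], S, i) for n in range(1, len(pi) + 1)]

# Example: one counter oscillating 0, 1, 0, 1, ... with every state selected.
pi = [("q", (0,)), ("q", (1,))] * 3
print(avg_S(pi, {"q"}, 0))  # 0.5
print(prefix_averages(pi, {"q"}, 0))
```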
We extend averages and limit-averages to multiple dimensions. Let ~S = (S[1], ..., S[k]) be a k-tuple of subsets of Q. For a (finite) subcomputation ρ and an (infinite) computation π, we define

Avg_~S(ρ) = (Avg^1_{S[1]}(ρ), ..., Avg^k_{S[k]}(ρ))  and  LimAvg_~S(π) = (LimAvg^1_{S[1]}(π), ..., LimAvg^k_{S[k]}(π))

if all their components are defined. If any component of Avg_~S(ρ) (resp., LimAvg_~S(π)) is undefined, the whole vector is undefined.

▶ Definition 2 (The multi-dimensional average problem for VASS). Given a VASS(N, k) (resp., VASS(Z, k)) A, ~S ∈ (2^Q)^k and ~λ ∈ Q^k, the (multi-dimensional) average problem asks whether there exists a computation π such that LimAvg_~S(π) is defined and LimAvg_~S(π) ≤ ~λ, i.e., the limit-averages of the counter values over ~S are component-wise bounded by ~λ.

Expected limit-averages over selecting states.
Consider a probabilistic VASS(Z, k) (resp., VASS(N, k)), which defines a probability measure P_A (resp., P^s_A or P^r_A) over its computations. Let ~S = (S[1], ..., S[k]) be a k-tuple of subsets of Q. The function ξ ↦ LimAvg^i_{S[i]}(ξ) is a random variable w.r.t. P_A (resp., P^s_A or P^r_A), and we define E_A(LimAvg^i_{S[i]}) as the expected value of this random variable. If the set of computations ξ at which LimAvg^i_{S[i]}(ξ) is undefined has non-zero probability, then the expected value is undefined as well. We extend the expectation to vectors and define the expected multi-dimensional limit-average as

E_A(LimAvg_~S) = (E_A(LimAvg^1_{S[1]}), ..., E_A(LimAvg^k_{S[k]})).

As above, the expected value E_A(LimAvg_~S) is defined only if all its components are defined.

▶ Definition 3 (The expected (multi-dimensional) average problem for VASS). Given a probabilistic VASS A and ~S ∈ (2^Q)^k, the expected multi-dimensional average problem asks to compute the expected limit-averages over ~S, i.e., E_A(LimAvg_~S).

▶ Remark 4. In all complexity results for the multi-dimensional average and the expected multi-dimensional average problems, we consider VASS where the counter updates are encoded in binary.
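A small sanity check of Definition 2 on a special case (a sketch of our own, not from the paper): if a computation eventually repeats a cycle with zero gain, the counter values become periodic, and LimAvg^i_S is simply the average of counter i over the selected configurations of one period.

```python
# Sketch (our encoding): limit average of an ultimately periodic
# computation whose repeated cycle has zero gain. The counter values are
# then eventually periodic, so the liminf of the prefix averages is the
# plain mean over the selected configurations of one period.

def limavg_periodic(period, S, i):
    # period: list of (state, counters) configurations repeated forever
    vals = [x[i] for (q, x) in period if q in S]
    if not vals:
        return None  # undefined: no selected configuration in the period
    return sum(vals) / len(vals)

# A 1-counter example: the period visits values 2, 4, 0 in state "q",
# plus one non-selected state "r"; only "q"-configurations count.
period = [("q", (2,)), ("r", (7,)), ("q", (4,)), ("q", (0,))]
print(limavg_periodic(period, {"q"}, 0))  # 2.0
```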
Consider a VASS(Z, k) A = ⟨Q, Q_0, δ⟩, a vector ~S ∈ (2^Q)^k and thresholds ~λ ∈ Q^k. For simplicity, we assume that Q_0 = {q_0}, and hence (q_0, ~0) is the initial configuration.

We present sufficient and necessary conditions for the existence of a computation π with LimAvg_~S(π) ≤ ~λ. These conditions are expressed in terms of simple cycles in A, i.e., they stipulate that for each counter i there exist (a) a simple cycle that can be iterated to ensure that the limit average infimum is consistent with the threshold ~λ[i], and (b) a path to access this cycle, and then to switch back to another cycle. These conditions can be checked in NP. We present the main ideas assuming that A is strongly connected.

Assume that A is strongly connected, i.e., it is strongly connected as a labeled graph. We distinguish two types of counters based on their behavior in A: bounded and unbounded. We first assume that for every counter i there is a cycle c_i such that iterating this cycle decreases this counter's value, i.e., Gain(c_i)[i] < 0. In such a case all counters are unbounded, and for any ~λ there exists a computation π such that LimAvg_~S(π) ≤ ~λ.

The all-unbounded case.
We assume that all counters are unbounded. Fix some ~λ ∈ Q^k. We construct π such that LimAvg_~S(π) ≤ ~λ by interleaving strategies for each counter i to make its partial average drop below ~λ[i]. More precisely, we define the path p of the form

p = s_1^1 ... s_k^1 s_1^2 ... s_k^2 ...

such that for every prefix s_1^1 ... s_i^j of p, the subcomputation ρ_i^j corresponding to that prefix satisfies Avg^i_{S[i]}(ρ_i^j) ≤ ~λ[i], i.e., the partial average over S[i] is bounded by ~λ[i]. We can construct such a p as follows. Suppose that a prefix of p has been defined as above, and we need to construct s_i^j. There are two cases. If the cycle c_i with Gain(c_i)[i] < 0 contains a state from S[i], then s_i^j = (c_i)^m for some large m, i.e., we iterate c_i long enough so that the average of the whole prefix computation drops below ~λ[i]. If c_i does not contain any state from S[i], then there exists a cycle that contains a selecting state and has negative gain on counter i: let d be a cycle from the initial state of c_i to itself that contains a selecting state; then, for sufficiently large N, the cycle d · c_i^N contains a selecting state and Gain(d · c_i^N)[i] < 0, and we use it in place of c_i.

Now, let π be the computation corresponding to p. For every counter i there are infinitely many positions at which the partial average over S[i] is at most ~λ[i], and hence LimAvg_~S(π) ≤ ~λ.

The some-bounded case.
Assume that for a counter j there is no cycle such that iterating it decreases the value of counter j; in other words, for all cycles c we have Gain(c)[j] ≥ 0. We call such a counter j bounded. It is clearly bounded from below, and for the limit average to be finite it has to be bounded from above. In consequence, in any computation π with finite limit-average, all cycles c that occur infinitely often satisfy Gain(c)[j] = 0. This in turn restricts the set of cycles that can appear infinitely often in the considered paths, which makes other counters bounded. We iterate this process until we reach a fixed point B, which is the set of all bounded counters. The complement of B, denoted by U, is the set of unbounded counters. Note that for each unbounded counter i ∈ U there is a cycle c_i such that:
(U1) we have Gain(c_i)[i] < 0, and
(U2) for each bounded counter j ∈ B, we have Gain(c_i)[j] = 0.
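The fixed-point computation of B described above can be made concrete under the simplifying assumption that the gains of the relevant simple cycles are given explicitly as integer tuples (an illustration with our own encoding, not the paper's algorithm):

```python
# Sketch (our encoding): compute the fixed point B of bounded counters.
# cycle_gains lists Gain(c) for the simple cycles, as k-tuples of ints.
# A cycle can still occur infinitely often only if it has zero gain on
# every counter already known to be bounded; a counter is bounded if no
# surviving cycle decreases it.

def bounded_counters(cycle_gains, k):
    B = set()
    while True:
        surviving = [g for g in cycle_gains if all(g[j] == 0 for j in B)]
        new_B = {i for i in range(k) if all(g[i] >= 0 for g in surviving)}
        if new_B == B:
            return B
        B = new_B

# Three counters: counter 1 is never decreased, so it is bounded; after
# discarding cycles with non-zero gain on counter 1, counters 0 and 2
# can still be decreased, so they remain unbounded.
gains = [(-1, 0, 0), (1, 0, -1), (0, 2, 0)]
print(bounded_counters(gains, 3))  # {1}
```

The loop is monotone (B only grows), so it terminates after at most k iterations.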
It follows that, similarly to the all-unbounded case, we can make sure that the partial averages of the unbounded counters are arbitrarily low.

The limit average of a bounded counter depends on its initial value. Indeed, in the extreme case, if the value of a counter i does not change in any transition, then it is bounded and in every computation the limit average of counter i is precisely its initial value. However, to characterize cycles that witness low limit-averages of bounded counters, it is more convenient to refer to a configuration that occurs infinitely often rather than to the initial configuration. Therefore, we consider a recurring configuration (s_0, ~x) that is (a) reachable from the initial configuration (q_0, ~0), and (b) such that every later configuration (s_0, ~y) satisfies that ~y and ~x agree on the bounded counters. We now drop the strongly-connected assumption on A.

Observe that switching between cycles for different (bounded or unbounded) counters may affect the values of bounded counters. Therefore, for a bounded counter i ∈ B we require that there is a cycle c_i which (a) can be accessed with an appropriate path, and (b) whose average together with the initial value is bounded by ~λ[i]. To make this more precise: there exist a cycle c_i and paths in_i, out_i such that
(B1) we have Avg^i_{S[i]}(ρ) ≤ ~λ[i], where ρ is the subcomputation corresponding to the cycle c_i starting from the configuration reached from (s_0, ~x) over the path in_i, and
(B2) in_i, out_i lead from s_0 to some s ∈ c_i and from the same s back to s_0, respectively, and for each bounded counter j ∈ B we have Gain(c_i)[j] = 0 and Gain(in_i out_i)[j] = 0.
Finally, we require that for all unbounded counters i ∈ U there exist access paths in_i, out_i as in condition (B2), i.e., paths in_i, out_i satisfying:
(U3) in_i, out_i lead from s_0 to some s ∈ c_i and from the same s back to s_0, respectively, and for each bounded counter j ∈ B we have Gain(in_i out_i)[j] = 0.
Condition (U3) is necessary as otherwise switching between cycles for unbounded counters and cycles for bounded counters could change the values of bounded counters. Observe that conditions (U2) and (U3) together are the same as (B2). We unify these conditions into a single one, denoted (BU).

A witness for LimAvg_~S ≤ ~λ.
A witness for LimAvg_~S ≤ ~λ is a tuple consisting of (a) a (recurring) configuration (s_0, ~x) reachable from the initial configuration (q_0, ~0), (b) sets B and U, and (c) cycles c_i and access paths in_i, out_i, for all i, which all satisfy conditions (U1), (B1) and (BU).

First, we show that the existence of a witness for LimAvg_~S ≤ ~λ is sufficient for the existence of a computation π with LimAvg_~S(π) ≤ ~λ.

Key ideas.
Using a witness, we construct a computation π satisfying LimAvg_~S(π) ≤ ~λ in a similar way as in the all-unbounded case. The only difference here is that we use the access paths to switch between cycles for different counters, so that we always switch between counters in the state s_0, where the values of the bounded counters are the same as in the configuration (s_0, ~x). Due to condition (BU), we do not require A to be strongly connected. In consequence, we have the following:

▶ Lemma 5.
Let A be a VASS(Z). If it has a witness for LimAvg_~S ≤ ~λ, then there exists a computation π such that LimAvg_~S(π) ≤ ~λ.

Proof.
Let (q_0, ~0) be the initial configuration and (s_0, ~x) the recurring configuration of the witness. We assume that B is non-empty; otherwise, the construction presented in the all-unbounded case essentially works, the only difference being that we use the paths in_i, out_i to switch between cycles.

We define the path p of the form

p = s_0 s_1^1 ... s_k^1 s_1^2 ... s_k^2 ...

such that for every prefix s_0 s_1^1 ... s_i^j of p, the subcomputation ρ_i^j corresponding to that prefix satisfies: (a) Avg^i_{S[i]}(ρ_i^j) ≤ ~λ[i] + 1/j, and (b) ρ_i^j terminates in (s_0, ~y), where for every i ∈ B we have ~y[i] = ~x[i].

Having such a path p, consider the computation π that corresponds to p. Observe that for every counter i, condition (a) implies that for every ε > 0 there are infinitely many positions k such that the average Avg^i_{S[i]}(π[1, k]) is less than ~λ[i] + ε, and hence LimAvg^i_{S[i]}(π) ≤ ~λ[i]. It follows that LimAvg_~S(π) ≤ ~λ.

Now, we discuss how to construct such a p. First, the initial block s_0 is a path that corresponds to a computation from (q_0, ~0) to (s_0, ~x). Second, suppose that a prefix p' of p has been defined as above, and we need to construct s_i^j. Observe that the already constructed subcomputation ends in (s_0, ~y) such that for every i ∈ B we have ~y[i] = ~x[i]. There exist paths in_i, out_i and a cycle c_i satisfying (BU), and (U1) (if i ∈ U) or (B1) (if i ∈ B). Consider s_i^j of the form in_i c_i^N out_i for some N > 0. Due to (BU), for every j' ∈ B we have Gain(in_i out_i)[j'] = 0 and Gain(c_i)[j'] = 0, and hence Gain(s_i^j)[j'] = 0. It follows that (b) holds.

Now, to see that (a) holds for N big enough, we consider two cases. If counter i is bounded, then (B1) implies that for ρ_N being the computation corresponding to p' in_i c_i^N out_i, the average of the part corresponding to c_i^N is some value x, which is less than or equal to ~λ[i]. Therefore, Avg^i_{S[i]}(ρ_N) tends to x as N → ∞, and hence there is N such that Avg^i_{S[i]}(ρ_N) ≤ ~λ[i] + 1/j. If counter i is unbounded, then Gain(c_i)[i] < 0 and c_i contains a selecting state. It follows that the values of counter i tend to −∞, and hence Avg^i_{S[i]}(ρ_N) tends to −∞ as N → ∞. Therefore, there exists N such that Avg^i_{S[i]}(ρ_N) ≤ ~λ[i] + 1/j. ◀
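The role of the exponent N in the proof can be illustrated numerically (a toy one-counter instance of our own, not from the paper): for a one-step cycle with gain −1 through a selecting state, iterated N times from counter value x0, the selected values are x0−1, ..., x0−N, so their average is x0 − (N+1)/2 and tends to −∞; hence a suitable N always exists.

```python
# Sketch (toy instance): average of the selected values along c^N for a
# single unbounded counter, where c is a one-step cycle with gain -1
# through a selecting state, started at counter value x0.

def avg_after_iterations(x0, N):
    vals = [x0 - t for t in range(1, N + 1)]  # x0-1, ..., x0-N
    return sum(vals) / len(vals)              # equals x0 - (N+1)/2

def smallest_N(x0, threshold):
    # smallest N with average <= threshold; exists since the average
    # tends to -infinity as N grows
    N = 1
    while avg_after_iterations(x0, N) > threshold:
        N += 1
    return N

print(avg_after_iterations(10, 4))  # 7.5
print(smallest_N(10, 0.0))          # 19
```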
LimAvg ~S ≤ ~λ is necessary for the existenceof a computation π with LimAvg ~S ( π ) ≤ ~λ . (cid:73) Lemma 6.
For all
VASS ( Z , k ) A and ~λ ∈ Q k the following holds: if there is a computation π such that LimAvg ~S ( π ) ≤ ~λ , then there exists a witness for LimAvg ~S ≤ ~λ , which has apolynomial size in |A| + | ~λ | . Proof.
Consider a computation π such that LimAvg_~S(π) ≤ ~λ and let p be the infinite path corresponding to π. We decompose p into simple cycles greedily, always picking the first occurring simple cycle. Now, consider all simple cycles that occur infinitely often, as well as all rotations of these cycles: D = {d_1, ..., d_m}. Based on these cycles, we define B as the set of counters j such that for all cycles d ∈ D we have Gain(d)[j] = 0, and U = {1, ..., k} \ B.

Let s_0 be a state that occurs infinitely often in p. Note that eventually, past some position K, all transitions belong to cycles from D. Therefore, we pick the first configuration (s_0, ~x) past position K and observe that for every successive configuration (s_0, ~y) and every counter i ∈ B, the gain between these configurations is 0 and hence ~x[i] = ~y[i]. The length of the description of (s_0, ~x) is unbounded, but we show at the end of the proof that it can be chosen to be polynomial in |A| + |~λ|. First, we show that there is any witness at all.

Consider a counter j such that all cycles d ∈ D satisfy Gain(d)[j] ≥ 0. We observe that then for all cycles d ∈ D we have Gain(d)[j] = 0 and hence j ∈ B. Indeed, if there were a cycle d ∈ D with Gain(d)[j] > 0, then the values of counter j in π would tend to ∞ and hence LimAvg^j_{S[j]}(π) = ∞ > ~λ[j]. It follows that for every i ∈ U there is a cycle c_i such that Gain(c_i)[i] < 0.

The unbounded-counter case.
Consider i ∈ U. Let c̃_i ∈ D be such that Gain(c̃_i)[i] < 0, and let q be the first state of c̃_i. Observe that there exist cycles f_1, f_2, each consisting of cycles from D, such that f_1 is from s_0 to itself and contains q, and f_2 is from q to itself and contains some selecting state from S[i]. Indeed, q and s_0 occur infinitely often in p. Consider disjoint cycles e_1, e_2, ..., each from s_0 to itself and containing q. From each e_l we remove iteratively simple cycles from D until the resulting cycle does not contain any simple cycle from D. Observe that only finitely many of the resulting cycles are non-empty, as otherwise there would be another simple cycle that occurs infinitely often and does not belong to D. Now, let e_l be a cycle that can be decomposed into simple cycles from D. Let us remove iteratively simple cycles so as to leave only the endpoints s_0 of e_l and a single occurrence of q. The resulting cycle f_1 consists of one or two simple cycles from D. The argument for f_2 is similar.

Now, for N = |Gain(f_2)[i]| + 1 we define c_i = f_2 c̃_i^N. Then Gain(c_i)[i] < 0 and c_i contains a selecting state from S[i]. Therefore, condition (U1) holds. The cycle f_1 can be decomposed into paths in_i, out_i, respectively from s_0 to q and from q to s_0. Furthermore, since f_2, c̃_i and in_i out_i can be decomposed into cycles from D, by the definition of B, for all j ∈ B we have Gain(f_2)[j] = Gain(c̃_i)[j] = Gain(in_i out_i)[j] = 0, and hence Gain(f_2 c̃_i^N)[j] = 0. Therefore, condition (BU) holds. Note that in_i, out_i have lengths bounded by 2 · |A|, and c_i can be represented by the pair of cycles (c̃_i, f_2), each of length at most 2 · |A|. Thus, all have polynomial-size representations.

The bounded-counter case.
Let i ∈ B. Since every cycle d ∈ D satisfies Gain(d)[i] = 0, from some position K onwards the gain of each cycle is 0, and hence the values of counter i at any two positions past K with the same state coincide. Therefore, we associate with each state the value of counter i and eliminate counter values. Furthermore, we associate with each cycle d ∈ D its average value over S[i], which is uniquely defined. Finally, if for all cycles d ∈ D this average exceeds ~λ[i], then LimAvg_{S[i]}(π) > ~λ[i]. Therefore, there exists a cycle c_i ∈ D with average value less than or equal to ~λ[i].

Since s occurs infinitely often, it occurs in particular past position K. Therefore, we show, as in the unbounded-counter case, that there exist in_i, out_i such that in_i leads from s to some state q of c_i and out_i from q to s, and c_i, in_i, out_i satisfy (BU). Finally, observe that c_i, in_i, out_i satisfy (U1). Note that c_i, in_i, out_i can be picked to have lengths at most 2·|A|.

A witness with polynomial recurrent configuration.
We have shown that there exists a witness for LimAvg_~S ≤ ~λ with (s, ~x). We show that there exists ~z such that (a) the witness for LimAvg_~S ≤ ~λ with (s, ~x) replaced by (s, ~z) remains a witness for LimAvg_~S ≤ ~λ, and (b) ~z has a binary representation of length polynomial in |A| + |~λ|.

For (a) we need to show that (i) (B1) is satisfied with (s, ~z), and (ii) (s, ~z) is reachable from (q_0, ~0). Condition (B1) states that for every i ∈ B, the subcomputation ρ corresponding to the cycle c_i, starting from the configuration reached from (s, ~z) over the path in_i, satisfies Avg_{S[i]}(ρ) ≤ ~λ[i]. Note that the lengths of in_i, c_i are bounded by 2·|A| (because i ∈ B), and hence there exists α_i, with a binary representation of length polynomial in |A|, such that Avg_{S[i]}(ρ) = α_i + ~z[i]. Therefore, any ~z such that ~z[i] < −α_i + ~λ[i] for all i ∈ B satisfies (i). For (ii), observe that the reachable configurations in a VASS(Z) form semilinear sets [4] represented by polynomial-size equations (where coefficients are given in binary). Therefore, we can find a vector ~z satisfying (i) and (ii) whose binary representation has length polynomial in |A| + |~λ|. ◀

Finally, a polynomial-size witness for
LimAvg_~S(π) ≤ ~λ can be non-deterministically picked and verified in polynomial time. More precisely, in the definition of a witness for LimAvg_~S ≤ ~λ, condition (a) can be checked in NP, as reachability for VASS(Z) is NP-complete [4], and conditions (b) and (c) can be checked in polynomial time. In consequence, we have:

▶ Lemma 7.
The multi-dimensional average problem for VASS(Z) is in NP.

For hardness of the multi-dimensional average problem, consider configuration-reachability for VASS(Z), which is NP-complete. Configuration-reachability is mutually reducible to coverability for VASS(Z) [21], which in turn is equivalent to dual coverability, i.e., the problem: given a VASS(Z) and two configurations (s, ~0) and (t, ~x), decide whether there is a (finite) subcomputation from (s, ~0) to some (t, ~y), where ~y ≤ ~x. Dual coverability straightforwardly reduces to the multi-dimensional average problem as follows. We construct A' from A by adding a fresh state t* and two transitions labeled with ~0: one from t to t* and a self-loop over t*. Observe that there is a subcomputation from (s, ~0) to (t, ~y) with ~y ≤ ~x in A if and only if there is a computation from (s, ~0) in A' that eventually reaches t* and whose multi-dimensional limit averages are bounded by ~x. To enforce that a computation eventually reaches t*, we use an additional counter that is 0 in the configurations of A and changes to −1 upon reaching t*; the requirement that the limit average of this counter is less than or equal to −1 enforces that a computation eventually reaches t*. In consequence, dual coverability for VASS(Z) reduces to the multi-dimensional average problem for VASS(Z), and hence the latter problem is NP-complete.

▶ Theorem 8.
The multi-dimensional average problem for VASS(Z) is NP-complete.

Observe that the expected average problem for probabilistic VASS(Z) is modular and each dimension can be considered separately. This follows from the fact that each path in a VASS(Z) corresponds to a computation, which is not the case for VASS(N). Furthermore, in this problem we compute the expected value over all computations, and hence it can be considered for each dimension separately. Therefore, we consider VASS that are single-dimensional. We first discuss the strongly-connected case and then generalize our results to all VASS(Z). Let A be a single-dimensional probabilistic VASS(Z, 1) with initial state q_0. We define the expected gain of A, which corresponds to the expected trend of the counter.

The expected gain. The graph of A can be considered as a Markov chain, and using standard methods we compute for each state q its long-run frequency x_q [2, 19]. More precisely, the frequency of q in a subcomputation ξ[1, n] is the number of configurations with state q in ξ[1, n] divided by n. The Ergodic Theorem for Markov chains implies that with probability 1 over a random computation ξ, for every state q, the frequency of q in ξ[1, n] converges to x_q as n tends to infinity. Based on the frequencies x_q, we define the expected gain E(Gain) as the expected counter update provided that the state q is picked at random according to the frequencies x_q and the outgoing transition is picked at random according to the distribution at q, that is:

E(Gain) = Σ_{(q,q',y) ∈ δ} x_q · P(q, q', y) · y

The classification based on E(Gain). We show that if E(Gain) is positive (resp., negative), then the limit average is infinite (resp., minus infinite). However, if the expected gain is zero, there are two cases based on boundedness of configurations: either the gain of every cycle is actually zero, or cycles with a positive gain balance cycles with a negative gain so that the expected gain is zero. We discuss these cases below.

We say that a
VASS(Z, 1) is totally bounded if the gain of each cycle is zero. This property does not depend on the probability distribution over transitions, and we extend it straightforwardly to probabilistic VASS(Z, 1). If a VASS(Z, 1) is strongly connected and totally bounded, then in each reachable configuration the state uniquely determines the counter's value. Otherwise, there exists a cycle with a non-zero gain. This observation allows us to reduce the expected limit-average problem for such VASS to computing the expected long-run reward for Markov chains [2, Chapter 10.5].

Consider a
VASS(Z, 1) with E(Gain) being zero and at least one cycle with a non-zero gain. Observe that E(Gain) being 0 implies that there is at least one cycle with a positive gain and a cycle with a negative gain. Let us consider the simplest probabilistic VASS, which has a single state q and two self-loops labeled with 1 and −1, both with probability 0.5. The distribution of the counter's gain in n transitions, denoted S_n, is related to the binomial distribution B(n, 0.5) in the following way: S_n ∼ 2·B(n, 0.5) − n. It follows that with probability 1 over a random computation ξ, the counter in ξ is neither lower nor upper bounded. Furthermore, we show that with probability 1, a random computation ξ has two subsequences such that the averages along one subsequence tend to ∞ and along the other tend to −∞. To state this formally we define:

LimAvgInf_S(π) = lim inf_{k→∞} Avg_S(π[1, k])
LimAvgSup_S(π) = lim sup_{k→∞} Avg_S(π[1, k])

Now, we present the lemma summarizing the above discussion.

▶ Lemma 9.
Let A be a strongly-connected probabilistic VASS(Z, 1). One of the following conditions holds: (1) E(Gain) > 0, and LimAvgInf_S(ξ) = LimAvgSup_S(ξ) = ∞ with probability 1 (over ξ); (2) E(Gain) < 0, and LimAvgInf_S(ξ) = LimAvgSup_S(ξ) = −∞ with probability 1; (3) A is totally bounded, and for some x ∈ Q, with probability 1 over ξ we have LimAvgInf_S(ξ) = LimAvgSup_S(ξ) = x; and (4) E(Gain) = 0, A is not totally bounded, and with probability 1 over ξ we have LimAvgInf_S(ξ) = −∞ and LimAvgSup_S(ξ) = ∞.

Proof (of (1) and (2) from Lemma 9).
Assume that E(Gain) ≠ 0. The Ergodic Theorem for Markov chains implies that with probability 1 (over ξ), for every state q, the frequency of configurations with state q converges to x_q. For every transition (q, q', y), the frequency of this transition converges to x_q · P(q, q', y). Now, we multiply the frequency of each transition (q, q', y) by its update value y and sum up, obtaining the value of the counter in ξ[n] divided by n. On the other hand, this value converges to E(Gain) as n tends to infinity. It follows that with probability 1 over ξ, the counter's value at position n equals E(Gain) · n ± o(n). Therefore, if E(Gain) > 0, then LimAvgInf_S(ξ) = LimAvgSup_S(ξ) = ∞. Similarly, if E(Gain) < 0, then LimAvgInf_S(ξ) = LimAvgSup_S(ξ) = −∞ with probability 1. ◀

Proof (of (3) from Lemma 9).
Assume that A is totally bounded. In every computation π, if there are two configurations (s, x_1), (s, x_2) with the same state, then the counter's values are the same, x_1 = x_2. To see that, consider a subcomputation from (s, x_1) to (s, x_2) and let p be the path that corresponds to that subcomputation. Then, 0 = Gain(p) = x_2 − x_1. Furthermore, since A is strongly connected and (q_0, 0) is the initial configuration of all computations, in all computations the state determines the value of the counter.

It follows that we can eliminate the counter and consider A as a Markov chain with a limit-average objective with silent moves [10]. In a Markov chain with silent moves, transitions are weighted with rational numbers and a special value ⊥, which is skipped in the computation of partial averages. As for Markov chains, in strongly-connected Markov chains with silent moves the expected limit average is in fact the limit average of almost all paths, and it is our value x. Moreover, the expected value can be computed in polynomial time [10].

More precisely, let h(q) be the value of the counter in state q. We define a Markov chain M corresponding to A as follows: M = ⟨{a}, Q, {q_0}, δ', P', µ⟩ such that δ'(q, q', a) holds if and only if (q, q', x) ∈ δ for some x ∈ Z, P'(q, q', a) = Σ_{x ∈ Z} P(q, q', x), and µ(q_0) = 1. We consider the weighted Markov chain with silent moves ⟨M, c⟩ such that c: δ' → Z ∪ {⊥} is defined as c(q, q', a) = h(q) if q ∈ S is a selecting state, and c(q, q', a) = ⊥ (silent) otherwise. Observe that for every computation π and the corresponding path p (without counter updates), LimAvg_S(π) is precisely the limit average of the costs c of transitions along p. Since for almost all paths p in ⟨M, c⟩ the limit-average cost of p is the expected cost x of ⟨M, c⟩, almost all computations in A have limit average equal to x. ◀

It remains to prove (4) from Lemma 9. We only show that
LimAvgInf_S(ξ) = −∞ holds with probability 1 over ξ, as the proof of LimAvgSup_S(ξ) = ∞ is symmetric. Observe that in a strongly-connected VASS(Z), the event LimAvgInf_S(ξ) = −∞ is a tail event. Therefore, due to Kolmogorov's 0-1 law [17], it has probability either 0 or 1. In consequence, it suffices to show that it has positive probability. First, we show that with positive probability LimAvgInf_S(ξ) is upper bounded.

▶ Lemma 10. Consider a probabilistic VASS(Z) A as in (4) of Lemma 9. There exist c ∈ Q and δ > 0 such that LimAvgInf_S(ξ) < c holds with probability greater than δ.

Proof.
Let A' result from A by assuming that all states are initial, i.e., Q'_0 = Q, and that the initial distribution over states µ' coincides with the long-run frequencies of states, i.e., µ'(q) = x_q.

Suppose that LimAvgInf_S(ξ) = ∞ with probability 1 w.r.t. A. Then it also holds with probability 1 w.r.t. A'. Then, the average counter value at the n-th position converges to ∞ (lim_{n→∞} Avg_S(ξ[1, n]) = ∞) with probability 1 in A'. Therefore, the expected average counter value up to position n, E_{A'}(Avg_S(ξ[1, n])), converges to ∞. However, the expected gain is 0, which implies that in A', at every position n, the expected value of the counter (in A') is 0. It follows that E_{A'}(Avg_S(ξ[1, n])) is 0. A contradiction. ◀

For c ∈ Q, we define X_c as the set of computations π such that LimAvgInf_S(π) < c. Lemma 10 states that there are c ∈ Q and δ > 0 such that P(X_c) = δ. We show that for every d ∈ Q, LimAvgInf_S(ξ) < d holds with probability at least δ.

▶ Lemma 11.
Consider a probabilistic VASS(Z) A as in (4) of Lemma 9. Assume that P(X_c) = δ > 0. Then, for every d we have P(LimAvgInf_S(ξ) < d) ≥ δ.

Proof.
The main idea is to prepend to computations from X_c a subcomputation that decreases the initial counter's value to a. Then, the limit infimum of the averages is below c + a. Furthermore, we show that the set of such finite paths has probability 1.

More precisely, consider a subcomputation ρ from (q_0, 0) to (q_0, a) and π ∈ X_c. We define the join of ρ and π, denoted by ρ ⋈ π, as the computation consisting of first ρ and then π[1, ∞] (π with the first configuration removed) with a added to the counter of all following configurations of π. Observe that the join of ρ and π is indeed a computation. Moreover, the influence of the average of ρ on the whole computation diminishes, and hence LimAvgInf_S(ρ ⋈ π) = LimAvgInf_S(π) + a < c + a.

Let Y be the set of (finite) subcomputations that start in (q_0, 0) and terminate once they reach some configuration (q_0, b) with b < d − c. Since A is strongly connected and not totally bounded, almost surely a random computation ξ reaches a configuration (q_0, b) with b < d − c. It follows that the set of all computations extending some subcomputation from Y has probability 1. Therefore, the set of all joins of subcomputations from Y with computations from X_c has probability at least δ, and all such computations ρ ⋈ π satisfy LimAvgInf_S(ρ ⋈ π) < d, and hence Lemma 11 follows. ◀

Proof (of (4) from Lemma 9).
Lemma 11 implies that the set of computations ξ such that LimAvgInf_S(ξ) = −∞ has positive probability. Since LimAvgInf_S(ξ) = −∞ is a tail event in a strongly-connected VASS(Z), Kolmogorov's 0-1 law [17] implies that its probability is 1, which concludes the proof of Lemma 9. ◀

Lemma 9 implies the following:

▶ Lemma 12. The expected average problem for strongly-connected probabilistic VASS(Z, 1) can be solved in polynomial time.

Proof's ideas.
Consider a strongly-connected probabilistic VASS(Z, 1) A. We can compute the frequencies x_q of the states of A in polynomial time using standard methods [2, Chapter 10.5]. Having the frequencies x_q, we can compute the expected gain E(Gain) of A in polynomial time from the definition.

Assume that E(Gain) = 0. We can check whether A is not totally bounded by checking whether it has a cycle with a non-zero gain, which can be done in polynomial time. Finally, if it is totally bounded, then each state of A uniquely determines the value of each counter, and we can eliminate the counters and label states with counter values. Therefore, the problem of computing the long-run average of almost all computations, denoted by x, reduces to computing the expected long-run reward of a Markov chain with rewards, which can be done in polynomial time [2, Chapter 10.5]. In consequence, we can check all the conditions of Lemma 9 in polynomial time, and hence the result follows. ◀

Let A be a probabilistic VASS(Z, 1). We compute the bottom SCCs (BSCCs) of A (where an SCC B is bottom if all states reachable from B belong to B). If there is a BSCC that does not contain a state from S, then the expected limit average is undefined. Assume that every BSCC contains a state from S and consider the following cases:

If there are two BSCCs, (a) one with a positive expected gain and (b) the other with a negative expected gain, then the expected limit average is undefined.
The expected value is undefined for random variables that attain +∞ and −∞ with positive probability [17].

If there is a BSCC with a positive gain, and every BSCC has (a) a positive gain or (b) zero gain and is totally bounded, then the expected limit average is ∞.

If there is a BSCC with (a) a negative gain or (b) zero gain and not totally bounded, and every BSCC has a non-positive gain, then the expected limit average is −∞.

If all BSCCs have zero gain and are totally bounded, then the expected limit average is finite, and we discuss below how to compute it.

First, we compute all BSCCs B_1, ..., B_m of A. We pick in each of these components an initial state q_i. For each BSCC B_i with its initial configuration (q_i, 0) we compute x_i, which is the expected limit average in B_i. As we observed before, if we join a subcomputation from (q_0, 0) to (q_i, y_i) and some computation from (q_i, 0) with limit average x_i, then the limit average of the resulting computation is x_i + y_i. Therefore, for each state q_i we compute the probability of reaching that state from the initial distribution, denoted p_i, and the expected counter value y_i upon reaching q_i, i.e., the conditional expected counter value under the condition that state q_i is reached. The probabilities p_i can be computed using standard methods for Markov chains [2, Chapter 10.1]. The values y_i can be computed as well, using standard methods for Markov chains with rewards [2, Chapter 10.5]. Observe that the expected limit average of A is given by the following formula:

E_A(LimAvg_S) = Σ_{i=1}^m p_i · (y_i + x_i)

Finally, as we discussed above, we can compute the expected value for each counter separately. In consequence, we have the following:

▶ Theorem 13.
The expected average problem for probabilistic VASS(Z) can be solved in polynomial time.

We first study the average problem for single-dimensional
VASS(N, 1). The reachability problem for VASS(N, 1), which is NP-complete [22], reduces to the average problem for VASS(N, 1), and hence the latter is NP-hard. For the NP upper bound, we show the following:

▶ Lemma 14.
For all VASS(N, 1) A the following holds: there exists a computation π with LimAvg_S(π) ≤ λ if and only if there exist subcomputations ρ, ρ_c such that ρ is from (q_0, 0) to (s, x), where x ≤ λ, and ρ_c is a cycle from (s, x) to itself satisfying the following conditions: (a) Avg_S(ρ_c) ≤ λ; (b) the number of configurations with selecting states in ρ_c, i.e., configurations from S × N, is O(|S| · |λ|); and (c) the value of the counter in each configuration of ρ_c from S × N is O(|S| · |λ|).

Proof.
Observe that, having ρ, ρ_c as above, the computation π = ρ(ρ_c)^∞ is a valid computation and it satisfies LimAvg_S(π) ≤ λ. Conversely, assume that there is a computation π with LimAvg_S(π) ≤ λ. Consider ε > 0 and a cycle ρ_c from π of average value at most λ + ε and of minimal length (all shorter subcomputations have a higher average). Such a cycle exists, as there has to be a configuration (s, y) with y ≤ λ + ε that occurs infinitely often; otherwise, LimAvg_S(π) ≥ λ + ε. Then, we divide π into cycles whose ends are configurations (s, y), and there has to be a cycle with average value at most λ + ε.

We show that this minimal ρ_c has few selecting configurations, which are configurations with a selecting state. Let L be the number of selecting configurations in ρ_c with counter value at most ⌈λ⌉, and let H be the number of selecting configurations with counter value at least ⌈λ⌉ + 1. We lower the average if we replace the values of the configurations of the first type by 0 and of the second type by ⌈λ⌉ + 1, and get

(⌈λ⌉ + 1) · H / (L + H) ≤ λ + ε, and hence H ≤ (λ + ε) / (⌈λ⌉ + 1 − λ − ε) · L.

Assuming that ε ≤ 0.5, we can bound H ≤ 2 · (⌈λ⌉ + 1) · L.

Now, we give a bound on L. Due to the minimality assumption on ρ_c, it cannot contain subcycles with the same properties. Suppose it has a subcycle τ. Due to the minimality assumption, the subcycle has average value exceeding λ + ε. But then, the cycle obtained from ρ_c by removal of τ has a smaller average and a shorter length. A contradiction. It follows that for every state q and for every value x ≤ ⌈λ⌉ there is at most one configuration (q, x) in ρ_c. Therefore, L ≤ (⌈λ⌉ + 1) · |S|, and hence

L + H ≤ (⌈λ⌉ + 1) · |S| + 2 · (⌈λ⌉ + 1) · |S| ≤ 3 · |S| · (⌈λ⌉ + 2).

As previously observed, the minimal value of a configuration counted in L is 0 and in H is ⌈λ⌉ + 1. Suppose that there is a single high value B in ρ_c and all other values take the minimal possible value. Then, we get

(B + (H − 1) · (⌈λ⌉ + 1)) / (L + H) ≤ λ + ε, thus B ≤ (λ + ε) · (L + H) ≤ (⌈λ⌉ + 1) · (L + H),

and that is the bound on the maximal value of a selecting configuration. ◀

We can check in NP whether there exist ρ, ρ_c satisfying the conditions from Lemma 14.

Key ideas.
We non-deterministically pick all selecting configurations (s_1, x_1), ..., (s_m, x_m) from ρ_c. Then, we check reachability from (q_0, 0) to (s_1, x_1), and, for each i < m, reachability over non-selecting configurations from (s_i, x_i) to (s_{i+1}, x_{i+1}), and from (s_m, x_m) to (s_1, x_1). All these reachability checks can be done in NP. Finally, we check that (1/m) · Σ_{i=1}^m x_i ≤ λ. All these checks can be done in NP. The number of configurations m, as well as the size of each configuration, is polynomially bounded due to Lemma 14. In consequence, we have:

▶ Theorem 15.
The average problem for VASS(N, 1) is NP-complete.

We show that the (decision variant of the) multi-dimensional average problem for
VASS(N) is undecidable. A related problem, called the average-value problem, has been studied in [9]. In that problem, the values of all counters in each configuration (q, ~x) are aggregated into a single number, called the cost, by computing the dot product of ~x and a cost vector ~c_q ∈ N^k. The cost vector ~c_q depends on the state q of the configuration. The average-value problem asks whether there exists a computation such that the limit average of the costs is less than or equal to a threshold λ. This problem for VASS(N) with threshold 0 is undecidable [9, Theorem 24]. Threshold 0 in VASS(N) means that whenever a cost vector is non-zero at component i (i.e., ~c_q[i] ≠ 0), the value of counter i has to be 0. This constraint can be expressed in the multi-dimensional average problem, and hence we have:

▶ Theorem 16.
The decision variant of the multi-dimensional average problem for VASS(N) is undecidable.

We first study probabilistic VASS(N) under the strict semantics and give the precise complexity. Next, we consider the relaxed semantics, where we have the exact complexity in the strongly-connected case and a hardness result in the general case.

Consider a probabilistic VASS(N) A under the strict semantics. To check whether every path of A corresponds to a valid computation, we examine each counter i separately and check whether it can reach a negative value from some initial configuration. This can be done in polynomial time with standard reachability analysis. If a negative value of some counter is reachable, then the expected limit average is undefined under the strict semantics. Otherwise, every path in A corresponds to a valid computation and we can consider A as a VASS(Z), as the non-negativity restriction is vacuous for A. Therefore, we apply Theorem 13 and compute the expected value for A. In consequence, we have:

▶ Theorem 17.
The expected average problem for probabilistic VASS(N) under the strict semantics can be solved in polynomial time.

The finite strongly-connected case. Consider a probabilistic VASS(N) A which is strongly connected. We show that if the expected limit average is finite, then the strict and the relaxed semantics coincide. We first assume that A is single-dimensional. We use the classification from Lemma 9 applied to A considered as a VASS(Z, 1):

If E(Gain) < 0, or E(Gain) = 0 and A is not totally bounded, then a random computation ξ (under the VASS(Z) semantics) satisfies LimAvgInf_S(ξ) = −∞, and hence it is not a valid computation of the VASS(N). Therefore, the expected limit average under the relaxed semantics is undefined for A.

If E(Gain) > 0, then LimAvgInf_S(ξ) = ∞. Therefore, if the set of random computations ξ (under the VASS(Z) semantics) that are also valid computations under the VASS(N) semantics has positive probability, then the expected limit average under the relaxed semantics is defined and infinite. Otherwise, it is undefined.

If E(Gain) = 0 and A is totally bounded, then (as we observed in Section 4.2) in each configuration the state uniquely determines the counter's value. Therefore, we can check whether the counter values in all states are non-negative. If this is the case, then all paths correspond to valid computations, the expected limit average is defined and finite, and we can compute it with Theorem 17. Otherwise, observe that in a strongly-connected A every state is visited with probability 1, and hence the expected value is undefined.

Therefore, for the expected limit average under the relaxed semantics to be defined and finite, the expected gain w.r.t. every counter has to be 0 and A has to be totally bounded. Furthermore, we check for each counter independently whether every path corresponds to a valid computation in the VASS(N). Since we consider all paths, we can make these checks independently for all counters. In consequence, we have the following:

▶ Theorem 18.
Deciding whether the expected limit average is defined and finite over strongly-connected probabilistic VASS(N) under the relaxed semantics can be done in polynomial time. Furthermore, if it is, it can be computed in polynomial time.

The general case.
We present only a hardness result. The coverability problem for VASS(N), which is ExpSpace-complete [30], reduces to (the decision version of) the expected limit-average problem for VASS(N). The reduction is rather straightforward, with minor technical difficulties (we need to ensure that the expected value of each counter is finite). In consequence, we have:

▶ Theorem 19. The problem, given a probabilistic VASS(N, k) A under the relaxed semantics, S ⊆ Q and ~x ∈ Q^k, to decide whether E_A(LimAvg_~S) < ~x, is ExpSpace-hard.
Proof.
Consider a VASS(N, k) A, an initial configuration (s, ~x_1), and a target configuration (t, ~x_2). Without loss of generality, we assume that ~x_1 = ~x_2 = ~0. We construct a probabilistic VASS(N) A_P based on A by adding an additional counter k + 1 and a sink state r, which has only a single outgoing transition, a self-loop upon which the counters do not change values, i.e., (r, r, ~0). We add a transition from t to r upon which the counters do not change, and, to ensure that the expected limit average in A_P is defined and finite, we add to every other state of A_P a transition to r that increments counter k + 1. We assign positive probabilities to the transitions so that at every state the transition to r has probability at least 1/2. Observe that a random computation reaches the sink r with probability 1, and the probability that this happens after more than n steps is bounded by (1/2)^n. The value of each counter after n steps is at most n. Therefore, the expected limit average of counters 1, ..., k is bounded by Σ_{i=1}^∞ (1/2)^i · i = 2. Upon reaching r, counter k + 1 has value 0 if the previous state was t, and 1 otherwise. Therefore, the expected limit average is strictly less than (2 + ε, ..., 2 + ε, 1) for every ε > 0 if and only if there is a computation from (s, ~0) to (t, ~y) (with any ~y) in A. ◀

Remark.
The decidability of the problem from Theorem 19 is open.
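The polynomial-time analysis behind Lemma 9 and Lemma 12 can be illustrated in code. The sketch below is not the paper's implementation; the helper names are ours, and it assumes a strongly-connected single-dimensional probabilistic VASS(Z, 1) given as a list of transitions (source, target, probability, counter update). It computes the long-run frequencies x_q, the expected gain E(Gain) from the definition above, and the totally-bounded check via state potentials (consistent potentials exist if and only if every cycle has gain 0 in a strongly-connected graph):

```python
from fractions import Fraction

def stationary_frequencies(states, transitions):
    """Solve x·P = x with sum(x) = 1 by Gauss-Jordan elimination (exact arithmetic)."""
    n = len(states)
    idx = {q: i for i, q in enumerate(states)}
    # Build (P^T - I)x = 0 and replace the last equation by sum(x) = 1.
    rows = [[Fraction(0)] * n + [Fraction(0)] for _ in range(n)]
    for (q, q2, p, _y) in transitions:
        rows[idx[q2]][idx[q]] += Fraction(p)
    for i in range(n):
        rows[i][i] -= 1
    rows[n - 1] = [Fraction(1)] * n + [Fraction(1)]
    for col in range(n):
        piv = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(n):
            if r != col and rows[r][col] != 0:
                f = rows[r][col] / rows[col][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
    return {q: rows[idx[q]][n] / rows[idx[q]][idx[q]] for q in states}

def expected_gain(states, transitions):
    """E(Gain) = sum over transitions (q, q', y) of x_q · P(q, q', y) · y."""
    x = stationary_frequencies(states, transitions)
    return sum(x[q] * Fraction(p) * y for (q, _q2, p, y) in transitions)

def totally_bounded(states, transitions):
    """All cycles have gain 0 iff potentials h with h(q') = h(q) + y are consistent."""
    h = {states[0]: 0}
    stack = [states[0]]
    while stack:
        q = stack.pop()
        for (q1, q2, _p, y) in transitions:
            if q1 != q:
                continue
            if q2 not in h:
                h[q2] = h[q] + y
                stack.append(q2)
            elif h[q2] != h[q] + y:
                return False  # two paths to q2 with different gains: non-zero cycle
    return True
```

Classification then follows Lemma 9: a positive (resp., negative) expected gain yields limit average ∞ (resp., −∞); when the expected gain is 0, the totally-bounded check separates case (3), where the counter can be eliminated and the value computed as a Markov-chain reward, from case (4), where the limit infimum and supremum diverge almost surely. For instance, a single state with two self-loops labeled +1 and −1, each with probability 0.5, has expected gain 0 but is not totally bounded, so it falls under case (4).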
References Parosh Aziz Abdulla, Mohamed Faouzi Atig, Piotr Hofman, Richard Mayr, K. Narayan Kumar,and Patrick Totzke. Infinite-state energy games. In
CSL-LICS 2014 , pages 7:1–7:10, 2014. doi:10.1145/2603088.2603100 . Christel Baier and Joost-Pieter Katoen.
Principles of model checking . MIT Press, 2008. Roderick Bloem, Swen Jacobs, Ayrat Khalimov, Igor Konnov, Sasha Rubin, Helmut Veith,and Josef Widder. Decidability in parameterized verification.
SIGACT News , 47(2):53–64,2016. doi:10.1145/2951860.2951873 . Michael Blondin, Alain Finkel, Stefan Göller, Christoph Haase, and Pierre McKenzie. Reacha-bility in two-dimensional vector addition systems with states is pspace-complete. In
LICS 2015, pages 32–43. IEEE Computer Society, 2015. doi:10.1109/LICS.2015.14. Tomás Brázdil, Krishnendu Chatterjee, Antonín Kucera, Petr Novotný, Dominik Velan, and Florian Zuleger. Efficient algorithms for asymptotic bounds on termination time in VASS. In
LICS 2018 , pages 185–194, 2018. doi:10.1145/3209108.3209191 . Tomás Brázdil, Stefan Kiefer, Antonín Kucera, and Petr Novotný. Long-run average behaviourof probabilistic vector addition systems. In
LICS 2015 , pages 44–55, 2015. doi:10.1109/LICS.2015.15 . Krishnendu Chatterjee, Thomas A. Henzinger, and Jan Otop. Nested weighted limit-averageautomata of bounded width. In
MFCS 2016 , pages 24:1–24:14, 2016. doi:10.4230/LIPIcs.MFCS.2016.24 . Krishnendu Chatterjee, Thomas A. Henzinger, and Jan Otop. Quantitative monitor automata.In
SAS 2016 , pages 23–38, 2016. doi:10.1007/978-3-662-53413-7\_2 . Krishnendu Chatterjee, Thomas A. Henzinger, and Jan Otop. Long-run average behaviorof vector addition systems with states. In
CONCUR 2019 , pages 27:1–27:16, 2019. doi:10.4230/LIPIcs.CONCUR.2019.27 . Krishnendu Chatterjee, Thomas A. Henzinger, and Jan Otop. Quantitative automata underprobabilistic semantics.
Logical Methods in Computer Science , 15(3), 2019. doi:10.23638/LMCS-15(3:16)2019 . Krishnendu Chatterjee and Yaron Velner. The complexity of mean-payoff pushdown games.
J.ACM , 64(5):34:1–34:49, 2017. doi:10.1145/3121408 . Krishnendu Chatterjee and Yaron Velner. Hyperplane separation technique for multidimen-sional mean-payoff games.
J. Comput. Syst. Sci. , 88:236–259, 2017. doi:10.1016/j.jcss.2017.04.005 . Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jérôme Leroux, and Filip Mazowiecki.The reachability problem for petri nets is not elementary. In
STOC 2019 , pages 24–33, 2019. doi:10.1145/3313276.3316369 . Emanuele D’Osualdo, Jonathan Kochems, and C.-H. Luke Ong. Automatic verifica-tion of erlang-style concurrency. In
SAS 2013 , pages 454–476, 2013. doi:10.1007/978-3-642-38856-9\_24 . Javier Esparza. Decidability and complexity of petri net problems—an introduction.
Lectureson Petri nets I: Basic models , pages 374–428, 1998. Javier Esparza and Mogens Nielsen. Decidability issues for petri nets - a survey.
Bull. EATCS ,52:244–262, 1994. W. Feller.
An introduction to probability theory and its applications . Wiley, 1971. Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps. Component-basedsynthesis for complex apis. In
POPL 2017 , pages 599–612, New York, NY, USA, 2017. ACM.URL: http://doi.acm.org/10.1145/3009837.3009851 , doi:10.1145/3009837.3009851 . Jerzy Filar and Koos Vrieze.
Competitive Markov decision processes . Springer, 1996. Pierre Ganty and Rupak Majumdar. Algorithmic verification of asynchronous programs.
ACMTrans. Program. Lang. Syst. , 34(1):6:1–6:48, May 2012. URL: http://doi.acm.org/10.1145/2160910.2160915 , doi:10.1145/2160910.2160915 . Christoph Haase and Simon Halfon. Integer vector addition systems with states. In
RP 2014 ,pages 112–124, 2014. doi:10.1007/978-3-319-11439-2\_9 . Christoph Haase, Stephan Kreutzer, Joël Ouaknine, and James Worrell. Reachability insuccinct and parametric one-counter automata. In
CONCUR 2009 , pages 369–383, 2009. doi:10.1007/978-3-642-04081-8\_25 . Alexander Kaiser, Daniel Kroening, and Thomas Wahl. Dynamic cutoff detection in pa-rameterized concurrent programs. In
CAV 2010 , pages 645–659, 2010. doi:10.1007/978-3-642-14295-6\_55 . Alexander Kaiser, Daniel Kroening, and Thomas Wahl. Efficient coverability analysis by proofminimization. In Maciej Koutny and Irek Ulidowski, editors,
CONCUR 2012 , pages 500–515,Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. URL: http://dx.doi.org/10.1007/978-3-642-32940-1_35 , doi:10.1007/978-3-642-32940-1_35 . Richard M. Karp and Raymond E. Miller. Parallel program schemata.
J. Comput. Syst. Sci. ,3(2):147–195, 1969. doi:10.1016/S0022-0000(69)80011-5 . S. Rao Kosaraju. Decidability of reachability in vector addition systems (preliminary version).In
Proceedings of the 14th Annual ACM Symposium on Theory of Computing, May 5-7, 1982,San Francisco, California, USA , pages 267–281, 1982. doi:10.1145/800070.802201 . Jean-Luc Lambert. A structure to decide reachability in petri nets.
Theoretical ComputerScience , 99(1):79–104, 1992. doi:10.1016/0304-3975(92)90173-D . Jérôme Leroux. Vector addition systems reachability problem (A simpler solution). In
Turing-100 - The Alan Turing Centenary, Manchester, UK, June 22-25, 2012 , pages 214–228, 2012.URL: https://easychair.org/publications/paper/Blr . Jérôme Leroux. Polynomial vector addition systems with states. In
ICALP 2018 , pages134:1–134:13, 2018. doi:10.4230/LIPIcs.ICALP.2018.134 . Richard Lipton. The reachability problem is exponential-space hard.
Department of ComputerScience, Yale University, Tech. Rep , 62, 1976. Ernst W. Mayr. An Algorithm for the General Petri Net Reachability Problem. In
STOC1981 , pages 238–246, 1981. doi:10.1145/800076.802477 . Jakub Michaliszyn and Jan Otop. Average stack cost of büchi pushdown automata. In
FSTTCS2017 , pages 42:1–42:13, 2017. doi:10.4230/LIPIcs.FSTTCS.2017.42 . Charles Rackoff. The covering and boundedness problems for vector addition systems.
Theoret-ical Computer Science , 6(2):223 – 231, 1978. doi:https://doi.org/10.1016/0304-3975(78)90036-1 . Moritz Sinn, Florian Zuleger, and Helmut Veith. A simple and scalable static analysis forbound analysis and amortized complexity analysis. In
CAV 2014, pages 745–761, 2014. doi:10.1007/978-3-319-08867-9_50