Formal Methods for An Iterated Volunteer's Dilemma
Jacob Dineen, A S M Ahsan-Ul Haque, Matthew Bielskas
Department of Computer Science, University of Virginia, Charlottesville, VA 22904 [jd5ed, ah3wj, mb6xn]@virginia.edu
September 1, 2020

Abstract
Game theory provides a paradigm through which we can study the evolving communication and phenomena that occur via rational agent interaction [1]. In this work, we design a model framework and explore the Volunteer's Dilemma with the goals of 1) modeling it as a stochastic concurrent multiplayer game, 2) constructing properties to verify model correctness and reachability, 3) constructing strategy synthesis graphs to understand how the game is most optimally stepped through iteratively, and 4) analyzing a series of parameters to understand correlations with expected local and global rewards over a finite time horizon.

Keywords: Formal Methods · Multi-agent System · Game Theory · PRISM · Public Good Game
We are interested in expressing Volunteer's Dilemma games through the PRISM model checker. This is useful because, with this software, one can easily tune game parameters to build intuition about game dynamics, allowing us to see which setting changes correlate with changes in expected reward for each player. Additionally, PRISM can provide a probabilistic graph that reflects an optimal (or approximately optimal) strategy.

Previous work [2] defines the public good game as a concurrent stochastic game, evaluating optimal strategies under a fixed set of parameters that decide the length of the game and the scaling factor associated with resource distribution. Our proposed model is similar in that it is finite state, i.e., each agent can choose to share a discrete portion of their initial resources, but differs in that the Volunteer's Dilemma is a collective good game. The public good game appears to search for localized reward maximization without explicitly being combative or zero-sum. The novelty is that, to the best of our knowledge, PRISM has not been used to study the Volunteer's Dilemma in the form of an iterated game. By iterated game, we mean that the game repeats so that the environment experiences a soft reset after each round. The initial goal is to check the correctness of our game implementation, so we can guarantee that a win condition is always achievable. Another interest is to tune parameters and plot expected reward as explained above. Finally, we want to see how iterations of the game are reflected in the synthesized graphs. Time permitting, we hope this analysis will guide us toward new questions and experiments that reflect subtleties of the Volunteer's Dilemma game.
One-shot games, e.g., the Prisoner's Dilemma, can typically be modeled with a simple payoff matrix. Players in the game choose a strategy and act concurrently and independently of one another. Extensive form games model game-theoretic scenarios with sequential mechanisms, in which a subsequent player acts once their predecessor makes known their strategy and state transition. Iterated games, or repeated games, are examples of extensive form games and study longer (possibly infinite) time horizons. Both methods have gleaned valuable insight into behavioral economics and rational choice theory, and fuse many respective fields. Stochastic games are argued to be the most reflective of real-world systems, as they are governed by the probabilistic dynamics that many situations incur. These games are typically modeled as extensive form, and arguably produce more interesting results on long-run behavior. These dynamics have been studied in games involving social welfare (public goods), robot coordination, and investing/auction scenarios [3–6].
In game theory, the Volunteer's Dilemma is a game played by multiple agents concurrently that models a situation in which each agent has one of two options:

1. Cooperate: an agent can make a small sacrifice for the public good, i.e., one that benefits everyone.
2. Defect: an agent can wait and free-ride, hoping someone else will eventually cooperate.

The agents make their decisions independently of each other. The incentive for an agent to free-ride is greater than the incentive to volunteer. However, if no one volunteers, then everyone loses. Conversely, if at least one agent volunteers, then everyone benefits. A typical payoff matrix for the Volunteer's Dilemma looks like this:

Table 1: Payoff matrix

            at least one other cooperates   all others defect
cooperate   0                               0
defect      1                               -10

As stated, the agents have more incentive to defect (payoff 1 here) than to cooperate (payoff 0). However, if everyone defects, everyone receives a payoff of -10.

The Volunteer's Dilemma occurs in various natural scenarios. For example, in a group of meerkats, some act as sentries to let everyone else know if there are any predators nearby; in doing so, the sentries become more vulnerable. The dilemma is also important for understanding group behaviors. One particular example we are interested in is a democratic election. Assume an election in which one candidate has many more supporters than all other candidates. The supporters of that candidate have little incentive to go out and vote, since the candidate is predicted to win anyway. However, if all of that candidate's supporters think this way and do not vote, the candidate may end up losing the election.

Figure 1: Volunteer's Dilemma. Left: the number of cooperating agents within the system is less than the total required resources for collective group benefit; in this case, no agent in the system benefits. Right: the number of agents within the system who choose to cooperate is ≥ the total number of resources needed; all agents operating within the system, even those choosing to defect, benefit.
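The two-option structure above is simple enough to capture directly. The following is a minimal Python sketch of the Table 1 payoffs; the function name and encoding are our own illustration, not the paper's code.

```python
def vd_payoff(my_action: str, anyone_else_cooperates: bool) -> int:
    """Return one agent's payoff in the one-shot Volunteer's Dilemma (Table 1)."""
    if my_action == "cooperate":
        return 0  # volunteering always costs the small sacrifice
    # Defecting: free-ride if someone else volunteers, disaster otherwise.
    return 1 if anyone_else_cooperates else -10

# Every cell of Table 1:
assert vd_payoff("cooperate", True) == 0
assert vd_payoff("cooperate", False) == 0
assert vd_payoff("defect", True) == 1
assert vd_payoff("defect", False) == -10
```

The asymmetry is visible immediately: defecting weakly dominates as long as someone else cooperates, yet universal defection is the worst outcome for all.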
Concurrent stochastic multi-player games (CSGs) are an extension of stochastic games (SGs), popularized in the 1950s. SGs generalize to n-player games and present a viable way to model group dynamics in collaborative or competitive games, where the environment changes given feedback from the agents in the system. Beginning from some state $s \in S$, immediate payoff, or reward, is dependent on the actions taken by all agents in the system $v \in V$. Stochastic multi-player games (SMGs) are turn-based and are governed by individual or joint state transitions, where a player chooses from a set of probabilistic transitions to determine the next state [7]. Formally, a CSG can be represented by a tuple not dissimilar from a Markov Decision Process (MDP): $G = (N, S, \bar{S}, A, \Delta, \delta, AP, L)$, where N is a finite set of players, S is a finite set of states, $\bar{S} \subseteq S$ is a set of initial states, A is a finite set of actions available to $v_i$ at time t, $\Delta$ is an action assignment function, $\delta$ is a probabilistic transition function, AP is a set of atomic propositions, and L is a labeling function. In a CSG, similar to how a policy resolves nondeterminism in an MDP, a strategy resolves choice [4]. Our focus is on games where the state transition occurs as a product of all agents acting simultaneously, hence the concurrency. As the environment changes depending on these actions, the choice of a new state is also influenced and, in turn, expected future payoffs are affected. The design of our CSG is detailed in this section. We use PRISM-games [7–9], an extension of the Probabilistic Symbolic Model Checker (PRISM), throughout experimentation.

1. $k$: We refer to rounds of the finite-length game as episodes, $k \in \{1, 2, \ldots, k_{max}\}$.
2. $k_{max}$: The maximum number of episodes, specified as input to the environment.
3. $V$: The set of agents within the system. The game is typically played with $|V| > 2$ in the literature. Here, we fix $|V| = n = 3$, studying the problem through the lens of a 3-player game.
4. $r_{init}$: The initial allocation of a resource. Each agent within the system is initialized with $r_{init} = 100$ at each episode. Resources could be generalized to currency, votes, public goods, etc.
5. $c_i$: The current resource allocation for agent $v_i$ at round $k$; $c_i$ is updated throughout gameplay.
6. $s_i$: The number of shared resources for agent $v_i$ at round $k$. A player can donate fractional increments ($\{0, 0.5, 1.0\}$) of their procured resource allocation, with $s_i \leq c_i$.
7. $r_{needed}$: A specified parameter that dictates the number of resources needed to 'win' a round $k$ of the game. In the traditional game, only a single volunteer is needed. Here, we consider the effects of resource procurement over finite-length runs of the game, e.g., rewards distributed at round $k$ can be used as 'donations' at round $k+1$. In the literature, to reach a winning condition, we generally require donations from strictly fewer than the total number of agents in the system; this holds here, as we require $r_{needed} < n \cdot r_{init}$. This parameter is fixed round over round, i.e., it is not dynamically dependent on the values of the state variables $c_i$.

We discretize the allowable actions, i.e., resource donations, to reduce the search space. We present the action space $\{a_0, a_{50}, a_{100}\} = A$ below.
Table 2: Volunteer's Dilemma Action Space

$a_0$ (Free Ride): The player contributes nothing to the pot of $r_{needed}$. This player is known in the literature as a free-rider: they hope that the total group contribution still results in immediate payoff without sacrificing any of their own resource allocation.

$a_{50}$ (Partial Contribution): The player contributes $\lfloor 0.5 \cdot c_i \rfloor$ resources.

$a_{100}$ (Total Contribution): The player contributes in totality: all available resources are pushed toward $r_{needed}$. An agent taking this action could be seen as altruistic, as they may perceive the good of the many to outweigh the good of themselves.

We present a simple reward structure as follows. At the $k$th round, all agents starting in $s$ concurrently choose an action. For a winning condition to be met, the sum of total contributions from all agents, $\sum_{i=1}^{n=|V|} s_i$, must meet or exceed the predefined threshold $r_{needed}$ (Fig. 2).

Figure 2: Reward function given $r_{needed} = 200$, $n = |V|$, $f = 2$. The plot shows donated resources in excess of resources needed against reward (resources), in hundreds of units. When $\sum_{i=1}^{n=|V|} s_i' < r_{needed}$, the round incurs no reward. When $\sum_{i=1}^{n=|V|} s_i' = r_{needed}$, an optimal joint strategy has been found; because a single agent free-rode in this instance, the number of resources at the end of the round exceeds that at the start of the round. When $\sum_{i=1}^{n=|V|} s_i' > r_{needed}$, a winning condition has been met, but resources were expended that did not need to be. The figure shows the linearly decaying reward function; current resources at the $k$th round are found via the update function in eqn. (3).

The immediate reward passed back to each agent subject to a winning condition can be formulated as:

$$r_i^k = \begin{cases} 0 & \text{if } \sum_{i=1}^{n} s_i^{k\prime} < r_{needed} \\[4pt] \dfrac{r_{needed} \cdot f}{|V|} & \text{if } \sum_{i=1}^{n} s_i^{k\prime} = r_{needed} \\[4pt] -\alpha \left( \sum_{i=1}^{n} s_i^{k\prime} - r_{needed} \right) + \dfrac{r_{needed} \cdot f}{|V|} & \text{if } \sum_{i=1}^{n} s_i^{k\prime} > r_{needed} \end{cases} \tag{1}$$

where $\alpha > 0$ is the slope of the linear decay.

$$R^k = \sum_{i=1}^{n=|V|} r_i^k \tag{2}$$
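The piecewise reward is straightforward to prototype. The Python sketch below is our own illustration, not the authors' code; the over-donation slope `alpha` is left as a parameter, since only the linear shape of the decay is assumed.

```python
def per_agent_reward(total_donated: int, r_needed: int = 200,
                     f: float = 2.0, n_agents: int = 3,
                     alpha: float = 0.5) -> float:
    """r_i^k for one agent, given the summed donations of all agents."""
    if total_donated < r_needed:
        return 0.0                         # LOSS: round incurs no reward
    peak = r_needed * f / n_agents         # exact-match WIN
    if total_donated == r_needed:
        return peak
    # Over-donation: reward decays linearly in the excess resources spent.
    return -alpha * (total_donated - r_needed) + peak
```

With the parameters used throughout the paper ($r_{needed} = 200$, $f = 2$, $|V| = 3$), an exact win pays each agent $400/3 \approx 133$ resources, and any surplus donation strictly reduces that.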
$$c_i^{k+1} \leftarrow \min\!\left( r_{max},\; \left\lfloor \left( c_i^k - s_i^{k\prime} \right) + \frac{R^k}{|V|} \right\rfloor \right) \tag{3}$$

Tables 3 and 4 trace mock gameplay through the CSG; each row records, for one agent and round, the tuple $(v_i, k, r_i^{init}, r_{needed}, a_i, c_i^k - s_i^{k\prime}, r_i^k, c_i^{k+1\prime})$.

Table 3: Exact donation + WIN

Table 4: Over-donation + WIN (Decayed Reward)

Mock Gameplay: Consider a simple model with three players. Table 3 shows the initial run through the CSG. The players transition through the system perfectly and gain the maximum possible global and local rewards; the total resources after a WIN condition is perfectly met are greater than when the round started. In Table 4, at the $k$th round, players have gained resources beyond their initial allocation. A player can donate $p_{a_i} \cdot c_i$ resources at this step. Players 2 and 3 donate half their resources: a player holding 200 resources donates 100, retains 100, receives reward $r_i^k = 57$, and ends the round with 157 resources. They have over-donated, and the reward passed back is less than optimal.

The possible resources procured by a single player are constrained by $r_{max}$. $c_i - s_i'$ is the cost incurred by donating resources (initial resources at the start of a round less the donated resources during the round); this can also be thought of as an expenditure. $f$ is a scaling factor ensuring that players who donate are not penalized more than players who do not donate when a WIN is achieved. $s_i'$ is the state that $v_i$ transitions to, $c^{k+1} = (s_i, c_i \mid a_i)$, given their initial round state and the chosen action. Rewards gained at a time-step are re-aggregated into $c^{k+1}$ and are allowable donations in round $k+1$.

In the literature, particularly in studies involving human psychology, confounding effects may diminish the virtue of altruism in and of itself, as it could be done for ulterior motives [10]. We consider this here by 'punishing' over-donations: if $\sum_{i=1}^{n=|V|} s_i > r_{needed}$, the immediate reward for all agents at the $k$th round, $R^k$, decays linearly according to the piecewise function noted above. This can be seen in Fig. 2.

Static games assume a total reset of the environment at each round of an episode. Here, resources gained at round $k$ can be used at round $k+1$, meaning the choice to donate is not a binary flag, as is usual in models of the Volunteer's Dilemma.
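The update in eqn. (3) can be cross-checked against the recoverable Table 4 row. The small sketch below is ours; the names follow the paper's symbols, but the function itself is illustrative.

```python
from math import floor

def update_resources(c_i: int, s_i: int, R_k: float,
                     n_agents: int = 3, r_max: int = 1000) -> int:
    """Eqn. (3): c_i^{k+1} = min(r_max, floor((c_i - s_i) + R_k / |V|))."""
    return min(r_max, floor((c_i - s_i) + R_k / n_agents))

# Table 4's over-donation row: a player keeps 100 after donating and
# receives an equal share of a group reward of 3 * 57 = 171, ending
# the round with 157 resources.
assert update_resources(c_i=200, s_i=100, R_k=3 * 57) == 157
```

The outer `min` with `r_max` caps runaway accumulation in long games, which is what keeps the state space finite.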
We propose that such modifications will allow for denser analysis of strategy graphs given pre-tuned parameters, as long-run reward aggregation and strategies can be searched for that may shed light on aspects of behavioral theory under specific conditions. Here, we extend a static one-shot version of the Volunteer's Dilemma to an iterated game with the goal of evaluating long-run dynamics (Fig. 1).

The size of the state space is generally represented by $|S| = n^{|A|}$ for static games. The winning conditions here are dynamically dependent on the state of the game at a given time-step: there are more possible joint policies that result in WIN/SAT as the game progresses and resources are procured via reward feedback, and, as such, the state space grows exponentially over time. E.g., in the first round of a game, assuming $r_{init} = 1$, there are only $\binom{n}{r_{needed}}$ transitions that induce a 'perfect' WIN, where no decay is met via over-donation. $|S|$ increases as $c_i \rightarrow r_{max}$. With a parameter set of $\{k_{max} = 4, r_{init} = 100, r_{needed} = 200, n = 3\}$, $|S|$ grows exponentially in $k$. We constrain $k_{max} \leq 4$, as extrapolating to 5 and 6 rounds leads to millions of possible states.

We look at a mostly fixed parameter set: the number of agents $|V| = 3$, the number of initial resources $e_{init} = 100$, the threshold for resources $r_{max} = 1000$, specified at the local level, and the maximum number of rounds iterated through
as $k_{max} = 4$. To ensure that our model is working, we create properties based on a temporal logic, rPATL, which combines PCTL and ATL [11].

Table 5: VDG Probabilistic Reachability Analysis

Round   States           Y      N      M    Y / (Y + N)
1       2 (1 init)       0      2      0    0%
2       55 (1 init)      6      48     1    11.1%
3       1162 (1 init)    141    1009   12   12.3%
4       27065 (1 init)   2724   8766   85   23.7%

With a nonzero probability, we want to ensure that after $k$ rounds, it is eventually possible for an agent to have $c_i \geq e_{init}$, which would mean that rewards were accrued during gameplay and winning conditions were met. It is not a formal requirement that all agents meet this condition individually, however; if it is not met, the piecewise function in conjunction with the resource update step ensures that $c_i^k < c_i^{k+1}$. If at any point during the game $\sum_{i=1}^{n=|V|} c_i < r_{needed}$, it becomes impossible to satisfy this correctness property. Unfortunately, PRISM-games does not support model checking of CTL operators. Ideally, we would want to verify that there exists some state $good = \sum_{i=1}^{n=|V|} c_i > r_{needed}$ across $k$ rounds, such that $E[F\, good]$ evaluates to TRUE. Because this condition is trivially satisfied by the initial state, where $n \cdot e_{init} > r_{needed}$, we look at a case where $2 \cdot r_{needed}$ resources are required, which can be satisfied only after the first round of the game:

$$good = \sum_{i=1}^{n=|V|} c_i > 2 \cdot r_{needed}$$
$$\langle\langle p_1, p_2, p_3 \rangle\rangle\, P_{\geq 1}\, [\, F^{\leq k_{max}+1}\; \text{``good''}\,] \tag{4}$$

In rPATL, the $\langle\langle C \rangle\rangle$ operator specifies a coalition of players [11]. Here, we consider a cooperative game, where players are within a single coalition aimed at maximizing expected reward. The property in eqn. (4) asserts that there exists a joint strategy, or a collection of policies for each agent, such that the probability of reaching the goal state "good" within $k_{max}$ steps is at least 1.00. This verifies to FALSE in the first round, and TRUE thereafter up to $k_{max} = 4$, suggesting a viable model for our purposes.
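Property (4) can also be cross-checked outside PRISM by brute force. The sketch below is our own: it assumes three players, $e_{init} = 100$, donations restricted to fractions $\{0, 0.5, 1.0\}$ of current resources, and a placeholder over-donation slope `ALPHA`, since the decay constant of eqn. (1) is not fixed here. It enumerates all joint actions round by round and asks whether some strategy pushes total resources above $2 \cdot r_{needed}$.

```python
from itertools import product
from math import floor

R_NEEDED, F, N, R_MAX, ALPHA = 200, 2.0, 3, 1000, 0.5
FRACS = (0.0, 0.5, 1.0)  # donation fractions: a0, a50, a100

def successors(state):
    """Yield all successor resource vectors of one concurrent round (eqns 1-3)."""
    for joint in product(FRACS, repeat=N):
        s = [floor(p * c) for p, c in zip(joint, state)]
        total = sum(s)
        if total < R_NEEDED:
            r = 0.0                                   # LOSS
        elif total == R_NEEDED:
            r = R_NEEDED * F / N                      # exact WIN
        else:
            r = -ALPHA * (total - R_NEEDED) + R_NEEDED * F / N
        R = N * r                                     # eqn (2): group reward
        yield tuple(min(R_MAX, floor((c - si) + R / N))
                    for c, si in zip(state, s))

def good_reachable(k: int) -> bool:
    """Can some joint strategy make sum(c_i) > 2 * r_needed within k rounds?"""
    frontier = {(100, 100, 100)}
    for _ in range(k):
        frontier = {nxt for st in frontier for nxt in successors(st)}
        if any(sum(st) > 2 * R_NEEDED for st in frontier):
            return True
    return False
```

Consistent with the verification result, the goal is unreachable at the initial state (total resources $300 \leq 400$) but reachable after one round, e.g., via an exact 200-resource win that injects a group reward of 400.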
We can also observe the probabilistic reachability via the PRISM-games GUI for the noted property, detailed in Table 5. Intuitively, as the game progresses, assuming round-wise SAT of the given property, the number of possible states which result in SAT increases. This is due to more resources being injected into the environment, resulting in more possible combinations of donations which result in reward.

Now that we are sure our model is implemented correctly in PRISM, the next step is to construct properties and verify them so we can formulate a reachability analysis for the CSG. Recall "probabilistic reachability" as referred to in the previous subsection and Table 5. We note that such information, (Y, N, M) for (Yes, No, Maybe), is not directly returned when verifying a property but is instead recorded in the PRISM log. For a probability-based property, the direct result is a Boolean that indicates whether the property holds for at least one state in the model. This corresponds to at least one Yes in the aforementioned (Y, N, M) tuple, and we emphasize that both outputs are situationally useful for understanding the game. For a property defined by maximizing or minimizing a variable/reward, the direct result is the max/min number, while (Y, N, M) has no reason to be recorded. Below we present some property templates we experimented with in PRISM.

$$\langle\langle p_1, p_2, p_3 \rangle\rangle\, R\{\text{``}r_1\text{''}\}_{max=?}\, [\, F\; k = k_{max}+1\,] \tag{5}$$

With our three players in the game, this property returns the maximum reward value $r_1$ assigned to Player 1 when the game ends after $k_{max}$ rounds. Here $r_1$ and $done_1$ are interchangeable, but $r_1$ can be examined for all $k = 1, \ldots, k_{max}$.

$$\langle\langle p_1 : p_2, p_3 \rangle\rangle\, max=?\, \big(\, R\{\text{``}done_1\text{''}\}\,[\, F\; k = k_{max}+1\,] + R\{\text{``}done_{23}\text{''}\}\,[\, F\; k = k_{max}+1\,]\,\big) \tag{6}$$

For this property we have Player 1 aligned against Players 2 and 3, for a total of two coalitions. With $done_{23} = c_2 + c_3 - 2 \cdot e_{init}$, the returned value is the maximum when these two coalitions are separately trying to maximize reward.

$$\langle\langle p_1, p_2, p_3 \rangle\rangle\, P_{\geq 1}\, [\, F\; c_1 + c_2 + c_3 < 200\,] \tag{7}$$

Here we present the first probability-based property. The direct result obtained is 1 if there must always exist a reachable state where the sum of player resources is below 200. In the PRISM log we can examine (Y, N, M) to see the fraction of states where this inequality holds.

$$\langle\langle p_1, p_2, p_3 \rangle\rangle\, P_{max=?}\, [\, F^{\leq k_{max}+1}\; c_1 < c_2\,] \tag{8}$$

This property returns the maximum probability that Player 2 has more resources than Player 1 after $k_{max}$ rounds. This is expected to be 1, since our CSG does not impose limitations on how player resources compare to each other. Similarly, we expect the minimum probability to be zero, and we can obtain the fraction of states that satisfy this from the PRISM log.

Figure 3: An iterated run through the system with variable initial resources. The y-axis represents the total, aggregate group reward through time $k$. The different lines represent varying initial state conditions.

We subject the environment to a property involving global reward maximization, $\langle\langle p_1, p_2, p_3 \rangle\rangle\, R\{\text{``}done\text{''}\}_{max=?}\,[\, F\; k = k_{max}+1\,]$, where the label $done$ specifies the total resources accrued after round $k$ by all agents in the system. The results can be seen in Fig. 3. Interestingly, we see that when players within the system are instantiated with a lesser initial resource allocation, the maximum possible reward at the end of round 4 is greater than in other cases. We believe this relationship to be paradoxical. We can view the slope of the reward plots as an indicator, where smaller $r_{init}$ produces greater rates of change and lesser stabilization as the rounds progress. Because the update step of current resource allocation takes into account expenditures, this leads us to believe that freeriding is a more popular choice of action when initial resources are more scarce. We also note that an optimal strategy is impossible to reach round over round, as aggregate reward falls below the ceiling of possible reward. This is further detailed in Fig. 4, as we show the optimality of strategies producing non-intuitive results across 2 rounds of gameplay with $r_{needed} = 200$. We theorize that group optimality is achieved iff all agents within the system contribute.

Reward Properties:
Although it is simple enough to formulate properties involving a max or min over linear combinations of rewards, PRISM does not support the usage of probability bounds (max/min) or inequalities ($P_{\geq p}$) for such formulas. Luckily, in this game all rewards are of the form $c_i - e_{init}$, with each $c_i$ a player resource variable. Therefore this became a non-issue, as we realized all reward formulas can be substituted where necessary.

Limitations in Multi-Partition Property Analysis:
We note that PRISM's support for CSGs is in beta testing, and additionally the final release may feature limitations to prevent 'obvious' computational intractability. With that in mind, a challenge we faced was the inability to create more than two coalition partitions in properties, e.g., when maximizing the sum of player rewards. Of course, we are still allowed to feature more than two players in a property, but ultimately we lack the capability to fully analyze this game when each player is in a different coalition; thus we work around this by extracting as much as we can from one- and two-coalition properties.
Figure 4: Strategy graphs can be used to find an optimal controller given a property. Here, we consider $\langle\langle p_1, p_2, p_3 \rangle\rangle\, R\{\text{``}r_1\text{''}\}_{max=?}\,[\, F\; k = k_{max}+1\,]$ under the parameter set noted above. The graphs can be read via $[k, c_1, s_1, c_2, s_2, c_3, s_3]$, where branching is determined by the actions taken concurrently by all agents in the system. Some interesting patterns emerge when looking at global reward maximization against optimal strategies. On the left, results are shown for a single round: from the initial state of the game, the optimal strategy is for two players to donate in totality and for one player to partially donate. On the right, we extend this to round 2: here, global reward maximization is achieved as a result of full participation via partial contribution. In both cases, no agent within the system freerides.

Strategy Graphs:
Perhaps the most interesting analysis involves the strategy graphs generated for specific properties. Because our state space grows exponentially due to the mechanisms of gameplay, this is exceedingly difficult. For instance, we can look at a strategy graph for one round over three players, where the strategy synthesis is easy to conceptualize (Fig. 4). As the game progresses, it becomes computationally taxing to conduct value iteration over an exploding state space.
Extensions to Cyberphysical Systems:
While we note a theoretical framework above, there are some obvious ties to free-market and democratic systems. For future work, we see an extension of the Volunteer's Dilemma Game (VDG) to the domain of cyberphysical systems in proximity-based applications focused on optimal route planning and traffic de-congestion, like Google's Waze [12, 13]. This would present a case of evolving geo-proximity-based behaviors and dynamics, as the backbone of these frameworks depends on 'guinea pigs' who willingly and interactively share real-time traffic data. We see these operators as analogous to cooperators in the VDG, and those who utilize this data to avoid adverse traffic conditions as the defectors. In these cases, we speculate that users who may unknowingly enter situations where the outcome could result in diminished rewards (mostly time) gain some intrinsic value from sharing valuable information with other users, while the defectors gain reward by utilizing this information to benefit themselves. Interestingly, such a paradigm presents a case where users are simultaneously collecting information from other users and sharing their own.
We have presented a viable, working model for studying optimal and sub-optimal behaviors in multi-agent systems under probabilistic dynamics. We have also introduced and verified properties to check the correctness of our holistic approach, as well as analyzed our reward mechanism under various conditions. Our analysis focused mainly on a single parameter set in which a number of variables were fixed. The exponential growth of the state space made it difficult to directly induce collaborative strategies that maximize, or minimize, long-run rewards. Our model was presented as a concurrent stochastic game in which players were guided to cooperate with one another. We believe this approach to be realistic, but would like to dive into the literature regarding the free-rider problem. It is entirely possible that agents in a real-world system do not act cooperatively in the presence of such a dilemma. Perhaps, in the case of a democratic voting schema, a coalition of agents gains intrinsic satisfaction from minimizing the collective reward of an opposing coalition. This could be introduced by partitioning coalitions in the form of subgraphs in a graphical dynamic system. There, it would be of interest to explore such games under a combative approach where coalitions oppose one another.
References

[1] Michael Ummels. Stochastic Multiplayer Games: Theory and Algorithms. PhD thesis, RWTH Aachen, Germany, January 2010.
[2] Public Good Game. Online. http://prismmodelchecker.org/casestudies/public_good_game.php.
[3] Oliver P. Hauser, Christian Hilbe, Krishnendu Chatterjee, and Martin A. Nowak. Social dilemmas among unequals. Nature, 572(7770):524–527, 2019.
[4] Marta Kwiatkowska, Gethin Norman, David Parker, and Gabriel Santos. Equilibria-based probabilistic model checking for concurrent stochastic games. In International Symposium on Formal Methods, pages 298–315. Springer, 2019.
[5] Gabriel Santos. Equilibria-based probabilistic model checking for concurrent stochastic games. In Formal Methods – The Next 30 Years: Third World Congress, FM 2019, Porto, Portugal, October 7–11, 2019, Proceedings, volume 11800, page 298. Springer Nature, 2019.
[6] Péter Biró and Gethin Norman. Analysis of stochastic matching markets. International Journal of Game Theory, 42(4):1021–1040, 2013.
[7] Taolue Chen, Vojtěch Forejt, Marta Kwiatkowska, David Parker, and Aistis Simaitis. PRISM-games: A model checker for stochastic multi-player games. In Nir Piterman and Scott A. Smolka, editors, Tools and Algorithms for the Construction and Analysis of Systems, volume 7795 of Lecture Notes in Computer Science, pages 185–191. Springer, Berlin, Heidelberg, 2013.
[8] M. Kwiatkowska, D. Parker, and C. Wiltsche. PRISM-games 2.0: A tool for multi-objective strategy synthesis for stochastic games. In M. Chechik and J.-F. Raskin, editors, Proc. 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'16), volume 9636 of LNCS. Springer, 2016.
[9] T. Chen, V. Forejt, M. Kwiatkowska, A. Simaitis, A. Trivedi, and M. Ummels. Playing stochastic games precisely. In Proc. 23rd International Conference on Concurrency Theory (CONCUR'12), volume 7454 of LNCS, pages 348–363. Springer, 2012.
[10] Nikolaos Askitas. Selfish altruism, fierce cooperation and the predator. Journal of Biological Dynamics, 12(1):471–485, 2018. PMID: 29774800.
[11] Marta Kwiatkowska, David Parker, and Clemens Wiltsche. PRISM-games: verification and strategy synthesis for stochastic multi-player games with multiple objectives. International Journal on Software Tools for Technology Transfer, 20:1–16, 2017.
[12] Noni Noerkaisar, Budi Suharjo, and Lilik Noor Yuliati. The adoption stages of mobile navigation technology Waze app as Jakarta traffic jam solution. Independent Journal of Management & Production, 7(3):914–925, 2016.
[13] Susana (Shoshana) Vasserman, Michal Feldman, and Avinatan Hassidim. Implementing the wisdom of Waze. In Proc. 24th International Joint Conference on Artificial Intelligence (IJCAI'15), 2015.