Promises, Impositions, and other Directionals
Jan A. Bergstra
Informatics Institute, University of Amsterdam
Science Park 904, 1098 XH Amsterdam, The Netherlands
email: [email protected], [email protected]

Mark Burgess
CFEngine
email: [email protected]

Abstract
Promises, impositions, proposals, predictions, and suggestions are categorized as voluntary co-operational methods. The class of voluntary co-operational methods is included in the class of so-called directionals. Directionals are mechanisms supporting the mutual coordination of autonomous agents. Notations are provided capable of expressing residual fragments of directionals. An extensive example, involving promises about the suitability of programs for tasks imposed on the promisee, is presented. The example illustrates the dynamics of promises and more specifically the corresponding mechanism of trust updating and credibility updating. Trust levels and credibility levels then determine the way certain promises and impositions are handled. The ubiquity of promises and impositions is further demonstrated with two extensive examples involving human behaviour: an artificial example about an agent planning a purchase, and a realistic example describing technology mediated interaction concerning the solution of pay station failure related problems arising for an agent intending to leave a parking area.

Contents

Promise dynamics: examples involving computer program usage
A.1 Promises about motivation, preferences, and activity planning
A.2 Possible extensions of the promise bundle
B Case study: a recurring parking exit problem
B.1 Unproblematic complications
B.2 A problematic complication: parking exit problem case I
B.3 Trust and credibility
B.4 Lessons learned for A as a user of RPPC at P7
B.5 Aspects of promise dynamics
B.6 The parking exit problem and informal logic
B.6.1 From induction to deduction
B.6.2 Induction and conduction on top of deduction
B.7 Parking exit problem case II

C Promise dynamics continued
C.1 Extending the trust scale and mechanism
C.1.1 Aggregates for specific accumulation of appreciation
C.1.2 The meaning of aggregate levels, an outline
C.1.3 Product, task, user, and provider attributes
C.1.4 Expanding the trust network and mechanisms
C.2 Balancing imposition strength and promiser trust levels
C.3 Reputation infection
C.3.1 Letter of recommendation (LOR) based reputation flow
C.3.2 Third party survey based reputation infection
C.4 Informal logic
Introduction
The objectives of this paper are diverse, including the following:

1. To discuss in some detail the dynamics of promises, assuming the statics of promises to have been assessed to some satisfactory extent in [3].
2. To add impositions to the theory of promises, thus achieving higher symmetry. An imposition is an invitation for voluntary cooperation. Impositions are most plausible in contexts where the target agent of an imposition has in advance promised the corresponding source of the imposition its willingness to receive and to subsequently effectuate a sufficiently large class of impositions.

3. To add proposals, predictions, suggestions, and warnings as methods for inducing voluntary cooperation, similar to though different from promises and impositions. Incorporation of these further methods allows for a more flexible application of promises and impositions in human management and organization. For instance, predictions play a role when knowledge about an environment must be shared between different agents. Suggestions and proposals are exchanged during preliminary stages in advance of an exchange of promises and impositions.

4. To collect promises, impositions, proposals, predictions, suggestions, and warnings into a category of so-called co-op (short for co-operational) methods, which share to a large extent options for formal description as well as life-cycle models and method dynamics. Penalties for non-compliance play no role in the setting of voluntary co-operational methods. Below, voluntary co-op methods will be referred to as "directionals". Directionals constitute a larger class of methods for achieving coordination between autonomous agents, including messages, hints, smiles, outcries, and alarms.

5. To provide examples of promises and other directionals that demonstrate the interaction with trust maintenance, which in turn is the key to promise dynamics; and to collect a number of additional attributes of promises that are helpful for an understanding of promise dynamics.

We follow the initial development of [7, 8, 10] for an approach to a theory of promises with a principled emphasis on agent autonomy. (The role of autonomy for agents acting upon the reception of decision outcomes has not been brought into focus in the first author's work [1] on outcome oriented decision taking, and in those papers the possibility that decision outcomes imply obligations for other agents is left open.) A simple notation for promises involves four components: a promiser, a promisee, a promise type, and a promise body.
A design decision that underlies promise theory is to liberate the concept of a promise from the connotation, or implicit expectation, that a promise correlates one to one with an obligation. In [3] several arguments have been put forward why that so-called non-obligationist conception of promising may be of practical value, both inside and outside computing.

The main reason for disentangling promises from obligations is that in a world of autonomous agents promising is unproblematic, whereas obliging is not. One agent imposing an obligation on another agent may be understood as an impairment of the second agent's autonomy. This objection against promises being strongly coupled with obligations depends on a conception of obligations that may be questioned. Although the decoupling of promises and obligations has been dealt with extensively in [3], we feel that more ought to be said. Below we will outline how some promissory obligations may be understood as bundles of promises.

Impositions and impository obligations
Another common source of obligations arises when one agent commands another to perform some action which "must be" performed. However, just as with promises, one may remove the connotation of obligation from a command. We will speak of an imposition instead. An imposition is issued by an impositioner to an impositionee.

Unfortunately "imposition" has a negative connotation, because an imposition is generally understood to be unwanted by its target agent. Nevertheless there is a striking congruence between impositions and promises, because source, target, scope, type, and body each make sense in similar ways.

Just as to each promise a promissory obligation can be found which grasps what is obliged to the promiser (which may be nothing), one may find an impository obligation for an imposition, which collects that which becomes an obligation to the impositionee upon the imposition being issued. In many cases the impository obligation is empty.
After undoing "imposition" from its connotation of unfairness, if only in the context of promise theory, an imposition becomes a symmetric counterpart to a promise. In order to have a perfect symmetry the preferable interpretation is thus:

1. Promise: act of promising, event of promise issuing, resulting in a promise outcome.

2. Promise outcome: essence, or merely description, of what has been promised. The promise outcome is specified as a component of a promise statement in the so-called promise body. A promise outcome equals an imposition enacted by the promiser towards itself.

3. Imposition: the result of an act/event of imposing. An imposition may be either external or it may be self-imposed (a self-imposition created upon a promise).

4. Imposition event/action: the source agent tells the target agent what to do, what to achieve, or what state of affairs the target must see to it is reached.
Now the vital step is to mobilize an "imposition" for interaction between autonomous agents. This requires a number of assumptions:

• There is no underlying hierarchical structure that explains or governs who may impose on whom. A may say to B, "please open that door for me", and that can happen for all agents A and B (assuming these agents deal with doors).

• An imposition of A on B should not be understood as an attack by A on B. Rather, an imposition constitutes an attempt by A to induce voluntary cooperation for a certain objective or course of events from B.

• Suppose that A imposes p on B; then B may degrade its respect for A, respect being an additional status feature besides credibility and trust. If B fails to comply with p then A may degrade either its respect for or its trust of B, and conversely A may upgrade these if B's subsequent behavior achieves p.

• B may be happy to comply with an imposition p issued by A. For instance if A says to B: "our chairperson C is delayed, and you chair the opening session of this conference now", then B might be honored and very willing to do so. B might also be embarrassed; there is no way to tell in advance.

• For an agent B the collection of all open impositions (as were issued by any agents, including B by way of its promises) represents a to-do list which B can act upon, depending on its own preferences, which take into account the impact of B's actions on other agents' respect and trust for B, as well as B's reputation in general.

• A's issuing an "unfair" imposition on B is reflected by, potentially, a decrease of respect and/or trust in A from B, and from other agents in scope of the act of imposition. There is no need to have a definition or description of fairness or unfairness other than what may be derived from how different agents update their trust and respect of A upon the imposition being issued. Promise and imposition are equally neutral.

Various different imperatives indicate different strength levels of impositions: you must immediately X, you must under all circumstances perform X, you must X, I request you to X, you should X, you ought to X, can you please X now, can you please X, I would appreciate if you X, you are advised to X.

The view on impositions put forward above may be termed "neutralism with respect to impositions", where neutrality is meant to replace the connotation of unfairness for imposition that all dictionaries indicate.

In [3] a viewpoint towards promises has been worked out that was termed non-obligationism. This view implies that a promise need not be characterized by its promissory obligation. As a stronger view on the independence of the concept of promises from obligations, strong non-obligationism was put forward as the viewpoint that the concept of promises may be introduced without making any use of the concept of an obligation. Non-obligationism being somewhat problematic in certain examples, restricted non-obligationism was put forward as the viewpoint that for a large class of promises, sufficiently large to be of vital importance for the coordination of multi-agent systems, (i) obligations are not needed as a foundational basis, (ii) promissory obligations need not characterize the essence of a promise, and (iii) promissory obligations may in turn be explained in terms of combinations of promises.

Similarly, non-obligationism in the case of impositions amounts to the viewpoint that an imposition need not be characterized by its impository obligation.
Here are some examples of impositions (from A to B) that make perfect sense among autonomous agents. It is reasonable that the imposition comes along with some motivation. In many cases an imposition can alternatively, though not always more convincingly, be understood as a conditional promise.

1. You must pay 50 BTC on Bitcoin account X within one week (that is, before date d), otherwise your web sites (with addresses w1, w2) will suffer a DDoS attack launched from 10.000 bots for the duration of two weeks, starting at d.

2. You must pay 10.000 EUR on account Y before date d to settle the debt caused by event E.

3. You must push the third button from above (as part of a protocol for entering a secured site).

4. You must now take the return money from the cash register outlet.

5. In the coming week you should issue a formal request for reimbursement of your travel costs of last month (so that the money can be transferred to you in time).

6. You must be careful not to drive too fast because the police are watching closely a few kilometers from here.

7. You must not take the usual way to your work, in order to avoid a massive traffic jam.

8. You must be home at 8.00 PM when dinner starts (our guests arrive at 7.30 and they must leave around 9.30 PM, so please be on time).

9. Please send us your name and the usernames and passwords for your gmail accounts so that we can help you to improve the structure of the classification of your email history. (We are well-known service providers for people having difficulties with dealing with too much email; please check our credentials on the following site.)

Promises and impositions as instances of Directionals
We will use the term directional to indicate a directed communication between two agents, within a given scope consisting of the originating agent, perhaps the target agent, and zero or more other agents. Promises and impositions are classes of directionals. Besides promises and impositions we will distinguish four more classes of directional utterances.
Suggestion: an option for a course of action or a state of affairs to be achieved, which is issued by A to B. A suggestion expresses that A has in mind some actions or sequence of events, or state of affairs, which A assumes to be possible or reachable for B and which A expects B to contemplate as an option. A suggestion of A to B may or may not be effectuated by B.

Warning: a warning issued by A to B is a suggestion from A to B the effectuation of which A considers to be not fruitful, either from its own perspective or from B's perspective.

Proposal: a suggestion from A to B the effectuation of which A considers to be fruitful, either from its own perspective or from B's perspective.

Prediction: a suggestion (from A to B) that A considers likely to occur, irrespective of B's behavior. Predictions encode A's knowledge about the environment and may be used to transfer that knowledge to B. The idea is that predictions can be used to convey reasoning patterns to other agents. Such reasoning patterns can be helpful for agents that must determine expectations generated from promises.

Here is an example. A suggestion from A to B may induce the occurrence of a proposal from B to A, which in turn gives rise to a promise from A to B that A will cooperate with B when B tries to carry out its proposal. Subsequently B may promise to A that it will try to carry out the proposal and that it will make use of A's last promise.

In Appendix A we provide an artificial example involving a large family of promises, proposals, and suggestions. Each directional will have primary side effects on the target agent's state of cognition (mind) and secondary side effects on the level of trust that the target agent has in the source agent. (Searle's directives and commissives will both qualify as subcategories of the directionals.)
Promise, imposition, warning, proposal, suggestion, and prediction each qualify as methods, in the sense of object oriented programming, to be applied to a target agent by a source agent. As such they form a superclass (in an object orientation style class hierarchy, though a subset in a set theory style class hierarchy) of the conceivable directionals (which may also include praise, criticism, and signaling excitement or boredom, etc.). Because the target agent is always assumed to operate in a voluntary fashion upon being influenced by one of these methods, these methods together constitute the category of voluntary cooperation oriented agent coordination methods, which we will refer to as voluntary co-operational methods, or more briefly as voluntary co-op methods. Below we will often speak of directionals instead of voluntary co-op methods.
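Since the object oriented reading is invoked explicitly, the taxonomy can be rendered as a class hierarchy. The following is a minimal sketch in Python; all class and field names are our own illustrative choices, not notation fixed by the paper:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Directional:
    """A directed communication from a source agent to a target agent."""
    source: str                          # originating agent
    target: str                          # target agent
    body: str                            # what is suggested / promised / imposed
    scope: FrozenSet[str] = frozenset()  # other agents observing the utterance

class VoluntaryCoOpMethod(Directional):
    """Directionals inviting voluntary cooperation; no penalties attached."""

class Promise(VoluntaryCoOpMethod): pass
class Imposition(VoluntaryCoOpMethod): pass
class Suggestion(VoluntaryCoOpMethod): pass
class Warning(Suggestion): pass      # a suggestion deemed not fruitful
class Proposal(Suggestion): pass     # a suggestion deemed fruitful
class Prediction(Suggestion): pass   # expected to occur irrespective of the target

class Praise(Directional): pass      # a directional outside the co-op category

# Example: p promises q, with r also in scope.
m = Promise(source="p", target="q",
            body="P is adequate for task U",
            scope=frozenset({"p", "q", "r"}))
```

The set-theoretic remark in the text is visible here: every instance of a voluntary co-op method is also a Directional, so the co-op methods form a subset of the directionals even though VoluntaryCoOpMethod sits above the six concrete classes in the inheritance hierarchy.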
Promise dynamics primarily refers to how a single promise moves through its life-cycle. More distantly, promises interact in complex ways, as agents maintaining various promise bundles related to different threads of activity may generate new promises, or rather incentives to issue new promises, as a consequence of reflection upon the existing package of promises, each promise being in its own state of its dedicated life-cycle.

Analyzing the internal activity of agents that triggers their preparation and production of new promises is not a part of promise theory per se. Rather, promise theory provides a language that facilitates system description while remaining uncommitted to that kind of in-depth analysis of individual agent behavior.

The central life-cycle of a promise p indicates that after it has been issued it persists as a cognition in the minds (memories) of agents involved until one of the following events occurs:

1. p is (observably) kept,
2. p is broken, that is, demonstrably not going to be kept,
3. p is withdrawn by the promiser, or
4. p becomes outdated (faded out).

The peripheral life-cycle of a promise involves modifications of credibility and of trust assigned to the promiser, as well as to the promise, by agents in scope. The peripheral life-cycle also involves acts of determination of plausibility (probability, expectation value) of various possible events (by agents in scope, and in particular by the promisee) given certain trust levels. These plausibilities are the key factor in the reduction of uncertainty that promising may effectuate.

A promise once issued moves through the stages of a promise life-cycle. Each of the entities of the entity classes just mentioned moves through a corresponding life-cycle as well.

1. Promise preparation.
2. Promise issuing and corresponding promise fragmentation and distribution.
3. The following steps take place concurrently for all agents in scope (each agent taking care of its own instance of a promise fragment carrying the local name just mentioned/generated):
(a) promise outcome credibility assessment,
(b) promiser trust assessment relative to promise outcome,
(c) promise based expectation generation,
(d) promise fading update, alternating with promise fulfillment assessment,
(e) repetition of the steps 3b, 3c, and 3d after each update of promise outcome credibility and promiser trustworthiness, until the fading threshold is reached or until the promise fulfillment assessment turns positive,
(f) final update of promise credibility and promiser trust,
(g) local (for the agent) promise termination.
These steps are carried out by each agent concurrently (that is, interleaved for the same agent, concurrently with other agents) with an ongoing reputation production and maintenance (that is, exchange and update) process performed by each agent (also outside the scope of this particular promise). Reputation updates are caused by incoming messages reporting trust modification steps enacted by other agents. In particular, promiser reputation influences trust assessment, which in turn influences promise fulfillment expectation assessment.
4. Global termination of the promise once the last promise fragment (locally) carrying its (global but otherwise secret) name has expired.

Similar life-cycle schemes can be given for other voluntary co-op methods. We will not write out these matters in detail, with the understanding that they are rather straightforward. A sketch of the central life-cycle as a small state machine follows.
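As a minimal illustration (our own rendering, not a construct from the paper), the four terminating events of the central life-cycle can be modeled as the absorbing states of a tiny state machine:

```python
from enum import Enum, auto

class PromiseState(Enum):
    ISSUED = auto()      # persists as a cognition of the agents involved
    KEPT = auto()        # observably kept
    BROKEN = auto()      # demonstrably not going to be kept
    WITHDRAWN = auto()   # retracted by the promiser
    OUTDATED = auto()    # faded out

TERMINAL = {PromiseState.KEPT, PromiseState.BROKEN,
            PromiseState.WITHDRAWN, PromiseState.OUTDATED}

def step(state: PromiseState, event: str) -> PromiseState:
    """Advance the central life-cycle; terminal states absorb all events."""
    if state in TERMINAL:
        return state
    transitions = {
        "kept": PromiseState.KEPT,
        "broken": PromiseState.BROKEN,
        "withdrawn": PromiseState.WITHDRAWN,
        "faded": PromiseState.OUTDATED,
    }
    return transitions.get(event, state)  # unknown events leave the promise pending
```

The peripheral life-cycle (credibility, trust, and expectation updates) runs alongside this machine and is taken up in the CRAM and TRAM sketches later in the paper.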
If A promises (p1) B that "A is capable of performing an action c", that promise may reduce uncertainty for B. Indeed, upon noticing p1, B knows, modulo its trust in A, that c is doable.

If A promises (p2) B, with a significant scope S including B, that "A will not perform action c", then, if B prefers c to occur, it needs to look for other ways, for instance performing c itself. Clearly B's uncertainty is reduced once more by this second promise. We assume that some agents in S won't applaud that c takes place, even if B is expected to be happy about that event.

Now suppose that A promises (p3) B that "A will support B if B performs c", with only B in scope of promise p3. At once B needs to be very careful. If B fails to notice that the third promise has a very small scope, B may judge that an additional incentive (namely the increased support from A after B has performed c) has arisen to perform c itself. If, however, B takes notice of the reduced scope of p3, B must take the possibility into account that A deceives B and will not keep its promise p3, and will not show its support after B performs c. (Here uncertainty pops up in the form of a potential misunderstanding: while B thinks of "support by A" as being visible to other members of S, A may only think in terms of support shown to B in private.)

At this stage B promises (p4) A with scope S that "B will perform c provided that A promises B, now with scope S, that it will support B once B has performed c". If a counter-promise from A to B with scope S, that "it will support B when performing c", is issued by A, then B finds a significant reduction of its uncertainty and may proceed with performing c (assuming that support from A will balance opposition from members of S). Otherwise B has obtained very valuable information: A may not be trustworthy.

We find that promises are helpful for reducing uncertainty about what can be done, and what will be done and by whom, while at the same time the mechanics of promises also creates new forms of uncertainty, in particular concerning trustworthiness. In some cases such forms of uncertainty can in turn be remedied by way of promising. Certainty as a concept requires much more philosophical analysis than we can provide in this brief paper. We refer to [9] for an account of certainty that is compatible with the aims of this paper.

Besides promises, obligations can be a tool as well for the reduction of uncertainty, because what is obliged may be likely to happen. Therefore, following the line of [3], we will continue with an analysis of the relation between promises and obligations.

An imposition issued by A on B with scope S (containing B) may reduce uncertainty in all agents in scope, and in particular in B, about what B will intend to accomplish. It may also reduce uncertainty about which agent will perform a certain task that many agents expect to lie ahead of at least some of them. Predictions may reduce uncertainty about an environment.
Suggestions may reduce uncertainty on how to initiate planning, and proposals may reduce uncertainty about preferences between a variety of suggested options.

A cascade of voluntary co-op methods exchanged between a group of agents may increasingly reduce uncertainty, until each agent feels confident that its plans will be supported by peer agents according to promises and that occasional impositions will meet an understanding attitude. A bundle of predictions may set a stage in which a subsequent bundle of suggestions invokes a plurality of proposals, which in turn are detailed into a network of promises, some of which prepare agents for the exchange of impositions during operation in real time.

In [3] non-obligationism has been proposed as a preferred perspective on promises. This means that promises are primarily viewed in their capacity as mechanisms for reducing uncertainty and for inter-agent management of credibility, trust, and expectations. When promises are used as a method for specification and explanation of artificial agent based distributed systems, obligations need not appear at all, and if only for that reason a non-obligationist perspective on promises is profitable, because it allows one to do without obligations altogether. At the same time, the management of credibility and trust, as well as the determination of quantified expectations for events and states that are sensitive to various promises from an existing promise bundle, needs to be realized by means of sophisticated AI software.

When considering promises as a tool for management science, primarily aimed at organizing distributed human behavior, the situation is quite different. On the one hand, human agents seem to have built-in capacities for credibility assessment and for trust assessment and maintenance, as well as for the generation of qualified, if not quantified, expectations. In addition, however, the existence of obligations, however defined, is a fact of life for human agents. Promise theory can contribute to management science by making promises available in a systematic way based on a non-obligationist interpretation. When aiming at a contribution to management science, the interplay between promises and obligations requires careful investigation, which cannot be simplified by disregarding obligations entirely.
Imagine the following chain of promises:

1. B offers a service s delivered in units (1 hr sessions at B's office) at a price p EUR per unit, to be paid after successful delivery of the service. The offer is made as a promise (r1) to a scope including agent A.

2. A promises (r2) B that A is willing to use two units of B's service s and to compensate B by paying B an amount 2·p EUR within one week after the final session related to the delivery of this service, and upon having received a written (electronic) request for that payment from B.

3. B promises (r3) A to provide two sessions implementing both units of service at successive times t and r.

4. A promises (r4) to appear at B's office twice, at t and r, in order to consume the successive units of B's service.

Is it the case that any of these promises has engaged A in an obligation to pay B? We find that there is no such obligation; instead, only actually consuming both units of s engages A in an obligation to pay.

The obligation for A to pay an amount to B seems to originate from promise r2 or perhaps from promise r4. It is not directly linked to either of these promises, as this obligation is still somehow conditional. For that reason it may be called a pseudo-promissory obligation rather than a promissory obligation. Assuming that it is clear what it means that, after having consumed 2 units of s at the agreed timeslots, A is obliged to pay B, and that such an obligation arises in that manner, the link with promise issuing still is an indirect one only.

This connection between promises and obligations is very common: a conditional promise expresses that once a condition is satisfied (which requires one or more actions from either party subsequent to the issuing of the promise) that state of affairs creates an obligation.

In the above example the simplest way for A to understand the obligation at hand is that it coincides with (consists of) a bundle of promises issued by B:

1. B promises A that after having received the required payment from A (or on behalf of A) in due time, B will not send any further requests for payment connected to that particular episode of service delivery from B to A,

2. if no payment is performed by A, B will issue another request, adding the cost of so doing plus some amount serving as a penalty, and

3. if, after some fixed period u, yet no payment is made, B will sell (rather than outsource) the cashing of its once more increased claim on A, at some discount, to a third party (another agent) who will seek to obtain the payments from A on his behalf.

It seems pointless to ask for a deeper sense of obligation than can be specified by means of this bundle of promises, because concurrently A may be complaining about B's poor service and ask for a promise by B to nullify A's costs, or even to provide compensation because A's problems have not been solved but have rather been worsened.

Requests for payment may be understood as impositions; like promises, such impositions may be credible or lacking credibility, stem from an agent that is considered trustworthy to some yet unknown degree, may be credible and deceptive at the same time, and so on.
If one insists on the production of one or more "obligations" as a side-effect of a promise being issued, the idea of pseudo-promissory obligations is that a promise is supposed to be implicitly extended with one or more conditional promises in the way exemplified by the case just mentioned.

Thus working with pseudo-promissory obligations involves the application of certain conventions for expanding promises to promise bundles that contain packages of conditional promises, representing what is often viewed as obligations produced by a promise but what is now seen as a special class of obligations that can in fact be equated to (or reduced to) bundles of conditional promises.

Looking at the matter from the perspective of obligations, rather than from the perspective of promises or impositions, we are dealing with a special class of obligations which merits some further attention.
These considerations lead to the following definition of a relevant subclass of obligations: obligations the content of which consists of a pattern of promises that results as a side effect from issuing a promise. The fact that the pattern arises is likely to be the content of previous promises. Such obligations will be called promise pattern based obligations, or PPB-obligations for short.

As long as one thinks of PPB-obligations, a non-obligationist understanding of promises provides a consistent viewpoint. Speaking of obligations as shorthands for underlying promise patterns or bundles may be helpful and efficient. Because meta-promises, that is, promises to (conditionally) issue promises, are promises as well, promises may create PPB-obligations without contradicting the non-obligationist view of promises.

Not all obligations are PPB-obligations, and the interaction between promises and non-PPB-obligations provides an area for further research. However, restricting attention to PPB-obligations allows for a useful extension of the non-obligationist view of promises to the practice and science of management of human operations. At the basis of the application of promise theory to management science and practice lies the use of promises that do not create any non-PPB-obligations. There seems to be ample room for such applications. A small sketch of an obligation represented as a bundle of conditional promises is given below.
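To make the reduction concrete, here is a minimal sketch in Python of an obligation represented as a bundle of conditional promises, following the payment example above. The representation and all names (ConditionalPromise, PPBObligation, the condition keys) are our illustrative choices, not constructs defined in the paper:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ConditionalPromise:
    promiser: str
    promisee: str
    condition: Callable[[dict], bool]  # predicate over the observed state of affairs
    body: str                          # what is promised once the condition holds

@dataclass
class PPBObligation:
    """A promise pattern based obligation: a bundle of conditional promises."""
    promises: List[ConditionalPromise]

    def active_bodies(self, state: dict) -> List[str]:
        """The promise bodies whose conditions are currently satisfied."""
        return [cp.body for cp in self.promises if cp.condition(state)]

# The payment example: A's "obligation to pay" reduced to promises by B.
obligation = PPBObligation([
    ConditionalPromise("B", "A", lambda s: s.get("paid_in_time", False),
                       "send no further payment requests for this episode"),
    ConditionalPromise("B", "A", lambda s: not s.get("paid_in_time", False),
                       "issue another request, adding costs plus a penalty"),
    ConditionalPromise("B", "A", lambda s: s.get("overdue_beyond_u", False),
                       "sell the increased claim at a discount to a third party"),
])

print(obligation.active_bodies({"paid_in_time": False, "overdue_beyond_u": True}))
```

Nothing beyond the bundle itself is stored: asking which obligations are "in force" amounts to evaluating the conditions of the constituent promises against the current state of affairs.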
Non-PPB-obligations may preferably be called irreducible obligations, as the reduction of their essence to promises is impossible. The promise of a witness in court to state the truth and nothing but the truth produces a promissory obligation that cannot be reduced to a promise pattern. In that sense the obligation is irreducible. Remarkably, this obligation comes about after issuing a promise (or a vow), suggesting that even in this case promises somehow take priority over obligations.
Although promises may be understood as directionals that create self-impositions, the implicit tenet of promise theory is that promises alone provide a very flexible tool for coordination in a multi-agent system. Augmenting promises with impositions and other directionals is meaningful because it strengthens the expressiveness of the theory, by relieving it from a fundamentalistic focus on promises that seems unnecessary. Moreover, the addition of impositions to the picture provides additional clarity about the distance between promises and obligations, which can hardly be assessed without first assessing the relation between impositions and obligations.
In [2] the approach to decision taking from [1] (so-called Outcome Oriented Decision Taking, OODT) has been contrasted with non-obligationist promissory theory. We recall that in the terminology of OODT a decision is supposed to be taken by a deciding agent, and the result of that action is a decision outcome which specifies what has been decided. (Following [11], decision is subject to a product/process ambiguity, and OODT incorporates a preference for a process view of decision.) In another process a decision outcome may subsequently be effectuated. In order to have comparable terminology, it was suggested in [2] that a promise is issued leading to a promise outcome, the latter being close to what is called a promise body in non-obligationist promissory theory.

Now a key difference between a decision outcome and a promise outcome has been identified in [2] as follows: while a promiser is usually expected to be instrumental in putting a promise outcome into effect (that is, keeping the promise), in the case of a
decision outcome there is no expectation that the decider will be instrumental in putting the decision outcome into effect.

In a similar fashion the difference between deciding and imposing can be understood. For an imposition outcome to be effectuated it is expected that the impositionee will play an instrumental role, rather than the impositioner. A decision is not targeted to a specific agent. Of course one might contemplate "decisionary obligations" as being obligations that arise from a decision outcome. Such decisionary obligations are most plausibly viewed as the consequence of preexisting promises about agents being compliant with specific classes of impositions.

To give an example: if the government of X decides to go to war with Y (the decision outcome constituting a declaration of the intention of war), the effectuation of that decision outcome is based on the military staff having promised to put into effect, at their own responsibility, such forms of decision outcomes. Once the declaration of war DW has been produced, the military staff will exchange suggestions, warnings, and proposals, and soon they may issue impositions to their subordinate staff members, who in turn will produce impositions down the hierarchy. The compliance with most of these impositions can be understood in terms of the impositionee having promised to follow impositions from his/her superiors, assuming that a correct decision taking process lies at the root of such impositions.

Four features of promises were taken into account in our static theory of promises in [3]. In this section we will provide a number of additional features for promises, and we will propose notations for promises allowing one to take the additional features into account.

1. agents, type, and body (taken from [3]):
• promiser,
• promisee,
• agents in scope (observing the promise upon being issued),
• promise type,
• promise body,
2. promise issuing coordinates (time, space, phase),
3. promise viewing agent (promise as seen from the perspective of that agent),
4. promise inspection coordinate (time, space, phase coordinates of where the promise is looked at by the viewing agent),
5. promise validity interval (to be kept in the interval from time/event/state to time/event/state),
6. promise identification token (an abstract token in pi-calculus style that links different agent centered views on the same promise together),
7. promise fading out function (describes the degree of fading out of various components of a promise notation).

The features mentioned above are independent of trust based reasoning by agents involved. A calculus of trust and credibility needs to be presupposed for promises to be of any use. The feature list translates directly into a record-like data structure; a sketch is given below.
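As an illustration (our own rendering, not notation from the paper), the feature list can be captured in a Python record; the field names are ours, and the coordinate fields are deliberately left as loose labels:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Optional

@dataclass
class PromiseStatement:
    # 1. agents, type, and body (taken from [3])
    promiser: str
    promisee: str
    scope: FrozenSet[str]            # agents observing the promise upon issuing
    ptype: str                       # promise type (π)
    body: str                        # promise body (b)
    # 2. promise issuing coordinates (time, space, phase), here one label u
    issued_at: Optional[str] = None
    # 3./4. viewing agent r and inspection coordinate w
    viewer: Optional[str] = None
    viewed_at: Optional[str] = None
    # 5. validity interval: kept after valid_from (t) and before valid_until (s)
    valid_from: Optional[str] = None
    valid_until: Optional[str] = None
    # 6. identification token linking agent-centered views of one promise
    token: Optional[str] = None
    # 7. fading out function: elapsed time -> degree of fading in [0, 1]
    fading: Callable[[float], float] = lambda t: 0.0
```

The optional fields mirror the fact that a promise statement may display information about more or fewer features, which is exactly what the notational forms below express.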
A promise statement is an expression that combines a sample of features instantiated for a single promise. Promise statements that display information about more features can be developed with ease. Here are some examples.

Base promise form:
In its simplest form (called ground form), p[π : b]q, a promise statement conveys (the name of) a promiser (p), a promise type π, a promise body (b), and a promisee (q). These promise statements, though with a more figurative notation with an arrow between promiser and promisee and promise type and body as a subscript for that arrow, have been introduced and used extensively in [7, 8, 10]. In that notation p[π : b]q is written as p −π:b→ q.

Scope: with S a collection of agents, p[π : b/S]q specifies promise p[π : b]q with all agents in S ∪ {p} in its scope. (Promisee q may or may not be included in S.)

Episode: with t and s instances of time (or other situational descriptions from which temporal and/or causal ordering information can be derived), p[π : (t, b, s)/S]q specifies promise p[π : b/S]q with the additional feature that b is supposed to be kept after t and before s. Thus a promise so specified expires at s.

Issuing time: with u an instance of time, p[u, π : (t, b, s)/S]q provides the additional information that the promise has been issued at u.

Observation time: with w an instance of time (called observation time), the promise statement p[w/u, π : (t, b, s)/S]q provides the additional information that the promise statement is considered at time w by an appropriate agent.

Subject fragmentation:
Upon its issuing, a promise fragments over a community of agents; that is, each agent in scope of the promise becomes the carrier of a fragment of it. A notation for fragments will include a name r of a subject (carrying agent) as additional information. Subject r constitutes the perspective from which the promise statement is considered descriptive of the state of affairs: p[w, r/u, π : (t, b, s)/S]q is a promise statement that provides the additional information that it is considered, or held, at time w by an agent r in S ∪ {p}.

Subject fragment identification: For different human agents the common origin of respective promise fragments available to them lies in fault prone memories. For artificial agents additional techniques are available, for instance tagging all fragments with a secret key α known to the agents in scope only, who may use a corresponding public key β for coming to an agreement that respective fragments have the same promise issuing as an origin. By decorating a subject fragment with that key pair an identifiable subject fragment results: p[w, r(α, β)/u, π : (t, b, s)/S]q.

Subject fragment identity binding with alpha-conversion:
In a formal or theoretical account of an agent community making use of shared secret keys, instead of a key pair involving a public key, and following π-calculus style process algebra (see [14]), an alpha-convertible name x may be used in combination with a binder (νx)(...). Taking P and Q for names of agents, and P[−] and Q[−] for contexts formed by these agents in which a promise fragment description can be embedded, states are denoted by parallel compositions (P[−] || Q[−] || ...). Applying the binder and subsequently allowing alpha-conversion for x, one obtains expressions of the following form:

(νx)(P[p[w, r(x)/u, π : (t, b, s)/S]q] || Q[p[w, r(x)/u, π : (t, b, s)/S]q] || ...).

Fading out: p[w, r, F/u, π : (t, b, s)/S]q adds information about the fading out function F. This can be explained as follows. Upon having been created when a promise is issued by its promiser, the promise statement splits into a distributed collection of subjective promise statements, one for each subject in S ∪ {p}. Subjective promise statements will fade out, and after some time, which may extend long after the expiration time of the promise, their existence comes to an end. A fading out function F can specify at each moment u the degree of fading out (being forgotten by the subject) of the components of the promise.

Fading out for identity carrying subject fragments:
Subject fragment identification can be combined with fading out: p[w, r(α, β)/u, F, π : (t, b, s)/S]q.

Fading out for alpha-converting identity carrying subject fragments:
Fading out can be described in a formalized world using alpha-convertible fragment identities:

(νx)(P[p[w, r(x), F/u, π : (t, b, s)/S]q] || Q[p[w, r(x), G/u, π : (t, b, s)/S]q] || ...).

The type π need not be a promise type. It can be a type for an imposition, or for any voluntary co-op method, or any directional. A more general type system, including types for other voluntary co-op methods, makes sense, and by using typing in that more general way the above description of promise statements can be adapted into a description of imposition statements, proposal statements, warning statements, suggestion statements, and so on. We will omit the extensive details of this matter. A sketch of how the textual notation can be generated from the data structure given earlier follows.
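Purely as an illustration of how the decorated forms extend the ground form p[π : b]q (the function and its naming are ours, and it builds on the PromiseStatement record sketched earlier), the notation can be rendered mechanically:

```python
def render(ps: "PromiseStatement") -> str:
    """Render a PromiseStatement in the bracketed notation used above (a sketch).

    Covers the ground form p[π : b]q plus the scope, episode (t, b, s),
    issuing time u, observation time w, and subject r decorations; the key
    pair and fading decorations are omitted for brevity.
    """
    body = ps.body
    if ps.valid_from is not None and ps.valid_until is not None:
        body = f"({ps.valid_from}, {ps.body}, {ps.valid_until})"   # episode (t, b, s)
    inner = f"{ps.ptype} : {body}"
    if ps.scope:
        inner += "/{" + ", ".join(sorted(ps.scope)) + "}"          # scope S
    prefix = ", ".join(x for x in (ps.viewed_at, ps.viewer) if x)  # w, r
    if ps.issued_at:                                               # .../u separator
        prefix = (prefix + "/" if prefix else "") + ps.issued_at
    if prefix:
        prefix += ", "
    return f"{ps.promiser}[{prefix}{inner}]{ps.promisee}"

# Example output: p[w, r/u, π : (t, b, s)/{p, q, r}]q
print(render(PromiseStatement("p", "q", frozenset({"p", "q", "r"}), "π", "b",
                              issued_at="u", viewer="r", viewed_at="w",
                              valid_from="t", valid_until="s")))
```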
Promise context: concurrent existence of directional fragments

A promise, and after its split into fragments carried by various agents, a promise fragment, exists in between a number (maybe zero) of comparable entities. The following can be mentioned:

• other directionals or fragments of directionals (with the same or with another issuer),
• descriptions of fact, descriptions of opinion, descriptions of contracts, descriptions of obligations, existing in databases each carried by agents involved,
• cognitions of fact, cognitions of opinion, cognitions of contract, cognitions of obligation (each supposed to reside in the minds of various agents involved),
• wishes, requests, objectives, intentions, plans,
• states of reasoning arrived at by an agent, that is, incomplete sets of conclusions drawn during an ongoing inference process.

In its full generality the range of possible contexts of promises or other directionals is so complex and varied that finding a general structure theory of such contexts is inconceivable. Clarification of structure can only be achieved in the presence of simplifying assumptions.

Once a promiser issues a promise, the promise outcome will after some time be assessed by the promisee and by other agents in scope according to its credibility. That is, given the kind of promiser and the kind of promise outcome, it is assessed by agents involved to what extent it is plausible that the promise can be kept. If that plausibility is considered very low, fading out of the promise outcome is sped up, and the promise may even be terminated without any assessment having been made of the promiser's trustworthiness. In that case it may lead to a negative update of the promiser's trustworthiness, even without awaiting the time needed to assess whether or not the promise is kept in cases where credibility was found sufficient.

An agent noticing a promise being issued will attempt to assess its credibility. In doing so the agent applies some kind of credibility calculus to its collection (in memory) of old and new promise statements. We must assume some mechanism that produces a plausibility that the promise can be kept by an agent like the one who issued the promise. This mechanism will be referred to as the credibility assessment mechanism (CRAM).

The CRAM takes observations on agent behavior, agent classification, and agent performance as inputs, and it produces, given a promise statement, an expectation of the credibility that the promiser will keep the promise, taking only into consideration the type (class, kind) of the promiser (e.g. a human being will not keep the promise to fly like a bird, or to swim across the Atlantic without support, or to walk 100 km without taking food and drinks during the walk). For promises lacking credibility, the question whether the promiser can be trusted is immaterial. However, issuing such a promise may decrease the promiser's trustworthiness in the eyes of the promisee or other agents in scope of the promise. The CRAM may make use of a credibility calculus that allows assessments to be expressed as rational values between 0 and 1.
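A minimal sketch of a CRAM, assuming a toy rule base keyed on the kind of promiser; the rule table and function are our illustration, since the paper deliberately leaves the credibility calculus unspecified:

```python
# Credibility assessment mechanism (CRAM): maps (promiser kind, promise body)
# to a credibility value in [0, 1]. The rule base below is a toy example.
CREDIBILITY_RULES = {
    ("human", "fly like a bird"): 0.0,
    ("human", "swim across the Atlantic without support"): 0.0,
    ("human", "deliver a 1 hr session"): 0.9,
}

def cram(promiser_kind: str, body: str, default: float = 0.5) -> float:
    """Return the assessed credibility that a promiser of this kind can keep
    a promise with this body; unknown cases get a neutral default."""
    return CREDIBILITY_RULES.get((promiser_kind, body), default)

# For a promise lacking credibility, the trust question never arises.
assert cram("human", "fly like a bird") == 0.0
```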
Promises that have been assessed as being credible given the promiser may still not be kept, while other comparable promisers may keep similar promises without hesitation. Besides credibility, trust plays a role. Low trust matters only when high credibility has been assessed. Trust depends on the logic of promiser behavior (if the promisee thinks that keeping a promise is against the promiser's self-interest, it may lower its trust that the promise will be kept). It also depends on past behavior of the particular promiser, and it may depend on the size of the scope, as all agents in scope may lower their trust of the promiser upon observing that a promise is not kept. If a promiser values high regard (trust) by the agents in scope, that may constitute an additional incentive to keep the promise. The trust assessment mechanism (TRAM), which is operational as a separate and autonomous functionality for each agent, takes observed behavior as inputs besides a stream of promises. To each promise-promiser pair it can assign a degree of trust that the promiser will keep the promise.
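A minimal sketch of a per-agent TRAM, using the five-level trust scale [-2, -1, 0, 1, 2] introduced in the examples later in the paper; the one-step-up/one-step-down update rule mirrors those example threads but is our simplification:

```python
# Trust assessment mechanism (TRAM): one autonomous instance per agent.
class Tram:
    def __init__(self):
        self.trust = {}  # promiser name -> trust level in {-2, -1, 0, 1, 2}

    def level(self, promiser: str) -> int:
        return self.trust.get(promiser, 0)  # 0 is neutral

    def observe(self, promiser: str, kept: bool) -> None:
        """Update trust in a promiser after assessing one of its promises,
        clamped to the five-level scale."""
        delta = 1 if kept else -1
        self.trust[promiser] = max(-2, min(2, self.level(promiser) + delta))
```

In this reading, CRAM answers "could an agent of this kind keep such a promise at all?", while TRAM answers "will this particular agent keep it?"; only promises passing the first filter feed the second.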
If someone (A) suggests to his partner (B) to make a financial reservation for the coming vacation of an amount of 10.000.000 Euro, an immediate clash with credibility may arise. B may wonder: what sort of a vacation might this be about, and where is this amount to be found? Plausibly B may react with a warning to A: "that's nonsense!". Had A suggested to make a reservation of 2500 Euro instead, B's reaction might be to propose making a reservation of 3000 Euro. If B reacts to this alternative proposal of A with the same warning, that may induce a credibility drop in A's perception of B.

Directionals lacking credibility are routed differently from directionals understood by the target as having adequate credibility. Only in the second case will expectational reasoning take place and induce observational activity that may in turn trigger assessments impacting on trust levels.

Promises and self-trust

Different agents may plan to compute expectations or otherwise quantified plausibilities of future events on the basis of promises and trust in the promiser. The very promiser, who may or may not trust itself, has a special position among these agents. Indeed if A promises (p) B with scope S to perform c, a variety of options concerning A's trust in itself can be distinguished:

1. A may deceive B by having (but not showing) a lack of trust in itself. A may even consider itself unable to perform c, so that self-trust and self-credibility are both very low. This does not imply that A has a low self-confidence. On the contrary, because A has to deal with adverse reactions (degradation of trustworthiness in their eyes) from members of S once A breaks its promise, A must be confident that it can deal with that eventual consequence of its promise (and in particular with the consequences of its expected breaking of the promise).

2. A may have little doubt that it can deliver c, and this may be based on information not available to other agents in S, who initially have less trust (than A itself) in A's ability to perform c in an adequate manner.

3. A may be overconfident, in which case A is honestly convinced that it will deliver c, while other agents in S, who are better informed about the relation between A's capabilities and what is needed to perform c, rightly place less trust in A as a potential actor of c.

4. Most agents in S may have high trust in A's capability for performing c, but A itself may not be so sure. A may feel having been put under pressure to issue a promise for doing c that A might have preferred to avoid. In this case A may feel deceived by some agents in S who promised to make use of a possibly forthcoming promise by A for performing c. A may think that these agents should have known that A promising c lacks credibility. Perhaps performing c involves certain risks that A prefers to avoid.

The interplay between self-confidence, self-credibility, and self-trust can be very complex. This complexity is real, however, and promises seem to provide a useful method for dealing with it.

Each agent may issue promises repeatedly. Two promise issuings convey the same promise if keeping one of them necessarily implies keeping the other as well. In that case the promise statements are called equivalent.
As equivalent promises may be issued at different moments in time, the corresponding promise statements need not be equal. When describing a scene, promise statements should be made so detailed (high promise statement resolution) that equivalence with other promises that occur in the same scene can be reliably judged. If a promise statement is abstract, that is, it contains relatively little information about various features, equivalence with another promise statement may be hard to assess.

Promise repetition, i.e. the consecutive issuing of different but equivalent promises, may impact (that is, serve as an input for) both the TRAM and the CRAM of various agents. Whether repeated issuing of a promise has positive or negative impact on credibility and trust depends on the circumstances.
Once an imposition has been internalized by an agent, its life-cycle begins, and the imposition, or rather its residual representation, may transform over time. Agents may change their view of what has been imposed upon them, including what they have promised themselves. An agent may invent new promises that it thinks it has made (while it has not), and it may forget impositions created by itself and by other agents. An agent may have its own strategy for fading out impositions and finally forgetting about them. These phenomena are captured under imposition drift.
An agent may be supposed to maintain a database with a portfolio of impositions that it has received, including self-impositions resulting from its own promises. For a human agent this portfolio may range from a very formalized and technically well-supported system to a bundle of more or less vague memories. An imposition portfolio may have been modified (perhaps compromised) by imposition drift of some of its content.
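A minimal sketch of such a portfolio, with drift modeled as fading of stored entries; the structure and all names are our illustration, and a real agent might also distort bodies or invent entries, as described above:

```python
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImpositionRecord:
    impositioner: str       # source agent ("self" for self-impositions)
    body: str               # what has been imposed
    fading: float = 0.0     # degree of being forgotten, in [0, 1]

@dataclass
class ImpositionPortfolio:
    entries: List[ImpositionRecord] = field(default_factory=list)

    def receive(self, impositioner: str, body: str) -> None:
        self.entries.append(ImpositionRecord(impositioner, body))

    def drift(self, fade_step: float = 0.1, forget_at: float = 1.0) -> None:
        """One round of imposition drift: entries fade and may be forgotten."""
        for rec in self.entries:
            rec.fading = min(1.0, rec.fading + random.uniform(0, fade_step))
        self.entries = [r for r in self.entries if r.fading < forget_at]

    def todo(self) -> List[str]:
        """The open impositions the agent can still act upon."""
        return [r.body for r in self.entries]
```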
Promise dynamics has two main aspects, one of which we are now in a position to illustrate by means of examples. What is shown clearly by the examples below is how trust updating, promising, and promise keeping are interrelated. The effect of a promise being issued lies in the behavior of the promisee being compliant with an expectation (in the promisee's perception) that has been generated by the promiser upon issuing the promise. The latter effect is mediated by the promisee's trust in the promiser. We will exemplify how that may work, and also how different promises may interfere, provided both are based upon the same trust variable.

Promise assessment and trust level maintenance
We will consider promise dynamics in a context where some impositions play a role as well. The examples below involve a single promise type only:

π = πα(P, U) = "promises about the adequacy of a product P as a tool for performing task U".

As a typical example of a product we will consider a computer program. With Tq(p) we denote the trust that q has in p. We assume that trust is measured on the five-level scale [-2, -1, 0, 1, 2], where 0 is neutral, -2 expresses strong distrust, -1 expresses distrust, 1 expresses trust, and finally 2 expresses strong trust.

We will display several threads of activity involving promises and corresponding trust maintenance.

1. The first thread illustrates (i) that observation of failure to comply with a promise leads to a decrease of trust in the relevant promiser, and (ii) that with neutral trust (in the promiser) a promisee ignores a promiser's promise:

• Initial trust state Tq(p) = 1,
• A promise m1 is issued: m1 = p[πα(P, U) : "P is adequate for task U" / {p, q, r}]q,
• q installs P and prepares for the use of P for task U,
• s imposes on q to perform task U,
• q uses P for task U,
• q observes that P fails for task U, and assesses that m1 was not kept,
• q decreases its trust in p: Tq(p) = 0,
• A promise m2 is issued: m2 = p[πα(Q, V) : "Q is adequate for task V" / {p, q, r}]q,
• q refuses to install Q (and by consequence to prepare it for task V),
• Final trust state Tq(p) = 0.

2. On the other hand, observation of a promise having been kept induces increased trust (provided an increase is still possible):

• Initial trust state Tq(p) = 1,
• A promise m3 is issued: m3 = p[πα(R, W) : "R is adequate for task W" / {p, q, r}]q,
• q installs R and prepares for the use of R for task W,
• s imposes on q to perform task W,
• q successfully uses R for task W, and assesses that m3 was kept,
• q increases its trust in p: Tq(p) = 2,
• Final trust state Tq(p) = 2.

3. Interleaving both threads makes sense and may allow further progress. Different interleaving strategies (see [6]) lead to different outcomes. In the thread below, which results from interleaving the first two threads, the program Q is used by q for purpose V, instead of the refusal of that use by q caused by q's neutral trust in p in the first thread.

• Initial trust state Tq(p) = 1,
• A promise m1 is issued: m1 = p[πα(P, U) : "P is adequate for task U" / {p, q, r}]q,
• q installs P and prepares for the use of P for task U,
• A promise m3 is issued: m3 = p[πα(R, W) : "R is adequate for task W" / {p, q, r}]q,
• s imposes on q to perform task W,
• q successfully uses R for task W, and assesses that m3 was kept,
• q increases its trust in p: Tq(p) = 2,
• s imposes on q to perform task U,
• q uses P for task U,
• q observes that P fails for task U, and assesses that m1 was not kept,
• q decreases its trust in p: Tq(p) = 1,
• A promise m2 is issued: m2 = p[πα(Q, V) : "Q is adequate for task V" / {p, q, r}]q,
• q installs Q and prepares for the use of Q for task V,
• s imposes on q to perform task V,
• q successfully uses Q for task V, and assesses that m2 was kept,
• q increases its trust in p: Tq(p) = 2,
• Final trust state Tq(p) = 2.

These examples of threads of activity illustrate the interaction between trust updating, assessment, and making use of a program whose adequacy has been promised.

In these examples agent q performs reasoning in order to deal with the implications of its trust in p. That part of q's reasoning proceeds according to a collection of rules. Here are some rules that may be used to describe the control of q:

1. If p promises the adequacy of a program X for task Y to q and Tq(p) > 0, then q will prepare for the use of X (provided that has not been done already).

2. If p promises the adequacy of a program X for task Y to q and Tq(p) ≤ 0, then q will not prepare for the use of X.

3. Given task Y, if
(a) q has prepared for the use of a program X the use of which (for some task Y) has been promised (to q) to be adequate only by p, and
(b) Tq(p) ≥ 0,
then q will use X as soon as a request for Y is received by q.

4. Assuming that for some task Y:
(a) q has prepared for the use of a program X the use of which (for task Y) has been promised (to q) to be adequate only by p, and
(b) Tq(p) = -2,
then q will intercept the preparation and unload that program (thus blocking its use by q for whatever task).

5. If for some task Z:
(a) q has prepared for the use of a program X the use of which (for task Z) has been promised (to q) to be adequate only by p, and
(b) Tq(p) = -1, and
(c) q has made use of X before for task Z (without unloading it in between), and
(d) q is requested to perform Z,
then q will use X to perform Z.
In case the first two conditions hold and the fourth condition holds but the third condition fails, q will not use X to perform Z (and may fail to perform Z).

6. If q has prepared for the use of different programs for a task Y and is requested to perform Y, it will use that program for which the adequacy has been promised by an agent with the highest current trust, if such an agent exists.

7. If several agents have promised a plurality of programs adequate for task Y, and different programs have been promised adequate by agents with the same maximum trust (trust in them of q), then upon a request to perform Y,
(a) per-program sums of trust levels from different promising agents are compared (relevant only if different agents have made adequacy promises about the same programs) and the "best" program is chosen, and if this criterion fails to discriminate,
(b) that program is chosen about which the most recent adequacy promise (for task Y) has been issued by an agent currently enjoying a maximal trust level (from q).

Obviously, analyzing the validity, consistency, and completeness of this collection of rules, or an appropriate variation of it, poses a significant problem in itself. It is reasonable to assume that q experiences a learning curve through which a combination of such rules stabilizes. Agent q makes use of appropriate informal logic to organize the application of the various rules that underlie this part of its reasoning. A sketch of the first thread, run against these rules, is given below.
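As an executable rendering of these dynamics, here is a minimal sketch in Python of thread 1 replayed against rules 1-3; the event encoding and function names are ours, and only the subset of rules that the thread exercises is implemented:

```python
def run_thread(events, trust=1):
    """Replay a thread of events for promisee q against a single promiser.

    events: ("promise", program, task, works) tuples, where works is an
    oracle flag telling how later use will turn out, or ("impose", task)
    tuples. Implements rule 1 (prepare iff trust > 0), rule 2 (otherwise
    refuse), and rule 3 (use a prepared program iff trust >= 0), moving
    trust one step on the scale [-2, 2] per assessment.
    """
    prepared = {}  # task -> (program, works)
    log = []
    for ev in events:
        if ev[0] == "promise":
            _, program, task, works = ev
            if trust > 0:                        # rule 1
                prepared[task] = (program, works)
                log.append(f"install {program} for {task}")
            else:                                # rule 2
                log.append(f"refuse to install {program}")
        elif ev[0] == "impose":
            _, task = ev
            if task in prepared and trust >= 0:  # rule 3
                program, works = prepared.pop(task)
                trust = max(-2, min(2, trust + (1 if works else -1)))
                log.append(f"use {program} for {task}: "
                           f"{'kept' if works else 'not kept'}, trust={trust}")
    return trust, log

# Thread 1: m1 about (P, U) is not kept; m2 about (Q, V) is then ignored.
final, log = run_thread([
    ("promise", "P", "U", False),
    ("impose", "U"),
    ("promise", "Q", "V", True),
])
assert final == 0 and log[-1] == "refuse to install Q"
```

Running the interleaved thread 3 through the same function reproduces the final trust state Tq(p) = 2, illustrating how the order of impositions changes which promises q is still willing to act upon.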
We have introduced impositions as a second member, besides promises, of the class of voluntary co-op methods, followed by several other elements such as suggestions and proposals. Voluntary co-op methods are actions or patterns of activity which one agent applies in the direction of another agent in order to bring about, enhance, facilitate, or invite voluntary cooperation. Promises are the key instance of voluntary co-op methods. Voluntary co-op methods are collected in a more general class of directionals, which may go beyond the coordination of voluntary activity.

We have indicated how, in which cases, and to what extent promissory obligations (and impository obligations) may be understood as patterns of obligations which are supposed to be automatically co-generated with promises or impositions.

Then we have extended the notational format for promises known from previous work to include many aspects that enter the picture when contemplating dynamic aspects. These extensions are generic in the sense that similar notations may work for other directionals.

In Appendix A we provide examples of the stepwise development of promise bundles in connection with the coming about and effectuation of a plan of an agent to buy an item from another agent. The example indicates the relatively large number of promises, and to a lesser extent impositions, that may occur in the context of a simple plan involving only a few actions.

In Appendix B we provide an example in human-machine interaction where a range of promises, and to a lesser extent impositions, constitutes an essential component of the explanation of system behavior in a context with autonomous agents. The example indicates that the language of promises is indispensable for the description of some human-machine interaction scenarios.

In Appendix C we carry on with the example on program usage and the side effects of trust updating. In spite of the open-ended complexity of the topic, mapping out plausible mechanisms and combinations of mechanisms proves to be doable and informative.
References

[1] Jan A. Bergstra. Informatics Perspectives on Decision Taking. http://arxiv.org/abs/1112.5840v1 [cs.OH] (2011).
[2] Jan A. Bergstra. Decision Taking versus Promise Issuing. http://arxiv.org/abs/1306.6412 [cs.SE] (2013).
[3] Jan A. Bergstra and Mark Burgess. A Static Theory of Promises. http://arxiv.org/abs/0810.3294v4 [cs.MA] (2013).
[4] Jan A. Bergstra and Karl de Leeuw. Bitcoin and Beyond: Exclusively Informational Money. http://arxiv.org/abs/1304.4758v2 [cs.CY] (2013).
[5] Jan A. Bergstra and Karl de Leeuw. Questions related to Bitcoin and other Informational Money. http://arxiv.org/abs/1305.5956v1 [cs.CY] (2013).
[6] J.A. Bergstra and C.A. Middelburg. Thread algebra for strategic interleaving. Formal Aspects of Computing, 19(4):445-474 (2007).
[7] Mark Burgess. An approach to understanding policy based on autonomy and voluntary cooperation. In IFIP/IEEE 16th International Workshop on Distributed Systems Operations and Management (DSOM), LNCS 3775, pages 97-108 (2005).
[8] Mark Burgess. System administration and the scientific method. In: J.A. Bergstra and M. Burgess (eds.), Handbook of Network and System Administration, Elsevier, 689-728 (2007).
[9] Mark Burgess. In Search of Certainty. χtAxis Press, Oslo, Norway (2013).
[10] M. Burgess and S. Fagernes. Laws of systemic organization and collective behaviour in ensembles. In Proceedings of MACE 2007, volume 6 of Multicon Lecture Notes. Multicon Verlag (2007).
[11] G.C. Goddu. Is 'argument' subject to the product/process ambiguity? Informal Logic, 31(2):75-88 (2011).
[12] L. Groarke. Informal Logic. The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), plato.stanford.edu/archives/spr2013/entries/logic-informal/.
[13] Informal Logic, 26(3):231-258 (2006).
[14] R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, I. Information and Computation, 100(1):1-40 (1992).
[15] Satoshi Nakamoto. Bitcoin: a peer-to-peer electronic cash system. http://Bitcoin.org/Bitcoin.pdf (2008).
[16] R.C. Pinto. Argumentation and the force of reasons. Informal Logic, 29(3):268-295 (2009).
[17] H. Sheinman. Introduction: promises and agreements. In: H. Sheinman (ed.), Promises and Agreements, Oxford University Press, 3-57 (2011).
[18] Samuel J. Stoljar. The ambiguity of promise. Northwestern University Law Review, 47(1):1-20 (1952).
[19] Douglas Walton. Story similarity in arguments from analogy. Informal Logic.
A An artificial example of a promise bundle
Many examples of promises can be given. In this section a fixed running example will be used to illustrate the relative abundance of promises compared to the real actions which the promises may be about. The example is artificial in that it has not been derived from a real case. The example illustrates first of all that promise bundles linked to a single and simple activity can be quite large; in addition it becomes obvious that most of the promises must fade out rather quickly in order to avoid unmanageable agent states. The example also illustrates the use of some other directionals.

A coherent bundle of promises is involved with a single transfer of an amount m by agent A to agent B in compensation for a service or good S/G that B delivers to A.

• A proposal p1 by B (with B's management M_B in scope) to A to deliver S/G against compensation m.
• A promise p2 issued by A (with A's manager M_A in scope) to B to accept S/G (p2 is a counter promise, also called a promise to use, for p1).
• A suggestion p3 (issued by A) to B (M_B in scope) that A is able and willing to pay B, as a compensation for S/G, via a particular informational money, say IM (see [4, 5] for informational monies).
• A proposal p4 by A to B to pay B by way of a specific informational money (say IMX).
• A promise p5 by B to A to accept an IMX payment (p5 is a counter promise to the proposal p4).
• A promise p6 by B (M_B and M_A in scope) to A to confirm a payment of amount m (made by A) after it has been received via an IMX channel by B.

These promises coexist during a single transfer scenario, and in order to understand the role of each promise a detailed analysis of its dynamics is needed. Each promise evolves through a life-cycle. For instance p1 may disappear at once if it is considered not credible to A or to M_B that B can deliver S/G. When p1 is considered a credible promise in principle, given what is known about B in most general terms, A will evaluate its trust that B can deliver S/G against compensation m. The degree of trust in B may be a function of the past behavior of B observed by a community of potential clients who maintain a reputation based trust calculus about a number of agents including B. Updating trust (in B) when a promise (issued by B) is kept or not kept requires a sound assessment of promise keeping. That in turn requires that promises be wrapped in time intervals and similar constraints that enable reliable assessment at some moment in time.
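The four components of the promise notation used here (promiser, promisee, body with its type, and scope) can be captured in a small record. The following Python sketch is merely illustrative; the class and field names are our assumptions, not notation from promise theory:

    # Sketch: a promise record in the style p[type(body) / scope]promisee,
    # with an issue time so that stale promises can fade out.
    from dataclasses import dataclass, field
    from typing import FrozenSet
    import time

    @dataclass(frozen=True)
    class Promise:
        promiser: str
        promisee: str
        body: str                     # e.g. "deliver S/G against compensation m"
        scope: FrozenSet[str]         # further agents allowed to observe the promise
        issued_at: float = field(default_factory=time.time)

        def visible_to(self, agent: str) -> bool:
            # The scope determines which agents may assess the promise.
            return agent in self.scope or agent in (self.promiser, self.promisee)

    # Example: the proposal/counter-promise pair p1, p2 from the bundle above.
    p1 = Promise("B", "A", "deliver S/G against compensation m", frozenset({"M_B"}))
    p2 = Promise("A", "B", "accept S/G", frozenset({"M_A"}))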
A.1 Promises about motivation, preferences, and activity planning

Many more promises may be contained in the promise graph surrounding a single transfer. Here are some promises that may precede p1-p6. These promises concern A's motivation and preferences. Such matters may be cast as promises from A to itself with other agents in scope.

• A promise pm1 issued by A to A with the content that A will be satisfied upon acquiring S/G in exchange for compensation m or below.
• A promise pm2 issued by A to A with B in scope with the content that A will be satisfied upon acquiring S/G in exchange for compensation m or below.
• A promise pm3 issued by A to A with M_A in scope that A currently prefers transferring amounts via informational money IMX to other means of money transfer.

(Having issued such a promise, A can assess its credibility as well as its trustworthiness. As a part of promise dynamics A may initially deem pm1 credible, but upon further reflection A may have limited trust in its truth.)

Another collection of promises relates to the way in which A will interact with B when preparing the transfer of S/G.

• A promise q1 issued by A to B that A will visit B at time t and location l_B with the intent to be informed by B about the specifics of S/G.
• A promise q2 issued by B to A that B will receive A at time t and location l_B with the intent to inform A about the specifics of S/G.
• A promise q3 issued by B to A that B will not deliver S/G to any other agent if that stands in the way of delivery of S/G to A, until 24 hours after A completed its visit to B.
• A promise q4 issued by B that a car (driven by A) can be parked for free at B's site when A announces his/her arrival at the gate of B's premises at or after some time t - u reasonably in advance of t (with u equal to, say, 30 minutes).
• A proposal q5 issued by B that a price for S/G will be fixed during the visit and that the offer for S/G against that price will stand for 10 days.

Some promises connected with (the preparations for) the transfer involve A's partner P_A.

• A suggestion q6 issued by P_A to A that A will use P_A's car provided that it will be returned in time.
• A promise q7 issued by A to P_A that the car will be returned at time t + r (at location l_A) at the latest for subsequent use by P_A.
• A promise q8 issued by P_A to A that the car will be ready for use (by A) at time t - s (with t - s < t - u) at the latest (and at location l_A).
• A promise q9 by A to P_A that, after returning from the visit to B, A will subsequently seek P_A's opinion before promising B to buy/use S/G against compensation m.
• A promise q10 by P_A to A that once having provided a positive opinion about the use/acquisition of S/G, P_A will support A in keeping the promise to provide compensation m to B upon delivery of S/G.
• A promise q11 by A to P_A that A will only transfer m to B after adequate delivery of S/G.
• A proposal q12 by P_A to A that P_A will be reachable (for A) by phone during A's visit to B.
• A promise q13 by A to P_A to make use of proposal q12.

Apparently a rather formidable bundle of promises, proposals, and suggestions may constitute the context for a single money transfer from A to B. The dynamics of each of these promises may impact the very occurrence of the deal between A and B and the corresponding transfer.

A.2 Possible extensions of the promise bundle

In practical circumstances a bundle of promises connected with a single activity can be far more complicated than the example just given. For instance this specific example may be extended in various directions:
1. A may not have a driving license and she may wish her daughter D_A to drive the family car. This may be communicated through several promises to drive her to l_B and back.
2. A may not be able to comply with a previous promise to P_A about doing some housekeeping work, and another arrangement may be agreed upon, that agreement being encoded in an appropriate collection of promises issued in advance of A moving towards l_B.
3. A may wish support of P_A in acquiring S/G and may try to arrange that support in terms of specific promises issued by P_A that (s)he will help A with the use of S/G if that might be needed.
4. A may agree with M_A upon a strategy for negotiation with B about different versions of S/G against different prices and conditions. This agreement may again materialize in a collection of appropriate promises from A to M_A and conversely.

So it seems that up to 50 promises may easily be involved when planning a single transaction for buying some good or service. The need for an understanding of promise dynamics is obvious from the need to forget about the majority of these once their role has come to an end. Instead of logging all promise descriptions, agents will maintain trust about one another.

It seems obvious that most of these promises cannot produce obligations, because otherwise the complexity of the whole setup explodes. In fact the setup must be somehow robust against multiple promises not being kept, and promise redundancy may be vital for plan reliability.
B Case study: a recurring parking exit problem
The use of promises can be unavoidable in practice. Here is a realistic case study where promises arise time and again and the main agent has no other option than to deal with a growing bundle of related promises.

We imagine a parking lot, say P7, on an industrial site with parking areas numbered P1 to P10. Agent A works in company C_A and has been issued a particular subscription card with the following virtues and features. The parking areas are operated by a company C_P licensed to do so by the local municipality.
1. The subscription is called a reduced parking price subscription. It can only be issued by C_P via employers C who provide the subscriptions to their staff members.
2. A staff member of a company C pays an annual fee (say 50 EUR) to C and obtains a card in return. The card is named a "reduced parking price card" (RPPC). The card provides entry and exit (under certain conditions) to a subset of the parking areas P1,...,P10, which is made known to the employee via an email shortly before the card is physically handed over by one of C's support staff members.
3. The standard parking cycle for A works as follows:
(a) A approaches the entry of P7 and holds his RPPC close to a black square on the surface of a piece of hardware next to the road and in front of the (entry) barrier.
(b) The barrier opens and A drives into P7.
(c) A searches for (and is guaranteed to find) an empty place and parks.
(d) A leaves the car and then walks into his office.
(e) When the time to leave has come, A returns to the parking area P7 and approaches the pay station machine. Then A holds the RPPC in front of the machine, close to a dedicated area for RPPCs, and a price is announced on a little screen.
(f) Now A must pay. That needs to be done electronically and there are three options: a debit card, a credit card, or a cash card (so-called Chipknip). A selection must be made, and is made through a simple and well-known interface.
(g) A pays and receives an indication that this has succeeded.
(h) A returns to the car and drives inside the parking area to the exit, stopping in front of the exit barrier.
(i) A holds the RPPC in front of a dedicated black area and the exit barrier opens.
4. Rules of the game. Users of an RPPC are explained the following rules of engagement.
(a) Parked cars must always exit within 48 hours; after that period the issuer has the right to withdraw the RPPC.
(b) The reduced price is active only between 6.00 in the morning and 23.00 at night.
(c) Cars must be properly parked in demarcated locations (quite hard to see in practice).
(d) If there is no free space, cars are not admitted until space has again become available, that is, until other cars have left.
(e) Payment of the initial fee provides no guarantee that a free space can be found.
5. Other aspects of the user interface of the parking area equipment:
• When entering the parking area one always has the option to push a button, receive a paper ticket, and park at full cost. This option is open to the public at large.
• At the entry station, the exit station, and the pay station one finds:
– a display that allows some 250 characters of text for messages about the state of affairs (the display at the pay station is not identical to the one at the two other stations);
– a button that one can push and which is supposed to provide an intercom connection to an operation room where a staff member working for or on behalf of C_P can answer questions, and may provide some help in case of complications;
– a text with a telephone number that may be called in case of problems.
• Several TV cameras provide the control room staff with information about what is going on at the entrance, exit, and pay station.
B.1 Unproblematic complications
Here are some minor difficulties that may occur with the use of the RPPC, together with the way in which these may be handled.

1. If one chooses a way of paying which does not work, then after some (very clumsy) interaction at the pay station one may opt for another mode of payment. This is important because each mode may fail for different reasons that may be out of the control of a user.
2. If one is refused access with the RPPC and is willing to take a paper ticket and pay the full price, one will be able to park (provided there is free space). For longer periods that is very expensive, however. For short visits it may be a realistic option.
3. If, when approaching the parking area, the barrier is open and the system is out of action, say for maintenance purposes, one may enter and park. When exiting under the same conditions one has parked for free and no one complains. If the system is working again when exiting, some interaction through the intercom system will suffice to convince the control room staff that remotely opening the barrier is the most reasonable way to proceed, and one has parked without paying.
4. If, when approaching the parking area, the barrier is open, one should try to check in with the RPPC, but there will be no feedback as to whether that has succeeded.
B.2 A problematic complication: parking exit problem case I
A problematic complication is a problem which the user, when confronted with it for the first time, has no standard way of resolving, and the impact of which may be hard to assess. Such a complication arises when:

• either the pay station display states that it is currently out of order, or
• all modes of payment (open to a user) fail, for a variety of reasons none of which are under the control of the user.

At this stage we imagine that A is confronted with the simplest case: the pay station display indicates a malfunction. It is also imagined that this difficulty arises for the first time (for A), so that A must now find out how to deal with the matter in an orderly fashion. We imagine that A plans to leave for lunch on a Friday, say at 12.00, and to return at 14.00. When leaving, together with a guest, he finds out that the pay station says of itself that it does not work. Here is a trace of the actions of A following that unfortunate event.

1. A tries to pay and finds that the pay machine does not work, consistent with its announced self-diagnosis. (The machine has promised to be out of order and that promise has been kept.)
2. Then A chooses to push the button on the pay station in order to make a call to the control room staff. (The button represents an implicit promise that such communication can be achieved after its use.) After some 30 seconds of waiting a staff member, say Q_cr, responds (the implicit promise has been kept) and Q_cr asks for the reason to push the button. A explains that the pay station is out of order and that exit with his RPPC is impossible for that reason.
3. Q_cr suggests that A drive the car in front of the exit barrier and call him once more from the intercom next to the exit barrier. (An implicit promise that the second call will connect to Q_cr again.) A agrees (that is, promises to do so) and the verbal interaction through the pay station is ended.
4. Once in front of the exit barrier, A proceeds with solving the problem as follows:
(a) A pushes the button, waits for some 30 seconds, and gets connected to the same control room staff member Q_cr.
(b) A briefly re-explains the problem.
(c) Q_cr states that he will remotely open the barrier.
(d) A proposes that Q_cr check out his RPPC in "the system" so that it is known (to the system) that A has exited the parking area (which will allow a subsequent entry).
(e) Q_cr agrees and the barrier opens; A drives forward and is satisfied that the problem has now been solved.
5. When returning after lunch, arriving with the same guest in the car, A approaches the entry barrier and holds the RPPC in front of the dedicated area, expecting the barrier to open automatically just as it usually does. Unfortunately nothing happens, and the display indicates that a car is already inside the area (supposedly via the same RPPC), so that entry is impossible. A proceeds as follows:
(a) A pushes the intercom button, waits for contact, and is connected with staff member Q_cr, to whom A explains the problem, now with other cars waiting behind him for entry. A is told by Q_cr that he should now take a paper ticket, and that when exiting, the payment at a reduced price can be made with the RPPC as well, after showing the ticket. In addition A is told by Q_cr that he has probably exited the area when the barrier was open, forgetting to check out.
This hypothetical cause of the problem is denied by A (though to some extent it is true: checkout was not forgotten, but rather it was impossible).
(b) A takes the paper ticket, the barrier opens, and A enters P7 and easily parks on a nearly empty P7.
(c) At 5.30 PM A proceeds to leave again (now without the guest) and A notices that reduced price payment is impossible with the combination of the paper ticket and the RPPC. Moreover A finds out from the display that exit is possible when paying the full price against the paper ticket (now 18 EUR) or when paying 73 EUR against the RPPC. The state of affairs can be phrased convincingly in terms of promises as follows. In response to A's actions the parking system has produced two promises:
• upon entering the paper ticket in the pay station and paying 18 EUR, the paper ticket will be returned in a state where it allows exit (when offering the paper ticket at the exit unit) within a reasonable time;
• upon showing the RPPC and paying 73 EUR, the system will allow exit (when showing the RPPC at the exit unit).
A chooses not to make use of either promise.
(d) Expecting to pay some 3 EUR at most, A dislikes both options, and A pushes the intercom button at the payment station in order to find out how to proceed. After talking with staff member Q_cr, it is suggested that A drive towards the exit barrier and reopen the interaction from there. A promises to do so, and after having kept that promise, A succeeds in convincing (the same) staff member Q_cr that the barrier must be opened and that he (that is, his RPPC) must be checked out. The barrier opens and A leaves P7.

At this stage A understands that two promises that were issued by control room staff have not been kept: (i) he has not been checked out when exiting the previous time, and (ii) the promise that he could profit from a reduced price after taking a paper ticket was unwarranted. A comes to the following conclusions:
• A becomes aware that he needs a conceptual model of the control room, including a perspective on the expertise and capabilities of its various staff members.
• A understands that an intercom connection with the control room staff may not suffice to solve these problems.
• A concludes that the next time he will phone the indicated phone number immediately after entering the parking area, and that he will take a paper ticket if that turns out to be needed.

6. On the following Monday A enters and exits once more after the same kind of discussions with control room staff. Now (after having communicated with two more control room staff members, Q′_cr and Q″_cr) he has found out that:
• Control room staff cannot consistently answer the question whether or not they can see from their location that the payment machine has declared itself out of order; they think of themselves that they can see this by means of the TV camera system, while agreeing that the resolution is insufficient to read the display by means of the TV image.
• Control room staff state that they have no expertise about parking cards; that is not a part of their job.
• Control room staff cannot check in or check out cars (with respect to the database of parked cars that underlies the RPPC). They can only open and close the barrier, and they can make the machine near the entry produce a paper ticket even if it has not been requested by the client in front of the barrier.

7. On Tuesday and Wednesday A commutes by means of public transportation.
8. On Thursday, making the third entry since these problems started, A speaks through the intercom at the entry barrier with staff member Q_cr and is told that Q_cr cannot solve these problems from there, and that taking a paper ticket is the only option available at this stage. In addition A is told that he must find some higher authority to solve his parking problem with the card. Thus:
(a) A takes a paper ticket and enters.
(b) A phones the number indicated at the entry and gets connected to Q_cr once more. Q_cr complains that it makes no sense to phone him twice about the same issue.
(c) A indicates that he could not know that the button and the phone number lead to the same person, and that he saw no other option than to phone the indicated number.
(d) Q_cr reads A another telephone number (say N_bo) which will provide access to the relevant back office staff of C_P. A promises Q_cr to contact C_P via the second phone number.
(e) Once phoned, a person, say X, answers and states her name, a piece of information that A forgets. After some explanation X understands the problem, and she promises to check out the car (which according to her can be done when the car is outside P7 as well as when it is inside P7, and which will lead to a state from which both entry and exit with the RPPC are enabled) and states (promises) that from now on things will be back to normal.

9. When leaving that day at 16.00, A finds out that the barrier fails to open (the display also shows, and thereby promises on behalf of the parking system, that exit can be obtained with the RPPC when paying 125 EUR). He proceeds as follows.
(a) A drives the car back to a position not standing in the way of other exiting cars and once more phones the back office number obtained from Q_cr, now being connected to another person, Y, who claims not to be responsible for P7 and says that someone else is in charge, who can be reached under number N′_bo, a piece of information that A forgets. During the same call, however, a connection to that person, say Z, is arranged by Y, and after having been told by A the historic account of events, Z states that he is indeed responsible for P7 and should have been in the loop at an earlier stage already. In addition he readily admits that he is sometimes puzzled by the system himself just as well, and that he does not know (though expects) that once he has checked out the car (a third promise issued but not kept by C_P staff) subsequent exit will be unproblematic, and that A should drive towards the exit and phone him once more if it does not work. A promises Z to drive to the exit and to try to get out by means of his RPPC.
(b) After having complied with the latter promise, A is refused exit and phones the back office once more, getting connected with Y, asks for a connection with Z, and Z now promises A that (i) he (Z) will phone the control room and tell them to open the barrier, (ii) subsequent entry and exit will be normal, and (iii) more generally, the problem will have been solved upon exiting P7.
(c) A waits 2 minutes, then the barrier opens and A leaves P7. A notices that a promise has now been kept and A starts trusting Z. A understands that communication with C_P staff unavoidably and exclusively leads to promises made by them that may or may not turn out to be kept. Misunderstanding about the content of these promises is very likely to occur, while analyzing these in terms of obligations is uninformative.
10. On Friday, one week after the problem appeared, A has some doubts about what to do: trust Z's promise and go by car, or not trust Z's second promise and take public transportation, thus postponing the finalization of the issue to another day. That particular Friday is likely to be a stressful day for A at the office. From that expectation A infers that (the risk of engaging in) extended negotiations with C_P staff must preferably be avoided. However, because A knows that entry will be easy by means of a paper ticket, he opts for the car because it saves a lot of time, accepting the fact that he may have to pay the full price for the paper ticket when exiting if, after a stressful day, he feels disinclined to negotiate with control room staff from scratch.

11. Indeed (on Friday) entry to P7 is unproblematic with the RPPC. Several hours later exit is possible against the usual reduced price. A concludes that Z has kept the second promise as well, and thereupon A inductively infers that Z's third promise has also been kept, and that for that reason, in all likelihood, the problems have been solved in a satisfactory way, so that A can forget all promises that have been issued in connection with parking on P7 since the first complication arose at the pay station.

B.3 Trust and credibility
Credibility plays some role in this example: the statement made by Q_cr that he can see via the TV camera that the display of the pay station indicates that it is out of order lacks credibility. But that lack is not clear to Q_cr. The statement by Q_cr that payment at a reduced price can be performed with the RPPC also after entry with a paper ticket lacks credibility, as the interface of the pay station shows no sign of that option. The promise issued by X that after resetting (check out, neutralization) of the RPPC exit is possible, and that this step is insensitive to whether the car is inside or outside P7, lacks some credibility.

Trust plays a role just as well. A notices that once a promise is not kept, trust in the promiser is decreased almost unconsciously and remarkably. Once a promise is issued, that seems to create both trust and expectation at the same time. Once the promise turns out not to be kept, that is, the expectation is proven wrong, trust collapses.

B.4 Lessons learned for A as a user of RPPC at P7

Here are some practical lessons that A has acquired from the episode.

1. Upon entering P7, the simplest understanding of A's action, besides physically entering P7, is this: A promises to make use of an expected forthcoming promise, to be issued by the parking system, to exit at a reduced price after showing the RPPC and subsequently successfully paying the amount due. However, entering P7 does not engage A in an obligation of any form, at least not in an obligation which can be simply and completely stated. Thinking in terms of promises issued by A, by C_P staff members, and by the parking system, and in terms of implied expectations, credibility, and trust, provides a far more flexible and applicable model than thinking in terms of obligations. The problem having been solved coincides with A having the car outside P7 and all promises having been discharged, either kept or not kept.
2. Promises issued by C_P staff will necessarily play an essential role when solving some complications with the RPPC.
3. When A leaves P7 in an irregular way, the probability is high that check out has not occurred in a satisfactory manner. That difficulty will not go away, and its solution requires contacting Z. That should be done at the earliest convenient opportunity, preferably when the car is outside P7.
4. Control room staff know nothing about the pay station, the cards, and the underlying information system. But they may not always (or all) be willing to admit that state of affairs. They are likely to say whatever ends the discussion, without any wish to get it right. Control room staff cannot inspect what is on any of the three displays without the support of the client's visual information gathering on site. On the other hand, back office staff cannot operate the entry and exit barriers directly (but they can instruct control room staff to do so).
5. Parking for free seems to be possible for someone who is not afraid of extended and repeated intercom discussions, and who is not afraid to lie about how he has operated the equipment and about what is shown on the various displays.
6. A must distinguish four categories of C_P staff:
• control room staff (Q_cr, Q′_cr, Q″_cr in the example),
• general back office staff (X, Y in the example),
• parking area specialized back office staff (Z in the example),
• on site maintenance staff (not playing a role in this example, but often active in solving other problems).
7. A has no clue as to the scope of the various promises.
The extent to which discussions with C_P staff and related message histories are logged is unknown to A. Each category of C_P personnel has its own views, capabilities, and competences. These differences require different styles of interaction from A. Very different levels of theoretical insight into the issues can be noticed. Control room staff seem to assume that clients like A know in detail what information they can access from their workplace. This assumption is unwarranted (at least for A).

Contemplating alternative paths towards the solution of the original problem (pay station out of order), several questions remain for A. Answers to such questions matter in view of a potential reoccurrence of the same problem.

1. At what times is back office staff available? In other words, are there times of the day when back office staff cannot be reached and problems must be solved through interaction with control room staff only?
2. If the same problem appears once more, what is the most effective solution? Is that dependent on the time of day? Is it dependent on the time pressure that A is under?
3. Is it possible to take a paper ticket at entry (simultaneously with checking in with the RPPC) so as to have a method available for exiting efficiently (though at higher cost) if the same problem arises once more?
4. Is on site maintenance staff able to neutralize an RPPC? Stated differently: is asking for the support of on site maintenance staff an alternative to asking for a connection with back office staff?
5. Is control room staff able to switch an intercom conversation to back office staff, so that check out can be arranged via the intercom system alone (important when a mobile phone connection is unavailable)?
6. Is it advisable for A to find out the answers to the previous questions before the same complication arises once more? Or is the probability of these adverse events so low that learning by doing suffices in the future as well?

Besides these "lessons" there is much room for improvement of the system. Here are some suggestions.

• Control room staff should be able to read the various displays and the barrier status (probably already visible via TV), and should be informed in real time about the pay station status.
• Once the pay station is out of order, RPPC holders must be allowed exit without payment and with proper check out. (This requires a software modification.)
• RPPC holders must be able to check out without further payment when the barrier is open. This may require that control room staff visually inspect the situation and validate that a car is at the exit. If the car does not exit and the barrier closes with the car inside P7, control room staff must be able to undo the check out.
• If all fails and control room staff must open the barrier while the check out of an RPPC holder is in doubt, oral communication of the card number must be possible as a valid form of check out.
• Control room staff must be instructed on how to communicate validly about RPPCs.
B.5 Aspects of promise dynamics
This particular case study features a certain mix of promise dynamics. In other examples other features may be combined.

1. In the parking example quite a number of promises appear, all of which can be forgotten by A (and the other agents involved) once the problems have been solved. Only a modified trust assignment by A to the various staff member categories results from the episode.
2. When a promise is first issued by C_P staff personnel, A assigns a high expectation to its being kept and a high trust to the promiser. The very fact that a professional member of the parking system staff issues a promise creates a bonus leading to both initial trust and expectation. Trust and expectation remain high and unchallenged until either the promise is kept and trust increases, or the promise is broken and trust collapses.
3. Reputation based mechanisms play no role in the example. Degraded trust of A in parking support staff is turned by A into a change of the model that A has in mind, which allows A to deduce that certain promises lack sufficient credibility to be relied on. Promises lacking credibility are assimilated by A without further degrading his trust in the issuing promisers, because A thinks he understands (i) why these promisers do not know better and why they should not make such promises, and (ii) that other (credible) promises made by the same staff members are likely to be kept. So the trust (of A) has become agent specific and promise dependent.
4. In all circumstances A has been trusted by parking staff to the extent needed to resolve acute complications (that is, entry and exit). A has no clue as to whether or not his handling of the difficulties has modified that trust, and if so, for how long and with whom.

B.6 The parking exit problem and informal logic

In [3] promise theory has been displayed as a subject in informal logic. This line of thought merits further contemplation. As it turns out, the parking exit problem example provides a number of connections to informal logic. By first analyzing how informal logic relates to the reasoning that is applied by the various agents without any role of promising, it becomes possible to understand some informal logic aspects of promises.
B.6.1 From induction to deduction
A classification of reasoning with some support from informal logic (see [?]) is as follows: deductive reasoning produces conclusions from assumptions where the conclusions are at least as much justified as the assumptions; inductive reasoning produces conclusions from assumptions where the conclusions are plausible (understood in terms of subjective probability) relative to the assumptions; and conductive reasoning (or pro and con based reasoning) combines and weighs the combined impact of both supporting and opposing reasons for a single assertion.

• The parking exit problem example provides phases where each of these forms of reasoning is applied. To begin with, the trust that the system will work as intended at any moment of time comes about from inductive reasoning only. It is certainly impossible for any client to understand all implementation details of the system to such an extent that deductive reasoning provides the certainty that it will operate without flaws.
• Real time reasoning, in particular client based reasoning during (problems with) system use, may start as inductive reasoning and migrate towards deductive reasoning. The latter takes place once a client starts developing a mental model of the parking system. For instance:
1. Consider assumption R1: "If the pay machine display states 'out of order', payments cannot be made." This assumption may first emerge as a plausible fact in a (learning) phase where a client tries to pay in spite of the indication on the display. Then the client may conclude that display status "out of order" indicates with high likelihood that further attempts to use the pay machine are futile until the status has changed. After some time the client will use deductive reasoning from a mental model comprising rule R1.
2. Assumption R2: "Exit granted by control room staff will not check out the RPPC status." Again this fact may be a matter of inductive inference first, only to become an axiom permitting deductive inference once the client has developed a mental model of the working conditions of control room staff (monitoring a plurality of parking areas, each dealing with different equipment, different display systems and error messaging, and with different subscription card policies).
3. Assumption R3: "When the pay machine is out of order, RPPC holders need to check out by way of intercom communication, either with C_P control room staff at the exit terminal, or with back office staff via the mobile phone."
4. Assumption R4: "Control room staff is unaware which problems must be dealt with by back office staff; they only think in terms of sending on site maintenance staff." R4 is not a consequence of any model of the system; it remains an outcome of inductive reasoning. The validity of R4 may change in time.
• Conductive reasoning appears (hypothetically) in several circumstances:
1. in case A experiences the same complication once more, now after working hours, say at 10.00 PM. Now A must determine whether to try to phone back office staff first, or to deal with control room staff only and postpone formal check out to another day;
2. conductive reasoning is also called for if A finds the pay station out of order during working hours, but at a moment where A must act under severe time constraints;
3. if during working hours A (for whatever reason, known or unknown to A) is refused access after showing the RPPC, then A needs conductive reasoning to determine whether or not entry by means of a paper ticket is to be preferred;
4. conductive reasoning is also called for if A prefers to resolve a refused entry problem by means of interaction with back office staff, and A must determine whether or not to allow other persons to park (by driving away from the entry) before the problem has been resolved.

B.6.2 Induction and conduction on top of deduction
Once A has developed a model of how C_P operates P7, it becomes possible for A to derive the credibility of a variety of promises on the basis of deductive reasoning. (E.g., control room staff promising to check out an RPPC is not credible.) Deductive reasoning may govern to a large extent how A will handle a problem. Then A knows which promises must be viewed in the light of trust management and maintenance.

Some predictions cannot be made by deductive means, and induction remains unavoidable in such cases. A typical example is that A may assume that control room staff cannot connect an intercom exchange to back office staff, although A's mental model of the system allows for that option. A has inferred this limitation inductively because control room staff do not mention the option. But the conclusion might be wrong, and might prove wrong when tested explicitly. Another example is that A may assume back office staff to be unavailable outside normal working hours. This need not follow in a deductive manner from A's model, but it may follow with some plausibility, and for that reason it may still be wrong.

Conductive reasoning seems to apply when the same failure is experienced in slightly new circumstances. It plays a role in plan formation when different priorities or objectives have to be balanced. Promise assessment (in this case) is not a matter of conductive reasoning, so it seems.

Credible promises must be considered in the light of promiser trust. Such promises play a role in inductive reasoning, with expectations depending not only on promise content but also on the trust A has in the promiser. Reasoning will deliver a quantified plausibility that the promise will be kept. System behavior will produce assessments as to whether or not a promise is kept, and in either case an update of trust will take place.

Any high expectation that the system can be used via an RPPC as intended (or promised) is by necessity the result of inductive inference of some kind. It is unreasonable to expect parking client A to apply a deductive reasoning system about this subject which is able to deal with system failures. Even when dealing with known failures, A may need a combination of deductive, inductive, and conductive reasoning, each applied to an approximate model of the parking system and its management practice. On top of that, A may need both deductive reasoning to assess the credibility of promises issued by parking authority staff and inductive reasoning to assess the plausibility that promises will be kept, the latter form of reasoning taking inputs from a current trust level in various staff members (or classes), which is updated whenever it comes to light that a promise is kept or broken.

B.7 Parking exit problem case II
A week later A tries to exit P7 and the pay machine is clearly out of order. A phones the number N_bo of back office staff that was communicated before (see 8(d) in B.2 above), and is automatically told (promised) that no one is available and that, after stating name and number, the caller will be called back as soon as possible. That happens after a few minutes, by a back office staff member, say Y′, who asks about the problem. Upon understanding the cause of the problem, and without asking further questions, Y′ connects A back to the control room. Now A understands that he will be unable to get anywhere with checking out, and after some explanation he finds a control room staff member (say Q_cr) willing to open the barrier, whereupon A exits P7.

Now A still has to check out. Four successive times A calls the back office, being told by an answering machine that his call will be answered as soon as possible, which only takes place the next morning. Now A re-explains the entire chain of events to some back office staff member, say Y″, who, after asking for the card number, neutralizes the card and claims that all will be fine from now on. Subjectively A assesses that three out of the four promises (issued after A's leaving P7) have not been kept, but P7 management may claim that (i) by means of a single return call four promises may be kept at the same time, and (ii) returning the call the next morning was the best they could do. By now A regrets not having written down N′_bo, as it constitutes a connection that might yet give access to a living entity.

C Promise dynamics continued
In this appendix we continue with the extensive example from Section 6 above. We first expand the notation for trust with aggregates that are specific to either programs or application areas. Then we survey a plurality of attributes, both of programs and of tasks, that may need consideration in a more comprehensive setting. Finally we consider reputation based mechanisms for trust modification.
C.1 Extending the trust scale and mechanism
Of course the trust of q in p, denoted by T_q(p), can be measured on a linear ordinal scale with a higher resolution than five levels. Doing so without clear examples of its use is less convincing, however. Trust maintenance can be described by making use of a family of dedicated trust levels rather than a single one. Here are some examples, still in the context of programs P, Q, R, ... supposedly usable for tasks
U, V, W, .... We notice that, in the context of the previous examples, T_q(p) may be understood as q's trust in p in its capacity of being a supplier of programs, or in its capacity of being a consultant about programs that have been supplied by third parties.

C.1.1 Aggregates for specific accumulation of appreciation
Viewing trust and credibility as forms of appreciation, confidence and dependability can be categorized as such as well. By focusing appreciation on specific themes, dedicated aggregates of appreciation can be introduced for the accumulation of findings, judgements, and sentiments. Here are some examples:

task oriented credibility:
Given a task U, CR_q(p, [ ], U) represents q's view on the credibility of p as an agent authoritative on the suitability of a range of programs for task U.

task oriented trust: Given a task U, T_q(p, [ ], U) represents q's trust in p as an agent authoritative on the suitability of a range of programs for task U.

program oriented credibility: Given a program P, CR_q(p, P, [ ]) represents q's view on the credibility of p as an agent authoritative on the suitability of program P for a range of tasks.

program oriented trust: Given a program P, T_q(p, P, [ ]) represents q's trust in p as an agent authoritative on the suitability of program P for a range of tasks.

program/task oriented confidence: Given a program P and a task U, C_q(P, U) represents q's confidence that program P is useful for task U. Confidence is an abstraction (from promisers' identities) that q manufactures after having been issued promises by one or more promisers about the quality of P in relation to U.

program/task matching credibility: Given a program P and a task U, CR(P, U) represents a general level of credibility that program P is useful for task U. An appropriate level of matching credibility may be found from the documentation of the program.

program/task subjective dependability: By abstracting from (averaging out over a variety of) judgements of individual agents, a subjective (or rather intersubjective) level can be introduced for dependability. D(P, U) represents an agent community's evidence based perception of the degree to which P is suitable for U.

program/task objective dependability: Rather than taking subjective, though often assessment based, appreciations as a basis for dependability, objective criteria (testing, verification, validation, software process certification, etc.) may be taken as a basis for the development of an attribution of dependability to a program/task pair. Such information may be brought into circulation in a reputation flow based trust management network by an agent with a high status in software quality management and assessment.
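These aggregates can be held as maps from keyed dimensions to levels on the five point scale. The following Python sketch shows one possible representation, with a wildcard standing for the [ ] slot; the names are illustrative assumptions, not part of the original notation:

    # Sketch: appreciation aggregates on a five point scale (-2..2).
    # The wildcard ANY plays the role of the [ ] slot in the notation.
    ANY = "[]"

    class Appreciation:
        def __init__(self):
            self.credibility = {}   # (agent, program, task) -> level
            self.trust = {}         # (agent, program, task) -> level

        def set_task_credibility(self, agent, task, level):
            # CR_q(p, [], U): credibility of p for task U, any program.
            self.credibility[(agent, ANY, task)] = level

        def set_program_trust(self, agent, program, level):
            # T_q(p, P, []): trust in p concerning program P, any task.
            self.trust[(agent, program, ANY)] = level

        def task_credibility(self, agent, task):
            return self.credibility.get((agent, ANY, task), 0)  # default neutral

    # Usage: q records that consultant p is credible (level 1) on task U.
    q_view = Appreciation()
    q_view.set_task_credibility("p", "U", 1)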
C.1.2 The meaning of aggregate levels, an outline
For each of these dimensions of appreciation (credibility, trust, confidence, and dependability) the same three key questions can be posed regarding dynamics and impact. We will provide some provisional answers to these questions:

• What is a plausible explanation (informal meaning) of a level on the five point scale? We will only consider the program or task specific credibility and trust.

– Task U oriented credibility for an agent p (in the eyes of q) is plausibly as follows:
-2. (low) if (i) p has no experience with or advice about the usability of programs for tasks related to U, and (ii) p is professionally connected with a program producer offering programs supposedly capable of supporting task U;
-1. (moderately negative) if either p is not independent or p is lacking experience;
0. (neutral) if p is independent and if in addition p has relevant experience;
1. (moderately positive) if p has been consulting on a range of functionalities comparable to but different from U; and
2. (positive) if, in addition to the virtues creating a moderately positive judgement, p has a recognized reputation for the task at hand.

– Program P oriented credibility for an agent p (in the eyes of q) is plausibly as follows:
-2. (low) if (i) p has no experience with or advice about the usability of the program P or close relatives of it, and (ii) p is professionally connected with the producer of program P;
-1. (moderately negative) if either p is not independent (from the producer of P) or p is lacking experience with consulting about the capabilities of P;
0. (neutral) if p is independent (in the relevant way) and if in addition p has relevant experience;
1. (moderately positive) if p has been consulting on a range of applications of P;
2. (positive) if, in addition to the virtues creating a moderately positive judgement, p has a recognized reputation for consulting on applications of P.

– Task U oriented trust in an agent p (in the eyes of q) is plausibly as follows:
-2. (low) if q has been wrongly advised at least twice before by p about programs for task U, and the two most recent experiences of q with p's advice on this matter were negative;
-1. (moderately negative) if not low and if the most recent experience of q with p's advice was negative;
0. (neutral) if q has no relevant experience with p's advice on the matter;
1. (moderately positive) if q's most recent experience with p's advice was positive;
2. (positive) if q had two consecutive positive experiences with the advice of p, and these were q's most recent experiences with p.

This listing is incomplete because a reputation mechanism may overrule q's reliance on its own experience with p. When q is told about two other agents having very recent positive experiences with p's advice on the capability of programs to provide support for task U, the trust level may be raised, and conversely.

– Program P oriented trust in an agent p (in the eyes of q) is plausibly as follows:
-2. (low) if q has been wrongly advised at least twice before by p about applications of P, and the two most recent experiences of q with p's advice on this matter were negative;
-1. (moderately negative) if not low and if the most recent experience of q with p's advice was negative;
0. (neutral) if q has no relevant experience with p's advice on the matter;
1. (moderately positive) if q's most recent experience with p's advice was positive;
2. (positive) if q had two consecutive positive experiences with the advice of p, and these were q's most recent experiences with p.

This listing is incomplete because a reputation mechanism may overrule q's reliance on its own experience with p. When q is told about two other agents having very recent positive experiences with p's advice on applications of P, the trust level may be raised, and conversely.

• Which events produce updates of the levels? It is implicit in the above descriptions how credibility levels and trust levels may change in the course of the interaction with agent q.

• What effects on the handling of directionals can be expected from the different levels? Low or moderately low credibility of p in the eyes of q (for a consulting service s) plausibly has the effect that q will not ask p to provide s. In the presence of positive credibility, q may have a preference for an equally credible consultant who is most trusted.

An example of the working of this machinery may read as follows. In the style of the examples in Section 6, one may imagine an agent p promising to q the usability of program P for task U, and an agent p′ promising to an agent q′ that program P′ is suitable for the same task. After agent r has issued the imposition on q to perform U, q looks for an agent c such that c's task oriented credibility CR_q(c, [ ], U) is sufficient (level 1 or level 2) and such that, among its peers, c's task oriented trust T_q(c, [ ], U) is maximal. Having found c, q proposes that c consult q about which of the two programs is best suited for performing task U. Then c may accept the job of consulting q by promising q that it will do so, by way of issuing a proposal. Subsequently q promises to make use of c's advice, and after having noticed the proposal by c for a choice between both programs, q chooses the program to be applied accordingly.

C.1.3 Product, task, user, and provider attributes

In order to extend the examples of the use of refinements of the trust scale in connection with the issuing of, and reaction to, directionals by a plurality of agents, we extend the setting of programs and tasks with additional attributes of both. We will assume that each attribute is measured on an ordinal five point scale, with -2 representing very low and 2 representing very high.

Below we make an attempt to list a fairly comprehensive collection of attributes that may arise in the context of our running example. Although only a few of these attributes will play a role in subsequent examples, for the remaining attributes examples of their use in the context of the exchange of promises and impositions can easily be imagined.

1. program development time,
2. program development cost,
3. program quality (speed, precision, flexibility),
4. program manufacturing process documentation availability,
5. program system/installation/hardware specificity,
6. program testability,
7. program maintainability,
8. program user base size,
9. program size (e.g. measured in LOC),
10. program dependability (1 - the probability of occurrence of a failure during 1 year of normal use),
11. task description availability,
12. task ubiquity (many agents in need of the functionality),
13. task complexity,
14. task safety criticality,
15. task evolution speed,
16. user awareness of required task functionality,
17. user dependency on task,
18. user access to alternative program providers for the given task,
19. user competence for program/task failure detection and diagnosis,
20. provider track record for producing programs for the given task,
21. provider size,
22. provider profitability and stability,
23. provider reputation,
24. provider certification,
25. provider software process documentation availability,
26. provider software process maturity level,
27. provider software process involving formal specification and verification,
28. provider dependence on the market of programs for the given task.
C.1.4 Expanding the trust network and mechanisms
We will now expand the setting of the example with the assumption that agent (program constructor) C_P is the provider of program P, that C_R has constructed R, and so on, and that this and much more information provides a background for all promises and other directionals about P.

In the presence of information regarding these attributes, several additional rules of behavior can be contemplated. In practice a vast and hardly systematically charted collection of such rules may underlie the control logic of agent q's assessment and update of p's credibility, as well as q's manner of making use of the resulting trust levels. In the background, trust maintenance concerning program providers is needed, and its relation with consultants (such as p) must be captured in a suitable logic.

1. If p promises q that P is adequate for task U, then the program oriented credibility of p is low, and for that reason q may not install P for use with task U, provided one of the following (combinations of) conditions is satisfied:
• (i) provider size is very low and (ii) program size is very high, or
• (i) program cost is very low, (ii) the program user base is small, (iii) user dependency on the task is high, (iv) task ubiquity is low, and (v) program development time is high, or
• (i) program dependability is low, (ii) task safety criticality is high, and (iii) user dependency on the task is high, or
• (i) program maintainability is low, (ii) task evolution speed is high, (iii) user awareness of required task functionality is low, and (iv) user access to alternative program providers for the given task is high.
We notice that many more such combinations of conditions can be found.
2. If p promises q that P is adequate for task U, then the program oriented credibility of p is high, and for that reason q will install P and prepare it for use with task U, provided one of the following (combinations of) conditions is satisfied:
• (i) program quality is high, (ii) program dependability is high, (iii) the program user base is large, and (iv) user dependency on the task is moderate, or
• (i) program dependability is high, (ii) task safety criticality is high, (iii) user dependency on the task is high, and (iv) user access to alternative program providers for the given task is low, or
• (i) program maintainability is high, (ii) task evolution speed is low, (iii) user awareness of required task functionality is high, (iv) user access to alternative program providers for the given task is low, (v) provider reputation is moderate, (vi) program cost is moderate, (vii) program development time is moderate, and (viii) task description availability is high.
Again we notice that many more such combinations of conditions can be found.
3. If p promises q that P is adequate for task U, then the task oriented credibility of p is high, and for that reason q will install P and prepare it for use with task U, provided one of the following (combinations of) conditions is satisfied:
• (i) provider track record for producing programs for task U is high, (ii) the task is of low safety criticality, (iii) user competence for program/task failure detection and diagnosis is high, and (iv) the task is highly user specific, or
• (i) provider track record for producing programs for task U is high, (ii) the task is highly safety critical, (iii) user awareness of required task functionality is high, (iv) user competence for program/task failure detection and diagnosis is high, (v) user dependency on task is high, and (vi) the task is highly user specific.

The supply of such rules seems endless, though in practice a learning system might develop such rules (semi-)automatically and add them as needed to its rule base.
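Such a rule base lends itself to a table-driven encoding: each rule pairs a combination of attribute conditions with a credibility verdict and a resulting action for q. The Python sketch below is our reading only, with assumed numeric interpretations of the qualitative levels ("very low" as −2, "low" as at most −1, "high" as at least 1, "very high" as 2); it encodes just the first combination of rule 1.

```python
# Sketch (our reading): a table-driven rule base for q's credibility
# verdicts. Each rule is a list of (attribute, predicate) conditions over
# levels -2..2; if any rule in the "low credibility" table fires, q treats
# p's program oriented credibility as low and declines to install P.

def very_low(v):  return v == -2
def very_high(v): return v == 2
def low(v):       return v <= -1
def high(v):      return v >= 1   # assumed numeric reading of "high"

# Rule 1, first combination: provider size very low, program size very high.
LOW_CREDIBILITY_RULES = [
    [("provider_size", very_low), ("program_size", very_high)],
    # ... the remaining combinations of rules 1-3 can be added analogously
]

def credibility_is_low(attrs, rules=LOW_CREDIBILITY_RULES):
    """attrs: dict mapping attribute names to levels in -2..2."""
    return any(all(pred(attrs[name]) for name, pred in rule) for rule in rules)

attrs = {"provider_size": -2, "program_size": 2}
print(credibility_is_low(attrs))  # -> True: q does not install P for U
```

A learning system of the kind mentioned above would then amount to machinery that adds, removes, or reweights entries of such tables.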
C.2 Balancing imposition strength and promiser trust levels

In the examples above we have assumed that a request on q to perform a task U takes the form of a corresponding imposition on q issued by some other agent s. Now impositions are requests for voluntary cooperation, and for that reason q's trust in s enters the picture. Assuming again a five level scale for that form of trust, one may wonder how it might interact with the scenarios outlined above. Here are two rules for the interplay between impositioner trust and product supplier/consultant trust.

1. Suppose that s imposes U on q; then if
(a) $T_q(s) = 2$, and
(b) q has installed and prepared (only) program P for task U, and
(c) q has been promised that P is adequate for U only by p,
then:
(a) if $T_q(p) = 2$ then q will use P for task U, and
(b) if $T_q(p) = 1$ then q will issue a warning to s that its relevant trust level is positive but not optimal, and wait for a reply by s (either the proposal not to be bothered and to carry on with using P for task U, or the proposal to quit complying with its previous imposition altogether), and
(c) if $T_q(p) \leq 0$ then q will propose to s to withdraw its imposition (for q to perform U).

This rule embodies the idea that q's high trust in s is reflected by q applying maximal scrutiny to avoid s being confronted with a failure when U is performed by q. Here q prefers not delivering service U to risking the delivery of a faulty service.

2. Suppose that s imposes U on q; then if
(a) $T_q(s) = 1$, and
(b) q has installed and prepared (only) program P for task U, and
(c) q has been promised that P is adequate for U only by p,
then:
(a) if $T_q(p) \geq 1$ then q will use P for task U, and
(b) if $T_q(p) \leq 0$ then q will propose to s to withdraw its imposition (for q to perform U).

Having less trust in s than in the case of the first rule, q takes a higher risk of failure when performing U with the help of P upon the request by s.
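Operationally, the two rules form a small decision procedure over the pair of trust levels $(T_q(s), T_q(p))$. The Python sketch below is our reading of them, including the threshold levels we reconstructed above; the returned action labels are hypothetical.

```python
# Sketch (our reading): q's handling of an imposition of task U by s,
# given q's trust T_s = T_q(s) in the impositioner and T_p = T_q(p) in
# the promiser p of P's adequacy. Levels lie on the scale -2..2.

def handle_imposition(T_s: int, T_p: int) -> str:
    if T_s == 2:          # rule 1: maximal scrutiny for a highly trusted s
        if T_p == 2:
            return "use P for task U"
        if T_p == 1:
            return "warn s (trust positive but not optimal), await reply"
        return "propose that s withdraw the imposition"
    if T_s == 1:          # rule 2: q accepts a higher risk of failure
        if T_p >= 1:
            return "use P for task U"
        return "propose that s withdraw the imposition"
    return "not covered by the two rules above"

print(handle_imposition(2, 1))  # -> warn s ..., await reply
print(handle_imposition(1, 1))  # -> use P for task U
```

The contrast between the two prints shows the balancing at work: the same promiser trust level 1 suffices for action under a moderately trusted impositioner but triggers a warning under a highly trusted one.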
C.3 Reputation infection

The existence of a community $C_p$ of agents, each independently maintaining trust in p, suggests consideration of mechanisms for allowing q's trust in p to be positively affected by the presence of high trust in p for a significant number of other members of $C_p$.

If we define p's reputation within $C_p$ as the distribution of trust in p over the members of $C_p$, then reputation infection takes place if reputation evolves to a modified reputation by means of a mechanism which involves comparison and communication of trust levels between different members of $C_p$ only.
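Under this definition, p's reputation is simply a histogram of trust levels over $C_p$. The Python sketch below is our illustration; in particular the concrete infection step, in which every member moves one level toward the current community maximum, is an assumption of ours and not prescribed by the text, which requires only that updates use comparison and communication of trust levels among members.

```python
# Sketch (our illustration): p's reputation within C_p as the distribution
# of trust levels, plus one assumed infection step in which each member
# moves one level toward the current community maximum. Only comparison
# and communication of levels between members is used, as required.
from collections import Counter

def reputation(trust):  # trust: dict member -> level in -2..2
    return Counter(trust.values())

def infection_step(trust):
    peak = max(trust.values())
    return {m: min(2, v + 1) if v < peak else v for m, v in trust.items()}

trust = {"q1": 2, "q2": 0, "q3": -1}
print(reputation(trust))                  # Counter({2: 1, 0: 1, -1: 1})
print(reputation(infection_step(trust)))  # mass shifts toward level 2
```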
C.3.1 Letter of recommendation (LOR) based reputation flow

In this paragraph we will outline how the spreading out of trust and the conversion of trust into reputation might work in the context of our running example.

Suppose that q entertains $T_q(p) = 0$ about p and that a promise $m = p[\pi_{\alpha}(P, U) : \text{“P is adequate for task } U\text{”} / \{p, q, r\}]\,q$ is issued by p. Rather than refusing to install P, q may first propose to q′, a peer of q, that q′ tells q about its trust in p. The reaction of q′ to this proposal determines how q will deal with p's promise m.

1. If $T_{q'}(p) = -2$ then q′ communicates that fact to q, and q adapts $T_q(p)$ to $\min(-2, T_q(p))$.
2. If $T_{q'}(p) = -1$ or $T_{q'}(p) = 0$ it is plausible that q′ refuses (that is, promises not) to send this information to q.
3. If $T_{q'}(p) = 1$ then q′ will communicate that fact to q, upon which q sets $T_q(p)$ to 1.
4. If $T_{q'}(p) = 2$ then that is communicated by q′ to q, upon which q sets $T_q(p)$ to 2.

In the latter case it is plausible that q reconsiders promise m just issued by p.
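The four cases amount to a conditional update of $T_q(p)$ driven by q′'s report. The Python sketch below is our reading, including the digit values we reconstructed for the cases above; representing q′'s refusal as None is an encoding choice of ours.

```python
# Sketch (our reading): q requests a LOR from its peer q_prime about p and
# updates T_q(p) according to the four cases above. None models q_prime's
# refusal (its promise not to send the information).

def lor_report(T_qprime_p: int):
    """q_prime's reaction to q's proposal, given its own trust in p."""
    if T_qprime_p in (-1, 0):
        return None           # q_prime promises not to send the information
    return T_qprime_p         # -2, 1 or 2 is communicated to q

def update_trust(T_q_p: int, report) -> int:
    if report is None:
        return T_q_p          # no information, no update
    if report == -2:
        return min(-2, T_q_p) # adopt the reported (worst) level
    return report             # adopt the reported positive level

print(update_trust(0, lor_report(2)))   # -> 2: q reconsiders promise m
print(update_trust(0, lor_report(-1)))  # -> 0: refusal, trust unchanged
```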
This mechanism involves a request (cast as a proposal), issued by q to q′, for a letter of recommendation (LOR) about p. In case $T_{q'}(p) \geq 1$ the LOR is produced by q′ in the form of an imposition (by q′ on q) to take notice of that state of affairs.

This very simple mechanism of reputation based trust generation can easily be included in the above examples.

C.3.2 Third party survey based reputation infection