Multiagent Approach for the Representation of Information in a Decision Support System
Fahem Kebair and Frédéric Serin

Université du Havre, LITIS - Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes, 25 rue Philippe Lebon, 76058, Le Havre Cedex, France
{fahem.kebair, frederic.serin}@univ-lehavre.fr

Abstract.
In an emergency situation, the actors need assistance allowing them to react swiftly and efficiently. In this prospect, we present in this paper a decision support system that aims to prepare actors in a crisis situation thanks to a decision-making support. The global architecture of this system is presented in the first part. Then we focus on a part of this system which is designed to represent the information of the current situation. This part is composed of a multiagent system that is made of factual agents. Each agent carries a semantic feature and aims to represent a partial view of a situation. The agents develop thanks to their interactions, by comparing their semantic features using proximity measures and according to specific ontologies.
Keywords.
Decision support system, Factual agent, Indicators, Multi-agent system, Proximity measure, Semantic feature.
Introduction

Making a decision in a crisis situation is a complicated task. This is mainly due to the unpredictability and the rapid evolution of the environment state. Indeed, in a critical situation time and resources are limited. Our knowledge about the environment is incomplete and uncertain, even obsolete. Consequently, it is difficult to act and to adapt to the hostile conditions of the world. Hence the serious need for a robust, dynamic and intelligent planning system for search-and-rescue operations, able to cope with the changing situation and to best save people [9]. The role of such a system is to provide an emergency planning that allows actors to react swiftly and efficiently to a crisis case.

In this context, our aim is to build a system designed to help decision-makers manage cases of crisis with an original representation of information. From the system point of view, detecting a crisis implies its representation, its characterisation and its permanent comparison with other crises stored in a scenarios base. The result of this comparison is provided to the user as the answer of the global system.

The idea began with the speech interpretation of human actors during a crisis [3], [5]. The goal was to build an information and communication system (ICS) which enables the management of emergency situations by interpreting aspects of the communications created by the actors. Then, a preventive vigil system (PVS) [1] was designed with the means of some technologies used in the ICS modelling, such as: semantic features, ontologies, and agents with internal variables and behavioural automata.
The PVS aims either to prevent a crisis or to deal with it, with a main internal goal: detecting a crisis.

Since 2003, the architecture of the PVS has been redesigned with a new specificity, namely its generic aspect; generic is used here with a different meaning from [13]. A part of the global system, which is responsible for the dynamic information representation of the current situation, was applied to the game of Risk and tested thanks to a prototype implemented in Java [10]. However, we postulate that some parts of the architecture and, at a deeper level, some parts of the agents are independent of the subject used as application. Therefore, the objective at present is to connect this part to the other parts, which we present later in this paper, and to test the whole system on various domains, such as RoboCup Rescue [11] and e-learning.

We focus here on the modelling of the information representation part of the system, which we intend to use in a crisis management support system.

The paper begins with the presentation of the global system architecture. The core of the system is constituted by a multiagent system (MAS) which is structured in three multiagent layers. Then, in section 3, we explain the way we formalise the environment state and extract information related to it, written in the form of semantic features. The latter constitute data that feed the system permanently and that carry information about the current situation. The semantic features are handled by factual agents and are compared with one another using specific ontologies [2]. Factual agents, which compose the first layer of the core, are presented thereafter in section 4. Each agent carries a semantic feature and aims to reflect a partial view of the situation.
We present their structures and their behaviours inside their organisation, using an internal automaton and indicators. Finally, we present a short view of the game of Risk test, in which we describe the model application and the behaviour of factual agents.
Global Architecture of the Decision Support System

The role of the decision support system (DSS) is to provide a decision-making support to the actors in order to assist them during a crisis case. The DSS also allows managers to anticipate the occurrence of potential incidents thanks to a dynamic and continuous evaluation of the current situation. Evaluation is realised by comparing the current situation with past situations stored in a scenarios base. The latter can be viewed as one part of the knowledge we have on the specific domain.

The DSS is composed of a core and three parts which are connected to it (figure 1):
• A set of user-computer interfaces and an intelligent interface allow the core to communicate with the environment. The intelligent interface controls and manages the authenticated users' access to the core, filters entry information and provides actors with results emitted by the system;
• An inside query MAS ensures the interaction between the core and world information. This information represents the knowledge the core needs. The knowledge includes the scenarios, which are stored in a scenarios base, the ontologies of the domain and the proximity measures;
• An outside query MAS has as role to provide the core with information stored in network distributed information systems.

Fig. 1. General Architecture of the DSS
The Core of the DSS

The core of the decision support system is made of a MAS which is structured in three layers. These layers contain specific agents that differ in their objectives and in their ways of communicating. First, the system describes the semantics of the current situation thanks to data collected from the environment. Then it analyses pertinent information extracted from the scenario. Finally, it provides an evaluation of the current situation and a decision support using a dynamic and incremental case-based reasoning.

The three layers of the core are:
• The lowest layer: factual agents;
• The intermediate layer: synthesis agents;
• The highest layer: prediction agents.

Information comes from the environment in the form of semantic features, without a priori knowledge of its importance. The role of the first layer (the lowest one) is to deal with these data thanks to factual agents and to let emergence detect some subsets of all the information [7]. More precisely, the set of these agents will enable the appearance of a global behaviour thanks to their interactions and their individual operations. The system will thereafter extract from this behaviour the pertinent information that represents the salient facts of the situation.

The role of the synthesis agents is to deal with the agents that emerged from the first layer. Synthesis agents aim to dynamically create clusters of factual agents according to their evolutions. Each cluster represents an observed scenario. The set of these scenarios will be compared to past ones in order to deduce their potential consequences.

Finally, the upper layer will build a continuous and incremental process of recollection for dynamic situations. This layer is composed of prediction agents and has as goal to continuously evaluate the degree of resemblance between the current situation and its associated scenario. Each prediction agent will be associated to a scenario that it will bring closer, from a semantic point of view, to other scenarios for which we already know the consequences. The result of this comparison constitutes support information that can help a manager to make a good decision.
Fig. 2. Architecture of the Core
Environment Study and Creation of Semantic Features
To formalise a situation means to create a formal system, in an attempt to capture the essential features of the real world. To realise this, we model the world as a collection of objects, where each one holds some properties. The aim is to define the environment objects following the object paradigm. Therefore, we build a structural and hierarchical form in order to give a meaning to the various relations that may exist between them. The dynamic change of these objects' states, and even more the interactions that could be entrenched between them, will provide us with a snapshot description of the environment. In our context, information is decomposed into atomic data where each datum is associated to a given object.
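As an illustration of this object decomposition, the sketch below (hypothetical names, not the paper's Java implementation) models a world object holding properties that are filled in by atomic pieces of information:

```python
# Toy sketch: the world is a collection of objects, each holding properties;
# information arrives as atomic data attached to one object.

class WorldObject:
    def __init__(self, name):
        self.name = name
        self.properties = {}           # qualification -> value

    def observe(self, qualification, value):
        """Record one atomic piece of information about this object."""
        self.properties[qualification] = value

# Example values taken from the Risk application described later in the paper.
quebec = WorldObject("Quebec")
quebec.observe("player", "green")
quebec.observe("nbArmies", 4)
print(quebec.properties)
```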
A semantic feature is an elementary piece of information coming from the environment, which represents a fact that occurred in the world. Each semantic feature is related to an object (defined in section 3.1) and allows to define all or a part of this object. A semantic feature has the following form: (key, (qualification, value)+), where key is the described object and (qualification, value)+ is a set of couples formed by the qualification of the object and its associated value. An example of a semantic feature related to a phenomenon object: (phenomenon-…,

The proximity measure between two semantic features takes a value in the interval [-1, 1] and is associated to a scale. The reference value in this scale is 0, which means a neutral relation between the two compared semantic features. Otherwise, we can define the scale as follows: 0.4 = Quite Close, 0.7 = Close, 0.9 = Very Close, 1 = Equal. Negative values mirror positive ones (replacing close by different).
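A minimal sketch of a semantic feature and a proximity measure in [-1, 1] could look as follows. Both functions are illustrative assumptions: in the actual system the comparison relies on domain ontologies, not on the toy agreement score used here.

```python
# Hypothetical sketch of a semantic feature in the (key, (qualification, value)+)
# form, and a toy proximity measure in [-1, 1] (0 = neutral, 1 = equal).

def make_feature(key, *pairs):
    """Build a semantic feature: a key plus (qualification, value) couples."""
    return {"key": key, "couples": dict(pairs)}

def proximity(f1, f2):
    """Toy proximity: fraction of agreeing couples mapped onto [-1, 1].
    A real system would consult the domain ontology instead."""
    if f1["key"] != f2["key"]:
        return 0.0                                    # unrelated objects: neutral
    shared = set(f1["couples"]) & set(f2["couples"])
    if not shared:
        return 0.0
    agree = sum(f1["couples"][q] == f2["couples"][q] for q in shared)
    return (2 * agree - len(shared)) / len(shared)    # in [-1, 1]

a = make_feature("phenomenon-1", ("type", "fire"), ("intensity", 5))
b = make_feature("phenomenon-1", ("type", "fire"), ("intensity", 3))
print(proximity(a, b))  # 0.0: one couple agrees, one disagrees
```

On the scale above, a result of 0.7 would read as "Close" and -0.9 as "Very Different".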
Factual Agents

Factual agents are hybrid agents: they are both cognitive and reactive agents. They have therefore the following characteristics: reactivity, proactiveness and social ability [14]. Such an agent represents a feature with a semantic character and also has, to formulate this character feature, a behaviour [4]. This behaviour ensures the agent's activity, proactiveness and communication functions.

The role of a factual agent is to manage the semantic feature that it carries inside the MAS. The agent must develop to acquire a dominating place in its organisation and, consequently, to make the semantic category which it represents prevail. For this, the factual agent is designed with an implicit goal, which is to gather around it as many friends as possible in order to build a cluster. In other words, the purpose of the agent is to permanently add to its acquaintances network a great number of semantically close agents. The cluster formed by these agents is recognised by the system as a scenario of the current situation, for which it can bring a potential consequence. A cluster is formed only when its agents are strong enough and consequently are in an advanced state in their automaton. Therefore, the goal of the factual agent is to reach the action state, in which it is supreme and its information may be regarded by the system as relevant.
Fig. 3. Structure of a Factual Agent
An internal automaton describes the behaviour and defines the actions of the agent. Some indicators and an acquaintances network allow the automaton operation; that means they help the agent to progress inside its automaton and to execute actions in order to reach its goal. These characteristics express the proactiveness of the agent.

The acquaintances network contains the addresses of the friend agents and the enemy agents, used to send messages. This network is dynamically constructed and permanently updated. Agents are friends (enemies) if their semantic proximities are strictly positive (negative).
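The structure just described could be sketched as follows; the class and field names are illustrative assumptions, not the paper's Java prototype:

```python
# Illustrative sketch of a factual agent's structure: the semantic feature
# it carries, its five indicators, and an acquaintances network split into
# friends and enemies according to the sign of the proximity measure.

class FactualAgent:
    def __init__(self, feature):
        self.feature = feature          # the semantic feature it carries
        self.state = "initialisation"   # current ATN state
        self.pp = 0.0                   # pseudoPosition
        self.ps = 0.0                   # pseudoSpeed
        self.pa = 0.0                   # pseudoAcceleration
        self.si = 0.0                   # satisfactory indicator
        self.ci = 0.0                   # constancy indicator
        self.friends = set()            # addresses of semantically close agents
        self.enemies = set()            # addresses of semantically distant agents

    def meet(self, other, prox):
        """Update the acquaintances network from a proximity measure:
        strictly positive -> friend, strictly negative -> enemy."""
        if prox > 0:
            self.friends.add(other)
            self.enemies.discard(other)
        elif prox < 0:
            self.enemies.add(other)
            self.friends.discard(other)

agent = FactualAgent({"key": "phenomenon-1"})
agent.meet("agent-7", 0.7)   # Close: becomes a friend
agent.meet("agent-9", -0.4)  # Quite Different: becomes an enemy
print(sorted(agent.friends), sorted(agent.enemies))
```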
The internal behaviour of a factual agent is described by a generic augmented transition network (ATN). The ATN is made of four states [3] (quoted above) linked by transitions:
• Initialisation state: the agent is created and enters in activities;
• Deliberation state: the agent searches among its acquaintances for allies in order to achieve its goals;
• Decision state: the agent tries to control its enemies to be reinforced;
• Action state: it is the state-goal of the factual agent, in which the latter demonstrates its strength by acting and liquidating its enemies.
Fig. 4. Generic Automaton of a Factual Agent
ATN transitions are stamped by a set of conditions and a sequence of actions. Conditions are defined as thresholds using internal indicators. The agent must thus validate one of the transitions outgoing from its current state in order to pass to the next state. The actions of the agent may be an aggression of an enemy or a help to a friend. The choice of the actions to perform depends both on the type of the agent and on its position in the ATN.
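A minimal sketch of such a four-state ATN is given below. Only the Deliberation-to-Decision condition (PS strictly positive) is taken from the paper's Risk example; the other threshold predicates are hypothetical stand-ins for the application-specific conditions.

```python
# Minimal sketch of the four-state ATN. Each state maps to a list of
# (target state, condition) pairs; conditions read the agent's indicators.
# Only deliberation -> decision (PS > 0) comes from the paper; the rest
# are illustrative assumptions.

ATN = {
    "initialisation": [("deliberation", lambda ag: True)],
    "deliberation":   [("decision",     lambda ag: ag["ps"] > 0)],
    "decision":       [("action",       lambda ag: ag["pa"] > 0),
                       ("deliberation", lambda ag: ag["ps"] <= 0)],
    "action":         [("deliberation", lambda ag: ag["ps"] < 0)],
}

def step(agent):
    """Fire the first outgoing transition whose condition holds."""
    for target, cond in ATN[agent["state"]]:
        if cond(agent):
            agent["state"] = target
            return
    # no transition validated: the agent stays in its current state

agent = {"state": "initialisation", "ps": 1.0, "pa": 0.5}
step(agent)  # initialisation -> deliberation
step(agent)  # deliberation -> decision, since PS is strictly positive
print(agent["state"])
```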
Factual Agent Indicators
The dynamic measurement of an agent's behaviour and its state progression at a given time are given thanks to indicators. These characters are significant parameters that describe the activity variations of each agent and its structural evolution. In other words, the agent state is specified by the set of these significant characters, which allow both the description of its current situation and the prediction of its future behaviour [4] (quoted above).

A factual agent has five indicators: pseudoPosition (PP), pseudoSpeed (PS), pseudoAcceleration (PA), satisfactory indicator (SI) and constancy indicator (CI) [8]. The "pseudo" prefix means that these indicators are not a real mathematical speed or acceleration: we chose a constant interval of time of one between two evolutions of semantic features. PP represents the current position of an agent in the agent representation space. PS evaluates the evolution speed of PP, and PA is an estimation of the evolution of PS. SI is a valuation of the success of a factual agent in reaching and staying in the deliberation state; this indicator measures the satisfaction degree of the agent. CI represents the tendency of a given factual agent to transit both from a state to a different state and from a state to the same state; this allows the measurement of the stability of the agent's behaviour.

These indicators are computed according to the following formulae, where valProximity depends on the category of the factual agents of a given application:
PP(t+1) = valProximity
PS(t+1) = PP(t+1) - PP(t)
PA(t+1) = PS(t+1) - PS(t)

PP, PS and PA represent thresholds that define the conditions of the ATN transitions. The definition of these conditions is specific to a given application. As shown in the previous formulae, only PP is specific. PS and PA are generic and are deduced from PP. SI and CI are also independent of the studied domain and are computed according to the agent's movement in its ATN.
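The recurrences above can be checked with a short worked example. The valProximity sequence used here is made up for illustration; in the Risk application it is the change in a territory's army count.

```python
# Worked example of the indicator recurrences:
#   PP(t+1) = valProximity
#   PS(t+1) = PP(t+1) - PP(t)
#   PA(t+1) = PS(t+1) - PS(t)

def update(pp, ps, val_proximity):
    """One evolution step of the PP/PS/PA indicators."""
    new_pp = val_proximity
    new_ps = new_pp - pp
    new_pa = new_ps - ps
    return new_pp, new_ps, new_pa

pp, ps = 0, 0
for val in [3, 1, -2]:          # hypothetical successive valProximity values
    pp, ps, pa = update(pp, ps, val)
print(pp, ps, pa)               # final indicators after three evolutions
```

After the three steps the agent ends with PP = -2, PS = -3, PA = -1: its position dropped, and both the speed and the acceleration of that drop are negative, which is exactly the kind of variation the ATN thresholds test.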
Application to the Game of Risk

The first layer model has been tested on the game of Risk. We chose this game as application not only because it is well suited for crisis management but also because we apprehend well the elements and the actions of such an environment. Moreover, we have an expert [8] (quoted above) in our team who is able to evaluate and validate results at any moment. As a result, this test proved that this model allows the dynamic information representation of the current situation thanks to the factual agents organisation. Moreover, we could study the behaviour and the dynamic evolution of these agents.

Risk is a strategic game which is composed of a playing board representing a map of forty-two territories that are distributed on six continents. A player wins by conquering all territories or by completing his secret mission. In turn, each player receives and places new armies and may attack adjacent territories. An attack is one or more battles fought with dice. Rules, tricks and strategies are detailed in [12].

The representation layer of the system has as role to simulate the game unwinding and to provide a semantic instantaneous description of its current state. To achieve this task, we began by identifying the different objects that define the game board (figure 5), which are: territory, player, army and continent. Continents and territories are regarded as descriptions of a persistent situation, whereas armies and players are activities, respectively observed (occupying a territory) and driving the actions.

Fig. 5. Class Diagram for the Game of Risk Representation
From this model we distinguish two different types of semantic features: a player type and a territory type. For example, (Quebec, player, green, nbArmies, 4, time, 4) is a territory semantic feature that means the Quebec territory is owned by the green player and has four armies. However, (blue, nbTerritories, 4, time, 1) is a player semantic feature that signifies the blue player has four territories at step 1.

The first semantic features extracted from the initial state of the game cause the creation of factual agents. For example, a semantic feature such as (red, nbTerritories, 0, time, 1) will cause the creation of the red player factual agent.

During the game progression, the entry of a new semantic feature into the system may affect the state of some agents. A factual agent of type (Alaska, player, red, nbArmies, 3, time, 10) becomes (Alaska, player, red, nbArmies, -2, time, 49) with the entry of the semantic feature (Alaska, player, red, nbArmies, 1, time, 49). The Alaska agent sends messages containing its semantic feature to all the other factual agents to inform them about its change. The other agents compare their own information with the received one. If an agent is interested by this message (the proximity measure between the two semantic features is not null), it updates its semantic feature accordingly. If the red player owned GB before the semantic feature (GB, player, blue, nbArmies, 5, time, 52), both the red player and the blue player agents will receive messages because of the change of the territory owner.

If we take again the preceding example (the Alaska territory), the Alaska agent computes its new PP (valProximity). The computation of valProximity in our case is given by: number of armies (t) - number of armies (t-1), e.g. here valProximity = 1 - 3 = -2. PS and PA are deduced thereafter from PP. The agent then verifies the predicates of the transitions outgoing from its current state in order to change state. To pass from the Deliberation state to the Decision state, for example, the PS must be strictly positive. During this transition, the agent will send a SupportMessage to a friend and an AgressionMessage to an enemy.
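The Alaska example can be replayed in a few lines. The dictionary encoding of the semantic features is an illustrative assumption; the valProximity rule is the one stated above for the Risk application.

```python
# Replay of the Alaska example: the territory agent's new PP (valProximity)
# is the change in its army count between two successive semantic features.

def val_proximity(old_feature, new_feature):
    """Risk territory PP: number of armies (t) - number of armies (t-1)."""
    return new_feature["nbArmies"] - old_feature["nbArmies"]

old = {"key": "Alaska", "player": "red", "nbArmies": 3, "time": 10}
new = {"key": "Alaska", "player": "red", "nbArmies": 1, "time": 49}
pp = val_proximity(old, new)
print(pp)  # -2, as in the example above: 1 - 3
```

A strictly negative PP like this one yields a negative PS, so the Alaska agent cannot validate the Deliberation-to-Decision transition at this step.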
Conclusion

This paper has presented a decision support system which aims to help decision-makers to analyse and evaluate a current situation. The core of the system rests on an agent-oriented multilayer architecture. We have described here the first layer, which aims to provide a dynamic information representation of the current situation and of its evolution in time. This part is modelled with an original information representation methodology, based on the handling of semantic features by a factual agents organisation.

The model of the first layer was applied to the game of Risk. Results provided by this test correspond to our aims, which consist in the dynamic representation of information. This application allowed us to track the behaviour of factual agents and to understand which of their parameters are the most accurate to characterise information. Moreover, we consider that a great part of the system is generic and may be carried into other fields. Currently, we intend first to connect the representation layer to the two other layers, and thereafter to apply the whole system to more significant domains such as RoboCup Rescue and e-learning.
References

2nd Indian International Conference on Artificial Intelligence