Engineering Cooperative Smart Things based on Embodied Cognition
Nathalia Moraes do Nascimento, Carlos Jose Pereira de Lucena
Software Engineering Laboratory (LES), Department of Informatics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil. Email: nnascimento, [email protected]
Abstract—The goal of the Internet of Things (IoT) is to transform any thing around us, such as a trash can or a street light, into a smart thing. A smart thing has the ability of sensing, processing, communicating and/or actuating. In order to achieve the goal of a smart IoT application, such as minimizing waste transportation costs or reducing energy consumption, the smart things in the application scenario must cooperate with each other without a centralized control. Inspired by known approaches to design swarms of cooperative and autonomous robots, we modeled our smart things based on the embodied cognition concept. Each smart thing is a physical agent with a body composed of a microcontroller, sensors and actuators, and a brain that is represented by an artificial neural network. This type of agent is commonly called an embodied agent. The behavior of these embodied agents is autonomously configured through an evolutionary algorithm that is triggered according to the application performance. As an illustration, we designed three homogeneous prototypes of smart street lights based on an evolved network. This application has shown that the proposed approach is a feasible way of modeling decentralized smart things with self-developed and cooperative capabilities.
Keywords-embodied cognition; cognitive system; cognitive embedded systems; evolved network; neural network; multiagent system; smart things; internet of things; cooperative systems; self-developed systems; emergent communication system
I. INTRODUCTION
A few years ago, Kephart and Chess (2003) [1] called the global goal of connecting trillions of computing devices to the Internet the nightmare of ubiquitous computing. The reason is that reaching this global goal requires many skilled Information Technology (IT) professionals to write millions of lines of code and to install, configure, tune, and maintain these devices. According to Kephart (2005) [2], within a few years, IT environments will be impossible to administer, even for the most skilled IT professionals.

Predicting the emergence of this problem, in 2001 IBM suggested the creation of autonomic computing [3]. IBM recognized that the only viable solution to this problem was to endow systems and the components that comprise them with the ability to manage themselves in accordance with high-level objectives specified by humans [2]. Therefore, IBM proposed systems with self-developed capabilities. The company emphasized the need to automate key IT tasks, such as coding, configuring, and maintaining systems, based on the progress observed in the automation of manual tasks in agriculture.

Other IT companies agreed with IBM and then produced their own proposals [4], [5]. However, the IT industry's interest in the development of self-managing devices is not yet very evident. As a result, not only has the goal of the Internet of Things (IoT) to connect billions of devices to the Internet not been reached, but we have also been experiencing the problems previously listed by Kephart and Chess (2003) [1]. In fact, there is a lack of software to support the development of a huge number of different IoT applications.

In this context, we have been investigating how to create applications based on the IoT with self-developed and cooperative capabilities. To this end, our approach consists in:

• Developing smart things:
– Things that are autonomous and able to execute complex behavior without the need for centralized control to manage their interaction.
– Things that are able to have behavior assigned at design-time and/or at run-time.
• Providing mechanisms to allow things to self-adapt, improve their own behavior and cooperate.

To reach these objectives, we previously developed a generic software basis for IoT, called the "Framework for Internet of Things" (FIoT) [6]. The framework approach was used to capture the common requirements among IoT applications and implement a reusable architecture [7]. We developed FIoT according to the following directions:

1) To create autonomous things and a distributed control, we modeled the framework based on a multiagent approach [8]. According to Cetnarowicz et al. (1996) [8], the active agent was invented as a basic element from which distributed and decentralized systems could be built. In our approach, we considered the use of embodied agents, which are typically used to model and control autonomous physical objects situated in actual environments [9].
2) To control the things, we chose a control architecture based on artificial neural networks [10]. A neural network is a well-known approach to dynamically provide responses and automatically create a mapping of input-output relations [10]. In addition, it is commonly used as an internal controller of embodied agents [11].
3) To make things self-adaptive, we proposed the use of the IBM control-loop [12] combined with various Machine Learning (ML) techniques, notably supervised learning and evolutionary algorithms [13].

As the development of smart things is part of a broader context, a set of related aspects is left out of the scope of this work.
Thus, the following topics are not directly addressed by this work: security, ontology, protocols and scalability.

The goal of this paper is to show how FIoT can be used to prototype physical smart things based on embodied cognition. Previously, we modeled and simulated smart traffic lights in [6], which were tested in a simulated car traffic application. However, we only provided simulated smart things and did not show how to transfer the evolved controller to physical smart things. Here, we created a simpler experiment, but we show all the steps of engineering smart things using evolved neural networks: modeling and evolving a neural network in a simulated environment, and transferring the evolved network to physical devices.

We present this experiment in Section IV, including the experimental setup, results, and evaluation. The remainder of this paper is organized as follows. Section II presents the related work. Section III describes the background for the proposed approach. The paper ends with concluding remarks in Section V.

II. RELATED WORK
There are some research results in the literature about smart things that use a kind of self-developed approach [14]–[17]. Baresi et al. [15], for example, provide a simulation of a smart greenhouse scenario. In their simulation, flowers are distributed in different rooms based on specific characteristics. If a flower is sick, it will be allocated to another room or its room's configuration will change. For this purpose, the authors use adaptive techniques to perform discovery, self-configuration, and communication among heterogeneous things. However, most of this research presents only simulated smart things and does not show how to transfer the approach from a simulated smart thing to a physical one. In [17], one of the few papers that designed a prototype for a smart thing, the authors state that new algorithms need to be integrated into their approach for the development of cooperative smart things. They developed smart street lights that are not able to interact with each other; thus, each smart street light makes decisions independently.

There are also some commercial applications based on smart things [18], [19], but they do not seem to provide things with a self-developed capability. For example, in Apple's HomeKit [18], the user needs to control and specify the behavior of each of the smart devices, instead of things having the ability to act by themselves and learn to adapt. Very recently, IBM proposed the use of the embodied cognition concept in its future products [20], [21] in order to create devices featuring dynamic learning and reasoning about how to act. Their proposed solution is to embed Watson - an IBM platform that uses machine learning techniques, especially neural networks - into smart things [22].

Besides the software industry starting to discuss the use of embodied cognition to model physical devices, this approach has been used in the robotics literature for many years [9], [11], [23], [24].
Therefore, inspired by known approaches to design swarms of cooperative and autonomous robots, our goal has been to adapt this approach and show that it is also feasible to model applications based on the Internet of Things that require smart things with self-developed and cooperative capabilities.

III. BACKGROUND
A. Embodied Agents
Embodied agents have a body and are physically situated, that is, they are physical agents interacting not only among themselves but also with the physical environment. They can communicate among themselves and also with human users. Robots, wireless devices and ubiquitous computing are examples of embodied agents [9].

Figure 1 depicts an embodied agent according to the description presented by the Laboratory of Artificial Life and Robotics [25]. They define embodied agents as agents that have a body and are controlled by artificial neural networks [10]. These agents use learning techniques, such as evolutionary algorithms, to adapt to execute a specific task.
Fig. 1. Embodied agent model.
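The model in Figure 1 amounts to a sense-think-act loop: sensor readings enter a small neural "brain" whose output drives an actuator. The C++ sketch below is purely illustrative; the struct, names and sizes are our own assumptions, not FIoT code:

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Hypothetical sketch of the embodied-agent model in Fig. 1:
// a body (sensors, actuators) driven by a small neural "brain".
struct EmbodiedAgent {
    std::array<double, 2> weights{0.5, -0.3};  // brain parameters (evolvable)

    // Brain: maps sensor readings to one actuator command in (0, 1).
    double think(const std::array<double, 2>& sensors) const {
        double sum = 0.0;
        for (std::size_t i = 0; i < sensors.size(); ++i)
            sum += sensors[i] * weights[i];
        return 1.0 / (1.0 + std::exp(-sum));  // sigmoid activation
    }

    // One sense-think-act cycle: on hardware, the result would be
    // written to an actuator instead of returned.
    double step(const std::array<double, 2>& sensorReadings) {
        return think(sensorReadings);
    }
};
```

The evolutionary algorithm mentioned above would act on `weights`, leaving the loop itself unchanged.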
B. Evolving Embodied Agents
The authors in [24] describe the process of evolving embodied agents using an evolutionary algorithm, such as a genetic algorithm. Accordingly, we provide a simplified flowchart of this process in Figure 2. The interested reader may consult more extensive papers [26], [27] or our dissertation [28] (chap. ii, sec. iii).

Normally, the use of an evolutionary algorithm in a multiagent system enables the emergence of features that were not defined at design-time, such as a communication system [29]. While in traditional agent-based approaches the desired behaviors are designed intuitively by the designer, in evolutionary approaches they are often the result of an adaptation process that usually involves a large number of interactions between the agents and the environment [30].
Fig. 2. Flowchart: Evolving embodied agents.
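The flowchart in Figure 2 corresponds to the standard evolutionary loop: evaluate a population of genomes, keep the fittest, and refill the population with mutated copies. A minimal sketch follows, with a toy stand-in fitness function; in the paper, a genome encodes neural-network weights and fitness comes from running the whole simulation:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Illustrative genome: a flat vector of neural-network weights.
using Genome = std::vector<double>;

// Toy fitness stand-in: rewards weights close to 1.0. In the real
// process, the genome is loaded into the agents' networks and the
// whole simulation run is scored instead.
double evaluate(const Genome& g) {
    double err = 0.0;
    for (double w : g) err += (w - 1.0) * (w - 1.0);
    return -err;  // higher is better
}

// One generation: rank by fitness, keep the best half (elitism),
// refill the rest with Gaussian-mutated copies of the survivors.
void evolveOneGeneration(std::vector<Genome>& pop, std::mt19937& rng) {
    std::sort(pop.begin(), pop.end(), [](const Genome& a, const Genome& b) {
        return evaluate(a) > evaluate(b);
    });
    std::normal_distribution<double> mut(0.0, 0.1);
    std::size_t half = pop.size() / 2;
    for (std::size_t i = half; i < pop.size(); ++i) {
        pop[i] = pop[i - half];                  // copy a surviving parent
        for (double& w : pop[i]) w += mut(rng);  // Gaussian mutation
    }
}
```

Repeating `evolveOneGeneration` until a stopping criterion is met yields the loop of Figure 2.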
The process of evolving an embodied agent's neural network can occur on-line or off-line [31]. On-line training uses physical devices during the evolutionary process. In this case, an untrained neural network is loaded into a physical agent, and the evolution of this neural network is driven by evaluating how the real device behaves in a specific scenario. Off-line training evolves the neural controller in a simulated agent [31] and then transfers the evolved neural network to a physical agent.

The major disadvantage of on-line evolution is the execution time, since evaluating physical devices may require much time. In addition, the training process based on evolution can produce bad configurations for the neural network, which could cause serious problems in particular scenarios. On the other hand, on-line training ensures that evolved controllers function well on real devices.
C. FIoT: A Framework for the Internet of Things
The Framework for the Internet of Things (FIoT) [6] is an agent-based software framework that we implemented to generate application controllers for smart things through learning algorithms. The framework does not cover the development of environment simulators, but only the development of smart things' controllers.

If a researcher develops an application using FIoT, the application will contain Java software already equipped with modules for detecting smart things in an environment, assigning a controller to a particular thing, creating software agents, collecting data from devices and supporting the communication structure among agents and devices.

Some features are variable and may be selected or developed according to the application type, as follows: (i) a control module, such as a neural network or finite state machine; (ii) an adaptive technique to train the controller; and (iii) an evaluation process to evaluate the behavior of smart things that are making decisions based on the controller. For example, Table I summarizes how the "Street Light Control" application adheres to the proposed framework, while extending the FIoT flexible points.
TABLE I
FIoT'S FLEXIBLE POINTS

FIoT Framework        | Street Light Control Application
Controller            | Three-layer neural network
Making Evaluation     | Collective fitness evaluation: test a pool of candidates to represent the network parameters; for each candidate, evaluate the collection of smart street lights, comparing fitness among candidates
Controller Adaptation | Evolutionary algorithm: generate a pool of candidates to represent the network parameters
IV. APPLICATION SCENARIO: SMART STREET LIGHTS
In order to evaluate our proposed approach to create self-developed and cooperative smart things, we developed a smart street light application. The overall goal of this application is to reduce energy consumption while maintaining maximum visual comfort in illuminated areas. For this purpose, we equipped each street light with ambient brightness and motion sensors, and an actuator to control its light intensity. In addition, we also provided the street lights with wireless communicators. Therefore, they are able to cooperate with each other in order to establish the most valuable routes for passers-by and to achieve the goal of minimizing energy consumption.

We used an evolutionary algorithm to support the design of this system's features automatically. By using a genetic algorithm, we expect that a policy for controlling the street lights, with a simple communication system among them, will emerge from this experiment. Therefore, no system feature, such as the effect of ambient brightness on light status changes, was specified at design-time.

As discussed, the training process can occur in a simulated or in a physical environment. However, many devices could be damaged if we were to use real equipment, since several configurations must be tested during the training process. Therefore, to execute the training algorithm, we decided to simulate how smart street lights behave in a fictitious neighborhood. After the training process, we transferred the evolved neural network to physical devices and observed how they behaved in a real scenario.
A. Simulating the environment
In this subsection, we describe the simulated neighborhood scenario. Figure 3 depicts the elements that are part of the application, namely street lights, people, nodes and edges. We modeled our scenario as a graph, in which a node represents a street light position and an edge represents the smallest distance between two street lights.

Fig. 3. Simulated Neighborhood.

The graph representing the street light network consists of 18 nodes and 34 edges. Each node represents a street light. In the graph, the yellow, gray, black and red triangles represent the street light status (ON/DIM/OFF/broken lamp). Each edge is two-way and links two nodes. In addition, each edge has a light intensity parameter that is the sum of the environmental light and the brightness from the street lights at its nodes. Our goal is to simulate different lighting in different neighborhood areas.

People walk along different paths starting at random departure points. Their role is to complete their routes, reaching a destination point. A person can only move if his current and next positions are not completely dark. In addition, we also assumed that people walk slowly if the place is partially devoid of light. For simulation purposes, we chose four nodes as departure points (yellow nodes) and two as destinations (red nodes). We started with ten people in this experiment. We also configured 20% of the street lights' lamps to go dark during the simulation.
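The neighborhood model described above can be sketched as a small graph structure. The types below are illustrative assumptions, not the simulator's actual API:

```cpp
#include <cmath>
#include <vector>

// Sketch of the simulated neighborhood as a graph (Fig. 3).
// Types, fields and brightness values are illustrative only.
enum class LampStatus { On, Dim, Off, Broken };

struct Node {             // a street-light position
    LampStatus status = LampStatus::Off;
};

struct Edge {             // smallest distance between two street lights
    int from, to;         // two-way link between nodes
    double ambientLight;  // environmental light on this stretch
};

struct Neighborhood {
    std::vector<Node> nodes;
    std::vector<Edge> edges;

    // Edge light intensity = environmental light + brightness
    // contributed by the lamps at both endpoints.
    double edgeLightIntensity(const Edge& e) const {
        auto lamp = [](LampStatus s) {
            switch (s) {
                case LampStatus::On:  return 1.0;
                case LampStatus::Dim: return 0.5;
                default:              return 0.0;  // Off or Broken
            }
        };
        return e.ambientLight + lamp(nodes[e.from].status)
                              + lamp(nodes[e.to].status);
    }
};
```

A person's movement rule ("cannot move in complete darkness") would then be a simple threshold test on `edgeLightIntensity`.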
B. Smart Street Light
Each street light in the simulation has a microcontroller that is used to detect the approach of a person, interact with the closest street light, and control its lamp. A street light can change the status of its light to ON, OFF or DIM. Smart street lights have to execute three tasks: data collection, decision-making and action enforcement. The first task consists of receiving data related to people flow, ambient brightness, data from the neighboring street lights and the current light status. To make decisions, smart street lights use a three-layer feedforward neural network with a feedback loop [10]. Feedback occurs because one or more of the neural network's outputs influence its next inputs.
C. Creating the Neural Network Controller
We used the FIoT (see Section III-C) to instantiate the three-layer neural network controller for our smart street lights (see Figure 4).
Fig. 4. The neural network controller for smart street lights: zeroed weights(FIoT’s Application View).
The input layer includes four units that encode the activation level of the sensors and the previous output value of listeningDecision. The output layer contains three units: (i) listeningDecision, which enables the smart lamp to receive signals from neighboring street lights in the next cycle; (ii) wirelessTransmitter, a signal value to be transmitted to neighboring street lights; and (iii) lightDecision, which switches the light's OFF/DIM/ON functions.

The middle layer of the neural network has two neurons connecting the input and output layers. These neurons provide an association between sensors and actuators, which represents the system policies that can change based on the neural network configuration.
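Putting the two paragraphs together, the controller is a 4-2-3 network whose listeningDecision output is fed back as an input on the next cycle. A sketch of this topology follows; the weight values here are placeholders (zeroed), not the evolved ones:

```cpp
#include <array>
#include <cmath>

// Sketch of the 4-2-3 controller described above: four inputs
// (including the previous listeningDecision, closing the feedback
// loop), two hidden units, three outputs. Weights are placeholders.
struct StreetLightNet {
    std::array<std::array<double, 4>, 2> hiddenW{};  // input  -> hidden
    std::array<std::array<double, 2>, 3> outputW{};  // hidden -> output
    double previousListeningDecision = 0.0;          // feedback state

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // inputs:  lightSensor, motionSensor, wirelessReceiver
    // outputs: {listeningDecision, wirelessTransmitter, lightDecision}
    std::array<double, 3> step(double light, double motion, double wireless) {
        std::array<double, 4> in{previousListeningDecision, light,
                                 motion, wireless};
        std::array<double, 2> hidden{};
        for (int h = 0; h < 2; ++h) {
            double sum = 0.0;
            for (int i = 0; i < 4; ++i) sum += in[i] * hiddenW[h][i];
            hidden[h] = sigmoid(sum);
        }
        std::array<double, 3> out{};
        for (int o = 0; o < 3; ++o)
            out[o] = sigmoid(hidden[0] * outputW[o][0]
                           + hidden[1] * outputW[o][1]);
        previousListeningDecision = out[0];  // fed back on the next cycle
        return out;
    }
};
```

The evolutionary algorithm of Section III-B only ever touches `hiddenW` and `outputW`; the topology stays fixed.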
D. Training the Neural Network
The weights in the neural network used by the smart street lights vary during the training process, as the system applies a genetic algorithm to find a better solution. Figure 5 depicts the simulation parameters that were used by the evolutionary algorithm. We selected these parameter values (i.e., number of generations and tests, population size, mutation rate, etc.) according to known experiments on evolutionary neural networks found in the literature [11], [32] (see Figure 2 - Section III-B).
Fig. 5. Configuration file to evolve the neural network via genetic algorithm using FIoT.

During the training process, the algorithm evaluates the weight candidates based on the energy consumption, the number of people that finished their routes by the end of the simulation, and the total time spent by people moving during their trips. Each set of weights is therefore evaluated after the simulation ends, based on the following equations:

pPeople = completedPeople / totalPeople (1)

pEnergy = totalEnergy / (timeSimulation × totalSmartLights) (2)

pTrip = totalTimeTrip / (timeSimulation × totalPeople) (3)

fitness = (1.0 × pPeople) − (wTrip × pTrip) − (wEnergy × pEnergy) (4)

in which pPeople is the percentage of people that completed their routes by the end of the simulation out of the total number of people in the simulation; pEnergy is the percentage of energy consumed by the street lights out of the maximum energy that could be consumed during the simulation (we also considered the use of the wireless transmitter when calculating energy consumption); pTrip is the percentage of the total duration of people's trips out of the maximum time their trips could take; fitness is the fitness of each candidate that encodes the neural network; and wTrip and wEnergy are penalty weights between 0 and 1.
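Equations (1)-(4) translate directly into code. The sketch below assumes penalty weights of 0.5 for both terms, which is our own placeholder choice, not a value reported in the experiment:

```cpp
// Fitness terms from equations (1)-(3) and the combined score (4).
// The penalty weights default to assumed placeholder values.
struct SimulationStats {
    double completedPeople, totalPeople;
    double totalEnergy, timeSimulation, totalSmartLights;
    double totalTimeTrip;
};

double fitness(const SimulationStats& s,
               double wTrip = 0.5, double wEnergy = 0.5) {
    // (1) fraction of people that completed their routes
    double pPeople = s.completedPeople / s.totalPeople;
    // (2) energy consumed out of the maximum consumable energy
    double pEnergy = s.totalEnergy / (s.timeSimulation * s.totalSmartLights);
    // (3) trip time out of the maximum possible total trip time
    double pTrip = s.totalTimeTrip / (s.timeSimulation * s.totalPeople);
    // (4) reward completion, penalize slow trips and energy use
    return pPeople - wTrip * pTrip - wEnergy * pEnergy;
}
```

A candidate where everyone completes their route instantly with no energy spent would score the maximum fitness of 1.0.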
Fig. 6. Simulation results - Most-Fit from each generation (best fitness, energy %, trip % and people % plotted against generations).
Normally, the performance of the most-fit individual is better than the others'. Figure 6 illustrates the best individual from each generation (i.e., the candidate with the highest fitness value). As shown, the best individuals across generations tend to minimize energy consumption and find an equilibrium between energy consumption and trip time. We selected the best individual from the last generation to investigate its solution, as shown in the subsection below (IV-D1).
1) Evaluation of the Best Candidate:
After the end of the evolutionary process, the algorithm selects the set of weights with the highest fitness (equation 4). Figure 7 depicts the evolved neural network configured with the best set of weights found during the evolution.
Fig. 7. The Evolved Neural Network to be used as a controller for real Street Lights (FIoT's Application View).
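The evolved network of Figure 7 amounts to a plain forward pass followed by thresholding. The desktop sketch below uses the weight values listed in Section IV-F for the hidden units and the lightDecision output; the two thresholds are placeholders we chose for illustration, since the paper does not report them:

```cpp
#include <cmath>

// Desktop re-statement of the evolved lightDecision path.
// Hidden and output weights are those listed in Section IV-F;
// threshold1/threshold2 are assumed placeholder values.
double fSigmoide(double x) { return 1.0 / (1.0 + std::exp(-x)); }

double hiddenUnit(const double w[4], const double in[4]) {
    double sum = 0.0;
    for (int i = 0; i < 4; ++i) sum += in[i] * w[i];
    return fSigmoide(sum);
}

double outputUnit(const double w[2], double h0, double h1) {
    return fSigmoide(h0 * w[0] + h1 * w[1]);
}

// One full light decision from the four inputs; returns 0.0/0.5/1.0.
double decideLight(double prevListening, double light,
                   double motion, double wireless,
                   double threshold1 = 0.3, double threshold2 = 0.7) {
    const double in[4]  = {prevListening, light, motion, wireless};
    const double wH0[4] = {1.2, -0.8, 1.6, -0.5};
    const double wH1[4] = {1.6, -0.8, 1.5, -0.3};
    double h0 = hiddenUnit(wH0, in);
    double h1 = hiddenUnit(wH1, in);
    const double wLight[2] = {1.7, -0.4};
    double out = outputUnit(wLight, h0, h1);
    if (out > threshold2) return 1.0;   // ON
    if (out > threshold1) return 0.5;   // DIM
    return 0.0;                         // OFF
}
```

With these weights, a detected person (motion input high) pushes the output above the upper threshold, turning the light fully on.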
One disadvantage of using neural networks combined with evolutionary algorithms is the difficulty of understanding and explaining the behaviors that were automatically assigned to the smart things. Therefore, we executed the simulated street lights using the evolved network in order to generate logs and extract the rules that are implicit in the patterns of the generated input-output mapping. To generate these logs, we used the runtime monitoring platform proposed by Nascimento et al. [33] for testing distributed systems. After analyzing the logs, we could recover the rules created by the evolved neural network and understand why street lights decided to communicate and switch the lights ON. The rules below exemplify some of them:

(I1 = 1.0 ∧ I2 = 0.0 ∧ I3 = 0.0 ∧ I4 = 0.0) ⇒ (O1 = 0.0 ∧ O2 = 1.0 ∧ O3 = 0.0) (5)

(I1 = 1.0 ∧ I2 = 0.0 ∧ I3 = 1.0 ∧ I4 = 0.0) ⇒ (O1 = 0.0 ∧ O2 = 1.0 ∧ O3 = 0.5) (6)

(I1 = 0.0 ∧ I2 = 0.0 ∧ I3 = 0.0 ∧ I4 = 0.0) ⇒ (O1 = 0.5 ∧ O2 = 0.0 ∧ O3 = 0.0) (7)

(I1 = 1.0 ∧ I2 = 0.0 ∧ I3 = 0.0 ∧ I4 = 0.5) ⇒ (O1 = 0.0 ∧ O2 = 1.0 ∧ O3 = 0.5) (8)

in which the variables are:

I1 ≡ previousListeningDecision, I2 ≡ lightSensor, I3 ≡ motionSensor, I4 ≡ wirelessReceiver, O1 ≡ wirelessTransmitter, O2 ≡ listeningDecision, O3 ≡ lightDecision (9)

Based on the generated rules and the system execution, we could observe that only the street lights with broken lamps emit "0.5" through their wireless transmitters (rule 7). In addition, we also observed that a street light that is not broken switches its lamp on if it detects a person's approach (rule 6) or receives "0.5" from its wireless receiver (rule 8). Discussion:
Imagine if we had to hand-code into the physical smart lights all of the rules that this evolved neural network can operate. By using the evolved neural network, we saved lines of code and programming time. Code size is an important parameter in this kind of project, since such projects are normally composed of devices with many resource constraints.

We provided the street lights with the possibility of disabling the feature of receiving signals from neighboring street lights. In an initial instance, we did not consider broken lamps. In that case, since the act of communicating increases energy consumption, the street lights decided to disable this feature. However, when we added broken lamps to the scenario, during the evolutionary process the solution of enabling a communication system among street lights provided better results. Therefore, as shown in the rules generated by the evolved neural network, a smart street light takes the lightSensor, motionSensor and wirelessReceiver inputs into account to make decisions. Thus, the best solution does not ignore any of these parameters.

One advantage of engineering physical devices based on embodied cognition is that the solution found is normally sufficiently generic. To estimate how generic the approach is, we simulated another neighborhood with a different number of street lights and a different map configuration, and then applied the best solution to this new scenario. The results showed that the evolved street lights' behavior does not vary with the number of street lights, and the lighting application continues functioning well even if we disable some street lights in the scenario.
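The log-based rule extraction described above can be approximated off-line by sweeping a controller over the discrete input levels observed in the logs and recording each input-output pair. The sketch below uses a stub controller that loosely mimics the motion/receiver behavior of rules (6) and (8); the real procedure queries the evolved network instead:

```cpp
#include <array>
#include <utility>
#include <vector>

// Sketch of off-line rule extraction: enumerate discrete input
// combinations, query a controller, and record each pair as a rule.
// The stub controller below is illustrative, not the evolved network.
using Inputs  = std::array<double, 4>;  // prevListening, light, motion, receiver
using Outputs = std::array<double, 3>;  // transmitter, listening, lightDecision

Outputs stubController(const Inputs& in) {
    double motion = in[2], receiver = in[3];
    // Mimics rules (6)/(8): light up when motion or a signal is present.
    double lightOn = (motion > 0.0 || receiver > 0.0) ? 0.5 : 0.0;
    return {0.0, 1.0, lightOn};
}

std::vector<std::pair<Inputs, Outputs>> extractRules(
        Outputs (*controller)(const Inputs&),
        const std::vector<double>& levels) {
    std::vector<std::pair<Inputs, Outputs>> rules;
    for (double a : levels) for (double b : levels)
    for (double c : levels) for (double d : levels) {
        Inputs in{a, b, c, d};
        rules.push_back({in, controller(in)});  // one input -> output rule
    }
    return rules;
}
```

With two levels per input this enumerates 2^4 = 16 candidate rules, which can then be pruned to the combinations actually seen in the logs.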
E. Prototyping the Smart Street Light Device
As depicted in Figure 8, the prototype of the smart street light is composed of an Arduino [34] and the following sensors and actuators: (i) an HC-SR501 (a device that detects moving objects, particularly people; its detection distance is rather short, with a maximum of 7 meters); (ii) an LM393 light sensor (a device to detect the ambient brightness and light intensity); (iii) an nRF24L01 (a wireless module that allows one device to communicate with another); and (iv) LEDs (the representation of a lamp).

We put two LEDs in this circuit. Our goal is to simulate light intensity. Therefore, if a smart street light decides to set its light intensity to the maximum, both LEDs will be on. If the light intensity is medium, one LED will be on and the other LED will be off.

Fig. 8. Prototyping the smart street light.
F. Transferring the evolved neural network to physical devices
After the neural network has been evolved, we codified it into the Arduino. We show below the C++ code that operates as a neural network inside the Arduino:

double fSigmoide(double x) {
  double output = 1 / (1 + exp(-x));
  return output;
}

double calculateHiddenUnitOutput(double w[4]) {
  double H = previousListeningDecision * w[0] + lightSensor * w[1]
           + motionSensor * w[2] + wirelessReceiver * w[3];
  double HOutput = fSigmoide(H);
  return HOutput;
}

double calculateOutputDecisions(double w[2], double h0, double h1) {
  double outputSum = h0 * w[0] + h1 * w[1];
  double output = fSigmoide(outputSum);
  return output;
}

As we described in Section IV-B, each smart street light has to execute three tasks. Accordingly, we present below the main parts of the C++ code that the Arduino executes to attend to the tasks of collecting data, making decisions and enforcing actions:

• Collecting data:

void getInputs() {
  lightSensor = readLightSensor();
  motionSensor = readMotionSensor();
  previousListeningDecision = listeningDecision;
  if (listeningDecision == 1) {
    receivedSignal = receiveWirelessData();
  } else
    receivedSignal = 0;
}

• Making decisions (calculating output decisions based on the evolved neural network functions above):

double weightsH0[4] = {1.2, -0.8, 1.6, -0.5};
double weightsH1[4] = {1.6, -0.8, 1.5, -0.3};
double H0 = calculateHiddenUnitOutput(weightsH0);
double H1 = calculateHiddenUnitOutput(weightsH1);
...
double weightsTransmitterOutput[2] = {-0.6, -0.2};
double transmitterOutput = calculateOutputDecisions(weightsTransmitterOutput, H0, H1);
...
double weightslisteningDecision[2] = {-0.9, -0.7};
double listeningDecisionOutput = calculateOutputDecisions(weightslisteningDecision, H0, H1);
...
double weightslightDecision[2] = {1.7, -0.4};
double lightDecisionOutput = calculateOutputDecisions(weightslightDecision, H0, H1);
if (lightDecisionOutput > threshold2) {
  lightDecision = 1.0;
} else {
  if (lightDecisionOutput > threshold1) {
    lightDecision = 0.5;
  } else
    lightDecision =
0.0;
}

• Enforcing actions:

void setOutputs() {
  ...
  sendWirelessData(transmitterSignal);
  ...
  writeLed(lightDecision);
  ...
}

void writeLed(double value) {
  if (value == 1) {
    digitalWrite(ledPin, HIGH);
    digitalWrite(led2Pin, HIGH);
  } else if (value == 0.5) {
    digitalWrite(ledPin, HIGH);
    digitalWrite(led2Pin, LOW);
  } else {
    digitalWrite(ledPin, LOW);
    digitalWrite(led2Pin, LOW);
  }
}

G. Testing Physical Smart Street Lights in a Real Scenario
In a controlled real scenario, we put three prototypes of the smart street lights, running the evolved neural network, into operation. We distributed them in the scenario as shown in Figure 9. To compare the behavior of the physical smart street lights to the simulated ones, we also collected logs from the Arduinos. As we could observe, the behavior of the physical smart street lights was similar to the simulated ones: a street light switches its lamp ON if it receives a signal different from 0.0 or detects the approach of a person. However, we cannot assure that a street light is receiving the signal from the
closest street light. In addition, unlike the simulator, the real scenario is a distributed environment composed of asynchronous components with different clocks. However, since we are dealing with a controlled environment with few resources, we could not observe significant differences.

Fig. 9. Real scenario where we tested a network of three smart street light prototypes (showing the three prototypes, a person, a prototype with a broken lamp, and the Arduino with the evolved neural network).

V. CONCLUSION AND FUTURE WORK
We believe these preliminary results are promising. We proposed the use of the embodied cognition concept to model smart things. To illustrate, we modeled and implemented smart street lights. Each smart street light had sensors and actuators to interact with the environment, and used an artificial neural network as an internal controller. In addition, we used a genetic algorithm to allow the smart street lights to self-develop their own behaviors through non-supervised training. As a result, a group of initially non-communicating smart street lights developed a simple communication system. By communicating, the group of street lights appears to cooperate in order to achieve collective targets. For example, to maintain maximum visual comfort in illuminated areas, the street lights used communication to reduce the impact of broken lamps.

After evolving the neural controller, we designed three homogeneous prototypes of the smart street light and transferred the evolved controller into their microcontrollers. We put them in a real scenario and compared them to the simulated street lights. Previously, in [6], we described a more complex application, but we only provided a simulated scenario. In this work, we showed that it is possible to automatically create and train a smart thing's controller using FIoT and to use it to control physical smart things.

As ongoing work, we need to improve the real scenario, testing the use of the evolved network to control real street lights in a real neighborhood. In addition, we need to develop more realistic scenarios, taking several other environmental parameters into account. Furthermore, since we have shown that the use of an evolved neural network saves lines of code, we also need to run this experiment using microcontrollers with fewer resources, such as battery and memory. Another challenge in creating more realistic scenarios is to model heterogeneous experiments, training different smart things in the same scenario.
For example, a smart waste collection application will require two types of smart things: smart trash cans and smart waste collection vehicles. These different types of smart things will need to cooperate with each other in order to achieve the goal of minimizing waste transportation costs and promoting environmental sustainability.

Our next goal is to allow the system to initiate a new learning process after the evolved network has already been transferred to the physical smart things. We will then change the neural network's parameters at run-time and allow the real smart things to adapt their behavior in the face of changing environmental demands. For this purpose, we need a simulator for wireless devices that allows our training system to communicate with and program microcontrollers at runtime, such as Terra [35], a system for programming wireless sensor network applications. Our system will then evaluate the physical smart things' behavior at runtime, execute adaptation in a more realistic simulated environment via a learning algorithm, and automatically transfer the trained controller back to the physical smart things. The system will also need to provide some sort of "safe self-adaptation" or normative adaptation [36] to the developer, in which the device itself can avoid bad configurations or fall back to a previous configuration at runtime.

ACKNOWLEDGMENT
This work has been supported by the Laboratory of Software Engineering (LES) at PUC-Rio. Our thanks to CNPq, CAPES, FAPERJ and PUC-Rio for their support through scholarships and fellowships.

REFERENCES

[1] J. O. Kephart and D. M. Chess, "The vision of autonomic computing," Computer, vol. 36, no. 1, pp. 41–50, 2003.
[2] J. O. Kephart, "Research challenges of autonomic computing," in Proceedings of the 27th International Conference on Software Engineering (ICSE 2005). IEEE, 2005, pp. 15–22.
[3] P. Horn, "Autonomic computing: IBM's perspective on the state of information technology," IBM, Tech. Rep., 2001.
[4] HP, "Adaptive enterprise: Infrastructure and management solutions for the adaptive enterprise," Hewlett-Packard Development Company, Tech. Rep., 2003.
[5] Microsoft, "Microsoft dynamic systems initiative overview," Microsoft, Tech. Rep., 2004.
[6] N. M. do Nascimento and C. J. P. de Lucena, "FIoT: An agent-based framework for self-adaptive and self-organizing applications based on the internet of things," Information Sciences, vol. 378, pp. 161–176, 2017.
[7] M. E. Markiewicz and C. J. P. de Lucena, "Object oriented framework development," Crossroads, vol. 7, no. 4, pp. 3–9, Jul. 2001.
[8] K. Cetnarowicz, K. Kisiel-Dorohinicki, and E. Nawarecki, "The application of evolution process in multi-agent world to the prediction system," in Second International Conference on Multiagent Systems, 1996, pp. 26–32.
[9] L. Steels, "Ecagents: Embodied and communicating agents," SONY, Tech. Rep., 2004.
[10] S. Haykin, Neural Networks: A Comprehensive Foundation. Macmillan, 1994.
[11] D. Marocco and S. Nolfi, "Emergence of communication in embodied agents evolved for the ability to solve a collective navigation problem," Connection Science, 2007.
[12] B. Jacob, R. Lanyon-Hogg, D. K. Nadgir, and A. F. Yassin, "A practical guide to the IBM autonomic computing toolkit," 2004.
[13] D. Floreano and C. Mattiussi, Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies. Cambridge: MIT Press, 2008.
[14] A. Katasonov, O. Kaykova, O. Khriyenko, S. Nikitin, and V. Y. Terziyan, "Smart semantic middleware for the internet of things," ICINCO-ICSO, vol. 8, pp. 169–178, 2008.
[15] L. Baresi, S. Guinea, and A. Shahzada, "Short paper: Harmonizing heterogeneous components in sesame," in
Internet of Things (WF-IoT),2014 IEEE World Forum on . IEEE, 2014, pp. 197–198.[16] L. Zhu, H. Cai, and L. Jiang, “Minson: A business process self-adaptive framework for smart office based on multi-agent,” in e-BusinessEngineering (ICEBE), 2014 IEEE 11th International Conference on .IEEE, 2014, pp. 31–37.[17] J. F. De Paz, J. Bajo, S. Rodr´ıguez, G. Villarrubia, and J. M. Corchado,“Intelligent system for lighting control in smart cities,”
InformationSciences
Proceedings of the 2017 CHI Conference Extended Abstracts on HumanFactors in Computing Systems et al. , “Symbiotic cognitive computing,”
AI Magazine , vol. 37, no. 3,pp. 81–93, 2016.[23] A. Loula, R. Gudwin, C. N. El-Hani, and J. Queiroz, “Emergenceof self-organized symbol-based communication in artificial creatures,”
Cognitive Systems Research , vol. 11, no. 2, pp. 131–147, 2010.[24] S. Nolfi, J. Bongard, P. Husbands, and D. Floreano,
EvolutionaryRobotics . Cham: Springer International Publishing, 2016, ch. 76, pp.2035–2068.[25] S. Nolfi, “Laboratory of autonomous robotics and artificial life,”LARAL, http://laral.istc.cnr.it/, Tech. Rep., March 1995.[26] G. F. Miller, P. M. Todd, and S. U. Hegde, “Designing neural networksusing genetic algorithms,” in
Proceedings of the third internationalconference on Genetic algorithms . Morgan Kaufmann Publishers Inc.,1989, pp. 379–384.[27] X. Yao, “Evolving artificial neural networks,”
Proceedings of the IEEE ,vol. 87, no. 9, pp. 1423–1447, 1999.[28] N. M. Nascimento, “FIoT: An agent-based framework for self-adaptiveand self-organizing internet of things applications,” Master’s thesis,PUC-Rio, Rio de Janeiro, Brazil, August 2015.[29] E. S. de Oliveira and A. Loula, “Symbolic associations in neural networkactivations: Representations in the emergence of communication,” in
Neural Networks (IJCNN), 2015 International Joint Conference on .IEEE, 2015, pp. 1–8.[30] S. Nolfi and D. Floreano,
Evolutionary Robotics: The Biol-ogy,Intelligence,and Technology of Self-Organizing Machines . Cam-bridge, MA, USA: MIT Press, 2000.[31] A. Nelson, G. Barlow, and L. Doitsidis, “Fitness functions in evolution-ary robotics: A survey and analysis,”
Robotics and Autonomous Systems ,2007.[32] M. DA VIDE and S. Nolfi, “Emergence of communication in teamsof embodied and situated agents,” in
The Evolution of Language:Proceedings of the 6th International Conference (EVOLANG6), Rome,Italy, 12-15 April 2006 . World Scientific, 2006, p. 198.[33] N. Nascimento, C. J. Viana, A. v. Staa, and C. Lucena, “A publish-subscribe based architecture for testing multiagent systems,” in
ACMTransactions on Sensor Networks (TOSN) , vol. 11, no. 4, p. 59, 2015.[36] M. Viana, P. Alencar, and C. Lucena, “A metamodel approach todeveloping adaptive normative agents,” in