A Testing Scheme for Self-Adaptive Software Systems with Architectural Runtime Models
Joachim Hänsel, Thomas Vogel and Holger Giese
Hasso Plattner Institute for Software Systems Engineering at the University of Potsdam, Potsdam, Germany. E-Mail: [Joachim.Haensel|Thomas.Vogel|Holger.Giese]@hpi.de
Abstract—Self-adaptive software systems (SASS) are equipped with feedback loops to adapt autonomously to changes of the software or environment. In established fields, such as embedded software, sophisticated approaches have been developed to systematically study feedback loops early during the development. In order to cover the particularities of feedback, techniques like one-way and in-the-loop simulation and testing have been included. However, a related approach to systematically test SASS is currently lacking. In this paper we therefore propose a systematic testing scheme for SASS that allows engineers to test the feedback loops early in the development by exploiting architectural runtime models. These models, which are available early in the development, are commonly used by the activities of a feedback loop at runtime and they provide a suitable high-level abstraction to describe test inputs as well as expected test results. We further outline our ideas with some initial evaluation results by means of a small case study.
I. Introduction
Traditionally, software development follows an open-loop structure that requires human supervision when software systems are exposed to changing environments [1]. To reduce human supervision, software systems are equipped with feedback loops to adapt autonomously to changing environments. Such closed-loop systems are designated as self-adaptive software systems (SASS) [2] and they are often split in two parts: an adaptation engine realizing the feedback loops and controlling the adaptable software [1]. As pointed out by Calinescu [3], such systems will become important for safety-critical applications, where they have to fulfill high-quality standards.

Testing is an established technique for ensuring quality in traditional systems, even safety-critical ones [4], and processes for testing such systems exist. For instance, embedded software with its feedback loops is often systematically tested in three stages [5, pp. 193–208]: i) a simulation stage that tests the models (specification) of the software under development in a simulated or real-life environment, ii) a prototyping stage that tests the real software in a simulated environment, and finally, iii) a pre-production stage that tests the real software in the real environment. With each stage the software is further refined toward the final product while testing continuously provides assurances for the software, particularly early in the development.

However, a similar systematic testing process providing continuous and early assurances does not exist for SASS. In contrast, models for substituting the environment or parts of a SASS usually cannot be obtained easily and therefore, a generic simulation environment for SASS does not exist. Consequently, testing SASS typically requires that the implementations of the feedback loops and the adaptable software with its sensors and effectors are available. This impedes testing early in the development and makes it costly to remove faults in the feedback loops discovered late in the development.

Furthermore, approaches used for traditional systems are not as easily applicable to SASS as the interface between the adaptation engine and the adaptable software is often quite different from that of embedded software. SASS are usually not restricted to observing and adjusting parameters but additionally monitor and adapt the architecture of the software [6]–[8], thus requiring support for structural adaptations [9].

Some approaches address the testing of SASS but only for later development stages [10]–[14] when the systems have already been deployed. Others do promote testing in earlier stages but they still assume an executable and complete SASS to run the tests against [15]–[17]. Testing of only parts of the feedback loop is not supported. In contrast, we consider testing parts of a SASS as a precondition to early validation since a system with the completely implemented adaptable software and feedback loop is only available in the latest development stages.

Therefore, we propose a systematic testing scheme for SASS that allows engineers to test the feedback loops (adaptation behavior) early in the development by exploiting runtime models. Such models represent the adaptable software and environment and they are typically used at runtime to drive the adaptation [18]. Our approach leverages early testing of SASS by using architectural runtime models that are available early in the development and are commonly used by the activities of a feedback loop. Therefore, feedback loop activities such as monitor, analyze, plan, and execute (cf. MAPE-K [19]) can be individually tested while the whole feedback loop and the adaptable software do not have to be implemented yet. Instead, the non-implemented parts are simulated based on the runtime models. Consequently, the feedback loop can be modularly tested while the different parts of the loop can be incrementally refined and implemented until they replace the simulated parts. Moreover, we expect reduced costs of testing since we do not require final or experimental implementations of certain feedback loop parts to test other parts.

The rest of the paper is structured as follows. We describe preliminaries in Section II and the benefits of runtime models for testing in Section III. Then, we discuss our approach by means of one-way (Section IV), in-the-loop (Section V), and online (Section VI) testing. Finally, we sketch an initial evaluation in Section VII, contrast our approach with related work in Section VIII, and conclude the paper in Section IX.

II. Preliminaries
In this section we discuss preliminaries of the presented testing scheme: MAPE-K feedback loops and architectural runtime models (RTMs).
A. MAPE-K Feedback Loops
The development of SASS typically follows the external approach [1] that separates adaptation from domain concerns by splitting up the software in two parts: an Adaptation Engine for the adaptation concerns and an Adaptable Software for the domain concerns, while the former senses and effects and thus controls the latter. This constitutes a feedback loop that realizes the self-adaptation (see Figure 1). The engine also senses the Environment with which the adaptable software interacts.
Figure 1. MAPE-K Feedback Loop with a Runtime Model (RTM).
The resulting feedback loop between the engine and the software can be refined according to the MAPE-K reference model [19]. This model considers the activities of Monitoring and Analyzing the software and environment and, if needed, of Planning and Executing adaptations to the software. All activities share a Knowledge base, illustrated by a runtime model (RTM) in Figure 1 and discussed in the following.
B. Architectural Runtime Models
The external approach as previously discussed requires that the adaptation engine has a representation of the adaptable software and environment to perform self-adaptation. This representation is often realized by a causally connected Runtime Model (RTM) [18]. A causal connection means that changes of the software or environment are reflected in the model and changes of the model are reflected in the software (but not in the environment, which is a non-controllable entity).

Considering Figure 1, an RTM can be used as a knowledge base on which the MAPE activities are operating. The monitor step observes the software and environment and updates the RTM accordingly. The analyze step then reasons on the RTM to identify any need for adaptation. Such a need is addressed by the plan step to prescribe an adaptation in the RTM, which is eventually enacted on the software by the execute step.

Using RTMs in self-adaptive software provides the benefits of creating appropriate abstractions of runtime phenomena that are manageable by the feedback loops and of applying automated model-driven engineering (MDE) techniques [18]. The software architecture has been identified as such an appropriate abstraction level for representing the adaptable software and environment and for supporting structural adaptation [6]–[9], [18]. Hence, architectural RTMs of the adaptable software are used by a feedback loop to reflect on the state of the software and environment. Such state-aware models can be enriched by a feedback loop to cover, for instance, the history or time series of states and executed adaptations, which results in history-/time-aware models.

In our research on self-adaptive software such as [20], we evaluate our work by using mRUBiS, an internet marketplace on which users sell or auction products, as the adaptable software. A single shop on the marketplace consists of 18 components and we may scale up the number of shops. For a self-healing scenario, we created architectural runtime models of mRUBiS and defined different types of failures based on the models. These failures have to be handled by the adaptation engine. Examples of such failures are exceptions emitted by components, unwanted life-cycle changes of components, the complete removal of components because of crashes, and repeated occurrences of these failures. Based on that, we experiment with different adaptation mechanisms and can also exploit the models for testing as discussed in the following.

III. Exploiting Runtime Models for Testing
In the following, we assume a SASS that follows the MAPE-K cycle with runtime models (RTMs) as schematically depicted in Figure 1. If the RTMs are just self-aware and reflect the current state of the adaptable software and environment, we can make the following two observations:

(1) The behavior of the system can be described by a sequence of steps $(\to_{AS} \text{ or } \to_{ENV})^* \to_M \to_A \to_P \to_E (\to_{AS} \text{ or } \to_{ENV})^*; \ldots$ where $\to_{AS}$ denotes a step of the adaptable software, $\to_{ENV}$ a step of the environment, $\to_M$ the complete monitoring step, $\to_A$ the complete analysis step, $\to_P$ the complete planning step, and $\to_E$ the complete execute step.

(2) The interface between those steps can be described by the different states $S_i$ of the RTM if we do not consider the input of the monitoring and the output of the execute step: $(\to_{AS} \text{ or } \to_{ENV})^* \to_M S^M_i \to_A S^A_i \to_P S^P_i \to_E (\to_{AS} \text{ or } \to_{ENV})^*; \to_M S^M_{i+1} \ldots$ where $S^M_i$ denotes the RTM state after the $i$-th monitoring, $S^A_i$ the RTM state after the $i$-th analysis, and $S^P_i$ the RTM state after the $i$-th planning.

Figure 2. Example Trace for a Self-Healing Scenario (states $S^M_{i-1}$, $S^M_i$, $S^A_i$, $S^P_i$).

Consider the self-healing example in Figure 2. An intact architecture is monitored and results in RTM $S^M_{i-1}$. For now, analysis and planning are not required to take action since the architecture is not broken. Without an adaptation, the execute step will do nothing either. We can directly proceed with the next steps in the environment or adaptable software. Due to either an environmental influence or some failure in the adaptable software ($\to_{ENV}$ or $\to_{AS}$), a component of the architecture is removed. In the next step, this is monitored as RTM $S^M_i$. The result of the analysis step $\to_A$ is the annotated RTM $S^A_i$ that marks the missing component. The planning step $\to_P$ constructs a repaired RTM $S^P_i$ which will be applied to the adaptable software in the next step by $\to_E$.

These two observations indicate that the different states of the RTM are the key element to describe the input/output behavior of the MAPE activities concerning their communication with the adaptable software. Moreover, the RTMs also facilitate considering the required behavior of the adaptation engine at a much higher level of abstraction than the events observed by the monitoring step and the effects triggered by the execute step. Consequently, we suggest exploiting the RTMs to systematically test the adaptation engine and its parts in the form of one-way testing of individual steps and fragments, in-the-loop testing of the analysis and planning steps, and online testing of the analysis and planning steps. We further study how we can validate the model that is required for the in-the-loop testing.
IV. One-Way Testing

We define One-Way Testing as follows: an input RTM and an expected oracle RTM are provided. One or more steps are tested in a single execution of a partial feedback loop. The tested parts receive the input RTM and are supposed to produce an output RTM. The output RTM is compared against the oracle. In this kind of testing, each of the steps $\to_{AS}$, $\to_{ENV}$, $\to_M$, $\to_A$, $\to_P$, or $\to_E$ will happen at most once.

A. One-Way Testing Single MAPE Activities
The most basic approach is to test each of the steps/activities that process the RTM on their own. Obviously, these tests need to be run before testing combinations of feedback-loop steps to better locate faults and to tell single-step errors from errors that arise due to problems in the interaction of steps.
1) One-Way Testing the Analysis:
If we want to test the analysis step, we simply provide an input RTM $S^M$, run step $\to_A$, and compare the resulting RTM $S^A$ with an oracle RTM $S^A_o$. Applied to the example in Figure 2, we choose $S^M_i$ with the removed component as the input RTM. We then define an oracle RTM $S^A_o$ that contains an annotation where the missing component has been marked. Applying $\to_A$ to $S^M_i$ gives us $S^A_i$, which is compared to $S^A_o$. If both RTMs are the same, that is, both especially contain the same "missing component" annotation, the test passes; otherwise it fails. (Note that we ignore here the case that the adaptable software and environment change while the feedback loop is running. While this case cannot be excluded in general, we may neglect it at the abstraction level supported by architectural runtime models since, oftentimes, the architecture does not change very frequently, for instance, due to failures.)
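To make this pattern concrete, the following unit-test sketch phrases such a one-way test in Java. It is our illustration, not the paper's implementation: the `Rtm` stand-in, the `Analyzer` interface, and the component names are all hypothetical.

```java
import static org.junit.Assert.assertEquals;

import java.util.Set;
import java.util.TreeSet;
import org.junit.Test;

/** Minimal stand-in for an architectural runtime model (hypothetical). */
final class Rtm {
    final Set<String> components = new TreeSet<>();
    final Set<String> annotations = new TreeSet<>();

    @Override public boolean equals(Object o) {
        return o instanceof Rtm
                && ((Rtm) o).components.equals(components)
                && ((Rtm) o).annotations.equals(annotations);
    }
    @Override public int hashCode() { return 31 * components.hashCode() + annotations.hashCode(); }
}

public class OneWayAnalysisTest {

    /** Hypothetical interface of the analyze step: monitored RTM in, annotated RTM out. */
    interface Analyzer { Rtm analyze(Rtm monitored); }

    @Test
    public void analysisMarksMissingComponent() {
        // Input RTM S^M_i: the (made-up) ItemFilter component has been removed.
        Rtm input = new Rtm();
        input.components.add("UserService");

        // Oracle RTM S^A_o: the same architecture plus the "missing component" annotation.
        Rtm oracle = new Rtm();
        oracle.components.add("UserService");
        oracle.annotations.add("missing:ItemFilter");

        // Placeholder for the analyze step under development: it should detect
        // the removed component and annotate the model accordingly.
        Analyzer analyzer = monitored -> {
            Rtm out = new Rtm();
            out.components.addAll(monitored.components);
            if (!monitored.components.contains("ItemFilter")) {
                out.annotations.add("missing:ItemFilter");
            }
            return out;
        };

        Rtm result = analyzer.analyze(input);  // run ->A exactly once
        assertEquals(oracle, result);          // the test passes iff S^A_i equals S^A_o
    }
}
```

The one-way test of the planning step described next follows the same pattern, with the annotated RTM $S^A_i$ as input and the repaired RTM $S^P_o$ as oracle.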
2) One-Way Testing the Planning:
Similar to the analysis step, we provide an input RTM $S^A$, run step $\to_P$, and check whether the output of $\to_P$ is equal to an oracle RTM $S^P_o$ that was defined before. In the example of Figure 2, we start out with the annotated RTM $S^A_i$. The oracle $S^P_o$ would be defined as the intact architecture from the beginning ($S^M_{i-1}$) and we would expect $\to_P$ to return an RTM equal to $S^P_o$, that is, the plan step has re-created the removed component in the RTM.

B. One-Way Testing MAPE Fragments
We now discuss one-way testing of fragments by jointly testing the analyze and plan steps or the monitor and execute steps.
1) One-Way Testing the Analysis and Planning:
As a precondition to testing the analysis and planning separately, it is necessary to have knowledge about the way the analysis works and what kind of models to expect. Obviously it would be hard to create a valid oracle model $S^A_o$ or input model $S^M$ if this knowledge is not available. In a simple scenario like the self-healing one presented before, this should not pose a problem. But there are also more complex analysis algorithms which will not result in models that can be tested as easily. Furthermore, some errors might only appear if the analysis and planning are tested together.

Consequently, we propose to test the analyze and plan steps as the next unit. Again we can benefit from the same pattern of testing, that is, by providing an input model $S^M$ and an oracle model in state $S^P_o$. In terms of the example trace (Figure 2), this means to start with the broken monitored input model $S^M_i$, construct an expected model $S^P_o$ where the removed component is redeployed, and check whether the resulting model of the application $S^M_i \to_A \to_P S^P_i$ is equal to $S^P_o$.
2) One-Way Testing the Execute and Monitor:
Testing the monitor and execute steps separately via the runtime models is not feasible as the effect of the execute step cannot be directly observed. If we follow the same pattern as with the analysis and planning, we would end up with no result model for the execute step and no input model for the monitor step. The effect of the execute step cannot be directly observed since it is part of the concrete adaptable software. Likewise, the monitor step's input is directly obtained from the software. Instead of separate testing, we propose to test the monitor and execute steps together. In this setup we need a working adaptable software, and the tested execute and monitor steps are effecting and sensing the software. The test input is provided as a model $S^P$ to the execute step $\to_E$, which will effect the adaptable software. The adaptable software is monitored ($\to_M$) and a new runtime model $S^M$ is obtained.

Equality and inequality of these two models can be interpreted in different ways: (1) equal models may indicate that the monitor and execute steps work correctly, (2) equal models may also mean that a failure in the execute step is masked by a failure in the monitor step (or the other way round), or (3) that the adaptable software or the environment mask a fault of the execute and/or monitor steps. If $S^P$ and $S^M$ are not equal, then either (4) the execute step, (5) the monitor step, or (6) both do not work properly, or (7) the environment introduced an error or the adaptable software showed erroneous behavior.

Cases (3) and (7) can be ruled out by applying the test several times. It is unlikely that the environment will introduce the same error for all test runs and, if the adaptable software was tested before, it is equally unlikely that it will constantly show erroneous behavior. In cases (4), (5) and (6) we can assume a broken monitor and/or execute step. Case (1) should be more likely than (2) since it is not impossible but hard to have two faults that mask each other. Case (2) should become less likely the more tests with different $S^P$ and $S^M$ are done. In the end, equal models are a good indicator of working execute and monitor steps and non-equal models show that at least one of them is broken.

With this test setup, only parts of the monitoring capabilities can be tested since its purpose is to detect not only correct but also incorrect states of the adaptable software. On the other hand, the execute step is not intended to have an effect on the software that causes an incorrect state. Therefore, we need to be able to impose an "incorrect" RTM on the adaptable software (such as $S^M_i$ in Figure 2), so that we can test whether the monitor step is able to properly observe this incorrect state and create the according RTM. A special test adapter is needed, so that first a correct RTM can be imposed by the execute step and then the incorrect parts are added by the test adapter. The incorrect input RTM $S_{err}$ needs to be split into $S_{valid}$, which is provided to $\to_E$, and $S_{invalid}$, which is given to the test adapter. The oracle $S^M_o$ for this test looks like $S_{err}$, and the monitor should observe an incorrect RTM, as illustrated by the sketch below.
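A minimal sketch of this setup, reusing the `Rtm` stand-in from the earlier sketch: the interfaces and the prior split of $S_{err}$ into $S_{valid}$ and $S_{invalid}$ are our assumptions, not the paper's implementation.

```java
// Sketch of the combined execute/monitor test with a test adapter.
final class ExecuteMonitorTest {

    interface Executor    { void execute(Rtm prescribed); }   // ->E against the real software
    interface Monitor     { Rtm observe(); }                   // ->M against the real software
    /** Imposes the invalid parts (e.g., removes a component) behind the engine's back. */
    interface TestAdapter { void inject(Rtm invalidParts); }

    /** One test run; the oracle S^M_o looks like S_err. */
    static boolean runOnce(Rtm sValid, Rtm sInvalid, Rtm oracleSErr,
                           Executor executor, Monitor monitor, TestAdapter adapter) {
        executor.execute(sValid);         // ->E imposes the correct part on the software
        adapter.inject(sInvalid);         // the test adapter adds the incorrect part
        Rtm observed = monitor.observe(); // ->M observes the software
        return observed.equals(oracleSErr);
    }

    /** Repeating the test makes masking by environment or software (cases 3 and 7) unlikely. */
    static boolean runRepeatedly(int times, Rtm sValid, Rtm sInvalid, Rtm oracleSErr,
                                 Executor e, Monitor m, TestAdapter a) {
        for (int i = 0; i < times; i++) {
            if (!runOnce(sValid, sInvalid, oracleSErr, e, m, a)) return false;
        }
        return true;
    }
}
```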
V. In-the-Loop Testing

Considering the analysis and planning, one-way testing is effective to find errors that always show up, independent of previous executions of the feedback loop. If we want to identify errors that arise from an accumulated state of the system, we need to test with sequences of inputs. It would be a cumbersome task to construct these sequences by hand. Instead, we propose to provide a simulation that captures the behavior of the adaptable software (AS), environment (ENV), monitor (M), and execute (E) steps. This simulation will provide sequences of RTMs to the analyze step and will read back the RTMs from the plan step. We define such a runtime model simulation by an automaton $RTMS = (S_{RTMS}, \to_{RTMS})$ that comprises the combined behavior of AS, ENV, M, and E. Note that $RTMS$ is a simulation for testing purposes. The provided input RTMs and the way the simulation model reacts to the output of $\to_A$ and $\to_P$ are supposed to be realistic but not an exact replacement for the real AS, ENV, M, and E. It also means that the simulation may behave non-deterministically to reflect realistic AS and ENV and therefore involves some random component.

In order to decide whether a test is successful, we also need an oracle. In the simplest case the oracle is given by a state property $\varphi$ for the model. In more complex cases $\varphi$ may even be a sequence property or an ensemble property. With respect to our example, the oracle may be the sequence property that some architectural constraints for our RTM are violated for at most $n$ subsequent states.

A. Black-Box In-the-Loop Testing of Analysis and Planning
With $RTMS$ at hand, we can test the feedback loop already in an early stage when neither the adaptable software nor the monitor and execute steps are available or ready. The analyze and plan steps combined with $RTMS$ can be simulated together and produce observable sequences: $\to_{RTMS} S_1 \to_A S^A_1 \to_P S^P_1 \to_{RTMS} S_2 \ldots$. From these we consider only the traces of states $\pi = S_1; S^P_1; S_2; \ldots$ and check whether $\pi \models \varphi$ to ensure that $\to_A$ and $\to_P$ as a black box work as expected.
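A minimal sketch of this black-box setup, again with hypothetical interfaces and reusing the `Rtm` stand-in: the simulation provides states, the analyze/plan unit under test produces planned states, and the example sequence property $\varphi$ from above (constraints violated for at most $n$ consecutive states) is checked over the collected trace.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of black-box in-the-loop testing against a simulated RTMS.
final class InTheLoopTest {

    interface Rtms        { Rtm next(Rtm planned); }      // ->RTMS, may behave randomly
    interface AnalyzePlan { Rtm analyzeAndPlan(Rtm s); }  // ->A followed by ->P

    /** Runs the loop and collects the trace pi = S_1; S_1^P; S_2; S_2^P; ... */
    static List<Rtm> collectTrace(Rtms sim, AnalyzePlan ap, Rtm initial, int rounds) {
        List<Rtm> trace = new ArrayList<>();
        Rtm s = initial;
        for (int i = 0; i < rounds; i++) {
            trace.add(s);                       // S_i provided by the simulation
            Rtm planned = ap.analyzeAndPlan(s); // S_i^P produced by the steps under test
            trace.add(planned);
            s = sim.next(planned);              // the simulation reacts to the planned RTM
        }
        return trace;
    }

    /** Example phi: architectural constraints violated in at most n consecutive states. */
    static boolean satisfies(List<Rtm> trace, Predicate<Rtm> violatesConstraints, int n) {
        int consecutive = 0;
        for (Rtm state : trace) {
            consecutive = violatesConstraints.test(state) ? consecutive + 1 : 0;
            if (consecutive > n) return false;  // pi does not satisfy phi
        }
        return true;
    }
}
```

For the grey-box variant discussed next, the trace would additionally record the intermediate $S^A_i$, and the property $\varphi'$ would constrain those states as well.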
B. Grey-Box In-the-Loop Testing of Analysis and Planning

We can also aim for better fault localization if we consider the result of $\to_A$ (i.e., we treat the analyze and plan steps as a grey box). The sequence we would like to look at is the following: $\to_{RTMS} S_1 \to_A S^A_1 \to_P S^P_1 \to_{RTMS} S_2 \ldots$. Here we inspect the trace $\pi' = S_1; S^A_1; S^P_1; S_2; \ldots$. In order to test these traces, we need a property $\varphi'$ that covers the $S^A_i$ as well. We now require $\pi' \models \varphi'$ to ensure that $\to_A$ and $\to_P$ work as expected.

VI. Online Testing and Validation
In a later development stage we can reuse the simulation model $RTMS$ and the properties $\varphi$ and $\varphi'$ alongside the running system for online testing and validation.

A. Online Testing

If $\to_A$ and $\to_P$ in the running system expose the $S^A_i$ and $S^P_i$ in the same way as in the development stage, we can check $\varphi$ and $\varphi'$ online or against a recorded trace. The simulation is simply replaced with the real system. Whether online or offline testing is to be preferred will depend on the available resources on the system under test and the existence of logging facilities. Both approaches, black-box and grey-box testing, are applicable and can be carried out in the same way as with the simulation. The sequences will be $(\to_{AS} \text{ or } \to_{ENV})^* \to_M S^M_i \to_A S^A_i \to_P S^P_i \to_E (\to_{AS} \text{ or } \to_{ENV})^*; \to_M S^M_{i+1} \ldots$ and the traces will be the exchanged RTMs: $\pi = S^M_1; S^A_1; S^P_1; S^M_2; \ldots$
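One conceivable realization of the two variants is a recorder that the running loop notifies for every exchanged RTM; the reporting hook and the constraint predicate are assumptions for illustration, reusing the `Rtm` stand-in.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// The online variant checks the sequence property immediately; the recorded
// trace additionally enables an offline check later.
final class TraceRecorder {
    private final List<Rtm> trace = new ArrayList<>();
    private final Predicate<Rtm> violatesConstraints;
    private final int maxConsecutive;  // the n of the sequence property phi
    private int consecutive = 0;

    TraceRecorder(Predicate<Rtm> violatesConstraints, int maxConsecutive) {
        this.violatesConstraints = violatesConstraints;
        this.maxConsecutive = maxConsecutive;
    }

    /** Called whenever the feedback loop hands an RTM to the next activity. */
    void record(Rtm state) {
        trace.add(state);  // offline variant: persist for a later check
        // Online variant: fail fast as soon as phi is violated.
        consecutive = violatesConstraints.test(state) ? consecutive + 1 : 0;
        if (consecutive > maxConsecutive) {
            throw new AssertionError("phi violated after " + trace.size() + " recorded states");
        }
    }

    /** The trace pi = S^M_1; S^A_1; S^P_1; S^M_2; ... for offline checking. */
    List<Rtm> recordedTrace() { return new ArrayList<>(trace); }
}
```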
B. Validation

The in-the-loop testing heavily depends on $RTMS$. If an error is detected during in-the-loop testing, it is likely that it is caused by an erroneous adaptation ($\to_A$, $\to_P$, or both). But $RTMS$ itself might also be the source of an error or might mask an erroneous adaptation. Validating $RTMS$ in this later stage can give an indication about the quality of $RTMS$ and therefore its suitability for testing. Additionally, if the real system produces sequences not covered by $RTMS$ which cause errors in the adaptation, we know exactly which sequence reveals the error and it can be added to $RTMS$ for regression tests.

The idea behind validating $RTMS$ is to observe $(\to_{AS} \text{ or } \to_{ENV})^* \to_M S^M_i \to_A S^A_i \to_P S^P_i \to_E (\to_{AS} \text{ or } \to_{ENV})^*; \to_M S^M_{i+1} \ldots$ and look at the traces $\pi' = S^M_1; S^P_1; S^M_2; \ldots$. If our simulation model $RTMS$ is correct, it should cover the observed behavior: $\pi' \in L(RTMS)$, which can be checked as sketched below.
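Since $RTMS = (S_{RTMS}, \to_{RTMS})$ is defined over RTM states, the membership check $\pi' \in L(RTMS)$ can be read as: the trace starts in an initial state and every observed state is a possible successor of its predecessor. The automaton interface below is our assumption.

```java
import java.util.List;
import java.util.Set;

// Sketch of the validation check pi' in L(RTMS), reusing the Rtm stand-in.
final class RtmsValidator {

    interface RtmsAutomaton {
        Set<Rtm> initialStates();
        Set<Rtm> successors(Rtm state);  // ->RTMS, possibly nondeterministic
    }

    /** Returns true iff the observed trace pi' is covered by the simulation model. */
    static boolean covers(RtmsAutomaton rtms, List<Rtm> observed) {
        if (observed.isEmpty()) return true;
        if (!rtms.initialStates().contains(observed.get(0))) return false;
        for (int i = 1; i < observed.size(); i++) {
            if (!rtms.successors(observed.get(i - 1)).contains(observed.get(i))) {
                return false;  // this sequence is a candidate to add to RTMS for regression tests
            }
        }
        return true;
    }
}
```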
VII. Initial Evaluation

In this section, we report on our initial evaluation of the testing scheme for SASS that we propose in this paper. This evaluation shows the benefits of using (architectural) runtime models with respect to implementing a test framework by means of reusing MDE techniques. Moreover, it gives us preliminary confidence about the effectiveness of the scheme when developing feedback loops.
A. One-Way Testing
To realize one-way testing, we developed a generic test adapter that loads the input model, triggers the adaptation steps to be tested, such as analysis and planning, and finally compares the resulting model with the oracle model. Developing such a test adapter has been simplified due to MDE principles as realized by the Eclipse Modeling Framework (EMF, https://eclipse.org/modeling/emf/). EMF provides mechanisms to generically load and process models and, with EMF Compare, particularly to compare models. Hence, we easily obtain matches and differences between two models, such as the output model of the adaptation steps and the oracle model, to obtain the testing result. This result, that is, the output of the comparison, is also a model that can be further analyzed. For instance, the Object Constraint Language (OCL, http://projects.eclipse.org/projects/modeling.mdt.ocl) can be used to check application-specific constraints, such as that mission-critical components, like the one for authenticating users on the mRUBiS marketplace, are not missing in the architecture.
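For illustration, the comparison step might be realized with the EMF Compare API roughly as follows; the model file names are made up, and the surrounding scaffolding is ours rather than the actual test adapter.

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Diff;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.scope.DefaultComparisonScope;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;

// Sketch: compare the output model of the tested adaptation steps against
// the oracle model. For standalone (non-Eclipse) execution, a suitable
// resource factory must additionally be registered with the resource set.
public final class ModelComparison {

    public static void main(String[] args) {
        ResourceSet resourceSet = new ResourceSetImpl();
        Resource output = resourceSet.getResource(
                URI.createFileURI("out/result.rtm.xmi"), true);       // model produced by ->A/->P
        Resource oracle = resourceSet.getResource(
                URI.createFileURI("testdata/oracle.rtm.xmi"), true);  // predefined oracle model

        // EMF Compare generically computes matches and differences.
        IComparisonScope scope = new DefaultComparisonScope(output, oracle, null);
        Comparison comparison = EMFCompare.builder().build().compare(scope);

        // The comparison result is itself a model; no differences means the test passes.
        for (Diff diff : comparison.getDifferences()) {
            System.out.println("difference: " + diff);
        }
        System.out.println(comparison.getDifferences().isEmpty() ? "PASS" : "FAIL");
    }
}
```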
B. In-the-Loop Testing

For the internet marketplace mRUBiS we developed a simulator based on an architectural runtime model. It simulates the marketplace itself (i.e., the adaptable software), thereby injecting failures, as well as the monitor and execute steps. The simulator maintains the runtime model against which the analyze and plan steps are developed.

Using this simulator, we can test the analyze and plan steps as follows: i) the simulator injects failures into the runtime model (this simulates the behavior of the adaptable software and environment as well as the monitor step that reflects the failure in the model); ii) the analyze and plan steps to be tested are executed and they analyze and adjust the model according to the adaptation need; iii) the simulator performs the execute step, emulating in the runtime model the effects of the adaptation as performed by the analyze and plan steps. For instance, response times are updated in the model if the configuration of the architecture is adapted.
After one run of the feedback loop and before injecting the next failures, the simulator checks whether the analyze and plan steps performed a well-defined adaptation (e.g., by checking that the life cycle of components has not been violated when adding or removing components) and whether the state of the runtime model represents a valid architecture (e.g., components are not missing and there are no unsatisfied required interfaces, that is, no dangling edges). These checks are performed based on constraints and properties that the runtime models must fulfill, and their results are given as feedback to the engineer; a sketch of such a simulator loop follows below.

This simulator has been used in research and in courses to let students develop and test different adaptation techniques (e.g., hard-coded event-condition-action rules, graph transformations, or event-driven rules) for the analyze and plan steps. Though the simulator helped in finding faults in the adaptation logic, the randomness included in the simulator and its basic logging facilities impeded the automated reproducibility of traces and therefore the retesting of “interesting” edge cases.
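The loop sketched below captures steps i)–iii) and the subsequent checks; all interfaces are hypothetical stand-ins for the EMF-based implementation. Seeding the random component, as in the sketch, would be one way to address the reproducibility issue just mentioned.

```java
import java.util.List;
import java.util.Random;

// Sketch of the in-the-loop simulator loop, reusing the Rtm stand-in.
final class SimulatorLoop {

    interface FailureInjector { void inject(Rtm model, Random random); } // simulates AS/ENV and ->M
    interface AnalyzePlan     { void adapt(Rtm model); }                 // steps under test
    interface ExecuteEmulator { void emulate(Rtm model); }               // emulates ->E in the model
    interface ValidityCheck   { List<String> violations(Rtm model); }    // constraints on the RTM

    static void run(Rtm model, FailureInjector injector, AnalyzePlan adaptation,
                    ExecuteEmulator executor, ValidityCheck check, int rounds, long seed) {
        // A fixed seed makes the injected failure traces reproducible.
        Random random = new Random(seed);
        for (int round = 0; round < rounds; round++) {
            injector.inject(model, random);  // i) inject failures into the runtime model
            adaptation.adapt(model);         // ii) analyze and plan adjust the model
            executor.emulate(model);         // iii) emulate the adaptation's effects
            // Feedback to the engineer: well-definedness and validity violations.
            for (String violation : check.violations(model)) {
                System.out.println("round " + round + ": " + violation);
            }
        }
    }
}
```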
C. Online Testing and Validation
So far, we have not worked on testing the adaptation online. However, our experience with runtime models and employing MDE techniques at runtime for self-adaptation [6], [20] gives us promising confidence that we can achieve online testing. For instance, our EUREMA interpreter [20] that executes feedback loops already maintains the runtime models used within the loops and passes them along the loops' adaptation activities. Thus, when passing models along the activities, the interpreter may defer the execution of the next activity. Before proceeding, the interpreter can either (1) hand over to an online testing activity that compares the current RTM to one derived from a simulation model running in parallel, or (2) log the RTM for later comparison in a simulation module. As discussed earlier, (1) has the advantage of immediately revealing errors but needs computation resources on the system, while (2) can benefit from more resources offline but needs persistence resources for the logs. In both cases, working only on changes of the RTM might reduce costs.
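As a rough sketch of options (1) and (2), an interpreter hook invoked between activities could look as follows; the hook interface, the parallel-simulation interface, and the logging facility are our assumptions, not EUREMA's actual API.

```java
import java.util.Set;

// Hypothetical hook the interpreter calls before executing the next activity.
interface RtmHook {
    void beforeNextActivity(String activity, Rtm currentRtm);
}

// Option (1): compare the current RTM against a simulation model running in
// parallel; errors surface immediately but computation happens on the system.
final class OnlineCheckHook implements RtmHook {
    interface ParallelSimulation { Set<Rtm> successors(Rtm state); } // hypothetical
    private final ParallelSimulation simulation;
    private Rtm simulationState;

    OnlineCheckHook(ParallelSimulation simulation, Rtm initialState) {
        this.simulation = simulation;
        this.simulationState = initialState;
    }

    @Override public void beforeNextActivity(String activity, Rtm current) {
        if (!simulation.successors(simulationState).contains(current)) {
            throw new AssertionError("RTM before '" + activity + "' deviates from simulation");
        }
        simulationState = current;
    }
}

// Option (2): log the RTM (or only its changes) for later offline comparison;
// cheaper online, but requires persistence resources.
final class LoggingHook implements RtmHook {
    private final java.io.PrintStream log = System.out; // stand-in for a persistent log
    @Override public void beforeNextActivity(String activity, Rtm current) {
        log.println(activity + ": " + current);  // a change-based log would reduce cost
    }
}
```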
VIII. Related Work
Testing of SASS has been addressed by others as well. This related work can usually be assigned to one of the following categories: 1) the adaptation is formally specified and verified with special constructs regarding the adaptation [21], [22]; 2) the SASS is tested/verified at runtime/online and the verification expressions are adapted to properties unique to adaptation [10]–[12]; 3) tests are evolved at runtime in an attempt to test for requirement fulfillment even when the environment or the adaptable software changes [13], [14]; 4) testing is carried out at design time addressing the special issues of adaptive systems [15]–[17]; and 5) combined assessment of quality assurance for self-adaptive systems from more than one direction has also been done [23].

The work presented in [23] already shows that a single quality assurance technique is not enough as an adequate approach to achieve high-quality SASS. Testing and formal verification have long been known as complementing techniques for most kinds of systems. We assume that quality assurance for SASS can benefit in the same way from the combination of approaches like the one presented by us and approaches of category 1. Likewise, we see early testing as a complementary technique to online and adaptive online testing (cf. categories 2 and 3). To our understanding, this specifically holds for SASS where unknown circumstances may arise at runtime and need to be adequately taken care of. Nevertheless, testing still needs to be done before a system is deployed to ensure at least an initial and basic quality of the SASS.

Approaches of category 4 also address testing of SASS at design time. We differ from these approaches by not depending on a complete system. Using RTMs as the test interface allows us to test already when only fragments of the system are available, that is, in the earlier development stages. Also, our approach allows testing in a bottom-up manner, starting from the smallest testable units of a SASS and proceeding to the entire system.
IX. Conclusion
In this paper we presented a systematic testing scheme for SASS. It encompasses a staged testing process inspired by the engineering of embedded software. Exploiting architectural runtime models with their various states allows us to address the different stages of one-way, in-the-loop, and online testing. Supporting early development stages with tests, we may find errors early. Furthermore, looking at the individual MAPE-K activities and their different integrations, we should be able to locate faults more easily. In this context, our initial evaluation gives us preliminary confidence about the scheme's effectiveness.

There are several directions in which to evolve the presented testing scheme in future work. As of now we employ an ad hoc simulator for $RTMS$. We could instead make use of a formal model to automatically derive test cases by using coverage criteria, which includes the generation of test inputs and oracles (runtime models and properties) taking the uncertainty of SASS and its environment into account. Useful formalisms range from simple finite state machines to timed, hybrid, or even probabilistic automata. Such a formal approach will further ease a thorough evaluation of the testing scheme. Another direction would be to address the neglected case that the adaptable software and environment change while the MAPE loop is running. We will study special test setups for this case.
References

[1] M. Salehie and L. Tahvildari, "Self-adaptive software: Landscape and research challenges," ACM Trans. Auton. Adapt. Syst., vol. 4, no. 2, pp. 14:1–14:42, 2009.
[2] R. de Lemos, H. Giese, H. Müller, M. Shaw, J. Andersson, M. Litoiu, B. Schmerl, G. Tamura, N. M. Villegas, T. Vogel, D. Weyns, L. Baresi, B. Becker, N. Bencomo, Y. Brun, B. Cukic, R. Desmarais, S. Dustdar, G. Engels, K. Geihs, K. Goeschka, A. Gorla, V. Grassi, P. Inverardi, G. Karsai, J. Kramer, A. Lopes, J. Magee, S. Malek, S. Mankovskii, R. Mirandola, J. Mylopoulos, O. Nierstrasz, M. Pezzè, C. Prehofer, W. Schäfer, R. Schlichting, D. B. Smith, J. P. Sousa, L. Tahvildari, K. Wong, and J. Wuttke, "Software Engineering for Self-Adaptive Systems: A Second Research Roadmap," in SEfSAS II, ser. LNCS. Springer, 2013, vol. 7475, pp. 1–32.
[3] R. Calinescu, "Emerging techniques for the engineering of self-adaptive high-integrity software," in Assurances for Self-Adaptive Systems, ser. LNCS. Springer, 2013, vol. 7740, pp. 297–310.
[4] S. Nair, J. L. de la Vara, M. Sabetzadeh, and L. Briand, "An extended systematic literature review on provision of evidence for safety certification," Information and Software Technology, vol. 56, no. 7, pp. 689–717, 2014.
[5] B. Broekman and E. Notenboom, Testing Embedded Software. Pearson Education, 2003.
[6] T. Vogel, S. Neumann, S. Hildebrandt, H. Giese, and B. Becker, "Model-Driven Architectural Monitoring and Adaptation for Autonomic Systems," in Proc. of the 6th Intl. Conference on Autonomic Computing and Communications (ICAC). ACM, 2009, pp. 67–68.
[7] D. Garlan, S.-W. Cheng, A.-C. Huang, B. Schmerl, and P. Steenkiste, "Rainbow: Architecture-Based Self-Adaptation with Reusable Infrastructure," Computer, vol. 37, no. 10, pp. 46–54, 2004.
[8] J. Kramer and J. Magee, "Self-managed systems: An architectural challenge," in Future of Software Engineering (FOSE). IEEE, 2007, pp. 259–268.
[9] P. McKinley, S. M. Sadjadi, E. P. Kasten, and B. H. Cheng, "Composing Adaptive Software," Computer, vol. 37, no. 7, pp. 56–64, 2004.
[10] H. J. Goldsby, B. H. Cheng, and J. Zhang, "AMOEBA-RT: Run-time verification of adaptive software," in Models in Software Engineering. Springer, 2008, pp. 212–224.
[11] Y. Zhao, S. Oberthür, M. Kardos, and F.-J. Rammig, "Model-based runtime verification framework for self-optimizing systems," Electronic Notes in Theoretical Computer Science, vol. 144, no. 4, pp. 125–145, 2006.
[12] B. Eberhardinger, H. Seebach, A. Knapp, and W. Reif, "Towards testing self-organizing, adaptive systems," in Testing Software and Systems. Springer, 2014, pp. 180–185.
[13] E. M. Fredericks and B. H. Cheng, "Automated generation of adaptive test plans for self-adaptive systems," in Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). IEEE, 2015.
[14] E. M. Fredericks, B. DeVries, and B. H. Cheng, "Towards run-time adaptation of test cases for self-adaptive systems in the face of uncertainty," in Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). ACM, 2014, pp. 17–26.
[15] G. Püschel, C. Piechnick, S. Götz, C. Seidl, S. Richly, T. Schlegel, and U. Aßmann, "A combined simulation and test case generation strategy for self-adaptive systems," Journal On Advances in Software, vol. 7, no. 3&4, pp. 686–696, 2014.
[16] Z. Wang, S. Elbaum, and D. S. Rosenblum, "Automated generation of context-aware tests," in Proceedings of the 29th International Conference on Software Engineering (ICSE). IEEE, 2007, pp. 406–415.
[17] J. Cámara, R. de Lemos, N. Laranjeiro, R. Ventura, and M. Vieira, "Testing the robustness of controllers for self-adaptive systems," Journal of the Brazilian Computer Society, vol. 20, no. 1, pp. 1–14, 2014.
[18] G. Blair, N. Bencomo, and R. B. France, "[email protected]," Computer, vol. 42, no. 10, pp. 22–27, 2009.
[19] J. O. Kephart and D. M. Chess, "The vision of autonomic computing," Computer, vol. 36, no. 1, pp. 41–50, 2003.
[20] T. Vogel and H. Giese, "Model-Driven Engineering of Self-Adaptive Software with EUREMA," ACM Trans. Auton. Adapt. Syst., vol. 8, no. 4, pp. 18:1–18:33, 2014.
[21] M. Sama, D. S. Rosenblum, Z. Wang, and S. Elbaum, "Model-based fault detection in context-aware adaptive applications," in Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 2008, pp. 261–271.
[22] M. U. Iftikhar and D. Weyns, "Formal verification of self-adaptive behaviors in decentralized systems with Uppaal," Linnaeus University Växjö, Tech. Rep., 2012.
[23] D. Weyns, "Towards an integrated approach for validating qualities of self-adaptive systems," in Proceedings of the 2012 Workshop on Dynamic Analysis. ACM, 2012, pp. 24–29.