Marcel Verhoef
Radboud University Nijmegen
Publications
Featured research published by Marcel Verhoef.
International Journal on Software Tools for Technology Transfer | 2006
Ernesto Wandeler; Lothar Thiele; Marcel Verhoef; Paul Lieverse
Performance analysis plays an increasingly important role in the design of embedded real-time systems. Time-to-market pressure in this domain is high while the available implementation technology is often pushed to its limit to minimize cost. This requires analysis of performance as early as possible in the life cycle. Simulation-based techniques are often not sufficiently productive. We present an alternative, analytical, approach based on Real-Time Calculus. Modular performance analysis is presented through a case study in which several candidate architectures are evaluated for a distributed in-car radio navigation system. The analysis is efficient due to the high abstraction level of the model, which makes the technique suitable for early design exploration.
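The flavour of such an analytical approach can be illustrated with a toy calculation (a hypothetical sketch, not taken from the paper): in Real-Time Calculus, the worst-case delay of an event stream is bounded by the maximum horizontal distance between an upper arrival curve and a lower service curve. The curves and numbers below are invented for illustration.

```python
def delay_bound(alpha_u, beta_l, horizon):
    """Maximum horizontal distance between an upper arrival curve and a
    lower service curve, evaluated on a discrete time grid (toy version)."""
    worst = 0
    for t in range(horizon + 1):
        demand = alpha_u(t)          # events that may arrive in any window of length t
        tau = 0
        while beta_l(t + tau) < demand:  # wait until the resource has served them
            tau += 1
        worst = max(worst, tau)
    return worst

# Hypothetical example: a periodic stream (one event per 10 ms) processed
# by a resource guaranteeing one event per 4 ms after a 2 ms blackout.
alpha = lambda t: -(-t // 10)           # ceil(t/10): upper arrival curve
beta = lambda t: max(0, (t - 2) // 4)   # lower service curve
```

The bound is conservative but cheap to compute, which is what makes this style of analysis attractive for early design exploration.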
ACM Sigsoft Software Engineering Notes | 2010
Peter Gorm Larsen; Nick Battle; Miguel Alexandre Ferreira; John S. Fitzgerald; Kenneth Lausdahl; Marcel Verhoef
Overture is a community-based initiative that aims to develop a common open-source platform integrating a range of tools for constructing and analysing formal models of systems using VDM. The mission is both to provide an industrial-strength tool set for VDM and to provide an environment that allows researchers and other stakeholders to experiment with modifications and extensions to the tools and the language. This paper presents the current status and future vision of the Overture project.
formal methods | 2006
Marcel Verhoef; Peter Gorm Larsen; Jozef Hooman
The complexity of real-time embedded systems is increasing, for example due to the use of distributed architectures. An extension to the Vienna Development Method (VDM) is proposed to address the problem of deployment of software on distributed hardware. The limitations of the current notation are discussed and new language elements are introduced to overcome these deficiencies. The impact of these changes is illustrated by a case study. A constructive operational semantics is defined in VDM++ and validated using VDMTools. The associated abstract formal semantics, which is not specific to VDM, is presented in this paper. The proposed language extensions significantly reduce the modeling effort when describing distributed real-time systems in VDM++ and the revised semantics provides a basis for improved tool support.
Wiley Encyclopedia of Computer Science and Engineering | 2008
John S. Fitzgerald; Peter Gorm Larsen; Marcel Verhoef
The Vienna Development Method (VDM) is one of the longest established model-oriented formal methods for the development of computer-based systems and software. It consists of a group of mathematically well-founded languages and tools for expressing and analyzing system models during early design stages, before expensive implementation commitments are made. The construction and analysis of the model help to identify areas of incompleteness or ambiguity in informal system specifications, and to provide some level of confidence that a valid implementation will have key properties, especially those of safety or security. VDM has a strong record of industrial application, in many cases by practitioners who are not specialists in the underlying formalism or logic. Experience with the method suggests that the effort expended on formal modeling and analysis can be recovered in reduced rework costs arising from design errors. Keywords: Vienna Development Method; VDM-SL; system modeling; model validation; proof obligations; validation conjectures; tool support
Archive | 2014
John S. Fitzgerald; Peter Gorm Larsen; Marcel Verhoef
In addition, the AbstractController class overrides the BeforeStep and AfterStep operations. In the former, it writes to the actuator and reads the sensors to be stored in local instance variables, while in the latter, it outputs some diagnostic information using the IO library provided by VDM. This means that all the concrete controllers will inherit these behaviours and only have to provide implementations for StepBody. The definitions of the BeforeStep and AfterStep operations are given below. Of particular interest is the precondition on the BeforeStep operation. This ensures that the SetupIO operation has been called by the subclass and that objects have been assigned to the instance variables. Note that the call to IO`printf is shortened for the sake of brevity, but it shows the use of the %s placeholder in the first parameter (a string) being replaced by values in the second parameter (a sequence of values).

The concrete controller Controller is defined as a subclass of the AbstractController class. It must provide an implementation for StepBody. In the case of the TorsionBar5-Extended example, this implementation follows that first shown in Sect. 4.6, deciding when to change the setpoint and calculating the output for the motor using a PID controller. The controller must call SetupThread and SetupIO from its superclasses, which is done in the constructor shown below. This also shows the creation of the PID control object, following the “limiting” design pattern described above in this chapter (see Sect. 6.4.2).

Finally, the Monitor class inherits directly from AbstractThread. The operation StepBody is defined as in the CheckMonitor operation from Sect. 4.6 (not repeated here). In order to do this, it requires a reference to the controller that it is monitoring, as well as access to the encoder on the load.
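The class structure just described is a template-method pattern, which can be sketched in Python (illustrative only: the class and operation names mirror the text, but the method bodies, the stub Motor/Encoder classes and the P-gain are hypothetical, not the book's VDM++ code):

```python
class Motor:
    """Stub actuator: remembers the last value written to it."""
    def __init__(self):
        self.written = None
    def write(self, value):
        self.written = value

class Encoder:
    """Stub sensor: returns a fixed shaft position."""
    def read(self):
        return 0.25

class AbstractThread:
    def step(self):
        # template method: fixed skeleton around a variable body
        self.before_step()
        self.step_body()
        self.after_step()
    def step_body(self):
        raise NotImplementedError  # concrete subclasses must implement this

class AbstractController(AbstractThread):
    def __init__(self):
        self.motor = None
        self.encoder = None
        self.output = 0.0
        self.position = 0.0
    def setup_io(self, motor, encoder):
        self.motor = motor
        self.encoder = encoder
    def before_step(self):
        # analogue of the VDM++ pre-condition: SetupIO must have been called
        assert self.motor is not None and self.encoder is not None
        self.motor.write(self.output)        # write to the actuator
        self.position = self.encoder.read()  # store the sensor reading
    def after_step(self):
        # analogue of the diagnostic IO`printf call
        print("position = %s" % self.position)

class Controller(AbstractController):
    def __init__(self, motor, encoder):
        super().__init__()
        self.setup_io(motor, encoder)
    def step_body(self):
        # hypothetical stand-in for the PID calculation: a P term only
        setpoint = 1.0
        self.output = 0.5 * (setpoint - self.position)
```

Calling step() on a Controller then runs before_step, the subclass's step_body and after_step in order, which is exactly the inheritance scheme the text describes.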
As in the example above, this is done in the constructor.

6.6 Structuring Constituent Models for Flexible Simulation

In this section, we look at ways to structure DE and CT models to allow them to participate in both co-simulations and single-domain simulations. One benefit of our approach is that the constituent DE and CT models of the co-models can still be analysed in their existing tools, as well as through co-simulation. In order to do this, some care has to be taken in how they are built and structured, which is what is explored in this section. The TorsionBar5-Extended co-model is built in such a way that it permits analysis through CT-only simulation, DE-only simulation and co-simulation. This property is useful when following the domain-first approaches to building co-models: DE-first and CT-first (briefly introduced in Sect. 2.7.3 and explained in detail in Chap. 8). In particular, it means that single-domain regression tests can be performed after the co-model stage has been reached. For example, if a CT-first approach is followed by creating a CT-only model with a simple controller and large changes are made to the CT model, old tests can be performed again in a CT-only simulation to confirm that the changes are sound.

We can view the switch between domain-only simulation and co-simulation as moving the “boundary” (or “interface”) between the constituent models. DE-only and CT-only simulations are the two extremes, where the other model plays no part in the simulation. For co-simulation, the boundary falls somewhere in between the constituent models, with the contract defining the bridge over this boundary. The choice of this boundary depends on the purpose of the model, and it may well change during the course of a development as designs evolve and become more detailed. The structuring techniques described later in this section can also be useful if the co-model needs to support switching between multiple boundaries.
Before describing the structuring of CT and DE models for flexible simulation (see Sects. 6.6.2 and 6.6.3), we first look in more detail at the factors affecting the placement of the co-model boundary.

6.6.1 Co-model Boundaries

In order to place the co-model boundary in the “right” place for a given co-model's purpose, it is necessary to have a solid idea of the components that form the system. In our approach, physical components are typically modelled in CT and software elements in DE (though this is by no means mandated). Certain components, however, fall around the co-model boundary and could thus be modelled in either CT or DE. Examples include loop controllers, sensors and actuators. Choices here can therefore affect the boundary and the content of the co-simulation contract. In the early development stages, many design decisions will not yet have been made, so multiple alternative components might be considered. In fact, it is entirely possible that the same solution could be realised in software, hardware or a mixture of the two, in which case it is interesting to explore the costs and benefits of each solution. We revisit the possibility of trading off different solutions in the forward look to Cyber-Physical Systems (CPSs) in Chap. 14.

There are a few factors that should be considered when determining the co-model boundary and where each component is modelled:

Simulation Performance: Would a choice of boundary have an effect on the time taken to perform a simulation? An example here would be the location of a PID controller. If the PID were on the CT side, then the co-model interface carries the setpoint for the controller, which may be updated at a lower frequency than the sensor and actuator signals, so information traverses the co-simulation interface less frequently.

Abstraction: It could be the case that the implementation details of some component or series of components are not important to the purpose of the model. An example here would be the sensing of a shaft position by an encoder. At the detailed level, the sensor is modelled by scaling, sampling and quantising the value, while at the simpler level, the actual value for the position held by the simulator is sent over the boundary. A second example is a movable guide that diverts paper down one of two paths, with a small probability that the paper arrives and collides with the mechanism while it is switching position. This can either be modelled in CT if the dynamic response is important, or in DE if a simpler model where we only consider collision as a probability is sufficient.

Maturity: If part of a model is not well understood or represents an unstable part of the design, then a more abstract model may be an attractive option. As the design becomes more mature and design choices are firmed up, the extra fidelity potentially afforded by a more detailed model may be justified.

Modelling Gaps: It could be the case that either or both of the co-model parts are not yet complete, meaning that sensor implementations on either side are not yet available. In such a case, the co-model boundary could be wider than intended for the final model, where the width means that the data is read from or written to points that will not form the final interface.

As mentioned above, this boundary may change during the co-modelling process. For example, co-simulation performance and control-loop tuning may well be important early on, hence modelling these on the CT side is preferable. Later in the development, when it is desirable to predict the performance of the DE controller (including its ability to meet deadlines), modelling the control loop in DE is preferred. Abstraction levels, maturity and the closing of modelling gaps may also affect the boundary choice as co-modelling progresses.
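The encoder example above can be made concrete with a small sketch (illustrative only: the class names and resolution figure are hypothetical, not taken from the TorsionBar5-Extended model). The two models expose the same interface, so either can be placed on either side of the co-model boundary:

```python
import math

class IdealEncoder:
    """Abstract model: the simulator's true shaft angle is passed over
    the boundary unchanged."""
    def read(self, true_angle):
        return true_angle

class QuantisingEncoder:
    """Detailed model: scales the angle to encoder counts and quantises
    it, as a real incremental encoder would."""
    def __init__(self, counts_per_rev=2000):
        self.counts_per_rev = counts_per_rev
    def read(self, true_angle):
        counts = math.floor(true_angle * self.counts_per_rev / (2 * math.pi))
        return counts * 2 * math.pi / self.counts_per_rev
```

Swapping one for the other moves detail across the boundary without changing the contract, which is the essence of the abstraction trade-off discussed here.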
6.6.2 Structuring CT Models for Flexible Simulation

A CT model will generally contain elements for modelling the plant to be controlled, as well as sensors and actuators. In addition, a CT-only model will have a controller block that is used to test the response of the plant and for creation and tuning of loop controllers. Once the move to a co-model is made, this controller block is replaced by a controller block that connects to the DE model through the co-simulation contract.

Blocks in 20-sim can have multiple alternative “implementations” that can be switched between. We recommend using this feature to retain the CT-only test controller implementation while introducing a co-simulation implementation. In this way, it is possible to switch between CT-only simulation and co-simulation. Additionally, if different co-model boundaries are explored, further implementations can be created reflecting the choice of boundary.

To create a new implementation for a block, or to swap between existing implementations, you right-click on the block and select the Edit implementation menu. Figure 6.6 shows a screenshot from the TorsionBar5-Extended example, showing that the controller block has two implementations, one called “CTOnly” and the other called “CoSim” (the latter currently selected, as indicated by the check mark). This menu also has options to add, remove and rename implementations. Note that each implementation can have its own separate icon. We recommend altering the visual style of the different implementations to make it easy to see at a glance which is selected. In the case of the example in the figure, the colour will change; however, more complex icon changes are possible using the 20-sim icon editor (again accessible through the right-click menu, via the Edit icon option).

Fig. 6.6 How to change the implementation of the controller

6.6.3 Structuring DE Models for Flexible Simulation

A DE model generally contains various representations of the software elements of the system. In addition, a DE-only model will usually have an approximation of the plant, sufficient to test the supervisory behaviours of the controller model. Here we focus on the struc…
Mathematical Structures in Computer Science | 2013
John S. Fitzgerald; Peter Gorm Larsen; Ken Pierce; Marcel Verhoef
The development of embedded computing systems poses significant challenges. The increasing complexity of distributed control and the need to provide evidence to support assurance of safety suggest that there is merit in adopting model-based formal methods. However, such approaches require effective collaboration between the engineering disciplines involved, and in particular the integration of discrete-event models of controllers with continuous-time models of their environments. This paper proposes a new approach to the development of such combined models (co-models), in which an initial discrete-event model may include approximations of continuous-time behaviour that can later be replaced by couplings to continuous-time models. An operational semantics of co-simulation then allows the discrete and continuous models to run on their respective simulators, managed by a coordinating co-simulation engine. This permits the exploration of the composite co-model's behaviour in a range of operational scenarios. The approach has been realised using the Vienna Development Method (VDM) as the discrete-event formalism, and 20-sim as the continuous-time framework, and has been applied successfully to a case study based on the distributed controller for a personal transporter device.
integrated formal methods | 2010
John S. Fitzgerald; Peter Gorm Larsen; Ken Pierce; Marcel Verhoef; Sune Wolff
This paper presents initial results of research aimed at developing methods and tools for multidisciplinary collaborative development of dependable embedded systems. We focus on the construction and analysis by cosimulation of formal models that combine discrete-event specifications of computer-based controllers with continuous-time models of the environment with which they interact. Basic concepts of collaborative modelling and co-simulation are presented. A pragmatic realisation using the VDM and Bond Graph formalisms is described and illustrated by means of an example, which includes the modelling of both normal and faulty behaviour. Consideration of a larger-scale example from the personal transportation domain suggests the forms of support needed to explore the design space of collaborative models. Based on experience so far, challenges for future research in this area are identified.
integrated formal methods | 2007
Marcel Verhoef; Peter M. Visser; Jozef Hooman; Jan F. Broenink
Development of computerized embedded control systems is difficult because it brings together systems theory, electrical engineering and computer science. The engineering and analysis approaches advocated by these disciplines are fundamentally different which complicates reasoning about e.g. performance at the system level. We propose a lightweight approach that alleviates this problem to some extent. An existing formal semantic framework for discrete event models is extended to allow for consistent co-simulation of continuous time models from within this framework. It enables integrated models that can be checked by simulation in addition to the verification and validation techniques already offered by each discipline individually. The level of confidence in the design can now be raised in the very early stages of the system design life-cycle instead of postponing system-level design issues until the integration and test phase is reached. We demonstrate the extended semantic framework by co-simulation of VDM++ and bond-graph models on a case study, the level control of a water tank.
Concurrency, Compositionality, and Correctness | 2010
Jozef Hooman; Marcel Verhoef
To support model-based development and analysis of embedded systems, the specification language VDM++ has been extended with asynchronous communication and improved timing primitives. In addition, we have defined an interface for the co-simulation of a VDM++ model with a continuous-time model of its environment. This enables multi-disciplinary design space exploration and continuous validation of design decisions throughout the development process. We present an operational semantics which formalizes the precise meaning of the VDM extensions and the co-simulation concept.
Collaborative Design for Embedded Systems | 2014
John S. Fitzgerald; Peter Gorm Larsen; Marcel Verhoef
Embedded systems can be seen as the first generation of a wider class of cyber-physical systems that integrate possibly large numbers of computing platforms in physical environments. Given the significant challenges facing the developers of such systems, we briefly review the state of the art in collaborative modelling and co-simulation technology for embedded systems design, and identify advances needed on the way to scaling this technology to the cyber-physical level. We consider the role of co-modelling in the design flow for cyber-physical systems and the generalisation of co-models to networks of constituent models of cyber and physical elements, the need for open co-simulation in order to support greater heterogeneity among constituent models, and the features needed to describe ubiquitous systems.