
Publications


Featured research published by Simon Pickin.


Automated Software Engineering | 2003

Automated requirements-based generation of test cases for product families

Clémentine Nebut; Simon Pickin; Yves Le Traon; Jean-Marc Jézéquel

Software product families (PF) are becoming one of the key challenges of software engineering. Despite recent interest in this area, the extent to which the close relationship between PF and requirements engineering is exploited to guide the V&V tasks is still limited. In particular, PF processes generally lack support for generating test cases from requirements. In this paper, we propose a requirements-based approach to functional testing of product lines, based on a formal test generation tool. Here, we outline how product-specific test cases can be automatically generated from PF functional requirements expressed in UML (Unified Modeling Language). We study the efficiency of the generated test cases on a case study.
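
As a rough illustration of the idea only (the scenario format, feature names and products below are invented for this sketch, not taken from the paper's tool), selecting shared requirement scenarios per product might look like this:

```python
# Minimal sketch: deriving product-specific test cases from
# product-family requirements. Scenario format and feature names
# are hypothetical illustrations, not the paper's notation.

# Each requirement is a use-case scenario guarded by the features
# a product must have for the scenario to apply.
REQUIREMENTS = [
    {"name": "withdraw_cash", "requires": {"atm"},
     "steps": ["insertCard", "enterPin", "withdraw"]},
    {"name": "print_receipt", "requires": {"atm", "printer"},
     "steps": ["withdraw", "printReceipt"]},
    {"name": "check_balance", "requires": {"atm"},
     "steps": ["insertCard", "enterPin", "balance"]},
]

# Two products of the family, described by their feature sets.
PRODUCTS = {
    "basic": {"atm"},
    "deluxe": {"atm", "printer"},
}

def generate_tests(product_features):
    """Keep only the scenarios whose required features the product
    has; each surviving scenario becomes one abstract test case."""
    return [r["steps"] for r in REQUIREMENTS
            if r["requires"] <= product_features]

for product, features in PRODUCTS.items():
    print(product, "->", generate_tests(features))
```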


Integrated Formal Methods | 2004

Using UML Sequence Diagrams as the Basis for a Formal Test Description Language

Simon Pickin; Jean-Marc Jézéquel

A formal, yet user-friendly, test description language could increase the possibilities for automation in the testing phase while at the same time gaining widespread acceptance. Scenario languages are currently one of the most popular formats for describing interactions between possibly distributed components. The question of giving a solid formal basis to scenario languages such as MSC has also received a lot of attention. In this article, we discuss using one of the most widely-known scenario languages, UML sequence diagrams, as the basis for a formal test description language for use in the distributed system context.
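
A minimal sketch of what a sequence-diagram-based test description boils down to, assuming a toy message format and a single pass/fail verdict; the paper's language is far richer (loops, alternatives, distributed verdicts), and the message names here are invented:

```python
# A sequence-diagram-like test description: an expected ordering
# of messages between lifelines, checked against an observed
# execution trace.

EXPECTED = [  # (sender, receiver, message)
    ("tester", "server", "connect"),
    ("server", "tester", "ack"),
    ("tester", "server", "request"),
    ("server", "tester", "response"),
]

def verdict(trace):
    """'pass' if the trace follows the expected scenario exactly,
    'fail' otherwise. Real test languages also distinguish
    'inconclusive' outcomes; omitted here for brevity."""
    return "pass" if list(trace) == EXPECTED else "fail"

ok = EXPECTED
bad = EXPECTED[:2] + [("server", "tester", "error")]
print(verdict(ok), verdict(bad))
```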


International Conference on Reliable Software Technologies | 2006

One million (LOC) and counting: static analysis for errors and vulnerabilities in the Linux kernel source code

Peter T. Breuer; Simon Pickin

This article describes an analysis tool aimed at the C code of the Linux kernel, first presented as a prototype in this forum in 2004. Its continuing maturation means that it can now treat millions of lines of code in a few hours on very modest platforms. It detects about two uncorrected deadlock situations, and three accesses to freed memory, per thousand C source files or million lines of source code in the Linux kernel. In contrast to model-checking techniques, the tool uses a configurable “3-phase” programming logic to perform its analysis, and it carries out several different analyses simultaneously.
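
The following is an illustrative sketch, not the authors' tool: a tiny abstract interpretation that tracks how many spinlocks are held along a straight-line sequence of operations and flags calls that may sleep while a spinlock is held, one of the deadlock patterns such an analysis looks for. The operation names are invented stand-ins for real kernel calls:

```python
# Abstract state: the number of spinlocks currently held.
# Calls that may sleep are unsafe while any spinlock is held.
MAY_SLEEP = {"kmalloc_gfp_kernel", "mutex_lock", "copy_from_user"}

def check(ops):
    held = 0
    reports = []
    for i, op in enumerate(ops):
        if op == "spin_lock":
            held += 1
        elif op == "spin_unlock":
            held = max(0, held - 1)
        elif op in MAY_SLEEP and held > 0:
            reports.append(f"op {i}: {op} may sleep under spinlock")
    return reports

print(check(["spin_lock", "copy_from_user", "spin_unlock"]))  # flagged
print(check(["spin_lock", "spin_unlock", "kmalloc_gfp_kernel"]))  # clean
```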


Expert Systems With Applications | 2002

Knowledge model reuse: therapy decision through specialisation of a generic decision model

Ángeles Manjarrés; Simon Pickin; José Mira

We present the definition of the therapy decision task and its associated Heuristic Multi-Attribute (HM) solving method, in the form of a KADS-style specification. The goal of the therapy decision task is to identify the ideal therapy, for a given patient, in accordance with a set of objectives of a diverse nature constituting a global therapy-evaluation framework in which considerations such as patient preferences and quality-of-life results are integrated. We give a high-level overview of this task as a specialisation of the generic decision task, and additional decomposition methods for the subtasks involved. These subtasks possess some reflective capabilities for reasoning about self-models, particularly the learning subtask, which incrementally corrects and refines the model used to assess the effects of the therapies. This work illustrates the process of reuse in the framework of AI software development methodologies such as KADS-CommonKADS in order to obtain new (more specialised but still generic) components for the analysis libraries developed in this context. In order to maximise reuse benefits, where possible, the therapy decision task and HM method have been defined in terms of regular components from the earlier-mentioned libraries. To emphasise the importance of using a rigorous approach to the modelling of domain and method ontologies, we make extensive use of the semi-formal object-oriented analysis notation UML, together with its associated constraint language OCL, to illustrate the ontology of the decision method and the corresponding specific one of the therapy decision domain, the latter being a refinement via inheritance of the former.
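
As a loose illustration of the heuristic multi-attribute idea, here is a sketch that scores candidate therapies as weighted sums of attribute values and selects the best; the attributes, weights and values are invented, and the learning and model-refinement subtasks described above are omitted:

```python
# Hypothetical evaluation framework: weights over diverse
# objectives, including quality-of-life and patient preference.
WEIGHTS = {"efficacy": 0.5, "quality_of_life": 0.3,
           "patient_preference": 0.2}

# Invented candidate therapies with per-attribute assessments.
THERAPIES = {
    "therapy_A": {"efficacy": 0.9, "quality_of_life": 0.4,
                  "patient_preference": 0.6},
    "therapy_B": {"efficacy": 0.7, "quality_of_life": 0.8,
                  "patient_preference": 0.8},
}

def score(attrs):
    """Weighted-sum aggregation of the multi-attribute assessment."""
    return sum(WEIGHTS[a] * v for a, v in attrs.items())

best = max(THERAPIES, key=lambda t: score(THERAPIES[t]))
print({t: round(score(a), 2) for t, a in THERAPIES.items()}, "->", best)
```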


International Conference on Computational Science | 2006

Checking for deadlock, double-free and other abuses in the Linux kernel source code

Peter T. Breuer; Simon Pickin

The analysis described in this article detects about two real and uncorrected deadlock situations per thousand C source files or million lines of code in the Linux kernel source, and three accesses to freed memory, at a few seconds per file. In contrast to model-checking techniques, the analysis applies a configurable “3-phase” Hoare-style logic to an abstract interpretation of C code to obtain its results.
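
A companion sketch to the deadlock check above, in the same illustrative spirit: a simplified abstract interpretation that tracks pointer states and flags double frees and use-after-free. Again, this is not the paper's Hoare-style analysis of real C; the operation tuples are invented:

```python
def check_frees(ops):
    """Walk (operation, pointer) pairs, tracking each pointer's
    abstract state ('alloc' or 'freed'), and report abuses."""
    state = {}
    reports = []
    for i, (op, ptr) in enumerate(ops):
        if op == "malloc":
            state[ptr] = "alloc"
        elif op == "free":
            if state.get(ptr) == "freed":
                reports.append(f"op {i}: double free of {ptr}")
            state[ptr] = "freed"
        elif op == "use" and state.get(ptr) == "freed":
            reports.append(f"op {i}: use of freed {ptr}")
    return reports

print(check_frees([("malloc", "p"), ("free", "p"),
                   ("use", "p"), ("free", "p")]))
```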


Innovations in Systems and Software Engineering | 2006

Symbolic approximation: an approach to verification in the large

Peter T. Breuer; Simon Pickin

This article describes symbolic approximation, a theoretical foundation for techniques evolved for large-scale verification, in particular for post hoc verification of the C code in large-scale open-source projects such as the Linux kernel. The corresponding toolset's increasing maturity means that it is now capable of treating millions of lines of C source code in a few hours on very modest support platforms. In order to explicitly manage the state-space-explosion problem that bedevils model-checking approaches, we work with approximations to programs in a symbolic domain where approximation has a well-defined meaning. A more approximate program means that less can be said about what the program does, which corresponds to a weaker logic for reasoning about the program. So we adjust the approximation by adjusting the applied logic. This guarantees a safe approximation (one which may generate false alarms but no false negatives) provided the logic used is weaker than the exact logic of C. We choose the logic to suit the analysis.
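
One way to picture the safety argument, as a hedged sketch: evaluate conditions in a three-valued logic where an UNKNOWN value stands for what the coarser abstraction cannot decide, and report any property that is not definitely true. False alarms are then possible but false negatives are not. The condition values here are invented for illustration:

```python
TRUE, FALSE, UNKNOWN = "T", "F", "?"

def weak_and(a, b):
    """Conjunction in a weaker, three-valued logic: whenever
    either side is undecided, the result is undecided too."""
    if FALSE in (a, b):
        return FALSE
    if UNKNOWN in (a, b):
        return UNKNOWN   # weaker logic: we can say less
    return TRUE

def safe_check(prop_value):
    """Report unless the property is definitely true."""
    return "ok" if prop_value == TRUE else "ALARM (possible violation)"

# An exact analysis might prove the second conjunct, but the
# coarser abstraction returns UNKNOWN -- the check stays safe.
print(safe_check(weak_and(TRUE, TRUE)))      # ok
print(safe_check(weak_and(TRUE, UNKNOWN)))   # ALARM (maybe false)
```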


Leveraging Applications of Formal Methods | 2006

Verification in the Large via Symbolic Approximation

Peter T. Breuer; Simon Pickin

This article describes symbolic approximation, the technique we have evolved to handle the post-hoc verification of C code in very large open-source projects such as the Linux kernel; it consists essentially of deliberate approximation in a symbolic domain. Using the technique, we are treating millions of lines of C source code in a few hours on very modest support platforms. The theoretical foundation is a configurable compositional programming logic and a notion of approximation that is tied to what can be deduced about a program: adjusting the logic used to reason about a program adjusts how good an approximation of it we obtain.


Formal Methods for Open Object-Based Distributed Systems | 1997

Introducing formal notations in the development of object-based distributed applications

Simon Pickin; Carlos Sánchez; Juan C. Yelmo; Juan J. Gil; E. Rodríguez

Although the benefits of the introduction of formal approaches in software development are well-recognized, in the case of distributed object-based applications, this introduction is not straightforward. We tackle this problem using currently available technology in a pragmatic manner and on several fronts. The first of these is by semantically strengthening the standard notion of interface into a contract and then using this contract notion at different structural levels; the second is by defining a life-cycle model incorporating architectural decomposition and concurrent development of the resulting components whose relations are specified as contracts; the third is by using standardized FDTs in the design of the more critical components. We show how these different fronts are connected and we illustrate briefly how the formalisations introduced can be used to advantage in the testing and prototyping activities of the life-cycle.
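
A minimal sketch of the first of these fronts, strengthening an interface into a contract with checkable pre- and postconditions; the account example and its conditions are invented, not the paper's case study:

```python
class AccountContract:
    """Interface plus behavioural contract for a simple account:
    the interface fixes the operations, the contract adds the
    obligations of client (precondition) and supplier
    (postcondition)."""

    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # Precondition: the client must not overdraw.
        assert 0 < amount <= self.balance, "precondition violated"
        old = self.balance
        self.balance -= amount
        # Postcondition: the supplier guarantees the new balance.
        assert self.balance == old - amount, "postcondition violated"
        return amount

acc = AccountContract(100)
acc.withdraw(30)
print(acc.balance)   # 70; withdrawing 200 would raise AssertionError
```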


Workshop on Intelligent Network | 1994

A User-View Of Services And Network: Formal Specification And Interaction Detection

Pierre Combes; Simon Pickin; B. Renard

We first present some general considerations about the service/feature interaction problem and why formal specifications will really increase the efficiency and reliability of the service creation process. We introduce a formal method for detecting certain interaction problems, more precisely high-level interactions. This method is based on the construction of an abstract model of services and the network, as seen by users, and on the notion of service properties (or policies). A property is a declarative specification of the basic requirements of a service feature. We then give an introduction to the existing formal specification tools that could be used with such a method. We give some examples of service properties, discussing a first approach to confidentiality, security and taxation properties, and for these examples we present feature interactions and how they are detected.
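
As a toy illustration of high-level interaction detection, the classic pair of call forwarding and do-not-disturb can be modelled as rules that each demand an outcome for the same event, with an interaction flagged when the demanded outcomes contradict. This example is the textbook standard, not the paper's formal model:

```python
# Each feature maps an incoming-call event to a required outcome.
FEATURES = {
    "call_forwarding": lambda event: ("forward", event["forward_to"]),
    "do_not_disturb":  lambda event: ("reject", None),
}

def detect_interactions(subscribed, event):
    """Flag an interaction when two subscribed features demand
    contradictory outcomes for the same event."""
    outcomes = {f: FEATURES[f](event) for f in subscribed}
    actions = {o[0] for o in outcomes.values()}
    if len(actions) > 1:
        return f"interaction: {outcomes}"
    return "no interaction"

event = {"type": "incoming_call", "forward_to": "alice"}
print(detect_interactions(["call_forwarding", "do_not_disturb"], event))
```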


Expert Systems | 2002

Describing generic expertise models as object-oriented analysis patterns: the heuristic multi-attribute decision pattern

Ángeles Manjarrés; Simon Pickin

We report on work concerning the use of object-oriented analysis and design (OAD) methods in the development of artificial intelligence (AI) software applications, in which we compare such techniques to software development methods more commonly used in AI, in particular CommonKADS. As a contribution to clarifying the role of OAD methods in AI, in this paper we compare the analysis models of the object-oriented methods and the CommonKADS high-level expertise model. In particular, we study the correspondences between generic tasks, methods and ontologies in methodologies such as CommonKADS and analysis patterns in object-oriented analysis. Our aim in carrying out this study is to explore to what extent, in areas of AI where the object-oriented paradigm may be the most adequate way of conceiving applications, an analysis-level ‘pattern language’ could play the role of the libraries of generic knowledge models in the more commonly used AI software development methods. As a case study we use the decision task, whose importance arises from its status as the basic task of the intelligent agent, together with the associated heuristic multi-attribute decision method, for which we derive a corresponding decision pattern described in the Unified Modelling Language (UML), a de facto standard in OAD.
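
A hedged sketch of what such a decision pattern might look like in object-oriented form, roughly the Template Method shape: an abstract class fixes the task structure (generate candidates, assess them, select one) and subclasses plug in domain specifics. The class and method names are invented, not the paper's UML:

```python
from abc import ABC, abstractmethod

class Decision(ABC):
    """Generic decision task: the skeleton is fixed here, the
    domain-specific steps are deferred to subclasses."""

    def decide(self):
        candidates = self.generate()
        return max(candidates, key=self.assess)

    @abstractmethod
    def generate(self): ...

    @abstractmethod
    def assess(self, candidate): ...

class RouteDecision(Decision):
    """Hypothetical specialisation of the generic decision task."""
    def generate(self):
        return [("motorway", 50), ("scenic", 80)]
    def assess(self, candidate):
        return -candidate[1]          # prefer the shorter route

print(RouteDecision().decide())       # ('motorway', 50)
```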

Collaboration


Dive into Simon Pickin's collaborations.

Top Co-Authors

Ángeles Manjarrés (National University of Distance Education)
Jean-Marc Jézéquel (Centre national de la recherche scientifique)
Angel Groba (Technical University of Madrid)
Alejandro Alonso (Technical University of Madrid)
Carlos Sánchez (Technical University of Madrid)
Juan C. Yelmo (Technical University of Madrid)