Magnus C. Ohlsson
Lund University
Publications
Featured research published by Magnus C. Ohlsson.
International Conference on Software Maintenance | 1998
Magnus C. Ohlsson; Claes Wohlin
Software systems often live longer than expected, and it is a challenge to ensure that they grow old gracefully. This implies that methods are needed to ensure that system components remain maintainable. In this paper, the need to investigate, classify and study software components is emphasized. A classification method is proposed, based on classifying the software components into green, yellow and red components. The classification scheme is complemented with a discussion of suitable models for identifying problematic components. The scheme and the models are illustrated in a small case study to highlight the opportunities. The long-term objective of the work is to define methods, models and metrics suitable for identifying software components which have to be taken care of through either tailored processes (e.g. additional focus on verification and validation) or reengineering. The case study indicates that the long-term objective is realistic and worthwhile.
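As an illustration of the traffic-light idea, the following is a minimal Python sketch that classifies components by fault count against two thresholds. The thresholds and the fault-count metric are invented for illustration; the paper's actual classification criteria may differ.

```python
# Hypothetical thresholds; the paper's actual criteria are not reproduced here.
YELLOW_THRESHOLD = 5   # more faults than this -> yellow
RED_THRESHOLD = 15     # more faults than this -> red

def classify(fault_count: int) -> str:
    """Map a component's fault count to a green/yellow/red label."""
    if fault_count > RED_THRESHOLD:
        return "red"      # candidate for reengineering
    if fault_count > YELLOW_THRESHOLD:
        return "yellow"   # extra verification and validation effort
    return "green"        # maintain with the normal process

# Toy component data (invented).
components = {"parser": 2, "scheduler": 9, "io_layer": 21}
for name, faults in components.items():
    print(f"{name}: {classify(faults)}")
```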
Information & Software Technology | 1998
Magnus C. Ohlsson; Claes Wohlin; Björn Regnell
This paper outlines a four-step effort estimation study and focuses on the first and second steps. The four steps are formulated to successively introduce a more formal effort experience base. The objective of the study is to evaluate the formalism needed to improve effort estimation and to study different approaches for recording and reusing experiences from effort planning in software projects. In the first step (including seven projects), the objective is to compare effort estimation based on a rough figure (indicating the approximate size of the projects) with estimation based on an informal experience base. The second step focuses on the reuse of experiences from an effort experience base in which the outcomes of the seven previous projects were stored. Seven new projects are planned based on the previous experiences. After project completion, the outcomes are compared with the initial plans, and the data from six of the seven new projects are used to plan the seventh. It is clear from the studies that effort estimation is difficult and that the mean estimation error is in the range of 14%-19%, independent of the approach used. Further, it is concluded that the best estimates are obtained when the projects use the previous experience and complement this information with their own thoughts and opinions. Finally, it is concluded that data collection is not enough in itself; the data collected must be processed, i.e. interpreted, generalized and synthesized into a reusable form.
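To make the reported error range concrete, the following Python sketch computes a mean magnitude of relative error over a set of projects. The (estimated, actual) effort figures are invented, and this error measure is a standard one rather than necessarily the exact measure used in the study.

```python
def relative_error(estimated: float, actual: float) -> float:
    """Magnitude of relative error for a single project."""
    return abs(actual - estimated) / actual

# Hypothetical (estimated, actual) effort pairs in person-hours.
projects = [(400, 460), (250, 230), (600, 540), (320, 390)]

errors = [relative_error(est, act) for est, act in projects]
mean_error = sum(errors) / len(errors)
print(f"Mean estimation error: {mean_error:.1%}")
```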
Journal of Software Maintenance and Evolution: Research and Practice | 2001
Magnus C. Ohlsson; Anneliese Amschler Andrews; Claes Wohlin
Many of today's software systems evolve through a series of releases that add new functionality and features, in addition to the results of corrective maintenance. As the systems evolve over time, it is necessary to keep track of and manage their problematic components. Our focus is to track system evolution and to react before the systems become difficult to maintain. To do the tracking, we use a method based on a selection of statistical techniques. In the case study reported here, which had historical data available primarily on corrective maintenance, we apply the method to four releases of a system consisting of 130 components. In each release, components are classified as fault-prone if the number of defect reports written against them is above a certain threshold. The outcome of the case study shows stabilising principal components over the releases, and classification trees with lower thresholds in their decision nodes. Also, the variables used in the classification trees' decision nodes are related to changes in the same files. The discriminant functions use more variables than the classification trees and are more difficult to interpret. Box plots highlight the findings from the other analyses. The results show that for a context of corrective maintenance, principal components analysis together with classification trees are good descriptors for tracking software evolution.
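The analysis pipeline described above can be sketched with standard tools: principal components analysis followed by a classification tree that labels a component as fault-prone when its defect-report count exceeds a threshold. The data, threshold, and feature set below are synthetic placeholders, not the case-study data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
THRESHOLD = 10  # hypothetical defect-report threshold

# Synthetic per-component measures (e.g. size, change counts) for 130 components.
X = rng.normal(size=(130, 5))
defect_reports = rng.poisson(lam=8, size=130)
y = defect_reports > THRESHOLD  # True = fault-prone

model = make_pipeline(PCA(n_components=2), DecisionTreeClassifier(max_depth=3))
model.fit(X, y)
print("Training accuracy:", model.score(X, y))
```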
Journal of Software Maintenance and Evolution: Research and Practice | 2000
Anneliese Amschler Andrews; Magnus C. Ohlsson; Claes Wohlin
As software systems evolve over a series of releases, it becomes important to know which components are stable and which show a repeated need for corrective maintenance. To track these across multiple releases, we adapt a reverse architecting technique to the defect reports of a series of releases. Fault relationships among system components are identified based on whether they are involved in the same defect report, and on how many defect reports this occurs in. There are degrees of fault-coupling between components depending on how often the components are involved in the same defect fix. After these fault-coupling relationships between components are extracted, they are abstracted to the subsystem level. We also identify a measure of fault cohesion (i.e. the local fault-proneness of components). The resulting fault architecture figures show, for each release, which of its relationships are most fault-prone. Comparing across releases makes it possible to see whether some relationships between components are repeatedly fault-prone, indicating an underlying systemic architecture problem. We illustrate our technique on a large commercial system consisting of over 800 KLOC of C, C++, and microcode.
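The core extraction step can be sketched in a few lines of Python: two components are fault-coupled when they appear in the same defect report, with the coupling degree given by the number of shared reports. The report data below is invented.

```python
from collections import Counter
from itertools import combinations

# Each defect report lists the components touched by its fix (invented data).
defect_reports = [
    {"net", "ui"},
    {"net", "db"},
    {"net", "ui", "db"},
    {"ui", "log"},
]

coupling = Counter()
for involved in defect_reports:
    for pair in combinations(sorted(involved), 2):
        coupling[pair] += 1  # one more shared defect report for this pair

for (a, b), weight in coupling.most_common():
    print(f"{a} -- {b}: fault-coupled in {weight} report(s)")
```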
Workshop on Program Comprehension | 2000
Claes Wohlin; Martin Höst; Magnus C. Ohlsson
The paper proposes a method for using product measures and defect data to understand and identify the design and programming constructs that contribute more than expected to the defect statistics. The method can be used to identify the most defect-prone design and programming constructs, and it is illustrated on data collected from a large software project in the telecommunication domain. The example indicates that it is feasible, based on defect data and product measures, to identify the main sources of defects in terms of design and programming constructs. Potential actions to be taken include less usage of particular design and programming constructs, additional resources for verification of the constructs, and further education in how to use the constructs.
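One way to operationalize "contributes more than expected" is to compare each construct's share of the defects with its share of usage in the code, as in the following Python sketch; the construct names and counts are invented.

```python
# Invented usage and defect counts per design/programming construct.
usage = {"pointer_arithmetic": 120, "inheritance": 300, "macros": 80}
defects = {"pointer_arithmetic": 30, "inheritance": 25, "macros": 20}

total_usage = sum(usage.values())
total_defects = sum(defects.values())

for construct in usage:
    expected = usage[construct] / total_usage       # share of all usages
    observed = defects[construct] / total_defects   # share of all defects
    if observed > expected:
        print(f"{construct}: {observed:.0%} of defects vs "
              f"{expected:.0%} of usage -> more defect-prone than expected")
```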
IEEE International Software Metrics Symposium | 1999
Magnus C. Ohlsson; Claes Wohlin
This paper presents an empirical study of effort estimation in software engineering projects. In particular, the study focuses on improvements in effort estimates as more information becomes available. For example, after the requirements phase the requirements specification is available, and the question is whether knowledge of the number of requirements helps in improving the effort estimate for the project. The objective is twofold. First, it is important to find suitable measures that can be used in the re-planning of projects. Second, the objective is to study how effort estimates evolve as a software project is performed. The analysis is based on data from 26 projects and consists of two main steps: model building based on data from part of the projects, and evaluation of the models on the other projects. No single measure was found to be a particularly good basis for an effort prediction model; instead, several measures from different phases were used. The prediction models were then evaluated, and it is concluded that it is difficult to improve effort estimates during project execution, at least if the initial estimate is fairly good. It is, however, believed that the prediction models are important for knowing that the initial estimate is of the right order, i.e. the estimates are needed to confirm that the initial estimate was fairly good. It is concluded that the re-estimation approach will help project managers stay in control of their projects.
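The two-step analysis (model building on some projects, evaluation on the rest) can be sketched as follows; the measures, the linear model, and all data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# One row per project: [initial estimate, number of requirements] (invented).
X = rng.uniform(100, 1000, size=(26, 2))
actual_effort = X @ np.array([0.9, 0.5]) + rng.normal(0, 40, size=26)

# Build the model on 18 projects, evaluate it on the remaining 8.
model = LinearRegression().fit(X[:18], actual_effort[:18])
predicted = model.predict(X[18:])
errors = np.abs(predicted - actual_effort[18:]) / actual_effort[18:]
print(f"Mean relative error on held-out projects: {errors.mean():.1%}")
```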
Archive | 2012
Claes Wohlin; Per Runeson; Martin Höst; Magnus C. Ohlsson; Björn Regnell; Anders Wesslén
Systematic literature reviews are conducted to "identify, analyse and interpret all available evidence related to a specific research question" [96]. Since a review aims to give a complete, comprehensive and valid picture of the existing evidence, the identification, analysis and interpretation must all be conducted in a scientific and rigorous way. In order to achieve this goal, Kitchenham and Charters have adapted guidelines for systematic literature reviews, primarily from medicine, evaluated them [24] and updated them accordingly [96]. These guidelines, structured according to a three-step process of planning, conducting and reporting the review, are summarized below.
Archive | 2012
Claes Wohlin; Per Runeson; Martin Höst; Magnus C. Ohlsson; Björn Regnell; Anders Wesslén
When an experiment is completed, the findings may be presented to different audiences, as defined in Fig. 11.1. This could, for example, be done in a paper for a conference or a journal, a report for decision-makers, a package for replication of the experiment, or as educational material. The packaging could also be done within companies to improve and understand different processes. In this case, it is appropriate to store the experiences in an experience base according to the concepts discussed by Basili et al. [16].
Archive | 2012
Claes Wohlin; Per Runeson; Martin Höst; Magnus C. Ohlsson; Björn Regnell; Anders Wesslén
The primary objective of the presentation of this experiment is to illustrate experimentation and the steps in the experiment process introduced in the previous chapters. The presentation of the experiment in this chapter is focused on the experiment process rather than following the proposed report structure in Chap. 11.
Archive | 2012
Claes Wohlin; Per Runeson; Martin Höst; Magnus C. Ohlsson; Björn Regnell; Anders Wesslén
The experiment data from the operation phase is input to the analysis and interpretation. After collecting experimental data in the operation phase, we want to be able to draw conclusions based on this data, and to draw valid conclusions we must interpret the experiment data.
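As one concrete example of interpreting experiment data, the following Python sketch applies a two-sample t-test to compare two treatments; the measurements are invented, and the choice of test is an assumption for illustration rather than a summary of the chapter.

```python
from scipy import stats

# Invented measurements, e.g. defect-detection rates per subject.
treatment_a = [12.1, 10.4, 13.0, 11.7, 12.5]
treatment_b = [9.8, 10.1, 9.2, 10.9, 9.5]

t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis of equal means at the 5% level.")
```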