
Publication


Featured research published by Lars Bratthall.


Product Focused Software Process Improvement | 2000

Is a Design Rationale Vital when Predicting Change Impact? A Controlled Experiment on Software Architecture Evolution

Lars Bratthall; Enrico Johansson; Björn Regnell

Software process improvement efforts often seek to shorten development lead-time. One potential means is to facilitate architectural changes by providing a design rationale, i.e., documentation of why the architecture is built as it is. The hypothesis is that changes will be faster and more correct if such information is available during change impact analysis. This paper presents a controlled experiment in which the value of having access to a retrospective design rationale is evaluated both quantitatively and qualitatively. Realistic change tasks are performed by 17 subjects from both industry and academia on two complex systems from the domain of embedded real-time systems. The results from the quantitative analysis show that, for one of the systems, there is a significant improvement in correctness and speed when subjects have access to a design rationale document. In the qualitative analysis, the design rationale was considered helpful for speeding up changes and improving correctness. For the other system the results were inconclusive, and further studies are recommended in order to increase the understanding of the role of a design rationale in the architectural evolution of software systems.


Empirical Software Engineering | 2002

Can you Trust a Single Data Source? An Exploratory Software Engineering Case Study

Lars Bratthall; Magne Jørgensen

As the demand for empirical evidence behind claims of improvements in software development and evolution has increased, the use of empirical methods such as case studies has grown. In case study methodology, triangulation of various types is a commonly recommended technique for increasing validity. This study investigates a multiple data source case study with the objective of identifying whether more findings, more trustworthy findings and different findings are made using multiple data source triangulation than had a single data source been used. The case study investigated analyses key lead-time success factors for a software evolution project in a large organization developing eBusiness systems with high-availability, high-throughput transaction characteristics. By tracing each finding in that study to the individual pieces of evidence motivating it, it is suggested that a multiple data source exploratory case study can have a higher validity than a single data source study. It is concluded that a careful case study design with multiple sources of evidence can result not only in better justified findings than a single data source study, but also in different findings. Thus this study provides empirically derived evidence that a multiple data source case study is more trustworthy than a comparable single data source case study.


Hawaii International Conference on System Sciences | 2001

The importance of quality requirements in software platform development: a survey

Enrico Johansson; Anders Wesslén; Lars Bratthall; Martin Höst

This paper presents a survey in which some quality requirements that commonly affect software architecture have been prioritized with respect to their cost and lead-time impact, both when developing software platforms and when using them. Software platforms are the basis for a product line, i.e., a collection of functionality on which a number of products are based. The survey has been carried out in two large software development organizations with 34 senior participants. The prioritization was carried out using the Incomplete Pairwise Comparison (IPC) method. The analysis shows that there are large differences between the importance of the quality requirements studied. The differences between the views of different stakeholders are also analysed and are found to be smaller than the differences between the quality requirements. Yet stakeholder disagreement is identified as a potential source of negative impact on product development cost and lead-time, and rules of thumb for reducing this impact are given.
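The Incomplete Pairwise Comparison (IPC) method referred to above derives priority weights even when not every pair of items has been judged against every other. The sketch below is a hypothetical illustration of that idea (iterative geometric-mean estimation over whichever ratio judgements happen to be available), not the survey's actual instrument or data:

```python
import math

def priority_weights(items, comparisons, iterations=50):
    """Estimate priority weights from possibly incomplete pairwise ratios.

    comparisons maps (a, b) -> r, meaning 'a is r times as important as b'.
    Missing pairs are simply skipped; weights are refined iteratively so
    that w[a] / w[b] approaches each judged ratio r.
    """
    w = {item: 1.0 for item in items}
    for _ in range(iterations):
        new_w = {}
        for item in items:
            # Collect an estimate of this item's weight from each
            # available comparison that involves it.
            estimates = []
            for (a, b), r in comparisons.items():
                if a == item:
                    estimates.append(r * w[b])   # w[a] ~ r * w[b]
                elif b == item:
                    estimates.append(w[a] / r)   # w[b] ~ w[a] / r
            if estimates:
                # Geometric mean is the usual aggregation for ratio judgements.
                new_w[item] = math.exp(
                    sum(math.log(e) for e in estimates) / len(estimates))
            else:
                new_w[item] = w[item]
        w = new_w
    total = sum(w.values())
    return {item: v / total for item, v in w.items()}

# Hypothetical judgements: reliability vs. usability and usability vs.
# portability were compared, but reliability vs. portability never was.
quality_reqs = ["reliability", "usability", "portability"]
judged = {("reliability", "usability"): 3.0, ("usability", "portability"): 2.0}
weights = priority_weights(quality_reqs, judged)
```

Even with the missing pair, the derived weights recover a full ranking of all three requirements, which is the practical appeal of incomplete pairwise comparison when 34 participants cannot each judge every pair.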


Information & Software Technology | 2000

A survey of lead-time challenges in the development and evolution of distributed real-time systems

Lars Bratthall; Per Runeson; K. Adelswärd; W. Eriksson

This paper presents a survey that identifies lead-time consumption in the development and evolution of distributed real-time systems (DRTSs). Data has been collected through questionnaires, focused interviews and non-directive interviews with senior designers. Quantitative data has been analyzed using the Analytic Hierarchy Process (AHP). A trend in the 11 organizations is a statistically significant shift of the main lead-time burden from programming to integration and testing when systems are distributed. From this, it is concluded that processes, tools and technologies that reduce either the need for testing or the time it takes have an impact on the development and evolution lead-time of DRTSs.
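The Analytic Hierarchy Process used for the quantitative data turns a complete reciprocal matrix of pairwise judgements into a priority vector; a common approximation uses row geometric means, with an eigenvalue-based consistency index to flag contradictory judgements. A minimal sketch with illustrative numbers (not the survey's data):

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priorities for a reciprocal pairwise comparison matrix.

    matrix[i][j] states how much more important item i is than item j,
    so matrix[j][i] must equal 1 / matrix[i][j].
    """
    n = len(matrix)
    # Row geometric means approximate the principal eigenvector.
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    w = [g / total for g in geo]
    # Approximate the principal eigenvalue to gauge judgement consistency:
    # for a perfectly consistent matrix, lambda_max == n and CI == 0.
    lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)  # consistency index
    return w, ci

# Illustrative, perfectly consistent 3x3 matrix: activity 0 consumes
# twice the lead-time of activity 1 and six times that of activity 2.
m = [[1.0, 2.0, 6.0],
     [0.5, 1.0, 3.0],
     [1 / 6, 1 / 3, 1.0]]
weights, ci = ahp_weights(m)  # weights ~ [0.6, 0.3, 0.1], ci ~ 0
```

In practice the consistency index is compared against a random-matrix baseline; a noticeably positive value signals that a respondent's judgements contradict each other and may need revisiting.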


International Conference on Software Engineering | 2002

Integrating hundreds of products through one architecture - the industrial IT architecture

Lars Bratthall; Robert van der Geest; Holger Hofmann; Edgar Jellum; Zbigniew Korendo; Robert Martinez; Michal Orkisz; Christian Zeidler; Johan Andersson

During the last few years, software product line engineering has gained significant interest as a way of creating software products faster and cheaper. But what architecture is needed to integrate large numbers of products from different product lines? This paper describes such an architecture and its supporting processes and tools. Through cases, it is illustrated how the architecture is used to integrate new and old products in such diverse integration projects as vessel motion control, airport baggage handling systems, pulp & paper and oil & gas, in a very large organization. However, in a large organization it is a challenge to make everyone follow an architecture. Steps taken to ensure global architectural consistency are presented. It is concluded that a single architecture can be used to unify development in a huge organization, where distributed development practices might otherwise prohibit integration of various products.


IEEE Transactions on Software Engineering | 2002

Is it possible to decorate graphical software design and architecture models with qualitative information? An experiment

Lars Bratthall; Claes Wohlin

Software systems evolve over time and it is often difficult to maintain them. One reason for this is that often it is hard to understand the previous release. Further, even if architecture and design models are available and up to date, they primarily represent the functional behavior of the system. To evaluate whether it is possible to also represent some nonfunctional aspects, an experiment has been conducted. The objective of the experiment is to evaluate the cognitive suitability of some visual representations that can be used to represent a control relation, software component size and component external and internal complexity. Ten different representations are evaluated in a controlled environment using 35 subjects. The results from the experiment show that representations with low cognitive accessibility weight can be found. In an example, these representations are used to illustrate some qualities in an SDL block diagram. It is concluded that the incorporation of these representations in architecture and design descriptions is both easy and probably worthwhile. The incorporation of the representations should enhance the understanding of previous releases and, hence, help software developers in evolving and maintaining complex software systems.


Workshop on Program Comprehension | 2000

Understanding some software quality aspects from architecture and design models

Lars Bratthall; Claes Wohlin

Software systems evolve over time and it is often difficult to maintain them. One reason for this is that it is often hard to understand the previous release. Further, even if architecture and design models are available and up to date, they primarily represent the functional behaviour of the system. To evaluate whether it is possible to also represent some non-functional aspects, an experiment has been conducted. The objective of the experiment is to evaluate the cognitive suitability of some visual representations that can be used to represent a control relation, software component size, and component external and internal complexity. Ten different representations are evaluated in a controlled environment using 35 subjects. The results from the experiment show that it is possible to also represent some non-functional aspects. It is concluded that the incorporation of these representations in architecture and design descriptions is both easy and probably worthwhile. The incorporation of the representations should enhance the understanding of previous releases and hence help software developers in evolving and maintaining complex software systems.


Product Focused Software Process Improvement | 2001

Program Understanding Behavior during Estimation of Enhancement Effort on Small Java Programs

Lars Bratthall; Erik Arisholm; Magne Jørgensen

Good effort estimation is considered a key success factor for competitive software creation services. In this study, task-level effort estimation by project leaders and software designers has been investigated in two Internet software service companies through an experiment. Think-aloud protocols from 27 estimations of the effort required for consecutive change tasks on a small Java program have been analysed using the AFECS coding scheme. Results indicate that a) effort estimation at the task level differs greatly between individuals, even when small problems are addressed; b) AFECS seems to be appropriate as a coding scheme when assessing program comprehension behaviour for the purpose of effort estimation; c) protocol analysis of comprehension during effort estimation does not necessarily capture all process elements. These results can be used to further guide detailed analysis of individual task-level effort estimation, as can a set of high-level estimation events identified in this study.


Product Focused Software Process Improvement | 2002

Component Certification - What is the Value?

Lars Bratthall; Johan Hasselberg; Brad Hoffman; Zbigniew Korendo; Bruno Schilli; Lars Gundersen

Component-based software is becoming increasingly popular as a means to create value through improved integration across multiple parts of a plant or business. However, components that are supposed to be integrated sometimes cannot be integrated in the way the user envisioned at the time of acquiring them. Certification of components is one way of ensuring that components adhere to certain standards for integration. This study presents findings from two case studies assessing the value of one particular certification program from ABB, called Industrial IT Enabled. A method for facilitating complex decision-making, Incomplete Pairwise Comparison (IPC), has been used to identify the relative value of Industrial IT Enabled for customers, as well as for ABB itself. Results indicate that the certification provides practically significant added value to customers, as well as to ABB. It is believed that these results can, to some extent, be valid for other similar certification programs.


Empirical Assessment and Evaluation in Software Engineering (EASE) | 2002

Benchmarking of processes for managing product platforms: a case study

Martin Höst; Enrico Johansson; Adam Norén; Lars Bratthall

A case study is presented in which two organisations participated in a benchmarking initiative to discover improvement suggestions for their processes for managing product platforms. The initiative is based on an instrument, developed as part of this study, which consists of a list of questions in eight major categories that guide the participating organisations in describing their processes. The descriptions were then reviewed by the organisations cross-wise in order to identify areas for improvement. The major objective of the case study is to evaluate the benchmarking procedure and instrument in practice. The result is that the benchmarking procedure and instrument were well received in the study. It is therefore concluded that the approach is probably applicable to other similar organisations as well.

Collaboration


Top co-authors of Lars Bratthall:

Claes Wohlin (Blekinge Institute of Technology)

Magne Jørgensen (Simula Research Laboratory)

Johan Andersson (Mälardalen University College)