
Publication


Featured research published by Tor Stålhane.


Lecture Notes in Computer Science | 2003

Post Mortem – An Assessment of Two Approaches

Tor Stålhane; Torgeir Dingsøyr; Geir Kjetil Hanssen; Nils Brede Moe

Learning from experience is the key to success for all who develop software. Both the successes and the failures in software projects can help us to improve. Here we discuss two versions of Post Mortem Analysis (PMA) as methods for harvesting experience from completed software projects, which can be part of a larger knowledge management program. The two methods are tailored for use in small and medium-sized companies and are conceptually easy to apply. In addition, they require few resources compared to other methods in the field. We think the methods are useful for companies that need to document their knowledge, find improvement actions, and start systematic knowledge harvesting.


Conference on Software Engineering Education and Training | 2005

Using Post Mortem Analysis to Evaluate Software Architecture Student Projects

A. Inge Wang; Tor Stålhane

This paper presents experiences and results from using the post mortem analysis (PMA) method to evaluate student projects in a software architecture course at the Norwegian University of Science and Technology (NTNU). The PMA gave students an opportunity to evaluate their own work as well as the project as a whole. The results of the analysis revealed several positive and negative issues related to the project that could be used to improve the course the next year. We also discovered that the PMA gave us a much more detailed evaluation than the more traditional form-based course evaluations. In addition, the students found the PMA method a useful and practical way of summarizing and learning from a project. Since the students got to practice the PMA method, they can also bring this software process improvement practice to the companies where they start working in the IT industry.


Journal of Systems and Software | 1997

In search of the customer's quality view

Tor Stålhane; P. C. Borgersen; K. Arnesen

This paper describes the work done in the PROFF project to get the customer’s view of software product quality. The customers in this case are organizations that buy software for PCs and workstations. The main result is that the quality of service and follow-up activities are more important than the quality of the product itself. This observation should have an impact on the way we market and sell software products in the future.


SEI Conference on Software Engineering Education | 1992

Software Reuse in an Educational Perspective

Tor Stålhane; Even-André Karlsson; Guttorm Sindre

Software is largely developed from scratch, whereas other engineering disciplines tend to use mass produced, off-the-shelf components. Reuse still fails to have any massive impact in the software field beyond the low level functional libraries provided with various compilers.


Journal of Systems and Software | 1994

The quest for reliability: A case study

Tor Stålhane; Kari Juul Wedde

This article describes a project—the implementation of an Ada symbolic debugger—and the assurance activities involved in the project. We discuss the error data collected during the project and the statistical analysis performed to draw some conclusions on which relationships affect the error density of a component. In brief, we found that if we keep track of all errors, related changes, and tests throughout the life cycle, it is possible to prevent the corruption of earlier corrections and changes. Furthermore, neither reuse nor unit testing has any effect on error density. The main effect is simplicity, through high maintainability and low coupling to large, complex external libraries.


Reliability Engineering & System Safety | 1992

A goal oriented approach to software testing

Eldfrid Øfsti Øvstedal; Tor Stålhane

There is a tendency to distribute the testing and validation effort in a software project uniformly over all system functions. However, to improve system reliability and safety, testing effort must be focused on the functions with the highest failure consequences. This paper describes a method that computes the number of test cases given the accepted risk levels for each function. Inputs to the method are the total set of functions for the system, the set of possible accidents and their consequences, plus the subset of accidents that can be caused by a failure in one particular function. Failure consequences and function usage are then used to find the Potential Annual Loss Exposure (PALE) by using a simple diagram developed by AFSC. Given the PALE value, system functions can be ranked according to their risk. It is then possible to set goals for their failure probabilities and to compute the number of test cases needed for each function.


International Conference on Software Maintenance | 1995

A case study of a maintenance support system

Kari Juul Wedde; Tor Stålhane; Inge Nordbø

One of the problems when maintaining a system installed at many sites and with many support centres is that many problems are reported several times, creating a large amount of extra work for the system maintenance organization. To reduce this problem, we have built a trouble report filter that filters out more than 30% of all repeated trouble reports, even if the problem is described with different terms or at different system levels. The work consists of building a classification model based on the system description, the terms used in the trouble reports, and the relations between these terms, and of using measures for term distance and importance to compute a trouble report distance. The model can be tuned to maximum efficiency by varying the distance and importance measures. After describing the model and the term network, we describe two experiments with real data. The results show that it is possible to build a simple but highly efficient model for recognizing trouble reports.
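A trouble report distance of the kind the abstract describes might look like the following sketch, where `term_dist` (the term network's distance) and `importance` stand in for the paper's tuned measures; the toy values are invented for illustration:

```python
def report_distance(report_a, report_b, term_dist, importance):
    """Distance between two trouble reports: each term in one report is
    matched to its closest term in the other (closeness from the term
    network), weighted by term importance, and the two directed averages
    are symmetrized. A hypothetical simplification of the paper's model."""
    def directed(src, dst):
        weighted = sum(importance[t] * min(term_dist(t, u) for u in dst) for t in src)
        return weighted / sum(importance[t] for t in src)
    return 0.5 * (directed(report_a, report_b) + directed(report_b, report_a))

# Toy term network: identical terms are distance 0, everything else 1.
toy_dist = lambda a, b: 0.0 if a == b else 1.0
toy_importance = {"crash": 2.0, "login": 1.0, "timeout": 1.0}
d = report_distance({"crash", "login"}, {"crash", "timeout"}, toy_dist, toy_importance)
# Reports sharing the heavily weighted term "crash" come out close (d = 1/3).
```

A filter would then flag a new report as a probable repeat when its distance to some earlier report falls below a tuned threshold.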


IFAC Proceedings Volumes | 1990

Assessing Software Reliability in a Changing Environment

Tor Stålhane

The reliability of a software system depends on its initial error content and which errors are discovered during testing and use. Which errors are discovered and then corrected will depend on how the system is used. As a consequence of this, the reliability of a system will depend on its history (installation trail). Knowledge of this history can be used to predict the reliability at a new site and the test strategy necessary to obtain a specific reliability for a predefined site. This paper shows how we can describe the history of a system by a function execution vector. The number of remaining errors will have a Poisson distribution with intensity depending on the execution vectors for all the previous sites and the initial error content. Once the cost of a failure in any system function is identified, it is also possible to find the optimal test strategy for testing the system at a specified site.
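The Poisson claim can be illustrated with a simple thinning model. The exponential survival form and the `detect_rate` parameter below are assumptions for illustration, not the paper's fitted model; what carries over is that thinning a Poisson count leaves it Poisson, so only the intensity needs tracking:

```python
import math

def remaining_intensity(initial, exec_vectors, detect_rate=1.0):
    """Poisson intensity of remaining errors per function after the system
    has passed through a sequence of sites. Illustrative assumption: an
    error in function f survives a site whose execution vector gives f a
    usage fraction e_f with probability exp(-detect_rate * e_f)."""
    lam = list(initial)
    for e in exec_vectors:
        lam = [l * math.exp(-detect_rate * ef) for l, ef in zip(lam, e)]
    return lam

# Two functions; the first site exercised only function 0.
lam = remaining_intensity([10.0, 10.0], [[1.0, 0.0]], detect_rate=math.log(2))
# lam[0] is halved to 5.0; lam[1] stays at 10.0.
```

Predicting reliability at a new site then amounts to weighting each function's remaining intensity by that site's own execution vector.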


Product Focused Software Process Improvement | 2000

The Benefits of Networking

Jørgen Bøegh; Mads Christiansen; Ebba Thora Hvannberg; Tor Stålhane

A network of 18 competence centres, called ESPINODES, supports companies conducting software process improvement (SPI) experiments under the European Commission's ESSI programme. Three Nordic ESPINODES report on experiences from participating in this network. We focus on general issues using examples from our local activities. The benefits of the network as seen by the sponsor, the participants, and their customers are discussed.


Microprocessors and Microsystems | 1998

Modification of safety critical systems: an assessment of three approaches

Tor Stålhane; Kari Juul Wedde

This paper sums up the experience at SINTEF Telecom and Informatics on analysis of safety critical systems. After a short description of the system under consideration, the paper falls into two parts. The first is a description of two modifications, how they were implemented and how they were analysed for safety. The second contains a discussion of the three methods used—FTA, FMECA and code analysis. Here we concentrate on how these methods differ in focus, the knowledge and information needed, and the types of problems they can handle. The paper's conclusion is that all three methods are needed when analysing the modifications of a safety critical system. The knowledge needed and the problem focus will, however, differ.

Collaboration


Dive into Tor Stålhane's collaboration.

Top Co-Authors

Guttorm Sindre | Norwegian University of Science and Technology

A. Inge Wang | Norwegian University of Science and Technology

Even-André Karlsson | Norwegian Institute of Technology

Gunnar Brataas | Norwegian Institute of Technology