Jo Erskine Hannay
Simula Research Laboratory
Publications
Featured research published by Jo Erskine Hannay.
Information & Software Technology | 2007
Vigdis By Kampenes; Tore Dybå; Jo Erskine Hannay; Dag I. K. Sjøberg
An effect size quantifies the effects of an experimental treatment. Conclusions drawn from hypothesis testing results might be erroneous if effect sizes are not judged in addition to statistical significance. This paper reports a systematic review of 92 controlled experiments published in 12 major software engineering journals and conference proceedings in the decade 1993-2002. The review investigates the practice of effect size reporting, summarizes standardized effect sizes detected in the experiments, discusses the results, and gives advice for improvement. Standardized and/or unstandardized effect sizes were reported in 29% of the experiments. Interpretations of the effect sizes in terms of practical importance were not discussed beyond references to standard conventions. The standardized effect sizes computed from the reviewed experiments were comparable to those observed in psychology studies and slightly larger than standard conventions in behavioral science.
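As an editorial illustration of what the review counts as a standardized effect size, here is a minimal sketch of Cohen's d, the standardized mean difference between two treatment groups, computed with a pooled standard deviation. The data are hypothetical, not taken from the review:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical scores for a treatment group and a control group
treatment = [78, 85, 90, 72, 88, 81]
control = [70, 74, 80, 68, 77, 73]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```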
Archive | 2008
Dag I. K. Sjøberg; Tore Dybå; Bente Anda; Jo Erskine Hannay
In mature sciences, building theories is the principal method of acquiring and accumulating knowledge that may be used in a wide range of settings. In software engineering, there is relatively little focus on theories. In particular, there is little use and development of empirically based theories. We propose, and illustrate with examples, an initial framework for describing software engineering theories, and give advice on how to start proposing, testing, modifying, and using theories to support both research and practice in software engineering.
Information & Software Technology | 2009
Jo Erskine Hannay; Tore Dybå; Erik Arisholm; Dag I. K. Sjøberg
Several experiments on the effects of pair versus solo programming have been reported in the literature. We present a meta-analysis of these studies. The analysis shows a small significant positive overall effect of pair programming on quality, a medium significant positive overall effect on duration, and a medium significant negative overall effect on effort. However, between-study variance is significant, and there are signs of publication bias among published studies on pair programming. A more detailed examination of the evidence suggests that pair programming is faster than solo programming when programming task complexity is low and yields code solutions of higher quality when task complexity is high. The higher quality for complex tasks comes at a price of considerably greater effort, while the reduced completion time for the simpler tasks comes at a price of noticeably lower quality. We conclude that greater attention should be given to moderating factors on the effects of pair programming.
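The overall effects reported above come from pooling per-study effect sizes. The following is a minimal sketch of the standard inverse-variance approach with a DerSimonian-Laird estimate of between-study variance; the study-level effects and variances are hypothetical, not the meta-analysis's data:

```python
# Inverse-variance random-effects pooling (DerSimonian-Laird).
# Hypothetical per-study effect sizes and within-study variances.
effects = [0.35, 0.10, 0.55, -0.05, 0.30]
variances = [0.04, 0.09, 0.06, 0.12, 0.05]

# Fixed-effect weights and pooled estimate
w = [1 / v for v in variances]
d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)

# Heterogeneity statistic Q and between-study variance tau^2
q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights incorporate tau^2
w_re = [1 / (v + tau2) for v in variances]
d_random = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
se = (1 / sum(w_re)) ** 0.5

print(f"Q = {q:.2f}, tau^2 = {tau2:.3f}")
print(f"pooled effect = {d_random:.2f} (SE {se:.2f})")
```

A significant Q, as found for the pair programming studies, is what motivates the random-effects weighting and the search for moderating factors such as task complexity.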
IEEE Transactions on Software Engineering | 2010
Jo Erskine Hannay; Erik Arisholm; Harald Engvik; Dag I. K. Sjøberg
Personality tests in various guises are commonly used in the recruitment and career counseling industries. Such tests have also been considered as instruments for predicting the job performance of software professionals, both individually and in teams. However, research suggests that other human-related factors, such as motivation, general mental ability, expertise, and task complexity, also affect performance in general. This paper reports on a study of the impact of the Big Five personality traits on the performance of pair programmers, together with the impact of expertise and task complexity. The study involved 196 software professionals in three countries forming 98 pairs. The analysis consisted of a confirmatory part and an exploratory part. The results show that (1) our data do not confirm a meta-analysis-based model of the impact of certain personality traits on performance, and (2) personality traits, in general, have modest predictive value for pair programming performance compared with expertise, task complexity, and country. We conclude that more effort should be spent on investigating other performance-related predictors, such as expertise and task complexity, as well as other promising predictors, such as programming skill and learning. We also conclude that effort should be spent on elaborating the effects of personality on various measures of collaboration, which, in turn, may be used to predict and influence performance. Insights into such malleable, rather than static, factors may then be used to improve pair programming performance.
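To illustrate what "modest predictive value compared with expertise and task complexity" means in practice, here is a minimal sketch comparing the variance in performance explained by two predictor sets via ordinary least squares. All data are synthetic and the planted effect sizes are assumptions for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 98  # one row per pair, mirroring the study's 98 pairs (data is synthetic)

# Synthetic predictors: Big Five traits (pair averages), expertise, task complexity
traits = rng.normal(size=(n, 5))
expertise = rng.normal(size=(n, 1))
complexity = rng.integers(0, 2, size=(n, 1)).astype(float)

# Synthetic performance: driven mostly by expertise and complexity, weakly by one trait
performance = (1.5 * expertise + 1.0 * complexity
               + 0.1 * traits[:, :1] + rng.normal(scale=1.0, size=(n, 1)))

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.hstack([np.ones((len(X), 1)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

print(f"R^2, traits only:            {r_squared(traits, performance):.2f}")
print(f"R^2, expertise + complexity: {r_squared(np.hstack([expertise, complexity]), performance):.2f}")
```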
IEEE Transactions on Software Engineering | 2008
Jo Erskine Hannay; Magne Jørgensen
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view: that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples for and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over the 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of when, and for what purposes, various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that increased awareness and deliberation as to when, and for what purposes, artificial and realistic design elements are applied would improve knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies with obvious artificiality-related threats to validity might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that address research questions of low relevance.
Empirical Software Engineering and Measurement | 2009
Thorbjorn Walle; Jo Erskine Hannay
The benefits of synergistic collaboration are at the heart of arguments in favor of pair programming. However, empirical studies usually investigate direct effects of various factors on pair programming performance without looking into the details of collaboration. This paper reports on an empirical study that (1) investigated the nature of pair programming collaboration and (2) subsequently investigated postulated effects of personality on pair programming collaboration. Audio recordings of 44 professional programmer pairs were categorized according to a taxonomy of collaboration. We then measured postulated relationships between the collaboration categories and the personality of the individuals in the pairs. We found evidence that personality generally affects the type of collaboration that occurs in pairs, and that differing levels of a given personality trait between two pair members increase the amount of communication-intensive collaboration exhibited by a pair.
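As a sketch of the kind of relationship reported, the following computes the within-pair difference on a single trait and correlates it with a synthetic measure of communication-intensive collaboration. The data, the trait chosen, and the planted relationship are illustrative assumptions, not the study's findings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 44  # the study analyzed 44 pairs; the values below are synthetic

# Hypothetical trait scores for each member of a pair
member_a = rng.normal(50, 10, n_pairs)
member_b = rng.normal(50, 10, n_pairs)
trait_gap = np.abs(member_a - member_b)

# Hypothetical proportion of utterances coded as communication-intensive,
# constructed here so that it rises with the trait gap
comm_intensive = 0.3 + 0.01 * trait_gap + rng.normal(0, 0.05, n_pairs)

r = np.corrcoef(trait_gap, comm_intensive)[0, 1]
print(f"trait gap vs. communication-intensive collaboration: r = {r:.2f}")
```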
Empirical Software Engineering and Measurement | 2011
Gunnar R. Bergersen; Jo Erskine Hannay; Dag I. K. Sjøberg; Tore Dybå; Amela Karahasanovic
The skills of software developers are important to the success of software projects. Also, when studying the general effect of a tool or method, it is important to control for individual differences in skill. However, the way skill is assessed is often ad hoc or based on unvalidated methods. According to established test theory, validated tests of skill should infer skill levels from well-defined performance measures on multiple, small, representative tasks. In this respect, we show how time and quality, which are often analyzed separately, can be combined into a measure of task performance and subsequently aggregated as an approximation of skill. Our results show significant positive correlations between our proposed measures of skill and other variables, such as seniority, lines of code written, and self-evaluated expertise. The method for combining time and quality is a promising first step toward measuring programming skill in both industry and research settings.
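The paper's exact scoring procedure is not reproduced here; the following is a minimal sketch, under assumptions of our own, of one way to combine quality and time into task performance: rank solutions by quality first and by time within equal quality, then aggregate ranks across tasks as a rough skill approximation.

```python
from statistics import mean

# Hypothetical per-task observations: (quality level, minutes spent).
# Higher quality dominates; within a quality level, faster is better.
solutions = {
    "dev_a": [(3, 40), (2, 35), (3, 50)],
    "dev_b": [(2, 30), (2, 25), (3, 45)],
    "dev_c": [(1, 20), (2, 40), (2, 60)],
}

def task_scores(task_index):
    """Rank all developers on one task: quality first, then speed."""
    observed = [(dev, q, t) for dev, tasks in solutions.items()
                for i, (q, t) in enumerate(tasks) if i == task_index]
    # Sort worst-to-best so a solution's position is its performance score
    ranked = sorted(observed, key=lambda x: (x[1], -x[2]))
    return {dev: rank for rank, (dev, _, _) in enumerate(ranked)}

n_tasks = 3
per_dev = {dev: [] for dev in solutions}
for i in range(n_tasks):
    for dev, score in task_scores(i).items():
        per_dev[dev].append(score)

# A simple skill approximation: the mean task-performance rank
for dev, scores in per_dev.items():
    print(f"{dev}: mean performance rank = {mean(scores):.2f}")
```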
Computational Science and Engineering | 2011
Magnus Thorstein Sletholt; Jo Erskine Hannay; Dietmar Pfahl; Hans Christian Benestad; Hans Petter Langtangen
The nature of scientific research and the development of scientific software have similarities with processes that follow the agile manifesto: responsiveness to change and collaboration are of the utmost importance. But how well do current scientific software development processes match the practices found in agile development methods, and what are the effects of using agile practices in such processes? To investigate this, we conducted a literature review, focusing on evaluating the agility present in a selection of scientific software projects. Both projects with intentionally agile practices and projects with a certain degree of agile elements were taken into consideration. In the agility assessment, we defined and utilized an agile mapping chart whose elements are based on Scrum and XP, thus covering two of the most prominent agile reference models. We compared the findings of the literature review with the results of a previously conducted survey. The comparison indicates that scientific software development projects adopting agile practices perceive their testing to be better than average, whereas no difference from average projects was perceived regarding requirements-related activities. Future work includes an in-depth case study to further investigate the existence and impact of agility in three large scientific software projects, ultimately aiming at a better understanding of the particularities involved in developing scientific software.
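The paper's mapping chart itself is not reproduced here; the sketch below merely illustrates how such a chart could be represented and used to score a project's agility, using commonly cited Scrum/XP practice names and hypothetical project assessments:

```python
# A minimal sketch of an agile mapping chart: Scrum/XP practices checked
# against a project's reported process. Practice names follow common
# Scrum/XP terminology; the project assessments are hypothetical.
PRACTICES = [
    "iterative development", "daily stand-up", "sprint/iteration planning",
    "retrospectives", "pair programming", "test-driven development",
    "continuous integration", "on-site customer",
]

projects = {
    "project_x": {"iterative development", "continuous integration",
                  "test-driven development"},
    "project_y": {"iterative development", "daily stand-up",
                  "sprint/iteration planning", "retrospectives"},
}

for name, adopted in projects.items():
    coverage = len(adopted & set(PRACTICES)) / len(PRACTICES)
    print(f"{name}: {coverage:.0%} of charted practices observed")
```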
Empirical Software Engineering and Measurement | 2010
Jo Erskine Hannay; Hans Christian Benestad
Applying agile methodology in large software development projects introduces many challenges. For example, one may expect that the combination of autonomous teams and the necessity for an overall organizational control structure leads to conflicts, and that Agile's informal means of knowledge sharing break down as the number of project participants increases. Such issues may in turn compromise the project's productivity. In order to better understand potential threats to productivity in large agile development projects, we conducted repertory grid interviews with 13 project members on their perceptions of threats to productivity. The project was a large software development project consisting of 11 Scrum teams from three different subcontractors. The repertory grid sessions produced 100 issues, which were content analyzed into 10 main problem areas: (1) restraints on collaboration due to contracts, ownership, and culture; (2) architectural and technical qualities given low priority; (3) conflicts between organizational control and flexibility; (4) volatile and late requirements from external parties; (5) lack of a shared vision for the end product; (6) limited dissemination of functional knowledge; (7) excessive dependencies within the system; (8) overloading of key personnel; (9) difficulties in maintaining well-functioning technical environments; (10) difficulties in coordinating test and deployment with external parties. Using critical-case reasoning, we claim that projects that deploy agile practices under less favorable conditions than those enjoyed in the current project, and that are larger and more complex, are likely to face similar challenges.
International Conference on Software Engineering | 2011
Hans Christian Benestad; Jo Erskine Hannay
Numerous factors are involved when deciding when to implement which features in incremental software development. To facilitate a rational and efficient planning process, release planning models make such factors explicit and compute release plan alternatives according to optimization principles. However, experience suggests that industrial use of such models is limited. To investigate the feasibility of model and tool support, we compared the input factors assumed by release planning models with the factors considered by expert planners. The former were cataloged by systematically surveying release planning models; the latter were elicited through repertory grid interviews in three software organizations. The findings indicate a substantial overlap between the two approaches. However, a detailed analysis reveals that the models focus on only select parts of a possibly larger space of relevant planning factors. Three concrete areas of mismatch were identified: (1) continuously evolving requirements and specifications, (2) continuously changing prioritization criteria, and (3) authority-based decision processes. With these results in mind, models, tools, and guidelines can be adjusted to better address real-life development processes.
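To make "compute release plan alternatives according to optimization principles" concrete, here is a minimal sketch of the simplest such formulation: a 0/1 knapsack that maximizes business value within an effort budget. The features, values, and efforts are hypothetical, and real release planning models typically use richer formulations:

```python
from itertools import combinations

# Hypothetical feature candidates: (name, business value, effort in person-days)
features = [("login", 8, 5), ("search", 10, 8), ("export", 4, 3),
            ("audit log", 6, 6), ("dashboard", 9, 7)]
capacity = 15  # effort available in the release

# Brute-force 0/1 knapsack: fine for a handful of features; larger
# instances call for integer programming or heuristics.
best_value, best_plan = 0, ()
for r in range(len(features) + 1):
    for plan in combinations(features, r):
        effort = sum(f[2] for f in plan)
        value = sum(f[1] for f in plan)
        if effort <= capacity and value > best_value:
            best_value, best_plan = value, plan

print(f"release plan (value {best_value}): {[f[0] for f in best_plan]}")
```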