
Publication


Featured research published by Jörgen Hansson.


international conference on management of data | 1996

DeeDS towards a distributed and active real-time database system

Sten F. Andler; Jörgen Hansson; Joakim Eriksson; Jonas Mellin; Mikael Berndtsson; Bengt Eftring

DeeDS combines active database functionality with critical timing constraints and integrated system monitoring. Since the reactive database mechanisms, or rule management system, must meet critical deadlines, we must employ methods that make the triggering of rules and the execution of actions predictable. We focus on the issues associated with dynamic scheduling of workloads in which the triggered transactions have hard, firm, or soft deadlines, and on how transient overloads may be resolved by substituting transactions with computationally cheaper ones. The rationale for a loosely coupled, general-purpose event monitoring facility that works in tight connection with the scheduler is presented. For performance and predictability, the scheduler and event monitor execute on a CPU separate from the rest of the system. Real-time database accesses in DeeDS are made predictable and efficient by employing methods such as main-memory-resident data, full replication, eventual consistency, and prevention of global deadlocks.


model driven engineering languages and systems | 2014

Assessing the State-of-Practice of Model-Based Engineering in the Embedded Systems Domain

Grischa Liebel; Nadja Marko; Matthias Tichy; Andrea Leitner; Jörgen Hansson

Model-Based Engineering (MBE) aims at increasing the effectiveness of engineering by using models as key artifacts in the development process. While empirical studies on the use and effects of MBE in industry exist, there is little work targeting the embedded systems domain. We contribute to the body of knowledge with a study on the use and assessment of MBE in that particular domain. We collected quantitative data from 112 subjects, mostly professionals working with MBE, with the goal of assessing the current state of practice and the challenges the embedded systems domain is facing. Our main findings are that MBE is used by a majority of all participants in the embedded systems domain, mainly for simulation, code generation, and documentation. Reported positive effects of MBE are higher quality and improved reusability. The main shortcomings are interoperability difficulties between MBE tools, high training effort for developers, and usability issues.


ARTDB | 1996

Issues in Active Real-Time Databases

Mikael Berndtsson; Jörgen Hansson

Active databases and real-time databases have gained increased interest in recent years. Both are considered important technologies for supporting non-traditional applications such as computer-integrated manufacturing (CIM), process control, and air-traffic control. These applications are often event-driven and need to react to events in a timely and efficient manner. In this paper we address the problem of merging active databases and real-time databases. Active real-time databases are a fairly new area in which very little research has been carried out so far; however, active real-time database applications have great potential. We address several issues and open questions, such as semantics, assignment of time constraints, and rule selection, that need to be considered when designing active real-time databases. We highlight issues associated with event detection, rule triggering, rule selection, and evaluation in an active real-time database system, and propose a real-time event detection method for multi-level real-time systems.


Journal of Systems and Software | 2014

Selecting software reliability growth models and improving their predictive accuracy using historical projects data

Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson; Fredrik Törner; Wilhelm Meding; Christoffer Höglund

Highlights: Eight software reliability growth models are evaluated on 11 large projects. Logistic and Gompertz models have the best fit and asymptote predictions. Using growth rates from earlier projects improves asymptote prediction accuracy. Trend analysis allows choosing the best shape of the model at 50% of project time.

During software development, two important decisions organizations have to make are how to allocate testing resources optimally and when the software is ready for release. Software reliability growth models (SRGMs) provide an empirical basis for evaluating and predicting the reliability of software systems. When using SRGMs to optimize testing resource allocation, the model's ability to accurately predict the expected defect inflow profile is useful. For assessing release readiness, the accuracy of the asymptote is the most important attribute. Although more than a hundred models for software reliability have been proposed and evaluated over time, there exists no clear guide on which models should be used for a given software development process or industrial domain. Using defect inflow profiles from large software projects at Ericsson, Volvo Car Corporation, and Saab, we evaluate commonly used SRGMs for their ability to provide an empirical basis for making these decisions. We also demonstrate that using the defect intensity growth rate from earlier projects increases the accuracy of the predictions. Our results show that the Logistic and Gompertz models are the most accurate; we further observe that classifying a given project based on its expected shape of defect inflow helps to select the most appropriate model.
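To make the idea concrete, a growth model such as the Logistic curve can be fitted to cumulative defect-inflow data and its asymptote read off as a release-readiness estimate. The sketch below is an illustration only: the weekly defect counts are invented, and a coarse grid search stands in for the non-linear least-squares fitting a real study would use.

```python
import math

def gompertz(t, a, b, c):
    # Cumulative defects predicted at time t; `a` is the asymptote
    return a * math.exp(-b * math.exp(-c * t))

def logistic(t, a, b, c):
    # S-shaped growth: `a` asymptote, `b` growth rate, `c` inflection time
    return a / (1.0 + math.exp(-b * (t - c)))

def sse(model, params, data):
    # Sum of squared errors between model predictions and observations
    return sum((model(t, *params) - y) ** 2 for t, y in data)

def grid_fit(model, data, a_range, b_range, c_range):
    # Coarse grid search for the parameter triple minimizing squared error
    best = None
    for a in a_range:
        for b in b_range:
            for c in c_range:
                err = sse(model, (a, b, c), data)
                if best is None or err < best[0]:
                    best = (err, (a, b, c))
    return best[1]

# Invented cumulative defect-inflow data: (week, defects found so far)
data = [(1, 5), (2, 14), (3, 30), (4, 52), (5, 72), (6, 85), (7, 93), (8, 97)]

params = grid_fit(logistic, data,
                  a_range=[90 + i for i in range(21)],         # asymptote candidates
                  b_range=[0.5 + 0.1 * i for i in range(16)],  # growth-rate candidates
                  c_range=[2 + 0.25 * i for i in range(17)])   # inflection candidates

print("estimated asymptote (total expected defects):", params[0])
```

The fitted asymptote approximates the total defect content, which is what release-readiness assessment cares about; the defect inflow profile shape is what resource-allocation decisions use.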


conference on software maintenance and reengineering | 2014

Identifying risky areas of software code in Agile/Lean software development: An industrial experience report

Vard Antinyan; Miroslaw Staron; Wilhelm Meding; Per Österström; Erik Wikström; Johan Wranker; Anders Henriksson; Jörgen Hansson

Modern software development relies on incremental delivery to facilitate quick response to customers' requests. In this dynamic environment, the continuous modification of software code can pose risks for software developers: when developing a new feature increment, the added or modified code may contain fault-prone or difficult-to-maintain elements. The outcome of these risks can be defective software or decreased development velocity. This study presents a method to identify risky areas and assess the risk when developing software code in a Lean/Agile environment. We conducted an action research project in two large companies, Ericsson AB and Volvo Group Truck Technology. During the study we measured a set of code properties and investigated their influence on risk. The results show that the superposition of two metrics, the complexity and the number of revisions of a source code file, can effectively enable identification and assessment of the risk. We also illustrate how this kind of assessment can be successfully used by software developers to manage risks on a weekly basis as well as release-wise. A measurement system for systematic risk assessment has been introduced at the two companies.
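The superposition of complexity and revisions can be sketched roughly as below. The file names and numbers are invented, and combining the normalized metrics by multiplication is an illustrative assumption, not the paper's exact formula; the point is that only files scoring high on both metrics surface as risky.

```python
# Hypothetical per-file measurements: (file, cyclomatic complexity, revisions in period)
measurements = [
    ("engine/control.c", 58, 21),
    ("ui/menu.c",        12,  3),
    ("net/diag.c",       34, 17),
    ("util/log.c",        7, 25),
]

def normalize(values):
    # Scale to [0, 1] so the two metrics are comparable
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

complexities = normalize([c for _, c, _ in measurements])
revisions    = normalize([r for _, _, r in measurements])

# Superpose: a file must be high on BOTH metrics to get a high risk score
risk = sorted(
    ((f, cx * rv) for (f, _, _), cx, rv in zip(measurements, complexities, revisions)),
    key=lambda x: x[1], reverse=True,
)
for name, score in risk:
    print(f"{name:20s} risk={score:.2f}")
```

Note that util/log.c is revised very often but is simple, and so ranks low; the frequently revised and complex engine/control.c ranks highest, which matches the intuition behind combining the two metrics.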


international symposium on software reliability engineering | 2013

Evaluating long-term predictive power of standard reliability growth models on automotive systems

Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson; Fredrik Törner

Software is today an integral part of providing improved functionality and innovative features in the automotive industry. Safety and reliability are important requirements for automotive software, and software testing is still the main means of ensuring the dependability of software artifacts. Software Reliability Growth Models (SRGMs) have long been used to assess the reliability of software systems; they are also used for predicting the defect inflow in order to allocate maintenance resources. Although a number of models have been proposed and evaluated, much of the assessment of their predictive ability covers the short term (e.g. the last 10% of data). In industrial practice, however, the usefulness of SRGMs for optimal resource allocation depends heavily on their long-term predictive power, i.e. well before the project is close to completion. The ability to reasonably predict the expected defect inflow provides important insight that can help project and quality managers take necessary actions related to testing resource allocation in time to ensure high-quality software at release. In this paper we evaluate the long-term predictive power of commonly used SRGMs on four software projects from the automotive sector. The results indicate that the Gompertz and Logistic models perform best among the tested models on all fit criteria as well as on predictive power, although these models are not reliable for long-term prediction with partial data.


joint conference of international workshop on software measurement and international conference on software process and product measurement | 2013

Measuring and Visualizing Code Stability -- A Case Study at Three Companies

Miroslaw Staron; Jörgen Hansson; Robert Feldt; Wilhelm Meding; Anders Henriksson; Sven Nilsson; Christoffer Höglund

Monitoring the performance of software development organizations can be achieved from a number of perspectives, e.g. using such tools as balanced scorecards or corporate dashboards. In this paper we present results from a study on using code stability indicators as measures of product stability and organizational performance, conducted at three different software development companies: Ericsson AB, Saab AB Electronic Defense Systems (Saab), and Volvo Group Trucks Technology (Volvo Group). The results show that visualizing source code changes using heat maps and linking these visualizations to defect inflow profiles provides an indicator of how stable the product under development is and whether quality assurance efforts should be directed to specific parts of the product. Observing the indicator and making decisions based on its visualization leads to shorter feedback loops between development and test, and thus to lower development costs, shorter lead times, and increased quality. The industrial case study in the paper shows that the indicator and its visualization can reveal whether modifications of a software product are concentrated in parts of the code base or spread widely throughout the product.
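A change heat map of the general kind described can be approximated even in plain text by bucketing per-file revision counts into intensity levels. The files and weekly counts below are invented, and this ASCII rendering is only a stand-in for the graphical heat maps used in the study.

```python
# Hypothetical weekly revision counts per source file
revisions = {
    "gateway.c": [0, 1, 0, 2, 9, 12, 7],
    "parser.c":  [3, 2, 4, 3, 2, 3, 2],
    "logging.c": [0, 0, 1, 0, 0, 0, 1],
}

SHADES = " .:*#"  # low -> high change intensity

def heat_row(counts, peak):
    # Bucket each week's count into one of five intensity levels
    return "".join(SHADES[min(4, c * 4 // peak)] if peak else SHADES[0]
                   for c in counts)

peak = max(c for row in revisions.values() for c in row)
for name, counts in revisions.items():
    print(f"{name:10s} |{heat_row(counts, peak)}|")
```

In this toy rendering, a late burst of changes concentrated in gateway.c stands out immediately, which is exactly the "is change focused or spread widely?" question the indicator is meant to answer.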


international conference on management of data | 1996

Workshop report: the first international workshop on active and real-time database systems (ARTDB-95)

Mikael Berndtsson; Jörgen Hansson

This report is a summary of the First International Workshop on Active and Real-Time Database Systems (ARTDB-95) [1], held at the University of Skövde in June 1995. The workshop brought together re ...


embedded and real-time computing systems and applications | 1998

Dynamic transaction scheduling and reallocation in overloaded real-time database systems

Jörgen Hansson; Sang Hyuk Son; John A. Stankovic; Sten F. Andler

We introduce a novel scheduling architecture with a new algorithm for dynamically resolving transient overloads; the algorithm is executed when a new transaction cannot be admitted to the system due to scarce resources. The resolver algorithm generates a cost-effective overload resolution plan which, in order to admit the new transaction, finds the required time by de-allocating time among the previously admitted but not yet completed transactions. Considering the cost efficiency of executing the plan and the importance of the new transaction, a decision is made whether to execute the plan and admit the new transaction, or to reject it. We consider a multi-class transaction workload consisting of hard critical and firm transactions, where critical transactions have contingency transactions that can be invoked during overloads. We present a performance analysis showing to what degree the overload resolver enforces predictability and ensures the timeliness of critical transactions when handling extreme overload scenarios in real-time database systems.
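A rough sketch of such an overload resolver is shown below, under the simplifying assumption that each admitted transaction exposes a reclaimable amount of time and a value cost for giving it up (e.g. the penalty of rejecting a firm transaction, or of swapping a critical one for its cheaper contingency version). The greedy cheapest-first plan and the numbers are illustrative, not the paper's algorithm.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Admitted:
    name: str
    reclaimable: float  # time freed by removing/substituting this transaction (> 0)
    cost: float         # value lost by doing so

def resolve_overload(admitted: List[Admitted], need: float,
                     new_value: float) -> Tuple[List[str], bool]:
    """Build a cheapest-first plan that frees at least `need` time units.
    Admit the new transaction only if the plan frees enough time AND its
    total value loss is smaller than the new transaction's value."""
    plan, freed, cost = [], 0.0, 0.0
    # Sacrifice the least value per unit of reclaimed time first
    for a in sorted(admitted, key=lambda a: a.cost / a.reclaimable):
        if freed >= need:
            break
        plan.append(a.name)
        freed += a.reclaimable
        cost += a.cost
    admit = freed >= need and cost < new_value
    return plan, admit

admitted = [
    Admitted("t1_firm",     reclaimable=3.0, cost=5.0),  # would be rejected outright
    Admitted("t2_critical", reclaimable=4.0, cost=2.0),  # swapped for contingency txn
    Admitted("t3_firm",     reclaimable=2.0, cost=6.0),
]
plan, admit = resolve_overload(admitted, need=5.0, new_value=10.0)
print(plan, admit)
```

Here the resolver first substitutes the critical transaction with its contingency version (cheap in value, rich in reclaimed time), then drops one firm transaction; since the total value lost (7) is below the new transaction's value (10), the plan is executed and the new transaction admitted.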


product focused software process improvement | 2013

Evaluation of Standard Reliability Growth Models in the Context of Automotive Software Systems

Rakesh Rana; Miroslaw Staron; Niklas Mellegård; Christian Berger; Jörgen Hansson; Martin Nilsson; Fredrik Törner

Reliability and dependability of software in modern cars are of utmost importance. Predicting these properties for software under development is therefore important for modern car OEMs, and using reliability growth models (e.g. Rayleigh, Goel-Okumoto) is one approach. In this paper we evaluate a number of standard reliability growth models on a real software system from the automotive industry. The results of the evaluation show that the models can be fitted well to defect inflow data, but certain parameters need to be adjusted manually in order to predict reliability more precisely in the late test phases. By investigating data from an industrial project, we provide recommendations for how to adjust the models and how the adjustments should be used in the development process of software in the automotive domain.

Collaboration


Dive into Jörgen Hansson's collaboration.

Top Co-Authors

Rakesh Rana, University of Gothenburg
Abdullah Al Mamun, Chalmers University of Technology
Vard Antinyan, University of Gothenburg
Peter H. Feiler, Carnegie Mellon University