Rakesh Rana
University of Gothenburg
Publications
Featured research published by Rakesh Rana.
Journal of Systems and Software | 2014
Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson; Fredrik Törner; Wilhelm Meding; Christoffer Höglund
8 software reliability growth models are evaluated on 11 large projects. Logistic and Gompertz models have the best fit and asymptote predictions. Using the growth rate from earlier projects improves asymptote prediction accuracy. Trend analysis allows choosing the best shape of the model at 50% of project time. During software development, two important decisions organizations have to make are how to allocate testing resources optimally and when the software is ready for release. Software reliability growth models (SRGMs) provide an empirical basis for evaluating and predicting the reliability of software systems. When using SRGMs for the purpose of optimizing testing resource allocation, the model's ability to accurately predict the expected defect inflow profile is useful. For assessing release readiness, the accuracy of the asymptote is the most important attribute. Although more than a hundred models for software reliability have been proposed and evaluated over time, there exists no clear guide on which models should be used for a given software development process or for a given industrial domain. Using defect inflow profiles from large software projects at Ericsson, Volvo Car Corporation and Saab, we evaluate commonly used SRGMs for their ability to provide an empirical basis for making these decisions. We also demonstrate that using the defect intensity growth rate from earlier projects increases the accuracy of the predictions. Our results show that the Logistic and Gompertz models are the most accurate; we further observe that classifying a given project based on its expected shape of defect inflow helps to select the most appropriate model.
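The Logistic and Gompertz models named in the abstract have standard closed forms; a minimal sketch of fitting them to a cumulative defect inflow series and reading off the asymptote (the expected total defects, relevant for release readiness) could look like the following. The weekly data here is synthetic and purely illustrative.

```python
# Sketch: fitting Logistic and Gompertz SRGMs to cumulative defect inflow.
# The weekly counts below are synthetic; a real project would supply its own series.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, c):
    # a = asymptote (expected total defects), b = growth rate, c = inflection point
    return a / (1.0 + np.exp(-b * (t - c)))

def gompertz(t, a, b, c):
    # a = asymptote, b = displacement, c = growth rate
    return a * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(1)
weeks = np.arange(1, 41)
cumulative_defects = 500 / (1 + np.exp(-0.25 * (weeks - 20))) + rng.normal(0, 5, weeks.size)

models = [
    ("Logistic", logistic, [cumulative_defects.max(), 0.1, 20.0]),
    ("Gompertz", gompertz, [cumulative_defects.max(), 5.0, 0.1]),
]
for name, model, p0 in models:
    params, _ = curve_fit(model, weeks, cumulative_defects, p0=p0, maxfev=10000)
    print(f"{name}: estimated asymptote (expected total defects) = {params[0]:.0f}")
```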
international symposium on software reliability engineering | 2013
Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson; Fredrik Törner
Software is today an integral part of providing improved functionality and innovative features in the automotive industry. Safety and reliability are important requirements for automotive software, and software testing is still the main means of ensuring the dependability of software artifacts. Software Reliability Growth Models (SRGMs) have long been used to assess the reliability of software systems; they are also used for predicting the defect inflow in order to allocate maintenance resources. Although a number of models have been proposed and evaluated, much of the assessment of their predictive ability concerns the short term (e.g. the last 10% of the data). In industrial practice, however, the usefulness of SRGMs for optimal resource allocation depends heavily on their long-term predictive power, i.e. well before the project is close to completion. The ability to reasonably predict the expected defect inflow provides important insight that can help project and quality managers take the necessary actions related to testing resource allocation in time to ensure high-quality software at release. In this paper we evaluate the long-term predictive power of commonly used SRGMs on four software projects from the automotive sector. The results indicate that the Gompertz and Logistic models perform best among the tested models on all fit criteria as well as on predictive power, although these models are not reliable for long-term prediction with partial data.
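A rough illustration of the kind of long-term evaluation described above (not the authors' exact procedure) is to fit a model on only the first half of the defect inflow and then measure how far the prediction drifts over the remaining weeks. The data below is synthetic.

```python
# Sketch: long-term predictive power of an SRGM fitted on partial (50%) data.
# Synthetic defect inflow; the evaluation setup is illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, c):
    return a / (1.0 + np.exp(-b * (t - c)))

rng = np.random.default_rng(7)
weeks = np.arange(1, 61)
actual = 800 / (1 + np.exp(-0.2 * (weeks - 30))) + rng.normal(0, 8, weeks.size)

half = weeks.size // 2                      # fit on the first 50% of project time
params, _ = curve_fit(logistic, weeks[:half], actual[:half],
                      p0=[actual[:half].max() * 2, 0.1, 30.0], maxfev=10000)

predicted = logistic(weeks, *params)
long_term_error = np.abs(predicted[half:] - actual[half:]).mean()
print(f"Mean absolute error over the unseen second half: {long_term_error:.1f} defects")
print(f"Predicted total defects (asymptote): {params[0]:.0f} vs. observed ~{actual[-1]:.0f}")
```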
product focused software process improvement | 2013
Rakesh Rana; Miroslaw Staron; Niklas Mellegård; Christian Berger; Jörgen Hansson; Martin Nilsson; Fredrik Törner
Reliability and dependability of software in modern cars is of utmost importance. Predicting these properties for software under development is therefore important for modern car OEMs, and using reliability growth models (e.g. Rayleigh, Goel-Okumoto) is one approach. In this paper we evaluate a number of standard reliability growth models on a real software system from the automotive industry. The results of the evaluation show that the models can be fitted well to defect inflow data, but certain parameters need to be adjusted manually in order to predict reliability more precisely in the late test phases. By investigating data from an industrial project, we provide recommendations for how to adjust the models and how the adjustments should be used in the development process of automotive software.
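For reference, the two growth models named above have simple closed forms. The sketch below uses synthetic data, and the "manual adjustment" of the total-defect parameter is a hypothetical illustration of the kind of adjustment the abstract refers to, not the paper's specific recommendation.

```python
# Sketch: Rayleigh and Goel-Okumoto cumulative defect models, with an illustrative
# manual adjustment of the total-defect parameter 'a'.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # a = expected total defects, b = defect detection rate
    return a * (1.0 - np.exp(-b * t))

def rayleigh(t, a, b):
    # a = expected total defects, b controls the location of the defect inflow peak
    return a * (1.0 - np.exp(-b * t ** 2))

rng = np.random.default_rng(3)
weeks = np.arange(1, 31)
observed = 300 * (1 - np.exp(-0.004 * weeks ** 2)) + rng.normal(0, 4, weeks.size)

params, _ = curve_fit(rayleigh, weeks, observed, p0=[observed.max(), 0.01], maxfev=10000)
a_fit, b_fit = params

# Hypothetical manual adjustment: scale the asymptote based on expert judgement
# from late test phases (e.g. 10% more defects still expected after the last data point).
a_adjusted = a_fit * 1.10
print(f"Fitted total defects: {a_fit:.0f}, manually adjusted: {a_adjusted:.0f}")
```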
Journal of Systems and Software | 2016
Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson; Wilhelm Meding
Defect inflow distributions of 14 large projects from industry and OSS are analyzed. 6 standard distributions are evaluated for their ability to fit the defect inflow. For 12 out of 14 projects, the defect inflow data was described best by the beta distribution. Information from historical projects is useful for early defect prediction with Bayesian inference. Tracking and predicting quality and reliability is a major challenge in large and distributed software development projects. A number of standard distributions have been used successfully in reliability engineering theory and practice; the ones commonly used for modeling software defect inflow are the exponential, Weibull, beta and Non-Homogeneous Poisson Process (NHPP) distributions. Although standard distribution models are recognized in reliability engineering practice, their ability to fit defect data from proprietary and OSS software projects is not well understood. Lack of knowledge about the underlying defect inflow distribution also makes it difficult to apply Bayesian inference methods for software defect prediction. In this paper we explore the defect inflow distributions of a total of fourteen large software projects/releases from two industrial domains and the open source community. We evaluate six standard distributions for their ability to fit the defect inflow data and also assess which information criterion is practical for selecting the distribution with the best fit. Our results show that the beta distribution provides the best fit to the defect inflow data for all industrial projects as well as the majority of the OSS projects studied. We also evaluate how information about the defect inflow distribution from historical projects can be used to model prior beliefs/experience in Bayesian analysis, which is useful for making software defect predictions early in the software project lifecycle.
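A minimal sketch of the kind of analysis described above: fit a few candidate distributions to defect detection times (normalized to a 0-1 project timeline) and rank them by an information criterion such as AIC. The data and the choice of three candidates are assumptions for illustration; the paper evaluates six distributions on real project data.

```python
# Sketch: fitting candidate distributions to normalized defect detection times
# and ranking them by AIC (lower is better). Data below is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Hypothetical defect detection times on a normalized project timeline.
detection_times = rng.beta(a=3.0, b=2.0, size=400)

candidates = {
    "beta": stats.beta,
    "weibull": stats.weibull_min,
    "exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(detection_times)                     # maximum likelihood fit
    log_likelihood = np.sum(dist.logpdf(detection_times, *params))
    aic = 2 * len(params) - 2 * log_likelihood
    print(f"{name:12s} AIC = {aic:8.1f}")
```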
international conference on software engineering | 2014
Rakesh Rana; Miroslaw Staron; Jörgen Hansson; Martin Nilsson
Software today plays an important and vital role in providing functionality and user experience in the automotive domain. With the ever-increasing size and complexity of software, together with high demands on quality and dependability, managing the software development process effectively is an important challenge. Software defect prediction methods provide useful information for optimal resource allocation and release planning; they also help track and model software and system reliability. In this paper we present an overview of defect prediction methods and their applicability in different software lifecycle phases in the automotive domain. Based on the overview and current trends, we identify that close monitoring of the in-service performance of software-based systems will provide useful feedback to software development teams and allow them to develop more robust and user-friendly systems.
predictive models in software engineering | 2014
Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson
Defects are real and observable indicators of software quality that can be analyzed and modelled to track the quality and reliability of a software system during development and testing. A number of software reliability growth models (SRGMs) have been introduced and evaluated, based on different families of distributions such as the exponential, Weibull and Non-Homogeneous Poisson Process. There exists no standard way of selecting the most appropriate SRGM for given defect data, and the distribution of defect inflow for real software projects from different industrial domains is also not well documented. In this paper we explore the defect inflow distributions of four large software projects from the automotive domain. We evaluate six standard distributions for their ability to fit the defect inflow data and also assess which information criterion is practical for selecting the distribution with the best fit. Our results show that the beta distribution provides the best fit to the defect inflow data from all projects, despite their different distribution characteristics. Finding the underlying distribution of defect inflow not only helps in applying the appropriate statistical techniques for data analysis but also in selecting the appropriate SRGMs for modelling reliability. Information about the defect inflow distribution is further useful for modelling prior beliefs or experience as prior probabilities in Bayesian analysis.
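One way to picture the Bayesian use of historical defect data mentioned at the end of the abstract is a conjugate Beta-Binomial update. The sketch below is an assumption-laden illustration, not the paper's model: the prior parameters, the "fraction of defects found by mid-project" quantity, and all counts are hypothetical.

```python
# Sketch (not the paper's exact model): a conjugate Beta-Binomial update where the
# prior on "fraction of total defects found by mid-project" comes from historical projects.
from scipy import stats

# Hypothetical prior distilled from historical defect inflow profiles.
alpha_prior, beta_prior = 8.0, 6.0          # roughly 57% of defects found by mid-project

# Hypothetical observation from an ongoing project: 120 of an estimated 200 total
# defects have been found by the mid-project milestone.
found, remaining = 120, 80

posterior = stats.beta(alpha_prior + found, beta_prior + remaining)
lo, hi = posterior.interval(0.95)
print(f"Posterior mean fraction found by mid-project: {posterior.mean():.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```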
joint conference on knowledge-based software engineering | 2014
Rakesh Rana; Miroslaw Staron; Christian Berger; Jörgen Hansson; Martin Nilsson; Wilhelm Meding
Existing methods for predicting the reliability of software are static and need manual maintenance to adjust to the evolving data sets in software organizations. Machine learning has the potential to address the problem of manual maintenance, but it can also require changes in how companies work with defect prediction. In this paper we address the problem of identifying what the benefits of machine learning are compared to existing methods, and which barriers exist for adopting them in practice.
international conference on software engineering | 2014
Rakesh Rana; Miroslaw Staron; Jörgen Hansson; Martin Nilsson; Wilhelm Meding
Machine learning algorithms are increasingly being used in a variety of application domains, including software engineering. While their practical value has been outlined, demonstrated and highlighted in a number of existing studies, their adoption in industry is still not widespread. The evaluations of machine learning algorithms in the literature tend to focus on a few attributes, mainly predictive accuracy. On the other hand, the decision space for the adoption or acceptance of machine learning algorithms in industry encompasses many more factors. Companies looking to adopt such techniques want to know where the algorithms are most useful and whether the new methods are reliable and cost-effective. Further questions, such as how much it would cost to set up, run and maintain systems based on such techniques, are currently not fully investigated in industry or in academia, leading to difficulties in assessing the business case for adopting these techniques. In this paper we argue for the need for a framework for the adoption of machine learning in industry. We develop a framework of the factors and attributes that contribute to the decision to adopt machine learning techniques in industry for the purpose of software defect prediction. The framework is developed in close collaboration with industry and thus provides useful insight for industry itself, academia, and suppliers of tools and services.
17th KKIO Software Engineering Conference | 2017
Miroslaw Staron; Darko Durisic; Rakesh Rana
Base measures such as the number of lines of code are often used to make predictions about phenomena such as project effort, product quality or maintenance effort. However, quite often we rely on measurement instruments where the exact algorithm for calculating the value of the measure is not known. The objective of our research is to explore how we can increase the certainty of base measures in software engineering. We conduct a benchmarking study in which we use four measurement instruments for lines-of-code measurement, each with unknown certainty, to measure five code bases. Our results show that we can adjust the measurement values by as much as 20% when the systematic error of the tool is known. We conclude that calibrating the measurement instruments can significantly contribute to increased accuracy in measurement processes in software engineering. This will impact the accuracy of predictions (e.g. of effort in software projects) and therefore increase the cost-efficiency of software engineering processes.
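The calibration idea described above boils down to simple arithmetic: estimate the tool's systematic error against a reference count on benchmark code bases, then correct subsequent measurements by that factor. The sketch below uses entirely hypothetical numbers and is not the paper's benchmarking procedure.

```python
# Sketch: estimating the systematic error of a lines-of-code tool on benchmark
# code bases and using it to adjust later measurements. All numbers are illustrative.
reference_loc = {"base_a": 10_000, "base_b": 52_000, "base_c": 7_500}   # known reference counts
tool_loc      = {"base_a": 11_900, "base_b": 62_500, "base_c": 9_000}   # hypothetical tool output

# Average relative systematic error of the tool across the benchmark code bases.
errors = [(tool_loc[k] - reference_loc[k]) / reference_loc[k] for k in reference_loc]
systematic_error = sum(errors) / len(errors)

def calibrate(measured_loc: int) -> int:
    """Adjust a raw tool measurement by removing the estimated systematic error."""
    return round(measured_loc / (1.0 + systematic_error))

print(f"Systematic error: {systematic_error:+.1%}")
print(f"Raw 25,000 LOC reading calibrated to ~{calibrate(25_000)} LOC")
```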
international conference on software engineering | 2015
Rakesh Rana; Miroslaw Staron
The importance of software in everyday products and services has been rising constantly, and so has the complexity of software. In the face of this rising complexity and our dependence on software, measuring, maintaining and increasing software quality is of critical importance. Software metrics provide a quantitative means to measure, and thus control, various attributes of software systems. In the machine learning paradigm, software quality prediction can be cast as a classification or concept learning problem. In this paper we provide a general framework for applying machine learning approaches to the assessment and prediction of software quality in large software organizations. Using the ISO 15939 measurement information model, we show how different software metrics can be used to build a software quality model for quality assessment and prediction that satisfies the information needs of these organizations with respect to quality. We also document how machine learning approaches can be effectively used for such evaluation.
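As a small sketch of casting quality prediction as a classification problem over module-level metrics: train a classifier on base measures per module and a derived defect-proneness label. The features, label rule and data below are synthetic assumptions for illustration, not the framework or metrics used in the paper.

```python
# Sketch: software quality prediction as classification over module-level metrics.
# Synthetic data; features and the labelling rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(5)
n_modules = 500
# Hypothetical base measures per module: size (LOC), cyclomatic complexity, code churn.
X = np.column_stack([
    rng.lognormal(6, 1, n_modules),      # lines of code
    rng.integers(1, 50, n_modules),      # cyclomatic complexity
    rng.poisson(10, n_modules),          # recent code churn (changed lines)
])
# Hypothetical derived indicator: 1 = defect-prone module, 0 = not defect-prone.
y = (0.002 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(0, 5, n_modules) > 30).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```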