Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sunita Chulani is active.

Publication


Featured research published by Sunita Chulani.


mining software repositories | 2011

Implementing quality metrics and goals at the corporate level

Pete Rotella; Sunita Chulani

Over the past eight years, Cisco Systems, Inc., has implemented software quality goals for most groups engaged in software development, including development for both customer and internal use. This corporate implementation has proven to be a long and difficult process for many reasons, including opposition from many groups, uncertainties as to how to proceed with key aspects of the goaling, and the many unanticipated modifications needed to adapt the program to a large and diverse development and test environment. This paper describes what has worked, what has not worked so well, and what levels of improvement the Engineering organization has experienced in part as a result of these efforts. Key customer experience metrics have improved 30% to 70% over the past six years, partly as a result of metrics and process standardization, dashboarding, and goaling. As one would expect with such a large endeavor, some of the results shown are not statistically provable, but are nevertheless generally accepted within the corporation as valid. Other important results do have strong statistical substantiation, and we will also describe these. But whether or not the results are statistically provable, Cisco has in fact improved its software quality substantially over the past eight years, and the corporate goaling mechanism is generally recognized as a necessary (but of course not sufficient) part of this improvement effort.


international conference on software engineering | 2009

Seventh workshop on Software Quality

Barry W. Boehm; Sunita Chulani; June M. Verner; Bernard Wong

Software Quality has been a major challenge throughout Information Technology projects. Whether it is in software development, in software integration or whether it is in the implementation or customization of shrink-wrapped software, quality is regarded as a major issue. In the last couple of decades, much software engineering research has focused on standards, methodologies and techniques for improving software quality, measuring software quality and software quality assurance. Most of this research is focused on the internal/development view of quality. More recent studies have made attempts to understand the stakeholder view of quality. With globalization, many new challenges affect software quality. Not only do we need to understand the many stakeholder views of quality, we now need to consider the cultural issues, and the outsourcing issues. The Seventh Workshop on Software Quality aims to bring together academic, industrial and commercial communities interested in software quality topics to discuss the different technologies being defined and used in the software quality area.


mining software repositories | 2012

Analysis of customer satisfaction survey data

Pete Rotella; Sunita Chulani

Cisco Systems, Inc., conducts a customer satisfaction survey (CSAT) each year to gauge customer sentiment regarding Cisco products, technical support, partner- and Cisco-provided technical services, order fulfillment, and a number of other aspects of the company's business. The results of the analysis of this data are used for several purposes, including ascertaining the viability of new products, determining if customer support objectives are being met, setting engineering in-process and customer experience yearly metrics goals, and assessing, indirectly, the success of engineering initiatives. Analyzing this data, which includes 110,000 yearly sets of survey responses that address over 100 product and services categories, is in many respects complicated. For example, skip logic is an integral part of the survey mechanics, and forming aggregate views of customer sentiment is statistically challenging in this data environment. In this paper, we describe several of the various analysis approaches currently used, pointing out some situations where a high level of precision is not easily achieved, and some situations in which it is possible to easily end up with erroneous results. The analysis and statistical territory covered in this paper is in parts well-known and straightforward, but other parts, which we address, are susceptible to large inaccuracies and errors. We address several of these difficulties and develop reasonable solutions for two known issues, high missing value levels and high collinearity of independent variables.
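The two pitfalls named in the abstract, high missing-value levels and high collinearity among the independent variables, are only named there, not solved; as a hedged illustration of how such issues are commonly diagnosed (the survey columns, the data, and the use of variance inflation factors are assumptions for illustration, not the paper's actual method), a minimal sketch:

```python
# Illustrative diagnostics only: per-question missing-value rates and
# variance inflation factors (VIF) for collinearity among survey predictors.
# Column names and values are invented; this is not the paper's remedy.
import numpy as np
import pandas as pd

survey = pd.DataFrame({
    "overall_sat": [4, 5, 3, np.nan, 4, 5, 2, 4],
    "product_sat": [4, 5, 3, 4, np.nan, 5, 2, 4],
    "support_sat": [3, 5, np.nan, 4, 4, 5, 1, 4],
    "fulfillment": [4, 4, 3, 4, 4, np.nan, 2, 5],
})

# Issue 1: high missing-value levels (skip logic leaves many blanks)
print(survey.isna().mean())  # fraction of responses missing per question

# Issue 2: collinearity among predictors, screened with VIF = 1 / (1 - R^2)
complete = survey.dropna()
names = ["product_sat", "support_sat", "fulfillment"]
X = complete[names].to_numpy(float)
for j, name in enumerate(names):
    y, others = X[:, j], np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])       # intercept + others
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]  # OLS residuals
    r2 = 1 - resid.var() / y.var()
    print(name, "VIF =", round(1 / (1 - r2), 2))
```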


international symposium on software reliability engineering | 2014

Predicting Release Quality

Pete Rotella; Sunita Chulani

Summary form only given. Identifying correlations between in-process development and test metrics is key in anticipating subsequent reliability performance in the field. For several years now at Cisco, our primary measure of field reliability has been Software Defects Per Million Hours (SWDPMH), and this metric has been goaled on a yearly basis for over 100 product families. A key reason SWDPMH is considered to be of critical importance is that we see a high correlation between SWDPMH and Software Customer Satisfaction (SW CSAT) over a wide spectrum of products and feature releases. Therefore it is important to try to anticipate SWDPMH for new releases before the software is released to customers, for several reasons: early warning that a major feature release is likely to experience substantial quality problems in the field may allow for remediation of the release during, or even prior to, function and system testing; prediction of SWDPMH enables better planning for subsequent maintenance releases and rollout strategies; and calculating the tradeoffs between SWDPMH and feature volume provides guidance concerning acceptable feature content, test effort, release cycle timing, and other key parameters affecting feature releases. Our efforts over the past two years have been to enhance our ability to predict SWDPMH in the field. Toward this end, we have developed predictive models, tested the models with a broad range of feature and maintenance releases, and have provided guidance to development, test, and release management teams on how to improve the chances of achieving best-in-class levels of SWDPMH. This work is ongoing, but several models are currently used in a production mode for more than 40 product families, with good results. In this paper we will show correlations with SWDPMH of feature release sequences for 16 product families. We will also show the model's applicability to maintenance release sequences, features, and Business Unit contributions to feature releases. We will also show the model's performance characteristics with agile releases, hybrid agile/waterfall releases, and traditional waterfall releases.
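The abstract defines SWDPMH only as software defects per million usage hours; a minimal sketch of that headline normalization (the defect-attribution and usage-window rules are assumptions, not details from the paper):

```python
def swdpmh(field_defects: int, usage_hours: float) -> float:
    """Software Defects Per Million Hours (SWDPMH).

    field_defects: customer-found defects attributed to the release
    usage_hours:   aggregate field usage hours over the same window
    How defects and usage windows are scoped at Cisco is not described
    in the abstract; this shows only the normalization itself.
    """
    if usage_hours <= 0:
        raise ValueError("usage_hours must be positive")
    return field_defects / (usage_hours / 1_000_000)

# Example: 42 field defects over 60 million usage hours -> 0.7 SWDPMH
print(swdpmh(42, 60_000_000))
```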


Proceedings of the Second International Workshop on Software Engineering Research and Industrial Practice | 2015

Predicting software field reliability

Pete Rotella; Sunita Chulani; Devesh Goyal

The objective of the work described is to accurately predict, as early as possible in the software lifecycle, how reliably a new software release will behave in the field. The initiative is based on a set of innovative mathematical models that have consistently shown a high correlation between key in-process metrics and our primary customer experience metric, SWDPMH (Software Defects per Million Hours [usage] per Month). We have focused on the three primary dimensions of testing -- incoming, fixed, and backlog bugs. All of the key predictive metrics described here are empirically-derived, and in specific quantitative terms have not previously been documented in the software engineering/quality literature.
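As a hedged sketch of the per-dimension screening the abstract implies, correlating each in-process bug dimension (incoming, fixed, backlog) with field SWDPMH across historical releases; the metric names, the data, and the use of simple Pearson correlation are illustrative assumptions, not the paper's actual models:

```python
# Correlate each in-process bug dimension with field SWDPMH across a set of
# historical releases. Values are invented; the paper's metric definitions
# and model forms are not reproduced here.
import numpy as np

releases = {                      # one value per historical feature release
    "incoming_bug_rate": [120, 95, 140, 80, 110, 70],
    "fixed_bug_rate":    [0.70, 0.80, 0.60, 0.90, 0.75, 0.92],
    "backlog_at_ship":   [60, 40, 90, 25, 55, 20],
}
field_swdpmh = np.array([0.9, 0.6, 1.3, 0.4, 0.8, 0.3])

for name, values in releases.items():
    r = np.corrcoef(values, field_swdpmh)[0, 1]
    print(f"{name:18s} r = {r:+.2f}")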


ieee international conference on software quality reliability and security companion | 2017

Comparing and Goaling Releases Using Software Reliability Classes

Pete Rotella; Sunita Chulani

Software Reliability Classes (SRCs) have been developed in order to compare the field reliability performance of a sequence of software releases for a cluster of similar hardware products. A specific cluster is characterized by the type of market the hardware supports, and the software releases for the cluster have similar functionality, complexity, size, and customer expectations.


conducting empirical studies in industry | 2018

Comparing reliability levels of software releases

Pete Rotella; Sunita Chulani

An intuitive method is needed to achieve buy-in from all sectors of Engineering for a way to gauge release-over-release change for a given product's sequence of releases. Also, customers need to know if there are extant releases that are more reliable than the ones they already rely on in their networks. A new Release-Over-Release (RoR) metric can both enable customers to clearly understand the reliability risk of migrating to other available releases, and also enable Engineering to understand if their software engineering efforts are actually improving release reliability.


international symposium on software reliability engineering | 2017

SRC Ratio Method: Benchmarking Software Reliability

Pete Rotella; Sunita Chulani

Software Reliability Classes (SRCs) have been developed in order to compare the field reliability performance of a sequence of software releases for a cluster of similar hardware products. A specific cluster is characterized by the type of market the hardware supports, and the software releases for the cluster have similar functionality, complexity, size, and customer expectations. SRCs are a normalized form of an already normalized customer experience metric, software defects (encounters) per million usage hours, referred to as SWDPMH. Different hardware devices, even though running identical software, can experience up to three orders of magnitude variation in SWDPMH values. The SRC method enables us to compare the best-in-class SWDPMH value, for each cluster, to the current field SWDPMH value, and this enables us to use the same SRC ratio calculation across all reliability classes to assess the reliability health of all software releases. The overall reliability health of a business unit's software, for all hardware devices supported, can thereby be accurately calculated, trended, and goaled, with particular attention paid to improving release-over-release reliability.
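A minimal sketch of the per-cluster normalization described above, comparing a release's field SWDPMH to its cluster's best-in-class SWDPMH; the ratio's orientation, the cluster names, and the values are assumptions for illustration:

```python
# Per-cluster best-in-class SWDPMH values (invented figures).
best_in_class_swdpmh = {"edge_routing": 0.05, "campus_switching": 0.5}

def src_ratio(cluster: str, current_swdpmh: float) -> float:
    """Ratio of a release's current field SWDPMH to its cluster's
    best-in-class value. A ratio near 1.0 means near best-in-class for
    that cluster; larger values mean proportionally more field defects
    per usage hour, regardless of the cluster's absolute SWDPMH scale.
    """
    return current_swdpmh / best_in_class_swdpmh[cluster]

# Two releases on very different absolute scales become comparable:
print(src_ratio("edge_routing", 0.10))      # 2.0x best-in-class
print(src_ratio("campus_switching", 1.0))   # 2.0x best-in-class
```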


ieee international conference on software quality reliability and security companion | 2017

Software Release-Over-Release Comparisons

Pete Rotella; Sunita Chulani

Software development teams need two similar, but clearly different, ways to gauge the field reliability of a product's software release.


ieee international conference on software quality reliability and security companion | 2017

Predicting Release Reliability

Pete Rotella; Sunita Chulani

Customers need to know how reliable a new release is, and whether or not the new release has substantially different, either better or worse, reliability than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy – we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our release-over-release strategy combines the maturity assessment component (i.e., estimating reliability prior to release to the field) and the reliability assessment component (i.e., gauging actual reliability after release to the field). This overall approach enables us to both predict reliability and compare reliability results for recent releases for a product.
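As a hedged illustration of the two usage-normalized field metrics named above, field bug encounters per million usage hours and distinct field bug counts per million usage hours; how bugs are deduplicated and how usage is scoped are assumptions here, not details from the paper:

```python
def field_metrics(bug_hits: list[str], usage_hours: float) -> tuple[float, float]:
    """bug_hits: one bug ID per customer-reported encounter (IDs invented).

    Returns (encounters per million hours, distinct bugs per million hours):
    every customer hit counts toward encounters, while each distinct bug
    counts once toward the bug count.
    """
    per_million = usage_hours / 1_000_000
    encounters = len(bug_hits) / per_million
    counts = len(set(bug_hits)) / per_million
    return encounters, counts

hits = ["CSCab1", "CSCab1", "CSCcd2", "CSCab1", "CSCef3"]   # invented IDs
print(field_metrics(hits, 10_000_000))   # (0.5 encounters, 0.3 bugs) per M hrs
```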

Collaboration


Dive into Sunita Chulani's collaborations.

Top Co-Authors

Len Bass
Software Engineering Institute

Philippe Kruchten
University of British Columbia

Rafael Prikladnicki
Pontifícia Universidade Católica do Rio Grande do Sul

Barry W. Boehm
University of Southern California