Publication


Featured research published by Pete Rotella.


mining software repositories | 2010

Identifying security bug reports via text mining: An industrial case study

Michael Gegick; Pete Rotella; Tao Xie

A bug-tracking system such as Bugzilla contains bug reports (BRs) collected from various sources such as development teams, testing teams, and end users. When bug reporters submit bug reports to a bug-tracking system, the bug reporters need to label the bug reports as security bug reports (SBRs) or not, to indicate whether the involved bugs are security problems. These SBRs generally deserve higher priority in bug fixing than non-security bug reports (NSBRs). However, in the bug-reporting process, bug reporters often mislabel SBRs as NSBRs, partly due to a lack of security domain knowledge. This mislabeling could cause serious damage to software-system stakeholders due to the induced delay in identifying and fixing the involved security bugs. To address this important issue, we developed a new approach that applies text mining to the natural-language descriptions of BRs, training a statistical model on already manually labeled BRs to identify SBRs that were manually mislabeled as NSBRs. Security engineers can use the model to automate the classification of BRs from large bug databases and reduce the time they spend searching for SBRs. We evaluated the model's predictions on a large Cisco software system with over ten million source lines of code. Among a sample of BRs that Cisco bug reporters manually labeled as NSBRs in bug reporting, our model successfully classified a high percentage (78%) of the SBRs, as verified by Cisco security engineers, and predicted their classification as SBRs with a probability of at least 0.98.
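The abstract does not spell out the specific model, so the following is a minimal sketch of the general technique it describes: text features extracted from bug-report descriptions feeding a probabilistic classifier whose scores flag likely mislabeled SBRs. The pipeline choice (scikit-learn's TfidfVectorizer plus LogisticRegression) and the example reports are assumptions for illustration only, not the paper's pipeline.

```python
# Sketch: classify bug-report text as security (SBR, 1) vs. non-security (NSBR, 0).
# The feature extraction and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Already manually labeled BR descriptions (invented examples).
train_texts = [
    "buffer overflow in packet parser allows remote code execution",
    "authentication bypass when session token is reused",
    "UI button misaligned on settings page",
    "typo in log message for interface counters",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score new BRs that reporters labeled NSBR; high probabilities flag likely
# mislabeled SBRs for a security engineer to review.
new_texts = ["crash caused by unchecked length field in received frame"]
print(model.predict_proba(new_texts)[:, 1])
```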


international conference on software testing, verification, and validation | 2009

Predicting Attack-prone Components

Michael Gegick; Pete Rotella; Laurie Williams

Limited resources preclude software engineers from finding and fixing all vulnerabilities in a software system. This limitation necessitates security risk management, in which security efforts are prioritized toward the highest-risk vulnerabilities, those that cause the most damage to the end user. We created a predictive model that identifies the software components posing the highest security risk in order to prioritize security fortification efforts. The input variables to our model are available early in the software life cycle and include security-related static analysis tool warnings, code churn and size, and faults identified by manual inspections. These metrics are validated against vulnerabilities reported by testing and those found in the field. We evaluated our model on a large Cisco software system and found that 75.6% of the system's vulnerable components are in the top 18.6% of the components predicted to be vulnerable. The model's false positive rate is 47.4% of this top 18.6%, or 9.1% of the total system components. We quantified the goodness of fit of our model to the Cisco data set using a receiver operating characteristic curve, which shows an area under the curve of 94.4%.
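As a rough illustration of this kind of component-level risk model, the sketch below fits a classifier on synthetic early-lifecycle metrics and evaluates the resulting ranking with an ROC AUC and a top-18.6% cut, mirroring how the paper reports its results. The logistic-regression choice and all data are invented assumptions, not the paper's fitted model or Cisco's measurements.

```python
# Sketch: rank components by predicted vulnerability risk from early metrics
# (static-analysis warnings, code churn, size, inspection faults), all synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Columns: warnings, churn (changed LOC), size (KLOC), inspection faults.
X = rng.poisson(lam=[5, 300, 20, 2], size=(n, 4)).astype(float)
# Synthetic ground truth: components with more warnings/churn are riskier.
risk = 0.15 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(0, 1, n)
y = (risk > np.quantile(risk, 0.8)).astype(int)  # roughly 20% vulnerable

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Evaluate the ranking: AUC, plus how many vulnerable components land in the
# top 18.6% of the predicted ordering.
print("AUC:", roc_auc_score(y, scores))
top = np.argsort(scores)[::-1][: int(0.186 * n)]
print("vulnerable components in top 18.6%:", y[top].sum(), "of", y.sum())
```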


engineering secure software and systems | 2009

Toward Non-security Failures as a Predictor of Security Faults and Failures

Michael Gegick; Pete Rotella; Laurie Williams

In the search for metrics that can predict the presence of vulnerabilities early in the software life cycle, there may be some benefit to choosing metrics from the non-security realm. We analyzed non-security and security failure data reported during 2007 for a Cisco software system. We used non-security failure reports as input variables to a classification and regression tree (CART) model to determine the probability that a component will have at least one vulnerability. Using CART, we ranked all of the system components in descending order of their probabilities and found that 57% of the vulnerable components were in the top nine percent of the total component ranking, but with a 48% false positive rate. The results indicate that non-security failures can be used as one of the input variables for security-related prediction models.
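A minimal sketch of the modeling step described here, assuming scikit-learn's DecisionTreeClassifier as a stand-in for CART and entirely synthetic per-component failure counts; the study's real features and data are not reproduced.

```python
# Sketch: estimate the probability a component has at least one vulnerability
# from its non-security failure count, using a CART-style decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 150
nonsec_failures = rng.poisson(lam=4, size=(n, 1)).astype(float)  # per component
# Synthetic label: components with many non-security failures tend to also
# have at least one vulnerability.
y = (nonsec_failures[:, 0] + rng.normal(0, 2, n) > 6).astype(int)

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(nonsec_failures, y)
proba = cart.predict_proba(nonsec_failures)[:, 1]

# Rank components in descending order of predicted probability, as in the paper.
ranking = np.argsort(proba)[::-1]
print("top 10 component indices:", ranking[:10])
```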


mining software repositories | 2011

Implementing quality metrics and goals at the corporate level

Pete Rotella; Sunita Chulani

Over the past eight years, Cisco Systems, Inc., has implemented software quality goals for most groups engaged in software development, including development for both customer and internal use. This corporate implementation has proven to be a long and difficult process for many reasons, including opposition from many groups, uncertainties as to how to proceed with key aspects of the goaling, and the many unanticipated modifications needed to adapt the program to a large and diverse development and test environment. This paper describes what has worked, what has not worked so well, and what levels of improvement the Engineering organization has experienced in part as a result of these efforts. Key customer experience metrics have improved 30% to 70% over the past six years, partly as a result of metrics and process standardization, dashboarding, and goaling. As one would expect with such a large endeavor, some of the results shown are not statistically provable, but they are nevertheless generally accepted within the corporation as valid. Other important results do have strong statistical substantiation, and we describe these as well. But whether or not the results are statistically provable, Cisco has in fact improved its software quality substantially over the past eight years, and the corporate goaling mechanism is generally recognized as a necessary (but of course not sufficient) part of this improvement effort.


mining software repositories | 2012

Analysis of customer satisfaction survey data

Pete Rotella; Sunita Chulani

Cisco Systems, Inc., conducts a customer satisfaction survey (CSAT) each year to gauge customer sentiment regarding Cisco products, technical support, partner- and Cisco-provided technical services, order fulfillment, and a number of other aspects of the company's business. The results of the analysis of this data are used for several purposes, including ascertaining the viability of new products, determining whether customer support objectives are being met, setting yearly goals for engineering in-process and customer experience metrics, and assessing, indirectly, the success of engineering initiatives. Analyzing this data, which includes 110,000 yearly sets of survey responses covering over 100 product and services categories, is in many respects complicated. For example, skip logic is an integral part of the survey mechanics, and forming aggregate views of customer sentiment is statistically challenging in this data environment. In this paper, we describe several of the analysis approaches currently used, pointing out some situations where a high level of precision is not easily achieved and some situations in which it is easy to end up with erroneous results. The analysis and statistical territory covered in this paper is in parts well known and straightforward, but other parts, which we address, are susceptible to large inaccuracies and errors. We address several of these difficulties and develop reasonable solutions for two known issues: high levels of missing values and high collinearity of the independent variables.
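The abstract names two data issues, heavy missingness and collinear predictors, without detailing its remedies. The sketch below shows one routine way each issue surfaces and can be handled (mean imputation and a variance-inflation-factor check); both techniques and the survey data are illustrative assumptions, not the paper's solutions.

```python
# Sketch: two common survey-data issues, missing responses (e.g., from skip
# logic) and collinear independent variables, handled with standard techniques.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "product_quality": rng.integers(1, 6, n).astype(float),
    "tech_support": rng.integers(1, 6, n).astype(float),
})
# A third question highly collinear with tech_support, plus skip-logic gaps.
df["support_responsiveness"] = df["tech_support"] + rng.normal(0, 0.3, n)
df.loc[rng.random(n) < 0.4, "support_responsiveness"] = np.nan  # 40% missing

# Simple mean imputation for missing responses.
df_imputed = df.fillna(df.mean())

# Variance inflation factor per predictor: regress it on the others, 1 / (1 - R^2).
def vif(frame, col):
    others = frame.drop(columns=[col]).to_numpy()
    target = frame[col].to_numpy()
    X = np.column_stack([np.ones(len(frame)), others])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1 - resid.var() / target.var()
    return 1.0 / (1.0 - r2)

for col in df_imputed.columns:
    print(col, round(vif(df_imputed, col), 1))
```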


foundations of software engineering | 2011

Does adding manpower also affect quality?: an empirical, longitudinal analysis

Andrew Meneely; Pete Rotella; Laurie Williams

With each new developer added to a software development team comes a greater challenge to manage the communication, coordination, and knowledge transfer amongst teammates. Fred Brooks discusses this challenge in The Mythical Man-Month by arguing that rapid team expansion can lead to a complex team organization structure. While Brooks focuses on productivity loss as the negative outcome, poor product quality is also a substantial concern. But if team expansion is unavoidable, can any quality impacts be mitigated? Our objective is to guide software engineering managers by empirically analyzing the effects of team size, expansion, and structure on product quality. We performed an empirical, longitudinal case study of a large Cisco networking product over a five-year history. Over that time, the team underwent periods of no expansion, steady expansion, and accelerated expansion. Using team-level metrics, we quantified characteristics of team expansion, including team size, expansion rate, expansion acceleration, and modularity with respect to department designations. We examined statistical correlations between our monthly team-level metrics and monthly product-level metrics. Our results indicate that increased team size and linear growth are correlated with later periods of better product quality. However, periods of accelerated team expansion are correlated with later periods of reduced software quality. Furthermore, our linear regression prediction model based on team metrics was able to predict the product's post-release failure rate within a 95% prediction interval for 38 out of 40 months. Our analysis provides insight for project managers into how the expansion of development teams can impact product quality.
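A minimal sketch of the final step described, a linear regression from monthly team-level metrics to a later product-level failure rate, checked against a 95% prediction interval. It uses statsmodels OLS; the predictors and the forty months of data are synthetic assumptions, not the study's metrics.

```python
# Sketch: regress a monthly post-release failure rate on team-level metrics and
# check how many observed months fall inside the 95% prediction interval.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = 40
team_size = np.linspace(20, 80, months) + rng.normal(0, 2, months)
expansion_accel = np.gradient(np.gradient(team_size))  # second difference
# Synthetic failure rate, loosely tied to expansion acceleration.
failure_rate = 5 + 0.8 * expansion_accel + rng.normal(0, 0.5, months)

X = sm.add_constant(pd.DataFrame({"team_size": team_size,
                                  "expansion_accel": expansion_accel}))
fit = sm.OLS(failure_rate, X).fit()

# 95% prediction intervals per month.
pred = fit.get_prediction(X).summary_frame(alpha=0.05)
inside = ((failure_rate >= pred["obs_ci_lower"]) &
          (failure_rate <= pred["obs_ci_upper"])).sum()
print(f"{inside} of {months} months inside the 95% prediction interval")
```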


Proceedings of the Second International Workshop on Software Engineering Research and Industrial Practice | 2015

Predicting software field reliability

Pete Rotella; Sunita Chulani; Devesh Goyal

The objective of the work described is to accurately predict, as early as possible in the software lifecycle, how reliably a new software release will behave in the field. The initiative is based on a set of innovative mathematical models that have consistently shown a high correlation between key in-process metrics and our primary customer experience metric, SWDPMH (Software Defects per Million Hours [usage] per Month). We have focused on the three primary dimensions of testing -- incoming, fixed, and backlog bugs. All of the key predictive metrics described here are empirically-derived, and in specific quantitative terms have not previously been documented in the software engineering/quality literature.
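As an illustration of correlating those three test dimensions with SWDPMH, the sketch below fits a simple linear model on synthetic release data. The model form and all numbers are assumptions; the paper's actual models are not published in this abstract.

```python
# Sketch: relate in-process bug-trend metrics (incoming, fixed, backlog bugs
# during test) to the field reliability metric SWDPMH, on invented releases.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
releases = 30
incoming = rng.poisson(400, releases).astype(float)    # bugs found in test
fixed = incoming * rng.uniform(0.7, 0.95, releases)    # bugs fixed before ship
backlog = incoming - fixed                             # unresolved at ship
X = np.column_stack([incoming, fixed, backlog])

# Synthetic field outcome: larger backlogs at ship mean higher SWDPMH.
swdpmh = 0.02 * backlog + rng.normal(0, 0.3, releases)

model = LinearRegression().fit(X, swdpmh)
print("R^2 on the synthetic releases:", round(model.score(X, swdpmh), 2))
print("predicted SWDPMH for a new release:", model.predict([[420, 380, 40]])[0])
```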


ieee international conference on software quality reliability and security companion | 2017

Comparing and Goaling Releases Using Software Reliability Classes

Pete Rotella; Sunita Chulani

Software Reliability Classes (SRCs) have been developed in order to compare the field reliability performance of a sequence of software releases for a cluster of similar hardware products. A specific cluster is characterized by the type of market the hardware supports, and the software releases for the cluster have similar functionality, complexity, size, and customer expectations.


empirical software engineering and measurement | 2011

Composite Release Values for Normalized Product-level Metrics

Pete Rotella; Satyabrata Pradhan

For metrics normalized using field data, variability is often high, since different classes of customers use different features of large releases. It is important to understand the quality health of the entire release, so we need to have a way to estimate the overall normalized metric value for the entire release, across all product lines. This paper looks at three different ways to calculate release values for a key normalized metric, software defects per million usage hours per month (SWDPMH). The ‘Aggregate,’ ‘Averaging,’ and ‘Indexing’ approaches are defined and examined for two major IOS release variants. Each of these approaches has general strengths and weaknesses, and these are described. Each of the three approaches has been found to be useful in particular situations, and these scenarios are also described. The primary objective of this study is to find an accurate method to estimate SWDPMH for software releases that can be used to improve release management operations, corporate goaling, and best practices evaluation for successor feature releases. Since the methods described are not particular to SWDPMH, we believe they may be useful for other normalized metrics – this is an area of future work.
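The abstract names the 'Aggregate', 'Averaging', and 'Indexing' approaches without defining them, so the sketch below only illustrates three plausible interpretations, offered as assumptions: pooled totals across product lines, an unweighted mean of per-product-line values, and a ratio against a baseline release. The product-line numbers are invented, and the per-month normalization in SWDPMH is omitted for brevity.

```python
# Sketch: three assumed ways to roll per-product-line SWDPMH values up to a
# single release-level figure. Definitions are illustrative, not the paper's.
defects = {"product_a": 120, "product_b": 30, "product_c": 8}            # field defects
usage_hours = {"product_a": 9e6, "product_b": 1.5e6, "product_c": 2e5}   # usage hours

# Per-product-line SWDPMH: defects per million usage hours.
per_product = {p: defects[p] / (usage_hours[p] / 1e6) for p in defects}

# 'Aggregate' (assumed): pool defects and hours across the whole release.
aggregate = sum(defects.values()) / (sum(usage_hours.values()) / 1e6)

# 'Averaging' (assumed): unweighted mean of the per-product-line values.
averaging = sum(per_product.values()) / len(per_product)

# 'Indexing' (assumed): express the release relative to a baseline value.
baseline_swdpmh = 15.0  # hypothetical predecessor-release value
indexing = aggregate / baseline_swdpmh

print(per_product, aggregate, averaging, indexing)
```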


conducting empirical studies in industry | 2018

Comparing reliability levels of software releases

Pete Rotella; Sunita Chulani

An intuitive method is needed to achieve buy-in from all sectors of Engineering for a way to gauge release-over-release change within a given product's sequence of releases. Also, customers need to know if there are extant releases that are more reliable than the ones they already rely on in their networks. A new Release-Over-Release (RoR) metric can both enable customers to clearly understand the reliability risk of migrating to other available releases, and enable Engineering to understand whether their software engineering efforts are actually improving release reliability.

Collaboration


Dive into Pete Rotella's collaborations.

Top Co-Authors

Michael Gegick
North Carolina State University

Laurie Williams
North Carolina State University