
Publication


Featured research published by Bonnie K. Ray.


IEEE Transactions on Software Engineering | 1992

Orthogonal defect classification-a concept for in-process measurements

Ram Chillarege; Inderpal S. Bhandari; Jarir K. Chaar; Michael J. Halliday; Diane S. Moebus; Bonnie K. Ray; Man-Yuen Wong

Orthogonal defect classification (ODC), a concept that enables in-process feedback to software developers by extracting signatures on the development process from defects, is described. The ideas evolved from an earlier finding that demonstrates the use of semantic information from defects to extract cause-effect relationships in the development process. This finding is leveraged to develop a systematic framework for building measurement and analysis methods. The authors define ODC and discuss the necessary and sufficient conditions required to provide feedback to a developer; illustrate the use of the defect type distribution to measure the progress of a product through a process; illustrate the use of the defect trigger distribution to evaluate the effectiveness and eventually the completeness of verification processes such as inspection or testing; provide sample results from pilot projects using ODC; and open the doors to a wide variety of analysis techniques for providing effective and fast feedback based on the concepts of ODC.
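As a rough illustration of the kind of in-process signal ODC extracts, the sketch below tabulates a defect type distribution by development phase from classified defect records. The phases, defect types, and records are invented for illustration; this is not the authors' tooling.

```python
# Illustrative sketch only: tabulating an ODC-style defect type distribution
# by development phase. The records and category names below are made up.
from collections import Counter, defaultdict

defects = [
    {"phase": "design review", "type": "Function"},
    {"phase": "unit test", "type": "Assignment"},
    {"phase": "unit test", "type": "Checking"},
    {"phase": "system test", "type": "Function"},
    {"phase": "system test", "type": "Timing"},
]

by_phase = defaultdict(Counter)
for d in defects:
    by_phase[d["phase"]][d["type"]] += 1

for phase, counts in by_phase.items():
    total = sum(counts.values())
    dist = {t: round(n / total, 2) for t, n in counts.items()}
    # e.g. a high share of Function defects late in test would flag design issues
    print(phase, dist)
```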


Foundations of Software Engineering | 2004

Empirical evaluation of defect projection models for widely-deployed production software systems

Paul Luo Li; Mary Shaw; James D. Herbsleb; Bonnie K. Ray; Peter Santhanam

Defect-occurrence projection is necessary for the development of methods to mitigate the risks of software defect occurrences. In this paper, we examine user-reported software defect-occurrence patterns across twenty-two releases of four widely-deployed, business-critical production software systems: a commercial operating system, a commercial middleware system, an open source operating system (OpenBSD), and an open source middleware system (Tomcat). We evaluate the suitability of common defect-occurrence models by first assessing the match between characteristics of widely-deployed production software systems and model structures. We then evaluate how well the models fit real-world data. We find that the Weibull model is flexible enough to capture defect-occurrence behavior across a wide range of systems. It provides the best model fit in 16 out of the 22 releases. We then evaluate the ability of the moving-averages and exponential-smoothing methods to extrapolate Weibull model parameters using fitted model parameters from historical releases. Our results show that in 50% of our forecasting experiments, these two naive parameter-extrapolation methods produce projections that are worse than the projection obtained by reusing the model parameters of the most recent release. These findings establish the need for further research on parameter-extrapolation methods that take into account variations in the characteristics of widely-deployed production software systems across multiple releases.
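The following sketch shows one way a Weibull defect-occurrence model of the kind described above could be fit to per-period defect counts using nonlinear least squares. The counts are made up, and the fitting procedure is an assumption, not the paper's exact estimation method.

```python
# Hedged sketch: fit a Weibull-shaped defect-occurrence rate to per-period
# user-reported defect counts. N, shape, and scale are the fitted parameters.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

def weibull_rate(t, N, shape, scale):
    # expected defects in period t: total volume N times the Weibull density
    return N * weibull_min.pdf(t, shape, scale=scale)

t = np.arange(1, 25)                     # periods after release (illustrative)
counts = np.array([3, 8, 15, 22, 27, 30, 28, 25, 21, 18, 15, 12,
                   10, 8, 7, 5, 4, 4, 3, 2, 2, 1, 1, 1])

params, _ = curve_fit(weibull_rate, t, counts, p0=[counts.sum(), 1.5, 8.0])
N_hat, shape_hat, scale_hat = params
print(f"total defects ~ {N_hat:.0f}, shape {shape_hat:.2f}, scale {scale_hat:.1f}")
```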


Journal of Business & Economic Statistics | 2000

Long-range Dependence in Daily Stock Volatilities

Bonnie K. Ray; Ruey S. Tsay

Recent empirical studies show that the squares of high-frequency stock returns are long-range dependent and can be modeled as fractionally integrated processes, using, for example, long-memory stochastic volatility models. Are such long-range dependencies common among stocks? Are they caused by the same sources of variation? In this article, we classify daily stock returns of Standard and Poor's 500 companies on the basis of a company's size and its business or industrial sector and estimate the strength of long-range dependence in the stock volatilities using two different methods. Almost all of the companies analyzed exhibit strong persistence in volatility. We then use a canonical correlation method to identify common long-range dependent components in groups of companies, finding strong evidence in support of common persistence in volatility. Finally, we use a chi-squared test to study the effects of company size and sector on the number of common long-range dependent volatility components detected in groups of companies. Our results indicate the existence of some size effects, although they are not related to company size in a monotonic manner. On the other hand, the effects of company sector are pronounced. Randomly selected companies are found to be driven by a significantly larger number of persistent components than companies in certain business sectors, implying that persistence in stock volatility of companies in the same sector is more likely caused by the same source. These results suggest, among other interesting implications, that the volatilities of stocks for companies in the same business sector will be more often tied together in the longer run than will the volatilities of companies grouped only on the basis of size.
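The abstract does not name the two estimation methods used, so as a generic illustration the sketch below applies a GPH-style log-periodogram regression to squared returns to estimate the long-memory parameter d. The data are simulated, not S&P 500 returns, and this is not necessarily the authors' estimator.

```python
# Illustrative GPH (log-periodogram) estimate of the long-memory parameter d
# for a volatility proxy (squared returns). Simulated data, generic method.
import numpy as np

def gph_d(x, power=0.5):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    m = int(n ** power)                        # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x)[1:m + 1]               # periodogram at first m frequencies
    I = (np.abs(dft) ** 2) / (2 * np.pi * n)
    y = np.log(I)
    X = -np.log(4 * np.sin(freqs / 2) ** 2)
    return np.polyfit(X, y, 1)[0]              # slope = d; d > 0 suggests long memory

rng = np.random.default_rng(0)
returns = rng.standard_normal(2000)            # stand-in for daily returns
print("estimated d for squared returns:", round(gph_d(returns ** 2), 3))
```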


Journal of Forecasting | 1996

Model selection and forecasting for long‐range dependent processes

Nuno Crato; Bonnie K. Ray

Fractionally integrated autoregressive moving-average (ARFIMA) models have proved useful tools in the analysis of time series with long-range dependence. However, little is known about various practical issues regarding model selection and estimation methods, and the impact of selection and estimation methods on forecasts. By means of a large-scale simulation study, we compare three different estimation procedures and three automatic model-selection criteria on the basis of their impact on forecast accuracy. Our results endorse the use of both the frequency-domain Whittle estimation procedure and the time-domain approximate MLE procedure of Haslett and Raftery in conjunction with the AIC and SIC selection criteria, but indicate that considerable care should be exercised when using ARFIMA models. In general, we find that simple ARMA models provide competitive forecasts. Only a large number of observations and a strongly persistent time series seem to justify the use of ARFIMA models for forecasting purposes.
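As a sketch of the model-selection step only, the code below compares low-order ARMA models by AIC with statsmodels, reflecting the finding that simple ARMA models are often competitive. statsmodels has no ARFIMA estimator, so the fractional-differencing and Whittle or Haslett-Raftery estimation steps are omitted, and the series is simulated.

```python
# Sketch of AIC-based order selection among ARMA(p, q) candidates.
# The series below is simulated; orders searched are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = rng.standard_normal(500).cumsum() * 0.01 + rng.standard_normal(500)

best = None
for p in range(3):
    for q in range(3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, (p, 0, q))
print("AIC-selected ARMA order:", best[1])
```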


Journal of Futures Markets | 2000

Memory in Returns and Volatilities of Futures' Contracts

Nuno Crato; Bonnie K. Ray

Various authors claim to have found evidence of stochastic long-memory behavior in futures' contract returns using the Hurst statistic. This paper reexamines futures' returns for evidence of persistent behavior using a bias-corrected version of the Hurst statistic, a nonparametric spectral test, and a spectral-regression estimate of the long-memory parameter. Results based on these new methods provide no evidence for persistent behavior in futures' returns. However, they provide overwhelming evidence of long-memory behavior for the volatility of futures' returns. This finding adds to the emerging literature on persistent volatility in financial markets and suggests the use of new methods of forecasting volatility, assessing risk, and optimizing portfolios in futures' markets.
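The sketch below computes a plain rescaled-range (R/S) estimate of the Hurst exponent for simulated returns and their absolute values. It omits the bias correction and spectral tests the paper relies on and is only meant to show the basic statistic.

```python
# Classical (uncorrected) R/S Hurst estimate on simulated data.
import numpy as np

def rescaled_range(x):
    z = x - x.mean()
    cum = np.cumsum(z)
    return (cum.max() - cum.min()) / x.std(ddof=1)

def hurst_rs(x, block_sizes=(16, 32, 64, 128, 256)):
    # slope of log E[R/S] against log block size estimates the Hurst exponent
    log_n, log_rs = [], []
    for n in block_sizes:
        blocks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(b) for b in blocks])))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(2)
returns = rng.standard_normal(1024)            # stand-in for futures returns
print("Hurst exponent of returns:   ", round(hurst_rs(returns), 3))
print("Hurst exponent of |returns|: ", round(hurst_rs(np.abs(returns)), 3))
```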


Computers & Operations Research | 2007

A logistic regression framework for information technology outsourcing lifecycle management

Aleksandra Mojsilovic; Bonnie K. Ray; Richard D. Lawrence; Samer Takriti

We present a methodology for managing outsourcing projects from the vendor's perspective, designed to maximize the value to both the vendor and its clients. The methodology is applicable across the outsourcing lifecycle, providing the capability to select and target new clients, manage the existing client portfolio, and quantify the realized benefits to the client resulting from the outsourcing agreement. Specifically, we develop a statistical analysis framework to model client behavior at each stage of the outsourcing lifecycle, including: (1) a predictive model and tool for white-space client targeting and selection (opportunity identification), (2) a model and tool for client risk assessment and project portfolio management (client tracking), and (3) a systematic analysis of outsourcing results (impact analysis), to gain insights into the potential benefits of IT outsourcing as part of a successful management strategy. Our analysis is formulated in a logistic regression framework, modified to allow for non-linear input-output relationships, auxiliary variables, and small sample sizes. We provide examples to illustrate how the methodology has been successfully implemented for targeting, tracking, and assessing outsourcing clients within IBM's Global Services division. Scope and purpose: The predominant literature on IT outsourcing examines various aspects of the vendor-client relationship, strategies for successful outsourcing from the client perspective, and key sources of risk to the client, generally ignoring the risk to the vendor. However, in a rapidly changing market, a significant share of risks and responsibilities falls on the vendor, as outsourcing contracts are often renegotiated, providers replaced, or services brought back in house. With the transformation of outsourcing engagements, the risk on the vendor's side has increased substantially, driving the vendor's financial and business performance and eventually impacting the value delivered to the client. As a result, only well-run vendor firms with robust processes and tools that allow identification and active management of risk at all stages of the outsourcing lifecycle are able to deliver value to the client. This paper presents a framework and methodology for managing a portfolio of outsourcing projects from the vendor's perspective, throughout the entire outsourcing lifecycle. We address three key stages of the outsourcing process: (1) opportunity identification and qualification (i.e., selection of the most likely new clients), (2) client portfolio risk management during engagement and delivery, and (3) quantification of benefits to the client throughout the life of the deal.
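A minimal, hypothetical sketch of the scoring idea follows: a logistic regression on a few client attributes, with polynomial terms standing in for the paper's non-linear input-output modifications. Feature names and data are invented for illustration.

```python
# Hypothetical sketch of client-targeting scores via logistic regression with
# polynomial features as a crude non-linear extension. Data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))          # e.g. client size, IT spend, growth
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=200) > 0).astype(int)

model = make_pipeline(StandardScaler(),
                      PolynomialFeatures(degree=2, include_bias=False),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print("P(win) for a new prospect:", model.predict_proba([[0.5, 1.0, -0.2]])[0, 1])
```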


IBM Systems Journal | 2008

BEAM: a framework for business ecosystem analysis and modeling

Chunhua Tian; Bonnie K. Ray; Juhnyoung Lee; Rongzeng Cao; Wei Ding

This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a service-analysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities, in order to easily accommodate the evolution of the ecosystem and duplicated functionality across entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.
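As a toy illustration of the role-based paradigm, the sketch below attaches roles to ecosystem entities and computes a simple value balance per entity from role-to-role value flows. The classes, roles, and numbers are invented and do not reproduce the BEAM framework.

```python
# Toy sketch: value flows are attached to roles rather than entities, so roles
# can migrate or be duplicated as the ecosystem evolves. All names are invented.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    roles: set = field(default_factory=set)

# value flows per role pair: (providing role, receiving role) -> value per unit
value_flows = {("supplier", "retailer"): 40.0, ("retailer", "consumer"): 55.0}

entities = [Entity("AcmeMfg", {"supplier"}),
            Entity("ShopCo", {"retailer"}),
            Entity("Household", {"consumer"})]

def value_balance(e):
    inflow = sum(v for (_, to), v in value_flows.items() if to in e.roles)
    outflow = sum(v for (frm, _), v in value_flows.items() if frm in e.roles)
    return inflow - outflow

for e in entities:
    print(e.name, value_balance(e))
```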


IBM Journal of Research and Development | 2007

Statistical methods for automated generation of service engagement staffing plans

Jianying Hu; Bonnie K. Ray; Moninder Singh

In order to successfully deliver a labor-based professional service, the right people with the right skills must be available to deliver the service when it is needed. Meeting this objective requires a systematic, repeatable approach for determining the staffing requirements that enable informed staffing management decisions. We present a methodology developed for the Global Business Services (GBS) organization of IBM to enable automated generation of staffing plans involving specific job roles, skill sets, and employee experience levels. The staffing plan generation is based on key characteristics of the expected project as well as selection of a project type from a project taxonomy that maps to staffing requirements. The taxonomy is developed using statistical clustering techniques applied to labor records from a large number of historical GBS projects. We describe the steps necessary to process the labor records so that they are in a form suitable for analysis, as well as the clustering methods used for analysis, and the algorithm developed to dynamically generate a staffing plan based on a selected group. We also present results of applying the clustering and staffing plan generation methodologies to a variety of GBS projects.
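The sketch below illustrates the taxonomy idea under assumed data: historical projects are clustered by their labor mix per job role (k-means is used here, though the paper does not commit to this specific clustering method), and a staffing plan is derived by scaling a cluster's average mix to a new project's estimated hours. Roles and numbers are invented.

```python
# Hedged sketch: cluster projects by labor mix, then derive a staffing plan
# from the matched cluster's average mix. Illustrative data and roles.
import numpy as np
from sklearn.cluster import KMeans

roles = ["project manager", "architect", "developer", "tester"]
# rows: historical projects; columns: fraction of total hours by role
labor_mix = np.array([
    [0.10, 0.15, 0.55, 0.20],
    [0.12, 0.10, 0.60, 0.18],
    [0.25, 0.30, 0.30, 0.15],
    [0.22, 0.33, 0.28, 0.17],
    [0.08, 0.12, 0.62, 0.18],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(labor_mix)

new_project_hours = 4000
cluster = km.predict([[0.11, 0.12, 0.58, 0.19]])[0]   # most similar project type
plan = km.cluster_centers_[cluster] * new_project_hours
for role, hours in zip(roles, plan):
    print(f"{role}: {hours:.0f} hours")
```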


Journal of the American Statistical Association | 1997

Modeling Long-Range Dependence, Nonlinearity, and Periodic Phenomena in Sea Surface Temperatures Using TSMARS

Peter A. W. Lewis; Bonnie K. Ray

We analyze a time series of 20 years of daily sea surface temperatures measured off the California coast. The temperatures exhibit quite complicated features, such as effects on many different time scales, nonlinear effects, and long-range dependence. We show how a time series version of the multivariate adaptive regression splines (MARS) algorithm, TSMARS, can be used to obtain univariate adaptive spline threshold autoregressive models that capture many of the physical characteristics of the temperatures and are useful for short- and long-term prediction. We also discuss practical modeling issues, such as handling cycles, long-range dependence, and concurrent predictor time series using TSMARS. Models for the temperatures are evaluated using out-of-sample forecast comparisons, residual diagnostics, model skeletons, and sample functions of simulated series. We show that a categorical seasonal indicator variable can be used to model nonlinear structure in the data that is changing with time of year.
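As a very rough stand-in for TSMARS, the sketch below builds lagged temperature predictors plus a categorical season indicator and fits a regression tree, which captures threshold-type nonlinearity but is not the MARS/TSMARS algorithm. The series is simulated, not the California data.

```python
# Crude stand-in for threshold autoregression: lagged predictors plus a
# seasonal indicator, fit with a regression tree. Simulated series only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
days = np.arange(3000)
temp = 15 + 4 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 0.8, size=days.size)

# three lagged temperatures plus a crude 4-level season indicator as predictors
lag1, lag2, lag3 = temp[2:-1], temp[1:-2], temp[:-3]
season = (days[3:] % 365) // 92
X = np.column_stack([lag1, lag2, lag3, season])
y = temp[3:]

model = DecisionTreeRegressor(max_depth=5).fit(X, y)
print("one-step-ahead forecast:", model.predict(X[-1:])[0])
```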


Technometrics | 2006

Dynamic Reliability Models for Software Using Time-Dependent Covariates

Bonnie K. Ray; Zhaohui Liu; Nalini Ravishanker

This article presents a new model for software reliability characterization using a growth curve formulation that allows model parameters to vary as a function of covariate information. In the software reliability framework, covariates may include such things as the number of lines of code for a product throughout its development cycle and the number of customer licenses sold over the field life of a product. We describe a Bayesian framework for inference and model assessment, using Markov chain Monte Carlo techniques, that allows for incorporation of subjective information about the parameters through the assumed prior distributions. The methods are illustrated using simulated defect data and defect data collected during development for two large commercial software products.
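A simplified, hypothetical sketch of the modeling idea follows, assuming PyMC is available: weekly defect counts are modeled as Poisson with an exponentially decaying rate whose level shifts with a time-varying code-size covariate, fit by MCMC. This is not the authors' growth-curve specification, and the data and priors are invented.

```python
# Hypothetical Bayesian sketch: Poisson defect counts with a decaying rate
# whose level depends on a code-size covariate. Simulated data, assumed priors.
import numpy as np
import pymc as pm

weeks = np.arange(1, 41)
kloc = np.linspace(50, 120, weeks.size)          # thousands of lines of code
rng = np.random.default_rng(5)
counts = rng.poisson(np.exp(1.5 + 0.01 * kloc - 0.05 * weeks))

with pm.Model():
    b0 = pm.Normal("b0", 0, 5)
    b1 = pm.Normal("b1", 0, 1)
    decay = pm.HalfNormal("decay", 1)
    rate = pm.math.exp(b0 + b1 * kloc - decay * weeks)
    pm.Poisson("defects", mu=rate, observed=counts)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print("posterior mean decay rate:", float(idata.posterior["decay"].mean()))
```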

