Publication


Featured research published by D. Ross Jeffery.


Empirical Software Engineering | 1999

An Empirical Study of Analogy-based Software Effort Estimation

Fiona Walkerden; D. Ross Jeffery

Conventional approaches to software cost estimation have focused on algorithmic cost models, where an estimate of effort is calculated from one or more numerical inputs via a mathematical model. Analogy-based estimation has recently emerged as a promising approach, with comparable accuracy to algorithmic methods in some studies, and it is potentially easier to understand and apply. The current study compares several methods of analogy-based software effort estimation with each other and with a simple linear regression model. The results show that people are better than tools at selecting analogues for the data set used in this study. Estimates based on their selections, with a linear size adjustment to the analogue's effort value, proved more accurate than estimates based on analogues selected by tools, and also more accurate than estimates based on the simple regression model.
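To make the core mechanism concrete, here is a minimal Python sketch of analogy-based estimation with the linear size adjustment the study describes: the nearest historical project by size supplies the effort value, scaled by the ratio of the target's size to the analogue's. The project data, the single size feature, and the function names are illustrative assumptions, not the study's data set or procedure.

```python
# Minimal sketch of analogy-based effort estimation with a linear size
# adjustment. The feature choice (size only) and all data are assumptions.

# Hypothetical historical projects: (size in function points, effort in hours)
history = [
    (120, 2400),
    (300, 7200),
    (80, 1500),
    (450, 10100),
]

def estimate_by_analogy(target_size: float) -> float:
    """Pick the nearest analogue by size, then linearly adjust its effort
    to the target's size: effort * (target_size / analogue_size)."""
    analogue_size, analogue_effort = min(
        history, key=lambda p: abs(p[0] - target_size)
    )
    return analogue_effort * (target_size / analogue_size)

print(estimate_by_analogy(200))  # analogue is (120, 2400) -> 2400 * 200/120 = 4000.0
```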


Journal of Systems and Software | 2007

An exploratory study of why organizations do not adopt CMMI

Mark Staples; Mahmood Niazi; D. Ross Jeffery; Alan Abrahams; Paul Byatt; Russell Murphy

This paper explores why organizations do not adopt CMMI (Capability Maturity Model Integration), by analysing two months of sales data collected by an Australian company selling CMMI appraisal and improvement services. The most frequent reasons given by organizations were: the organization was small; the services were too costly; the organization had no time; and the organization was using another software process improvement (SPI) approach. Overall, we found that small organizations not adopting CMMI tend to say that adopting it would be infeasible, but not that it would be unbeneficial. We comment on the significance of our findings and research method for SPI research.


Australian Software Engineering Conference | 2004

A framework for classifying and comparing software architecture evaluation methods

Muhammad Ali Babar; Liming Zhu; D. Ross Jeffery

Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture leads to the desired quality attributes. A number of evaluation methods have recently been proposed. There is, however, little consensus on the technical and non-technical issues that a method should comprehensively address, or on which of the existing methods is most suitable for a particular issue. We present a set of commonly known but informally described features of an evaluation method and organize them within a framework that should offer guidance on the choice of the most appropriate method for an evaluation exercise. We use this framework to characterise eight software architecture (SA) evaluation methods.


IEEE International Software Metrics Symposium | 2001

Using public domain metrics to estimate software development effort

D. Ross Jeffery; Melanie Ruhe; Isabella Wieczorek

The authors investigate the accuracy of cost estimates obtained when applying the most commonly used modeling techniques to a large-scale industrial data set professionally maintained by the International Software Benchmarking Standards Group (ISBSG). The modeling techniques applied are ordinary least squares regression (OLS), analogy-based estimation, stepwise ANOVA, CART, and robust regression. The questions addressed in the study relate to two important issues. The first is the appropriate selection of a technique in a given context. The second is the assessment of the feasibility of using multi-organizational data compared with the benefits of company-specific data collection. We compare company-specific models with models based on multi-company data, using the estimates derived for one company that contributed to the ISBSG data set and estimates from carefully matched data from the rest of the ISBSG data. When using the ISBSG data set to derive estimates for the company, generally poor results were obtained; robust regression and OLS performed most accurately. When using the company's own data as the basis for estimation, OLS, a CART variant, and analogy performed best. In contrast to previous studies, the estimation accuracy when using the company's data is significantly higher than when using the rest of the ISBSG data set. Thus, from these results, the company that contributed to the ISBSG data set would be better off using its own data for cost estimation.
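As a rough illustration of the comparison at the heart of this study, the sketch below fits the same OLS model once on company-specific data and once on pooled multi-company data, then compares the two estimates for a new project. The log-log model form and all numbers are assumptions for illustration; they are not ISBSG data or the paper's exact specification.

```python
# Sketch of the study's comparison idea: fit the same OLS model on
# company-specific data and on multi-company data, then compare the
# resulting estimates for a new project of a given size.

import numpy as np

def fit_ols_loglog(sizes, efforts):
    """Fit log(effort) = b0 + b1*log(size) by ordinary least squares."""
    b1, b0 = np.polyfit(np.log(sizes), np.log(efforts), 1)
    return b0, b1

def predict(model, size):
    b0, b1 = model
    return float(np.exp(b0 + b1 * np.log(size)))

company = ([100, 220, 310], [2100, 4800, 7000])                 # one company's projects
multi_company = ([90, 150, 400, 800], [1500, 4000, 7500, 20000])  # pooled data

own = fit_ols_loglog(*company)
pooled = fit_ols_loglog(*multi_company)
print(predict(own, 250), predict(pooled, 250))  # compare the two estimates
```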


Information & Software Technology | 2000

A comparative study of two software development cost modeling techniques using multi-organizational and company-specific data

D. Ross Jeffery; Melanie Ruhe; Isabella Wieczorek

This research examined the use of the International Software Benchmarking Standards Group (ISBSG) repository for estimating effort for software projects in an organization not involved in ISBSG. The study investigates two questions: (1) What are the differences in accuracy between ordinary least-squares (OLS) regression and Analogy-based estimation? (2) Is there a difference in accuracy between estimates derived from the multi-company ISBSG data and estimates derived from company-specific data? Regarding the first question, we found that OLS regression performed as well as Analogy-based estimation when using company-specific data for model building. Using multi-company data the OLS regression model provided significantly more accurate results than Analogy-based predictions. Addressing the second question, we found in general that models based on the company-specific data resulted in significantly more accurate estimates.


International Conference on Software Engineering | 2003

Cost estimation for web applications

Melanie Ruhe; D. Ross Jeffery; Isabella Wieczorek

In this paper, we investigate the application of the COBRA™ method (Cost Estimation, Benchmarking, and Risk Assessment) in a new application domain, the area of web development. COBRA combines expert knowledge with data on a small number of projects to develop cost estimation models, which can also be used for risk analysis and benchmarking purposes. We modified and applied the method to the web applications of a small Australian company specializing in web development. In this paper we present the modifications made to the COBRA method and the results of applying it. In our study, using data on twelve web applications, the estimates derived from our Web-COBRA model showed a Mean Magnitude of Relative Error (MMRE) of 0.17. This result significantly outperformed expert estimates from Allette Systems (MMRE 0.37). A result comparable to Web-COBRA was obtained when applying ordinary least squares regression with size in terms of Web Objects as the independent variable (MMRE 0.23).
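MMRE, the accuracy measure quoted above, is simply the mean of |actual - estimated| / actual over all projects. A minimal implementation, with made-up effort figures rather than the study's twelve projects:

```python
# Mean Magnitude of Relative Error (MMRE): the mean of
# |actual - estimated| / actual over all projects.
# The effort pairs below are illustrative, not the study's data.

def mmre(actuals, estimates):
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

actual_effort = [400, 950, 620]
estimated_effort = [360, 1040, 590]
print(round(mmre(actual_effort, estimated_effort), 3))  # -> 0.081
```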


IEEE Software | 1997

Establishing software measurement programs

Raymond J. Offen; D. Ross Jeffery

In seeking to improve software, companies are finding out how much is involved in measuring it. They are also learning that the more integral software measurement is to the company's underlying business strategy, the more likely it is to succeed. We propose a framework, or metamodel, called the Model, Measure, Manage Paradigm (M³P), which is our extension of the well-known Quality Improvement Paradigm / Goal-Question-Metric paradigm (R.B. Grady, 1992; V.R. Basili and H.D. Rombach, 1988). M³P helps counter a contributing factor commonly seen in failed measurement programs, namely the lack of well-defined links between the numerical data and the surrounding development and business contexts, by coupling technical, business, and organizational issues into a given measurement program context. One example of where such links matter is highly technical measurement and analysis reports, which generally do not answer the concerns of senior executives. We present some early experience with our framework gathered from several case studies, discuss the eight stages of M³P implementation, and describe a tool set we developed for use with M³P.
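For context, the Goal-Question-Metric linkage that M³P extends can be pictured as a small hierarchy tying metrics upward through questions to goals, with M³P adding an explicit business context to each goal. The sketch below is our own illustration of that linkage under assumed names and fields; it is not the authors' tool set.

```python
# Illustrative sketch of the Goal-Question-Metric hierarchy that M³P extends:
# each metric is tied upward through a question to a goal, and the goal
# carries the business context M³P makes explicit. All names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str              # e.g. "defects per KLOC in first 90 days"

@dataclass
class Question:
    text: str              # e.g. "Is post-release defect density falling?"
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str           # technical goal
    business_context: str  # the business link M³P adds
    questions: list[Question] = field(default_factory=list)

goal = Goal(
    purpose="improve delivered quality",
    business_context="reduce warranty cost of flagship product",
    questions=[Question(
        text="Is post-release defect density falling?",
        metrics=[Metric("defects per KLOC in first 90 days")],
    )],
)
print(goal.purpose, "->", goal.questions[0].metrics[0].name)
```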


Software Quality Journal | 2005

Tradeoff and Sensitivity Analysis in Software Architecture Evaluation Using Analytic Hierarchy Process

Liming Zhu; Aybüke Aurum; Ian Gorton; D. Ross Jeffery

Software architecture evaluation involves evaluating different architecture design alternatives against multiple quality attributes. These attributes typically conflict with one another and must be considered simultaneously in order to reach a final design decision. AHP (Analytic Hierarchy Process), an important decision-making technique, has been leveraged to resolve such conflicts. AHP can help provide an overall ranking of design alternatives. However, it lacks the capability to explicitly identify the exact tradeoffs being made and the relative size of these tradeoffs. Moreover, the ranking produced can be so sensitive that a small change in intermediate priority weights can alter the final order of design alternatives. In this paper, we propose several in-depth analysis techniques applicable to AHP to identify critical tradeoffs and sensitive points in the decision process. We apply our method to an example of a real-world distributed architecture presented in the literature. The results are promising in that they make important decision consequences explicit in terms of key design tradeoffs and the architecture's capability to handle future quality attribute changes. They expose critical decisions which are otherwise too subtle to be detected in standard AHP results.
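To ground the discussion, the sketch below shows the basic AHP mechanics the paper builds on: priority weights derived from a pairwise comparison matrix (here via the common geometric-mean approximation of the principal eigenvector), a weighted-sum ranking of alternatives, and a crude sensitivity probe that perturbs the top weight until the ranking flips. The matrix, attribute names, and scores are assumptions, not the paper's case study.

```python
# Sketch of AHP priority derivation, ranking, and a simple sensitivity probe.
# All comparison values and alternative scores are illustrative assumptions.

import numpy as np

# Pairwise comparisons of three quality attributes (Saaty 1-9 scale):
# performance vs. modifiability vs. availability (assumed).
comparisons = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def ahp_weights(matrix):
    """Geometric-mean row approximation of AHP priority weights."""
    gm = np.prod(matrix, axis=1) ** (1.0 / matrix.shape[1])
    return gm / gm.sum()

weights = ahp_weights(comparisons)

# Scores of two architecture alternatives against each attribute (assumed).
alternatives = np.array([
    [0.6, 0.3, 0.4],    # design A
    [0.4, 0.7, 0.6],    # design B
])
ranking = alternatives @ weights
print("weights:", weights.round(3), "ranking:", ranking.round(3))

# Sensitivity probe: how far must the top weight drop before the
# preferred alternative flips?
for delta in np.linspace(0, 0.3, 7):
    w = weights.copy()
    w[0] -= delta
    w[1:] += delta / 2      # redistribute so the weights still sum to 1
    if (alternatives @ w).argmax() != ranking.argmax():
        print(f"ranking flips when weight[0] drops by ~{delta:.2f}")
        break
```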


IEEE Software | 1997

Status report on software measurement

Shari Lawrence Pfleeger; D. Ross Jeffery; Bill Curtis; Barbara A. Kitchenham

The most successful software measurement programs are those in which researchers, practitioners, and customers work hand in hand to meet goals and solve problems. But such collaboration is rare. The authors explore the gaps between these groups and point toward ways to bridge them.


Journal of Internet Services and Applications | 2012

On understanding the economics and elasticity challenges of deploying business applications on public cloud infrastructure

Basem Suleiman; Sherif Sakr; D. Ross Jeffery; Anna Liu

The exposure of business applications to the web has considerably increased the variability of their workload patterns and volumes, as the number of users and customers often grows and shrinks at various rates and times. Such application characteristics have increasingly demanded flexible yet inexpensive computing infrastructure to accommodate variable workloads. The on-demand, pay-per-use cloud computing model, specifically that of public Cloud Infrastructure Service Offerings (CISOs), has quickly evolved and been adopted by the majority of hardware and software computing companies, with the promise of provisioning utility-like computing resources at massive economies of scale. However, deploying business applications on public cloud infrastructure does not by itself lead to the desired economics and elasticity gains, and several challenges block the way to realizing its real benefits. These challenges stem from multiple differences between CISOs and an application's requirements and characteristics. This article introduces a detailed analysis and discussion of the economics and elasticity challenges of business applications deployed and operated on public cloud infrastructure. This includes analysis of various aspects of public CISOs, modeling and measuring CISOs' economics and elasticity, application workload patterns and their impact on achieving elasticity and economics, economics-driven elasticity decisions and policies, and SLA-driven monitoring and elasticity of cloud-based business applications. The analysis and discussion are supported with motivating scenarios for cloud-based business applications. The paper provides a multi-lens overview that can help cloud consumers and potential business application owners understand, analyze, and evaluate the economics and elasticity capabilities of different CISOs and their suitability for meeting business application requirements.
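As one concrete example of an economics-driven elasticity decision of the kind the article analyzes, the sketch below scales out only when the expected SLA-penalty cost exceeds the price of an additional instance. All prices, capacities, and names are illustrative assumptions, not figures from any actual CISO.

```python
# Sketch of an economics-driven elasticity policy: add an instance only
# when the projected SLA-violation cost exceeds the instance's cost.
# All prices, thresholds, and capacities are illustrative assumptions.

HOURLY_INSTANCE_COST = 0.50    # assumed on-demand price per instance-hour
SLA_PENALTY_PER_HOUR = 2.00    # assumed penalty when the response-time SLA slips
CAPACITY_PER_INSTANCE = 100.0  # requests/sec one instance sustains (assumed)

def should_scale_out(current_instances: int, expected_load: float) -> bool:
    """Scale out only when the expected penalty outweighs the extra cost."""
    capacity = current_instances * CAPACITY_PER_INSTANCE
    if expected_load <= capacity:
        return False                          # no violation expected
    overload_fraction = (expected_load - capacity) / expected_load
    expected_penalty = overload_fraction * SLA_PENALTY_PER_HOUR
    return expected_penalty > HOURLY_INSTANCE_COST

print(should_scale_out(3, 320.0))  # mild overload: penalty does not justify cost
print(should_scale_out(3, 450.0))  # heavy overload: scaling out pays off
```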

Collaboration

Top co-authors of D. Ross Jeffery:

Liming Zhu, Commonwealth Scientific and Industrial Research Organisation
Mark Staples, University of New South Wales
Louise Scott, University of New South Wales
Aybüke Aurum, University of New South Wales
Gerwin Klein, University of New South Wales