Publications


Featured research published by William M. Evanco.


IEEE Software | 2005

In-house software development: what project management practices lead to success?

June M. Verner; William M. Evanco

Project management is an important part of software development, both for organizations that rely on third-party software development and for those whose software is developed primarily in-house. Moreover, quantitative survey-based research regarding software development's early, nontechnical aspects is lacking. To help provide a project management perspective for managers responsible for in-house software development, we conducted a survey to determine the factors that lead to successful projects. We chose a survey because of its simplicity and because we hoped to find relationships among variables; a survey also let us cover more projects at a lower cost than would an equivalent number of interviews or a series of case studies. Our results provide general guidance for business and project managers to help ensure that their projects succeed.


Information & Software Technology | 2007

State of the practice: An exploratory analysis of schedule estimation and software project success prediction

June M. Verner; William M. Evanco; Narciso Cerpa

During discussions with a group of U.S. software developers, we explored schedule estimation practices and their implications for software project success. Our objective is not only to explore the direct effects of cost and schedule estimation on the perceived success or failure of a software development project, but also to quantitatively examine a host of factors surrounding the estimation issue that may impinge on project outcomes. We later asked our initial group of practitioners to respond to a questionnaire covering some important cost and schedule estimation topics. Then, to determine whether the results are generalizable, two other groups, from the U.S. and Australia, completed the questionnaire. Based on these convenience samples, we conducted exploratory statistical analyses to identify determinants of project success and used logistic regression to predict project success for the entire sample, as well as for each of the groups separately. From the developers' point of view, our overall results suggest that success is more likely if the project manager is involved in schedule negotiations, adequate requirements information is available when the estimates are made, initial effort estimates are good, staff leave is taken into account, and staff are not added late to meet an aggressive schedule. For these organizations we found that developer input to the estimates did not improve the chances of project success or improve the estimates. We then used the logistic regression results from each group alone to predict project success for the other two groups combined. The results show a reasonable degree of generalizability among the different groups.
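
The analysis centers on a logistic regression over binary survey responses. As a minimal sketch of that setup, with hypothetical variable names standing in for the factors above and synthetic data in place of the actual survey responses:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # roughly the size of one survey group

# Binary practice indicators (1 = yes, 0 = no); synthetic stand-ins
# for the survey items, with hypothetical names.
pm_negotiated = rng.integers(0, 2, n)   # PM involved in schedule negotiation
reqs_adequate = rng.integers(0, 2, n)   # adequate requirements at estimation
good_estimate = rng.integers(0, 2, n)   # good initial effort estimate
late_staffing = rng.integers(0, 2, n)   # staff added late for the schedule

# Invented effect sizes, used only to generate plausible outcomes.
logit = (-0.5 + 1.0 * pm_negotiated + 0.8 * reqs_adequate
         + 0.9 * good_estimate - 1.1 * late_staffing)
success = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack(
    [pm_negotiated, reqs_adequate, good_estimate, late_staffing]))
result = sm.Logit(success, X).fit(disp=False)
print(result.summary())  # coefficient signs flag determinants of success

Fitting on one group and scoring the held-out groups, as the paper does, is then a matter of reusing result.predict on the other samples.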


Accident Analysis & Prevention | 1999

The potential impact of rural mayday systems on vehicular crash fatalities

William M. Evanco

Rural mayday systems can reduce the time between the occurrence of an accident and the notification of emergency medical services, called the accident notification time. Reductions in this time, in turn, may affect the numbers of fatalities. A statistical analysis is used to estimate the quantitative relationship between fatalities and the accident notification time. The elasticity of rural fatalities with respect to the accident notification time was found to be 0.14. If a rural mayday system were fully implemented (i.e., 100% market penetration) and the service availability were 100%, then we would expect monetary benefits of about $1.83 billion per year and comprehensive benefits (which include the monetary value attached to the lost quality of life) of $6.37 billion per year.
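
Read as a constant-elasticity relationship, the 0.14 estimate implies that fatalities scale as (notification time)^0.14, so the payoff from any given reduction follows directly. A back-of-the-envelope sketch of that arithmetic, not the paper's statistical model:

# Constant-elasticity reading of the reported estimate (assumption for
# illustration): fatalities ~ k * notification_time ** 0.14
ELASTICITY = 0.14

def fatality_ratio(time_reduction_pct: float) -> float:
    """Fraction of baseline fatalities remaining after reducing the
    accident notification time by the given percentage."""
    return (1 - time_reduction_pct / 100) ** ELASTICITY

# Halving the notification time cuts fatalities by roughly 9%.
print(f"{(1 - fatality_ratio(50)) * 100:.1f}% fewer fatalities")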


Scientometrics | 2005

The use of bibliometric and knowledge elicitation techniques to map a knowledge domain: Software Engineering in the 1990s

Katherine W. McCain; June M. Verner; Gregory W. Hislop; William M. Evanco; Vera J. Cole

Parallel mappings of the intellectual and cognitive structure of Software Engineering (SE) were conducted using Author Cocitation Analysis (ACA), PFNet analysis, and card sorting, a Knowledge Elicitation (KE) method. Cocitation counts for 60 prominent SE authors over the period 1990-1997 were gathered from SCISEARCH. Forty-six software engineers provided similar data by sorting authors' names into labeled piles. At the eight-cluster level, ACA and KE identified similar author clusters representing key areas of SE research and application, though the KE labels suggested some differences between the way the authors' works were used and how they were perceived by respondents. In both maps, the clusters were arranged along a horizontal axis moving from "micro" to "macro" level R&D activities (correlation of X-axis coordinates = 0.73). The vertical axis of the two maps differed (correlation of Y-axis coordinates = -0.08). The Y axis of the ACA map pointed to a continuum of high to low formal content in published work, whereas the Y axis of the KE map was anchored at the bottom by "generalist" authors and at the top by authors identified with a single, highly specific and consistent specialty. The PFNet of the raw ACA counts identified Boehm, Basili, and Booch as central figures in subregions of the network, with Boehm connected directly, or through a single intervening author, to just over 50% of the author set. The combination of ACA and KE provides a richer picture of the knowledge domain and useful cross-validation.
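
The counting step behind Author Cocitation Analysis is simple: for every citing paper, each pair of cited authors adds one to that pair's cocitation count. A toy sketch of that step, with invented author sets standing in for the SCISEARCH records:

from collections import Counter
from itertools import combinations

# Each set stands for the authors cited together by one citing paper.
citing_papers = [
    {"Boehm", "Basili", "Selby"},
    {"Boehm", "Booch", "Rumbaugh"},
    {"Basili", "Selby", "Rombach"},
    {"Booch", "Rumbaugh", "Jacobson"},
    {"Boehm", "Basili"},
]

cocitation = Counter()
for cited in citing_papers:
    for pair in combinations(sorted(cited), 2):
        cocitation[pair] += 1

for pair, count in cocitation.most_common():
    print(pair, count)

The resulting author-by-author matrix is what gets fed to the clustering, mapping, and PFNet analyses described above.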


IEEE Transactions on Software Engineering | 2003

Comments on "The confounding effect of class size on the validity of object-oriented metrics"

William M. Evanco

It has been proposed by El Emam et al. (ibid., vol. 27, no. 7, 2001) that size should be taken into account as a confounding variable when validating object-oriented metrics. We take issue with this perspective, since the ability to measure size does not temporally precede the ability to measure many of the object-oriented metrics that have been proposed. Hence, the condition that a confounding variable must occur causally prior to another explanatory variable is not met. In addition, when specifying multivariate models of defects that incorporate object-oriented metrics, entering size as an explanatory variable may result in misspecified models that lack internal consistency. Examples are given where this misspecification occurs.
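
One facet of the concern can be illustrated with a small simulation, not taken from the paper: when defects are driven by an object-oriented metric that is itself strongly correlated with size, entering size as a second explanatory variable mainly inflates the uncertainty around the metric's estimated effect. All names, coefficients, and data below are invented for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
coupling = rng.gamma(2.0, 2.0, n)             # hypothetical OO metric
size = 50 * coupling + rng.normal(0, 25, n)   # size tracks the metric
defects = rng.poisson(np.exp(0.2 + 0.15 * coupling))

m1 = sm.GLM(defects, sm.add_constant(coupling),
            family=sm.families.Poisson()).fit()
m2 = sm.GLM(defects, sm.add_constant(np.column_stack([coupling, size])),
            family=sm.families.Poisson()).fit()

# Compare the metric's coefficient and standard error with and without size:
print(m1.params[1], m1.bse[1])   # estimated cleanly
print(m2.params[1], m2.bse[1])   # noisier once the collinear size enters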


Conference on Software Maintenance and Reengineering | 2001

Prediction models for software fault correction effort

William M. Evanco

We have developed a model to explain and predict the effort associated with changes made to software to correct faults while it is undergoing development. Since the effort data available for this study is ordinal in nature, ordinal response models are used to explain the effort in terms of measures of fault locality and the characteristics of the software components being changed. The calibrated ordinal response model is then applied to two projects not used in the calibration to examine predictive validity.
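
An ordered logit is one standard form of ordinal response model. A minimal sketch with statsmodels, assuming hypothetical predictors in place of the paper's fault-locality and component measures and synthetic data in place of the change reports:

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 200
fault_locality = rng.integers(1, 4, n)    # e.g., components touched by a fix
component_size = rng.normal(0, 1, n)      # standardized component measure

# Invented latent-effort relationship, used only to generate ordinal labels.
latent = 0.9 * fault_locality + 0.5 * component_size + rng.logistic(size=n)
effort = pd.cut(latent, bins=[-np.inf, 1.5, 3.0, np.inf],
                labels=["low", "medium", "high"])

X = pd.DataFrame({"fault_locality": fault_locality,
                  "component_size": component_size})
result = OrderedModel(effort, X, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())

Predictive validity can then be examined by scoring projects not used in the calibration with result.predict, which returns per-category probabilities.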


Journal of Systems and Software | 1997

Poisson analyses of defects for small software components

William M. Evanco

Poisson analyses are proposed for the identification of the determinants of defects in small software components such as subprograms, modules, or functional units. Software complexity (which can be measured during design or implementation) and software development environment characteristics influence the numbers of defects emerging in the testing phase. Poisson models are calibrated using software complexity measures from several Ada projects along with their associated software change report data. One of the models is used to estimate defects at the subprogram, subsystem, and project levels for the calibration data, and these estimates are then compared to the actual defects. Notable results from this analysis are that extensively modified reused subprograms (>25% changed) have substantially more defects than new code of otherwise comparable characteristics and that software development environment volatility (as measured by non-defect changes per thousand source lines of code) is a strong determinant of subprogram defects. To demonstrate cross-language applicability, defect predictions are made at the subsystem level for a project coded in the C programming language and compared to the actual subsystem defects. Finally, for software projects developed through multiple builds, we show that a Poisson model that incorporates a measure of testing effort is applicable.
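
The model form here is Poisson regression: expected defects grow multiplicatively with complexity, environment volatility, and heavy modification of reused code. A minimal sketch with statsmodels, using invented coefficients and synthetic data rather than the Ada project measures:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
complexity = rng.gamma(2.0, 1.0, n)        # design/implementation complexity
volatility = rng.gamma(1.5, 1.0, n)        # non-defect changes per KSLOC
heavy_reuse_mod = rng.integers(0, 2, n)    # reused code with >25% changed

# Invented generative coefficients, for illustration only.
mu = np.exp(-0.5 + 0.4*complexity + 0.3*volatility + 0.6*heavy_reuse_mod)
defects = rng.poisson(mu)

X = sm.add_constant(np.column_stack([complexity, volatility, heavy_reuse_mod]))
fit = sm.GLM(defects, X, family=sm.families.Poisson()).fit()
print(fit.summary())

# Summing fitted means over components gives subsystem- or project-level
# defect estimates that can be compared against actual counts.
print(fit.fittedvalues.sum(), defects.sum())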


IEEE International Software Metrics Symposium | 1999

Analyzing change effort in software during development

William M. Evanco

We develop ordinal response models to explain the effort associated with non-defect changes of software during development. The explanatory variables include the extent of the change, the change type, and the internal complexity of the software components undergoing the change. The models are calibrated on the basis of a single software system and are then validated on two additional systems.


STEP '99: Proceedings of the Ninth International Workshop on Software Technology and Engineering Practice | 1999

Using a proportional hazards model to analyze software reliability

William M. Evanco

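A proportional hazards model relates each component's failure hazard to its covariates. As a minimal sketch of the technique named in the title, using the lifelines library with hypothetical covariates and invented data:

import pandas as pd
from lifelines import CoxPHFitter

# One row per software component: observed time to first failure, whether
# a failure occurred (1) or the observation was censored (0), and covariates.
df = pd.DataFrame({
    "time":         [120, 340, 90, 500, 220, 410, 150, 480],
    "failed":       [1,   1,   1,  0,   1,   0,   1,   0],
    "complexity":   [14,  6,   21, 4,   11,  5,   16,  3],
    "change_count": [2,   9,   5,  3,   12,  1,   4,   8],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="failed")
cph.print_summary()  # hazard ratios for the covariates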


International Conference on Case-Based Reasoning | 2003

Predicting software development project outcomes

Rosina O. Weber; Michael Waller; June M. Verner; William M. Evanco

