Joanne E. Hale
University of Alabama
Publications
Featured research published by Joanne E. Hale.
Journal of Business Communication | 2005
Joanne E. Hale; Ronald E. Dulek; David P. Hale
This article reports the results of a qualitative study that examined communication challenges decision makers experience during the response stage of crisis management. Response is perhaps the most critical of the three stages (prevention, response, recovery) identified in the crisis research literature. Response is the point when crisis managers make decisions that may save lives and mitigate the effects of the crisis. Actions at this point also significantly influence public opinion about the crisis and an organization's handling of the event. This study provides additional insight into the complexities of the response stage through analysis of 26 interviews conducted with crisis decision makers involved in 15 organizational crises. Ten additional crises were analyzed through secondary data sources. The result of these analyses is the identification and explication of four crisis response steps: observation, interpretation, choice, and dissemination.
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2013
Mark Keith; Samuel C. Thompson; Joanne E. Hale; Paul Benjamin Lowry; Chapman Greer
The use of mobile applications continues to experience exponential growth. Using mobile apps typically requires the disclosure of location data, which often accompanies requests for various other forms of private information. Existing research on information privacy has implied that consumers are willing to accept privacy risks for relatively negligible benefits, and the offerings of mobile apps based on location-based services (LBS) appear to be no different. However, until now, researchers have struggled to replicate realistic privacy risks within experimental methodologies designed to manipulate independent variables. Moreover, minimal research has successfully captured actual information disclosure over mobile devices based on realistic risk perceptions. The purpose of this study is to propose and test a more realistic experimental methodology designed to replicate real perceptions of privacy risk and capture the effects of actual information disclosure decisions. As with prior research, this study employs a theoretical lens based on privacy calculus. However, we draw more detailed and valid conclusions due to our use of improved methodological rigor. We report the results of a controlled experiment involving consumers (n=1025) spanning a range of ages, education levels, and employment experience. Based on our methodology, we find that only a weak, albeit significant, relationship exists between information disclosure intentions and actual disclosure. In addition, this relationship is heavily moderated by the consumer practice of disclosing false data. We conclude by discussing the contributions of our methodology and the possibilities for extending it for additional mobile privacy research.
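The moderated relationship the authors report can be pictured with a small regression sketch. The snippet below is illustrative only: the column names (intent, falsify, disclosed) and the simulated data are assumptions standing in for the study's actual survey and disclosure measures, and the interaction term mirrors the reported moderation by false disclosure.

```python
# Illustrative moderation analysis in the spirit of the study's design:
# does falsification moderate the intention -> actual-disclosure link?
# Column names and data are hypothetical, not from the original dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1025  # sample size reported in the abstract

intent = rng.normal(size=n)              # stated disclosure intention
falsify = rng.integers(0, 2, size=n)     # 1 = consumer discloses false data
# Actual disclosure tracks intention only weakly, and less so for falsifiers.
disclosed = 0.2 * intent - 0.15 * intent * falsify + rng.normal(size=n)

df = pd.DataFrame({"intent": intent, "falsify": falsify, "disclosed": disclosed})
model = smf.ols("disclosed ~ intent * falsify", data=df).fit()
print(model.summary())  # the intent:falsify term captures the moderation
```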
Journal of Software Engineering and Applications | 2009
Graylin Trevor Jay; Joanne E. Hale; Randy K. Smith; David P. Hale; Nicholas A. Kraft; Charles Ward
Researchers have often commented on the high correlation between McCabe's Cyclomatic Complexity (CC) and lines of code (LOC). Many have believed this correlation high enough to justify adjusting CC by LOC or even substituting LOC for CC. However, from an empirical standpoint the relationship of CC to LOC is still an open one. We undertake the largest statistical study of this relationship to date. Employing modern regression techniques, we find the linearity of this relationship has been severely underestimated, so much so that CC can be said to have absolutely no explanatory power of its own. This research presents evidence that LOC and CC have a stable, practically perfect linear relationship that holds across programmers, languages, code paradigms (procedural versus object-oriented), and software processes. Linear models are developed relating LOC and CC. These models are verified against over 1.2 million randomly selected source files from the SourceForge code repository. These files represent software projects from three target languages (C, C++, and Java) and a variety of programmer experience levels, software architectures, and development methodologies. The models developed are found to successfully predict roughly 90% of CC's variance by LOC alone. This suggests not only that the linear relationship between LOC and CC is stable, but also that the aspects of code complexity that CC measures, such as the size of the test case space, grow linearly with source code size across languages and programming paradigms.
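To make the shape of this analysis concrete, here is a minimal sketch of the LOC-versus-CC regression. It assumes a local tree of Python source files under src/ and uses the radon package as a stand-in metric extractor; the original study measured C, C++, and Java files from SourceForge, so the file set, path, and extractor here are illustrative substitutions.

```python
# Sketch: regress cyclomatic complexity on lines of code across source files.
# Uses the Python `radon` package as a stand-in metric extractor; the study
# itself analyzed C, C++, and Java files from SourceForge.
import glob
import numpy as np
from radon.complexity import cc_visit
from radon.raw import analyze

loc, cc = [], []
for path in glob.glob("src/**/*.py", recursive=True):  # hypothetical code tree
    source = open(path, encoding="utf-8", errors="ignore").read()
    loc.append(analyze(source).sloc)                   # source lines of code
    cc.append(sum(b.complexity for b in cc_visit(source)))  # file-level CC

x, y = np.array(loc, float), np.array(cc, float)
slope, intercept = np.polyfit(x, y, 1)     # CC ~ intercept + slope * LOC
r2 = np.corrcoef(x, y)[0, 1] ** 2          # share of CC variance explained
print(f"CC = {intercept:.2f} + {slope:.3f}*LOC,  R^2 = {r2:.2f}")
```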
Journal of Software Maintenance and Evolution: Research and Practice | 2009
Uzma Raja; David P. Hale; Joanne E. Hale
A robust model reference controller, which supplies manipulated variables for controlling a multi-input multi-output process that may not be modelled perfectly, consists of a pre-compensator, a diagonal filter, and a post-compensator. The input signals to the robust model reference controller are first projected dynamically into decoupled signals by the pre-compensator. The diagonal filters then filter the decoupled signals individually. The filtered signals are projected back dynamically to the manipulated variables for the controlled process. The filter can easily be tuned to attain the optimal response of the closed-loop system under a given bound of model uncertainty.
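A minimal sketch of the signal path described above (pre-compensator, per-channel diagonal filter, post-compensator) follows. The matrices, the first-order filter, and the tuning constants are placeholder assumptions chosen only to show the structure, not an actual controller synthesis.

```python
# Minimal sketch of the described controller structure: decouple the error
# signals with a pre-compensator, filter each channel independently, then
# map back to manipulated variables with a post-compensator. All matrices
# and filter constants here are illustrative placeholders.
import numpy as np

PRE = np.array([[1.0, -0.3], [0.2, 1.0]])    # hypothetical pre-compensator
POST = np.array([[0.9, 0.1], [-0.1, 1.1]])   # hypothetical post-compensator
alpha = np.array([0.5, 0.8])                 # per-channel filter tuning (0..1]

state = np.zeros(2)                          # filter memory, one per channel

def control_step(error: np.ndarray) -> np.ndarray:
    """One sample of the pre-compensate -> filter -> post-compensate path."""
    global state
    decoupled = PRE @ error                          # decoupled channels
    state = alpha * decoupled + (1 - alpha) * state  # diagonal low-pass filter
    return POST @ state                              # manipulated variables

u = control_step(np.array([0.4, -0.2]))
print(u)
```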
IEEE Software | 2000
Joanne E. Hale; Allen S. Parrish; Brandon Dixon; Randy K. Smith
In software engineering, team task assignments appear to have a significant potential impact on a project's overall success. The authors propose task assignment effort adjustment factors that can help tune existing estimation models. They show significant improvements in the predictive abilities of both COCOMO I and COCOMO II by enhancing them with these factors.
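As one way to picture where such a factor would sit, the sketch below inserts a multiplicative task-assignment effort multiplier into the published COCOMO II post-architecture effort equation. The calibration constants follow COCOMO II.2000 defaults; the scale-factor ratings, effort multipliers, and the task-assignment factor value are hypothetical illustrations, not values from the paper.

```python
# Sketch of the COCOMO II post-architecture effort equation with an extra
# multiplicative task-assignment adjustment factor, in the spirit of the
# article's proposal. Constants A and B follow COCOMO II.2000 defaults;
# the ratings below are hypothetical, not values from the paper.
A, B = 2.94, 0.91           # COCOMO II.2000 calibration constants
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]   # hypothetical SF ratings
effort_multipliers = [1.0, 1.10, 0.87]           # hypothetical EM ratings
task_assignment_em = 1.15   # hypothetical adjustment: poor task-person fit

def cocomo_ii_effort(ksloc: float) -> float:
    e = B + 0.01 * sum(scale_factors)            # size exponent
    em = task_assignment_em
    for m in effort_multipliers:
        em *= m
    return A * ksloc ** e * em                   # person-months

print(f"{cocomo_ii_effort(50):.1f} person-months")  # 50 KSLOC project
```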
IEEE Software | 2007
Richard W. Woolridge; Denise Johnson McManus; Joanne E. Hale
Managing software projects can often degrade into fighting fires lit by the embers of unrecognized and unmanaged risks. Stakeholders are a recognized source of significant software project risk, but few researchers have focused on providing a practical method for identifying specific project stakeholders. Furthermore, no methods provide guidance in identifying and managing project risks arising from those stakeholders. We developed the outcome-based stakeholder risk assessment model (OBSRAM) to provide this practical guidance. OBSRAM offers the project team a step-by-step approach to identifying stakeholders during requirements engineering, identifying stakeholder influences on the project, identifying the project's impact on stakeholders, and assessing the risks that their potential negative responses pose. We illustrate OBSRAM using a case study of a simulated airline-crew-scheduling system project that aims to reduce aircraft ground turnaround time to 30 minutes or less.
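A rough sketch of how an OBSRAM-style stakeholder risk entry might be recorded is shown below. The field names and the probability-times-impact exposure score are generic risk-register conventions assumed for illustration, not the paper's published scales, and the example entry is invented in the spirit of the crew-scheduling case study.

```python
# Illustrative record for stakeholder risk entries in the spirit of OBSRAM.
# The probability-times-impact exposure score is a generic risk convention
# assumed here for illustration; it is not the paper's published scale.
from dataclasses import dataclass

@dataclass
class StakeholderRisk:
    stakeholder: str          # who, identified during requirements engineering
    influence: str            # how they can affect the project
    project_impact: str       # how the project affects them
    negative_response: str    # the feared reaction
    probability: float        # likelihood of that reaction, 0..1
    impact: float             # consequence severity, e.g. 1..10

    def exposure(self) -> float:
        return self.probability * self.impact

crew_scheduler = StakeholderRisk(
    stakeholder="crew schedulers",
    influence="can reject schedules the tool proposes",
    project_impact="changes daily scheduling workflow",
    negative_response="workarounds that bypass the new system",
    probability=0.4,
    impact=7.0,
)
print(f"exposure: {crew_scheduler.exposure():.1f}")
```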
Information Resources Management Journal | 2009
Charles J. Kacmar; Denise Johnson McManus; Evan W. Duggan; Joanne E. Hale; David P. Hale
The theories of social exchange, task-technology fit, and technology acceptance are utilized in a field study of software development methodologies. This investigation includes the effects of user experiences on perceptions of acceptance and usage of a methodology. More specifically, perceptions of the outputs and deliverables from a methodology were found to significantly and positively influence its perceived usefulness, while perceptions of challenges and obstacles to using and applying the methodology negatively influenced its perceived ease of use within a developer's organization. Perceived usefulness was a positive and strong antecedent to perceptions of fit between the methodology and client problems, and to the strengthening of efficacy beliefs about the methodology.
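The reported paths can be approximated, very loosely, with separate regressions. The sketch below simulates hypothetical survey scales and estimates one ordinary-least-squares regression per path; the study itself used a full structural model, so this is a simplification for illustration only.

```python
# Rough sketch of the reported paths using separate OLS regressions on
# hypothetical survey scales; the study used a structural model, so this
# is a simplification for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200  # hypothetical sample size
df = pd.DataFrame({
    "outputs": rng.normal(size=n),     # perceived methodology deliverables
    "obstacles": rng.normal(size=n),   # perceived challenges in applying it
})
df["usefulness"] = 0.5 * df["outputs"] + rng.normal(size=n)
df["ease_of_use"] = -0.4 * df["obstacles"] + rng.normal(size=n)
df["fit"] = 0.6 * df["usefulness"] + rng.normal(size=n)

# One regression per hypothesized path:
for formula in ("usefulness ~ outputs",
                "ease_of_use ~ obstacles",
                "fit ~ usefulness"):
    params = smf.ols(formula, df).fit().params.round(2).to_dict()
    print(formula, "->", params)
```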
Communications of The ACM | 2009
Richard W. Woolridge; David P. Hale; Joanne E. Hale; R. Shane Sharpe
Decades of evidence reveal a shockingly low success rate for software projects. In their first CHAOS report in 1994, The Standish Group reported that 16% of IT projects were successful, 31% failed, and the remaining 53% were challenged. In their 2004 report, these ratios improved to 29% (successful), 18% (failed), and 53% (challenged). Although the record is slowly improving, much work remains before software project success becomes the norm rather than the exception. Glass [2] and Field [1], among others, cite inadequate project scoping as a major contributing factor to project failure. Scoping is a project initiation activity that defines the project's boundary by identifying the problem domain needs to be met and the software elements expected to be delivered. In addition to identifying the product (problem domain needs and software elements), scoping also sets out the project's value and quality metrics and resource (such as schedule and budget) requirements [7]. This article is grounded on the premise that effective scope definition will reduce the impact of scope changes and increase resource estimate accuracy, thus reducing the likelihood of scoping-caused project failure. The Outcome-Based Scoping (OBS) model is proposed to reduce the likelihood of scoping-caused failure. Using OBS, project leaders develop a more complete understanding of how to meet the problem domain objectives, not just deliver a working software solution. OBS recognizes, as did Vessey and Glass (1998), that much of the software engineering process is better characterized as domain problem solving. Thus, OBS first defines the problem domain scope model; from that foundation, the software domain scope model is then developed. The OBS model further structures the scoping effort by decomposing the concept of scope into two dimensions: intent (representing the goal) and blueprint (representing the resources required to meet a specified goal). The remainder of this article develops the OBS model and provides a case study illustrating its use. OBS defines the relationships within and between the problem and software domains to include (as shown in Figure 1): the Problem Domain Intent (PDI), which defines the intended problem-domain outcomes that provide the boundary for the following model component (the Problem Domain Blueprint); the Problem Domain Blueprint (PDB), which identifies the stakeholder-specific changes and resources required to enable the PDI-expressed intent and drive the following model component (the Software Project Intent); and the Software Project Intent (SPI), which translates the PDB candidate solutions into software scope elements …
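A small sketch of the named OBS components as linked records may help fix the structure. The component definitions follow the abstract; the example strings are invented placeholders, not the article's case study.

```python
# Sketch of the OBS scope components named in the article as linked records:
# intent bounds blueprint, blueprint drives software project intent. The
# example strings are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class ProblemDomainIntent:        # PDI: intended problem-domain outcomes
    outcomes: list[str] = field(default_factory=list)

@dataclass
class ProblemDomainBlueprint:     # PDB: stakeholder changes and resources
    bounded_by: ProblemDomainIntent
    stakeholder_changes: list[str] = field(default_factory=list)

@dataclass
class SoftwareProjectIntent:      # SPI: candidate solutions as scope elements
    driven_by: ProblemDomainBlueprint
    scope_elements: list[str] = field(default_factory=list)

pdi = ProblemDomainIntent(["cut order turnaround time in half"])
pdb = ProblemDomainBlueprint(pdi, ["retrain fulfillment staff"])
spi = SoftwareProjectIntent(pdb, ["order routing module"])
print(spi.scope_elements)
```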
Communications of The ACM | 2009
Gary W. Brock; Denise Johnson McManus; Joanne E. Hale
Raising the American flag at Iwo Jima on Mount Suribachi is symbolic of the war effort of 1945. The bombing of Pearl Harbor, D-Day, the evacuation of Saigon, and the toppling of Saddam's statue are symbols that pervade our society as reminders of our fathers' commitment to protecting our country and the world. Historically, after veterans returned home, they considered their war experiences taboo subjects and reluctantly shared details of specific war missions. If, indeed, history repeats itself, we must diligently document the successes and failures of military missions. Realizing the criticality of lessons learned, the military has devised a plan to capture immediate feedback after every completed mission. We extract these lessons to apply in software process improvement efforts. Process improvement requires understanding and measuring existing processes, analyzing the data to discover cost, quality, or schedule improvement opportunities, and introducing process changes to capture those opportunities. Within the domain of software development, these efforts are often framed within large enterprise-level frameworks such as the SEI's IDEAL (initiating, diagnosing, establishing, acting, and learning) model. Similarly, Basili's Experience Factory model sets out the structure and process approaches for identifying, developing, refining, and adopting "clusters of competencies." Organizational-level process improvement efforts can be successful only if improvement ideas are generated and proposed. Haley suggests relying on ad hoc task teams to develop process improvement proposals (PIPs) that may be maintained in a database for periodic review. Kellner recommends that PIPs be identified from process performers, other projects, and the literature. The military's After Action Review (AAR) method, as a model for identifying process-performer-generated PIPs, may then be integrated into an organizational-level effort. In typical software projects today, such PIPs are identified through lessons-learned meetings that may be held quarterly, by phase, or at the end of a project. In contrast, Army AARs are an integral part of a mission plan. We propose that this ingrained process improvement strategy serves as a valuable model for application within software development projects. What is an AAR? The AAR process has been used effectively by the U.S. Army since 1981; it has been so embraced by soldiers and units that it spread voluntarily to other units. Army teams use the AAR process as a channel to foster process improvements through interactive team discussions.
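To suggest how an AAR could feed a PIP database, the sketch below records a review around the AAR's standard questions (what was supposed to happen, what happened, why, and what to change) and files the resulting proposals. The field names, example entries, and the in-memory store are illustrative assumptions.

```python
# Sketch: capture a software-project After Action Review as a record built
# around the AAR's standard questions, and turn the improvement items into
# process improvement proposals (PIPs) for a periodic-review database.
# Field names, example entries, and the in-memory store are illustrative.
from dataclasses import dataclass

@dataclass
class AfterActionReview:
    mission: str               # the task or iteration reviewed
    intended: str              # what was supposed to happen
    actual: str                # what actually happened
    why: str                   # why the difference occurred
    improvements: list[str]    # what to sustain or change next time

pip_database: list[str] = []   # stand-in for a periodic-review PIP store

def file_pips(aar: AfterActionReview) -> None:
    pip_database.extend(f"[{aar.mission}] {p}" for p in aar.improvements)

file_pips(AfterActionReview(
    mission="sprint 12 release",
    intended="ship the build with zero regression failures",
    actual="two regressions reached staging",
    why="smoke tests were skipped under schedule pressure",
    improvements=["gate deployment on the smoke suite"],
))
print(pip_database)
```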
Journal of Software Engineering and Applications | 2011
Uzma Raja; Joanne E. Hale; David P. Hale
This study examines temporal patterns of software systems defects using the Autoregressive Integrated Moving Average (ARIMA) approach. Defect reports from ten software application projects are analyzed; five of these projects are open source and five are closed source from two software vendors. Across all sampled projects, the ARIMA time series modeling technique provides accurate estimates of reported defects during software maintenance, with organizationally dependent parameterization. In contrast to causal models that require extraction of source-code level metrics, this approach is based on readily available defect report data and is less computation intensive. This approach can be used to improve software maintenance and evolution resource allocation decisions and to identify outlier projects—that is, to provide evidence of unexpected defect reporting patterns that may indicate troubled projects.
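A minimal sketch of the modeling step follows, using statsmodels to fit an ARIMA model to a monthly defect-report series and forecast ahead. The series is simulated and the (1, 1, 1) order is a placeholder, since the paper found the parameterization to be organization-dependent.

```python
# Sketch: fit an ARIMA model to a monthly defect-report series and forecast
# the next quarter, in the spirit of the study. The (p, d, q) order below is
# a placeholder; the paper reports organization-dependent parameterization,
# and the defect counts here are simulated for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("2018-01", periods=48, freq="MS")
# Hypothetical defect counts: decaying trend plus noise as maintenance matures.
defects = pd.Series(80 * np.exp(-0.03 * np.arange(48)) + rng.normal(0, 5, 48),
                    index=months)

fit = ARIMA(defects, order=(1, 1, 1)).fit()  # placeholder ARIMA(1,1,1)
print(fit.forecast(steps=3))                 # next three months of reports
```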