Publications


Featured research published by Wenhua Hu.


Software Engineering and Knowledge Engineering | 2016

Effectiveness of Human Error Taxonomy during Requirements Inspection: An Empirical Investigation.

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

Software inspections are an effective method for achieving high-quality software. We hypothesize that inspections focused on identifying errors (i.e., the root causes of faults) are better at finding requirements faults than inspection methods that rely on checklists created from lessons learned from historical fault data. Our previous work verified that error-based inspections guided by an initial requirements error taxonomy (RET) performed significantly better than standard fault-based inspections. However, the RET lacked an underlying human information processing model grounded in Cognitive Psychology research. The current research reports results from a systematic literature review (SLR) of the Software Engineering and Cognitive Science literature that produced a Human Error Taxonomy (HET) of requirements-phase human errors. The major contribution of this paper is a report of a control group study that compared the fault detection effectiveness and usefulness of the HET with the previously validated RET. Results of this study show that subjects using the HET were not only more effective at detecting faults but also found faults faster. Post-hoc analysis of the HET also revealed meaningful insights into the most commonly occurring human errors at different points during requirements development. The results provide motivation and feedback for further refining the HET and creating formal inspection tools based on it. Keywords: human error; requirements inspection; taxonomy; empirical study


Empirical Software Engineering and Measurement | 2016

Detection of Requirement Errors and Faults via a Human Error Taxonomy: A Feasibility Study

Wenhua Hu; Jeffrey C. Carver; Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Background: Developing correct software requirements is important for overall software quality. Most existing quality improvement approaches focus on the detection and removal of faults (i.e., problems recorded in a document) as opposed to identifying the underlying errors that produced those faults. Accordingly, developers are likely to make the same errors in the future and fail to recognize other existing faults with the same origins. Therefore, we have created a Human Error Taxonomy (HET) to help software engineers improve their software requirement specification (SRS) documents. Aims: The goal of this paper is to analyze whether the HET is useful for classifying errors and for guiding developers to find additional faults. Methods: We conducted an empirical study in a classroom setting to evaluate the usefulness and feasibility of the HET. Results: First, software developers were able to employ error categories in the HET to identify and classify the underlying sources of faults identified during the inspection of SRS documents. Second, developers were able to use that information to detect additional faults that had gone unnoticed during the initial inspection. Finally, the participants had a positive impression of the usefulness of the HET. Conclusions: The HET is effective for identifying and classifying requirements errors and faults, thereby helping to improve the overall quality of the SRS and the software.


International Symposium on Software Reliability Engineering | 2016

Using a Cognitive Psychology Perspective on Errors to Improve Requirements Quality: An Empirical Investigation

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

Software inspections are an effective method for early detection of faults present in software development artifacts (e.g., requirements and design documents). However, many faults are left undetected due to the lack of focus on the underlying sources of faults (i.e., what caused the injection of the fault?). To address this problem, research done by psychologists on analyzing the failures of human cognition (i.e., human errors) is used in this work to help inspectors detect errors and corresponding faults (manifestations of errors) in requirements documents. We hypothesize that fault detection performance will demonstrate significant gains when using a formal taxonomy of human errors (the underlying source of faults). This paper describes a newly developed Human Error Taxonomy (HET) and a formal Error-Abstraction and Inspection (EAI) process to improve the fault detection performance of inspectors during requirements inspection. A controlled empirical study evaluated the usefulness of the HET and EAI compared to fault-based inspection. The results verify our hypothesis and provide useful insights into commonly occurring human errors that contributed to requirements faults, along with areas in which to further refine both the HET and the EAI process.


Information & Software Technology | 2018

Development of a human error taxonomy for software requirements: A systematic literature review

Vaibhav K. Anu; Wenhua Hu; Jeffrey C. Carver; Gursimran S. Walia; Gary L. Bradshaw

Abstract Background: Human-centric software engineering activities, such as requirements engineering, are prone to error. These human errors manifest as faults. To improve software quality, developers need methods to prevent and detect faults and their sources. Aims: Human error research from the field of cognitive psychology focuses on understanding and categorizing the fallibilities of human cognition. In this paper, we applied concepts from human error research to the problem of software quality. Method: We performed a systematic literature review of the software engineering and psychology literature to identify and classify human errors that occur during requirements engineering. Results: We developed the Human Error Taxonomy (HET) by adding detailed error classes to Reason's well-known human error taxonomy of Slips, Lapses, and Mistakes. Conclusion: The process of identifying and classifying human errors provides a structured way to understand and prevent the human errors (and resulting faults) that occur during human-centric software engineering activities like requirements engineering. Software engineering can benefit from closer collaboration with cognitive psychology researchers.


Requirements Engineering: Foundation for Software Quality | 2017

Defect Prevention in Requirements Using Human Error Information: An Empirical Study

Wenhua Hu; Jeffrey C. Carver; Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Context and Motivation: The correctness of software requirements is of critical importance to the success of a software project. Problems that occur during requirements collection and specification, if not fixed early, are costly to fix later. Therefore, it is important to develop approaches that help requirements engineers not only detect, but also prevent requirements problems. Because requirements engineering is a human-centric activity, we can build upon developments from the field of human cognition. Question/Problem: Human errors are the failings of human cognition during the process of solving, planning, or executing a task. We have employed research about human errors to describe the types of problems that occur during requirements engineering. The goal of this paper is to determine whether knowledge of human errors can serve as a fault prevention mechanism during requirements engineering. Principal ideas/results: The results of our study show that a better understanding of human errors does lead developers to insert fewer problems into their own requirements documents. Our results also indicate that different types of human error information have different impacts on fault prevention. Contribution: In this paper, we show that the use of human error information from Cognitive Psychology is useful for fault prevention during requirements engineering.


Requirements Engineering: Foundation for Software Quality | 2017

Usefulness of a Human Error Identification Tool for Requirements Inspection: An Experience Report

Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw; Wenhua Hu; Jeffrey C. Carver

Context and Motivation: Our recent work leverages Cognitive Psychology research on human errors to improve the standard fault-based requirements inspections. Question: The empirical study presented in this paper investigates the effectiveness of a newly developed Human Error Abstraction Assist (HEAA) tool in helping inspectors identify human errors to guide the fault detection during the requirements inspection. Results: The results showed that the HEAA tool, though effective, presented challenges during the error abstraction process. Contribution: In this experience report, we present major challenges during the study execution and lessons learned for future replications.


International Symposium on Software Reliability Engineering | 2016

Error Abstraction Accuracy and Fixation during Error-Based Requirements Inspections

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

Software inspections are widely used as a requirements verification technique. Our research uses the tried-and-tested perspective of cognitive failures (i.e., human errors) to improve the effectiveness of fault detection during requirements inspections. We have previously shown that inspection effectiveness can be significantly improved by augmenting the current fault-based inspection technique with the proposed Error Abstraction and Inspection process (supported by a Human Error Taxonomy). This paper investigates the impact of an inspector's ability to accurately abstract human errors on their fault-detection effectiveness.


Empirical Software Engineering | 2018

Using human error information for error prevention

Wenhua Hu; Jeffrey C. Carver; Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Developing error-free software requirements is of critical importance to the success of a software project. Problems that occur during requirements collection and specification, if not fixed early, are costly to fix later. Therefore, it is important to develop techniques that help requirements engineers detect and prevent requirements problems. As a human-centric activity, requirements engineering can be informed by psychological research about human errors, which are the failings of human cognition during the process of planning and executing a task. We have employed human error research to describe the types of problems that occur during requirements engineering. The goals of this research are: (1) to evaluate whether understanding human errors contributes to the prevention of errors and concomitant faults during requirements engineering and (2) to identify error prevention techniques used in industrial practice. We conducted a controlled classroom experiment to evaluate the benefits that knowledge of errors has on error prevention. We then analyzed data from two industrial surveys to identify specific prevention and mitigation approaches employed in practice. The classroom study showed that the better a requirements engineer understands human errors, the fewer errors and concomitant faults that engineer makes when developing a new requirements document. Furthermore, different types of human errors have different impacts on fault prevention. The industry study results identified prevention and mitigation mechanisms for each error type. Human error information is useful for fault prevention during requirements engineering. There are practices that requirements engineers can employ to prevent or mitigate specific human errors.


Empirical Software Engineering and Measurement | 2017

Issues and opportunities for human error-based requirements inspections: an exploratory study

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

[Background] Software inspections are extensively used for requirements verification. Our research uses the perspective of human cognitive failures (i.e., human errors) to improve the fault detection effectiveness of traditional fault-checklist based inspections. Our previous evaluations of a formal human error based inspection technique called Error Abstraction and Inspection (EAI) have shown encouraging results, but have also highlighted a real need for improvement. [Aims and Method] The goal of conducting the controlled study presented in this paper was to identify the specific tasks of EAI that inspectors find most difficult to perform and the strategies that successful inspectors use when performing the tasks. [Results] The results highlighted specific pain points of EAI that can be addressed by improving the training and instrumentation.


REFSQ Workshops | 2017

Understanding Human Errors In Software Requirements: An Online Survey.

Wenhua Hu; Jeffrey C. Carver; Gursimran Singh Walia; Vaibhav K. Anu; Gary L. Bradshaw

Collaboration


Top co-authors of Wenhua Hu:

Gary L. Bradshaw (Mississippi State University)
Vaibhav K. Anu (North Dakota State University)
Gursimran S. Walia (North Dakota State University)