Publication


Featured research published by Vaibhav K. Anu.


Software Engineering and Knowledge Engineering | 2016

Effectiveness of Human Error Taxonomy during Requirements Inspection: An Empirical Investigation.

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

Software inspections are an effective method for achieving high-quality software. We hypothesize that inspections focused on identifying errors (i.e., the root causes of faults) are better at finding requirements faults than inspection methods that rely on checklists built from lessons learned from historical fault data. Our previous work verified that error-based inspections guided by an initial Requirements Error Taxonomy (RET) performed significantly better than standard fault-based inspections. However, the RET lacked an underlying human information processing model grounded in cognitive psychology research. The current research reports results from a systematic literature review (SLR) of the software engineering and cognitive science literature that produced a Human Error Taxonomy (HET) containing requirements-phase human errors. The major contribution of this paper is a controlled study that compared the fault detection effectiveness and usefulness of the HET with the previously validated RET. The results show that subjects using the HET were not only more effective at detecting faults but also found faults faster. Post-hoc analysis of the HET also revealed meaningful insights into the most commonly occurring human errors at different points during requirements development. The results provide motivation and feedback for further refining the HET and creating formal inspection tools based on it. Keywords: human error; requirements inspection; taxonomy; empirical study.


Empirical Software Engineering and Measurement | 2016

Detection of Requirement Errors and Faults via a Human Error Taxonomy: A Feasibility Study

Wenhua Hu; Jeffrey C. Carver; Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Background: Developing correct software requirements is important for overall software quality. Most existing quality improvement approaches focus on the detection and removal of faults (i.e., problems recorded in a document) as opposed to identifying the underlying errors that produced those faults. Consequently, developers are likely to make the same errors in the future and to fail to recognize other existing faults with the same origins. Therefore, we have created a Human Error Taxonomy (HET) to help software engineers improve their software requirements specification (SRS) documents. Aims: The goal of this paper is to analyze whether the HET is useful for classifying errors and for guiding developers to find additional faults. Methods: We conducted an empirical study in a classroom setting to evaluate the usefulness and feasibility of the HET. Results: First, software developers were able to employ error categories in the HET to identify and classify the underlying sources of faults found during the inspection of SRS documents. Second, developers were able to use that information to detect additional faults that had gone unnoticed during the initial inspection. Finally, the participants had a positive impression of the usefulness of the HET. Conclusions: The HET is effective for identifying and classifying requirements errors and faults, thereby helping to improve the overall quality of the SRS and the software.


International Symposium on Software Reliability Engineering | 2016

Using a Cognitive Psychology Perspective on Errors to Improve Requirements Quality: An Empirical Investigation

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

Software inspections are an effective method for early detection of faults present in software development artifacts (e.g., requirements and design documents). However, many faults are left undetected due to the lack of focus on the underlying sources of faults (i.e., what caused the injection of the fault?). To address this problem, we draw on research by psychologists who analyze the failures of human cognition (i.e., human errors) to help inspectors detect errors and corresponding faults (manifestations of errors) in requirements documents. We hypothesize that fault detection performance will improve significantly when inspectors use a formal taxonomy of human errors (the underlying source of faults). This paper describes a newly developed Human Error Taxonomy (HET) and a formal Error Abstraction and Inspection (EAI) process to improve the fault detection performance of inspectors during requirements inspections. A controlled empirical study evaluated the usefulness of the HET and EAI compared to fault-based inspection. The results verify our hypothesis and provide useful insights into commonly occurring human errors that contributed to requirements faults, along with areas in which to further refine both the HET and the EAI process.


Information & Software Technology | 2018

Development of a human error taxonomy for software requirements: A systematic literature review

Vaibhav K. Anu; Wenhua Hu; Jeffrey C. Carver; Gursimran S. Walia; Gary L. Bradshaw

Background: Human-centric software engineering activities, such as requirements engineering, are prone to error. These human errors manifest as faults. To improve software quality, developers need methods to prevent and detect faults and their sources. Aims: Human error research from the field of cognitive psychology focuses on understanding and categorizing the fallibilities of human cognition. In this paper, we applied concepts from human error research to the problem of software quality. Method: We performed a systematic literature review of the software engineering and psychology literature to identify and classify human errors that occur during requirements engineering. Results: We developed the Human Error Taxonomy (HET) by adding detailed error classes to Reason's well-known human error taxonomy of Slips, Lapses, and Mistakes. Conclusion: The process of identifying and classifying human errors provides a structured way to understand and prevent the human errors (and resulting faults) that occur during human-centric software engineering activities like requirements engineering. Software engineering can benefit from closer collaboration with cognitive psychology researchers.
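The taxonomy's two-tier structure can be illustrated with a minimal sketch. This is not the authors' implementation: the top-level categories follow Reason's definitions as described in the abstract, but the example error classes attached to each category here are hypothetical placeholders.

```python
# Minimal sketch of a two-tier human error taxonomy in the style of the
# HET: Reason's top-level categories (Slips, Lapses, Mistakes), each
# holding detailed error classes. Example classes are placeholders, not
# the actual classes produced by the authors' SLR.
HET = {
    "Slip": {
        "description": "Correct plan, but execution goes wrong (attentional failure)",
        "example_classes": ["clerical error"],
    },
    "Lapse": {
        "description": "Correct plan, but a step is forgotten (memory failure)",
        "example_classes": ["loss of information from memory"],
    },
    "Mistake": {
        "description": "The plan itself is wrong (planning failure)",
        "example_classes": ["application knowledge error"],
    },
}

def top_level_category(error_class: str) -> str:
    """Map a detailed error class back to its Reason category."""
    for category, info in HET.items():
        if error_class in info["example_classes"]:
            return category
    return "unclassified"
```

A structure like this lets an inspector record a detailed error class during error abstraction and still roll results up to the three Reason categories for analysis.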


Requirements Engineering: Foundation for Software Quality | 2017

Defect Prevention in Requirements Using Human Error Information: An Empirical Study

Wenhua Hu; Jeffrey C. Carver; Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Context and Motivation: The correctness of software requirements is of critical importance to the success of a software project. Problems that occur during requirements collection and specification, if not fixed early, are costly to fix later. Therefore, it is important to develop approaches that help requirements engineers not only detect, but also prevent, requirements problems. Because requirements engineering is a human-centric activity, we can build upon developments from the field of human cognition. Question/Problem: Human errors are the failings of human cognition during the process of solving, planning, or executing a task. We have employed research about human errors to describe the types of problems that occur during requirements engineering. The goal of this paper is to determine whether knowledge of human errors can serve as a fault prevention mechanism during requirements engineering. Principal ideas/results: The results of our study show that a better understanding of human errors does lead developers to insert fewer problems into their own requirements documents. Our results also indicate that different types of human error information have different impacts on fault prevention. Contribution: In this paper, we show that the use of human error information from cognitive psychology is useful for fault prevention during requirements engineering.


Requirements Engineering: Foundation for Software Quality | 2017

Usefulness of a Human Error Identification Tool for Requirements Inspection: An Experience Report

Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw; Wenhua Hu; Jeffrey C. Carver

Context and Motivation: Our recent work leverages cognitive psychology research on human errors to improve standard fault-based requirements inspections. Question: The empirical study presented in this paper investigates the effectiveness of a newly developed Human Error Abstraction Assist (HEAA) tool in helping inspectors identify human errors to guide fault detection during requirements inspections. Results: The results showed that the HEAA tool, though effective, presented challenges during the error abstraction process. Contribution: In this experience report, we present the major challenges encountered during the study execution and lessons learned for future replications.


Technical Symposium on Computer Science Education | 2017

Incorporating Human Error Education into Software Engineering Courses via Error-based Inspections

Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Despite the human-centric nature of the software engineering (SE) discipline, human error knowledge has been ignored by SE educators, as it is often thought of as something that belongs in the realm of psychology. The SE curriculum is also largely devoid of educational content on human errors, while other human-centric disciplines (aviation, medicine, process control) have developed human error training and other interventions. To evaluate the feasibility of using such interventions to teach students about human errors in SE, this paper describes an exploratory study of whether requirements inspections driven by human errors can deliver both requirements validation knowledge (a key industry skill) and human error knowledge to students. The results suggest that human error based inspections can enhance the fault detection abilities of students, a primary learning outcome of inspection exercises conducted in software engineering courses. Additionally, the results showed that students found human error information useful for understanding the underlying causes of requirements faults.


International Symposium on Software Reliability Engineering | 2016

Error Abstraction Accuracy and Fixation during Error-Based Requirements Inspections

Vaibhav K. Anu; Gursimran S. Walia; Wenhua Hu; Jeffrey C. Carver; Gary L. Bradshaw

Software inspections are widely used as a requirements verification technique. Our research uses the tried-and-tested perspective of cognitive failures (i.e., human errors) to improve the effectiveness of fault detection during requirements inspections. We have previously shown that inspection effectiveness can be significantly improved by augmenting the current fault-based inspection technique with our proposed Error Abstraction and Inspection process (supported by a Human Error Taxonomy). This paper investigates the impact of an inspector's ability to accurately abstract human errors on their fault-detection effectiveness.


India Software Engineering Conference | 2018

Validating Requirements Reviews by Introducing Fault-Type Level Granularity: A Machine Learning Approach

Maninder Singh; Vaibhav K. Anu; Gursimran S. Walia; Anurag Goswami

Inspections are a proven approach for improving software requirements quality. Because inspectors report both faults and non-faults (i.e., false positives) in their inspection reports, a substantial amount of work falls on the person responsible for consolidating the reports received from multiple inspectors. We aim to automate the fault-consolidation step by using supervised machine learning algorithms that can effectively isolate faults from non-faults. Three inspection studies were conducted in controlled environments to obtain real inspection data from inspectors with both industry and academic backgrounds. Next, we devised a methodology to separate faults from non-faults by first using ten individual classifiers from five different classification families to categorize different fault types (e.g., omission, incorrectness, and inconsistency). Based on the individual performance of the classifiers on each fault type, we created targeted ensembles suitable for identifying each fault type. Our analysis showed that the selected ensemble classifiers were able to separate faults from non-faults with high accuracy (85-89% for some fault types), with the notable result that in some cases individual classifiers performed better than ensembles. In general, our approach can significantly reduce the effort required to isolate faults from false positives during the fault-consolidation step of requirements inspections. Our approach also reports the likelihood of correctly classifying each fault type.
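The per-fault-type ensemble idea can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the classifiers here are stand-in stubs rather than the ten real classifiers used in the study, and the fault-type names are taken from the abstract's examples.

```python
# Sketch of per-fault-type ensembles that vote on whether a reported
# issue is a true fault or a false positive. Each fault type gets its
# own ensemble, built from the classifiers that performed best on that
# type individually. Classifiers below are placeholder stubs.
from collections import Counter

def majority_vote(predictions):
    """Combine individual classifier predictions by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical ensembles: in the study, membership would be chosen from
# the ten trained classifiers based on per-fault-type performance.
ensembles = {
    "omission": [
        lambda report: "fault",       # stub classifier A
        lambda report: "fault",       # stub classifier B
        lambda report: "non-fault",   # stub classifier C
    ],
    "incorrectness": [
        lambda report: "fault",       # stub classifier A
        lambda report: "non-fault",   # stub classifier D
        lambda report: "non-fault",   # stub classifier E
    ],
}

def classify(report, fault_type):
    """Route a reported issue to the ensemble for its fault type."""
    votes = [clf(report) for clf in ensembles[fault_type]]
    return majority_vote(votes)
```

The routing step mirrors the paper's design choice: rather than one global ensemble, each fault type is handled by the combination of classifiers best suited to it, which is also why individual classifiers can sometimes beat an ensemble for a given type.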


Empirical Software Engineering | 2018

Using human error information for error prevention

Wenhua Hu; Jeffrey C. Carver; Vaibhav K. Anu; Gursimran S. Walia; Gary L. Bradshaw

Developing error-free software requirements is of critical importance to the success of a software project. Problems that occur during requirements collection and specification, if not fixed early, are costly to fix later. Therefore, it is important to develop techniques that help requirements engineers detect and prevent requirements problems. As a human-centric activity, requirements engineering can be informed by psychological research about human errors, which are the failings of human cognition during the process of planning and executing a task. We have employed human error research to describe the types of problems that occur during requirements engineering. The goals of this research are: (1) to evaluate whether understanding human errors contributes to the prevention of errors and concomitant faults during requirements engineering, and (2) to identify error prevention techniques used in industrial practice. We conducted a controlled classroom experiment to evaluate the benefits that knowledge of errors has on error prevention. We then analyzed data from two industrial surveys to identify specific prevention and mitigation approaches employed in practice. The classroom study showed that the better a requirements engineer understands human errors, the fewer errors and concomitant faults that engineer makes when developing a new requirements document. Furthermore, different types of human errors have different impacts on fault prevention. The industry study results identified prevention and mitigation mechanisms for each error type. Human error information is useful for fault prevention during requirements engineering, and there are practices that requirements engineers can employ to prevent or mitigate specific human errors.

Collaboration


Top co-authors of Vaibhav K. Anu:

Gary L. Bradshaw (Mississippi State University)
Gursimran S. Walia (North Dakota State University)
Wenhua Hu (University of Alabama)
Anurag Goswami (North Dakota State University)
Maninder Singh (North Dakota State University)