Publication


Featured research published by Sunil Kumar Khatri.


International Journal of Reliability, Quality and Safety Engineering | 2007

Software Reliability Growth Modelling for Errors of Different Severity Using Change Point

P. K. Kapur; Archana Kumar; Kalpana Yadav; Sunil Kumar Khatri

During the last two decades, many researchers have analyzed the reliability growth of software during the testing and operational phases and proposed mathematical models to estimate and predict reliability measures. During software testing, on the detection of a failure, the fault that caused it is isolated and removed. Most existing research in this area assumes that a similar testing effort and strategy are required for each debugging activity. In practice, however, this may not hold: different faults may require different amounts of testing effort and different testing strategies for their removal. To incorporate this phenomenon in software reliability modeling, faults are classified into categories such as simple, hard and/or complex; this categorization can be extended to n types of faults. Some existing research incorporates this phenomenon by assuming that the fault removal rate differs across fault types but remains constant over the whole testing period. This assumption, too, may not apply in a general testing environment: it is a common observation that the fault detection and/or removal rate changes as testing progresses. This change can be due to a number of reasons, among them a changing testing environment, changing testing strategy, and the skill, motivation and composition of the testing and debugging personnel. In this paper we formulate a model for a software system developed for a safety-critical application under a specific testing environment. The model is validated on real-life data sets.
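For orientation, a representative change-point formulation of the kind this abstract describes can be sketched as below. The symbols are assumed for illustration and are not taken from the paper: a_i is the fault content of severity class i, b_{i1} and b_{i2} are the removal rates before and after the change point \tau, and m(t) is the expected cumulative number of removals.

```latex
% Illustrative change-point NHPP mean value function (symbols assumed,
% not the paper's exact formulation):
m_i(t) =
\begin{cases}
  a_i \left( 1 - e^{-b_{i1} t} \right), & 0 \le t \le \tau, \\
  a_i \left( 1 - e^{-b_{i1} \tau - b_{i2} (t - \tau)} \right), & t > \tau,
\end{cases}
\qquad
m(t) = \sum_{i} m_i(t).
```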


International Journal of Reliability, Quality and Safety Engineering | 2008

Software Reliability Assessment Using Artificial Neural Network Based Flexible Model Incorporating Faults of Different Complexity

P. K. Kapur; Sunil Kumar Khatri; Mashaallah Basirzadeh

With growing demand for zero-defect software, predicting the reliability of software products is gaining importance. Software reliability growth models (SRGMs) are used to estimate the reliability of a software product. A large number of SRGMs exist; however, none of them works well across different environments. Recently, artificial neural networks have been applied to software reliability assessment and software reliability growth prediction. Most existing research in the literature assumes that a similar testing effort is required for each debugging activity. In practice, however, different amounts of testing effort may be required to detect and remove faults of different types, depending on their complexity. Consequently, faults are classified into three categories on the basis of complexity: simple, hard and complex. In this paper we apply neural-network methods to build SRGMs that consider faults of different complexity. A logistic learning function, accounting for the expertise gained by the testing team, is used in the proposed model. The model assumes that the removal process for simple faults grows uniformly, whereas for hard and complex faults it follows a logistic growth curve, reflecting the fact that the removal team's learning grows as testing progresses. The proposed model has been validated, evaluated and compared with other NHPP models on two failure/fault-removal data sets taken from real software development projects. The results show that the proposed model with a logistic function provides improved goodness-of-fit for software failure/fault-removal data.
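As a rough illustration of the data-driven approach (not the paper's actual architecture), a small feed-forward network with logistic activations can be fitted to cumulative fault-removal counts. The data and hyper-parameters below are hypothetical, and scikit-learn is assumed to be available:

```python
# Minimal sketch: fit cumulative fault-removal counts against testing time
# with a small feed-forward network using a logistic activation, mirroring
# the logistic-growth assumption described in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical cumulative fault counts observed over 12 weeks of testing.
weeks = np.arange(1, 13).reshape(-1, 1)
removed = np.array([4, 9, 15, 24, 31, 37, 45, 50, 53, 56, 58, 59])

model = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(weeks / weeks.max(), removed)  # normalize time to [0, 1]

print(model.predict(np.array([[13 / weeks.max()]])))  # forecast for week 13
```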


International Journal of Systems Assurance Engineering and Management | 2014

Critical success factor utility based tool for ERP health assessment: a general framework

P. K. Kapur; Shruti Nagpal; Sunil Kumar Khatri; V. S. Sharma Yadavalli

Enterprise resource planning (ERP) systems present themselves as a practical answer to linking all enterprise-wide operations. Despite robust ERP software packages being available worldwide and organizations making heavy investments in them, many ERP implementations fail or do not attain their objectives, leaving a lasting mark on organizational budgets. Past and current researchers have exhaustively studied and listed critical success factors (CSFs) for ERP implementations. In this paper, we present a framework for a CSF-based utility tool that can be used to assess ERP health at various stages of an ERP implementation. The tool enables implementers to monitor CSFs effectively through the stages of implementation: any dip in a CSF signals implementers to work on that particular factor and thus bring the ongoing implementation back on the success track.
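A minimal sketch of how such a utility-based health check might work; the CSFs, weights, scores and threshold below are entirely hypothetical, and the paper's actual utility functions are not reproduced:

```python
# Hypothetical CSF weights; a real assessment would derive these from the
# framework described in the paper.
CSF_WEIGHTS = {"top_management_support": 0.30, "user_training": 0.25,
               "project_management": 0.25, "vendor_support": 0.20}

def erp_health(scores, threshold=0.6):
    """Return a weighted health utility in [0, 1] and flag troubled CSFs."""
    health = sum(CSF_WEIGHTS[k] * scores[k] for k in CSF_WEIGHTS)
    trouble = [k for k in CSF_WEIGHTS if scores[k] < threshold]
    return health, trouble

health, trouble = erp_health({"top_management_support": 0.8,
                              "user_training": 0.4,
                              "project_management": 0.7,
                              "vendor_support": 0.9})
print(health, trouble)  # 0.695 ['user_training'] -> signal to implementers
```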


International Conference on Computer Communications | 2015

Improving patient matching: Single patient view for Clinical Decision Support using Big Data analytics

Reena Duggal; Sunil Kumar Khatri; Balvinder Shukla

In this era of open information and data explosion, the healthcare industry is at a tipping point, and big data plays a major role in this change. Among the biggest challenges the industry faces as it steps up digitization are the sheer size of the data, the speed at which it is generated, and the complexity arising from multiple, non-standard formats. Patient data residing in disparate systems is a roadblock to having the right information at the right time. Clinical decision support systems need a single view of the patient to support better diagnoses and treatments. Patient identification and matching is a critical challenge in interfacing with the Electronic Health Record (EHR): documents and results from disparate systems such as laboratory, pharmacy and claims systems must be linked to the correct patient record. As healthcare organizations share patient information both internally and externally, patient records from numerous disparate databases must be connected effectively to guarantee that clinicians' decisions are based on the correct records, while minimizing duplicate information and overhead. This gives rise to the need for improved patient matching through a single patient view. This paper studies the problem of matching patient records from disparate systems and proposes a solution using big data analytic techniques, namely fuzzy matching algorithms and MapReduce, for better clinical decision support. The main benefits of the proposed system are scalability, cost-effectiveness, the flexibility to use any fuzzy algorithm, and the ability to handle any data source.
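A minimal sketch of the matching idea: block candidate records, then score pairs within a block with a fuzzy string comparison. The record fields, threshold and map/reduce-style structure are illustrative only, not the paper's implementation:

```python
# Block records by a stable field (here, date of birth), then fuzzily
# compare names within each block -- a small-scale stand-in for the
# MapReduce pipeline described in the abstract.
from difflib import SequenceMatcher
from collections import defaultdict

records = [
    {"id": 1, "name": "Jon Smith",  "dob": "1980-04-12"},
    {"id": 2, "name": "John Smith", "dob": "1980-04-12"},
    {"id": 3, "name": "Jane Doe",   "dob": "1975-01-30"},
]

# "Map" step: group records into blocks to limit pairwise comparisons.
blocks = defaultdict(list)
for r in records:
    blocks[r["dob"]].append(r)

# "Reduce" step: link records in a block whose names are similar enough.
def matches(a, b, threshold=0.85):
    return SequenceMatcher(None, a["name"].lower(),
                           b["name"].lower()).ratio() >= threshold

for block in blocks.values():
    for i in range(len(block)):
        for j in range(i + 1, len(block)):
            if matches(block[i], block[j]):
                print("link", block[i]["id"], block[j]["id"])  # -> link 1 2
```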


International Conference on Advanced Computing | 2012

Flexible Discrete Software Reliability Growth Model for Distributed Environment Incorporating Two Types of Imperfect Debugging

Sunil Kumar Khatri; P. K. Kapur; Prashant Johri

The literature contains several software reliability growth models developed to monitor reliability growth during the testing phase of software development. These models typically use calendar or execution time and hence are known as continuous-time SRGMs. Comparatively little has been done to develop discrete SRGMs, which use the number of test cases or test runs as the unit of testing. The debugging process is usually imperfect: not all faults are removed during testing, because they are difficult to locate or because new faults are introduced. In a real software development environment, the number of failures observed need not equal the number of faults removed; if more failures are observed than faults removed, we have a case of imperfect debugging. Owing to the complexity of the software system and an incomplete understanding of the requirements, specifications and structure, the testing team may not be able to remove a fault perfectly on detecting a failure, and the original fault may remain or be replaced by another fault. In this paper, we discuss a discrete software reliability growth model for a distributed system that considers two types of imperfect debugging: faults that are not always corrected/removed when detected, and fault generation. The proposed model assumes that the software system consists of a finite number of reused and newly developed sub-systems. The reused sub-systems stabilize over time, so fault severity does not affect their reliability growth and the growth is uniform; for the newly developed sub-systems it does. For a newly developed component, the removal process is assumed to follow a logistic growth curve, since the removal team's learning grows as testing progresses. The fault-removal phenomena for reused and newly developed sub-systems are modeled separately and summed to obtain the total fault-removal phenomenon of the software system. The model has been validated on two software data sets and is shown to fare better than the existing one.
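A rough sketch of the structure this abstract describes, with all symbols assumed rather than taken from the paper: n is the number of test runs, a_r and a_d the fault contents of the reused and newly developed sub-systems, b the removal rate, \beta the learning parameter, p the probability of perfect removal, and \alpha the error-generation rate.

```latex
% Illustrative discrete mean value function: exponential (uniform) growth
% for reused components, logistic growth for newly developed ones, with the
% removal rate damped by p and the fault content inflated by error generation.
m(n) = \underbrace{\frac{a_r}{1-\alpha}\Bigl(1 - (1 - p b)^{n}\Bigr)}_{\text{reused}}
     + \underbrace{\frac{a_d}{1-\alpha}\cdot
       \frac{1 - (1 - p b)^{n}}{1 + \beta (1 - p b)^{n}}}_{\text{newly developed}}
```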


International Journal of Reliability, Quality and Safety Engineering | 2011

Enhancing Software Reliability of a Complex Software System Architecture Using Artificial Neural-Networks Ensemble

P. K. Kapur; V. S. Sarma Yadavalli; Sunil Kumar Khatri; Mashaallah Basirzadeh

Modeling of software reliability has gained importance in recent years. The use of software in critical applications has led to a tremendous increase in work on software reliability growth modeling. A number of analytic software reliability growth models (SRGMs) exist in the literature. They are based on particular assumptions, and none of them works well across different environments. The current software reliability literature is inconclusive as to which models and techniques are best, and some researchers believe that each organization needs to try several approaches to determine what works best for it. Data-driven artificial neural network (ANN) based models, on the other hand, can provide better software reliability estimation. In this paper we present a new approach: building an ensemble of different ANNs to improve the accuracy of estimation for complex software architectures. The model has been validated on two data sets taken from the literature. The results show a fair improvement in forecasting software reliability over individual neural-network-based models.
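To make the ensemble idea concrete, several small networks with different architectures can be trained on the same failure history and their forecasts averaged. The data, architectures and hyper-parameters below are hypothetical, not the paper's configuration; scikit-learn is assumed:

```python
# Sketch of the ensemble idea only: train a few differently shaped networks
# on the same cumulative failure data and average their forecasts.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(1, 16).reshape(-1, 1) / 15.0   # normalized testing time
y = np.array([3, 7, 12, 18, 23, 29, 33, 38, 41, 44, 47, 49, 51, 52, 53])

nets = [MLPRegressor(hidden_layer_sizes=h, activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=s)
        for s, h in enumerate([(4,), (8,), (4, 4)])]
for net in nets:
    net.fit(t, y)

x_next = np.array([[16 / 15.0]])
print(np.mean([net.predict(x_next)[0] for net in nets]))  # ensemble forecast
```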


Long Island Systems, Applications and Technology Conference | 2015

Comparative study of ERP implementation strategies

Shruti Nagpal; Sunil Kumar Khatri; Ashok Kumar

Enterprise Resource Planning (ERP) software has come a long way since its inception as the inventory management and control systems of the 1960s. The value of an ERP implementation strategy has been stressed over the years, and previous researchers have recorded it as an important critical success factor (CSF). Traditional ERP implementation followed a more or less sequential approach akin to the waterfall model. Researchers have since categorized ERP implementation methodologies and developed frameworks based on varied implementation observations. Given the variety of methodologies and frameworks available, real-world ERP implementation demands the development and adoption of a strategy as a guiding principle for the underlying methods. This paper suggests a new classification of ERP implementation strategies as custom-made, vendor-specific or consultant-specific. It also conducts a comparative study of leading vendor-specific ERP implementation methodologies along with example cases, and then discusses how the principles of agile methodology, as laid down in the Agile Manifesto [1], are being incorporated into ERP implementations.


International Journal of Reliability, Quality and Safety Engineering | 2014

An Assessment of Testing Cost with Effort-Dependent FDP and FCP under Learning Effect: A Genetic Algorithm Approach

Vijay Kumar; Sunil Kumar Khatri; Hitesh Dua; Manisha Sharma; Paridhi Mathur

Software testing involves verification and validation of the software against the requirements elicited from customers in earlier phases, and it subsequently increases software reliability. Around half of the resources, such as manpower and CPU time, are consumed, and a major portion of the total development cost is incurred, in the testing phase, making it the most crucial and time-consuming phase of the software development life cycle (SDLC). The fault detection process (FDP) and fault correction process (FCP) are important processes within the SDLC. A number of software reliability growth models (SRGMs) have been proposed over the last four decades to capture the time lag between detected and corrected faults, but most are discussed under a static environment. The purpose of this paper is to allocate resources optimally so as to minimize cost during the testing phase, using the FDP and FCP under a dynamic environment. An optimization policy based on optimal control theory is proposed for resource allocation with the objective of minimizing cost. A genetic algorithm is then applied to obtain the optimal detection and correction efforts that minimize the cost. A numerical example is given in support of the theoretical results. The experimental results help the project manager identify the contribution of the model parameters and their weights.
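A toy genetic algorithm of the general kind described, minimizing an illustrative cost function over detection effort wd and correction effort wc. The cost coefficients, functional form and GA settings are assumptions for the sketch, not the paper's model:

```python
# Toy GA: selection keeps the fittest half, crossover averages two parents,
# mutation perturbs one coordinate. Cost = effort spent plus exponential
# penalties for faults left undetected/uncorrected (illustrative only).
import math
import random

random.seed(0)

def cost(wd, wc):
    return (3.0 * wd + 5.0 * wc
            + 50.0 * math.exp(-0.05 * wd) + 80.0 * math.exp(-0.08 * wc))

def evolve(pop_size=30, generations=100, bounds=(0.0, 100.0)):
    pop = [(random.uniform(*bounds), random.uniform(*bounds))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: cost(*ind))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # midpoint crossover
            if random.random() < 0.3:                     # occasional mutation
                i = random.randrange(2)
                child[i] = min(max(child[i] + random.gauss(0, 2.0),
                                   bounds[0]), bounds[1])
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda ind: cost(*ind))

wd, wc = evolve()
print(wd, wc, cost(wd, wc))  # near-optimal detection/correction efforts
```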


International Conference on Software Engineering | 2012

Software Reliability Growth Model with testing effort using learning function

Sunil Kumar Khatri; Deepak Kumar; Asit Dwivedi; Nitika Mrinal

Software reliability growth models have been proposed in the literature to measure software quality and to release software at minimum cost. Testing is an important activity for finding faults during the software development life cycle of integrated software; it can be defined as the execution of a program to find faults that may have been introduced during development. The testing team may not be able to remove a fault perfectly on detecting a failure, and the original fault may remain or be replaced by another fault. The former phenomenon is known as imperfect fault removal; the latter is called error generation. In this paper, we propose a new SRGM with these two types of imperfect debugging and with testing effort, using a learning function that reflects the expertise gained by the testing team, which depends on the software's complexity, the skills of the debugging team, the available manpower and the software development environment. The model is estimated on real data sets and compared with other existing models; the estimation results show the comparative performance and applicability of different SRGMs with testing effort.
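To show the general structure such models share, one common way of writing them is sketched below; the symbols (W(t) for cumulative testing effort, p for the probability of perfect removal, \alpha for the error-generation rate, \beta for the learning parameter) are assumed, not taken from the paper.

```latex
% Illustrative structure: removal intensity is proportional to the remaining
% fault content, scaled by testing effort and a logistic learning function;
% error generation feeds removed faults back into the fault content a(t).
\frac{dm(t)}{dt} = p \, b\!\left(W(t)\right) W'(t) \bigl[ a(t) - m(t) \bigr],
\qquad
b(W) = \frac{b}{1 + \beta e^{-bW}},
\qquad
a(t) = a + \alpha \, m(t).
```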


International Conference on Computer Communications | 2015

Unravelling unstructured data: A wealth of information in big data

Mona Tanwar; Reena Duggal; Sunil Kumar Khatri

Big data is data of high volume and high variety, produced or generated at high velocity, that cannot be stored, managed, processed or analyzed using existing traditional software tools, techniques and architectures. Big data is associated with many challenges, such as scale, heterogeneity, speed and privacy, but it presents opportunities as well. Potentially valuable information is locked inside big data and, if properly leveraged, can make a huge difference to a business. With the help of big data analytics, meaningful insights can be extracted from big data, which is heterogeeous in nature, comprising structured, unstructured and semi-structured content. One prime challenge in big data analytics is that nearly 95% of data is unstructured. This paper describes what big data and big data analytics are and reviews different techniques and approaches to analyzing unstructured data. It emphasizes the importance of analyzing unstructured data alongside structured data in business to extract holistic insights, and highlights the need for appropriate and efficient analytical methods for knowledge discovery from huge volumes of heterogeneous data in unstructured formats.

Collaboration


Dive into Sunil Kumar Khatri's collaborations.

Top Co-Authors

Rana Majumdar

Guru Gobind Singh Indraprastha University

Dolly Sharma

University of Connecticut

Kamaldeep Kaur

Guru Gobind Singh Indraprastha University
