
Publication


Featured research published by Ritu Sibal.


International Conference on Computer and Communication Technology | 2011

A genetic algorithm based approach for prioritization of test case scenarios in static testing

Sangeeta Sabharwal; Ritu Sibal; Chayanika Sharma

White box testing is a test technique that takes into account program code, code structure and internal design flow. White box testing is primarily of two kinds: static and structural. Whereas static testing requires only the source code of the product, not the binaries or executables, structural testing actually runs tests on the built product. In this paper, we propose a technique for optimizing static testing efficiency by identifying the critical path clusters using a genetic algorithm. The testing efficiency is optimized by applying the genetic algorithm to the test data. The test case scenarios are derived from the source code. The information flow metric is adopted in this work for calculating the information flow complexity associated with each node of the control flow graph generated from the source code. This research paper is an extension of our previous research paper [18].
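The paper's exact weighting scheme and GA encoding are not reproduced here, but as a rough illustration of a per-node information flow computation on a control flow graph, the sketch below assumes a Henry-Kafura style formulation, IF(n) = (fan-in(n) x fan-out(n))^2; the metric actually used in the paper may be weighted differently.

```python
# Sketch: per-node information flow complexity on a control flow graph (CFG).
# Assumes a Henry-Kafura style formulation, IF(n) = (fan_in(n) * fan_out(n))**2;
# the weighting actually used in the paper may differ.

from collections import defaultdict

def information_flow(edges):
    """edges: iterable of (src, dst) pairs of CFG node ids."""
    fan_in, fan_out = defaultdict(int), defaultdict(int)
    nodes = set()
    for src, dst in edges:
        fan_out[src] += 1
        fan_in[dst] += 1
        nodes.update((src, dst))
    return {n: (fan_in[n] * fan_out[n]) ** 2 for n in nodes}

if __name__ == "__main__":
    cfg = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
    print(information_flow(cfg))  # {1: 0, 2: 4, 3: 1, 4: 1, 5: 0}
```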


International Conference on Computer and Communication Technology | 2010

Prioritization of test case scenarios derived from activity diagram using genetic algorithm

Sangeeta Sabharwal; Ritu Sibal; Chayanika Sharma

Software testing involves identifying the test cases that discover errors in the program. However, exhaustive testing is rarely possible and is very time consuming. In this paper, software testing efficiency is optimized by identifying the critical path clusters. The test case scenarios are derived from the activity diagram and the testing efficiency is optimized by applying the genetic algorithm to the test data. The information flow metric is adopted in this work for calculating the information flow complexity associated with each node of the activity diagram.
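As a loose illustration of how a genetic algorithm can prioritize scenario orderings, the sketch below evolves permutations of scenario ids; the scenario weights, fitness function and GA parameters are illustrative assumptions, not the paper's.

```python
# Sketch: GA-based prioritization of test scenarios derived from an activity
# diagram. Chromosome = an ordering (permutation) of scenario ids; fitness
# rewards orderings that schedule high-complexity scenarios earlier.
# Scenario weights, population size and rates are illustrative assumptions.

import random

SCENARIO_WEIGHT = {"S1": 9, "S2": 4, "S3": 16, "S4": 1, "S5": 25}  # e.g. summed node IF values

def fitness(order):
    n = len(order)
    # position-weighted sum: earlier positions carry larger multipliers
    return sum((n - i) * SCENARIO_WEIGHT[s] for i, s in enumerate(order))

def crossover(p1, p2):
    # order crossover (OX): copy a slice from p1, fill the rest in p2's order
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [s for s in p2 if s not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def prioritize(generations=50, pop_size=20):
    pop = [random.sample(list(SCENARIO_WEIGHT), len(SCENARIO_WEIGHT))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(prioritize())  # tends toward ['S5', 'S3', 'S1', 'S2', 'S4']
```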


Proceedings of the CUBE International Information Technology Conference | 2012

Prioritization of test scenarios derived from UML activity diagram using path complexity

Preeti Kaur; Priti Bansal; Ritu Sibal

This paper presents a novel approach for prioritizing test scenarios generated from a UML 2.0 activity diagram using path complexity. The activity diagram is used because it is available at an early stage of the software development life cycle, allowing faults to be detected early and hence reducing the overall time and effort required for testing. In the proposed approach, the activity diagram is converted into a control flow graph and test scenarios are then derived from it using the basis path method. The methodology adopted for prioritizing test scenarios is based on path complexity, using the concepts of path length, the information flow metric, predicate nodes and multiple condition coverage.
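For context, the number of basis paths of a control flow graph is given by its cyclomatic complexity, V(G) = E - N + 2. The sketch below combines that with a toy per-path score; the weighting of path length and predicate nodes is an assumption, and the paper also folds in the information flow metric and multiple condition coverage.

```python
# Sketch: number of basis paths via cyclomatic complexity, plus a simple
# per-path complexity score. The scoring (path length + weighted predicate
# nodes) is illustrative; the paper combines more factors.

def cyclomatic_complexity(edges, nodes):
    # V(G) = E - N + 2 for a single connected control flow graph
    return len(edges) - len(nodes) + 2

def path_score(path, predicate_nodes):
    length = len(path)
    predicates = sum(1 for n in path if n in predicate_nodes)
    return length + 2 * predicates  # assumed weighting

edges = [(1, 2), (2, 3), (2, 4), (3, 6), (4, 5), (5, 6)]
nodes = {1, 2, 3, 4, 5, 6}
predicate_nodes = {2}

print(cyclomatic_complexity(edges, nodes))  # 2 independent paths
paths = [[1, 2, 3, 6], [1, 2, 4, 5, 6]]
print(sorted(paths, key=lambda p: path_score(p, predicate_nodes), reverse=True))
```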


Conference on Advanced Information Systems Engineering | 1999

Modelling Method Heuristics for Better Quality Products

Naveen Prakash; Ritu Sibal

Development methods contain heuristics and constraints that help in producing good quality products. Whereas CASE tools enforce method constraints, they rarely support heuristic checking. This paper develops a generic quality model, capable of handling both method constraints and heuristics, which forms the basis of a uniform mechanism for building quality products. The model is metric based, hierarchical in nature, and links metrics to the developmental decisions that are available in a method. The use of this model and the associated quality assessment process is demonstrated through an example of the Yourdon method.


International Conference on Communication, Information and Computing Technology | 2012

Software complexity: A fuzzy logic approach

Sangeeta Sabharwal; Ritu Sibal; Preeti Kaur

Software complexity is one of the important quality attributes, and predicting complexity is a difficult task for software engineers. Current measures can be used to compute complexity, but these methods are not sufficient. New methods and paradigms are being sought for predicting complexity, because complexity prediction can help in estimating many other quality attributes such as testability and maintainability. The main goal of this paper is to explore the role of new paradigms, such as fuzzy logic, in complexity prediction. In this paper we propose a fuzzy logic based approach to predict software complexity.
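To make the idea concrete, here is a minimal fuzzy inference sketch for a complexity score; the inputs, membership functions, rule base and output levels are all illustrative assumptions rather than the paper's actual model.

```python
# Sketch: a tiny fuzzy-logic complexity predictor in the spirit of the paper.
# Inputs (size, cyclomatic complexity), membership functions, the rule base
# and the crisp output levels are all illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b (shoulders kept simple)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_complexity(loc, cyclomatic):
    # fuzzify the inputs
    loc_low, loc_high = tri(loc, 0, 0, 300), tri(loc, 100, 500, 500)
    cc_low, cc_high = tri(cyclomatic, 0, 0, 10), tri(cyclomatic, 5, 20, 20)

    # rule base (min as AND), with Sugeno-style crisp consequents
    rules = [
        (min(loc_low, cc_low), 20),    # IF size low AND cc low THEN complexity ~20
        (min(loc_low, cc_high), 60),
        (min(loc_high, cc_low), 50),
        (min(loc_high, cc_high), 90),
    ]
    # weighted-average defuzzification
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(predict_complexity(loc=250, cyclomatic=12))  # crisp complexity score
```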


Archive | 2019

Software Vulnerability Prioritization: A Comparative Study Using TOPSIS and VIKOR Techniques

Ruchi Sharma; Ritu Sibal; Sangeeta Sabharwal

The ever-mounting existence of security vulnerabilities in software is an inevitable challenge for organizations. Additionally, developers have to operate within limited budgets while meeting deadlines, so they need to prioritize their vulnerability responses. In this paper, we propose an approach for vulnerability response prioritization based on "closeness to the ideal", using the TOPSIS and VIKOR methods. Both of these techniques employ an aggregating function to rank the desired alternatives. The VIKOR method determines a compromise solution on the basis of a measure of closeness to a single ideal solution, while the TOPSIS method determines a feasible solution by taking into account the shortest distance from the positive ideal solution and the maximum distance from the negative ideal solution. The two methods share some significant similarities and differences. A comparative analysis of the two methods is carried out by applying them to real-life software vulnerability datasets for vulnerability prioritization.
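TOPSIS itself is a standard multi-criteria decision technique; a minimal sketch of its ranking step follows, with illustrative criteria, weights and data rather than the vulnerability datasets used in the study.

```python
# Sketch: TOPSIS ranking of vulnerabilities from a decision matrix.
# The criteria, weights and the tiny example matrix are illustrative.

import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    # vector-normalize each criterion column, then apply weights
    norm = m / np.sqrt((m ** 2).sum(axis=0))
    v = norm * np.asarray(weights, dtype=float)
    # positive/negative ideal solutions per criterion
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)  # closeness coefficient, higher ranks first

# rows: vulnerabilities; columns (assumed): CVSS score, exploitability, asset value
scores = topsis(
    matrix=[[9.8, 0.9, 7], [6.5, 0.4, 9], [7.2, 0.7, 5]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, True, True],
)
print(np.argsort(-scores))  # vulnerability indices in priority order
```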


World Wide Web | 2018

A framework for classifying trust for online systems

Anuradha Yadav; Shampa Chakraverty; Ritu Sibal

There is no single universally agreed definition of trust for online systems. This has resulted in various trust metrics being proposed by researchers, with diverse methods of computing and sourcing trust information. Consequently, it becomes difficult to carry out a comparative study of different types of trust in an organized manner. A classification framework of trust research is required to understand research trends and gaps in existing research. Existing trust classification frameworks are based on few dimensions and have restricted scope. There is a need for a generic and scalable method of classifying online trust approaches that can serve as a framework for researchers to analyze upcoming works and carry them forward. In this paper, we develop an open, extensible and scalable trust classification framework built on a set of orthogonal trust dimensions. After taking a bird's eye view of existing trust classifications, we use wide applicability and extensibility as guidelines to incorporate and extend some of the existing trust categories and propose new dimensions for developing a more comprehensive classification framework. We enhance the previously proposed dimensions (computational approach, computational model, trust inference, genesis of trust and trust context) by adding new trust categories within each of them. We introduce a new classification dimension based on the life cycle maturity of trust systems. Taking a cue from the study of offline trust in the social sciences, we refine implicit trust into two new sub-branches, namely relational and similarity based. We categorize existing works on online trust within the proposed framework to highlight their properties along the different trust dimensions. Finally, we present a comprehensive review of similarity based implicit and relational implicit trust metrics, the two new key trust sub-categories defined in this paper.


International Journal of Secure Software Engineering | 2016

Vulnerability Discovery Modeling for Open and Closed Source Software

Ruchi Sharma; Ritu Sibal; A. K. Shrivastava

With growing concern for security, researchers began quantitative modeling of vulnerabilities in the form of vulnerability discovery models (VDMs). These models aim at finding the trend of vulnerability discovery over time and help developers with patch management, optimal resource allocation and assessment of the associated security risks. Among the existing models for vulnerability discovery, the Alhazmi-Malaiya Logistic (AML) model is considered the best fitting model on all kinds of datasets. However, each of the existing models has a predefined basic shape and can only fit datasets following that shape, so the shape of the dataset becomes the decisive parameter for model selection. In this paper, the authors propose a new model to capture a wide variety of datasets irrespective of their shape, accounting for better goodness of fit. The proposed model has been evaluated on three real-life datasets each for open and closed source software, and the models are ranked based on their suitability to discover vulnerabilities using the normalized criteria distance (NCD) technique.
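For reference, the AML model expresses cumulative discovered vulnerabilities as Omega(t) = B / (B*C*exp(-A*B*t) + 1). A minimal fitting sketch follows, using synthetic data purely to illustrate the fitting step; the paper's proposed model and datasets are not reproduced here.

```python
# Sketch: fitting the Alhazmi-Malaiya Logistic (AML) vulnerability discovery
# model, Omega(t) = B / (B*C*exp(-A*B*t) + 1), to cumulative vulnerability
# counts. The data below is synthetic and only illustrates the fitting step.

import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    return B / (B * C * np.exp(-A * B * t) + 1.0)

t = np.arange(1, 13)                                   # months
cumulative = np.array([2, 4, 7, 12, 20, 31, 44, 56, 66, 73, 77, 79])

params, _ = curve_fit(aml, t, cumulative, p0=[0.005, 100, 1], maxfev=10000)
A, B, C = params
print(f"A={A:.4f}, B={B:.1f} (estimated total vulnerabilities), C={C:.3f}")
```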


International Symposium on Signal Processing and Information Technology | 2014

Deriving Complexity Metric based on Use Case Diagram and its validation

Sangeeta Sabharwal; Ritu Sibal; Preeti Kaur

Use case based requirements analysis has attained wide acceptance in requirements engineering. A use case diagram represents the functional requirements of the system to be developed from the users' perspective and also forms the starting point for documenting requirements using the Unified Modeling Language (UML). Therefore, the quality of the use case diagram has a substantial impact on the quality of the resulting system. This paper presents a novel approach for deriving a use case complexity metric based on the dependency and association relationships in the use case diagram. The proposed metric can be used to measure the complexity of the software to be developed and will be particularly useful for early software development estimations.
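A minimal sketch of scoring a use case diagram from its relationships follows; the relationship weights are assumptions made for illustration and are not the metric derived and validated in the paper.

```python
# Sketch: a use-case-diagram complexity score from association and dependency
# (include/extend) relationships. The relationship weights are assumptions.

ASSUMED_WEIGHTS = {"association": 1, "include": 2, "extend": 3}

def use_case_complexity(relationships):
    """relationships: list of (source, kind, target) tuples from the diagram."""
    return sum(ASSUMED_WEIGHTS[kind] for _, kind, _ in relationships)

diagram = [
    ("Customer", "association", "Place Order"),
    ("Place Order", "include", "Validate Payment"),
    ("Place Order", "extend", "Apply Discount"),
]
print(use_case_complexity(diagram))  # 1 + 2 + 3 = 6
```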


arXiv: Software Engineering | 2014

Applications of different metaheuristic techniques for finding optimal test order during integration testing of object oriented systems and their comparative study

Chayanika Sharma; Ritu Sibal

In the recent past, a number of researchers have proposed genetic algorithm (GA) based strategies for finding an optimal test order while minimizing the stub complexity during integration testing. Even though metaheuristic algorithms have a wide variety of uses in medium to large size optimization problems (21), their application to solving the class integration test order (CITO) problem (12) has not been investigated. In this research paper, we propose to solve the CITO problem using a GA based approach. We propose a class dependency graph (CDG) to model the dependencies, namely association, aggregation, composition and inheritance, between classes of a unified modeling language (UML) class diagram. In our approach, weights are assigned to the edges connecting nodes of the CDG and these weights are then used to model the cost of stubbing. Finally, we compare and discuss the empirical results of applying our approach, along with existing graph based and metaheuristic techniques, to the CITO problem and highlight the relative merits and demerits of the various techniques.
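A minimal sketch of the stubbing cost that such approaches minimize follows: given a weighted CDG, the cost of a candidate integration order is the total weight of dependencies on classes that are integrated later. The edge weights here are assumed values; a GA would search for the order minimizing this cost.

```python
# Sketch: stubbing cost of a class integration test order over a weighted
# class dependency graph (CDG). The edge weights standing in for dependency
# types (association/aggregation/composition/inheritance) are assumed values.

def stub_cost(order, weighted_edges):
    """weighted_edges: {(client, supplier): weight}. A stub is needed whenever
    a class under test depends on a class that is integrated later."""
    position = {cls: i for i, cls in enumerate(order)}
    return sum(w for (client, supplier), w in weighted_edges.items()
               if position[client] < position[supplier])

cdg = {("A", "B"): 4,   # e.g. composition (assumed weight)
       ("B", "C"): 2,   # association
       ("A", "C"): 1}

print(stub_cost(["C", "B", "A"], cdg))  # 0: every supplier is integrated first
print(stub_cost(["A", "B", "C"], cdg))  # 7: all three dependencies need stubs
```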

Collaboration


Dive into Ritu Sibal's collaborations.

Top Co-Authors

Chayanika Sharma (Netaji Subhas Institute of Technology)
Preeti Kaur (Netaji Subhas Institute of Technology)
Anuradha Yadav (Netaji Subhas Institute of Technology)
Ruchi Sharma (Netaji Subhas Institute of Technology)
Shampa Chakraverty (Netaji Subhas Institute of Technology)
Naveen Prakash (Netaji Subhas Institute of Technology)
Priti Bansal (Netaji Subhas Institute of Technology)