
Publication


Featured research published by V. B. Singh.


intelligent systems design and applications | 2012

Predicting the priority of a reported bug using machine learning techniques and cross project validation

Meera Sharma; Punam Bedi; K. K. Chaturvedi; V. B. Singh

In bug repositories, we receive a large number of bug reports on a daily basis. Managing such a large repository is a challenging job. The priority of a bug tells how important and urgent it is to fix. Bug priority can be classified into 5 levels, from P1 to P5, where P1 is the highest and P5 is the lowest priority. Correct prioritization of bugs helps in bug fix scheduling/assignment and resource allocation; failure to prioritize correctly results in delays in resolving important bugs. This requires a prediction system which can predict the priority of a newly reported bug. Cross project validation is also an important concern in empirical software engineering, where we train a classifier on one project and test its predictions on other projects. In the available literature, we found very few papers on bug priority prediction, and none of them dealt with cross project validation. In this paper, we have evaluated the performance of different machine learning techniques, namely Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbors (KNN) and Neural Network (NNet), in predicting the priority of newly reported bugs on the basis of different performance measures. We performed cross project validation for 76 cases over five datasets from the OpenOffice and Eclipse projects. The accuracy of the different machine learning techniques in predicting the priority of a reported bug, both within and across projects, is above 70%, except for the Naive Bayes technique.
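
The techniques compared in the paper are standard classifiers. As a rough, hypothetical illustration of one of them (not the authors' implementation; the bug summaries and priorities below are invented), a k-nearest-neighbours priority predictor over bag-of-words summaries might look like:

```python
from collections import Counter

# Invented toy training data: (bug summary, priority label).
train = [
    ("crash on startup data loss", "P1"),
    ("application crash null pointer", "P1"),
    ("ui freeze when saving", "P2"),
    ("slow rendering large file", "P3"),
    ("typo in preferences dialog", "P5"),
    ("minor label misalignment", "P5"),
]

def similarity(a, b):
    """Count terms shared between two bag-of-words summaries."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

def predict_priority(summary, k=3):
    """Vote among the k most similar training reports."""
    ranked = sorted(train, key=lambda ex: similarity(summary, ex[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict_priority("crash with data loss on startup"))  # → P1
```

A production classifier would use TF-IDF weighting and a proper train/test split rather than raw term overlap, but the voting idea is the same.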


International Journal of Computer Applications | 2012

E-Governance: Past, Present and Future in India

Nikita Yadav; V. B. Singh

Due to the widespread demand for E-Governance and the exponentially increasing size of data, new technologies like open source solutions and cloud computing need to be incorporated. In this paper, the latest technology trends that the governments of most countries have adopted are discussed. While working on this project, we concluded that E-Governance has made the working of government more efficient and more transparent to its citizens. We have also presented an exhaustive list of E-Governance projects currently in use in India and internationally. We have provided a mechanism for improving E-Governance by including technologies such as open source and cloud computing.


intelligent systems design and applications | 2012

Entropy based bug prediction using support vector regression

V. B. Singh; K. K. Chaturvedi

Predicting software defects is one of the key areas of research in software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches, namely code churn, past bugs, refactoring, number of authors, file size and age, etc., measuring their performance in terms of accuracy and complexity. Different mathematical models have also been developed in the literature to monitor the bug occurrence and fixing process. These existing mathematical models, named software reliability growth models, are either calendar time or testing effort dependent. The occurrence of bugs in software is mainly due to continuous changes in the software code, which make the code complex. The complexity of code changes has already been quantified in terms of entropy by Hassan [9]. In the available literature, a few authors have proposed entropy based bug prediction using the conventional simple linear regression (SLR) method. In this paper, we have proposed an entropy based bug prediction approach using support vector regression (SVR). We have compared the results of the proposed models with the existing ones in the literature and found that the proposed models are good bug predictors, as they show a significant improvement in performance.
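
The entropy measure referenced above (Hassan [9]) is the normalized Shannon entropy over the distribution of changes across files in a period; a minimal sketch, with invented change counts:

```python
import math

# Hassan-style entropy of code changes: given the number of changes per
# file in a period, the normalized Shannon entropy measures how scattered
# the change activity is (0 = all changes in one file, 1 = evenly spread).
def change_entropy(changes_per_file):
    total = sum(changes_per_file)
    probs = [c / total for c in changes_per_file if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(changes_per_file))  # normalize by max entropy

print(change_entropy([10, 10, 10, 10]))  # → 1.0 (changes spread evenly)
print(change_entropy([40, 0, 0, 0]))     # all changes in one file: minimal entropy
```

In the SVR approach, entropies computed per period become the predictor variable for the number of bugs, replacing the simple linear regression of earlier work.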


International Journal of Systems Assurance Engineering and Management | 2014

Predicting the complexity of code changes using entropy based measures

K. K. Chaturvedi; P. K. Kapur; Sameer Anand; V. B. Singh

Changes in software source code are inevitable. The source code of software is frequently changed to meet users' growing requirements. These changes occur due to bug repairs (BR), enhancement/modification (EM) and the addition of new features (NF). The maintenance task becomes quite difficult if these changes are not properly recorded. The versions of these frequent changes are maintained using the software configuration management repository. These continuous changes in the software source code make the code complex and negatively affect the quality of the product. In the literature, the complexity of code changes has been quantified using entropy based measures (Hassan, in: Proceedings of the 31st international conference on software engineering, pp. 78–88, 2009). In this paper, we have proposed a model to predict the potential complexity of code changes using entropy based measures. The predicted potential complexity of code changes helps in determining the remaining code changes yet to be diffused in the software. The proposed model has been validated using seven components of the web browser Mozilla. The model has been evaluated using goodness of fit criteria, namely R squared, bias, mean squared error, variation, and root mean squared prediction error (RMSPE). The statistical significance of the proposed model has been tested using the χ2 and Kolmogorov–Smirnov (K–S) tests. The proposed model is found statistically significant based on the associated p value of the K–S test. Further, we conclude that the rate of complexity diffusion due to BR is higher in four cases, namely Bonsai, Mozbot, tables and XUL. The other components of Mozilla, namely AUS, MXR and Tinderbox, show an increase in complexity due to EM and NF.
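
The goodness-of-fit criteria named above can be computed as in the sketch below. The formulas follow the conventional software-reliability definitions (the paper may define bias, variation and RMSPE slightly differently), and the data points are invented:

```python
import math

# Standard goodness-of-fit criteria for a fitted growth model; the paper's
# exact definitions may vary.
def fit_metrics(actual, predicted):
    n = len(actual)
    errors = [p - a for a, p in zip(actual, predicted)]
    bias = sum(errors) / n                                  # mean prediction error
    mse = sum(e * e for e in errors) / n                    # mean squared error
    variation = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
    rmspe = math.sqrt(bias ** 2 + variation ** 2)           # root mean squared prediction error
    mean_a = sum(actual) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r_squared = 1 - ss_res / ss_tot
    return {"R2": r_squared, "bias": bias, "MSE": mse,
            "variation": variation, "RMSPE": rmspe}

# Hypothetical cumulative-entropy observations vs. model predictions.
actual = [1.2, 2.1, 2.9, 3.8, 4.4]
predicted = [1.0, 2.0, 3.0, 3.9, 4.6]
print(round(fit_metrics(actual, predicted)["R2"], 3))  # → 0.983
```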


international conference on computational science and its applications | 2013

Improving the Quality of Software by Quantifying the Code Change Metric and Predicting the Bugs

V. B. Singh; K. K. Chaturvedi

“When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.” LORD WILLIAM KELVIN (1824–1907). During the last decade, the quantification of the software engineering process has gathered pace due to the availability of a huge number of software repositories. These repositories include source code, bug reports, communication among developers/users, changes in code, etc. Researchers are trying to extract useful information from these repositories for improving the quality of software.


Proceedings of the 5th IBM Collaborative Academia Research Exchange Workshop on | 2013

Understanding the meaning of bug attributes and prediction models

Meera Sharma; Madhu Kumari; V. B. Singh

The software bug is a buzzword nowadays. A software bug has many attributes, some of which are filled in at the time of reporting and others during the process of fixing. Some attributes are qualitative in nature, while others are quantitative. A clear understanding of bug attributes, their interdependence and their contribution in predicting other attributes will help in improving the quality of software. In the literature, prediction models based on linear regression have been proposed to predict bug attributes and to determine their linear relationships. The cc list (the manpower involved in monitoring the progress of a bug fix) is an important bug attribute for which no prediction model has been developed in the literature. We investigated the contribution of bug attributes in predicting the bug cc list based on multiple linear regression (MLR), support vector regression (SVR) and fuzzy linear regression (FLR). We conducted experiments to develop prediction models for 21,424 bug reports of the Firefox, Thunderbird, Seamonkey, Boot2Gecko, Add-on SDK, Bugzilla, Webtools and addons.mozilla.org products of the Mozilla open source project. We have also investigated the linear relations among bug attributes. The empirical results show that the value of R2 in predicting the cc list across all datasets lies in the range of 0.31 to 0.70, 0.54 to 0.88, 0.25 to 0.68 and 0.69 to 0.93 for multiple linear regression, support vector regression, fuzzy linear regression (robust off) and fuzzy linear regression (robust bisquare), respectively.
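
A minimal multiple-linear-regression sketch in the spirit of the MLR models above (not the paper's implementation; the attribute values and cc-list sizes below are invented):

```python
# Fit y = b0 + b1*x1 + b2*x2 by ordinary least squares: build the normal
# equations (X^T X) b = X^T y and solve them with Gaussian elimination.
def fit_mlr(rows, y):
    X = [[1.0] + list(r) for r in rows]  # prepend intercept column
    k = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    c = [sum(x[i] * t for x, t in zip(X, y)) for i in range(k)]
    for i in range(k):                   # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
            c[j] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):         # back-substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

rows = [(5, 2), (4, 7), (3, 4), (2, 1), (1, 0)]  # (severity, number of comments)
cc = [6, 9, 5, 2, 1]                              # observed cc-list sizes
b0, b1, b2 = fit_mlr(rows, cc)

def predict_cc(severity, comments):
    return b0 + b1 * severity + b2 * comments
```

Real studies use libraries with proper diagnostics; the point here is only the shape of the model: each bug attribute contributes one coefficient to the cc-list estimate.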


international conference on computational science and its applications | 2015

Bug Assignee Prediction Using Association Rule Mining

Meera Sharma; Madhu Kumari; V. B. Singh

In open source software development, we have a bug repository to which both developers and users can report bugs. Bug triage, deciding what to do with an incoming bug report, takes a large amount of developer resources and time. All newly arriving bug reports must be triaged to determine whether the report is correct and requires attention and, if it is, which potentially experienced developer/fixer will be assigned the responsibility of resolving it. In this paper, we propose to apply association rule mining to assist in bug triage, using the Apriori algorithm to predict the developer that should work on a bug based on the bug's severity, priority and summary terms. We demonstrate our approach on a collection of 1,695 bug reports of the Thunderbird, AddOnSDK and Bugzilla products of the Mozilla open source project. We have analyzed the association rules for the top five assignees of the three products. Association rules can help managers improve the development process and save time and resources.
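
The core quantities behind Apriori, itemset support and rule confidence, can be illustrated on invented bug reports. The real study uses 1,695 Mozilla reports and the full Apriori algorithm with candidate pruning; this sketch only shows how an "attributes → assignee" rule is scored:

```python
# Each invented report is an itemset of attribute=value items.
reports = [
    {"sev=critical", "pri=P1", "term=crash", "assignee=alice"},
    {"sev=critical", "pri=P1", "term=crash", "assignee=alice"},
    {"sev=major", "pri=P2", "term=ui", "assignee=bob"},
    {"sev=critical", "pri=P2", "term=crash", "assignee=alice"},
    {"sev=minor", "pri=P5", "term=typo", "assignee=bob"},
]

def support(itemset):
    """Fraction of reports containing every item in the itemset."""
    return sum(itemset <= r for r in reports) / len(reports)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent) / support(antecedent)

rule = ({"sev=critical", "term=crash"}, {"assignee=alice"})
print(confidence(*rule))  # → 1.0 (every critical crash bug went to alice)
```

Apriori proper generates candidate itemsets level by level and prunes those below a minimum support before rules like the one above are extracted.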


international symposium on software reliability engineering | 2014

Prediction of the Complexity of Code Changes Based on Number of Open bugs, New Feature and Feature Improvement

V. B. Singh; Meera Sharma

During the last decade, a paradigm shift has taken place in the software development process. Advancement in internet technology has eased software development in distributed environments, irrespective of geographical location. As a result, open source software systems, which serve as key components of critical infrastructure in society, are ever-expanding. Open source software evolves through the active participation of users in terms of reporting bugs and requesting new features and feature improvements. These active users are distributed across different geographical locations and work towards the evolution of the open source software. The code changes due to bug fixes, new features and feature improvements for a given time period are used to predict the possible code changes in the software over the long run (the potential complexity of code changes). It is evident that open source software evolves through these modifications, but an empirical understanding of the relationships among bug fixes, new features, feature improvements and modifications in the files has been unexplored until now. In this paper, we have predicted the potential number of bugs that can be detected/fixed and the new features and improvements that can be diffused in the software over a period of time. We have quantified the complexity of code changes (entropy) and then predicted the complexity of code changes by applying Cobb-Douglas and extended Cobb-Douglas (two dimensional and three dimensional) diffusion models. The developed models can be used to determine the quantitative value of the complexity of code changes for reported bugs, new features and feature improvements, in addition to their potential values. This empirical study mathematically models the interaction of a system (the debugging and code change system) with the external open world, which will assist support managers in software maintenance activities and software evolution.
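
The two-dimensional Cobb-Douglas form mentioned above can be sketched as follows, together with the log transform that makes it linear for fitting. The parameter values and inputs are invented for illustration:

```python
import math

# Two-dimensional Cobb-Douglas form for predicted complexity of code
# changes E as a function of cumulative bug fixes (b) and cumulative new
# features / improvements (f):  E = A * b**beta1 * f**beta2.
# A, beta1, beta2 below are invented illustrative values, not fitted ones.
A, beta1, beta2 = 0.8, 0.6, 0.3

def predicted_entropy(bug_fixes, features):
    return A * bug_fixes ** beta1 * features ** beta2

def log_linear(bug_fixes, features):
    """log E = log A + beta1*log b + beta2*log f, the form used for fitting."""
    return math.log(A) + beta1 * math.log(bug_fixes) + beta2 * math.log(features)

b, f = 120.0, 15.0
assert abs(math.log(predicted_entropy(b, f)) - log_linear(b, f)) < 1e-9
```

Because the log transform is linear in log A, beta1 and beta2, ordinary least squares over logged observations can estimate the parameters; the three-dimensional extension adds one more factor and exponent.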


International Journal of Computer Applications | 2014

Generalized Reliability Model for Cloud Computing

Nikita Yadav; V. B. Singh; Madhu Kumari

The performance of cloud computing depends on effective utilization of resources and on reliability. With resource allocation algorithms such as the banker's algorithm, resource utilization can be done effectively in cloud computing. With reliability, we can estimate the fault tolerance capability of a system. Reliability improvement is largely dependent on the availability of an operational profile that statistically models the pattern in which the system is most likely to be used in the operating environment. A system is less reliable if it exhibits a degree of hardware and software dependency, and more reliable if hardware and software failures occur independently. In a cloud computing environment, hundreds of thousands of systems are hosted that consume cloud computing services. These services rely on lots of hardware, software platforms and infrastructure support, each of which, though carefully engineered, is still capable of failure. These failure rates and the complexity of the database make the cloud less reliable. In this paper, we have proposed a reliability model that estimates the mean time to failure and the failure rate based on a delayed exponential distribution. Through this model, we study the effect of older and newer systems that consume cloud computing services on cloud computing reliability.
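
A minimal sketch of a delayed (shifted) exponential failure model, using the standard shifted-exponential formulas: no failures before a delay d, then a constant failure rate. The paper's model may differ in detail, and the rate and delay values are invented:

```python
import math

def reliability(t, lam, d):
    """Probability the system survives beyond time t: 1 before the delay d,
    then exponential decay at constant failure rate lam."""
    return 1.0 if t < d else math.exp(-lam * (t - d))

def mttf(lam, d):
    """Mean time to failure: the delay plus the exponential mean 1/lam."""
    return d + 1.0 / lam

lam, d = 0.01, 100.0                # failures per hour, warm-up delay in hours
print(mttf(lam, d))                 # → 200.0 hours
print(reliability(150.0, lam, d))   # survival probability at t = 150
```

Under this model, an older system (larger effective lam, smaller d) drags down the cloud's aggregate reliability relative to a newer one.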


ACM Sigsoft Software Engineering Notes | 2014

Developing an intelligent cloud for higher education

Nikita Yadav; Sujata Khatri; V. B. Singh

With rapid development in the IT world, technologies are becoming more dynamic and advanced. Today, technologies change with customer requirements, and research is carried out to make technology better meet requirements that change over time. With this advancement, online services have proliferated. Nowadays, cloud computing is the hottest buzzword in the IT world. Cloud computing is not limited to the E-Governance and business worlds, but is also making a great impact in the education world. With growing demand for education, technologies and research, all universities and educational institutions have their eyes on cloud computing. The main pillars of educational institutions are students, faculty, administration and libraries. Faculty and students do research and need quality data, while students of a particular field need subject-oriented knowledge. Manually getting these kinds of data is time consuming, as students depend on literature, books, and different kinds of software and hardware. With cloud computing in higher education, cost-effective measures can be taken to minimize the dependency on books, hardware and software. In this paper, we discuss how Artificial Intelligence based cloud computing in higher education will improve quality and ease the process of getting e-resources (software/hardware platforms, storage, etc.). This study will help in understanding effective cost-cutting measures. We also discuss how cloud computing in the library and administration will brighten education prospects.

Collaboration


Dive into V. B. Singh's collaborations.

Top Co-Authors

K. K. Chaturvedi
Indian Agricultural Statistics Research Institute

Vijay Kumar
Guru Gobind Singh Indraprastha University

Ashish Dhamija
Guru Gobind Singh Indraprastha University

Ashish Garg
Guru Gobind Singh Indraprastha University