

Publication


Featured research published by Yashwant K. Malaiya.


IEEE Software | 1992

Using neural networks in reliability prediction

Nachimuthu Karunanithi; Darrell Whitley; Yashwant K. Malaiya

It is shown that neural network reliability growth models have a significant advantage over analytic models in that they require only failure history as input, not assumptions about the development environment or external parameters. Using the failure history, the neural-network model automatically develops its own internal model of the failure process and predicts future failures. Because it adjusts model complexity to match the complexity of the failure history, it can be more accurate than some commonly used analytic models. Results with actual testing and debugging data are presented, suggesting that neural-network models are better at endpoint predictions than analytic models.
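
As a rough illustration of the idea (not the authors' original implementation), the sketch below trains a small feed-forward network on cumulative failure history alone; the data set is synthetic and the network size and scaling choices are arbitrary assumptions.

```python
# Illustrative sketch only: a feed-forward network trained on failure history,
# in the spirit of neural-network reliability growth models. Synthetic data;
# network size and normalization are assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic failure history: cumulative execution time (hours) at each failure.
failure_times = np.cumsum(np.random.default_rng(0).exponential(scale=5.0, size=60))
failure_index = np.arange(1, len(failure_times) + 1)   # cumulative failure count

# Input: normalized execution time; output: normalized cumulative failure count.
x = (failure_times / failure_times.max()).reshape(-1, 1)
y = failure_index / failure_index.max()

# Train on the first 40 failures, then predict the remaining ("endpoint") region.
net = MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x[:40], y[:40])

predicted = net.predict(x[40:]) * failure_index.max()
print("predicted cumulative failures at later times:", np.round(predicted, 1))
```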


IEEE Transactions on Software Engineering | 1992

Prediction of software reliability using connectionist models

Nachimuthu Karunanithi; Darrell Whitley; Yashwant K. Malaiya

The usefulness of connectionist models for software reliability growth prediction is illustrated. The applicability of the connectionist approach is explored using various network models, training regimes, and data representation methods. An empirical comparison is made between this approach and five well-known software reliability growth models using actual data sets from several different software projects. The results suggest that connectionist models adapt well across different data sets and exhibit better predictive accuracy. The analysis shows that the connectionist approach is capable of developing models of varying complexity.
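
A minimal sketch of the kind of comparison described: next-step predictions from a fitted analytic growth model, here the exponential Goel-Okumoto form mu(t) = a(1 - exp(-b t)), against observed data. The data and error measure below are placeholders, not the paper's data sets or evaluation protocol.

```python
# Sketch of evaluating an analytic reliability growth model against observed
# data, as a baseline of the sort compared with connectionist models.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures mu(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - np.exp(-b * t))

# Synthetic failure data: time (hours) vs. cumulative failures observed.
t = np.linspace(1, 100, 50)
observed = goel_okumoto(t, 120, 0.03) + np.random.default_rng(1).normal(0, 2, t.size)

# Fit on the first half, evaluate predictive error on the second half.
params, _ = curve_fit(goel_okumoto, t[:25], observed[:25], p0=[100, 0.05])
predicted = goel_okumoto(t[25:], *params)
avg_error = np.mean(np.abs(predicted - observed[25:]) / observed[25:]) * 100
print(f"fitted a={params[0]:.1f}, b={params[1]:.4f}; average error = {avg_error:.1f}%")
```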


Computers & Security | 2007

Measuring, analyzing and predicting security vulnerabilities in software systems

Omar H. Alhazmi; Yashwant K. Malaiya; Indrajit Ray

In this work we examine the feasibility of quantitatively characterizing some aspects of security. In particular, we investigate whether it is possible to predict the number of vulnerabilities that can potentially be present in a software system but have not yet been found. We use several major operating systems as representatives of complex software systems. The data on vulnerabilities discovered in these systems are analyzed. We examine the results to determine if the density of vulnerabilities in a program is a useful measure. We also address the question of what fraction of software defects are security related, i.e., are vulnerabilities. We examine the dynamics of vulnerability discovery, hypothesizing that it may lead us to an estimate of the magnitude of the undiscovered vulnerabilities still present in the system. We consider the vulnerability discovery rate to see if models can be developed to project future trends. Finally, we use the data for both commercial and open-source systems to determine whether the key observations are generally applicable. Our results indicate that the values of vulnerability densities fall within a range of values, just like the commonly used measure of defect density for general defects. Our examination also reveals that it is possible to model vulnerability discovery using a logistic model that can sometimes be approximated by a linear model.
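
A toy calculation of the vulnerability-density measure discussed (vulnerabilities per thousand source lines of code) and its ratio to overall defect density; the numbers below are made-up placeholders, not measurements from the paper.

```python
# Toy illustration of vulnerability density and its ratio to defect density.
# All numbers are hypothetical placeholders, not data from the paper.
systems = {
    # name: (source lines of code, known defects, known vulnerabilities)
    "os_a": (16_000_000, 10_000, 150),
    "os_b": (2_000_000, 1_800, 60),
}

for name, (sloc, defects, vulns) in systems.items():
    ksloc = sloc / 1000
    defect_density = defects / ksloc     # defects per KSLOC
    vuln_density = vulns / ksloc         # vulnerabilities per KSLOC
    ratio = vulns / defects              # fraction of defects that are vulnerabilities
    print(f"{name}: defect density={defect_density:.2f}/KSLOC, "
          f"vulnerability density={vuln_density:.4f}/KSLOC, "
          f"vuln/defect ratio={ratio:.1%}")
```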


international symposium on software reliability engineering | 1995

Antirandom testing: getting the most out of black-box testing

Yashwant K. Malaiya

Random testing is a well-known concept that requires each test to be selected randomly, regardless of the tests previously applied. This paper introduces the concept of antirandom testing. In this strategy, each test applied is chosen such that its total distance from all previous tests is maximal. Two distance measures are defined. Procedures to construct antirandom sequences are developed. A checkpoint encoding scheme is introduced that allows automatic generation of efficient test cases. Further developments and studies needed are identified.
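
A minimal sketch of the greedy idea described: each new binary test vector is chosen to maximize its total distance from all previously applied tests. Hamming distance is used directly; a Cartesian-style distance is approximated here as the square root of the Hamming distance for binary vectors. The exhaustive candidate search is an illustrative simplification, practical only for small vector widths.

```python
# Sketch of antirandom test generation: pick each new test vector so that its
# total distance from all previously applied tests is maximal. Exhaustive
# search over all 2^n candidates, so only practical for small n.
from itertools import product
from math import sqrt

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def total_distance(candidate, applied, cartesian=False):
    if cartesian:
        # For binary vectors, a Euclidean-style distance reduces to sqrt(Hamming).
        return sum(sqrt(hamming(candidate, t)) for t in applied)
    return sum(hamming(candidate, t) for t in applied)

def antirandom_sequence(n_bits, n_tests, cartesian=False):
    applied = [tuple([0] * n_bits)]                   # arbitrary starting vector
    candidates = list(product((0, 1), repeat=n_bits))
    while len(applied) < n_tests:
        best = max((c for c in candidates if c not in applied),
                   key=lambda c: total_distance(c, applied, cartesian))
        applied.append(best)
    return applied

print(antirandom_sequence(n_bits=4, n_tests=6))
```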


IEEE Design & Test of Computers | 1984

Modeling and Testing for Timing Faults in Synchronous Sequential Circuits

Yashwant K. Malaiya; Ramesh Narayanaswamy

Even with proper design, integrated circuits and systems can have timing problems because of physical faults or parameter variations. The authors introduce a fault model that takes into account timing-related failures in both the combinational logic and the storage elements. Using their fault model and the system's requirements for proper operation, the authors propose ways to handle flip-flop-to-flip-flop delay, path selection, initialization, error propagation, race-around, and anomalous behavior. They discuss the advantages of scan designs like LSSD and the effectiveness of random delay testing.
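
As a generic illustration of the flip-flop-to-flip-flop delay constraint such a fault model is concerned with (a standard setup-time check, not the authors' model), the sketch below flags a register-to-register path whose clock-to-Q, combinational, and setup delays together exceed the clock period. All delay values are hypothetical.

```python
# Generic flip-flop-to-flip-flop timing check: a path is flagged as a potential
# timing fault if clock-to-Q delay + combinational path delay + setup time
# exceeds the clock period. Values are hypothetical.
def timing_fault(clk_to_q_ns, path_delay_ns, setup_ns, clock_period_ns):
    arrival = clk_to_q_ns + path_delay_ns + setup_ns
    return arrival > clock_period_ns, clock_period_ns - arrival  # (fault?, slack)

paths = {
    "ff1->ff2": (0.8, 7.5, 0.5),   # nominal path
    "ff3->ff4": (0.8, 9.4, 0.5),   # slow path, e.g. due to a delay defect
}
for name, (cq, pd, su) in paths.items():
    fault, slack = timing_fault(cq, pd, su, clock_period_ns=10.0)
    print(f"{name}: slack={slack:.1f} ns, timing fault={fault}")
```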


international symposium on software reliability engineering | 1994

The relationship between test coverage and reliability

Yashwant K. Malaiya; Naixin Li; James M. Bieman; Rick Karcich; Bob Skibbe

This paper models the relationship between testing effort, coverage, and reliability, and presents a logarithmic model that relates testing effort to test coverage: statement (or block) coverage, branch (or decision) coverage, computation use (c-use) coverage, or predicate use (p-use) coverage. The model is based on the hypothesis that the enumerables (like branches or blocks) for any coverage measure have different detectability, just like individual defects. This model allows a test coverage measure to be related directly to defect coverage. Data sets for programs with real defects are used to validate the model. The results are consistent with the known inclusion relationships among block, branch, and p-use coverage measures. We show how the defect density controls the time-to-next-failure. The model can eliminate variables like the test application strategy from consideration. It is suitable for high-reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested.
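
The detectability hypothesis can be illustrated with a small simulation (not the paper's fitted model or data): enumerables (e.g. branches) and defects are each given per-test detection probabilities drawn from an assumed exponential distribution, random tests are applied, and defect coverage is tracked against branch coverage.

```python
# Small simulation of the detectability hypothesis: branches and defects each
# have a per-test detection probability, and coverage of both is tracked as
# tests are applied. Distributions and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_branches, n_defects, n_tests = 500, 40, 400

# Defects are assumed harder to hit than branches (lower mean detectability).
branch_p = rng.exponential(scale=0.05, size=n_branches)
defect_p = rng.exponential(scale=0.01, size=n_defects)

branch_hit = np.zeros(n_branches, dtype=bool)
defect_hit = np.zeros(n_defects, dtype=bool)

for t in range(1, n_tests + 1):
    branch_hit |= rng.random(n_branches) < branch_p
    defect_hit |= rng.random(n_defects) < defect_p
    if t % 100 == 0:
        print(f"tests={t:4d}  branch coverage={branch_hit.mean():.2f}  "
              f"defect coverage={defect_hit.mean():.2f}")
```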


international symposium on software reliability engineering | 2005

Modeling the vulnerability discovery process

Omar H. Alhazmi; Yashwant K. Malaiya

Security vulnerabilities in servers and operating systems are software defects that represent great risks. Both software developers and users are struggling to contain the risk posed by these vulnerabilities. Vulnerabilities are discovered by both developers and external testers throughout the life-span of a software system. A few models for the vulnerability discovery process have recently been published. Such models allow effective resource allocation for patch development and are also needed for evaluating the risk of vulnerability exploitation. Here we examine these models for the vulnerability discovery process. The models are examined both analytically and using actual data on vulnerabilities discovered in three widely used systems. The applicability of the proposed models and the significance of the parameters involved are discussed. The limitations of the proposed models are examined and major research challenges are identified.
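
One widely cited model from this line of work is the logistic (Alhazmi-Malaiya) vulnerability discovery model, usually written as Omega(t) = B / (B*C*exp(-A*B*t) + 1). The sketch below fits such a model to cumulative vulnerability counts with SciPy; the data are synthetic placeholders, not the three systems studied in the paper.

```python
# Sketch of fitting a logistic vulnerability discovery model of the form
# omega(t) = B / (B*C*exp(-A*B*t) + 1) to cumulative vulnerability counts.
# Synthetic data; parameter values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic_vdm(t, A, B, C):
    return B / (B * C * np.exp(-A * B * t) + 1.0)

months = np.arange(1, 61)                             # time since release, months
cumulative = logistic_vdm(months, 0.002, 150, 0.5)    # assumed "true" curve
cumulative += np.random.default_rng(3).normal(0, 2, months.size)  # noise

params, _ = curve_fit(logistic_vdm, months, cumulative,
                      p0=[0.001, 100, 1.0], maxfev=20000)
A, B, C = params
print(f"A={A:.4f}, B={B:.1f} (estimated total vulnerabilities), C={C:.2f}")
```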


IEEE Transactions on Reliability | 2008

Application of Vulnerability Discovery Models to Major Operating Systems

Omar H. Alhazmi; Yashwant K. Malaiya

A number of security vulnerabilities have been reported in the Windows and Linux operating systems. Both the developers and users of operating systems have to utilize significant resources to evaluate and mitigate the risk posed by these vulnerabilities. Vulnerabilities are discovered throughout the life of a software system by both the developers and external testers. Vulnerability discovery models are needed that describe the vulnerability discovery process, for determining readiness for release, allocating future resources for patch development, and evaluating the risk of vulnerability exploitation. Here, we analytically describe six models that have been recently proposed, and evaluate them using actual data for four major operating systems. The applicability of the proposed models and the significance of the parameters involved are examined. The results show that some of the models tend to capture the discovery process better than others.


reliability and maintainability symposium | 2005

Quantitative vulnerability assessment of systems software

Omar H. Alhazmi; Yashwant K. Malaiya

This paper addresses the feasibility of quantitatively assessing the security vulnerabilities present in systems software. Vulnerabilities in such software represent significant security risks. For Windows 98 and Windows NT 4.0, we present plots of the cumulative numbers of vulnerabilities found. A time-based model for the total vulnerabilities discovered is proposed and fitted to the data for the two operating systems. We introduce a measure termed equivalent effort and propose an alternative model that is analogous to software reliability growth models. We present data on known defect densities for the two operating systems and discuss the relation between the densities of vulnerabilities and general defects. This relationship could lead to potential ways of estimating the number of vulnerabilities in the future.
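
A sketch of the kind of effort-based model mentioned, assuming an exponential form Omega(E) = B * (1 - exp(-lam * E)) analogous to exponential software reliability growth models; the functional form and all numbers below are illustrative assumptions, not the paper's fitted results.

```python
# Illustrative effort-based vulnerability model, assumed here to take an
# exponential SRGM-like form omega(E) = B * (1 - exp(-lam * E)), where E is
# the "equivalent effort" (e.g., proportional to the installed user base over
# time). All values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def effort_model(effort, B, lam):
    return B * (1.0 - np.exp(-lam * effort))

effort = np.linspace(0, 500, 40)                  # equivalent effort units
found = effort_model(effort, 80, 0.008)
found += np.random.default_rng(7).normal(0, 1.5, effort.size)

(B, lam), _ = curve_fit(effort_model, effort, found, p0=[50, 0.01])
print(f"estimated total vulnerabilities B={B:.1f}, discovery rate lam={lam:.4f}")
```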


international symposium on software reliability engineering | 1999

Requirements volatility and defect density

Yashwant K. Malaiya; Jason Denton

Ideally, the requirements for a software system should be completely and unambiguously determined before design, coding and testing take place. In practice, there are often changes in the requirements, causing software components to be redesigned, deleted or added. This requirements volatility causes the software to have a higher defect density. In this paper we analytically examine the influence of requirement changes taking place at different times by examining the consequences of software additions, removals and modifications. We take into account interface defects, which arise due to errors at the interfaces among software sections. We compare the resulting defect density in the presence of requirements volatility with the defect density that would have resulted had the requirements not changed. The results show that if the requirement changes take place close to the release date, there is a greater impact on defect density. In each case we compute a defect equivalence factor representing the overall impact of requirements volatility.
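
A toy illustration (with a hypothetical weighting, not the paper's defect equivalence factor) of why late requirement changes raise defect density: code added or modified late receives less test exposure, so a smaller fraction of its defects is removed before release.

```python
# Toy illustration of the effect of late requirement changes on defect density.
# Assumption (not the paper's model): defects in a component are removed in
# proportion to the fraction of the test phase the component was present for.
def residual_defect_density(components, test_removal_rate=0.7):
    """components: list of (KSLOC, injected defects per KSLOC, fraction of the
    test phase the code was present for). Returns defects/KSLOC at release."""
    total_ksloc = sum(k for k, _, _ in components)
    residual = sum(k * d * (1.0 - test_removal_rate * f)
                   for k, d, f in components)
    return residual / total_ksloc

stable = [(100, 20, 1.0)]                         # no requirement changes
volatile = [(90, 20, 1.0), (10, 20, 0.2)]         # 10 KSLOC added late
print(f"stable:   {residual_defect_density(stable):.1f} defects/KSLOC")
print(f"volatile: {residual_defect_density(volatile):.1f} defects/KSLOC")
```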

Collaboration


Dive into Yashwant K. Malaiya's collaboration.

Top Co-Authors

Omar H. Alhazmi, Colorado State University
Rochit Rajsuman, Case Western Reserve University
HyunChul Joh, Colorado State University
Naixin Li, Colorado State University
Carol Q. Tong, Colorado State University
Jason Denton, Colorado State University