Dejan Desovski
West Virginia University
Publications
Featured research published by Dejan Desovski.
international symposium on software reliability engineering | 2005
Petar Popic; Dejan Desovski; Walid Abdelmoez; Bojan Cukic
Component-based development is gaining popularity in the software engineering community. The reliability of the components affects the reliability of the system. Different models and theories have been developed to estimate system reliability given information about the system architecture and the quality of the components. Almost all of these models, however, overlook a key attribute of component-based systems: error propagation between the components. We extend our previous work on Bayesian reliability prediction of component-based systems by introducing the error propagation probability into the model. We demonstrate the impact of error propagation in a case study of an automated personnel access control system. We conclude that error propagation may have a significant impact on system reliability prediction and that future architecture-based models should therefore not ignore it.
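To make the role of error propagation concrete, here is a minimal Monte Carlo sketch, not the paper's Bayesian model, of a pipeline in which each component may introduce an error and may mask or propagate an incoming one. All reliabilities and propagation probabilities below are invented for illustration.

```python
import random

# Hypothetical three-component pipeline: (reliability, propagation probability).
# An error introduced upstream reaches the output only if every downstream
# component propagates it rather than masks it.
components = [(0.999, 0.9), (0.995, 0.7), (0.990, 0.8)]

def run_once():
    """Simulate one execution; return True if the output is correct."""
    erroneous = False
    for reliability, propagation in components:
        if erroneous and random.random() > propagation:
            erroneous = False          # incoming error is masked here
        if random.random() > reliability:
            erroneous = True           # this component introduces a new error
    return not erroneous

trials = 100_000
estimate = sum(run_once() for _ in range(trials)) / trials
print(f"estimated system reliability: {estimate:.4f}")
```

Setting every propagation probability to 1 recovers the usual pessimistic product-of-reliabilities view; lower values show how masking inflates the reliability a propagation-blind model would predict.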
high-assurance systems engineering | 2005
Dejan Desovski; Yan Liu; Bojan Cukic
Sensor failures in process control programs can be tolerated through the application of well-known modular redundancy schemes. The reliability of a specific modular redundancy scheme depends on the predefined number of sensors that may fail, f, out of the total number of sensors available, n. Some recent sensor fusion algorithms can tolerate a greater number of sensor failures than modular redundancy techniques, at the expense of degrading the precision of the sensor readings. In this paper, we present a novel sensor fusion algorithm based on randomized voting with linear, O(n), expected execution time. The precision (the length) of the resulting interval depends on the number of faulty sensors, the parameter f. We also propose a novel reliability model applicable to general sensor fusion schemes. Our modeling technique assumes the coexistence of two major types of sensor failures: permanent and transient. The model offers system designers the ability to analyze and define application-specific balances between the expected system reliability and the desired precision of the interval estimate. Under the assumptions of failure independence and exponentially distributed failure occurrences, we use Markov models to compute system reliability. The model is then validated empirically, and examples of reliability prediction are provided for networks with a fairly large number of sensors (n > 100).
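For intuition about interval-based fusion, the sketch below implements a deterministic Marzullo-style sweep that returns a region covered by at least n - f sensor intervals; the paper's contribution is a randomized variant with O(n) expected time, which is not reproduced here. The sensor readings are invented.

```python
def fuse(intervals, f):
    """Return the first region covered by at least n - f of the n sensor
    intervals, assuming at most f faulty sensors (a deterministic,
    Marzullo-style O(n log n) sweep shown for illustration only)."""
    n = len(intervals)
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))                 # interval opens
        events.append((hi, -1))                 # interval closes
    events.sort(key=lambda e: (e[0], -e[1]))    # opens before closes on ties
    depth, best_lo, best_hi = 0, None, None
    for x, delta in events:
        depth += delta
        if delta == +1 and depth >= n - f and best_lo is None:
            best_lo = x                         # enough intervals now overlap
        elif (delta == -1 and depth == n - f - 1
              and best_lo is not None and best_hi is None):
            best_hi = x                         # overlap dropped below n - f
    return best_lo, best_hi

# Three good sensors agree around [2.5, 3.0]; one faulty sensor reads [8, 9].
print(fuse([(1.0, 3.0), (2.0, 4.0), (2.5, 5.0), (8.0, 9.0)], f=1))  # (2.5, 3.0)
```

The trade-off the abstract describes is visible here: increasing f tolerates more faulty sensors but widens (degrades) the fused interval.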
international symposium on software reliability engineering | 2004
M. Li; Y. Wei; Dejan Desovski; H. Nejad; Susmita Ghose; Bojan Cukic; Carol S. Smidts
Software-based digital systems are progressively replacing analog systems in safety-critical applications. However, the ability to predict their reliability is not well understood and needs further study. A first step towards a systematic resolution of this issue was presented in a recent software engineering measure study, in which a set of software engineering measures was ranked with respect to the measures' ability to predict software reliability through an expert opinion elicitation process. That study also proposed the concept of a reliability prediction system (RePS) to bridge the gap between software engineering measures and software reliability. The research presented in this paper validates the rankings obtained and the RePS concept proposed in the previous study.
international symposium on software testing and analysis | 2006
David Owen; Dejan Desovski; Bojan Cukic
This paper presents a methodology for random testing of software models. Random testing tools can be used very effectively early in the modeling process, e.g., while writing a formal requirements specification for a given system. In this phase users cannot know whether a correct operational model is being built or whether the properties that the model must satisfy are correctly identified and stated. It is therefore very useful to have tools that quickly identify errors in the operational model or in the properties so that appropriate corrections can be made. Using Lurch, our random testing tool for finite-state models, we evaluated the effectiveness of random model testing by detecting manually seeded errors in an SCR specification of a real-world personnel access control system. The results are encouraging: over 80% of the seeded errors were detected quickly. We further defined and measured test coverage metrics with the goal of understanding why some of the mutants were not detected. The coverage measures allowed us to understand the pitfalls of random testing of formal models, providing opportunities for future improvement.
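The following toy sketch illustrates the idea of random model testing: repeated short random walks over a finite-state model, checked against a safety property. The model, the seeded error, and the property are all invented; this is neither Lurch's input language nor an SCR specification.

```python
import random

# Toy finite-state model with one manually seeded error: the illegal
# locked -> open shortcut.
transitions = {
    "locked":    ["locked", "unlocking", "open"],   # "open" is the seeded error
    "unlocking": ["open"],
    "open":      ["open", "locked"],
}

def safe(path):
    # Safety property: the door never goes directly from "locked" to "open".
    return all(not (a == "locked" and b == "open") for a, b in zip(path, path[1:]))

def random_walk(start="locked", steps=50):
    path = [start]
    for _ in range(steps):
        path.append(random.choice(transitions[path[-1]]))
    return path

# Many short random walks: incomplete, but cheap and quick to expose the error.
for trial in range(1000):
    path = random_walk()
    if not safe(path):
        print("violation found on trial", trial, "- path prefix:", path[:5])
        break
else:
    print("no violation found (random testing proves nothing by its absence)")
```

Coverage metrics of the kind the paper measures would ask which states and transitions these walks actually exercised, explaining why some mutants escape detection.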
international symposium on software reliability engineering | 2006
David Owen; Dejan Desovski; Bojan Cukic
In this paper we describe an experiment in which inconsistent results between two tools for testing formal models (and a third used to determine which of the two was correct) led us to a more careful look at the way each tool was being used and to a clearer understanding of the tools' output. For the experiment, we created error-seeded versions of an SCR specification representing a real-world personnel access control system. They were checked using the model checker SPIN and Lurch, our random testing tool for finite-state models. In one case a property violation was detected by Lurch, an incomplete tool, but missed by SPIN, a model checking tool designed for complete verification. We used the SCR Toolset and the Salsa invariant checker to determine that the violation detected by Lurch was indeed present in the specification. We then looked more carefully at how we were using SPIN in conjunction with the SCR Toolset and eventually made adjustments so that SPIN also detected the property violation initially detected only by Lurch. Once it was clear the tools were being used correctly and would give consistent results, we performed an experiment to determine how they could be combined to optimize completeness and efficiency. We found that combining the tools made it possible to verify the specifications faster and, in most cases, with much less memory.
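A minimal sketch of the combined strategy, assuming a model given as an explicit transition graph (everything below is invented for illustration): cheap random walks in the spirit of Lurch run first, and a complete breadth-first search, the exhaustive-verification idea behind SPIN in miniature, runs only if they find nothing.

```python
import random
from collections import deque

def combined_check(transitions, start, bad, trials=1000, steps=50):
    # Phase 1: fast, incomplete random walks.
    for _ in range(trials):
        state, path = start, [start]
        for _ in range(steps):
            state = random.choice(transitions[state])
            path.append(state)
            if bad(state):
                return path                     # counterexample found cheaply
    # Phase 2: complete BFS over the reachable state space.
    parent, queue = {start: None}, deque([start])
    while queue:
        state = queue.popleft()
        if bad(state):
            path = []
            while state is not None:            # rebuild the counterexample
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in transitions[state]:
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                                 # verified: no bad state reachable

# Invented toy model: state "crash" is reachable but only via one narrow path.
model = {"a": ["a", "b"], "b": ["a", "c"], "c": ["a", "crash"], "crash": ["crash"]}
print(combined_check(model, "a", lambda s: s == "crash"))
```

Phase 1 usually finds shallow violations at a fraction of the memory cost; phase 2 preserves the completeness guarantee when phase 1 is silent, mirroring the combination the paper evaluates.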
high assurance systems engineering | 2004
Dejan Desovski
Proving the correctness of a developed specification with respect to the requirements is the most important and the most difficult task in the development of high assurance systems. Studies have shown that a significant number of faults in real systems can be traced back to the specifications. In this short paper, we present our initial ideas on combining formal methods and specification testing for the purposes of specification verification.
EURASIP Journal on Advances in Signal Processing | 2006
Dejan Desovski; Vijai Gandikota; Yan Liu; Yue Jiang; Bojan Cukic
The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach to the validation and reliability estimation of fingerprint registration software. Our validation approach includes the following three steps: (a) validation of the source code with respect to the system requirements specification; (b) validation of the optimization algorithm at the core of the registration system; and (c) automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.
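As a rough illustration of the kind of optimizer at the core of a registration module, here is a brute-force rigid alignment of two minutiae point sets. The abstract does not describe the actual algorithm, its search space, or its parameters, so everything below is a hypothetical stand-in.

```python
import math

def register(moving, fixed, angles, shifts):
    """Brute-force rigid registration: search rotations and translations for
    the transform that best aligns the moving point set to the fixed one,
    scored by summed squared nearest-neighbor distances. A hypothetical
    stand-in for a heuristic registration optimizer, not the paper's method."""
    best_err, best_params = float("inf"), None
    for theta in angles:
        c, s = math.cos(theta), math.sin(theta)
        for dx in shifts:
            for dy in shifts:
                err = sum(min((c*x - s*y + dx - fx) ** 2 +
                              (s*x + c*y + dy - fy) ** 2
                              for fx, fy in fixed)
                          for x, y in moving)
                if err < best_err:
                    best_err, best_params = err, (theta, dx, dy)
    return best_params, best_err

fixed = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]        # invented minutiae
moving = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]       # same shape, shifted
angles = [i * math.pi / 36 for i in range(-3, 4)]   # coarse search grid
shifts = [i * 0.5 for i in range(-4, 5)]
print(register(moving, fixed, angles, shifts))      # ~ ((0.0, -1.0, -1.0), 0.0)
```

The validation challenge the paper addresses is visible even here: a heuristic search over a discretized space can return a near-optimal alignment without any guarantee, so its reliability must be estimated by analysis and testing rather than proved.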
workshop on object-oriented real-time dependable systems | 2005
Bojan Cukic; Martin Mladenovski; Dejan Desovski; Sampath Yerramalla
We describe a data fusion technique suitable for use in the validation of a real-time autonomous system. The technique is based on the Dempster-Shafer theory and Murphy's rule for belief combination. The methodology is applied to fusing the learning stability estimates provided by an online neural network monitoring methodology into a single probabilistic learning stability measure. The case study shows that our data fusion technique is capable of handling real-time requirements and provides unique, meaningful results for interpreting the stability information provided by the online monitoring system.
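A compact sketch of the fusion step, assuming mass functions over a two-element frame {stable, unstable} (the frame and the masses are invented; the paper fuses stability estimates from an online monitor): Dempster's rule implemented directly, and Murphy's rule as averaging followed by repeated self-combination.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to disjoint sets
    k = 1.0 - conflict                     # normalization factor
    return {s: w / k for s, w in combined.items()}

def murphy_combine(masses):
    """Murphy's rule: average the n mass functions, then Dempster-combine the
    average with itself n - 1 times (robust to highly conflicting evidence)."""
    n = len(masses)
    avg = {}
    for m in masses:
        for s, w in m.items():
            avg[s] = avg.get(s, 0.0) + w / n
    result = avg
    for _ in range(n - 1):
        result = dempster_combine(result, avg)
    return result

STABLE, UNSTABLE = frozenset({"stable"}), frozenset({"unstable"})
THETA = STABLE | UNSTABLE                  # the whole frame: total ignorance
m1 = {STABLE: 0.7, THETA: 0.3}             # invented monitor estimates
m2 = {STABLE: 0.6, UNSTABLE: 0.2, THETA: 0.2}
print(murphy_combine([m1, m2]))            # belief concentrates on "stable"
```

Averaging before combining is what distinguishes Murphy's rule from plain iterated Dempster combination: it prevents a single strongly conflicting estimate from dominating the fused stability measure.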
international conference on computational science | 2005
K. Subramani; Dejan Desovski
In this paper, we present a comprehensive empirical analysis of the Vertex Contraction (VC) algorithm for the problem of checking whether a directed graph with positive and negative costs on its edges has a negative cost cycle (the NCCD problem). VC, first presented in [SK05], is a greedy algorithm and the only known greedy strategy for this problem. In [SK05] we compared a naive implementation of VC with the “standard” Bellman-Ford (BF) algorithm for the same problem and observed that our algorithm performed an order of magnitude better than BF on a range of randomly generated inputs, conclusively demonstrating the superiority of our approach. This paper continues the study contrasting greedy and dynamic programming approaches by comparing VC with a number of sophisticated implementations of the BF algorithm.
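For reference, the dynamic programming baseline the paper compares against is the standard Bellman-Ford negative cost cycle check, sketched below; the VC algorithm itself is described in [SK05] and is not reproduced here.

```python
def has_negative_cycle(n, edges):
    """Standard Bellman-Ford negative cost cycle detection.
    n: number of vertices; edges: list of (u, v, cost) with 0 <= u, v < n."""
    dist = [0.0] * n                # zero init detects cycles anywhere in the graph
    for _ in range(n - 1):
        changed = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                changed = True
        if not changed:
            return False            # distances stabilized early: no negative cycle
    # One more relaxation pass: any improvement implies a negative cost cycle.
    return any(dist[u] + cost < dist[v] for u, v, cost in edges)

# Cycle 0 -> 1 -> 2 -> 0 has total cost 2 - 3 - 1 = -2.
print(has_negative_cycle(3, [(0, 1, 2), (1, 2, -3), (2, 0, -1)]))  # True
```

The early-exit test is one of the simpler refinements; the “sophisticated implementations” the paper benchmarks go further, but the O(nm) worst case above is the point of comparison for VC.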
formal modeling and analysis of timed systems | 2005
K. Subramani; Dejan Desovski
In this paper, we describe a new algorithm for the problem of checking whether a real-time system has a Partially Clairvoyant schedule (PCS). Existing algorithms for the PCS problem are predicated on sequential quantifier elimination, i.e., the innermost quantifier is eliminated first, followed by the next one, and so on. Our technique is radically different in that the quantifiers in the schedulability specification can be eliminated in an arbitrary order. We demonstrate the usefulness of this technique by achieving significant performance improvements over a wide range of inputs. Additionally, the analysis developed for the new procedure may find applications in domains such as finite model theory and classical logic.
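The basic step underlying quantifier-elimination-based schedulability procedures is removing one existentially quantified variable from a system of linear inequalities. The sketch below shows this step via Fourier-Motzkin elimination; the paper's procedure additionally handles universal quantifiers and, crucially, may pick the elimination order freely, neither of which is shown here. The constraint encoding is invented for illustration.

```python
from fractions import Fraction

def eliminate(constraints, x):
    """Fourier-Motzkin elimination of one existentially quantified variable.
    Each constraint is (coeffs, bound), meaning sum(coeffs[v] * v) <= bound.
    Returns an equivalent system over the remaining variables."""
    lower, upper, rest = [], [], []
    for coeffs, bound in constraints:
        c = coeffs.get(x, 0)
        if c == 0:
            rest.append((coeffs, bound))        # x does not occur: keep as-is
            continue
        # Normalize so the coefficient of x is +1 (upper bound) or -1 (lower).
        norm = {v: Fraction(a, abs(c)) for v, a in coeffs.items() if v != x}
        b = Fraction(bound, abs(c))
        (upper if c > 0 else lower).append((norm, b))
    # Pair every lower bound with every upper bound: L <= x <= U gives L <= U.
    for lo_c, lo_b in lower:
        for up_c, up_b in upper:
            coeffs = {v: lo_c.get(v, 0) + up_c.get(v, 0)
                      for v in set(lo_c) | set(up_c)}
            rest.append((coeffs, lo_b + up_b))
    return rest

# Exists x: (x - y <= 3) and (-x <= -1)  reduces to  -y <= 2, i.e., y >= -2.
print(eliminate([({"x": 1, "y": -1}, 3), ({"x": -1}, -1)], "x"))
```

The quadratic blow-up from pairing lower and upper bounds is exactly why the order in which quantifiers are eliminated matters so much for performance, which is the degree of freedom the paper's algorithm exploits.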