Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bojan Cukic is active.

Publication


Featured research published by Bojan Cukic.


dagstuhl seminar proceedings | 2013

Software Engineering for Self-Adaptive Systems: A Second Research Roadmap

Rogério de Lemos; Holger Giese; Hausi A. Müller; Mary Shaw; Jesper Andersson; Marin Litoiu; Bradley R. Schmerl; Gabriel Tamura; Norha M. Villegas; Thomas Vogel; Danny Weyns; Luciano Baresi; Basil Becker; Nelly Bencomo; Yuriy Brun; Bojan Cukic; Ron Desmarais; Schahram Dustdar; Gregor Engels; Kurt Geihs; Karl M. Göschka; Alessandra Gorla; Vincenzo Grassi; Paola Inverardi; Gabor Karsai; Jeff Kramer; Antónia Lopes; Jeff Magee; Sam Malek; Serge Mankovskii

The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.


international symposium on software reliability engineering | 2004

Robust prediction of fault-proneness by random forests

Lan Guo; Yan Ma; Bojan Cukic; Harshinder Singh

Accurate prediction of fault-prone modules (a module is equivalent to a C function or a C++ method) in the software development process enables effective detection and identification of defects. Such prediction models are especially beneficial for large-scale systems, where verification experts need to focus their attention and resources on problem areas in the system under development. This paper presents a novel methodology for predicting fault-prone modules, based on random forests. Random forests are an extension of decision tree learning. Instead of generating one decision tree, this methodology generates hundreds or even thousands of trees using subsets of the training data. The classification decision is obtained by voting. We applied random forests in five case studies based on NASA data sets. The prediction accuracy of the proposed methodology is generally higher than that achieved by logistic regression, discriminant analysis and the algorithms in two machine learning software packages, WEKA [I. H. Witten et al. (1999)] and See5. The difference in the performance of the proposed methodology over other methods is statistically significant. Further, the advantage of random forests over other methods is more pronounced in larger data sets.
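As an illustration of the voting-based approach the abstract describes, the minimal sketch below trains a random forest on a hypothetical table of static module metrics with scikit-learn; the file name, columns, and parameters are placeholders, not the NASA data sets used in the study.

```python
# Hedged sketch: random-forest fault-proneness prediction in the spirit of the paper.
# The CSV path, column names, and parameters are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = pd.read_csv("module_metrics.csv")      # one row per module (C function / C++ method)
X = data.drop(columns=["defective"])          # static code metrics (LOC, complexity, ...)
y = data["defective"]                         # 1 = fault-prone, 0 = fault-free

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Hundreds of trees, each grown on a bootstrap sample; the class is decided by voting.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

print(classification_report(y_test, forest.predict(X_test)))
```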


automated software engineering | 2010

Defect prediction from static code features: current results, limitations, new approaches

Tim Menzies; Zach Milton; Burak Turhan; Bojan Cukic; Yue Jiang; Ayse Basar Bener

Building quality software is expensive and software quality assurance (QA) budgets are limited. Data miners can learn defect predictors from static code features which can be used to control QA resources; e.g. to focus on the parts of the code predicted to be more defective. Recent results show that better data mining technology is not leading to better defect predictors. We hypothesize that we have reached the limits of the standard learning goal of maximizing the area under the curve (AUC) of the probability of false alarms and probability of detection, “AUC(pd, pf)”; i.e. the area under the curve of probability of false alarm versus probability of detection. Accordingly, we explore changing the standard goal. Learners that maximize “AUC(effort, pd)” find the smallest set of modules that contain the most errors. WHICH is a meta-learner framework that can be quickly customized to different goals. When customized to AUC(effort, pd), WHICH out-performs all the data mining methods studied here. More importantly, measured in terms of this new goal, certain widely used learners perform much worse than simple manual methods. Hence, we advise against the indiscriminate use of learners. Learners must be chosen and customized to the goal at hand. With the right architecture (e.g. WHICH), tuning a learner to specific local business goals can be a simple task.
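The effort-based goal can be illustrated with a small calculation: rank modules by a predictor's score, accumulate the fraction of code inspected on one axis and the fraction of defects found on the other, and take the area under that curve. The sketch below computes such an AUC(effort, pd) value on invented numbers; it is not the WHICH framework itself.

```python
# Hedged sketch of an effort-vs-detection curve: rank modules by predicted risk,
# accumulate effort (LOC inspected) on the x-axis and defects found (pd) on the
# y-axis, then take the area under the curve. Numbers are invented.
import numpy as np

def auc_effort_pd(loc, defective, scores):
    """Area under the (effort, pd) curve for modules ranked by descending score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    loc = np.asarray(loc, dtype=float)[order]
    defective = np.asarray(defective, dtype=float)[order]
    effort = np.concatenate(([0.0], np.cumsum(loc) / loc.sum()))                 # fraction of code read
    pd_curve = np.concatenate(([0.0], np.cumsum(defective) / defective.sum()))   # fraction of defects found
    return float(np.sum(np.diff(effort) * (pd_curve[1:] + pd_curve[:-1]) / 2))   # trapezoid rule

# Toy example: ranking small, defect-dense modules first yields a larger area.
print(auc_effort_pd(loc=[100, 500, 50, 800],
                    defective=[1, 0, 1, 1],
                    scores=[0.9, 0.2, 0.8, 0.4]))
```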


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Image quality assessment for iris biometric

Nathan D. Kalka; Jinyu Zuo; Natalia A. Schmid; Bojan Cukic

Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is among the most reliable biometrics in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this work, we extend previous research efforts on iris quality assessment by analyzing the effect of seven quality factors: defocus blur, motion blur, off-angle, occlusion, specular reflection, lighting, and pixel count on the performance of a traditional iris recognition system. We have concluded that defocus blur, motion blur, and off-angle are the factors that affect recognition performance the most. We further designed a fully automated iris image quality evaluation block that operates in two steps. First, each factor is estimated individually; the second step then fuses the estimated factors using a Dempster-Shafer theory approach to evidential reasoning. The designed block is tested on two datasets, CASIA 1.0 and a dataset collected at WVU. Considerable improvement in recognition performance is demonstrated when removing poor-quality images evaluated by our quality metric. The upper bound on the processing complexity required to evaluate the quality of a single image is O(n² log n), that of a 2-D Fast Fourier Transform.
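The quoted complexity bound comes from a 2-D FFT. As a rough illustration of frequency-domain defocus estimation (a generic sharpness measure, not the authors' estimator), the sketch below scores an image by the share of spectral energy above a radius threshold; the threshold is an assumed parameter.

```python
# Hedged sketch: frequency-domain sharpness score for defocus-blur estimation.
# A blurred image concentrates its spectrum at low frequencies, so the ratio of
# high-frequency to total energy drops. Generic measure, not the paper's estimator.
import numpy as np

def high_frequency_ratio(image, radius_fraction=0.25):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2   # O(n^2 log n) for an n x n image
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    high = dist > radius_fraction * min(h, w)       # mask of high-frequency bins
    return spectrum[high].sum() / spectrum.sum()    # near 0 for heavily defocused images

rng = np.random.default_rng(0)
sample = rng.random((128, 128))                     # placeholder for an iris image
print(high_frequency_ratio(sample))
```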


IEEE Transactions on Reliability | 2004

A scenario-based reliability analysis approach for component-based software

Sherif Yacoub; Bojan Cukic; Hany H. Ammar

This paper introduces a reliability model, and a reliability analysis technique for component-based software. The technique is named Scenario-Based Reliability Analysis (SBRA). Using scenarios of component interactions, we construct a probabilistic model named Component-Dependency Graph (CDG). Based on CDG, a reliability analysis algorithm is developed to analyze the reliability of the system as a function of reliabilities of its architectural constituents. An extension of the proposed model and algorithm is also developed for distributed software systems. The proposed approach has the following benefits: 1) It is used to analyze the impact of variations and uncertainties in the reliability of individual components, subsystems, and links between components on the overall reliability estimate of the software system. This is particularly useful when the system is built partially or fully from existing off-the-shelf components; 2) It is suitable for analyzing the reliability of distributed software systems because it incorporates link and delivery channel reliabilities; 3) The technique is used to identify critical components, interfaces, and subsystems; and to investigate the sensitivity of the application reliability to these elements; 4) The approach is applicable early in the development lifecycle, at the architecture level. Early detection of critical architecture elements, those that affect the overall reliability of the system the most, is useful in delegating resources in later development phases.
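A much-simplified illustration of the scenario-based idea, assuming a flat list of scenarios rather than a full Component-Dependency Graph: weight each scenario by its probability and multiply component and link reliabilities along its path. All names and numbers below are invented.

```python
# Hedged sketch of the scenario-based idea: weight each scenario by its probability
# and multiply component and link reliabilities along its path. The full SBRA
# algorithm operates on a Component-Dependency Graph; this is only an illustration.
component_rel = {"UI": 0.999, "Auth": 0.995, "DB": 0.990}
link_rel = {("UI", "Auth"): 0.999, ("Auth", "DB"): 0.998}

scenarios = [
    (0.7, ["UI", "Auth"]),          # (probability of scenario, component path)
    (0.3, ["UI", "Auth", "DB"]),
]

def scenario_reliability(path):
    rel = 1.0
    for comp in path:
        rel *= component_rel[comp]
    for src, dst in zip(path, path[1:]):
        rel *= link_rel[(src, dst)]
    return rel

system_rel = sum(p * scenario_reliability(path) for p, path in scenarios)
print(f"Estimated system reliability: {system_rel:.4f}")
```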


Empirical Software Engineering | 2008

Techniques for evaluating fault prediction models

Yue Jiang; Bojan Cukic; Yan Ma

Many statistical techniques have been proposed to predict fault-proneness of program modules in software engineering. Choosing the “best” candidate among many available models involves performance assessment and detailed comparison, but these comparisons are not simple due to the applicability of varying performance measures. Classifying a software module as fault-prone implies the application of some verification activities, thus adding to the development cost. Misclassifying a module as fault free carries the risk of system failure, also associated with cost implications. Methodologies for precise evaluation of fault prediction models should be at the core of empirical software engineering research, but have attracted sporadic attention. In this paper, we overview model evaluation techniques. In addition to many techniques that have been used in software engineering studies before, we introduce and discuss the merits of cost curves. Using the data from a public repository, our study demonstrates the strengths and weaknesses of performance evaluation techniques and points to a conclusion that the selection of the “best” model cannot be made without considering project cost characteristics, which are specific in each development environment.
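The dependence on project cost characteristics can be made concrete with a small expected-cost comparison: given each model's probability of detection (pd) and false alarm (pf), the preferred model flips as the ratio between the cost of a missed fault and the cost of needless verification changes. This is an illustrative calculation, not the cost-curve technique from the paper.

```python
# Hedged sketch: expected misclassification cost per module for two hypothetical
# models. Which model "wins" changes with the cost ratio, echoing the paper's
# point that model selection depends on project cost characteristics.
def expected_cost(pd, pf, fault_rate, cost_miss, cost_false_alarm):
    misses = (1 - pd) * fault_rate          # faulty modules the model lets through
    false_alarms = pf * (1 - fault_rate)    # clean modules flagged for verification
    return misses * cost_miss + false_alarms * cost_false_alarm

models = {"A": (0.70, 0.10), "B": (0.90, 0.40)}     # (pd, pf), illustrative values

for cost_miss in (5, 50):                            # cheap vs. expensive field failures
    costs = {name: round(expected_cost(pd, pf, fault_rate=0.2,
                                       cost_miss=cost_miss, cost_false_alarm=1), 3)
             for name, (pd, pf) in models.items()}
    print(f"cost of a miss = {cost_miss}: {costs}")
```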


international symposium on software reliability engineering | 2001

A Bayesian approach to reliability prediction and assessment of component based systems

Harshinder Singh; Vittorio Cortellessa; Bojan Cukic; Erdogan Gunel; Vijayanand Bharadwaj

It is generally believed that component-based software development leads to improved application quality, maintainability and reliability. However, most software reliability techniques model integrated systems. These models disregard the system's internal structure, taking into account only the failure data and interactions with the environment. We propose a novel approach to reliability analysis of component-based systems. The reliability prediction algorithm allows system architects to analyze the reliability of the system before it is built, taking into account component reliability estimates and their anticipated usage. Fully integrated with the UML, this step can guide the process of identifying critical components and analyzing the effect of replacing them with more or less reliable ones. The reliability assessment algorithm, applicable in the system test phase, utilizes these reliability predictions as prior probabilities. In the Bayesian estimation framework, the posterior probability of failure is calculated from the priors and test failure data.
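A minimal Beta-Binomial sketch of the assessment step, assuming the architecture-level prediction supplies the prior on the failure probability and system-test runs supply the data; the prior strength and counts are made up, and this shows only the generic Bayesian mechanism, not the paper's UML-integrated model.

```python
# Hedged sketch: Beta-Binomial update of a failure probability. The architecture-level
# prediction sets the prior; observed test runs update it. All numbers are illustrative.
predicted_failure_prob = 0.02      # from the prediction phase (assumed)
prior_strength = 100               # pseudo-observations encoding prior confidence

alpha = predicted_failure_prob * prior_strength         # prior "failures"
beta = (1 - predicted_failure_prob) * prior_strength    # prior "successes"

test_runs, test_failures = 500, 6                       # system-test observations

alpha_post = alpha + test_failures
beta_post = beta + (test_runs - test_failures)

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean failure probability: {posterior_mean:.4f}")
```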


international symposium on software reliability engineering | 2007

Fault Prediction using Early Lifecycle Data

Yue Jiang; Bojan Cukic; Tim Menzies

The prediction of fault-prone modules in a software project has been the topic of many studies. In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely, we build predictive models using the metrics that characterize textual requirements. We compare the performance of requirements-based models against the performance of code-based models and models that combine requirement and code metrics. Using a range of modeling techniques and the data from three NASA projects, our study indicates that the early lifecycle metrics can play an important role in project management, either by pointing to the need for increased quality monitoring during the development or by using the models to assign verification and validation activities.
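One way to picture the comparison is to assemble requirements-only, code-only, and combined feature sets and cross-validate the same learner on each. The sketch below assumes hypothetical metric files and a requirement-to-module traceability key; it is not the study's actual data or its full set of modeling techniques.

```python
# Hedged sketch: comparing requirements-only, code-only, and combined feature sets
# for fault prediction. File names, columns, and the traceability join are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

req = pd.read_csv("requirement_metrics.csv")    # one row per requirement
code = pd.read_csv("module_metrics.csv")        # one row per module, with "req_id" and "defective"

combined = code.merge(req, on="req_id")         # link modules to their requirements
feature_sets = {
    "requirements": [c for c in req.columns if c != "req_id"],
    "code": [c for c in code.columns if c not in ("req_id", "defective")],
}
feature_sets["combined"] = feature_sets["requirements"] + feature_sets["code"]

for name, cols in feature_sets.items():
    scores = cross_val_score(RandomForestClassifier(random_state=0),
                             combined[cols], combined["defective"], cv=5, scoring="roc_auc")
    print(name, round(scores.mean(), 3))
```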


systems man and cybernetics | 2010

Estimating and Fusing Quality Factors for Iris Biometric Images

Nathan D. Kalka; Jinyu Zuo; Natalia A. Schmid; Bojan Cukic

Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is one of the most reliable biometrics in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this paper, we extend iris quality assessment research by analyzing the effect of various quality factors such as defocus blur, off-angle, occlusion/specular reflection, lighting, and iris resolution on the performance of a traditional iris recognition system. We further design a fully automated iris image quality evaluation block that estimates defocus blur, motion blur, off-angle, occlusion, lighting, specular reflection, and pixel counts. First, each factor is estimated individually, and then the second step fuses the estimated factors by using a Dempster-Shafer theory approach to evidential reasoning. The designed block is evaluated on three data sets: the Institute of Automation, Chinese Academy of Sciences (CASIA) 3.0 interval subset, the West Virginia University (WVU) non-ideal iris data set, and the Iris Challenge Evaluation (ICE) 1.0 data set made available by the National Institute of Standards and Technology (NIST). Considerable improvement in recognition performance is demonstrated when removing poor-quality images selected by our quality metric. The upper bound on the computational complexity required to evaluate the quality of a single image is O(n² log n).
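The fusion step can be illustrated with Dempster's rule of combination over the two-element frame {good, poor}, with the remaining mass assigned to uncertainty. The mass values below are invented and only two factors are combined, whereas the system fuses all estimated factors.

```python
# Hedged sketch: Dempster's rule of combination for two quality factors over the
# frame {good, poor}, with "either" holding the uncommitted (uncertain) mass.
# Mass values are invented for illustration.
def combine(m1, m2):
    keys = ("good", "poor", "either")
    combined = dict.fromkeys(keys, 0.0)
    conflict = 0.0
    for a in keys:
        for b in keys:
            mass = m1[a] * m2[b]
            if a == b:
                combined[a] += mass                   # same focal set: keep it
            elif "either" in (a, b):
                combined[a if b == "either" else b] += mass   # intersection with uncertainty
            else:
                conflict += mass                      # good vs. poor: conflicting evidence
    return {k: v / (1 - conflict) for k, v in combined.items()}

defocus = {"good": 0.7, "poor": 0.1, "either": 0.2}     # evidence from a blur estimator
occlusion = {"good": 0.6, "poor": 0.2, "either": 0.2}   # evidence from an occlusion estimator
print(combine(defocus, occlusion))
```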


international symposium on software reliability engineering | 2005

Error propagation in the reliability analysis of component based systems

Petar Popic; Dejan Desovski; Walid Abdelmoez; Bojan Cukic

Component-based development is gaining popularity in the software engineering community. The reliability of components affects the reliability of the system. Different models and theories have been developed to estimate system reliability given information about the system architecture and the quality of the components. These models almost always overlook a key attribute of component-based systems: the error propagation between components, which is not taken into account in the reliability prediction. We extend our previous work on Bayesian reliability prediction of component-based systems by introducing the error propagation probability into the model. We demonstrate the impact of error propagation in a case study of an automated personnel access control system. We conclude that error propagation may have a significant impact on the system reliability prediction and, therefore, future architecture-based models should not ignore it.
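A toy illustration of why propagation matters, assuming a serial pipeline in which a component's erroneous output reaches the system output only if every downstream stage passes it along: treating the propagation probabilities as 1 (ignoring them) changes the reliability estimate. The names and numbers are invented and this is not the paper's Bayesian model.

```python
# Hedged sketch: effect of error propagation probability on a serial pipeline's
# reliability estimate. "propagate" is the probability an incoming error passes
# through that stage to its output. All values are invented.
components = [
    {"name": "parse",  "rel": 0.99, "propagate": 0.9},
    {"name": "decide", "rel": 0.98, "propagate": 0.7},
    {"name": "act",    "rel": 0.97, "propagate": 1.0},
]

def system_reliability(comps, ignore_propagation=False):
    p_output_failure = 0.0
    for i, c in enumerate(comps):
        p_fail = 1.0 - c["rel"]
        reach = 1.0
        for downstream in comps[i + 1:]:              # error must survive every later stage
            reach *= 1.0 if ignore_propagation else downstream["propagate"]
        p_output_failure += p_fail * reach            # approximate union of rare failure events
    return 1.0 - p_output_failure

print("with propagation    :", round(system_reliability(components), 4))
print("ignoring propagation:", round(system_reliability(components, ignore_propagation=True), 4))
```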

Collaboration


Dive into Bojan Cukic's collaboration.

Top Co-Authors

Tim Menzies (North Carolina State University)
Yan Liu (West Virginia University)
Edgar Fuller (West Virginia University)
Sean Banerjee (West Virginia University)
Yue Jiang (Thomas Jefferson University)
Dejan Desovski (West Virginia University)