Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ayse Basar Bener is active.

Publication


Featured research published by Ayse Basar Bener.


IEEE Transactions on Software Engineering | 2012

Exploiting the Essential Assumptions of Analogy-Based Effort Estimation

Ekrem Kocaguneli; Tim Menzies; Ayse Basar Bener; Jacky Keung

Background: There are too many design options for software effort estimators. How can we best explore them all? Aim: We seek aspects of general principles of effort estimation that can guide the design of effort estimators. Method: We identified the essential assumption of analogy-based effort estimation, i.e., that the immediate neighbors of a project offer stable conclusions about that project. We test that assumption by generating a binary tree of clusters of effort data and comparing the variance of supertrees versus smaller subtrees. Results: For 10 data sets (from Coc81, Nasa93, Desharnais, Albrecht, ISBSG, and data from Turkish companies), we found: 1) the estimation variance of cluster subtrees is usually larger than that of cluster supertrees; 2) if analogy is restricted to the cluster trees with lower variance, then effort estimates have a significantly lower error (measured using MRE, AR, and Pred(25) with a Wilcoxon test at 95 percent confidence, compared to nearest neighbor methods that use neighborhoods of a fixed size). Conclusion: Estimation by analogy can be significantly improved by a dynamic selection of nearest neighbors, using only the project data from regions with small variance.
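
A minimal sketch of the idea this abstract describes, not the authors' exact algorithm: recursively bisect the training projects into a binary cluster tree, keep descending only while a child cluster reduces effort variance, and estimate a new project from its nearest neighbors inside the retained low-variance region. The dataset, feature layout, and parameters below are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_tree(X, y, min_size=4):
    """Recursively bisect projects (2-means) into a binary cluster tree.
    Each node stores the project indices it covers and their effort variance."""
    root = {"idx": np.arange(len(y)), "var": float(np.var(y)), "children": []}
    stack = [root]
    while stack:
        cur = stack.pop()
        idx = cur["idx"]
        if len(idx) < min_size:
            continue
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[idx]).labels_
        for lab in (0, 1):
            sub = idx[labels == lab]
            if len(sub) == 0:
                continue
            child = {"idx": sub, "var": float(np.var(y[sub])), "children": []}
            cur["children"].append(child)
            stack.append(child)
    return root

def estimate(x_new, X, y, tree, k=3):
    """Descend the tree while a child reduces effort variance, then return
    the median effort of the k nearest neighbors in that region."""
    node = tree
    while node["children"]:
        # pick the child whose centroid is closest to the new project
        best = min(node["children"],
                   key=lambda c: np.linalg.norm(X[c["idx"]].mean(axis=0) - x_new))
        if best["var"] >= node["var"]:      # stop: no variance reduction
            break
        node = best
    idx = node["idx"]
    dists = np.linalg.norm(X[idx] - x_new, axis=1)
    nearest = idx[np.argsort(dists)[:k]]
    return float(np.median(y[nearest]))

# toy data: rows are projects, columns are size/complexity features (hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = 100 + 50 * X[:, 0] + rng.normal(scale=5, size=40)   # effort in person-hours
tree = build_tree(X, y)
print(estimate(X[0], X, y, tree))
```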


Software Quality Journal | 2011

An industrial case study of classifier ensembles for locating software defects

Ayse Tosun Misirli; Ayse Basar Bener; Burak Turhan

As the application layer in embedded systems comes to dominate the hardware, ensuring software quality becomes a real challenge. Software testing is the most time-consuming and costly project phase, specifically in the embedded software domain. Misclassifying safe code as defective increases project cost and hence leads to low margins. In this research, we present a defect prediction model based on an ensemble of classifiers. We have collaborated with an industrial partner from the embedded systems domain and use our generic defect prediction models with data coming from embedded projects. The embedded systems domain is similar to mission-critical software in that the goal is to catch as many defects as possible; therefore, the expectation from a predictor is a very high probability of detection (pd). On the other hand, most embedded systems in practice are commercial products, and companies would like to lower their costs to remain competitive in their market by keeping their false alarm (pf) rates as low as possible and improving their precision rates. In our experiments, we used data collected from our industry partners as well as publicly available data. Our results reveal that an ensemble of classifiers significantly decreases pf to 15% while increasing precision by 43%, keeping the balance rate at 74%. The cost-benefit analysis of the proposed model shows that it is enough to inspect 23% of the code on local datasets to detect around 70% of the defects.
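
A hedged sketch of the kind of setup described above, not the authors' exact ensemble: a simple voting ensemble over standard classifiers, scored with probability of detection (pd), probability of false alarm (pf), precision, and the balance measure commonly used in this line of work. The data and classifier choices are placeholders.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def pd_pf_precision_balance(y_true, y_pred):
    """pd = recall on the defective class, pf = false alarm rate,
    balance = 1 - normalized distance to the ideal point (pd=1, pf=0)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    pd = tp / (tp + fn)
    pf = fp / (fp + tn)
    precision = tp / (tp + fp)
    balance = 1 - np.sqrt((0 - pf) ** 2 + (1 - pd) ** 2) / np.sqrt(2)
    return pd, pf, precision, balance

# placeholder data: static code metrics per module, label 1 = defective
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + rng.normal(scale=1.0, size=500) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("rf", RandomForestClassifier(random_state=1)),
                ("knn", KNeighborsClassifier())],
    voting="hard")
ensemble.fit(X_tr, y_tr)
print(pd_pf_precision_balance(y_te, ensemble.predict(X_te)))
```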


Proceedings of the 2009 ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development | 2009

Merits of using repository metrics in defect prediction for open source projects

Bora Caglayan; Ayse Basar Bener; Stefan Koch

Many corporate code developers are the beta testers of open source software. They continue testing until they are sure that they have a stable version to build their code on. In this respect, defect predictors play a critical role in identifying defective parts of the software. The performance of a defect predictor is determined by correctly finding defective parts of the software without giving any false alarms. High false alarm rates mean that testers/developers would inspect bug-free code unnecessarily. Therefore, in this research we focused on decreasing false alarm rates by using repository metrics. We conducted experiments on data sets from the Eclipse project. Our results showed that repository metrics decreased the false alarm rate on average from 32% to 23%, corresponding to up to 907 fewer files to inspect.
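
Repository metrics of the kind referred to above are, roughly, counts derived from version-control history. A minimal, hypothetical sketch of extracting such per-file metrics from a commit log follows; the exact metric set used in the paper is not reproduced here.

```python
from collections import defaultdict

# hypothetical commit log: (author, [files touched]) per commit
commits = [
    ("alice", ["ui/editor.java", "core/parser.java"]),
    ("bob",   ["core/parser.java"]),
    ("alice", ["core/parser.java", "core/lexer.java"]),
    ("carol", ["ui/editor.java"]),
]

revisions = defaultdict(int)    # number of commits touching each file
authors = defaultdict(set)      # distinct developers per file
co_changes = defaultdict(int)   # how often a file changes together with others

for author, files in commits:
    for f in files:
        revisions[f] += 1
        authors[f].add(author)
        co_changes[f] += len(files) - 1

# one row of repository metrics per file, ready to join with static code metrics
for f in sorted(revisions):
    print(f, revisions[f], len(authors[f]), co_changes[f])
```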


Innovative Applications of Artificial Intelligence | 2011

AI-Based Software Defect Predictors: Applications and Benefits in a Case Study

Ayse Tosun Misirli; Ayse Basar Bener; Resat Kale

Software defect prediction aims to reduce software testing effort by guiding testers through the defect-prone sections of software systems. Defect predictors are widely used in organizations to predict defects in order to save time and effort, as an alternative to other techniques such as manual code reviews. Using a defect prediction model in a real-life setting is difficult because it requires software metrics and defect data from past projects to predict the defect-proneness of new projects. It is, on the other hand, very practical because it is easy to apply, can detect defects in less time, and reduces testing effort. We built a learning-based defect prediction model for a telecommunication company over the course of one year. In this study, we briefly explain our model, present its payoff, and describe how we implemented the model in the company. Furthermore, we compare the performance of our model with that of another testing strategy applied in a pilot project that implemented a new process called the Team Software Process (TSP). Our results show that defect predictors can predict 87 percent of code defects, decrease inspection effort by 72 percent, and hence reduce post-release defects by 44 percent. Furthermore, they can be used as complementary tools for a new process implementation whose effects on testing activities are limited.


Software Engineering and Advanced Applications | 2006

Software Defect Identification Using Machine Learning Techniques

Evren Ceylan; F. Kutlubay; Ayse Basar Bener

Software engineering is a tedious job that involves people, tight deadlines and limited budgets. Delivering what the customer wants involves minimizing the defects in the programs; hence, it is important to establish quality measures early in the project life cycle. The main objective of this research is to analyze problems in software code and propose a model that will help catch those problems earlier in the project life cycle. Our proposed model uses machine learning methods: principal component analysis is used for dimensionality reduction, and decision trees, multilayer perceptrons and radial basis function networks are used for defect prediction. The experiments in this research are carried out with software metric datasets obtained from real-life projects of three big software companies in Turkey. We can say that the improved method we propose yields satisfactory results in terms of defect prediction.
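
A rough sketch of the pipeline the abstract outlines: PCA for dimensionality reduction, followed by a decision tree and a multilayer perceptron. An RBF network has no standard scikit-learn implementation, so an RBF-kernel SVM stands in here. Data and parameter choices are placeholders, not the study's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# placeholder software-metric data: one row per module, label 1 = defective
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=300) > 1).astype(int)

models = {
    "decision tree": DecisionTreeClassifier(random_state=2),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=2),
    "RBF-kernel SVM": SVC(kernel="rbf"),   # stand-in for an RBF network
}

for name, clf in models.items():
    # scale, reduce dimensionality with PCA, then classify
    pipe = make_pipeline(StandardScaler(), PCA(n_components=5), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```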


IEEE Transactions on Software Engineering | 2014

Bayesian Networks for Evidence-Based Decision-Making in Software Engineering

Ayse Tosun Misirli; Ayse Basar Bener

Recommendation systems in software engineering (SE) should be designed to integrate evidence into practitioners' experience. Bayesian networks (BNs) provide a natural statistical framework for evidence-based decision-making by incorporating an integrated summary of the available evidence and the associated uncertainty (of consequences). In this study, we follow the lead of computational biology and healthcare decision-making, and investigate the applications of BNs in SE in terms of 1) the main software engineering challenges addressed, 2) the techniques used to learn causal relationships among variables, 3) the techniques used to infer the parameters, and 4) the variable types used as BN nodes. We conduct a systematic mapping study to investigate each of these four facets and compare the current usage of BNs in SE with that in these two domains. Subsequently, we highlight the main limitations of the usage of BNs in SE and propose a hybrid BN to improve evidence-based decision-making in SE. In two industrial cases, we build sample hybrid BNs and evaluate their performance. The results of our empirical analyses show that hybrid BNs are powerful frameworks that combine expert knowledge with quantitative data. As researchers in SE become more aware of the underlying dynamics of BNs, the proposed models will also advance and naturally contribute to evidence-based decision-making.
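
A toy illustration of the kind of hybrid Bayesian network the paper argues for: one probability table estimated from project data, another elicited from experts, and inference by simple enumeration. The variables and numbers below are invented for illustration and are not the paper's models.

```python
import itertools

# Binary variables: C = high code complexity, T = thorough testing, D = defect escapes
p_c = {True: 0.3, False: 0.7}   # P(C): estimated from historical project data (hypothetical)
p_t = {True: 0.6, False: 0.4}   # P(T): elicited from experts (hypothetical)
# P(D=True | C, T): expert-elicited conditional probability table (hypothetical)
p_d_given = {
    (True,  True):  0.20, (True,  False): 0.60,
    (False, True):  0.05, (False, False): 0.25,
}

def posterior_defect(evidence):
    """P(D=True | evidence) by enumerating the unobserved parent variables."""
    num, den = 0.0, 0.0
    for c, t in itertools.product([True, False], repeat=2):
        if "C" in evidence and evidence["C"] != c:
            continue
        if "T" in evidence and evidence["T"] != t:
            continue
        joint = p_c[c] * p_t[t]
        num += joint * p_d_given[(c, t)]
        den += joint
    return num / den

print(posterior_defect({"C": True}))             # defect risk given high complexity
print(posterior_defect({"C": True, "T": True}))  # ...and thorough testing
```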


Software Engineering and Advanced Applications | 2011

Empirical Evaluation of Mixed-Project Defect Prediction Models

Burak Turhan; Ayse Tosun; Ayse Basar Bener

Defect prediction research mostly focuses on optimizing the performance of models that are constructed for isolated projects. On the other hand, recent studies try to utilize data across projects for building defect prediction models. We combine both approaches and investigate the effects of using mixed (i.e., within and cross) project data on defect prediction performance, which has not been addressed in previous studies. We conduct experiments to analyze models learned from mixed project data using ten proprietary projects from two different organizations. We observe that code-metric-based mixed project models yield only minor improvements in prediction performance, for a limited number of cases that are difficult to characterize. Based on existing studies and our results, we conclude that using cross project data for defect prediction is still an open challenge that should only be considered in environments where there is no local data collection activity, and that using data from other projects in addition to a project's own data does not pay off in terms of performance.
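
A schematic of the mixed-project setup under study: a small within-project sample from the target project is combined with cross-project data from other projects to form the training set, and the rest of the target project is held out for testing. Projects, features and the classifier here are placeholders, not the study's data or models.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

rng = np.random.default_rng(3)

def make_project(n, shift):
    """Placeholder project: code metrics plus a defect label."""
    X = rng.normal(loc=shift, size=(n, 8))
    y = (X[:, 0] > shift + 0.5).astype(int)
    return X, y

target_X, target_y = make_project(200, 0.0)
cross = [make_project(200, s) for s in (0.5, -0.5, 1.0)]   # other projects

# small within-project sample + all cross-project data = mixed training set
n_within = 50
X_within, y_within = target_X[:n_within], target_y[:n_within]
X_cross = np.vstack([X for X, _ in cross])
y_cross = np.concatenate([y for _, y in cross])
X_test, y_test = target_X[n_within:], target_y[n_within:]

for name, (Xtr, ytr) in {
    "within only": (X_within, y_within),
    "cross only":  (X_cross, y_cross),
    "mixed": (np.vstack([X_within, X_cross]), np.concatenate([y_within, y_cross])),
}.items():
    model = GaussianNB().fit(Xtr, ytr)
    print(name, round(f1_score(y_test, model.predict(X_test)), 2))
```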


International Conference on Software and Systems Process | 2011

Defect prediction using social network analysis on issue repositories

Serdar Biçer; Ayse Basar Bener; Bora Caglayan

People are the most important pillar of the software development process. It is critical to understand how they interact with each other and how these interactions affect the quality of the end product in terms of defects. In this research, we propose to include a new set of metrics, namely social network metrics on issue repositories, in predicting defects. Social network metrics on issue repositories have not been used before to predict the defect proneness of a software product. To validate our hypotheses, we used two datasets, development data of IBM Rational Team Concert (RTC) and Drupal, to conduct our experiments. The results of the experiments revealed that, compared to other sets of metrics such as churn metrics, using social network metrics on issue repositories either considerably decreases high false alarm rates without compromising detection rates, or considerably increases low prediction rates without compromising low false alarm rates. Therefore, we recommend that practitioners collect social network metrics on issue repositories, since people-related information is a strong indicator of past patterns in a given team.
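
A minimal sketch of how social network metrics can be derived from an issue repository: developers who comment on the same issue are linked, and standard centrality measures become per-developer features. The issue data and the particular metrics chosen here are hypothetical, not the paper's metric set.

```python
import itertools
import networkx as nx

# hypothetical issue repository: issue id -> developers who commented on it
issues = {
    "ISSUE-1": ["alice", "bob"],
    "ISSUE-2": ["bob", "carol", "dave"],
    "ISSUE-3": ["alice", "carol"],
}

# link every pair of developers that interacted on the same issue
G = nx.Graph()
for participants in issues.values():
    for u, v in itertools.combinations(participants, 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for dev in sorted(G.nodes):
    # per-developer scores that can be aggregated onto the files each developer touches
    print(dev, round(degree[dev], 2), round(betweenness[dev], 2))
```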


Software Quality Journal | 2013

Influence of confirmation biases of developers on software quality: an empirical study

Gul Calikli; Ayse Basar Bener

The thought processes of people have a significant impact on software quality, as software is designed, developed and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people to seek evidence that verifies a hypothesis rather than seeking evidence to falsify a hypothesis. Due to this confirmation bias, developers tend to perform unit tests to make their program work rather than to break their code. Therefore, confirmation bias is believed to be one of the factors that lead to an increased software defect density. In this research, we present a metric scheme that explores the impact of developers’ confirmation bias on software defect density. In order to estimate the effectiveness of our metric scheme in the quantification of confirmation bias within the context of software development, we performed an empirical study that addressed the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation.


Predictive Models in Software Engineering | 2010

Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance

Gul Calikli; Ayse Basar Bener

Background: During all levels of software testing, the goal should be to fail the code. However, software developers and testers are more likely to choose positive tests rather than negative ones due to a phenomenon called confirmation bias. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than refute them. In the literature, there are theories about the possible effects of confirmation bias on software development and testing. Due to the tendency towards positive tests, most of the software defects remain undetected, which in turn leads to an increase in software defect density. Aims: In this study, we analyze factors affecting confirmation bias in order to discover methods to circumvent it. The factors we investigate are experience in software development/testing and reasoning skills that can be gained through education. In addition, we analyze the effect of confirmation bias on software developer and tester performance. Method: In order to measure and quantify the confirmation bias levels of software developers/testers, we prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. These tests were conducted with 36 employees of a large-scale telecommunication company in Europe as well as 28 graduate computer engineering students of Bogazici University, for a total of 64 subjects. We evaluated the outcomes of these tests using the metrics we proposed, in addition to some basic methods inherited from the cognitive psychology literature. Results: The results showed that, regardless of experience in software development/testing, abilities such as logical reasoning and strategic hypothesis testing are the differentiating factors for low confirmation bias levels. Moreover, the analysis of the relationship between code defect density and the confirmation bias levels of software developers and testers showed that there is a direct correlation between confirmation bias and the defect proneness of the code. Conclusions: Our findings show that strong logical reasoning and hypothesis-testing skills are differentiating factors in software developer/tester performance in terms of defect rates. We recommend that companies focus on improving the logical reasoning and hypothesis-testing skills of their employees by designing training programs. As future work, we plan to replicate this study in other software development companies. Moreover, we will use confirmation bias metrics in addition to product and process metrics for software defect prediction. We believe that confirmation bias metrics would improve the prediction performance of the learning-based defect prediction models which we have been building for over a decade.

Collaboration


Dive into Ayse Basar Bener's collaboration.
