Gul Calikli
Boğaziçi University
Publications
Featured research published by Gul Calikli.
Software Quality Journal | 2013
Gul Calikli; Ayse Basar Bener
The thought processes of people have a significant impact on software quality, as software is designed, developed and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people to seek evidence that verifies a hypothesis rather than seeking evidence to falsify a hypothesis. Due to this confirmation bias, developers tend to perform unit tests to make their program work rather than to break their code. Therefore, confirmation bias is believed to be one of the factors that lead to an increased software defect density. In this research, we present a metric scheme that explores the impact of developers’ confirmation bias on software defect density. In order to estimate the effectiveness of our metric scheme in the quantification of confirmation bias within the context of software development, we performed an empirical study that addressed the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation.
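The abstract above does not reproduce the metric scheme itself, but the core idea of relating a per-developer confirmation bias score to the defect density of the code that developer worked on can be sketched in a few lines. The column names, data layout, and the choice of a Spearman rank correlation below are illustrative assumptions, not the paper's exact analysis.

# Hypothetical sketch: correlating a per-developer confirmation bias score
# with the defect density of the code each developer owns. The CSV layout
# and column names are assumptions made for illustration.
import pandas as pd
from scipy.stats import spearmanr

data = pd.read_csv("developer_bias_and_defects.csv")
# assumed columns: developer, bias_score, defects, kloc
data["defect_density"] = data["defects"] / data["kloc"]

rho, p_value = spearmanr(data["bias_score"], data["defect_density"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")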
predictive models in software engineering | 2010
Gul Calikli; Ayse Basar Bener
Background: During all levels of software testing, the goal should be to fail the code. However, software developers and testers are more likely to choose positive tests rather than negative ones due to a phenomenon called confirmation bias. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than refute them. The literature contains theories about the possible effects of confirmation bias on software development and testing. Due to the tendency towards positive tests, most software defects remain undetected, which in turn leads to an increase in software defect density. Aims: In this study, we analyze factors affecting confirmation bias in order to discover methods to circumvent it. The factors we investigate are experience in software development/testing and reasoning skills that can be gained through education. In addition, we analyze the effect of confirmation bias on software developer and tester performance. Method: In order to measure and quantify the confirmation bias levels of software developers/testers, we prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. These tests were administered to 36 employees of a large-scale telecommunications company in Europe and to 28 graduate computer engineering students of Bogazici University, for a total of 64 subjects. We evaluated the outcomes of these tests using the metrics we proposed, in addition to basic methods inherited from the cognitive psychology literature. Results: The results showed that, regardless of experience in software development/testing, abilities such as logical reasoning and strategic hypothesis testing are the differentiating factors for low confirmation bias levels. Moreover, an analysis of the relationship between code defect density and the confirmation bias levels of software developers and testers showed a direct correlation between confirmation bias and the defect proneness of the code. Conclusions: Our findings show that strong logical reasoning and hypothesis testing skills are differentiating factors in software developer/tester performance in terms of defect rates. We recommend that companies focus on improving the logical reasoning and hypothesis testing skills of their employees through training programs. As future work, we plan to replicate this study in other software development companies. Moreover, we will use confirmation bias metrics in addition to product and process metrics for software defect prediction. We believe that confirmation bias metrics will improve the prediction performance of the learning-based defect prediction models we have been building for over a decade.
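The two cognitive psychology tasks are not named in the abstract; a classic candidate for such instruments is Wason's selection task, and a toy scoring of it illustrates how confirmatory versus falsifying answer patterns could be turned into a number. The scoring rule below is a hypothetical illustration, not one of the metrics proposed in the study.

# Hypothetical scoring of a Wason-style selection task. The rule is:
# "if a card shows a vowel, the other side shows an even number", with
# cards E, K, 4, 7 on the table. Selecting E and 7 can falsify the rule;
# selecting 4 can only confirm it. This scheme is illustrative only.
def wason_score(selected_cards):
    falsifying = {"E", "7"}
    confirmatory_only = {"4"}
    selected = set(selected_cards)
    return len(selected & falsifying) - len(selected & confirmatory_only)

print(wason_score(["E", "4"]))  # typical confirmatory answer -> 0
print(wason_score(["E", "7"]))  # falsification strategy -> 2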
international conference on software engineering | 2010
Gul Calikli; Ayse Basar Bener; Berna Arslan
In this paper, we present a preliminary analysis of factors such as company culture, education, and experience on the confirmation bias levels of software developers and testers. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than refute them, and it therefore affects all software testing.
foundations of software engineering | 2012
Bora Caglayan; Ayse Tosun Misirli; Gul Calikli; Ayse Basar Bener; Turgay Aytac; Burak Turhan
We present an integrated measurement and defect prediction tool: Dione. Our tool enables organizations to measure, monitor, and control product quality through learning-based defect prediction. Similar existing tools either provide data collection and analytics or work only as a prediction engine. Therefore, companies need to deal with multiple tools with incompatible interfaces in order to deploy a complete measurement and prediction solution. Dione provides a fully integrated solution in which the data extraction, defect prediction, and reporting steps fit seamlessly. In this paper, we present the major functionality and architectural elements of Dione, followed by an overview of our demonstration.
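The abstract describes Dione as a pipeline in which data extraction, defect prediction, and reporting fit together. A minimal sketch of such a measure-predict-report flow is shown below; the stage names, interfaces, and the stand-in rule-based predictor are assumptions and do not reflect Dione's actual architecture or APIs.

# Minimal sketch of an integrated measure-predict-report pipeline.
# All names and the trivial predictor are placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    metrics: dict
    predicted_defective: bool = False

def extract(repository_path):
    # Placeholder for metric extraction from a code repository.
    return [Module("core/parser.c", {"loc": 480, "complexity": 22}),
            Module("util/log.c", {"loc": 90, "complexity": 3})]

def predict(modules, threshold=15):
    # Placeholder for a learning-based predictor; a simple rule stands in here.
    for m in modules:
        m.predicted_defective = m.metrics["complexity"] > threshold
    return modules

def report(modules):
    for m in modules:
        print(f'{m.name:20s} {"DEFECT-PRONE" if m.predicted_defective else "ok"}')

report(predict(extract("/path/to/repo")))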
international symposium on computer and information sciences | 2009
Gul Calikli; Ayse Tosun; Ayse Basar Bener; Melih Celik
The application of defect predictors in software development helps managers allocate resources such as time and effort more efficiently and cost-effectively to test certain sections of the code. In this research, we used a Naïve Bayes classifier (NBC) to construct our defect prediction framework. Our proposed framework uses hierarchical structure information about the source code of the software product to perform defect prediction at the functional method level and the source file level. We applied our model to the SoftLAB and Eclipse datasets, measured the performance of the proposed model, and applied a cost-benefit analysis. Our results reveal that source-file-level defect prediction improves verification effort while decreasing defect prediction performance in all datasets.
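Since the framework is built around a Naïve Bayes classifier applied at two granularities, a small sketch can make the setup concrete: train on method-level metrics, then aggregate the same metrics to source-file level and train again. The column names are assumptions; the public SoftLAB and Eclipse datasets use their own schemas.

# Hedged sketch of method-level vs. file-level defect prediction with a
# Naïve Bayes classifier. The CSV columns (file, loc, complexity, defective)
# are assumed for illustration.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

methods = pd.read_csv("method_metrics.csv")

# Method-level prediction
X_m = methods[["loc", "complexity"]]
y_m = methods["defective"]
print("method-level recall:",
      cross_val_score(GaussianNB(), X_m, y_m, cv=5, scoring="recall").mean())

# File-level prediction: aggregate method metrics per source file
files = methods.groupby("file").agg(loc=("loc", "sum"),
                                    complexity=("complexity", "max"),
                                    defective=("defective", "max"))
print("file-level recall:",
      cross_val_score(GaussianNB(), files[["loc", "complexity"]],
                      files["defective"], cv=5, scoring="recall").mean())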
The Art and Science of Analyzing Software Data | 2015
Ayse Basar Bener; Ayse Tosun Misirli; Bora Caglayan; Ekrem Kocaguneli; Gul Calikli
In this chapter, we share our experience and views on software data analytics in practice with a review of our previous work. In more than 10 years of joint research projects with industry, we have encountered similar data analytics patterns in diverse organizations and in different problem cases. We discuss these patterns following a “software analytics” framework: problem identification, data collection, descriptive statistics, and decision making. In the discussion, our arguments and concepts are built around our experiences of the research process in six different industry research projects in four different organizations. Methods: Spearman rank correlation, Pearson correlation, Kolmogorov-Smirnov test, chi-square goodness-of-fit test, t test, Mann-Whitney U test, Kruskal-Wallis analysis of variance, k-nearest neighbor, linear regression, logistic regression, naive Bayes, neural networks, decision trees, ensembles, nearest-neighbor sampling, feature selection, normalization.
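As a small, self-contained example of one of the methods listed above, the snippet below runs a Mann-Whitney U test comparing a code metric between defective and defect-free modules. The numbers are made up for illustration; only the choice of test comes from the chapter's method list.

# Example of one listed method: Mann-Whitney U test on a code metric
# measured for defective vs. defect-free modules (fabricated toy values).
from scipy.stats import mannwhitneyu

complexity_defective = [22, 31, 18, 40, 27, 35]
complexity_clean = [8, 12, 6, 15, 10, 9, 11]

stat, p = mannwhitneyu(complexity_defective, complexity_clean,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")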
Software Quality Journal | 2015
Gul Calikli; Ayse Basar Bener
Confirmation bias is defined as the tendency of people to seek evidence that verifies a hypothesis rather than evidence that falsifies it. Due to confirmation bias, defects may be introduced into a software product during the requirements analysis, design, implementation, and/or testing phases. For instance, testers may exhibit confirmatory behavior in the form of a tendency to make the code run rather than employing a strategic approach to make it fail. As a result, most of the defects introduced in the earlier phases of software development may be overlooked, leading to an increase in software defect density. In this paper, we quantify confirmation bias levels in terms of a single derived metric. The main focus of this paper, however, is the analysis of factors affecting the confirmation bias levels of software engineers. Identifying these factors can help project managers circumvent the negative effects of confirmation bias, and can guide the recruitment and effective allocation of software engineers. In this empirical study, we observed low confirmation bias levels among participants with logical reasoning and hypothesis testing skills.
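The paper's exact derivation of the single confirmation bias metric is not given in the abstract; one plausible way to reduce several bias indicators to one score, standardizing each indicator and averaging, is sketched below. The sub-metric names and the z-score combination are assumptions, not the derivation used in the paper.

# Hedged sketch: combining several confirmation bias indicators into a
# single derived score by z-standardizing and averaging them.
# Sub-metric names and the combination rule are assumptions.
import numpy as np

def derived_bias_score(sub_metrics):
    # sub_metrics: {name: list of values, one per participant}
    arrays = [np.asarray(v, dtype=float) for v in sub_metrics.values()]
    zscores = [(a - a.mean()) / a.std() for a in arrays]
    return np.mean(zscores, axis=0)

scores = derived_bias_score({
    "confirmatory_selections": [3, 1, 4, 2],
    "failed_eliminations": [2, 0, 3, 1],
})
print(scores)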
empirical software engineering and measurement | 2010
Gul Calikli; Ayse Basar Bener
In cognitive psychology, confirmation bias is defined as the tendency of people to verify hypotheses rather than refute them. During unit testing, software developers should aim to fail their code. However, due to confirmation bias, most defects might be overlooked, leading to an increase in software defect density. In this research, we empirically analyze the effect of software developers' confirmation bias on software defect density.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017
Blaine A. Price; Avelie Stuart; Gul Calikli; Ciaran McCormick; Vikram Mehta; Luke Hutton; Arosha K. Bandara; Mark Levine; Bashar Nuseibeh
Low-cost digital cameras in smartphones and wearable devices make it easy for people to automatically capture and share images as a visual lifelog. Inspired by a US campus-based study that explored the individual privacy behaviours of visual lifeloggers, we conducted a similar study on a UK campus; however, we also focused on the privacy behaviours of groups of lifeloggers. We argue for the importance of replicability, and we therefore built a publicly available toolkit, which includes the camera design, study guidelines, and source code. Our results show some sharing behaviour similar to the US-based study: people tried to preserve the privacy of strangers, but we found fewer bystander reactions despite using a more obvious camera. In contrast, we did not find a reluctance to share images of screens, but we did find that images of vices were shared less. Regarding privacy behaviours in groups of lifeloggers, we found that people were more willing to share images of people they were interacting with than of strangers, that lifelogging in groups could change what defines a private space, and that lifelogging groups establish different rules to manage privacy for those inside and outside the group.
ubiquitous computing | 2014
Gul Calikli; Mads Schaarup Andersen; Arosha K. Bandara; Blaine A. Price; Bashar Nuseibeh
We have been studying how ordinary people use personal informatics technologies for several years. In this paper we briefly describe our early studies, which influenced our design decisions in a recent pilot study that included junior doctors in a UK hospital. We discuss a number of failures in compliance and data collection as well as lessons learned.