Publication


Featured research published by Akinori Ihara.


Working Conference on Reverse Engineering | 2010

Predicting Re-opened Bugs: A Case Study on the Eclipse Project

Emad Shihab; Akinori Ihara; Yasutaka Kamei; Walid M. Ibrahim; Masao Ohira; Bram Adams; Ahmed E. Hassan; Ken-ichi Matsumoto

Bug fixing accounts for a large amount of the software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on the Eclipse project. We structure our study along 4 dimensions: 1) the work habits dimension (e.g., the weekday on which the bug was initially closed), 2) the bug report dimension (e.g., the component in which the bug was found), 3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and 4) the team dimension (e.g., the experience of the bug fixer). Our case study on the Eclipse Platform 3.0 project shows that the comment and description text, the time it took to fix the bug, and the component the bug was found in are the most important factors in determining whether a bug will be re-opened. Based on these dimensions we create decision trees that predict whether a bug will be re-opened after its closure. Using a combination of our dimensions, we can build explainable prediction models that can achieve 62.9% precision and 84.5% recall when predicting whether a bug will be re-opened.
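To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of training such a decision tree with scikit-learn; the data is synthetic and the feature names are hypothetical stand-ins for the paper's four dimensions.

# Sketch: decision tree for predicting re-opened bugs on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 7, n),        # weekday the bug was initially closed (work habits)
    rng.integers(0, 10, n),       # component id the bug was found in (bug report)
    rng.exponential(30.0, n),     # days taken to perform the initial fix (bug fix)
    rng.integers(0, 500, n),      # fixer experience, prior fixes (team)
])
y = rng.integers(0, 2, n)         # 1 = bug was later re-opened (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred), "recall:", recall_score(y_test, pred))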


International Conference on Software Engineering | 2015

The impact of mislabelling on the performance and interpretation of defect prediction models

Chakkrit Tantithamthavorn; Shane McIntosh; Ahmed E. Hassan; Akinori Ihara; Ken-ichi Matsumoto

The reliability of a prediction model depends on the quality of the data from which it was trained. Therefore, defect prediction models may be unreliable if they are trained using noisy data. Recent research suggests that randomly-injected noise that changes the classification (label) of software modules from defective to clean (and vice versa) can impact the performance of defect models. Yet, in reality, incorrectly labelled (i.e., mislabelled) issue reports are likely non-random. In this paper, we study whether mislabelling is random, and the impact that realistic mislabelling has on the performance and interpretation of defect models. Through a case study of 3,931 manually-curated issue reports from the Apache Jackrabbit and Lucene systems, we find that: (1) issue report mislabelling is not random; (2) precision is rarely impacted by mislabelled issue reports, suggesting that practitioners can rely on the accuracy of modules labelled as defective by models that are trained using noisy data; (3) however, models trained on noisy data typically achieve 56%-68% of the recall of models trained on clean data; and (4) only the metrics in the top influence rank of our defect models are robust to the noise introduced by mislabelling, suggesting that the less influential metrics of models that are trained on noisy data should not be interpreted or used to make decisions.


Proceedings of the Joint International and Annual ERCIM Workshops on Principles of Software Evolution (IWPSE) and Software Evolution (Evol) Workshops | 2009

An analysis method for improving a bug modification process in open source software development

Akinori Ihara; Masao Ohira; Ken-ichi Matsumoto

As open source software products have evolved over time to satisfy a variety of demands from an increasing number of users, they have generally become large and complex. Open source developers often face challenges in fixing the considerable number of bugs that are reported to a bug tracking system on a daily basis. As a result, the mean time to resolve bugs has become protracted. In order to reduce the mean time to resolve bugs, managers and leaders of open source projects need to identify and understand the bottlenecks of the bug modification process in their own projects. In this paper, we propose an analysis method that represents the bug modification process recorded in a bug tracking system as a state transition diagram and then calculates the amount of time required to transition between states. We conducted a case study using Firefox and Apache project data to confirm the usefulness of the analysis method. The results of the case study show that the method helped to reveal that both projects took a lot of time to verify the results of bug modifications by developers.
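A minimal sketch of the core computation, assuming bug histories are available as ordered lists of (state, timestamp) events; the state names and data are illustrative, not the exact states used in the paper.

# Sketch: average time spent on each state transition in a bug's lifecycle.
from collections import defaultdict
from datetime import datetime

bug_histories = {
    1001: [("NEW", "2009-01-01"), ("ASSIGNED", "2009-01-03"),
           ("RESOLVED", "2009-01-10"), ("VERIFIED", "2009-02-15")],
    1002: [("NEW", "2009-02-01"), ("RESOLVED", "2009-02-05"),
           ("VERIFIED", "2009-03-20")],
}

transition_days = defaultdict(list)
for events in bug_histories.values():
    for (s1, t1), (s2, t2) in zip(events, events[1:]):
        days = (datetime.fromisoformat(t2) - datetime.fromisoformat(t1)).days
        transition_days[(s1, s2)].append(days)

# Long average transitions (e.g., RESOLVED -> VERIFIED) point to bottlenecks.
for (src, dst), days in transition_days.items():
    print(f"{src} -> {dst}: {sum(days) / len(days):.1f} days on average")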


Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2013

Using Co-change Histories to Improve Bug Localization Performance

Chakkrit Tantithamthavorn; Akinori Ihara; Ken-ichi Matsumoto

A large open source software (OSS) project receives many bug reports on a daily basis. Bug localization techniques automatically pinpoint source code fragments that are relevant to a bug report, thus enabling faster correction. Even though many bug localization methods have been introduced, their performance is still limited. In this research, we improve on existing bug localization methods by taking co-change histories into account. We conducted experiments on two OSS datasets, the Eclipse SWT 3.1 project and the Android ZXing project, and validated our approach by comparing its effectiveness to the state-of-the-art approach Bug Locator. In the Eclipse SWT 3.1 project, our approach reliably identified the source code that should be fixed for a bug in 72.46% of the bugs, while Bug Locator identified only 51.02%. In the Android ZXing project, our approach identified 85.71%, while Bug Locator identified 60%.
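One plausible way co-change histories could be folded into a localization ranking is sketched below; the weighting scheme and file names are assumptions for illustration, not the authors' formula, and the text_score would come from a VSM-style ranker.

# Sketch: boost textual-similarity scores with co-change evidence.
from collections import Counter
from itertools import combinations

# Hypothetical commit history: files changed together in past bug-fix commits.
fix_commits = [
    {"Widget.java", "Display.java"},
    {"Widget.java", "Control.java"},
    {"Display.java", "Control.java", "Widget.java"},
]

co_change = Counter()
for commit in fix_commits:
    for a, b in combinations(sorted(commit), 2):
        co_change[(a, b)] += 1
        co_change[(b, a)] += 1

def boosted_score(file, text_score, suspected_files, alpha=0.3):
    """Blend textual similarity with co-change counts against already-suspected files."""
    co = sum(co_change[(file, other)] for other in suspected_files if other != file)
    return (1 - alpha) * text_score + alpha * co

suspected = {"Widget.java"}
for f, ts in [("Display.java", 0.6), ("Control.java", 0.4)]:
    print(f, round(boosted_score(f, ts, suspected), 2))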


Mining Software Repositories | 2015

A dataset of high impact bugs: manually-classified issue reports

Masao Ohira; Yutaro Kashiwa; Yosuke Yamatani; Hayato Yoshiyuki; Yoshiya Maeda; Nachai Limsettho; Keisuke Fujino; Hideaki Hata; Akinori Ihara; Ken-ichi Matsumoto

The importance of supporting test and maintenance activities in software development has been increasing as recent software systems have become large and complex. Although the field of Mining Software Repositories (MSR) offers many promising approaches to predicting, localizing, and triaging bugs, most of them do not consider the impact of each bug on users and developers but rather treat all bugs with equal weighting, except for a few studies on high impact bugs such as security, performance, and blocking bugs. To make MSR techniques more actionable and effective in practice, we need a deeper understanding of high impact bugs. In this paper we introduce our dataset of high impact bugs, which was created by manually reviewing four thousand issue reports in four open source projects (Ambari, Camel, Derby and Wicket).


Asia-Pacific Software Engineering Conference | 2013

Patch Reviewer Recommendation in OSS Projects

John Boaz Lee; Akinori Ihara; Akito Monden; Ken-ichi Matsumoto

In an Open Source Software (OSS) project, many developers contribute by submitting source code patches. To maintain the quality of the code, certain experienced developers review each patch before it can be applied or committed. Ideally, within a short amount of time after its submission, a patch is assigned to a reviewer and reviewed. In the real world, however, many large and active OSS projects evolve at a rapid pace and the core developers can get swamped with a large number of patches to review. Furthermore, since these core members may not always be available or may choose to leave the project, it can be challenging, at times, to find a good reviewer for a patch. In this paper, we propose a graph-based method to automatically recommend the most suitable reviewers for a patch. To evaluate our method, we conducted experiments to predict the developers who will apply new changes to the source code in the Eclipse project. Our method achieved an average recall of 0.84 for top-5 predictions and a recall of 0.94 for top-10 predictions.
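A minimal sketch of a graph-style recommender in this spirit: developers are linked to files they previously reviewed or committed, and candidates are ranked by how strongly they connect to the files touched by a new patch. The scoring and data below are illustrative assumptions, not the paper's exact graph model.

# Sketch: rank candidate reviewers by their past activity on the patched files.
from collections import defaultdict

review_history = [
    ("alice", "ui/Button.java"), ("alice", "ui/Dialog.java"),
    ("bob",   "core/Job.java"),  ("alice", "core/Job.java"),
    ("carol", "ui/Button.java"),
]

edges = defaultdict(lambda: defaultdict(int))   # developer -> file -> edge weight
for dev, path in review_history:
    edges[dev][path] += 1

def recommend(patch_files, top_k=5):
    scores = {dev: sum(files.get(p, 0) for p in patch_files)
              for dev, files in edges.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(recommend(["ui/Button.java", "ui/Dialog.java"]))  # alice ranks first here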


International Symposium on Software Reliability Engineering | 2013

Mining a change history to quickly identify bug locations: A case study of the Eclipse project

Chakkrit Tantithamthavorn; Rattamont Teekavanich; Akinori Ihara; Ken-ichi Matsumoto

In this study, we propose an approach that mines the change history to improve bug localization performance. The key idea is that a recently fixed file may be fixed again in the near future. We use a combination of textual features and change history mining to recommend source code files that are likely to be fixed for a given bug report. First, we adopt the Vector Space Model (VSM) to find relevant source code files that are textually similar to the bug report. Second, we analyze the change history to identify previously fixed files and estimate their fault proneness. Finally, we combine the two scores, textual similarity and fault proneness, for every source code file and recommend that developers examine the files with the highest scores. We evaluated our approach on 1,212 bug reports from the Eclipse Platform and Eclipse JDT. The experimental results show that our proposed approach can improve bug localization performance and effectively identify buggy files.
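The two-score idea can be sketched as follows: a TF-IDF (VSM) similarity between the bug report and each file, plus a fault-proneness score that decays with time since the file was last fixed. The decay function, weights, and file contents are assumptions for illustration, not the paper's exact scoring.

# Sketch: combine textual similarity with recency-based fault proneness.
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

files = {
    "Shell.java": "shell window open close dispose display",
    "Table.java": "table item column selection scroll",
}
days_since_last_fix = {"Shell.java": 10, "Table.java": 200}

bug_report = "application window fails to close and dispose resources"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(list(files.values()) + [bug_report])
text_scores = cosine_similarity(doc_matrix[-1], doc_matrix[:-1]).ravel()

for (name, _), text_score in zip(files.items(), text_scores):
    proneness = math.exp(-days_since_last_fix[name] / 30.0)  # recently fixed => higher
    combined = 0.7 * text_score + 0.3 * proneness            # illustrative weights
    print(f"{name}: text={text_score:.2f} proneness={proneness:.2f} combined={combined:.2f}")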


2012 Fourth International Workshop on Empirical Software Engineering in Practice | 2012

Locating Source Code to Be Fixed Based on Initial Bug Reports - A Case Study on the Eclipse Project

Phiradet Bangcharoensap; Akinori Ihara; Yasutaka Kamei; Ken-ichi Matsumoto

In most software development, a bug tracking system is used to improve software quality. Based on bug reports managed by the bug tracking system, triagers, who assign a bug to fixers, and the fixers themselves need to pinpoint the buggy files that should be fixed. However, if triagers do not know the details of the buggy file, it is difficult to select an appropriate fixer; if fixers can identify the buggy files, they can fix the bug in a short time. In this paper, we propose a method to quickly locate the buggy file in a source code repository using three approaches, text mining, code mining, and change history mining, to rank files that may be causing bugs. (1) The text mining approach ranks files based on the textual similarity between a bug report and the source code. (2) The code mining approach ranks files based on fault-prone module prediction using source code product metrics. (3) The change history mining approach ranks files based on fault-prone module prediction using change process metrics. Using Eclipse Platform project data, our proposed model achieves around 20% Top-1 accuracy, meaning that the buggy file is ranked first for 20% of bug reports. Furthermore, bug reports that consist of a short description and many specific words make it easier to identify and locate the buggy file.


2016 IEEE/ACM 1st International Workshop on Emotional Awareness in Software Engineering (SEmotion) | 2016

Understanding question quality through affective aspect in Q&A site

Jirayus Jiarpakdee; Akinori Ihara; Ken-ichi Matsumoto

Ever since the Internet became widely available, question and answer sites have been used as a knowledge sharing service. Users ask the community how to solve problems, hoping that someone will provide a solution. However, not every question is answered. Eric Raymond claimed that how a user asks a question is important. Existing studies have presented ways to assess question quality using textual, community-based or affective features. In this paper, we investigate how affective features are related to question quality, and we find that using affective features improves the prediction of question quality. Moreover, the Favorite Vote Count feature has the highest influence on our prediction models.


International Symposium on Software Reliability Engineering | 2016

A Study of Redundant Metrics in Defect Prediction Datasets

Jirayus Jiarpakdee; Chakkrit Tantithamthavorn; Akinori Ihara; Ken-ichi Matsumoto

Defect prediction models can help Software Quality Assurance (SQA) teams understand the past pitfalls that lead to defective modules. However, conclusions derived from defect prediction models without mitigating redundant metrics issues may be misleading. In this paper, we set out to investigate whether redundant metrics issues affect defect prediction studies, and the degree and causes of that redundancy. Through a case study of 101 publicly-available defect datasets of systems that span both proprietary and open source domains, we observe that (1) 10%-67% of the metrics in the studied defect datasets are redundant, and (2) the redundancy of metrics has to do with the aggregation functions of metrics. These findings suggest that researchers should be aware of redundant metrics prior to constructing a defect prediction model in order to maximize the internal validity of their studies.
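One common way to surface redundant metrics is to flag pairs whose rank correlation is very high; the sketch below uses Spearman correlation on synthetic metrics and is only illustrative, since the paper's redundancy analysis may use a different technique.

# Sketch: flag highly correlated (likely redundant) metric pairs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
loc = rng.exponential(200, 500)
metrics = {
    "loc": loc,
    "max_loc": loc * rng.uniform(0.8, 1.2, 500),   # an aggregation of the same base metric
    "complexity": rng.exponential(10, 500),
}

names = list(metrics)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, _ = spearmanr(metrics[names[i]], metrics[names[j]])
        if abs(rho) > 0.7:
            print(f"redundant pair: {names[i]} ~ {names[j]} (rho={rho:.2f})")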

Collaboration


Dive into Akinori Ihara's collaborations.

Top Co-Authors

Ken-ichi Matsumoto, Nara Institute of Science and Technology
Masao Ohira, Nara Institute of Science and Technology
Chakkrit Tantithamthavorn, Nara Institute of Science and Technology
Akito Monden, Nara Institute of Science and Technology
Hideaki Hata, Nara Institute of Science and Technology
Hirohiko Suwa, Nara Institute of Science and Technology
Daiki Fujibayashi, Nara Institute of Science and Technology