Publication


Featured research published by Yasunari Takagi.


international symposium on software reliability engineering | 2003

A Bayesian belief network for assessing the likelihood of fault content

Sousuke Amasaki; Yasunari Takagi; Osamu Mizuno; Tohru Kikuno

To predict software quality, we must consider various factors arising from the many activities that make up software development, factors that the software reliability growth model (SRGM) does not consider. In this paper, we propose a model that predicts the final quality of a software product using a Bayesian belief network (BBN). With a BBN, we can construct a prediction model that focuses on the structure of the software development process, explicitly representing complex relationships between metrics and handling uncertain metrics, such as residual faults in the software products. To evaluate the constructed model, we perform an empirical experiment based on metrics data collected from development projects in a certain company. The empirical evaluation confirms that the proposed model can predict the amount of residual faults, which the SRGM cannot handle.
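The abstract does not reproduce the network itself. As a minimal sketch of the BBN mechanics it describes, here is a hypothetical two-node network (the nodes and probabilities below are illustrative, not the paper's actual model), computing the marginal probability of high residual faults and a diagnostic posterior via Bayes' rule:

```python
# Minimal Bayesian-belief-network sketch with hypothetical nodes and
# probabilities (not the paper's actual model).

# Prior over review quality observed during development
p_review = {"good": 0.7, "poor": 0.3}

# Conditional probability table: P(residual faults high | review quality)
p_high_given_review = {"good": 0.1, "poor": 0.6}

# Marginal probability that residual faults are high (sum over the parent)
p_high = sum(p_review[r] * p_high_given_review[r] for r in p_review)

# Diagnostic inference: P(review was poor | residual faults are high)
p_poor_given_high = p_review["poor"] * p_high_given_review["poor"] / p_high

print(round(p_high, 3))             # marginal: 0.25
print(round(p_poor_given_high, 3))  # posterior: 0.72
```

Representing the process this way is what lets the model handle uncertain quantities, such as residual faults, that the SRGM cannot.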


Empirical Software Engineering | 2005

An Empirical Approach to Characterizing Risky Software Projects Based on Logistic Regression Analysis

Yasunari Takagi; Osamu Mizuno; Tohru Kikuno

During software development, projects often experience risky situations. If a project fails to detect such risks, it may exhibit confused behavior. In this paper, we propose a new scheme for characterizing the level of confusion exhibited by projects, based on an empirical questionnaire. First, we designed a questionnaire covering five project viewpoints: requirements, estimates, planning, team organization, and project management activities. Each viewpoint was assessed with questions that probe experience and knowledge of software risks. Secondly, we classified projects into "confused" and "not confused" using the resulting metrics data. Thirdly, we analyzed the relationship between responses to the questionnaire and the degree of confusion of the projects using logistic regression analysis, constructing a model that characterizes confused projects. Experimental results using actual project data show that 28 of 32 projects were characterized correctly, so we conclude that the characterization of confused projects was successful. Furthermore, we applied the constructed model to data from other projects in order to detect risky projects; 7 of 8 projects were classified correctly. We therefore conclude that the proposed scheme is also applicable to the detection of risky projects.
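The paper's fitted coefficients are not reproduced here; as a sketch of how such a logistic model scores a project, with made-up coefficients and questionnaire scores for the five viewpoints:

```python
import math

# Hypothetical coefficients for the five questionnaire viewpoints;
# illustrative values only, not the paper's fitted model.
coef = {"requirements": 0.8, "estimates": 0.6, "planning": 0.5,
        "organization": 0.4, "management": 0.7}
intercept = -3.0

def p_confused(scores):
    """Logistic model: probability that a project becomes 'confused'."""
    z = intercept + sum(coef[k] * v for k, v in scores.items())
    return 1.0 / (1.0 + math.exp(-z))

# A project whose questionnaire answers indicate high risk on most viewpoints
risky = {"requirements": 2, "estimates": 2, "planning": 1,
         "organization": 1, "management": 2}
print(round(p_confused(risky), 3))
```

Classifying a project as "confused" when the modeled probability crosses a threshold (e.g. 0.5) gives the kind of hit rate the abstract reports.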


international symposium on software reliability engineering | 1995

Analysis of review's effectiveness based on software metrics

Yasunari Takagi; Toshifumi Tanaka; Naoki Niihara; Keishi Sakamoto; Shinji Kusumoto; Tohru Kikuno

The paper statistically analyzes the relationships between review and software quality and between review and productivity, using software metrics from 36 actual projects executed at the OMRON Corporation from 1992 to 1994. By examining, for each project, the relationship between review effort and field quality (the number of faults after delivery) and the relationship between the number of faults detected in review and field quality, we reasoned that: (1) greater review effort increases field quality (decreases the number of faults after delivery); (2) source code review is more effective than design review at increasing field quality; (3) if more than 10% of total design and programming effort is spent on review, quite stable field quality can be achieved. We observed no relevant effect on productivity (LOC/staff-month) for review rates of up to 20%. Based on this analysis, we recommend a review effort of 15% as a suitable guideline for our software project management.


international conference on software engineering | 1998

Toward computational support for software process improvement activities

Keishi Sakamoto; Kumiyo Nakakoji; Yasunari Takagi; Naoki Niihara

Software organizations and projects need guidance on how to improve the software process, not just guidelines on what to improve. Several surveys demonstrate that the Capability Maturity Model (CMM) and ISO-9000 provide only the latter. We report an in-depth analysis of a seventeen-month effort in software process improvement (SPI) at OMRON Corporation. The goal of the analysis was to identify issues and challenges of SPI and to design a step-wise, practical method that avoids such problems. The major problems we found include the lack of a shared goal among stakeholders, insufficient understanding of the current progress of SPI efforts, and underutilization of the large amount of complex information generated during SPI. We present a method for software organizations and projects to deal with these problems, and argue for a knowledge-based SPI support system based on the method.


international symposium on software reliability engineering | 2002

On estimating testing effort needed to assure field quality in software development

Osamu Mizuno; Eijiro Shigematsu; Yasunari Takagi; Tohru Kikuno

In practical software development, software quality is generally evaluated by the number of residual defects. To keep the number of residual defects within a permissible value, too much effort is often assigned to software testing. We develop a statistical model that determines the amount of testing effort needed to assure field quality. The model explicitly includes design, review, and test (including debug) activities. Firstly, we construct a linear multiple regression model that clarifies the relationship between the number of residual defects and the effort assigned to design, review, and test activities, and we confirm the model's applicability by statistical analysis of actual project data. Next, we derive an equation from the model to determine the test effort; its parameters are the permissible number of residual defects, the design effort, and the review effort. The equation thus determines the test effort needed to keep residual defects within the permissible value. Finally, we conduct an experimental evaluation using actual project data and show the usefulness of the equation.
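The abstract's idea of inverting a fitted regression to solve for test effort can be sketched as follows; the coefficients below are hypothetical, not the paper's fitted values:

```python
# Fitted linear model (hypothetical coefficients):
#   residual = b0 + b1*design + b2*review + b3*test
# Negative slopes encode "more effort -> fewer residual defects".
b0, b1, b2, b3 = 50.0, -0.8, -1.5, -2.0

def required_test_effort(permissible, design, review):
    """Invert the regression so predicted residual defects equal the
    permissible value, given the design and review effort already spent."""
    return (permissible - b0 - b1 * design - b2 * review) / b3

effort = required_test_effort(permissible=5, design=10, review=8)
print(round(effort, 2))  # person-months of testing needed: 12.5
```

Note how spending more on review lowers the test effort the equation demands, which is exactly the trade-off the model is meant to expose.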


Software Quality Journal | 2005

A New Challenge for Applying Time Series Metrics Data to Software Quality Estimation

Sousuke Amasaki; Takashi Yoshitomi; Osamu Mizuno; Yasunari Takagi; Tohru Kikuno

In typical software development, a software reliability growth model (SRGM) is applied in each testing activity to determine when to finish testing. However, there are cases in which the SRGM does not work correctly; that is, it sometimes judges a poor-quality product to be of good quality. To tackle this problem, we focused on the trend of the time series data of software defects across successive testing phases and tried to estimate software quality from this trend. First, we investigate the characteristics of the time series data on detected faults by observing how the number of detected faults changes. Using the rank correlation coefficient, the data are classified into four kinds of trends. Next, with the intention of estimating software quality, we investigate the relationship between the trends of the time series data and software quality, where software quality is defined as the number of faults detected during the six months after shipment. Finally, we find a relationship between the trends and metrics data collected in the software design phase. Using logistic regression, we show statistically that two review metrics from the design and coding phases can determine the trend.
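The rank-correlation step can be sketched in a few lines. The trend labels and thresholds below are illustrative assumptions (the paper classifies data into four trends; this sketch shows only the mechanism):

```python
# Classify the trend of per-phase fault counts using Spearman's rank
# correlation against the phase index (no-ties formula). Thresholds and
# labels are illustrative, not the paper's classification rule.

def spearman(xs, ys):
    n = len(xs)
    order = lambda v: sorted(range(n), key=lambda i: v[i])
    rx, ry = [0] * n, [0] * n
    for r, i in enumerate(order(xs)):
        rx[i] = r
    for r, i in enumerate(order(ys)):
        ry[i] = r
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def trend(faults):
    rho = spearman(list(range(len(faults))), faults)
    if rho >= 0.5:
        return "increasing"
    if rho <= -0.5:
        return "decreasing"
    return "flat"

print(trend([30, 22, 15, 9, 4]))  # fault counts shrinking across phases
```

A steadily decreasing fault count across phases is the classic reliability-growth pattern; the interesting cases for the paper are the series that deviate from it.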


international conference on software engineering | 2000

Characterization of risky projects based on project managers' evaluation

Osamu Mizuno; Tohru Kikuno; Yasunari Takagi; Keishi Sakamoto

During the process of software development, senior managers often find indications that projects are risky and take appropriate actions to recover them from this dangerous status. If senior managers fail to detect such risks, it is possible that such projects may collapse completely. In this paper, we propose a new scheme for characterizing risky projects based on an evaluation by the project manager. To acquire the relevant data, we first designed a questionnaire covering five viewpoints within the projects: requirements, estimations, team organization, planning capability, and project management activities. Each viewpoint consisted of a number of concrete questions. We then analyzed the project managers' responses to the questionnaires by applying logistic regression analysis; that is, we determined the coefficients of the logistic model from a set of questionnaire responses. Experimental results using actual project data in Company A showed that 27 of 32 projects were predicted correctly. We therefore expect the proposed characterization scheme to be a first step toward predicting which projects are risky at an early phase of development.


Journal of Software Engineering and Applications | 2009

Explanation vs Performance in Data Mining: A Case Study with Predicting Runaway Projects

Tim Menzies; Osamu Mizuno; Yasunari Takagi; Tohru Kikuno

Often, the explanatory power of a learned model must be traded off against model performance. In the case of predicting runaway software projects, we show that the twin goals of high performance and good explanatory power are achievable after applying a variety of data mining techniques (discretization, feature subset selection, rule covering algorithms). This result is a new high-water mark in predicting runaway projects. Measured in terms of precision, the new model is as good as can be expected for our data. Other methods might outperform our result (e.g. by generating a smaller, more explainable model), but no other method could outperform the precision of our learned model.


international conference on software engineering | 1998

Analyzing effects of cost estimation accuracy on quality and productivity

Osamu Mizuno; Tohru Kikuno; Katsumi Inagaki; Yasunari Takagi; Keishi Sakamoto

This paper discusses the effects of the estimation accuracy of software development cost on both the quality of the delivered code and the productivity of the development team. Estimation accuracy is measured by the metric RE (relative error); quality and productivity are measured by the metrics FQ (field quality) and TP (team productivity). Using actual data on thirty-one projects at a certain company, the following are verified by correlation analysis and statistical hypothesis testing: there is a high correlation between the faithfulness of the development plan to standards and the value of RE (the correlation coefficient between them is -0.60), and both FQ and TP differ significantly between projects with -10% < RE < +10% and projects with RE ≥ +10% (at a significance level of 0.05).
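The RE metric and the two project groups compared in the analysis can be sketched as follows; the project figures are made up for illustration:

```python
# RE (relative error) of a cost estimate, and the grouping used in the
# analysis: accurate estimates (-10% < RE < +10%) vs. overruns (RE >= +10%).

def relative_error(actual, estimated):
    """RE = (actual - estimated) / estimated, as a fraction."""
    return (actual - estimated) / estimated

# (name, actual cost, estimated cost) in person-months; hypothetical data
projects = [
    ("P1", 105, 100),
    ("P2", 140, 100),
    ("P3", 96, 100),
]

accurate = [p for p, a, e in projects if -0.10 < relative_error(a, e) < 0.10]
overrun  = [p for p, a, e in projects if relative_error(a, e) >= 0.10]
print(accurate, overrun)  # ['P1', 'P3'] ['P2']
```

The paper's hypothesis tests compare FQ and TP between exactly these two buckets.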


Information & Software Technology | 2000

Statistical analysis of deviation of actual cost from estimated cost using actual project data

Osamu Mizuno; Tohru Kikuno; Katsumi Inagaki; Yasunari Takagi; Keishi Sakamoto

This paper analyzes the association of the deviation of actual cost (measured in person-months) from estimated cost with the quality and productivity of software development projects. Although the obtained results themselves may not be new from an academic point of view, they can motivate developers to join process improvement activities in a software company and thus become a driving force for promoting process improvement. We show that if a project is performed faithfully under a well-organized project plan (i.e. the plan is first constructed according to standards of good writing, and the project is then managed and controlled to meet the plan), the deviation of actual cost from estimated cost becomes small. Next, we show statistically that projects with a small deviation in the cost estimate tend to achieve high quality in the final products and high productivity in the development teams. The analysis draws extensively on actual data from 37 projects at a certain company.

Collaboration


Dive into Yasunari Takagi's collaboration.

Top Co-Authors

Osamu Mizuno
Kyoto Institute of Technology

Sousuke Amasaki
Okayama Prefectural University