Jeff Tian
Southern Methodist University
Publications
Featured research published by Jeff Tian.
IEEE Transactions on Software Engineering | 2001
Chaitanya Kallepalli; Jeff Tian
Statistical testing and reliability analysis can be used effectively to assure quality for Web applications. To support this strategy, we extract Web usage and failure information from existing Web logs. The usage information is used to build models for statistical Web testing. The related failure information is used to measure the reliability of Web applications and the potential effectiveness of statistical Web testing. We applied this approach to analyze some actual Web logs. The results demonstrated the viability and effectiveness of our approach.
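The abstract describes extracting usage and failure information from existing Web logs to assess reliability. As a minimal sketch of that idea, the snippet below parses access-log lines and applies the Nelson input-domain reliability model R = 1 - f/n, where n is the number of logged requests and f the number of failures. Treating 4xx/5xx status codes as failures is an assumption for illustration; the paper derives its failure data from the site's actual error logs.

```python
import re

# Match the request string and the HTTP status code in a Common Log
# Format line, e.g. ... "GET /index.html HTTP/1.0" 200 2326
LOG_LINE = re.compile(r'"\S+ \S+ \S+" (\d{3})')

def nelson_reliability(log_lines):
    """Estimate R = 1 - f/n from access-log lines (None if no requests)."""
    n = f = 0
    for line in log_lines:
        m = LOG_LINE.search(line)
        if not m:
            continue
        n += 1
        if m.group(1)[0] in "45":   # assumption: 4xx/5xx counts as a failure
            f += 1
    return 1 - f / n if n else None

sample = [
    '1.2.3.4 - - [10/Oct/2000:13:55:36] "GET /index.html HTTP/1.0" 200 2326',
    '1.2.3.4 - - [10/Oct/2000:13:55:37] "GET /missing.html HTTP/1.0" 404 209',
    '1.2.3.4 - - [10/Oct/2000:13:55:38] "GET /page.html HTTP/1.0" 200 1042',
    '1.2.3.4 - - [10/Oct/2000:13:55:39] "GET /app HTTP/1.0" 500 512',
]
print(nelson_reliability(sample))  # 2 failures over 4 requests -> 0.5
```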
IEEE Transactions on Software Engineering | 2004
Jeff Tian; Sunita Rudraraju; Zhao Li
We characterize usage and problems for Web applications, evaluate their reliability, and examine the potential for reliability improvement. Based on the characteristics of Web applications and the overall Web environment, we classify Web problems and focus on the subset of source content problems. Using information about Web accesses, we derive various measurements that can characterize Web site workload at different levels of granularity and from different perspectives. These workload measurements, together with failure information extracted from recorded errors, are used to evaluate the operational reliability for source contents at a given Web site and the potential for reliability improvement. We applied this approach to the Web sites www.seas.smu.edu and www.kde.org. The results demonstrated the viability and effectiveness of our approach.
IEEE Software | 2004
Akif Günes Koru; Jeff Tian
Open source projects have resulted in numerous high-quality, widely used products. Understanding the defect-handling strategies such projects employ can help us use the publicly accessible defect data from these projects to provide valuable quality-improvement feedback and to better understand the defect characteristics for a wider variety of software products. We conducted a survey to understand defect handling in selected open source projects and compared the particular approaches taken in different projects. We focused on defect handling instead of the broader quality assurance activities other researchers have previously reported. Our results provided quantitative evidence about the current practice of defect handling in an important subset of open source projects.
IEEE Transactions on Software Engineering | 2005
Akif Günes Koru; Jeff Tian
Identifying change-prone modules can enable software developers to take focused preventive actions that can reduce maintenance costs and improve quality. Some researchers observed a correlation between change proneness and structural measures, such as size, coupling, cohesion, and inheritance measures. However, some of our colleagues in industry found that the modules with the highest measurement values were not the most troublesome ones, and our previous study of six large-scale industrial products confirmed this. To obtain additional evidence, we identified and compared high-change modules and modules with the highest measurement values in two large-scale open-source products, Mozilla and OpenOffice, and we characterized the relationship between them. Contrary to common intuition, we found through formal hypothesis testing that the top modules in change-count rankings and the modules with the highest measurement values were different. In addition, we observed that high-change modules had fairly high places in measurement rankings, but not the highest places. The accumulated findings from these two open-source products, together with our previous similar findings for six closed-source products, should provide practitioners with additional guidance in identifying the change-prone modules.
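The core comparison in this study is between the top modules in change-count rankings and the top modules by structural measurement. A minimal sketch of that kind of top-k ranking comparison, with made-up module data (the paper uses Mozilla and OpenOffice measurements and formal hypothesis testing):

```python
# name: (change_count, size_in_LOC) -- illustrative data only
modules = {
    "parser.c": (120,  4200),
    "layout.c": ( 95,  9800),   # large, but not among the most changed
    "dom.c":    ( 60,  3100),
    "net.c":    (150,  5100),
    "gfx.c":    ( 30, 12000),   # largest module, few changes
}

def top_k(data, key_index, k):
    """Return the set of k modules ranked highest on the chosen attribute."""
    ranked = sorted(data, key=lambda m: data[m][key_index], reverse=True)
    return set(ranked[:k])

top_changed = top_k(modules, 0, 2)   # highest change counts
top_largest = top_k(modules, 1, 2)   # highest structural measure (size)
overlap = top_changed & top_largest
print(top_changed, top_largest, overlap)  # disjoint here: empty overlap
```

With this toy data the two top-2 sets do not intersect, mirroring the paper's finding that the most-changed modules and the highest-measurement modules differ.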
Journal of Systems and Software | 2003
A. Güneş Koru; Jeff Tian
We analyzed a large set of complexity metrics and defect data collected from six large-scale software products, two from IBM and four from Nortel Networks, to compare and characterize the similarities and differences between the high defect (HD) and high complexity modules. We observed that the most complex modules often have an acceptable quality and HD modules are not typically the most complex ones. This observation was statistically validated through hypothesis testing. Our analyses also indicated that the clusters of modules with the highest defects are usually those whose complexity rankings are slightly below the most complex ones. These results should help us better understand the complexity behavior of HD modules and guide future software development and research efforts.
IEEE Transactions on Software Engineering | 1995
Jeff Tian; Marvin V. Zelkowitz
A formal model of program complexity developed earlier by the authors is used to derive evaluation criteria for program complexity measures. This is then used to determine which measures are appropriate within a particular application domain. A set of rules for determining feasible measures for a particular application domain is given, and an evaluation model for choosing among alternative feasible measures is presented. This model is used to select measures from the classification trees produced by the empirically guided software development environment of R.W. Selby and A.A. Porter, and early experiments show it to be an effective process.
Annals of Software Engineering | 1995
Joel Troster; Jeff Tian
This paper analyzes the quality of a large-scale legacy software system using selected metrics. Quality measurements include defect information collected during product development and in-field operation. Other software metrics include measurements on various product and process attributes, including design, size, change, and complexity. Preliminary analyses revealed the high degree of skew in our data and a weak correlation between defects and software metrics. Tree-based models were then used to uncover relationships between defects and software metrics, and to identify high-defect modules together with their associated measurement characteristics. As results presented in tree forms are natural to the decision process and are easy to understand, tree-based modeling is shown to be suitable for change solicitation and useful in guiding remedial actions for quality improvement.
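The tree-based models mentioned here recursively partition modules on metric thresholds to isolate high-defect groups. The snippet below sketches just the first step of such a model: finding the single metric threshold that best splits modules into low- and high-defect groups by minimizing within-group squared error. The data and the choice of cyclomatic complexity as the metric are illustrative assumptions, not values from the paper.

```python
def sse(ys):
    """Sum of squared errors of a group around its mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(metric, defects):
    """Return the metric threshold minimizing total within-group SSE."""
    pairs = sorted(zip(metric, defects))
    best = None
    for i in range(1, len(pairs)):
        left = [d for _, d in pairs[:i]]
        right = [d for _, d in pairs[i:]]
        cost = sse(left) + sse(right)
        thresh = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint threshold
        if best is None or cost < best[0]:
            best = (cost, thresh)
    return best[1]

complexity = [3, 5, 8, 12, 20, 22, 25, 30]   # e.g. cyclomatic complexity
defect_fix = [0, 1, 0, 2, 9, 8, 11, 10]      # defect fixes per module
t = best_split(complexity, defect_fix)
print(t)  # separates the few-defect modules from the high-defect ones
```

A full tree-based model would apply this split recursively within each resulting group; one split already illustrates how measurement characteristics identify high-defect modules.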
IEEE Transactions on Software Engineering | 1995
Jeff Tian; Peng Lu; Joe Palma
The paper studies practical reliability measurement and modeling for large commercial software systems based on test execution data collected during system testing. The application environment and the goals of reliability assessment were analyzed to identify appropriate measurement data. Various reliability growth models were used on failure data normalized by test case executions to track testing progress and provide reliability assessment. Practical problems in data collection, reliability measurement and modeling, and modeling result analysis were also examined. The results demonstrated the feasibility of reliability measurement in a large commercial software development environment and provided a practical comparison of various reliability measurements and models under such an environment.
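The key idea above is normalizing failure data by test case executions rather than by calendar time. A minimal sketch, with illustrative numbers (not data from the paper): compute a per-period failure rate as failures divided by test runs, whose decline over successive test periods indicates reliability growth.

```python
# (test_runs, failures) per test period -- illustrative data only
periods = [
    (100, 12),
    (150, 10),
    (200,  6),
    (250,  3),
]

def failure_rates(periods):
    """Failures normalized by test case executions, one rate per period."""
    return [f / runs for runs, f in periods]

rates = failure_rates(periods)
print([round(r, 3) for r in rates])   # declining rate -> reliability growth
print(1 - rates[-1])                  # Nelson-style estimate, last period
```

A reliability growth model would then be fitted to such a normalized series to predict future reliability; the raw declining rate already tracks testing progress.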
IEEE Transactions on Software Engineering | 2002
Jeff Tian
This paper presents a new approach to software reliability modeling by grouping data into clusters of homogeneous failure intensities. This series of data clusters associated with different time segments can be directly used as a piecewise linear model for reliability assessment and problem identification, which can produce meaningful results early in the testing process. The dual model fits traditional software reliability growth models (SRGMs) to these grouped data to provide long-term reliability assessments and predictions. These models were evaluated in the testing of two large software systems from IBM. Compared with existing SRGMs fitted to raw data, our models are generally more stable over time and produce more consistent and accurate reliability assessments and predictions.
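As a sketch of the grouping step described above, the snippet below walks through per-period failure intensities and starts a new segment whenever an intensity departs from the current segment's mean by more than a relative tolerance. This greedy rule and the data are illustrative stand-ins for the clustering used in the paper; the segment means then form the piecewise model.

```python
def segment(intensities, tol=0.5):
    """Group consecutive intensities into roughly homogeneous segments."""
    segments = [[intensities[0]]]
    for x in intensities[1:]:
        mean = sum(segments[-1]) / len(segments[-1])
        if abs(x - mean) <= tol * mean:   # close to current plateau: extend it
            segments[-1].append(x)
        else:                             # intensity shifted: new segment
            segments.append([x])
    return segments

# Failure intensities (failures per unit of testing) for 8 test periods:
data = [5.0, 4.6, 5.2, 2.1, 1.9, 2.0, 0.4, 0.5]
segs = segment(data)
print(segs)  # three plateaus: high, medium, low intensity
print([round(sum(s) / len(s), 2) for s in segs])  # piecewise-constant model
```

Each segment mean serves as the constant failure intensity for that time span, usable for early reliability assessment before a full growth model can be fitted.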
Journal of Systems and Software | 1998
Jeff Tian; Joel Troster
This paper compares the quality characteristics of a large legacy software system and a new one. Defect fixes applied to specific modules during testing are used as the direct quality metric. Indirect quality indicators used in this paper include software metrics of various product and process attributes, including design, size, change, and complexity. We analyze and compare the measurement results by examining their individual distributions, the correlations between defects and quality indicators, and tree-based models linking defects to quality indicators. In both these systems, most of the defects are found to be concentrated in relatively few high-defect modules, which points to the need for appropriate risk identification techniques so that defect removal effort can be focused on those high-defect modules for effective quality improvement. In addition, defects in the legacy system are more closely related to change and data complexity metrics, while defects in the new system are more closely related to various design metrics. These results demonstrate different measurement characteristics for these two types of software systems, and suggest that different quality analysis and improvement methods may be more appropriate and effective for different kinds of software systems.
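The defect-concentration observation above can be made concrete with a small sketch: rank modules by defect-fix count and measure what share of all defects falls in the top 20% of modules. The counts below are made up for illustration; the paper reports this concentration for real systems.

```python
# Per-module defect-fix counts, sorted in decreasing order (illustrative)
defects = [34, 21, 12, 6, 4, 3, 2, 1, 1, 0]

top = max(1, len(defects) // 5)          # top 20% of modules
share = sum(defects[:top]) / sum(defects)
print(round(share, 2))                   # share of all defects in that top slice
```

With this toy data, roughly two thirds of all defects sit in one fifth of the modules, which is why risk identification that targets the few high-defect modules pays off.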