Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jun-Ru Chang is active.

Publication


Featured research published by Jun-Ru Chang.


Journal of Systems and Software | 2010

Design and analysis of GUI test-case prioritization using weight-based methods

Chin-Yu Huang; Jun-Ru Chang; Yung-Hsin Chang

Testing the correctness of a GUI-based application is more complex than testing a conventional code-based application. In addition to testing the underlying code of the GUI application, the large space of possible event combinations in GUI-input sequences requires numerous test cases to ensure adequate GUI testing. Running all GUI test cases and then fixing all found bugs may be time-consuming and may delay project completion. Hence, it is important to run the test cases that uncover the most faults as early as possible in the testing process. Test-case prioritization has been proposed and used in recent years because it can improve the rate of fault detection during the testing phase. However, few studies have discussed the problem of GUI test-case prioritization. In this paper, we propose a weighted event-flow graph for assigning weights to otherwise unweighted GUI test cases and ranking them by weight score. The weight scores can either be ranked from high to low or be reordered using dynamically adjusted scores. Finally, three experiments are performed, and the results show that the adjusted-weight method achieves a better fault-detection rate.
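A minimal sketch of the weight-based ranking idea, in Python; the event names, weights, and the simple sum-of-weights score are hypothetical illustrations, not the paper's weighted event-flow graph formulas.

```python
# Illustrative weight-based GUI test-case prioritization. The events and their
# weights below are made up; the paper derives weights from a weighted
# event-flow graph and also supports dynamic score adjustment.

# Hypothetical weights for GUI events.
event_weight = {"open_menu": 3, "click_save": 5, "type_text": 2, "close_dialog": 1}

def weight_score(test_case):
    """Score a test case (a sequence of GUI events) by summing event weights."""
    return sum(event_weight.get(event, 0) for event in test_case)

def prioritize(test_cases):
    """Rank test cases from highest to lowest weight score."""
    return sorted(test_cases, key=weight_score, reverse=True)

suite = [
    ["open_menu", "type_text", "click_save"],
    ["open_menu", "close_dialog"],
    ["type_text", "type_text", "click_save"],
]
for tc in prioritize(suite):
    print(weight_score(tc), tc)
```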


Computer Software and Applications Conference | 2010

Design and Analysis of Cost-Cognizant Test Case Prioritization Using Genetic Algorithm with Test History

Yu-Chi Huang; Chin-Yu Huang; Jun-Ru Chang; Tsan-Yuan Chen

During software development, regression testing is usually used to assure the quality of modified software. Test-case prioritization techniques schedule test cases for regression testing in an order intended to increase their effectiveness with respect to some performance goal. The most common goal, the rate of fault detection, assumes that all test-case costs and fault severities are uniform; however, these factors usually vary. To produce a more satisfactory order, a cost-cognizant metric that incorporates varying test-case costs and fault severities has been proposed. In this paper, we propose a cost-cognizant test-case prioritization technique based on the use of historical records and a genetic algorithm. We run a controlled experiment to evaluate the proposed technique's effectiveness. Experimental results indicate that our proposed technique frequently yields a higher Average Percentage of Faults Detected per Cost (APFDc). The results also show that the proposed technique remains useful in terms of APFDc when all test-case costs and fault severities are uniform.
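The APFDc metric mentioned here has a standard formulation in the prioritization literature; the sketch below computes it under that standard definition, with made-up costs, severities, and detection data.

```python
# Cost-cognizant Average Percentage of Faults Detected (APFDc), in its standard
# form: each fault contributes its severity times the cost of the tests from
# its first detecting test onward (minus half that test's cost), normalized by
# total cost times total severity. All numbers below are illustrative.

def apfdc(test_costs, fault_severities, first_detect):
    """test_costs[j]: cost of the j-th test in the prioritized order (0-based).
    fault_severities[i]: severity of fault i.
    first_detect[i]: 0-based index of the first test detecting fault i."""
    total_cost = sum(test_costs)
    total_severity = sum(fault_severities)
    numerator = 0.0
    for severity, tf in zip(fault_severities, first_detect):
        numerator += severity * (sum(test_costs[tf:]) - 0.5 * test_costs[tf])
    return numerator / (total_cost * total_severity)

# Example: four tests, two faults; fault 0 is found by test 0, fault 1 by test 2.
print(f"APFDc = {apfdc([2.0, 1.0, 3.0, 2.0], [5.0, 1.0], [0, 2]):.3f}")
```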


Computer Software and Applications Conference | 2007

A Study of Enhanced MC/DC Coverage Criterion for Software Testing

Jun-Ru Chang; Chin-Yu Huang

The coverage criteria of verification techniques play an important role in software development and testing. The goal is to reduce the size of test suites to save time and to ensure that all statements (or conditions) are covered. We can use these criteria to track test progress, assess the current situation, predict emerging events, and so on, and thus take the necessary actions upon early indications that testing activity is falling behind. Modified condition/decision coverage (MC/DC) was proposed by NASA in 1994 and has been widely adopted and discussed since then. By definition, the MC/DC criterion requires showing that each condition in a decision independently affects the decision's outcome. However, we find that selected test cases sometimes cannot satisfy the original definition of MC/DC for some Boolean expressions. In this paper, we propose a simple but useful method that considers all conditions of a Boolean expression when applying MC/DC. Specifically, our approach uses an n-cube graph and Gray code to implement the MC/DC criterion. We further show how to use the proposed method to distinguish necessary from redundant test cases. Finally, a practical regression testing tool, TASTE (Tool for Automatic Software regression TEsting), is presented, and an example is given to illustrate the working process in detail.
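The n-cube view lends itself to a short sketch: two truth assignments that are Gray-code neighbours (differing in exactly one condition) form an MC/DC independence pair for that condition whenever the decision outcome flips across the edge. The code below is an illustrative reconstruction of that idea, not the TASTE tool; the example decision is made up.

```python
# Find MC/DC independence pairs by walking the edges of the n-cube of truth
# assignments: each edge joins two assignments differing in one condition.
from itertools import product

def mcdc_pairs(decision, n):
    """For each condition index, collect pairs of assignments that toggle only
    that condition and flip the decision outcome."""
    pairs = {i: [] for i in range(n)}
    for bits in product([False, True], repeat=n):
        for i in range(n):
            if not bits[i]:  # visit each cube edge exactly once
                flipped = bits[:i] + (True,) + bits[i + 1:]
                if decision(*bits) != decision(*flipped):
                    pairs[i].append((bits, flipped))
    return pairs

# Example decision: (A and B) or C
for cond, plist in mcdc_pairs(lambda a, b, c: (a and b) or c, 3).items():
    print(f"condition {cond}: {len(plist)} independence pair(s)")
```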


Asia-Pacific Software Engineering Conference | 2005

Integrating generalized Weibull-type testing-effort function and multiple change-points into software reliability growth models

Chu-Ti Lin; Chin-Yu Huang; Jun-Ru Chang

In modern society, software is everywhere, and we need it to be reliable. In practice, during software development, software reliability assessment can greatly help managers understand the effectiveness of consumed testing effort and deploy testing resources. From the 1970s to 2000, many software reliability growth models (SRGMs) were proposed for estimating the reliability growth of software products. In this paper, the concept of multiple change points is incorporated into a Weibull-type testing-effort-dependent SRGM, because the consumption of testing resources may change at certain moments. The performance and application of the proposed models are demonstrated on a real data set. The experimental results show that the models give excellent performance in failure prediction. We also discuss optimal release-time problems based on reliability requirements and cost criteria.
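For orientation, these are the standard textbook forms of a Weibull-type testing-effort function and an effort-dependent mean value function, in conventional SRGM notation (not necessarily the paper's exact parameterization):

```latex
% Cumulative and current Weibull-type testing effort:
W(t) = \alpha\left(1 - e^{-\beta t^{\gamma}}\right), \qquad
w(t) = \frac{dW(t)}{dt} = \alpha \beta \gamma\, t^{\gamma-1} e^{-\beta t^{\gamma}}

% With change points \tau_1 < \tau_2 < \dots, the parameters (\beta, \gamma)
% may take different values on each interval [\tau_k, \tau_{k+1}).
% A typical testing-effort-dependent mean value function is
m(t) = a\left(1 - e^{-r\,[W(t) - W(0)]}\right),
% where a is the expected total number of faults and r the fault-detection rate.
```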


IEEE Region 10 Conference | 2005

Reliability and Sensitivity Analysis of Embedded Systems with Modular Dynamic Fault Trees

Hsiang-Kai Lo; Chin-Yu Huang; Yung-Ruei Chang; Wei-Chih Huang; Jun-Ru Chang

Fault-tree theory has been used for years because it provides a concise representation of the failure behavior of general non-repairable fault-tolerant systems. However, traditional fault trees lack accuracy when modeling the dynamic failure behavior of systems with fault-recovery processes. One solution to this problem is behavioral decomposition: a system is divided into several dynamic or static modules, and each module is analyzed separately using BDDs or Markov chains. In this paper, we present a decomposition scheme in which independent subtrees of a dynamic module are detected and solved hierarchically, saving the computation time of solving Markov chains without unacceptable loss of accuracy when assessing component sensitivities. Finally, we present a software toolkit that implements our enhanced methodology.
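As an illustration of solving a dynamic module as a Markov chain, here is a minimal sketch for a cold-spare gate, a classic dynamic-fault-tree construct; the failure rates are made-up values, and forward Euler integration stands in for a proper CTMC solver such as a toolkit would use.

```python
# Cold-spare module as a 3-state continuous-time Markov chain:
# state 0 = primary up (spare dormant), state 1 = spare up, state 2 = failed.
# A cold spare accrues no failure rate while dormant.

lam_primary, lam_spare = 1e-3, 2e-3   # failures per hour (illustrative values)
p = [1.0, 0.0, 0.0]                   # initial state probabilities
dt, t_mission = 0.1, 1000.0

t = 0.0
while t < t_mission:
    # Kolmogorov forward equations for the three states.
    d0 = -lam_primary * p[0]
    d1 = lam_primary * p[0] - lam_spare * p[1]
    d2 = lam_spare * p[1]
    p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]
    t += dt

print(f"module unreliability at t = {t_mission:g} h: {p[2]:.4f}")
```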


Computer Software and Applications Conference | 2009

A Study of Applying Extended PIE Technique to Software Testability Analysis

Tsung-Han Tsai; Chin-Yu Huang; Jun-Ru Chang

During the software development process, data gained from the testing phase can help developers predict software reliability more precisely. However, the testing stage takes more and more effort due to the growing complexity of software. How to build software that can be tested efficiently has become an important topic, in addition to enhancing and developing new testing methods, and research on software testability has developed in various directions. In the past, a dynamic technique for estimating program testability, called propagation, infection, and execution (PIE) analysis, was proposed. Previous studies show that PIE analysis can complement software testing. However, this technique requires considerable computational overhead to estimate the testability of software components. In this paper, we propose an Extended PIE (EPIE) technique to accelerate traditional PIE analysis, based on generating group testability as a substitute for location testability. The technique can be separated into three steps: breaking a program into blocks, dividing blocks into groups, and marking target statements. We developed a tool called ePAT (extended PIE Analysis Tool) to identify the locations to be analyzed. The experimental results show that the number of analyzed locations can be effectively decreased while the estimated testability values remain acceptable and useful.
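To make the "execution" part of PIE concrete, the sketch below estimates by random sampling the probability that a location is reached; the program under analysis and its input distribution are made up for illustration (infection and propagation would be estimated with analogous mutation-based sampling).

```python
# Monte Carlo estimate of a location's execution probability, the E in PIE.
import random

def program_under_test(x, y, trace):
    if x > 0 and y > x:       # the location of interest sits in this branch
        trace.append("hit")   # record that the location executed
        return y - x
    return x + y

N = 10_000
executed = 0
for _ in range(N):
    trace = []
    program_under_test(random.uniform(-10, 10), random.uniform(-10, 10), trace)
    executed += bool(trace)

print(f"estimated execution probability: {executed / N:.3f}")
```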


International Journal of Systems Science | 2012

Comparative performance evaluation of applying extended PIE technique to accelerate software testability analysis

Jun-Ru Chang; Chin-Yu Huang; Chao-Jung Hsu; Tsung-Han Tsai

The rapid development of technology has given hardware systems high performance and reliability; building on this, software engineers can focus on making software more convenient and highly reliable. To reach this goal, the testing stage of the software development life cycle takes more time and effort due to the growing complexity of the software. How to build software that can be tested efficiently has become an important topic, in addition to enhancing and developing new testing methods. Thus, research on software testability has been conducted and various methods have been developed. In the past, a dynamic technique for estimating program testability, called propagation, infection and execution (PIE) analysis, was proposed. Previous studies have shown that PIE analysis can complement software testing. However, this method requires considerable computational overhead to estimate the testability of software components. In this article, we propose an extended PIE (EPIE) method to accelerate conventional PIE analysis, based on generating group testability as a substitute for statement testability. Our proposed method can be systematically separated into three steps: breaking a program into blocks, dividing the blocks into groups, and marking target statements. Experiments and evaluations with the Siemens suite, together with a cost-effectiveness analysis, clearly show that the number of analysed statements can be effectively decreased while the calculated testability values remain acceptable.
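The grouping step that distinguishes EPIE from statement-level PIE can be sketched as follows: statements in the same straight-line region execute together, so one testability estimate can stand in for all of them. The block-detection rule here is a toy heuristic (start a new group at each branch or loop keyword), not the paper's algorithm.

```python
# Toy grouping of statements into straight-line blocks so that one execution
# estimate per group replaces per-statement estimates.
BRANCH_KEYWORDS = ("if ", "elif ", "else", "while ", "for ")

def group_into_blocks(statements):
    blocks, current = [], []
    for stmt in statements:
        if stmt.lstrip().startswith(BRANCH_KEYWORDS):
            if current:
                blocks.append(current)
            current = [stmt]          # a branch point starts a new group
        else:
            current.append(stmt)
    if current:
        blocks.append(current)
    return blocks

code = ["x = read()", "y = x * 2", "if y > 10:", "    y -= 10", "print(y)"]
for i, block in enumerate(group_into_blocks(code)):
    print(f"group {i}: {block}")
```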


Industrial Engineering and Engineering Management | 2010

An investigation into whether the NHPP framework is suitable for software reliability prediction and estimation

Chu-Ti Lin; Kai-Wei Tang; Jun-Ru Chang; Chin-Yu Huang

Many software reliability growth models (SRGMs) based on the non-homogeneous Poisson process (NHPP) framework have been proposed for estimating the reliability growth of products. However, while NHPP models have received considerable attention, some concerns regarding the properties of the NHPP framework have been raised and discussed. The two main concerns are (I) the variance of an NHPP-based model grows as software testing proceeds, which has been considered an unreasonable property for describing software failure behavior; and (II) the numbers of failures observed in disjoint time intervals are independent, which may fail to hold in the early stage of software testing. With regard to Concern (I), we justify the validity of the NHPP framework from a mathematical perspective, i.e., via the process of parameter estimation for NHPP models. Regarding Concern (II), we explain why NHPP SRGMs are still workable from a practical perspective. As a result, we believe the NHPP framework still has merit.
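The two concerns can be stated precisely in standard NHPP notation; the following is the textbook formulation, not material from the paper itself:

```latex
% With mean value function m(t), the fault count N(t) of an NHPP is Poisson:
\Pr\{N(t) = k\} = \frac{[m(t)]^{k}}{k!}\, e^{-m(t)}, \qquad
\mathbb{E}[N(t)] = \operatorname{Var}[N(t)] = m(t).
% Since m(t) is nondecreasing, the variance grows as testing proceeds
% (Concern I); and increments over disjoint intervals,
% N(t_2) - N(t_1) and N(t_4) - N(t_3) with t_1 < t_2 \le t_3 < t_4,
% are independent by definition (Concern II).
```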


International Conference on Management of Innovation and Technology | 2008

Delay analysis of Admission Control mechanism for supporting QoS in 802.11e

Ching-Hsun Chen; Chin-Yu Huang; Jun-Ru Chang; Jenn-Wei Lin

The IEEE 802.11e standard was developed for QoS provisioning in WLANs. Admission control is a mechanism that controls which stations are served in a WLAN, and in practice its evaluation plays an important role in QoS support. The goal of this paper is to calculate WLAN delays under admission control; the delay calculation can be used to guarantee the QoS of VoIP and video streams. The delay is divided into three parts: propagation delay, queueing delay of packets in stations or APs, and delay due to collisions. For VoIP and video streams, the queueing delay is more important than the other two, so we focus on its calculation. Simulations and numerical results are provided and discussed in detail; they show that the proposed method calculates packet delays in WLANs more accurately.
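The three-part decomposition can be written generically as below; the M/M/1 expression for the queueing term is an illustrative stand-in, not the paper's model:

```latex
% Total packet delay as described in the abstract:
D_{\mathrm{total}} = D_{\mathrm{prop}} + D_{\mathrm{queue}} + D_{\mathrm{coll}}
% For the dominant queueing part, a simple M/M/1 approximation with arrival
% rate \lambda and service rate \mu gives the mean waiting time
D_{\mathrm{queue}} \approx \frac{\rho}{\mu(1 - \rho)}, \qquad
\rho = \frac{\lambda}{\mu} < 1.
```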


IEEE Region 10 Conference | 2006

Application-Specific RISC Architecture for ITU-T G.729 Decoding Processing

Chien-Hsuan Wu; Chin-Yu Huang; Jun-Ru Chang

In this paper, we propose a new application-specific RISC processor architecture to overcome the performance bottleneck of traditional RISC processors executing complex speech-decoding applications, in particular the ITU-T G.729 decoder. By introducing enhanced hardware components into the ARM v4 processor architecture, new application-specific instructions are implemented, and the performance of the new architecture is about 52% better than that of the original ARM v4 architecture.
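One way to read the 52% figure (our interpretation of the abstract, not an equation stated in it) is as a speedup over the baseline execution time:

```latex
% If performance improves by 52%, the speedup over the ARM v4 baseline is
S = \frac{T_{\mathrm{ARMv4}}}{T_{\mathrm{ASIP}}} = 1.52
\quad\Longrightarrow\quad
T_{\mathrm{ASIP}} = \frac{T_{\mathrm{ARMv4}}}{1.52} \approx 0.66\, T_{\mathrm{ARMv4}}.
```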

Collaboration


Dive into Jun-Ru Chang's collaborations.

Top Co-Authors

Chin-Yu Huang (National Tsing Hua University)
Chao-Jung Hsu (National Tsing Hua University)
Chu-Ti Lin (National Chiayi University)
Tsung-Han Tsai (National Tsing Hua University)
Hsiang-Kai Lo (National Tsing Hua University)
Jenn-Wei Lin (Fu Jen Catholic University)
Kai-Wei Tang (National Chiayi University)
Wei-Chih Huang (National Tsing Hua University)
Yu-Chi Huang (National Tsing Hua University)