Publications


Featured research published by Xiaoxia Ren.


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2004

Chianti: a tool for change impact analysis of Java programs

Xiaoxia Ren; Fenil Shah; Frank Tip; Barbara G. Ryder; Ophelia C. Chesley

This paper reports on the design and implementation of Chianti, a change impact analysis tool for Java that is implemented in the context of the Eclipse environment. Chianti analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then reported in terms of affected (regression or unit) tests whose execution behavior may have been modified by the applied changes. For each affected test, Chianti also determines a set of affecting changes that were responsible for the test's modified behavior. This latter step of isolating the changes that induce the failure of one specific test from those changes that only affect other tests can be used as a debugging technique in situations where a test fails unexpectedly after a long editing session. We evaluated Chianti on a year (2002) of CVS data from M. Ernst's Daikon system, and found that, on average, 52% of Daikon's unit tests are affected. Furthermore, each affected unit test, on average, is affected by only 3.95% of the atomic changes. These findings suggest that our change impact analysis is a promising technique for assisting developers with program understanding and debugging.
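
The core mapping the abstract describes (atomic changes to affected tests, then, per affected test, back to affecting changes) can be pictured in a few lines. The sketch below is a hypothetical simplification, assuming each test's call graph is available as a set of method names; it is not Chianti's actual implementation.

```java
import java.util.*;

/** A minimal, hypothetical sketch of Chianti-style change impact analysis. */
public class ChangeImpactSketch {

    /** An atomic change, e.g. "AM" (added method) or "CM" (changed method body). */
    record AtomicChange(String kind, String affectedEntity) {}

    /**
     * A test is "affected" if the call graph of its run on the ORIGINAL program
     * touches an entity that some atomic change added, deleted, or modified.
     */
    static Set<String> affectedTests(Map<String, Set<String>> oldCallGraphs,
                                     List<AtomicChange> changes) {
        Set<String> changedEntities = new HashSet<>();
        for (AtomicChange c : changes) changedEntities.add(c.affectedEntity());

        Set<String> affected = new LinkedHashSet<>();
        for (var entry : oldCallGraphs.entrySet()) {
            if (!Collections.disjoint(entry.getValue(), changedEntities)) {
                affected.add(entry.getKey());
            }
        }
        return affected;
    }

    /**
     * The "affecting changes" for one affected test are the atomic changes whose
     * target entity appears in the call graph of the test on the EDITED program.
     */
    static List<AtomicChange> affectingChanges(Set<String> newCallGraphOfTest,
                                               List<AtomicChange> changes) {
        List<AtomicChange> result = new ArrayList<>();
        for (AtomicChange c : changes) {
            if (newCallGraphOfTest.contains(c.affectedEntity())) result.add(c);
        }
        return result;
    }

    public static void main(String[] args) {
        List<AtomicChange> changes = List.of(
                new AtomicChange("CM", "Stack.push(Object)"),   // changed method body
                new AtomicChange("AM", "Stack.peek()"));        // added method

        Map<String, Set<String>> oldGraphs = Map.of(
                "testPush", Set.of("Stack.push(Object)", "Stack.size()"),
                "testSize", Set.of("Stack.size()"));

        System.out.println(affectedTests(oldGraphs, changes));           // [testPush]
        System.out.println(affectingChanges(
                Set.of("Stack.push(Object)", "Stack.peek()"), changes)); // both changes affect it
    }
}
```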


Foundations of Software Engineering | 2006

Finding failure-inducing changes in Java programs using change classification

Maximilian Stoerzer; Barbara G. Ryder; Xiaoxia Ren; Frank Tip

Testing and code editing are interleaved activities during program development. When tests fail unexpectedly, the changes that caused the failure(s) are not always easy to find. We explore how change classification can focus programmer attention on failure-inducing changes by automatically labeling changes Red, Yellow, or Green, indicating the likelihood that they have contributed to a test failure. We implemented our change classification tool JUnit/CIA as an extension to the JUnit component within Eclipse, and evaluated its effectiveness in two case studies. Our results indicate that change classification is an effective technique for finding failure-inducing changes.
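
One plausible reading of the Red/Yellow/Green idea is sketched below, assuming the change impact analysis already reports which tests each change affects. The rules and names are an illustrative simplification, not JUnit/CIA's actual classifier.

```java
import java.util.*;

/** A minimal, hypothetical sketch of Red/Yellow/Green change classification. */
public class ChangeClassificationSketch {

    enum Color { RED, YELLOW, GREEN }

    /**
     * Classify each change by the outcomes of the tests it affects:
     * GREEN  - affects only passing tests (unlikely to be failure-inducing),
     * RED    - every test it affects fails (highly suspicious),
     * YELLOW - affects a mix of passing and failing tests.
     * This is a simplified stand-in for the paper's classifier, not its exact rules.
     */
    static Map<String, Color> classify(Map<String, Set<String>> affectedTestsPerChange,
                                       Set<String> failingTests) {
        Map<String, Color> colors = new LinkedHashMap<>();
        for (var e : affectedTestsPerChange.entrySet()) {
            Set<String> tests = e.getValue();
            long failing = tests.stream().filter(failingTests::contains).count();
            Color c = (failing == 0) ? Color.GREEN
                    : (failing == tests.size()) ? Color.RED
                    : Color.YELLOW;
            colors.put(e.getKey(), c);
        }
        return colors;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> impact = Map.of(
                "change#1", Set.of("testA", "testB"),
                "change#2", Set.of("testB"),
                "change#3", Set.of("testC"));
        Set<String> failing = Set.of("testB");
        classify(impact, failing).forEach((change, color) ->
                System.out.println(change + " -> " + color));
        // change#2 -> RED, change#1 -> YELLOW, change#3 -> GREEN (in some order)
    }
}
```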


International Conference on Software Engineering | 2005

Chianti: a change impact analysis tool for Java programs

Xiaoxia Ren; Barbara G. Ryder; Maximilian Stoerzer; Frank Tip

Chianti is a change impact analysis tool for Java that is implemented in the context of the Eclipse environment. Chianti analyzes two versions of a Java program, decomposes their difference into a set of atomic changes, and computes a partial order of interdependences among these changes. Change impact is then reported in terms of affected (regression or unit) tests whose execution behavior may have been modified by the applied changes. For each affected test, Chianti also determines a set of affecting changes that were responsible for the test's modified behavior. This latter step of isolating failure-inducing changes for one specific test from irrelevant changes can be used as a debugging technique in situations where a test fails unexpectedly after a long editing session.
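
The partial order of interdependences determines which other atomic changes must accompany any selected change. A minimal sketch of closing a selection under such prerequisites is shown below; the change labels and the map-based encoding are hypothetical, not Chianti's data model.

```java
import java.util.*;

/** Hypothetical sketch of the prerequisite closure over atomic-change dependences. */
public class ChangeDependenceSketch {

    /**
     * Given a dependence relation "change -> changes it requires" (e.g. a method-body
     * change requires the addition of the method's declaring class), compute all changes
     * that must be applied together with the selected ones so the result still compiles.
     */
    static Set<String> prerequisiteClosure(Set<String> selected,
                                           Map<String, Set<String>> requires) {
        Set<String> closure = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>(selected);
        while (!work.isEmpty()) {
            String c = work.pop();
            if (closure.add(c)) {
                work.addAll(requires.getOrDefault(c, Set.of()));
            }
        }
        return closure;
    }

    public static void main(String[] args) {
        // AC = add class, AM = add method, CM = change method body (illustrative labels)
        Map<String, Set<String>> requires = Map.of(
                "CM Foo.bar()", Set.of("AM Foo.bar()"),
                "AM Foo.bar()", Set.of("AC Foo"));
        System.out.println(prerequisiteClosure(Set.of("CM Foo.bar()"), requires));
        // [CM Foo.bar(), AM Foo.bar(), AC Foo]
    }
}
```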


IEEE Transactions on Software Engineering | 2006

Identifying Failure Causes in Java Programs: An Application of Change Impact Analysis

Xiaoxia Ren; Ophelia C. Chesley; Barbara G. Ryder

During program maintenance, a programmer may make changes that enhance program functionality or fix bugs in code. Then, the programmer usually will run unit/regression tests to prevent invalidation of previously tested functionality. If a test fails unexpectedly, the programmer needs to explore the edit to find the failure-inducing changes for that test. Crisp uses results from Chianti, a tool that performs semantic change impact analysis, to allow the programmer to examine those parts of the edit that affect the failing test. Crisp then builds a compilable intermediate version of the program by adding a programmer-selected partial edit to the original code, augmenting the selection as necessary to ensure compilation. The programmer can reexecute the test on the intermediate version in order to locate the exact reasons for the failure by concentrating on the specific changes that were applied. In nine initial case studies on pairs of versions from two real Java programs, Daikon and the Eclipse jdt compiler, we were able to use Crisp to identify the failure-inducing changes for all but 1 of 68 failing tests. On average, 33 changes were found to affect each failing test (of the 67), but only 1-4 of these changes were found to be actually failure-inducing.
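
The select/apply/re-run workflow can be pictured as the loop below. The predicate standing in for "build the intermediate version and re-run the failing test" and the change labels are hypothetical, selections are assumed to already include their compilation prerequisites (see the dependence-closure sketch above), and the linear scan only illustrates what a programmer would drive interactively in Crisp.

```java
import java.util.*;
import java.util.function.Predicate;

/** Hypothetical sketch of Crisp's select / apply / re-run loop. */
public class CrispLoopSketch {

    /**
     * Try each affecting change on its own and report the first one that reproduces
     * the failure when applied to the original program.
     */
    static Optional<String> firstFailureInducingChange(List<String> affectingChanges,
                                                       Predicate<Set<String>> testStillFails) {
        for (String change : affectingChanges) {
            if (testStillFails.test(Set.of(change))) return Optional.of(change);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Pretend the failure reproduces exactly when the body change to Foo.bar() is applied.
        Predicate<Set<String>> stillFails = selection -> selection.contains("CM Foo.bar()");
        System.out.println(firstFailureInducingChange(
                List.of("CM Baz.qux()", "CM Foo.bar()", "AM Foo.peek()"), stillFails));
        // Optional[CM Foo.bar()]
    }
}
```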


International Conference on Software Engineering | 2009

Safe-commit analysis to facilitate team software development

Jan Wloka; Barbara G. Ryder; Frank Tip; Xiaoxia Ren

Software development teams exchange source code in shared repositories. These repositories are kept consistent by having developers follow a commit policy, such as “Program edits can be committed only if all available tests succeed.” Such policies may result in long intervals between commits, increasing the likelihood of duplicative development and merge conflicts. Furthermore, commit policies are generally not automatically enforceable. We present a program analysis to identify committable changes that can be released early, without causing failures of existing tests, even in the presence of failing tests in a developers local workspace. The algorithm can support relaxed commit policies that allow early release of changes, reducing the potential for merge conflicts. In experiments using several versions of a non-trivial software system with failing tests, 3 newly enabled commit policies were shown to allow a significant percentage of changes to be committed.
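
One way to picture the safe-commit check: a change is committable only if neither it nor any change it requires affects a currently failing test. The sketch below is an illustrative simplification with made-up names and data shapes, not the paper's algorithm.

```java
import java.util.*;

/** Hypothetical sketch of a safe-commit check over atomic changes. */
public class SafeCommitSketch {

    /** Changes that can be released early without affecting any failing test. */
    static Set<String> committableChanges(Set<String> allChanges,
                                          Map<String, Set<String>> affectedTestsPerChange,
                                          Set<String> failingTests,
                                          Map<String, Set<String>> requires) {
        Set<String> committable = new LinkedHashSet<>();
        for (String change : allChanges) {
            if (isSafe(change, affectedTestsPerChange, failingTests, requires, new HashSet<>())) {
                committable.add(change);
            }
        }
        return committable;
    }

    private static boolean isSafe(String change,
                                  Map<String, Set<String>> affectedTestsPerChange,
                                  Set<String> failingTests,
                                  Map<String, Set<String>> requires,
                                  Set<String> visited) {
        if (!visited.add(change)) return true; // already under consideration; break cycles
        Set<String> affected = affectedTestsPerChange.getOrDefault(change, Set.of());
        if (!Collections.disjoint(affected, failingTests)) return false;
        for (String prereq : requires.getOrDefault(change, Set.of())) {
            if (!isSafe(prereq, affectedTestsPerChange, failingTests, requires, visited)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> impact = Map.of(
                "CM A.f()", Set.of("testF"),
                "CM B.g()", Set.of("testG"));
        Set<String> failing = Set.of("testG");
        System.out.println(committableChanges(
                Set.of("CM A.f()", "CM B.g()"), impact, failing, Map.of()));
        // [CM A.f()]  -- CM B.g() is held back because it affects the failing testG
    }
}
```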


International Conference on Software Maintenance | 2005

Crisp: a debugging tool for Java programs

Ophelia C. Chesley; Xiaoxia Ren; Barbara G. Ryder

Crisp is a tool (i.e., an Eclipse plug-in) for constructing intermediate versions of a Java program that is being edited in an IDE such as Eclipse. After a long editing session, a programmer usually would run regression tests to make sure she has not invalidated previously checked functionality. If a test fails unexpectedly, Crisp uses input from Chianti, a tool for semantic change impact analysis, to allow the programmer to select parts of the edit that affected the failing test and to add them to the original program, creating an intermediate version guaranteed to compile. Then the programmer can re-execute the test in order to locate the exact reasons for the failure by concentrating on those affecting changes that were applied. Using Crisp, a programmer can iteratively select, apply, and undo individual (or sets of) affecting changes and thus effectively find a small set of failure-inducing changes.


International Symposium on Software Testing and Analysis | 2007

Heuristic ranking of Java program edits for fault localization

Xiaoxia Ren; Barbara G. Ryder

In modern software development, regression tests are used to confirm the fundamental functionality of an edited program and to assure code quality. Difficulties occur when testing reveals unexpected behaviors, which indicate potential defects introduced by the edit. However, the changes that caused the failure(s) are not always easy to find. We propose a heuristic that ranks method changes that might have affected a failed test, indicating the likelihood that they may have contributed to a test failure. Our heuristic is based on the calling structure of the failed test (e.g., the number of ancestors and descendants of a method in the test's call graph, whether the caller or callee was changed, etc.). We evaluated the effectiveness of the heuristic on 14 pairs of edited versions of the Eclipse jdt core plug-in, using the test suite from its compiler tests plug-in. Our results indicate that when a failure is caused by a single method change, our heuristic ranked the failure-inducing change as number 1 or number 2 of all the method changes in 67% of the delegate tests (i.e., representatives of all failing tests). Even when the failure is caused by some combination of the changes, rather than a single change, our heuristic still helps.
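
The ranking idea can be sketched as a scoring function over call-graph features of each changed method. The features and weights below are illustrative placeholders, not the heuristic evaluated in the paper.

```java
import java.util.*;

/** Hypothetical sketch of ranking method changes for a failed test by call-graph features. */
public class ChangeRankingSketch {

    /** Call-graph features of one changed method, as seen in the failing test's run. */
    record Features(String changedMethod, int ancestors, int descendants, boolean callerAlsoChanged) {}

    /** A toy suspiciousness score: more reachable callees and a changed caller rank higher. */
    static double score(Features f) {
        return 2.0 * f.descendants() + 1.0 * f.ancestors() + (f.callerAlsoChanged() ? 3.0 : 0.0);
    }

    /** Order the changed methods from most to least suspicious. */
    static List<String> rank(List<Features> changes) {
        return changes.stream()
                .sorted(Comparator.comparingDouble(ChangeRankingSketch::score).reversed())
                .map(Features::changedMethod)
                .toList();
    }

    public static void main(String[] args) {
        List<Features> changes = List.of(
                new Features("Parser.parse()", 1, 12, true),
                new Features("Util.log()", 5, 0, false),
                new Features("Scanner.next()", 2, 3, false));
        System.out.println(rank(changes));
        // [Parser.parse(), Scanner.next(), Util.log()]
    }
}
```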


International Conference on Software Engineering | 2007

Crisp--A Fault Localization Tool for Java Programs

Ophelia C. Chesley; Xiaoxia Ren; Barbara G. Ryder; Frank Tip

Crisp is an Eclipse plug-in tool for constructing intermediate versions of a Java program that is being edited. After a long editing session, a programmer will run regression tests to make sure she has not invalidated previously tested functionality. If a test fails unexpectedly, Crisp allows the programmer to select parts of the edit that affected the failing test and to add them to the original program, creating an intermediate version guaranteed to compile. Then the programmer can re-execute the test in order to locate the exact reasons for the failure by concentrating on those affecting changes that were applied. Using Crisp, a programmer can iteratively select, apply, and undo individual (or sets of) affecting changes and thus effectively find a small set of failure-inducing changes. Crisp is an extension to our change impact analysis tool Chianti [6].


Archive | 2007

Change impact analysis for Java programs and applications

Barbara G. Ryder; Xiaoxia Ren

Small changes can have major and nonlocal effects in object-oriented languages, due to the extensive use of subtyping and dynamic dispatch. This makes it difficult to understand value flow through a program and complicates life for maintenance programmers. Change impact analysis provides feedback on the semantic impact of a set of program changes. The change impact analysis method presented in this thesis presumes the existence of a suite of regression tests associated with a Java program and access to the original and edited versions of the code. The primary goal of our research is to provide programmers with tool support that can help them understand why a test is suddenly failing after a long editing session by isolating the changes responsible for the failure. The tool analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then reported in terms of affected tests whose execution behavior may have been modified by the applied changes. For each affected test, it also determines a set of affecting changes that were responsible for the test's modified behavior. The first contribution of this thesis is the demonstration of the utility of the basic change impact analysis framework of [51], by implementing a proof-of-concept prototype, Chianti, and applying it to Daikon for an experimental validation. The second contribution is the definition and implementation of the dependences between atomic changes. Extensive experiments show that these dependences can help build the intermediate programs automatically in most cases. A third contribution is a heuristic for ranking the atomic changes for fault localization: the heuristic ranks method changes that might have affected a failed test, indicating the likelihood that they may have contributed to a test failure. Our results indicate that when a failure is caused by a single method change, our heuristic ranked the failure-inducing change as number 1 or number 2 of all the method changes in 67% of the delegate tests (i.e., representatives of all failing tests).


Archive | 2003

Chianti: A Prototype Change Impact Analysis Tool for Java

Xiaoxia Ren; Fenil Shah; Frank Tip; Barbara G. Ryder; Ophelia C. Chesley; Julian Dolby
