Judit Jász
University of Szeged
Publications
Featured research published by Judit Jász.
source code analysis and manipulation | 2003
Ákos Kiss; Judit Jász; Gábor Lehotai; Tibor Gyimóthy
Although the slicing of programs written in a high-level language has been widely studied in the literature, very little work has been published on the slicing of binary executable programs. The lack of existing solutions is hard to understand, since the application domain for slicing binaries is similar to that for slicing high-level languages. We present a method for the interprocedural static slicing of binary executables. We applied our slicing method to real-size binaries and achieved interprocedural slice sizes of between 56% and 68%. We used conservative approaches to handle unresolved function calls and branching instructions. Our current implementation also contains an imprecise (but safe) memory dependence model. Nevertheless, this conservative slicing method can still be useful in analysing large binary programs. We also suggest some improvements to eliminate useless edges from dependence graphs.
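As an illustration only (not the paper's implementation): once a dependence graph has been built from the binary, static slicing reduces to backward reachability over dependence edges. A minimal sketch, with a made-up instruction-level graph:

```python
from collections import deque

def backward_slice(deps, criterion):
    """Backward static slice: every node from which some criterion node
    is reachable along dependence edges (data or control)."""
    sliced = set(criterion)
    work = deque(criterion)
    while work:
        node = work.popleft()
        for dep in deps.get(node, ()):
            if dep not in sliced:
                sliced.add(dep)
                work.append(dep)
    return sliced

# Hypothetical instruction-level dependence graph: i4 depends on i2 and
# i3, which both depend on i1; i5 is unrelated.
deps = {"i4": ["i2", "i3"], "i3": ["i1"], "i2": ["i1"], "i5": []}
print(sorted(backward_slice(deps, {"i4"})))  # → ['i1', 'i2', 'i3', 'i4']
```

The conservative approaches mentioned in the abstract would show up here as extra edges (e.g. from every unresolved call site to every possible target), which can only grow the resulting slice, keeping it safe.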
international conference on software maintenance | 2008
Judit Jász; Árpád Beszédes; Tibor Gyimóthy; Václav Rajlich
The paper explores Static Execute After (SEA) dependencies in programs and their dual, Static Execute Before (SEB) dependencies. It empirically compares SEA/SEB dependencies with the traditional dependencies computed by System Dependence Graph (SDG) based program slicers. In our case study we use about 30 subject programs that were previously used by other authors in empirical studies of program analysis. We report two main results. First, the computation of SEA/SEB is much less expensive and much more scalable than the computation of the SDG. Second, precision declines only very slightly, by some 4% on average; in other words, it is comparable to that of the leading traditional algorithms, while intuitively a much larger difference would be expected. The paper then discusses whether, based on these results, the computation of the SDG should be replaced in some applications by the computation of SEA/SEB.
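For intuition only (the graph below is invented, and real SEA computation works on an interprocedural control flow representation): f SEA g roughly means that some part of g can execute after f, which makes SEA a reachability question over "execute after" edges, and SEB simply its inverse:

```python
def transitive_closure(edges):
    """All pairs (a, b) such that b is reachable from a over the edges."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    closure = set()
    for start in succ:
        seen, stack = set(), [start]
        while stack:
            for m in succ.get(stack.pop(), ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        closure |= {(start, m) for m in seen}
    return closure

# Hypothetical 'execute after' base edges between procedures: (f, g)
# meaning some part of g can run once f has started (a call, a return
# into a caller, or a sequential successor).
edges = [("main", "parse"), ("parse", "lex"), ("main", "emit")]
SEA = transitive_closure(edges)
SEB = {(b, a) for a, b in SEA}  # SEB is simply the inverse of SEA
print(("main", "lex") in SEA)   # → True
```

Reachability of this kind needs no fine-grained data or control dependence edges, which is one intuition for why SEA/SEB is so much cheaper to compute than an SDG.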
international conference on software maintenance | 2007
Árpád Beszédes; Tamás Gergely; Judit Jász; Gabriella Tóth; Tibor Gyimóthy; Václav Rajlich
In this paper, we introduce the static execute after (SEA) relationship among program components and present an efficient analysis algorithm. Our case studies show that SEA may approximate static slicing with perfect recall and high precision, while being much less expensive and more usable. When differentiating between explicit and hidden dependencies, our case studies also show that SEA may correlate with direct and indirect class coupling. We speculate that SEA may find applications in the computation of hidden dependencies and, through it, in many maintenance tasks, including change propagation and regression testing.
international conference on software maintenance | 2012
Árpád Beszédes; Tamás Gergely; Lajos Schrettner; Judit Jász; Laszlo Lango; Tibor Gyimóthy
Automated regression testing is often crucial for maintaining the quality of a continuously evolving software system. However, in many cases regression test suites tend to grow too large to be suitable for full re-execution at each change of the software. In this case, selective retesting can be applied to reduce the testing cost while maintaining similar defect detection capability. One of the basic test selection methods is based on code coverage information: only those tests are included that cover some part of the changes. We experimentally applied this method to the open source web browser engine project WebKit to find out the technical difficulties and the expected benefits of introducing it into the actual build process. Although the principle is simple, we had to solve a number of technical issues, so we first report how the method was adapted for use in the official build environment. Second, we present promising results about its selection capabilities for a selected set of WebKit revisions. We also applied different test case prioritization strategies to further reduce the number of tests to execute. We explain these strategies and compare their usefulness in terms of defect detection and test suite reduction.
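The core of coverage-based selection and of one common prioritization strategy ("additional coverage" greedy ordering) can be sketched in a few lines; the test names, coverage sets, and change set below are invented, and the paper's actual strategies may differ:

```python
def select_tests(coverage, changed):
    """Coverage-based selection: keep only tests that cover at least one
    changed code element."""
    return [t for t, cov in coverage.items() if cov & changed]

def prioritize(coverage, selected):
    """Greedy 'additional coverage' prioritization: repeatedly pick the
    test that covers the most elements not yet covered."""
    remaining, covered, order = list(selected), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(coverage[t] - covered))
        order.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return order

# Hypothetical per-test coverage (sets of covered procedures) and change set.
coverage = {"t1": {"f", "g"}, "t2": {"g"}, "t3": {"h"}, "t4": {"x"}}
changed = {"g", "h"}
sel = select_tests(coverage, changed)  # t4 covers no changed element
print(sel, prioritize(coverage, sel))  # → ['t1', 't2', 't3'] ['t1', 't3', 't2']
```

In practice the hard part is the one the abstract emphasizes: obtaining accurate per-test coverage data and change information inside a real build environment, not the selection logic itself.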
international conference on program comprehension | 2008
László Vidács; Judit Jász; Árpád Beszédes; Tibor Gyimóthy
Slicing C programs has been one of the most popular ways for the implementation of slicing algorithms; out of the very few practical implementations that exist many deal with this programming language. Yet, preprocessor related issues have been addressed very marginally by these slicers, despite the fact that ignoring (or handling poorly) these constructs may lead to serious inaccuracies in the slicing results and hence in the comprehension process. Recently, an accurate slicing method for preprocessor related constructs has been proposed which - when combined with existing C/C++ language slicers - can provide a more complete comprehension of these languages. In this paper, we overview our approach for this combination and report its benefits in terms of the completeness of the resulting slices.
Journal of Software: Evolution and Process | 2014
Lajos Schrettner; Judit Jász; Tamás Gergely; Árpád Beszédes; Tibor Gyimóthy
Impact analysis based on code dependence can be an integral part of software quality assurance by providing opportunities to identify those parts of the software system that are affected by a change. Because changes usually have far reaching effects in programs, effective and efficient impact analysis is vital, and it has different applications including change propagation and regression testing. Static Execute After (SEA) is a relation on program elements (procedures) that is efficiently computable and accurate enough to be a candidate for use in impact analysis in practice. To assess the applicability of SEA in terms of capturing real defects, we present results on integrating it into the build system of WebKit, a large, open source software system, and on related experiments. We show that a large number of real defects can be captured by impact sets computed by SEA, although many of these sets are large. We demonstrate that this is not an issue when applying it to regression test prioritization, but in general it can be an obstacle in the path to efficient use of impact analysis. We believe that the main reason for large impact sets is the formation of dependence clusters in code. As dependence clusters apparently cannot be easily avoided in the majority of cases, we focus on determining the effects these clusters have on impact analysis.
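One simplified reading of SEA-based impact analysis (the relation and procedure names below are invented, and the paper's exact impact-set definition may differ): the impact set of a change contains every procedure related to a changed procedure by SEA in either direction:

```python
def impact_set(sea, changed):
    """Procedures possibly affected by a change: everything related to a
    changed procedure by SEA in either direction, plus the change itself.
    Assumes sea is already transitively closed."""
    impacted = set(changed)
    for a, b in sea:
        if a in changed:
            impacted.add(b)
        if b in changed:
            impacted.add(a)
    return impacted

# Hypothetical, transitively closed SEA relation over procedures.
sea = {("main", "init"), ("init", "log"), ("main", "log")}
print(sorted(impact_set(sea, {"init"})))  # → ['init', 'log', 'main']
```

The paper's observation about dependence clusters shows up directly here: if a changed procedure sits inside a cluster whose members are all mutually SEA-related, the whole cluster lands in the impact set at once.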
Software Quality Journal | 2005
Ákos Kiss; Judit Jász; Tibor Gyimóthy
Although the slicing of programs written in a high-level language has been widely studied in the literature, relatively few papers have been published on the slicing of binary executable programs. The lack of existing solutions for the latter is hard to understand, since the application domain for slicing binaries is similar to that for slicing high-level languages. Furthermore, there are special applications of slicing programs without source code, such as source code recovery, code transformation and the detection of security-critical code fragments. In this paper, in addition to describing our method of interprocedural static slicing of binaries, we discuss how the set of possible targets of indirect call sites can be reduced using dynamically gathered information. Our evaluation of the slicing method shows that, if indirect function calls are extensively used, both the number of edges in the call graph and the size of the slices can be significantly reduced.
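The idea of reducing indirect call targets with dynamic information might be sketched as follows (the call-site names and target sets are invented, and the exact policy for never-executed sites is an assumption here, not taken from the paper); note that intersecting with observed targets trades full conservatism for precision at the exercised sites:

```python
def prune_indirect_calls(static_targets, observed_targets):
    """Intersect the static candidate set of each indirect call site with
    the dynamically observed targets; sites never exercised at run time
    keep their full static set."""
    pruned = {}
    for site, targets in static_targets.items():
        seen = observed_targets.get(site)
        pruned[site] = targets & seen if seen else set(targets)
    return pruned

# Hypothetical call sites: statically, every address-taken function is a
# candidate target; dynamic runs only ever saw "f" called at cs1.
static_targets = {"cs1": {"f", "g", "h"}, "cs2": {"f", "g"}}
observed = {"cs1": {"f"}}
print(prune_indirect_calls(static_targets, observed))
```

Fewer call graph edges then propagate into smaller dependence graphs and hence smaller slices, which matches the trend the evaluation reports.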
source code analysis and manipulation | 2012
Lajos Schrettner; Judit Jász; Tamás Gergely; Árpád Beszédes; Tibor Gyimóthy
Impact analysis based on code dependence can be an integral part of software quality assurance by providing opportunities to identify those parts of the software system that are affected by a change. Because changes usually have far reaching effects in programs, effective and efficient impact analysis is vital, and it has different applications including change propagation and regression testing. Static Execute After (SEA) is a relation on program elements (procedures) that is efficiently computable and accurate enough to be a candidate for use in impact analysis in practice. To assess the applicability of SEA in terms of capturing real defects, we present results on integrating it into the build system of WebKit, a large, open source software system, and on related experiments. We show that a large number of real defects can be captured by impact sets computed by SEA, although many of these sets are large. We demonstrate that this is not an issue when applying it to regression test prioritization, but in general it can be an obstacle in the path to efficient use of impact analysis. We believe that the main reason for large impact sets is the formation of dependence clusters in code. As dependence clusters apparently cannot be easily avoided in the majority of cases, we focus on determining the effects these clusters have on impact analysis.
source code analysis and manipulation | 2013
Árpád Beszédes; Lajos Schrettner; Béla Csaba; Tamás Gergely; Judit Jász; Tibor Gyimóthy
Dependence clusters are (maximal) groups of source code entities that all depend on one another according to some dependence relation. Such clusters are generally seen as detrimental to many software engineering activities, but their formation and overall structure are not yet well understood. In a set of subject programs of moderate to large size, we observed frequent occurrence of dependence clusters using Static Execute After (SEA) dependences (SEA is a conservative yet efficiently computable dependence relation on program procedures). We identified potential linchpins inside the clusters; these are procedures that can primarily be made responsible for keeping the cluster together. Furthermore, we found that as the size of the system increases, it is more likely that multiple procedures are jointly responsible as sets of linchpins. We also give a heuristic method based on structural metrics for locating possible linchpins, as their exact identification is infeasible in practice and presently there is no better way than the brute-force method. We defined novel metrics and comparison methods to be able to demonstrate clusters of different sizes in programs.
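The brute-force linchpin search the abstract refers to can be sketched as follows, under the simplifying assumption (made here for illustration, not taken from the paper) that a dependence cluster is a strongly connected component of the procedure-level dependence graph; the graph itself is invented:

```python
def sccs(succ):
    """Strongly connected components via forward/backward reachability
    (quadratic, but fine for a small illustrative graph)."""
    def reach(start, edges):
        seen, stack = {start}, [start]
        while stack:
            for m in edges.get(stack.pop(), ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen
    pred = {}
    for a, bs in succ.items():
        for b in bs:
            pred.setdefault(b, set()).add(a)
    comps, left = [], set(succ) | set(pred)
    while left:
        n = next(iter(left))
        comp = reach(n, succ) & reach(n, pred)
        comps.append(comp)
        left -= comp
    return comps

def linchpin(succ):
    """Brute-force linchpin search: the node whose removal shrinks the
    largest strongly connected component (the 'cluster') the most."""
    nodes = set(succ) | {b for bs in succ.values() for b in bs}
    def largest(edges):
        return max((len(c) for c in sccs(edges)), default=0)
    def without(node):
        return {a: {b for b in bs if b != node}
                for a, bs in succ.items() if a != node}
    base = largest(succ)
    return max(nodes, key=lambda n: base - largest(without(n)))

# Hypothetical call structure: two cycles a<->x and b<->x share x, so the
# whole graph is one cluster that falls apart only when x is removed.
succ = {"a": {"x"}, "x": {"a", "b"}, "b": {"x"}}
print(linchpin(succ))  # → x
```

Re-running the whole cluster analysis once per candidate node is exactly why exhaustive search does not scale, motivating the structural-metric heuristics the paper proposes.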
conference on software maintenance and reengineering | 2013
Béla Csaba; Lajos Schrettner; Árpád Beszédes; Judit Jász; Peter Hegedus; Tibor Gyimóthy
Empirical studies have shown that dependence clusters are both prevalent in source code and detrimental to many activities related to software, including maintenance, testing and comprehension. Based on such observations, it would be worthwhile to try to give a more precise characterization of the connection between dependence clusters and software quality. Such attempts are hindered by a number of difficulties: there are problems in assessing the quality of software, measuring the degree of clusterization of software, and finding the means to exhibit the connection (or lack of it) between the two. In this paper we present our approach to establishing a connection between software quality and clusterization. Software quality models comprise low- and high-level quality attributes; in addition, we defined new clusterization metrics that give a concise characterization of the clusters contained in programs. Apart from calculating correlation coefficients, we used mutual information to quantify the relationship between clusterization and quality. Results show that a connection can be demonstrated between the two, and that mutual information combined with correlation can be a better indicator for conducting deeper examinations in the area.
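The two measures the paper combines can be sketched on toy data (the metric vectors below are invented); the example illustrates why mutual information is a useful complement: it detects a symmetric relationship that the correlation coefficient misses entirely:

```python
import math
from collections import Counter

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length number sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mutual_information(xs, ys):
    """Mutual information, in bits, of two discrete-valued sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

# Toy metric vectors where quality = |clusterization|: the relation is
# perfectly deterministic, yet linear correlation is zero.
clusterization = [-1, 0, 1]
quality = [1, 0, 1]
print(round(pearson(clusterization, quality), 3))             # → 0.0
print(round(mutual_information(clusterization, quality), 3))  # → 0.918
```

On real metric data the continuous values would first have to be discretized (e.g. binned) before estimating mutual information; that step is omitted here.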