Publication


Featured research published by Janusz Laski.


Information Processing Letters | 1988

Dynamic program slicing

Bogdan Korel; Janusz Laski

A dynamic program slice is an executable subset of the original program that produces the same computations on a subset of selected variables and inputs. It differs from the static slice (Weiser, 1982, 1984) in that it is entirely defined on the basis of a computation. The two main advantages are the following: arrays and dynamic data structures can be handled more precisely, and the size of the slice can be significantly reduced, leading to a finer localization of the fault. The approach is being investigated as a possible extension of the debugging capabilities of STAD, a recently developed System for Testing and Debugging (Korel and Laski, 1987; Laski, 1987).
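The core idea can be sketched in a few lines: because a dynamic slice is defined on one concrete computation, it can be computed by a backward walk over the execution trace. The following is a minimal illustrative sketch, not STAD's algorithm; the trace format and function name are hypothetical.

```python
# Illustrative sketch of a dynamic slice over an execution trace.
# Each trace entry records (line, variable defined, variables used).

def dynamic_slice(trace, criterion_var):
    """Return the set of trace lines that influenced criterion_var."""
    relevant = {criterion_var}     # variables whose values must be explained
    slice_lines = set()
    for line, defined, used in reversed(trace):
        if defined in relevant:
            slice_lines.add(line)
            relevant.discard(defined)
            relevant.update(used)  # the defining statement's inputs now matter
    return slice_lines

# Trace of one run of:  1: a = 1   2: b = 2   3: c = a + 1   4: d = b * 2
trace = [(1, "a", []), (2, "b", []), (3, "c", ["a"]), (4, "d", ["b"])]
print(sorted(dynamic_slice(trace, "d")))  # only lines 2 and 4 affect d
```

A static slice on `d` would have to keep every definition that might reach line 4 on some path; the dynamic slice keeps only what this particular run actually used.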


Journal of Systems and Software | 1990

Dynamic slicing of computer programs

Bogdan Korel; Janusz Laski

Program slicing is a useful tool in program debugging [25, 26]. Dynamic slicing, introduced in this paper, differs from the original static slicing in that it is defined on the basis of a computation. A dynamic program slice is an executable part of the original program that preserves part of the program's behavior for a specific input with respect to a subset of selected variables, rather than for all possible computations. As a result, the size of a slice can be significantly reduced. Moreover, the approach allows us to treat array elements and fields in dynamic records as individual variables, which leads to a further reduction of the slice size.


International Conference on Software Maintenance | 1992

Identification of program modifications and its applications in software maintenance

Janusz Laski; Wojciech Szermer

A major problem in software maintenance is the revalidation of modified code. It is economically desirable to restrict that process to only those parts of the program that are affected by the modifications. Toward that goal, a formal method is needed to identify the modifications automatically. Such a method is proposed in the present work. The modifications are localized within clusters in the flow graphs of the original and modified programs. Both flow graphs are transformed into reduced flow graphs, between which an isomorphic correspondence is established. Cluster nodes in the reduced graphs encapsulate modifications to the original program. An algorithm to derive the reduced flow graphs has been implemented as an extension to the recently developed system for testing and debugging (STAD 1.0), and early experiments with the algorithm are reported. Potential applications in regression testing and reasoning about the program are discussed.
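A much-simplified sketch of the underlying idea (not the paper's algorithm, which works on full flow graphs with edges): nodes whose statements differ between the two versions are collapsed into a single cluster node, so the reduced graphs agree outside the cluster. All names below are hypothetical.

```python
# Toy illustration: collapse modified nodes into one cluster node so the
# remainders of the original and modified graphs correspond one-to-one.

def reduce_graphs(orig, mod):
    """orig/mod map node -> statement text. Returns (cluster, reduced_orig)."""
    all_nodes = set(orig) | set(mod)
    cluster = {n for n in all_nodes if orig.get(n) != mod.get(n)}
    reduced = {n: s for n, s in orig.items() if n not in cluster}
    if cluster:
        reduced["CLUSTER"] = "<modified region>"   # stand-in cluster node
    return cluster, reduced

orig = {1: "x = 0", 2: "y = x + 1", 3: "print(y)"}
mod  = {1: "x = 0", 2: "y = x + 2", 3: "print(y)"}
cluster, reduced = reduce_graphs(orig, mod)
print(cluster)  # only node 2 was modified
```

For regression testing, only tests whose paths enter the cluster need to be rerun; the rest of the program is provably untouched.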


Proceedings of the Second Workshop on Software Testing, Verification, and Analysis | 1988

STAD-a system for testing and debugging: user perspective

Bogdan Korel; Janusz Laski

A recently developed experimental integrated system for testing and debugging (STAD) is presented. Its testing part supports three data-flow coverage criteria. The debugging part guides the programmer in the localization of faults by generating and interactively verifying hypotheses about their location. An example is given to illustrate the process, followed by a debugging session, a discussion of the principles of data-flow testing, a structural testing scenario, and an introduction to the debugging principles in STAD.


Journal of Systems and Software | 1990

Data flow testing in STAD

Janusz Laski

The System for Testing And Debugging (STAD) is an experimental system for investigating the use of data-flow patterns in a program for testing and debugging. Its main parts are a static analyzer, a test monitor, and a knowledge-based debugger. The testing component, described in this article, supports the strategies of chain, U-context, and L-context testing. The first exercises exchange patterns between single variables in the program, while the others involve tuples of variables. The advantages and drawbacks of the method and its relation to black-box testing are discussed.
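The simplest of these criteria, exercising definition-use pairs between single variables, can be sketched as follows. This is a hypothetical illustration of how a test monitor might record which pairs a run covers; the trace format and names are assumptions, not STAD's implementation.

```python
# Sketch: a def-use pair (def_line, use_line, var) is covered when a run
# defines var at def_line and later reaches use_line with that definition
# still live (not killed by an intervening redefinition).

def covered_pairs(trace):
    """trace entries: (line, defined_var_or_None, used_vars)."""
    live = {}          # var -> line of its most recent definition
    covered = set()
    for line, defined, used in trace:
        for v in used:
            if v in live:
                covered.add((live[v], line, v))
        if defined:
            live[defined] = line   # kills the previous definition of v
    return covered

# Run of:  1: x = 1   2: y = x   3: x = y   4: print(x)
trace = [(1, "x", []), (2, "y", ["x"]), (3, "x", ["y"]), (4, None, ["x"])]
print(sorted(covered_pairs(trace)))
```

Comparing the covered set against all statically feasible pairs gives a coverage measure; the chain and context strategies extend this from single pairs to tuples of simultaneously live definitions.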


Hawaii International Conference on System Sciences | 1991

Algorithmic software fault localization

Bogdan Korel; Janusz Laski

Debugging tools offer a rich set of breakpoint and display facilities. However, they do not provide a means for automatically identifying potentially faulty parts of the programs being debugged. Although that goal might be unrealistic in general, even a partial solution to the problem, for a restricted class of faults, can be useful. Toward that goal, the authors present a novel fault localization algorithm that is capable of identifying a restricted class of programming faults for Pascal-like languages. The algorithm uses the following principle for fault localization: if a program component produces an incorrect result while its input is correct, then the program component is faulty. The algorithm uses influence relations based on the computation trajectory to formulate hypotheses about the nature of the fault. User input is needed to assess the correctness of intermediate situations on the trajectory. The set of potentially faulty statements generated by the algorithm is closely related to the concept of a dynamic slice (B. Korel et al., 1988). The algorithm has been implemented in the System for Testing and Debugging (B. Korel et al., 1988). Early experiments indicate that the approach can be quite useful, particularly for inexperienced programmers.
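The localization principle itself is easy to demonstrate. Below is a minimal sketch under simplifying assumptions: the interactive user is modeled as an `oracle` function over (variable, value) pairs, and the trajectory is a flat list; the real algorithm works over influence relations, not a linear scan.

```python
# Principle only: if a statement's inputs are judged correct but its result
# is judged incorrect, report that statement as faulty.

def localize(trajectory, oracle):
    """trajectory: list of (stmt, input_pairs, result_pair).
    oracle((var, value)) -> True if the value is correct."""
    for stmt, inputs, result in trajectory:
        if all(oracle(v) for v in inputs) and not oracle(result):
            return stmt            # correct in, incorrect out => faulty
    return None                    # no fault exposed on this trajectory

# Faulty run of:  a = 3;  b = a * a  (should be a + a);  c = b + 1
expected = {"a": 3, "b": 6, "c": 7}
oracle = lambda pair: expected[pair[0]] == pair[1]
trajectory = [
    ("a = 3",     [],           ("a", 3)),
    ("b = a * a", [("a", 3)],   ("b", 9)),    # the seeded fault
    ("c = b + 1", [("b", 9)],   ("c", 10)),
]
print(localize(trajectory, oracle))
```

The statement `c = b + 1` is not blamed even though its result is wrong, because its input was already incorrect; this is exactly the correct-input/incorrect-output filter the abstract describes.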


SIGPLAN Notices | 1982

On data flow guided program testing

Janusz Laski

A structural approach to testing that employs properties of data flow in a program is proposed. The basic notion introduced is that of the data context of a program block: the set of all tuples of definitions of the block's arguments that are simultaneously live when control reaches the block. Two testing strategies are proposed: block testing, which exercises every block for all its elementary contexts, and d-tree testing, which exercises the definition tree rooted at an elementary context of the stop/exit instruction.
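The notion of a data context can be made concrete with a small sketch. This is an illustration under stated assumptions, not the paper's formulation: paths are enumerated explicitly, and each context is the tuple of last definitions of the block's arguments along a path.

```python
# Sketch: for a block using (x, y), each elementary context is the tuple of
# definitions of x and y simultaneously live when control reaches the block.

def data_contexts(paths, defs, used_vars):
    """defs maps node -> variable it defines; paths end at the block."""
    result = set()
    for path in paths:
        live = {}
        for node in path:
            if node in defs:
                live[defs[node]] = node   # this definition kills earlier ones
        result.add(tuple(live[v] for v in used_vars))
    return result

defs = {1: "x", 2: "y", 3: "x"}            # node -> variable it defines
paths = [(1, 2, 4), (1, 3, 2, 4)]          # two paths reaching block 4
print(data_contexts(paths, defs, ("x", "y")))
```

Block testing would require test data driving execution down both paths, so that block 4 is exercised under both contexts `(1, 2)` and `(3, 2)`.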


Software Testing, Verification & Reliability | 1995

Error masking in computer programs

Janusz Laski; Wojciech Szermer; Piotr Luczycki

Programming faults are defined in the framework of a program proof outline. A component C in program P is faulty if P cannot be proved correct with the current implementation of C but can be proved using the design specification for C, which defines the role of C in the overall design of P. A programming error is a state that violates the implementation specification of C. The error is masked if the design specification is satisfied by the incorrect state. Given a passing test t, the probability of error masking is 1 − s(t), where s(t) is the sensitivity of t. Dynamic mutation testing (DMT), a Monte Carlo method, is used to estimate s(t). DMT is extended to estimate the probability of the existence of a hidden fault in C for a passing test suite. That probability can be viewed as a metric that measures the quality of the passing test suite.
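The Monte Carlo estimation of sensitivity can be sketched as follows. This is a hedged illustration, assuming a stand-in program whose post-component computation (`rest_of_program`) is deliberately many-to-one so it can mask perturbations; the perturbation scheme and names are hypothetical, not the DMT implementation.

```python
# Sketch of the dynamic-mutation idea: repeatedly perturb the intermediate
# state at a component and check whether the perturbation survives to the
# output. The surviving fraction estimates the sensitivity s(t); the
# masking probability is 1 - s(t).
import random

def rest_of_program(b):
    return b % 7                 # many-to-one: distinct states can collide

def sensitivity(b_correct, trials=10_000, rng=random.Random(42)):
    reference = rest_of_program(b_correct)
    exposed = sum(rest_of_program(b_correct + rng.randint(1, 100)) != reference
                  for _ in range(trials))
    return exposed / trials      # fraction of perturbations reaching output

s = sensitivity(b_correct=10)
print(f"s(t) ~ {s:.2f}, masking probability ~ {1 - s:.2f}")
```

Here 14 of the 100 possible perturbations are multiples of 7 and vanish under `% 7`, so the estimate converges near s(t) ≈ 0.86: even a passing test leaves roughly a 14% chance that an error at this point went unnoticed.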


ACM SIGAda Ada Letters | 1998

Dependency analysis of Ada programs

Janusz Laski; William Stanley; Jim Hurst

The working hypothesis of this paper is that Software Testing and Analysis (STA) methods should be integrated around a common conceptual framework. An analysis of two potential candidates for such a framework, program dependencies and information flow relations, shows that the ideal framework should possess the mathematical elegance of the flow relations and the generality of program dependencies. However, program dependencies were originally formulated for compiler optimization, and their uncritical use in software engineering is unjustified. Therefore, modifications to the original dependencies are proposed in this paper. They include partial vs. total definitions of Ada arrays, a new concept of reaching definitions, potential vs. guaranteed dependencies, interprocedural dependencies, and an explanation feature that helps the user understand the reasons for the generated reports. It is shown how the modified model can support descriptive and proscriptive (e.g., anomaly) queries about the program; due to the clear separation of control flow from data flow and the lack of language restrictions, the modifications are potentially applicable to a wider class of STA methods, including dynamic (execution-based) analysis. Also proposed is path analysis, a novel method for the identification of dependencies along individual program paths. It is shown that path analysis offers a more accurate model of procedure calls, allows one to detect otherwise undetectable data flow anomalies, and can serve as a vehicle for the analysis of error creation and propagation in testing and debugging.
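The partial vs. total distinction has a direct effect on reaching-definitions bookkeeping, which the hypothetical sketch below illustrates (names and representation are assumptions, not the paper's tool):

```python
# Illustration: assigning one element of an Ada array, A(I) := ..., is a
# partial definition and must NOT kill earlier definitions of the other
# elements; a whole-array assignment, A := ..., is total and kills them all.

def update_reaching(reaching, target, line, total):
    """reaching: var -> set of lines whose definitions still reach here."""
    if total:
        reaching[target] = {line}                     # total: kills all others
    else:
        reaching.setdefault(target, set()).add(line)  # partial: add, don't kill
    return reaching

r = {}
update_reaching(r, "A", 1, total=True)    # A := (...)    whole array
update_reaching(r, "A", 2, total=False)   # A(I) := ...   one element
print(r["A"])  # definitions at both lines 1 and 2 reach this point
```

Treating the element assignment as total (as classic compiler-oriented dependencies might) would wrongly report that line 1 no longer influences later reads of `A`, which is exactly the kind of imprecision the proposed modifications address.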


Computer Software and Applications Conference | 1990

Path expression in data flow program testing

Janusz Laski

The language of regular expressions is used for the identification of constructors of definition-use chains. Activation of the chains is essential for all data flow testing strategies. The algorithm is based on the node-elimination method of J.A. Brzozowski and E.J. McCluskey (1963). It generates a regular expression that represents the (possibly infinite) set of all constructors of the chain involved. A particular path can then be derived from that expression. The algorithm has been implemented as an extension to STAD, a recently implemented system for testing and debugging.
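The node-elimination method itself is compact enough to sketch. The following is a minimal state-elimination routine in the Brzozowski-McCluskey style, written as an assumption-laden illustration (string-concatenated regexes, no simplification, hypothetical names), not STAD's implementation:

```python
# Sketch of state elimination: repeatedly remove an interior node n,
# rerouting every pair of edges p -> n -> q as p_regex (n_loop)* q_regex.
# The surviving entry->exit label denotes the set of all paths, i.e. all
# constructors of the definition-use chain.

def eliminate(edges, entry, exit_):
    """edges: dict (src, dst) -> regex string. Returns entry->exit regex."""
    edges = dict(edges)
    nodes = {n for e in edges for n in e} - {entry, exit_}
    for n in sorted(nodes, key=str):
        loop = edges.pop((n, n), "")
        star = f"({loop})*" if loop else ""
        ins  = {p: r for (p, d), r in edges.items() if d == n}
        outs = {q: r for (s, q), r in edges.items() if s == n}
        for key in [e for e in edges if n in e]:
            del edges[key]                       # detach n from the graph
        for p, rin in ins.items():
            for q, rout in outs.items():
                new = rin + star + rout
                if (p, q) in edges:              # merge parallel edges
                    edges[(p, q)] = f"({edges[(p, q)]}|{new})"
                else:
                    edges[(p, q)] = new
    return edges[(entry, exit_)]

# Flow graph: entry -a-> 1,  1 -b-> 1 (self-loop),  1 -c-> exit
print(eliminate({("s", 1): "a", (1, 1): "b", (1, "t"): "c"}, "s", "t"))
```

The resulting expression `a(b)*c` finitely describes the infinite path set {ac, abc, abbc, ...}; picking one word from it yields a concrete path that activates the chain.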

Collaboration


Top co-authors of Janusz Laski:


Bogdan Korel

Illinois Institute of Technology


Jim Hurst

University of Rochester
