Igor B. Bourdonov
Russian Academy of Sciences
Publication
Featured researches published by Igor B. Bourdonov.
Formal Methods | 2002
Igor B. Bourdonov; Alexander S. Kossatchev; Victor V. Kuliamin; Alexander K. Petrenko
The article presents the main components of the test suite architecture underlying UniTesK, an automated specification-based test development technology for industrial testing of general-purpose software. The architecture includes automatically generated oracles, components that monitor formally defined test coverage criteria, and test scenario specifications for test sequence generation using an automata-based testing mechanism. This work stems from ISP RAS academic research and seven years of experience in industrial application of formal testing techniques [1].
Programming and Computer Software | 2004
Igor B. Bourdonov; Alexander S. Kossatchev; Victor V. Kuliamin
Problems of testing program systems modeled by deterministic finite automata are considered. A necessary (and sometimes sufficient) component of such testing is a traversal of the automaton's state transition graph. The main attention is given to the so-called irredundant traversal algorithms (algorithms for traversing unknown graphs, or on-line algorithms), which do not require a priori knowledge of the total graph structure.
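As an illustrative sketch only (not the algorithms from the paper), an on-line traversal of an unknown, strongly connected directed graph can be organized as follows: take an untraversed arc whenever one leaves the current vertex; otherwise walk through the already-discovered part of the graph to some vertex that still has one. The callback names `out_degree` and `step` are assumptions for this sketch.

```python
from collections import deque

def online_traversal(start, out_degree, step):
    """Traverse every arc of an unknown, strongly connected directed graph.

    The graph is revealed on-line: out_degree(v) reports how many arcs
    leave v, and step(v, i) actually traverses the i-th arc out of v and
    returns its target vertex. Returns the walk as a list of vertices.
    """
    known = {start: []}       # v -> targets of arcs already traversed from v
    path = [start]
    v = start
    while True:
        arcs = known[v]
        if len(arcs) < out_degree(v):
            # An untraversed arc leaves the current vertex: take it.
            w = step(v, len(arcs))
            arcs.append(w)
            known.setdefault(w, [])
            path.append(w)
            v = w
            continue
        # All arcs out of v are traversed; BFS over the known subgraph
        # to find a vertex that still has an untraversed arc.
        parent = {v: None}
        queue = deque([v])
        target = None
        while queue:
            u = queue.popleft()
            if len(known[u]) < out_degree(u):
                target = u
                break
            for w in known[u]:
                if w not in parent:
                    parent[w] = u
                    queue.append(w)
        if target is None:
            return path       # every arc of the graph has been traversed
        # Replay the BFS route through already-traversed arcs.
        route = []
        u = target
        while u != v:
            route.append(u)
            u = parent[u]
        path.extend(reversed(route))
        v = target
```

On a strongly connected graph this greedy-plus-BFS scheme yields a covering path of length O(nm), the bound quoted in the abstracts below.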
International Andrei Ershov Memorial Conference on Perspectives of System Informatics | 2003
Victor V. Kuliamin; Alexander K. Petrenko; Nick V. Pakoulin; Alexander S. Kossatchev; Igor B. Bourdonov
The article presents an approach to model-based testing of complex systems based on a generalization of finite state machines (FSM) and input/output state machines (IOSM). The approach is used in the context of the UniTesK specification-based test development method; the results of its practical applications are also discussed. Practical experience demonstrates the applicability of the approach to model-based testing of protocol implementations, distributed and concurrent systems, and real-time systems. This work stems from ISPRAS results of academic research and industrial application of formal techniques in verification and testing [1].
Electronic Notes in Theoretical Computer Science | 2006
Igor B. Bourdonov; Alexander S. Kossatchev; Victor V. Kuliamin
The article introduces an extension of the well-known conformance relation ioco on labeled transition systems (LTSs) with refused inputs and forbidden actions. This extension makes it possible to apply the usual LTS-based formal testing theory to incompletely specified systems, which are often encountered in practice. The article also addresses compositional conformance: more precisely, we try to define a completion operation that turns any LTS into an input-enabled one having the same set of ioco-conforming implementations. Such a completion enforces preservation of ioco conformance by the parallel composition operation on LTSs.
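A naive form of input completion can be sketched as follows: for every input undefined in a state, add a self-loop so that the LTS becomes input-enabled. All names here are illustrative; note that the paper's completion is chosen so that the set of ioco-conforming implementations is preserved, which this naive rule does not by itself guarantee.

```python
def input_complete(states, inputs, transitions):
    """Naively make an LTS input-enabled.

    transitions is a set of (state, action, next_state) triples.
    For every input with no outgoing transition in a state, a
    self-loop on that input is added (the state ignores the input).
    """
    completed = set(transitions)
    for s in states:
        for a in inputs:
            defined = any(t == s and act == a for (t, act, _) in transitions)
            if not defined:
                completed.add((s, a, s))   # undefined input: stay in place
    return completed
```

For example, an LTS with the single transition `('s0', '?a', 's1')` over inputs `?a`, `?b` gains self-loops for `?b` in `s0` and for both inputs in `s1`.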
Programming and Computer Software | 2007
Igor B. Bourdonov; Alexander S. Kossatchev; Victor V. Kuliamin
Formal methods for testing the conformance of a system to its specification are considered. The operational interaction semantics is specified by a special testing machine that formally determines the testing capabilities. A set of theoretically powerful and practically important capabilities is distinguished that can be reduced to the observation of external actions and refusals (the absence of external actions). The novelties are as follows. (1) Parameterization of the semantics by the families of observable and unobservable refusals, which makes it possible to take into account various constraints on the (correct) interactions. (2) Destruction as a forbidden action, which is possible but should not be performed in the case of a correct interaction. (3) Modeling of divergence by the Δ-action, which also should be avoided in the case of a correct interaction. On the basis of this semantics, the concept of safe testing, the implementation safety hypothesis, and the safe conformance relation are proposed. The safe conformance relation corresponds to the principle of independent observations: a behavior of an implementation is correct or incorrect independently of its other possible behaviors. For a narrower class of interactions, another version of the semantics based on ready traces may be used along with the corresponding conformance relation. Some propositions concerning the relationships between the conformance relations under various semantics are formulated. A completion transformation that solves the problem of reflexivity of the conformance relation and a monotone transformation that solves the monotonicity problem (preservation of the conformance under composition) are defined.
Programming and Computer Software | 2011
Igor B. Bourdonov; Alexander S. Kossatchev
The paper is devoted to the ioco relation, which determines conformance of an implementation to its specification. The problems considered include nonreflexivity of the ioco relation, the presence in the specification of nonconformal traces (traces absent from every conformal implementation), lack of ioco preservation under composition (a composition of implementations conformal to their specifications may not be conformal to the composition of those specifications), and "false" errors when testing in a context. To solve these problems, an algorithm of specification completion preserving ioco (i.e., preserving the class of conformal implementations) is proposed. These problems do not arise for the class of completed specifications.
Programming and Computer Software | 2009
Igor B. Bourdonov; Alexander S. Kossatchev
An approach to modeling the components of distributed systems whose interaction is based on handling events with regard to their priorities is considered. Although priority-based servicing of requests or messages is widely used in practice, mathematical models of the interaction of such programs often neglect the priorities, thus introducing extra nondeterminism into the description of their behavior. The proposed approach attempts to avoid this drawback by defining a parallel composition that provides a model for interaction of this kind. The subject matter of this paper is the development of a formal theory of testing components that use priorities. Within this theory, the concepts of a safe execution of the model and of a conformance relation between models are introduced, and the generation of test suites that check conformance is considered.
Programming and Computer Software | 2004
Igor B. Bourdonov
A covering path in a directed graph is a path passing through all vertices and arcs of the graph, with each arc being traversed only in the direction of its orientation. A covering path exists for any initial vertex only if the graph is strongly connected, i.e., any of its vertices can be reached from any other vertex by some path. Strong connectivity is the only restriction on the considered class of graphs. As is known, on the class of such graphs, the covering path length is Θ(nm), where n is the number of vertices and m is the number of arcs. For any graph, there exists a covering path of length O(nm), and there exist graphs with covering paths of the minimum length Ω(nm). The traversal of an unknown graph implies that the topology of the graph is not a priori known, and we learn it only in the course of traversing the graph. At each vertex, one can see which arcs originate from the vertex, but one can learn to which vertex a given arc leads only after traversing this arc. This is similar to the problem of traversing a maze by a robot in the case where the plan of the maze is not available. If the robot is a "general-purpose computer" without any limitations on the number of its states, then traversal algorithms with the same estimate O(nm) are known. If the number of states is bounded, then this robot is a finite automaton. Such a robot is an analogue of the Turing machine, where the tape is replaced by a graph and the cells are assigned to the graph vertices and arcs. Currently, the lower estimate of the length of the traversal by a finite robot is not known. In 1971, the author of this paper suggested a robot with the traversal length O(nm + n² log n). The algorithm of the robot is based on the construction of the output directed spanning tree of the graph and on the breadth-first search (BFS) on this tree.
In 1993, Afek and Gafni [1] suggested an algorithm with the same estimate of the covering path length, which was also based on constructing a spanning tree but used the depth-first search (DFS) method. In this paper, an algorithm is suggested that combines the breadth-first search with the backtracking (suggested by Afek and Gafni), which made it possible to reach the estimate O(nm + n² log log n). The robot uses a constant number of memory bits for each vertex and arc of the graph.
Programming and Computer Software | 2010
Igor B. Bourdonov; Alexander S. Kossatchev
Formal methods for testing the conformance of a software system to its specification are considered. The interaction semantics determines the testing capabilities, which are reduced to the observation of actions and refusals (absence of actions). The semantics is parameterized by the families of observable and unobservable refusals. The concept of destruction as a prohibited action that should be avoided in the course of interaction is introduced. The concepts of safe testing, the implementation safety hypothesis, safe conformance, and generation of a complete test suite based on the specification are defined. Equivalences of traces, specifications, safety relations, and interaction semantics are examined. A specification completion is proposed that removes from the specification irrelevant traces (those not occurring in safely testable implementations) and nonconformal traces. The concept of total testing that detects all the errors in the implementation (rather than at least one error, as is the case in complete testing) is introduced. On the basis of an analysis of dependences between errors, a method for the minimization of test suites is proposed. The problem of preserving the conformance under composition (the monotonicity of conformance) is investigated, and a monotone transformation of the specification solving this problem is proposed.
Programming and Computer Software | 2004
Igor B. Bourdonov
A covering path in a directed graph is a path passing through all vertices and arcs of the graph, with each arc being traversed only in the direction of its orientation. A covering path exists for any initial vertex only if the graph is strongly connected. The traversal of an unknown graph implies that the topology of the graph is not a priori known, and we learn it only in the course of traversing the graph. This is similar to the problem of traversing a maze by a robot in the case where the plan of the maze is not available. If the robot is a "general-purpose computer" without any limitations on the number of its states, then traversal algorithms with the estimate O(nm) are known, where n is the number of vertices and m is the number of arcs. If the number of states is finite, then this robot is a finite automaton. Such a robot is an analogue of the Turing machine, where the tape is replaced by a graph and the cells are assigned to the graph vertices and arcs. The selection of an arc not yet traversed among those originating from the current vertex is determined by the order of the outgoing arcs, which is a priori specified for each vertex. The best known traversal algorithms for a finite robot are based on constructing the output directed spanning tree of the graph with the root at the initial vertex and traversing it with the aim of finding all untraversed arcs. In doing so, we face the backtracking problem, which consists in visiting all vertices of the tree in the order inverse to their natural partial ordering, i.e., from the leaves to the root. Therefore, the upper estimate of the algorithms differs from the optimal estimate O(nm) by the number of steps required for the backtracking along the outgoing tree. The best known estimate, O(nm + n² log log n), was suggested by the author in the previous paper [1]. In this paper, a finite robot is suggested that performs backtracking with the estimate O(n² log*(n)).
The function log* is defined as the integer solution t of the inequality 1 ≤ logᵗ(n) < 2, where logᵗ = log ∘ log ∘ ... ∘ log (the composition ∘ is applied t − 1 times) is the t-th compositional degree of the logarithm. The estimate O(nm + n² log*(n)) for the covering path length is valid for any strongly connected graph for a certain (unfortunately, not arbitrary) order of the outgoing arcs. Interestingly, such an order of the arcs can be marked by symbols of the finite robot traversing the graph. Hence, there exists a robot that traverses the graph twice: the first traversal with the estimate O(nm + n² log log n) and the second traversal with the estimate O(nm + n² log*(n)).
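The iterated logarithm defined above can be computed directly: apply log₂ until the value drops below 2 and count the applications. This small sketch (not from the paper) illustrates how slowly log* grows.

```python
import math

def log_star(n):
    """Iterated logarithm: the integer t with 1 <= log^t(n) < 2,
    i.e., the number of times log2 must be applied to n (n >= 2)
    before the result falls below 2."""
    t = 0
    x = n
    while x >= 2:
        x = math.log2(x)   # one more application of the logarithm
        t += 1
    return t
```

For example, log*(16) = 3 (16 → 4 → 2 → 1) and log*(65536) = 4, so for any practically occurring graph size the n² log*(n) term is very close to linear in n².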