Petr Fiser
Czech Technical University in Prague
Publications
Featured research published by Petr Fiser.
international on-line testing symposium | 2006
Pavel Kubalik; Petr Fiser; Hana Kubatova
This paper describes a highly reliable digital circuit design method based on totally self-checking blocks implemented in FPGAs. The self-checking blocks are built around parity predictors. A parity predictor design method based on multiple parity groups is proposed; the parity groups are chosen so as to obtain minimal area overhead and to decrease the number of undetectable faults.
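The core idea can be illustrated with a minimal software model of multi-group parity prediction. This is a hedged sketch only: the 3-input toy circuit, the two-group output partition, and the hand-derived predictor expressions are illustrative assumptions, not the paper's FPGA designs.

from itertools import product

def toy_circuit(a, b, c):
    """A 3-input, 4-output combinational circuit used as a stand-in."""
    return (a & b, a ^ c, b | c, a ^ b ^ c)

GROUPS = [(0, 1), (2, 3)]   # assumed partition of outputs into parity groups

def predicted_parities(a, b, c):
    """Parity predictors: separate logic blocks that compute each group's
    parity directly from the circuit inputs (derived here by hand)."""
    return [(a & b) ^ (a ^ c),       # parity of outputs 0 and 1
            (b | c) ^ (a ^ b ^ c)]   # parity of outputs 2 and 3

def check(outputs, parities):
    """Checker: a fault flipping one output bit breaks its group's parity."""
    return all((sum(outputs[i] for i in g) & 1) == p
               for g, p in zip(GROUPS, parities))

# Fault-free operation raises no alarm on any input combination...
assert all(check(toy_circuit(a, b, c), predicted_parities(a, b, c))
           for a, b, c in product((0, 1), repeat=3))
# ...while a single flipped output bit is detected:
faulty = list(toy_circuit(0, 1, 1)); faulty[0] ^= 1
assert not check(faulty, predicted_parities(0, 1, 1))

The trade-off the abstract mentions is visible here: more and smaller groups leave fewer error patterns with even parity inside a single group (the undetectable ones), at the cost of more predictor logic.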
digital systems design | 2010
Jiri Balcarek; Petr Fiser; Jan Schmidt
In this paper we propose a new method of test pattern compression based on the design of a dedicated SAT-based ATPG (Automatic Test Pattern Generator). The compression method targets systems on chip (SoCs) provided with the P1500 test standard; the RESPIN architecture can be used for test pattern decompression. The main idea is to find the best overlap of test patterns during test generation, unlike other methods, which are based on efficient overlapping of pre-generated test patterns. The proposed algorithm takes advantage of an implicit test representation as SAT problem instances. The results of test pattern compression obtained for the standard ISCAS'85 and '89 benchmark circuits are shown and compared with competitive test compression methods.
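To make the overlap idea concrete, here is a hedged sketch (not the paper's SAT-based ATPG): ternary test patterns with don't-cares are appended to a scan sequence while reusing the longest compatible suffix, which is what makes the decompressed sequence shorter than the raw pattern set.

def compatible(a, b):
    return a == b or a == '-' or b == '-'

def merge(a, b):
    return b if a == '-' else a

def append_with_overlap(seq, pat):
    """Append pattern `pat` to sequence `seq`, reusing the longest suffix of
    `seq` that is bit-compatible with a prefix of `pat` ('-' matches both)."""
    for k in range(min(len(seq), len(pat)), -1, -1):
        tail, head = seq[len(seq) - k:], pat[:k]
        if all(compatible(x, y) for x, y in zip(tail, head)):
            merged = [merge(x, y) for x, y in zip(tail, head)]
            return seq[:len(seq) - k] + ''.join(merged) + pat[k:]
    return seq + pat   # unreachable: k = 0 always matches

tests = ['1-01', '0110', '10--']
seq = ''
for t in tests:
    seq = append_with_overlap(seq, t)
print(seq)   # -> '1-0110--': 8 bits instead of the 12 bits of the raw patterns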
digital systems design | 2007
Petr Fiser
This paper discusses the choice of a pseudorandom pattern generator to be used with the column-matching-based built-in self-test (BIST) design method. The pattern generator should be as small as possible, while the patterns it generates should guarantee satisfactory fault coverage. Weighted random pattern generators offer this. Several weighted pattern generator designs are proposed and their effectiveness is evaluated in this paper; moreover, two methods for computing the weights are compared. The column-matching method is primarily intended for test-per-clock BIST, i.e., test patterns are applied to the circuit under test in parallel. Pseudorandom vectors produced by an LFSR are modified by a combinational circuit to obtain deterministic test patterns; the number of inputs of this block corresponds to the width of the LFSR, and its outputs correspond to the inputs of the circuit under test. Possibilities for reducing the LFSR width are discussed.
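One classical way to compute such weights can be sketched as follows, under the assumption of a small hand-made deterministic test set: each input's probability of receiving a 1 is set to the fraction of 1s in the corresponding column of the test set. The paper compares two weight-computation methods; this sketch illustrates only the general principle.

import random

deterministic_tests = [   # an assumed deterministic test set, one row per test
    '11010',
    '10011',
    '11001',
    '01010',
]

n = len(deterministic_tests[0])
weights = [sum(int(t[i]) for t in deterministic_tests) / len(deterministic_tests)
           for i in range(n)]
print(weights)   # -> [0.75, 0.75, 0.0, 0.75, 0.5]: per-input 1-probabilities

def weighted_pattern(weights, rng=random.random):
    """Generate one weighted random test pattern."""
    return ''.join('1' if rng() < w else '0' for w in weights)

random.seed(0)
print([weighted_pattern(weights) for _ in range(3)])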
digital systems design | 2006
Petr Fiser; Hana Kubatova
We propose a novel two-level Boolean minimizer that succeeds our previously developed minimizer BOOM; hence we have named it BOOM-II. It is a combination of two minimizers, BOOM and FC-Min, each of which has its own area where it is most efficiently applicable. We have combined the two methods to be able to solve all kinds of problems efficiently, independently of their size or nature. The tool is very scalable in terms of required runtime and/or quality of the solution, and it is applicable to functions with an extremely large number of both input and output variables. The minimization process is very flexible and can be driven by miscellaneous user-defined constraints, such as low-power design, design-for-testability, and decomposition constraints. Some of the application areas are described in the paper.
digital systems design | 2003
Petr Fiser; Jan Hlavicka; Hana Kubatova
We present a novel heuristic algorithm for two-level Boolean minimization. In contrast to other approaches, the proposed method first finds a coverage of the on-sets and from that derives the group implicants. No prime implicants of the individual functions are computed; only the implicants needed to cover the on-sets are produced. This reverse approach makes the algorithm extremely fast and minimizes its memory demands. It is most efficient for functions with a large number of output variables, where other minimization algorithms (e.g., ESPRESSO) are too slow. It is also very efficient for highly unspecified functions, i.e., functions with only a few terms defined.
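The reverse order of operations can be sketched as follows; the grouping heuristic and the validity test against the off-set are simplified assumptions, not the published FC-Min algorithm. A group of on-set terms chosen to be covered together directly yields its group implicant as the smallest cube spanning them, so no prime implicants need to be enumerated first.

def spanning_cube(terms):
    """Smallest cube containing all given ternary terms ('-' = don't-care)."""
    def join(a, b):
        return a if a == b else '-'
    cube = terms[0]
    for t in terms[1:]:
        cube = ''.join(join(x, y) for x, y in zip(cube, t))
    return cube

def intersects(cube, term):
    return all(a == b or a == '-' or b == '-' for a, b in zip(cube, term))

on_set  = ['0011', '0010', '1010']   # assumed on-set terms of one output
off_set = ['1111', '0100']           # assumed off-set terms

group = ['0011', '0010']             # terms chosen to be covered together
cube = spanning_cube(group)
# The derived implicant is valid only if it avoids the whole off-set:
assert not any(intersects(cube, t) for t in off_set)
print(cube)   # -> '001-'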
design and diagnostics of electronic circuits and systems | 2014
Petr Fiser; Jan Schmidt; Jiří Balcárek
In this paper we present an experimental analysis of the robustness of Electronic Design Automation (EDA) tools with respect to seemingly unimportant aspects (bias) introduced by the designer "from outside". The algorithms employed in EDA tools should be completely immune to these aspects, since they carry no useful information: source files differing only in these aspects are semantically equivalent. However, we show that most of the studied tools are seriously sensitive to them, much more than ever reported. The results indicate that experiments conducted to evaluate the performance of EDA tools must take such behavior into consideration. The very notion of a benchmark is also questioned.
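The experimental setup can be outlined as a small harness. This is a hedged sketch only: run_tool is a placeholder for invoking an actual EDA tool and parsing its report, and line shuffling stands in for the many semantics-preserving source permutations (variable ordering, gate ordering, signal naming) such an experiment would exercise.

import random
import statistics

def shuffled_variant(netlist_lines, seed):
    """Reorder independent declaration lines; the circuit stays the same."""
    rng = random.Random(seed)
    lines = list(netlist_lines)
    rng.shuffle(lines)
    return lines

def robustness(netlist_lines, run_tool, trials=20):
    """Relative spread of the tool's result metric over equivalent sources.
    `run_tool` is a placeholder that would invoke the EDA tool on one variant
    and parse, e.g., the reported area; it is not a real API."""
    areas = [run_tool(shuffled_variant(netlist_lines, s)) for s in range(trials)]
    return statistics.pstdev(areas) / statistics.mean(areas)

A robust tool would show near-zero relative spread; the paper's finding is that real tools do not.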
design and diagnostics of electronic circuits and systems | 2010
Petr Fiser; Jan Schmidt; Zdenek Vasicek; Lukas Sekanina
Recently, it has been shown that the synthesis of some circuits is quite difficult for conventional methods. In this paper we present a method for the minimization of multi-level logic networks that can handle these difficult circuit instances. The synthesis problem is transformed into a search problem, and a search algorithm called Cartesian genetic programming (CGP) is applied to synthesize various difficult circuits. Conventional circuit synthesis usually fails for these circuits; specific synthesis processes must be employed to obtain satisfactory results. We have found that CGP is able to implicitly discover new efficient circuit structures and is thus able to optimize circuits universally, regardless of their structure. Optimization by CGP has been found especially efficient when applied to circuits already optimized by conventional synthesis: the total runtime is reduced, while the result quality is further improved.
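For readers unfamiliar with CGP, the following compact (1+4) evolutionary loop sketches the search principle on a toy target (3-input parity). The gate set, genome encoding, and parameters are illustrative assumptions, not the paper's setup, which operates on whole gate-level netlists.

import random
from itertools import product

GATES = [lambda a, b: a & b, lambda a, b: a | b,
         lambda a, b: a ^ b, lambda a, b: 1 - a]   # NOT ignores its 2nd input

N_IN, N_NODES = 3, 12
TARGET = {bits: bits[0] ^ bits[1] ^ bits[2]
          for bits in product((0, 1), repeat=N_IN)}   # 3-input parity

def random_genome(rng):
    # node i may read the primary inputs or any earlier node's output
    nodes = [(rng.randrange(len(GATES)),
              rng.randrange(N_IN + i), rng.randrange(N_IN + i))
             for i in range(N_NODES)]
    return nodes + [rng.randrange(N_IN + N_NODES)]   # last gene: output select

def evaluate(genome, bits):
    vals = list(bits)
    for g, a, b in genome[:-1]:
        vals.append(GATES[g](vals[a], vals[b]))
    return vals[genome[-1]]

def fitness(genome):
    return sum(evaluate(genome, bits) == out for bits, out in TARGET.items())

def mutate(genome, rng):
    child = list(genome)
    i = rng.randrange(N_NODES + 1)
    if i == N_NODES:
        child[i] = rng.randrange(N_IN + N_NODES)
    else:
        child[i] = (rng.randrange(len(GATES)),
                    rng.randrange(N_IN + i), rng.randrange(N_IN + i))
    return child

rng = random.Random(1)
parent = random_genome(rng)
for gen in range(2000):                               # (1+4) strategy
    children = [mutate(parent, rng) for _ in range(4)]
    parent = max(children + [parent], key=fitness)    # offspring win ties
    if fitness(parent) == len(TARGET):
        break
print(gen, fitness(parent))

For circuit optimization, the fitness would additionally reward a smaller number of active nodes once functional correctness is reached.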
digital systems design | 2009
Jan Schmidt; Petr Fiser
We present experiments with synthesis tools using examples that are currently believed to be very hard, namely the LEKU examples by Cong and Minkovich and parity examples of our own construction. In both cases, we found a way to produce reasonable results with existing tools. We identify the abilities that are crucial for achieving such results, and generalize them to avoid similar cases of poor performance in future tools. Logic synthesis is believed to be a matured process, giving results reasonably close to the optimum. Yet there are still circuits that are very hard for any synthesis process. Cong and Minkovich (1) published a method for the construction of combinational circuits with known optimal implementation (LEKO) or with a known upper bound (LEKU). Here we study the latter, as the gap between the upper bound and the obtained results is the largest. Our parity examples (2) are another case of difficult circuits: synthesis tools give results an order of magnitude or two larger than a known upper bound. We investigated the reasons for the observed poor performance experimentally. We succeeded in finding tools and procedures that give satisfactory (i.e., not orders of magnitude worse) results, and we experimented further to obtain clues as to what makes those tools and procedures successful. We first describe our experimental methods; then experiments with both sets of examples are described together with the results obtained; finally, we interpret the results and give requirements for future tools.
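The parity upper bound mentioned above is elementary: n-input parity can always be realized with n - 1 two-input XOR gates, e.g., as a balanced tree. A small sketch of that construction follows (an illustration, not the paper's tooling):

def parity_tree(signals):
    """Return (output, gate_list) for a balanced XOR tree over `signals`."""
    gates = []
    layer = list(signals)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            out = f'x{len(gates)}'
            gates.append((out, 'XOR', layer[i], layer[i + 1]))
            nxt.append(out)
        if len(layer) % 2:          # odd signal passes through to next layer
            nxt.append(layer[-1])
        layer = nxt
    return layer[0], gates

out, gates = parity_tree([f'in{i}' for i in range(16)])
print(out, len(gates))   # 15 gates for 16 inputs: the n - 1 upper bound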
digital systems design | 2009
Petr Fiser; David Toman
We introduce a fast and efficient minimization method for functions described by many (up to millions of) product terms. The algorithm is based on a newly proposed efficient representation of a set of product terms: a ternary tree. A significant speedup of the term look-up operation is achieved with respect to a standard tabular function representation. The minimization procedure is based on a fast application of basic Boolean operations upon the ternary tree; minimization of incompletely specified functions is supported as well. The method was tested on randomly generated large sums-of-products and on collapsed ISCAS benchmark circuits, and its performance was compared with Espresso. A very advantageous application of the new algorithm has been found: if it is used to pre-process a function having a large number of product terms, run prior to Espresso, the total minimization runtime is significantly reduced, while the result quality is not affected.
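A minimal sketch of the ternary-tree idea follows; the details of the paper's data structure are simplified. Each tree level corresponds to one variable and branches on '0', '1', or '-' (don't-care), so a term's presence is decided in O(number of variables) steps instead of a scan over the whole term table.

class TernaryTree:
    def __init__(self):
        self.root = {}

    def insert(self, term):
        node = self.root
        for lit in term:             # term is a string over {'0', '1', '-'}
            node = node.setdefault(lit, {})
        node['$'] = True             # end-of-term marker

    def contains(self, term):
        node = self.root
        for lit in term:
            if lit not in node:
                return False
            node = node[lit]
        return '$' in node

t = TernaryTree()
for term in ('01-1', '1--0', '0101'):
    t.insert(term)
print(t.contains('01-1'), t.contains('0110'))   # -> True False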
digital systems design | 2001
Petr Fiser; Jan Hlavicka
The paper presents a new method of Boolean function minimization based on an original approach to implicant generation by inclusion of literals. The selection of the newly included literals, as well as the subsequent rejection of some of them to obtain prime implicants, is based on heuristics working with the frequency of literal occurrence. Instead of using this data directly, mutations are applied at certain places in the algorithm; the mutation technique and its influence on the quality of the results is evaluated. The BOOM system implementing the proposed method is especially efficient for functions with several hundreds of input variables whose values are defined only for a small part of their range. It has been tested both on standard benchmarks and on randomly generated problems of a much larger dimension. These experiments showed that the new algorithm is very fast and that, for large circuits, it delivers better results than the state-of-the-art ESPRESSO.
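A hedged sketch of the frequency-guided search with mutations follows (the term-selection and restart machinery of the real BOOM system is simplified away): starting from the universal cube, literals are added until the cube no longer intersects the off-set. Each added literal is normally the most frequent one among the still-covered on-set terms, but occasionally a random candidate is taken instead, which corresponds to the mutations evaluated in the paper.

import random
from collections import Counter

def covers(cube, term):
    """cube/term map variable -> value; variables absent from `cube` are
    don't-cares, so the empty cube is the universal one."""
    return all(term.get(v) == val for v, val in cube.items())

def implicant(on_set, off_set, mut=0.2, seed=7):
    rng = random.Random(seed)
    cube = {}
    while any(covers(cube, t) for t in off_set):
        covered = [t for t in on_set if covers(cube, t)]
        if not covered:
            return None          # dead end; a real run would restart
        freq = Counter((v, val) for t in covered for v, val in t.items()
                       if v not in cube)
        pick = (rng.choice(list(freq)) if rng.random() < mut    # mutation
                else freq.most_common(1)[0][0])                 # greedy
        cube[pick[0]] = pick[1]
    return cube

on_set  = [{'a': 0, 'b': 1, 'c': 1}, {'a': 0, 'b': 1, 'c': 0}]
off_set = [{'a': 1, 'b': 1, 'c': 1}, {'a': 0, 'b': 0, 'c': 0}]
print(implicant(on_set, off_set))   # e.g. {'a': 0, 'b': 1}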