Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Massimo Narizzano is active.

Publication


Featured research published by Massimo Narizzano.


Theory and Applications of Satisfiability Testing | 2010

sQueezeBF: an effective preprocessor for QBFs based on equivalence reasoning

Enrico Giunchiglia; Paolo Marin; Massimo Narizzano

In this paper we present sQueezeBF, an effective preprocessor for QBFs that combines various techniques for eliminating variables and/or redundant clauses. In particular sQueezeBF combines (i) variable elimination via Q-resolution, (ii) variable elimination via equivalence substitution and (iii) equivalence breaking via equivalence rewriting. The experimental analysis shows that sQueezeBF can produce significant reductions in the number of clauses and/or variables - up to the point that some instances are solved directly by sQueezeBF - and that it can significantly improve the efficiency of a range of state-of-the-art QBF solvers - up to the point that some instances cannot be solved without sQueezeBF preprocessing.
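To make technique (i) concrete, here is a minimal, self-contained sketch of variable elimination by resolution on an existential variable, in the spirit of the Q-resolution step described above. The function names are illustrative, and universal reduction of resolvents is omitted for brevity; this is not sQueezeBF's actual implementation.

```python
def q_resolve(c1, c2, pivot):
    """Resolve two clauses on an existential pivot variable.

    Clauses are frozensets of non-zero ints (DIMACS-style literals);
    `pivot` must occur positively in c1 and negatively in c2.
    Returns the resolvent, or None if it is tautological.
    """
    assert pivot in c1 and -pivot in c2
    resolvent = (c1 - {pivot}) | (c2 - {-pivot})
    # A resolvent containing both l and -l is tautological and dropped.
    if any(-lit in resolvent for lit in resolvent):
        return None
    return resolvent

def eliminate_variable(clauses, var):
    """Eliminate an existential variable by pairwise resolution:
    replace all clauses mentioning `var` with their resolvents."""
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    rest = [c for c in clauses if var not in c and -var not in c]
    resolvents = []
    for c1 in pos:
        for c2 in neg:
            r = q_resolve(c1, c2, var)
            if r is not None:
                resolvents.append(r)
    return rest + resolvents
```

Eliminating a variable this way can shrink the formula when it has few occurrences, but may blow up the clause count otherwise, which is why preprocessors apply it selectively.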


Formal Methods in Computer-Aided Design | 2004

QuBE++: An Efficient QBF Solver

Enrico Giunchiglia; Massimo Narizzano; Armando Tacchella

In this paper we describe QuBE++, an efficient solver for Quantified Boolean Formulas (QBFs). To the best of our knowledge, QuBE++ is the first QBF reasoning engine that uses lazy data structures both for unit clause propagation and for pure literal detection. QuBE++ also features non-chronological backtracking and a branching heuristic that leverages the information gathered during the backtracking phase. Owing to such techniques and to a careful implementation, QuBE++ turns out to be an efficient and robust solver, whose performance exceeds that of other state-of-the-art QBF engines and is comparable with that of the best engines currently available on SAT instances.


Theory and Applications of Satisfiability Testing | 2003

Watched Data Structures for QBF Solvers

Ian P. Gent; Enrico Giunchiglia; Massimo Narizzano; Andrew G. D. Rowley; Armando Tacchella

In the last few years, we have seen a tremendous boost in the efficiency of SAT solvers, this boost being mostly due to Chaff. Chaff owes some of its efficiency to its “two-literal watching” data structure.
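The "two-literal watching" scheme mentioned above can be sketched as follows: each clause watches two of its literals and is revisited only when a watched literal becomes false, so most assignments touch very few clauses. This is a toy propositional sketch of the Chaff idea, not QuBE's actual data structure; names are illustrative, and all clauses are assumed to have at least two literals.

```python
from collections import defaultdict

def propagate(clauses, initial_units):
    """Unit propagation with two watched literals per clause.

    Literals are non-zero ints; `clauses` is a list of literal lists.
    Returns the resulting assignment dict, or None on conflict.
    """
    assignment = {}                        # var -> bool
    watches = defaultdict(list)            # watched literal -> clause ids
    watched = {}                           # clause id -> its two watches
    for i, c in enumerate(clauses):
        watched[i] = [c[0], c[1]]
        watches[c[0]].append(i)
        watches[c[1]].append(i)

    def value(lit):
        v = assignment.get(abs(lit))
        return None if v is None else v == (lit > 0)

    queue = list(initial_units)
    while queue:
        lit = queue.pop()
        if value(lit) is False:
            return None                    # conflicting assignment
        assignment[abs(lit)] = lit > 0
        # Only clauses watching -lit can become unit or false.
        for ci in list(watches[-lit]):
            w1, w2 = watched[ci]
            other = w1 if w2 == -lit else w2
            # Try to move the watch to a non-false literal.
            for cand in clauses[ci]:
                if cand != other and value(cand) is not False:
                    watched[ci] = [other, cand]
                    watches[-lit].remove(ci)
                    watches[cand].append(ci)
                    break
            else:
                if value(other) is False:
                    return None            # clause is falsified
                if value(other) is None:
                    queue.append(other)    # clause became unit
    return assignment
```

A key property, and the reason the structure is called lazy, is that nothing needs to be undone on backtracking: the watches remain valid for any superset-free prefix of the assignment.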


AI Communications | 2009

Evaluating and certifying QBFs: A comparison of state-of-the-art tools

Massimo Narizzano; Claudia Peschiera; Luca Pulina; Armando Tacchella

In this paper we compare the performance of all the currently available suites to evaluate and certify QBFs. Our aim is to assess the current state of the art, and also to understand to which extent QBF encodings can be evaluated producing certificates that can be checked in a reliable and efficient way. We conclude that, while the evaluation of some QBFs is still an open challenge, producing and checking certificates for many medium-to-large scale QBFs is feasible with the current technology.


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2007

Quantifier Structure in Search-Based Procedures for QBFs

Enrico Giunchiglia; Massimo Narizzano; Armando Tacchella

The best currently available solvers for quantified Boolean formulas (QBFs) process their input in prenex form, i.e., all the quantifiers have to appear in the prefix of the formula, separated from the purely propositional part representing the matrix. However, in many QBFs derived from applications, the propositional part is intertwined with the quantifier structure. To tackle this problem, the standard approach is to convert such QBFs into prenex form, thereby losing structural information about the prefix. In the case of search-based solvers, the prenex-form conversion introduces additional constraints on the branching heuristic and reduces the benefits of the learning mechanisms. In this paper, we show that conversion to prenex form is not necessary: current search-based solvers can be naturally extended to handle nonprenex QBFs and to exploit the original quantifier structure. We highlight the two mentioned drawbacks of conversion into prenex form with a simple example, and we show that our ideas can also be useful for solving QBFs in prenex form. To validate our claims, we implemented our ideas in the state-of-the-art search-based solver QuBE and conducted an extensive experimental analysis. The results show that very substantial speedups can be obtained.


Journal of Automated Reasoning | 2010

Using Bounded Model Checking for Coverage Analysis of Safety-Critical Software in an Industrial Setting

Damiano Angeletti; Enrico Giunchiglia; Massimo Narizzano; Alessandra Puddu; Salvatore Sabina

Testing and Bounded Model Checking (BMC) are two techniques used in software verification for bug-hunting. They are expressions of two different philosophies: testing is applied to the compiled code and is better suited to finding errors in common behaviors, while BMC is applied to the source code to find errors in uncommon behaviors of the system. Nowadays, testing is by far the most widely used technique for software verification in industry: it is easy to use, and even when no error is found, it can deliver a set of tests certifying the (partial) correctness of the compiled system. In the case of safety-critical software, in order to increase confidence in the correctness of the compiled system, the provided set of tests is often required to cover 100% of the code. This requirement, however, substantially increases the costs associated with the testing phase, since it often involves the manual generation of tests. In this paper we show how BMC can be productively applied to the software verification process in industry. In particular, we show how to use a Bounded Model Checker for C programs (CBMC) as an automatic test generator for the coverage analysis of safety-critical software. We experimented with CBMC on a subset of the modules of the European Train Control System (ETCS) of the European Rail Traffic Management System (ERTMS) source code, an industrial system for the control of railway traffic, provided by Ansaldo STS. The ERTMS/ETCS code, with thousands of lines, was used as a trial application, and with CBMC we obtained a set of tests satisfying the 100% code-coverage target required by the CENELEC EN50128 guidelines for the development of safety-critical software. The use of CBMC for test generation led to a dramatic increase in the productivity of the entire software development process by substantially reducing the costs of the testing phase.
To the best of our knowledge, this is the first time that BMC techniques have been used in an industrial setting for automatically generating tests achieving full coverage of Safety-Critical Software. The positive results demonstrate the maturity of Bounded Model Checking techniques for automatic test generation in industry.
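The underlying trick can be illustrated with a toy: a bounded model checker becomes a test generator if one asserts `False` at a not-yet-covered statement and takes the resulting counterexample trace as a new test input. The sketch below replaces the model checker with brute-force enumeration over bounded integer inputs; it is a didactic stand-in for the approach, not CBMC's interface, and `demo` and the statement labels are invented for illustration.

```python
from itertools import product

def generate_test(program, n_inputs, target, bound=2):
    """Search bounded inputs for one that reaches `target`.

    `program(inputs)` returns the set of statement labels it covers.
    Reaching `target` corresponds to violating an assert(0) planted
    there, so the found inputs play the role of the counterexample.
    """
    for inputs in product(range(-bound, bound + 1), repeat=n_inputs):
        if target in program(inputs):
            return inputs          # counterexample == new test case
    return None                    # statement unreachable within bound

def demo(inputs):
    """Hypothetical program under test with one hard-to-hit branch."""
    x, y = inputs
    covered = {"entry"}
    if x > 0 and y < 0:
        covered.add("rare_branch")
    return covered
```

Iterating this over every uncovered statement yields a test suite aiming at full coverage, which is the workflow the paper applies at industrial scale with CBMC.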


BMC Bioinformatics | 2015

Automatic segmentation of deep intracerebral electrodes in computed tomography scans

Gabriele Arnulfo; Massimo Narizzano; Francesco Cardinale; Marco Fato; Jaakko Matias Palva

Background: Invasive monitoring of brain activity by means of intracerebral electrodes is widely practiced to improve pre-surgical seizure onset zone localization in patients with medically refractory seizures. Stereo-Electroencephalography (SEEG) is mainly used to localize the epileptogenic zone, and precise knowledge of the location of the electrodes is expected to facilitate the interpretation of recordings and the planning of resective surgery. However, the localization of intracerebral electrodes on post-implant acquisitions is usually time-consuming (i.e., manual segmentation), requires advanced 3D visualization tools, and needs the supervision of trained medical doctors in order to minimize errors. In this paper we propose an automated segmentation algorithm specifically designed to segment SEEG contacts from a thresholded post-implant Cone-Beam CT volume (0.4 mm, 0.4 mm, 0.8 mm). The algorithm relies on the planned positions of the target and entry points of each electrode as a first estimate of the electrode axis. We implemented the proposed algorithm in DEETO, an open-source C++ prototype based on the ITK library.
Results: We tested our implementation on a cohort of 28 subjects in total. The experimental analysis, carried out over a subset of 12 subjects (35 multilead electrodes; 200 contacts) manually segmented by experts, shows that the algorithm (i) is faster than manual segmentation (i.e., less than 1 s/subject versus a few hours), (ii) is reliable, with an error of 0.5 mm ± 0.06 mm, and (iii) accurately maps SEEG implants to their anatomical regions, improving the interpretability of electrophysiological traces for both clinical and research studies. Moreover, using the 28-subject cohort we show that the algorithm is also robust (error < 0.005 mm) against deep-brain displacements (< 12 mm) of the implanted electrode shaft from those planned before surgery.
Conclusions: Our method represents, to the best of our knowledge, the first automatic algorithm for the segmentation of SEEG electrodes. The method can be used to accurately identify the neuroanatomical loci of SEEG electrode contacts by a non-expert in a fast and reliable manner.
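The first step described above, using the planned entry and target points as an estimate of the electrode axis, can be sketched as laying out expected contact centers along the target-to-entry direction. The function and the 3.5 mm default spacing below are illustrative assumptions, not DEETO's actual parameters or API.

```python
import math

def contact_centers(entry, target, n_contacts, spacing_mm=3.5):
    """Expected contact centers along the planned electrode axis.

    `entry` and `target` are 3D points in mm; contacts are laid out
    from the target (deepest contact) toward the entry point at a
    fixed inter-contact spacing.
    """
    axis = [e - t for e, t in zip(entry, target)]
    norm = math.sqrt(sum(a * a for a in axis))
    unit = [a / norm for a in axis]    # target -> entry direction
    return [tuple(t + i * spacing_mm * u for t, u in zip(target, unit))
            for i in range(n_contacts)]
```

In the actual pipeline such expected positions would only seed the search; each contact is then refined against the thresholded CT volume, which is what makes the method robust to shaft displacement.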


Theory and Applications of Satisfiability Testing | 2004

The second QBF solvers comparative evaluation

Daniel Le Berre; Massimo Narizzano; Laurent Simon; Armando Tacchella

This paper reports on the 2004 comparative evaluation of solvers for quantified Boolean formulas (QBFs), the second in a series of non-competitive events established with the aim of assessing the advancements in the field of QBF reasoning and related research. We evaluated sixteen solvers on a test set of about one thousand benchmarks selected from instances submitted to the evaluation and from those available at www.qbflib.org. In the paper we present the evaluation infrastructure, from the criteria used to select the benchmarks to the hardware setup, and we show different views of the results obtained, highlighting the strengths of the different solvers and the relative hardness of the benchmarks included in the test set.


Theory and Applications of Satisfiability Testing | 2009

PaQuBE: Distributed QBF Solving with Advanced Knowledge Sharing

Matthew D. T. Lewis; Paolo Marin; Tobias Schubert; Massimo Narizzano; Bernd Becker; Enrico Giunchiglia

In this paper we present the parallel QBF solver PaQuBE. This new solver leverages the additional computational power of modern computer architectures, from pervasive multicore boxes to clusters and grids, to solve more relevant instances, and faster, than previous-generation solvers. PaQuBE extends QuBE, its sequential core, with a Master/Slave Message Passing Interface (MPI) based design that allows it to split the problem over an arbitrary number of distributed processes. Furthermore, PaQuBE's progressive parallel framework is the first to support advanced knowledge sharing, in which solution cubes as well as conflict clauses can be shared. According to the last QBF Evaluation, QuBE is the most powerful state-of-the-art QBF solver: it was able to solve more than twice as many benchmarks as the next best independent solver. Our results here show that PaQuBE provides additional speedup, solving even more instances, faster.


Theory and Applications of Satisfiability Testing | 2004

QBF reasoning on real-world instances

Enrico Giunchiglia; Massimo Narizzano; Armando Tacchella

In recent years, the development of tools for deciding the satisfiability of Quantified Boolean Formulas (QBFs) has been accompanied by a steady supply of real-world instances, i.e., QBFs originating from translations of application domains such as formal verification and planning. QBFs from these domains have proved challenging for current state-of-the-art QBF solvers, and, in order to tackle them, several techniques and even specialized solvers have been proposed. Among these techniques are (i) efficient detection and propagation of unit and monotone literals, (ii) branching heuristics that leverage the information extracted during the learning phase, and (iii) look-back techniques based on learning. In this paper we discuss their implementation in our state-of-the-art solver QuBE, pointing out the non-trivial issues that arose in the process. We show that all the techniques contribute positively to QuBE's performance on average. In particular, we show that monotone literal fixing is the most important technique for improving capacity, followed by learning and the heuristics; the situation is reversed if we consider productivity. These and other observations are detailed in the body of the paper. For our analysis, we consider the formal verification and planning benchmarks from the 2003 QBF evaluation.
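Technique (i), monotone (pure) literal detection, can be sketched in a few lines: a variable occurring with only one polarity across all clauses can be fixed to the value satisfying its occurrences. This is a toy propositional sketch; in a QBF, how a monotone literal is fixed also depends on whether its variable is existential or universal, which this sketch ignores.

```python
def monotone_literals(clauses):
    """Return {var: polarity} for variables occurring with one polarity.

    `clauses` is a list of literal lists (non-zero ints); the returned
    polarity is the truth value that satisfies all occurrences.
    """
    polarity = {}                          # var -> set of polarities seen
    for clause in clauses:
        for lit in clause:
            polarity.setdefault(abs(lit), set()).add(lit > 0)
    return {var: pols.pop() for var, pols in polarity.items()
            if len(pols) == 1}
```

In a solver this check is run incrementally as literals are assigned, rather than by rescanning the whole matrix as done here.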

Collaboration


Dive into Massimo Narizzano's collaborations.

Top Co-Authors

Paolo Marin

University of Freiburg
