Publication


Featured research published by Zissis Poulos.


International Test Conference | 2014

Clustering-based failure triage for RTL regression debugging

Zissis Poulos; Andreas G. Veneris

Regression verification at the pre-silicon stage has experienced a dramatic boost in capabilities in recent years. With the aid of assertions, improved simulation coverage and formal verification tools, a vast amount of trace data and myriads of failures are often generated after each regression run. As a result, modern flows face an emerging need to appropriately categorize, prioritize and distribute these failures to the engineer(s) best suited for detailed debugging of each failure. This task is known as failure triage. Despite its resource-intensive nature, triage remains a predominantly manual process. In this work, an automated data-mining failure triage framework is introduced that mines simulation and SAT-based design debugging data, uncovers relations among verification failures and automatically groups the related ones together. The core characteristics of the framework are a novel feature-based representation for verification failures and a new multiple-pass clustering strategy that surpass previous methodologies in accuracy, robustness and flexibility. The proposed triage engine achieves an 89% average accuracy in failure categorization and, compared to existing solutions, reduces the number of misplaced verification failures by 47% on average.
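
As a rough illustration of the feature-based representation, the sketch below (with invented failure and suspect names) encodes each failure as the set of suspect locations returned by SAT-based debugging and scores pairwise similarity by set overlap; the multiple-pass clustering described in the paper would then operate on such similarities.

```python
# Illustrative sketch only: failure names, suspect locations and the
# similarity measure are assumptions, not data from the paper.
suspects = {
    "fail_1": {"alu.add", "alu.carry"},
    "fail_2": {"alu.add", "alu.carry", "bypass.mux"},
    "fail_3": {"fetch.pc", "fetch.stall"},
}

def jaccard(a, b):
    # 1.0 = identical suspect sets, 0.0 = disjoint
    return len(a & b) / len(a | b)

for f1 in suspects:
    for f2 in suspects:
        if f1 < f2:
            print(f1, f2, round(jaccard(suspects[f1], suspects[f2]), 2))
# fail_1 and fail_2 score high and would land in the same bin;
# fail_3 stands apart.
```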


Design, Automation and Test in Europe | 2012

Leveraging reconfigurability to raise productivity in FPGA functional debug

Zissis Poulos; Yu-Shen Yang; Jason Helge Anderson; Andreas G. Veneris; Bao Le

We propose new hardware and software techniques for FPGA functional debug that leverage the inherent reconfigurability of the FPGA fabric to reduce functional debugging time. The functionality of an FPGA circuit is represented by a programming bitstream that specifies the configuration of the FPGA's internal logic and routing. The proposed methodology allows different sets of internal design signals to be traced solely through changes to the programming bitstream, followed by device reconfiguration and hardware execution. The advantage of this methodology over existing debug techniques is that it operates without iterative executions of the computationally intensive design re-synthesis, placement and routing tools. In essence, with a single execution of the synthesis flow, the new approach permits a large number of internal signals to be traced for an arbitrary number of clock cycles using a limited number of external pins. Experimental results using commercial FPGA vendor tools demonstrate productivity (i.e., run-time) improvements of up to 30× over a conventional approach to FPGA functional debugging. These results demonstrate the practicality and effectiveness of the proposed approach.
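
A minimal mock-up of the debug loop described above is sketched below; the three helper functions are stubs standing in for vendor tooling (they are assumptions, not real APIs), but the control flow shows how signal sets are swapped via bitstream edits alone, with no re-synthesis in the loop.

```python
# Purely illustrative stubs; not real vendor APIs.
def edit_bitstream(bitstream, observe):
    return dict(bitstream, observed=tuple(observe))  # stub: reroute signals to trace pins

def program_device(bitstream):
    pass  # stub: device reconfiguration (seconds, vs. hours for re-place-and-route)

def capture_trace(signals, num_cycles):
    return {s: [0] * num_cycles for s in signals}  # stub: hardware execution

def trace_all_signals(base_bitstream, signal_groups, num_cycles):
    traces = {}
    for group in signal_groups:  # each group sized to the external-pin budget
        bs = edit_bitstream(base_bitstream, observe=group)  # bitstream edit only
        program_device(bs)
        traces.update(capture_trace(group, num_cycles))
    return traces

print(trace_all_signals({"design": "cpu"}, [["a", "b"], ["c"]], num_cycles=4))
```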


International Conference on Computer Design | 2015

Clustering-based revision debug in regression verification

Djordje Maksimovic; Andreas G. Veneris; Zissis Poulos

Modern digital systems are growing in size and complexity, introducing significant organizational and verification challenges in the design cycle. Verification today takes as much as 70% of the design time, with debugging responsible for half of this effort. Automation has mitigated part of the resource-intensive nature of rectifying erroneous designs. Nevertheless, most tools target failures in isolation. Since regression verification can discover myriads of failures in one run, automation is also required to help an engineer rank them and expedite debugging. To address this growing regression pain, this paper presents a framework that utilizes traditional machine learning techniques along with historical data in version control systems and the results of functional debugging. Its aim is to rank revisions based on their likelihood of being responsible for a particular failure. Ranking prioritizes the revisions that ought to be targeted first, and therefore speeds up localization of the error source. This effectively reduces the number of debug iterations. Experiments on industrial designs demonstrate a 68% improvement in the ranking of actual erroneous revisions versus the ranking obtained through existing industrial methodologies. This benefit arrives with negligible run-time overhead.
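
The sketch below is a toy version of this idea, assuming invented revision metadata: revisions are scored by how much they overlap the files implicated by functional debugging, with a mild recency bias. The paper's actual ranking is learned from historical data; this heuristic only illustrates the shape of the problem.

```python
# Toy revision ranking: data and scoring heuristic are assumptions.
def rank_revisions(revisions, suspect_files):
    def score(rev):
        overlap = len(set(rev["files"]) & suspect_files)  # touches suspect logic?
        return overlap + 0.1 * rev["recency"]             # newer changes slightly favored
    return sorted(revisions, key=score, reverse=True)

revisions = [
    {"id": "r101", "files": ["alu.v"], "recency": 3},
    {"id": "r102", "files": ["fetch.v", "decode.v"], "recency": 2},
    {"id": "r103", "files": ["alu.v", "bypass.v"], "recency": 1},
]
print([r["id"] for r in rank_revisions(revisions, {"alu.v", "bypass.v"})])
# ['r103', 'r101', 'r102']: revisions touching suspect files rank first
```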


Latin-American Test Symposium (LATS) | 2015

Exemplar-based failure triage for regression design debugging

Zissis Poulos; Andreas G. Veneris

Modern regression verification often exposes myriads of failures at the pre-silicon stage. Typically, these failures need to be properly grouped into bins, which then have to be distributed to engineers for detailed analysis. This process is known as failure triage, and it is increasing in complexity as the size of both the design logic and the verification environment continues to grow. However, it remains a predominantly manual process that can prolong the debug cycle and jeopardize time-sensitive design milestones. In this paper, we propose an exemplar-based data-mining formulation of failure triage that efficiently automates both failure grouping and bin distribution. The proposed framework maps failures to data points, applies an affinity-propagation (AP) clustering algorithm, and operates in both metric and non-metric spaces, offering complete flexibility and significant user control over the process. Experimental results show that the proposed approach groups related failures together with 87% accuracy on average, and improves bin distribution accuracy by 21% over existing methods.
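
A minimal sketch of the exemplar-based formulation, using scikit-learn's AffinityPropagation on a precomputed (and here invented) failure-similarity matrix; because AP consumes similarities directly and picks an exemplar per cluster, non-metric failure comparisons fit naturally.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy pairwise failure similarities (higher = more likely same root cause);
# the values are invented for illustration, not taken from the paper.
S = np.array([
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.2, 0.1, 0.8, 1.0],
])

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print(ap.labels_)                   # e.g., [0 0 1 1]: two failure bins
print(ap.cluster_centers_indices_)  # one exemplar failure per bin
```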


International Symposium on Quality Electronic Design | 2014

Simulation and satisfiability guided counter-example triage for RTL design debugging

Zissis Poulos; Yu-Shen Yang; Andreas G. Veneris; Bao Le

Regression verification flows in modern integrated circuit development environments expose a plethora of counter-examples during simulation. Sorting these counter-examples today is a tedious and time-consuming process. High-level design debugging aims to triage these counter-examples into groups that are assigned to the appropriate verification and/or design engineers for detailed root-cause analysis. In this work, we present an automated triage process that leverages knowledge extracted from simulation and SAT-based debugging. We introduce novel metrics that correlate counter-examples based on the likelihood that they share the same root cause. Triage is formulated as a pattern recognition problem and solved by hierarchical clustering techniques to generate groups of related counter-examples. Experimental results demonstrate an overall accuracy of 94% for the proposed automated triage framework, which corresponds to a 40% improvement over conventional scripting methods.
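
The toy sketch below frames that step as pattern recognition: counter-examples become feature vectors (values invented here, standing in for simulation- and SAT-derived metrics) and SciPy's agglomerative clustering produces the triage bins.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row is a counter-example; columns are toy features derived from
# simulation (e.g., failing signal/cycle) and SAT-based debugging (suspects).
X = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.7],
    [0.0, 0.8, 0.9],
])

Z = linkage(X, method="average", metric="euclidean")  # hierarchical clustering
groups = fcluster(Z, t=2, criterion="maxclust")       # ask for two triage bins
print(groups)  # e.g., [1 1 2 2]: counter-examples likely sharing a root cause
```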


International On-Line Testing Symposium | 2013

A failure triage engine based on error trace signature extraction

Zissis Poulos; Yu-Shen Yang; Andreas G. Veneris

The ever-growing demand for functionally robust and error-free industrial electronics necessitates the development of techniques that prevent functional errors from propagating to the final tape-out stage. This paramount requirement in the semiconductor world stems from the unequivocal observation that functional errors which slip into silicon production introduce immense cost and jeopardize chip release dates. Functional verification and debugging are burdened with the tedious task of guaranteeing logic functionality early in the design cycle. In this paper, we present an automated method for the very first stage of functional debugging, called failure triage. Failure triage is the task of analyzing large sets of failures, grouping together those that are likely to be caused by the same design error, and then allocating those groups to the appropriate engineers for fixing. The introduced framework combines techniques from the machine learning domain with the root-cause analysis power of modern SAT-based debugging tools, in order to exploit information from error traces and bin the corresponding failures using clustering algorithms. Preliminary experimental results indicate an average accuracy of 93% for the proposed failure triage engine, which corresponds to a 43% improvement over conventional automated methods.
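
As a hypothetical illustration of signature extraction, the sketch below reduces an error trace to a compact signature (first failing cycle, failing outputs, debugger suspects) that downstream clustering can compare; all field names and the trace format are assumptions, not the paper's representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    first_fail_cycle: int
    failing_outputs: frozenset
    suspects: frozenset

def extract_signature(trace, suspects):
    # trace entries: (cycle, signal, matched_expected)
    fails = [(c, sig) for c, sig, ok in trace if not ok]
    return Signature(
        first_fail_cycle=min(c for c, _ in fails),
        failing_outputs=frozenset(sig for _, sig in fails),
        suspects=frozenset(suspects),
    )

trace = [(10, "out_valid", True), (11, "out_data", False), (12, "out_data", False)]
print(extract_signature(trace, ["alu.add"]))
```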


International On-Line Testing Symposium | 2013

Accelerating post silicon debug of deep electrical faults

Bao Le; Dipanjan Sengupta; Andreas G. Veneris; Zissis Poulos

With the growing complexity of current designs and shrinking time-to-market, traditional ATPG methods fail to detect all electrical faults in a design. Debug teams have to spend a considerable amount of time and effort identifying these faults during post-silicon debug. This work proposes off-chip analysis to speed up the identification of hard-to-find electrical faults that are not detected by conventional test methods, but cause the chip to crash during functional testing or silicon bring-up. With the goal of reducing the search space for reconstructing the failure trace path, a formal methodology is used to analyze the reachable states along the path, which also accelerates isolating the root cause of the failure. Moreover, we propose a forward traversal technique on a select few candidate faults to generate a complete failure trace from the initial state to the crash state. Experimental results show that the proposed approach can lead to a 44% reduction in actual silicon runs, with a commensurate reduction in off-chip debug time.
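
The forward-traversal idea can be sketched as a search over a toy state space, as below: from the initial state, explore states reachable under a candidate fault until the observed crash state is hit, then reconstruct the complete failure trace. This is a simplification under assumed data; the paper's engine works on real design state spaces with formal analysis.

```python
from collections import deque

def failure_trace(initial, crash, next_states):
    # Breadth-first forward traversal from the initial state.
    frontier, parent = deque([initial]), {initial: None}
    while frontier:
        s = frontier.popleft()
        if s == crash:  # reconstruct the path initial -> crash
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in next_states(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None  # crash state unreachable under this candidate fault

# Toy transition relation with a candidate fault injected at state 2.
edges = {0: [1], 1: [2], 2: [3, 4], 3: [], 4: []}
print(failure_trace(0, 4, lambda s: edges[s]))  # [0, 1, 2, 4]
```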


IEEE Computer | 2018

Exploiting Typical Values to Accelerate Deep Learning

Andreas Moshovos; Jorge Albericio; Patrick Judd; Alberto Delmas Lascorz; Sayeh Sharify; Zissis Poulos; Tayler H. Hetherington; Tor M. Aamodt; Natalie D. Enright Jerger

To deliver the advances in hardware computation power needed to support deep learning innovations, identifying deep learning properties that designers can exploit is invaluable. This article articulates our strategy, overviews several value properties of deep learning models that we identified, and describes some of our hardware designs that exploit these properties to reduce computation as well as on- and off-chip storage and communication.
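
One such value property is that many activation values are typically zero (e.g., after ReLU), so their multiplications can be skipped. The toy NumPy sketch below models the saving for a single dot product; it illustrates the idea, not any specific accelerator design from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.random(1024) * (rng.random(1024) > 0.6)  # ~60% zeros, as after ReLU
weights = rng.standard_normal(1024)

nz = np.nonzero(acts)[0]                # positions a zero-skipping unit would keep
result = np.dot(acts[nz], weights[nz])  # same answer, fewer effective multiplies
assert np.isclose(result, np.dot(acts, weights))
print(f"multiplies: {nz.size}/{acts.size} ({nz.size / acts.size:.0%} of dense)")
```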


Industrial Conference on Data Mining | 2017

Fast GPU-Based Influence Maximization Within Finite Deadlines via Node-Level Parallelism

Koushik Pal; Zissis Poulos; Edward Kim; Andreas G. Veneris

Influence maximization in the continuous-time domain is a prevalent topic in social media analytics. It relates to the problem of identifying those individuals in a social network whose endorsement of an opinion will maximize the number of expected follow-ups within a finite time window. This work presents a novel GPU-accelerated algorithm that enables node-parallel estimation of influence spread in the continuous-time domain. Given a finite time window, the method decomposes a social graph into multiple local regions within which influence spread can be estimated in parallel, allowing fast and low-cost computation. Experiments show that the proposed method achieves up to an 85× speed-up over the state-of-the-art on real-world social graphs with up to 100K nodes and 2.5M edges. In addition, our optimization solutions achieve influence spread within 98.9% of the current state-of-the-art, while memory consumption is substantially lower. Indicatively, our method achieves, on a single GPU, running-time performance similar to that of the state-of-the-art when the latter distributes execution across hundreds of CPU cores.
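
A hedged, CPU-only sketch of the per-seed estimation step: sample exponential edge delays, run a shortest-arrival-time search from the seed, and count nodes reached before the deadline T. The graph and transmission rate below are toy values; the paper's contribution lies in running such estimates node-parallel on a GPU over decomposed graph regions.

```python
import heapq
import random

def spread(graph, seed, T, samples=200, rate=1.0):
    total = 0
    for _ in range(samples):
        # Dijkstra over sampled exponential transmission delays.
        dist, heap = {seed: 0.0}, [(0.0, seed)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale queue entry
            for v in graph.get(u, []):
                nd = d + random.expovariate(rate)  # sampled edge delay
                if nd < T and nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        total += len(dist)  # nodes influenced before the deadline
    return total / samples

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(spread(graph, seed=0, T=2.0))  # expected influence within the window
```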


Journal of Electronic Testing | 2016

Exemplar-based Failure Triage for Regression Design Debugging

Zissis Poulos; Andreas G. Veneris

Modern regression verification often exposes myriads of failures at the pre-silicon stage. Typically, these failures need to be properly grouped into bins, which then have to be distributed to engineers for detailed analysis. This process is known as failure triage, and it is increasing in complexity as the size of both the design logic and the verification environment continues to grow. However, it remains a predominantly manual process that can prolong the debug cycle and jeopardize time-sensitive design milestones. In this paper, we propose an exemplar-based data-mining formulation of failure triage that efficiently automates both failure grouping and bin distribution. The proposed framework maps failures to data points, applies an affinity-propagation (AP) clustering algorithm, and operates in both metric and non-metric spaces, offering complete flexibility and significant user control over the process. Experimental results show that the proposed approach groups related failures together with 87% accuracy on average, and improves bin distribution accuracy by 21% over existing methods.

Collaboration


Dive into Zissis Poulos's collaborations.

Top Co-Authors

Bao Le

University of Toronto

View shared research outputs