Giovanni Beltrame
École Polytechnique de Montréal
Publications
Featured research published by Giovanni Beltrame.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2009
Giovanni Beltrame; Luca Fossati; Donatella Sciuto
This paper presents the reflective simulation platform (ReSP), a transaction-level multiprocessor simulation platform based on the integration of SystemC and Python. ReSP exploits the concept of reflection, enabling the integration of SystemC components without source-code modifications and providing full observability of their internal state. ReSP offers fine-grained simulation control and supports the evaluation of different hardware/software configurations of a given application, enabling complete design space exploration. ReSP allows the evaluation of real-time applications on high-level hardware models, since it provides transparent emulation of POSIX-compliant real-time operating system (RTOS) primitives. A number of experiments have been performed to validate ReSP and its capabilities, using a set of single- and multithreaded benchmarks, with both POSIX Threads (PThreads) and OpenMP programming styles. These experiments confirm that reflection introduces negligible (<1%) overhead when comparing ReSP to plain SystemC simulation. The results also show that ReSP can be successfully used to analyze and explore concurrent and reconfigurable applications even at very early development stages. In fact, the average error introduced by ReSP's RTOS emulation is below 6.6 ± 5% with respect to the same RTOS running on an instruction set simulator, while simulation speed increases by a factor of ten. Owing to the integration with a scripting language, simulation management is simplified and experimental setup effort is considerably reduced.
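For readers unfamiliar with reflection, the sketch below illustrates the underlying idea in plain Python: a generic proxy exposes and manipulates a component's internal state without touching its source. All class and method names are invented for illustration; this is not ReSP's actual API.

```python
# Minimal sketch of reflective observability (hypothetical names, not ReSP's
# API): a Python proxy exposes a component's attributes for inspection and
# control without modifying the component's own code.

class CacheModel:
    """Stand-in for a SystemC component; its source is never edited."""
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def access(self, address, present):
        if present:
            self.hits += 1
        else:
            self.misses += 1


class ReflectiveProxy:
    """Wraps any component and exposes its internal state by introspection."""
    def __init__(self, component):
        self._component = component

    def state(self):
        # Reflection: enumerate the wrapped object's attributes at runtime.
        return {name: value for name, value in vars(self._component).items()}

    def poke(self, name, value):
        # Fine-grained control: overwrite internal state mid-simulation.
        setattr(self._component, name, value)


cache = ReflectiveProxy(CacheModel())
cache._component.access(0x1000, present=False)
cache._component.access(0x1000, present=True)
print(cache.state())        # {'hits': 1, 'misses': 1}
cache.poke("misses", 0)     # reset a counter without touching CacheModel
```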
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2010
Giovanni Beltrame; Luca Fossati; Donatella Sciuto
This paper presents an efficient technique to perform design space exploration of a multiprocessor platform that minimizes the number of simulations needed to identify a Pareto curve with metrics like energy and delay. Instead of using semi-random search algorithms (such as simulated annealing, tabu search, or genetic algorithms), we use the domain knowledge derived from the platform architecture to set up the exploration as a discrete-space Markov decision process. The system walks the design space, changing its parameters and performing simulations only when probabilistic information becomes insufficient for a decision. A learning algorithm updates the probabilities of decision outcomes as simulations are performed. The proposed technique has been tested with two industrial multimedia applications, namely the ffmpeg transcoder and the parallel pigz compression algorithm. Results show that the exploration can be performed with 5% of the simulations required by the most commonly used algorithms (Pareto simulated annealing, nondominated sorting genetic algorithm, etc.), increasing exploration speed by more than one order of magnitude.
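As a rough illustration of the idea (not the paper's actual formulation), the toy Python sketch below walks a two-parameter design space, keeps per-move success statistics, simulates only while a move's statistics are still uncertain, and skips moves the statistics already mark as unpromising. The design space, cost function, and thresholds are all invented.

```python
import random

# Toy discrete design space: indices into frequency and cache-size tables.
# The "simulator" is a stand-in cost function (energy x delay), not the
# paper's multiprocessor platform; all names here are illustrative.
FREQS = [200, 400, 800, 1600]          # MHz
CACHES = [16, 64, 256]                 # KB
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def simulate(design):
    f, c = design
    energy = FREQS[f] * 0.01 + CACHES[c] * 0.02
    delay = 100.0 / FREQS[f] + 10.0 / CACHES[c]
    return energy * delay

def clamp(design):
    f, c = design
    return (min(max(f, 0), len(FREQS) - 1), min(max(c, 0), len(CACHES) - 1))

def explore(start, steps=40, min_trials=4):
    stats = {m: [1, 2] for m in MOVES}       # [improvements, trials] per move
    design, cost, sims = start, simulate(start), 1
    for _ in range(steps):
        move = random.choice(MOVES)
        nxt = clamp((design[0] + move[0], design[1] + move[1]))
        if nxt == design:
            continue
        wins, trials = stats[move]
        if trials < min_trials:              # not enough evidence: simulate
            new_cost = simulate(nxt)
            sims += 1
            stats[move][1] += 1
            if new_cost < cost:
                stats[move][0] += 1
                design, cost = nxt, new_cost
        elif wins / trials > 0.5:            # learned to be a good direction
            design, cost = nxt, simulate(nxt)
            sims += 1
        # otherwise: move predicted to be bad, skipped without simulating
    return design, cost, sims

print(explore((0, 0)))
```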
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2007
Giovanni Beltrame; Donatella Sciuto; Cristina Silvano
This paper introduces a modeling and simulation technique that extends transaction-level modeling (TLM) to support multi-accuracy models and power estimation. This approach provides different combinations of power and performance models, and allows switching model accuracy during simulation, letting the designer trade off simulation accuracy against speed at runtime. This is particularly useful during the exploration phase of a design, when the designer changes the features or parameters of the design while trying to satisfy its constraints. Usually, only limited portions of a system are affected by a single parameter change, and it is therefore possible to fast-simulate the uninteresting sections of the application. In particular, we show how to extend TLM and modify the SystemC kernel to support multi-accuracy features. The proposed methodology has been tested on several benchmarks, among which is an MPEG4 encoder, showing that simulation speed can be increased by one order of magnitude. On the same benchmarks, we also show how to choose the optimal performance-simulation accuracy for a given power model, maximizing simulation speed for the desired accuracy.
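The sketch below conveys the accuracy-switching idea in plain Python, with invented latency and energy numbers: one bus model carries both a loosely timed and a detailed timing/power model, and the simulation swaps between them at runtime. It is a conceptual sketch, not the paper's SystemC implementation.

```python
# Hedged sketch of runtime accuracy switching (illustrative names only):
# one component offers two timing/power models and can swap them while the
# "simulation" is running, trading accuracy for speed on uninteresting phases.

class BusModel:
    def __init__(self):
        self.mode = "fast"
        self.time_ns = 0.0
        self.energy_nj = 0.0

    def transfer(self, n_bytes):
        if self.mode == "fast":
            # Loosely timed: one flat latency per transaction, coarse power.
            self.time_ns += 10.0
            self.energy_nj += 0.5
        else:
            # Accurate: per-beat timing and per-byte switching energy.
            beats = (n_bytes + 3) // 4
            self.time_ns += 2.0 + 1.5 * beats
            self.energy_nj += 0.02 * n_bytes + 0.1 * beats

bus = BusModel()
for _ in range(1000):
    bus.transfer(64)            # uninteresting warm-up: fast mode
bus.mode = "accurate"           # switch accuracy at runtime
for _ in range(10):
    bus.transfer(64)            # region of interest: detailed model
print(f"{bus.time_ns:.1f} ns, {bus.energy_nj:.1f} nJ")
```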
Asia and South Pacific Design Automation Conference | 2008
Giovanni Beltrame; Luca Fossati; Antonio Miele; Donatella Sciuto
This paper presents ReSP (Reflective Simulation Platform), a transaction-level multiprocessor simulation platform based on SystemC and Python: SystemC is a standard language for system modeling and verification, and Python provides the platform with reflective capabilities. These are employed to give the designer an easy way to specify the architecture of a system, simulate the given configuration, and perform automatic analysis on it. ReSP enables SystemC and Python interoperability through automatic Python wrapper generation. We show that the overhead associated with the Python intermediate layer is around 1%, so execution speed is not compromised. The advantages of our approach are (a) easy integration of external IPs, (b) fine-grained control of the simulation, and (c) effortless integration of tools for system analysis and design space exploration. A case study shows how the platform can be extended to support system reliability assessment.
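The wrapper-generation idea can be conveyed with a short Python sketch: discover a component's public methods by introspection and generate a tracing wrapper around each, with no change to the component itself. ReSP does this across the C++/Python boundary for SystemC classes; the stand-alone example below, with invented names, stays entirely in Python.

```python
import inspect

# Toy illustration of automatic wrapper generation (not ReSP's actual
# mechanism): build a logging wrapper for any component by discovering its
# public methods at runtime.

class DmaEngine:
    def configure(self, src, dst, length):
        return f"DMA {src:#x}->{dst:#x} ({length} B)"

    def start(self):
        return "DMA started"

def make_wrapper(component):
    class Wrapper:
        pass
    for name, method in inspect.getmembers(component, callable):
        if name.startswith("_"):
            continue
        def logged(*args, _m=method, _n=name, **kwargs):
            result = _m(*args, **kwargs)
            print(f"[trace] {_n}{args} -> {result!r}")
            return result
        setattr(Wrapper, name, staticmethod(logged))
    return Wrapper()

dma = make_wrapper(DmaEngine())
dma.configure(0x1000, 0x2000, 256)
dma.start()
```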
adaptive hardware and systems | 2013
Jacopo Panerati; Filippo Sironi; Matteo Carminati; Martina Maggio; Giovanni Beltrame; Piotr J. Gmytrasiewicz; Donatella Sciuto; Marco D. Santambrogio
Autonomic computing was proposed as a promising solution to overcome the complexity of modern systems, which is causing management operations to become increasingly difficult for human beings. This work proposes the Adaptation Manager, a comprehensive framework to implement autonomic managers capable of pursuing some of the objectives of autonomic computing (i.e., self-optimization and self-healing). The Adaptation Manager features an active performance monitoring infrastructure and two dynamic knobs to tune the scheduling decisions of the operating system and the working frequency of the cores. The Adaptation Manager exploits artificial intelligence and reinforcement learning to close the Monitor-Analyze-Plan-Execute with Knowledge (MAPE-K) adaptation loop at the very base of every autonomic manager. We evaluate the Adaptation Manager, and especially the adaptation policies it learns by means of reinforcement learning, using a set of representative applications for multicore processors, and show the effectiveness of our prototype on commodity computing systems.
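As a hedged illustration of closing such a loop with reinforcement learning, the Python sketch below runs tabular Q-learning over a single frequency knob against a stand-in system model; states, rewards, and the power model are all invented and far simpler than the Adaptation Manager's.

```python
import random

# Sketch of the MAPE-K idea with tabular Q-learning (hypothetical numbers):
# monitor a performance level, act by picking a core frequency, and learn a
# policy that meets a performance goal while penalizing power.

STATES = ["below_goal", "at_goal"]
ACTIONS = [800, 1600, 2400]            # candidate core frequencies, MHz
GOAL = 100.0                           # target throughput (arbitrary units)

def environment(freq):
    """Stand-in for the monitored system: throughput grows with frequency."""
    throughput = 0.06 * freq + random.uniform(-5, 5)
    power = (freq / 800.0) ** 2        # toy quadratic power model
    return throughput, power

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
state, alpha, epsilon = "below_goal", 0.3, 0.1

for step in range(2000):
    # Plan/Execute: epsilon-greedy action selection over the frequency knob.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    throughput, power = environment(action)                  # Monitor
    reward = (1.0 if throughput >= GOAL else -1.0) - 0.1 * power
    nxt = "at_goal" if throughput >= GOAL else "below_goal"  # Analyze
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + 0.9 * best_next
                                   - q[(state, action)])
    state = nxt

print({a: round(q[("below_goal", a)], 2) for a in ACTIONS})
```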
Great Lakes Symposium on VLSI | 2009
Giovanni Beltrame; Antonio Miele
Fault modeling is a fundamental element for several activities, ranging from off-line and on-line testing to fault tolerance and dependability-aware design. These activities are carried out during various design phases, dealing with specifications at different abstraction levels. Therefore, modeling faults across abstraction levels is of paramount importance to introduce dependability-related issues from the early phases of design. This paper analyzes how faults can be modeled at the different levels of abstraction within transaction-level models, and how these models are related across levels. The work focuses on soft errors and aims at providing support for dependability analysis. A case study of a transaction-level specification of a Network-on-Chip switch is used to evaluate the methodology and its applicability.
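A minimal example of the kind of soft-error model discussed here, written as a Python sketch with invented names rather than the paper's TLM framework: a single-event upset is injected as one bit flip in a transaction payload, and a parity tag lets the receiving switch detect it.

```python
import random

# Minimal bit-flip injection sketch (illustrative only): model a soft error
# as a single-bit upset in the payload of an in-flight transaction and check
# whether the receiver can detect it.

def parity(data: bytes) -> int:
    p = 0
    for byte in data:
        p ^= bin(byte).count("1") & 1
    return p

def inject_bit_flip(data: bytes) -> bytes:
    corrupted = bytearray(data)
    i = random.randrange(len(corrupted))
    bit = random.randrange(8)
    corrupted[i] ^= 1 << bit          # single-event upset on one bit
    return bytes(corrupted)

payload = b"NoC flit payload"
tag = parity(payload)                 # checksum computed at the sender
faulty = inject_bit_flip(payload)     # soft error during transmission
detected = parity(faulty) != tag      # check at the receiving switch
print(f"fault detected: {detected}")  # single-bit flips always flip parity
```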
Intelligent Robots and Systems | 2016
Carlo Pinciroli; Giovanni Beltrame
We present Buzz, a novel programming language for heterogeneous robot swarms. Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm. Single-robot primitives include robot-specific instructions and manipulation of neighborhood data. Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as the Robot Operating System. We showcase the capabilities of Buzz by providing code examples, and analyze scalability and robustness of the run-time platform through realistic simulated experiments with representative swarm algorithms.
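Buzz has its own syntax and runtime, so the Python sketch below only mimics two of its core ideas, neighborhood-relative data and dynamic swarm membership, to convey the programming model; all names and values are illustrative.

```python
# Buzz itself is a separate language; this Python sketch only imitates two
# of its concepts, neighbor queries and swarm (team) creation, with
# invented names and values.

class Robot:
    def __init__(self, rid, position, battery):
        self.rid, self.position, self.battery = rid, position, battery
        self.swarms = set()

    def neighbors(self, robots, radius=5.0):
        """Robots within communication range, as each robot perceives them."""
        return [r for r in robots
                if r.rid != self.rid
                and abs(r.position - self.position) <= radius]

robots = [Robot(i, position=float(i), battery=20 + 10 * i) for i in range(6)]

# Swarm-level primitive: form a team from a predicate, in the spirit of
# Buzz's swarm creation, then assign it a collective task.
chargers = [r for r in robots if r.battery < 40]
for r in chargers:
    r.swarms.add("go_charge")

# Robot-level primitive: each robot aggregates a value over its
# neighborhood, the kind of reduction Buzz expresses over neighbor data.
for r in robots:
    nbrs = r.neighbors(robots)
    avg = sum(n.battery for n in nbrs) / len(nbrs) if nbrs else r.battery
    print(f"robot {r.rid}: neighborhood avg battery = {avg:.1f}")
```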
Design, Automation, and Test in Europe | 2014
Alain Fourmigue; Giovanni Beltrame; Gabriela Nicolescu
Three-dimensional integrated circuits (3D ICs) with advanced cooling systems are emerging as a viable solution for many-core platforms. These architectures generate a high and rapidly changing thermal flux, so their design requires accurate transient thermal models. Several models have been proposed, either with limited capabilities or with poor simulation performance. This work introduces an efficient algorithm based on the finite difference method to compute the transient temperature in liquid-cooled 3D ICs. Our experiments show a 5x speedup versus state-of-the-art models while maintaining the same level of accuracy, and demonstrate the effect of large through-silicon via (TSV) arrays on thermal dissipation.
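To make the numerical core concrete, here is a minimal Python sketch of the finite difference method for transient heat conduction, reduced to 1D with generic silicon constants; the paper's model is 3D and includes liquid cooling, which this toy omits.

```python
# Explicit finite-difference sketch of transient heat conduction in 1D.
# All constants are generic or invented; this is not the paper's solver.

N = 50                    # grid cells across the stack
dx = 1e-4                 # cell size [m]
dt = 1e-6                 # time step [s]
alpha = 8.8e-5            # thermal diffusivity of silicon [m^2/s]
assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability limit"

T = [300.0] * N           # initial temperature [K]
power_cell = N // 2       # a hotspot in the middle of the stack

for step in range(20000):
    T_new = T[:]
    for i in range(1, N - 1):
        # Explicit update: dT/dt = alpha * d2T/dx2 (+ source at the hotspot)
        T_new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    T_new[power_cell] += 0.05          # injected heat per step (arbitrary)
    T_new[0] = T_new[-1] = 300.0       # boundaries held at ambient
    T = T_new

print(f"peak temperature: {max(T):.1f} K")
```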
ACM Transactions on Design Automation of Electronic Systems | 2014
Jacopo Panerati; Giovanni Beltrame
This article presents a detailed overview and experimental comparison of 15 multi-objective design-space exploration (DSE) algorithms for high-level design. These algorithms are collected from recent literature and include heuristic, evolutionary, and statistical methods. To provide a fair comparison, the algorithms are classified according to the approach used and examined against a large set of metrics. In particular, the effectiveness of each algorithm was evaluated for the optimization of a multiprocessor platform, considering initial setup effort, rate of convergence, scalability, and quality of the resulting optimization. Our experiments are performed with statistical rigor, using a set of very diverse benchmark applications (a video converter, a parallel compression algorithm, and a fast Fourier transform algorithm) to take a large spectrum of realistic workloads into account. Our results provide insights into the effort required to apply each algorithm to a target design space, the number of simulations it requires, its accuracy, and its precision. These insights are used to draw guidelines for choosing DSE algorithms according to the type and size of the design space to be optimized.
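The common core of all such multi-objective DSE algorithms is Pareto dominance; the short Python sketch below shows the dominance test and front extraction on invented two-objective data (both objectives minimized, e.g. energy and delay).

```python
# Pareto dominance test and front extraction on toy data; both objectives
# are minimized, as for energy and delay in the comparison above.

def dominates(a, b):
    """a dominates b if it is no worse everywhere and better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

designs = [(3.0, 9.0), (4.0, 5.0), (6.0, 4.0), (7.0, 7.0), (9.0, 2.0)]
print(pareto_front(designs))  # (7.0, 7.0) is dominated by (6.0, 4.0)
```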
IEEE International Conference on Software Analysis, Evolution, and Reengineering | 2016
Rubén Saborido; Giovanni Beltrame; Foutse Khomh; Enrique Alba; Giuliano Antoniol
In this paper, we present a recommendation system aimed at helping users and developers alike. We help users choose optimal sets of applications belonging to different categories (e.g., browsers, e-mail clients, cameras) while minimizing energy consumption and transmitted data and maximizing application rating. We also help developers by showing the relative placement of their application's efficiency with respect to selected others. Once the optimal set of applications is computed, it is leveraged to position a given application with respect to the optimal, median, and worst applications in its category (e.g., browsers). From eight categories we selected 144 applications, manually defined typical execution scenarios, collected the relevant data, and computed the Pareto-optimal front by solving a multi-objective optimization problem. We report evidence that, on the one hand, ratings do not correlate with energy efficiency or data frugality. On the other hand, we show that it is possible to help developers understand how far a new Android application's power consumption and network usage are from those of the optimal applications in the same category. From the user perspective, we show that by choosing optimal sets of applications, power consumption and network usage can be reduced by 16.61% and 40.17%, respectively, compared to choosing the set of applications that maximizes only the rating.
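A sketch of the selection step with mixed objectives, using invented numbers: energy and network traffic are minimized while rating is maximized, so the rating is negated before applying the usual dominance test.

```python
# Pareto selection with mixed min/max objectives (all values invented):
# energy and traffic are minimized, rating is maximized via negation.

apps = {  # name: (energy mJ, traffic MB, rating)
    "browser_a": (120.0, 35.0, 4.5),
    "browser_b": (90.0, 20.0, 4.1),
    "browser_c": (150.0, 50.0, 4.6),
    "browser_d": (95.0, 22.0, 3.2),
}

def key(metrics):
    energy, traffic, rating = metrics
    return (energy, traffic, -rating)     # turn max-rating into min(-rating)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

optimal = [name for name, m in apps.items()
           if not any(dominates(key(o), key(m)) for o in apps.values())]
print(optimal)   # browser_d is dominated by browser_b (worse on everything)
```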