Claudio Alberti
École Normale Supérieure
Publications
Featured research published by Claudio Alberti.
Nature Methods | 2016
Ibrahim Numanagić; James K. Bonfield; Faraz Hach; Jan Voges; Jörn Ostermann; Claudio Alberti; Marco Mattavelli; S. Cenk Sahinalp
High-throughput sequencing (HTS) data are commonly stored as raw sequencing reads in FASTQ format or as reads mapped to a reference, in SAM format, both with large memory footprints. Worldwide growth of HTS data has prompted the development of compression methods that aim to significantly reduce HTS data size. Here we report on a benchmarking study of available compression methods on a comprehensive set of HTS data using an automated framework.
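To make the kind of automated benchmarking described above more concrete, the following Python sketch measures compression ratio and throughput for a single FASTQ file. It is only an illustration, not the paper's framework: gzip stands in for the specialized HTS compressors evaluated in the study, and the input path is a hypothetical placeholder.

```python
# Minimal sketch of an automated compression benchmark (not the paper's framework).
# gzip stands in for specialized HTS compressors; "reads.fastq" is a hypothetical input.
import gzip
import os
import time

def benchmark_gzip(fastq_path: str, level: int = 6) -> dict:
    """Compress one FASTQ file and report size ratio and timing."""
    out_path = fastq_path + ".gz"
    start = time.perf_counter()
    with open(fastq_path, "rb") as src, gzip.open(out_path, "wb", compresslevel=level) as dst:
        while chunk := src.read(1 << 20):   # stream in 1 MiB chunks
            dst.write(chunk)
    elapsed = time.perf_counter() - start
    original = os.path.getsize(fastq_path)
    compressed = os.path.getsize(out_path)
    return {
        "ratio": original / compressed,
        "seconds": elapsed,
        "MB_per_s": original / (1 << 20) / elapsed,
    }

if __name__ == "__main__":
    print(benchmark_gzip("reads.fastq"))  # hypothetical input file
```

A full benchmark would repeat this loop over many datasets and tools and also record decompression time and fidelity checks.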
Signal Processing Systems | 2013
S. Casale Brunet; Endri Bezati; Claudio Alberti; Marco Mattavelli; E. Amaldi; Jorn W. Janneck
In this paper we propose a design methodology for partitioning dataflow applications on a multi-clock-domain architecture. This work shows how, starting from a high-level dataflow representation of a dynamic program, it is possible to reduce the overall power consumption without impacting performance. Two different approaches are illustrated, both based on the post-processing and analysis of the causation trace of a dataflow program. Methodology and experimental results are demonstrated in an at-size scenario using an MPEG-4 Simple Profile decoder.
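As a hedged sketch of the general idea (not the paper's method): given per-actor firing counts extracted from a causation trace and a grouping of actors into clock domains, each domain is assigned the lowest clock frequency that still sustains the required throughput. The actor names, firing counts, per-firing cost, and frequency ladder below are invented for illustration.

```python
# Illustrative frequency assignment per clock domain (all numbers are invented).

# Firings observed in a causation trace, per actor, for one decoded frame.
firings_per_frame = {"parser": 1200, "idct": 800, "motion_comp": 950, "display": 60}

# A hand-made partition of actors into clock domains.
domains = {"front_end": ["parser"], "core": ["idct", "motion_comp"], "output": ["display"]}

available_mhz = [50, 100, 200, 400]      # discrete clock frequencies
target_fps = 30                          # throughput requirement
cycles_per_firing = 2000                 # assumed average cost of one firing

def lowest_feasible_clock(actors):
    """Pick the slowest clock that still meets the frame-rate target for a domain."""
    cycles_per_frame = sum(firings_per_frame[a] for a in actors) * cycles_per_firing
    required_hz = cycles_per_frame * target_fps
    for mhz in available_mhz:
        if mhz * 1e6 >= required_hz:
            return mhz
    return available_mhz[-1]             # saturate at the fastest available clock

for name, actors in domains.items():
    print(name, "->", lowest_feasible_clock(actors), "MHz")
```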
International Symposium on Parallel and Distributed Processing and Applications | 2013
Simone Casale Brunet; Claudio Alberti; Marco Mattavelli; Jorn W. Janneck
This paper presents a dataflow design methodology and an associated co-exploration environment, focusing on the optimization of buffer sizes. The approach is applicable to dynamic dataflow designs, and its performance is presented and validated by experimental results on the porting of an MPEG-4 Simple Profile decoder to the STM STHORM many-core platform. For the purpose of this work, the decoder has been written using the RVC-CAL dataflow language standardized by ISO/IEC. Starting from this high-level representation, it is demonstrated how the buffer size configuration can be optimized, based on a novel buffer size minimization algorithm suitable for a very general class of dataflow programs.
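The buffer-dimensioning problem can be sketched as a search over FIFO capacities: simulate the network with a candidate size, and grow it until execution completes without deadlock. The toy two-actor network and token counts below are assumptions for illustration, not the paper's minimization algorithm.

```python
# Hedged sketch: find the smallest FIFO capacity for which a toy two-actor
# network (producer emits 2 tokens per firing, consumer takes 3) finishes
# without deadlocking. Token counts are invented for illustration.
from collections import deque

TOKENS_TO_PRODUCE = 12

def runs_to_completion(capacity):
    """Fire actors round-robin until no one can fire; report success."""
    fifo, produced = deque(), 0
    progress = True
    while progress:
        progress = False
        if produced < TOKENS_TO_PRODUCE and len(fifo) + 2 <= capacity:  # producer
            fifo.extend((produced, produced + 1)); produced += 2; progress = True
        if len(fifo) >= 3:                                              # consumer
            for _ in range(3):
                fifo.popleft()
            progress = True
    return produced == TOKENS_TO_PRODUCE and not fifo

capacity = 1
while not runs_to_completion(capacity):
    capacity += 1          # grow the (single) buffer and retry
print("smallest deadlock-free capacity:", capacity)   # expected: 4
```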
Asilomar Conference on Signals, Systems and Computers | 2014
Khaled Jerbi; Daniele Renzi; Damien Jack De Saint Jorre; Hervé Yviquel; Mickaël Raulet; Claudio Alberti; Marco Mattavelli
With the standardization of the new High Efficiency Video Coding (HEVC) compression algorithm, a dataflow specification of the HEVC decoding process is also available as part of the standard. This paper presents methodologies to improve and optimize the performance of implementations derived from the dataflow specification. On the architectural side of the dataflow network, throughput has been increased by exposing more potential parallelism. On the platform side, critical processes have been optimized by applying SIMD functions, and communications have been improved by a cache-efficient FIFO implementation. Results show an average acceleration factor of 7 in the decoding framerate over the reference dataflow implementation.
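One ingredient of cache-efficient actor communication is moving tokens in blocks rather than one at a time, amortizing per-token overhead. The sketch below illustrates only that batching idea in Python (the SIMD aspect is not shown); the class name, block size, and token values are invented and do not reflect the paper's FIFO implementation.

```python
# Hedged sketch of a batched FIFO: producers and consumers exchange whole
# blocks of tokens instead of single tokens. All names and sizes are invented.
from collections import deque

class BlockFifo:
    """FIFO whose endpoints exchange whole blocks of tokens."""
    def __init__(self, block_size=64):
        self.block_size = block_size
        self.blocks = deque()
        self.partial = []                      # block currently being filled

    def push(self, token):
        self.partial.append(token)
        if len(self.partial) == self.block_size:
            self.blocks.append(self.partial)   # hand over a full block at once
            self.partial = []

    def pop_block(self):
        """Return one full block, or None if no complete block is ready."""
        return self.blocks.popleft() if self.blocks else None

fifo = BlockFifo(block_size=4)
for t in range(10):
    fifo.push(t)                               # tokens 8 and 9 stay in the partial block
while (block := fifo.pop_block()) is not None:
    print("consumed block:", block)
```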
Computational Intelligence, Communication Systems and Networks | 2013
Simone Casale Brunet; Marco Mattavelli; Claudio Alberti; Jorn W. Janneck
Heterogeneous parallel systems are becoming mainstream computing platforms. One of the main challenges the development community currently faces is how to fully exploit the available computational power when porting existing programs or developing new ones with the available techniques. In this direction, several design space exploration methods have been presented and extensively adopted. However, defining the feasible design space of a dynamic dataflow program still remains an open issue. This paper proposes a novel methodology for defining such a space through a serial execution. Homotopy-theoretic methods are used to demonstrate how the design space of a program can be reconstructed from its serial execution trajectory. Moreover, the concept of the dependency graph of a dataflow program defined in the literature is extended with two new kinds of dependencies, Guard Enable and Guard Disable, and with the 3-tuple notion needed to represent them.
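As a hedged illustration of how firing-level dependencies, including the guard-related ones, might be recorded as 3-tuples: the trace below, the actor names, and the exact encoding are invented and are not the paper's formalism.

```python
# Hedged sketch: recording firing-to-firing dependencies from a serial execution
# trace as 3-tuples (source_firing, target_firing, kind). Trace and encoding invented.
from collections import defaultdict

TOKEN, GUARD_ENABLE, GUARD_DISABLE = "token", "guard_enable", "guard_disable"

# Each firing is identified by (actor, firing index).
dependencies = [
    (("producer", 0), ("filter", 0), TOKEN),
    (("producer", 1), ("filter", 1), TOKEN),
    (("filter", 0), ("filter", 1), GUARD_ENABLE),    # firing 0 makes firing 1's guard true
    (("filter", 1), ("consumer", 0), TOKEN),
    (("consumer", 0), ("filter", 2), GUARD_DISABLE), # firing disables a competing action
]

# Build an adjacency view of the dependency graph.
graph = defaultdict(list)
for src, dst, kind in dependencies:
    graph[src].append((dst, kind))

for src, edges in graph.items():
    for dst, kind in edges:
        print(f"{src} --{kind}--> {dst}")
```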
International Conference on Image Processing | 2014
Damien Jack De Saint Jorre; Claudio Alberti; Marco Mattavelli; Simone Casale-Brunet
MPEG High Efficiency Video Coding (HEVC) is likely to emerge as the video coding standard for HD and Ultra-HD TV resolutions. The two elements that push HEVC beyond the previous standards are a higher compression efficiency, of about a factor of two, and the introduction of new coding tools, tiles and wavefront, intended to ease the largely increased coding complexity, particularly for Ultra-HD resolutions such as 4K and 8K. However, for HEVC decoder implementations, achieving the desired performance on massively parallel platforms cannot rely on the use of such optional tools (they are not enforced by MPEG profiles). This paper reports results on the intrinsic parallelism of compliant HEVC decoding algorithms, obtained by analyzing a dataflow implementation written in the standard language specified in ISO/IEC 23001-4 and structured so as to maximize the algorithmic potential parallelism. The experimental results show the parallelism achieved by different dataflow architectures and how it can be further combined with the parallelism obtained from tiles and wavefront, whenever available, when porting a compliant HEVC decoder to massively parallel many-core platforms.
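One common way to quantify the intrinsic parallelism of a firing dependency graph is the ratio of total work to critical-path length. The sketch below computes that ratio on a tiny invented graph with unit firing costs; it is not the paper's analysis tooling.

```python
# Hedged sketch: average parallelism = total firings / critical-path length
# of a firing dependency DAG (toy graph, unit cost per firing). Python 3.9+.
from graphlib import TopologicalSorter

# Edges point from a firing to the firings that depend on it.
edges = {
    "parse_0": ["intra_0", "inter_0"],
    "intra_0": ["filter_0"],
    "inter_0": ["filter_0"],
    "filter_0": ["output_0"],
}
nodes = set(edges) | {n for succ in edges.values() for n in succ}

predecessors = {n: [] for n in nodes}
for src, succs in edges.items():
    for dst in succs:
        predecessors[dst].append(src)

# Longest path through the DAG, processed in topological order.
longest = {}
for node in TopologicalSorter(predecessors).static_order():
    longest[node] = 1 + max((longest[p] for p in predecessors[node]), default=0)

critical_path = max(longest.values())
print("total firings:", len(nodes))
print("critical path:", critical_path)
print("average parallelism:", len(nodes) / critical_path)
```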
International Conference on Acoustics, Speech, and Signal Processing | 2014
Ab Al Hadi Ab Rahman; Simone Casale-Brunet; Claudio Alberti; Marco Mattavelli
Minimizing the buffer sizes of dynamic dataflow implementations without introducing deadlocks or reducing the design performance is in general an important and useful design objective. Indeed, buffer sizes that are too small may cause a system to deadlock during execution, while unnecessarily large sizes lead to a resource-inefficient design; neither is a desirable design option. This paper presents an implementation, validation, and comparison of several buffer size optimization techniques for the generic class of dynamic dataflow models of computation called dataflow process networks. The paper presents a heuristic capable of finding a close-to-minimum buffer size configuration for deadlock-free execution, and a methodology to efficiently explore different configurations for feasible design alternatives. The approach is demonstrated using, as an experimental design case, an MPEG-4 AVC/H.264 decoder implemented on an FPGA.
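A different hedged sketch of close-to-minimum buffer sizing: start from a generously sized configuration that is known to work and greedily shrink one edge at a time, keeping each reduction only if a simulated execution still completes. The toy three-actor pipeline, its firing rules, and the starting sizes are invented; this is not the paper's heuristic.

```python
# Hedged sketch of greedy buffer shrinking on a toy pipeline a -> b -> c,
# where b consumes and forwards tokens in bursts of 2. Everything is invented.
from collections import deque

EDGES = ["a->b", "b->c"]
TOKENS = 8

def completes(caps):
    """Round-robin simulation; True if all tokens reach actor c without deadlock."""
    q = {e: deque() for e in EDGES}
    sent = recv = 0
    progress = True
    while progress:
        progress = False
        if sent < TOKENS and len(q["a->b"]) < caps["a->b"]:                # actor a
            q["a->b"].append(sent); sent += 1; progress = True
        if len(q["a->b"]) >= 2 and len(q["b->c"]) + 2 <= caps["b->c"]:     # actor b
            q["b->c"].extend(q["a->b"].popleft() for _ in range(2)); progress = True
        if q["b->c"]:                                                      # actor c
            q["b->c"].popleft(); recv += 1; progress = True
    return recv == TOKENS

caps = {e: 16 for e in EDGES}             # generously sized, known-working start
for e in EDGES:
    while caps[e] > 1:
        caps[e] -= 1
        if not completes(caps):
            caps[e] += 1                   # undo the shrink that broke the design
            break

print("buffer sizes after greedy shrinking:", caps)   # expected: {'a->b': 2, 'b->c': 2}
```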
Asilomar Conference on Signals, Systems and Computers | 2013
S. Casale Brunet; Endri Bezati; Claudio Alberti; Marco Mattavelli; E. Amaldi; Jorn W. Janneck
This paper proposes a new design methodology for partitioning streaming applications onto a multi-clock-domain architecture. The objective is to save power by running different parts of the application at the lowest possible clock frequency that does not violate the throughput requirements. The solution involves partitioning the application into an appropriate number of clock domains and then assigning each of those domains a clock frequency. Two different approaches are illustrated, both based on the post-processing and analysis of the causation trace of a dataflow program. Methodology and initial experimental results are demonstrated in an at-size scenario using an MPEG-4 Simple Profile decoder implemented on an FPGA platform.
Electronic Imaging | 2004
Artur Lugmayr; Abdellatif Benjelloun Touimi; Itaru Kaneko; Jong-Nam Kim; Claudio Alberti; Sadigurschi Yona; Jaejoon Kim; Maria Teresa Andrade; Seppo Kalli
The MPEG experts are currently developing the MPEG-21 set of standards, which includes a framework and specifications for digital rights management (DRM), delivery of quality of service (QoS) over heterogeneous networks and terminals, packaging of multimedia content, and other elements essential to the infrastructural aspects of multimedia content distribution. Considerable research effort is being applied to these new developments, and the capability of MPEG-21 technologies to address specific application areas is being investigated. One such application area is broadcasting, in particular the development of digital TV and its services. In more practical terms, digital TV addresses networking, events, channels, services, programs, signaling, encoding, bandwidth, conditional access, subscription, advertisements and interactivity. MPEG-21 provides an excellent framework of standards to be applied in digital TV applications. Within the scope of this research work we describe a new model based on MPEG-21 and its relevance to digital TV: the digital broadcast item model (DBIM). The goal of the DBIM is to elaborate the potential of MPEG-21 for digital TV applications. Within this paper we focus on a general description of the DBIM, quality of service (QoS) management and metadata filtering, and digital rights management, and we also present use cases and scenarios where the DBIM's role is explored in detail.
Signal Processing Systems | 2017
Khaled Jerbi; Hervé Yviquel; Alexandre Sanchez; Daniele Renzi; Damien Jack De Saint Jorre; Claudio Alberti; Marco Mattavelli; Mickaël Raulet
With the emergence of the High Efficiency Video Coding (HEVC) standard, a dataflow description of the decoder was developed as part of the MPEG-B standard. This dataflow description achieved only modest framerates, which led us to propose methodologies to improve its performance. In this paper, we introduce architectural improvements that expose more parallelism through YUV- and frame-based parallel decoding. We also present platform optimizations based on the use of SIMD functions and cache-efficient FIFOs. Results show an average acceleration factor of 5.8 in the decoding framerate over the reference architecture.
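As a hedged illustration of the YUV (plane-level) task decomposition only: the three planes of a frame are handed to a thread pool because their per-plane work is independent here. The per-plane function, frame contents, and sizes are invented placeholders, and Python threads merely illustrate the decomposition rather than the decoder's actual native-code parallelism.

```python
# Hedged sketch of YUV plane-level parallelism: each plane of a frame is
# processed concurrently. "decode_plane" and the frame data are placeholders.
from concurrent.futures import ThreadPoolExecutor

def decode_plane(name, samples):
    """Placeholder per-plane work, e.g. an in-loop filter over the samples."""
    return name, [min(255, s + 1) for s in samples]

frame = {
    "Y": [16] * 64,      # luma plane (toy size)
    "U": [128] * 16,     # chroma planes are subsampled
    "V": [128] * 16,
}

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(decode_plane, name, plane) for name, plane in frame.items()]
    results = dict(f.result() for f in futures)

print({name: len(samples) for name, samples in results.items()})
```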