Marco Mattavelli
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Marco Mattavelli.
ACM SIGARCH Computer Architecture News | 2008
Shuvra S. Bhattacharyya; Gordon J. Brebner; Jorn W. Janneck; Johan Eker; Carl Von Platen; Marco Mattavelli; Mickaël Raulet
This paper presents the OpenDF framework and recalls that dataflow programming was originally invented to address the problem of parallel computing. We discuss the problems of an imperative, von Neumann programming style and present what we believe are the advantages of using a dataflow programming model. The CAL actor language is briefly presented and its role in the ISO/MPEG standard is discussed. The Dataflow Interchange Format (DIF) and related tools can be used for analysis of actors and networks, demonstrating the advantages of a dataflow approach. Finally, an overview of a case study implementing an MPEG-4 decoder is given.
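To make the dataflow style concrete, below is a minimal Python sketch of a small actor network in the spirit of CAL: each actor fires only when its inputs hold enough tokens, so actors are independent of one another and can in principle execute in parallel. The actor names, token queues and round-robin scheduler are illustrative assumptions, not part of the OpenDF framework or the CAL language itself.

```python
from collections import deque

# Minimal dataflow sketch in the spirit of CAL actors: an actor fires only
# when its firing rule (enough input tokens) is satisfied. Actor names and
# the naive round-robin scheduler are illustrative assumptions.

class Actor:
    def __init__(self, inputs, outputs):
        self.inputs = inputs      # deques this actor reads tokens from
        self.outputs = outputs    # deques this actor writes tokens to

    def can_fire(self):
        return all(len(q) > 0 for q in self.inputs)

    def fire(self):
        raise NotImplementedError

class Scale(Actor):
    """Multiply every incoming token by a constant factor."""
    def __init__(self, inputs, outputs, factor):
        super().__init__(inputs, outputs)
        self.factor = factor

    def fire(self):
        token = self.inputs[0].popleft()
        self.outputs[0].append(token * self.factor)

class Add(Actor):
    """Consume one token from each of two inputs and emit their sum."""
    def fire(self):
        a = self.inputs[0].popleft()
        b = self.inputs[1].popleft()
        self.outputs[0].append(a + b)

def run(actors, max_steps=1000):
    """Naive round-robin scheduler: fire any actor whose rule holds."""
    for _ in range(max_steps):
        fired = False
        for actor in actors:
            if actor.can_fire():
                actor.fire()
                fired = True
        if not fired:
            break

# Tiny network: two scaled streams merged by an adder.
src_a, src_b = deque([1, 2, 3]), deque([10, 20, 30])
mid_a, mid_b, sink = deque(), deque(), deque()
actors = [Scale([src_a], [mid_a], 2), Scale([src_b], [mid_b], 3),
          Add([mid_a, mid_b], [sink])]
run(actors)
print(list(sink))  # [32, 64, 96]
```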
IEEE Signal Processing Magazine | 2010
Marco Mattavelli; Ihab Amer; Mickaël Raulet
More than two decades of research in digital video technologies, together with the emergence of successful international standards for digital video compression, have led to a wide variety of digital video products using video compression for professional and consumer applications. Although many of these video compression standards share common and/or similar coding tools, there is currently no explicit way to exploit such commonalities at the level of the specifications nor at the level of implementations. Moreover, taking advantage of the continuous improvements of coding technology is only possible by replacing an old standard with a new one. This usually results in the replacement of existing multimedia devices with new ones capable of handling the newly deployed standards. Such a necessity is not always well accepted by the public and professionals, for obvious reasons.
Signal Processing: Image Communication | 2013
Simone Casale-Brunet; Abdallah Elguindy; Endri Bezati; Richard Thavot; Ghislain Roquier; Marco Mattavelli; Jorn W. Janneck
The recent MPEG Reconfigurable Media Coding (RMC) standard aims at defining media processing specifications (e.g. video codecs) in a form that abstracts from the implementation platform, but at the same time is an appropriate starting point for implementation on specific targets. To this end, the RMC framework has standardized both an asynchronous dataflow model of computation and an associated specification language, which together provide the formalism and the theoretical foundation for multimedia specifications. Even though these specifications are abstract and platform-independent, the approach of developing implementations from such initial specifications presents obvious advantages over approaches based on classical sequential specifications. The advantages are particularly appealing when targeting current and emerging homogeneous and heterogeneous multicore or manycore processing platforms. These highly parallel computing machines are gradually replacing single-core processors, particularly when the system design aims at reducing power dissipation or at increasing throughput. However, a straightforward mapping of an abstract dataflow specification onto a concurrent and heterogeneous platform often does not produce an efficient result. Before an abstract specification can be translated into an efficient implementation in software and hardware, the dataflow networks need to be partitioned and then mapped to individual processing elements, and system performance requirements need to be accounted for in the design optimization process. This paper discusses the state of the art of the combinatorial problems that need to be faced at this design space exploration step, and illustrates some recent developments and experimental results for image and video coding applications. Both well-known and novel heuristics for problems such as mapping, scheduling and buffer minimization are investigated in the specific context of exploring the design space of dataflow program implementations.
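As one concrete instance of the mapping problem mentioned above, the following sketch shows a simple greedy load-balancing heuristic that assigns dataflow actors to processing elements. The actor names, workload estimates and the longest-processing-time rule are illustrative assumptions and do not correspond to the specific heuristics evaluated in the paper.

```python
import heapq

# Hypothetical example: greedy longest-processing-time (LPT) mapping of
# dataflow actors onto processing elements, balancing estimated workload.
# Actor names and workload figures are illustrative assumptions only.

def map_actors(actor_load, num_pes):
    """Assign each actor to the currently least-loaded processing element."""
    # Min-heap of (current_load, pe_index) over the processing elements.
    pes = [(0.0, pe) for pe in range(num_pes)]
    heapq.heapify(pes)
    mapping = {}
    # Place heavier actors first (classic LPT ordering).
    for actor, load in sorted(actor_load.items(), key=lambda kv: -kv[1]):
        pe_load, pe = heapq.heappop(pes)
        mapping[actor] = pe
        heapq.heappush(pes, (pe_load + load, pe))
    return mapping

# Estimated per-actor workloads (arbitrary units, purely illustrative).
loads = {"parser": 5.0, "idct": 9.0, "motion_comp": 7.0,
         "deblock": 4.0, "merge": 2.0}
print(map_actors(loads, num_pes=2))
```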
Signal Processing Systems | 2007
Christophe Lucarz; Marco Mattavelli; Joseph Thomas-Kerr; Jorn W. Janneck
Multimedia coding technology, after about 20 years of active research, has delivered a rich variety of different and complex coding algorithms. Selecting an appropriate subset of these algorithms would, in principle, enable a designer to produce a codec supporting any desired functionality as well as any desired trade-off between compression performance and implementation complexity. Currently, interoperability demands that this selection process be hard-wired into the normative descriptions of the codec or, at a lower level, into a predefined number of choices, known as profiles, codified within each standard specification. This paper presents an alternative paradigm for codec deployment that is currently under development by MPEG, known as Reconfigurable Media Coding (RMC). Using the RMC framework, arbitrary combinations of fundamental algorithms may be assembled, without predefined standardization, because everything necessary for specifying the decoding process is delivered alongside the content itself. This side information consists of a description of the bitstream syntax, as well as a description of the decoder configuration. The decoder configuration is provided as a description of the interconnections between algorithmic blocks. The approach has been validated by developing an RMC format that matches MPEG-4 Video and then extending it with new chroma-subsampling patterns.
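To give a rough idea of what "a description of the interconnections between algorithmic blocks" can look like, the sketch below models such a decoder configuration as plain Python data and checks its consistency. The block and port names are hypothetical, and the normative RMC descriptions use dedicated, standardized formats rather than this ad hoc structure.

```python
# Hypothetical sketch: a decoder configuration expressed as a set of
# functional blocks plus their interconnections, in the spirit of the
# RMC idea of shipping the decoder description alongside the content.
# Block and port names are illustrative assumptions, not normative names.

decoder_config = {
    "blocks": {
        "syntax_parser": "ParserFU",
        "inverse_quant": "InverseQuantFU",
        "inverse_transform": "IDCT2dFU",
        "motion_comp": "MotionCompensationFU",
        "reconstruct": "AdderFU",
    },
    "connections": [
        ("syntax_parser.coeffs", "inverse_quant.in"),
        ("inverse_quant.out", "inverse_transform.in"),
        ("syntax_parser.motion_vectors", "motion_comp.mv"),
        ("inverse_transform.out", "reconstruct.residual"),
        ("motion_comp.out", "reconstruct.prediction"),
    ],
}

def check_connections(config):
    """Verify that every connection endpoint refers to a declared block."""
    for src, dst in config["connections"]:
        for endpoint in (src, dst):
            block = endpoint.split(".")[0]
            if block not in config["blocks"]:
                raise ValueError(f"unknown block in connection: {endpoint}")
    return True

print(check_connections(decoder_config))  # True
```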
IEEE Signal Processing Magazine | 2009
Ihab Amer; Christophe Lucarz; Ghislain Roquier; Marco Mattavelli; Mickaël Raulet; Jean François Nezan; Olivier Déforges
This article provides an overview of the main objectives of the new RVC standard, with an emphasis on the features that enable efficient implementation on platforms with multiple cores. A brief introduction to the methodologies that efficiently map RVC codec specifications to multicore platforms is accompanied by an example of the possible breakthroughs that are expected to occur in the design and deployment of multimedia services on multicore platforms.
Discrete Applied Mathematics | 2002
Edoardo Amaldi; Marco Mattavelli
We consider a new combinatorial optimization problem related to linear systems (MIN PFS) that consists, given an infeasible system, in finding a partition into a minimum number of feasible subsystems. MIN PFS allows formalization of the fundamental problem of piecewise linear model estimation, which is an attractive alternative when modeling a wide range of nonlinear phenomena. Since MIN PFS turns out to be NP-hard to approximate within any factor smaller than 3/2, and since we are mainly interested in real-time applications, we propose a greedy strategy based on randomized and thermal variants of the classical Agmon-Motzkin-Schoenberg relaxation method for solving systems of linear inequalities. Our method provides good approximate solutions in a short amount of time. The potential of our approach and the performance of our algorithm are demonstrated on two challenging problems from image and signal processing: detecting line segments in digital images, and modeling time series using piecewise linear autoregressive models. In both cases the MIN PFS-based approach presents various advantages over conventional alternatives, including a wider range of applicability, lower computational requirements and no need for a priori assumptions about the underlying structure of the data.
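For background on the relaxation method named above, the sketch below implements the classical cyclic Agmon-Motzkin-Schoenberg projection step for a system of linear inequalities Ax >= b. The randomized and thermal variants proposed in the paper change how violated inequalities are selected and how the step size evolves; the example system here is purely illustrative.

```python
import numpy as np

# Sketch of the classical Agmon-Motzkin-Schoenberg relaxation for a
# feasible system of linear inequalities A x >= b (cyclic variant).
# The randomized and "thermal" variants used in the paper modify how
# violated constraints are selected and how the step size decays.

def ams_relaxation(A, b, lam=1.0, max_sweeps=1000, tol=1e-9):
    x = np.zeros(A.shape[1])
    for _ in range(max_sweeps):
        max_violation = 0.0
        for a_i, b_i in zip(A, b):
            violation = b_i - a_i @ x
            if violation > tol:
                # Move x toward the half-space {y : a_i . y >= b_i}.
                x = x + lam * violation / (a_i @ a_i) * a_i
                max_violation = max(max_violation, violation)
        if max_violation <= tol:
            break
    return x

# Illustrative feasible system: x0 >= 1, x1 >= 2, x0 + x1 >= 4.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 4.0])
x = ams_relaxation(A, b)
print(x, np.all(A @ x >= b - 1e-6))  # a feasible point, True
```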
IEEE Transactions on Audio, Speech, and Language Processing | 2008
Ruohua Zhou; Marco Mattavelli; Giorgio Zoia
This paper describes a new method for music onset detection. The novelty of the approach lies mainly in two elements: the time-frequency processing and the detection stages. The resonator time-frequency image (RTFI) is the basic time-frequency analysis tool. The time-frequency processing part is in charge of transforming the RTFI energy spectrum into more natural energy-change and pitch-change cues, which are then used as input to the onset detection stage. Two detection algorithms have been developed: an energy-based algorithm and a pitch-based one. The energy-based detection algorithm exploits energy-change cues and performs particularly well for the detection of hard onsets. The pitch-based algorithm successfully exploits stable pitch cues for onset detection in polyphonic music, and achieves much better performance than the energy-based algorithm when applied to the detection of soft onsets. Results for both the energy-based and pitch-based detection algorithms have been obtained on a large music dataset.
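As a rough illustration of an energy-change cue, the sketch below frames a signal, measures the positive change in log energy between consecutive frames, and reports frames where that change exceeds a threshold. It uses plain framing rather than the RTFI analysis developed in the paper, and the frame size, threshold and peak-picking rule are illustrative assumptions.

```python
import numpy as np

# Simplified energy-change onset detector (illustrative only): frame the
# signal, compute the positive change in log energy between consecutive
# frames, and report frame times where that change exceeds a threshold.
# The paper derives such cues from the RTFI analysis instead.

def onset_times(signal, sr, frame=1024, hop=512, threshold=1.0):
    n_frames = 1 + (len(signal) - frame) // hop
    log_energy = np.empty(n_frames)
    for i in range(n_frames):
        chunk = signal[i * hop: i * hop + frame]
        log_energy[i] = np.log(np.sum(chunk ** 2) + 1e-12)
    # Positive change in log energy between consecutive frames.
    diff = np.maximum(np.diff(log_energy), 0.0)
    return [i * hop / sr for i in range(1, n_frames) if diff[i - 1] > threshold]

# Synthetic test: near-silence, then a burst of noise at 0.5 s.
sr = 22050
sig = np.concatenate([1e-4 * np.random.randn(sr // 2),
                      np.random.randn(sr // 2)])
print(onset_times(sig, sr))  # one or two detections near 0.5 s
```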
IEEE Transactions on Circuits and Systems for Video Technology | 2009
Gwo Giun Lee; Yen-Kuang Chen; Marco Mattavelli; Euee S. Jang
Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and computing algorithms with emerging platforms that have multiple cores and reconfigurable architecture. As the algorithms in forthcoming visual systems become increasingly complex, many applications must have different profiles with different levels of performance. Hence, with expectations that the visual experience in the future will become continuously better, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is significant for characterizing the algorithmic complexity used to optimize targeted architecture. This paper shows that seamless weaving of the development of previously autonomous visual computing algorithms and multicore or reconfigurable architectures will unavoidably become the leading trend in the future of video technology.
Nature Methods | 2016
Ibrahim Numanagić; James K. Bonfield; Faraz Hach; Jan Voges; Jörn Ostermann; Claudio Alberti; Marco Mattavelli; S. Cenk Sahinalp
High-throughput sequencing (HTS) data are commonly stored as raw sequencing reads in FASTQ format or as reads mapped to a reference, in SAM format, both with large memory footprints. Worldwide growth of HTS data has prompted the development of compression methods that aim to significantly reduce HTS data size. Here we report on a benchmarking study of available compression methods on a comprehensive set of HTS data using an automated framework.
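As a toy illustration of the kind of measurement such a benchmark automates, the sketch below compresses a FASTQ file with gzip and reports the compression ratio. gzip serves only as a generic baseline here; the specialized HTS compressors evaluated in the study are not reproduced, and the file path is a placeholder.

```python
import gzip
import os
import shutil

# Toy benchmark step (illustrative): compress one FASTQ file with gzip
# and report the compression ratio. The study compares many specialized
# HTS compressors on raw reads and mapped data; gzip is only a generic
# baseline, and the file path below is a placeholder.

def gzip_ratio(fastq_path):
    compressed_path = fastq_path + ".gz"
    with open(fastq_path, "rb") as src, gzip.open(compressed_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    original = os.path.getsize(fastq_path)
    compressed = os.path.getsize(compressed_path)
    return original / compressed

# Substitute a real FASTQ file to run this.
print(f"compression ratio: {gzip_ratio('reads.fastq'):.2f}x")
```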