Mario Guglielmo
CSELT
Publications
Featured research published by Mario Guglielmo.
Global Communications Conference | 1988
Ronald Plompen; Yoshinori Hatori; Wilfried Geuen; Jacques Guichard; Mario Guglielmo; Harald Brusewitz
A description is given of studies of source coding in the Specialists Group on Coding for Visual Telephony in CCITT SG XV. Using a macroblock approach, the source coding algorithm will have the capability of operating from 64 kbit/s for videophone applications up to 2 Mbit/s for high-quality videoconferencing applications. The authors describe the existing reference model and the evolution toward the latest reference model (RM5), which is going to be submitted for standardization. Theoretical information is provided and some examples of possible improvements are presented for the most important techniques used in the reference model, among which are quantization, variable block size, scanning classes, loop filter, entropy coding, multiple versus single variable-length coding, and block-type discrimination.
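The scanning and entropy-coding stages mentioned in the abstract can be illustrated with a short sketch. This is a simplified illustration only: the zig-zag order and the (run, level) events below follow common transform-coding practice, not the reference model's actual scan classes or VLC tables.

```python
import numpy as np

def zigzag_order(n=8):
    """Classic zig-zag scan order for an n x n block of transform coefficients."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0]))

def run_length(scan):
    """(zero-run, level) events ahead of entropy coding, plus an end-of-block marker."""
    events, run = [], 0
    for level in scan:
        if level == 0:
            run += 1
        else:
            events.append((run, level))
            run = 0
    events.append("EOB")
    return events

# A sparse block of quantised coefficients: only low frequencies survive quantisation
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[2, 0] = 12, 5, -3

scan = [block[r, c] for r, c in zigzag_order()]
events = run_length(scan)
print(events)  # few symbols to code when the energy is compacted at low frequencies
```

The scan concentrates the nonzero coefficients at the front of the sequence, so most of the block collapses into a single end-of-block symbol for the variable-length coder.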
IEEE Transactions on Communications | 1986
Mario Guglielmo
The paper analyzes the effect of finite-length arithmetic on the calculation of 2-D linear transformations employed in some picture coding algorithms. Since the condition of zero error in general direct and inverse transformations leads to results of little practical importance, an analysis is carried out on the statistical properties of the error in a 2-D linear transformation with a given arithmetic word length. The important case of the discrete cosine transform (DCT) applied to real images is then considered in detail. The results allow a circuit designer to determine the representation accuracy of the one- and two-dimensional coefficients required to satisfy a preassigned reconstruction error on the image.
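A minimal numerical sketch of the effect the paper analyzes: rounding the DCT basis coefficients to a fixed-point grid and observing how the reconstruction error shrinks as the word length grows. The rounding rule and word lengths here are illustrative assumptions, not the paper's error model.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def to_fixed(x: np.ndarray, frac_bits: int) -> np.ndarray:
    """Round to a fixed-point grid with the given number of fractional bits."""
    s = 2.0 ** frac_bits
    return np.round(x * s) / s

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # an 8x8 block of pixels
c_exact = dct_matrix(8)

errors = {}
for bits in (6, 10, 14):
    c_q = to_fixed(c_exact, bits)   # transform coefficients in finite-length arithmetic
    coeffs = c_q @ block @ c_q.T    # forward 2-D transform
    rec = c_q.T @ coeffs @ c_q      # inverse 2-D transform
    errors[bits] = np.max(np.abs(rec - block))
    print(f"{bits:2d} fractional bits -> max reconstruction error {errors[bits]:.5f}")
```

With quantised basis coefficients the transform matrix is no longer exactly orthonormal, so forward plus inverse transformation no longer returns the original block; the residual error is what a designer trades against register width.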
International Conference on Acoustics, Speech, and Signal Processing | 1986
Fabrizio Oliveri; G. Conte; Mario Guglielmo
The main problem in vector quantisation is codebook generation. The LBG method solves it with an iterative approach, but the algorithm needs a long training sequence and an initial approximation of the codebook. These problems may be overcome by using space-filling curves, also called Peano curves, which are mappings from the unit interval onto the unit hypercube. By using this kind of mapping, codebook generation is reduced to the computation of the (optimum) one-dimensional quantiser associated with the probability density function (pdf) of the image points on the Hilbert curve. The paper describes the way of applying these mappings: a new algorithm for generating vector quantisation codebooks is derived, and experimental results on image coding are provided.
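The reduction the authors describe, from 2-D codebook design to a 1-D quantiser along the curve, can be sketched for 2-D vectors as follows. The grid size, the random training data and the quantile-based 1-D quantiser are illustrative assumptions; the paper computes an optimum quantiser matched to the pdf along the curve.

```python
import numpy as np

# Hilbert-curve helpers (the classic iterative bit-manipulation formulation)
def _rot(n, x, y, rx, ry):
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Map a grid point (x, y) to its distance d along the order-n Hilbert curve."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d

def d2xy(n, d):
    """Inverse mapping: curve distance d back to the grid point (x, y)."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = _rot(s, x, y, rx, ry)
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

N = 16  # grid side (must be a power of two)
rng = np.random.default_rng(1)
train = rng.integers(0, N, size=(5000, 2))  # training vectors of pixel pairs

# 1. Project every training vector onto the curve (2-D -> 1-D)
dist = np.sort([xy2d(N, x, y) for x, y in train])

# 2. Design a 1-D quantiser on the projected distribution: here, simple quantiles
levels = 16
reps = [dist[int((i + 0.5) / levels * len(dist))] for i in range(levels)]

# 3. Map the 1-D representative points back to 2-D codewords
codebook = [d2xy(N, d) for d in reps]
print(codebook)
```

Because neighbouring points on the Hilbert curve are neighbours in the plane, a good 1-D quantiser of the curve distances yields a reasonable 2-D codebook without any iterative training.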
IEEE Journal on Selected Areas in Communications | 1987
Leonardo Chiariglione; Stefano Fontolan; Mario Guglielmo; Francesco Tommasi
Demand for interpersonal video and audio communication at the bit rates available on basic and subprimary ISDN accesses is growing, and it is expected that service implementation will require low-cost, high-quality equipment in large numbers. This paper describes the concept and simulation results of a coding scheme of the hybrid (intraframe transform and temporal DPCM) type whose main feature is the possibility of regaining high resolution for slowly changing pictures. Methods for controlling the spatio-temporal resolution and simulation results are given. A byproduct of the algorithm allows the transmission buffer to be reduced to negligible size.
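The temporal-DPCM half of such a hybrid scheme can be sketched as a minimal prediction loop with a uniform residual quantiser. The intraframe transform and the adaptive spatio-temporal resolution control described in the paper are omitted; the quantiser step and test sequence are illustrative assumptions.

```python
import numpy as np

def encode_sequence(frames, step=8):
    """Temporal DPCM: transmit only quantised frame differences.

    The encoder tracks the decoder's state, so encoder and decoder
    predictions never drift apart.
    """
    prediction = np.zeros_like(frames[0], dtype=float)
    decoded = []
    for frame in frames:
        residual = frame - prediction            # temporal prediction error
        q = np.round(residual / step) * step     # quantised residual (the "bitstream")
        prediction = prediction + q              # encoder mirrors the decoder
        decoded.append(prediction.copy())
    return decoded

rng = np.random.default_rng(3)
base = rng.integers(0, 256, size=(8, 8)).astype(float)
frames = [base + t for t in range(5)]  # slowly changing scene: +1 grey level per frame
decoded = encode_sequence(frames)
err = max(np.max(np.abs(d - f)) for d, f in zip(decoded, frames))
print(f"max reconstruction error: {err}")
```

With a uniform quantiser of step 8 the reconstruction error stays within half a step; in a scheme like the paper's, small residuals on slowly changing pictures leave bit rate free to refine the picture toward full resolution.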
European Transactions on Telecommunications | 1991
Mario Guglielmo; Giulio Modena; Roberto Montagna
The paper sketches the main results achieved in recent years and the ongoing activities (mainly within the standardization bodies) in the areas of audio and video coding for bandwidth reduction. The evolution of the telecommunications field and the needs from which the different standards originate are also considered. The paper is organized in two main chapters: the first deals with techniques for audio and speech coding, while the second presents the main achievements, in terms of Recommendations and consolidated algorithms, for image coding. Concerning speech and audio coding, the paper presents an overview of the coding techniques and then examines the different transmission environments (network and mobile communication), the bandwidths required to satisfy the considered applications, and the defined speech quality evaluation methods. A similar approach is used for video coding, and the evolution of the coding techniques appearing in the CCITT Recommendations is presented. The ongoing activities within CCIR, CMTT, CCITT and ISO towards new worldwide standards are also outlined, with indications of the areas that will require further development.
International Conference on Acoustics, Speech, and Signal Processing | 1986
L. Chiariglione; L. Corgnier; Mario Guglielmo
Several video codec applications are currently being considered, and there is a danger that digitisation will lead to more incompatibilities than were experienced in the analogue era. A video codec architecture is thus proposed that can accommodate a wide range of coding algorithms (hybrid scheme), operate over a wide bitrate range (by means of an adaptive prefilter) and deal with different video standards (by integrating the postfilter with standards conversion).
Lecture Notes in Computer Science | 1999
Roberto Becchini; Gianluca De Petris; Mario Guglielmo; Alain Morvan
The MPEG-2 standard has allowed Digital Television to become a reality at reasonable cost. The introduction of MPEG-4 will allow content and service providers to enrich MPEG-2 content with new 2D and 3D scenes capable of a high degree of interactivity, even in broadcast environments. While compatibility with existing terminals is ensured, the user of the new terminal will be provided with other applications, such as Electronic Program Guides and Advanced Teletext, as well as a new, non-intrusive kind of advertisement. For these reasons, a scalable device open both to the consolidated past (MPEG-2) and to the incoming future (MPEG-4) is considered key for the delivery of new services. The presence of a Java Virtual Machine and a set of standard APIs is foreseen in the current terminal architecture, to give extended programmatic capabilities to content creators. This paper covers the application areas, design and implementation of a prototype of this terminal, which has been developed in the context of the ACTS SOMMIT project.
Signal Processing: Image Communication | 1990
Luigi Corgnier; Mario Guglielmo
This paper considers one of the problems encountered in the definition of video coding algorithms for moving and still pictures: the finite-length arithmetic used in the computation of orthonormal transforms. Two aspects in particular are taken into account: mismatch and reversibility. The different causes influencing the final representation of the reconstructed video samples are examined, and a formula is given expressing the final error as a function of the errors introduced at the different stages of the computation. From an upper bound on this final error, the minimum number of bits required to represent the quantities appearing at the different stages of the computation is derived for two cases of particular interest. Alternatively, given the length of the arithmetic registers, the worst-case error can be determined. With some care the results are valid for any type of fast algorithm, and not only for the matrix multiplication case used here to give the results the widest validity.
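The mismatch aspect can be illustrated numerically: two decoders that conform to the same bitstream but use different internal word lengths reconstruct slightly different pictures. The word lengths and rounding rule below are illustrative assumptions, not the cases analysed in the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def to_fixed(x, frac_bits):
    """Round to fixed point with the given number of fractional bits."""
    s = 2.0 ** frac_bits
    return np.round(x * s) / s

rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
c = dct_matrix(8)

# Coefficients rounded once at the encoder: the same bitstream reaches both decoders
coeffs = to_fixed(c @ block @ c.T, 4)

# Two decoders of the same bitstream with different internal arithmetic lengths
rec_a = to_fixed(c.T, 12) @ coeffs @ to_fixed(c, 12)
rec_b = to_fixed(c.T, 8) @ coeffs @ to_fixed(c, 8)

mismatch = np.max(np.abs(rec_a - rec_b))  # decoder-to-decoder mismatch
print(f"worst-case mismatch between the two decoders: {mismatch:.4f}")
```

In a DPCM loop such a mismatch accumulates frame after frame, which is why bounding it, as the paper does, matters for interoperable codec implementations.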
Electronic Imaging '90, Santa Clara, 11-16 Feb '90 | 1990
Ronald Plompen; Yoshinori Hatori; Wilfried Geuen; Jacques Guichard; Mario Guglielmo; Harald Brusewitz
CCITT Study Group XV (Working Party XV/1) is charged with transmission systems. Under WP XV/1 a Specialists Group was established to draft recommendations for the second-generation sub-primary-rate (n x 384 kbit/s, n = 1, ..., 5, or p x 64 kbit/s, p = 1, ..., 30) video codecs. The Specialists Group on Coding for Visual Telephony reaches its objectives by exchanging results among the different partners involved (Europe, Japan, USA, Korea and Canada). During the study period 1984-1989 the Specialists Group agreed upon the usage of a so-called reference model (hereafter abbreviated RM) for simulation purposes. The specification for flexible hardware is derived from these simulations. In this paper a description of the reference model and the evolution towards the last reference model (RM8) is given, which is the basis for H.261. The intention of this contribution is to show the flexibility of the algorithm for different applications; the term universal approach therefore refers to the usage of the algorithm for a range of possible applications. In a joint expert group of ISO/IEC, the Moving Picture Experts Group (MPEG), ISO/IEC JTC1/SC2/WG8, work is carried out to select a standard for the coded representation of moving images and sound for the provision of interactive moving picture applications. In this expert group, members of the Specialists Group on Coding for Visual Telephony are participating, trying to realize interworking between the standard in preparation by MPEG and the coming CCITT Recommendation H.261. For the most important techniques used in the CCITT Reference Model, among which are quantization, scanning, loop filter, entropy coding, multiple versus single VLC and block-type discrimination, theoretical information is provided and some examples of possible improvements are included.
A significant development is reported, namely a modification of the reported n x 384 kbit/s algorithm paving the way to a universal standard codec capable of operating at p x 64 kbit/s (p = 1, ..., 30).
1985 International Technical Symposium/Europe | 1986
Leonardo Chiariglione; Mario Guglielmo; F. Tommasi
The goal of implementing a videoconference codec at 384 kbit/s cannot be achieved while maintaining the full spatio-temporal resolution of the original images. Consequently the videoconference codec will require pre-processing and spatio-temporal subsampling operations at the transmitter side, with corresponding postfiltering and interpolation at the receiver side to reconstruct the missing information. These operations are very similar to those implied by the conversion between the American and European video standards required by the intrinsically international videoconference service. Integrating the two processing steps into a single operation would be a further advance toward a worldwide standard video codec implementation, overcoming the dichotomy between 625/50 and 525/60 systems. The present paper describes the architecture of a codec which gives an integrated solution to the pre/post-filtering and standards conversion problems.