
Publication


Featured research published by Peter List.


IEEE Transactions on Circuits and Systems for Video Technology | 2003

Adaptive deblocking filter

Peter List; Anthony Joch; Jani Lainema; Gisle Bjøntegaard; Marta Karczewicz

This paper describes the adaptive deblocking filter used in the H.264/MPEG-4 AVC video coding standard. The filter performs simple operations to detect and analyze artifacts on coded block boundaries and attenuates those by applying a selected filter.
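The boundary-analysis idea can be sketched as follows. The thresholds below are illustrative placeholders, not the standard's QP-dependent alpha/beta tables, and the filter is a simplified stand-in for the clipped filters actually specified in H.264/MPEG-4 AVC:

```python
# Sketch of a deblocking decision on one 1-D boundary segment:
# samples p1, p0 | q0, q1 straddle the block edge.

def should_filter(p1, p0, q0, q1, alpha=10, beta=3):
    """True if the edge looks like a coding artifact (a small, blocky
    step) rather than a real image edge (a large step, which must be
    preserved). alpha/beta are illustrative, not the standard's tables."""
    return (abs(p0 - q0) < alpha and
            abs(p1 - p0) < beta and
            abs(q1 - q0) < beta)

def weak_filter(p1, p0, q0, q1):
    """Attenuate the step across the boundary; a simplified stand-in
    for the standard's clipped filter."""
    delta = ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3
    return p0 + delta, q0 - delta

# A small step across the edge is a candidate for filtering ...
assert should_filter(80, 82, 88, 89)
# ... while a genuine image edge is left untouched.
assert not should_filter(20, 22, 90, 91)
```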


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2008

T-V-model: Parameter-based prediction of IPTV quality

Alexander Raake; Marie-Neige Garcia; Sebastian Möller; Jens Berger; Fredrik Kling; Peter List; Jens Johann; Cornelius Heidemann

The paper presents a parameter-based model for predicting the perceived quality of transmitted video for IPTV applications. The core model we derived can be applied both to service monitoring and network or service planning. In its current form, the model covers H.264 and MPEG-2 coded video (standard and high definition) transmitted over IP-links. The model includes factors like the coding bit-rate, the packet loss percentage and the type of packet loss handling used by the codec. The paper provides an overview of the model, of its integration into a multimedia model predicting audio-visual quality, and of its application to service monitoring. A performance analysis is presented showing a high correlation with the results of different subjective video quality perception tests. An outlook highlights future model extensions.
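The general shape of such a parameter-based model can be illustrated with a toy formula: quality rises with coding bit-rate and falls with packet loss. The functional form and all coefficients below are invented for the sketch and are not the T-V-model's actual equations:

```python
import math

def iptv_video_quality(bitrate_kbps, packet_loss_pct,
                       q_max=4.5, q_min=1.0,
                       a=3.0, b=0.002, c=1.2):
    """Illustrative parameter-based quality estimate on a 1..4.5
    MOS-like scale. All coefficients are invented for this sketch."""
    # Coding quality saturates towards q_max as bit-rate grows.
    coding_quality = q_max - a * math.exp(-b * bitrate_kbps)
    # Packet loss degrades quality roughly logarithmically.
    loss_impairment = c * math.log1p(packet_loss_pct)
    return max(q_min, min(q_max, coding_quality - loss_impairment))
```

A model of this kind needs only stream parameters (no decoded pixels), which is what makes it usable both for lightweight service monitoring and for network planning.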


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2008

Towards content-related features for parametric video quality prediction of IPTV services

Marie-Neige Garcia; Alexander Raake; Peter List

This paper investigates video content-related features, such as measures of spatio-temporal complexity, for inclusion into parametric video quality models. Our goal is to find a parametric content description that correlates with perceived video quality. In the course of the development of a parametric IPTV video quality prediction model (T-V-model), a large number of subjective tests have been conducted for standard definition and high definition video with different types of content. As expected from previous studies, we observed content dependencies that differed across types of degradations. As descriptors of the content, we employ spatio-temporal information obtained either before encoding and from the decoder, or obtained from the decoder only. We compare these two approaches and explore their application to a reduced- or no-reference parametric model. An outlook highlights future steps for integrating the spatio-temporal features into the parametric model.
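Spatio-temporal complexity of this kind is commonly quantified with the SI/TI indicators defined in ITU-T Rec. P.910: SI is the maximum over time of the per-frame standard deviation of the Sobel-filtered luminance, TI the maximum per-frame standard deviation of the frame difference. A minimal NumPy version (valid-area Sobel, luminance-only input assumed) might look like:

```python
import numpy as np

def sobel_magnitude(frame):
    """Sobel gradient magnitude via explicit 3x3 convolution
    (valid interior area only)."""
    f = frame.astype(float)
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    return np.hypot(gx, gy)

def si_ti(frames):
    """SI and TI of a luminance sequence, following the ITU-T P.910
    definitions:
      SI = max over time of stddev(Sobel(frame))
      TI = max over time of stddev(frame difference)."""
    si = max(sobel_magnitude(f).std() for f in frames)
    ti = max((b.astype(float) - a.astype(float)).std()
             for a, b in zip(frames, frames[1:]))
    return si, ti
```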


IEEE Signal Processing Magazine | 2011

IP-Based Mobile and Fixed Network Audiovisual Media Services

Alexander Raake; Jörgen Gustafsson; Savvas Argyropoulos; Marie-Neige Garcia; David Lindegren; Gunnar Heikkilä; Martin Pettersson; Peter List; Bernhard Feiten

This article provides a tutorial overview of current approaches for monitoring the quality perceived by users of IP-based audiovisual media services. The article addresses both mobile and fixed network services such as mobile TV or Internet Protocol TV (IPTV). It reviews the different quality models that exploit packet-header, bit-stream, or signal information for providing audio, video, and audiovisual quality estimates, respectively. It describes how these models can be applied for real-life monitoring, and how they can be adapted to reflect the information available at the given measurement point. An outlook gives insight into emerging trends for near- and mid-term future requirements and solutions.


Quality of Multimedia Experience (QoMEX) | 2011

No-reference video quality assessment for SD and HD H.264/AVC sequences based on continuous estimates of packet loss visibility

Savvas Argyropoulos; Alexander Raake; Marie-Neige Garcia; Peter List

In this paper, a novel method for predicting the visibility of packet losses in SD and HD H.264/AVC video sequences and modeling their impact on perceived quality is proposed. Based on the findings of a new subjective experiment it is initially shown that the classification of packet loss visibility in a binary fashion is not sufficient to model the perceptual degradations caused by the transmission errors. The proposed no-reference algorithm extracts a set of features from the video bit-stream to account for the spatial and temporal characteristics of the video content and the induced distortion due to the network impairments. Subsequently, the visibility of packet losses is predicted in a continuous fashion using Support Vector Regression. Finally, a no-reference bit-stream based video quality assessment model that explicitly employs the predicted packet loss visibility estimates is presented. The evaluation of the proposed model demonstrates that the use of continuous estimates for the visibility of packet losses improves the performance of the video quality assessment model.
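The continuous-visibility idea can be sketched with scikit-learn's Support Vector Regression. The feature names and the ground-truth formula below are synthetic stand-ins for the bitstream-derived features and subjective visibility scores used in the paper:

```python
# Sketch: predict packet-loss visibility as a continuous value in [0, 1]
# with SVR. Features and labels here are synthetic, invented for
# illustration only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Each row: [motion_magnitude, lost_area_fraction, propagation_frames],
# all normalized to [0, 1].
X = rng.uniform(0, 1, size=(200, 3))
# Synthetic ground truth: visibility grows with every feature.
y = np.clip(0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
            + rng.normal(0, 0.02, 200), 0, 1)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# A loss in a still, small, quickly concealed region should score as
# less visible than one in a large, fast-moving region.
low = model.predict([[0.05, 0.05, 0.05]])[0]
high = model.predict([[0.9, 0.9, 0.9]])[0]
assert low < high
```

The point of the regression (rather than a binary visible/invisible classifier) is that the continuous score can be fed directly into the quality model as a weight for each loss event.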


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2011

No-reference bit-stream model for video quality assessment of H.264/AVC video based on packet loss visibility

Savvas Argyropoulos; Alexander Raake; Marie-Neige Garcia; Peter List

In this paper, a no-reference bit-stream model for quality assessment of SD and HD H.264/AVC video sequences based on packet loss visibility is proposed. The method considers the impact of network impairments on human perception and uses the visibility of packet losses to predict objective scores. Also, a new subjective experiment has been designed to provide insight into the perceptual effect of degradations caused by transmission errors. The proposed algorithm extracts a set of features from the received bit stream. Then, the visibility of each packet loss event is determined by classifying the extracted features with a Support Vector Machine classifier. Finally, analytical expressions are developed to account for visual degradation due to compression and channel-induced distortion based on the outcome of the visibility classifier. The evaluation demonstrates the validity of the proposed method.


IEEE Workshop on Multimedia Signal Processing (MMSP) | 2013

Parametric model for audiovisual quality assessment in IPTV: ITU-T Rec. P.1201.2

Marie-Neige Garcia; Peter List; Savvas Argyropoulos; David Lindegren; Martin Pettersson; Bernhard Feiten; Jörgen Gustafsson; Alexander Raake

A parametric packet-based model has been created to estimate user-perceived audiovisual quality of Internet Protocol Television (IPTV) services. It is divided into three modules, for audio, video and audiovisual quality. The model is applicable to the quality monitoring of encrypted and non-encrypted audiovisual streams. Typical audio and video degradations for IPTV are covered for Standard Definition (SD) and High Definition (HD) video formats. The model supports the H.264 video codec and the audio codecs MPEG-1 Layer II, MPEG-2 AAC-LC, MPEG-4 HE-AACv2 and AC3. It handles various types of IP-network layer transmission errors. The model was developed and validated using a large database of subjective tests. The underlying concept is based on an impairment factor approach, which models how users build their individual judgment of the quality of a given audiovisual signal. Each impairment factor captures the perceived quality impact of a possible degradation and therefore enables diagnostic analysis of quality problems. The model shows high performance results, both in terms of Pearson's correlation coefficient (r) and Root-Mean-Square Error (RMSE). The model is standardized as ITU-T Recommendation P.1201.2, the higher-resolution (IPTV and Video on Demand (VoD)) algorithm of Recommendation P.1201.
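The impairment-factor idea can be illustrated as follows. The additive combination, the 0..100 scale, and the linear MOS mapping are sketch assumptions for illustration, not the actual P.1201.2 equations:

```python
def audiovisual_quality(i_coding, i_transmission, q_base=100.0):
    """Illustrative impairment-factor combination: start from the best
    possible quality on a 0..100 scale and subtract the perceptual
    impairment contributed by each degradation. Because each factor is
    tracked separately, a low score can be traced back to its cause
    (diagnostic analysis)."""
    q = q_base - i_coding - i_transmission
    return max(0.0, min(q_base, q))

def to_mos(q):
    """Map the 0..100 quality score to a 1..5 MOS-like range
    (linear mapping, again a sketch assumption)."""
    return 1.0 + 4.0 * q / 100.0
```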


Quality of Multimedia Experience (QoMEX) | 2017

A bitstream-based, scalable video-quality model for HTTP adaptive streaming: ITU-T P.1203.1

Alexander Raake; Marie-Neige Garcia; Werner Robitza; Peter List; Steve Göring; Bernhard Feiten

The paper presents the scalable video quality model part of the P.1203 Recommendation series, developed in a competition within ITU-T Study Group 12 previously referred to as P.NATS. It provides integral quality predictions for media sessions of 1 to 5 min length for HTTP Adaptive Streaming (HAS) with up to HD video resolution. The model is available in four modes of operation for different levels of media-related bitstream information, reflecting different types of encryption of the media stream. The video quality model presented in this paper delivers short-term video quality estimates that serve as input to the integration component of the P.1203 model. The scalable approach consists of using the same components for spatial and temporal scaling degradations across all modes. The third component of the model addresses video coding artifacts. To this end, a single model parameter is introduced that can be derived from different types of bitstream input information. Depending on the complexity of the available input, one of four scaling levels of the model is applied. The paper presents the different novelties of the model and scientific choices made during its development, the test design, and an analysis of the model performance across the different modes.


International Conference on Image Processing (ICIP) | 2013

Scene change detection in encrypted video bit streams

Savvas Argyropoulos; Peter List; Marie-Neige Garcia; Bernhard Feiten; Martin Pettersson; Alexander Raake

In this paper, a novel method to detect scene changes in encrypted video streams is presented. Typically, in IPTV systems, the media stream is transmitted in encrypted form, and therefore the only information available for determining scene changes is the packet headers that transport the video signal. Thus, the proposed method estimates the size and the type of each picture of the video sequence by extracting information from the packet headers. Then, based on the GOP structure, a set of rules is determined to predict changes of frame sizes which are indicative of scene changes in the video sequences. Furthermore, the application of the proposed method in the recently standardized ITU-T Recommendation P.1201.2 for no-reference audio-visual quality assessment for IPTV-grade services is presented to highlight how such a method can be deployed. Finally, the proposed method is evaluated on a large set of video databases to demonstrate its validity.
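A header-only detector of this kind might be sketched as follows. With an encrypted payload, only per-frame byte counts (summed packet sizes) are visible; intra-coded frames are much larger than P/B-frames, so an unscheduled size spike relative to the recent average suggests a scene change. The window length and spike ratio are invented for illustration, and real GOP-structure rules would be more elaborate:

```python
def detect_scene_changes(frame_sizes, window=8, ratio=3.0):
    """Return indices of frames whose byte size exceeds `ratio` times
    the running average of the previous `window` frames. Parameters
    are illustrative, not tuned values."""
    changes = []
    for i in range(window, len(frame_sizes)):
        avg = sum(frame_sizes[i - window:i]) / window
        if frame_sizes[i] > ratio * avg:
            changes.append(i)
    return changes

sizes = [900, 950, 920, 880, 910, 940, 930, 900,  # steady P-frames
         6000,                                    # spike: likely scene cut
         950, 920, 910]
assert detect_scene_changes(sizes) == [8]
```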


ACM SIGMM Conference on Multimedia Systems (MMSys) | 2018

HTTP adaptive streaming QoE estimation with ITU-T Rec. P.1203: open databases and software

Werner Robitza; Steve Göring; Alexander Raake; David Lindegren; Gunnar Heikkilä; Jörgen Gustafsson; Peter List; Bernhard Feiten; Ulf Wüstenhagen; Marie-Neige Garcia; Kazuhisa Yamagishi; Simon Broom

This paper describes an open dataset and software for ITU-T Rec. P.1203. As the first standardized Quality of Experience model for audiovisual HTTP Adaptive Streaming (HAS), it has been extensively trained and validated on over a thousand audiovisual sequences containing HAS-typical effects (such as stalling, coding artifacts, quality switches). Our dataset comprises four of the 30 official subjective databases at a bitstream feature level. The paper also includes subjective results and the model performance. Our software implementation of the standard has also been made publicly available and is used for all the analyses presented. Among other previously unpublished details, we show the significant performance improvements of using bitstream-based models over metadata-based ones for video quality analysis, and the robustness of combining classical models with machine-learning-based approaches for estimating user QoE.

Collaboration


Dive into Peter List's collaborations.

Top Co-Authors

Alexander Raake
Technische Universität Ilmenau

Marie-Neige Garcia
Technical University of Berlin