Bruno Gardlo
Austrian Institute of Technology
Publications
Featured research published by Bruno Gardlo.
Quality of Multimedia Experience (QoMEX) | 2012
Peter Fröhlich; Sebastian Egger; Raimund Schatz; Michael Mühlegger; Kathrin Masuch; Bruno Gardlo
Standard methodologies for subjective video quality testing are based on very short test clips of 10 seconds. But is this duration sufficient for Quality of Experience assessment? In this paper, we present the results of a comparative user study that tests whether quality perception and rating behavior differ when video clip durations are longer. We did not find strong overall MOS differences between clip durations, but the three longer durations (60, 120 and 240 seconds) were rated slightly more positively than the three shorter durations under comparison (10, 15 and 30 seconds). This difference was most apparent when high-quality videos were presented. However, we did not find an interaction between content class and the duration effect itself. Finally, we discuss the methodological implications of these results.
Proceedings of the 2014 Workshop on Design, Quality and Deployment of Adaptive Video Streaming | 2014
Sebastian Egger; Bruno Gardlo; Michael Seufert; Raimund Schatz
Changing network conditions like bandwidth fluctuations, and the resulting bad user experience issues (e.g., video freezes), pose severe challenges to Internet video streaming. To address this problem, an increasing number of video services utilize HTTP adaptive streaming (HAS). HAS enables service providers to improve Quality of Experience (QoE) and resource utilization by incorporating information from different layers. However, these adaptation possibilities of HAS also introduce new perceivable impairments, such as the fluctuation of audiovisual quality levels over time, which in turn lead to novel QoE-related research questions. The main contribution of this paper is the formulation of open research questions as well as a thorough, systematic user-centric analysis of different quality adaptation dimensions and strategies. The underlying data has been acquired through two crowdsourcing studies and one lab study. The results provide guidance w.r.t. which encoding dimensions are best combined for the creation of the adaptation set and what type of adaptation strategy should be used. Furthermore, they provide insights on the impact of adaptation frequency and the true QoE gain of adaptation over stalling.
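As background for the adaptation strategies analyzed above, the basic mechanism of a HAS client can be sketched as a throughput-based representation selection. This is a minimal illustrative sketch: the bitrate ladder and safety margin below are hypothetical values, not taken from the paper.

```python
# Illustrative sketch of throughput-based HTTP adaptive streaming (HAS)
# representation selection. The bitrate ladder and the safety margin are
# hypothetical values chosen for illustration only.

# Available representations (bitrates in kbit/s), as encoded on the server.
BITRATE_LADDER = [235, 560, 1050, 2350, 4300]

def select_representation(measured_throughput_kbps, safety_margin=0.8):
    """Pick the highest bitrate that fits within the measured throughput,
    scaled by a safety margin to absorb short-term fluctuations."""
    budget = measured_throughput_kbps * safety_margin
    # Fall back to the lowest representation if even that exceeds the budget.
    feasible = [b for b in BITRATE_LADDER if b <= budget]
    return max(feasible) if feasible else BITRATE_LADDER[0]

# Example: with 3000 kbit/s measured, the budget is 2400 kbit/s,
# so the 2350 kbit/s representation is selected.
print(select_representation(3000))
```

The quality fluctuations studied in the paper arise exactly when such a selection switches between rungs of the ladder as the measured throughput varies over time.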
Multimedia Signal Processing (MMSP) | 2014
Tobias Hossfeld; Matthias Hirth; Pavel Korshunov; Philippe Hanhart; Bruno Gardlo; Christian Keimel; Christian Timmerer
The popularity of crowdsourcing for performing various tasks online has increased significantly in the past few years. The low cost and flexibility of crowdsourcing, in particular, have attracted researchers in the field of subjective multimedia evaluations and Quality of Experience (QoE). Since online assessment of multimedia content is challenging, several dedicated frameworks were created to aid in the design of the tests, including support for testing methodologies like ACR, DCR, and PC, setting up the tasks, training sessions, screening of the subjects, and storage of the resulting data. In this paper, we focus on web-based frameworks for multimedia quality assessment that support commonly used crowdsourcing platforms such as Amazon Mechanical Turk and Microworkers. We provide a detailed overview of the crowdsourcing frameworks and evaluate them to aid researchers in the field of QoE assessment in selecting frameworks and crowdsourcing platforms that are adequate for their experiments.
IEEE Transactions on Network and Service Management | 2016
Pedro Casas; Michael Seufert; Florian Wamser; Bruno Gardlo; Andreas Sackl; Raimund Schatz
A quarter of the world population will be using smartphones to access the Internet in the near future. In this context, understanding the quality of experience (QoE) of popular apps on such devices becomes paramount to cellular network operators, who need to offer high quality levels to reduce the risk of customers churning due to quality dissatisfaction. In this paper, we address the problem of QoE provisioning in smartphones from a double perspective, combining the results obtained from subjective laboratory tests with end-device passive measurements and crowdsourced QoE feedback obtained in operational cellular networks. The study addresses the impact of both access bandwidth and latency on the QoE of five different services and mobile apps: YouTube, Facebook, Web browsing through Chrome, Google Maps, and WhatsApp. We evaluate the influence of both constant and dynamically changing network access conditions, tackling in particular the case of fluctuating downlink bandwidth, which is typical in cellular networks. As a main contribution, we show that the results obtained in the laboratory are highly applicable to the live scenario, as the derived mappings track the QoE reported by users in real networks. We additionally provide hints and bandwidth thresholds for good QoE levels in such apps, as well as a discussion of end-device passive measurements and analysis. The results presented in this paper provide a sound basis for better understanding the QoE requirements of popular mobile apps, as well as for monitoring the underlying provisioning network. To the best of our knowledge, this is the first paper providing such a comprehensive analysis of QoE on mobile devices, combining network measurements with users' QoE feedback in both laboratory tests and operational networks.
Quality of Multimedia Experience (QoMEX) | 2017
Raimund Schatz; Andreas Sackl; Christian Timmerer; Bruno Gardlo
Currently, we witness dramatically increasing interest in immersive media technologies like Virtual Reality (VR), particularly in omnidirectional video (OV) streaming. Omnidirectional (also called 360-degree) videos are panoramic spherical videos in which the user can look around during playback and which can therefore be understood as hybrids between traditional movie streaming and interactive VR worlds. Unfortunately, streaming this kind of content is extremely bandwidth intensive (compared to traditional 2D video) and therefore Quality of Experience (QoE) tends to deteriorate significantly in the absence of continuously optimal bandwidth conditions. In this paper, we present a first approach towards subjective QoE assessment for omnidirectional video streaming. We present the results of a lab study on the QoE impact of stalling in the context of OV streaming using head-mounted displays (HMDs). Our findings show that subjective testing for immersive media like OV is not trivial, with even simple cases like stalling leading to unexpected results. After a discussion of characteristic pitfalls and lessons learned, we provide a set of recommendations for upcoming OV assessment studies.
ACM Special Interest Group on Data Communication (SIGCOMM) | 2016
Pedro Casas; Bruno Gardlo; Raimund Schatz; Marco Mellia
Network monitoring and reporting systems, as well as network quality benchmarking campaigns, use the Average Downlink Throughput (ADT) as the main Key Performance Indicator (KPI) reflecting the health of the network. In this paper, we address the problem of network performance monitoring and assessment in operational networks from a user-centric, Quality of Experience (QoE) perspective. While accurate QoE estimation requires measurements and KPIs collected at multiple levels of the communications stack -- including the network, transport, application and end-user layers -- we take a practical approach and provide an educated guess on QoE using only a standard ADT-based KPI as input. Armed with QoE models mapping downlink bandwidth to user experience, we estimate the QoE undergone by customers of both cellular and fixed-line networks, using large-scale passive traffic measurements. In particular, we study the performance of three highly popular end-customer services: YouTube, Facebook and WhatsApp. Results suggest that up to 33% of the observed traffic flows might result in sub-optimal -- or even poor -- end-customer experience in both types of networks.
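A bandwidth-to-MOS mapping of the kind used above can be sketched with a simple logarithmic model, reflecting the common observation that QoE grows roughly with the logarithm of bandwidth. This is only an illustrative sketch: the coefficients and the sub-optimality threshold are hypothetical, not the fitted values from the paper.

```python
import math

# Illustrative sketch of a QoE model mapping Average Downlink Throughput
# (ADT) to a Mean Opinion Score (MOS) on the usual 1..5 scale. The
# logarithmic shape follows the common observation that QoE grows roughly
# with the logarithm of bandwidth; the coefficients a and b are
# hypothetical and not taken from the paper.

def mos_from_adt(adt_mbps, a=1.3, b=2.0):
    """Map downlink throughput (Mbit/s) to an estimated MOS, clipped to [1, 5]."""
    if adt_mbps <= 0:
        return 1.0
    mos = a * math.log(adt_mbps) + b
    return max(1.0, min(5.0, mos))

def is_suboptimal(adt_mbps, threshold=3.0):
    """Flag a flow whose estimated experience falls below a MOS threshold."""
    return mos_from_adt(adt_mbps) < threshold

print(round(mos_from_adt(4.0), 2))
```

Applied per flow over large-scale passive throughput measurements, such a mapping yields the kind of "educated guess" on the share of sub-optimal flows described above.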
Conference on Network and Service Management (CNSM) | 2015
Pedro Casas; Bruno Gardlo; Michael Seufert; Florian Wamser; Raimund Schatz
A quarter of the world population will be using smartphones to access the Internet in the near future. In this context, understanding the Quality of Experience (QoE) of popular apps on such devices becomes paramount to cellular network operators, who need to offer high quality levels to reduce the risk of customers churning due to quality dissatisfaction. In this paper, we address the problem of QoE provisioning in smartphones from a double perspective, combining the results obtained from subjective lab tests with end-device passive measurements and crowdsourced QoE feedback obtained in operational cellular networks. The study addresses the impact of the downlink bandwidth on the QoE of three popular smartphone apps: YouTube, Facebook and Google Maps. As a main contribution, we show that the results obtained in the lab are highly applicable to the live scenario, as the derived mappings track the QoE reported by users in real networks. We additionally provide hints and bandwidth thresholds for good QoE levels in such apps, as well as a discussion of end-device passive measurements and analysis. The results presented in this paper provide a sound basis for better understanding the QoE requirements of popular mobile apps, as well as for monitoring the underlying provisioning network. To the best of our knowledge, this is the first paper providing such a comprehensive analysis of QoE on mobile devices, combining network measurements with users' QoE feedback in both lab tests and operational networks.
ACM Multimedia | 2015
Bruno Gardlo; Sebastian Egger; Tobias Hossfeld
Crowdsourcing (CS) has evolved into a mature assessment methodology for subjective experiments in diverse scientific fields, and in particular for QoE assessment. However, the results acquired for absolute category rating (ACR) scales through CS are often not fully comparable to QoE assessments done in laboratory environments. A possible reason for such differences is the scale usage heterogeneity problem caused by deviant scale usage of the crowd workers. In this paper, we study different implementations of (quality) rating scales (in terms of design and number of answer categories) in order to identify whether certain scales can help to overcome scale usage problems in crowdsourcing. Additionally, training of subjects is well known to enhance result quality in laboratory ACR evaluations. Hence, we analyzed the appropriateness of training conditions for overcoming scale usage problems across different samples in crowdsourcing. As major results, we found that filtering of user ratings and different scale designs are not sufficient to overcome scale usage heterogeneity, but that training sessions, despite their additional costs, enhance result quality in CS and properly counteract the identified scale usage heterogeneity problems.
Quality of Multimedia Experience (QoMEX) | 2017
Pedro Casas; Alessandro D'Alconzo; Florian Wamser; Michael Seufert; Bruno Gardlo; Anika Schwind; Phuoc Tran-Gia; Raimund Schatz
Monitoring the Quality of Experience (QoE) undergone by cellular network customers has become paramount for cellular ISPs, who need to ensure high quality levels to limit customer churn due to quality dissatisfaction. This paper tackles the problem of QoE monitoring, assessment and prediction in cellular networks, relying on end-user device (i.e., smartphone) passive QoS traffic measurements and crowdsourced QoE feedback. We conceive different QoE assessment models based on supervised machine learning techniques, which are capable of predicting the QoE experienced by the end user of popular smartphone apps (e.g., YouTube and Facebook), using the passive in-device measurements as input. Using a rich QoE dataset derived from field trials in operational cellular networks, we benchmark the performance of multiple machine-learning-based predictors, and construct a decision-tree-based model which is capable of predicting the per-user overall experience and service acceptability with a success rate of 91% and 98%, respectively. To the best of our knowledge, this is the first paper using end-user, in-device passive measurements and machine learning models to predict the QoE of smartphone users in operational cellular networks.
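The decision-tree approach described above can be sketched as a tiny hand-coded tree over passive in-device features. The features (average downlink throughput and stalling ratio) and all threshold values below are hypothetical illustrations; the paper's model is learned from crowdsourced QoE feedback, not hand-coded.

```python
# Minimal sketch of a decision-tree-style QoE predictor operating on passive
# in-device measurements. Both features and all thresholds are hypothetical
# values chosen for illustration only.

def predict_acceptability(avg_throughput_mbps, stalling_ratio):
    """Classify a session as 'acceptable' or 'unacceptable' from passive
    measurements, mimicking a two-level decision tree."""
    if stalling_ratio > 0.1:          # frequent playback interruptions
        return "unacceptable"
    if avg_throughput_mbps < 0.5:     # throughput too low even without stalling
        return "unacceptable"
    return "acceptable"

print(predict_acceptability(4.0, 0.0))   # smooth, fast session
```

In the learned setting, such split features and thresholds are induced automatically from labeled sessions (passive measurements paired with crowdsourced QoE ratings) rather than set by hand.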
5th ISCA/DEGA Workshop on Perceptual Quality of Systems (PQS 2016) | 2016
Andreas Sackl; Bruno Gardlo; Raimund Schatz
Over the last couple of years, crowdsourcing has become a widely used method for conducting subjective QoE experiments over the Internet. However, the scope of crowdsourced QoE experiments so far has been mostly limited to video and image quality testing, despite the existence of many other relevant application categories. In this paper, we demonstrate the applicability of crowdsourced QoE testing to the case of file downloads. We conducted several campaigns in which participants had to download large (10-50 MB) media files (with defined waiting times) and subsequently rate their QoE. The results are compared with those of a lab-based file download QoE study featuring an equivalent design. Our results show that crowdsourced QoE testing can also be applied to file downloads with a size of 10 MB, as the rating results are very similar to those obtained in the lab. However, beyond user reliability checks and filtering, we found the study design to be a highly critical element, as it exerted a strong influence on overall participant behavior. For this reason, we also present a discussion of valuable lessons learned in terms of test design and participant behavior.