
Publication


Featured research published by Stefan Wilk.


Local Computer Networks | 2015

An analysis of the YouNow live streaming platform

Denny Stohr; Tao Li; Stefan Wilk; Silvia Santini; Wolfgang Effelsberg

Video streaming platforms like Twitch.tv or YouNow have attracted the attention of both users and researchers in the last few years. Users increasingly adopt these platforms to share user-generated videos, while researchers study their usage patterns to learn how to provide new and better services.


ACM SIGMM Conference on Multimedia Systems | 2015

Video composition by the crowd: a system to compose user-generated videos in near real-time

Stefan Wilk; Stephan Kopf; Wolfgang Effelsberg

Composing high-quality movies requires talent and a director's lifelong learning. User-generated video defines a new era of video production in which non-professionals record videos and share them on platforms such as YouTube. As hiring professional directors is costly, our work focuses on replacing those directors with crowdsourcing. The proposed system allows users to record and stream live videos to servers on which crowd workers create a video mashup. A smartphone application for recording live video has been designed that supports composition in the crowd through a multi-modal analysis of the recording quality. The contributions of this work are fourfold: First, the proposed system demonstrates that composing a large number of video views can be achieved in near real time. Second, the system achieves video quality comparable to manual composition for user-generated video. Third, it offers insights on how to design real-time-capable crowdsourcing systems. Fourth, by leveraging multi-modal features that can already be evaluated during recording, the number of streams considered for presentation can be reduced.
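The pre-filtering step based on multi-modal recording features lends itself to a small illustration. The following Python sketch is only an assumption-laden example of that idea: the feature set, the weights, and the StreamSample type are invented here and do not reproduce the paper's implementation.

```python
# Hypothetical sketch: pre-filter concurrent user streams by a multi-modal
# quality score so that crowd workers only see the most promising views.
# Feature names and weights are illustrative assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class StreamSample:
    stream_id: str
    shake: float        # accelerometer variance, 0 (steady) .. 1 (very shaky)
    brightness: float   # mean luma, 0 .. 1
    audio_level: float  # normalized loudness, 0 .. 1

def quality_score(s: StreamSample) -> float:
    # Penalize shaking, reward reasonable exposure and audible audio.
    return 0.5 * (1.0 - s.shake) + 0.3 * s.brightness + 0.2 * s.audio_level

def select_for_crowd(samples: list[StreamSample], k: int = 4) -> list[str]:
    """Return the ids of the k best-scoring streams to present to workers."""
    ranked = sorted(samples, key=quality_score, reverse=True)
    return [s.stream_id for s in ranked[:k]]

if __name__ == "__main__":
    views = [
        StreamSample("cam-a", shake=0.1, brightness=0.7, audio_level=0.8),
        StreamSample("cam-b", shake=0.8, brightness=0.6, audio_level=0.9),
        StreamSample("cam-c", shake=0.2, brightness=0.4, audio_level=0.5),
    ]
    print(select_for_crowd(views, k=2))  # ['cam-a', 'cam-c']
```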


International Conference on Multimedia and Expo | 2014

The influence of camera shakes, harmful occlusions and camera misalignment on the perceived quality in user generated video

Stefan Wilk; Wolfgang Effelsberg

Video sharing sites such as YouTube are prominent examples of the increasing desire of private camera owners to record, save, and share their real-life experiences as videos. Whereas professional video productions use advanced camera equipment in a controlled environment with skilled cameramen, User-Generated Video (UGV) differs significantly in terms of equipment, skill, and thus video quality. In this work we focus on a detailed analysis of the effects of camera shakes, harmful occlusions, and possible misalignment between the recording camera and the scene on the resulting video quality. In contrast, most of the video quality discussion in the literature has focused on encoding artifacts and other compression problems. Our data was systematically gathered using large crowdsourcing experiments over three genres of video and was validated in a controlled lab setting. Our results show that even minimal camera shaking, as well as occlusions, leads to a significant reduction in the perceived video quality.


Proceedings of the 8th International Workshop on Mobile Video | 2016

Leveraging transitions for the upload of user-generated mobile video

Stefan Wilk; Roger Zimmermann; Wolfgang Effelsberg

A recent trend in user-generated content production is the broadcasting of live video streams from mobile devices. Several upload protocols have been proposed to support the live transmission of user-generated video. Their performance depends on the environmental conditions, e.g., the mobility of users, the network conditions, or the popularity of the streams. Thus, we propose a novel mobile broadcasting framework that switches between different upload protocols during the runtime of the application. Our goal is to use the protocol performing best under the given application requirements and environmental conditions. If the requirements or the conditions change, the system dynamically assesses whether the protocol currently in use is still the most appropriate one for streaming. In case a superior protocol is available, the system transitions to the new protocol. By leveraging such transitions for video upload protocols, we achieve superior overall performance under changing network conditions in comparison to any single upload protocol.
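The core idea of runtime protocol transitions can be sketched as a utility-driven selection with a switching margin. The Python below is a minimal illustration under assumed protocol names and an assumed utility model; it is not the framework described in the paper.

```python
# Hypothetical sketch of runtime protocol selection with a transition margin.
# Protocol names, the utility model, and the margin are assumptions for
# illustration only.
from typing import Callable, Dict

Conditions = Dict[str, float]  # e.g. {"bandwidth_kbps": 900, "rtt_ms": 80}

# Map each candidate upload protocol to a utility estimator that scores it
# under the current conditions (higher is better).
UTILITY: Dict[str, Callable[[Conditions], float]] = {
    "chunked-http": lambda c: 0.8 * min(c["bandwidth_kbps"] / 1000, 1.0) - 0.001 * c["rtt_ms"],
    "rtmp":         lambda c: 1.0 * min(c["bandwidth_kbps"] / 1500, 1.0) - 0.0005 * c["rtt_ms"],
}

def maybe_transition(active: str, cond: Conditions, margin: float = 0.1) -> str:
    """Switch only if a candidate beats the active protocol by `margin`,
    so small metric fluctuations do not trigger constant transitions."""
    best = max(UTILITY, key=lambda name: UTILITY[name](cond))
    if best != active and UTILITY[best](cond) > UTILITY[active](cond) + margin:
        return best
    return active

current = "chunked-http"
current = maybe_transition(current, {"bandwidth_kbps": 2500, "rtt_ms": 40})
print(current)  # 'rtmp': under these illustrative numbers the margin is exceeded
```

A margin (hysteresis) of this kind is one simple way to keep measurement noise from causing back-and-forth transitions.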


2015 International Conference and Workshops on Networked Systems (NetSys) | 2015

The potential of social-aware multimedia prefetching on mobile devices

Stefan Wilk; Julius Rückert; Timo Thräm; Christian Koch; Wolfgang Effelsberg; David Hausheer

The access to Online Social Networks (OSN) and to media shared over these platforms accounts for around 20% of today's mobile Internet traffic. For mobile device users, access to media content and specifically videos is still challenging and costly. Mobile contracts usually have a data cap, and connection quality can vary greatly depending on the cellular network coverage. Prefetching mechanisms that fetch content items in advance, while the mobile device is connected to a WiFi network, have a high potential to address these problems. Yet, such a mechanism can only be effective if relevant content can be predicted with high accuracy. Therefore, in this paper, an analysis of content properties and their potential for prediction is presented. An initial user study with 14 Facebook users running an app on their mobile devices was conducted. The results show that video consumption is very diverse across users. This work discusses the evaluation setup, the data analysis, and their potential for defining an effective prefetching algorithm.


International Conference on Multimedia and Expo | 2012

Bringing Videos to Social Media

Stephan Kopf; Stefan Wilk; Wolfgang Effelsberg

Although the importance of video sharing and of social media is increasing from day to day, a full integration of videos into social media has not been achieved yet. We have developed a system that maps the concept of hypervideo, which allows objects in a video to be annotated, to social media. We define this combination as social video, which simultaneously allows a large number of users to contribute to the content of a video. Users can annotate video objects by adding images, text, other videos, Web links, or even communication topics. An integrated chat system allows users to communicate with friends and to link these topics to distinct objects in the video. We analyze the technical functionality and the user acceptance of our social video system in detail. Thanks to the integration with the social network Facebook, more than 12,000 users have already accessed our system.
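A minimal data model helps to picture what such a social video annotation could look like. The sketch below is hypothetical: the field names and attachment kinds are assumptions for illustration, not the system's actual schema.

```python
# Hypothetical model of a "social video" annotation: an attachment (text,
# image, link, or chat topic) anchored to a video object over a time range.
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Annotation:
    author: str
    kind: Literal["text", "image", "video", "link", "chat_topic"]
    payload: str        # text body, media URL, or chat topic id
    start_s: float      # annotation visible from this playback time ...
    end_s: float        # ... until this playback time

@dataclass
class VideoObject:
    label: str          # e.g. "speaker", "product on the table"
    annotations: List[Annotation] = field(default_factory=list)

def visible_annotations(obj: VideoObject, t: float) -> List[Annotation]:
    """Annotations attached to this object that overlap playback time t."""
    return [a for a in obj.annotations if a.start_s <= t <= a.end_s]

speaker = VideoObject("speaker")
speaker.annotations.append(Annotation("alice", "link", "https://example.org/bio", 12.0, 45.0))
print([a.payload for a in visible_annotations(speaker, 20.0)])  # ['https://example.org/bio']
```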


International Symposium on Multimedia | 2015

On Influencing Mobile Live Video Broadcasting Users

Stefan Wilk; Dimitri Wulffert; Wolfgang Effelsberg

A recent trend in user-generated video is the broadcasting of live video from mobile phones. Mobile broadcasting platforms such as YouNow and Periscope have embraced this trend and attract many thousands of concurrent viewers. Amateur-produced mobile live video, however, often suffers from the limited duration of the recordings. Usually, the live recordings do not cover entire events. To address this problem, our approach investigates how to increase the recording duration by integrating features that influence user behavior, including gamification.


ACM Multimedia | 2016

RT-VQM: real-time video quality assessment for adaptive video streaming using GPUs

Matthias Wichtlhuber; Gregor Wicklein; Stefan Wilk; Wolfgang Effelsberg; David Hausheer

Adaptive streaming systems are gaining relevance for streaming services. In these systems, the same video is offered to clients in multiple quality versions for adaptation during playback. However, optimizing adaptation in a Quality of Experience (QoE) centric way is difficult. Current systems maximize bit rate, ignoring that different types of adaptation (resolution, frame rate, quantization) correlate differently and in a non-linear way with users' perception. User-validated video quality metrics can provide precise quality information. However, measurements of state-of-the-art metrics show either high computational intensity or weak correlation with subjective tests. This makes large-scale offline quality assessment processing-intensive, while real-time-constrained scenarios such as live streaming and video conferencing are hardly supportable. Consequently, this work presents the Real-Time Video Quality Metric (RT-VQM), a real-time, Graphics Processing Unit (GPU) supported version of the widely used Video Quality Metric (VQM). RT-VQM introduces efficient filtering operations, hardware-supported scaling, and high-performance feature pooling. The approach outperforms VQM by a factor of 30, thus enabling real-time assessment of up to 9 parallel video stream representations at up to High Definition (HD) 720p resolution and 30 fps.
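The real-time claim can be illustrated with back-of-the-envelope arithmetic: a metric that needs c seconds of compute per second of video keeps up with at most 1/c streams in real time. The cost figure below is an assumption chosen for illustration, not a measurement from the paper.

```python
# Rough real-time budget check: how many streams can a quality metric assess
# in parallel if it needs `cost_per_video_second` seconds of compute per
# second of one stream? Numbers are illustrative assumptions.
def max_realtime_streams(cost_per_video_second: float) -> int:
    return int(1.0 / cost_per_video_second)

vqm_cost = 3.3               # assumed: 3.3 s of compute per second of video
rtvqm_cost = vqm_cost / 30   # the paper reports a ~30x speedup

print(max_realtime_streams(vqm_cost))    # 0 -> cannot keep up with even one stream
print(max_realtime_streams(rtvqm_cost))  # 9 -> matches the order of the reported 9 streams
```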


Network and Operating System Support for Digital Audio and Video | 2015

VAS: a video adaptation service to support mobile video

Stefan Wilk; Denny Stohr; Wolfgang Effelsberg

Even though cellular networks offer ubiquitous Internet access for mobile devices, their throughput is often insufficient for the rising demand for mobile video. Classical video streaming approaches cannot cope with the bandwidth fluctuations common in those networks. As a result, adaptive approaches for video streaming have been proposed and are increasingly adopted on mobile devices. However, existing adaptive video systems often rely on available network resources alone. As video content properties have a large influence on how quality adaptations are perceived, we believe this is not sufficient. In this work, we thus present a support service for content-aware video adaptation on mobile devices. Based on the actual video content, the adaptation process is improved with respect to both the available network resources and the perception of the user. By leveraging the content properties of a video stream, the system is able to keep the video quality stable and at the same time reduce the network load.
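One way to picture a content-aware adaptation policy is a bitrate choice that still respects the measured throughput but damps visible down-switches during sensitive content. The sketch below is a hypothetical illustration; the thresholds, the sensitivity scale, and the function name are assumptions and do not describe the actual VAS design.

```python
# Hypothetical content-aware representation choice: a per-segment
# "sensitivity" hint (how visible a quality switch would be) damps
# down-switches when throughput has only slightly dropped.
def pick_bitrate(bitrates_kbps: list[int], throughput_kbps: float,
                 current_kbps: int, sensitivity: float) -> int:
    """Highest sustainable bitrate; during sensitive content (sensitivity
    close to 1) keep the current level unless throughput clearly collapsed."""
    sustainable = [b for b in sorted(bitrates_kbps) if b <= 0.9 * throughput_kbps]
    target = sustainable[-1] if sustainable else min(bitrates_kbps)
    if target < current_kbps and sensitivity > 0.7 and throughput_kbps > current_kbps * 0.8:
        return current_kbps  # tolerate a small deficit rather than a visible switch
    return target

ladder = [400, 800, 1500, 3000]
print(pick_bitrate(ladder, throughput_kbps=1400, current_kbps=1500, sensitivity=0.9))  # 1500
print(pick_bitrate(ladder, throughput_kbps=1400, current_kbps=1500, sensitivity=0.2))  # 800
```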


Consumer Communications and Networking Conference | 2015

Systematic, large-scale analysis on the feasibility of media prefetching in Online Social Networks

Thomas Paul; Daniel Puscher; Stefan Wilk; Thorsten Strufe

Huge quantities of videos are shared via Online Social Networks (OSN) such as Facebook and are watched on mobile devices. Internet connections via cellular networks (UMTS/LTE) consume the scarce resources of radio bandwidth and battery power. Prefetching videos in areas of WLAN availability has the potential to reduce power consumption compared to data transmission via cellular networks, and prefetching can help users avoid running into the traffic caps of their network providers. Furthermore, startup delays can be reduced. Social networks offer contextual information such as likes and comments, as well as social graph information, which can potentially be used to predict which content will be consumed in the near future. In this paper, we explore possibilities for predicting content consumption based on the number of likes, the number of comments, and the social graph distance. Our detailed analysis of the media access patterns of more than 700 Facebook users shows that media consumption does not solely depend on the number of likes or comments. Users tend to watch videos that are uploaded by close friends and family members. Furthermore, the time a video preview stays in the browser viewport before being clicked (pre-click delay) can be exploited to decrease startup delays.
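The prediction features discussed in the abstract (likes, comments, social graph distance) can be combined into a simple prefetch score. The following sketch is hypothetical: the weights, the log-damping of engagement, and the decision threshold are assumptions chosen to reflect the finding that uploader closeness matters more than raw engagement counts.

```python
# Hypothetical prefetch scoring sketch: uploader closeness dominates,
# engagement counts are log-damped and capped so a viral post from a
# stranger does not outrank a close friend's video.
import math

def prefetch_score(likes: int, comments: int, graph_distance: int) -> float:
    closeness = 1.0 / graph_distance  # 1 = direct friend, 2 = friend-of-friend, ...
    engagement = math.log1p(likes) + 2 * math.log1p(comments)
    return 0.8 * closeness + 0.2 * min(engagement / 10.0, 1.0)

def should_prefetch(likes: int, comments: int, graph_distance: int,
                    threshold: float = 0.5) -> bool:
    return prefetch_score(likes, comments, graph_distance) >= threshold

print(should_prefetch(likes=3,   comments=1,  graph_distance=1))  # True  (close friend)
print(should_prefetch(likes=400, comments=50, graph_distance=3))  # False (distant uploader)
```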

Collaboration


Dive into Stefan Wilk's collaborations.

Top Co-Authors

Wolfgang Effelsberg (Technische Universität Darmstadt)
Denny Stohr (Technische Universität Darmstadt)
Julius Rückert (Technische Universität Darmstadt)
David Hausheer (Technische Universität Darmstadt)
Anja Klein (Technische Universität Darmstadt)
Björn Richerzhagen (Technische Universität Darmstadt)
Christian Koch (Technische Universität Darmstadt)
Hussein Al-Shatri (Technische Universität Darmstadt)
Mousie Fasil (Technische Universität Darmstadt)