Oliver Hohlfeld
RWTH Aachen University
Publications
Featured research published by Oliver Hohlfeld.
quality of multimedia experience | 2010
Thomas Zinner; Oliver Hohlfeld; Osama Abboud; Tobias Hossfeld
Video streaming applications are a major driver for the evolution of the future Internet. In this paper we introduce a framework for QoE management for video streaming systems based on the H.264/SVC codec, the scalable extension of H.264/AVC. A key feature is controlling the user-perceived quality of experience (QoE) by exploiting parameters offered by SVC. A proper design of such a control mechanism requires quantifying the main parameters that influence QoE. For this purpose, we conducted a measurement study and quantified the influence of i) video resolution, ii) scaling method, iii) video frame rate, and iv) video content type on the QoE by means of the SSIM and VQM full-reference metrics. Further, we discuss the trade-offs between these control knobs and their influence on the QoE.
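As an illustration of the kind of full-reference comparison mentioned above, the following minimal Python sketch scores a processed frame against its reference with SSIM. The file name and the downscale/upscale step are assumptions for illustration, not details from the paper.

```python
# Hypothetical illustration: compare a reference frame against a processed
# (downscaled and re-upscaled) frame using the SSIM full-reference metric.
import cv2
from skimage.metrics import structural_similarity as ssim

# Load the reference frame and simulate a lower-resolution representation.
reference = cv2.imread("reference_frame.png", cv2.IMREAD_GRAYSCALE)
h, w = reference.shape
downscaled = cv2.resize(reference, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
processed = cv2.resize(downscaled, (w, h), interpolation=cv2.INTER_LINEAR)

# Higher SSIM (max 1.0) means the processed frame is structurally closer
# to the reference, i.e., a smaller quality degradation.
score = ssim(reference, processed)
print(f"SSIM between reference and processed frame: {score:.3f}")
```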
international teletraffic congress | 2010
Gerhard Hasslinger; Oliver Hohlfeld
Traffic engineering and economical bandwidth provisioning are crucial for network providers in times of high competition in broadband access networks. We investigate the efficiency of caching as an option to shorten end-to-end paths and delays while at the same time reducing traffic loads. The share of cacheable content distributed over HTTP on the Internet has been increasing in recent years. In addition, the favourable effect of Zipf-like access patterns on caches is confirmed for currently popular web sites with user-generated content. Content delivery networks (CDNs) and peer-to-peer (P2P) networks distribute a major portion of IP traffic, with different implications for caching. P2P traffic is subject to long transport paths, although it is appropriate for caching in principle. CDNs are based on server infrastructures that allow for shorter paths on a global scale on top of network provider platforms. We give a brief overview of the options for content and network providers to deploy caches at different points in the interconnection, backbone, or aggregation networks. The main part of the work focuses on the analysis of replacement strategies with regard to Zipf-like and fixed or slowly varying access patterns. A comparative evaluation shows that least recently used (LRU) differs substantially from caching strategies based on access statistics in terms of the achievable hit rates.
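To make the comparison concrete, the sketch below contrasts an LRU cache with a statically filled cache holding the k most popular objects (a simple access-statistics-based strategy) under Zipf-like requests. Catalog size, cache size, and the Zipf exponent are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch (not the paper's simulator): LRU vs. static top-k hit rates
# under a Zipf-like request stream.
import numpy as np
from collections import OrderedDict

N, CACHE_SIZE, ALPHA, REQUESTS = 10_000, 100, 0.8, 200_000
rng = np.random.default_rng(42)

# Zipf-like popularity: object i is requested with probability ~ 1 / i^alpha.
ranks = np.arange(1, N + 1)
probs = ranks ** -ALPHA
probs /= probs.sum()
trace = rng.choice(N, size=REQUESTS, p=probs)

# LRU cache simulation.
lru, lru_hits = OrderedDict(), 0
for obj in trace:
    if obj in lru:
        lru_hits += 1
        lru.move_to_end(obj)
    else:
        lru[obj] = True
        if len(lru) > CACHE_SIZE:
            lru.popitem(last=False)

# Access-statistics baseline: always keep the k most popular objects.
top_k = set(range(CACHE_SIZE))   # objects 0..k-1 are the most popular
static_hits = sum(1 for obj in trace if obj in top_k)

print(f"LRU hit rate:          {lru_hits / REQUESTS:.3f}")
print(f"Static top-k hit rate: {static_hits / REQUESTS:.3f}")
```

With a fixed popularity distribution, the statistics-based cache typically achieves the higher hit rate, which mirrors the qualitative finding stated in the abstract.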
allerton conference on communication, control, and computing | 2010
Florin Ciucu; Oliver Hohlfeld; Pan Hui
The class of Gupta-Kumar results gives the asymptotic throughput in multi-hop wireless networks but cannot predict the throughput behavior in networks of typical size. This paper addresses the non-asymptotic analysis of the multi-hop wireless communication problem and provides, for the first time, closed-form results on multi-hop throughput and delay distributions. The results are non-asymptotic in that they hold for any number of nodes and also fully account for transient regimes, i.e., finite time scales, delays, and bursty arrivals. Their accuracy is supported by the recovery of classical single-hop results, and also by simulations from empirical data sets with realistic mobility settings. Moreover, for a specific network scenario and a fixed pair of nodes, the results confirm Gupta and Kumar's Ω(1/√(n log n)) asymptotic scaling law.
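For reference, the well-known Gupta-Kumar result for randomly placed nodes can be stated as follows; the symbol W for the per-node link capacity is notation introduced here, not taken from the abstract.

```latex
% Per-node throughput in a random multi-hop wireless network (Gupta-Kumar):
\lambda(n) = \Theta\!\left(\frac{W}{\sqrt{n \log n}}\right),
\quad\text{so the abstract's bound reads}\quad
\lambda(n) = \Omega\!\left(\frac{1}{\sqrt{n \log n}}\right).
```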
computer and communications security | 2014
Gianluca Stringhini; Oliver Hohlfeld; Christopher Kruegel; Giovanni Vigna
A spammer needs three elements to run a spam operation: a list of victim email addresses, content to be sent, and a botnet to send it. Each of these three elements is critical to the success of the spam operation: a good email list should be composed of valid email addresses, good email content should both convince the reader and evade anti-spam filters, and a good botnet should send spam efficiently. Given how critical these three elements are, actors specializing in each of them have emerged in the spam ecosystem. Email harvesters crawl the web and compile email lists, botmasters infect victim computers and maintain efficient botnets for spam dissemination, and spammers rent botnets and buy email lists to run spam campaigns. Previous research suggested that email harvesters and botmasters sell their services to spammers in a prosperous underground economy. No rigorous research has been performed, however, on understanding the relations between these three actors. This paper aims to shed some light on the relations between harvesters, botmasters, and spammers. By disseminating email addresses on the Internet, fingerprinting the botnets that contact these addresses, and looking at the content of the emails we receive, we can infer the relations between the actors involved in the spam ecosystem. Our observations can be used by researchers to develop more effective anti-spam systems.
acm special interest group on data communication | 2014
Thomas Krenc; Oliver Hohlfeld; Anja Feldmann
On March 17, 2013, an Internet census data set and an accompanying report were released by an anonymous author or group of authors. It created an immediate media buzz, mainly because of the unorthodox and unethical data collection methodology (i.e., exploiting default passwords to form the Carna botnet), but also because of the allegedly unprecedented scale of this census (even though legitimate census studies of similar and even larger sizes have been performed in the past). Given the unknown source of this released data set, little is known about it. For example, can it be ruled out that the data is faked? Or if it is indeed real, what is the quality of the released data? The purpose of this paper is to shed light on these and related questions and put the contributions of this anonymous Internet census study into perspective. Indeed, our findings suggest that the released data set is real and not faked, but that the measurements suffer from a number of methodological flaws and also lack adequate meta-data information. As a result, we have not been able to verify several claims that the anonymous author(s) made in the published report. In the process, we use this study as an educational example for illustrating how to deal with a large data set of unknown quality, hint at pitfalls in Internet-scale measurement studies, and discuss ethical considerations concerning third-party use of this released data set for publications.
acm special interest group on data communication | 2015
Yvonne Coady; Oliver Hohlfeld; James Kempf; Rick McGeer; Stefan Schmid
A distributed cloud connecting multiple, geographically distributed, and smaller datacenters can be an attractive alternative to today's massive, centralized datacenters. A distributed cloud can reduce communication overheads, costs, and latencies by offering nearby computation and storage resources. Better data locality can also improve privacy. In this paper, we revisit the vision of distributed cloud computing and identify different use cases as well as research challenges. This article is based on the Dagstuhl Seminar on Distributed Cloud Computing, which took place in February 2015 at Schloss Dagstuhl.
internet measurement conference | 2014
Oliver Hohlfeld; Enric Pujol; Florin Ciucu; Anja Feldmann; Paul Barford
Despite decades of operational experience and focused research efforts, standards for sizing and configuring buffers in network systems remain controversial. An extreme example of this is the recent claim that excessive buffering (i.e., bufferbloat) can severely impact Internet services. In this paper, we systematically examine the implications of buffer sizing choices from the perspective of factors impacting end-user experience. To assess user perception of application quality under various buffer sizing schemes, we employ Quality of Experience (QoE) metrics. We evaluate these metrics over a wide range of end-user applications (e.g., web browsing, VoIP, and RTP video streaming) and workloads in two realistic testbeds emulating access and backbone networks. The main finding of our extensive evaluations is that network workload, rather than buffer size, is the primary determinant of end-user QoE. Our results also highlight the relatively narrow conditions under which bufferbloat seriously degrades QoE, i.e., when buffers are oversized and persistently filled.
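As one example of the kind of QoE metric such a study relies on for VoIP, the hedged sketch below maps one-way delay and packet loss to a MOS estimate using a simplified ITU-T E-model (the common Cole-Rosenbluth approximation for G.711). The paper's exact metrics and parameters may differ.

```python
# Hedged sketch: simplified E-model mapping delay and loss to a VoIP MOS.
# Constants follow the widely cited Cole-Rosenbluth approximation for G.711.

def voip_mos(one_way_delay_ms: float, loss_fraction: float) -> float:
    """Estimate MOS for a G.711 call from one-way delay (ms) and loss (0..1)."""
    # Delay impairment Id: grows slowly, then sharply beyond ~177 ms.
    d = one_way_delay_ms
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Loss impairment Ie_eff for G.711 with random loss (Ie = 0, Bpl = 4.3).
    e = loss_fraction * 100
    i_e = 95 * e / (e + 4.3)
    # Transmission rating R and MOS mapping (ITU-T G.107).
    r = 93.2 - i_d - i_e
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Example: a lightly loaded path vs. a bloated, persistently full buffer.
print(voip_mos(20, 0.0))    # short delay, no loss  -> high MOS
print(voip_mos(400, 0.01))  # large queueing delay  -> noticeably lower MOS
```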
ieee international conference on cloud engineering | 2016
Martin Henze; Jens Hiller; Oliver Hohlfeld; Klaus Wehrle
Today's public cloud services suffer from fundamental privacy issues, as demonstrated, e.g., by the global surveillance disclosures. The lack of privacy in cloud computing stems from its inherent centrality. State-of-the-art approaches that increase privacy for cloud services either operate cloud-like services on users' devices or encrypt data prior to upload to the cloud. However, these techniques jeopardize advantages of the cloud such as the elasticity of processing resources. In contrast, we propose decentralized private clouds that allow users to protect their privacy and still benefit from the advantages of cloud computing. Our approach utilizes idle resources of friends and family to realize a trusted, decentralized system in which cloud services can be operated in a secure and privacy-preserving manner. We discuss our approach and substantiate its feasibility with initial experiments.
human computer interaction with mobile devices and services | 2015
Oliver Hohlfeld; André Pomp; Jó Ágila Bitsch Link; Dennis Guse
Gaze tracking is a common technique to study user interaction, but it is also increasingly used as an input modality. In this regard, computer-vision-based systems provide a promising low-cost realization of gaze tracking on mobile devices. This paper complements related work focusing on algorithmic designs by conducting two user studies that aim to i) independently evaluate EyeTab as a promising gaze tracking approach and ii) provide the first independent, use-case-driven evaluation of its applicability in mobile scenarios. Our evaluation elucidates the current state of mobile computer-vision-based gaze tracking and aims to pave the way for improved algorithms. To further foster this development, we release our source data as a reference database open to the public.
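To give a flavor of what a vision-based gaze tracker works with, the sketch below locates eye regions in a camera frame using OpenCV's bundled Haar cascade. This is illustrative preprocessing only and not the EyeTab algorithm itself; the camera index is an assumption.

```python
# Illustrative preprocessing, not EyeTab: detect eye regions in a frame, the
# typical first step before fitting the iris/limbus and estimating gaze.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)   # front-facing camera (device index assumed)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) box; a gaze tracker would go on to
    # locate the iris inside these regions and map it to a screen coordinate.
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(eyes)} eye region(s): {list(map(tuple, eyes))}")
```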
internet measurement conference | 2012
Oliver Hohlfeld; Thomas Graf; Florin Ciucu
This paper investigates the origins of the spamming process, specifically address harvesting on the web, by relying on an extensive measurement data set spanning more than three years. Concretely, we embedded more than 23 million unique spamtrap addresses in web pages. 0.5% of the embedded trap addresses received a total of 620,000 spam messages. Besides the scale of the experiment, the critical aspect of our methodology is the uniqueness of the issued spamtrap addresses, which enables the mapping of crawling activities to the actual spamming process. Our observations suggest that simple obfuscation methods are still effective for protecting addresses from being harvested. A key finding is that search engines are used as proxies, either to hide the identity of the harvester or to optimize the harvesting process.
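The core idea of unique spamtrap addresses can be sketched as follows: issue a fresh address for every page exposure, record the embedding context, and later map any spam received at that address back to the harvesting event. The domain, field names, and helper functions below are made up for illustration and are not the paper's implementation.

```python
# Minimal sketch of per-exposure spamtrap addresses (all names hypothetical).
import uuid
import datetime

TRAP_DOMAIN = "example-trap.net"   # placeholder domain
issued = {}                        # address -> embedding context

def issue_trap_address(page_url: str, visitor_ip: str) -> str:
    """Create a unique trap address and remember where it was exposed."""
    address = f"{uuid.uuid4().hex[:16]}@{TRAP_DOMAIN}"
    issued[address] = {
        "page": page_url,
        "visitor_ip": visitor_ip,
        "embedded_at": datetime.datetime.utcnow().isoformat(),
    }
    return address

def map_spam_to_harvest(recipient: str):
    """Given a spam recipient address, return the recorded harvesting context."""
    return issued.get(recipient)

# Example: one address per page visit; spam sent to it identifies the harvest.
addr = issue_trap_address("http://example.org/guestbook", "198.51.100.7")
print(addr, "->", map_spam_to_harvest(addr))
```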