Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Hausheer is active.

Publication


Featured research published by David Hausheer.


International Conference on Communications | 2005

PeerMart: the technology for a distributed auction-based market for peer-to-peer services

David Hausheer; Burkhard Stiller

P2P networks are becoming increasingly popular for a wide variety of applications going beyond pure file sharing. However, a commercial use of P2P technology is currently not possible as efficient and reliable market mechanisms are missing. This paper presents PeerMart, a distributed technology in support of a market for trading P2P services. PeerMart combines the economic efficiency of double auctions with the technical efficiency and resilience of structured P2P networks. The system is implemented on top of a redundant P2P infrastructure and is being evaluated with respect to scalability, efficiency, and reliability.
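
The abstract only outlines the mechanism, so the following is a minimal, hypothetical Python sketch of the double-auction matching that PeerMart builds on: buy and sell bids for a P2P service are matched while they still cross and settle at the midpoint price. All names, prices, and the function signature are illustrative assumptions, not PeerMart's actual interface.

```python
# Minimal double-auction matching sketch (illustrative; not PeerMart's actual API).
# Peers submit buy and sell bids for a P2P service; the highest buy bid is matched
# with the lowest sell bid as long as the two prices still cross.
def match_double_auction(buy_bids, sell_bids):
    """buy_bids / sell_bids: lists of (peer_id, price). Returns (buyer, seller, price) trades."""
    buys = sorted(buy_bids, key=lambda b: b[1], reverse=True)   # highest willingness to pay first
    sells = sorted(sell_bids, key=lambda s: s[1])               # cheapest offers first
    trades = []
    while buys and sells and buys[0][1] >= sells[0][1]:
        buyer, bid = buys.pop(0)
        seller, ask = sells.pop(0)
        trades.append((buyer, seller, (bid + ask) / 2))         # settle at the midpoint price
    return trades

# Example: two trades are possible; the buy bid of 3 does not cross any remaining ask.
print(match_double_auction([("p1", 10), ("p2", 7), ("p3", 3)],
                           [("p4", 4), ("p5", 6), ("p6", 9)]))
```

In PeerMart, such matching is carried out in a distributed fashion on top of a structured P2P overlay rather than by a central auctioneer.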


EWSDN '14 Proceedings of the 2014 Third European Workshop on Software Defined Networks | 2014

Position Paper: Software-Defined Network Service Chaining

Jeremias Blendin; Julius Rückert; Nicolai Leymann; Georg Schyguda; David Hausheer

Network service chaining allows composing services out of multiple service functions. Traditional network service functions include, e.g., firewalls, TCP optimizers, web proxies, or higher layer applications. Network service chaining requires flexible service function deployment models. Related work facilitating service chaining includes, e.g., the network service header proposal discussed at the IETF and the complementary work on network function virtualization (NFV) at ETSI. This position paper presents a high-level concept and architecture to enable service chaining using Software-Defined Networking (SDN), specifically OpenFlow, in a telecommunication environment. The paper discusses required functionalities, challenges, and testbed aspects to implement and test such an approach. Finally, the set of implemented service functions and management interfaces is highlighted to demonstrate the approach as a proof of concept for a selection of relevant use cases.
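
A rough idea of what SDN-based service chaining amounts to is sketched below in Python: a controller installs one forwarding rule per hop so that a matched flow traverses an ordered list of service functions before leaving the network. The rule format and the position tag are assumptions for illustration, not the paper's OpenFlow implementation.

```python
# Illustrative sketch of SDN-based service chaining (not the paper's implementation).
# A controller derives one forwarding rule per hop so that a matched flow visits an
# ordered list of service functions and is finally forwarded towards its destination.
def build_chain_rules(flow_match, chain, egress_port):
    """flow_match: dict describing the flow; chain: ordered list of (switch, service_function_port)."""
    rules = []
    for i, (switch, sf_port) in enumerate(chain):
        rules.append({
            "switch": switch,
            "match": dict(flow_match, chain_pos=i),   # hypothetical position tag, e.g. a VLAN-like label
            "action": {"output": sf_port},            # steer the flow to the next service function
        })
    # After the last service function, forward the flow towards its destination.
    rules.append({"switch": chain[-1][0],
                  "match": dict(flow_match, chain_pos=len(chain)),
                  "action": {"output": egress_port}})
    return rules

# Example chain: firewall behind port 3 of switch s1, then a web proxy behind port 5 of switch s2.
for rule in build_chain_rules({"src": "10.0.0.1", "dst": "198.51.100.7"},
                              [("s1", 3), ("s2", 5)], egress_port=1):
    print(rule)
```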


IEEE Transactions on Network and Service Management | 2015

An SDN-Based CDN/ISP Collaboration Architecture for Managing High-Volume Flows

Matthias Wichtlhuber; Robert Reinecke; David Hausheer

The collaboration of Internet service providers (ISPs) and content distribution network (CDN) providers was shown to be beneficial for both parties in a number of recent works. Influencing CDN edge server (surrogate) selection allows the ISP to manage the rising amount of traffic emanating from CDNs and to reduce the operational expenditures (OPEX) of its infrastructure, e.g., by preventing peered traffic. At the same time, including the ISP's hidden network knowledge in the surrogate selection process positively influences the quality of service a CDN provider can deliver. As a large amount of CDN traffic is video-on-demand traffic, this paper investigates the topic of CDN/ISP collaboration from the perspective of high-volume, long-lived flows. These types of flows are hardly manageable with state-of-the-art Domain Name System (DNS)-based redirection, as a reassignment of flows during the session is difficult to achieve. Consequently, varying loads on surrogates caused by flash crowds and congestion events in the ISP's network are hard to compensate for. This paper presents a novel approach promoting ISP and CDN collaboration based on a minimal deployment of software-defined networking switches in the ISP's network. The approach complements standard DNS-based redirection by allowing for a migration of high-volume flows between surrogates in the backend even if the communication has state information, such as Hypertext Transfer Protocol (HTTP) sessions. In addition to a proof-of-concept, the evaluation identifies factors influencing performance and shows large performance increases when compared to standard DNS-based redirection.
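
To make the flow-migration idea concrete, the following hedged Python sketch shows the kind of rewrite rule an SDN switch in the ISP's network could apply: the client keeps addressing the original surrogate, while the switch rewrites the destination to a less loaded surrogate so the HTTP session need not be torn down. The rule fields are assumptions for illustration, not the paper's actual rule set, which would also need the reverse rewrite for return traffic.

```python
# Hedged sketch of migrating a high-volume flow to another CDN surrogate via an SDN
# rewrite rule (field names are illustrative, not the paper's implementation).
def migration_rule(client_ip, old_surrogate_ip, new_surrogate_ip, out_port):
    return {
        "match":   {"ipv4_src": client_ip, "ipv4_dst": old_surrogate_ip},
        "actions": [
            {"set_field": {"ipv4_dst": new_surrogate_ip}},  # redirect to the less loaded surrogate
            {"output": out_port},
        ],
        "priority": 100,  # override the default DNS-assigned forwarding path
    }

# Return traffic would need a symmetric rule rewriting the source address back.
print(migration_rule("192.0.2.10", "203.0.113.5", "203.0.113.9", out_port=4))
```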


International Conference on Networking | 2005

PeerMint: decentralized and secure accounting for peer-to-peer applications

David Hausheer; Burkhard Stiller

P2P-based applications like file sharing or distributed storage benefit from the scalability and performance of completely decentralized P2P infrastructures. However, existing P2P infrastructures like Chord or Pastry are vulnerable to selfish and malicious behavior and currently provide little support for commercial applications. There is a need for reliable mechanisms that enable the commercial use of P2P technology while maintaining favorable scalability properties. PeerMint is a completely decentralized and secure accounting scheme which facilitates market-based management of P2P applications. The scheme applies a structured P2P overlay network to keep accounting information in an efficient and reliable way. Session mediation peers are used to minimize the impact of collusion among peers. A prototype has been implemented as part of a modular Accounting and Charging system to show PeerMint's practical applicability. Experiments were performed to provide evidence of the scheme's scalability and reliability.
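
The abstract does not specify the protocol, but the role of session mediation peers can be illustrated with a small, assumed Python sketch: several mediation peers hold redundant copies of an account balance, and an update only takes effect if a strict majority of them agree on the result, which limits the impact of colluding peers.

```python
# Illustrative majority-based account update (an assumption for illustration,
# not PeerMint's actual protocol). Each mediation peer reports the balance it holds;
# the update is accepted only if a strict majority computes the same new balance.
from collections import Counter

def apply_update(replica_balances, delta):
    """replica_balances: balances reported by the mediation peers; delta: amount to credit/debit."""
    proposed = [balance + delta for balance in replica_balances]
    value, votes = Counter(proposed).most_common(1)[0]
    if votes > len(proposed) // 2:          # strict majority agrees on the new balance
        return value
    raise ValueError("no majority among mediation peers; update rejected")

# Two honest replicas agree, one deviating replica is outvoted: the debit still succeeds.
print(apply_update([100, 100, 250], delta=-10))   # -> 90
```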


Computer Networks | 2003

The Cumulus Pricing Model as an adaptive framework for feasible, efficient, and user-friendly tariffing of internet services

Peter Reichl; David Hausheer; Burkhard Stiller

Within the intense debate on Quality-of-Service (QoS) management in the future Internet, charging for QoS-enabled services has assumed a crucial place in research, since business relations are the driving force for commercial service offerings. Economic management principles are applied in a number of traditional markets, but only recently have gained increasing attention in the control of Internet services and traffic with respect to economic efficiency, user acceptability, and technical feasibility. The present paper addresses these three main areas of current research in a holistic approach. Based on a generic time-scale model for Internet tariffs and three well-defined axioms of feasible Internet charging, the Cumulus Pricing Scheme (CPS) serves as a framework for pricing Internet services. CPS allows a well-balanced compromise between economic viability and technical feasibility, while relying on crucial user and provider points of view on acceptability and transparency of the pricing scheme. This dilemma is investigated with respect to a set of detailed aspects and their interdependencies, which are measured, simulated, and evaluated in a quantifiable manner. Service level specifications are investigated, and the results obtained show that CPS achieves the central role of an economic and scalable management tool for Internet traffic.
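
As a loose illustration of the cumulus idea behind CPS: a user contracts an expected traffic profile, the provider periodically compares measured usage against it and accumulates cumulus points, and a point total beyond a threshold triggers renegotiation of the contract. The thresholds, tolerance, and point values in the Python sketch below are assumptions for illustration, not the scheme as specified in the paper.

```python
# Simplified cumulus-point accumulation (tolerances and point values are illustrative
# assumptions, not taken from the paper).
def cumulus_points(contracted, measured, tolerance=0.1):
    """One point per period in which usage exceeds the contracted volume by more than the
    tolerance; periods well below the profile compensate a point."""
    points = 0
    for expected, actual in zip(contracted, measured):
        if actual > expected * (1 + tolerance):
            points += 1
        elif actual < expected * (1 - tolerance):
            points -= 1
    return points

points = cumulus_points(contracted=[10, 10, 10, 10], measured=[9, 14, 15, 10])
print(points, "-> renegotiate the tariff" if points >= 2 else "-> keep the contract")
```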


International Conference on Peer-to-Peer Computing | 2003

Token-based accounting and distributed pricing to introduce market mechanisms in a peer-to-peer file sharing scenario

David Hausheer; Nicolas Liebau; Andreas Mauthe; Ralf Steinmetz; Burkhard Stiller

We present a token-based accounting mechanism that alleviates the free riding problem in P2P networks. The approach is complemented by distributed pricing as a flexible and viable scheme to incite users to share valuable content and to efficiently balance requests among all peers based on economic decisions.
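
A minimal sketch of how token-based accounting discourages free riding is given below; the class, prices, and initial endowment are assumptions for illustration, not the mechanism defined in the paper.

```python
# Hedged sketch of token-based accounting against free riding (illustrative only).
# Peers earn tokens by serving content and must spend tokens to download, so peers
# that never contribute eventually run out of tokens.
class TokenAccount:
    def __init__(self, initial_tokens=5):
        self.tokens = initial_tokens

    def serve(self, price):              # credited for providing content to another peer
        self.tokens += price

    def download(self, price):           # debited when requesting content
        if self.tokens < price:
            raise PermissionError("not enough tokens: contribute content first")
        self.tokens -= price

free_rider = TokenAccount(initial_tokens=2)
free_rider.download(2)                   # the initial endowment allows one download...
try:
    free_rider.download(1)               # ...after which the free rider is blocked
except PermissionError as reason:
    print(reason)
```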


Journal of Network and Systems Management | 2015

Software-Defined Multicast for Over-the-Top and Overlay-based Live Streaming in ISP Networks

Julius Rückert; Jeremias Blendin; David Hausheer

The increasing amount of over-the-top (OTT) live streams and the lack of global network layer multicast support pose challenges for scalable and efficient streaming over the Internet. Content Delivery Networks (CDNs) help by delivering the streams to the edge of almost every Internet Service Provider (ISP) network in the world, but their reach usually also ends there. From there on, the streams have to be delivered to the clients using IP unicast, although an IP multicast functionality would be desirable to reduce the load on CDN nodes, transit links, and the ISP infrastructure. IP multicast is usually not available due to missing control and management features of the protocol. Alternatively, Peer-to-Peer (P2P) mechanisms can be applied to extend the overlay multicast functionality of the CDN towards the clients. Unfortunately, P2P only improves the situation for the CDN and makes it more challenging for the ISP, as even more unicast flows are generated between clients inside and outside the ISP network. To tackle this problem, a Software-Defined Networking-based cross-layer approach, called Software-Defined Multicast (SDM), is proposed in this paper, enabling ISPs to offer network layer multicast support for OTT and overlay-based live streaming as a service. SDM is specifically tailored towards the needs of P2P-based video stream delivery originating from outside the ISP network and can easily be integrated with existing streaming systems. Prototypical evaluations show significantly improved network layer transmission efficiencies when compared to other overlay streaming mechanisms, down to a level as low as for IP multicast, at linearly bounded costs.
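
The gain over per-client unicast can be illustrated with a small, assumed Python sketch: instead of one flow per client, each switch receives a single rule that duplicates the stream to all downstream ports with subscribed receivers. The rule format is a simplification, not the SDM prototype.

```python
# Illustrative network-layer multicast rules (a simplification, not the SDM prototype).
# One rule per switch duplicates the incoming stream to every port behind which a
# subscribed client or downstream switch is located.
def multicast_rules(stream_match, subscriptions):
    """subscriptions: dict mapping switch -> list of output ports with interested receivers."""
    return [{"switch": switch,
             "match": stream_match,
             "actions": [{"output": port} for port in ports]}   # one packet in, one copy per port out
            for switch, ports in subscriptions.items()]

rules = multicast_rules({"ipv4_dst": "203.0.113.42", "udp_dst": 5004},
                        {"s1": [2, 3], "s2": [1]})
for rule in rules:
    print(rule)
```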


IFIP'12 Proceedings of the 11th international IFIP TC 6 conference on Networking - Volume Part II | 2012

Quality adaptation in P2P video streaming based on objective QoE metrics

Julius Rückert; Osama Abboud; Thomas Zinner; Ralf Steinmetz; David Hausheer

The transmission of video data is a major part of traffic on today's Internet. Since the Internet is a highly dynamic environment, quality adaptation is essential to match user device resources with the streamed video quality. This can be achieved by applying mechanisms that follow the Scalable Video Coding (SVC) standard, which enables scalability of the video quality in multiple dimensions. In SVC-based streaming, adaptation decisions have long been driven by Quality of Service (QoS) metrics, such as throughput. However, these metrics do not match well the way human users perceive video quality. Therefore, in this paper, the classical SVC-based video streaming approach is extended to consider Quality of Experience (QoE) for adaptation decisions. The video quality is assessed using existing objective techniques with a high correlation to human perception. The approach is evaluated in the context of a P2P-based Video-on-Demand (VoD) system and shows that by making peers always favor layers with a high estimated QoE, but not necessarily high bandwidth requirements, the performance of the entire system can be enhanced in terms of playback delay and SVC video quality by up to 20%. At the same time, content providers are able to reduce their server costs by up to 60%, compared to the classical QoS-based approach.
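
The adaptation idea can be illustrated with a hedged Python sketch: among the SVC layers that fit the currently available bandwidth, the peer picks the one with the highest estimated QoE rather than simply the highest bitrate. The layer set and QoE scores below are invented for illustration; the paper derives such scores from objective QoE metrics.

```python
# QoE-driven SVC layer selection (illustrative sketch; layer set and scores are made up).
def select_layer(layers, available_kbps):
    """layers: dicts with 'id', 'bitrate_kbps', and 'estimated_qoe' (e.g. a MOS-like score)."""
    feasible = [layer for layer in layers if layer["bitrate_kbps"] <= available_kbps]
    if not feasible:
        return None                                  # not even the base layer fits
    return max(feasible, key=lambda layer: layer["estimated_qoe"])

layers = [
    {"id": "base",          "bitrate_kbps": 400,  "estimated_qoe": 2.8},
    {"id": "base+temporal", "bitrate_kbps": 700,  "estimated_qoe": 3.9},
    {"id": "base+spatial",  "bitrate_kbps": 1200, "estimated_qoe": 4.0},
]
print(select_layer(layers, available_kbps=900))      # picks the 700 kbps layer: best QoE that still fits
```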


Local Computer Networks | 2014

PowerPi: Measuring and modeling the power consumption of the Raspberry Pi

Fabian Kaup; Philip Gottschling; David Hausheer

An increasing number of households are connected to the Internet via DSL or cable, for which home gateways are required. Due to their large number, optimizing these gateways is a promising area for energy efficiency improvements. Since no power models for home gateways are currently available, the optimization of their power state is not possible. This paper presents PowerPi, a power consumption model for the Raspberry Pi, which is used as a substitute for conventional home gateways to derive the impact of typical hardware components on the energy consumption. The different power states of the platform are measured and a power model is derived, allowing the power consumption to be estimated based on CPU and network utilization only. The proposed power model estimates the power consumption with an RMSE of less than 3.3%, which is slightly larger than the maximum measurement error of 2.5%.
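
The kind of model the paper derives can be written down as a simple linear relation, as in the Python sketch below; the coefficients are placeholders, not the values measured in the paper.

```python
# Linear power model of the kind PowerPi derives: an idle baseline plus terms proportional
# to CPU utilization and network throughput. All coefficients are illustrative assumptions,
# not the measured values from the paper.
def estimate_power_w(cpu_util, net_mbps, p_idle=1.9, cpu_coeff=1.3, net_coeff=0.02):
    """cpu_util in [0, 1], net_mbps in Mbit/s; returns the estimated power draw in watts."""
    return p_idle + cpu_coeff * cpu_util + net_coeff * net_mbps

# Example: half-loaded CPU while forwarding 40 Mbit/s of traffic.
print(f"{estimate_power_w(cpu_util=0.5, net_mbps=40):.2f} W")
```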


Archive | 2008

Resilient Networks and Services

David Hausheer; Jürgen Schönwälder

This book constitutes the refereed proceedings of the Second International Conference on Autonomous Infrastructure, Management and Security, AIMS 2008, held in Bremen, Germany, in June 2008, under the auspices of IFIP. The 13 revised full papers presented together with 8 papers of the AIMS PhD workshop were carefully reviewed and selected from 33 submissions to the main conference and 12 submissions to the PhD workshop, respectively. The papers discuss topics such as autonomy, incentives and trust, overlays and virtualization, load balancing and fault recovery, network traffic engineering and analysis, and convergent behavior of distributed systems.

Collaboration


Dive into David Hausheer's collaborations.

Top Co-Authors

Julius Rückert, Technische Universität Darmstadt
Matthias Wichtlhuber, Technische Universität Darmstadt
Jeremias Blendin, Technische Universität Darmstadt
Fabian Kaup, Technische Universität Darmstadt
Ralf Steinmetz, Technische Universität Darmstadt
Christian Koch, Technische Universität Darmstadt
Leonhard Nobach, Technische Universität Darmstadt