Wim Van de Meerssche
Ghent University
Publication
Featured research published by Wim Van de Meerssche.
Computer Networks | 2009
Steven Latré; Pieter Simoens; Bart De Vleeschauwer; Wim Van de Meerssche; Filip De Turck; Bart Dhoedt; Piet Demeester; Steven Van den Berghe; Edith Gilon-de Lumley
The recent emergence of multimedia services, such as Broadcast TV and Video on Demand over traditional twisted pair access networks, has complicated network management when it comes to guaranteeing a decent Quality of Experience (QoE) for each user. The large number of services and the wide variety of service specifics require QoE management on a per-user and per-service basis. This complexity can be tackled through the design of an autonomic QoE management architecture. In this article, the Knowledge Plane is presented as an autonomic layer that optimizes the QoE in multimedia access networks from the service originator to the user. It autonomously detects network problems (e.g. a congested link or bit errors on a link) and determines an appropriate corrective action (e.g. switching to a lower bit rate video or adding an appropriate number of FEC packets). The generic Knowledge Plane architecture is discussed, incorporating the triple design goal of an autonomic, generic and scalable architecture. The viability of an implementation using neural networks is investigated by comparing it with a reasoner based on analytical equations. Performance results of both reasoners are presented in terms of both QoS and QoE metrics.
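The following is a minimal sketch of the kind of monitor-and-decide loop such an autonomic layer performs: observe the state of a link, detect a problem, and map it to a corrective action. All names, thresholds and actions below are illustrative assumptions, not the paper's actual Knowledge Plane implementation.

```python
# Illustrative sketch of an autonomic detect-and-correct loop in the spirit
# of the Knowledge Plane described above. Thresholds and actions are
# assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class LinkState:
    loss_rate: float      # observed packet loss ratio on the link
    utilisation: float    # fraction of link capacity in use

def choose_corrective_action(state: LinkState) -> str:
    """Map an observed network problem to a corrective action."""
    if state.utilisation > 0.95:
        # Congested link: lowering the video bit rate relieves the bottleneck.
        return "switch to lower bit rate video"
    if state.loss_rate > 0.01:
        # Bit errors / random loss: add redundancy rather than reduce the rate.
        return "add FEC packets"
    return "no action"

if __name__ == "__main__":
    print(choose_corrective_action(LinkState(loss_rate=0.03, utilisation=0.6)))
    print(choose_corrective_action(LinkState(loss_rate=0.0, utilisation=0.99)))
```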
Journal of Network and Systems Management | 2013
Niels Bouten; Steven Latré; Wim Van de Meerssche; Bart De Vleeschauwer; Koen De Schepper; Werner Van Leekwijck; Filip De Turck
Over-The-Top (OTT) video services are becoming increasingly important in today's broadband access networks. While early OTT services only offered short, medium-quality videos, premium content such as high-definition full-feature movies and live video is now offered as well. For operators, who see the potential of providing Quality of Experience (QoE) assurance for increased revenue, this introduces important new network management challenges. Traditional network management paradigms are often not suited to ensuring QoE guarantees, as the provider does not have any control over the content's origin. In this article, we focus on the management of an OTT-based video service. We present a loosely coupled architecture that can be seamlessly integrated into an existing OTT-based video delivery architecture. The framework aims to resolve the network bottleneck that can occur from high peaks in requests for OTT video services. The proposed approach groups existing Hypertext Transfer Protocol (HTTP) based video connections so they can be multicast over an access network's bottleneck, and then splits them again to reconstruct the original HTTP connections. A prototype of this architecture is presented, which includes the caching of videos and incorporates retransmission schemes to ensure robust transmission. Furthermore, an autonomic algorithm is presented that intelligently selects which OTT videos need to be multicast by making a remote assessment of the cache state to predict the future availability of content. The approach was evaluated through both simulation and large-scale emulation and shows a significant scalability gain for the prototype compared to a traditional video delivery architecture.
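A rough sketch of the selection step is shown below: pick the most-requested videos that the downstream cache does not yet hold, up to the available multicast capacity. The popularity-based ranking and the cache-state check are assumptions for illustration; the paper's algorithm makes a remote assessment of the cache state to predict future content availability.

```python
# Hypothetical sketch of selecting which OTT videos to multicast over the
# access-network bottleneck; not the paper's actual selection algorithm.

def select_for_multicast(request_counts, cached, capacity):
    """Pick the most-requested, not-yet-cached videos up to the multicast capacity."""
    candidates = [v for v in request_counts if v not in cached]
    candidates.sort(key=lambda v: request_counts[v], reverse=True)
    return candidates[:capacity]

requests = {"movie_a": 120, "movie_b": 15, "live_news": 300}
print(select_for_multicast(requests, cached={"movie_b"}, capacity=2))
# -> ['live_news', 'movie_a']
```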
integrated network management | 2009
Steven Latré; Bart De Vleeschauwer; Wim Van de Meerssche; Simon Perrault; Filip De Turck; Piet Demeester; Koen De Schepper; Christian Hublet; Wouter Rogiest; Stefan Custers; Werner Van Leekwijck
The introduction of new added-value services such as IPTV has created great challenges for today's broadband DSL access networks, as these services have stringent quality demands. In an attempt to protect the quality of existing sessions, operators employ admission control mechanisms that limit the number of sessions transmitted in the network. Current admission control mechanisms require a traffic specification of each stream, in order to know beforehand how many resources need to be reserved. For variable bit rate videos, which are bursty by nature, resources are reserved using the peak rate of the video. This leads to under-utilisation of the network, as the reserved resources are over-dimensioned. We propose an autonomic measurement-based admission control algorithm, optimised for the protection of video services in multimedia access networks. The algorithm is based on the IETF Pre-Congestion Notification (PCN) mechanism and autonomically adjusts its parameters to the traffic characteristics of the video. The performance of this mechanism has been extensively evaluated in a packet-based network simulation environment. Tests show that the autonomic nature of the algorithm leads to better utilisation of the network while still avoiding any congestion.
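As a rough illustration of the measurement-based principle: intermediate nodes mark packets once the load exceeds an admissible rate, and the edge admits a new session only if the recently observed marked fraction stays below a threshold. The threshold value below is an assumption for illustration, not a configuration from the paper.

```python
# Hedged sketch of an edge-side admission decision in the spirit of PCN-based
# measurement-based admission control. The 5% threshold is illustrative.

def admit_new_session(marked_packets: int, total_packets: int,
                      marking_threshold: float = 0.05) -> bool:
    """Admit a new session only if the congestion-marked fraction is low."""
    if total_packets == 0:
        return True
    marked_fraction = marked_packets / total_packets
    return marked_fraction < marking_threshold

print(admit_new_session(marked_packets=2, total_packets=1000))    # True
print(admit_new_session(marked_packets=120, total_packets=1000))  # False
```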
Journal of Network and Systems Management | 2011
Steven Latré; Bart De Vleeschauwer; Wim Van de Meerssche; Koen De Schepper; Christian Hublet; Werner Van Leekwijck; Filip De Turck
The popularity of multimedia services has introduced important new challenges for broadband access network management. As these services are very prone to network anomalies such as packet loss and jitter, accurate admission control mechanisms are needed to avoid congestion. Traditional centralized admission control mechanisms often underperform in combination with multimedia services, as they fail to effectively characterize the amount of resources needed. Recently, measurement-based admission control mechanisms have been proposed, such as the IETF Pre-Congestion Notification (PCN) mechanism, where the network load is measured at each intermediate node and signaled to the edge, where the admittance decision takes place. In this article, we design a PCN-based admission control mechanism optimized for protecting bursty traffic such as video services, which is currently not studied in the PCN working group. We evaluate and identify the effect of PCN's configuration on the protection of bursty traffic. The proposed admission control mechanism features three main improvements over the original PCN mechanism: first, it uses a new measurement algorithm, which is easier to configure for bursty traffic. Second, it automatically adapts PCN's configuration based on the traffic characteristics of the current sessions. Third, it introduces differentiation between video quality levels to reach an admission decision per video quality level for each request. The mechanism has been extensively evaluated in a packet-switched simulation environment, which shows that the novel admission control mechanism is able to protect video traffic while maximizing the link utilization and avoiding packet loss.
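The per-quality-level decision can be pictured as follows: instead of a single accept/reject outcome, the edge returns the highest quality level whose additional load still fits on the measured link. The bit rates and levels below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a per-quality-level admission decision; levels and rates are
# hypothetical examples, not values from the paper.

QUALITY_LEVELS = [("HD", 8.0), ("SD", 3.0), ("LD", 1.0)]  # (name, Mbps)

def admissible_quality(measured_load_mbps: float, link_capacity_mbps: float):
    """Return the highest quality level whose extra load still fits the link."""
    for name, rate in QUALITY_LEVELS:
        if measured_load_mbps + rate <= link_capacity_mbps:
            return name
    return None  # block the request entirely

print(admissible_quality(measured_load_mbps=90.0, link_capacity_mbps=100.0))  # HD
print(admissible_quality(measured_load_mbps=98.0, link_capacity_mbps=100.0))  # LD
```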
global communications conference | 2009
Steven Latré; Bart De Vleeschauwer; Wim Van de Meerssche; Filip De Turck; Piet Demeester; Koen De Schepper; Christian Hublet; Wouter Rogiest; Stefan Custers; Werner Van Leekwijck
DSL aggregation networks are evolving into the standard platform for the delivery of multimedia services such as television and network-based personal video recording. These multimedia services introduce major challenges for network operators, as they are sensitive to packet loss. Therefore, admission control mechanisms are required to avoid congestion caused by admitting too many sessions. However, as multimedia services are often bursty, it is not possible to reserve a fixed amount of bandwidth in the network, since this policy will lead to either over-admittance or under-admittance. Recently, the IETF Pre-Congestion Notification (PCN) Working Group proposed a measurement-based admission control mechanism, where the network load is measured at each node and sessions are admitted or blocked at the edge of the network. In this paper, we extend and evaluate the PCN mechanism: we propose a new measurement algorithm for PCN, based on bandwidth metering, and determine configuration guidelines for the parameters of both the original token bucket based approach and the novel algorithm under different network conditions and traffic types. More specifically, we study PCN's applicability to protecting VBR video services, which is currently not studied in the PCN Working Group. Furthermore, we characterise the gain of PCN in comparison to a centralised admission control mechanism.
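For reference, a token bucket based marker of the kind the original PCN approach relies on can be sketched as below: tokens refill at the admissible rate, and packets arriving when the bucket is empty get marked. The bucket size and fill rate here are illustrative; choosing such parameters for bursty VBR video is exactly what the paper studies.

```python
# Minimal sketch of token-bucket style PCN metering/marking at an
# intermediate node. Parameter values are illustrative assumptions.

class TokenBucketMarker:
    def __init__(self, rate_bps: float, bucket_bytes: float):
        self.rate = rate_bps / 8.0      # token fill rate in bytes per second
        self.capacity = bucket_bytes    # maximum bucket depth in bytes
        self.tokens = bucket_bytes
        self.last_time = 0.0

    def packet(self, size_bytes: int, now: float) -> bool:
        """Return True if the packet should be PCN-marked (bucket exhausted)."""
        elapsed = now - self.last_time
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_time = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return False   # enough tokens: packet stays unmarked
        return True        # admissible rate exceeded: mark the packet

marker = TokenBucketMarker(rate_bps=4_000_000, bucket_bytes=50_000)
print(marker.packet(1500, now=0.001))  # False: bucket still has tokens
```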
network operations and management symposium | 2012
Koen De Schepper; Bart De Vleeschauwer; Chris Hawinkel; Werner Van Leekwijck; Jeroen Famaey; Wim Van de Meerssche; Filip De Turck
In recent years, the networking community has put significant research effort into identifying new ways to distribute content to multiple users in a better-than-unicast manner. Scalable delivery is all the more important now that video is the dominant traffic type and further growth is expected. To make content distribution scalable, in-network optimization functions such as caches are needed. The established transport layer protocols are end-to-end and do not allow optimizing transport below the application layer, hence the popularity of overlay application layer solutions located in the network. In this paper, we introduce a novel transport protocol, the Shared Content Addressing Protocol (SCAP), that allows in-network intermediate elements to participate in optimizing the delivery process, using only the transport layer. SCAP runs on top of standard IP networks, and SCAP optimization functions can be plugged into the network transparently as needed. As such, only transport protocol based intermediate functions need to be deployed in the network, and the applications can stay at the topological end points. We define and evaluate a prototype version of the SCAP protocol using both simulation and a prototype implementation of a transparent SCAP-only intermediate optimization function.
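One way to picture the core idea: if the transport layer addresses shared content (an identifier plus a byte range) rather than an opaque end-to-end byte stream, a transparent intermediate element can answer requests from its own store. The message fields and cache interface below are hypothetical and do not reflect the actual SCAP wire format.

```python
# Hypothetical sketch of a content-addressed transport request served by a
# transparent intermediate element; names and fields are assumptions only.

from dataclasses import dataclass

@dataclass
class ScapRequest:
    content_id: str   # identifies the shared object, not a connection
    offset: int       # first requested byte
    length: int       # number of requested bytes

class IntermediateCache:
    def __init__(self):
        self.store = {}  # content_id -> bytes

    def handle(self, req: ScapRequest, upstream):
        """Serve the range locally if possible, otherwise fetch and cache it."""
        if req.content_id not in self.store:
            self.store[req.content_id] = upstream(req.content_id)
        data = self.store[req.content_id]
        return data[req.offset:req.offset + req.length]

cache = IntermediateCache()
origin = lambda cid: b"x" * 10_000          # stand-in for the origin server
print(len(cache.handle(ScapRequest("video-42", 0, 1500), origin)))  # 1500
```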
International Journal of Network Management | 2013
Steven Latré; Wim Van de Meerssche; Dirk Deschrijver; Dimitri Papadimitriou; Tom Dhaene; Filip De Turck
The introduction of high-bandwidth demanding services such as multimedia services has resulted in important changes in how services on the Internet are accessed and which quality-of-experience requirements (i.e. a limited amount of packet loss, fairness between connections) are expected to ensure a smooth service delivery. With current congestion control mechanisms, misbehaving Transmission Control Protocol (TCP) stacks can easily achieve an unfair advantage over the other connections by not responding to Explicit Congestion Notification (ECN) warnings, sent by the active queue management (AQM) system when congestion in the network is imminent. In this article, we present an accountability mechanism that holds connections accountable for their actions through the detection and penalization of misbehaving TCP stacks, with the goal of restoring fairness in the network. The mechanism is specifically targeted at deployment in multimedia access networks, as these environments are most prone to fairness issues due to misbehaving TCP stacks (i.e. long-lived connections and a moderate connection pool size). We argue that a cognitive approach is best suited to cope with the dynamicity of the environment and therefore present a cognitive detection algorithm that combines machine learning algorithms to classify connections into well-behaving and misbehaving profiles. This is in turn used by a differentiated AQM mechanism that treats the well-behaving and misbehaving profiles differently. The performance of the cognitive accountability mechanism has been characterized both in terms of the accuracy of the cognitive detection algorithm and the overall impact of the mechanism on network fairness.
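The two-step structure can be sketched as follows: classify each connection from per-connection features, then let the AQM apply a harsher drop probability to the misbehaving profile. The toy rule, feature names and drop values below are assumptions for illustration; the paper uses trained machine learning classifiers rather than a fixed rule.

```python
# Hedged sketch of the classify-then-penalize structure of the accountability
# mechanism; features, thresholds and drop curves are illustrative only.

def classify(conn_features: dict) -> str:
    """Toy stand-in for the machine-learning classifier used in the paper."""
    # A stack that keeps its rate high despite many ECN marks looks misbehaving.
    if conn_features["ecn_marks"] > 50 and conn_features["rate_drop"] < 0.1:
        return "misbehaving"
    return "well-behaving"

def drop_probability(profile: str, queue_fill: float) -> float:
    """Differentiated AQM: penalize misbehaving connections more aggressively."""
    base = max(0.0, queue_fill - 0.5)      # start dropping above 50% queue fill
    return min(1.0, base * (4.0 if profile == "misbehaving" else 1.0))

conn = {"ecn_marks": 80, "rate_drop": 0.02}
profile = classify(conn)
print(profile, drop_probability(profile, queue_fill=0.8))
```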
network operations and management symposium | 2012
Niels Bouten; Steven Latré; Wim Van de Meerssche; Koen De Schepper; Bart De Vleeschauwer; Werner Van Leekwijck; Filip De Turck
The consumption of multimedia services over HTTP-based delivery mechanisms has recently gained popularity due to their increased flexibility and reliability. Traditional broadcast TV channels are now offered over the Internet in order to support Live TV for a broad range of consumer devices. Moreover, service providers can greatly benefit from offering external live content (e.g., YouTube, Hulu) in a managed way. Recently, HTTP Adaptive Streaming (HAS) techniques have been proposed in which video clients dynamically adapt their requested video quality level based on the current network and device state. Unlike linear TV, traditional HTTP- and HAS-based video streaming services depend on unicast sessions, leading to a network traffic load proportional to the number of multimedia consumers. In this paper, we propose a novel HAS-based video delivery architecture, which features intelligent multicasting and caching in order to considerably decrease the required bandwidth in a Live TV scenario. Furthermore, we discuss the autonomic selection of multicast content to support Video on Demand (VoD) sessions. Experiments were conducted in a large-scale, realistic emulation environment and compared with a traditional HAS-based media delivery setup using only unicast connections.
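For context, the client-side adaptation that HAS relies on can be sketched as: before each segment request, pick the highest quality level whose bit rate fits the recently measured throughput. The bit rate ladder and safety margin below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of HAS client-side quality selection; parameters are
# illustrative assumptions only.

BITRATES_KBPS = [400, 1000, 2500, 5000]   # available quality levels, ascending

def next_quality(measured_throughput_kbps: float, margin: float = 0.8) -> int:
    """Return the index of the highest quality level that fits the throughput budget."""
    budget = measured_throughput_kbps * margin
    level = 0
    for i, rate in enumerate(BITRATES_KBPS):
        if rate <= budget:
            level = i
    return level

print(next_quality(3200))  # -> 2 (2500 kbps fits within 80% of 3200 kbps)
```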
Journal of Communications and Networks | 2014
Farhan Azmat Ali; Pieter Simoens; Wim Van de Meerssche; Bart Dhoedt
Multimedia content is very sensitive to packet loss, and therefore multimedia streams are typically protected against it, either by supporting retransmission requests or by adding redundant forward error correction (FEC) data. However, the redundant FEC information introduces significant additional bandwidth requirements compared to the bitrate of the original video stream. Especially on wireless and mobile networks, bandwidth availability is limited and variable. In this article, an adaptive FEC (A-FEC) system is presented whereby the redundancy rate is dynamically adjusted to the packet loss, based on feedback messages from the client. We present a statistical model of our A-FEC system and validate the proposed system under different packet loss conditions and loss probabilities. The experimental results show that a bandwidth gain of 57-95% can be achieved compared with a static FEC approach.
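One way an adaptive FEC sender can size its redundancy from the client-reported loss rate is shown below: pick the smallest number of parity packets per block such that the block still decodes with high probability under an independent-loss model. The block size, loss model and target failure probability are illustrative assumptions; the paper derives a statistical model specific to its A-FEC scheme.

```python
# Sketch of loss-rate-driven FEC dimensioning under an i.i.d. loss assumption;
# parameters are illustrative and not taken from the paper.

from math import comb

def parity_packets(k: int, loss_rate: float, target_failure: float = 1e-3) -> int:
    """Smallest r so that a (k + r)-packet block fails to decode with prob <= target."""
    for r in range(0, k + 1):
        n = k + r
        # The block is lost if more than r of its n packets are dropped.
        p_fail = sum(comb(n, i) * loss_rate**i * (1 - loss_rate)**(n - i)
                     for i in range(r + 1, n + 1))
        if p_fail <= target_failure:
            return r
    return k

print(parity_packets(k=20, loss_rate=0.01))  # low loss  -> little redundancy
print(parity_packets(k=20, loss_rate=0.10))  # high loss -> more parity packets
```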