Jan Coppens
Alcatel-Lucent
Publications
Featured research published by Jan Coppens.
Computer Communications | 2006
Tim Wauters; Jan Coppens; Filip De Turck; Bart Dhoedt; Piet Demeester
The recent introduction of Content Distribution Networks (CDNs) enhances the delivery of high-quality multimedia content to end users. In a CDN architecture, the content is replicated to so-called surrogate servers, generally at the edge of the transport network, to improve the quality of service (QoS) of streaming multimedia delivery services. By using peer-to-peer (P2P) technologies, these edge servers can co-operate and provide a more scalable and robust service in a self-organizing CDN. In this paper, we propose a set of distributed replica placement algorithms (RPAs), based on an Integer Linear Programming (ILP) formulation of the centralized content placement problem. These algorithms further enhance the CDN performance by optimizing the network and server load, reducing network delays and avoiding congestion. Although the proposed algorithms are designed for and tested on different network topologies, we focus on robust ring-based CDNs in this study. The optimal content placement on such a topology can be calculated analytically and used for comparison.
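Centralized content placement problems of this kind are typically expressed as facility-location-style ILPs. The following is a generic sketch with illustrative symbols (transport cost c_ij, replication cost f_j, demand d_i, capacity C_j), not the authors' exact formulation:

```latex
\begin{align*}
\min \quad & \sum_{i \in I}\sum_{j \in J} c_{ij}\, x_{ij} + \sum_{j \in J} f_j\, y_j \\
\text{s.t.} \quad & \sum_{j \in J} x_{ij} = 1 \quad \forall i \in I
  && \text{(each client $i$ is assigned a server)} \\
& x_{ij} \le y_j \quad \forall i \in I,\ j \in J
  && \text{(only servers holding a replica serve clients)} \\
& \sum_{i \in I} d_i\, x_{ij} \le C_j\, y_j \quad \forall j \in J
  && \text{(server capacity)} \\
& x_{ij},\, y_j \in \{0,1\}
\end{align*}
```

Here y_j decides whether surrogate j stores a replica and x_ij assigns client i to surrogate j; the distributed RPAs in the paper approximate solutions to a formulation of this shape.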
Next Generation Internet | 2005
Tim Wauters; Jan Coppens; Bart Dhoedt; Piet Demeester
The concept of content distribution networks (CDNs) has recently been introduced to enhance the delivery of bandwidth-intensive multimedia content to end users. In a CDN architecture, the content is replicated from the origin server to so-called surrogate servers at the edge of the Internet, to improve the quality of service and optimise network bandwidth usage. The introduction of peer-to-peer (P2P) architectures, where all nodes fundamentally play equal roles, enables self-organisation of the CDN and automatic recovery in case of node failures. To optimise the distribution of the content over the different surrogate servers, replica placement algorithms (RPAs) have been developed. In this paper, we present two distributed RPAs for CDNs. We demonstrate that they further improve CDN performance by reducing the server load and the bandwidth usage. The introduction of link costs allows these algorithms to additionally support load balancing on the network links.
Network Operations and Management Symposium | 2006
Jan Coppens; Tim Wauters; F. De Turck; Bart Dhoedt; Piet Demeester
Content distribution networks (CDN) have been increasingly used to deliver bandwidth-intensive multimedia content to a large number of users. In a CDN, the content is replicated from the origin server to so-called surrogate servers in order to improve the quality of service experienced by the end users and decrease the network load. However, despite the promising concept, current centralized and distributed CDN architectures lack placement and retrieval algorithms that are both scalable and provide a close-to-optimal placement. In this article, we propose a novel replica placement algorithm called COCOA (cooperative cost optimization algorithm), suited for a self-organizing hybrid CDN architecture. Our results show that COCOA achieves a performance comparable to the less scalable centralized algorithms, while maintaining the benefits of distributed approaches. In contrast to the more common off-line content replication and management strategies, on-line replication in a self-optimizing CDN puts an additional strain on the network. We explore techniques to control this traffic and study its implications on the performance of the CDN. Because in a CDN content is replicated to geographically distributed surrogate servers, one of the main benefits is its ability to recover from network failures and increase the availability of content during flash crowds. As illustrated in this article, we succeed in making the CDN more robust by effectively reducing the convergence time of the network after the occurrence of such disruptive events.
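Cost-driven local placement decisions, of the kind a distributed RPA makes at each surrogate, can be sketched as a greedy knapsack-style choice. This is only an illustration of the general idea, not the COCOA algorithm itself; the item format and cost model are assumptions:

```python
def choose_replicas(items, capacity):
    """Greedily fill a surrogate's cache with the items whose benefit
    (saved transport cost if stored locally) per unit of storage is
    highest. Illustrative sketch only -- not COCOA from the article.

    items: list of dicts with "name", "size", "benefit" keys.
    capacity: total storage available at this surrogate.
    """
    # Rank items by benefit density, best first.
    ranked = sorted(items, key=lambda it: it["benefit"] / it["size"],
                    reverse=True)
    chosen, used = [], 0
    for it in ranked:
        if used + it["size"] <= capacity:  # take it if it still fits
            chosen.append(it["name"])
            used += it["size"]
    return chosen


catalog = [
    {"name": "a", "size": 2, "benefit": 10},   # density 5.0
    {"name": "b", "size": 5, "benefit": 10},   # density 2.0
    {"name": "c", "size": 4, "benefit": 16},   # density 4.0
]
```

With capacity 6, the sketch picks "a" then "c"; "b" no longer fits. A cooperative algorithm would additionally exchange cost information with neighbouring surrogates before each local decision.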
International Conference on Communications | 2004
Jan Coppens; Stijn De Smet; Steven Van den Berghe; Filip De Turck; Piet Demeester
Because of the ever-increasing popularity of the Internet, network monitoring has become mission-critical to guarantee the operation of IP networks, e.g. to detect network failures and stop intrusion attempts. A majority of these monitoring tasks require only a small subset of all passing packets, which share some common properties such as identical header fields or similar patterns in their data. In addition to the increasing network speed, many of these tasks are becoming very complex. In order to capture only the useful packets, these applications need to evaluate a large set of expressions. In this paper, we present a platform-independent filter and pattern matcher optimization algorithm, which reduces the required number of evaluated expressions. The performance of the algorithm is validated both analytically and by means of a high-speed monitoring system.
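One way to reduce the number of evaluated expressions, in the spirit of the optimization the abstract describes, is to factor a predicate shared by many filters so it is tested once per packet instead of once per filter. The packet representation and filter format below are illustrative assumptions, not the paper's implementation:

```python
def factor_filters(filters):
    """Given (shared_predicate, specific_predicate) pairs that all
    share the same first predicate, return the shared test once plus
    the list of specific tests. Illustrative sketch only."""
    shared = filters[0][0]
    assert all(f[0] is shared for f in filters), "predicates must be shared"
    return shared, [f[1] for f in filters]


def match(packet, shared, specifics):
    """Return the indices of matching filters, evaluating the shared
    predicate only once per packet (the saving being illustrated)."""
    if not shared(packet):
        return []
    return [i for i, pred in enumerate(specifics) if pred(packet)]


# Two filters that both require TCP, differing only in destination port.
is_tcp = lambda pkt: pkt["proto"] == "tcp"
filters = [
    (is_tcp, lambda pkt: pkt["dport"] == 80),
    (is_tcp, lambda pkt: pkt["dport"] == 443),
]
shared, specifics = factor_filters(filters)
```

For a non-TCP packet the factored set performs one predicate evaluation instead of two; with many filters sharing a header-field test, the saving grows accordingly.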
International Symposium on Computers and Communications | 2005
Jan Coppens; Tim Wauters; F. De Turck; Bart Dhoedt; Piet Demeester
Considerable research has been devoted to (i) content distribution, (ii) traffic engineering, (iii) network monitoring and (iv) service enabling platforms. However, the integration of these four individual concepts in a single platform has not yet been studied in sufficient detail. In this paper we present an architecture for such a robust content delivery service. The combination of distributed replication of videos and multi-source traffic engineering tackles specific problems such as congested network parts, overloaded servers and the occurrence of flash crowds. Contrary to most existing systems, the content placement and retrieval algorithms in the presented CDN obtain precise network state information from an integrated monitoring system, allowing even higher efficiency. To validate the performance of the CDN, an exact placement ILP formulation and various RPA heuristics are implemented and simulated.
Future Generation Computer Systems | 2003
S. Van den Berghe; P. Van Heuven; Jan Coppens; F. De Turck; Piet Demeester
This article discusses an architecture using monitoring feedback as an assisting factor for delivering QoS on packet-based networks. The handling of this feedback is done in an automated way, through the use of a policy-based management architecture. For this, a formal model for describing data plane and measurement objects was translated into an XML-based configuration language. On top of this, a proof-of-concept management architecture was developed and evaluated, using both a modified network simulator and enhanced Linux prototype routers.
1st Home Networking Conference | 2008
Wouter Haerick; Nico Goeminne; Jan Coppens; Filip De Turck; Bart Dhoedt
Open service platforms like the OSGi platform offer a standard, scalable way for service providers to remotely deploy their services inside many residential networks. However, the lack of control by WAN service providers over the home environment, together with overly complex end-user policy configuration, hinders widespread deployment of services into the home. Several architectures have been presented for next-generation home networks, coping with the deployment, discovery and run-time control of residential services in order to enforce service levels. However, to evolve towards true collaboration scenarios, where a service from one service provider can interact with a service from another service provider without configuration inconvenience, or where a service from one service provider can co-exist with an identical service from another provider on the same device, proper security and policy configuration needs to be addressed. This paper therefore contributes to the already presented architectures by discussing secure remote policy configuration in a multi-service-provider environment. A security framework based on the OSGi specification is proposed, which restricts the control untrusted service providers have over other services. The strength of the framework lies in its generic XACML-compliant policy configuration module and its compatibility with existing services. This makes the framework easy to adopt for remote configuration providers, which allows service providers to delegate configuration support to a service aggregation provider.
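The core of an XACML-style policy decision, as used by the configuration module the abstract mentions, is matching a (subject, action, resource) request against ordered rules with a default-deny fallback. The rule format and first-applicable combining below are a minimal illustrative sketch, not the XACML schema or the paper's framework:

```python
def decide(rules, request):
    """Return the effect of the first rule matching the request
    (first-applicable combining); attributes absent from a rule act
    as wildcards. Defaults to "deny" when no rule matches.
    Illustrative sketch of an XACML-style decision, not XACML itself."""
    for rule in rules:
        if all(rule.get(attr) in (None, request[attr])
               for attr in ("subject", "action", "resource")):
            return rule["effect"]
    return "deny"  # default-deny keeps untrusted providers restricted


# Hypothetical policy: provider-a may configure its own service, but
# is explicitly denied any action on service-b.
rules = [
    {"subject": "provider-a", "action": "configure",
     "resource": "service-a", "effect": "permit"},
    {"subject": "provider-a", "resource": "service-b", "effect": "deny"},
]
```

In an OSGi setting, a decision function like this would sit between the remote configuration interface and the service registry, so that one provider's bundle cannot reconfigure another provider's services unless a rule permits it.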
Annales Des Télécommunications | 2007
Jan Coppens; Stijn De Smet; Steven Van den Berghe; Filip De Turck; Piet Demeester
Effective network monitoring is vital for a growing number of control and management applications typically found in present-day networks. The ever-increasing link speeds and the complexity of monitoring applications' needs have exposed severe limitations of existing monitoring techniques. A majority of the current monitoring tasks require only a small subset of all observed packets, which share some common properties such as identical header fields or similar patterns in their data. In order to capture only these useful packets, a large set of expressions needs to be evaluated. This evaluation should be done as efficiently as possible when monitoring multi-gigabit networks. To speed up this packet classification process, this article presents different packet filter optimization techniques. Complementary to existing approaches, we propose an adaptive optimization algorithm which dynamically reconfigures the filter expressions based on the currently observed traffic pattern. The performance of the algorithms is validated both analytically and by means of the implementation in a network monitoring framework. The various characteristics of the algorithms are investigated, including their performance in an operational network intrusion detection system.
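Dynamic reconfiguration based on observed traffic, as the adaptive algorithm above proposes, can be illustrated by keeping a hit counter per filter and periodically reordering a short-circuiting disjunction so the filters most likely to match recent traffic are evaluated first. The class below is an assumed minimal sketch, not the article's algorithm:

```python
class AdaptiveFilterSet:
    """Evaluate filters in order, short-circuiting on the first match,
    and reorder them by observed hit frequency. Illustrative sketch of
    traffic-adaptive filter reconfiguration, not the article's code."""

    def __init__(self, filters):
        self.filters = list(filters)      # callables: packet -> bool
        self.hits = [0] * len(self.filters)

    def match(self, packet):
        """Return True on the first matching filter, counting the hit."""
        for i, f in enumerate(self.filters):
            if f(packet):
                self.hits[i] += 1
                return True
        return False

    def reorder(self):
        """Move frequently matching filters to the front so that, on
        similar future traffic, fewer expressions are evaluated."""
        order = sorted(range(len(self.filters)),
                       key=lambda i: -self.hits[i])
        self.filters = [self.filters[i] for i in order]
        self.hits = [self.hits[i] for i in order]
```

After a burst of traffic matching the second filter, a call to `reorder()` promotes it to the front, so subsequent matching packets are accepted after a single evaluation instead of two.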
Archive | 2008
Pascal Justen; Christoph Stevens; Werner Liekens; Jan Coppens; Christele Bouchat; Willem Jozef Amaat Acke
Archive | 2009
Willem Jozef Amaat Acke; Christoph Stevens; Pascal Justen; Werner Liekens; Jan Coppens; Christele Bouchat