Sawsan Al Zahr
Télécom ParisTech
Publications
Featured research published by Sawsan Al Zahr.
Journal of Lightwave Technology | 2011
Siamak Azodolmolky; Jordi Perelló; Marianna Angelou; Fernando Agraz; Luis Velasco; Salvatore Spadaro; Yvan Pointurier; Antonio Francescon; Chava Vijaya Saradhi; Panagiotis C. Kokkinos; Emmanouel A. Varvarigos; Sawsan Al Zahr; Maurice Gagnaire; Matthias Gunkel; Dimitrios Klonidis; Ioannis Tomkos
Core optical networks using reconfigurable optical switches and tunable lasers appear to be on the road towards widespread deployment and could evolve into all-optical mesh networks in the near future. Accounting for the impact of physical-layer impairments in the planning and operation of all-optical (and translucent) networks is the main focus of the Dynamic Impairment Constraint Optical Networking (DICONET) project. The impairment-aware network planning and operation tool (NPOT), the main outcome of the DICONET project, is explained in detail in this paper. The key building blocks of the NPOT, namely the network description repositories, the physical-layer performance evaluator, the impairment-aware routing and wavelength assignment engines, the component placement modules, the failure-handling module, and the integration of the NPOT into the control plane, are the main contributions of this study. In addition, experimental results for the centralized and distributed control-plane integration schemes proposed in DICONET, together with the performance of failure handling in terms of restoration time, are presented.
International Conference on Computer Communications and Networks | 2006
Mohamed Ali Ezzahdi; Sawsan Al Zahr; Mohamed Koubaa; Nicolas Puech; Maurice Gagnaire
Over the last decade, numerous routing and wavelength assignment (RWA) algorithms have been developed for WDM optical network planning. Most of these algorithms neglect the feasibility of the obtained lightpaths. In this paper, we propose a new algorithm, called LERP (Lightpath Establishment with Regenerator Placement), that solves the RWA problem while guaranteeing the feasibility of the obtained solution. A lightpath is said to be admissible if the bit error rate (BER) at its destination node remains below a given threshold. When a lightpath is not admissible, one or more electrical regenerators may be placed along it. The LERP algorithm aims at minimizing the number of regenerators necessary to guarantee the quality of transmission along the lightpath. The originality of our approach lies in considering four physical-layer impairments simultaneously, namely chromatic dispersion, polarization mode dispersion, amplified spontaneous emission, and non-linear phase shift. The efficiency of the LERP algorithm is demonstrated via a numerical comparison with one of the alternative solutions proposed in the literature. Numerical simulations have been carried out in the context of the 18-node NSF network.
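The abstract does not detail LERP's internals; the Python sketch below only illustrates the general idea of BER-constrained regenerator placement along one candidate lightpath. The BER estimator is a crude placeholder, not the paper's impairment model, and all names and thresholds are assumptions.

```python
# Illustrative sketch only: greedy regenerator placement along one lightpath.
# The BER estimator below is a toy stand-in (it only grows with segment length);
# LERP actually combines chromatic dispersion, PMD, ASE noise, and non-linear
# phase shift, which the abstract does not detail.

BER_THRESHOLD = 1e-5  # assumed admissibility threshold

def segment_ber(hop_count):
    """Toy stand-in: BER degrades as the transparent segment grows."""
    return 1e-7 * (10 ** hop_count)

def place_regenerators(path, ber_threshold=BER_THRESHOLD):
    """Extend each transparent segment as far as the BER allows, inserting a
    regenerator at the last node where the signal is still admissible.
    Assumes every single hop is itself admissible."""
    regenerators = []
    segment_start = 0
    for i in range(segment_start + 1, len(path)):
        if segment_ber(i - segment_start) > ber_threshold:
            regenerators.append(path[i - 1])   # regenerate one node earlier
            segment_start = i - 1
    return regenerators

# Example on a 6-node path: with the toy BER model, a regenerator is placed every 2 hops.
print(place_regenerators(["A", "B", "C", "D", "E", "F"]))
```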
IEEE International Conference on Cloud Computing Technology and Science | 2011
Davide Tammaro; Elias A. Doumith; Sawsan Al Zahr; Jean-Paul Smets; Maurice Gagnaire
In Cloud environments, efficient resource provisioning and management are challenging because of the dynamic nature of the Cloud on the one hand, and the need to satisfy heterogeneous resource requirements on the other hand. In such dynamic environments, where end-users can arrive and leave the Cloud at any time, a Cloud service provider (CSP) should be able to make accurate decisions about scaling its data centers up or down while taking into account several utility criteria, e.g., the setup delay of virtual resources, the migration of existing processes, and the resource utilization. In order to satisfy both parties (the CSP and the end-users), an efficient and dynamic resource allocation strategy is mandatory. In this paper, we propose an original approach for dynamic resource allocation in a Cloud environment. Our proposal considers computing job requests characterized by their arrival and teardown times, as well as a predictive profile of their computing requirements during their activity period. Assuming prior knowledge of the predicted computing resources required by end-users, we propose and investigate several algorithms with different optimization criteria. However, prediction errors may occur, resulting in some cases in the drop of one or several computing requests. Our proposed algorithms are compared in terms of various performance parameters, including the rejection ratio, the dropping ratio, as well as the satisfaction of the end-users and the CSP.
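As a rough illustration of admission against a predicted profile, the sketch below checks whether a job's per-slot demand fits within the remaining capacity over its activity period; the time discretization, data layout, and greedy rule are assumptions, not the paper's algorithms.

```python
# Illustrative sketch only: admitting a job whose predicted resource profile
# must fit the data-center's remaining capacity over its activity period.
# Time is discretized into slots; all names and values are assumptions.

def fits(free_capacity, job_profile, arrival_slot):
    """free_capacity: free units per time slot;
    job_profile: predicted demand per slot from arrival to teardown."""
    end = arrival_slot + len(job_profile)
    if end > len(free_capacity):
        return False
    return all(free_capacity[arrival_slot + t] >= need
               for t, need in enumerate(job_profile))

def admit(free_capacity, job_profile, arrival_slot):
    """Greedy admission: reserve the predicted resources if the job fits,
    otherwise reject it (one possible criterion among several)."""
    if not fits(free_capacity, job_profile, arrival_slot):
        return False
    for t, need in enumerate(job_profile):
        free_capacity[arrival_slot + t] -= need
    return True

capacity = [8, 8, 8, 8]
print(admit(capacity, [4, 6, 2], arrival_slot=1), capacity)
```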
Optical Network Design and Modelling | 2010
Mayssa Youssef; Sawsan Al Zahr; Maurice Gagnaire
Translucency in WDM networks appears as a trade-off between the low cost of full transparency and the high signal quality provided by full opacity. On the one hand, transparent networks undergo various transmission impairments due to the optical components deployed in the network. On the other hand, opaque networks remain very expensive due to electrical 3R regeneration (re-amplifying, re-shaping, and re-timing) performed at each network node. Translucent networks use a sparse regeneration strategy in order to improve the optical signal budget. In translucent network design, the objective is to judiciously choose the regeneration sites in order to establish a set of traffic demands with an admissible quality of transmission at a minimized network cost. We address the problem of translucent network design by proposing a novel heuristic for routing, wavelength assignment, and regenerator placement. Our heuristic, called COR2P (Cross Optimization for RWA and Regenerator Placement), aims at minimizing both the number of required regenerators and the number of regeneration sites in the network. The originality of COR2P lies in its CapEx/OpEx perspective on network cost evaluation: capital expenditure refers to the network deployment cost, while operational expenditure refers to the network management and maintenance cost. We introduce an original cost function that contributes to the optimization of CapEx and OpEx. In this paper, we investigate the impact of different parameters introduced in our heuristic and cost function, such as the ratio of sites chosen a priori for regeneration and the limited size of the regenerator pools installed at such nodes. Our simulation results show that a trade-off between CapEx and OpEx can be achieved by judiciously adjusting these parameters.
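The abstract does not give the COR2P cost function itself; the sketch below only conveys the CapEx/OpEx intuition with a weighted sum of regenerator count (deployment-like cost) and active regeneration sites (management-like cost). The weights and form are assumptions for illustration.

```python
# Illustrative sketch only: a weighted CapEx/OpEx-style network cost, in the
# spirit described above, NOT the actual COR2P cost function.

def network_cost(regenerators_per_site, capex_per_regenerator=1.0,
                 opex_per_active_site=5.0):
    """regenerators_per_site: dict mapping node -> number of regenerators."""
    num_regenerators = sum(regenerators_per_site.values())                       # CapEx-like term
    num_active_sites = sum(1 for n in regenerators_per_site.values() if n > 0)   # OpEx-like term
    return (capex_per_regenerator * num_regenerators
            + opex_per_active_site * num_active_sites)

# Example: concentrating regenerators on fewer sites lowers the OpEx-like term.
print(network_cost({"A": 4, "B": 0, "C": 2}))   # 6 regenerators, 2 active sites
print(network_cost({"A": 2, "B": 2, "C": 2}))   # 6 regenerators, 3 active sites
```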
IEEE International Conference on Cloud Computing Technology and Science | 2013
Felipe Díaz-Sánchez; Sawsan Al Zahr; Maurice Gagnaire
Currently, Cloud brokers bring interoperability and portability of applications across multiple Clouds. In the future, Cloud brokers will offer services based on their knowledge of Cloud providers' infrastructures to automatically and cost-effectively overcome performance degradation. In this paper, we present a mixed-integer linear program (MILP) that provides a cost-effective placement across multiple Clouds. Our MILP formulation considers parameters of Cloud providers such as price, VM configuration, network latency, and provisioning time. We evaluate the cost-effectiveness of deploying a Cloud infrastructure on a single Cloud provider or across multiple Cloud providers using real prices and VM configurations. The results show that, in some cases, it may be cost-effective to distribute the infrastructure across multiple Cloud providers. We also propose three placement policies for faulty multi-Cloud scenarios. The best of these policies minimizes the cost of the Cloud infrastructure under fixed provisioning-time values.
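A toy MILP of this flavor can be written with the PuLP modelling library, as sketched below. This is not the paper's formulation: the provider prices, latency budget, and constraint set are invented solely to show how cost-minimizing VM placement across providers can be expressed.

```python
# Illustrative sketch only: a toy cost-minimizing VM placement across two
# providers, written with PuLP. All data and constraints are assumptions.
import pulp

providers = {"prov_a": 0.10, "prov_b": 0.08}      # assumed hourly VM price
latency   = {"prov_a": 30,   "prov_b": 90}        # assumed latency in ms
vms_needed, max_latency = 10, 60

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
x = {p: pulp.LpVariable(f"vms_{p}", lowBound=0, cat="Integer") for p in providers}

# Objective: total hourly cost of the placed VMs.
prob += pulp.lpSum(providers[p] * x[p] for p in providers)

# Demand constraint: place exactly the required number of VMs.
prob += pulp.lpSum(x[p] for p in providers) == vms_needed

# Crude latency constraint: forbid providers above the latency budget.
for p in providers:
    if latency[p] > max_latency:
        prob += x[p] == 0

prob.solve()
for p in providers:
    print(p, int(x[p].value()))
```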
Grid Economics and Business Models | 2012
Felipe Díaz Sánchez; Elias A. Doumith; Sawsan Al Zahr; Maurice Gagnaire
The Cloud computing paradigm offers the illusion of infinite resources accessible to end-users anywhere at any time. In such a dynamic environment, managing distributed heterogeneous resources is challenging. A Cloud workload is typically decomposed into advance-reservation and on-demand requests. Under advance reservation, end-users have the opportunity to reserve in advance the estimated resources required for the completion of their jobs without any further commitment. Thus, Cloud service providers can make better use of their infrastructure while provisioning the proposed services under determined policies and/or time constraints. However, estimating end-users' resource requirements is often error prone. Such uncertainties associated with job execution times and/or SLA satisfaction significantly increase the complexity of resource management. Therefore, appropriate resource management by Cloud service providers is crucial for harnessing the power of the underlying distributed infrastructure and achieving high system performance. In this paper, we investigate the resource provisioning problem for advance reservation under a Pay-as-you-Book pricing model. Our model handles the extra time required by some jobs at a higher price on a best-effort basis. However, satisfying these extra times may lead to several advance reservations competing for the same resources. We propose a novel economic agent responsible for managing such conflicts. This agent aims at maximizing the Cloud service provider's revenues while complying with SLA terms. We show that our agent achieves a higher return on investment compared to intuitive approaches that systematically prioritize reserved jobs or currently running jobs.
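The abstract does not specify the agent's decision rule; the sketch below only illustrates the kind of revenue-driven arbitration implied (neither always favoring reserved jobs nor always favoring running jobs). Prices, the SLA penalty, and the comparison itself are assumptions.

```python
# Illustrative sketch only: a revenue-driven tie-break between a running job
# asking for best-effort extra time and a reserved job about to start on the
# same resources. All rates and the rule are assumptions, not the paper's agent.

def arbitrate(extra_time_hours, premium_rate,
              reserved_hours, base_rate, sla_penalty):
    """Return which job a hypothetical revenue-maximizing agent would favor."""
    revenue_if_extend = extra_time_hours * premium_rate - sla_penalty  # delay reservation, pay SLA penalty
    revenue_if_preempt = reserved_hours * base_rate                    # stop extra time, honor the reservation
    return "extend running job" if revenue_if_extend > revenue_if_preempt else "serve reserved job"

print(arbitrate(extra_time_hours=2, premium_rate=3.0,
                reserved_hours=1, base_rate=2.0, sla_penalty=1.0))
```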
IEEE International Conference on Cloud Engineering | 2014
Felipe Díaz Sánchez; Sawsan Al Zahr; Maurice Gagnaire; Jean Pierre Laisné; Iain James Marshall
Cloud Brokers enable interoperability and portability of applications across multiple Cloud Providers. At the same time, emerging Cloud Providers are starting to support more and more unbundled Cloud Instance offerings. Thus, consumers may configure at will the amount of CPU, network bandwidth, memory, and hard disk capacity of their Cloud Instances. These facts enable the standardization of interoperable Cloud Instance configurations. In this paper, CompatibleOne is presented as an approach to delivering Cloud Computing as a commodity. To this end, the requirements for turning a product into a commodity have been identified and mapped onto the CompatibleOne architecture components. Our approach shows the practical feasibility of offering Cloud Computing as a commodity.
International Conference on Telecommunications | 2011
Sawsan Al Zahr; Elias A. Doumith; Maurice Gagnaire
Over the last decade, translucent WDM networks have appeared as a promising candidate for next-generation core networks. Using sparse regeneration techniques, translucent networks may achieve an attractive trade-off between the low cost of transparent networks and the quality of transmission guaranteed by fully opaque networks. On the one hand, deploying large-scale transparent networks is still a critical issue since transmission impairments arising from long-haul optical equipment may significantly limit the optical reach. On the other hand, opaque networks remain very expensive due to electrical regeneration performed at each network node. In this paper, we propose an original exact approach, based on an integer linear programming (ILP) formulation, to deal with the problem of translucent network design. Existing exact approaches rely on linear approximations of the signal degradation; here, we make use of a realistic estimate of the signal quality taking into account the simultaneous effect of four well-known transmission impairments. Moreover, and to the best of our knowledge, all existing approaches consider the problem of translucent network design assuming either permanent or semi-permanent lightpath demands. In this paper, we consider the problem of translucent network design under a dynamic but deterministic traffic pattern, i.e., scheduled lightpath demands (SLDs). In order to improve the scalability of our approach, we decompose the problem into two sub-problems: routing and regenerator placement, then wavelength assignment and regenerator placement. In the former, we place regenerators and route demands while assuming that the quality of transmission is independent of the wavelength value. In the latter, additional regenerators may be required to overcome the dependency of the quality of transmission on the wavelength value. Deployed regenerators may be shared among multiple non-concurrent SLDs. In doing so, we further shorten the gap between translucent and transparent network costs.
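Since SLDs have known activity periods, regenerator sharing among non-concurrent demands reduces to checking that their time windows do not overlap. The small sketch below illustrates that check; the tuple format is an assumption, not the paper's ILP encoding.

```python
# Illustrative sketch only: two SLDs can share a regenerator at a node when
# their activity windows do not overlap. (start, end) tuples are assumptions.

def overlaps(sld_a, sld_b):
    """Each SLD is (start, end); return True if their windows intersect."""
    return sld_a[0] < sld_b[1] and sld_b[0] < sld_a[1]

def can_share_regenerator(slds):
    """A single regenerator can be shared by a group of SLDs only if they are
    pairwise non-concurrent."""
    return all(not overlaps(a, b)
               for i, a in enumerate(slds) for b in slds[i + 1:])

print(can_share_regenerator([(0, 5), (5, 9), (10, 12)]))   # True: no overlap
print(can_share_regenerator([(0, 5), (4, 9)]))             # False: concurrent
```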
Optical Network Design and Modelling | 2012
Elias A. Doumith; Sawsan Al Zahr; Maurice Gagnaire
Thanks to recent advances in WDM technologies, an optical fiber is capable of carrying up to 200 wavelengths operating at 40 Gbps each. In such high-speed networks, service disruptions caused by network failures (e.g., fiber cuts, amplifier malfunctions) may lead to high data losses. A network operator should be able to promptly locate such failures in order to perform fast restoration. Hence, an efficient fault detection and localization mechanism is mandatory for reliable network design. In previous work, we introduced the concept of monitoring trees (m-trees) to achieve fast link failure detection and localization, and we proposed an integer linear programming (ILP) approach for the design of an m-tree solution that minimizes the number of required optical monitors while achieving unambiguous failure detection and localization. In this paper, we propose a novel approach, based on the well-known simulated annealing meta-heuristic, for m-tree design in WDM networks. Simulations conducted in this study show that it yields the same results as the ILP approach at a much lower computation time. Our proposal can thus be applied to large-sized and very large-sized networks.
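For readers unfamiliar with the meta-heuristic, the sketch below shows a generic simulated annealing loop of the kind referred to above. The neighbour move and the cost function (e.g., number of monitors plus a penalty for ambiguous localization) are assumptions; the paper's actual m-tree encoding is not described in the abstract.

```python
# Illustrative sketch only: a generic simulated annealing loop. The problem-
# specific pieces (initial solution, cost, neighbour move) are left abstract.
import math
import random

def simulated_annealing(initial, cost, neighbour,
                        t_start=1.0, t_end=1e-3, alpha=0.95, iters_per_t=100):
    current, best = initial, initial
    t = t_start
    while t > t_end:
        for _ in range(iters_per_t):
            candidate = neighbour(current)
            delta = cost(candidate) - cost(current)
            # Accept improvements always, degradations with probability exp(-delta/t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha  # geometric cooling schedule
    return best
```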
2012 Symposium on Broadband Networks and Fast Internet (RELABIRA) | 2012
Elias A. Doumith; Sawsan Al Zahr
As traffic demands continuously increase, core networks must be able to react almost instantaneously to any single, or even multiple, failure in order to prevent huge data losses. Even though the probability of simultaneous multiple failures is very small, the time needed to repair a single failure ranges from several hours (for landline fibers) to several weeks (for submarine fibers). During this period, the network is vulnerable to any new failure that may occur. Therefore, designing survivable WDM networks against multiple failures requires efficient and accurate detection and localization mechanisms. In this paper, we extend the monitoring trail concept in order to detect and unambiguously localize any single- and double-link failure in the network. Compared to our previously proposed MEMOTA algorithm, the improved MEMOTA++ algorithm benefits from an improved trail reconfiguration algorithm that reduces its execution time. Numerical simulations have been carried out using the Deutsche Telekom and Geant-2 European networks. We show that a monitoring solution able to localize any double-link failure in the Deutsche Telekom network is 122% more expensive than one able to localize any single-link failure, but remains 10.9% more economical than the traditional link-based monitoring solution.
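Unambiguous localization means that every considered failure scenario triggers a distinct combination of monitor alarms. The sketch below illustrates that check for single- and double-link failures; the data format is an assumption and is unrelated to the MEMOTA++ encoding.

```python
# Illustrative sketch only: verifying that a set of monitoring trails localizes
# every single- and double-link failure unambiguously. A scenario's "alarm code"
# is the set of monitors whose trails cross a failed link; localization is
# unambiguous when all codes are distinct. Data layout is an assumption.
from itertools import combinations

def alarm_code(failed_links, trails):
    """trails: list of sets of links, one per monitor."""
    return frozenset(i for i, trail in enumerate(trails)
                     if trail & set(failed_links))

def localizes_unambiguously(links, trails, max_faults=2):
    scenarios = [c for k in range(1, max_faults + 1)
                 for c in combinations(links, k)]
    codes = [alarm_code(s, trails) for s in scenarios]
    return len(codes) == len(set(codes))

links = {"AB", "BC", "CD"}
trails = [{"AB"}, {"BC"}, {"CD"}, {"AB", "CD"}]
print(localizes_unambiguously(links, trails))   # True for this toy instance
```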