End-to-End Delay Guaranteed SFC Deployment: A Multi-level Mapping Approach
Fatemeh Yaghoubpour, Bahador Bakhshi∗, Fateme Seifi
Amirkabir University of Technology, Hafez Avenue, Tehran, Iran
Abstract
Network Function Virtualization (NFV) enables service providers to maximize their business profit via resource-efficient QoS provisioning for customer-requested Service Function Chains (SFCs). In recent applications, end-to-end delay is one of the crucial QoS requirements that needs to be guaranteed in SFC deployment. While it is considered in the literature, accurate and comprehensive modeling of the delay in the problem of maximizing the service provider's profit has not yet been addressed. In this paper, the problem is first formulated as a mixed integer convex programming model. Then, by decomposing the model, a two-level mapping algorithm is developed that, at the first level, maps the functions of the SFCs to virtual network function instances and, at the second level, deploys the instances in the physical network. Simulation results show that the proposed approach achieves a near-optimal solution in comparison with the performance bound obtained by the optimization model, and also outperforms the existing solutions.
Keywords:
Network Function Virtualization (NFV), Service Function Chain (SFC), end-to-end delay, queuing theory, decomposition.

∗ Corresponding author
Email addresses: [email protected] (Fatemeh Yaghoubpour), [email protected] (Bahador Bakhshi), [email protected] (Fateme Seifi)
Preprint submitted to Computer Communications, February 12, 2021

1. Introduction
Recently, due to the advent of the Internet of Things (IoT) and 5G, Network Function Virtualization (NFV) has received much attention as a means to address the heterogeneous requirements of services. Since NFV decouples Network Functions (NFs) from the physical substrate network, it creates an opportunity for Telecommunication Service Providers (TSPs) to effectively upgrade their networks to guarantee a wide range of QoS metrics, such as high data rate and reliability as well as low end-to-end delay and packet loss. This obviously reduces Capital Expenses (CAPEX) as well as Operational Expenses (OPEX). Moreover, this decoupling drastically increases the agility and time-to-market of network services. To exploit its potential, extensive research has been done on NFV-based networks in recent years [1, 2, 3].

Resource allocation to customer services is the main challenge in NFV. In this context, services are described as Service Function Chains (SFCs), each containing a sequence of Virtual Network Functions (VNFs) connected by virtual links. To serve the customers, the TSP needs to deploy the SFCs by allocating the required resources, such as processing, storage, and bandwidth. Service providers attempt to allocate the shared substrate network resources to SFCs efficiently to achieve objectives such as maximum gain, acceptance rate, reliability, and energy efficiency. This problem is known as SFC mapping [4].

Quality of Service (QoS) provisioning is a crucial requirement in SFC mapping, and end-to-end delay is one of its essential metrics, considered in recent mission-critical and multimedia applications. End-to-end delay is a combination of heterogeneous delay factors, including propagation, transmission, processing, and queuing delays.
Accordingly, not only modeling the different factors but also provisioning the requirement in such a complex network is a serious challenge in SFC mapping.

In recent years, the SFC mapping problem has drawn the attention of network researchers, and several papers have investigated it from different aspects [1, 3, 5]. However, even though delay is one of the most significant requirements of SFCs, it has not been addressed adequately. In [6, 7], and [8], the end-to-end delay is calculated by accumulating only the propagation delay, while other factors such as transmission and queuing delays are ignored. In [9], only the processing delay is considered and the other factors are neglected. Although the authors of [10] consider propagation and processing delays, they pay no attention to queuing and transmission delays. In [11], the authors calculate processing and transmission delays; however, they ignore the queuing and propagation factors. In [12], processing, transmission, and queuing delays are considered in a cloud environment, and in [13], queue length is considered as a queuing delay metric.

The propagation, transmission, and processing delays can be formulated as linear functions, while the queuing delay depends on the traffic distribution and, in its simplest form, is a fractional function that makes the model non-linear. Hence, for the sake of simplicity, most researchers tend to eliminate it. However, queuing delay is one of the important factors in end-to-end delay and is mostly not negligible in comparison to the other factors. Therefore, considering the queuing delay is inevitable. In this paper, for this purpose, we define a convex model that takes propagation, transmission, processing, and queuing delays into account as constraints, with the aim of maximizing the business profit of the TSP.
To the best of our knowledge, the problem of gain maximization subject to guaranteeing the end-to-end delay considering all the affecting factors, which we name Delay Constrained SFC Mapping (DCSM), has not been addressed in the literature yet. Hereupon, to fill this research gap, we model the problem as a mixed-integer convex problem and develop an efficient algorithm to solve it. This paper extends our previous work [14].

In the DCSM problem, there is a set of input SFCs, each of which the service provider can either accept or reject. In addition to processing and bandwidth resources, each SFC has its own end-to-end delay requirement. The objective is gain maximization while satisfying the end-to-end delay in terms of propagation, processing, and queuing delays. Briefly, our contributions compared to previous research are:

• For the first time, we formulate the DCSM problem as a mixed integer convex model.

• We develop a multi-level mapping algorithm by decomposing the optimization model.

• We evaluate the proposed solution in different settings against the theoretical performance bound and existing solutions.

The remainder of this paper is organized as follows. In Section 2, we review related work, categorize it, and identify the research gap. In Section 3, first, the system model is explained; afterward, DCSM is formulated as a mixed integer convex optimization problem. The optimization model is decomposed and a multi-level mapping algorithm is presented in Section 4 to solve the problem efficiently. Section 5 evaluates the simulation results. Finally, we conclude and introduce some ideas for future work in Section 6.
2. Related Work
In this section, work related to the DCSM problem is reviewed, the differences with this paper are investigated, and finally the research gap is identified.

In [10], the authors investigated a resource allocation scheme for balancing the load among NF instances and maximizing the total network utilization. They considered the CPU, disk, and memory of physical nodes and the bandwidth capacity of physical links. For end-to-end delay, only propagation and processing delays were assumed, and the authors proposed a heuristic algorithm. This work ignores the queuing delays and also does not consider the license costs of VNF instances or the cost of physical resources. In the SFC mapping problem in [15], physical node and link capacity limitations are considered. However, VNF instances are not shared among chains and no end-to-end delay is guaranteed. Moreover, the authors ignored the licensing and physical resource activation costs.

In [6], an ILP model is proposed to address the chain deployment problem considering the delay of the chain and the maximum load among edge devices. Only CPU is considered as the physical node resource, while disk and memory are ignored. Moreover, only the propagation delay is considered. In [11], a formulation of the SFC scheduling (SFCS) problem is presented that exploits interactions between mapping NFs onto VNFs, service scheduling, and traffic routing. Physical nodes and virtual instances are shared among multiple SFCs. The authors assume that all SFCs are accepted without considering any cost for provisioning. The authors in [12] proposed an ILP model that considers CPU, disk, and bandwidth constraints. The processing, transmission, and queuing delays are addressed in a cloud environment. However, VNF instances are not shared among chains and it is assumed that all requested chains are accepted, i.e., there is no Call Admission Control (CAC) mechanism.
The paper proposes an affinity-based heuristic algorithm to solve the problem.

The work presented in [7], which is an improved version of [12], assigns fixed delays to physical nodes and physical links. The work aims at cost minimization and considers physical node and link costs; however, it neglects the cost of activating VNF instances as well as the transmission and queuing delays. This paper does not have any CAC mechanism and assumes that all SFCs have to be accepted. A heuristic algorithm named Annealing Based Simulation Approach (ABSA) is proposed, wherein the affinity-based algorithm proposed in [12] is used. The SFC mapping problem of minimizing the time-average cost is formulated as a MILP model in [13]. This paper considers CPU and bandwidth as the substrate network resources. It aims at minimizing the lengths of queues in order to decrease the end-to-end delay. Both physical nodes and virtual instances are shared among multiple SFCs.

The objective of the problem proposed in [9] is delay minimization, but the transmission delay is the only considered metric. It only considers physical link bandwidth as the constraint. It shares the physical nodes among multiple chains. The CAC problem is not addressed.
3. System Model and Problem Statement
In this section, we first discuss the assumptions and present the system model; then we state the problem with an illustrative example; and finally, we formulate it.

Table 1: Summary of Related Work
[Table 1 compares the related work — [10], [15], [6], [11], [12], [7], [13], [9], [8], [16], [17], [19], [14], and this paper — along the considered delay factors (propagation, transmission, processing, queuing), capacity constraints (CPU, storage, memory, networking bandwidth), sharing (physical nodes, VNF instances), CAC, the presence of an optimization model, and the presence of a heuristic algorithm; this paper is the only one that covers all of these dimensions.]

3.1. Assumptions and System Model

In this paper, G_p = (N_p, E_p) denotes the substrate network, wherein N_p and E_p respectively represent the physical nodes and links. Each n ∈ N_p indicates a physical server, whose total available CPU, memory, and storage are denoted by θ_n^CPU, θ_n^mem, and θ_n^strg, respectively. Additionally, link (n, m) ∈ E_p is a physical connection between two physical nodes n, m ∈ N_p. This link has a limited data transfer capacity θ_(n,m) and propagation delay d_(n,m).

Set T contains all available VNF types that can be requested by users. When a type is requested in a chain, the service provider activates an instance of that VNF type by running the image of the VNF on a virtual machine and installing the required licenses. I_t represents all available instances of type t ∈ T that can be activated on demand. The instances of each type have three specific characteristics. The first one is the required physical resources (CPU, storage, and memory) that need to be allocated for activating each instance of the type. The second characteristic is the traffic processing capacity of the instances of that type.
These two characteristics are determined by the VNF vendors, wherein the CPU, memory, and storage requirements of each instance of type t ∈ T are denoted by CPU_t, mem_t, and strg_t respectively, and the traffic processing capacity of the type is indicated by µ_t. Lastly, the third characteristic of type t ∈ T is the license fee, i.e., the amount of money that the service provider should pay to the VNF vendor to activate a license of that type; it is denoted by C_t. It is worth mentioning that we assume each instance has sufficient capacity to serve at least one chain, and therefore a VNF of a chain is never split among multiple instances. Similarly, we assume that each instance has to be mapped to exactly one individual physical server and cannot be split among multiple servers.

All SFCs requested by users are gathered in set G_v. Each requested SFC r ∈ G_v is defined as a sequence of VNFs connected by virtual links. Set N_r^v contains all VNFs of SFC r, and E_r^v contains all directed virtual links connecting the sequence of the VNFs of the SFC. Parameter w_(u,v)^r denotes the required bandwidth of virtual link (u, v) ∈ E_r^v connecting two consecutive VNFs u, v ∈ N_r^v of SFC r. As mentioned, each VNF u ∈ N_r^v has a specific type, denoted by t_u^r ∈ T. Table 2 summarizes the notations used throughout this paper.

In this paper, similar to [12, 13, 17, 16, 19, 21, 22, 23], we consider the average end-to-end delay between the source and the destination of each service chain as the QoS metric. Threshold d_r^th indicates the maximum average tolerable end-to-end delay of SFC r.

The following assumptions are made about the business model of the service provider. The provider's revenue is made by accepting SFCs, which is rev_r for SFC r.
On the other hand, it should pay the license cost for activating the VNF instances as well as the cost of allocating bandwidth on physical links, as it is assumed that the physical links are leased from another transmission carrier network. Here, we assume that, to minimize the cost of the service provider, the activated VNF instances in the network can be shared among multiple chains, subject to the traffic processing capacity of the instance.

In this paper, we consider propagation, processing, and queuing delays as the most significant contributors to the end-to-end delay. In each physical node, the processing and queuing delays are induced by the hypervisor of the machine and also by the VNF instances running on the hypervisor. More precisely, each packet at each node:

1. enters the physical node's hypervisor queue,
2. is processed by the hypervisor and conducted to the virtual machine running the corresponding VNF,
3. enters the queue of the virtual machine, and
4. takes the required service from the VNF instance.

We model the delay induced by these steps as the queuing network depicted in Figure 1, where the left-hand server is the hypervisor and the right-hand servers are the VNF instances. The queues in this network are assumed to be M/M/1 queues.

Table 2: Notations and Definitions
G_p — infrastructure network graph
N_p — the set of substrate nodes
E_p — the set of substrate links
θ_(n,m) — the available capacity of physical link (n, m) ∈ E_p
θ_n^CPU — the available CPU cores of physical node n ∈ N_p
θ_n^mem — the available memory of physical node n ∈ N_p
θ_n^strg — the available storage of physical node n ∈ N_p
µ_n — processing rate of physical node n ∈ N_p
d_(n,m) — the propagation delay of link (n, m) ∈ E_p
ρ — bandwidth fee of a physical link
σ_n — the cost paid for activating physical node n ∈ N_p
G_v — set of all requested service function chains
N_v — set of all VNFs of all SFCs
E_v — set of all virtual links of all SFCs
N_r^v — set of all VNFs of SFC r ∈ G_v
E_r^v — set of all virtual links of SFC r ∈ G_v
d_r^th — the maximum end-to-end delay threshold of SFC r ∈ G_v
f_r — the input traffic of each VNF u ∈ N_r^v
rev_r — the revenue of SFC r ∈ G_v
w_(k,l)^r — the required bandwidth of link (k, l) ∈ E_r^v
T — set of all available VNF types
I_t — set of available instances of type t ∈ T
µ_t — traffic capacity of type t ∈ T
C_t — the cost that the provider pays for the activation of each instance of type t ∈ T
CPU_t — the required CPU of each instance of type t ∈ T
mem_t — the required memory of each instance of type t ∈ T
strg_t — the required storage of each instance of type t ∈ T
t_u^r — the type of VNF u ∈ N_r^v of chain r ∈ G_v

Figure 1: Queuing network at each physical node.

The average delay of each M/M/1 queue is

d̄ = 1 / (µ − λ)   (1)

where λ and µ are, respectively, the arrival rate and the departure rate of the packets. For physical node n ∈ N_p, the arrival rate into the hypervisor queue, denoted by λ_n, equals the summation of all incoming flows to the node, and the service rate of the hypervisor, denoted by µ_n, is a given parameter. Therefore, the average delay of the hypervisor is

d̄_n = 1 / (µ_n − λ_n).   (2)

By taking the logarithm of (2), we have

log(d̄_n) + log(µ_n − λ_n) = 0,   (3)

which is more convenient to handle in the optimization formulations. In a similar way, the average delay in each instance i of type t, denoted by d̄_{i,t}, is

d̄_{i,t} = 1 / (µ_t − λ_{i,t}),   (4)

where λ_{i,t} and µ_t are, respectively, the total flow rate and the processing rate of the instance.
Again, by taking the logarithm of (4), we have

log(d̄_{i,t}) + log(µ_t − λ_{i,t}) = 0.   (5)

3.2. Problem Statement

In this paper, we investigate the DCSM problem, where a service provider, who operates the infrastructure network G_p, aims to deploy the set G_v of SFCs in order to maximize its profit while satisfying the SFCs' requirements. The profit is a function of the revenue of accepting the demands as well as the costs paid, including the VNF license fees.
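The M/M/1 expressions (1)-(5) can be checked numerically; a small sketch in Python with illustrative rates (the function name and values are not from the paper):

```python
import math

def mm1_delay(mu: float, lam: float) -> float:
    """Average sojourn time of an M/M/1 queue, d = 1 / (mu - lam),
    where mu is the service rate and lam the arrival rate."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# The logarithmic form log(d) + log(mu - lam) = 0, used in (3) and (5),
# is an equivalent restatement that is convenient for convex formulations.
d = mm1_delay(mu=10.0, lam=4.0)
assert abs(math.log(d) + math.log(10.0 - 4.0)) < 1e-12
```

The log form avoids the fractional (non-linear) expression while keeping the constraint convex.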
Figure 2: The inputs of the DCSM problem. The left side is the set of SFCs with a specified type per VNF; the right side is the substrate network. For all nodes: CPU = 100 cores, storage = 500 GB, memory = 200 GB, cost = 200. For all links: bandwidth = 100 MBps, propagation delay = 1 µs.
The SFCs' requirements include not only processing and bandwidth but also the end-to-end delay, which is a function of processing, queuing, and propagation delays. In the following, we clarify the problem with the illustrative example shown in Figure 2. In this figure, the left side represents five input SFCs composed of different types of VNFs, indicated by the numbers inside the VNFs, and the right side represents the substrate network topology. Table 3 and Table 4 respectively show the settings of the SFCs and the VNF types in this example.

The optimal solution of this instance of the DCSM problem, obtained by solving the optimization model proposed in the next section, is depicted in Figure 3 and Table 5. Figure 3 shows which SFCs are accepted and how they are mapped to the substrate network, and Table 5 reports the end-to-end delay that the accepted SFCs tolerate. As shown, to maximize the objective function and to meet the SFC requirements, two SFCs are rejected.
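The acceptance decision in this example hinges on whether each chain's summed delay components stay within its threshold; a minimal sketch with hypothetical numbers (the actual values of Tables 3-5 are not reproduced here):

```python
def e2e_delay_ok(hyp_delays, ins_delays, link_delays, d_th):
    """Check one SFC's end-to-end delay budget: the sum of hypervisor,
    VNF-instance, and link-propagation delays must not exceed d_th."""
    total = sum(hyp_delays) + sum(ins_delays) + sum(link_delays)
    return total <= d_th

# hypothetical delay components (in seconds) for two chains
accepted = e2e_delay_ok([0.2, 0.1], [0.3], [0.05, 0.05], d_th=1.0)
rejected = not e2e_delay_ok([0.6], [0.5], [0.1], d_th=1.0)
assert accepted and rejected
```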
3.3. Problem Formulation

In this section, the DCSM problem is formulated. At the beginning, we introduce the decision variables of the model, shown in Table 6, and the inter-dependencies between them; then we discuss the constraints and the objective function of the model.

Table 3: The settings of the SFCs in Figure 2 (columns: r, |N_r^v|, t_u^r, f_r, rev_r, d_r^th).

Table 4: The settings of the VNF types in Figure 2 (columns: t, C_t, CPU_t, mem_t, strg_t, µ_t).
Figure 3: The accepted SFCs and their mappings.

Table 5: The components (hypervisor, VNF instances, and propagation) of the end-to-end delay of the accepted demands in the illustrative example of Figure 2 (columns: r, Σ d̄_n, Σ d̄_{i,t}, Σ d_(n,m)).

In this problem, there is a set of requested SFCs, each of which the provider can either accept or reject. Binary variable A_r is for call admission control (CAC) and denotes whether SFC r is accepted (A_r = 1). In this paper, by assuming the instance sharing capability, we propose a novel two-level hierarchical mapping model for the DCSM problem. The first level is the mapping of the VNFs to a set of instances, and the second level is the mapping of the set of activated instances to the physical nodes. The binary variable s_{i,t}^{u,r} denotes whether or not VNF u ∈ N_r^v of SFC r is mapped to instance i ∈ I_t of type t ∈ T. Similarly, p_n^{i,t} denotes whether instance i ∈ I_t of type t ∈ T is mapped to physical node n ∈ N_p.

It is necessary to indicate whether a VNF is mapped to a physical node, which is achieved in two steps. First, binary variable z_{n,i}^{u,r} is defined to indicate whether or not VNF u ∈ N_r^v of chain r is mapped to instance i ∈ I_{t_u} of type t_u running on physical node n ∈ N_p. This variable is obtained by the multiplication of s_{i,t_u}^{u,r} and p_n^{i,t_u}:

z_{n,i}^{u,r} = s_{i,t_u}^{u,r} · p_n^{i,t_u}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_{t_u}, ∀n ∈ N_p   (6)

The following constraints linearize (6):

z_{n,i}^{u,r} ≤ s_{i,t_u}^{u,r}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_{t_u}, ∀n ∈ N_p   (7)

z_{n,i}^{u,r} ≤ p_n^{i,t_u}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_{t_u}, ∀n ∈ N_p   (8)

s_{i,t_u}^{u,r} + p_n^{i,t_u} − 1 ≤ z_{n,i}^{u,r}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_{t_u}, ∀n ∈ N_p   (9)

Table 6: Decision Variables and Corresponding Definitions
A_r ∈ {0,1} — A_r = 1 if and only if SFC r ∈ G_v is accepted.
s_{i,t}^{u,r} ∈ {0,1} — s_{i,t}^{u,r} = 1 if and only if VNF u ∈ N_r^v of SFC r ∈ G_v is mapped to instance i ∈ I_t of type t ∈ T.
p_n^{i,t} ∈ {0,1} — p_n^{i,t} = 1 if and only if instance i ∈ I_t of type t ∈ T is mapped to physical node n ∈ N_p.
z_{n,i}^{u,r} ∈ {0,1} — z_{n,i}^{u,r} = 1 if and only if VNF u ∈ N_r^v is mapped to instance i ∈ I_{t_u} running on physical node n ∈ N_p.
α_n^r ∈ {0,1} — α_n^r = 1 if and only if SFC r ∈ G_v passes through physical node n ∈ N_p.
δ_{(n,m)}^{r,(u,v)} ∈ R+ — the fraction of the traffic of virtual link (u, v) ∈ E_r^v that is mapped to physical link (n, m) ∈ E_p.
y_n^{i,t} ∈ R+ — the traffic of the hypervisor queue at node n ∈ N_p that takes service from instance i ∈ I_t of type t ∈ T.
d̄_n ∈ R+ — the average queuing and processing delay of the hypervisor in node n ∈ N_p.
d̄_{i,t} ∈ R+ — the average queuing and processing delay of instance i ∈ I_t of type t ∈ T.
d_r^hyp ∈ R+ — the delay imposed on SFC r ∈ G_v by all the hypervisors the SFC passes through.
d_r^ins ∈ R+ — the delay imposed on SFC r ∈ G_v by all the instances servicing it.
d_r^link ∈ R+ — the delay imposed on SFC r ∈ G_v by all the links forwarding its traffic.
λ_{i,t} ∈ R+ — the traffic entering instance i ∈ I_t of type t ∈ T.
l_n^r ∈ R+ — the delay that SFC r ∈ G_v tolerates for passing through physical node n ∈ N_p.
bw_(n,m) ∈ R+ — the bandwidth of physical link (n, m) ∈ E_p allocated to the accepted chains crossing the link.
q_{i,t}^{u,r} ∈ R+ — the delay that SFC r ∈ G_v tolerates since its VNF u ∈ N_r^v is mapped to instance i ∈ I_t of type t ∈ T.
cost ∈ R+ — the total money paid by the service provider for bandwidth allocation and instance activation.
rev ∈ R+ — the amount of money that the provider earns for the accepted SFCs.
gain ∈ R — the difference between total revenue and cost, indicating the provider's profit.

In the second step, x_n^{u,r} is defined as another binary variable, obtained by summing z_{n,i}^{u,r} over i, that indicates whether or not VNF u ∈ N_r^v of SFC r is embedded into physical node n ∈ N_p:

Σ_{i∈I_{t_u}} z_{n,i}^{u,r} = x_n^{u,r}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀n ∈ N_p   (10)

For mapping virtual links, we use variable δ_{(n,m)}^{r,(u,v)}, which indicates the fraction of the traffic of virtual link (u, v) ∈ E_r^v of SFC r that is mapped to physical link (n, m) ∈ E_p of the substrate network. bw_(n,m) is a real variable that indicates the amount of allocated bandwidth on physical link (n, m) ∈ E_p. λ_{i,t} is a real variable that denotes the traffic entering instance i of type t. Variable y_n^{i,t} is the traffic entering the hypervisor queue at node n ∈ N_p that should be forwarded to instance i ∈ I_t of type t ∈ T, so we have

y_n^{i,t} = p_n^{i,t} · λ_{i,t}   ∀t ∈ T, ∀i ∈ I_t, ∀n ∈ N_p   (11)

This constraint is linearized as follows, where M is a big value:

y_n^{i,t} ≤ M · p_n^{i,t}   ∀t ∈ T, ∀i ∈ I_t, ∀n ∈ N_p   (12)

y_n^{i,t} ≥ λ_{i,t} − (1 − p_n^{i,t}) · M   ∀t ∈ T, ∀i ∈ I_t, ∀n ∈ N_p   (13)

y_n^{i,t} ≤ λ_{i,t}   ∀t ∈ T, ∀i ∈ I_t, ∀n ∈ N_p   (14)

The end-to-end delay of every accepted SFC is the summation of a) the delay in the hypervisors that the SFC enters, denoted by d_r^hyp; b) the delay of the instances that serve the VNFs of the chain, denoted by d_r^ins; and c) the propagation delay of the links forwarding the traffic of the chain, denoted by d_r^link. Variable l_n^r indicates the delay that SFC r faces in passing through physical node n ∈ N_p; it is obtained as

l_n^r = α_n^r · d̄_n   ∀r ∈ G_v, ∀n ∈ N_p   (15)

where α_n^r indicates whether or not SFC r passes through physical node n ∈ N_p. This nonlinear constraint is linearized in the same way as y_n^{i,t}. q_{i,t}^{u,r} is another variable that denotes the delay of SFC r since its VNF u ∈ N_r^v is mapped to instance i of type t. It is obtained as

q_{i,t}^{u,r} = s_{i,t}^{u,r} · d̄_{i,t}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_{t_u}, ∀t ∈ T   (16)

which is linearized in the same way as y_n^{i,t}.

The cost variable is the total money paid by the service provider for bandwidth allocation on physical links, turning on physical servers, and VNF instance activation. rev is the amount of money that the provider earns from the accepted SFCs, and the gain variable is the difference between rev and cost, indicating the provider's business profit. In this problem, the objective is to maximize the gain:

Maximize gain   (17)

which is computed by the constraints (18)-(20):

gain = rev − cost   (18)

rev = Σ_{r∈G_v} rev_r · A_r   (19)

cost = Σ_{r∈G_v} Σ_{n∈N_p} σ_n · α_n^r + Σ_{t∈T} Σ_{i∈I_t} C_t · x_{i,t} + ρ · Σ_{(m,n)∈E_p} bw_(m,n)   (20)

where the binary variable x_{i,t} indicates whether instance i ∈ I_t of type t is activated.

As already alluded to, a two-level mapping is conducted to deploy SFCs: at the first level, VNFs are mapped to instances, and at the second level, instances are mapped to physical nodes. In the first level, if an SFC is accepted, each of its VNFs must be mapped to an instance, which is formulated as (21):

Σ_{i∈I_{t_u}} s_{i,t_u}^{u,r} = A_r   ∀r ∈ G_v, ∀u ∈ N_r^v   (21)

In the second level, an instance must be mapped to a physical node if at least one VNF is mapped to the instance, which is imposed by constraint (22):

s_{i,t}^{u,r} ≤ Σ_{n∈N_p} p_n^{i,t}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_{t_u}   (22)

To map virtual links, under the assumption of flow splitting, a set of paths should be found in the physical network, which is obtained by the flow conservation constraint in (23); then the required bandwidth should be allocated, which is formulated by (24), subject to the capacity of physical links in constraint (25).
Σ_{(n,m)∈E_p} δ_{(n,m)}^{r,(u,v)} − Σ_{(m,n)∈E_p} δ_{(m,n)}^{r,(u,v)} = x_n^{u,r} − x_n^{v,r}   ∀r ∈ G_v, ∀(u,v) ∈ E_r^v, ∀n ∈ N_p   (23)

Σ_{r∈G_v} Σ_{(k,l)∈E_r^v} w_(k,l)^r · δ_{(n,m)}^{r,(k,l)} = bw_(n,m)   ∀(n,m) ∈ E_p   (24)

bw_(n,m) ≤ θ_(n,m)   ∀(n,m) ∈ E_p   (25)

The CPU, storage, and memory capacity limitations of physical nodes are respectively imposed by constraints (26), (27), and (28):

Σ_{t∈T} Σ_{i∈I_t} CPU_t · p_n^{i,t} ≤ θ_n^CPU   ∀n ∈ N_p   (26)

Σ_{t∈T} Σ_{i∈I_t} strg_t · p_n^{i,t} ≤ θ_n^strg   ∀n ∈ N_p   (27)

Σ_{t∈T} Σ_{i∈I_t} mem_t · p_n^{i,t} ≤ θ_n^mem   ∀n ∈ N_p   (28)

In order to compute the processing and queuing delays, the input traffic rates for each instance i ∈ I_t and each physical node n ∈ N_p are respectively calculated by (29) and (30):

Σ_{r∈G_v} Σ_{u∈N_r^v} f_r · s_{i,t}^{u,r} = λ_{i,t}   ∀t ∈ T, ∀i ∈ I_t   (29)

Σ_{t∈T} Σ_{i∈I_t} y_n^{i,t} = λ_n   ∀n ∈ N_p   (30)

Using the variables λ_n and λ_{i,t} and the given parameters µ_n and µ_t, the queuing and processing delays in the physical nodes and the VNF instances are constrained by (31) and (32), respectively:

log(d̄_n) + log(µ_n − λ_n) ≥ 0   ∀n ∈ N_p   (31)

log(d̄_{i,t}) + log(µ_t − λ_{i,t}) ≥ 0   ∀t ∈ T, ∀i ∈ I_t   (32)

It should be noted that these constraints are convex.

Constraints (33), (34), and (35) calculate d_r^hyp, d_r^ins, and d_r^link for each SFC r as the average delay imposed by hypervisors, instances, and links, respectively. To guarantee the end-to-end delay, the summation of these delays should be less than the given threshold, which is imposed by (36):

Σ_{n∈N_p} l_n^r = d_r^hyp   ∀r ∈ G_v   (33)

Σ_{u∈N_r^v} Σ_{i∈I_{t_u}} q_{i,t_u}^{u,r} = d_r^ins   ∀r ∈ G_v   (34)

Σ_{(n,m)∈E_p} Σ_{(u,v)∈E_r^v} d_(n,m) · δ_{(n,m)}^{r,(u,v)} = d_r^link   ∀r ∈ G_v   (35)

d_r^hyp + d_r^ins + d_r^link ≤ d_r^th   ∀r ∈ G_v   (36)

Finally, the values of the variables α_n^r, x_n^{u,r}, and x_{i,t} need to be determined in order to compute the cost, which is done by (37)-(39):

x_n^{u,r} ≤ α_n^r   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀n ∈ N_p   (37)

α_n^r ≤ Σ_{u∈N_r^v} x_n^{u,r}   ∀r ∈ G_v, ∀n ∈ N_p   (38)

s_{i,t}^{u,r} ≤ x_{i,t}   ∀r ∈ G_v, ∀u ∈ N_r^v, ∀i ∈ I_t, ∀t ∈ T   (39)

Putting the objective function and the constraints together, we have the formulation of DCSM as follows:

Model: DCSM
Objective: (17)
Constraints: (6)-(16), (18)-(39)

This model is a mixed integer convex optimization problem. Unfortunately, it cannot be used as a practical solution of the DCSM problem since the problem is NP-hard [11, 12, 13]. In the following section, by decomposing this complex model, we develop an efficient solution for this problem.
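As a sanity check on the model's linearizations: constraints (7)-(9) force the binary variable z to equal the product s·p. A short sketch verifying this exhaustively (illustrative only, not part of the model):

```python
from itertools import product

def linearized_z(s: int, p: int) -> int:
    """Return the unique binary z satisfying the linearization (7)-(9):
    z <= s, z <= p, and s + p - 1 <= z."""
    candidates = [z for z in (0, 1) if z <= s and z <= p and s + p - 1 <= z]
    assert len(candidates) == 1  # the three constraints pin z down exactly
    return candidates[0]

# exhaustive check: the linearization reproduces the product s * p
for s, p in product((0, 1), repeat=2):
    assert linearized_z(s, p) == s * p
```

The same pattern underlies the big-M linearization (12)-(14) of y = p·λ, with M replacing the unit bound because λ is continuous.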
4. Proposed Solution
This section describes the proposed solution, named MLDG (Multi-Level Delay-Guaranteed) mapping, for the DCSM problem. In the following, we first explain the main ideas and the overall procedure of the solution, then discuss the details of the steps.

Inspecting the problem identifies two sources of complexity. The first one is the mapping of the instances onto physical nodes, formulated by decision variable p_n^{i,t} in the optimization model, and the second source is the mapping of the virtual functions to the instances, denoted by s_{i,t}^{u,r}. The combinations of different values of these variables lead to exponential growth of the model's solution space and consequently make the model intractable even for small instances.

Based on this observation, the main idea of our approach is to decompose the problem into two subproblems. In the first subproblem, the candidate instances of each function type are determined and placed in the physical network. In the second subproblem, instead of considering all SFCs in a large MINLP model, the service chains are deployed one by one using an iterative algorithm. This decomposition eliminates the coupling between p_n^{i,t} and s_{i,t}^{u,r}, which makes the problem solvable in polynomial time. The overall procedure of the proposed solution is depicted in Figure 4, and the details are discussed in the following subsections.

4.1. Sub-Problem 1: Instance Mapping

The main objective of the first subproblem is to determine the values of the decision variables p_n^{i,t}, ∀t ∈ T, ∀i ∈ I_t, ∀n ∈ N_p, which are the first source of the complexity of the problem. For this purpose, the instances should be placed in the physical network, which requires answering two questions: 1) what is the required number of instances of each type? and 2) where are the suitable locations for the instances in the physical network? In answering the first question, we should not underestimate the number of instances; otherwise, demands are rejected in the second subproblem.
Therefore, by considering the case where all SFCs are accepted, and according to the assumption of instance sharing among VNFs, the estimated number of instances of type t is

    η_t = ⌈ ( Σ_{r ∈ G_v} Σ_{u ∈ N_v^r : type(u) = t} f_r ) / (θ µ_t) ⌉   ∀t ∈ T        (40)

where 0 < θ < 1 is a parameter that bounds the utilization of the instances (its effect is evaluated in Section 5), and µ_t is the service rate of an instance of type t. This is the first step of the instance mapping sub-problem in Figure 4.

Figure 4: The overall procedure of the proposed solution.

To answer the second question, we develop an algorithm based on a key observation. As explained, three factors contribute to the end-to-end delay: the propagation delay of physical links, the operating system and hypervisor delay in physical servers, and the queuing and processing delay in instances. The key point is that if two successive VNFs of a chain are mapped to instances located in the same physical node, then the virtual link connecting these two VNFs resides inside the physical node and, consequently, the first and second delay factors are eliminated. Moreover, this approach eliminates the cost of bandwidth allocation on physical links. Therefore, in the instance mapping stage, we seek a cluster of instances that are mostly connected to each other and place the cluster on a single physical server.

To select the first member of a cluster, we define the placement priority per type t as

    π_t = η̃_t Λ_t   ∀t ∈ T        (41)

where Λ_t is the total number of virtual links (u, v) in all chains such that either u or v is of type t, and η̃_t is the number of instances of type t that have not been placed yet. To initialize the priorities, η̃_t = η_t.

After selecting the first instance of a cluster, subsequent instances are selected according to the number of connections (virtual links) between them and the members of the cluster. More precisely, we define the belongingness of type t to cluster κ as

    β_{t,κ} = | { (u, v), (v, u) ∈ E_v^r, ∀r ∈ G_v s.t. u ∈ κ and type(v) = t } |        (42)

At each step, an instance of the type t with the largest β_{t,κ} is selected as a member of the cluster if it does not violate the capacity of the physical server.
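The instance-count estimate of Eq. (40) and the placement priorities of Eq. (41) can be sketched in code as follows. This is a minimal illustration only: the data layout, function names, and sample numbers are our own assumptions, not the paper's implementation.

```python
from math import ceil

def estimate_instances(chains, mu, theta):
    """Eq. (40): eta_t = ceil(total flow of type-t VNFs / (theta * mu_t)).

    chains: list of (flow rate f_r, [vnf_type, ...]) pairs.
    mu: service rate per VNF type; theta: utilization parameter in (0, 1).
    """
    total = {t: 0.0 for t in mu}
    for flow, vnf_types in chains:
        for t in vnf_types:
            total[t] += flow
    return {t: ceil(total[t] / (theta * mu[t])) for t in mu}

def placement_priorities(eta_remaining, chains):
    """Eq. (41): pi_t = eta_tilde_t * Lambda_t, where Lambda_t counts the
    virtual links (u, v) with at least one endpoint of type t."""
    lam = {t: 0 for t in eta_remaining}
    for _, vnf_types in chains:
        for u, v in zip(vnf_types, vnf_types[1:]):  # consecutive VNFs share a link
            lam[u] += 1
            if v != u:
                lam[v] += 1
    return {t: eta_remaining[t] * lam[t] for t in eta_remaining}
```

For example, two chains of 30 MBps each, both starting with a type-1 VNF, give type 1 a total flow of 60 MBps; with µ_1 = 100 MBps and θ = 0.7, Eq. (40) yields ⌈60/70⌉ = 1 instance.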
When it is not possible to add a new member to the cluster, this procedure is repeated for a new physical server and its corresponding new cluster. The details of the procedure are given in Algorithm 1.

Algorithm 1 Most Connected Cluster Placement
INPUT: G_p, η_t
OUTPUT: p^{i,t}_n
1:  π_t ← η_t Λ_t, ∀t ∈ T
2:  n ← 0
3:  while ∃t ∈ T s.t. π_t ≠ 0 do
4:      t̃ ← argmax_{t ∈ T} {π_t}
5:      η_t̃ ← η_t̃ − 1
6:      κ ← {i s.t. type(i) = t̃}
7:      n ← n + 1
8:      p^{i,t̃}_n ← 1
9:      repeat
10:         t̃ ← argmax_{t ∈ T} {β_{t,κ}}
11:         if physical server n has enough resources and η_t̃ ≠ 0 then
12:             η_t̃ ← η_t̃ − 1
13:             κ ← {i s.t. type(i) = t̃} ∪ κ
14:             p^{i,t̃}_n ← 1
15:         end if
16:     until ¬∃t ∈ T s.t. η_t ≠ 0 and server n has sufficient capacity
17:     π_t ← η_t Λ_t, ∀t ∈ T
18: end while
In this algorithm, the placement priorities are initialized in line 1; the first member of a cluster is found and placed in lines 4-8; similarly, in lines 10-14, the most connected instance is added to the cluster; and finally, in line 17, the priorities are updated.
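A compact sketch of Algorithm 1's greedy loops might look as follows. This is illustrative only: a single scalar capacity per server stands in for the CPU/storage/memory triple, the seed of each cluster is assumed to fit on an empty server, and all names are our own.

```python
def most_connected_cluster_placement(eta, links, capacity, demand):
    """Greedy sketch of Algorithm 1.

    eta: remaining instance count per type (mutated); links: list of
    (type_u, type_v) virtual links with multiplicity; capacity: scalar
    per-server capacity; demand: resource demand per type.
    Returns {server: [placed instance types]}.
    """
    # Lambda_t: number of virtual links touching type t (used by Eq. 41)
    lam = {t: sum(1 for a, b in links if t in (a, b)) for t in eta}
    placement, server = {}, 0
    while any(eta.values()):
        # line 4: seed the cluster with the highest-priority type (Eq. 41)
        pi = {t: eta[t] * lam[t] for t in eta if eta[t] > 0}
        seed = max(pi, key=pi.get)
        server += 1
        cluster, used = [seed], demand[seed]
        eta[seed] -= 1
        while True:
            # lines 10-14: belongingness (Eq. 42) of each placeable type
            beta = {t: sum(1 for a, b in links
                           if (a in cluster and b == t) or (b in cluster and a == t))
                    for t in eta if eta[t] > 0 and used + demand[t] <= capacity}
            if not beta:          # line 16: nothing fits or nothing remains
                break
            best = max(beta, key=beta.get)
            cluster.append(best)
            used += demand[best]
            eta[best] -= 1
        placement[server] = cluster
    return placement
```

For instance, with one instance each of types 1, 2, 3, links (1,2) and (2,3), and servers holding two unit-demand instances, the sketch opens a server for the high-priority type 2, pulls a neighbor onto it, and places the leftover type on a second server.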
4.2. Sub-Problem 2: SFC Mapping

The main steps of the second subproblem are depicted in Figure 4. In this subproblem, by substituting the values of the decision variables p^{i,t}_n obtained from the first subproblem into the MIP model and relaxing its binary variables, we obtain a convex problem named SP2-CV. While the solution of the convex model is not necessarily a feasible solution of the DCSM, it can be post-processed by rounding techniques to obtain an integer solution. Here, we focus on the least fractional VNF mapping variable, s^{u,r}_{i,t}, and deploy the chains in two nested loops. The outer loop, in each iteration, finds the chain with the least fractional VNF mapping, and the inner loop processes the VNFs of that chain. In each iteration of the inner loop, the least fractional s^{u,r}_{i,t} is selected and rounded to 1. After fixing the variable, the SP2-CV model is solved to update the remaining variables. This procedure continues until either all VNFs of the chain are mapped, or the chain is rejected due to infeasibility of the model. The subsequent chains are deployed in the same way. The details of these steps are given in Algorithm 2.

In line 2, the SP2-CV model, wherein the s^{u,r}_{i,t} variables are not necessarily binary, is solved. In line 3, the least fractional s^{u,r}_{i,t} is found with respect to the length of its corresponding SFC, and in line 4 that SFC, denoted by r̃, is identified. It is assumed in line 5 that this chain will be accepted, and in the inner loop, the VNFs of the chain are investigated. At each iteration, the u ∈ N_v^r̃ with the least fractional value s^{u,r̃}_{i,t} is identified in line 7 and rounded to 1 in line 8. After substituting this value into the model, the model is solved in line 10. If the model is infeasible, the SFC is rejected in lines 11-14; otherwise, the s^{u,r}_{i,t} values are updated and the loop continues to map the remaining VNFs of SFC r̃.
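The two nested rounding loops can be sketched in code as follows. Here `solve_sp2_cv` is a stand-in for the convex solver, and the interface, data layout, and all names are our own assumptions rather than the paper's implementation.

```python
def sfc_mapping(chains, solve_sp2_cv):
    """Sketch of Algorithm 2's nested rounding loops.

    chains: {chain_id: [vnf_id, ...]}.
    solve_sp2_cv(fixed): hypothetical relaxed-model solver; given the fixed
    (chain, vnf) -> instance assignments, it returns fractional values
    {(chain, vnf): {instance: value}}, or None if infeasible.
    """
    fixed, accepted = {}, {}
    remaining = set(chains)
    while remaining:
        frac = solve_sp2_cv(fixed)
        # outer loop (lines 3-4): the "least fractional" chain, i.e. the one
        # whose average best-instance value is largest
        chain = max(remaining,
                    key=lambda r: sum(max(frac[(r, u)].values())
                                      for u in chains[r]) / len(chains[r]))
        accepted[chain] = True
        for u in chains[chain]:
            if (chain, u) in fixed:
                continue
            # inner loop (lines 7-8): round the least fractional VNF to 1
            inst = max(frac[(chain, u)], key=frac[(chain, u)].get)
            trial = dict(fixed)
            trial[(chain, u)] = inst
            new_frac = solve_sp2_cv(trial)    # line 10: re-solve with the fix
            if new_frac is None:              # lines 11-14: reject the chain
                accepted[chain] = False
                fixed = {k: v for k, v in fixed.items() if k[0] != chain}
                break
            fixed, frac = trial, new_frac
        remaining.discard(chain)
    return accepted, fixed
```

Because the solver is injected, the rounding logic can be exercised with a stub solver that returns canned fractional values, independently of any convex-programming library.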
The outer loop continues until all SFCs are either accepted or rejected.

Algorithm 2 SFC Mapping
INPUT: G_p, G_v, p^{i,t}_n, T
OUTPUT: A_r, s^{u,r}_{i,t}, p^{i,t}_n
1:  repeat
2:      s^{u,r}_{i,t} ← solve SP2-CV
3:      θ_r ← ( Σ_{u ∈ N_v^r} max_{i ∈ I_{t_u}} {s^{u,r}_{i,t}} ) / |N_v^r|, ∀r ∈ G_v
4:      r̃ ← argmax_{r ∈ G_v} {θ_r}
5:      A_r̃ ← 1
6:      while ∃u ∈ N_v^r̃ s.t. s^{u,r̃}_{i,t} ≠ 1 do
7:          ũ ← argmax_{u ∈ N_v^r̃} {s^{u,r̃}_{i,t}}
8:          s^{ũ,r̃}_{i,t} ← 1
9:          Substitute s^{ũ,r̃}_{i,t} in SP2-CV
10:         s̄^{u,r}_{i,t} ← solve SP2-CV
11:         if the model is infeasible then
12:             s^{u,r̃}_{i,t} ← 0, ∀u ∈ N_v^r̃
13:             A_r̃ ← 0
14:             break
15:         else
16:             s^{u,r}_{i,t} ← s̄^{u,r}_{i,t}
17:         end if
18:     end while
19: until all r ∈ G_v are processed

4.3. Complexity Analysis

The computational complexity of the proposed solution is analyzed in this section. In the first subproblem, the complexity of computing η_t is O(|T| |G_v| N*), where N* = max_r |N_v^r|. Initializing π_t takes O(|T|) time. The placement of the instances on the physical servers, the two nested loops in lines 3-17 of Algorithm 1, is performed in O(H) iterations, where H = Σ_t η_t. The operations in these lines are O(1), except the argmax operation and the update of π_t, which take O(|T|). Therefore, the complexity of the first subproblem is

    O(subproblem-1) = O(|T| |G_v| N*) + O(|T|) + O(H |T|) = O(|T| |G_v| N*) + O(H |T|)

In the second subproblem, the nested loops and solving the SP2-CV model are the main contributors to the complexity. The complexity of the outer loop is

    O(subproblem-2) = O(|G_v| (O(CV) + R* + N* O(inner)))

where O(CV) and O(inner) are, respectively, the complexity of convex programming and the complexity of lines 6-18 of Algorithm 2. There are efficient algorithms for convex programming [24].
The complexity of lines 6-18 is

    O(inner) = O(N* + O(CV))

since each iteration performs an argmax over at most N* VNFs and solves the SP2-CV model once. So, we have

    O(subproblem-2) = O(|G_v| N* O(CV))

Putting these analyses together, the complexity of the proposed method is

    O(|G_v| N* O(CV)) + O(|T| |G_v| N*) + O(H |T|)

which is polynomial in terms of the size of the substrate network, the number of function types, and the number of SFCs and their sizes.
5. Simulation Results
In this section, at first, the simulation settings are discussed in detail; then comparisons between the proposed solution, the optimal solution, and the algorithm proposed in [7] are presented; and finally, the effects of the parameters of the proposed algorithm are analyzed.

Figure 5: The Small Topology Used for Comparison with the Optimal Solution.
The system used for the simulations has a Core i processor.

Table 7: The Setting of the SFCs Used for Comparison with the Optimal Solution in the Topology of Fig. 5.

    Parameter                               Value
    VNF types (t_u^r)                       Randomly in [1, 2, 3, 4]
    VNF number per SFC (|N_v^r|)            3 VNFs
    Bandwidth (f_r)                         30 MB
    Revenue (rev_r)                         9000
    End-to-end delay threshold (d_thr)      µs

Table 8: The Settings of the VNF Types Used for Comparison with the Optimal Solution in the Topology of Fig. 5.
    Parameter        Value per Type t
    cpu_t (cores)    80    50    50    60
    strg_t (GB)      200   250   250   300
    mem_t (GB)       160   100   100   120
    µ_t (MBps)       100   100   100   100
    C_t              500   600   700   500
Table 9: NFVI Setting in BtEurope Topology Used for Comparison with ABSA.
    Parameter        Value
    θ_n^CPU          24 cores
    θ_n^strg
    θ_n^mem          256 GB
    θ_(n,m)
    ρ
    σ_n
    µ_n
    d_(n,m)          µs
    d_n              µs

Table 10: The Setting of the SFCs Used for Comparison with ABSA in BtEurope.

    Parameter                               Value
    VNF types (t_u^r)                       Randomly in [1, 2, 3, 4]
    VNF number per SFC (|N_v^r|)            4 VNFs
    Bandwidth (f_r)                         100 MB
    Revenue (rev_r)                         3000 |N_v^r| + 15 f_r (|N_v^r| − 1)
    End-to-end delay threshold (d_thr)      µs

Table 11: The Settings of the VNF Types Used for Comparison with ABSA in BtEurope.
    Parameter        Value per Type t
    CPU_t (cores)    4     7     6     5
    strg_t (GB)      90    90    120   100
    mem_t (GB)       16    24    32    24
    µ_t (MBps)       450   500   400   450
    C_t

The settings are shown in Table 10 and Table 11. In these simulations, in addition to the gain, which is the main objective of DCSM, we also evaluate the results in terms of the acceptance rate. Since the problem is NP-hard, it is extremely time consuming to reach the exact optimal solution using a solver; hence, we set the optimality gap threshold to 5%.

In this section, the efficiency of MLDG is evaluated by comparing it against the solver's 5% sub-optimal solution with respect to the number of requested SFCs and the length of the SFCs. These simulations were conducted on the topology of Figure 5 using the configurations presented in the previous subsection. The results are shown in Figures 6-9. Figures 6 and 7 show the evaluation results with respect to the number of SFCs; the effect of the length of the SFCs is evaluated in Figures 8 and 9.
Figure 6: The Average Gain of the Solver and MLDG with Respect to the Number of SFCs.
Figure 7: The Average Acceptance Rate of the Solver and MLDG with Respect to the Number of SFCs.

Figure 8: The Average Gain of the Solver and MLDG with Respect to the SFC Length.
Figure 9: The Average Acceptance Rate of the Solver and MLDG with Respect to the SFC Length.
As seen, the gain increases with the number of requested SFCs and the length of the SFCs, since in these cases the algorithms have more opportunity to accept demands and each accepted demand yields more profit. However, the acceptance rate is not necessarily improved by increasing these parameters, because the algorithms may decide to reject some demands in favor of maximizing the gain.

These results show that the differences between MLDG and the solver solution are negligible; in some cases, the MLDG solutions are even better than the solver's 5% sub-optimal ones. This implies that MLDG provides near-optimal solutions. To evaluate the algorithm in a more realistic topology and setting, we compare it against another practical algorithm proposed in [7].
To the best of our knowledge, the problem investigated in [7] is the most similar one to this paper. Nevertheless, the two problems have some differences which must be handled to allow a fair comparison. Their most considerable differences, and the way we handled them, are as follows:

1. The objective function: the objective of [7] is cost minimization, while the objective of DCSM is gain maximization. Based on equation (18), the gain is calculated by subtracting the cost from the revenue. Hence, we defined a revenue for each SFC and changed the objective of [7] from cost minimization to gain maximization.

2. Cost calculation: [7] does not consider the cost of instance activation for each VNF; therefore, we changed the cost function to account for the cost of activating an instance.

3. End-to-end delay calculation: in [7], the authors consider a constant delay for the nodes and links, whereas in this paper, as illustrated in inequality (36), the queuing delay is also considered. We changed our delay model and simplified it to match [7].
Figure 10: The Average Gain of MLDG and ABSA with Respect to the Number of SFCs.
As mentioned, ABSA is based on the Simulated Annealing approach; in these simulations, the corresponding parameters are as follows: the temperature is 1000, the cooling rate is 0.05, and the λ parameter is 3.

Figures 10 and 11 show the average gain and the average acceptance rate of the proposed solution and ABSA with respect to the number of requested SFCs. Although the acceptance rate of the MLDG algorithm is a bit lower than that of ABSA, it carefully accepts the subset of demands that yields more gain, which is justified by the considerable gap between MLDG and ABSA in Figure 10.

Figure 11: The Average Acceptance Rate of MLDG and ABSA with Respect to the Number of SFCs.

By increasing the number of input SFCs, the average gain of both algorithms increases, but at different rates. As the number of input SFCs grows, the required instance capacity increases, which creates more opportunity for instance sharing. The proposed solution exploits this opportunity: the instance cost is split among multiple chains, so the average cost of each SFC decreases and the average gain increases. ABSA, on the other hand, does not share the instances, which causes a higher cost and a lower gain.

Figures 12 and 13 depict the effect of the length of the SFCs on the performance of the algorithms. In these simulations, 10 SFCs are requested. As shown, the SFC length and the gain are positively correlated, since increasing the number of VNFs of each SFC increases the profit of the accepted demands. Moreover, in this case, more instances can be shared among chains, which reduces the per-chain cost and increases the gain. On the other hand, as depicted in Figure 13, the acceptance rate and the SFC length are negatively correlated. The capacity of each physical node is limited; therefore, longer SFCs must be split across several physical nodes.
In this case, several physical nodes must be connected to each other, which incurs additional delay that exceeds the required end-to-end delay threshold; consequently, the request is rejected.

Putting the results together indicates that the proposed approach is, both theoretically and practically, an efficient solution to the DCSM problem.
In this subsection, the impact of the parameter θ on the performance of the algorithm is evaluated. The results are depicted in Figure 14 and Figure 15. According to these figures, there is an optimum value of this parameter, which is 0.7 in our settings. This parameter has a twofold effect on the acceptance rate and the gain. When the value of the parameter is much less than the optimum value, according to (40), the number of instances is overestimated, which leads to a high cost and, consequently, a decrease in the gain.

Figure 12: The Average Gain of MLDG and ABSA with Respect to the SFCs Length.
Figure 13: The Average Acceptance Rate of MLDG and ABSA with Respect to the SFCs Length.

Figure 14: The Average Gain with Respect to θ.

However, when the value of the parameter is much greater than the optimum value, it causes over-utilization of the instances, which increases the queuing delay; as a result, some demands are rejected due to the end-to-end delay constraint. This decreases both the acceptance rate and the gain.
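As a numeric illustration of this tradeoff, consider Eq. (40) for a single VNF type with a total offered flow of 300 MBps and µ_t = 100 MBps (hypothetical numbers, not the paper's settings):

```python
from math import ceil

def eta(total_flow, mu, theta):
    # Eq. (40) for a single type: a small theta inflates the instance count
    # (more cost, lower gain); a large theta packs instances tighter, raising
    # utilization and hence queuing delay.
    return ceil(total_flow / (theta * mu))

counts = {theta: eta(300, 100, theta) for theta in (0.5, 0.7, 0.9)}
print(counts)  # {0.5: 6, 0.7: 5, 0.9: 4}
```

Going from θ = 0.9 to θ = 0.5 here raises the instance count from 4 to 6 with no extra traffic to pay for it, while θ = 0.9 leaves each instance close to saturation.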
6. Conclusion
In this paper, the DCSM problem, which aims to maximize the gain of the service provider while satisfying QoS requirements including the end-to-end delay, is investigated. We presented an accurate model of the end-to-end delay that considers the queuing delay as well as the propagation delay. The problem is formulated as a mixed integer convex program, which is used to obtain a performance bound. To approach the problem, a two-level mapping algorithm is proposed by decomposing the optimization problem. At the first level, it maps functions to VNF instances by exploiting the instance sharing potential; at the second level, the instances are deployed in the physical network. The simulation results showed that the solution is near optimal.
Figure 15: The Average Acceptance Rate with Respect to θ.

References

[1] H. Hawilo, A. Shami, M. Mirahmadi, R. Asal, NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC), IEEE Network 28 (6) (2014) 18-26.
[2] B. Han, V. Gopalakrishnan, L. Ji, S. Lee, Network function virtualization: Challenges and opportunities for innovations, IEEE Communications Magazine 53 (2) (2015) 90-97.
[3] J. G. Herrera, J. F. Botero, Resource allocation in NFV: A comprehensive survey, IEEE Transactions on Network and Service Management 13 (3) (2016) 518-532.
[4] R. Yu, G. Xue, X. Zhang, QoS-aware and reliable traffic steering for service function chaining in mobile networks, IEEE Journal on Selected Areas in Communications 35 (11) (2017) 2522-2531.
[5] R. Mijumbi, J. Serrat, J.-L. Gorricho, N. Bouten, F. De Turck, R. Boutaba, Network function virtualization: State-of-the-art and research challenges, IEEE Communications Surveys & Tutorials 18 (1) (2015) 236-262.
[6] A. Xie, H. Huang, X. Wang, Z. Qian, S. Lu, Online VNF chain deployment on resource-limited edges by exploiting peer edge devices, Computer Networks 170 (2020) 107069.
[7] A. Fischer, D. Bhamare, A. Kassler, On the construction of optimal embedding problems for delay-sensitive service function chains, in: 2019 28th International Conference on Computer Communication and Networks (ICCCN), IEEE, 2019, pp. 1-10.
[8] L. Qu, C. Assi, K. Shaban, M. J. Khabbaz, A reliability-aware network service chain provisioning with delay guarantees in NFV-enabled enterprise datacenter networks, IEEE Transactions on Network and Service Management 14 (3) (2017) 554-568.
[9] B. Tahmasebi, M. A. Maddah-Ali, S. Parsaeefard, B. H. Khalaj, Optimum transmission delay for function computation in NFV-based networks: The role of network coding and redundant computing, IEEE Journal on Selected Areas in Communications 36 (10) (2018) 2233-2245.
[10] Y. T. Woldeyohannes, A. Mohammadkhan, K. Ramakrishnan, Y. Jiang, ClusPR: Balancing multiple objectives at scale for NFV resource allocation, IEEE Transactions on Network and Service Management 15 (4) (2018) 1307-1321.
[11] H. A. Alameddine, S. Sebbah, C. Assi, On the interplay between network function mapping and scheduling in VNF-based networks: A column generation approach, IEEE Transactions on Network and Service Management 14 (4) (2017) 860-874.
[12] D. Bhamare, M. Samaka, A. Erbad, R. Jain, L. Gupta, H. A. Chan, Optimal virtual network function placement in multi-cloud service function chaining architecture, Computer Communications 102 (2017) 1-16.
[13] X. Chen, W. Ni, I. B. Collings, X. Wang, S. Xu, Automated function placement and online optimization of network functions virtualization, IEEE Transactions on Communications 67 (2) (2018) 1225-1237.
[14] F. Yaghoubpour, B. Bakhshi, Optimal delay-aware service function chaining in NFV, in: 2019 27th Iranian Conference on Electrical Engineering (ICEE), IEEE, 2019, pp. 1961-1966.
[15] M. A. T. Nejad, S. Parsaeefard, M. A. Maddah-Ali, T. Mahmoodi, B. H. Khalaj, vSPACE: VNF simultaneous placement, admission control and embedding, IEEE Journal on Selected Areas in Communications 36 (3) (2018) 542-557.
[16] N. Yuan, W. He, J. Shen, X. Qiu, S. Guo, W. Li, Delay-aware NFV resource allocation with deep reinforcement learning, in: NOMS 2020 - IEEE/IFIP Network Operations and Management Symposium, IEEE, 2020, pp. 1-7.
[17] R. Gouareb, V. Friderikos, A.-H. Aghvami, Virtual network functions routing and placement for edge cloud latency minimization, IEEE Journal on Selected Areas in Communications 36 (10) (2018) 2346-2357.
[18] N. Makariye, Towards shortest path computation using Dijkstra algorithm, in: 2017 International Conference on IoT and Application (ICIOT), IEEE, 2017, pp. 1-3.
[19] B. Kar, E. H.-K. Wu, Y.-D. Lin, Energy cost optimization in dynamic placement of virtualized network function chains, IEEE Transactions on Network and Service Management 15 (1) (2017) 372-386.
[20] G. Bolch, S. Greiner, H. de Meer, K. S. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, John Wiley & Sons, 2006.
[21] B. Han, V. Sciancalepore, D. Feng, X. Costa-Perez, H. D. Schotten, A utility-driven multi-queue admission control solution for network slicing, in: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, IEEE, 2019, pp. 55-63.
[22] M. Erel-Özçevik, B. Canberk, Take your cell a walk for ultra-low E2E delay in software defined vehicular networks, in: 2018 IEEE Globecom Workshops (GC Wkshps), IEEE, 2018, pp. 1-6.
[23] J. Prados, P. Ameigeiras, J. J. Ramos-Munoz, J. Navarro-Ortiz, P. Andres-Maldonado, J. M. Lopez-Soler, Performance modeling of softwarized network services based on queuing theory with experimental validation, IEEE Transactions on Mobile Computing (2019).
[24] A. Nemirovski, Interior Point Polynomial Time Methods in Convex Programming, Lecture notes, 2004.
[25] Anaconda software distribution (2020). URL https://docs.anaconda.com/
[26] G. Gamrath, D. Anderson, K. Bestuzheva, W.-K. Chen, L. Eifler, M. Gasse, P. Gemander, A. Gleixner, L. Gottwald, K. Halbig, G. Hendel, C. Hojny, T. Koch, P. Le Bodic, S. J. Maher, F. Matter, M. Miltenberger, E. Mühmer, B. Müller, M. E. Pfetsch, F. Schlösser, F. Serrano, Y. Shinano, C. Tawfik, S. Vigerske, F. Wegscheider, D. Weninger, J. Witzig, The SCIP Optimization Suite 7.0, Technical report, Optimization Online, March 2020.
[27] S. Knight, H. Nguyen, N. Falkner, R. Bowden, M. Roughan, The Internet Topology Zoo, IEEE Journal on Selected Areas in Communications 29 (9) (2011) 1765-1775. doi:10.1109/JSAC.2011.111002