Onur Alparslan
Osaka University
Publications
Featured research published by Onur Alparslan.
Annales Des Télécommunications | 2007
Guray Gurel; Onur Alparslan; Ezhan Karasan
Performance evaluation of TCP traffic in OBS networks has been under intensive study, since TCP constitutes the majority of Internet traffic. As a reliable and publicly available simulator, ns2 has been widely used for studying TCP/IP networks; however, ns2 lacks many of the components needed for simulating optical burst switching networks. In this paper, an ns2-based OBS simulation tool (nOBS), built for studying burst assembly, scheduling, and contention resolution algorithms in OBS networks, is presented. The node and link objects of ns2 are extended in nOBS to develop optical nodes and optical links. The ingress, core, and egress node functionalities are combined into a common optical node architecture, which comprises agents responsible for burstification, routing, and scheduling. The effects of burstification parameters, e.g., burstification timeout, burst size, and number of burstification buffers per egress node, on TCP performance are investigated using nOBS for different TCP versions and different network topologies.
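As a rough illustration of the burstification step studied above, the sketch below shows a generic timer- and size-based burst assembler. It is an assumption-laden example, not the nOBS implementation; the thresholds max_burst_bytes and burst_timeout_s are hypothetical parameters standing in for the burst size and burstification timeout mentioned in the abstract.

# Hypothetical sketch of timer- and size-based burst assembly (not the actual nOBS code).
# A packet is appended to the per-egress burstification buffer; the burst is released
# either when it reaches max_burst_bytes or when burst_timeout_s expires.

import time

class BurstAssembler:
    def __init__(self, max_burst_bytes=64000, burst_timeout_s=0.005):
        self.max_burst_bytes = max_burst_bytes      # assumed size threshold
        self.burst_timeout_s = burst_timeout_s      # assumed burstification timeout
        self.buffer = []                            # packets waiting to be burstified
        self.buffered_bytes = 0
        self.first_arrival = None

    def add_packet(self, packet_bytes):
        if not self.buffer:
            self.first_arrival = time.monotonic()
        self.buffer.append(packet_bytes)
        self.buffered_bytes += packet_bytes
        if self.buffered_bytes >= self.max_burst_bytes:
            return self._release()
        return None

    def poll_timeout(self):
        # Called periodically; releases the burst if the timeout has expired.
        if self.buffer and time.monotonic() - self.first_arrival >= self.burst_timeout_s:
            return self._release()
        return None

    def _release(self):
        burst, size = self.buffer, self.buffered_bytes
        self.buffer, self.buffered_bytes, self.first_arrival = [], 0, None
        return burst, size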
international conference on photonics in switching | 2008
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
One of the difficulties of optical packet-switched (OPS) networks is buffering optical packets inside the network. The burstiness of Internet traffic causes high packet drop rates and low utilization in OPS networks with very small buffers. In this paper, we propose a new node-based pacing algorithm and show that it can increase the utilization of core OPS networks with very small optical RAM buffers.
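To make the idea of pacing concrete, the minimal sketch below spreads a queue of packets evenly in time so the short-term sending rate never exceeds a target rate. It is a generic illustration of pacing, not the paper's node-based algorithm; pace_rate_bps and the example values are assumptions.

# Minimal pacing sketch (an illustration, not the paper's node-based algorithm):
# instead of sending queued packets back to back, transmissions are spread evenly
# so the short-term rate never exceeds pace_rate_bps.

def pacing_schedule(packet_sizes_bytes, pace_rate_bps, start_time=0.0):
    """Return (send_time, size) pairs spaced by one serialization interval each."""
    schedule, t = [], start_time
    for size in packet_sizes_bytes:
        schedule.append((t, size))
        t += (size * 8) / pace_rate_bps   # next packet waits size/rate seconds
    return schedule

# Example: five 1500-byte packets paced at 100 Mb/s leave 120 microseconds apart.
print(pacing_schedule([1500] * 5, 100e6))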
Annales Des Télécommunications | 2004
Onur Alparslan; Nail Akar; Ezhan Karasan
In this paper, we propose a distributed online traffic engineering architecture for MPLS networks. In this architecture, a primary and a secondary MPLS LSP are established from an ingress LSR to every other egress LSR. We propose to split the TCP traffic between the primary and secondary paths using a distributed mechanism based on ECN marking and AIMD-based rate control. Inspired by the random early detection mechanism for active queue management, we propose a random early reroute scheme to adaptively control the delay difference between the primary and secondary LSPs. Considering the adverse effect of packet reordering on TCP performance for packet-based load balancing schemes, we propose that the TCP splitting mechanism operate on a per-flow basis. Using flow-based models developed for Internet traffic and simulations, we show that flow-based distributed multi-path traffic engineering consistently outperforms the single-path case in terms of per-flow goodput. Due to the elimination of out-of-order packet arrivals, flow-based splitting also enhances TCP performance with respect to packet-based splitting, especially for long TCP flows, which are hit hard by packet reordering. We also compare and contrast two queuing architectures for differential treatment of data packets routed over primary and secondary LSPs in the MPLS data plane, namely first-in-first-out and strict priority queuing. We show through simulations that strict priority queuing is more effective and relatively more robust with respect to changes in the traffic demand matrix than first-in-first-out queuing in the context of distributed multi-path routing.
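The random early reroute idea is analogous to RED, so a simple RED-style rule makes it easier to picture: the probability of steering a newly arriving flow to the secondary LSP grows with the measured delay difference between the paths. The sketch below is only an illustration under assumed thresholds (min_th_s, max_th_s, max_p); it is not the parameterization used in the paper.

import random

# RED-like "random early reroute" illustration (threshold names and values are
# assumptions, not taken from the paper): the probability of sending a new flow
# to the secondary LSP grows linearly with the delay difference between paths.

def reroute_probability(delay_diff_s, min_th_s=0.005, max_th_s=0.050, max_p=1.0):
    if delay_diff_s <= min_th_s:
        return 0.0
    if delay_diff_s >= max_th_s:
        return max_p
    return max_p * (delay_diff_s - min_th_s) / (max_th_s - min_th_s)

def pick_path(delay_diff_s):
    return "secondary" if random.random() < reroute_probability(delay_diff_s) else "primary"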
Journal of Optical Networking | 2007
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
Feature Issue on Transmission in Optically Transparent Core Networks
One of the difficulties of optical packet-switched (OPS) networks is buffering optical packets in the network. O(1) read operations are not possible in the optical domain, because there is no optical equivalent of RAM available for storing packets. Currently, the only available solution for buffering in the optical domain is the use of long fiber lines called fiber delay lines (FDLs). However, FDLs have important limitations and may cause high packet drop rates due to the burstiness of Internet traffic. We propose an architecture using an explicit congestion control protocol (XCP)-based utilization control algorithm designed for OPS wavelength-division-multiplexing (WDM) networks, with pacing at the edge nodes to decrease the buffer requirements at core nodes. We evaluate the FDL requirements on a meshed network with multiple-hop paths and show how the FDL requirements change with slot size, utilization, FDL granularity, scheduling, and packet size distribution.
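To see why FDL granularity matters, the toy model below assigns a packet to the shortest delay line whose delay covers the residual busy time of the output port; if no line is long enough, the packet is dropped. This is a generic illustration under assumed parameters (granularity_s, num_fdls), not the scheduler evaluated in the paper.

import math

# Illustration of FDL granularity (a generic model, not the paper's scheduler):
# delays are only available in multiples of granularity_s, up to
# num_fdls * granularity_s, so a packet must take the smallest multiple that
# covers the port's residual busy time or be dropped.

def assign_fdl(required_delay_s, granularity_s, num_fdls):
    if required_delay_s <= 0:
        return 0                                   # port free, no delay line needed
    k = math.ceil(required_delay_s / granularity_s)
    return k if k <= num_fdls else None            # None -> packet dropped

# Example: port busy for 3.2 us, 1-us granularity, 8 delay lines -> use FDL #4.
print(assign_fdl(3.2e-6, 1e-6, 8))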
international symposium on computers and communications | 2006
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
A famous rule of thumb states that a buffer sized at B = RTT × BW, where RTT is the average round-trip time and BW is the bandwidth of the output link, is necessary to achieve high utilization with TCP flows. However, as link speeds continue to increase with technological advances, this buffer requirement becomes an important cost factor for the routers of electronic networks. On the other hand, the bursty nature of TCP limits further decreases in buffer size, because it brings a high packet drop rate in small-buffered networks. In this paper, we evaluate several transmission control algorithms in small-buffered networks. The algorithms include TCP Reno, TCP NewReno, HighSpeed TCP with SACK, and XCP. Simulation results show that all non-paced TCP and XCP variants perform poorly. Furthermore, the results show that even rule-of-thumb buffers are not enough for XCP in some cases because of the high burstiness of XCP. We evaluate the effectiveness of pacing at the sender side for making transmission control algorithms suitable for very small-buffered networks. We show that pacing alone is not enough to make XCP suitable for small-buffered networks, so we introduce a suitable parameter set. Simulation results show that buffer requirements on routers are greatly reduced by paced XCP with suitable parameter settings, while maintaining high fairness, fast convergence, and high utilization.
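A quick worked example shows the scale implied by B = RTT × BW; the RTT and link speed below are illustrative values, not figures from the paper.

# Worked example of the rule of thumb B = RTT x BW (numbers chosen for illustration):
rtt_s = 0.1            # 100 ms average round-trip time (assumed)
bw_bps = 10e9          # 10 Gb/s output link (assumed)
buffer_bytes = rtt_s * bw_bps / 8
print(f"{buffer_bytes / 1e6:.0f} MB of buffering")   # 125 MB per output link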
Photonic Network Communications | 2009
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
According to a famous rule of thumb, the buffer size of each output link of a router should be set to the bandwidth-delay product of the network in order to achieve high utilization with TCP flows. However, the ultra-high speed of optical networks makes it very hard to satisfy this rule of thumb, especially given the limited buffering choices in the optical domain, because optical RAM is still under research and is not expected to offer large capacities soon. In this article, we evaluate the performance of our explicit congestion control protocol (XCP)-based architecture designed for optical packet-switched wavelength-division-multiplexing networks with very small optical RAM buffers and pacing at the edge nodes, which decreases the required buffer size at core nodes. Using a mesh topology and TCP traffic, we evaluate the optical buffer size requirements of this architecture and compare them with a common proposal in the literature.
international conference on communications | 2012
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
Recent papers in the literature on hybrid optical architectures combining path and packet switching have shown that such architectures can be good candidates for future optical networks. However, optimizing the traffic splitting parameters against appropriate metrics is vital to maximize the benefit of the hybrid architecture. The blocking rate is one of the most important performance metrics in a path switching network. In this paper, we propose an analytical method to compute both forward and backward blocking rates in path switching optical WDM networks with destination-initiated reservation. On a mesh topology, we show that the results of our analytical method closely match simulation results.
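As background for what "blocking rate" means here, the snippet below computes the classical per-link Erlang B blocking probability for a link with a given number of wavelengths. This is only a baseline illustration; the paper's analytical method is different and additionally models backward blocking under destination-initiated reservation.

# Standard per-link Erlang B calculation, shown as a baseline illustration of
# forward blocking on a link with W wavelengths (not the paper's method).

def erlang_b(offered_load_erlangs, num_wavelengths):
    """Erlang B blocking probability, computed iteratively for numerical stability."""
    b = 1.0
    for m in range(1, num_wavelengths + 1):
        b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
    return b

# Example: 20 Erlangs of offered load on a link with 32 wavelengths.
print(erlang_b(20, 32))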
Optical Switching and Networking | 2011
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
According to a historical rule of thumb, which is widely used in routers, the buffer size of each output link of a router should be set to the product of the bandwidth and the average round-trip time. However, it is very difficult to satisfy this buffer requirement for ultra-high-speed dense wavelength division multiplexing (DWDM) networks with current technology. Recently, many researchers have challenged the rule of thumb and have proposed various buffer sizing strategies requiring less buffering. Most of them were proposed for electronic routers with input and output buffering. However, shared buffering is a strong candidate for future DWDM optical packet switching (OPS) networks because of its high efficiency. As all links use the same buffer space, the wavelength count and nodal degree have a big impact on the size requirements of shared buffering. In this paper, we present a new buffer scaling rule showing the relationship between the number of wavelengths, the nodal degree, and the required shared buffer size. Through an extensive simulation study, we show that the buffer requirement increases with O(N^0.85 W^0.85) for both standard TCP and paced TCP, while the buffer requirement of XCP-paced TCP increases with O(N^1 W^0.85) for a wide range of N and W, where N is the nodal degree and W is the number of wavelengths.
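The scaling rule can be stated as a small function; note that the abstract reports only the exponents, so the constant factor c below is a placeholder assumption and the absolute values are meaningless, only ratios are illustrative.

# Sketch of the reported scaling rule for shared buffers (constant factor 'c' is
# an assumed placeholder; the paper reports only the exponents):
#   standard/paced TCP : buffer ~ c * N**0.85 * W**0.85
#   XCP-paced TCP      : buffer ~ c * N**1.00 * W**0.85
# where N is the nodal degree and W is the number of wavelengths per link.

def shared_buffer_packets(nodal_degree, wavelengths, c=1.0, xcp_paced=False):
    n_exp = 1.0 if xcp_paced else 0.85
    return c * (nodal_degree ** n_exp) * (wavelengths ** 0.85)

# Example: relative buffer growth when going from (N=4, W=16) to (N=8, W=64).
print(shared_buffer_packets(8, 64) / shared_buffer_packets(4, 16))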
broadband communications, networks and systems | 2007
Onur Alparslan; Shin’ichi Arakawa; Masayuki Murata
We show that by applying rate-based pacing at the edge nodes, very small optical RAM buffers can be enough for high utilization and low packet drop ratio inside core optical packet-switched (OPS) networks.
Lecture Notes in Computer Science | 2004
Onur Alparslan; Nail Akar; Ezhan Karasan
In this paper, we propose an AIMD-based TCP load balancing architecture in a backbone network where TCP flows are split between two explicitly routed paths, namely the primary and the secondary paths. We propose that primary paths have strict priority over the secondary paths with respect to packet forwarding and that both paths are rate-controlled using ECN marking in the core and AIMD rate adjustment at the ingress nodes. We call this technique "prioritized AIMD". The buffers maintained at the ingress nodes for the two alternative paths help us predict the delay difference between the two paths, which forms the basis for deciding on which path to forward a newly arriving flow. We provide a simulation study for a large mesh network to demonstrate the efficiency of the proposed approach in terms of average per-flow goodput and byte blocking rates.
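For readers unfamiliar with AIMD rate control, the sketch below shows a generic ECN-driven AIMD rate adjuster: the rate grows additively while feedback is unmarked and is cut multiplicatively when an ECN mark arrives. The parameter values are assumptions, and the sketch omits the strict forwarding priority of primary-path packets that defines "prioritized AIMD" in the paper.

# Generic AIMD rate controller driven by ECN feedback (parameter values are
# assumptions; the paper's "prioritized AIMD" additionally gives primary-path
# packets strict forwarding priority over secondary-path packets).

class AimdRate:
    def __init__(self, rate_bps=1e6, alpha_bps=50e3, beta=0.5,
                 min_bps=64e3, max_bps=1e9):
        self.rate_bps = rate_bps
        self.alpha_bps = alpha_bps   # additive increase per control interval
        self.beta = beta             # multiplicative decrease factor
        self.min_bps, self.max_bps = min_bps, max_bps

    def on_feedback(self, ecn_marked):
        if ecn_marked:
            self.rate_bps = max(self.min_bps, self.rate_bps * self.beta)
        else:
            self.rate_bps = min(self.max_bps, self.rate_bps + self.alpha_bps)
        return self.rate_bps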