Ali Munir
Michigan State University
Publication
Featured research published by Ali Munir.
international conference on computer communications | 2013
Ali Munir; Ihsan Ayyub Qazi; Zartash Afzal Uzmi; Aisha Mushtaq; Saad N. Ismail; M. Safdar Iqbal; Basma Khan
For provisioning large-scale online applications such as web search, social networks and advertisement systems, data centers face extreme challenges in providing low latency for short flows (that result from end-user actions) and high throughput for background flows (that are needed to maintain data consistency and structure across massively distributed systems). We propose L2DCT, a practical data center transport protocol that targets a reduction in flow completion times for short flows by approximating the Least Attained Service (LAS) scheduling discipline, without requiring any changes in application software or router hardware, and without adversely affecting the long flows. L2DCT can co-exist with TCP and works by adapting flow rates to the extent of network congestion inferred via Explicit Congestion Notification (ECN) marking, a feature widely supported by the installed router base. Though L2DCT is deadline unaware, our results indicate that, for typical data center traffic patterns and deadlines and over a wide range of traffic load, its deadline miss rate is consistently lower than that of existing deadline-driven data center transport protocols. L2DCT reduces the mean flow completion time by up to 50% over DCTCP and by up to 95% over TCP. In addition, it reduces the 99th percentile flow completion time by 37% over DCTCP. We present the design and analysis of L2DCT, evaluate its performance, and discuss an implementation built upon the standard Linux protocol stack.
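The LAS-approximating rate adaptation described in this abstract can be sketched in a few lines: a flow's weight shrinks as its attained service grows, and the weight scales both the ECN-driven back-off and the ramp-up. This is an illustrative Python sketch under assumed constants (`w_min`, `w_max`, `flow_size_cap`), not the paper's exact parameters or update rule:

```python
def l2dct_weight(bytes_sent, flow_size_cap, w_min=0.125, w_max=2.5):
    """Weight shrinks as attained service grows, approximating LAS.
    w_min, w_max, and flow_size_cap are illustrative constants,
    not the paper's exact parameters."""
    frac = min(bytes_sent / flow_size_cap, 1.0)
    return w_max - (w_max - w_min) * frac

def adjust_cwnd(cwnd, ecn_alpha, weight, w_max=2.5):
    """ecn_alpha: EWMA fraction of ECN-marked packets (as in DCTCP).
    On congestion, flows with higher weight (less attained service)
    back off less; otherwise they ramp up in proportion to weight."""
    if ecn_alpha > 0:
        backoff = ecn_alpha * (1 - 0.5 * weight / w_max) / 2
        return max(1.0, cwnd * (1 - backoff))
    return cwnd + weight
```

Under this sketch, a flow that has sent nothing keeps more of its window on a congestion signal than a flow that has sent its full size, which is the LAS-style prioritization the abstract describes.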
acm special interest group on data communication | 2015
Ali Munir; Ghufran Baig; Syed Mohammad Irteza; Ihsan Ayyub Qazi; Alex X. Liu; Fahad R. Dogar
Many data center transports have been proposed in recent years (e.g., DCTCP, PDQ, and pFabric). Contrary to the common perception that they are competitors (i.e., protocol A vs. protocol B), we claim that the underlying strategies used in these protocols are, in fact, complementary. Based on this insight, we design PASE, a transport framework that synthesizes existing transport strategies, namely, self-adjusting endpoints (used in TCP style protocols), in-network prioritization (used in pFabric), and arbitration (used in PDQ). PASE is deployment friendly: it does not require any changes to the network fabric; yet, its performance is comparable to, or better than, the state-of-the-art protocols that require changes to network elements (e.g., pFabric). We evaluate PASE using simulations and testbed experiments. Our results show that PASE performs well for a wide range of application workloads and network settings.
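The arbitration strategy the abstract mentions can be illustrated with a toy centralized arbitrator: flows are ranked shortest-remaining-first, the top-ranked flow gets the link in the high-priority queue, and the rest probe at a nominal rate in a low-priority queue. PASE's actual arbitration is distributed and more refined; the names, two-queue split, and probe rate below are assumptions for illustration only:

```python
def arbitrate(flows, link_capacity):
    """flows: {flow_id: remaining_bytes}. Rank shortest-remaining-first;
    the head flow gets the full link rate and the high-priority queue,
    the rest a nominal probe rate in the low-priority queue (toy sketch,
    not PASE's distributed arbitration)."""
    probe_rate = 0.01 * link_capacity
    order = sorted(flows, key=flows.get)
    plan = {}
    for rank, fid in enumerate(order):
        if rank == 0:
            plan[fid] = {"queue": "high", "rate": link_capacity}
        else:
            plan[fid] = {"queue": "low", "rate": probe_rate}
    return plan
```

The point of the synthesis is that the endpoints still self-adjust around the arbitrated rates, and in-network priority queues protect short flows when arbitration is stale.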
international conference on communications | 2013
Ali Munir; Ihsan Ayyub Qazi; Saad B. Qaisar
Today's data centers face extreme challenges in providing low latency for online services such as web search, social networking, and recommendation systems. Achieving low latency is important as it impacts user experience, which in turn impacts operator revenue. However, most current congestion control protocols approximate Processor Sharing (PS), which is known to be sub-optimal for minimizing latency. In this paper, we propose Router Assisted Capacity Sharing (RACS), a data center transport protocol that minimizes flow completion times by approximating the Shortest Remaining Processing Time (SRPT) scheduling policy, which is known to be optimal, in a distributed manner. With RACS, flows are assigned weights which determine their relative priority and thus the rate assigned to them. By changing these weights, RACS can approximate a range of scheduling disciplines. Through extensive ns-2 simulations, we demonstrate that RACS outperforms TCP, DCTCP, and RCP in data center environments. In particular, it improves completion times by up to 95% over TCP, 88% over DCTCP, and 80% over RCP. Our results also show that RACS can outperform deadline-aware transport protocols for typical data center workloads.
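The weight-based SRPT approximation can be illustrated with a toy allocator: each flow's weight is inversely related to its remaining size, and a router splits its capacity in proportion to the weights. The inverse mapping and names below are one illustrative choice, not the paper's exact weight function:

```python
def srpt_weight(remaining_bytes):
    """Smaller remaining size -> larger weight, approximating SRPT.
    The inverse mapping is illustrative; RACS supports a range of
    weight functions to approximate different disciplines."""
    return 1.0 / (remaining_bytes + 1.0)

def allocate_rates(link_capacity, flows):
    """flows: {flow_id: remaining_bytes}. Split capacity in proportion
    to flow weights (a sketch of the router-assisted sharing step)."""
    weights = {f: srpt_weight(r) for f, r in flows.items()}
    total = sum(weights.values())
    return {f: link_capacity * w / total for f, w in weights.items()}
```

Swapping in a different weight function (e.g., uniform weights for PS-like fairness) changes the emulated scheduling discipline, which is the flexibility the abstract claims.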
ieee international multitopic conference | 2009
Ali Munir; Savera Tanwir; S. M. Hasan Zaidi
To meet the demands of ever-increasing bandwidth-hungry applications, architectures such as circuit switching, packet switching, and burst switching have been proposed in the literature. Dynamic Optical Circuit Switching (DOCS) is the optical counterpart of circuit switching; it can provide huge bandwidth to end users by establishing high-capacity light-paths. Many heuristics have been proposed in the literature for provisioning requests in DOCS networks. In this paper we summarize a few of these algorithms, focusing on those that use parameters such as connection holding time or timing deadlines for routing and wavelength assignment. We observe that certain parameters of a request, such as connection duration, can sometimes be used to improve optical network performance significantly. We also summarize a traffic grooming approach known as the S/G light-tree architecture; its main feature is the ability to groom traffic without requiring optical-to-electrical-to-optical (OEO) conversion at intermediate nodes.
international symposium on high capacity optical networks and enabling technologies | 2009
Ali Munir; Savera Tanwir; S. M. Hasan Zaidi
Several switching architectures, such as circuit switching, packet switching, and burst switching, have been proposed and developed for efficient utilization of high-speed optical networks. Among these, Dynamic Optical Circuit Switching (DOCS) has been found to be most useful for provisioning requests in backbone (core) networks due to its capability to provide huge bandwidth by establishing high-capacity light-paths. Nowadays, applications such as video distribution of social events, live coverage of sports, e-learning, and HDTV have gained popularity; in these applications, multiple users typically request a particular service simultaneously. Various algorithms exist in the literature to make efficient use of optical networks under these conditions. In this paper we propose an algorithm for provisioning multicast (point-to-multipoint (P2MP)) requests that utilizes the connection duration/holding time information of the request for traffic grooming. We do so by integrating our wavelength selection strategy with an existing routing approach known as the Delay-Constrained Shortest Path (DCSP) algorithm. In addition to traditional grooming approaches, we use multiple wavelengths to meet the required bandwidth demands. Results show that this policy can significantly improve network performance in terms of call and bandwidth blocking ratios. We provide simulation results for both on-demand and advance reservation of network resources.
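A holding-time-aware wavelength selection rule of the kind described here can be sketched as follows: among candidate wavelengths, prefer the one whose existing reservations release nearest the new request's own end time, so groomed traffic tears down together. This is a hypothetical rule in the spirit of the paper, not the exact DCSP-integrated algorithm:

```python
def pick_wavelength(release_times, request_end):
    """release_times: {wavelength: time its current reservations release};
    request_end: when the new request's holding time expires.
    Choose the wavelength whose busy period ends nearest the request's
    end time (illustrative holding-time-aware grooming heuristic)."""
    return min(release_times, key=lambda w: abs(release_times[w] - request_end))
```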
ieee international multitopic conference | 2008
Ashfaq Hussain Farooqi; Ali Munir
IP Multimedia Subsystem (IMS) is a next-generation networking architecture that will provide better quality of service, charging infrastructure, and security. The basic idea behind IMS is convergence: providing a single interface to different traditional or modern networking architectures, allowing a better working environment for end users. IMS is not yet commercially adopted, but research is in progress to explore it. IMS is an IP-based overlay next-generation network architecture. Because it uses SIP and IP protocols, it inherits a number of security threats from the Session Initiation Protocol (SIP), TCP, UDP, etc. Some of these can seriously degrade the performance of IMS and may cause DoS or DDoS attacks. This paper presents a new approach toward a secure IMS, based on an intrusion detection system (IDS) that uses k-nearest neighbor (KNN) as the classifier. The KNN classifier can effectively detect intrusive attacks while achieving a low false positive rate, distinguishing normal system behavior from abnormal. We focus on the key element of the IMS core known as the Proxy Call Session Control Function (P-CSCF). A network-based anomaly detection mechanism is proposed using KNN as the anomaly detector. Experiments performed on OpenIMS core show that IMS is vulnerable to different types of attacks, such as UDP flooding and IP spoofing, that can cause DoS, and that the KNN classifier effectively distinguishes normal from intrusive behavior while achieving a low false positive rate.
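The KNN classification step at the heart of this approach is simple to sketch: represent traffic as feature vectors, find the k nearest labeled examples, and take a majority vote. The minimal Euclidean-distance version below is illustrative; the paper's actual feature set for P-CSCF traffic is not reproduced here:

```python
import math
from collections import Counter

def knn_classify(sample, training, k=3):
    """training: list of (feature_vector, label) pairs, e.g. traffic
    statistics labeled 'normal' or 'attack'. Majority vote among the
    k nearest neighbors by Euclidean distance."""
    nearest = sorted(training, key=lambda pair: math.dist(sample, pair[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

In an IDS setting, the false positive rate is then the fraction of normal samples the vote mislabels as attacks, which is the metric the abstract emphasizes.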
conference on emerging network experiment and technology | 2016
Ali Munir; Ting He; Ramya Raghavendra; Franck Le; Alex X. Liu
To improve the performance of data-intensive applications, existing datacenter schedulers optimize either the placement of tasks or the scheduling of network flows. The task scheduler strives to place tasks close to their input data (i.e., maximize data locality) to minimize network traffic, while assuming fair sharing of the network. The network scheduler strives to finish flows as quickly as possible based on their sources and destinations determined by the task scheduler, while the scheduling is based on flow properties (e.g., size, deadline, and correlation) and not bound to fair sharing. Inconsistent assumptions of the two schedulers can compromise the overall application performance. In this paper, we propose NEAT, a task scheduling framework that leverages information from the underlying network scheduler to make task placement decisions. The core of NEAT is a task completion time predictor that estimates the completion time of a task under a given network condition and a given network scheduling policy. NEAT leverages the predicted task completion times to minimize the average completion time of active tasks. Evaluation using ns-2 simulations and a real testbed shows that NEAT improves application performance by up to 3.7x under suboptimal network scheduling policies and by up to 30% under the optimal network scheduling policy.
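The placement decision described here reduces to picking the node with the smallest predicted completion time. The predictor itself is NEAT's core contribution and is only stubbed below with a hypothetical transfer-plus-compute model; all names and the bandwidth figures are assumptions:

```python
def place_task(task, nodes, predict_completion):
    """Place the task on the node with the smallest predicted completion
    time. predict_completion(task, node) stands in for NEAT's predictor,
    which accounts for the network scheduling policy."""
    return min(nodes, key=lambda n: predict_completion(task, n))

def make_toy_predictor(node_bw):
    """Hypothetical predictor: remote input crosses the network at the
    node's available bandwidth (bytes/s), then compute time is added."""
    def predict(task, node):
        if node == task["data_node"]:
            transfer = 0.0
        else:
            transfer = task["input_bytes"] / node_bw[node]
        return transfer + task["compute_s"]
    return predict
```

Because the predictor can encode any network scheduling policy, the same `place_task` loop adapts its locality/remote trade-off to whichever transport the datacenter runs.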
international conference on computer communications | 2017
Chen Tian; Ali Munir; Alex X. Liu; Yingtong Liu; Yanzhao Li; Jiajun Sun; Fan Zhang; Gong Zhang
In datacenter networks, flows can have different performance objectives. We use a tenant-objective division to denote all flows of a tenant that share the same objective. Bandwidth allocation in datacenters should support not only performance isolation among divisions but also objective-oriented scheduling among flows within the same division. This paper studies the Multi-Tenant Multi-Objective (MT-MO) bandwidth allocation problem. To the best of our knowledge, no existing practical work supports performance isolation and objective scheduling simultaneously. We propose Stacked Congestion Control (SCC), a distributed host-based bandwidth allocation design, where an underlay congestion control (UCC) layer handles contention among divisions, and a private congestion control (PCC) layer for each division optimizes its performance objective. Via the tenant-objective tunnel abstraction, SCC achieves weighted bandwidth sharing for each division in a distributed and transparent way. By adding a rate-limiting send queue at the ingress of each tunnel, the performance isolation and objective scheduling mechanisms are completely decoupled. We evaluate SCC both on a small-scale testbed and with large-scale ns-2 simulations. Compared to the direct coexistence cases, SCC reduces latency by up to 40% for Latency-Sensitive flows, deadline miss ratio by up to 3.2× for Deadline-Sensitive flows, and average flow-completion-time by up to 53% for Completion-Sensitive flows.
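The underlay layer's weighted sharing among divisions can be sketched as a one-liner: each tenant-objective division receives a share of the bottleneck proportional to its weight, and its private layer then schedules flows within that share. The division names and weights below are illustrative assumptions:

```python
def ucc_shares(capacity, division_weights):
    """division_weights: {tenant_objective_division: weight}. The underlay
    (UCC) layer gives each division a weighted share of the bottleneck;
    each division's private (PCC) layer schedules its own flows within
    that share. Weights are illustrative."""
    total = sum(division_weights.values())
    return {d: capacity * w / total for d, w in division_weights.items()}
```

The decoupling the abstract describes is visible here: changing a division's internal scheduling policy never alters the shares the underlay computes.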
international conference on computational science and its applications | 2012
Salman Ali; Ali Munir; Saad B. Qaisar; Junaid Qadir
The growth of wireless multimedia applications has increased demand for efficient utilization of scarce spectrum resources, which is being realized through technologies such as Dynamic Spectrum Access (DSA), source and channel coding, distributed streaming, and multicast. Using a mix of DSA and channel coding, we propose an efficient power and channel allocation framework for cognitive radio networks to place multimedia data of opportunistic Secondary Users over the unused parts of the radio spectrum without interfering with licensed Primary Users. We model our method as an optimization problem which determines achievable physical transmission parameters and distributes available spectrum resources among competing secondary devices. We also consider noise contributions and channel capacity as design factors. We use Luby Transform codes for encoding multimedia traffic in order to reduce dependencies involved in distributing data over multiple channels, mitigate Primary User interference, and compensate for channel noise and distortion caused by the sudden arrival of Primary devices. Tradeoffs between the number of competing users, coding overhead, available spectrum resources, and fairness in channel allocation have also been studied. We also analyze the effect of the number of available channels and coding overhead on the quality of media content. Simulation results of the proposed framework show improved gain in terms of PSNR of the multimedia content; the better media quality achieved strengthens the efficacy of the proposed model.
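The spectrum-distribution step can be illustrated with a greedy toy allocator: channels sensed free of Primary Users are handed out best-first to whichever Secondary User is furthest from meeting its demand. This is a hypothetical sketch, not the paper's optimization formulation; all names and numbers are assumptions:

```python
def allocate_channels(free_channels, demands):
    """free_channels: {channel: capacity_mbps} sensed free of Primary
    Users; demands: {secondary_user: required_mbps}. Greedy: give the
    next-best free channel to the user with the largest unmet demand."""
    alloc = {u: [] for u in demands}
    served = {u: 0.0 for u in demands}
    for ch in sorted(free_channels, key=free_channels.get, reverse=True):
        u = max(demands, key=lambda x: demands[x] - served[x])
        if demands[u] - served[u] <= 0:
            break  # every user's demand is met
        alloc[u].append(ch)
        served[u] += free_channels[ch]
    return alloc
```

A user may end up striped across multiple channels, which is exactly the situation where the paper's LT-coded traffic avoids per-channel ordering dependencies.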
IEEE ACM Transactions on Networking | 2017
Ali Munir; Ghufran Baig; Syed Mohammad Irteza; Ihsan Ayyub Qazi; Alex X. Liu; Fahad R. Dogar
Several data center transport protocols have been proposed in recent years (e.g., DCTCP, PDQ, and pFabric). In this paper, we first identify the underlying strategies used by the existing data center transports, namely, in-network Prioritization (used in pFabric), Arbitration (used in PDQ), and Self-adjusting Endpoints (used in DCTCP). We show that these strategies are complementary to each other, rather than substitutes, as they have different strengths and can address each other's limitations. Unfortunately, prior data center transports use only one of these strategies. As a result, they achieve either near-optimal performance or deployment friendliness (i.e., require no changes to the data plane) but not both. Based on this insight, we design a data center transport protocol called PASE, which carefully synthesizes these strategies by assigning different transport responsibilities to each strategy. The key advantage of PASE over prior art is that it achieves both near-optimal performance and deployment friendliness. PASE does not require any changes in network switches (hardware or software); yet, it achieves comparable, or even better, performance than state-of-the-art protocols (such as pFabric) that require changes to network elements. Our evaluation results show that PASE performs well for a wide range of application workloads and network settings.