Nadine Akkari
King Abdulaziz University
Publication
Featured research published by Nadine Akkari.
Computer Communications | 2015
Etimad Fadel; Vehbi Cagri Gungor; Laila Nassef; Nadine Akkari; M.G. Abbas Malik; Suleiman Almasri; Ian F. Akyildiz
The traditional power grid in many countries suffers from high maintenance costs and scalability issues, along with the huge expense of building new power stations and a lack of efficient system monitoring that could improve overall performance by proactively preventing potential failures. To address these problems, a next-generation electric power system, called the smart grid (SG), has been proposed as an evolutionary system for power generation, transmission, and distribution. To this end, SGs utilize renewable energy generation, smart meters, and modern sensing and communication technologies for effective power system management, thereby addressing many of the requirements of a modern power grid system while significantly increasing its performance. Recently, wireless sensor networks (WSNs) have been recognized as a promising technology to achieve seamless, energy-efficient, reliable, and low-cost remote monitoring and control in SG applications. In these systems, wireless sensor systems can provide electric utilities with the information required to achieve high system efficiency. The real-time information gathered from these sensors can be analyzed to diagnose problems early and serve as a basis for taking remedial action. In this paper, WSN-based SG applications are first explored along with their technical challenges. Then, design challenges and protocol objectives for WSN-based SG applications are discussed. After exploring applications and design challenges, communication protocols for WSN-based SG applications are explained in detail. Our goal is to elaborate on the role of WSNs in smart grid applications and to provide an overview of the most recent advances in MAC and routing protocols for WSNs in this timely and exciting field.
Ad Hoc Networks | 2013
Pu Wang; Josep Miquel Jornet; M.G. Abbas Malik; Nadine Akkari; Ian F. Akyildiz
Wireless NanoSensor Networks (WNSNs), i.e., networks of nanoscale devices with unprecedented sensing capabilities, are the enabling technology of long-awaited applications such as advanced health monitoring systems or surveillance networks for chemical and biological attack prevention. The peculiarities of the Terahertz Band, which is the envisioned frequency band for communication among nano-devices, and the extreme energy limitations of nanosensors, which require the use of nanoscale energy harvesting systems, introduce major challenges in the design of MAC protocols for WNSNs. This paper aims to design energy- and spectrum-aware MAC protocols for WNSNs with the objective of achieving fair, throughput- and lifetime-optimal channel access by jointly optimizing the energy harvesting and consumption processes in nanosensors. Towards this end, the critical packet transmission ratio (CTR) is derived, which is the maximum allowable ratio between the transmission time and the energy harvesting time, below which a nanosensor can harvest more energy than it consumes, thus achieving perpetual data transmission. Based on the CTR, first, a novel symbol-compression scheduling algorithm, built on a recently proposed pulse-based physical layer technique, is introduced. The symbol-compression solution utilizes the unique elasticity of the inter-symbol spacing of the pulse-based physical layer to allow a large number of nanosensors to transmit their packets in parallel without inducing collisions. In addition, a packet-level timeline scheduling algorithm, built on a theoretical bandwidth-adaptive capacity-optimal physical layer, is proposed with the objective of achieving balanced single-user throughput with infinite network lifetime. The simulation results show that the proposed simple scheduling algorithms can enable nanosensors to transmit perpetually at extremely high speeds without replacing the batteries.
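To make the perpetual-operation condition behind the CTR concrete, the following sketch checks whether a transmission/harvesting duty cycle stays below the critical ratio. All numerical values, names, and the simple linear harvesting model are illustrative assumptions, not parameters from the paper.

```python
# Illustrative sketch of the critical packet transmission ratio (CTR) idea:
# a nanosensor can transmit perpetually if the energy harvested between
# transmissions covers the energy spent transmitting. All numbers below are
# hypothetical placeholders, not values from the paper.

E_PULSE_J = 1e-12          # assumed energy per transmitted pulse
PULSES_PER_PACKET = 8_000  # assumed pulses (bits) per packet
HARVEST_RATE_W = 5e-9      # assumed average harvested power of the nano-harvester

def ctr(harvest_rate_w: float, tx_power_w: float) -> float:
    """Maximum allowable ratio of transmission time to harvesting time."""
    return harvest_rate_w / tx_power_w

def can_transmit_perpetually(t_tx_s: float, t_harvest_s: float,
                             tx_power_w: float) -> bool:
    """True if the duty cycle stays below the CTR, i.e. harvested >= consumed."""
    return (t_tx_s / t_harvest_s) <= ctr(HARVEST_RATE_W, tx_power_w)

tx_power_w = E_PULSE_J * PULSES_PER_PACKET / 1e-3   # packet sent in ~1 ms (assumed)
print(can_transmit_perpetually(t_tx_s=1e-3, t_harvest_s=2.0, tx_power_w=tx_power_w))
```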
Wireless Networks | 2014
Massimiliano Pierobon; Josep Miquel Jornet; Nadine Akkari; Suleiman Almasri; Ian F. Akyildiz
Wireless NanoSensor Networks (WNSNs) will allow novel intelligent nanomaterial-based sensors, or nanosensors, to detect new types of events at the nanoscale in a distributed fashion over extended areas. Two main characteristics are expected to guide the design of WNSN architectures and protocols, namely, their Terahertz Band wireless communication and their nanoscale energy harvesting process. In this paper, a routing framework for WNSNs is proposed to optimize the use of the harvested energy to guarantee the perpetual operation of the WNSN while, at the same time, increasing the overall network throughput. The proposed routing framework, which is based on a previously proposed medium access control protocol for the joint throughput and lifetime optimization in WNSNs, uses a hierarchical cluster-based architecture that offloads the network operation complexity from the individual nanosensors towards the cluster heads, or nano-controllers. The framework is based on the evaluation of the probability of saving energy through a multi-hop transmission, the tuning of the transmission power of each nanosensor for throughput and hop distance optimization, and the selection of the next-hop nanosensor on the basis of its available energy and current load. The performance of this framework is numerically evaluated in terms of energy, capacity, and delay, and compared to that of single-hop communication for the same WNSN scenario. The results show how the energy per bit consumption and the achievable throughput can be jointly optimized by exploiting the peculiarities of this networking paradigm.
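As a rough illustration of the next-hop selection idea, the sketch below picks, among candidate nanosensors, the one that best balances available energy against current load. The data structure, field names, and the simple scoring rule are assumptions for illustration, not the paper's actual metric.

```python
# Illustrative next-hop selection in the spirit of the proposed framework:
# among reachable candidates, prefer the node with the most available energy
# and the lightest current load. The fields and scoring rule are assumptions.

from dataclasses import dataclass

@dataclass
class Nanosensor:
    node_id: int
    available_energy: float   # normalized 0..1
    current_load: float       # normalized 0..1 (queued traffic)

def select_next_hop(candidates: list[Nanosensor]) -> Nanosensor:
    """Pick the candidate maximizing a simple energy-minus-load score."""
    return max(candidates, key=lambda n: n.available_energy - n.current_load)

hops = [Nanosensor(1, 0.9, 0.7), Nanosensor(2, 0.6, 0.1), Nanosensor(3, 0.8, 0.5)]
print(select_next_hop(hops).node_id)   # -> 2 under this toy scoring
```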
Wireless Networks | 2016
Nadine Akkari; Josep Miquel Jornet; Pu Wang; Etimad Fadel; Lamiaa A. Elrefaei; Muhammad Ghulam Abbas Malik; Suleiman Almasri; Ian F. Akyildiz
Nanonetworks consist of nano-sized communicating devices which are able to perform simple tasks at the nanoscale. The limited capabilities of individual nanomachines and the Terahertz (THz) band channel behavior lead to error-prone wireless links. In this paper, a cross-layer analysis of error-control strategies for nanonetworks in the THz band is presented. A mathematical framework is developed and used to analyze the tradeoffs between Bit Error Rate, Packet Error Rate, energy consumption and latency, for five different error-control strategies, namely, Automatic Repeat reQuest (ARQ), Forward Error Correction (FEC), two types of Error Prevention Codes (EPC) and a hybrid EPC. The cross-layer effects between the physical and the link layers as well as the impact of the nanomachine capabilities in both layers are taken into account. At the physical layer, nanomachines are considered to communicate by following a time-spread on-off keying modulation based on the transmission of femtosecond-long pulses. At the link layer, nanomachines are considered to access the channel in an uncoordinated fashion, by leveraging the possibility to interleave pulse-based transmissions from different nodes. Throughout the analysis, accurate path loss, noise and multi-user interference models, validated by means of electromagnetic simulation, are utilized. In addition, the energy consumption and latency introduced by a hardware implementation of each error control technique, as well as the additional constraints imposed by the use of energy-harvesting mechanisms to power the nanomachines, are taken into account. The results show that, despite their simplicity, EPCs outperform traditional ARQ and FEC schemes in terms of error correcting capabilities, which results in further energy savings and reduced latency.
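For intuition on the kind of tradeoff analyzed here, the following toy calculation compares the expected energy cost per delivered packet for an idealized ARQ scheme and a fixed-overhead FEC scheme. The bit error rate, packet size, coding overhead, and per-bit energy are hypothetical placeholders, and the model ignores many effects the paper accounts for (multi-user interference, hardware costs, decoding latency).

```python
# Toy comparison of ARQ and FEC cost per delivered packet, in the spirit of
# the cross-layer tradeoff studied in the paper. All values are placeholders.

BER = 1e-4              # assumed channel bit error rate after demodulation
PACKET_BITS = 1024
FEC_OVERHEAD = 0.25     # assumed 25% parity bits, assumed to correct all errors
ENERGY_PER_BIT = 1e-12  # joules per transmitted bit, assumed

per = 1 - (1 - BER) ** PACKET_BITS          # packet error rate without coding
arq_tx_per_packet = 1 / (1 - per)           # expected transmissions under ARQ
arq_energy = arq_tx_per_packet * PACKET_BITS * ENERGY_PER_BIT
fec_energy = (1 + FEC_OVERHEAD) * PACKET_BITS * ENERGY_PER_BIT

print(f"PER={per:.3f}, ARQ energy={arq_energy:.2e} J, FEC energy={fec_energy:.2e} J")
```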
Wireless Networks | 2015
Ghadah Aldabbagh; Sheikh Tahir Bakhsh; Nadine Akkari; Sabeen Tahir; Sana Khan; John M. Cioffi
With advances in technology, network operators may need to set up a dynamic spectrum access overlay in heterogeneous networks (HetNets) to increase network coverage, spectrum efficiency, and capacity. Combining TV white space (TVWS) with Long Term Evolution (LTE) is a new research direction for meeting the increasing user demands in wireless cellular networks. Without consideration of traffic flow, a network may operate with serious congestion problems that degrade system performance. Congestion problems can be resolved by either reducing traffic flow or increasing the bandwidth provision. This paper proposes a Distributed Dynamic Load Balancing (DDLB) technique for cellular-based TVWS and LTE, in which a cellular device can operate on both TVWS and LTE by simply switching its operating frequency when necessary. The objective of this paper is to resolve the congestion problems in a HetNet by dynamically constructing new clusters to increase the system bandwidth. The simulation results show that the proposed technique solved the bottleneck problem, reduced transmission control overhead and power consumption, and increased the average throughput and load balancing index.
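A minimal sketch of the frequency-switching idea, under assumed names and thresholds: when the LTE carrier is congested, eligible users are moved to free TVWS channels until the load drops below a threshold. This is an illustrative policy, not the DDLB algorithm itself.

```python
# Minimal sketch: when the LTE carrier is congested, eligible users switch
# their operating frequency to an available TVWS channel. The threshold, the
# user model and the eligibility test are assumptions for illustration.

LTE_LOAD_THRESHOLD = 0.8   # assumed congestion threshold (fraction of capacity)

def rebalance(lte_load: float, users: list[dict], tvws_channels_free: int) -> list[int]:
    """Return IDs of users moved from LTE to TVWS under the toy policy."""
    moved = []
    for user in sorted(users, key=lambda u: u["demand"], reverse=True):
        if lte_load <= LTE_LOAD_THRESHOLD or tvws_channels_free == 0:
            break
        if user["tvws_capable"]:
            moved.append(user["id"])
            lte_load -= user["demand"]
            tvws_channels_free -= 1
    return moved

users = [{"id": 1, "demand": 0.15, "tvws_capable": True},
         {"id": 2, "demand": 0.05, "tvws_capable": False},
         {"id": 3, "demand": 0.10, "tvws_capable": True}]
print(rebalance(lte_load=0.95, users=users, tvws_channels_free=2))  # -> [1]
```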
Journal of Network and Computer Applications | 2016
Melike Yigit; V. Cagri Gungor; Etimad Fadel; Laila Nassef; Nadine Akkari; Ian F. Akyildiz
Wireless Sensor Networks (WSNs) are one of the most promising solutions for smart grid applications due to advantages such as their low cost, diverse functionalities, and successful adaptation to smart grid environments. However, providing the quality of service (QoS) requirements of smart grid applications with WSNs is difficult because of the power constraints of sensor nodes and harsh smart grid channel conditions, such as RF interference, noise, multi-path fading, and node contention. To address these communication challenges, in this paper a link-quality-aware routing algorithm (LQ-CMST) and a priority- and channel-aware multi-channel (PCA-MC) scheduling algorithm are proposed for smart grid applications. Furthermore, the effect of different modulation and encoding schemes on the performance of the proposed algorithms is evaluated under harsh smart grid channel conditions. Comparative performance evaluations through extensive simulations show that the proposed algorithms significantly reduce communication delay and that the choice of encoding and modulation schemes is critical to meeting the requirements of envisioned smart grid applications.
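As an illustration of link-quality-aware tree construction in the same spirit as LQ-CMST (though not the paper's exact algorithm), the sketch below grows a routing tree rooted at the gateway over per-link costs, where a lower cost stands for a better-quality link. The topology and cost values are assumptions.

```python
# Rough sketch of a link-quality-aware routing tree rooted at the gateway,
# grown Prim-style over link costs (lower cost = better link quality).
# The topology and cost values are illustrative, not from the paper.

import heapq

def lq_tree(links: dict[tuple[str, str], float], root: str) -> dict[str, str]:
    """Return a parent map: each node's next hop toward the root."""
    adj: dict[str, list[tuple[float, str]]] = {}
    for (u, v), cost in links.items():
        adj.setdefault(u, []).append((cost, v))
        adj.setdefault(v, []).append((cost, u))
    parent, visited = {}, {root}
    frontier = [(cost, root, nbr) for cost, nbr in adj.get(root, [])]
    heapq.heapify(frontier)
    while frontier:
        cost, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        parent[v] = u
        for c, w in adj.get(v, []):
            if w not in visited:
                heapq.heappush(frontier, (c, v, w))
    return parent

links = {("gw", "a"): 0.2, ("gw", "b"): 0.9, ("a", "b"): 0.3, ("b", "c"): 0.4}
print(lq_tree(links, root="gw"))   # -> {'a': 'gw', 'b': 'a', 'c': 'b'}
```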
Computer Networks | 2015
Ghadah Aldabbagh; Sheikh Tahir Bakhsh; Nadine Akkari; Sabeen Tahir; Haleh Tabrizi; John M. Cioffi
Wireless networks have resource limitations; in a dense area, cellular spectrum resources are insufficient, which degrades system performance. A Long Term Evolution (LTE) network aims to serve heterogeneous users with different QoS requirements. Traditional approaches need new infrastructure and degrade the performance of delay-sensitive applications, which may leave users with minimum rate requirements facing high blocking probability. To utilize wireless resources efficiently, users want to access the same medium, connect to the same multicast group, and be served at the same time. In this paper, a new operator-controlled technique, called QoS-Aware Tethering in a Heterogeneous Wireless Network using LTE and TV White Spaces (QTHN), is proposed to improve QoS for Constant Bit Rate (CBR) and Best Effort (BE) users. The proposed QTHN converts the whole dense wireless network into hexagonal clusters via two-layer network communication. In a cluster, one node is selected as the cluster head and all other nodes act as slaves. Within a cluster, the cluster head acts as an access point known as a Hotspot (H), which is further connected to the Base-station (BS). The proposed QTHN aims to improve QoS within a heterogeneous wireless network using LTE and unused white spaces in a dense wireless area. Simulation results show that the proposed QTHN reduced the number of blocked users and improved network utility.
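A toy sketch of forming a cluster and choosing its head (the Hotspot): here the node nearest the cluster centroid is selected and the remaining nodes attach to it as slaves. The centroid-based criterion is an assumption used only for illustration; the paper's selection procedure may differ.

```python
# Toy cluster-head (Hotspot) selection: the node closest to the cluster
# centroid becomes the head; the rest attach as slaves. Illustrative only.

import math

def pick_cluster_head(nodes: dict[int, tuple[float, float]]) -> int:
    """Return the ID of the node closest to the centroid of the cluster."""
    cx = sum(x for x, _ in nodes.values()) / len(nodes)
    cy = sum(y for _, y in nodes.values()) / len(nodes)
    return min(nodes, key=lambda i: math.dist(nodes[i], (cx, cy)))

cluster = {1: (0.0, 0.0), 2: (1.0, 0.2), 3: (0.4, 0.5), 4: (0.9, 1.0)}
head = pick_cluster_head(cluster)
slaves = [i for i in cluster if i != head]
print(head, slaves)   # the head relays the slaves' traffic to the base station
```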
Journal of Network and Computer Applications | 2014
Nadine Akkari; Ghadah Aldabbagh; Michel Nahas; John M. Cioffi
Tethering has recently been proposed as an efficient solution to the increasing number of mobile users and the requested bandwidth in cellular networks. The idea is to group the mobile nodes into clusters, each containing slaves served by a hotspot. To save licensed spectrum, the hotspot–slave link can use other frequency channels, such as the newly vacated TV white space (TVWS) bands. A recent work proposed an iterative clustering algorithm for cellular networks. Beyond its computational complexity and signaling overhead, this algorithm handled neither the nodes' mobility inside the cell nor the changes in their required data rate. In this work, we propose a new dynamic protocol for clustering the nodes that takes into account the possible changes occurring in a cellular network. Specifically, the Dynamic Clustering Protocol (DCP) adapts the network configuration to the mobiles' varying requirements and the different network events. This reduces the required time and signaling and offers better service quality to the clustered users. After presenting the various network events and the handover scenarios and signaling for the Dynamic Clustering Protocol, the performance of the proposed protocol is studied by modeling different network scenarios and computing the required number of handovers as a function of user mobility, available network resources, and data rate requirements for a given clustered-node configuration.
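The back-of-the-envelope sketch below counts handovers for a clustered node as network events arrive (movement out of the cluster, or a data-rate increase the hotspot cannot absorb), in the spirit of the paper's evaluation. The event model and capacity bookkeeping are purely illustrative assumptions.

```python
# Toy handover counter for a clustered node: a handover is counted whenever a
# network event forces the node off its current hotspot. Illustrative model.

def count_handovers(events: list[dict], hotspot_spare_capacity: float) -> int:
    handovers = 0
    for ev in events:
        if ev["type"] == "moved_out_of_cluster":
            handovers += 1
        elif ev["type"] == "rate_increase" and ev["delta"] > hotspot_spare_capacity:
            handovers += 1
        else:
            # Rate change absorbed by the hotspot's remaining capacity.
            hotspot_spare_capacity -= ev.get("delta", 0.0)
    return handovers

events = [{"type": "rate_increase", "delta": 0.2},
          {"type": "moved_out_of_cluster"},
          {"type": "rate_increase", "delta": 0.6}]
print(count_handovers(events, hotspot_spare_capacity=0.5))   # -> 2
```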
Saudi International Electronics, Communications and Photonics Conference | 2011
Ahmed Barnawi; Nadine Akkari; Muhammad Emran; Asif Irshad Khan
Over the past few years, mobile operators have faced enormous challenges, among them evolving user demands for personalized applications. The telecommunications industry as well as the research community have paid enormous attention to Next Generation Networks (NGN) to address this challenge. NGN is perceived as a sophisticated platform where both application developers and mobile operators cooperate to develop user applications with enhanced quality of experience. The objective of this paper is twofold: first, we present an introduction to the state-of-the-art NGN testbed to be developed at KAU, and second, we provide an initial analysis of deploying a mobile application on top of the testbed.
Digital Information and Communication Technology and Its Applications | 2015
Budoor Bawazeer; Nadine Akkari; Ghadah Aldabbagh; Nikos Dimitriou
Today, providing ubiquitous wireless networks has become more important due to the widespread use of smartphones. To achieve Always Best Connectivity (ABC), mobile nodes should hand over to a new point of attachment to maintain their connection. IEEE 802.21 is a protocol used to unify the handover process independently of the network types involved. IEEE 802.11af, known as WhiteFi, is a new standard that amends the MAC/PHY layers of IEEE 802.11 to allow WiFi to operate in the TV White Space (TVWS) spectrum. TVWS is unused licensed radio spectrum that provides superior propagation and building penetration compared to other bands. Thus, thanks to its large coverage area, WhiteFi can be used to fill the gaps between WiFi networks. This work studies the handover delay from WiFi to WhiteFi. In IEEE 802.11 networks, the scanning phase accounts for the majority of the total handover delay; in WhiteFi, the scanning time increases further because of the larger number of TVWS channels compared to WiFi. Therefore, this work proposes a new approach to reducing the scanning time based on the IEEE 802.21 standard. The proposed solution consists of a scan-free approach in which a Mobile Node (MN) keeps a record of free TVWS channels. This information is updated every 48 hours by querying the Information Server (IS) used in the IEEE 802.21 standard. To further reduce the scanning time, a scan-active approach is proposed in which the MN queries the IS to retrieve the currently active channels from a Registered Location Secure Server (RLSS), a local database introduced in the IEEE 802.11af standard. The projected performance of these two schemes is evaluated using an analytical model and compared against the initial scan-all scheme. The results show that the scan-active approach reduces the scanning delay, irrespective of the network conditions, when the number of active channels is limited. The scan-free scheme is more stable, but may spend extra time scanning empty channels.
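A simple analytical sketch comparing the expected scanning delay of the three schemes discussed above (scan-all, scan-active, scan-free). The per-channel dwell time, IS query latency, and channel counts are assumed placeholder values, not figures from the paper.

```python
# Simple scanning-delay comparison for the three schemes. Dwell time per
# channel, query latency and channel counts are assumed placeholders.

T_SCAN_PER_CHANNEL_S = 0.03   # assumed dwell time per TVWS channel
T_IS_QUERY_S = 0.05           # assumed round trip to the 802.21 Information Server

def scan_all_delay(total_channels: int) -> float:
    return total_channels * T_SCAN_PER_CHANNEL_S

def scan_active_delay(active_channels: int) -> float:
    # Query the IS/RLSS first, then probe only the currently active channels.
    return T_IS_QUERY_S + active_channels * T_SCAN_PER_CHANNEL_S

def scan_free_delay(recorded_free_channels: int) -> float:
    # Use the locally cached record of free channels (refreshed periodically);
    # some cached channels may turn out empty and still cost a probe.
    return recorded_free_channels * T_SCAN_PER_CHANNEL_S

print(scan_all_delay(40), scan_active_delay(4), scan_free_delay(10))
```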