Mohamed Hefeeda
Simon Fraser University
Publications
Featured research published by Mohamed Hefeeda.
ACM Multimedia | 2003
Mohamed Hefeeda; Ahsan Habib; Boyan Botev; Dongyan Xu; Bharat K. Bhargava
We present the design, implementation, and evaluation of PROMISE, a novel peer-to-peer media streaming system encompassing the key functions of peer lookup, peer-based aggregated streaming, and dynamic adaptation to network and peer conditions. In particular, PROMISE is built on a new application-level P2P service called CollectCast. CollectCast performs three main functions: (1) inferring and leveraging the underlying network topology and performance information for the selection of senders; (2) monitoring the status of peers and connections and reacting to peer/connection failure or degradation with low overhead; and (3) dynamically switching active and standby senders so that the collective network performance of the active senders remains satisfactory. We evaluate the performance of PROMISE using both real-world measurements and simulations, and discuss lessons learned about its practicality and further optimization.
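As a rough illustration of the sender-selection step described above, the sketch below picks active senders whose combined offered rate covers a target streaming rate and keeps the rest as standbys. The peer names, rate values, and greedy ranking by estimated loss are illustrative assumptions, not CollectCast's actual mechanism.

```python
# Hypothetical sketch: pick a set of active senders whose combined offered
# rate covers the target streaming rate, preferring senders with better
# inferred path quality (lower estimated loss). The remaining candidates
# are kept as standbys to switch in if an active sender degrades.

def select_senders(candidates, target_rate_kbps):
    """candidates: list of dicts with 'peer', 'offered_kbps', 'loss_estimate'."""
    ranked = sorted(candidates, key=lambda c: c["loss_estimate"])
    active, total = [], 0.0
    for c in ranked:
        if total >= target_rate_kbps:
            break
        active.append(c)
        total += c["offered_kbps"]
    standby = [c for c in ranked if c not in active]
    return active, standby

active, standby = select_senders(
    [{"peer": "A", "offered_kbps": 300, "loss_estimate": 0.02},
     {"peer": "B", "offered_kbps": 500, "loss_estimate": 0.05},
     {"peer": "C", "offered_kbps": 400, "loss_estimate": 0.01}],
    target_rate_kbps=800,
)
print([c["peer"] for c in active])  # senders added in quality order until 800 kbps is covered
```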
International Conference on Distributed Computing Systems | 2002
Dongyan Xu; Mohamed Hefeeda; Susanne E. Hambrusch; Bharat K. Bhargava
In this paper, we study a peer-to-peer media streaming system with the following characteristics: (1) its streaming capacity grows dynamically; (2) peers do not exhibit server-like behavior; (3) peers are heterogeneous in their bandwidth contribution; and (4) each streaming session may involve multiple supplying peers. Based on these characteristics, we investigate two problems: (1) how to assign media data to multiple supplying peers in one streaming session and (2) how to quickly amplify the system's total streaming capacity. Our solution to the first problem is an optimal media data assignment algorithm, OTS_p2p, which results in the minimum buffering delay in the subsequent streaming session. Our solution to the second problem is a distributed differentiated admission control protocol, DAC_p2p. By differentiating between requesting peers with different outbound bandwidth, DAC_p2p achieves fast system capacity amplification; benefits all requesting peers in admission rate, waiting time, and buffering delay; and creates an incentive for peers to offer their truly available outbound bandwidth.
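The data-assignment idea can be illustrated with a small sketch that splits a media segment among supplying peers in proportion to the outbound bandwidth each one contributes. This proportional split is an assumption for illustration only; it is not the OTS_p2p algorithm itself.

```python
# Illustrative sketch (not the paper's OTS_p2p algorithm): split a media
# segment among supplying peers in proportion to the outbound bandwidth
# each one offers, so faster peers carry a larger share of the data.

def assign_shares(segment_bytes, peer_bandwidth_kbps):
    total_bw = sum(peer_bandwidth_kbps.values())
    shares = {}
    assigned = 0
    for peer, bw in peer_bandwidth_kbps.items():
        share = int(segment_bytes * bw / total_bw)
        shares[peer] = share
        assigned += share
    # Give any rounding remainder to the fastest peer.
    fastest = max(peer_bandwidth_kbps, key=peer_bandwidth_kbps.get)
    shares[fastest] += segment_bytes - assigned
    return shares

print(assign_shares(1_000_000, {"p1": 128, "p2": 256, "p3": 64}))
# {'p1': 285714, 'p2': 571429, 'p3': 142857}
```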
IEEE/ACM Transactions on Networking | 2008
Mohamed Hefeeda; Osama Saleh
Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying proxy caches in different Autonomous Systems (ASes) with the goal of reducing the cost incurred by Internet service providers and alleviating the load on the Internet backbone. We conduct an eight-month measurement study to analyze the P2P traffic characteristics that are relevant to caching, such as object popularity, popularity dynamics, and object size. Our study shows that the popularity of P2P objects can be modeled by a Mandelbrot-Zipf distribution, and that several workloads exist in P2P traffic. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation and on proportional partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, a byte hit rate of up to 35% can be achieved by our algorithm, which is close to the byte hit rate achieved by an off-line optimal algorithm with complete knowledge of future requests. Our results also show that our algorithm achieves a byte hit rate that is at least 40% higher than, and at most triple, the byte hit rate of common web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, which are common in P2P systems.
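The Mandelbrot-Zipf model referenced above assigns the object of popularity rank i a probability proportional to 1/(i + q)^alpha, where alpha controls the skewness and q flattens the head of the distribution relative to plain Zipf. A minimal sketch with arbitrary parameter values:

```python
# Minimal sketch of a Mandelbrot-Zipf popularity distribution:
# p(i) is proportional to 1 / (i + q)^alpha for rank i = 1..N.
# The plateau factor q flattens the head compared with plain Zipf (q = 0).

def mandelbrot_zipf(n_objects, alpha, q):
    weights = [1.0 / (rank + q) ** alpha for rank in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = mandelbrot_zipf(n_objects=1000, alpha=0.8, q=20)
print(f"top-10 objects account for {sum(probs[:10]):.1%} of requests")
```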
Mobile Ad Hoc and Sensor Systems | 2007
Mohamed Hefeeda; Majid Bagheri
We present the design and evaluation of a wireless sensor network for the early detection of forest fires. We first present the key aspects of modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System and showing how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. We then model the forest fire detection problem as a k-coverage problem in wireless sensor networks. In addition, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it delivers only the data that is of interest to the application. We validate several aspects of our design using simulation.
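To illustrate the aggregation idea, the sketch below has a node forward a report only when a locally computed danger index crosses a threshold. The index formula and the threshold are placeholders, not the actual FWI System equations.

```python
# Hypothetical sketch of FWI-style data aggregation: a node only forwards a
# report when its locally computed danger index crosses a threshold, saving
# energy during the (common) low-danger periods. The index formula and the
# threshold below are placeholders, not the real FWI System equations.

def danger_index(temperature_c, relative_humidity, wind_kmh):
    # Placeholder heuristic: hotter, drier, windier -> higher index.
    return max(0.0, temperature_c - 0.2 * relative_humidity + 0.5 * wind_kmh)

def maybe_report(readings, threshold=30.0):
    index = danger_index(**readings)
    return {"index": round(index, 1), **readings} if index >= threshold else None

print(maybe_report({"temperature_c": 34, "relative_humidity": 20, "wind_kmh": 15}))  # reported
print(maybe_report({"temperature_c": 18, "relative_humidity": 70, "wind_kmh": 5}))   # suppressed (None)
```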
International Conference on Network Protocols | 2006
Osama Saleh; Mohamed Hefeeda
Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying proxy caches in different autonomous systems (ASes) with the goal of reducing the cost incurred by Internet service providers and alleviating the load on the Internet backbone. We conduct a measurement study to model the popularity of P2P objects in different ASes. Our study shows that the popularity of P2P objects can be modeled by a Mandelbrot-Zipf distribution, regardless of the AS. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation and on partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, less than 10% of the total traffic, a byte hit rate of up to 35% can be achieved by our algorithm, which is close to the byte hit rate achieved by an off-line optimal algorithm with complete knowledge of future requests. Our results also show that our algorithm achieves a byte hit rate that is at least 40% higher than, and at most triple, the byte hit rate of common Web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, which are common in P2P systems.
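The segmentation and partial-admission idea can be sketched as follows: admit only a prefix of an object's segments, sized by how often the object has been requested so far. The segment size, growth rule, and parameter values are illustrative assumptions, not the paper's exact policy.

```python
# Illustrative sketch (not the paper's exact policy): admit only a prefix of
# an object's segments, sized in proportion to how often the object has been
# requested so far, instead of caching whole objects on first access.

SEGMENT_BYTES = 1 << 20  # 1 MiB segments (arbitrary choice)

def segments_to_admit(object_bytes, request_count, base_fraction=0.1):
    total_segments = -(-object_bytes // SEGMENT_BYTES)  # ceiling division
    fraction = min(1.0, base_fraction * request_count)
    return max(1, int(total_segments * fraction))

# A 700 MiB object seen once vs. seen ten times:
print(segments_to_admit(700 * (1 << 20), request_count=1))   # 70  -> small prefix cached
print(segments_to_admit(700 * (1 << 20), request_count=10))  # 700 -> whole object cached
```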
IEEE International Conference on Computer Communications | 2007
Mohamed Hefeeda; Majid Bagheri
We propose new algorithms to achieve k-coverage in dense sensor networks. In such networks, covering sensor locations approximates covering the whole area. However, it has been shown before that selecting the minimum set of sensors to activate from an already deployed set of sensors is NP-hard. We propose an efficient approximation algorithm which achieves a solution of size within a logarithmic factor of the optimal. We prove that our algorithm is correct and analyze its complexity. We implement our algorithm and compare it against two others in the literature. Our results show that the logarithmic factor is only a worst-case upper bound and the solution size is close to the optimal in most cases. A key feature of our algorithm is that it can be implemented in a distributed manner with local information and low message complexity. We design and implement a fully distributed version of our algorithm. Our distributed algorithm does not require that sensors know their locations. Comparison with two other distributed algorithms in the literature indicates that our algorithm: (i) converges much faster than the others, (ii) activates a near-optimal number of sensors, and (iii) significantly prolongs (almost doubles) the network lifetime because it consumes much less energy than the other algorithms.
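A centralized greedy sketch of the underlying set multi-cover problem is shown below: repeatedly activate the sensor that reduces the remaining coverage deficit the most, until every point is covered k times. This is the textbook greedy approximation, not the paper's distributed algorithm, and the coverage test and toy inputs are assumptions.

```python
# Greedy set multi-cover sketch for k-coverage (illustrative; not the paper's
# distributed algorithm): every point starts with a deficit of k, and each
# activated sensor lowers the deficit of the points it covers.

def greedy_k_cover(points, sensors, covers, k):
    """covers(sensor, point) -> True if that sensor covers that point."""
    deficit = {p: k for p in points}   # how many more active sensors each point needs
    active = []
    remaining = list(sensors)
    while any(d > 0 for d in deficit.values()) and remaining:
        # Pick the sensor that covers the most points still needing coverage.
        best = max(remaining,
                   key=lambda s: sum(1 for p in points if deficit[p] > 0 and covers(s, p)))
        remaining.remove(best)
        active.append(best)
        for p in points:
            if deficit[p] > 0 and covers(best, p):
                deficit[p] -= 1
    return active

# Toy 1-D example: a sensor at position s covers points within 1 unit.
print(greedy_k_cover(points=[0, 1], sensors=[0.2, 0.8, 0.5],
                     covers=lambda s, p: abs(s - p) <= 1, k=2))  # [0.2, 0.8]
```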
International Conference on Network Protocols | 2007
Mohamed Hefeeda; Hossein Ahmadi
We propose a new probabilistic coverage protocol (denoted by PCP) that considers probabilistic sensing models. PCP is fairly general and can be used with different sensing models. In particular, PCP requires the computation of only a single parameter from the adopted sensing model, while everything else remains the same. We show how this parameter can be derived in general, and we carry out the calculations for two example sensing models: (i) the probabilistic exponential sensing model, and (ii) the commonly used deterministic disk sensing model. The first model is chosen because it is conservative in terms of estimating sensing capacity, and it has been used before in another probabilistic coverage protocol, which enables us to conduct a fair comparison. Because it is conservative, the exponential sensing model can be used as a first approximation for many other sensing models. The second model is chosen to show that our protocol can easily function as a deterministic coverage protocol. In this case, we compare our protocol against two recent deterministic protocols that were shown to outperform others in the literature. Our comparisons indicate that our protocol outperforms all other protocols in several aspects, including the number of activated sensors and the total energy consumed. We also demonstrate the robustness of our protocol against random node failures, node location inaccuracy, and imperfect time synchronization.
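As an illustration of deriving the single parameter PCP needs, assume detection probability decays with distance as p(d) = exp(-alpha * d), which is one common form of the exponential model; the paper's exact form may differ. The derived parameter is then the largest distance at which detection probability still meets a required threshold theta.

```python
import math

# Hypothetical illustration of deriving one parameter from a sensing model:
# assume p(d) = exp(-alpha * d). Solving exp(-alpha * r) = theta gives
# r = -ln(theta) / alpha, the largest distance at which the detection
# probability still meets the threshold theta.

def effective_sensing_range(alpha, theta):
    return -math.log(theta) / alpha

# Under the deterministic disk model of radius R, the same parameter is simply R.
print(f"{effective_sensing_range(alpha=0.1, theta=0.9):.2f} m")  # ~1.05 m
```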
ACM Transactions on Multimedia Computing, Communications, and Applications | 2005
Yi-Cheng Tu; Jianzhong Sun; Mohamed Hefeeda; Sunil Prabhakar
Recent research efforts have demonstrated the great potential of building cost-effective media streaming systems on top of peer-to-peer (P2P) networks. A P2P media streaming architecture can reach a large streaming capacity that is difficult to achieve in conventional server-based streaming services. Hybrid streaming systems that combine the use of dedicated streaming servers and P2P networks were proposed to build on the advantages of both paradigms. However, the dynamics of such systems and the impact of various factors on system behavior are not yet well understood. In this article, we present an analytical framework to quantitatively study the features of a hybrid media streaming model. Based on this framework, we derive an equation to describe the capacity growth of a single-file streaming system. We then extend the analysis to multi-file scenarios. We also show how the system achieves optimal allocation of server bandwidth among different media objects. The unpredictable departure/failure of peers is a critical factor that affects the performance of P2P systems. We utilize the concept of peer lifespan to model peer failures. The original capacity growth equation is enhanced with coefficients generated from peer lifespans that follow an exponential distribution. We also propose a failure model under arbitrarily distributed peer lifespans. Results from large-scale simulations support our analysis.
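A toy discrete-time sketch of the capacity-amplification dynamics discussed above is given below. The admission rule, contribution factor, and exponential-lifespan handling are illustrative assumptions, not the paper's derived equations.

```python
import random

# Toy sketch of hybrid-streaming capacity growth (illustrative assumptions,
# not the paper's derived equations): each round, the current capacity admits
# that many new peers; from the next round on, every admitted peer that is
# still online contributes a fraction of a streaming slot, and peers depart
# after an exponentially distributed lifespan.

def simulate(rounds, server_capacity=10, contribution=0.5, mean_lifespan=20):
    random.seed(1)
    lifespans = []   # remaining rounds of currently serving peers
    history = []
    for _ in range(rounds):
        capacity = server_capacity + int(contribution * len(lifespans))
        history.append(capacity)
        lifespans = [t - 1 for t in lifespans if t > 1]   # ageing and departures
        lifespans += [max(1, round(random.expovariate(1 / mean_lifespan)))
                      for _ in range(capacity)]           # newly admitted peers
    return history

print(simulate(rounds=10))  # capacity ramps up from the initial server capacity
```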
IEEE Transactions on Multimedia | 2011
Somsubhra Sharangi; Ramesh Krishnamurti; Mohamed Hefeeda
The Multicast/Broadcast Service (MBS) feature of mobile WiMAX networks is a promising technology for providing wireless multimedia, because it allows the delivery of multimedia content to large-scale user communities in a cost-efficient manner. In this paper, we consider WiMAX networks that transmit multiple video streams encoded in a scalable manner to mobile receivers using the MBS feature. We focus on two research problems in such networks: 1) maximizing the video quality and 2) minimizing energy consumption for mobile receivers. We formulate and solve the substream selection problem to maximize the video quality, which arises when multiple scalable video streams are broadcast to mobile receivers with limited resources. We show that this problem is NP-complete, and design a polynomial time approximation algorithm to solve it. We prove that the solutions computed by our algorithm are always within a small constant factor of the optimal solutions. In addition, we extend our algorithm to reduce the energy consumption of mobile receivers. This is done by transmitting the selected substreams in bursts, which allows mobile receivers to turn off their wireless interfaces to save energy. We show how our algorithm constructs burst transmission schedules that reduce energy consumption without sacrificing the video quality. Using extensive simulation and mathematical analysis, we show that the proposed algorithm: 1) is efficient in terms of execution time, 2) achieves high radio resource utilization, 3) maximizes the received video quality, and 4) minimizes the energy consumption for mobile receivers.
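The substream selection problem can be illustrated with a simple greedy sketch that adds the next layer of whichever stream offers the best quality gain per unit of bandwidth while the budget allows. This greedy is for illustration only and is not the paper's approximation algorithm; the stream names, rates, and quality gains are made up.

```python
# Illustrative greedy sketch of substream (layer) selection under a bandwidth
# budget (not the paper's approximation algorithm). Layers of each scalable
# stream must be taken in order, so the next candidate for a stream is always
# its lowest unselected layer.

def select_substreams(streams, budget_kbps):
    """streams: {name: [(layer_kbps, quality_gain), ...]} ordered base -> enhancement."""
    chosen = {name: 0 for name in streams}  # number of layers selected per stream
    used = 0.0
    while True:
        candidates = []
        for name, layers in streams.items():
            nxt = chosen[name]
            if nxt < len(layers) and used + layers[nxt][0] <= budget_kbps:
                kbps, gain = layers[nxt]
                candidates.append((gain / kbps, name))
        if not candidates:
            return chosen
        _, best = max(candidates)
        used += streams[best][chosen[best]][0]
        chosen[best] += 1

streams = {
    "news":  [(200, 30), (150, 10), (150, 5)],
    "sport": [(300, 35), (200, 15), (200, 8)],
}
print(select_substreams(streams, budget_kbps=900))  # {'news': 2, 'sport': 2}
```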
IEEE Transactions on Parallel and Distributed Systems | 2010
Mohamed Hefeeda; Hossein Ahmadi
Various sensor types, e.g., temperature, humidity, and acoustic, sense physical phenomena in different ways and thus are expected to have different sensing models. Even for the same sensor type, the sensing model may need to be changed in different environments. Designing and testing a different coverage protocol for each sensing model is a costly task. To address this challenge, we propose a new probabilistic coverage protocol (denoted by PCP) that can employ different sensing models. We show that PCP works with the common disk sensing model as well as probabilistic sensing models, with minimal changes. We analyze the complexity of PCP and prove its correctness. In addition, we conduct an extensive simulation study of large-scale sensor networks to rigorously evaluate PCP and compare it against other deterministic and probabilistic protocols in the literature. Our simulation demonstrates that PCP is robust, and it can function correctly in the presence of random node failures, inaccuracies in node locations, and imperfect time synchronization of nodes. Our comparisons with other protocols indicate that PCP outperforms them in several aspects, including the number of activated sensors, the total energy consumed, and the network lifetime.
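One evaluation aspect mentioned above, robustness to random node failures, can be illustrated with a small Monte Carlo estimate of the fraction of the field that stays covered under the disk model. The field size, sensing radius, failure probability, and sample count below are arbitrary assumptions, not the paper's simulation setup.

```python
import random

# Monte Carlo sketch (illustrative, not the paper's simulator): estimate the
# fraction of a square field still covered under the disk sensing model after
# each deployed node fails independently with some probability.

def covered_fraction(n_nodes, radius, failure_prob, field=100.0, samples=5000):
    random.seed(7)
    nodes = [(random.uniform(0, field), random.uniform(0, field))
             for _ in range(n_nodes)]
    alive = [p for p in nodes if random.random() > failure_prob]
    hits = 0
    for _ in range(samples):
        x, y = random.uniform(0, field), random.uniform(0, field)
        if any((x - nx) ** 2 + (y - ny) ** 2 <= radius ** 2 for nx, ny in alive):
            hits += 1
    return hits / samples

print(f"coverage with 20% failures: {covered_fraction(400, 10.0, 0.2):.1%}")
```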