Hossam Mahmoud Ahmad Fahmy
Ain Shams University
Publication
Featured research published by Hossam Mahmoud Ahmad Fahmy.
Computer Networks | 2001
Hossam Mahmoud Ahmad Fahmy
The performance of a distributed system is affected by the various functions of its components. The interaction between components such as network nodes, computer systems, and system programs is examined, with special interest accorded to its effect on system reliability. At affordable time and space costs, the analytic hierarchy process (AHP) is used to determine how the reliability of a distributed system may be controlled by appropriately assigning weights to its components. Illustrative case studies that display the system structure, the assignment of weights, and the AHP handling are presented.
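As a hedged illustration of the AHP weighting step described above, the sketch below derives priority weights from a reciprocal pairwise-comparison matrix and checks Saaty's consistency ratio; the matrix values and the three component classes are illustrative assumptions, not data from the paper.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Return priority weights from a reciprocal pairwise-comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()        # normalize to positive weights

def consistency_ratio(pairwise: np.ndarray) -> float:
    """Saaty's consistency ratio; values below 0.1 are usually acceptable."""
    n = pairwise.shape[0]
    lam_max = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random-index table
    return ci / ri

# Hypothetical comparisons of three component classes by their impact on
# reliability: network nodes vs. computer systems vs. system programs.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print("weights:", ahp_weights(A))
print("CR:", consistency_ratio(A))
```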
Archive | 2016
Hossam Mahmoud Ahmad Fahmy
This chapter presents an in-depth study of simulators and emulators, with care accorded to their features, implementation, and use. Since emulators are hardware dependent, selecting one is straightforward. With the wide variety of simulators, on the other hand, the choice is more complex and depends mainly on how easy the simulator is to use and how well it fulfills the model requirements. Remarkably, different simulators do not give similar results for the same model, owing to their different underlying features and implementations.

Simulation has proven to be a valued tool in many areas where analytical methods are not applicable and experimentation is not feasible. Researchers generally use simulation to analyze system performance prior to physical design, or to compare multiple alternatives over a wide range of conditions. Notably, errors in simulation models or improper data analysis often produce incorrect or misleading results. Although an extensive range of performance evaluation tools for WSNs exists, it is impractical to have an all-in-one integrated tool that simultaneously supports simulation, emulation, and testbed implementation. In fact, there is no all-in-one flexible simulator for WSNs. Each simulator exhibits different features and models; each has advantages and weaknesses. Different simulators are appropriate and most effective under particular conditions, so when choosing a simulation tool from the available options, it is fruitful to select the simulator best suited to the intended study and targeted application. It is also recommended to weigh the pros and cons of different simulators that do the same job, the level of complexity of each simulator, and its availability, extensibility, and scalability. WSN applications usually consist of a large number of sensor nodes; it is therefore advisable to settle on a simulation tool capable of simulating large-scale WSNs. Essentially, the reported use of a simulator, besides its published simulation results, should not be overlooked before deciding which simulator to prefer. The exercises at the end of the chapter are designed to pinpoint the simulator comparison and selection criteria suitable to the model under study.

When building a simulator from the bottom up, many decisions need to be made. Developers must consider the pros and cons of different programming languages; whether the simulation is event-based or time-based; a component-based or object-oriented architecture; the level of complexity of the simulator; which features to include and which to omit; the use of parallel execution; the ability to interact with real nodes; and other design choices pertinent to the target application. For researchers, choosing which simulator to use is not an easy task; a full understanding of one's own model is the first major step before looking into the bookshelf of simulators. Then follows a survey of the available simulators that can do the job. A major step comes after: the careful weighing of the simulators' features against the model under study and the programming capabilities of the researcher.
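To make the event-based versus time-based design choice concrete, here is a minimal sketch of an event-driven simulation kernel of the kind most WSN simulators build on; the node behavior and timings are illustrative assumptions, not taken from any particular simulator.

```python
import heapq

class EventSimulator:
    """Minimal discrete-event kernel: a clock plus a time-ordered event queue."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []          # (time, seq, callback, args) min-heap
        self._seq = 0             # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback, *args):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, callback, args))
        self._seq += 1

    def run(self, until=float("inf")):
        # Jump directly from event to event instead of ticking fixed time steps.
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, callback, args = heapq.heappop(self._queue)
            callback(*args)

sim = EventSimulator()

def send(node, hop):
    print(f"t={sim.clock:.3f}s node {node} transmits hop {hop}")
    if hop < 3:                               # forward along a 3-hop path
        sim.schedule(0.02, send, node + 1, hop + 1)

sim.schedule(0.0, send, 0, 1)
sim.run()
```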
Theoretical Computer Science | 1990
Hossam Mahmoud Ahmad Fahmy
In this paper, a method for analyzing large Petri nets by partitioning is proposed. This method permits a great saving of computation time and storage. Wasted effort in the analysis of large Petri nets is spared by restricting attention to the partitions of interest. The characteristics of the required places can be studied by involving them in a partition. It is shown that partitioning preserves the characteristics of the main Petri net. The reachability tree method or the matrix equations approach, which were intractable at the whole-net level, may be used at the subnet level to obtain the needed analysis criteria.
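As a hedged sketch of the matrix-equations approach mentioned above, the fragment below applies the standard Petri net state equation M' = M + Cx at the level of a small net; the net itself is illustrative, not one of the paper's case studies.

```python
import numpy as np

# Incidence matrix C of a tiny 3-place, 3-transition cycle (pure net assumed,
# i.e. no self-loops, so enabledness can be checked from C alone).
C = np.array([[-1,  0,  1],    # place p0: consumed by t0, produced by t2
              [ 1, -1,  0],    # place p1: produced by t0, consumed by t1
              [ 0,  1, -1]])   # place p2: produced by t1, consumed by t2

M0 = np.array([1, 0, 0])       # initial marking: one token in p0

def fire(M, t):
    """Fire transition t if the result is a valid marking; else return None."""
    M_next = M + C[:, t]
    return M_next if (M_next >= 0).all() else None

M1 = fire(M0, 0)               # t0 moves the token p0 -> p1
print(M0, "->", M1)            # [1 0 0] -> [0 1 0]
```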
Archive | 2016
Hossam Mahmoud Ahmad Fahmy
Several considerations must be taken into account when developing protocols for wireless sensor networks. Traditional thinking, where the focus is on quality of service, is somewhat revised: in WSNs, QoS is compromised to conserve energy and preserve the life of the network. Care must be accorded at every level of the protocol stack to conserve energy, and to allow individual nodes to reconfigure the network and modify their set of tasks according to the resources available. The protocol stack for WSNs consists of five standard protocol layers trimmed to satisfy typical sensor features, namely, the application layer, transport layer, network layer, data-link layer, and physical layer. These layers address network dynamics and energy efficiency. Functions such as localization, coverage, storage, synchronization, security, and data aggregation and compression are network services that enable proper sensor functioning. The implementation of WSN protocols at the different layers of the protocol stack aims at minimizing energy consumption and end-to-end delay, and at maintaining system efficiency. Traditional networking protocols are not designed to meet these WSN requirements; hence, new energy-efficient protocols have been proposed for all layers of the protocol stack. These protocols employ cross-layer optimization by supporting interactions across the protocol layers: protocol state information at a particular layer is shared across all the layers to meet the specific requirements of the WSN.
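The cross-layer sharing of protocol state can be sketched as follows; the state fields, the cost function, and all constants are illustrative assumptions rather than any standardized WSN interface.

```python
from dataclasses import dataclass

@dataclass
class CrossLayerState:
    """Shared state each layer publishes into and reads from (assumed fields)."""
    residual_energy_j: float = 100.0     # physical layer: battery estimate
    link_quality: float = 1.0            # data-link layer: 0..1 quality estimate
    congestion_level: float = 0.0        # transport layer: 0..1

class RoutingLayer:
    """Network layer that consults state published by the other layers."""
    def __init__(self, state: CrossLayerState):
        self.state = state

    def link_cost(self, base_cost: float) -> float:
        # Penalize routes through this node when its energy is low, its radio
        # link is poor, or its buffers are congested -- trading QoS for lifetime.
        energy_penalty = 1.0 / max(self.state.residual_energy_j, 1e-6)
        quality_penalty = 2.0 - self.state.link_quality
        congestion_penalty = 1.0 + self.state.congestion_level
        return base_cost * quality_penalty * congestion_penalty + energy_penalty

state = CrossLayerState(residual_energy_j=12.5, link_quality=0.7, congestion_level=0.2)
print(RoutingLayer(state).link_cost(base_cost=1.0))   # higher cost, node avoided
```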
International Journal of Computer Mathematics | 1993
Hossam Mahmoud Ahmad Fahmy
In this paper, two methods for the analysis of large Petri nets by partitioning are proposed. These methods permit a great saving of computation time and storage. Wasted effort in the analysis of large Petri nets is spared by restricting attention to the partitions of interest. The characteristics of the required places can be studied by involving them in a partition. It is shown that partitioning preserves the characteristics of the main Petri net. The reachability tree method or the matrix equations approach, which were intractable at the whole-net level, may be used at the subnet level to obtain the needed analysis criteria.
International Journal of Computer Mathematics | 1990
Hossam Mahmoud Ahmad Fahmy
In this paper, a method for the analysis of large Petri nets by partitioning is proposed. This method permits a great saving of computation time and storage, which is especially useful when using mini- or microcomputers. It is shown that partitioning preserves the characteristics of the main Petri net. The reachability tree method or the matrix equations approach, which were intractable at the whole-net level, may be used at the subnet level to obtain the needed analysis criteria.
Archive | 2016
Hossam Mahmoud Ahmad Fahmy
Transport layer protocols in WSNs should support multiple applications, variable reliability, packet-loss recovery, and congestion control. A transport layer protocol should be generic and independent of the application. Transport protocols are quite abundant, with varying design goals to match their intended use. Depending on their functions, WSN applications can tolerate different levels of packet loss. Packet loss may be due to bad radio communication, congestion, packet collision, full memory capacity, or node failures; it can result in wasted energy and degraded quality of service (QoS) in data delivery. Detecting packet loss and correctly recovering missing packets can improve throughput and energy expenditure. There are two approaches to packet recovery: hop-by-hop and end-to-end. Hop-by-hop retransmission requires that an intermediate node cache the packet information in its memory; this method is more energy efficient since the retransmission distance is shorter. For end-to-end retransmission, the source caches all the packet information and retransmits when there is a packet loss. End-to-end retransmission allows for variable reliability, whereas hop-by-hop retransmission performs better when reliability requirements are high. A congestion control mechanism monitors and detects congestion, thereby preserving energy: before congestion occurs, the source is notified to reduce its sending rate. Congestion control helps reduce retransmission and prevents sensor buffer overrun. As in packet-loss recovery, there are two approaches to congestion control: hop-by-hop and end-to-end. The hop-by-hop mechanism requires every node along the path to monitor buffer overflows; it lessens congestion at a faster rate than the end-to-end mechanism, since when a sensor node detects congestion, all nodes along the path change their behavior. The end-to-end mechanism relies on the end nodes to detect congestion; congestion is flagged when a timeout occurs or redundant acknowledgements are received. There are tradeoffs between the hop-by-hop and end-to-end approaches to packet-loss recovery and congestion control; depending on the type, reliability, and time sensitivity of the application, one approach may be better than the other. As presented in detail throughout this chapter, transport layer protocols in WSNs address, with different emphases, the above design issues.
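A minimal sketch of the hop-by-hop loss recovery described above: each intermediate node caches forwarded packets so a retransmission travels only one hop. The class, the cache-eviction policy, and the NACK signaling are illustrative assumptions, not a specific WSN transport protocol.

```python
from collections import OrderedDict

class HopCache:
    """Per-node packet cache supporting hop-by-hop retransmission."""
    def __init__(self, capacity: int = 32):
        self.capacity = capacity
        self.cache = OrderedDict()            # seq -> payload, oldest first

    def forward(self, seq, payload, send):
        """Cache the packet, then pass it on to the next hop."""
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)    # evict oldest under memory pressure
        self.cache[seq] = payload
        send(seq, payload)

    def on_nack(self, seq, send):
        """The next hop reported a gap: retransmit locally if still cached."""
        if seq in self.cache:
            send(seq, self.cache[seq])
            return True
        return False                          # fall back to upstream recovery

sent = []
send = lambda seq, payload: sent.append((seq, payload))
node = HopCache(capacity=4)
node.forward(1, b"reading:23.5C", send)
node.on_nack(1, send)                         # one-hop retransmission from cache
print(len(sent))                              # 2: original send plus retransmit
```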
Archive | 2016
Hossam Mahmoud Ahmad Fahmy
Testbeds are representative of WSNs: they support the diversity of their hardware and software constituents, they are deployed in the same conditions and intended environment, and they make use of the protocols to be used at a larger scale. Testbeds are intended to safeguard would-be WSN deployments from malfunctions that may not be seen in theoretical simulations. Malfunctions may stem from inconvenient hardware, buggy software, and deployments prone to energy depletion and radio interference. By temporarily tolerating faults that cannot be accepted in everyday production WSNs, testbeds help find the curing solutions. Many testbeds are reported in the literature; not all were fully implemented, and not all are still available. Knowledge is to be acquired from those who gained it by researching, trying, and experimenting; this chapter therefore considers testbeds with authentic information even if they have ceased to exist. Pioneering testbeds, as fully illustrated, continue to offer models in concepts, implementation, and applications. Some testbeds are built for general use, while others are meant for particular applications such as visual surveillance. As fully detailed in this chapter, and based on researchers' and practitioners' interests, testbeds can be classified under several categories: they may be full-scale or miniaturized; deployed in a 2D or 3D pattern; mobile or static; providing Web services or accessible only from the deployment location; limited to homogeneous platforms or extended to support heterogeneity; offering hybrid simulation as a tool for enhanced analysis or content with experimental analysis alone. Testbeds and simulators are complementary; ideally, getting the benefits of both is the best option. Theoretical simulation studies provide the numerical metrics that are truly needed for practical testbed implementation and deployment. But is this topmost approach always possible? Not all wishes are attainable. Testbeds are the expensive choice, in both money and effort; simulation is realistically the less risky resort when budgets and time are short and when a full deployment is not required. Simulators are the inevitable tools for analysis; they help preview the performance metrics needed for proper testbed deployment. The next chapter considers the most common WSN simulators in full detail.
international conference on electronics, circuits, and systems | 2007
Hossam Mahmoud Ahmad Fahmy; Salma A. Ghoneim
The aim of this paper is to introduce the implementation of a MAC protocol on a load-balanced short-path routing algorithm [1], and to show the effect of the MAC requirements on the behavior of the protocol. This work provides an estimate of how the overhead introduced by the MAC protocol might affect the behavior of the routing algorithm, i.e., how it would affect the load on the nodes of the network and the lifetime of each node. We use a MAC protocol with CSMA/CA for collision avoidance. The load-balanced short-path routing algorithm [1] uses only short paths to minimize latency, and achieves good load balance. In the absence of MAC overhead, it was proved that the routing path is at most four times the shortest path length and that the maximum load on any node is at most three times that of the most load-balanced algorithm without a path-length constraint. We show in this work that the MAC overhead increases the overall load on the network compared with the case without a MAC protocol.
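To illustrate where MAC overhead comes from, the sketch below estimates the time a CSMA/CA sender spends on inter-frame spacing and binary exponential backoff before a successful transmission; all timing constants and the collision probability are illustrative assumptions, not the parameters of the paper's experiments.

```python
import random

DIFS_US = 50          # assumed inter-frame space, microseconds
SLOT_US = 20          # assumed backoff slot duration, microseconds
CW_MIN, CW_MAX = 16, 1024

def csma_ca_overhead_us(collision_prob: float, rng: random.Random) -> float:
    """Return the MAC overhead (microseconds) accumulated before success."""
    cw, total = CW_MIN, 0.0
    while True:
        total += DIFS_US + rng.randrange(cw) * SLOT_US
        if rng.random() >= collision_prob:    # transmission got through
            return total
        cw = min(cw * 2, CW_MAX)              # binary exponential backoff

rng = random.Random(42)
samples = [csma_ca_overhead_us(0.2, rng) for _ in range(10_000)]
print(f"mean MAC overhead: {sum(samples) / len(samples):.1f} us")
```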
international conference on electronics, circuits, and systems | 2002
Salma A. Ghoneim; Hossam Mahmoud Ahmad Fahmy
An attempt to capture resource aging and specify when to do preventive maintenance (PM) is presented in this paper. A composite measure termed the DRM (Deteriorating Response Measure) is defined. It is based on the analysis of the deteriorating speed of the resource against time and load. This speed is characterized as follows: (1) it decays with increased load; (2) it does not increase again when the load decreases, which indicates a loss of elasticity. The DRM is mathematically formulated based on a queueing system model. Specifying when to do preventive maintenance depends on the decision maker's perspective of the manifestation of aging; the paper tries to formalize this dependence. Three degrading performance metrics are defined for a DRM: (1) a decaying restored speed value; (2) an increasing speed offset ratio (plasticity index); and (3) an increasing operation interval length offset ratio. These metrics can be used separately or in aggregate.
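The exact DRM formulation comes from the paper's queueing model and is not reproduced here; the sketch below only illustrates how the three degradation metrics named above could be computed from observed speeds, under assumed, purely illustrative definitions.

```python
def speed_offset_ratio(nominal_speed: float, restored_speed: float) -> float:
    """Plasticity index (illustrative): fraction of speed not recovered
    after the load is removed -- higher values suggest loss of elasticity."""
    return (nominal_speed - restored_speed) / nominal_speed

def interval_offset_ratio(nominal_interval: float, observed_interval: float) -> float:
    """Illustrative metric: relative growth in operation interval length."""
    return (observed_interval - nominal_interval) / nominal_interval

# Hypothetical measurements of an aging resource:
nominal = 1000.0      # requests/s when new
restored = 850.0      # metric 1: restored speed after unloading, decays with age
print(speed_offset_ratio(nominal, restored))   # metric 2: 0.15, unrecovered speed
print(interval_offset_ratio(10.0, 11.8))       # metric 3: 0.18, slower operations
```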