Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where M. B. Abdelhalim is active.

Publication


Featured research published by M. B. Abdelhalim.


International Workshop on System-on-Chip for Real-Time Applications | 2006

Hardware Software Partitioning using Particle Swarm Optimization Technique

M. B. Abdelhalim; A. E. Salama; Serag E.-D. Habib

In this paper, the authors investigate the application of the particle swarm optimization (PSO) technique to the hardware/software partitioning problem. PSO is attractive for this problem because it offers reasonable coverage of the design space together with O(n) main-loop execution time, where n is the number of proposed solutions that evolve to provide the final solution. The authors carried out several tests on a hypothetical, relatively large hardware/software partitioning problem using the PSO algorithm as well as the genetic algorithm (GA), another evolutionary technique, and found that PSO outperforms GA in both cost function and execution time. For the unconstrained design problem, they tested several hybrid combinations of PSO and GA: PSO then GA, GA then PSO, GA followed by GA, and PSO followed by PSO. PSO followed by GA gives little or no improvement, while GA followed by PSO gives the same results as PSO alone. PSO followed by another PSO round gave the best result, as it allows another round of domain exploration: the second PSO round assigns new randomized velocities to the particles while keeping the best particle positions obtained in the first round. The paper proposes to name this successive PSO algorithm the re-excited PSO algorithm.
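The re-excitation idea is straightforward to sketch. Below is a minimal, illustrative binary-PSO implementation in Python: each particle is a 0/1 vector (software/hardware) over hypothetical per-module costs, a sigmoid of the velocity gives the probability of mapping a module to hardware, and each new round re-randomizes the velocities while keeping the personal and global bests. The module costs and weights are invented for illustration, not taken from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical per-module costs (illustrative, not from the paper):
# area if the module goes to hardware, execution time if it stays in software.
N = 16
HW_AREA = [random.uniform(1.0, 10.0) for _ in range(N)]
SW_TIME = [random.uniform(1.0, 10.0) for _ in range(N)]

def cost(x):
    # x[i] == 1 -> module i in hardware, 0 -> software.
    hw = sum(a for a, b in zip(HW_AREA, x) if b)
    sw = sum(t for t, b in zip(SW_TIME, x) if not b)
    return 0.5 * hw + 0.5 * sw  # illustrative equal weighting

def re_excited_pso(rounds=3, particles=20, iters=150,
                   w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    pos = [[random.randint(0, 1) for _ in range(N)] for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(rounds):
        # Re-excitation: fresh random velocities each round,
        # but the personal and global bests survive.
        vel = [[random.uniform(-vmax, vmax) for _ in range(N)]
               for _ in range(particles)]
        for _ in range(iters):
            for i in range(particles):
                for d in range(N):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                    # Binary PSO: sigmoid(velocity) is Pr(bit == 1).
                    pos[i][d] = int(random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])))
                f = cost(pos[i])
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], f
                    if f < gbest_f:
                        gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = re_excited_pso()
print("best partition:", best, "cost: %.2f" % best_f)
```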


Design Automation for Embedded Systems | 2011

An integrated high-level hardware/software partitioning methodology

M. B. Abdelhalim; Serag E.-D. Habib

Embedded systems are widely used in many sophisticated applications. To shorten the time-to-market cycle, hardware/software co-design has become one of the main methodologies in modern embedded systems. The most important challenge in embedded system design is partitioning, i.e. deciding which modules of the system should be implemented in hardware and which in software. Finding an optimal partition is hard because of the large number and varied characteristics of the modules that have to be considered.

In this article, we develop a new high-level hardware/software partitioning methodology. Two novel features characterize this methodology. First, the Particle Swarm Optimization (PSO) technique is introduced to the hardware/software partitioning field. Second, the hardware is modeled using two extreme implementations that bound the different hardware scheduling alternatives. Our methodology further partitions the design into hardware and software modules at the early Control-Data Flow Graph (CDFG) level of the design, thanks to improved modeling techniques using intermediate-granularity functional modules. A new restarting technique, called Re-Excited PSO, is applied to PSO to avoid premature convergence. Our numerical results prove the usefulness of the proposed technique.

The target technology is Field Programmable Gate Arrays (FPGAs). We developed FPGA-based estimation techniques to evaluate the costs of implementing the design components: area, delay, latency, and power consumption for both the hardware and software implementations. Hardware/software communication is also taken into consideration.

The aforementioned methodology is embodied in an integrated CAD tool for hardware/software co-design. The tool accepts a behavioral, un-timed, algorithmic-level VHDL design representation and outputs a valid hardware/software partition and schedule for the design, subject to a set of area/power/delay constraints. The tool is code-named CUPSHOP (Cairo University PSO-based Hardware/SOftware Partitioning tool). Finally, a JPEG-encoder case study is used to validate and contrast our partitioning methodology against prior-art methodologies.
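The "two extreme implementations" idea can be made concrete with a toy cost evaluator. In the sketch below, all module names, delays, areas, and communication costs are hypothetical (not the paper's data): a fully serial hardware schedule gives an upper delay bound, a fully parallel one gives a lower bound, and communication is charged on edges crossing the hardware/software boundary.

```python
# Hypothetical module-level figures (invented for illustration).
modules = {
    "dct":   {"hw_delay": 2.0, "sw_delay": 9.0, "hw_area": 120},
    "quant": {"hw_delay": 1.0, "sw_delay": 4.0, "hw_area": 60},
    "huff":  {"hw_delay": 3.0, "sw_delay": 6.0, "hw_area": 200},
}
# (src, dst, transfer cost charged only if the edge crosses the HW/SW boundary)
edges = [("dct", "quant", 0.5), ("quant", "huff", 0.3)]

def evaluate(assign):  # assign: module name -> "hw" or "sw"
    hw = [m for m in modules if assign[m] == "hw"]
    sw_time = sum(modules[m]["sw_delay"] for m in modules if assign[m] == "sw")
    # The two extreme hardware schedules bound any real schedule:
    hw_serial = sum(modules[m]["hw_delay"] for m in hw)             # no parallelism
    hw_parallel = max((modules[m]["hw_delay"] for m in hw), default=0.0)
    comm = sum(c for a, b, c in edges if assign[a] != assign[b])
    return {"delay_upper": sw_time + hw_serial + comm,
            "delay_lower": sw_time + hw_parallel + comm,
            "hw_area": sum(modules[m]["hw_area"] for m in hw)}

print(evaluate({"dct": "hw", "quant": "hw", "huff": "sw"}))
```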


Applied Soft Computing | 2014

Efficient multi-feature PSO for fast gray level object-tracking

Ahmed M. Abdel Tawab; M. B. Abdelhalim; Serag E.-D. Habib

Robust, real-time moving-object tracking is a challenging task in computer vision systems. The development of an efficient yet robust object tracker faces several obstacles, namely: the dynamic appearance of deformable or articulated targets, dynamic backgrounds, variation in image intensity, and camera (ego) motion. In this paper, a novel tracking algorithm based on the particle swarm optimization (PSO) method is proposed. PSO is a population-based stochastic optimization algorithm modeled on the social behavior of bird flocks and animal herds. In this algorithm, a multi-feature model is proposed for object detection to enhance tracking accuracy and efficiency. The object model is based on gray-level intensity and combines the effects of different object cases, including zooming, scaling, and rotation, into a single cost function. The proposed algorithm is independent of object type and shape and can be used in many object-tracking applications. Over 30 video sequences comprising more than 20,000 frames are used to test the developed PSO-based object-tracking algorithm and to compare it against classical object-tracking algorithms as well as previously published PSO-based tracking algorithms. Our results demonstrate the efficiency and robustness of the developed algorithm relative to all other tested algorithms.
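To illustrate what such a cost function looks like, here is a minimal single-feature version (translation and scale only; the paper's multi-feature model also folds in rotation and other cases, which are omitted here). A PSO like the one sketched earlier would minimize this cost over the particle dimensions (x, y, scale) in each frame; the synthetic frame and template below are invented for the usage example.

```python
import numpy as np

def track_cost(frame, template, x, y, s):
    """Illustrative single-feature cost: mean absolute gray-level difference
    between a scaled search window and the template."""
    h, w = template.shape
    h_s, w_s = int(h * s), int(w * s)
    if h_s < 1 or w_s < 1:
        return np.inf
    patch = frame[y:y + h_s, x:x + w_s]
    if patch.shape != (h_s, w_s):
        return np.inf  # window falls outside the frame
    # Nearest-neighbour resample of the patch back to template size.
    rows = np.arange(h) * h_s // h
    cols = np.arange(w) * w_s // w
    resized = patch[np.ix_(rows, cols)]
    return float(np.mean(np.abs(resized.astype(float) - template.astype(float))))

# Toy usage: a bright 20x20 square at (row 53, col 45) in a synthetic frame.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[53:73, 45:65] = 200
template = np.full((20, 20), 200, dtype=np.uint8)
print(track_cost(frame, template, x=45, y=53, s=1.0))  # 0.0 at the true pose
```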


IESS | 2007

Constrained and Unconstrained Hardware-Software Partitioning using Particle Swarm Optimization Technique

M. B. Abdelhalim; A. E. Salama; Serag E.-D. Habib

In this paper we investigate the application of the Particle Swarm Optimization (PSO) technique to the hardware/software partitioning problem. PSO is attractive for this problem because it offers reasonable coverage of the design space together with O(n) main-loop execution time, where n is the number of proposed solutions that evolve to provide the final solution. We carried out several tests on a hypothetical, relatively large hardware/software partitioning problem using the PSO algorithm as well as the Genetic Algorithm (GA), another evolutionary technique, and found that PSO outperforms GA in both cost function and execution time. For the unconstrained design problem, we tested several hybrid combinations of PSO and GA: PSO then GA, GA then PSO, GA followed by GA, and PSO followed by PSO. The PSO algorithm followed by another PSO round gave the best result, as it allows another round of domain exploration: the second PSO round assigns new randomized velocities to the particles while keeping the best particle positions obtained in the first round. We propose to name this successive PSO algorithm the Re-excited PSO algorithm. The constrained formulations of the problem are investigated under different tuning or limiting design-parameter constraints.
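The abstract does not spell out how the constraints are enforced inside the optimizer. A common and simple option is an exterior penalty on the fitness, sketched below with invented numbers; the paper's actual constraint handling may differ.

```python
def penalized(objective, metrics, limits, penalty=1e3):
    """Linear exterior penalty: each violated constraint adds
    penalty * (amount of violation) to the fitness."""
    def fitness(x):
        total = objective(x)
        for measure, limit in zip(metrics, limits):
            v = measure(x)
            if v > limit:
                total += penalty * (v - limit)
        return total
    return fitness

# Toy usage on 0/1 partition vectors (numbers invented for illustration):
area = lambda x: 10 * sum(x)              # grows with hardware bits
delay = lambda x: 5 * (len(x) - sum(x))   # grows with software bits
obj = lambda x: 0.5 * area(x) + 0.5 * delay(x)
fitness = penalized(obj, [area, delay], [60, 25])
print(fitness([1, 1, 0, 0, 0, 0, 0, 0]))  # 25.0 + 5000.0 penalty = 5025.0
```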


IEEE Computer Society Annual Symposium on VLSI | 2008

Fast Hardware Upper-Bound Power Estimation for a Novel FPGA-Based HW/SW Partitioning Scheme

M. B. Abdelhalim; Serag E.-D. Habib

In this paper, a fast and accurate upper-bound power-consumption estimation tool for FPGA-based designs is presented. The tool is developed in the context of a HW/SW partitioning tool. Rather than modeling the hardware implementation as a single alternative, our approach to HW/SW partitioning models the hardware as two extreme alternatives that bound the latency range of the different hardware implementations. The presented tool estimates the power consumption of these two hardware alternatives. Its computational cost depends linearly on the design complexity, as no simulation is performed; hence, it is very useful for fast design-space exploration. Testing the tool on several designs showed that it is also accurate: overall power-consumption estimates are within ±4% of the actual power consumed, with an average error of 1%. The Logic Element (LE) and clock power estimates are less accurate, with average errors of 8.25% and 6.25%, respectively.
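As an illustration of why a simulation-free estimate can scale linearly with design size, the toy model below simply charges each resource a worst-case (upper-bound) switching activity. The coefficients and the resource breakdown are invented, not the paper's calibrated values.

```python
# Illustrative coefficients (mW per resource per MHz at 100% activity);
# a real tool would calibrate these per FPGA family.
P_LE, P_CLK_PER_FF, P_IO = 0.012, 0.004, 0.6

def upper_bound_power(n_les, n_ffs, n_ios, f_mhz, toggle_ub=0.5):
    """Linear-time upper bound: every resource is charged its worst-case
    switching activity, so no simulation of the design is needed."""
    dynamic = (n_les * P_LE + n_ffs * P_CLK_PER_FF) * f_mhz * toggle_ub
    return dynamic + n_ios * P_IO  # I/O term, also illustrative

print("%.1f mW" % upper_bound_power(n_les=1500, n_ffs=900, n_ios=40, f_mhz=50))
```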


14th Biennial Baltic Electronics Conference (BEC) | 2014

A fault-tolerant technique to detect and recover from open faults in FPGA interconnects

Gehad I. Alkady; Nahla A. El-Araby; M. B. Abdelhalim; Hassanein H. Amer; A.H. Madian

Nowadays, FPGAs play a major role in electronic circuit design, especially in the implementation of critical applications. As a result, adding fault tolerance to FPGAs has become very important. In this paper, a fault-tolerant technique and the associated modifications to the FPGA architecture are proposed. The technique can detect and recover from open faults in the programmable interconnects. It was successfully simulated using an FPGA-based simulator.


Archive | 2009

Particle Swarm Optimization for HW/SW Partitioning

M. B. Abdelhalim; Serag E.-D. Habib

Embedded systems typically consist of application-specific hardware parts and programmable parts, e.g. processors like DSPs, core processors or ASIPs. In comparison to the hardware parts, the software parts are much easier to develop and modify; thus, software is less expensive in terms of cost and development time. Hardware, however, provides better performance. For this reason, a system designer's goal is to design a system fulfilling all system constraints. The co-design phase, during which the system specification is partitioned onto the hardware and programmable parts of the target architecture, is called hardware/software partitioning. This phase represents one key issue during the design process of heterogeneous systems. Some early co-design approaches [Marrec et al. 1998, Cloute et al. 1999] carried out the HW/SW partitioning task manually; this manual approach is limited to small design problems with a small number of constituent modules. Automatic hardware/software partitioning is of great interest because the problem itself is a very complex optimization problem. A variety of hardware/software partitioning approaches is available in the literature. Following Niemann [1998], these approaches can be distinguished by the following aspects:

1. The complexity of the supported partitioning problem, e.g. whether the target architecture is fixed or optimized during partitioning.
2. The supported target architecture, e.g. single-processor or multi-processor, ASIC or FPGA-based hardware.
3. The application domain, e.g. either data-flow or control-flow dominated systems.
4. The optimization goal determined by the chosen cost function, e.g. hardware minimization under timing (performance) constraints, performance maximization under resource constraints, or low-power solutions.
5. The optimization technique, including heuristic, probabilistic or exact methods, compared by computation time and the quality of results.
6. The optimization aspects, e.g. whether communication and/or hardware sharing are taken into account.
7. The granularity of the pieces for which costs are estimated for partitioning, e.g. granules at the statement, basic-block, function, process or task level.
8. The estimation method itself, i.e. whether the estimations are computed by special estimation tools or by analyzing the results of synthesis tools and compilers.


International Conference on Computer Engineering and Systems | 2016

Integration of Multiple Fault-Tolerant techniques for FPGA-based NCS Nodes

Gehad I. Alkady; Ali AbdelKader; Ramez M. Daoud; Hassanein H. Amer; Nahla A. El-Araby; M. B. Abdelhalim

Fault tolerance is quickly becoming a very important issue in the design of industrial automation systems. This paper addresses the issue in the context of temporary failures occurring in harsh industrial environments. The fault-tolerant design of sensors and controllers is investigated for both the In-Loop and Sensor-to-Actuator architectures, with processing implemented on FPGAs whenever possible. Triple Modular Redundancy (TMR) is used to implement sensors for fast-varying applications, while Temporal Redundancy (TR) is used for sensors in slow-varying applications in order to reduce cost without affecting system reliability. Dynamic Partial Reconfiguration (DPR) is used for fault recovery. Reliability models are developed for all fault-tolerant blocks to help system designers choose the fault-tolerant techniques to implement. Two case studies are carried out with different numbers of fast and slow sensors, and system reliabilities are calculated for both conventional and hybrid NCS systems. Results show that the proposed technique yields a cost-effective system at the expense of a very slight decrease in reliability.
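As background for the reliability models mentioned above, the standard TMR result with a perfect voter is R_TMR(t) = 3R(t)^2 - 2R(t)^3, where R(t) = e^(-λt) is the reliability of a single module with constant failure rate λ. A minimal numerical check in Python (the failure rate is illustrative; the paper's models for TR and DPR are more involved and not reproduced here):

```python
import math

def r_simplex(lam, t):
    """Reliability of a single module with constant failure rate lam."""
    return math.exp(-lam * t)

def r_tmr(lam, t):
    """TMR with a perfect voter: the system survives while at least
    two of the three modules survive."""
    r = r_simplex(lam, t)
    return 3 * r**2 - 2 * r**3

lam = 1e-4  # illustrative failure rate, failures per hour
for t in (1000, 5000, 10000):
    print(t, round(r_simplex(lam, t), 4), round(r_tmr(lam, t), 4))
```

Note that TMR beats the simplex module only for mission times shorter than ln(2)/λ, which the loop above exhibits at t = 10000.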


Mediterranean Conference on Embedded Computing | 2015

Dynamic fault recovery using partial reconfiguration for highly reliable FPGAs

Gehad I. Alkady; Nahla A. El-Araby; M. B. Abdelhalim; Hassanein H. Amer; A.H. Madian

FPGAs are becoming more popular in the domain of safety-critical applications (such as space applications) due to their high performance, re-programmability and reduced development cost. Such systems require FPGAs with self-detection and self-repair capabilities in order to cope with errors caused by the harsh conditions that usually exist in such environments. In this paper, a new dynamic fault-recovery technique is proposed using the runtime partial reconfiguration (PR) property of FPGAs. It focuses on open interconnect faults and relies on specifying a partially reconfigurable block in the FPGA that is used only during the recovery process, after the failure of the first module in the system. The technique uses only one location to recover from errors in any of the FPGA's modules; accordingly, it requires less area overhead than other techniques.


International Midwest Symposium on Circuits and Systems | 2013

Reliable pre-scheduling delay estimation for hardware/software partitioning

Rania O. Hassan; M. B. Abdelhalim; Serag E.-D. Habib

Hardware/software co-design has become one of the main methodologies in modern embedded systems. The partitioning step, i.e. deciding which components of the system should be implemented in hardware and which in software, is the most important step in embedded system design. Since the costs and delays of the final design strongly depend on the partitioning results, accurate estimates of hardware area, delay and power are needed. However, accurate delay-estimation methods are slow, as they require a scheduling step. In this paper, we propose a reliable delay-estimation method to be used within the partitioning step, prior to scheduling.
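The abstract does not detail the estimator itself. For context, a standard scheduling-free baseline is the critical path of the data-flow graph, which bounds the achievable delay from below; the sketch below uses a hypothetical graph and invented delays, and is not necessarily the paper's method.

```python
from functools import lru_cache

# Hypothetical data-flow graph: node -> (node delay, successors).
DFG = {
    "load":  (1.0, ["mul1", "mul2"]),
    "mul1":  (3.0, ["add"]),
    "mul2":  (3.0, ["add"]),
    "add":   (1.0, ["store"]),
    "store": (1.0, []),
}

@lru_cache(maxsize=None)
def longest_from(node):
    """Longest delay path starting at `node` (graph must be acyclic)."""
    d, succs = DFG[node]
    return d + max((longest_from(s) for s in succs), default=0.0)

# Critical path = a delay lower bound available before any scheduling.
print(max(longest_from(n) for n in DFG))  # 6.0
```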

Collaboration


Dive into M. B. Abdelhalim's collaboration.

Top Co-Authors

Hassanein H. Amer (American University in Cairo)
Gehad I. Alkady (American University in Cairo)
A. S. Emara (American University in Cairo)
S. H. Amer (American University in Cairo)
A.H. Madian (Egyptian Atomic Energy Authority)
Ramez M. Daoud (American University in Cairo)