Publication


Featured research published by E. Alazawi.


International Symposium on Broadband Multimedia Systems and Broadcasting | 2012

Depth mapping of integral images using a hybrid disparity analysis algorithm

O. Abdul Fatah; Amar Aggoun; Muhammad Nawaz; John Cosmas; Emmanuel Tsekleves; M. Rafiq Swash; E. Alazawi

This paper presents the results of a depth map algorithm applied to recorded integral images. The novel idea of this paper is the development of an automatic masking procedure, which improves the accuracy of the depth map by removing background noise. This is achieved by applying a set of morphological operators to separate the foreground from the background.
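
The masking step can be illustrated with off-the-shelf morphological operators. The sketch below is a minimal stand-in, assuming a grayscale integral image held in a NumPy array and an empirically chosen intensity threshold (both the threshold and the kernel size are illustrative, not the paper's values):

    import cv2
    import numpy as np

    def foreground_mask(img: np.ndarray, thresh: int = 40) -> np.ndarray:
        """Rough foreground/background separation: threshold, then morphological
        opening and closing (a generic stand-in for the paper's automatic masking)."""
        # Binarise: pixels brighter than the assumed background level become foreground.
        _, mask = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        # Opening removes small background speckle; closing fills small holes in the foreground.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        return mask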


International Symposium on Broadband Multimedia Systems and Broadcasting | 2013

Adaptive depth map estimation from 3D integral image

E. Alazawi; Amar Aggoun; Maysam F. Abbod; O. Abdul Fatah; Mohammad Rafiq Swash

Integral Imaging (InIm) is one of the most promising technologies for producing full-color 3D images with full parallax. InIm requires only one recording to obtain 3D information, and therefore no calibration is necessary to acquire depth values. The compactness of using InIm for depth measurement has been attracting attention as a novel depth extraction technique. In this paper, an algorithm for depth extraction that builds on previous work by the authors is presented. Three main problems in depth map estimation from InIm have been solved: uncertainty and region homogeneity at image locations where errors commonly appear in the disparity process, dissimilar displacements within the matching block around object borders, and object segmentation. The method is based on the distribution of the sample variance across sub-divided, non-overlapping blocks, yielding a descriptor that is unique and distinctive for each feature in the InIm. Compared with state-of-the-art techniques, the proposed algorithm is shown to improve on two aspects: the quality of the extracted depth map and computational complexity.
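
The block-variance descriptor can be sketched as follows: divide the image into non-overlapping blocks, compute the sample variance of each block, and keep only the high-variance (textured) blocks as candidates for disparity matching. The block size and variance cut-off below are illustrative assumptions, not values from the paper:

    import numpy as np

    def feature_blocks(img: np.ndarray, block: int = 8, var_cut: float = 50.0):
        """Return top-left coordinates of non-overlapping blocks whose sample
        variance exceeds var_cut, i.e. textured regions suitable for matching."""
        h, w = img.shape[:2]
        coords = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = img[y:y + block, x:x + block].astype(np.float64)
                if patch.var(ddof=1) > var_cut:  # sample (unbiased) variance
                    coords.append((y, x))
        return coords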


International Conference on 3D Imaging | 2013

Pre-processing of holoscopic 3D image for autostereoscopic 3D displays

Mohammad Rafiq Swash; Amar Aggoun; O. Abdulfatah; B. Li; J. C. Fernandez; E. Alazawi; Emmanuel Tsekleves

Holoscopic 3D imaging, also known as integral imaging, is an attractive technique for creating full-color 3D optical models that exist in space independently of the viewer. The constructed 3D scene exhibits continuous parallax throughout the viewing zone. To achieve robust, real-time depth control, a single-aperture holoscopic 3D imaging camera is used to record the holoscopic 3D image through a regularly spaced microlens array, in which each lens views the scene at a slightly different angle from its neighbor. However, the microlens array introduces dark borders in the recorded image, which cause errors at playback on a holoscopic 3D display. This paper proposes a reference-based pre-processing of holoscopic 3D images for autostereoscopic holoscopic 3D displays. The proposed method uses the microlenses as reference points to detect the extent of the introduced dark borders and to reduce or remove them from the holoscopic 3D image.
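
One simple way to picture the border clean-up described above is to trim a fixed number of pixels from each elemental image (one per microlens) and rescale the remainder back to the lens pitch. This is only a sketch under the assumption of a square lens pitch known in pixels; the paper's reference-based detection of the border width is not reproduced here:

    import cv2
    import numpy as np

    def remove_dark_borders(img: np.ndarray, pitch: int, border: int) -> np.ndarray:
        """Crop 'border' pixels from every side of each pitch x pitch elemental
        image and resize the remainder back to the pitch, suppressing dark seams."""
        out = img.copy()
        h, w = img.shape[:2]
        for y in range(0, h - pitch + 1, pitch):
            for x in range(0, w - pitch + 1, pitch):
                cell = img[y + border:y + pitch - border, x + border:x + pitch - border]
                out[y:y + pitch, x:x + pitch] = cv2.resize(cell, (pitch, pitch))
        return out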


International Symposium on Broadband Multimedia Systems and Broadcasting | 2014

Adopting multiview pixel mapping for enhancing quality of holoscopic 3D scene in parallax barriers based holoscopic 3D displays

Swash; O. Abdulfatah; E. Alazawi; Tatiana Kalganova; John Cosmas

Autostereoscopic multiview 3D displays are well developed and widely available commercially. Significant improvements have been made using pixel mapping techniques, achieving an acceptable 3D resolution with a balanced pixel aspect ratio in lens-array technology. This paper proposes adopting multiview pixel mapping to enhance the quality of the constructed holoscopic 3D scene in parallax-barrier-based holoscopic 3D displays, achieving strong results. Holoscopic imaging technology mimics the imaging system of insects such as the fly, using a single camera equipped with a large number of micro-lenses to capture a scene, offering rich parallax information and an enhanced 3D sensation without the need to wear specific eyewear. In addition, pixel mapping and holoscopic 3D rendering tools are developed, including a custom-built holoscopic 3D display, to test the proposed method and carry out a like-to-like comparison.
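
As a rough illustration of multiview pixel mapping, the sketch below column-interleaves N equally sized view images so that successive display columns cycle through the views. This is a generic mapping for a non-slanted parallax barrier, not the exact layout used in the paper:

    import numpy as np

    def interleave_views(views: list) -> np.ndarray:
        """Column-interleave equally sized view images: display column c shows
        column c of view (c mod N) -- a generic multiview pixel mapping."""
        n = len(views)
        out = np.zeros_like(views[0])
        for col in range(out.shape[1]):
            out[:, col] = views[col % n][:, col]
        return out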


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014

Super depth-map rendering by converting holoscopic viewpoint to perspective projection

E. Alazawi; Maysam F. Abbod; Amar Aggoun; Mohammad Rafiq Swash; O. Abdul Fatah; J. C. Fernandez

The expansion of 3D technology will enable observers to perceive 3D without any eyewear devices. Holoscopic 3D imaging technology offers natural 3D visualisation of real 3D scenes that can be viewed by multiple viewers independently of their position. However, the creation of a super depth-map and the reconstruction of the 3D object from a holoscopic 3D image are still in their infancy. The aim of this work is to build a high-quality depth map of a real 3D scene from a holoscopic 3D image through the extraction of multi-view, high-resolution Viewpoint Images (VPIs), compensating for the poor features of individual VPIs. To achieve this, we propose a reconstruction method based on the perspective formula to convert sets of directional orthographic low-resolution VPIs into perspective projection geometry. Following that, we implement an Auto-Feature-point algorithm that synthesizes the VPIs into distinctive Feature-Edge (FE) blocks, providing a localized feature detector responsible for integrating the 3D information. Detailed experiments prove the reliability and efficiency of the proposed method, which outperforms state-of-the-art methods for depth map creation.
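
The viewpoint images mentioned above are formed by taking the pixel at the same position inside every elemental image. A minimal extraction sketch, assuming square elemental images of a known pitch (the function and its parameters are illustrative):

    import numpy as np

    def extract_viewpoint(holoscopic: np.ndarray, pitch: int, u: int, v: int) -> np.ndarray:
        """Build one orthographic viewpoint image by sampling pixel (v, u) from
        every pitch x pitch elemental image of the holoscopic image."""
        h, w = holoscopic.shape[:2]
        rows, cols = h // pitch, w // pitch
        return holoscopic[v:rows * pitch:pitch, u:cols * pitch:pitch].copy()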


International Conference on 3D Imaging | 2013

Distributed pixel mapping for refining dark area in parallax barriers based holoscopic 3D Display

Mohammad Rafiq Swash; Amar Aggoun; O. Abdulfatah; J. C. Fernandez; E. Alazawi; Emmanuel Tsekleves

Autostereoscopic 3D displays are well developed and available on the market for both home and professional users. However, achieving 3D resolution with acceptable 3D image quality remains a great challenge. This paper proposes a novel pixel mapping method for refining the dark areas between two pinholes by distributing each into three smaller dark areas and creating micro-pinholes in parallax-barrier-based holoscopic 3D displays. The proposed method projects the red, green, and blue subpixels separately through three different pinholes and distributes the dark spaces into three times smaller dark spaces, which become unnoticeable and significantly improve the quality of the constructed holoscopic 3D scene. Parallax barrier technology refers to a pinhole sheet or device placed in front of or behind a liquid crystal display, allowing viewpoint pixels to be projected into space so that a holoscopic 3D scene is reconstructed. Holoscopic technology mimics the imaging system of insects such as the fly, using a single camera equipped with a large number of micro-lenses or pinholes to capture a scene, offering rich parallax information and an enhanced 3D sensation without the need to wear specific eyewear.
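
The subpixel distribution can be pictured with a short sketch: for each display column, take the R, G and B subpixels from three consecutive views so that each colour channel is seen through a different pinhole. This is an illustration of the idea under an assumed view ordering and channel layout, not the paper's exact mapping:

    import numpy as np

    def distribute_subpixels(views: list) -> np.ndarray:
        """For display column c, channel k comes from view (c + k) mod N, so the
        three colour subpixels of a pixel are served by three different pinholes."""
        n = len(views)
        out = np.zeros_like(views[0])
        for col in range(out.shape[1]):
            for k in range(3):  # channel order depends on the display's subpixel layout
                out[:, col, k] = views[(col + k) % n][:, col, k]
        return out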


International Symposium on Broadband Multimedia Systems and Broadcasting | 2014

3D-Interactive-depth generation and object segmentation from Holoscopic image

E. Alazawi; John Cosmas; Swash; Maysam F. Abbod; Obaidullah Abdul Fatah

Holoscopic 3D imaging is a technique for producing natural 3D images of objects in our world that can be viewed without the need for specific eyewear. An Auto-Feature-Edge (AFE) descriptor algorithm, based on a Multi-Quantize Adaptive Local Histogram Analysis (MQALHA) algorithm, is used to simplify the edge detection of objects. This paper presents an exploitation of the available depth estimation and Feature-Edge (FE) segmentation techniques to generate a 3D-Interactive-Map (3DIM). The robustness and efficiency of the proposed method are illustrated in the paper and compared with current state-of-the-art techniques.
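
As a crude stand-in for the AFE/MQALHA idea, the sketch below quantizes each local block's intensities into a few levels and flags blocks whose quantized histogram spans more than one level as containing edge content. The block size and number of levels are assumptions for illustration only:

    import numpy as np

    def edge_feature_blocks(img: np.ndarray, block: int = 8, levels: int = 4):
        """Flag non-overlapping blocks whose quantized local histogram occupies
        more than one level, i.e. blocks containing intensity transitions (edges)."""
        h, w = img.shape[:2]
        flags = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = img[y:y + block, x:x + block].astype(np.float64)
                q = np.floor(patch / 256.0 * levels).astype(int)  # quantize to 'levels' bins
                if np.unique(q).size > 1:
                    flags.append((y, x))
        return flags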


International Symposium on Broadband Multimedia Systems and Broadcasting | 2013

Three-dimensional integral image reconstruction based on viewpoint interpolation

Obaidullah Abdul Fatah; Peter M. P. Lanigan; Amar Aggoun; Mohammad Rafiq Swash; E. Alazawi; B. Li; J. C. Fernandez; D. Chen; Emmanuel Tsekleves

This paper presents a new algorithm for computationally improving the visual quality of real integral images through image reconstruction. The proposed algorithm takes advantage of true 3D integral imaging: a real-world scene is recorded based on the fly's-eye technique, simulated by an array of microlenses. The proposed method works on orthographic viewpoint images, where shift-and-integration of the neighboring viewpoints is used with quadratic interpolation to increase the visual quality of the final image. This process returns a standard photographic image with enhanced image quality. Detailed experiments have been conducted to demonstrate the effectiveness of the proposed method, and results are presented.
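
The shift-and-integration step can be sketched as an integer shift-and-average of neighbouring viewpoint images; the paper's quadratic interpolation for sub-pixel shifts is omitted here, and the shift values are assumed to be supplied by the caller:

    import numpy as np

    def shift_and_integrate(vpis: list, shifts: list) -> np.ndarray:
        """Shift each neighbouring viewpoint image by its (dy, dx) offset and
        average the stack -- the integer-shift core of shift-and-integration."""
        acc = np.zeros(vpis[0].shape, dtype=np.float64)
        for vpi, (dy, dx) in zip(vpis, shifts):
            acc += np.roll(vpi, shift=(dy, dx), axis=(0, 1)).astype(np.float64)
        return (acc / len(vpis)).astype(vpis[0].dtype)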


Digital Television Conference | 2013

Scene depth extraction from Holoscopic Imaging technology

E. Alazawi; Amar Aggoun; Maysam F. Abbod; Mohammad Rafiq Swash; O. Abdul Fatah; J. C. Fernandez

3D Holoscopic Imaging (3DHI) is a promising technique for viewing natural, continuous-parallax 3D objects within a wide viewing zone using the fly's-eye principle. The 3D content is captured using a single-aperture camera in real time and represents a true volumetric spatial optical model of the object scene. The 3D content can be viewed by multiple viewers independently of their position, without 3D eyewear. The 3DHI technique requires only a single recording to acquire the 3D information, and the compactness of this depth measurement has been attracting attention as a novel depth extraction technique. This paper presents a new correspondence and matching technique based on a novel automatic Feature-Match Selection (FMS) algorithm. The aim of this algorithm is to estimate and extract an accurate full-parallax 3D model from a 3D Omni-directional Holoscopic Imaging (3DOHI) system. The novelty of the paper rests on two contributions: feature block selection and an automatic optimization of the correspondence process. Solutions are provided for three main problems related to depth map estimation from 3DHI: uncertainty and region homogeneity at image locations, dissimilar displacements within the matching block around object borders, and computational complexity.
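
The correspondence search itself can be pictured with a plain sum-of-absolute-differences block matcher between two viewpoint images; this is a generic baseline, not the FMS algorithm, and the block size and disparity range are illustrative:

    import numpy as np

    def block_disparity(left: np.ndarray, right: np.ndarray,
                        block: int = 8, max_disp: int = 16) -> np.ndarray:
        """Per-block horizontal disparity between two grayscale viewpoint images,
        chosen by minimising the sum of absolute differences (SAD)."""
        h, w = left.shape
        disp = np.zeros((h // block, w // block), dtype=np.int32)
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                ref = left[y:y + block, x:x + block].astype(np.int32)
                best_cost, best_d = None, 0
                for d in range(min(max_disp, x) + 1):
                    cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                    cost = np.abs(ref - cand).sum()
                    if best_cost is None or cost < best_cost:
                        best_cost, best_d = cost, d
                disp[by, bx] = best_d
        return disp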


International Symposium on Broadband Multimedia Systems and Broadcasting | 2013

Foreground detection using background subtraction with histogram

Muhammad Nawaz; John Cosmas; Awais Adnan; Muhammad Inam Ul Haq; E. Alazawi

In the background subtraction method, one of the core issues is how to set the threshold value precisely at run time, which can ultimately overcome several shortcomings of this approach to foreground detection. The proposed algorithm uses motion, the key feature of any foreground detection algorithm; however, obtaining the threshold value directly from the original motion histogram is not possible, so a smoothed motion histogram is used in a systematic way to obtain the threshold value. The main focus of the proposed algorithm is a better estimation of the threshold, so that a dynamic value is obtained from the histogram at run time. If the proposed algorithm is used intelligently in terms of motion magnitude and motion direction, it can distinguish accurately between background and foreground, as well as between camera motion and object motion.
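
The run-time thresholding described above can be sketched as follows: build a histogram of frame-difference magnitudes, smooth it with a moving average, and take the first valley after the dominant (background) peak as the threshold. The smoothing window and the valley rule are illustrative assumptions, not the paper's exact criterion:

    import cv2
    import numpy as np

    def motion_threshold(prev: np.ndarray, curr: np.ndarray):
        """Estimate a dynamic threshold from a smoothed histogram of absolute
        frame differences (8-bit grayscale) and return it with the foreground mask."""
        diff = cv2.absdiff(curr, prev)
        hist = np.bincount(diff.ravel(), minlength=256).astype(np.float64)
        smooth = np.convolve(hist, np.ones(7) / 7.0, mode="same")  # moving-average smoothing
        peak = int(np.argmax(smooth))
        thresh = peak
        # walk downhill from the dominant peak to the first local minimum
        while thresh < 255 and smooth[thresh + 1] < smooth[thresh]:
            thresh += 1
        mask = (diff > thresh).astype(np.uint8) * 255
        return thresh, mask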

Collaboration


Dive into E. Alazawi's collaboration.

Top Co-Authors

Amar Aggoun, University of Bedfordshire
O. Abdul Fatah, Brunel University London
John Cosmas, Brunel University London
B. Li, Brunel University London
O. Abdulfatah, Brunel University London
D. Chen, Brunel University London