Pooneh Bagheri Zadeh
De Montfort University
Publication
Featured research published by Pooneh Bagheri Zadeh.
Archive | 2012
Pooneh Bagheri Zadeh; Akbar Sheikh Akbari; Tom Buggy
With advances in multimedia technologies, demand for transmission and storage of voluminous multimedia data has dramatically increased and, as a consequence, data compression is now essential in reducing the amount of data prior to storage or transmission. Compression techniques aim to minimise the number of bits required to represent image data while maintaining an acceptable visual quality. Image compression is achieved by exploiting the spatial and perceptual redundancies present in image data. Image compression techniques are classified into two categories, lossless and lossy. Lossless techniques are those that allow recovery of the original input data from its compressed representation without any loss of information, i.e. after decoding, an identical copy of the original data can be restored. Lossy techniques offer higher compression ratios, but the original data cannot be recovered exactly from its compressed representation, as some of the input information is lost during compression. These techniques are designed to minimise the amount of distortion introduced into the image data at given compression ratios. Compression is usually achieved by transforming the image data into another domain, e.g. the frequency or wavelet domain, and then quantizing and losslessly encoding the transformed coefficients (Ghanbari, 1999; Peng & Kieffer, 2004; Wang et al., 2001). In recent years much research has been undertaken to develop efficient image compression techniques. This research has led to the development of two standard image compression techniques, JPEG and JPEG2000 (JPEG, 1994; JPEG 2000, 2000), and many non-standard image compression algorithms (Said & Pearlman, 1996; Scargall & Dlay, 2000; Shapiro, 1993).
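The transform-quantise-entropy-code pipeline described above can be illustrated with a minimal sketch. The 8x8 block size and the quantisation step are illustrative assumptions, not details taken from the chapter; the lossless entropy-coding stage is only indicated by a comment.

```python
import numpy as np
from scipy.fft import dctn, idctn

def lossy_block_codec(image, block=8, q_step=20.0):
    """Toy transform codec: block DCT, uniform quantisation, reconstruction.
    Illustrative only; block size and quantisation step are arbitrary assumptions."""
    h, w = image.shape
    recon = np.zeros_like(image, dtype=float)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blk = image[y:y+block, x:x+block].astype(float)
            coeffs = dctn(blk, norm='ortho')        # transform to the frequency domain
            q = np.round(coeffs / q_step)           # lossy step: uniform quantisation
            # q would be losslessly entropy coded (e.g. Huffman) in a real codec
            recon[y:y+block, x:x+block] = idctn(q * q_step, norm='ortho')
    return recon

rng = np.random.default_rng(0)
approx = lossy_block_codec(rng.random((64, 64)) * 255)  # distorted reconstruction
```

Increasing q_step raises the compression ratio at the cost of more distortion, which is the trade-off lossy techniques are designed around.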
3dtv-conference: the true vision - capture, transmission and display of 3d video | 2011
Akbar Sheikh Akbari; Pooneh Bagheri Zadeh; Mansour Moniri
This paper presents a compressive sensing based stereo image representation technique using wavelet transform gain. The pair of input stereo images is first decomposed into its low-pass and high-pass views using a motion compensated lifting based wavelet transform. A 2D spatial wavelet transform then further decorrelates the low-pass view into its sub-bands. Wavelet transform gains are employed to regulate the threshold value for the different sub-bands. The coefficients in the high frequency sub-bands and the high-pass view are then hard thresholded to generate their sparse counterparts. The compressive sensing method is then used to generate measurements for the resulting sparse sub-bands and view. The baseband coefficients and the measurements are finally losslessly coded. The application of compressive sensing to compressing natural images is in its early stages; hence such codecs are usually compared with each other rather than with standard codecs. The performance of the proposed codec is superior to the state of the art and is subjectively superior to JPEG.
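A minimal sketch of the sparsification and measurement steps: wavelet detail coefficients are hard thresholded and a random Gaussian matrix stands in for the compressive sensing measurement stage. The wavelet, threshold and measurement ratio are assumptions for illustration; the paper's motion compensated lifting transform and gain-derived thresholds are not reproduced here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

def cs_measure_subband(subband, threshold, measurement_ratio=0.3):
    """Hard threshold a sub-band and take random Gaussian measurements of it."""
    sparse = np.where(np.abs(subband) > threshold, subband, 0.0)  # hard thresholding
    x = sparse.ravel()
    m = int(measurement_ratio * x.size)                  # number of CS measurements
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)  # random measurement matrix
    return phi @ x                                       # y = Phi x

# Example: decompose one view, threshold its detail sub-bands, measure them.
view = rng.random((64, 64))
cA, (cH, cV, cD) = pywt.dwt2(view, 'haar')               # 2D spatial wavelet transform
measurements = [cs_measure_subband(sb, threshold=0.5) for sb in (cH, cV, cD)]
# cA (the baseband) would be losslessly coded rather than measured.
```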
international conference on database theory | 2010
Pooneh Bagheri Zadeh; C.V. Serdean
This paper presents a novel multiwavelet-based stereo correspondence matching technique. A multiwavelet transform is first applied to a pair of stereo images to decorrelate the images into a number of approximation (baseband) and detail subbands. Information in the basebands is less sensitive to the shift variability of the multiwavelet transform. The basebands of each input image carry different spectral content of the image; therefore, using the basebands to generate the disparity map is likely to produce more accurate results. A global error energy minimization technique is employed to generate a disparity map for each baseband of the stereo pair. Information in the resulting disparity maps is then combined using a fuzzy algorithm to construct a dense disparity map. A filtering process is finally applied to smooth the disparity map and reduce its erroneous matches. Middlebury stereo test images are used to generate experimental results. Results show that the proposed technique produces smoother disparity maps with fewer mismatch errors compared to applying the same global error energy minimization technique to wavelet transformed image data.
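A sketch of the error energy idea on a single band: for each candidate disparity, the squared difference between the left image and the horizontally shifted right image is averaged over a window, and the disparity with the lowest error energy is kept per pixel. The window size and disparity range are illustrative assumptions; the multiwavelet decomposition and the fuzzy fusion of per-baseband maps are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def error_energy_disparity(left, right, max_disp=16, window=9):
    """Pick, per pixel, the disparity minimising the windowed squared error."""
    h, w = left.shape
    energies = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.full((h, w), np.inf)
        diff[:, d:] = (left[:, d:] - right[:, :w - d]) ** 2  # shift right image by d
        valid = np.isfinite(diff)
        sq = np.where(valid, diff, 0.0)
        energies[d] = uniform_filter(sq, size=window)         # windowed error energy
        energies[d][~valid] = np.inf                          # no match at these pixels
    return np.argmin(energies, axis=0)                        # dense disparity map
```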
Multimedia Tools and Applications | 2010
Pooneh Bagheri Zadeh; Akbar Sheikh Akbari; Tom Buggy; John J. Soraghan
In this paper, a novel multiresolution, human visual system and statistically based image coding scheme is presented. It decorrelates the input image into a number of subbands using a lifting based wavelet transform. The codec employs a novel statistical encoding algorithm to code the coefficients in the detail subbands. Perceptual weights are applied to regulate the threshold value of each detail subband required in the statistical encoding process. The baseband coefficients are losslessly coded. An extension of the codec to the progressive transmission of images is also developed. To evaluate the coding scheme, it was applied to a number of test images and its performance with and without perceptual weights was compared. The results indicate significant improvement in both subjective and objective quality of the reconstructed images when perceptual weights are employed. The performance of the proposed technique was also compared to JPEG and JPEG2000. The results show that the proposed coding scheme outperforms both coding standards at low compression ratios, while offering satisfactory performance at higher compression ratios.
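A minimal sketch of how perceptual weights could regulate a per-sub-band threshold before encoding. The wavelet, decomposition depth, base threshold and weight values are illustrative assumptions; the paper's statistical encoding algorithm itself is not shown.

```python
import numpy as np
import pywt

def threshold_details(image, base_threshold=10.0, weights=None,
                       wavelet='bior2.2', levels=3):
    """Decompose with a lifting-factorisable wavelet and threshold detail sub-bands.
    Each sub-band's threshold is the base value divided by its perceptual weight,
    so perceptually important sub-bands keep more coefficients."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    baseband, details = coeffs[0], coeffs[1:]       # baseband is coded losslessly
    if weights is None:
        weights = [1.0] * len(details)              # placeholder perceptual weights
    kept = [baseband]
    for (cH, cV, cD), w in zip(details, weights):
        t = base_threshold / w
        kept.append(tuple(np.where(np.abs(sb) > t, sb, 0.0) for sb in (cH, cV, cD)))
    return kept  # the sparse coefficients would then be statistically encoded

rng = np.random.default_rng(1)
sparse_coeffs = threshold_details(rng.random((128, 128)) * 255)
```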
Multimedia Tools and Applications | 2012
Akbar Sheikh Akbari; Pooneh Bagheri Zadeh; Tom Buggy; John J. Soraghan
This paper presents a novel Multiresolution, Perceptual and Vector Quantization (MPVQ) based video coding scheme. In the intra-frame mode of operation, a wavelet transform is applied to the input frame and decorrelates it into its frequency subbands. The coefficients in each detail subband are pixel quantized using a uniform quantization factor divided by the perceptual weighting factor of that subband. The quantized coefficients are finally coded using a quadtree-coding algorithm. Perceptual weights are specifically calculated for the centre of each detail subband. In the inter-frame mode of operation, a Displaced Frame Difference (DFD) is first generated using an overlapped block motion estimation/compensation technique. A wavelet transform is then applied on the DFD and converts it into its frequency subbands. The detail subbands are finally vector quantized using an Adaptive Vector Quantization (AVQ) scheme. To evaluate the performance of the proposed codec, the proposed codec and the adaptive subband vector quantization coding scheme (ASVQ), which has been shown to outperform H.263 at all bitrates, were applied to six test sequences. Experimental results indicate that the proposed codec outperforms the ASVQ subjectively and objectively at all bit rates.
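The inter-frame stage can be illustrated with a simplified, non-overlapped full-search block motion estimation that builds a displaced frame difference; the block size and search range are assumptions, and the overlapped windowing and the adaptive vector quantiser are omitted.

```python
import numpy as np

def displaced_frame_difference(cur, ref, block=16, search=8):
    """Full-search block motion estimation; returns the DFD cur - prediction."""
    h, w = cur.shape
    pred = np.zeros_like(cur, dtype=float)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = cur[y:y+block, x:x+block].astype(float)
            best, best_sad = None, np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + block > h or rx + block > w:
                        continue
                    cand = ref[ry:ry+block, rx:rx+block].astype(float)
                    sad = np.abs(target - cand).sum()   # matching criterion
                    if sad < best_sad:
                        best_sad, best = sad, cand
            pred[y:y+block, x:x+block] = best
    return cur.astype(float) - pred                     # DFD fed to the wavelet stage
```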
international conference on global security, safety, and sustainability | 2015
Akbar Sheikh Akbari; Pooneh Bagheri Zadeh
This paper presents an image enlargement technique using a wavelet transform. The proposed technique considers the low resolution input image as the wavelet baseband and estimates the information in the high-frequency sub-bands from the wavelet high-frequency sub-bands of the input image using wavelet filters. The super-resolution image is finally generated by applying an inverse wavelet transform to the baseband and the estimated high-frequency sub-bands. To evaluate the performance of the proposed image enlargement technique, five standard test images with a variety of frequency components were chosen and enlarged using the proposed technique and six state of the art algorithms. Experimental results show that the proposed technique significantly outperforms the classical and non-classical super-resolution methods, both subjectively and objectively.
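A minimal sketch of treating the low resolution image as the wavelet baseband: here the missing high-frequency sub-bands are simply set to zero (wavelet zero padding) before the inverse transform, rather than estimated from the input's own sub-bands as the paper proposes; the wavelet choice is also an assumption.

```python
import numpy as np
import pywt

def enlarge_2x(low_res, wavelet='haar'):
    """Treat the input as the baseband and inverse-transform to double its size.
    High-frequency sub-bands are zero here; the paper estimates them instead."""
    lr = low_res.astype(float)
    zeros = np.zeros_like(lr)
    # In the orthonormal Haar DWT the baseband is scaled by 2 relative to the image,
    # so the input is rescaled before the inverse transform.
    return pywt.idwt2((2.0 * lr, (zeros, zeros, zeros)), wavelet)

rng = np.random.default_rng(2)
hi_res = enlarge_2x(rng.random((64, 64)))   # -> (128, 128) enlarged image
```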
international conference on database theory | 2007
Pooneh Bagheri Zadeh; Tom Buggy; Akbar Sheikh Akbari
This paper presents a novel hybrid multi-scale, perceptual and vector quantization based video coding scheme. In the intra mode of operation, a wavelet transform is applied to the input frame and decorrelates it into a number of subbands. The lowest frequency subband is losslessly coded. The coefficients of the high frequency subbands are pixel quantized using perceptual weights, which are specifically designed for each high frequency subband. The quantized coefficients are then coded using a quadtree coding scheme. In the inter mode of operation, a displaced frame difference is generated using overlapped block motion estimation/compensation to exploit the inter-frame redundancy. A wavelet transform is then applied to the displaced frame difference to decorrelate it into a number of subbands. The coefficients in the resulting subbands are coded using an adaptive vector quantization scheme. To evaluate the performance of the proposed codec, both the proposed codec and the adaptive subband vector quantization coding scheme (ASVQ), which has been shown to outperform H.263 at all bitrates, were applied to a number of test sequences. Results indicate that the proposed codec outperforms ASVQ subjectively and objectively at all bit rates.
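A sketch of the quadtree coding idea applied to a quantised sub-band: a block that is entirely zero is signalled with a single symbol, otherwise it is flagged as significant and split into four quadrants recursively until individual coefficients are reached. The symbol alphabet and bit allocation here are assumptions for illustration.

```python
import numpy as np

def quadtree_code(block, min_size=1):
    """Encode a square, power-of-two sized quantised block as a quadtree symbol stream."""
    symbols = []
    def encode(b):
        if not np.any(b):                 # all-zero block: one 'zero' symbol
            symbols.append(0)
            return
        if b.shape[0] <= min_size:        # leaf: emit the coefficient value(s)
            symbols.append(1)
            symbols.extend(int(v) for v in b.ravel())
            return
        symbols.append(1)                 # 'significant' flag, then recurse on quadrants
        half = b.shape[0] // 2
        for quad in (b[:half, :half], b[:half, half:], b[half:, :half], b[half:, half:]):
            encode(quad)
    encode(block)
    return symbols

quantised = np.zeros((8, 8), dtype=int)
quantised[0, 1] = 3
stream = quadtree_code(quantised)         # mostly 'zero' symbols for a sparse block
```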
2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) | 2016
Catrin Burrows; Pooneh Bagheri Zadeh
Mobile devices are becoming a more popular tool to use in day to day life; this means that they can accumulate a sizeable amount of information, which can be used as evidence if the device is involved in a crime. Steganography is one way to conceal data, as it obscures the data as well as concealing the fact that there is hidden content. This paper will investigate different steganography techniques, the steganography artefacts they create and the forensic investigation tools used in detecting and extracting steganography on mobile devices. A number of steganography techniques will be used to generate different artefacts on the two main mobile device platforms, Android and Apple. Furthermore, forensic investigation tools will be employed to detect and possibly reveal the hidden data. Finally, a set of mobile forensic investigation policies and guidelines will be developed.
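As a concrete illustration of the kind of artefact such techniques leave behind, a minimal least-significant-bit embedding is sketched below; it is a generic textbook method, not any of the specific mobile tools examined in the paper.

```python
import numpy as np

def lsb_embed(cover, payload_bits):
    """Hide a bit sequence in the least significant bits of an 8-bit image."""
    flat = cover.astype(np.uint8).ravel().copy()
    if len(payload_bits) > flat.size:
        raise ValueError("payload too large for cover image")
    for i, bit in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFE) | bit            # overwrite the LSB only
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits hidden by lsb_embed."""
    return [int(v) & 1 for v in stego.ravel()[:n_bits]]

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, (16, 16), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, len(bits)) == bits   # data survives; image looks unchanged
```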
2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) | 2016
Aisha Abubakar; Pooneh Bagheri Zadeh; Helge Janicke; Richard Howley
Identity theft has been known for centuries, with falsified identity documents being misused and offences such as impersonating others being common in society. However, the advent of technology changed the method used for conducting this crime: through the use of the Internet, personal information can be stolen and misused by criminals. The causes of the crime range from human error and judgement to failures of computing and networking systems that allow unauthorized access to personal information. In order to provide a better tool for investigating this crime, there is a need to explore its causes and thereby provide a better framework for investigating identity theft crimes. This study uses Root Cause Analysis (RCA) as a preliminary tool to provide a clear identification of the causes of identity theft, paving the way for investigating the crime and creating incident response plans.
Open Computer Science | 2015
Pooneh Bagheri Zadeh; Akbar Sheikh Akbari; Tom Buggy
This paper presents a novel variance of sub-regions and discrete cosine transform based image coding scheme. The proposed encoder divides the input image into a number of non-overlapping blocks. The coefficients in each block are then transformed into their spatial frequencies using a discrete cosine transform. Coefficients with the same spatial frequency index in different blocks are put together, generating a number of matrices, where each matrix contains the coefficients of a particular spatial frequency index. The matrix containing the DC coefficients is losslessly coded to preserve its visually important information. Matrices containing high frequency coefficients are coded using a variance of sub-regions based encoding algorithm proposed in this paper. Perceptual weights are used to regulate the threshold value required in the coding process of the high frequency matrices. An extension of the system to progressive image transmission is also developed. The proposed coding scheme, JPEG and JPEG2000 were applied to a number of test images. Results show that the proposed coding scheme outperforms JPEG and JPEG2000 subjectively and objectively at low compression ratios. Results also indicate that images decoded by the proposed codec exhibit superior subjective quality at high compression ratios compared to those of JPEG, while offering satisfactory results relative to JPEG2000.
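A minimal sketch of the regrouping step: each block's DCT coefficients are redistributed so that all coefficients sharing the same spatial frequency index end up in one matrix; the DC matrix is the one that would be losslessly coded. The block size is an assumption and the variance of sub-regions encoder itself is not shown.

```python
import numpy as np
from scipy.fft import dctn

def frequency_index_matrices(image, block=8):
    """Block DCT, then regroup coefficients by their (u, v) frequency index.
    Returns an array of shape (block, block, n_blocks_y, n_blocks_x), where
    matrices[u, v] collects the (u, v) coefficient of every block."""
    h, w = image.shape
    by, bx = h // block, w // block
    matrices = np.zeros((block, block, by, bx))
    for i in range(by):
        for j in range(bx):
            blk = image[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            matrices[:, :, i, j] = dctn(blk, norm='ortho')
    return matrices

rng = np.random.default_rng(4)
mats = frequency_index_matrices(rng.random((64, 64)) * 255)
dc_matrix = mats[0, 0]   # DC coefficients: losslessly coded in the paper's scheme
```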