Wided Souidene
Institut Galilée
Publication
Featured research published by Wided Souidene.
IEEE Transactions on Image Processing | 2009
Wided Souidene; Karim Abed-Meraim; Azeddine Beghdadi
The aim of this paper is to take a new look at multichannel blind image deconvolution (MBID), examine some known approaches, and provide a new multichannel (MC) method for restoring blurred and noisy images. First, the direct image restoration problem is briefly revisited. Then a new method based on inverse filtering for perfect image restoration in the noiseless case is proposed. The noisy case is addressed by introducing a regularization term into the objective function in order to avoid noise amplification. Second, the filter identification problem is considered in the MC context. A new robust solution for the degradation filter matrix is then derived and used in conjunction with a total variation approach to restore the original image. Simulation results and performance evaluations using recent image quality metrics are provided to assess the effectiveness of the proposed methods.
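As an illustration of the restoration step only, here is a minimal sketch of a regularized multichannel inverse filter in the Fourier domain, assuming the blur kernels are already known; the identification step and the total variation refinement described above are omitted, and the regularization weight `lam` is an illustrative choice.

```python
# Minimal sketch: restore x from K observations y_k = h_k * x + n_k with *known*
# blur kernels h_k, using a Tikhonov-regularized multichannel inverse filter.
import numpy as np

def multichannel_restore(observations, kernels, lam=1e-2):
    """observations: list of 2-D arrays y_k; kernels: list of 2-D PSFs h_k."""
    shape = observations[0].shape
    num = np.zeros(shape, dtype=complex)
    den = np.full(shape, lam, dtype=complex)     # regularizer avoids noise blow-up
    for y, h in zip(observations, kernels):
        H = np.fft.fft2(h, s=shape)              # zero-padded kernel spectrum
        num += np.conj(H) * np.fft.fft2(y)       # matched-filter accumulation
        den += np.abs(H) ** 2                    # channel energy
    return np.real(np.fft.ifft2(num / den))
```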
european workshop on visual information processing | 2011
Yaqing Niu; Wided Souidene; Azeddine Beghdadi
We present a visual sensitivity model based method for watermarking high definition (HD) stereo images in the DCT domain. Our proposal aims to make the stereo image watermarking scheme as robust and as invisible as possible. Visual sensitivity refers to the ability of human observers to detect distortion in the visual field. To achieve the best trade-off between robustness and transparency of the watermarked images, the watermarking scheme relies on the visual sensitivity model to determine the perceptual adjustment applied at watermark insertion. The performance of the proposed method has been tested under various attacks, producing very promising robustness results while retaining watermark transparency.
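A minimal sketch of perceptually weighted block-DCT embedding is shown below; the local-activity weight `alpha` is a crude, hypothetical stand-in for the paper's visual sensitivity model, and the chosen coefficient position and base strength are illustrative assumptions.

```python
# Minimal sketch: embed +/-1 bits in one mid-frequency DCT coefficient per 8x8
# block, scaled by a crude local-activity weight standing in for the paper's
# visual sensitivity model (which is not reproduced here).
import numpy as np
from scipy.fft import dctn, idctn

def embed(image, bits, base_strength=4.0, block=8):
    """image: 2-D float array; bits: iterable of +/-1 values, one per block."""
    out = image.astype(float).copy()
    h, w = out.shape
    bit_iter = iter(bits)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = out[i:i + block, j:j + block]
            alpha = base_strength * (1.0 + patch.std() / 64.0)   # masking weight
            coeffs = dctn(patch, norm='ortho')
            try:
                coeffs[3, 4] += alpha * next(bit_iter)           # mid-frequency slot
            except StopIteration:
                return out
            out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')
    return out
```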
2013 Colour and Visual Computing Symposium (CVCS) | 2013
Sameh Megrhi; Wided Souidene; Azeddine Beghdadi
Video retrieval performance depends on many factors that may impact the output results. Among these factors, the selected features and the similarity function play prominent roles in the retrieval process. In this paper we propose a feature selection (FS) technique for content-based video retrieval (CBVR). This scheme consists of several steps. First, the salient objects within the video sequence are extracted through a segmentation process. These objects are then described by normalized spatio-temporal features. Finally, during the query procedure, the derived features are compared to the stored feature database using Hausdorff distance matching. This study is carried out on a news video database. The performance of the proposed scheme in terms of recall and precision is evaluated and compared to existing algorithms. The experimental results clearly demonstrate that the proposed features are more accurate and robust for CBVR than the basic spatio-temporal features.
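For the query stage described above, a minimal sketch of Hausdorff-distance ranking over stored feature sets is shown below; the feature layout (one row per salient object) and the database structure are assumptions.

```python
# Minimal sketch: rank database entries by the symmetric Hausdorff distance
# between the query's feature set and each stored feature set.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def rank_database(query_feats, database):
    """database: dict name -> (n_objects, n_features) array; returns best-first list."""
    scores = {name: hausdorff(query_feats, feats) for name, feats in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```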
advances in multimedia | 2013
Sameh Megrhi; Wided Souidene; Azeddine Beghdadi
In this paper, we propose a new spatio-temporal descriptor called ST-SURF. The latter is based on a novel combination of speeded-up robust features (SURF) and optical flow. The Hessian detector is employed to find all interest points. To reduce the computation time, we propose a new methodology for video segmentation into Frame Packets (FPs) based on interest-point trajectory tracking. Only the descriptors of moving interest points are considered, in order to generate a robust and discriminative codebook based on K-means clustering. We use a standard bag-of-visual-words Support Vector Machine (SVM) approach for action recognition. For evaluation, experiments are carried out on the KTH and UCF Sports datasets. The designed ST-SURF shows promising results: on the KTH dataset, the proposed method achieves an accuracy of 88.2%, which is on par with the state of the art. On the more realistic UCF Sports dataset, our method surpasses the best reported results for space-time descriptors with a Hessian detector, reaching 80.7%.
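A minimal sketch of the bag-of-visual-words stage is shown below, assuming the ST-SURF descriptors have already been extracted (one array of descriptors per clip); the codebook size and SVM settings are illustrative choices, not the paper's.

```python
# Minimal sketch: K-means codebook, per-clip word histograms, and an SVM classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptor_arrays, k=500, seed=0):
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(np.vstack(descriptor_arrays))

def bovw_histogram(descriptors, codebook):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)           # L1-normalized word histogram

def train_classifier(descriptor_arrays, labels, codebook):
    X = np.array([bovw_histogram(d, codebook) for d in descriptor_arrays])
    return SVC(kernel='rbf', C=10.0).fit(X, labels)
```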
Journal of Visual Communication and Image Representation | 2016
Sameh Megrhi; Marwa Jmal; Wided Souidene; Azeddine Beghdadi
Human action recognition continues to attract the computer vision research community due to its various applications. However, despite the variety of methods proposed to solve this problem, some issues still need to be addressed. In this paper, we present a human action detection and recognition process on large datasets based on interest-point trajectories. In order to detect moving humans in moving fields of view, a spatio-temporal action detection is performed based on optical flow and dense speeded-up robust features (SURF). Then, a video description based on a fusion process that combines motion, trajectory, and visual descriptors is proposed. Features within each bounding box are extracted by exploiting the bag-of-words approach. Finally, a support vector machine is employed to classify the detected actions. Experimental results on the challenging UCF101, KTH, and HMDB51 benchmark datasets reveal that the proposed technique achieves better performance than some of the existing state-of-the-art action recognition approaches.
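The fusion idea can be sketched as follows, assuming one quantizer (codebook) per descriptor channel (motion, trajectory, visual): each channel yields its own bag-of-words histogram and the histograms are concatenated into a single vector per detected bounding box. The channel names and the quantizer interface are assumptions.

```python
# Minimal sketch: concatenate per-channel, L1-normalized bag-of-words histograms.
import numpy as np

def fuse_histograms(channel_descriptors, channel_codebooks):
    """Both arguments are dicts keyed by channel name ('motion', 'trajectory', ...)."""
    parts = []
    for name, codebook in channel_codebooks.items():
        words = codebook.predict(channel_descriptors[name])
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        parts.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(parts)                 # fused descriptor for one box
```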
information sciences, signal processing and their applications | 2007
Azeddine Beghdadi; Wided Souidene
A new approach for evaluating image segmentation methods is proposed. It exploits image quality assessment measures inspired by human visual system (HVS) mechanisms. The main idea is to consider image segmentation as a coding process. This approach is used for evaluating some grey-level thresholding methods and could easily be extended to the evaluation of region-based segmentation methods. The obtained results confirm that this new approach is very promising and opens a new methodology for image segmentation evaluation.
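The "segmentation as coding" idea can be illustrated with a minimal sketch: rebuild the image by replacing each segment with its mean grey level and score the reconstruction against the original. PSNR is used here only as a stand-in for the HVS-inspired quality measures the paper relies on.

```python
# Minimal sketch: score a label map by the quality of its piecewise-constant "coding".
import numpy as np

def coded_reconstruction(image, labels):
    recon = np.zeros_like(image, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        recon[mask] = image[mask].mean()          # each segment coded by its mean
    return recon

def segmentation_score(image, labels):
    recon = coded_reconstruction(image.astype(float), labels)
    mse = np.mean((image.astype(float) - recon) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))   # PSNR in dB
```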
information sciences, signal processing and their applications | 2005
Wided Souidene; Karim Abed-Meraim; Azeddine Beghdadi
In this paper, we address four deterministic methods for blind multichannel identification in a blind image restoration framework. These methods are: the SubSpace (SS) method, the Minimum Noise Subspace (MNS) and Symmetric MNS (SMNS) methods, the Cross Relation (CR) method, and the Least Squares Smoothing (LSS) method. The latter is a new method introduced here for the first time, as an extension from the 1-D to the 2-D case of the least squares method of L. Tong et al. (1999). For each method, we detail its basic principle and provide a summary of the corresponding algorithm. In the noise-free case, all the methods developed here offer perfect channel identification. In the noisy case, these methods behave differently, and their performances are compared in terms of channel identification by means of the MSE.
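For the comparison criterion mentioned above, a minimal sketch of the normalized kernel MSE (minimized over the unavoidable scale ambiguity of blind identification) is shown below; the identification methods themselves (SS, MNS, SMNS, CR, LSS) are not reproduced.

```python
# Minimal sketch: normalized MSE between a true 2-D blur kernel and its blind estimate.
import numpy as np

def kernel_nmse(h_true, h_est):
    h, g = h_true.ravel().astype(float), h_est.ravel().astype(float)
    g = g * (h @ g) / max(g @ g, 1e-12)           # best scale fit of the estimate
    return np.sum((h - g) ** 2) / np.sum(h ** 2)  # 0 means perfect identification
```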
Journal of Electronic Imaging | 2017
Marwa Jmal; Wided Souidene; Rabah Attia
Cultural heritage digitization has been of research interest for several decades. As such, the stored images should be of pleasing visual quality. However, since images captured by digital devices may include undesirable effects, enhancing the image is essential. In this context, we present a framework for cultural heritage image illumination enhancement. First, a mapping curve based on saturation feedback is created to adjust the contrast. Then illumination is enhanced by applying a modified homomorphic filter in the frequency domain. The technique employs an optimization search process based on the efficient golden section search algorithm to compute the optimal parameters for producing the enhanced image. Finally, a color restoration function is applied to overcome the problem of color violation. The resulting image represents a trade-off among local contrast improvement, detail enhancement, and preservation of the image's naturalness. Experiments are conducted on a collected dataset of cultural heritage images and compared to some of the state-of-the-art image enhancement methods using a set of quantitative assessment criteria. Results show that our proposed approach is able to accomplish a wide set of the performance goals.
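The golden section search that drives the parameter selection can be sketched as follows; the objective `score` (for example, a quality measure of the enhanced image as a function of one filter parameter) and the search interval are assumptions, not the paper's exact setup.

```python
# Minimal sketch: golden-section search maximizing a 1-D quality score over [lo, hi].
import math

def golden_section_max(score, lo, hi, tol=1e-3):
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = score(c), score(d)
    while b - a > tol:
        if fc > fd:                    # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = score(c)
        else:                          # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = score(d)
    return (a + b) / 2.0
```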
international conference on image processing | 2007
Wided Souidene; Abdeldjalil Aïssa-El-Bey; Karim Abed-Meraim; Azeddine Beghdadi
This paper focuses on blind image separation using a sparse representation of the images in an appropriate transform domain. A new separation method is proposed that proceeds in two steps: (i) an image pre-processing step to transform the original sources into sparse images and to reduce the mixing matrix to an orthogonal transform, and (ii) a separation step that exploits the transformed image sparsity via an ℓp-norm based contrast function. A simple and efficient natural gradient technique is used for the optimization of the contrast function. The resulting algorithm is shown to outperform existing techniques in terms of separation quality and computational cost.
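A generic natural-gradient update on an ℓp contrast can be sketched as follows; each row of X is assumed to be one flattened, already-sparsified mixture, and the step size, exponent p, and iteration count are illustrative, not the paper's exact algorithm.

```python
# Minimal sketch: natural-gradient minimization of an l_p sparsity contrast.
import numpy as np

def lp_natural_gradient_separation(X, p=0.8, mu=0.05, n_iter=200):
    n = X.shape[0]
    W = np.eye(n)                                  # separating matrix
    for _ in range(n_iter):
        Y = W @ X
        phi = np.sign(Y) * (np.abs(Y) + 1e-8) ** (p - 1.0)    # d/dy |y|^p (regularized)
        grad = (np.eye(n) - (phi @ Y.T) / X.shape[1]) @ W      # natural-gradient direction
        W += mu * grad
    return W @ X, W
```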
international conference on computer vision theory and applications | 2016
Marwa Jmal; Wided Souidene; Rabah Attia
Nowadays, more attention is being focused on background subtraction methods owing to their importance in many computer vision applications. Most of the proposed approaches are classified as pixel-based due to their low complexity and processing speed. Other methods are considered spatiotemporal-based, as they take into account the surroundings of each analyzed pixel. In this context, we propose a new texture descriptor that is suitable for this task. We build on the advantages of local binary pattern variants to introduce a novel spatiotemporal center-symmetric local derivative pattern (STCS-LDP). Several improvements and constraints are introduced at the neighboring-pixel comparison level to make the descriptor less sensitive to noise while maintaining robustness to illumination changes. We also present a simple background subtraction algorithm based on our STCS-LDP descriptor. Experiments on multiple video sequences show that our method is efficient and produces results comparable to the state of the art.
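The spatial building block of such a descriptor can be sketched as a center-symmetric LBP code over a 3x3 neighbourhood with a small noise threshold T; the temporal and derivative extensions that make up STCS-LDP are not reproduced here, and T is an illustrative value.

```python
# Minimal sketch: 4-bit center-symmetric LBP codes for all interior pixels.
import numpy as np

def cs_lbp(image, T=3.0):
    img = image.astype(float)
    # four center-symmetric pairs: (N, S), (NE, SW), (E, W), (SE, NW)
    pairs = [((-1, 0), (1, 0)), ((-1, 1), (1, -1)), ((0, 1), (0, -1)), ((1, 1), (-1, -1))]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= ((a - b) > T).astype(np.uint8) << bit   # one bit per pixel pair
    return codes
```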