Chaouki Diab
Institut National des Sciences Appliquées de Lyon
Publications
Featured research published by Chaouki Diab.
Signal Processing: Image Communication | 1990
Chaouki Diab; Rémy Prost; Robert Goutte
It is shown first that quadrature filters (QFs) and wavelet-generated filters designed for exact reconstruction of an infinite signal in a subband coding system induce a reconstruction error at the picture boundaries. This error is evaluated. A new image coding system is then proposed in which sampled images are decomposed into subbands in a one-step procedure. Ideal filters are used and an error-free reconstruction is achieved. The filters are implemented with the FFT, at the cost of a few overhead bits. The computational load is evaluated and compared with that of pyramidal subband coding with FIR QMFs. Illustrative examples allow a comparison of the proposed method with subband coding using QMFs and clearly show the peak-to-peak SNR (PPSNR) gain in the reconstructed images.
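As a minimal 1-D sketch of the ideal-filter idea (our own illustration, not the paper's exact image scheme), splitting the DFT coefficients into bands yields decimated subband signals from which the original is recovered exactly:

```python
import numpy as np

# Ideal half-band split in the DFT domain: the low band keeps the
# coefficients around DC, the high band keeps the rest. Each subband
# is represented by N/2 samples, and reconstruction is error-free.
N = 16
x = np.random.randn(N)
X = np.fft.fft(x)

low = np.fft.ifft(np.concatenate([X[:N//4], X[-N//4:]]))  # decimated low band
high = np.fft.ifft(X[N//4:-N//4])                         # decimated high band

# Exact reconstruction: reassemble the DFT coefficients from the subbands.
L, H = np.fft.fft(low), np.fft.fft(high)
X_rec = np.empty(N, dtype=complex)
X_rec[:N//4], X_rec[-N//4:], X_rec[N//4:-N//4] = L[:N//4], L[N//4:], H
assert np.allclose(np.fft.ifft(X_rec).real, x)
```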
IEEE Transactions on Neural Networks | 2011
Dany Merhej; Chaouki Diab; Mohamad Khalil; Rémy Prost
In the compressed sensing framework, different algorithms have been proposed for sparse signal recovery from an incomplete set of linear measurements. The best known fall into two categories: l1-norm minimization-based algorithms, and l0 pseudo-norm minimization with greedy matching pursuit algorithms. In this paper, we propose a modified matching pursuit algorithm based on orthogonal matching pursuit (OMP). The idea is to replace the correlation step of OMP with a neural network. Simulation results show that for random sparse signal reconstruction the proposed method performs as well as OMP, so the complexity overhead of training the network and integrating it into the recovery is not justified in that case. However, if the signal has an added structure, that structure is learned and incorporated into the proposed OMP. We consider three structures: first, the sparse signal is positive; second, the positions of the nonzero coefficients of the sparse signal follow a certain spatial probability density function; third, a combination of both. Simulation results show that, for these signals of interest, the probability of exact recovery with the modified OMP increases significantly. Comparisons with l1-based reconstructions are also performed. We thus present a framework for reconstructing sparse signals with added structure by embedding additional knowledge into the decoding process, through neural network training, in order to improve the recovery of sparse signals of interest.
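A minimal OMP sketch with a pluggable atom-selection step may make the idea concrete; the `select` hook and the default correlation rule below are our own framing (the paper trains a neural network to perform this selection, with an architecture not given in the abstract):

```python
import numpy as np

def omp(A, y, k, select=None):
    """Orthogonal matching pursuit with a replaceable selection step.

    `select(residual)` returns the index of the next atom. The default is
    the usual correlation rule argmax |A^T r|; a trained network mapping
    residuals to atom indices could be plugged in instead.
    """
    if select is None:
        select = lambda r: int(np.argmax(np.abs(A.T @ r)))
    support, residual = [], y.astype(float).copy()
    for _ in range(k):
        support.append(select(residual))
        # Least-squares fit on the current support (orthogonal projection).
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```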
Signal Processing: Image Communication | 1992
Chaouki Diab; Rémy Prost; Robert Goutte
In a recent paper, an image decomposition/reconstruction subband coding scheme free of aliasing and boundary errors was proposed. Ideal filters were used and implemented with the DFT; a small amount of additional data is needed to perform the exact reconstruction. We show here that using the DCT to implement a similar filtering process avoids this additional data. In practice, the computational load does not change.
IEEE Transactions on Signal Processing | 2002
Chaouki Diab; Mohammad Oueidat; Rémy Prost
This paper reconsiders the discrete cosine transform (DCT) algorithm of Narasimha and Peterson (1978) in order to reduce the computational cost of evaluating an N-point inverse discrete cosine transform (IDCT) through an N-point FFT. A new relationship between the IDCT and the discrete Fourier transform (DFT) is established. It allows two N-point IDCTs to be evaluated simultaneously by computing a single FFT of the same dimension, which halves the number of operations.
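The paper's two-IDCT relationship is not spelled out in the abstract, so the sketch below only illustrates the classical Narasimha-Peterson-style identity it builds on: one DCT-II computed through a single same-length FFT, checked against `scipy.fft.dct`:

```python
import numpy as np
from scipy.fft import fft, dct

def dct_via_fft(x):
    """DCT-II via one N-point FFT: reorder the input as
    [x0, x2, x4, ..., x5, x3, x1], then apply a quarter-sample phase twist."""
    N = len(x)
    v = np.empty(N)
    v[:(N + 1) // 2] = x[::2]          # even-indexed samples, in order
    v[(N + 1) // 2:] = x[1::2][::-1]   # odd-indexed samples, reversed
    k = np.arange(N)
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * fft(v))

x = np.random.randn(16)
assert np.allclose(dct_via_fft(x), dct(x, type=2))
```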
Signal Processing: Theories and Applications | 1992
N. Akrout; C. Allart; Chaouki Diab; Rémy Prost; Robert Goutte
In vector quantization (VQ) of subbands, a specialized codebook is required for each subband. Codebook generation is therefore time-consuming when an iterative algorithm such as the Linde-Buzo-Gray (LBG) algorithm is used. We propose a Progressive Constructive Clustering (PCC) algorithm as a non-iterative technique for designing the vector quantization codebook. Subband coding at 0.57 bit per pixel indicates that the codebooks generated by the PCC and LBG algorithms yield the same distortion. However, with the PCC algorithm, the codebook can be built in about 2 percent of the time required by the LBG algorithm.
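The PCC construction itself is not detailed in the abstract; for context, here is a compact sketch of the LBG baseline it is measured against (splitting followed by Lloyd refinement, in our own minimal form):

```python
import numpy as np

def lbg_codebook(vectors, target_size, eps=1e-3, max_iter=100):
    """LBG codebook design: start from the global centroid, repeatedly
    split every codeword into a perturbed pair, then refine with Lloyd
    iterations (nearest-codeword assignment + centroid update)."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < target_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(max_iter):
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            new_cb = np.array([vectors[labels == i].mean(axis=0)
                               if np.any(labels == i) else codebook[i]
                               for i in range(len(codebook))])
            if np.allclose(new_cb, codebook):
                break
            codebook = new_cb
    return codebook[:target_size]  # truncate if target is not a power of two
```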
Signal Processing: Image Communication | 1995
Rémy Prost; Chaouki Diab; Robert Goutte
This paper extends previously reported work on image multi-subband decomposition/reconstruction using the DFT. The purpose of the method proposed here is to achieve exact decomposition and reconstruction using ideal band-pass filters implemented in the DFT domain, without any excess data.
NMR in Biomedicine | 2014
Dany Merhej; Hélène Ratiney; Chaouki Diab; Mohamad Khalil; Michaël Sdika; Rémy Prost
Multidimensional NMR spectroscopy is widely used for studies of molecular and biomolecular structure. A major disadvantage of multidimensional NMR is the long acquisition time which, regardless of sensitivity considerations, may be needed to obtain the final multidimensional frequency-domain coefficients. In this article, a method for under-sampling multidimensional NMR acquisition of sparse spectra is presented, in the case of two-dimensional NMR acquisitions. It relies on prior knowledge of the support in the two-dimensional frequency domain to recover an over-determined system from the under-determined system induced in the linear acquisition model when under-sampled acquisitions are performed. This over-determined system can then be solved with linear least squares. The prior knowledge is obtained efficiently, at low cost, from the one-dimensional NMR acquisition, which is generally acquired as a first step in multidimensional NMR. If this one-dimensional acquisition is intrinsically sparse, it is possible to reconstruct the corresponding two-dimensional acquisition from far fewer observations than those imposed by the Nyquist criterion, and consequently to reduce the acquisition time. Further improvements are obtained by optimizing the sampling procedure for the least-squares reconstruction using the sequential backward selection algorithm. Theoretical and experimental results are given for a traditional acquisition scheme and demonstrate reliable and fast reconstructions with acceleration factors in the range 3–6. The proposed method outperforms compressed sensing methods (OMP, l1) in terms of reconstruction performance, implementation and computation time. The approach can easily be extended to higher dimensions and to spectroscopic imaging.
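The key reduction is simple to state: once the support S of the sparse spectrum is known from the 1-D step, the under-determined system y = Ax collapses to the over-determined system y = A[:, S] x_S, solvable by ordinary least squares. A toy illustration follows, with a generic random matrix standing in for the actual partial-Fourier acquisition model (an assumption made for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 64, 10                     # ambient size, measurements, support size
support = rng.choice(n, size=s, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(s)       # sparse "spectrum" with known support

A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in measurement operator
y = A @ x                                     # under-sampled observations (m << n)

# Restrict to the known support: an m-by-s over-determined least-squares problem.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
assert np.allclose(x_hat, x, atol=1e-8)
```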
International Conference on Electronics, Circuits, and Systems | 2009
Chaouki Diab; Mohamad Oueidat
In this paper, we present a simple "sub-optimal" quantizer that can be applied to any probability distribution. The principle and the computation of this quantizer are detailed. Its implementation shows that the method achieves significant results compared with other quantizers in terms of mean-square-error distortion.
IEEE Transactions on Signal Processing | 1996
Rémy Prost; Chaouki Diab
Signals can be split into subbands using ideal bandpass filters in the discrete cosine transform (DCT) domain. We show that decimation by band folding is not equivalent to downsampling but results in two convolution operations that involve both the odd and even samples of the signal.
2010 XIth International Workshop on Symbolic and Numerical Methods, Modeling and Applications to Circuit Design (SM2ACD) | 2010
Chaouki Diab; Mohamad Oueidat
This paper proposes a method for designing an adaptive scalar quantizer based on the source statistics. Adaptivity is useful in applications where the statistics of the source are either not known a priori or change over time. The proposed method first determines two quantizer cells and the corresponding output levels such that the distortion is minimized over all possible two-level quantizers. The cell with the largest empirical distortion is then split into two cells in such a way that the empirical distortion is minimized over all possible splits. Each time a split is made, the number of output levels increases by one, until the target number of cells is reached. Finally, the resulting quantizer serves as a good initial starting point for the Lloyd-Max algorithm, which is run to reach global optimality. Experimental results show that this quantizer outperforms, in terms of mean square error (MSE), the one obtained by the Lloyd-Max method started from an arbitrary initial point. Moreover, the proposed method converges more rapidly than the Lloyd-Max algorithm alone, and it adapts itself to the histogram of the data without creating any empty output range, which improves the robustness of the design.
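A minimal sketch of the split-then-refine design described above; the split rule here (replacing the worst cell's level by the means of its lower and upper halves) is a simplification of the paper's distortion-optimal split, continuous data is assumed, and all names are ours:

```python
import numpy as np

def lloyd_max(data, levels, n_iter=100):
    """Plain Lloyd-Max iterations: nearest-level cells, then centroid update."""
    levels = np.sort(np.asarray(levels, dtype=float))
    for _ in range(n_iter):
        edges = (levels[:-1] + levels[1:]) / 2
        labels = np.searchsorted(edges, data)
        for i in range(levels.size):
            cell = data[labels == i]
            if cell.size:
                levels[i] = cell.mean()
        levels = np.sort(levels)
    return levels

def split_design(data, n_levels):
    """Grow the quantizer one level at a time by splitting the cell with
    the largest empirical distortion, then hand the result to Lloyd-Max."""
    levels = lloyd_max(data, [np.quantile(data, 0.25), np.quantile(data, 0.75)])
    while levels.size < n_levels:
        edges = (levels[:-1] + levels[1:]) / 2
        labels = np.searchsorted(edges, data)
        dist = [np.sum((data[labels == i] - levels[i]) ** 2)
                for i in range(levels.size)]
        worst = int(np.argmax(dist))
        cell = data[labels == worst]
        med = np.median(cell)  # simplified split point for the worst cell
        new = [cell[cell <= med].mean(), cell[cell > med].mean()]
        levels = lloyd_max(data, np.append(np.delete(levels, worst), new))
    return levels
```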