Stefano Andriani
University of Padua
Publications
Featured research published by Stefano Andriani.
IEEE Transactions on Image Processing | 2007
Daniele Menon; Stefano Andriani; Giancarlo Calvagno
Most digital cameras use a color filter array to capture the colors of the scene. Downsampled versions of the red, green, and blue components are acquired, and an interpolation of the three colors is necessary to reconstruct a full representation of the image. This color interpolation is known as demosaicing. The most effective demosaicing techniques proposed in the literature are based on directional filtering and a posteriori decision. In this paper, we present a novel approach to this reconstruction method. A refining step is included to further improve the resulting reconstructed image. The proposed approach requires a limited computational cost and gives good performance even when compared to more demanding techniques.
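The directional-filtering-with-a-posteriori-decision idea can be illustrated with a toy sketch: interpolate the missing green sample both horizontally and vertically, then keep the candidate from the direction with the smaller local gradient. The filters and decision rule below are simplified placeholders, not the paper's exact ones.

```python
import numpy as np

def demosaic_green(bayer):
    """Toy directional green interpolation on an 'RGGB' Bayer mosaic.
    Illustrative only: real demosaicers use longer filters and a
    refining step, as described in the paper."""
    H, W = bayer.shape
    g = bayer.astype(float).copy()
    for y in range(2, H - 2):
        for x in range(2, W - 2):
            if y % 2 == x % 2:  # red or blue site: green is missing here
                # horizontal and vertical candidate interpolations
                gh = (bayer[y, x - 1] + bayer[y, x + 1]) / 2.0
                gv = (bayer[y - 1, x] + bayer[y + 1, x]) / 2.0
                # a posteriori decision: keep the direction with the
                # smaller local gradient (fewer edge-crossing artifacts)
                dh = abs(float(bayer[y, x - 1]) - float(bayer[y, x + 1]))
                dv = abs(float(bayer[y - 1, x]) - float(bayer[y + 1, x]))
                g[y, x] = gh if dh <= dv else gv
    return g
```

On a flat region both candidates agree, so the decision only matters near edges, which is where directional methods gain over fixed bilinear filters.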
international conference on image processing | 2013
Stefano Andriani; Harald Brendel; Tamara Seybold; Joseph Goldstone
In 1991 Kodak released a set of 24 digital color images derived from a variety of film source materials. Since then, most image processing algorithms have been developed, optimized, tested and compared using this set. Until a few years ago it was considered “the” image set; however, today it shows its limitations. Researchers have expressed the need for better, more up-to-date material. We present a new set of high quality color image sequences captured with our professional digital cinema camera. This camera stores uncompressed raw sensor data, and the set is freely available via FTP at ftp://[email protected]/ password: imageset.
IEEE Transactions on Image Processing | 2008
Stefano Andriani; Giancarlo Calvagno
In this paper, we present a novel technique that uses the optimal linear prediction theory to exploit all the existing redundancies in a color video sequence for lossless compression purposes. The main idea is to introduce the spatial, the spectral, and the temporal correlations in the autocorrelation matrix estimate. In this way, we calculate the cross correlations between adjacent frames and adjacent color components to improve the prediction, i.e., reduce the prediction error energy. The residual image is then coded using a context-based Golomb-Rice coder, where the error modeling is provided by a quantized version of the local prediction error variance. Experimental results show that the proposed algorithm achieves good compression ratios and it is robust against the scene change problem.
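The core of the approach is classical optimal linear prediction: estimate an autocorrelation matrix, solve the normal equations for the coefficients that minimise the prediction-error energy, and entropy-code the residual. A one-dimensional sketch of that machinery is below; the paper's actual estimate mixes spatial, spectral and temporal neighbours into one matrix, which this toy version does not.

```python
import numpy as np

def optimal_predictor(samples, order=3):
    """Solve the normal equations R a = r for the linear predictor that
    minimises the prediction-error energy (1-D toy version)."""
    x = np.asarray(samples, dtype=float)
    N = len(x)
    # biased autocorrelation estimate for lags 0..order
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def predict(samples, a):
    """Predict each sample from the previous len(a) samples."""
    x = np.asarray(samples, dtype=float)
    p = len(a)
    return np.array([np.dot(a, x[n - p:n][::-1])
                     for n in range(p, len(x))])
```

A highly correlated signal (here a sinusoid) is predicted almost exactly, so the residual energy, and hence the coded bit rate, drops sharply.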
international conference on image processing | 2005
Stefano Andriani; Giancarlo Calvagno; Gian Antonio Mian
In this paper we present a new way to exploit optimal vector prediction theory for lossless compression of color images. To compress color images, the spectral correlation is usually reduced using a color space transformation (e.g., from RGB colour space to YUV Ricoh colour space, or to YCbCr when small losses are allowed). In this work, we exploit the spectral correlation to develop an optimal vector predictor in order to reduce the entropy of the residual image. To this purpose, we consider a pixel as a vector of the three components R, G, and B, and we predict this vector. As a result, we obtain an improvement of the compression ratio at the cost of an increase in the computational complexity. Some techniques to reduce the computational cost are presented. A comparison is carried out with the scalar algorithms GLICBAWLS and JPEG-LS.
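Treating a pixel as an RGB vector means the predictor is a matrix rather than a scalar, so each predicted component can draw on all three colour channels of the neighbouring pixels. The sketch below uses a hypothetical one-tap 3x3 predictor applied to the left neighbour; the paper derives the optimal matrix from the measured spectral correlations.

```python
import numpy as np

def vector_residual(row, A):
    """Toy vector prediction along one image row: each pixel
    p[x] = (R, G, B) is predicted as A @ p[x-1] and the residual
    (the signal actually entropy-coded) is returned.
    `A` is a hypothetical 3x3 predictor matrix."""
    row = np.asarray(row, dtype=float)   # shape (N, 3): N pixels
    pred = row[:-1] @ A.T                # predict pixel x from pixel x-1
    return row[1:] - pred
```

With a well-chosen matrix the residual components are decorrelated across channels, which is exactly the redundancy a scalar (per-channel) predictor cannot remove.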
data compression conference | 2007
Stefano Andriani; Giancarlo Calvagno
Summary form only given. In this paper we present a lossless compression algorithm for colour video sequences which exploits the spatial, the spectral and the temporal correlations in the RGB colour space using the well-known optimal prediction theory. The main idea is to construct the optimal prediction coefficients by estimating an autocorrelation matrix which captures all these correlations. No colour transformation or motion compensation is applied, because reversible colour transformations are not able to fully decorrelate the three bands of each frame, and motion compensation remarkably increases the complexity of the updating step of the autocorrelation matrix estimate. Furthermore, the absence of motion compensation makes the algorithm robust to scene changes. The prediction errors are then coded using a context-based Golomb-Rice coder, with bias cancellation, but without run-length coding. To construct the contexts, the prediction errors are modeled using an estimate of their local variance. This estimate considers all the previous prediction errors, using a forgetting factor to improve the adaptability of the proposed algorithm. The quantized estimated variance values are used as contexts for the Golomb-Rice coder; among others, we considered the following solutions:
- 12 contexts, obtained by sampling the standard deviation with a quantization step Delta = 1 and a saturation threshold equal to 12 [delta12];
- 128 contexts, obtained by sampling the standard deviation with Delta = 1/3 and a saturation threshold equal to 128/3 [delta128].
Coding results are presented for the proposed algorithm compared to JPEG-LS (without using any colour transformation) and JPEG2000 (in lossless mode, using the reversible YDbDr colour transform and the 5/3 DWT). The results show an improvement of about 1.5 bpp and 0.65 bpp compared with the JPEG-LS and JPEG2000 coders, respectively.
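The coding stage described above can be sketched in a few lines: fold signed prediction errors to non-negative integers, quantise the local standard deviation into a context (the 12-context, Delta = 1 variant is used here), and Rice-code the value. Bias cancellation and the forgetting-factor variance update are omitted for brevity.

```python
def map_signed(e):
    """Fold a signed prediction error to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def context(sigma, delta=1.0, n_contexts=12):
    """Quantise the local standard-deviation estimate into a coding
    context (the Delta = 1, 12-context [delta12] configuration)."""
    return min(int(sigma / delta), n_contexts - 1)

def rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer with parameter k:
    unary-coded quotient, terminating 0, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k:
        bits += format(r, 'b').zfill(k)
    return bits
```

In practice a Rice parameter k is maintained per context, so errors from high-variance regions get longer remainder fields while flat regions are coded with near-unary codes.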
international conference on image processing | 2013
Stefano Andriani; Harald Brendel
In this paper we present a three-step crosstalk correction algorithm for single sensor still or video cameras provided with a Bayer color filter array. The first step is performed off-line, during the calibration or the development of the camera; it estimates the sensor response to different colors and analyzes the crosstalk. In the second step, the algorithm corrects “on-the-fly” the raw data of the sensor by using the crosstalk model estimated in the first step. The third, optional, step can be included to remove the residual crosstalk from the second step. It consists of a low-pass filter and it can be omitted if other image rescaling steps are included into the image processing chain. The resulting technique has proved to be effective in removing the crosstalk without introducing any other visual artifacts.
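The calibrate-then-invert structure of the first two steps can be sketched with a linear mixing model: estimate off-line how each sensed channel mixes the true colour responses, then correct raw values on-the-fly with the inverse. The 3x3 matrix below is a hypothetical stand-in, and the toy operates on full RGB triplets rather than on actual Bayer raw data as the paper's algorithm does.

```python
import numpy as np

# Step 1 (off-line, hypothetical numbers): crosstalk mixing matrix
# estimated during camera calibration; each sensed channel is a
# weighted mix of the true R, G, B responses.
M = np.array([[0.92, 0.06, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.07, 0.91]])

M_inv = np.linalg.inv(M)  # inverted once, stored with the calibration

def correct_crosstalk(rgb):
    """Step 2: on-the-fly correction of a sensed pixel by applying the
    inverse of the estimated mixing matrix."""
    return np.asarray(rgb, dtype=float) @ M_inv.T
```

The optional third step in the paper is a low-pass filter that mops up residual crosstalk; in this linear toy model the inversion is already exact, which is why that step can be skipped when other rescaling filters are present in the chain.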
european signal processing conference | 2006
Daniele Menon; Stefano Andriani; Giancarlo Calvagno
european signal processing conference | 2005
Stefano Andriani; Giancarlo Calvagno; Gian Antonio Mian
european signal processing conference | 2004
Stefano Andriani; Giancarlo Calvagno; Gian Antonio Mian
european signal processing conference | 2006
Stefano Andriani; Giancarlo Calvagno; Daniele Menon