
Publication


Featured research published by Carla L. Pagliari.


IEEE Transactions on Image Processing | 2013

Multiscale Image Fusion Using the Undecimated Wavelet Transform With Spectral Factorization and Nonorthogonal Filter Banks

Andreas Ellmauthaler; Carla L. Pagliari; E.A.B. da Silva

Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may introduce reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
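
As a point of reference, the sketch below shows a generic undecimated-wavelet fusion pipeline using PyWavelets (pywt.swt2/pywt.iswt2) with a plain choose-max rule on the detail bands; it does not implement the spectral factorization of the analysis filters or the nonorthogonal filter banks that the paper introduces, and all parameter values are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def uwt_fuse(img_a, img_b, wavelet="db2", levels=3):
    # Undecimated (stationary) wavelet decomposition of both inputs.
    # Image sides must be multiples of 2**levels for pywt.swt2.
    ca = pywt.swt2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.swt2(img_b.astype(float), wavelet, level=levels)

    fused = []
    for (a_approx, a_details), (b_approx, b_details) in zip(ca, cb):
        f_approx = 0.5 * (a_approx + b_approx)        # average the approximation bands
        f_details = tuple(                            # choose-max on each detail band
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(a_details, b_details)
        )
        fused.append((f_approx, f_details))

    return pywt.iswt2(fused, wavelet)                 # reconstruct the fused image
```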


international conference on multimedia and expo | 2016

Light field HEVC-based image coding using locally linear embedding and self-similarity compensated prediction

Ricardo J. S. Monteiro; Luis F. R. Lucas; Caroline Conti; Paulo Nunes; Nuno M. M. Rodrigues; Sérgio M. M. de Faria; Carla L. Pagliari; Eduardo A. B. da Silva; Luís Ducla Soares

Light field imaging is a promising new technology that allows the user not only to change the focus and perspective after taking a picture, but also to generate 3D content, among other applications. However, light field images are characterized by large amounts of data, and there is a lack of coding tools to efficiently encode this type of content. Therefore, this paper proposes the addition of two new prediction tools to the HEVC framework to improve its coding efficiency. The first tool is based on locally linear embedding prediction and the second on self-similarity compensated prediction. Experimental results show improvements over JPEG and HEVC in terms of average bitrate savings of 71.44% and 31.87%, and average PSNR gains of 4.73 dB and 0.89 dB, respectively.
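
For a rough idea of locally-linear-embedding style intra prediction outside any codec, the sketch below predicts the current block as a least-squares combination of the k causal blocks whose L-shaped templates best match the current template. The function and parameter names are hypothetical, and this is not the paper's HEVC integration.

```python
import numpy as np

def lle_predict_block(recon, y, x, bs, k=8, search=32, t=2):
    # recon: previously reconstructed image area; (y, x): top-left corner of
    # the block to predict; bs: block size; t: template thickness in pixels.
    H, W = recon.shape

    def template(r, c):
        top = recon[r - t:r, c - t:c + bs].ravel()
        left = recon[r:r + bs, c - t:c].ravel()
        return np.concatenate([top, left])

    target = template(y, x)

    # Collect causal candidates (blocks fully above or fully to the left).
    cands = []
    for r in range(max(t, y - search), y + 1):
        for c in range(max(t, x - search), x + 1):
            if r + bs <= H and c + bs <= W and (r + bs <= y or c + bs <= x):
                cands.append((np.sum((template(r, c) - target) ** 2), r, c))
    cands.sort(key=lambda e: e[0])
    best = cands[:k]

    # Least-squares weights that best reconstruct the target template
    # from the k candidate templates.
    T = np.stack([template(r, c) for _, r, c in best], axis=1)
    w, *_ = np.linalg.lstsq(T, target, rcond=None)

    # Apply the same weights to the candidate blocks to form the prediction.
    B = np.stack([recon[r:r + bs, c:c + bs] for _, r, c in best], axis=2)
    return B @ w
```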


IEEE Transactions on Image Processing | 2012

Recursive Algorithms for Bias and Gain Nonuniformity Correction in Infrared Videos

Daniel R. Pipa; E.A.B. da Silva; Carla L. Pagliari; P.S.R. Diniz

Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN), also known as spatial nonuniformity, that degrades image quality. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity.
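
The gist of scene-based gain/bias compensation can be illustrated with a simpler LMS-style update of a per-pixel affine model; the paper replaces this with recursive least-squares and affine projection updates of the same model, so treat the snippet as an assumption-laden sketch rather than the published method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lms_nuc(frames, mu=0.05):
    # Per-pixel affine correction: x_hat = g * y + b, where y is the raw
    # IRFPA reading and x_hat the compensated output.
    g = np.ones_like(frames[0], dtype=float)    # gain estimates
    b = np.zeros_like(frames[0], dtype=float)   # bias estimates
    corrected = []
    for y in frames:
        x_hat = g * y + b
        # Desired signal: local spatial mean of the corrected frame, a common
        # proxy for the (spatially smooth) true scene irradiance.
        d = uniform_filter(x_hat, size=5)
        e = d - x_hat                            # per-pixel error
        g += mu * e * y                          # stochastic-gradient updates
        b += mu * e
        corrected.append(x_hat)
    return corrected
```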


IEEE Transactions on Image Processing | 2015

Intra Predictive Depth Map Coding Using Flexible Block Partitioning

Luis F. R. Lucas; Krzysztof Wegner; Nuno M. M. Rodrigues; Carla L. Pagliari; Eduardo A. B. da Silva; Sérgio M. M. de Faria

A complete encoding solution for efficient intra-based depth map compression is proposed in this paper. The algorithm, denominated predictive depth coding (PDC), was specifically developed to efficiently represent the characteristics of depth maps, mostly composed of smooth areas delimited by sharp edges. At its core, PDC involves a directional intra prediction framework and a straightforward residue coding method, combined with an optimized flexible block partitioning scheme. In order to improve the algorithm in the presence of depth edges that cannot be efficiently predicted by the directional modes, a constrained depth modeling mode, based on explicit edge representation, was developed. For residue coding, a simple and low-complexity approach was investigated, using constant and linear residue modeling, depending on the prediction mode. The performance of the proposed intra depth map coding approach was evaluated based on the quality of the views synthesized using the encoded depth maps and original texture views. The experimental tests, based on the all-intra configuration, demonstrated the superior rate-distortion performance of PDC, with average bitrate savings of 6% when compared with the current state-of-the-art intra depth map coding solution present in the 3D extension of the High Efficiency Video Coding standard (3D-HEVC). By using view synthesis optimization in both the PDC and 3D-HEVC encoders, the average bitrate savings increase to 14.3%. This suggests that the proposed method, without using transform-based residue coding, is an efficient alternative to the current 3D-HEVC algorithm for intra depth map coding.
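
To make the flexible partitioning idea concrete, here is a toy rate-distortion driven binary partitioner in which each leaf is coded by a constant (mean) residue model only; the directional prediction, the linear residue model, and the constrained depth modeling mode of PDC are all omitted, and the bit costs and Lagrange multiplier are placeholders.

```python
import numpy as np

LAMBDA = 50.0        # illustrative Lagrange multiplier
MEAN_BITS = 8        # placeholder rate for signalling a leaf's mean value

def rd_partition(block, min_size=4):
    # Cost of coding this block as a single leaf with its mean value.
    mean = block.mean()
    leaf_cost = np.sum((block - mean) ** 2) + LAMBDA * (MEAN_BITS + 1)
    best = (leaf_cost, [("leaf", mean)])

    h, w = block.shape
    if min(h, w) <= min_size:
        return best

    # Try horizontal and vertical binary splits (flexible partitioning).
    for halves in ((block[: h // 2, :], block[h // 2 :, :]),
                   (block[:, : w // 2], block[:, w // 2 :])):
        cost, tree = LAMBDA, [("split",)]        # one split flag
        for half in halves:
            c, t = rd_partition(half, min_size)
            cost += c
            tree += t
        if cost < best[0]:
            best = (cost, tree)
    return best
```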


international conference on image processing | 2013

A novel iterative calibration approach for thermal infrared cameras

Andreas Ellmauthaler; Eduardo A. B. da Silva; Carla L. Pagliari; Jonathan N. Gois; Sergio R. Neves

The accurate geometric calibration of thermal infrared (IR) cameras is of vital importance in many computer vision applications. In general, the calibration procedure consists of localizing a set of calibration points within a calibration image; this set is subsequently used to solve for the camera parameters. However, due to the physical limitations of the IR acquisition process, the localization of the calibration points often poses difficulties, leading to unsatisfactory calibration results. In this work a novel IR camera calibration approach is introduced, which localizes with improved accuracy the calibration points within images of a conventional calibration board consisting of miniature light bulbs. Our algorithm models the radiation pattern of each light bulb as an ellipse and takes the center of mass of the extracted ellipsoidal region as the initial calibration point, which is then refined iteratively using alternating mappings to and from an undistorted grid model. The proposed processing chain leads to a significantly reduced calibration error when compared to the state of the art. Furthermore, the proposed methodology can also be used to calibrate visible-light cameras, thus being suitable for the calibration of multiple camera rigs involving both visible-light and IR cameras.
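
An initial localization step in the spirit of the above could look like the following OpenCV sketch: threshold the IR frame, keep sufficiently large blobs, and take each blob's centroid as a starting calibration point. The iterative refinement against the undistorted grid model, which is the paper's main contribution, is not shown, and the threshold and area values are arbitrary.

```python
import numpy as np
import cv2

def initial_bulb_centers(ir_frame, thresh=200, min_area=10):
    # ir_frame: 8-bit grayscale thermal image of the light-bulb board.
    _, mask = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep blobs above the minimum area.
    pts = [centroids[i] for i in range(1, n)
           if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return np.array(pts, dtype=np.float32)

# Once the detected centers are matched to the known board geometry
# (object_pts), standard calibration applies, e.g.:
#   rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
#       [object_pts], [image_pts], ir_frame.shape[::-1], None, None)
```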


picture coding symposium | 2010

Multiscale recurrent pattern matching approach for depth map coding

Danillo B. Graziosi; Nuno M. M. Rodrigues; Carla L. Pagliari; Eduardo A. B. da Silva; Sérgio M. M. de Faria; Marcelo M. Perez; Murilo B. de Carvalho

In this article we propose to compress depth maps using a coding scheme based on multiscale recurrent pattern matching, and evaluate its impact on depth image based rendering (DIBR). Depth maps are usually converted into gray-scale images and compressed like a conventional luminance signal. However, using traditional transform-based encoders to compress depth maps may result in undesired artifacts at sharp edges due to the quantization of high-frequency coefficients. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based encoder that is able to preserve and efficiently encode high-frequency patterns, such as edge information. This ability is critical for encoding depth map images. Experimental results for encoding depth maps show that MMP is much more efficient in a rate-distortion sense than standard image compression techniques such as JPEG2000 or H.264/AVC. In addition, the depth maps compressed with MMP generate reconstructed views with higher quality than all other tested compression algorithms.


international conference on image processing | 2013

Predictive depth map coding for efficient virtual view synthesis

Luis F. R. Lucas; Nuno M. M. Rodrigues; Carla L. Pagliari; Eduardo A. B. da Silva; Sérgio M. M. de Faria

This paper presents a novel approach to compress depth maps intended for virtual view synthesis. The proposal uses a sophisticated prediction model, combining the HEVC intra prediction modes with a flexible partitioning scheme. It exhaustively evaluates the prediction modes for a large number of block sizes in order to find the minimum coding cost for each depth map block. Unlike HEVC, no transform is used; the residue is trivially encoded through the transmission of just its mean value. The experimental results show that, when the encoding evaluation metric is the quality of the view synthesized using the encoded depth map against the depth map encoding rate, the proposed algorithm generates reconstructed depth maps that provide, for most bitrates, some of the best performances among state-of-the-art depth map encoders. In addition, it runs approximately as fast as the HEVC HM.


international conference on image processing | 2012

Efficient depth map coding using linear residue approximation and a flexible prediction framework

Luis F. R. Lucas; Nuno M. M. Rodrigues; Carla L. Pagliari; E.A.B. da Silva; S.M.M. de Faria

The recent market growth in 3D video equipment and associated services has increased the importance of developing more efficient 3D and multiview data representation algorithms. One of the most investigated formats is video+depth, which uses depth image based rendering (DIBR) to combine texture and depth information in order to create an arbitrary number of views at the decoder. Such an approach requires that depth information be accurately encoded. However, the methods usually employed to encode texture do not seem to be suitable for depth map coding. In this paper we propose a novel depth map coding algorithm based on the assumption that depth images are piecewise-linear smooth signals. The algorithm is designed to encode sharp edges using a flexible dyadic block segmentation and a hierarchical intra-prediction framework. The residual signal from this operation is aggregated into blocks which are approximated using linear modeling functions. Furthermore, the proposed algorithm uses a dictionary that increases the coding efficiency for previously used approximations. Experimental results for depth map coding show that views synthesized using the depth maps encoded by the proposed algorithm present higher PSNR than their counterparts, demonstrating the method's efficiency.
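
The piecewise-linear assumption boils down to approximating each residue block with a first-order polynomial; a minimal least-squares version of that idea, in which only the three plane coefficients would need to be signalled, is sketched below, independent of the actual codec.

```python
import numpy as np

def fit_linear_residue(residue):
    # Approximate the residue block by the best-fitting plane a*x + b*y + c.
    h, w = residue.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, residue.ravel(), rcond=None)
    approx = (A @ coeffs).reshape(h, w)
    return coeffs, approx   # (a, b, c) and the reconstructed block
```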


international conference on image processing | 2012

Infrared-visible image fusion using the undecimated wavelet transform with spectral factorization and target extraction

Andreas Ellmauthaler; E.A.B. da Silva; Carla L. Pagliari; Sergio R. Neves

In this work we propose a fusion framework based on undecimated wavelet transforms with spectral factorization which incorporates information about the presence of targets within the infrared (IR) image into the fusion process. For this purpose, a novel IR segmentation algorithm that extracts targets from low-contrast environments and suppresses the introduction of spurious segmentation results is introduced. Thereby, we ensure that the most relevant information from the IR image is included in the fused image, leading to a more accurate representation of the captured scene. Moreover, we propose the use of a novel, hybrid fusion scheme which combines both pixel- and region-level information to guide the fusion process. This makes the fusion process more robust against possible segmentation errors, which represent a common source of problems in region-level image fusion. The combination of these techniques leads to a novel fusion framework which is able to improve the fusion results of its pure pixel-level counterpart without target extraction. Additionally, traditional pixel-level fusion approaches, based on state-of-the-art transforms such as the Nonsubsampled Contourlet Transform and the Dual-Tree Complex Wavelet Transform, are significantly outperformed by the proposed set of methods.
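
To illustrate the flavour of the target-extraction and hybrid pixel/region idea, the sketch below segments bright IR regions with a simple mean-plus-k-sigma threshold, discards small spurious blobs, and forces the fused output to the IR values inside those regions; the paper's segmentation algorithm and its wavelet-domain hybrid rule are considerably more elaborate, so this is only a crude stand-in with made-up parameter values.

```python
import numpy as np
from scipy import ndimage

def ir_target_mask(ir, k=2.5, min_area=25):
    # Candidate targets: pixels brighter than mean + k * std.
    mask = ir > ir.mean() + k * ir.std()
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return np.isin(labels, keep)

def hybrid_fuse(ir, vis, mask):
    # Pixel-level rule (keep the larger deviation from each image's mean)
    # everywhere, overridden by the IR values inside detected target regions.
    fused = np.where(np.abs(ir - ir.mean()) >= np.abs(vis - vis.mean()), ir, vis)
    fused[mask] = ir[mask]
    return fused
```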


international conference on image processing | 2002

Stereo image coding using multiscale recurrent patterns

M.H.V. Duarte; M.B. Carvalho; E.A.B. da Silva; Carla L. Pagliari; G.V. Mendonca

A stereo image coding method using multiscale recurrent patterns is proposed. The method relies on the matching of recurrent patterns. The input image is segmented into variable-sized blocks, and each segment is coded by making use of contractions, expansions and displacements of elements of a dictionary. The segmentation is ruled by a rate-distortion criterion, and the dictionary is updated with the concatenation of previously coded elements. The novelty of this work is the absence of the disparity map in the stereo image coding process. That is, the burden of its calculation, the pre- and post-processing phases, the generation of the error images, as well as their coding and transmission, are not necessary. In brief, the proposed method cuts out the whole disparity estimation process while still presenting high-quality results at the decoder end.

Collaboration


Dive into Carla L. Pagliari's collaborations.

Top Co-Authors

Eduardo A. B. da Silva (Federal University of Rio de Janeiro)
Nuno M. M. Rodrigues (Instituto Politécnico Nacional)
Sérgio M. M. de Faria (Instituto Politécnico Nacional)
Luis F. R. Lucas (Federal University of Rio de Janeiro)
E.A.B. da Silva (Instituto Militar de Engenharia)
Jonathan N. Gois (Federal University of Rio de Janeiro)
Murilo B. de Carvalho (Federal Fluminense University)