Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pedro Garcia Freitas is active.

Publication


Featured research published by Pedro Garcia Freitas.


Quality of Multimedia Experience | 2016

No-reference image quality assessment based on statistics of Local Ternary Pattern

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias

In this paper, we propose a new no-reference image quality assessment (NR-IQA) method that uses a machine learning technique based on Local Ternary Pattern (LTP) descriptors. LTP descriptors are a generalization of Local Binary Pattern (LBP) texture descriptors that provide a significant performance improvement over LBP. More specifically, LTP is less susceptible to noise in uniform regions, but it is no longer strictly invariant to gray-level transformations. Due to this insensitivity to noise, LTP descriptors alone are unable to detect milder image degradations. To tackle this issue, we propose a strategy that uses multiple LTP channels to extract texture information. The prediction algorithm uses the histograms of these LTP channels as features for the training procedure. The proposed method is able to blindly predict image quality, i.e., the method is no-reference (NR). Results show that the proposed method is considerably faster than other state-of-the-art no-reference methods, while maintaining competitive image quality prediction accuracy.
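To make the feature-extraction idea concrete, here is a minimal sketch of LTP histogram computation in Python, assuming a grayscale image as a 2-D NumPy array. The threshold value `t` and the single 3x3 neighborhood are illustrative choices, not the paper's exact multi-channel configuration.

```python
# Minimal LTP sketch: each pixel's 8 neighbors are coded +1/0/-1
# against the center (with threshold t), then split into "upper"
# (positive) and "lower" (negative) binary patterns.
import numpy as np

def ltp_histograms(img, t=5):
    """Return concatenated histograms of the upper and lower LTP codes."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbors in a 3x3 window, clockwise.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        # Ternary code: +1 if neighbor > center + t, -1 if < center - t.
        upper |= ((nb > center + t).astype(np.int32) << bit)
        lower |= ((nb < center - t).astype(np.int32) << bit)
    h_up, _ = np.histogram(upper, bins=256, range=(0, 256), density=True)
    h_lo, _ = np.histogram(lower, bins=256, range=(0, 256), density=True)
    return np.concatenate([h_up, h_lo])

# In the paper, histograms like these are the features fed to a
# trained machine-learning regressor that predicts quality.
features = ltp_histograms(np.random.randint(0, 256, (64, 64)))
```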


Brazilian Symposium on Computer Graphics and Image Processing | 2011

Fast Inverse Halftoning Algorithm for Ordered Dithered Images

Pedro Garcia Freitas; Mylène C. Q. Farias; Aletéia Patrícia Favacho de Araújo

In this paper, we present a simple and fast inverse halftoning algorithm, targeted at reconstructing halftoned images generated using dispersed-dot ordered dithering algorithms. The proposed algorithm uses a simple set of linear filters combined with a stochastic model to predict the best intensity values for the binary image pixels. The algorithm produces images with better perceptual quality than the available algorithms in the literature, preserving most of the fine details of the original gray-level image. It has high performance, which can be further improved with the use of parallelization techniques.
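The core intuition behind linear-filter inverse halftoning is that low-pass filtering a binary halftone recovers a smooth gray-level estimate. The sketch below shows only that baseline step; the paper combines such filters with a stochastic model, which is not reproduced here, and the sigma value is an illustrative choice.

```python
# Naive inverse halftoning sketch: smooth the 0/1 dot pattern into
# a continuous-tone estimate with a Gaussian low-pass filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_inverse_halftone(halftone, sigma=1.5):
    """Estimate a gray-level image from a 0/1 halftone by smoothing."""
    gray = gaussian_filter(halftone.astype(np.float64), sigma=sigma)
    return np.clip(gray * 255.0, 0, 255).astype(np.uint8)

# A random binary image stands in for an ordered-dither halftone here.
halftone = (np.random.rand(64, 64) > 0.5).astype(np.uint8)
reconstructed = naive_inverse_halftone(halftone)
```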


Information Sciences | 2016

Detecting tampering in audio-visual content using QIM watermarking

Ronaldo Rigoni; Pedro Garcia Freitas; Mylène C. Q. Farias

Highlights: We present an approach to protect images and videos against digital forgery. We propose a technique to restore lost or tampered information in visual signals. The restoration is done using a proposed inverse halftoning technique. The halftone of the signal is embedded in the signal itself using a QIM modification.

This paper presents a framework for detecting tampered information in digital audio-visual content. The proposed framework uses a combination of temporal and spatial watermarks that do not decrease the quality of host videos. A modified version of the Quantization Index Modulation (QIM) algorithm is used to embed watermarks. The fragility of the QIM watermarking algorithm makes it possible to detect local, global, and temporal tampering attacks with pixel granularity. The technique is also able to identify the type of tampering attack. The framework is fast, robust, and accurate.
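QIM embeds a bit by snapping a pixel (or coefficient) value onto one of two interleaved quantization lattices; the lattice a received value is nearest to reveals the bit, and tampering that moves values off their lattice exposes itself. Below is a minimal sketch of textbook QIM; the step size `delta` is an illustrative choice, and the paper's modified QIM variant is not reproduced.

```python
# Textbook QIM sketch: bit b selects a lattice offset by b * delta/2.
import numpy as np

def qim_embed(values, bits, delta=16):
    """Embed one bit per value by quantizing onto a bit-dependent lattice."""
    offset = bits * (delta / 2.0)
    return delta * np.round((values - offset) / delta) + offset

def qim_extract(values, delta=16):
    """Recover bits by testing which lattice each value is closest to."""
    d0 = np.abs(values - qim_embed(values, np.zeros_like(values), delta))
    d1 = np.abs(values - qim_embed(values, np.ones_like(values), delta))
    return (d1 < d0).astype(np.uint8)

bits = np.random.randint(0, 2, 100)
pixels = np.random.uniform(0, 255, 100)
watermarked = qim_embed(pixels, bits, delta=16)
assert np.array_equal(qim_extract(watermarked, delta=16), bits)
```

The fragility the abstract mentions follows directly: any modification larger than delta/4 can push a value closer to the wrong lattice, so extracted bits disagree with the expected watermark exactly where tampering occurred.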


Quality of Multimedia Experience | 2015

Video quality ruler: A new experimental methodology for assessing video quality

Pedro Garcia Freitas; Judith Redi; Mylène C. Q. Farias; Alexandre F. Silva

In this paper, we propose a subjective video quality assessment method called the video quality ruler (VQR), which can be employed to determine the perceived quality of video sequences. The method is an extension of ISO 20462, a standard for assessing image quality. The VQR method provides an interface with a set of reference pictures. Subjects assess a video by comparing its perceived quality against these pictures, which serve as a quality scale. The pictures are calibrated to form a numerical scale in units of just noticeable differences (JNDs), which makes it possible to analyze and compare both video and image stimuli. To evaluate the effectiveness of the proposed method, we compare the VQR method with a widely used single stimulus (SS) method. The results show that the proposed method can quantify overall video quality with higher efficiency and less biased results than the SS method.


Journal of the Brazilian Computer Society | 2016

Secure self-recovery watermarking scheme for error concealment and tampering detection

Pedro Garcia Freitas; Ronaldo Rigoni; Mylène C. Q. Farias

Background: In this paper, we present a method for protecting and restoring lost or tampered information in images or videos using secure watermarks. The proposed method consists of a combination of techniques that are able to detect image and video manipulations. Contrary to most existing watermarking schemes, the method can identify the exact position of the tampered region. Furthermore, the method is capable of restoring the manipulated information and retrieving the original content. This set of capabilities makes it possible to use the proposed method in error concealment and digital tampering applications.

Methods: The proposed method is employed as both an error concealment algorithm and a tampering detection algorithm. It is divided into two stages. At the encoder side, the method generates a binary version (watermark) of the original picture (image or video frame) using a halftoning technique. Then, a quantization index modulation technique is used to embed this watermark into the protected picture. At the decoder side, after the lost or tampered regions are identified, the original content is recovered by extracting the watermark corresponding to the affected areas. An inverse halftoning algorithm is used to convert the dithered version of the picture into a good-quality multi-level approximation of the original content.

Results: First, we test the method in error concealment applications, using a set of still images and H.264 videos. Then, we test the proposed method in tampering detection and content retrieval applications, again considering both images and videos. We compare the proposed method with several state-of-the-art algorithms. The results show that the proposed method is fast, robust, and accurate.

Conclusions: Our results show that we can use a single approach to tackle both error concealment and tampering detection problems. The proposed method provides high levels of security, high detection accuracy, and recovery capability, and it is robust to several types of attacks.
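To illustrate how tampered regions can be localized at the decoder, here is a minimal block-wise sketch. It assumes the expected watermark bits for each block are available for comparison (in the paper they come from the embedded halftone; here they are simply passed in), and the block size and mismatch threshold are illustrative choices.

```python
# Block-wise tamper localization sketch: flag blocks whose extracted
# watermark bits disagree with the expected bits beyond a threshold.
import numpy as np

def flag_tampered_blocks(extracted, expected, block=8, thresh=0.25):
    """Mark blocks whose extracted bits disagree with the expected bits."""
    h, w = extracted.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            # Fraction of mismatching watermark bits inside this block.
            err = np.mean(extracted[sl] != expected[sl])
            flags[by, bx] = err > thresh
    return flags

expected = np.random.randint(0, 2, (64, 64))
extracted = expected.copy()
extracted[16:24, 16:24] ^= 1                       # simulate a tampered region
print(flag_tampered_blocks(extracted, expected))   # True at block (2, 2)
```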


Brazilian Conference on Intelligent Systems | 2016

No-Reference Image Quality Assessment Using Texture Information Banks

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias

In this paper, we propose a new no-reference quality assessment method that uses a machine learning technique based on texture analysis. The proposed method compares test images with texture images from a public database. Local Binary Patterns (LBPs) are used as local texture feature descriptors. Using a Csiszár-Morimoto divergence measure, the histograms of the LBPs of the test images are compared with the histograms of the LBPs of the database texture images, generating a set of difference measures. These difference measures are used to blindly predict the quality of an image. Experimental results show that the proposed method is fast and has good quality prediction power, outperforming other no-reference image quality assessment methods.
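As a concrete illustration, the sketch below computes a basic LBP histogram and compares two histograms with the chi-square divergence, which is one member of the Csiszár f-divergence family; the paper's exact divergence and its bank of texture images are not reproduced here.

```python
# LBP histogram + f-divergence sketch.
import numpy as np

def lbp_histogram(img):
    """Histogram of basic 3x3 LBP codes for a grayscale image."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        codes |= ((nb >= center).astype(np.int32) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist

def chi_square_divergence(p, q, eps=1e-12):
    """Chi-square divergence, an instance of a Csiszár f-divergence."""
    return np.sum((p - q) ** 2 / (p + q + eps))

# Divergences of a test image against a bank of texture images would
# form the feature vector used to blindly predict quality.
a = lbp_histogram(np.random.randint(0, 256, (64, 64)))
b = lbp_histogram(np.random.randint(0, 256, (64, 64)))
print(chi_square_divergence(a, b))
```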


Signal Processing: Image Communication | 2016

Enhancing inverse halftoning via coupled dictionary training

Pedro Garcia Freitas; Mylène C. Q. Farias; Aletéia Patrícia Favacho de Araújo

Inverse halftoning is a challenging problem in image processing. Traditionally, this operation is known to introduce visible distortions into reconstructed images. This paper presents a learning-based method that performs a quality enhancement procedure on images reconstructed using inverse halftoning algorithms. The proposed method is implemented using a coupled dictionary learning algorithm, which is based on a patchwise sparse representation. Specifically, the training is performed using image pairs composed of images restored using an inverse halftoning algorithm and their corresponding originals. The learning model, which is based on a sparse representation of these images, is used to construct two dictionaries. One of these dictionaries represents the original images and the other represents the distorted images. Using these dictionaries, the method generates images with fewer distortions than those produced by regular inverse halftoning algorithms. Experimental results show that images generated by the proposed method have high quality, with fewer chromatic aberrations, blur, and white noise distortions.

Highlights: This paper presents an enhancement method based on coupled dictionary learning. The learning model is based on a sparse representation used to construct the dictionaries. The method diminishes distortions produced by inverse halftoning algorithms.
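The coupled-dictionary idea can be sketched by learning one joint dictionary over stacked (restored, original) patch pairs and then splitting it into two dictionaries that share sparse codes. The patch size, dictionary size, and use of scikit-learn below are illustrative assumptions, not the paper's implementation.

```python
# Coupled dictionary training sketch over paired patches.
import numpy as np
from sklearn.decomposition import DictionaryLearning

patch_dim, n_pairs = 64, 500                     # 8x8 patches, flattened
restored = np.random.rand(n_pairs, patch_dim)    # inverse-halftoned patches
original = np.random.rand(n_pairs, patch_dim)    # corresponding originals

# Joint training: each sample stacks a restored patch with its original,
# so both halves of the dictionary are forced to share one sparse code.
joint = np.hstack([restored, original])
model = DictionaryLearning(n_components=128, alpha=1.0, max_iter=10)
model.fit(joint)

D = model.components_                            # shape (128, 2 * patch_dim)
D_restored, D_original = D[:, :patch_dim], D[:, patch_dim:]
# At test time: sparse-code a restored patch against D_restored, then
# multiply the code by D_original to synthesize the enhanced patch.
```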


Signal Processing: Image Communication | 2016

Hiding color watermarks in halftone images using maximum-similarity binary patterns

Pedro Garcia Freitas; Mylène C. Q. Farias; Aletéia Patrícia Favacho de Araújo

This paper presents a halftoning-based watermarking method that enables the embedding of a color image into binary black-and-white images. To maintain the quality of halftone images, the method maps watermarks to halftone channels using homogeneous dot patterns. These patterns use different binary texture arrangements to embed the watermark. To prevent degradation of the host image, a maximization problem is solved to reduce the associated noise. The objective function of this maximization problem is the binary similarity measure between the original binary halftone and a set of randomly generated patterns. This optimization problem needs to be solved for each dot pattern, resulting in processing overhead and a long running time. To overcome this restriction, parallel computing techniques are used to decrease the processing time. More specifically, the method is tested using a CUDA-based parallel implementation running on GPUs. The proposed technique produces results with high visual quality and acceptable processing time.
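The maximization step can be sketched as a simple search: among the candidate binary patterns that encode a given watermark symbol, keep the one most similar to the original halftone block, so the substitution adds as little visible noise as possible. The block size and number of candidates below are illustrative choices.

```python
# Maximum-similarity pattern search sketch.
import numpy as np

rng = np.random.default_rng(0)

def best_pattern(block, candidates):
    """Pick the candidate pattern with the most matching pixels."""
    similarities = [(c == block).mean() for c in candidates]
    return candidates[int(np.argmax(similarities))]

block = rng.integers(0, 2, (8, 8))               # original halftone block
candidates = [rng.integers(0, 2, (8, 8)) for _ in range(64)]
replacement = best_pattern(block, candidates)
# Embedding substitutes `block` with `replacement`; since each block is
# searched independently, the loop parallelizes well (the paper uses a
# CUDA implementation on GPUs for exactly this reason).
```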


Brazilian Symposium on Computer Graphics and Image Processing | 2012

Error Concealment Using a Halftone Watermarking Technique

Pedro Garcia Freitas; Ronaldo Rigoni; Mylène C. Q. Farias; Aletéia Patrícia Favacho de Araújo

In this paper, we propose an error concealment technique for H.264 coded videos. The algorithm is targeted at compressed videos degraded by packet losses caused by transmission over an unreliable channel. We use a combination of watermarking and halftoning (dithering) techniques. At the encoder side, a dithered version of each video frame is embedded into the video using a watermarking technique. The watermarking technique used by the proposed algorithm is a modified version of the Quantization Index Modulation (QIM) algorithm, which provides good data hiding capacity. At the decoder side, the algorithm identifies which packets of the video were lost, extracts the corresponding mark, and applies an inverse halftoning technique to estimate the original content. The algorithm is fast and performs well, restoring content with better quality than the default H.264 error concealment algorithm.


Signal Processing: Image Communication | 2019

A framework for computationally efficient video quality assessment

Welington Y. L. Akamine; Pedro Garcia Freitas; Mylène C. Q. Farias

Objective video quality assessment (VQA) methods are essentially algorithms that estimate video quality. Recent quality assessment methods aim to provide quality predictions that are well correlated with subjective quality scores. However, most of these methods are computationally costly, which limits their use in real-time applications. A possible solution to this problem is to decrease the video resolution (spatial, temporal, or both) in order to reduce the amount of processed data. Although reducing the video resolution is a simple way of decreasing the running time of a VQA method, this approach might impact its prediction accuracy. In this paper, we analyze this impact. More specifically, we analyze the effects of resolution reduction on the performance of VQA methods. Based on this analysis, we propose a framework that decreases the overall processing time of VQA methods without significantly decreasing their prediction accuracy. We test the framework using six different VQA methods and four different video quality databases. Results show that the proposed framework reduces the average runtime of the tested VQA methods without considerably altering their prediction accuracy.
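The framework's core idea can be sketched as a thin wrapper that downscales frames before running any frame-based metric. In the sketch below, `vqa_method` is a hypothetical stand-in (a simple PSNR-like measure, not one of the six methods tested in the paper), and the 0.5 scale factor is an illustrative choice.

```python
# Resolution-reduction wrapper sketch for a frame-based VQA metric.
import numpy as np
import cv2

def vqa_method(ref_frame, dist_frame):
    """Hypothetical per-frame metric (here: a simple PSNR stand-in)."""
    mse = np.mean((ref_frame.astype(np.float64) - dist_frame) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

def fast_vqa(ref_frames, dist_frames, scale=0.5):
    """Run the metric on downscaled frames to cut processing time."""
    scores = []
    for ref, dist in zip(ref_frames, dist_frames):
        small_ref = cv2.resize(ref, None, fx=scale, fy=scale)
        small_dist = cv2.resize(dist, None, fx=scale, fy=scale)
        scores.append(vqa_method(small_ref, small_dist))
    return float(np.mean(scores))

# Example with synthetic grayscale frames.
ref = [np.random.randint(0, 256, (240, 320), dtype=np.uint8) for _ in range(5)]
dist = [np.clip(f.astype(int) + np.random.randint(-10, 10, f.shape),
                0, 255).astype(np.uint8) for f in ref]
print(fast_vqa(ref, dist))
```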

Collaboration


Dive into Pedro Garcia Freitas's collaborations.

Top Co-Authors

Judith Redi

Delft University of Technology
