Publication


Featured research published by Welington Y. L. Akamine.


Quality of Multimedia Experience | 2016

No-reference image quality assessment based on statistics of Local Ternary Pattern

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias

In this paper, we propose a new no-reference image quality assessment (NR-IQA) method that uses a machine learning technique based on Local Ternary Pattern (LTP) descriptors. LTP descriptors are a generalization of Local Binary Pattern (LBP) texture descriptors that provide a significant performance improvement over LBP. More specifically, LTP is less susceptible to noise in uniform regions, but no longer rigidly invariant to gray-level transformations. Because of this insensitivity to noise, however, LTP descriptors cannot detect milder image degradations. To tackle this issue, we propose a strategy that uses multiple LTP channels to extract texture information. The prediction algorithm uses the histograms of these LTP channels as features for the training procedure. The proposed method is able to blindly predict image quality, i.e., the method is no-reference (NR). Results show that the proposed method is considerably faster than other state-of-the-art no-reference methods, while maintaining competitive image quality prediction accuracy.
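
For illustration, a minimal sketch of LTP feature extraction follows, assuming a grayscale image as a 2-D numpy array; the threshold value, neighborhood ordering, and histogram size are illustrative choices, not the paper's exact configuration.

```python
# Minimal LTP sketch: each neighbor is coded +1 if it exceeds the center
# by at least t, -1 if it falls below the center by at least t, else 0.
# The ternary code is split into "upper" (+1) and "lower" (-1) binary
# patterns, whose histograms serve as the feature vector.
import numpy as np

def ltp_histograms(img, t=5):
    """Return concatenated upper/lower LTP histograms of a grayscale image."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    h, w = img.shape
    # 8-neighborhood offsets (dy, dx), ordered clockwise.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (neigh >= center + t).astype(np.int32) << bit
        lower |= (neigh <= center - t).astype(np.int32) << bit
    h_up, _ = np.histogram(upper, bins=256, range=(0, 256), density=True)
    h_lo, _ = np.histogram(lower, bins=256, range=(0, 256), density=True)
    return np.concatenate([h_up, h_lo])  # feature vector for a regressor

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(ltp_histograms(img).shape)  # (512,)
```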


Journal of Electronic Imaging | 2014

Video quality assessment using visual attention computational models

Welington Y. L. Akamine; Mylène C. Q. Farias

A recent development in the area of image and video quality consists of trying to incorporate aspects of visual attention into the design of visual quality metrics, mostly under the assumption that visual distortions appearing in less salient areas may be less visible and, therefore, less annoying. This research area is still in its infancy and results obtained by different groups are not yet conclusive. Among the works that have reported some improvements, most use subjective saliency maps, i.e., saliency maps generated from eye-tracking data obtained experimentally. Other works address the image quality problem and do not focus on how to incorporate visual attention into video quality assessment. We investigate the benefits of incorporating bottom-up video saliency maps (obtained using Itti's computational model) into video quality metrics. In particular, we compare the performance of four full-reference video quality metrics with their modified versions, which have saliency maps incorporated into the algorithm. Results show that the addition of video saliency maps improves the performance of most of the quality metrics tested, with the highest gains obtained for the metrics that consider only spatial degradations.
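
As a rough illustration of the general idea, the sketch below weights a per-pixel distortion map by a saliency map before pooling; a plain MSE stands in for the four full-reference metrics used in the paper, and `saliency` is assumed to be a precomputed map (e.g., from Itti's model) normalized to [0, 1].

```python
# Saliency-weighted pooling sketch: errors in salient regions contribute
# more to the final score than errors in non-salient regions.
import numpy as np

def saliency_weighted_mse(ref, dist, saliency, eps=1e-8):
    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    w = saliency / (saliency.sum() + eps)  # normalize weights to sum to 1
    return float((w * err).sum())

ref = np.random.rand(48, 64)
dist = np.clip(ref + 0.05 * np.random.randn(48, 64), 0, 1)
sal = np.random.rand(48, 64)  # stand-in for a computational saliency map
print(saliency_weighted_mse(ref, dist, sal))
```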


Proceedings of SPIE | 2014

Incorporating visual attention models into video quality metrics

Welington Y. L. Akamine; Mylène C. Q. Farias

A recent development in the area of image and video quality consists of trying to incorporate aspects of visual attention into the design of visual quality metrics, mostly under the assumption that visual distortions appearing in less salient areas may be less visible and, therefore, less annoying. This research area is still in its infancy and results obtained by different groups are not yet conclusive. Among the works that have reported some improvement, most use subjective saliency maps, i.e., saliency maps generated from eye-tracking data obtained experimentally. Besides, most works address the image quality problem and do not focus on how to incorporate visual attention into video quality assessment. In this work, we investigate the benefits of incorporating saliency maps obtained with visual attention models into video quality metrics. In particular, we compare the performance of four full-reference video quality metrics with their modified versions, which have saliency maps incorporated into the algorithm. For comparison purposes, we used a database of subjective saliency maps.


Brazilian Conference on Intelligent Systems | 2016

No-Reference Image Quality Assessment Using Texture Information Banks

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias

In this paper, we propose a new no-reference quality assessment method which uses a machine learning technique based on texture analysis. The proposed method compares test images with texture images of a public database. Local Binary Patterns (LBPs) are used as local texture feature descriptors. With a Csiszár-Morimoto divergence measure, the histograms of the LBPs of the test images are compared with the histograms of the LBPs of the database texture images, generating a set of difference measures. These difference measures are used to blindly predict the quality of an image. Experimental results show that the proposed method is fast and has a good quality prediction power, outperforming other no-reference image quality assessment methods.
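
A minimal sketch of the comparison step follows; the Kullback-Leibler divergence used below is one member of the Csiszár-Morimoto family, not necessarily the paper's choice, and `bank_hists` is a stand-in for LBP histograms precomputed from a public texture database.

```python
# One divergence value per bank texture, yielding a difference-measure
# feature vector that a regressor can map to a quality score.
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    # KL divergence, an instance of a Csiszár-Morimoto f-divergence.
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

def difference_features(test_hist, bank_hists):
    """Compare a test LBP histogram against every bank histogram."""
    return np.array([kl_divergence(test_hist, h) for h in bank_hists])

test_hist = np.random.dirichlet(np.ones(256))       # stand-in LBP histogram
bank_hists = np.random.dirichlet(np.ones(256), 40)  # stand-in texture bank
print(difference_features(test_hist, bank_hists).shape)  # (40,)
```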


Signal Processing: Image Communication | 2019

A framework for computationally efficient video quality assessment

Welington Y. L. Akamine; Pedro Garcia Freitas; Mylène C. Q. Farias

Objective video quality assessment (VQA) methods are essentially algorithms that estimate video quality. Recent quality assessment methods aim to provide quality predictions that are well correlated with subjective quality scores. However, most of these methods are computationally costly, which limits their use in real-time applications. A possible solution to this problem is to decrease the video resolution (spatial, temporal, or both) in order to reduce the amount of processed data. Although reducing the video resolution is a simple way of decreasing the running time of a VQA method, it might hurt the method's prediction accuracy. In this paper, we analyze the effects of resolution reduction on the performance of VQA methods. Based on this analysis, we propose a framework that decreases the overall processing time of VQA methods without significantly decreasing their prediction accuracy. We test the framework using six different VQA methods and four different video quality databases. Results show that the proposed framework reduces the average runtime of the tested VQA methods without considerably altering their prediction accuracy.
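
A minimal sketch of the framework's core idea follows, with per-frame PSNR standing in for an expensive VQA method; the block-averaging downscaler, scale factor, and frame stride are illustrative assumptions.

```python
# Reduce spatial resolution (downscale) and temporal resolution (frame
# stride) before running the base quality metric, trading accuracy for speed.
import numpy as np

def psnr(ref, dist, peak=1.0):
    mse = np.mean((ref - dist) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

def downscale(frame, factor=2):
    # Simple block averaging; a real system would use a proper resampler.
    h = (frame.shape[0] // factor) * factor
    w = (frame.shape[1] // factor) * factor
    return frame[:h, :w].reshape(h // factor, factor,
                                 w // factor, factor).mean(axis=(1, 3))

def fast_vqa(ref_video, dist_video, factor=2, stride=2):
    scores = [psnr(downscale(r, factor), downscale(d, factor))
              for r, d in zip(ref_video[::stride], dist_video[::stride])]
    return float(np.mean(scores))

ref = np.random.rand(30, 64, 96)   # (frames, height, width)
dist = np.clip(ref + 0.02 * np.random.randn(30, 64, 96), 0, 1)
print(fast_vqa(ref, dist))
```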


Signal Processing: Image Communication | 2018

Using multiple spatio-temporal features to estimate video quality

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias

In this paper, we propose a new video quality metric based on a set of multiple features that incorporate texture, saliency, spatial activity, and temporal attributes. A random forest regression algorithm is used to combine these features and obtain a video quality score. Experimental results show that the proposed metric performs well when tested on several benchmark video quality databases, outperforming current state-of-the-art full-reference video quality metrics.
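
A minimal sketch of the regression step follows, using scikit-learn's RandomForestRegressor; feature extraction is omitted, and the random arrays below are stand-ins that only illustrate the API.

```python
# A random forest maps per-video feature vectors (texture, saliency,
# spatial activity, temporal attributes) to subjective quality scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.random((120, 64))   # stand-in spatio-temporal feature vectors
y_train = rng.random(120) * 100   # stand-in subjective scores (e.g., MOS)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

X_test = rng.random((5, 64))
print(model.predict(X_test))      # predicted quality scores
```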


Journal of the Brazilian Computer Society | 2018

Referenceless image quality assessment by saliency, color-texture energy, and gradient boosting machines

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias

In most practical multimedia applications, processes are used to manipulate the image content. These processes include compression, transmission, or restoration techniques, which often create distortions that may be visible to human subjects. The design of algorithms that can estimate the visual similarity between a distorted image and its non-distorted version, as perceived by a human viewer, can lead to significant improvements in these processes. Therefore, over the last decades, researchers have been developing quality metrics (i.e., algorithms) that estimate the quality of images in multimedia applications. These metrics can make use of either the full pristine content (full-reference metrics) or only of the distorted image (referenceless metrics). This paper introduces a novel referenceless image quality assessment (RIQA) metric, which provides significant improvements over other state-of-the-art methods. The proposed method combines statistics of the opposite color local variance pattern (OC-LVP) descriptor with statistics of the opposite color local salient pattern (OC-LSP) descriptor. Both OC-LVP and OC-LSP descriptors, which are proposed in this paper, are extensions of the opposite color local binary pattern (OC-LBP) operator. Statistics of these operators generate features that are mapped into subjective quality scores using a machine-learning approach. Specifically, to fit a predictive model, the features are used as input to a gradient boosting machine (GBM). Results show that the proposed method is robust and accurate, outperforming other state-of-the-art RIQA methods.
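
A minimal sketch of the feature-to-score mapping follows, with scikit-learn's GradientBoostingRegressor standing in for the paper's GBM and random vectors replacing the OC-LVP/OC-LSP statistics; the Spearman correlation check reflects how IQA methods are commonly evaluated.

```python
# Fit a GBM on descriptor statistics and check rank correlation (SROCC)
# between predicted and held-out "subjective" scores.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((300, 128))  # stand-in OC-LVP + OC-LSP feature statistics
y = X[:, :4].sum(axis=1) + 0.1 * rng.standard_normal(300)  # toy scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
rho, _ = spearmanr(gbm.predict(X_te), y_te)
print(f"SROCC: {rho:.3f}")
```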


Electronics Letters | 2012

On performance of image quality metrics enhanced with visual attention computational models

Mylène C. Q. Farias; Welington Y. L. Akamine


Journal of Imaging Science and Technology | 2016

Blind Image Quality Assessment Using Multiscale Local Binary Patterns

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias


Brazilian Conference on Intelligent Systems | 2017

Blind Image Quality Assessment Using Local Variant Patterns

Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias
