
Publication


Featured research published by Felix C. A. Fernandes.


Workshop on Mobile Computing Systems and Applications | 2013

How is energy consumed in smartphone display applications?

Xiang Chen; Yiran Chen; Zhan Ma; Felix C. A. Fernandes

Smartphones have emerged as a popular and frequently used platform for the consumption of multimedia. New display technologies, such as AMOLED, have recently been introduced to smartphones to fulfill the requirements of these multimedia applications. However, because an AMOLED screen's power consumption is determined by the display content, such applications are often limited by the battery life of the device they are running on, inspiring much research into new power-management schemes. In this work, we evaluate the power consumption of several applications on a series of Samsung smartphones and take a close look at the AMOLED display's power consumption and its relative contribution in multimedia apps. We improve AMOLED power analysis by considering dynamic display factors, and we analyze the individual factors affecting power consumption when streaming video, playing a video game, and recording video with a device's built-in camera. Our detailed measurements refine the power analysis of smartphones and reveal some interesting perspectives on the power consumption of AMOLED displays in multimedia applications.
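
A common way to capture the content dependence of OLED power is a per-pixel linear model. The sketch below is a minimal illustration of that idea; the channel weights W_R, W_G, W_B and the static term are hypothetical placeholders, not measurements from this paper.

```python
import numpy as np

# Hypothetical per-channel power weights (Watts per unit of summed channel value)
# and a static panel overhead; real values must be measured per device.
W_R, W_G, W_B = 2.0e-7, 1.5e-7, 3.5e-7   # blue sub-pixels are typically least efficient
P_STATIC = 0.05                           # non-pixel panel electronics, in Watts

def amoled_frame_power(frame_rgb: np.ndarray) -> float:
    """Estimate AMOLED panel power for one frame.

    frame_rgb: H x W x 3 array with values in [0, 255].
    Uses the linear model P = w_r*sum(R) + w_g*sum(G) + w_b*sum(B) + P_static.
    """
    r = frame_rgb[..., 0].astype(np.float64).sum()
    g = frame_rgb[..., 1].astype(np.float64).sum()
    b = frame_rgb[..., 2].astype(np.float64).sum()
    return W_R * r + W_G * g + W_B * b + P_STATIC

# Example: a dark UI frame draws far less than an all-white frame of the same size.
h, w = 1280, 720
dark  = np.full((h, w, 3),  30, dtype=np.uint8)
white = np.full((h, w, 3), 255, dtype=np.uint8)
print(amoled_frame_power(dark), amoled_frame_power(white))
```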


ACM Multimedia | 2012

GreenTube: power optimization for mobile video streaming via dynamic cache management

Xin Li; Mian Dong; Zhan Ma; Felix C. A. Fernandes

Mobile video streaming has become one of the most popular applications amid the boom in smartphones and the prevalence of 3G/4G networks, i.e., HSPA, HSPA+, and LTE. However, the prohibitively high power consumption of 3G/4G radios in smartphones reduces battery life significantly and thus severely hurts the user experience. To tackle this challenge, we designed GreenTube, a system that optimizes power consumption for mobile video streaming by judiciously scheduling downloading activities to minimize unnecessary active periods of the 3G/4G radio. GreenTube achieves this by dynamically managing the download cache based on user viewing history and network conditions. We implemented GreenTube on Android-based smartphones. Experimental results show that GreenTube achieves large power reductions of more than 70% (on the 3G/4G radio) and 40% (for the whole system). We believe GreenTube is a desirable upgrade to the Android system, especially in light of the increasing popularity of LTE.
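
The core idea, batching downloads so the 3G/4G radio can sleep and sizing the batch by how long the user is predicted to keep watching, can be sketched as below. The cache watermark and the history-based predictor are illustrative placeholders, not the actual GreenTube policy.

```python
from dataclasses import dataclass

@dataclass
class RadioState:
    active: bool = False

def predicted_watch_seconds(viewing_history: list) -> float:
    """Rough predictor: assume the user watches about as long as the average of
    recent sessions (a stand-in for GreenTube's user-history model)."""
    return sum(viewing_history) / len(viewing_history) if viewing_history else 60.0

def schedule_download(buffer_s: float, history: list, radio: RadioState,
                      low_wm: float = 10.0):
    """Decide whether to (re)activate the radio and how much video to fetch.

    buffer_s: seconds of video currently cached. The radio stays idle until the
    cache drains to a low watermark, then fetches a large burst sized by the
    predicted remaining watch time, enabling long radio-idle periods.
    Returns (download_now, seconds_to_fetch).
    """
    if buffer_s > low_wm:
        radio.active = False
        return False, 0.0
    burst = max(30.0, predicted_watch_seconds(history) - buffer_s)
    radio.active = True
    return True, burst

# Example: with 5 s left in the cache, fetch a burst sized by viewing history.
radio = RadioState()
print(schedule_download(5.0, [120.0, 300.0, 90.0], radio))
```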


IEEE Transactions on Image Processing | 2013

DCT/DST-Based Transform Coding for Intra Prediction in Image/Video Coding

Ankur Saxena; Felix C. A. Fernandes

In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that applies transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is also the case for the other, oblique modes. The choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, chiefly to reduce implementation complexity, which remove the mode dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm are provided using the reference software of the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
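
The mode-dependent rule can be sketched as a mapping from intra-prediction direction to the (horizontal, vertical) transform pair: use the DST along any direction in which the residual grows away from the reference samples. The mapping below is a simplified illustration of that idea, not the exact table adopted in HEVC.

```python
def choose_transforms(intra_mode: str):
    """Return (horizontal_transform, vertical_transform) for an intra mode.

    Illustrative rule only: prediction from the row above leaves a residual that
    grows down the columns, so DST-VII is used vertically; prediction from the
    left column grows along the rows, so DST-VII is used horizontally; DC-like
    modes keep the conventional DCT-II. (Simplified; not the adopted HEVC table.)
    """
    if intra_mode == "vertical":              # predicted from the row above
        return ("DCT-II", "DST-VII")
    if intra_mode == "horizontal":            # predicted from the column to the left
        return ("DST-VII", "DCT-II")
    if intra_mode == "oblique_from_top_left": # residual grows in both directions
        return ("DST-VII", "DST-VII")
    return ("DCT-II", "DCT-II")               # e.g. DC mode

for mode in ("vertical", "horizontal", "oblique_from_top_left", "dc"):
    print(mode, "->", choose_transforms(mode))
```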


International Conference on Image Processing | 2011

Mode dependent DCT/DST for intra prediction in block-based image/video coding

Ankur Saxena; Felix C. A. Fernandes

In this paper, we present a mode-dependent transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that applies transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here we prove that this is also the case for the other, oblique modes. The choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. Simulations are conducted for the DCT/DST algorithm in TMuC 0.9, the reference software for the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the DCT for intra prediction in video sequences.
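
Because the transforms are applied separably, the 2-D forward transform of a residual block is just a column transform followed by a row transform, with either core plugged into either direction. Below is a minimal sketch using orthonormal floating-point DCT-II and DST-VII matrices built from their textbook definitions (the standard itself uses scaled integer approximations).

```python
import numpy as np

def dct2_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix (rows are basis functions)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def dst7_matrix(n: int) -> np.ndarray:
    """Orthonormal DST-VII matrix (rows are basis functions)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    return 2.0 / np.sqrt(2 * n + 1) * np.sin(np.pi * (2 * k + 1) * (x + 1) / (2 * n + 1))

def separable_forward(residual, row_t, col_t):
    """Apply col_t to the columns and row_t to the rows of the residual block."""
    return col_t @ residual @ row_t.T

# Example: 4x4 residual, DST-VII on columns (vertical-mode style), DCT-II on rows.
rng = np.random.default_rng(0)
res = rng.standard_normal((4, 4))
coeff = separable_forward(res, row_t=dct2_matrix(4), col_t=dst7_matrix(4))
# Inverse (both matrices are orthonormal):
rec = dst7_matrix(4).T @ coeff @ dct2_matrix(4)
print(np.allclose(rec, res))
```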


Visual Communications and Image Processing | 2012

Modeling power consumption for video decoding on mobile platform and its application to power-rate constrained streaming

Xin Li; Zhan Ma; Felix C. A. Fernandes

This paper proposes an analytical power-consumption model for H.264/AVC video decoding using a hardware (HW) accelerator on popular mobile platforms. The proposed model is expressed as the product of power functions of the video spatial resolution (i.e., frame size) and temporal resolution (i.e., frame rate). We demonstrate that the same analytical model is applicable to different platforms, with the model parameters fixed for a specific platform; this indicates that HW-accelerated video-decoding power is independent of the video content. Simulation results show high accuracy for video-decoding power prediction using the proposed model, with a maximum relative prediction error of less than 10%. Together with the video bit-rate and perceptual-quality models published in separate works, we propose to solve the power-rate-optimized mobile video streaming problem, so as to maximize video quality given the limited access-network bandwidth and battery life of mobile devices.
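
The stated model form, power as a product of power functions of frame size and frame rate, can be written as P(s, f) = c · s^a · f^b. The sketch below evaluates that form; the parameter values are purely hypothetical, since the actual (c, a, b) must be fitted from per-platform measurements (for example by linear regression in the log domain).

```python
def decoding_power(frame_pixels: float, frame_rate: float,
                   c: float, a: float, b: float) -> float:
    """Decoding-power model P = c * s^a * f^b, where s is the spatial resolution
    (pixels per frame) and f the frame rate (fps). The parameters (c, a, b) are
    fixed per platform; the values used below are hypothetical, not from the paper."""
    return c * frame_pixels ** a * frame_rate ** b

# Hypothetical platform parameters.
c, a, b = 2.0e-4, 0.4, 0.6

for (w, h, fps) in [(1280, 720, 30), (1920, 1080, 30), (1920, 1080, 60)]:
    print(f"{w}x{h}@{fps}fps -> {decoding_power(w * h, fps, c, a, b):.2f} W (model)")
```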


International Conference on Image Processing | 2011

Rotational transform for image and video compression

Elena Alshina; Alexander Alshin; Felix C. A. Fernandes

To improve video-coding efficiency, the Rotational Transform (ROT) was proposed for adaptive switching between different transform cores. The Karhunen-Loève Transform (KLT) is known to be optimal for a given residual but requires considerable side information to be signaled to the decoder. The Discrete Cosine Transform (DCT) is known to be close to optimal, but it is sub-optimal for strongly directional components. The main idea of the ROT is that a small modification of the DCT coefficients can improve energy compaction. The ROT is implemented as a secondary transform applied after the primary DCT. The ROT matrix is sparse and thus incurs only a small increase in computational complexity and memory usage. The encoder tries every rotational transform from the dictionary, and only one number, the ROT index, needs to be signaled to the decoder. Because the ROT is an orthogonal transform, the encoder search is greatly simplified: distortion can be estimated in the frequency domain, and no inverse transformation is needed. This makes the ROT an efficient way to improve image/video compression. The ROT coding gain for intra slices is 2–3% in the HM 1.0 software implementation.
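
A minimal sketch of the search loop: after the primary 2-D DCT, the encoder applies each secondary rotation from a small dictionary to the low-frequency coefficients, keeps the index that compacts energy best, and signals only that index. The dictionary below is a toy set of Givens rotations, not the actual ROT matrices from the proposal.

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2-D DCT-II (primary transform)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def givens(n, i, j, theta):
    """n x n Givens rotation in the (i, j) plane -- a toy stand-in for a ROT core."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i] = g[j, j] = c
    g[i, j], g[j, i] = -s, s
    return g

# Toy dictionary of secondary rotations acting on the 4x4 low-frequency corner.
DICTIONARY = [np.eye(4)] + [givens(4, 0, 1, t) for t in (0.1, 0.25, 0.4)]

def apply_rot(coeff, rot):
    out = coeff.copy()
    out[:4, :4] = rot @ coeff[:4, :4] @ rot.T
    return out

def encode_rot_index(coeff):
    """Try every rotation; pick the index minimizing an L1 energy-compaction proxy.

    Each rotation is orthogonal, so distortion is unchanged and the search can be
    done entirely in the frequency domain -- no inverse transform is required.
    """
    costs = [np.abs(apply_rot(coeff, r)).sum() for r in DICTIONARY]
    return int(np.argmin(costs))

rng = np.random.default_rng(1)
residual = rng.standard_normal((8, 8))
print("signalled ROT index:", encode_rot_index(dct2(residual)))
```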


Asilomar Conference on Signals, Systems and Computers | 2013

Low-complexity video compression and compressive sensing

M. Salman Asif; Felix C. A. Fernandes; Justin K. Romberg

Compressive sensing (CS) provides a general signal-acquisition framework that enables the reconstruction of sparse signals from a small number of linear measurements. To reduce video-encoder complexity, we present a CS-based video compression scheme. Modern video-encoder complexity arises mainly from the transform-coding and motion-estimation blocks. In our proposed scheme, we eliminate these blocks from the encoder, which achieves compression by merely taking a few linear measurements of each image in a video sequence. To guarantee stable reconstruction of the video sequence from only a few measurements, the decoder must effectively exploit the inherent spatial and temporal redundancies of a video sequence. To leverage these redundancies, we consider a motion-adaptive linear dynamical model for videos. The recovery process involves solving an l1-regularized optimization problem, which iteratively updates estimates of the video frames and of the motion between adjacent frames. To evaluate the performance of our proposed scheme, we performed experiments on various standard test sequences.
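
As a simplified, single-frame illustration of the decoder side (without the paper's motion-adaptive dynamical model), the sketch below recovers a DCT-sparse signal from a few random measurements by iterative soft thresholding (ISTA), i.e., solving min over the DCT coefficients of 0.5·||A·idct(alpha) − y||² + lambda·||alpha||₁.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_recover(A, y, lam=0.02, n_iter=500):
    """l1-regularized recovery of a signal that is sparse under the DCT.

    Single-frame sketch only; the paper's decoder additionally couples
    adjacent frames through a motion model.
    """
    n = A.shape[1]
    alpha = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = idct(alpha, norm="ortho")
        grad = dct(A.T @ (A @ x - y), norm="ortho")    # chain rule through idct
        alpha = soft(alpha - step * grad, step * lam)
    return idct(alpha, norm="ortho")

# Synthetic test: 5 nonzero DCT coefficients, 60 of 256 samples measured.
rng = np.random.default_rng(0)
n, m = 256, 60
alpha_true = np.zeros(n)
alpha_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
x_true = idct(alpha_true, norm="ortho")
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista_recover(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```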


IEEE MultiMedia | 2015

The Green Metadata Standard for Energy-Efficient Video Consumption

Felix C. A. Fernandes; Xavier Ducloux; Zhan Ma; Esmaeil Faramarzi; Patrick Gendron; Jiangtao Wen

Based on compelling evidence from responses to an April 2013 call for proposals (CFP), MPEG initiated standardization of Green Metadata for energy-efficient video consumption. This article describes how metadata enables large power reductions when QoE is maintained, and even larger reductions when QoE is allowed to vary. When QoE is maintained, metadata enables average power reductions of 12, 12, and 26 percent during encoding, decoding, and display, respectively. In addition, the authors measured up to 80 percent power savings at lowered, but acceptable, QoE levels. The article describes the functional architecture of a system that exploits the Green Metadata standard for energy-efficient media consumption, and it refers to this architecture to explain the power reductions in the various system components.


International Conference on Image Processing | 2014

Nearest-neighbor intra prediction for screen content video coding

Haoming Chen; Ankur Saxena; Felix C. A. Fernandes

Screen content video coding is becoming increasingly important in various applications, such as desktop sharing, video conferencing, and remote education. Compared to natural, camera-captured content, screen content generally has different characteristics, such as sharp edges. In this paper, we propose a novel intra-prediction scheme for screen content video. In the proposed scheme, the bilinear interpolation used in HEVC angular intra prediction is selectively replaced by nearest-neighbor (NN) interpolation to preserve the sharp edges in screen content video. We present two variants of NN interpolation. In the first, implicit pixel-based method, both the encoder and the decoder determine whether to perform NN interpolation based on the prediction pixels. In the second method, the encoder performs a rate-distortion search at the block level and explicitly signals a flag to the decoder to indicate when to use NN interpolation. Both proposed variants provide significant gains over HEVC; simulation results show average BD-bitrate gains of 3.3% for screen content video. The HEVC proposal of this method was accepted into the core experiments and is a technology under consideration for the ongoing Screen Content Coding extension of HEVC, scheduled to begin in March 2014.
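
The difference between the two interpolation rules is easiest to see on a single row of reference samples: HEVC-style angular prediction blends the two nearest reference pixels with a 1/32-sample weight, whereas the nearest-neighbor variant simply copies the closer one, which keeps text-like edges sharp. The snippet below is a simplified 1-D illustration, not the full HEVC prediction process.

```python
def predict_bilinear(ref, idx, frac):
    """HEVC-style angular interpolation: blend the two nearest reference
    samples with a fractional weight `frac` in [0, 31]."""
    return ((32 - frac) * ref[idx] + frac * ref[idx + 1] + 16) >> 5

def predict_nearest(ref, idx, frac):
    """Nearest-neighbor variant: copy whichever reference sample is closer,
    so sharp transitions (e.g. text edges) are not smeared."""
    return ref[idx + 1] if frac >= 16 else ref[idx]

# Reference row with a sharp black-to-white edge, as is common in screen content.
ref = [0, 0, 0, 255, 255, 255]
for frac in (4, 12, 20, 28):
    print(frac, predict_bilinear(ref, 2, frac), predict_nearest(ref, 2, frac))
```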


Data Compression Conference | 2013

Fast Transforms for Intra-prediction-based Image and Video Coding

Ankur Saxena; Felix C. A. Fernandes; Yuriy A. Reznik

In this paper, we provide an overview of the DCT/DST transform scheme for intra coding in the HEVC standard. A unique feature of this scheme is the use of DST-VII transforms in addition to DCT-II. We further derive factorizations for fast joint computation of DCT-II and DST-VII transforms of several sizes. Simulation results for the DCT/DST scheme in the HM reference software for HEVC are also provided together with a discussion on computational complexity.
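
The flavor of such factorizations can be seen in the 4-point case. The sketch below computes the widely documented 4-point integer DST used in HEVC (rows built from the values 29, 55, 74, 84) with 8 multiplications instead of the 16 of a direct matrix product, exploiting 29 + 55 = 84 and the zero in the second basis vector; it mirrors the structure of the fast DST kernel in the HEVC reference software (rounding and shifts omitted) and is only an illustration, not the paper's general factorizations for larger sizes.

```python
import numpy as np

# 4-point integer DST-VII matrix as used in HEVC (rows are basis vectors).
DST4 = np.array([[29,  55,  74,  84],
                 [74,  74,   0, -74],
                 [84, -29, -74,  55],
                 [55, -84,  74, -29]])

def fast_dst4(x):
    """4-point DST-VII with 8 multiplications instead of 16."""
    c0 = x[0] + x[3]
    c1 = x[1] + x[3]
    c2 = x[0] - x[1]
    c3 = 74 * x[2]
    return np.array([29 * c0 + 55 * c1 + c3,
                     74 * (x[0] + x[1] - x[3]),
                     29 * c2 + 55 * c0 - c3,
                     55 * c2 - 29 * c1 + c3])

x = np.array([3, -1, 4, 2])
print(np.array_equal(fast_dst4(x), DST4 @ x))   # True: same result, fewer multiplies
```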
