Gustavo Marrero Callicó
University of Las Palmas de Gran Canaria
Publications
Featured research published by Gustavo Marrero Callicó.
IEEE Transactions on Consumer Electronics | 2008
Gustavo Marrero Callicó; Sebastián López; Oliver Sosa; José Francisco López; Roberto Sarmiento
In general, all video super-resolution (SR) algorithms share the important drawback of a very high computational load, mainly due to the huge number of operations executed by the motion estimation (ME) stage. There is commonly a trade-off between the accuracy of the estimated motion, given as a motion vector (MV), and the associated computational cost. In this sense, ME algorithms that explore the search area more exhaustively tend to deliver better MVs, at the cost of a higher computational load and resource usage. For this reason, the proper choice of an ME algorithm is a key factor not only for reaching real-time operation, but also for obtaining high-quality video sequences independently of their characteristics. From a hardware point of view, the preferred ME algorithms are based on matching fixed-size blocks in different frames. In this paper, nine of the most representative fast block matching algorithms (FBMAs) are compared in order to select the one presenting the best trade-off between video quality and computational cost, thus allowing reliable real-time hardware implementations of video super-resolution systems.
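The baseline that every fast block matching algorithm tries to accelerate is exhaustive full search: for each fixed-size block of the current frame, every candidate displacement in a search window is scored, typically with the sum of absolute differences (SAD). A minimal sketch (function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def full_search_bma(ref, cur, block=(16, 16), search=7):
    """Exhaustive block-matching motion estimation with the SAD criterion.
    Returns one motion vector (dy, dx) per block of the current frame.
    Illustrative sketch: fast BMAs prune this candidate set to cut cost."""
    bh, bw = block
    H, W = cur.shape
    mvs = np.zeros((H // bh, W // bw, 2), dtype=int)
    for by in range(H // bh):
        for bx in range(W // bw):
            y0, x0 = by * bh, bx * bw
            cur_blk = cur[y0:y0 + bh, x0:x0 + bw].astype(int)
            best_sad, best_mv = None, (0, 0)
            # score every displacement in the (2*search+1)^2 window
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + bh > H or x + bw > W:
                        continue  # candidate falls outside the reference frame
                    sad = np.abs(cur_blk - ref[y:y + bh, x:x + bw].astype(int)).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```

The inner double loop is the cost the compared FBMAs attack, at the risk of locking onto a local SAD minimum and degrading the MV quality that super-resolution depends on.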
IEEE Transactions on Consumer Electronics | 2008
F. Tobajas; Gustavo Marrero Callicó; P.A. Perez; V. de Armas; Roberto Sarmiento
In this paper, a novel hardware architecture for real-time implementation of the adaptive deblocking filtering process specified by the H.264/AVC video coding standard is presented. The deblocking filter is a computationally and data-intensive tool that increases the execution time of both the encoding and decoding processes. The proposed architecture is based on a double-filter strategy that yields significant savings in filtering cycles, memory requirements and gate count when compared with state-of-the-art approaches. It is implemented in synthesizable HDL at the RTL level and verified against the reference software. This hardware is designed to be used as part of a complete H.264/AVC video coding system.
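The per-edge work the architecture accelerates can be illustrated with the core of the standard's normal-mode (Bs < 4) filter: an edge is only smoothed when the sample gradients look like a blocking artefact rather than a genuine image edge. This is a simplified sketch; the real standard also derives the boundary strength and looks up alpha, beta and tC in QP-indexed tables, and conditionally updates p1/q1 as well:

```python
def filter_edge(p1, p0, q0, q1, alpha, beta, tc):
    """One 1-D H.264-style deblocking step across a block edge
    (samples p1, p0 | q0, q1). Only the Bs<4 update of p0/q0 is shown."""
    # filter only when the step across the edge is small enough to be
    # a compression artefact, not real image content
    if not (abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta):
        return p0, q0
    delta = (((q0 - p0) << 2) + (p1 - q1) + 4) >> 3
    delta = max(-tc, min(tc, delta))  # clip to the tC strength limit
    return p0 + delta, q0 - delta
```

Because this decision and update must run on every 4-sample edge of every macroblock, for both luma and chroma, cycle savings per edge (as in the double-filter strategy above) translate directly into real-time margin.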
Multidimensional Systems and Signal Processing | 2007
D. Barreto; L. Alvarez; Rafael Molina; Aggelos K. Katsaggelos; Gustavo Marrero Callicó
Every user of multimedia technology expects good visual quality for images and video independently of the particular characteristics of the receiver or the communication networks employed. Unfortunately, due to factors such as processing power limitations and channel capabilities, images or video sequences are often downsampled and/or transmitted or stored at low bitrates, resulting in a degradation of their final visual quality. In this paper, we propose a region-based framework for intentionally downsampling the high-resolution (HR) image sequences before compression and then applying super-resolution (SR) techniques to generate an HR video sequence at the decoder. Segmentation is performed at the encoder on groups of images to classify their blocks into three different types according to their motion and texture. The obtained segmentation defines the downsampling process at the encoder and is encoded and provided to the decoder as side information in order to guide the SR process. All the components of the proposed framework are analyzed in detail, and a particular implementation is described and tested experimentally. The experimental results validate the usefulness of the proposed method.
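The encoder-side segmentation into three block types could be sketched as follows; the motion and texture features and the thresholds here are illustrative assumptions, not the paper's exact classification rules:

```python
import numpy as np

def classify_block(cur_blk, prev_blk, motion_thr, texture_thr):
    """Toy three-way block classifier in the spirit of the paper's
    encoder-side segmentation. Features and thresholds are assumptions:
    motion = mean absolute frame difference, texture = pixel variance."""
    motion = float(np.mean(np.abs(cur_blk.astype(float) - prev_blk.astype(float))))
    texture = float(np.var(cur_blk))
    if motion > motion_thr:
        return "moving"
    return "textured-static" if texture > texture_thr else "flat-static"
```

The resulting label map is exactly the kind of compact side information the framework sends to the decoder to steer both the downsampling and the SR reconstruction per region.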
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012
Lucana Santos; Sebastián López; Gustavo Marrero Callicó; J.F. Lopez; Roberto Sarmiento
In this paper, a performance evaluation of the state-of-the-art H.264/AVC video coding standard is carried out with the aim of determining its feasibility for hyperspectral image compression. Results are obtained by configuring diverse encoder parameters in order to achieve an optimal trade-off between compression ratio, unmixing accuracy and computation time. Simulations measure the spectral angles and the signal-to-noise ratio (SNR), achieving bitrates as low as 0.13 bits per pixel per band (bpppb) for real hyperspectral images. Moreover, this work identifies which blocks in the encoder contribute the most to the compression performance for this particular type of image, and which ones are not relevant and could hence be removed. This conclusion helps reduce the design complexity of future low-power/real-time hyperspectral encoders based on H.264/AVC for remote sensing applications.
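The spectral angle used as a distortion metric measures the angle between an original and a decoded pixel spectrum, so it is insensitive to pure scaling of the spectrum; a minimal sketch:

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two pixel spectra, e.g. an
    original and its decompressed version. 0 means identical direction;
    scaling either spectrum leaves the angle unchanged."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # clip guards against arccos domain errors from rounding
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Averaging this angle over all pixels of the decoded cube gives a single figure of how much a given bitrate distorts the spectral signatures that the unmixing stage relies on.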
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012
Sebastián López; Pablo Horstrand; Gustavo Marrero Callicó; J.F. Lopez; Roberto Sarmiento
There is presently high interest in the space industry in developing high-performance on-board processing platforms with a high degree of flexibility, so that they can adapt to varying mission needs and/or to future space standards. For this purpose, Field Programmable Gate Array (FPGA) devices have been shown to offer an excellent compromise between flexibility and performance. This work presents a novel FPGA-based architecture to be used as part of the hyperspectral linear unmixing processing chain. In particular, this paper introduces a new architecture for hyperspectral endmember extraction according to the Modified Vertex Component Analysis (MVCA) algorithm, which provides a better figure of merit in terms of endmember extraction accuracy versus computational complexity than the Vertex Component Analysis (VCA) algorithm. Two versions of the MVCA algorithm, which differ in the use of floating-point or integer arithmetic for iteratively projecting the hyperspectral cube onto a direction orthogonal to the subspace spanned by the endmembers already computed, have been mapped onto a Xilinx Virtex-5 FPGA. The results demonstrate that both versions are capable of processing hyperspectral images captured by NASA's AVIRIS sensor in real time, with the integer version showing better performance in terms of hardware resources and processing speed. Furthermore, our proposal constitutes the first published architecture for extracting the endmembers from a hyperspectral image based on the VCA principle, and it thus provides a basis for future FPGA implementations of state-of-the-art hyperspectral algorithms with similar characteristics, such as the Automatic Target Generation Process (ATGP) or the Orthogonal Subspace Projection (OSP) algorithms.
IEEE Geoscience and Remote Sensing Letters | 2012
Sebastián López; Pablo Horstrand; Gustavo Marrero Callicó; José Francisco López; Roberto Sarmiento
Endmember extraction represents one of the most challenging aspects of hyperspectral image processing. In this letter, a new algorithm for endmember extraction, named modified vertex component analysis (MVCA), is presented. This new technique outperforms the popular vertex component analysis (VCA) by applying a low-complexity orthogonalization method and by using integer instead of floating-point arithmetic when dealing with hyperspectral data. The feasibility of this technique is demonstrated by comparing its performance with VCA on synthetic mixtures as well as on the well-known Cuprite hyperspectral image. MVCA shows promising results in terms of much lower computational complexity while reproducing endmember accuracy similar to that of its original counterpart. Moreover, the features of this algorithm, combined with state-of-the-art hardware implementations, qualify MVCA as a good potential candidate for all those applications in which real time is a must.
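The orthogonal-projection principle shared by this family of algorithms (VCA, MVCA, and also ATGP/OSP) can be sketched generically: at each step, project all pixels onto the orthogonal complement of the subspace spanned by the endmembers found so far and keep the pixel with the largest residual. This is only an illustration of the principle; MVCA's contribution is precisely a lower-complexity orthogonalization and an integer-arithmetic variant of this loop:

```python
import numpy as np

def extract_endmembers(Y, p):
    """Iterative orthogonal-projection endmember extraction (generic
    sketch of the VCA/ATGP principle, not the exact MVCA method).
    Y: (bands, pixels) data matrix; p: number of endmembers to find.
    Returns the pixel indices selected as endmembers."""
    bands, n = Y.shape
    # seed with the pixel of largest norm (an extreme point of the data)
    idx = [int(np.argmax(np.sum(Y * Y, axis=0)))]
    E = Y[:, idx]
    for _ in range(p - 1):
        # projector onto the orthogonal complement of span(E)
        P = np.eye(bands) - E @ np.linalg.pinv(E)
        resid = P @ Y
        # the pixel least explained by the current endmembers is next
        idx.append(int(np.argmax(np.sum(resid * resid, axis=0))))
        E = Y[:, idx]
    return idx
```

Under the linear mixing model, mixed pixels lie inside the simplex spanned by the endmembers, so the maximum-residual pixel at each step is a vertex of that simplex; the projection and the maximum search are also the two operations that dominate the runtime and that hardware implementations parallelize.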
conference of the industrial electronics society | 2002
Gustavo Marrero Callicó; Antonio Núñez; Rafael Peset Llopis; Ramanathan Sethuraman; M.O. de Beeck
This paper presents an approach to improve the quality of digital images beyond the sensor resolution using super-resolution techniques. In order to obtain a feasible low-cost implementation, the resources have been restricted to those that can be found in a generic video encoder, i.e. the motion estimator, the motion compensator, the image loop memory, etc. The super-resolution system has been implemented on a codesign platform developed by the Philips Research Laboratories in Eindhoven, with minimal changes to the overall hardware architecture. Nevertheless, this methodology can easily be extended to any generic video encoder architecture. The results show important improvements in image quality, provided that sufficient sample data are available. Based on these results, some generalizations can be made about the impact of the sampling process on the quality of the super-resolution image.
IEEE Geoscience and Remote Sensing Letters | 2013
Sebastián López; Javier F. Moure; Antonio Plaza; Gustavo Marrero Callicó; José Francisco López; Roberto Sarmiento
Hyperspectral image processing represents a valuable tool for remote sensing of the Earth. This fact has led to the inclusion of hyperspectral sensors in different airborne and satellite missions for Earth observation. However, one of the main drawbacks encountered when dealing with hyperspectral images is the huge amount of data to be processed, in particular when advanced analysis techniques such as spectral unmixing are used. The main contribution of this letter is the introduction of a novel preprocessing (PP) module, called SE2PP, which is based on the integration of spatial and spectral information. The proposed approach can be combined with existing algorithms for endmember extraction, reducing the computational complexity of those algorithms while providing similar figures of accuracy. The key idea behind SE2PP is to identify and select a reduced set of pixels in the hyperspectral image, so that there is no need to process a large number of them to obtain accurate spectral unmixing results. Compared to previous approaches based on similar spatial and spatial-spectral PP strategies, SE2PP clearly outperforms them in terms of accuracy and computation speed, as demonstrated with artificial and real hyperspectral images.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2014
Gustavo Marrero Callicó; Sebastián López; Beatriz Aguilar; J.F. Lopez; Roberto Sarmiento
Hyperspectral imaging represents the state-of-the-art technique in applications related to environmental monitoring, military surveillance, or rare mineral detection. However, a requirement of paramount importance in such scenarios is the ability to meet real-time constraints given the huge amount of data involved in processing this type of image. In this paper, the authors present for the first time a combination of the newly introduced modified vertex component analysis (MVCA) algorithm for endmember extraction with the parallelism offered by GPUs, giving as a result important speedup factors with respect to its sequential counterpart, while maintaining the same levels of endmember extraction accuracy as the vertex component analysis (VCA) algorithm. Furthermore, the use of OpenCL allows generic computing platforms to be targeted without being restricted to a particular vendor. The proposed approach has been assessed on a set of synthetic images as well as on the well-known Cuprite real image, showing that the most time-consuming operations are the matrix projection and the maximum search. Compared with a single-threaded C-based implementation of the MVCA algorithm, speedup factors of 8.87 for a 500 × 500 pixel artificial image with 20 endmembers and 7.14 for the well-known Cuprite hyperspectral data set are obtained, including I/O transfers in both cases. Moreover, when the proposed implementation is compared with a C-based sequential implementation of the VCA algorithm, a speedup of 115 has been achieved. In all cases, the results obtained by the MVCA are the same as those obtained with the VCA; thus, the accuracy of the proposed algorithm is not compromised.
IEEE Transactions on Consumer Electronics | 2009
Sebastián López; Gustavo Marrero Callicó; F. Tobajas; José Francisco López; Roberto Sarmiento
The possibility of increasing the spatial resolution of video sequences is becoming extremely important in present-day multimedia systems. In this sense, super-resolution represents a smart way to obtain high-resolution video sequences from a finite set of low-resolution video frames. This set of low-resolution images must be obtained under different capture conditions, from different spatial positions and/or from different cameras; this is the super-resolution paradigm, one of the fundamental challenges of sensor fusion. However, the vast computational cost associated with common super-resolution techniques jeopardizes their usefulness for real-time consumer applications. To alleviate this drawback, an implementation of a proprietary super-resolution algorithm mapped onto a hardware platform based on a digital signal processor (DSP) is presented in this paper. The results obtained show that, after an incremental optimization procedure, super-resolved CIF video sequences (352 × 288 pixels) are obtained at 38 frames per second.