Raúl Guerra
University of Las Palmas de Gran Canaria
Publications
Featured research published by Raúl Guerra.
IEEE Transactions on Geoscience and Remote Sensing | 2015
Raúl Guerra; Lucana Santos; Sebastián López; Roberto Sarmiento
Linear spectral unmixing is nowadays an essential tool to analyze remotely sensed hyperspectral images. Although many different contributions have been published during the last two decades, the majority of them divide the whole process of linearly unmixing a given hyperspectral image into three sequential steps: 1) estimation of the number of endmembers that are present in the hyperspectral image under consideration; 2) extraction of these endmembers from the hyperspectral data set; and 3) calculation of the abundances associated with the extracted endmembers for each mixed pixel of the image. Although this de facto processing chain has proven to be accurate enough for unmixing most of the images collected by hyperspectral remote sensors, it is not exempt from drawbacks, such as the fact that all the possible combinations of algorithms that fully unmix a hyperspectral image according to the aforementioned processing chain demand a formidable computational effort, which tends to grow as the performance of the designed unmixing chain improves. This troublesome issue unfortunately prevents the use of hyperspectral imaging technology in applications under real-time constraints, in which hyperspectral images have to be analyzed in a short period of time. Hence, there is a clear need to fully exploit the unquestionable benefits of hyperspectral imaging technology for these applications while overcoming the limitations imposed by the computationally complex nature of the processes involved. For this purpose, this paper introduces a novel algorithm named fast algorithm for linearly unmixing hyperspectral images (FUN), which is capable of fully unmixing a hyperspectral image with at least the same accuracy as state-of-the-art approaches while demanding a much lower computational effort, independently of the characteristics of the image under analysis. The FUN algorithm is based on the concept of orthogonal projections and performs the estimation of the number of endmembers and their extraction simultaneously, using the modified Gram-Schmidt method. The operations performed by the FUN algorithm are simple and can be easily parallelized. Moreover, the algorithm calculates the abundances using very similar operations, also based on orthogonal projections, which makes it easier to achieve a hardware implementation of the entire unmixing process. The benefits of our proposal are demonstrated with a diverse set of artificially generated hyperspectral images and with the well-known AVIRIS Cuprite image, for which the proposed FUN algorithm reduces the processing time by a factor of more than 31 while providing better unmixing performance than traditional methods.
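For illustration, the following minimal sketch shows the kind of modified Gram-Schmidt loop the abstract describes: pixels are iteratively selected as endmember candidates and their direction is deflated from every residual via orthogonal projection. The stopping threshold `tau`, the pixel-selection rule, and all parameter values are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def fun_endmember_sketch(Y, tau=0.01, max_p=30):
    """Hedged sketch of FUN-style simultaneous endmember estimation/extraction.

    Y  : (num_pixels, num_bands) hyperspectral image, one pixel per row.
    tau: relative residual-energy stopping threshold (illustrative value,
         not taken from the paper).
    Returns the indices of the selected endmember pixels.
    """
    R = Y.astype(np.float64)             # residuals after each projection
    norm0 = np.linalg.norm(Y)
    selected = []
    for _ in range(max_p):
        # Pick the pixel with the largest residual norm: it is the one
        # least explained by the endmembers selected so far.
        idx = int(np.argmax(np.sum(R * R, axis=1)))
        qn = np.linalg.norm(R[idx])
        if qn / norm0 < tau:             # residual energy negligible: stop
            break
        q = R[idx] / qn                  # normalised new basis direction
        selected.append(idx)
        # Modified Gram-Schmidt step: remove the new direction from every pixel.
        R -= np.outer(R @ q, q)
    return selected
```

Abundances can then be obtained with very similar projection operations, e.g. by solving a least-squares system per pixel against the extracted endmembers.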
Biomedical Engineering Systems and Technologies | 2016
Himar Fabelo; Samuel Ortega; Raúl Guerra; Gustavo Marrero Callicó; Adam Szolna; Juan F. Piñeiro; Miguel Tejedor; Sebastián López; Roberto Sarmiento
Hyperspectral imaging is an emerging technology for medical diagnosis because it is a non-contact, non-ionizing and non-invasive sensing technique. The work presented in this paper establishes a novel way of using hyperspectral images to help neurosurgeons accurately determine tumour boundaries during brain tumour resection, avoiding the excessive extraction of healthy tissue and the accidental leaving behind of small unresected tumour tissue. To do so, a hyperspectral database of in-vivo human brain samples has been created, and a procedure to label the pixels diagnosed by the pathologists is described. A total of 24646 samples of normal and tumour tissue from 13 different patients have been obtained. A pre-processing chain to homogenize the spectral signatures has been developed, producing 3 types of datasets (each using a different pre-processing chain) in order to determine which one provides the best classification results with a Random Forest classifier. The experimental results of this supervised classification algorithm in distinguishing between normal and tumour tissue achieve an accuracy of more than 99%.
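As a hedged illustration of the classification stage, the snippet below trains a Random Forest on spectral signatures with scikit-learn; the random placeholder data, the 128-band dimensionality, the train/test split, and the hyperparameters are all assumptions standing in for the labelled in-vivo database described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_samples, n_bands) pre-processed spectral signatures; y: 0 = normal, 1 = tumour.
# Placeholder random data; the paper uses pathologist-labelled pixels and
# compares three pre-processing chains to pick the best-performing dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(24646, 128))
y = rng.integers(0, 2, size=24646)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In a real evaluation of this kind, splitting by patient rather than by pixel would be advisable, since pixels from the same patient are strongly correlated.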
Journal of Systems Architecture | 2017
Raquel Lazcano; Daniel Madroñal; Rubén Salvador; Karol Desnos; Maxime Pelcat; Raúl Guerra; Himar Fabelo; Samuel Ortega; Sebastián López; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
This paper presents a study of the parallelism of a Principal Component Analysis (PCA) algorithm and its adaptation to a manycore MPPA (Massively Parallel Processor Array) architecture, which gathers 256 cores distributed among 16 clusters. This study focuses on porting hyperspectral image processing onto manycore platforms by optimizing the processing to fulfill real-time constraints fixed by the image capture rate of the hyperspectral sensor. Real-time is a challenging objective for hyperspectral image processing, as hyperspectral images consist of extremely large volumes of data, and this problem is often solved by reducing the image size before the processing itself starts. To tackle the challenge, this paper proposes an analysis of the intrinsic parallelism of the different stages of the PCA algorithm with the objective of exploiting the parallelization possibilities offered by an MPPA manycore architecture. Furthermore, the impact on internal communication when increasing the level of parallelism is also analyzed. Experimenting with medical images obtained from two different surgical use cases, an average speedup of 20 is achieved. Internal communications are shown to rapidly become the bottleneck that limits the achievable speedup offered by the PCA parallelization. As a result of this study, PCA processing time is reduced to less than 6 s, a time compatible with the targeted brain surgery application, which requires one frame per minute.
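The sketch below separates PCA into the stages whose parallelism the study analyses, with comments on how each stage could map onto clusters of cores; the mapping notes are assumptions illustrating the general idea, not the paper's exact MPPA scheme.

```python
import numpy as np

def pca_stages(X, n_components=3):
    """PCA split into stages, annotated with their intrinsic parallelism.

    X: (num_pixels, num_bands) hyperspectral image in matrix form.
    """
    # Stage 1: mean subtraction -- embarrassingly parallel over pixels.
    Xc = X - X.mean(axis=0)
    # Stage 2: covariance -- each cluster could accumulate a partial X_i^T X_i
    # on its block of pixels; the partial sums are then reduced.
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    # Stage 3: eigendecomposition of a small bands-by-bands matrix -- sequential.
    w, V = np.linalg.eigh(cov)
    order = np.argsort(w)[::-1][:n_components]
    # Stage 4: projection onto the principal components -- parallel over pixels.
    return Xc @ V[:, order]
```

Stages 1, 2 and 4 scale with the number of pixels and parallelize naturally, while stage 3 operates on a tiny bands-by-bands matrix; this is consistent with the observation above that internal communication, not computation, becomes the bottleneck as parallelism grows.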
IEEE Transactions on Geoscience and Remote Sensing | 2016
Raúl Guerra; Sebastián López; Roberto Sarmiento
Remote sensing systems equipped with multispectral and hyperspectral sensors are able to capture images of the surface of the Earth at different wavelengths. In these systems, hyperspectral sensors typically provide images with a high spectral resolution but a reduced spatial resolution, while, on the contrary, multispectral sensors produce images with a rich spatial resolution but a poor spectral resolution. For this reason, different fusion algorithms have been proposed in recent years to obtain remotely sensed images with enriched spatial and spectral resolutions by wisely combining the data acquired for the same scene by multispectral and hyperspectral sensors. However, the algorithms proposed so far that obtain fused images with good spatial and spectral quality require a formidable amount of computationally complex operations that cannot be executed in parallel, which clearly prevents the utilization of these algorithms in applications under real-time constraints, where high-performance parallel computing systems are normally required for accelerating the overall process. On the other hand, there are other state-of-the-art algorithms that are capable of fusing these images with a lower computational effort, but at the cost of decreasing the quality of the resultant fused image. In this paper, a new algorithm named computationally efficient algorithm for fusing multispectral and hyperspectral images (CoEf-MHI) is proposed in order to obtain a high-quality image from hyperspectral and multispectral images of the same scene with a low computational effort. The proposed CoEf-MHI algorithm incorporates the spatial details of the multispectral image into the hyperspectral image without introducing spectral distortions. To achieve this goal, the CoEf-MHI algorithm first spatially upsamples the input hyperspectral image to the spatial resolution of the input multispectral image by means of a bilinear interpolation, and then it independently refines each pixel of the resulting image by linearly combining the multispectral and hyperspectral pixels in its neighborhood. The simulations performed in this work with different images demonstrate that our proposal is much more efficient than state-of-the-art approaches, where efficiency is understood as the ratio between the quality of the fused image and the computational effort required to obtain it.
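A minimal sketch of the two CoEf-MHI stages described above follows; the ridge-regularised 3x3 neighbourhood weighting is an illustrative stand-in (an assumption) for the paper's exact linear combination.

```python
import numpy as np
from scipy.ndimage import zoom

def coef_mhi_sketch(hs, ms, ratio, eps=1e-3):
    """Hedged sketch of the two fusion stages named in the abstract.

    hs : (h, w, B)  low-spatial-resolution hyperspectral image
    ms : (H, W, b)  high-spatial-resolution multispectral image, H = h*ratio
    """
    # Stage 1: bilinear (order=1) spatial upsampling of the hyperspectral cube.
    up = zoom(hs, (ratio, ratio, 1), order=1)
    H, W, B = up.shape
    out = up.copy()
    # Stage 2: independent per-pixel refinement from the 3x3 neighbourhood.
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            M = ms[i-1:i+2, j-1:j+2].reshape(9, -1).T   # (b, 9) MS neighbours
            m = ms[i, j]                                # (b,) target MS pixel
            # Weights that reconstruct the MS pixel from its neighbours...
            w = np.linalg.solve(M.T @ M + eps * np.eye(9), M.T @ m)
            # ...are reused to linearly combine the upsampled HS neighbours.
            out[i, j] = up[i-1:i+2, j-1:j+2].reshape(9, B).T @ w
    return out
```

Because every pixel is refined independently, this second stage is trivially parallelizable, which is what keeps the computational effort low.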
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2017
Ernestina Martel; Raúl Guerra; Sebastián López; Roberto Sarmiento
Linear spectral unmixing is nowadays one of the hottest research topics within the hyperspectral imaging community, as evidenced by the vast number of papers on this challenging task that can be found in the scientific literature. A subset of these works is devoted to the acceleration of previously published unmixing algorithms for applications under tight time constraints. For this purpose, hyperspectral unmixing algorithms are typically implemented onto high-performance computing architectures in which the operations involved are executed in parallel, which leads to a reduction in the time required for unmixing a given hyperspectral image with respect to the sequential version of these algorithms. The speedup factors that can be achieved by means of these high-performance computing platforms heavily depend on the inherent level of parallelism of the algorithms executed on them. However, the majority of the state-of-the-art unmixing algorithms were not originally conceived to be parallelized at a later stage, which clearly restricts the amount of acceleration that can be reached. As advanced hyperspectral sensors reach increasingly high spatial, spectral, and temporal resolutions, it hence becomes mandatory to develop a new class of highly parallel unmixing solutions that can take full advantage of the characteristics of today's high-performance computing architectures. This paper represents a step forward in this direction, as it proposes a new parallel algorithm for fully unmixing a hyperspectral image together with its implementation onto two different NVIDIA graphics processing units (GPUs). The results obtained reveal that our proposal unmixes hyperspectral images with very different spatial patterns and sizes better and much faster than the best GPU-based unmixing chains published to date, independently of the characteristics of the selected GPU.
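As a hedged sketch of why per-pixel unmixing operations suit GPUs, the snippet below batches the unconstrained least-squares abundance estimation for a whole image into a couple of matrix products, using CuPy as a drop-in NumPy replacement; the paper's algorithm and its constraints (e.g. non-negative, sum-to-one abundances) are richer than this closed-form step.

```python
import numpy as np
# CuPy mirrors the NumPy API on NVIDIA GPUs; falling back to NumPy keeps
# the sketch runnable on machines without a GPU.
try:
    import cupy as xp
except ImportError:
    xp = np

def unconstrained_abundances(Y, E):
    """Least-squares abundances for all pixels at once.

    Y: (num_pixels, num_bands) image; E: (p, num_bands) endmembers.
    One batched matrix product per image is exactly the kind of operation
    a GPU accelerates.
    """
    Y = xp.asarray(Y)
    E = xp.asarray(E)
    # Closed-form unconstrained solution A = Y E^T (E E^T)^{-1}.
    G = E @ E.T
    return Y @ E.T @ xp.linalg.inv(G)
```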
Remote Sensing | 2018
Raúl Guerra; Yubal Barrios; María Asunción Romero Díaz; Lucana Santos; Sebastián López; Roberto Sarmiento
Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the Earth's surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increment in the data rate of new-generation sensors is making the need for higher compression ratios more critical, forcing the use of lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with a good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compressing hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.
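The abstract does not detail HyperLCA's internals, so the following is only a schematic sketch of a transform built from orthogonal projections in the spirit of the description: a centroid plus a handful of extracted pixel vectors and their per-pixel coefficients form the compressed payload. The selection rule, the stopping criterion, and the omitted entropy-coding stage are all assumptions.

```python
import numpy as np

def transform_sketch(Y, p):
    """Schematic projection-based transform stage (assumed, not HyperLCA's exact rules).

    Y: (num_pixels, num_bands). The transform keeps the centroid plus p
    extracted pixel vectors and, for each, one projection coefficient per
    pixel; quantization and entropy coding are omitted here.
    """
    c = Y.mean(axis=0)
    R = Y - c                                        # centred residuals
    vectors, coeffs = [], []
    for _ in range(p):
        idx = int(np.argmax(np.sum(R * R, axis=1)))  # most informative pixel
        q = R[idx] / np.linalg.norm(R[idx])
        a = R @ q                                    # one coefficient per pixel
        R -= np.outer(a, q)                          # deflate the captured direction
        vectors.append(q)
        coeffs.append(a)
    # A decompressor would rebuild Y ~ c + sum_k a_k q_k from this payload.
    return c, np.array(vectors), np.array(coeffs)
```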
IEEE International Conference on High Performance Computing, Data, and Analytics | 2018
Raúl Guerra; María Asunción Romero Díaz; Yubal Barrios; Sebastián López; Roberto Sarmiento
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on board the satellite that carries the hyperspectral sensor, where the available power, time, and computational resources are limited. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image for subsequent hyperspectral imaging applications. The HyperLCA compressor aims to fulfill these requirements, providing an efficient lossy compression process that achieves very high compression ratios while preserving the most relevant information for the subsequent hyperspectral applications. One extra advantage of the HyperLCA compressor is that it allows the compression ratio to be fixed in advance. In this work, the effect of the specified compression ratio on the computational burden of the compressor has been evaluated, also considering the rest of the input parameters and configurations of the HyperLCA compressor. The obtained results verify that the computational cost of the HyperLCA compressor decreases for higher compression ratios, independently of the specified configuration. Additionally, the obtained results also suggest that this compressor could produce real-time compression results for on-board applications.
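A back-of-the-envelope sketch of why a fixed, higher compression ratio lowers the computational cost: if the compressed payload holds one vector plus one coefficient per pixel for each transform iteration, the bit budget implied by the target ratio caps the number of iterations. The accounting below is illustrative, not the paper's exact rate model.

```python
import math

def max_vectors_for_ratio(num_pixels, num_bands, target_cr, bits=16):
    """Largest number of transform vectors that fits the target ratio's bit
    budget (illustrative accounting; fewer vectors means fewer iterations
    and hence a lower computational cost)."""
    raw = num_pixels * num_bands * bits            # original size in bits
    budget = raw / target_cr                       # allowed compressed size
    per_vector = (num_bands + num_pixels) * bits   # one vector + its coefficients
    centroid = num_bands * bits
    return max(0, math.floor((budget - centroid) / per_vector))

for cr in (4, 8, 16, 32):                          # higher ratio -> fewer vectors
    print(cr, max_vectors_for_ratio(512 * 512, 224, cr))
```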
Remote Sensing | 2018
Ernestina Martel; Raquel Lazcano; J.F. Lopez; Daniel Madroñal; Rubén Salvador; Sebastián López; Eduardo Juárez; Raúl Guerra; César Sanz; Roberto Sarmiento
Dimensionality reduction represents a critical preprocessing step for increasing the efficiency and the performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms such as Principal Component Analysis (PCA) are computationally demanding, which makes their implementation onto high-performance computing architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm onto two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks to take full advantage of the inherent parallelism of these high-performance computing platforms and, hence, to reduce the time required to process a given hyperspectral image. Moreover, the results obtained with different hyperspectral images have been compared with those obtained with a recently published field programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis highlighting the pros and cons of each option.
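One generic lesson this kind of GPU/manycore comparison motivates can be sketched as follows: transfer the image to device memory once and keep every PCA stage there, so host-device copies do not erode the speedup. The CuPy fallback pattern and the stage layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
try:
    import cupy as cp        # GPU path; NumPy fallback below keeps it runnable
except ImportError:
    cp = None

def pca_projection(X, n_components=3):
    """PCA projection with a single transfer in and a single transfer out."""
    xp = cp if cp is not None else np
    Xd = xp.asarray(X)                   # one host-to-device transfer (GPU path)
    Xd = Xd - Xd.mean(axis=0)            # all stages stay in device memory...
    cov = Xd.T @ Xd / (Xd.shape[0] - 1)
    w, V = xp.linalg.eigh(cov)
    order = xp.argsort(w)[::-1][:n_components]
    P = Xd @ V[:, order]
    return cp.asnumpy(P) if cp is not None else P   # ...one transfer back
```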
IEEE International Conference on High Performance Computing, Data, and Analytics | 2017
Raúl Guerra; Sebastián López; Roberto Sarmiento
Hyperspectral imaging systems provide images in which single pixels carry information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between different materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications involve distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For such purposes, parallel hardware devices such as Field Programmable Gate Arrays (FPGAs) are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies that can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution, in which a single high-level synthesis design language can be used to efficiently develop applications on multiple and different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
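To give a flavour of the single-source OpenCL approach, the hedged sketch below expresses the residual-update step that dominates FUN-style unmixing as an OpenCL kernel driven from Python with pyopencl. The kernel is illustrative, and an actual FPGA flow would compile such source offline with the vendor's OpenCL SDK rather than building it at runtime as done here.

```python
import numpy as np
import pyopencl as cl

# Illustrative kernel: the per-pixel deflation R -= (R.q) q, parallel over pixels.
KERNEL = """
__kernel void deflate(__global float *R, __global const float *q,
                      const int bands) {
    int pix = get_global_id(0);
    __global float *row = R + pix * bands;
    float dot = 0.0f;
    for (int b = 0; b < bands; ++b) dot += row[b] * q[b];
    for (int b = 0; b < bands; ++b) row[b] -= dot * q[b];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

pixels, bands = 1024, 224                       # illustrative image size
R = np.random.rand(pixels, bands).astype(np.float32)
q = R[0] / np.linalg.norm(R[0])                 # direction to deflate

mf = cl.mem_flags
R_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=R)
q_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=q)
prog.deflate(queue, (pixels,), None, R_buf, q_buf, np.int32(bands))
cl.enqueue_copy(queue, R, R_buf)                # read updated residuals back
```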
IEEE International Conference on High Performance Computing, Data, and Analytics | 2016
Raúl Guerra; José Melián; Sebastián López; Roberto Sarmiento
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on board the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, which has area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second one is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
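The two on-board degradation steps described above are simple enough to sketch directly; the uniform band-averaging spectral response matrix and the block-averaging spatial filter below are assumptions, since the real operators depend on the instrument and the chosen degradation methodology.

```python
import numpy as np

def degrade_for_downlink(hs, spatial_ratio, srf):
    """Sketch of the two on-board degradation steps described above.

    hs  : (H, W, B) remotely sensed hyperspectral image, H and W divisible
          by spatial_ratio
    srf : (b, B) spectral response matrix mapping B hyperspectral bands to
          b multispectral bands
    Returns the two images to downlink plus the resulting compression ratio.
    """
    H, W, B = hs.shape
    r = spatial_ratio
    # Step 1: spatial degradation by block averaging -> low-res hyperspectral.
    lr_hs = hs.reshape(H // r, r, W // r, r, B).mean(axis=(1, 3))
    # Step 2: spectral degradation through the SRF -> high-res multispectral.
    hr_ms = hs @ srf.T
    cr = hs.size / (lr_hs.size + hr_ms.size)
    return lr_hs, hr_ms, cr

B, b, r = 224, 8, 4
srf = np.kron(np.eye(b), np.ones(B // b) / (B // b))   # uniform band averaging
lr_hs, hr_ms, cr = degrade_for_downlink(np.random.rand(512, 512, B), r, srf)
print("compression ratio ~", round(cr, 1))
```

Note how the compression ratio is fixed in advance by the choice of the spatial ratio and the number of multispectral bands, before any data is seen.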