Publication


Featured research published by Lucana Santos.


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

Performance Evaluation of the H.264/AVC Video Coding Standard for Lossy Hyperspectral Image Compression

Lucana Santos; Sebastián López; Gustavo Marrero Callicó; J.F. Lopez; Roberto Sarmiento

In this paper, a performance evaluation of the state-of-the-art H.264/AVC video coding standard is carried out with the aim of determining its feasibility for hyperspectral image compression. Results are obtained by configuring diverse encoder parameters in order to achieve an optimal trade-off between compression ratio, unmixing accuracy and computation time. In this sense, simulations are developed to measure the spectral angles and the signal-to-noise ratio (SNR), achieving rates as low as 0.13 bits per pixel per band (bpppb) for real hyperspectral images. Moreover, this work identifies which blocks in the encoder contribute the most to the performance of the compression task for this particular type of images, and which ones are not relevant and could hence be removed. This conclusion helps reduce the design complexity of future low-power/real-time hyperspectral encoders based on H.264/AVC for remote sensing applications.
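The two quality measures used in that evaluation have standard definitions; a minimal sketch (not the paper's actual evaluation code) of the spectral angle between an original and a reconstructed pixel spectrum, and of the reconstruction SNR:

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two pixel spectra:
    arccos of their normalized dot product."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB: signal energy over error energy."""
    err = original - reconstructed
    return float(10.0 * np.log10(np.sum(original**2) / np.sum(err**2)))

# Toy 2-pixel "image" with 4 spectral bands.
orig = np.array([[10.0, 20.0, 30.0, 40.0],
                 [5.0, 15.0, 25.0, 35.0]])
recon = orig + 0.5                       # stand-in for a lossy reconstruction
angles = [spectral_angle(o, r) for o, r in zip(orig, recon)]
print(round(snr_db(orig, recon), 2))     # prints 34.07
```

A lossy hyperspectral codec is judged well-tuned when it drives the rate (bpppb) down while keeping the spectral angles near zero and the SNR high.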


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013

Highly-Parallel GPU Architecture for Lossy Hyperspectral Image Compression

Lucana Santos; Enrico Magli; Raffaele Vitulli; J.F. Lopez; Roberto Sarmiento

Graphics processing units (GPUs) are becoming a widespread tool for general-purpose scientific computing, and are attracting interest for future onboard satellite image processing payloads due to their ability to perform massively parallel computations. This paper describes the GPU implementation of an algorithm for onboard lossy hyperspectral image compression, and proposes an architecture that accelerates the compression task by parallelizing it on the GPU. The selected algorithm is amenable to parallel computation owing to its block-based operation, and has been optimized here to facilitate GPU implementation with negligible overhead with respect to the original single-threaded version. In particular, a parallelization strategy has been designed for both the compressor and the corresponding decompressor, which are implemented on a GPU using Nvidia's CUDA parallel architecture. Experimental results on several hyperspectral images with different spatial and spectral dimensions are presented, showing significant speed-ups with respect to a single-threaded CPU implementation. These results highlight the significant benefits of GPUs for onboard image processing, and particularly image compression, demonstrating the potential of GPUs as a future hardware platform for very high data rate instruments.


IEEE Transactions on Geoscience and Remote Sensing | 2015

A New Fast Algorithm for Linearly Unmixing Hyperspectral Images

Raúl Guerra; Lucana Santos; Sebastián López; Roberto Sarmiento

Linear spectral unmixing is nowadays an essential tool for analyzing remotely sensed hyperspectral images. Although many different contributions have been proposed during the last two decades, the majority of them divide the process of linearly unmixing a given hyperspectral image into three sequential steps: 1) estimation of the number of endmembers present in the hyperspectral image under consideration; 2) extraction of these endmembers from the hyperspectral data set; and 3) calculation of the abundances associated with the extracted endmembers for each mixed pixel of the image. Although this de facto processing chain has proven accurate enough for unmixing most of the images collected by hyperspectral remote sensors, it is not free of drawbacks: fully unmixing a hyperspectral image according to this chain demands a formidable computational effort, and that effort tends to grow as the performance of the designed unmixing chain improves. This issue unfortunately prevents the use of hyperspectral imaging technology in applications under real-time constraints, in which hyperspectral images have to be analyzed in a short period of time. Hence, there is a clear need to fully exploit the unquestionable benefits of hyperspectral imaging technology for these applications while overcoming the limitations imposed by the computationally complex nature of the processes involved. For this purpose, this paper introduces a novel algorithm, named the fast algorithm for linearly unmixing hyperspectral images (FUN), which is capable of fully unmixing a hyperspectral image with at least the same accuracy as state-of-the-art approaches while demanding a much lower computational effort, independently of the characteristics of the image under analysis.
The FUN algorithm is based on the concept of orthogonal projections and performs the estimation of the number of endmembers and their extraction simultaneously, using the modified Gram-Schmidt method. The operations performed by the FUN algorithm are simple and can be easily parallelized. Moreover, the algorithm calculates the abundances using very similar operations, also based on orthogonal projections, which makes it easier to implement the entire unmixing process in hardware. The benefits of our proposal are demonstrated with a diverse set of artificially generated hyperspectral images and with the well-known AVIRIS Cuprite image, whose processing time the proposed FUN algorithm reduces by a factor of more than 31 while providing better unmixing performance than traditional methods.
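As a rough illustration of the orthogonal-projection idea (a sketch only, not the published FUN implementation), endmembers can be extracted greedily by repeatedly selecting the pixel with the largest residual and removing its direction from every pixel, Gram-Schmidt style:

```python
import numpy as np

def extract_endmembers(pixels, p):
    """Greedy orthogonal-projection endmember search (illustrative sketch):
    pick the pixel with the largest residual norm, then project its
    direction out of all pixels, and repeat p times."""
    residual = pixels.astype(float).copy()      # one row per pixel
    endmembers = []
    for _ in range(p):
        norms = np.linalg.norm(residual, axis=1)
        idx = int(np.argmax(norms))             # most "unexplained" pixel
        endmembers.append(pixels[idx])
        q = residual[idx] / norms[idx]          # unit direction to remove
        residual -= np.outer(residual @ q, q)   # orthogonal projection
    return np.array(endmembers)

# Three pure spectra plus one mixture: the pure pixels carry the
# largest residuals, so they are recovered as the endmembers.
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
mix = 0.5 * E[0] + 0.5 * E[1]
pixels = np.vstack([E, mix])
found = extract_endmembers(pixels, 3)
```

In FUN the same residual norms also drive the stopping criterion, so the number of endmembers is estimated in the same pass; the sketch above assumes the count `p` is given.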


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016

Multispectral and Hyperspectral Lossless Compressor for Space Applications (HyLoC): A Low-Complexity FPGA Implementation of the CCSDS 123 Standard

Lucana Santos; Luis Berrojo; Javier Moreno; J.F. Lopez; Roberto Sarmiento

Efficient compression of hyperspectral images on board satellites is mandatory in current and future space missions in order to save bandwidth and storage space. Reducing the data volume in space is a challenge that has been faced with a twofold approach: proposing new, highly efficient compression algorithms, and presenting technologies and strategies to execute the compression in the hardware available on board. The Consultative Committee for Space Data Systems (CCSDS), a consortium of the major space agencies in the world, has recently issued the CCSDS 123 standard for multispectral and hyperspectral image (MHI) compression, with the aim of facilitating the inclusion of on-board compression on satellites by the space industry. In this paper, we present a low-complexity field-programmable gate array (FPGA) implementation of this recent CCSDS 123 standard, which demonstrates its main features in terms of compression efficiency and suitability for implementation on the available on-board technologies. A hardware architecture is conceived and designed with the aim of achieving low hardware occupancy and high performance on a space-qualified FPGA from the Microsemi RTAX family. The resulting FPGA implementation is therefore suitable for on-board compression. The effect of the several CCSDS 123 configuration parameters on the compression efficiency and hardware complexity is taken into consideration to provide flexibility, in such a way that the implementation can be adapted to different application scenarios. Synthesis results show a very low occupancy of 34% and a maximum frequency of 43 MHz on a space-qualified RTAX1000S. The benefits of the proposed implementation are further evidenced by a demonstrator implemented on a commercial prototyping board from Xilinx. Finally, a comparison with other FPGA implementations of on-board data compression algorithms is provided.
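To give a flavour of prediction-based MHI compression (a deliberately simplified sketch; the actual CCSDS 123 predictor is adaptive and uses a 3-D neighbourhood, not just the previous band), each sample can be predicted from the co-located sample in the preceding band, and the signed residual mapped to a non-negative integer ready for entropy coding:

```python
import numpy as np

def predict_residuals(cube):
    """Toy inter-band predictor: predict each sample from the co-located
    sample in the previous band, then zigzag-map the signed residual
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    out = np.empty_like(cube)
    out[0] = cube[0]                        # first band kept as-is
    for z in range(1, cube.shape[0]):
        delta = cube[z] - cube[z - 1]       # prediction residual
        out[z] = np.where(delta >= 0, 2 * delta, -2 * delta - 1)
    return out

cube = np.array([[[5]], [[7]], [[6]]])      # 3 bands, 1x1 spatial
mapped = predict_residuals(cube)            # band 0 kept; then 4 and 1
```

Because adjacent spectral bands are highly correlated, the mapped residuals concentrate near zero, which is what makes the subsequent entropy coding stage effective.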


Remote Sensing | 2018

A New Algorithm for the On-Board Compression of Hyperspectral Images

Raúl Guerra; Yubal Barrios; María Asunción Romero Díaz; Lucana Santos; Sebastián López; Roberto Sarmiento

Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not free of drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the Earth's surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increase in the data rate of new-generation sensors is making higher compression ratios ever more necessary, which calls for lossy compression techniques. A new transform-based lossy compression algorithm, named the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for the lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.
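As a generic illustration of transform-based lossy compression (HyperLCA's own transform is different; this sketch only shows how keeping a few spectral components reduces the data volume), one can retain the k strongest components of the pixel matrix via a truncated SVD:

```python
import numpy as np

def transform_compress(pixels, k):
    """Generic transform-based lossy compression sketch: keep only the
    k strongest spectral components of the (pixels x bands) matrix and
    reconstruct a rank-k approximation."""
    u, s, vt = np.linalg.svd(pixels, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]      # rank-k reconstruction

# Rank-1 data is reconstructed exactly from a single component.
pixels = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
approx = transform_compress(pixels, 1)
```

Only the k retained component vectors and per-pixel coefficients need to be stored or transmitted, so the rate drops roughly in proportion to k over the number of bands, at the cost of the discarded energy.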


Journal of Applied Remote Sensing | 2013

Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation

Lucana Santos; Enrico Magli; Raffaele Vitulli; Antonio Núñez; J.F. Lopez; Roberto Sarmiento

There is a pressing need for new hardware architectures that implement hyperspectral image compression algorithms on board satellites. Graphics processing units (GPUs) represent a very attractive option, offering the possibility to dramatically increase computation speed in applications that are data- and task-parallel. An algorithm for the lossy compression of hyperspectral images is implemented on a GPU using Nvidia's Compute Unified Device Architecture (CUDA). The parallelization strategy is explained, with emphasis on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary due to the existing data dependencies. Experimental results are obtained by comparing the performance of the GPU implementation with a single-threaded CPU implementation, showing speed-ups of up to 15.41×. A profiling of the algorithm is provided, demonstrating the high performance of the designed parallel entropy coding phase. The accuracy of the GPU implementation is presented, as well as the effect of the configuration parameters on performance. The convenience of using GPUs for on-board processing is demonstrated, and solutions are proposed to the potential difficulties encountered when accelerating hyperspectral compression algorithms, should space-qualified GPUs become a reality in the near future.
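The bit-packing data dependency mentioned above can be seen in a minimal sequential packer (an illustrative sketch, not the paper's implementation): each code's position in the output stream depends on the lengths of all preceding codes.

```python
def pack_bits(codes):
    """Pack (value, bit_length) variable-length codes into bytes.
    Inherently sequential: a code's output offset is the sum of all
    previous code lengths."""
    acc, nbits, out = 0, 0, bytearray()
    for value, length in codes:
        acc = (acc << length) | (value & ((1 << length) - 1))
        nbits += length
        while nbits >= 8:                  # emit full bytes as they fill up
            nbits -= 8
            out.append((acc >> nbits) & 0xFF)
    if nbits:                              # flush, zero-padding the tail
        out.append((acc << (8 - nbits)) & 0xFF)
    return bytes(out)

packed = pack_bits([(0b101, 3), (0b11111, 5)])   # one byte: 0b10111111
```

A common way to parallelize this stage on a GPU is to compute every code's output offset first with a parallel prefix sum over the code lengths, after which all codes can be written concurrently.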


IEEE International Conference on High Performance Computing, Data and Analytics | 2014

FPGA implementation of the hyperspectral Lossy Compression for Exomars (LCE) algorithm

Aday García; Lucana Santos; S. López; Gustavo Marrero Callicó; J.Fco López; Roberto Sarmiento

The increase in data rates and data volumes of present remote sensing payload instruments, together with the restrictions imposed by downlink connection requirements, represents both a challenge and a must in the field of data and image compression. This is especially true for hyperspectral images, in which reduction of both spatial and spectral redundancy is mandatory. Recently, the Consultative Committee for Space Data Systems (CCSDS) published the Lossless Multispectral and Hyperspectral Image Compression recommendation (CCSDS 123), a prediction-based technique resulting from the consensus of its members. Although this standard offers a good trade-off between coding performance and computational complexity, the appearance of future hyperspectral and ultraspectral sensors producing vast amounts of data demands further efforts from the scientific community to ensure optimal transmission to ground stations based on greater compression rates. Furthermore, hardware implementations with specific features to deal with radiation effects play an important role in achieving real-time applications. In this scenario, the Lossy Compression for Exomars (LCE) algorithm emerges as a good candidate with these characteristics. Its good quality/compression ratio together with its low complexity facilitates implementation on hardware platforms such as FPGAs or ASICs. In this work, the authors present the implementation of the LCE algorithm on an antifuse-based FPGA and the optimizations carried out to obtain the RTL description code using CatapultC, a high-level synthesis (HLS) tool. Experimental results show an area occupancy of 75% on an RTAX2000 FPGA from Microsemi, with an operating frequency of 18 MHz. Additionally, the power budget obtained is presented, giving an idea of the suitability of the proposed algorithm implementation for onboard compression applications.


Adaptive Hardware and Systems | 2013

FPGA implementation of a lossy compression algorithm for hyperspectral images with a high-level synthesis tool

Lucana Santos; José Francisco López; Roberto Sarmiento; Raffaele Vitulli

In this paper, we present an FPGA implementation of a novel adaptive and predictive algorithm for lossy hyperspectral image compression. This algorithm was specifically designed for on-board compression, where FPGAs are the most attractive and popular option, featuring low power consumption and high performance. However, the traditional RTL design flow is rather time-consuming. High-level synthesis (HLS) tools, like the well-known CatapultC, can help to shorten this time. Utilizing CatapultC, we obtain an FPGA implementation of the lossy compression algorithm directly from source code written in the C language, with a double motivation: demonstrating how well the lossy compression algorithm performs on an FPGA in terms of throughput and area, and at the same time showing how HLS is applied, in terms of source code preparation and CatapultC settings, to obtain an efficient hardware implementation in a relatively short time. Place-and-route results on a Virtex-5 5VFX130 show effective performance in terms of area (maximum device utilization of 14%) and frequency (80 MHz). A comparison with a previous FPGA implementation of a lossless-to-near-lossless algorithm is also provided. Results on a Virtex-4 4VLX200 show lower memory requirements and higher frequency for the LCE algorithm.


Conference on Design and Architectures for Signal and Image Processing | 2016

SystemC modelling of lossless compression IP cores for space applications

Lucana Santos; Ana Gomez; Pedro Hernandez-Fernandez; Roberto Sarmiento

In this paper, we perform Electronic System Level (ESL) modelling and verification of two lossless compression standard algorithms for space applications using the SystemC language. In particular, we present the architectures and SystemC descriptions of the CCSDS-121 universal lossless compressor and the CCSDS-123 lossless compressor for hyperspectral and multispectral images. Both algorithms were specifically designed to operate on board satellites, and they can be utilized as independent standalone compressors as well as jointly; in the latter case, CCSDS-121 performs the entropy coding stage of the CCSDS-123 compressor. The computational capabilities of the hardware available on a satellite are limited, and hence it is necessary to design hardware architectures that make it possible to execute the algorithms efficiently in terms of throughput, resource utilization and power consumption. On-board compression algorithms are usually implemented on ASICs or FPGAs that are tolerant to radiation. The main objective of this work is to describe models of the compressors in SystemC that enable the generation of specifications for a subsequent implementation phase, in which the algorithms will be described in a hardware description language (VHDL) that can be efficiently mapped onto space-qualified FPGAs. With the SystemC models, we explore the design space, refining the architecture and retrieving information about the performance limits of the cores, storage requirements, data dependencies and prospective hardware requirements of the later FPGA implementation. The described models also include connections to shared communication buses using transaction-level modelling (TLM), allowing their inclusion in an embedded system model that may contain a software co-processor as well as other processing cores. Additionally, the models are verified by creating SystemC testbenches that can be reused to verify the IP cores once described in VHDL.


IEEE International Conference on High Performance Computing, Data and Analytics | 2015

A novel hardware-friendly algorithm for hyperspectral linear unmixing

Raúl Guerra; Lucana Santos; Sebastián López; Roberto Sarmiento

Linear unmixing of hyperspectral images has rapidly become one of the most widely utilized tools for analyzing the content of hyperspectral images captured by state-of-the-art remote hyperspectral sensors. The unmixing process consists of three sequential steps: dimensionality estimation, endmember extraction and abundance computation. Within this procedure, the first two steps are by far the most demanding from a computational point of view, since they involve a large number of matrix operations. Moreover, the complex nature of these operations seriously hinders the hardware implementation of these two unmixing steps, leading to non-optimized implementations which are not able to satisfy the strict delay constraints imposed by applications under real-time or near real-time requirements. This paper uncovers a new algorithm which is capable of estimating the number of endmembers and extracting them from a given hyperspectral image with at least the same accuracy as state-of-the-art approaches while demanding a much lower computational effort, independently of the characteristics of the image under analysis. In particular, the proposed algorithm is based on the concept of orthogonal projections and performs the estimation of the number of endmembers and their extraction simultaneously, using simple operations which can also be easily parallelized. In this sense, it is worth mentioning that our algorithm does not perform complex matrix operations, such as matrix inversion or the extraction of eigenvalues and eigenvectors, which eases its subsequent hardware implementation.
The experimental results obtained with synthetic and real hyperspectral images demonstrate that the accuracy obtained with the proposed algorithm when estimating the number of endmembers and extracting them is similar to or better than that provided by well-known state-of-the-art algorithms, while the complexity of the overall process is significantly reduced.

Collaboration


Top co-authors of Lucana Santos, all affiliated with the University of Las Palmas de Gran Canaria:

Roberto Sarmiento
J.F. Lopez
Sebastián López
Aday García
Gustavo Marrero Callicó
Raúl Guerra
Ana Gomez
Antonio Núñez