Alberto G. Villafranca
Polytechnic University of Catalonia
Publications
Featured research published by Alberto G. Villafranca.
Journal of Applied Remote Sensing | 2010
Jordi Portell de Mora; Alberto G. Villafranca; Enrique García-Berro
More than a decade has passed since the Consultative Committee for Space Data Systems (CCSDS) made its recommendation for lossless data compression. The CCSDS standard is commonly used for scientific missions because it is a general-purpose lossless compression technique with a low computational cost that yields acceptable compression ratios. At the core of this compression algorithm is the Rice coding method. Its performance rapidly degrades in the presence of outliers, as the Rice coder is conceived for noiseless data following geometric distributions. To overcome this problem we present here a new entropy coder, the so-called Prediction Error Coder (PEC), as well as its fully adaptive version (FAPEC), which we show is a reliable alternative to the CCSDS standard. We show that PEC and FAPEC achieve high compression ratios even when large numbers of outliers are present in the data. This is done by testing our compressors with synthetic and real data, comparing the compression ratios and processor requirements with those obtained using the CCSDS standard.
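A minimal sketch (ours, not the authors' code) of why a Rice coder is so sensitive to outliers: the quotient of each value is written in unary, so the code length grows linearly with the value itself.

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code of a non-negative integer n with parameter k:
    the quotient n >> k in unary (ones, then a terminating 0),
    followed by the k least-significant bits of n."""
    q = n >> k
    low = format(n & ((1 << k) - 1), "0{}b".format(k)) if k > 0 else ""
    return "1" * q + "0" + low

print(len(rice_encode(13, 4)))    # typical sample: 5 bits
print(len(rice_encode(5000, 4)))  # a single outlier: 317 bits
```

One unexpected value thus costs hundreds of bits, which is exactly the degradation PEC and FAPEC are designed to avoid.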
Data Compression, Communications and Processing | 2009
Jordi Portell; Alberto G. Villafranca; Enrique García-Berro
More than a decade has passed since the Consultative Committee for Space Data Systems (CCSDS) made its recommendation for lossless data compression. The CCSDS standard is commonly used for scientific missions because it is a general-purpose lossless compression technique with a low computational cost that yields acceptable compression ratios. At the core of this compression algorithm is the Rice coding method. Its performance rapidly degrades in the presence of noise and outliers, as the Rice coder is conceived for noiseless data following geometric distributions. To overcome this problem we present here a new coder, the so-called Prediction Error Coder (PEC), as well as its fully adaptive version (FAPEC), which we show is a reliable alternative to the CCSDS standard. We show that PEC and FAPEC achieve large compression ratios even when high levels of noise are present in the data. This is done by testing our compressors with synthetic and real data, and comparing the compression ratios and processor requirements with those obtained using the CCSDS standard.
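For context, the second stage of CCSDS 121.0 adapts by choosing, for each short block of samples, the coding option with the smallest output. A simplified sketch (ignoring the standard's zero-block and other special options) shows how a single outlier forces a poor parameter on the whole block:

```python
def rice_block_bits(block, k):
    # Total Rice-coded size of a block: unary quotient + stop bit + k bits each.
    return sum((n >> k) + 1 + k for n in block)

def best_k(block, k_max=13):
    # Per-block adaptation in the spirit of CCSDS 121.0 (simplified).
    return min(range(k_max + 1), key=lambda k: rice_block_bits(block, k))

block = [3, 1, 0, 7, 2, 4, 1, 6]   # well-behaved prediction errors
print(best_k(block), rice_block_bits(block, best_k(block)))  # k=1, 26 bits
outl = block + [4096]              # one radiation hit / outlier
print(best_k(outl), rice_block_bits(outl, best_k(outl)))     # k=8, 97 bits
```

Even with perfect per-block adaptation, the one outlier drags the whole block from roughly 3 to roughly 11 bits per sample.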
Data Compression, Communications and Processing | 2010
Alberto G. Villafranca; Iu Mora; Patrícia Ruiz-Rodríguez; Jordi Portell; Enrique García-Berro
The Global Positioning System (GPS) has long been used as a scientific tool, and it has become a very powerful technique in domains like geophysics, where it is commonly used to study the dynamics of a large variety of systems, such as glaciers and tectonic plates. In these cases, the large distances between receivers as well as their remote locations usually pose a challenge for data transmission. The standard format for scientific applications is a compressed RINEX file, a raw data format which allows post-processing. Its associated compression algorithm is based on a pre-processing stage followed by a commercial data compressor. In this paper we present a new compression method which achieves better compression ratios with faster operation. We have improved the pre-processing stage, split the resulting file into two, and applied the most appropriate compressor to each file. FAPEC, a highly resilient entropy coder, is applied to the observables file. The results obtained so far demonstrate that it is possible to obtain average compression gains of about 35% with respect to the original compressor.
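A schematic of the split-and-compress idea described above; the helper names are hypothetical, the record classification is deliberately crude (real RINEX parsing is far more involved), and zlib merely stands in for the two backends, since FAPEC, which the paper applies to the observables stream, is not a public library.

```python
import zlib

def split_rinex(lines):
    """Crudely separate textual records (headers, comments) from
    numeric observable records; a stand-in for the real splitter."""
    text, obs = [], []
    for line in lines:
        (obs if line[:1].isdigit() else text).append(line)
    return "\n".join(text), "\n".join(obs)

def compress_streams(lines):
    text, obs = split_rinex(lines)
    # Each stream goes to the compressor best suited to it;
    # in the paper, the observables stream is fed to FAPEC.
    return zlib.compress(text.encode()), zlib.compress(obs.encode())
```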
Data Compression, Communications and Processing | 2010
Marcial Clotet; Jordi Portell; Alberto G. Villafranca; Enrique García-Berro
The Consultative Committee for Space Data Systems (CCSDS) recommends the use of a two-stage strategy for lossless data compression in space. At the core of the second stage is the Rice coding method. The Rice compression ratio rapidly decreases in the presence of noise and outliers, since this coder is specifically conceived for noiseless data following geometric distributions. This, in turn, makes the CCSDS recommendation too sensitive to outliers in the data, leading to non-optimal ratios in realistic scenarios. In this paper we propose to substitute the Rice coder of the CCSDS recommendation with a subexponential coder. We show that this solution offers high compression ratios even when large amounts of noise are present in the data. This is done by testing both compressors with synthetic and real data. The performance is actually similar to that obtained with the FAPEC coder, although with slightly higher processing requirements. Therefore, this solution appears to be a simple improvement that can be made to the current CCSDS standard with an excellent return.
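A sketch of a subexponential coder in the Howard-Vitter style (assuming the commonly cited definition; the paper's exact variant may differ): it matches the Rice code for small values, but its code length grows only logarithmically for outliers.

```python
def subexp_encode(n: int, k: int) -> str:
    """Subexponential code: Rice-like for n < 2**k, Elias-gamma-like
    beyond, so an outlier costs O(log n) bits instead of O(n)."""
    if n < (1 << k):
        b, u = k, 0
    else:
        b = n.bit_length() - 1          # floor(log2(n))
        u = b - k + 1
    low = format(n & ((1 << b) - 1), "0{}b".format(b)) if b > 0 else ""
    return "1" * u + "0" + low          # u in unary, then b low-order bits

print(len(subexp_encode(13, 4)))    # 5 bits, same as Rice with k=4
print(len(subexp_encode(5000, 4)))  # 22 bits, versus 317 for Rice
```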
Archive | 2012
Jordi Portell; Alberto G. Villafranca; Enrique García-Berro
Many data compression systems rely on a final stage based on an entropy coder, generating short codes for the most probable symbols. Imaging, multispectroscopy and hyperspectroscopy are just some examples, but the space mission concept covers many other fields. In some cases, especially when the available on-board processing power is very limited, a generic data compression system with a very simple pre-processing stage could suffice. The Consultative Committee for Space Data Systems made a recommendation on lossless data compression in the early 1990s, which has been successfully used in several missions so far owing to its low computational cost and acceptable compression ratios. Nevertheless, its simple entropy coder cannot perform optimally when large amounts of outliers appear in the data, which can be caused by noise, prompt particle events, or artifacts in the data or in the pre-processing stage. Here we discuss the effect of outliers on the compression ratio and we present efficient solutions to this problem. These solutions are not only alternatives to the CCSDS recommendation, but can also be used as the entropy coding stage of more complex systems, such as image or spectroscopy compression.
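The effect of outliers is easy to quantify. A small experiment of our own (not taken from the chapter) contaminates geometric-like data with 1% outliers and compares the best achievable single-parameter Rice rate against the empirical entropy:

```python
import random
from collections import Counter
from math import log2

def entropy_bits(xs):
    """Empirical Shannon entropy in bits/sample."""
    c = Counter(xs)
    return -sum(v / len(xs) * log2(v / len(xs)) for v in c.values())

def rice_rate(xs, k):
    """Average Rice code length (bits/sample) for parameter k."""
    return sum((x >> k) + 1 + k for x in xs) / len(xs)

random.seed(1)
clean = [int(random.expovariate(0.5)) for _ in range(4096)]
# Replace ~1% of the samples with large outliers (e.g. particle hits).
dirty = [x + 50000 if random.random() < 0.01 else x for x in clean]

for name, xs in (("clean", clean), ("1% outliers", dirty)):
    best = min(rice_rate(xs, k) for k in range(16))
    print(f"{name}: entropy {entropy_bits(xs):.2f} bits/sample, "
          f"best Rice rate {best:.2f} bits/sample")
```

With only 1% of the samples corrupted, the entropy barely moves, yet the best Rice rate jumps from near-entropy to several times the entropy.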
Data Compression Conference | 2010
Alberto G. Villafranca; Jordi Portell; Enrique García-Berro
The scientific instruments included in modern space missions require high compression ratios in order to downlink all the acquired data to the ground. In many cases, this must be achieved without losses, and the available processing power is modest. Algorithms requiring large amounts of data for their optimum operation cannot be used owing to the limited reliability of the communications channel. Existing methods for lossless data compression often have difficulties in fulfilling such tight requirements. We present a method for the development of lossless compression systems that achieve high compression ratios at a low processing cost while guaranteeing a reliable downlink. This is done using a two-stage compressor, with an adequate pre-processing stage followed by an entropy coder. The pre-processor should be tailored for each case and carefully evaluated. For the second stage, we analyze some existing solutions and we present a new entropy coder, which has comparable or even better performance than that offered by most coders and guarantees high ratios in the presence of outliers. Finally, we apply this method to the case of the Gaia mission and present the results obtained.
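Schematically, the two-stage structure described above looks as follows. This is a hypothetical minimal version, not the Gaia compressor: the predictor is a trivial previous-sample one, and the entropy coder is left as a pluggable second stage.

```python
def delta_predict(samples):
    """Stage 1 (pre-processing): trivial previous-sample predictor;
    a real mission tailors this stage to each instrument."""
    prev, errors = 0, []
    for s in samples:
        errors.append(s - prev)
        prev = s
    return errors

def zigzag(e):
    """Fold signed errors to non-negative: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return (e << 1) if e >= 0 else -(e << 1) - 1

def two_stage_compress(samples, entropy_coder):
    """Stage 2: any entropy coder (Rice, subexponential, PEC/FAPEC, ...)."""
    return "".join(entropy_coder(zigzag(e)) for e in delta_predict(samples))
```

The point of the method is that the first stage concentrates the signal into small prediction errors, while the guarantee against outliers must come from the second stage.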
Adaptive Hardware and Systems | 2010
Alberto G. Villafranca; Shan Mignot; Jordi Portell; Enrique García-Berro
The instruments used in modern space missions require increasing amounts of telemetry resources to download the acquired data to the ground. Data compression helps to mitigate this problem and, therefore, is currently seen as a mandatory stage for most missions, although the available on-board processing power is often modest. In many cases, data compression must be performed without losses. FAPEC is a lossless data compression algorithm that typically offers better ratios than the CCSDS 121.0 recommendation on realistic data sets. Its compression efficiency is higher than 90% of the Shannon limit in most cases, even in the presence of large amounts of noise and outliers. FAPEC has been successfully implemented in software, and its low-complexity algorithm also seemed suitable for a hardware implementation. In this paper we describe a prototype FPGA implementation developed for the antifuse-based, radiation-hardened Actel RTAX family. We have verified that FAPEC can be easily implemented in hardware without requiring an external memory. The prototype achieves an initial throughput of 32 Mbit/s with a complexity of 120 kgates, making it a compact and robust solution for generic lossless compression. Finally, we discuss potential improvements that could easily boost the performance beyond the barrier of 100 Mbit/s.
Instrumentation Viewpoint | 2016
David Amblas; Jordi Portell de Mora; Xavier Rayo; Alberto G. Villafranca; Enrique García-Berro Montilla; Miquel Canals
Multibeam echosounders can generate vast amounts of data when recording the complete water column, which poses logistic, economic and technical challenges. Lossy data compression can reduce data size by up to one or two orders of magnitude, but often at the expense of significant image distortion. Lossless compression ratios tend to be modest and come at a high computing cost. In this work we test a high-performance data compression algorithm, FAPEC, initially developed for space data communications with low computing requirements. FAPEC provides good compression ratios and supports tailored pre-processing stages. Here we show its advantages over standard and high-end lossless compression solutions currently available, both in terms of ratios and speed.
Data Compression, Communications and Processing | 2011
Alberto G. Villafranca; Shan Mignot; Jordi Portell; Enrique García-Berro
Future space missions are based on a new generation of instruments. These missions face a serious constraint in the telemetry system, which cannot downlink the large volume of data generated. Hence, data compression algorithms are often mandatory in space, despite the modest processing power usually available on board. We present here a compact solution implemented in hardware for such missions. FAPEC is a lossless compressor which typically outperforms the CCSDS 121.0 recommendation on realistic data sets. With efficiencies higher than 90% of the Shannon limit in most cases, even in the presence of noise or outliers, FAPEC has been successfully validated in its software version as a robust low-complexity alternative to the recommendation. This work describes the FAPEC implementation on an FPGA, targeting the space-qualified Actel RTAX family. We prove that FAPEC is hardware-friendly and that it does not require external memory. We also verify the correct operation of the prototype at an initial throughput of 32 Mbit/s with very low power consumption (about 20 mW). Finally, we discuss further potential applications of FAPEC, and we set the basis for the improvements that will boost its performance beyond the 100 Mbit/s level.
SPIE Newsroom | 2011
Jordi Portell de Mora; Alberto G. Villafranca; Enrique García-Berro
New space-mission concepts often require the generation of large amounts of data, but the capacity of communications channels has not increased proportionally. Thus, data compression has become a crucial aspect in designing new missions and instruments. In addition, many modern ground-based systems that must transfer impressive amounts of data, both between distant locations and within local networks of high-performance computers, are currently in the planning or operation stages. These systems will also benefit from highly efficient data compressors. Existing data-compression solutions either require large amounts of computational resources or are unable to efficiently compress unexpected values that may be found in data streams. This applies to general-purpose compressors that are based on dictionary coding [1] (such as .zip or .rar), which additionally require excessively long data blocks for adequate operation. They are, therefore, not suitable for use onboard satellites. Even in ground-based systems involving high throughputs, these solutions are inefficient. Some alternative codes, including arithmetic [2], range [3], and Huffman [4], offer optimal or close-to-optimal efficiencies, but at the price of excessive computational loads. Currently, the solution generally adopted for space systems [5] is based on two-stage data processing, where the data is first pre-processed (often using a data predictor), followed by coding of the prediction errors with some simple entropy coder. Although this is an appropriate solution, it is too sensitive to outliers in the data stream (i.e., values outside the expected statistical distribution) [6]. The most problematic situation is encountered when the compressor receives values that are much larger than expected, which often leads to a significant decrease in the compression ratio. This occurs frequently for space-based…
Figure 1. Compression efficiency of Rice-Golomb and prediction error coder (PEC) codes for discrete Laplacian distributions, using only three calibration points (at data entropies of 3, 5, and 11 bits/sample).