Alaitz Zabala
Autonomous University of Barcelona
Publications
Featured research published by Alaitz Zabala.
International Journal of Geographical Information Science | 2012
Joan Masó; Xavier Pons; Alaitz Zabala
Spatial data infrastructure (SDI) actors have great expectations for the second-generation SDI currently under development. However, SDIs have many implementation problems at different levels that are delaying the development of the SDI framework. The aims of this article are to identify these difficulties, in the literature and based on our own experience, in order to determine how mature and useful the current SDI phenomena are. We can then determine whether a general reconceptualization is necessary or rather a set of technical improvements and good practices needs to be developed before the second-generation SDI is completed. This study is based on the following aspects: metadata about data and services, data models, data download, data and processing services, data portrayal and symbolization, and mass market aspects. This work aims to find an equilibrium between user-focused geoportals and web service interconnection (the user side vs. the server side). These deep reflections are motivated by a use case in the healthcare area in which we employed the Catalan regional SDI. The use case shows that even one of the best regional SDI implementations can fail to provide the required information and processes even when the required data exist. Several previous studies recognize the value of applying Web 2.0 and user participation approaches but few of these studies provide a real implementation. Another objective of this work is to show that it is easy to complement the classical, international standard-based SDI with a participative Web 2.0 approach. To do so, we present a mash-up portal built on top of the Catalan SDI catalogues.
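The mash-up portal described above is built on top of standard SDI catalogue queries. As an illustrative sketch only (the endpoint URL and the search term below are hypothetical placeholders, not the services used in the paper), an OGC CSW 2.0.2 catalogue such as those federated in a regional SDI can be queried over plain HTTP with a GetRecords request:

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Hypothetical CSW 2.0.2 endpoint; the paper's portal uses the Catalan SDI catalogues.
CSW_ENDPOINT = "https://example.org/csw"

params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "elementSetName": "summary",
    "resultType": "results",
    "constraintLanguage": "CQL_TEXT",
    "constraint_language_version": "1.1.0",
    "constraint": "AnyText LIKE '%health%'",  # illustrative search term
    "maxRecords": "10",
}

with urlopen(CSW_ENDPOINT + "?" + urlencode(params)) as resp:
    tree = ET.parse(resp)

# Print the Dublin Core titles of the matching catalogue records.
for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)
```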
Philosophical Transactions of the Royal Society A | 2012
Xiaoyu Yang; Jonathan D. Blower; Lucy Bastin; Victoria Lush; Alaitz Zabala; Joan Masó; Dan Cornford; Paula Díaz; Jo Lumsden
Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.
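The "metadata inheritance" mechanism mentioned above can be pictured as collection-level quality statements that individual dataset records fall back on when they carry no quality information of their own. The following minimal sketch illustrates that idea; the class and field names are illustrative and not the integrated model defined in the paper:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QualityInfo:
    # Illustrative quality elements; the paper's integrated model is richer.
    lineage: Optional[str] = None
    positional_accuracy_m: Optional[float] = None
    thematic_accuracy_pct: Optional[float] = None

@dataclass
class MetadataRecord:
    identifier: str
    quality: QualityInfo = field(default_factory=QualityInfo)
    parent: Optional["MetadataRecord"] = None  # e.g. the collection/series record

    def effective_quality(self, element: str):
        """Return a quality element for this record, inheriting from the
        parent record when the value is not documented locally."""
        value = getattr(self.quality, element)
        if value is None and self.parent is not None:
            return self.parent.effective_quality(element)
        return value

# A collection documents quality once; thousands of granules inherit it.
collection = MetadataRecord("EO:collection:42",
                            QualityInfo(lineage="Derived from Level-1 scenes",
                                        positional_accuracy_m=12.0))
granule = MetadataRecord("EO:granule:42/0001", parent=collection)

print(granule.effective_quality("positional_accuracy_m"))  # -> 12.0 (inherited)
```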
Journal of Remote Sensing | 2013
Alaitz Zabala; Xavier Pons
This study measures the effect of lossy image compression (JPEG 2000 and JPG) on the digital classification of crop areas. The results provide new insights into the influence of compression on the quality of the cartography produced. Both a multitemporal and a single-date classification approach were analysed. With the multitemporal approach, it is possible to use high compression ratios (CRs), up to 20:1 or even 100:1, and the overall accuracy of the classification is similar to that obtained with the original images. Moreover, the classified area is similar or even greater (fewer pixels are uncertain). For a single-date approach, it is only advisable to use 3D-JPEG 2000 at CRs up to 20:1. The optimum CR is also affected by landscape fragmentation (fragmented images tolerate less compression) and the classification method (hybrid classifiers are affected less than the maximum likelihood and minimum distance classifiers). Finally, classifications from compressed images have less ‘salt and pepper’ effect than those obtained from the originals, especially when JPEG 2000 (3D or not) is used.
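For reference, the compression ratio (CR) quoted throughout is simply the size of the original file divided by the size of the compressed file, so the CR limits reported above translate directly into file sizes. A small sketch of that arithmetic (the file paths are placeholders, and the limits below are only a rough reading of the recommendations in this abstract):

```python
import os

def compression_ratio(original_path: str, compressed_path: str) -> float:
    """CR = original file size / compressed file size."""
    return os.path.getsize(original_path) / os.path.getsize(compressed_path)

# Rough reading of the abstract above: a multitemporal classification tolerates
# CRs of 20:1 or even 100:1, whereas a single-date approach is only advisable
# up to 20:1 (with 3D-JPEG 2000).
cr = compression_ratio("scene_original.tif", "scene_compressed.jp2")  # placeholder paths
approach = "multitemporal"
limit = 100 if approach == "multitemporal" else 20
print(f"CR = {cr:.1f}:1, within recommended limit: {cr <= limit}")
```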
International Journal of Applied Earth Observation and Geoinformation | 2011
Alaitz Zabala; Xavier Pons
Lossy compression is being increasingly used in remote sensing; however, its effects on classification have scarcely been studied. This paper studies the implications of JPEG (JPG) and JPEG 2000 (J2K) lossy compression for image classification of forests in Mediterranean areas. The results explore the impact of compression on the images themselves as well as on the obtained classification. They indicate that classifications made with previously compressed radiometrically corrected images and topoclimatic variables are not negatively affected by compression, even at quite high compression ratios. Indeed, compression can be applied to images at a compression ratio (CR, the ratio between the size of the original file and the size of the compressed file) of 10:1 or even 20:1 (for both JPG and J2K). Nevertheless, the fragmentation of the study area must be taken into account: in less fragmented zones, high CRs are possible for both JPG and J2K, but in fragmented zones JPG is not advisable, and when J2K is used only a medium CR is recommended (3.33:1 to 5:1). Taking into account that J2K produces fewer artefacts at higher CRs, the study not only contributes optimum CR recommendations but also finds that the J2K compression standard (ISO 15444-1) performs better than JPG (ISO 10918-1) when applied to image classification. Although J2K is computationally more expensive, this is no longer a critical issue with current computer technology.
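Compressed inputs of this kind can be produced with common imaging libraries. The sketch below is illustrative only: the file names, the 10:1 target and the use of Pillow are assumptions for the example, not the tooling used in the study, which works with radiometrically corrected satellite imagery:

```python
from PIL import Image

# Placeholder single-band image; the study compresses corrected satellite bands.
img = Image.open("band_original.tif")

# JPEG 2000 at an approximate 10:1 compression ratio ("rates" quality mode).
img.save("band_cr10.jp2", format="JPEG2000",
         quality_mode="rates", quality_layers=[10], irreversible=True)

# Classic JPEG for comparison (its quality setting is not a direct CR control).
img.convert("L").save("band_jpg.jpg", format="JPEG", quality=75)
```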
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013
Alaitz Zabala; Anna Riverola; Ivette Serral; Paula Díaz; Victoria Lush; Joan Masó; Xavier Pons; Ted Habermann
Geospatial data have become a crucial input for the scientific community for understanding the environment and developing environmental management policies. The Global Earth Observation System of Systems (GEOSS) Clearinghouse is a catalogue and search engine that provides access to Earth observation metadata. However, metadata are often not easily understood by users, especially when presented in ISO XML encoding. The data quality information included in the metadata is essential for users to select the datasets that suit them. This work aims to help users understand the quality information held in metadata records and to present it to geospatial users in an understandable and comparable way. Thus, we have developed an enhanced tool (Rubric-Q) for visually assessing the metadata quality information and quantifying the degree of metadata population. Rubric-Q is an extension of a previous NOAA Rubric tool used as a metadata training and improvement instrument. The paper also presents a thorough assessment of the quality information by applying Rubric-Q to all dataset metadata records available in the GEOSS Clearinghouse. The results reveal that just 8.7% of the datasets have some quality element described in the metadata, 63.4% have some lineage element documented, and merely 1.2% have some usage element described.
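The kind of check Rubric-Q performs can be approximated by testing whether specific ISO 19139 elements are present in a metadata record. The sketch below flags quality, lineage and usage elements in a local XML file; the file name is a placeholder, and Rubric-Q itself scores many more metadata sections and operates on the GEOSS Clearinghouse records rather than local files:

```python
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"  # ISO 19139 'gmd' namespace

def quality_flags(metadata_file: str) -> dict:
    """Return presence flags for quality-related elements in one ISO 19139 record."""
    root = ET.parse(metadata_file).getroot()
    return {
        "quality": len(root.findall(f".//{{{GMD}}}DQ_DataQuality")) > 0,
        "lineage": len(root.findall(f".//{{{GMD}}}LI_Lineage")) > 0,
        "usage":   len(root.findall(f".//{{{GMD}}}MD_Usage")) > 0,
    }

print(quality_flags("dataset_metadata.xml"))  # placeholder file name
```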
Journal of Applied Remote Sensing | 2010
Alaitz Zabala; Jorge González-Conejero; Joan Serra-Sagristà; Xavier Pons
The aim of this work is to enhance, within the JPEG2000 framework, the coding performance obtained for images that contain regions without useful information, or without any information at all, here referred to as NODATA regions. In Geographic Information Systems (GIS) and in Remote Sensing (RS), NODATA regions arise due to several factors, such as geometric and radiometric corrections, atmospheric events, the overlapping of successive layers of information, etc. Most coding systems are not devised to consider these regions separately from the rest of the image, sometimes causing a loss in coding efficiency and in the post-processing applications. We propose two approaches that address this issue; the first technique (Average Data Region, ADR) is applied as a simple pre-processing step, and the second technique (Shape-Adaptive JPEG2000, SA-JPEG2000) modifies the coding system to avoid the regions without information. Experimental results, obtained on data from real applications and different scenarios, suggest that the proposed approaches can achieve a Signal-to-Noise Ratio improvement of about 8 dB (e.g., for SA-JPEG2000). Moreover, in a post-processing application such as digital classification, the best classification results are obtained when the proposed approaches, SA-JPEG2000 and ADR, are applied.
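The SNR figures above can be reproduced in spirit by comparing the original and decoded rasters while ignoring the NODATA regions. A minimal NumPy sketch (the arrays and the NODATA value are illustrative toy data, not the paper's test imagery):

```python
import numpy as np

def snr_db(original: np.ndarray, decoded: np.ndarray, nodata: float = 0.0) -> float:
    """Signal-to-Noise Ratio in dB, computed only over valid (non-NODATA) pixels."""
    valid = original != nodata
    signal = original[valid].astype(np.float64)
    error = signal - decoded[valid].astype(np.float64)
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(error ** 2))

# Toy example: a tiny "image" with a NODATA border that the metric ignores.
orig = np.array([[0, 0, 0, 0],
                 [0, 120, 130, 0],
                 [0, 125, 128, 0]], dtype=np.uint16)
deco = orig.copy()
deco[1, 1] = 118  # small coding error inside the valid region
print(f"SNR = {snr_db(orig, deco):.1f} dB")
```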
International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2008
Ian Blanes; Alaitz Zabala; Gerard Moré; Xavier Pons; Joan Serra-Sagristà
Classification of hyperspectral images is paramount to an increasing number of user applications. With the advent of more powerful sensing technology, acquired images impose larger computational and memory requirements, which has led to the development of compression techniques to alleviate transmission and storage needs. Classification of compressed images is addressed in this paper. The compression approaches considered exploit the spectral correlation of hyperspectral images, together with simpler approaches. Experiments have been performed on a large hyperspectral CASI image with 72 bands. Both coding and classification results indicate that the performance of the 3D-DWT is superior to the other two lossy coding approaches, providing consistent improvements of more than 10 dB in the coding process, and maintaining both the global accuracy and the percentage of classified area in the classification process.
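The 3D-DWT referred to above treats a hyperspectral cube as a single three-dimensional signal, so the transform decorrelates the spectral dimension together with the two spatial ones. A minimal sketch with PyWavelets; the random cube stands in for the 72-band CASI image, and the wavelet choice and decomposition level are illustrative assumptions:

```python
import numpy as np
import pywt

# Stand-in for a hyperspectral cube: (bands, rows, cols).
cube = np.random.default_rng(0).integers(0, 4096, size=(72, 64, 64)).astype(np.float32)

# Multilevel 3D discrete wavelet transform over all three axes at once.
coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)

# coeffs[0] is the coarse approximation; the rest are detail coefficient dicts.
print("approximation shape:", coeffs[0].shape)

# Near-perfect reconstruction (up to float rounding), before any quantization
# or entropy-coding step of an actual compressor.
reconstructed = pywt.waverecn(coeffs, wavelet="db2")
print("max abs error:", float(np.max(np.abs(reconstructed[:72, :64, :64] - cube))))
```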
Journal of Electrical and Computer Engineering | 2012
Alaitz Zabala; Raffaele Vitulli; Xavier Pons
This study measures the impact of both on-board and user-side lossy image compression (CCSDS-IDC and JPEG 2000) on image quality and classification. The Sentinel-2 Image Performance Simulator was modified to include these compression algorithms in order to produce Sentinel-2 simulated images with on-board lossy compression. A multitemporal set of Landsat images was used for the user-side compression scenario in order to study a crop area. The performance of several compressors was evaluated by computing the Signal-to-Noise Ratio (SNR) of the compressed images. The overall accuracy of land-cover classifications of these images was also evaluated. The results show that on-board CCSDS performs better than JPEG 2000 in terms of compression fidelity, especially at lower compression ratios (from CR 2:1 up to CR 4:1, i.e., 8 to 4 bpppb). The effect of compression on land cover classification follows the same trends, but compression fidelity may not be enough to assess the impact of compression on end-user applications. If compression is applied by end-users, the results show that 3D-JPEG 2000 obtains higher compression fidelity than CCSDS and JPEG 2000 with other parameterizations. This is due to the high dynamic range of the images (representing reflectances × 10000), which JPEG 2000 is able to exploit better.
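The CR-to-rate conversion used above follows directly from the bit depth of the input: bits per pixel per band (bpppb) is the original bit depth divided by the CR, so 16-bit data at CR 2:1 gives 8 bpppb and at CR 4:1 gives 4 bpppb, matching the figures quoted in the abstract. As a one-line sanity check:

```python
def bpppb(bit_depth: int, cr: float) -> float:
    """Bits per pixel per band after compression at ratio cr:1."""
    return bit_depth / cr

# 16-bit inputs, as implied by the "CR 2:1 -> 8 bpppb" figure quoted above.
print(bpppb(16, 2), bpppb(16, 4))  # 8.0 4.0
```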
International Journal of Digital Earth | 2017
Juan José Vidal-Macua; Alaitz Zabala; Miquel Ninyerola; Xavier Pons
Global or regional land cover change on a decadal time scale can be studied at a high level of detail thanks to the availability of remote sensing data such as that provided by Landsat. However, this goal presents three main technical challenges. First, the generation of land cover maps without reference data (backdating) is problematic. Second, it is important to maintain high accuracies in land cover change map products while keeping a reasonably rich legend within each map. Third, a high level of automation is necessary to aid the management of large volumes of data. This paper describes a robust methodology for processing time series of satellite data over large spatial areas. The methodology includes a retrospective analysis used for the generation of training and test data for historical periods lacking reference information. This methodology was developed in the context of research on global change in the Iberian Peninsula. In this study we selected two scenes covering geographic regions that are representative of the Iberian Peninsula. For each scene, we present the results of two classifications (the 1985–1989 and 2000–2004 quinquennia), each with a legend of 13 categories. An overall accuracy of over 92% was obtained for all four maps.
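The overall accuracy figure reported above is the standard confusion-matrix statistic: correctly classified test samples divided by all test samples. A minimal sketch (the 3x3 matrix is toy data, not the 13-class matrices of the paper):

```python
import numpy as np

def overall_accuracy(confusion: np.ndarray) -> float:
    """Overall accuracy = trace of the confusion matrix / total number of samples."""
    return float(np.trace(confusion) / confusion.sum())

# Toy 3-class confusion matrix (rows: reference, columns: predicted).
cm = np.array([[50,  2,  1],
               [ 3, 45,  2],
               [ 0,  4, 43]])
print(f"OA = {overall_accuracy(cm):.1%}")  # 92.0% for this toy example
```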
International Journal of Digital Earth | 2014
Joan Masó; Xavier Pons; Alaitz Zabala
The hypermap concept was introduced in 1992 as a way to hyperlink geospatial features to text, multimedia or other geospatial features. Since then, the concept has been used in several applications, although it has been found to have some limitations. On the other hand, Spatial Data Infrastructures (SDIs) adopt diverse and heterogeneous service-oriented architectures (SOAs). They are developed by different standards bodies and are generally disconnected from mass-market web solutions. This work expands the hypermap concept to overcome its limitations and harmonise it with a geospatial resource-oriented architecture (ROA), connecting it to the semantic web and generalising it to the World Wide Hypermap (WWH) as a tool for building a single ‘Digital Earth’. Global identifiers, dynamic links, link purposes and resource management capabilities are introduced as a solution that orchestrates data, metadata and data access services in a homogeneous way. This is achieved by providing a set of rules using the current Internet paradigm formalised in the REpresentational State Transfer (REST) architecture and combining it with existing Open Geospatial Consortium (OGC) and International Organization for Standardization (ISO) standards. A reference implementation is also presented, along with the strategies needed to implement the WWH, which mainly consist of a set of additions to current Geographic Information System (GIS) products and a RESTful server that mediates between the Internet and local GIS applications.
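One way to picture the WWH proposal is a plain RESTful resource whose representation carries a global identifier plus purpose-labelled links to data, metadata and related features. The sketch below is illustrative only: the identifiers, URLs and field names are invented for the example and are not the encoding defined in the paper or in the referenced OGC/ISO standards:

```python
# Illustrative hypermap resource: a globally identified geospatial feature whose
# links state their purpose, so a client can follow them like ordinary web links.
hypermap_resource = {
    "id": "https://example.org/wwh/features/hospital-7",  # global identifier
    "type": "Feature",
    "links": [
        {"href": "https://example.org/wms?layers=hospitals", "purpose": "portrayal"},
        {"href": "https://example.org/csw?request=GetRecordById&id=hospital-7", "purpose": "metadata"},
        {"href": "https://example.org/wwh/features/catchment-7", "purpose": "related-feature"},
    ],
}

# A client resolves the resource by its identifier and then dereferences the
# link whose purpose it needs (e.g. "metadata"), much as with HTML hyperlinks.
metadata_link = next(l["href"] for l in hypermap_resource["links"] if l["purpose"] == "metadata")
print(metadata_link)
```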