Stephan Kiemle
German Aerospace Center
Publication
Featured research published by Stephan Kiemle.
Journal of Geophysical Research | 2011
Diego Loyola; M. E. Koukouli; Pieter Valks; Dimitris Balis; Nan Hao; M. Van Roozendael; Robert Spurr; Walter Zimmer; Stephan Kiemle; Christophe Lerot; J.-C. Lambert
The Global Ozone Monitoring Experiment-2 (GOME-2) was launched on EUMETSAT's MetOp-A satellite in October 2006. This paper is concerned with the retrieval algorithm GOME Data Processor (GDP) version 4.4, used by the EUMETSAT Satellite Application Facility on Ozone and Atmospheric Chemistry Monitoring (O3M-SAF) for the operational generation of GOME-2 total ozone products. GDP 4.4 is the latest version of the GDP 4.0 algorithm, which is employed for the generation of the official Level 2 total ozone and other trace gas products from GOME and SCIAMACHY. Here we focus on the enhancements introduced in GDP 4.4: improved cloud retrieval algorithms including detection of Sun glint effects, a correction for intracloud ozone, better treatment of snow and ice conditions, accurate radiative transfer modeling for large viewing angles, and elimination of scan angle dependencies inherited from Level 1 radiances. Furthermore, the first global validation results for 3 years (2007–2009) of GOME-2/MetOp-A total ozone measurements, using Brewer and Dobson measurements as references, are presented. The GOME-2/MetOp-A total ozone data obtained with GDP 4.4 slightly underestimate ground-based ozone by about 0.5% to 1% over the middle latitudes of the Northern Hemisphere and slightly overestimate it by around 0.5% over the middle latitudes of the Southern Hemisphere. Over high latitudes in the Northern Hemisphere, GOME-2 total ozone has almost no offset relative to Dobson readings, while over high latitudes in the Southern Hemisphere GOME-2 exhibits a small negative bias below 1%. For tropical latitudes, GOME-2 measures on average 0% to 2% lower ozone than Dobson measurements.
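The validation figures quoted above are mean relative differences between satellite and ground-based total ozone columns. As a minimal sketch of that comparison (the function name and the collocated column values are hypothetical, not the paper's data):

```python
def relative_bias_percent(satellite, ground):
    """Mean relative difference (%) of satellite vs. ground-based total ozone.

    Positive values mean the satellite overestimates the ground reference.
    """
    diffs = [100.0 * (s - g) / g for s, g in zip(satellite, ground)]
    return sum(diffs) / len(diffs)

# Hypothetical collocated total ozone columns in Dobson units
gome2 = [298.0, 305.0, 310.0]
dobson = [300.0, 308.0, 312.0]
print(round(relative_bias_percent(gome2, dobson), 2))  # → -0.76
```

A result of -0.76 would read as a roughly 0.8% underestimation relative to the ground stations, the same convention as the biases reported in the abstract.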
IEEE Transactions on Geoscience and Remote Sensing | 2009
Meinhard Wolfmüller; Daniele Dietrich; Edgar Sireteanu; Stephan Kiemle; Eberhard Mikusch; Martin Bottcher
The payload ground segment (PGS) for the recently launched German radar satellite TerraSAR-X performs the operational data management of the acquired satellite data. This comprises well-known functions such as reception, systematic and on-demand processing, archiving and cataloguing, ordering, and dissemination of digital Earth-observation products. In addition, it comprises new functions such as large-scale multimode acquisition ordering by users, integration with a commercial service segment, and new interfaces and workflows within the complete ground segment. The TerraSAR-X PGS is based on the Data Information and Management System (DIMS), the multimission data-handling infrastructure of the German Remote Sensing Data Center (DFD) at the German Aerospace Center. The development and integration of the new functions and complex workflows for TerraSAR-X were achieved and successfully tested on time. After supporting the five-month commissioning phase, the system is now operational. As an intended side effect, the TerraSAR-X PGS serves in several respects as a pattern to be reused for upcoming missions, thus substantially reducing overall development costs. This paper investigates the features of the TerraSAR-X PGS that enable this reuse in a multimission environment. It summarizes the enhancements and extensions of DIMS made to support the TerraSAR-X mission. Special emphasis is placed on the implementation of the request workflow initiated by user orders and the corresponding data flow within the distributed DFD multimission facility.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2009
Torsten Heinen; Stephan Kiemle; B. Buckl; Eberhard Mikusch
This paper describes the motivation, requirements, and challenges of integrating a geospatial infrastructure, based on standardized web services, into an earth observation (EO) data library. The design of harmonized data and information models across the EO and geospatial communities is a precondition for interoperability at the metadata, data, and semantic levels. A major challenge lies in raising awareness that interoperability is essential for the interdisciplinary use of EO data in Geographic Information System (GIS) and value-adding services.
European Conference on Research and Advanced Technology for Digital Libraries | 2002
Stephan Kiemle
The German Remote Sensing Data Center (DFD) has developed a digital library for the long-term management of earth observation data products. This Product Library is a central part of DFD's multi-mission ground segment Data Information and Management System (DIMS) and has been in successful operation since 2000. Its data model is regularly extended to support the products of upcoming earth observation missions. The Product Library implements middleware that fills the gap between application-level object data models and physical storage structures such as a digital robot archive with hierarchical storage management. This paper presents the principles of the Product Library middleware and its application in the specific earth observation context.
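The middleware role described above can be pictured as an object interface over an opaque archive. The following toy sketch is an illustration only: the `Product` and `ProductLibrary` names and the `archive://` reference scheme are assumptions for this example, not DIMS APIs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Product:
    # Application-level view of an EO data product: descriptive metadata
    # plus a reference to the archived payload, not the payload itself.
    product_id: str
    mission: str
    acquisition_date: str
    storage_ref: Optional[str] = None

class ProductLibrary:
    """Toy middleware: hides the physical archive behind an object interface."""

    def __init__(self):
        self._catalogue = {}   # metadata index, keyed by product id
        self._archive = {}     # stand-in for a hierarchical storage system

    def ingest(self, product: Product, payload: bytes) -> str:
        ref = f"archive://{product.mission}/{product.product_id}"
        self._archive[ref] = payload
        product.storage_ref = ref
        self._catalogue[product.product_id] = product
        return ref

    def retrieve(self, product_id: str) -> bytes:
        product = self._catalogue[product_id]
        return self._archive[product.storage_ref]

# Illustrative use with made-up identifiers
lib = ProductLibrary()
entry = Product("TSX1_SAR_001", "TerraSAR-X", "2008-01-15")
lib.ingest(entry, b"\x00\x01")
print(entry.storage_ref)  # → archive://TerraSAR-X/TSX1_SAR_001
```

The point of the sketch is the separation of concerns: callers work with product objects, while the physical storage layout stays an internal detail that can change without touching the application-level data model.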
IEEE Geoscience and Remote Sensing Magazine | 2016
Stephan Kiemle; Katrin Molch; Stephan Schropp; Nicolas Weiland; Eberhard Mikusch
The German Satellite Data Archive (D-SDA) at the German Aerospace Center (DLR) has been managing large-volume Earth-observation (EO) data in the context of EO-mission payload ground segments (PGSs) for more than two decades. Hardware, data management, processing, user access, long-term preservation, and data exploitation expertise are under one roof and interact closely. Upcoming EO-mission PGSs benefit as much from this comprehensive expertise, close interaction, and integrated infrastructure as do in-house scientific application projects requiring access, processing, and archiving of large-volume EO data. Using a number of examples, we demonstrate how EO data life cycles benefit from the proximity of data management and application scientists and from the extensive operational experience gathered over time.
Archive | 2014
Stephan Kiemle; Katrin Molch; Stephan Schropp; Nicolas Weiland; Eberhard Mikusch
This paper presents a near-real-time multi-GPU accelerated solution of the ωk algorithm for Synthetic Aperture Radar (SAR) data focusing in Stripmap SAR mode. Starting from input raw data, the algorithm subdivides it into a grid of a configurable number of bursts along track. Multithreaded CPU-side support is provided to handle each graphics device in parallel. Each burst is then assigned to a separate GPU and processed through the range compression, Stolt mapping via chirp-Z, and azimuth compression steps. We prove the efficiency of our algorithm using Sentinel-1 raw data (approx. 3.3 GB) on a commodity graphics card; the single-GPU solution is approximately 4x faster than the industrial multi-core CPU implementation (General ACS SAR Processor, GASP), without significant loss of quality. Using a multi-GPU system, the algorithm is approximately 6x faster with respect to the CPU processor.

For decades, field work in the case of disasters on the Earth's surface, such as floods, fires, or earthquakes, has been supported by the analysis of remotely sensed data. In recent years, the monitoring of vehicles, buildings, or areas fraught with risk has become another major task for satellite-based crisis intervention. Since these scenarios are unforeseen and time-critical, they require a fast and well-coordinated reaction. If useful information is extracted from image data in real time directly on board a spacecraft, the time span between image acquisition and an appropriate reaction can be shortened significantly. Furthermore, on-board image analysis allows data of minor interest, e.g. cloud-contaminated scenes, to be discarded and/or treated with lower priority, leading to an optimized usage of storage and downlink capacity. This paper describes the modular application framework of VIMOS, an on-board image processing experiment for remote sensing applications.
Special focus is on resource management, safety, and modular commandability.

Gaia is an ESA cornerstone mission, which was successfully launched in December 2013 and commenced operations in July 2014. Within the Gaia Data Processing and Analysis Consortium, Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources and nearly 4 billion associated time series (photometric, spectrophotometric, and spectroscopic), encoding information in over 800 billion observations during the 5 years of the mission, resulting in a petabyte-scale analytical problem. In this article, we briefly describe the solutions we developed to address the challenges of time series variability analysis: from the structure of a distributed, data-oriented scientific collaboration to architectural choices and the specific components used. Our approach is based on open-source components, with a distributed, partitioned database at the core, to handle ingestion, distributed processing, analysis, results, and export incrementally within a constrained time window.

The seamless mosaicing of massive very high resolution imagery addresses several aspects of big data from space. Data volume is directly proportional to the size of the input data, on the order of several terapixels for a continent. Data velocity derives from the fact that the input data is delivered over several years to meet maximum cloud contamination constraints with the considered satellites. Data variety results from the need to collect and integrate various ancillary data for cloud detection, land/sea mask delineation, and adaptive colour balancing.
This paper details how these three aspects of big data are handled and illustrates them with the creation of a seamless pan-European mosaic from 2.5 m imagery (Land Monitoring/Urban Atlas Copernicus CORE 03 data set).

The current development of satellite imagery means that a great volume of globally acquired images has to be understood quickly and precisely. Processing this large quantity of information requires unsupervised algorithms to fulfil these tasks. Change detection is one of the main issues in the analysis of satellite image time series (SITS). In this paper, we propose a method to analyze changes in SITS based on binary descriptors and on the Hamming distance, regarded as a similarity metric. To obtain an automatic and completely unsupervised technique, the computed distances are quantized into change levels using the Lloyd-Max algorithm. The experiments are carried out on 11 Landsat images at 30 m spatial resolution, covering an area of approximately 59 × 51 km² around Bucharest, Romania, and containing information from six frequency subbands.

The Euclid Archive System prototype is a functional information system used to address the numerous challenges in the development of a fully functional data processing system for Euclid. The prototype must support the highly distributed nature of the Euclid Science Ground Segment, with Science Data Centres in at least eight countries. There are strict requirements on both data quality control and traceability of the data processing. Data volumes will be greater than 10 petabytes, with the actual volume depending on the amount of reprocessing required.

In the space domain, all scientific and technological developments are accompanied by a growth in the number of data sources.
Earth observation in particular is experiencing this very strong acceleration, and the demand for information processing is growing at the same pace. To meet this demand, the problems associated with non-interoperability of data must be efficiently resolved upstream and without loss of information. We advocate the use of linked-data technologies to integrate heterogeneous and schema-less data, which we aim to publish on the 5-star scale in order to foster their re-use. By proposing the 5-star data model, Tim Berners-Lee drew a roadmap for the production of high-quality linked data. In this paper, we present a technological framework for going from raw, scattered, and heterogeneous data to structured data with well-defined and agreed-upon semantics, interlinked with other datasets through their common objects.

Reference data sets, necessary to the advancement of object recognition because they provide a point of comparison for different algorithms, are prevalent in the field of multimedia. Although it shares the same basic object recognition problem, the field of remote sensing needs specialized reference data sets. This paper opens the topic for discussion by making a first attempt at creating a reference data set for a satellite image. In doing so, important differences between annotating photographic and satellite images are highlighted, along with their impact on the creation of a reference data set. The results are discussed with a view toward a future methodology for the manual annotation of satellite images.

The future atmospheric-composition Sentinel missions will generate two orders of magnitude more data than the current missions, and the operational processing of these big data is a major challenge.
Trace gas retrieval from remote sensing data usually requires high-performance radiative transfer model (RTM) simulations, and the RTM is usually the bottleneck in the operational processing of satellite data. To date, multi-core CPUs and also Graphics Processing Units (GPUs) have been used for highly intensive parallel computations. In this paper, we compare multi-core and GPU implementations of an RTM based on the discrete ordinate solution method. With GPUs, we have achieved a 20x-40x speed-up for the multi-stream RTM and a 50x speed-up for the two-stream RTM with respect to the original single-threaded CPU codes. Based on these performance tests, an optimal workload distribution scheme between GPU and CPU is proposed. Finally, we discuss the performance obtained with the multi-core CPU and GPU implementations of the RTM.

The effective use of big data in current and future scientific missions requires intelligent data handling systems which are able to interface the user to complicated distributed data collections. We review the WISE concept of scientific information systems and the WISE solutions for storage and processing as applied to big data.

Interactive visual data mining, where the user plays a key role in the learning process, has gained considerable attention in data mining and human-machine communication. However, this approach needs dimensionality reduction (DR) techniques to visualize image collections. Although the main focus of DR techniques lies in preserving the structure of the data, their two main drawbacks are the occlusion of images and inefficient usage of display space. In this work, we propose to use Non-negative Matrix Factorization (NMF) to reduce the dimensionality of images for immersive visualization. The proposed method aims to preserve the structure of the data and at the same time reduce the occlusion between images by defining regularization terms for NMF.
Experimental validation on two image collections shows the efficiency of the proposed method in controlling the trade-off between structure preservation and reduced occlusion.

This article provides a short overview of the TanDEM-X mission, its objectives, and the payload ground segment (PGS) based on data management, processing systems, and a long-term archive. Due to the large volume of acquired and processed products, a main challenge in operating the PGS is handling the required data throughput, which is a new dimension for the DLR PGS. To meet this requirement, several solutions were developed and coordinated; some were of a more technical nature, whereas others optimized the workflows.

Clustering of Earth Observation (EO) images has gained considerable attention in remote sensing and data mining. Here, each image is represented by a high-dimensional feature vector, computed either by coding algorithms applied to extracted local descriptors or from raw pixel values. In this work, we propose to learn the features used to represent each image via discriminative Non-negative Matrix Factorization (DNMF). We use the labels of some images to produce a new, more discriminative representation of the images. To validate our algorithm, we apply it to a Synthetic Aperture Radar (SAR) dataset and compare the results with those of state-of-the-art techniques for image representation. The results confirm the capability of the proposed method to learn discriminative features, leading to higher clustering accuracy.
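One of the abstracts above describes change detection in satellite image time series via binary descriptors compared with the Hamming distance, with distances quantized into change levels by the Lloyd-Max algorithm. A minimal sketch of that pipeline, under stated assumptions: the descriptors are made up, and a plain 1-D Lloyd iteration (equivalent to 1-D k-means) stands in for a full Lloyd-Max quantizer design.

```python
def hamming(a, b):
    # Hamming distance between two equal-length binary descriptors
    return sum(x != y for x, y in zip(a, b))

def lloyd_quantize(values, levels, iters=50):
    """Place 'levels' representative values by iterating the 1-D Lloyd steps:
    assign each value to its nearest center, then move each center to the
    mean of its assigned values."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        clusters = [[] for _ in range(levels)]
        for v in values:
            nearest = min(range(levels), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Made-up binary descriptors for one pixel block in two acquisitions
d_before = "00110101"
d_after  = "00111100"
print(hamming(d_before, d_after))  # → 2
```

In the full method, one such distance per pixel block would be collected over the time series, and the quantizer's decision regions would label each block with a discrete change level.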
Archive | 2016
Torsten Heinen; Bernhard Buckl; Simone Giannecchini; Stephan Kiemle; Stephan Meißl
Remote sensors on spacecraft acquire huge volumes of data that can be processed for purposes beyond those they were designed for. The TECSEL2 project was conceived to use the Gaia AIM/AVU daily pipeline output and solar event data to characterize the response of detectors subjected to strong radiation damage in an environment not protected by the terrestrial magnetic field: the Lagrangian point L2, where Gaia operates. The project also aims at identifying anomalies in the scientific output parameters and relating them to detector malfunctions caused by radiation damage, correlating them with solar events that occurred in the same time range. TECSEL2 designs and implements a system based on big data technologies that are the state of the art in data processing and data storage. The final goal of TECSEL2 is not limited to the Gaia project, because it provides useful analysis techniques for generic and potentially huge time series datasets.

The Copernicus programme of the European Union, with its fleet of Sentinel satellites operated by the European Space Agency, is effectively bringing Earth Observation (EO) into the big data era. Consequently, most application projects at continental or global scale cannot be addressed with conventional techniques: the EO data revolution brought in by Copernicus needs to be matched by a processing revolution. Existing approaches, such as those based on the processing of massive archives of Landsat data, are reviewed, and the concept of the Joint Research Centre Earth Observation Data and Processing Platform is briefly presented.

In this paper we present the Earth Observation Image Librarian (EOLib), a new generation of image information mining systems. EOLib is operated in the payload ground segment of TerraSAR-X. Its main goal is to provide semantic annotations of satellite image content and to offer the end user a semantic catalogue via a web user interface.
Moreover, EOLib offers further functionality, such as searches based on image metadata and semantics, visual exploration of the image archives, and metadata extraction. The system consists of components such as a query engine, knowledge discovery in databases, visual data mining, epitome generation, and user services. EOLib is able to ingest a TerraSAR-X scene of 8000 × 8000 pixels in about three minutes. The EOLib workflow starts with the ingestion of a scene, continues with the semantic annotation of the image content based on machine learning methods, and ends with publishing the semantic catalogue and enabling search by metadata and semantic image descriptions.

With the Heterogeneous Missions Accessibility (HMA) initiative, the OGC standard "Web Coverage Service (WCS) Earth Observation Application Profile" has been developed to harmonize online access to very large primary Earth Observation data holdings. Although its use in web mapping servers has demonstrated valuable capabilities, this standard is not yet widely adopted. Its acceptance for data download by end users is hampered by the lack of interpretation guidelines and by its complexity, which requires considerable server and client implementation effort. In this context, the project "Evolution of EO Online Data Access Services", funded by the European Space Agency (ESA) and presented in this paper, analyses relevant scenarios and technologies for data publication and access, identifies potential improvements of standards and their implementations, prototypes and evaluates selected improvements, and proposes standard extensions for future releases. We hereby hope to considerably improve the acceptance of online EO data access services and standards and to promote their evolution and diffusion.

This paper explores how multimedia approaches used in image understanding tasks can be adapted for remote sensing image analysis.
In a first step, we show on 3-channel color images from the UC Merced Land Use Dataset how a deep learning approach provides a significant performance increase compared to a bag-of-visual-words approach. In a second step, we propose an extension of the deep learning scheme to deal with hyperspectral data, based on a 3D architecture which jointly processes spectral and spatial information.

Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution, with an associated p-value and a factorization of this test statistic, change analysis is carried out on a short sequence of multilook, polarimetric SAR data in the covariance matrix representation. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may also be applied to Sentinel-1, COSMO-SkyMed, TerraSAR-X, ALOS, and RADARSAT-2 or other dual- and quad/full-pol, and even single-pol, data.

The operational processing of remote sensing data requires high-performance radiative transfer model (RTM) simulations. To date, considerable success has been achieved in dimensionality reduction techniques as well as in heterogeneous multi-CPU/GPU computing for highly intensive parallel computations. We have developed several techniques for accelerating the radiative transfer solver: (1) analytical methods which allow a set of atmospheric scenarios to be computed in one RTM call; (2) dimensionality reduction of the datasets; and (3) GPU computing using the CUDA framework. Together, these techniques provide an almost 300x cumulative speed-up of the RTM with respect to the original single-threaded CPU code.
In this paper, we analyze the applicability of the proposed methods to the practical problem of total ozone column retrieval from UV backscatter measurements.

We review the architectural design and implementation of the Euclid Archive System (EAS), which is at the core of the Euclid Science Ground Segment (SGS) and represents a new generation of data-centric scientific information systems. It will handle up to one hundred petabytes of mission data in a heterogeneous storage environment and will allow intensive access to both the data and the metadata produced during the mission. This paper places particular emphasis on the access to science-ready products and the interfaces that will be provided for the end user.
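One of the abstracts above concerns the WCS Earth Observation Application Profile for harmonized coverage access. As an illustrative sketch of the underlying WCS 2.0 KVP binding, the following assembles a GetCoverage request with spatial trimming; the endpoint, coverage id, and axis labels are assumptions for this example (a real client should take axis labels from the server's DescribeCoverage response):

```python
from urllib.parse import urlencode

def wcs_getcoverage_url(endpoint, coverage_id, lat, lon, fmt="image/tiff"):
    """Build a WCS 2.0 KVP GetCoverage request with two subset trims.

    'subset' may repeat, once per trimmed axis, so parameters are kept
    as an ordered list of (key, value) pairs rather than a dict.
    """
    params = [
        ("service", "WCS"),
        ("version", "2.0.1"),
        ("request", "GetCoverage"),
        ("coverageId", coverage_id),
        ("subset", f"Lat({lat[0]},{lat[1]})"),
        ("subset", f"Long({lon[0]},{lon[1]})"),
        ("format", fmt),
    ]
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and coverage identifier
url = wcs_getcoverage_url("https://example.org/wcs", "S1_GRD_scene",
                          (48.0, 48.5), (11.0, 11.5))
print(url)
```

The EO Application Profile layers EO-specific metadata and dataset-series handling on top of this core request pattern; the sketch shows only the plain WCS 2.0 part.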
Archive | 2014
Sven Kröger; Silke Kerkhoff; Stephan Kiemle; Stephan Schropp; Maximilian Schwinger; Max Wegner; Eberhard Mikusch
European Conference on Research and Advanced Technology for Digital Libraries | 2007
Stephan Kiemle; Burkhard Freitag
The German Remote Sensing Data Center (DFD) has developed a digital library for the long-term management of earth observation data products. This Product Library is a central part of DFD's multi-mission ground segment Data Information and Management System (DIMS), currently hosting one million digital products corresponding to 150 terabytes of data. Its data model is regularly extended to support the products of upcoming earth observation missions. The ever-increasing complexity led to the development of operating interfaces which use a-priori and context knowledge, allowing efficient management of the dynamic library content. This paper presents the development and operation of context-sensitive library access tools based on meta-modeling and online grammar interpretation.
Atmospheric Measurement Techniques | 2016
S. Hassinen; Dimitris Balis; H. Bauer; M. Begoin; Andy Delcloo; K. Eleftheratos; S. Gimeno García; J. Granville; M. Grossi; Nan Hao; Pascal Hedelt; F. Hendrick; Michael Hess; Klaus-Peter Heue; Jari Hovila; H. Jønch-Sørensen; N. Kalakoski; A. Kauppi; Stephan Kiemle; L. Kins; M. E. Koukouli; J. Kujanpää; J.-C. Lambert; R. Lang; Christophe Lerot; Diego Loyola; Mattia Pedergnana; Gaia Pinardi; Fabian Romahn; M. Van Roozendael