Radu Tanase
Politehnica University of Bucharest
Publication
Featured research published by Radu Tanase.
IEEE Geoscience and Remote Sensing Letters | 2017
Radu Tanase; Reza Bahmanyar; Gottfried Schwarz; Mihai Datcu
We propose a multilevel semantics discovery approach for bridging the semantic gap when mining high-resolution polarimetric synthetic aperture radar (PolSAR) remote sensing images. First, an Entropy/Anisotropy/Alpha-Wishart classifier is employed to discover low-level semantics as classes representing the physical scattering properties of targets (e.g., low-entropy/surface scattering/high anisotropy). Then, the images are tiled into patches and each patch is modeled as a bag-of-words, a histogram of the class labels. Next, latent Dirichlet allocation is applied to discover their higher level semantics as a set of topics. Our results demonstrate that topic semantics are close to human semantics used for basic land-cover types (e.g., grassland). Therefore, using the topic description (bag-of-topics) of PolSAR images leads to a narrower semantic gap in image mining. In addition, a visual exploration of the topic descriptions helps to find semantic relationships, which can be used for defining new semantic categories (e.g., mixed land-cover types) and designing rule-based categorization schemes.
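A minimal sketch of the topic-discovery stage described above, assuming the low-level Wishart class labels per pixel are already available; names such as class_maps and the class/topic counts are illustrative, and scikit-learn's LDA is used as a stand-in for the authors' implementation.

```python
# Hedged sketch of the topic-discovery stage (scikit-learn LDA as a stand-in).
# `class_maps` stands for per-patch maps of low-level Wishart class labels;
# random placeholders are used here in place of real PolSAR classification results.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

n_classes, n_topics = 16, 10
rng = np.random.default_rng(0)
class_maps = [rng.integers(0, n_classes, (64, 64)) for _ in range(200)]  # placeholder patches

# Bag-of-words: histogram of class labels inside each patch.
X = np.vstack([np.bincount(cm.ravel(), minlength=n_classes) for cm in class_maps])

# Latent Dirichlet allocation discovers higher-level semantics (topics).
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
bag_of_topics = lda.fit_transform(X)   # (n_patches, n_topics) topic mixtures

# Each patch is now described by its topic mixture, i.e. the bag-of-topics
# representation used for semantic image mining.
```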
International Geoscience and Remote Sensing Symposium | 2016
Radu Tanase; Mihai Datcu; Dan Raducanu
This paper proposes a custom convolutional deep belief network for polarimetric synthetic aperture radar (PolSAR) data feature extraction. The proposed architecture stands out through several interesting properties, starting with the fact that it is adapted to fully polarimetric SAR data. The multilayer approach allows the stepwise discovery of higher-level features, while the convolutional approach discovers local, spatially invariant features and makes the architecture scalable to full-sized PolSAR images. The network is trained in an unsupervised manner, without labeled data, and is then able to extract powerful features from PolSAR patches. This is demonstrated by applying supervised and unsupervised classification algorithms to features extracted from patches of a fully polarimetric multi-look F-SAR image over Kaufbeuren airfield, Germany.
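The convolutional deep belief network itself is not reproduced here; the following is only a minimal sketch of unsupervised convolutional feature learning on PolSAR patches, using a convolutional autoencoder as a stand-in. The channel count, patch size (assumed divisible by 4) and training loop are illustrative assumptions.

```python
# Hedged sketch: unsupervised convolutional feature learning on PolSAR patches
# (a plain convolutional autoencoder, NOT the paper's convolutional deep belief network).
import torch
import torch.nn as nn

class ConvFeatureLearner(nn.Module):
    def __init__(self, in_channels=6):  # e.g. real/imag parts of the scattering vector (assumption)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_channels, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)          # local, spatially invariant features
        return self.decoder(z), z    # reconstruction drives unsupervised training

model = ConvFeatureLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(patches):
    """One unsupervised step on a batch of unlabeled PolSAR patches (B, C, H, W)."""
    recon, _ = model(patches)
    loss = loss_fn(recon, patches)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# After training, model.encoder(patch) yields feature maps that can be pooled
# and fed to a supervised or unsupervised classifier, as in the paper's evaluation.
```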
Archive | 2014
Anamaria Radoi; Radu Tanase; Mihai Datcu
This paper presents a near-real-time multi-GPU accelerated solution of the ωk algorithm for Synthetic Aperture Radar (SAR) data focusing in Stripmap SAR mode. Starting from the input raw data, the algorithm subdivides it into a grid of a configurable number of bursts along track. Multithreaded CPU-side support is provided to handle each graphics device in parallel. Each burst is then assigned to a separate GPU and processed through the Range Compression, Stolt Mapping via Chirp-Z, and Azimuth Compression steps. We prove the efficiency of our algorithm using Sentinel-1 raw data (approx. 3.3 GB) on a commodity graphics card; the single-GPU solution is approximately 4x faster than the industrial multi-core CPU implementation (General ACS SAR Processor, GASP), without significant loss of quality. Using a multi-GPU system, the algorithm is approximately 6x faster than the CPU implementation.

For decades, field response to disasters on the Earth's surface - such as floods, fires or earthquakes - has been supported by the analysis of remotely sensed data. In recent years, the monitoring of vehicles, buildings or areas fraught with risk has become another major task for satellite-based crisis intervention. Since these scenarios are unforeseen and time-critical, they require a fast and well-coordinated reaction. If useful information is extracted from image data in real time directly on board a spacecraft, the timespan between image acquisition and an appropriate reaction can be shortened significantly. Furthermore, on-board image analysis allows data of minor interest, e.g. cloud-contaminated scenes, to be discarded and/or treated with lower priority, which leads to an optimized usage of storage and downlink capacity. This paper describes the modular application framework of VIMOS, an on-board image processing experiment for remote sensing applications. Special focus is placed on resource management, safety and modular commandability.

Gaia is an ESA cornerstone mission, successfully launched in December 2013, which commenced operations in July 2014. Within the Gaia Data Processing and Analysis Consortium, Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources and nearly 4 billion associated time series (photometric, spectrophotometric, and spectroscopic), encoding information in over 800 billion observations during the 5 years of the mission, resulting in a petabyte-scale analytical problem. In this article, we briefly describe the solutions we developed to address the challenges of time series variability analysis: from the structure of a distributed, data-oriented scientific collaboration to architectural choices and specific components used. Our approach is based on open-source components, with a distributed, partitioned database as the core, to handle incrementally the ingestion, distributed processing, analysis, results and export within a constrained time window.

The seamless mosaicing of massive very high resolution imagery addresses several aspects related to big data from space. Data volume is directly proportional to the size of the input data, i.e., on the order of several terapixels for a continent. Data velocity derives from the fact that the input data is delivered over several years to meet maximum cloud contamination constraints with the considered satellites. Data variety results from the need to collect and integrate various ancillary data for cloud detection, land/sea mask delineation, and adaptive colour balancing. This paper details how these three aspects of big data are handled and illustrates them through the creation of a seamless pan-European mosaic from 2.5 m imagery (Land Monitoring/Urban Atlas Copernicus CORE 03 data set).

The current development of satellite imagery means that a great volume of images acquired globally has to be understood in a fast and precise manner. Processing this large quantity of information requires unsupervised algorithms able to fulfill these tasks. Change detection is one of the main issues in the analysis of satellite image time series (SITS). In this paper, we propose a method to analyze changes in SITS based on binary descriptors and on the Hamming distance, regarded as a similarity metric. To obtain an automatic and completely unsupervised technique, the resulting distances are quantized into change levels using the Lloyd-Max algorithm. The experiments are carried out on 11 Landsat images at 30 m spatial resolution, covering an area of approximately 59 × 51 km2 over the surroundings of Bucharest, Romania, and containing information from six frequency subbands.

The Euclid Archive System prototype is a functional information system used to address the numerous challenges in the development of a fully functional data processing system for Euclid. The prototype must support the highly distributed nature of the Euclid Science Ground System, with Science Data Centres in at least eight countries. There are strict requirements on both data quality control and traceability of the data processing. Data volumes will be greater than 10 Pbyte, with the actual volume depending on the amount of reprocessing required.

In the space domain, all scientific and technological developments are accompanied by a growth in the number of data sources. More specifically, the observation domain is experiencing this very strong acceleration, and the demand for information processing follows the same pace. To meet this demand, the problems associated with the non-interoperability of data must be efficiently resolved upstream and without loss of information. We advocate the use of linked data technologies to integrate heterogeneous and schema-less data, which we aim to publish according to the 5-star scale in order to foster their re-use. By proposing the 5-star data model, Tim Berners-Lee drew the perfect roadmap for the production of high-quality linked data. In this paper, we present a technological framework that allows going from raw, scattered and heterogeneous data to structured data with well-defined and agreed-upon semantics, interlinked with other datasets through their common objects.

Reference data sets, necessary to the advancement of the field of object recognition by providing a point of comparison for different algorithms, are prevalent in the field of multimedia. Although it shares the same basic object recognition problem, the field of remote sensing needs specialized reference data sets. This paper opens the topic for discussion by making a first attempt at creating a reference data set for a satellite image. In doing so, important differences between annotating photographic and satellite images are highlighted, along with their impact on the creation of a reference data set. The results are discussed with a view toward creating a future methodology for the manual annotation of satellite images.

The future atmospheric composition Sentinel missions will generate two orders of magnitude more data than the current missions, and the operational processing of these big data is a major challenge. The trace gas retrieval from remote sensing data usually requires high-performance radiative transfer model (RTM) simulations, and the RTM is usually the bottleneck in the operational processing of the satellite data. To date, multi-core CPUs and also Graphics Processing Units (GPUs) have been used for highly intensive parallel computations. In this paper, we compare multi-core and GPU implementations of an RTM based on the discrete ordinate solution method. With GPUs, we have achieved a 20x-40x speed-up for the multi-stream RTM and a 50x speed-up for the two-stream RTM with respect to the original single-threaded CPU codes. Based on these performance tests, an optimal workload distribution scheme between GPU and CPU is proposed. Finally, we discuss the performance obtained with the multi-core CPU and GPU implementations of the RTM.

The effective use of Big Data in current and future scientific missions requires intelligent data handling systems which are able to interface the user to complicated distributed data collections. We review the WISE Concept of Scientific Information Systems and the WISE solutions for storage and processing as applied to Big Data.

Interactive visual data mining, where the user plays a key role in the learning process, has gained considerable attention in data mining and human-machine communication. However, this approach requires dimensionality reduction (DR) techniques to visualize image collections. Although the main focus of DR techniques lies in preserving the structure of the data, the occlusion of images and the inefficient use of display space are their two main drawbacks. In this work, we propose to use Non-negative Matrix Factorization (NMF) to reduce the dimensionality of images for immersive visualization. The proposed method aims to preserve the structure of the data and at the same time reduce the occlusion between images by defining regularization terms for NMF. Experimental validations performed on two image collections show the efficiency of the proposed method with respect to controlling the trade-off between structure preservation and less occluded visualization.

This article provides a short overview of the TanDEM-X mission, its objectives, and the payload ground segment (PGS) based on data management, processing systems and a long-term archive. Due to the large data volume of the acquired and processed products, a main challenge in the operation of the PGS is to handle the required data throughput, which is a new dimension for the DLR PGS. To meet this requirement, several solutions were developed and coordinated; some of them were of a more technical nature, whereas others optimized the workflows.

Clustering of Earth Observation (EO) images has gained considerable attention in remote sensing and data mining. Here, each image is represented by a high-dimensional feature vector, which can be computed by coding algorithms applied to extracted local descriptors or to raw pixel values. In this work, we propose to learn the features used to represent each image with discriminative Non-negative Matrix Factorization (DNMF). We use the labels of some images to produce a new, more discriminative representation of the images. To validate our algorithm, we apply it to a Synthetic Aperture Radar (SAR) dataset and compare the results with those of state-of-the-art image representation techniques. The results confirm the capability of the proposed method to learn discriminative features, leading to higher clustering accuracy.
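As an illustration of the last pipeline above, the following is a minimal sketch that substitutes plain NMF for the discriminative (DNMF) variant and clusters the resulting coefficients; the matrix X and all parameter values are assumptions, not the paper's settings.

```python
# Hedged sketch (plain NMF, not the supervised DNMF variant described above):
# factorise the image-by-feature matrix, use the low-dimensional coefficients
# as the new image representation, and cluster them.
# `X` (n_images x n_features, non-negative) is assumed to be precomputed.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

def nmf_cluster(X, n_components=20, n_clusters=8):
    nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(X)                                  # per-image coefficients
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(W)
    return W, labels

# The discriminative variant additionally uses the labels of a subset of
# images to shape the factorisation; this sketch omits that term.
```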
International Geoscience and Remote Sensing Symposium | 2015
Radu Tanase; Anamaria Radoi; Mihai Datcu; Dan Raducanu
Several algorithms for polarimetric synthetic aperture radar (PolSAR) data indexing and classification have been proposed in the literature. In particular, one of them computes powerful, compact feature descriptors composed of the first three logarithmic cumulants of the BiQuaternion Fractional Fourier Transform (BiQFrFT) coefficients of PolSAR patches. Since the BiQFrFT of each patch is computed at three different angles, the algorithm yields nine complex-valued features (18 real-valued features) for single-polarization images and nine biquaternion-valued features (72 real-valued features) for fully polarimetric images. In this paper, feature selection based on mutual information (MI) is employed to optimally select a subset of features, in order to improve the indexing performance and to minimize the classification error. The improved results are shown on two polarimetric images: an L-band PALSAR image over the Danube Delta, Romania, and a C-band RADARSAT-2 image over Brăila, Romania.
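A minimal sketch of the MI-based selection step, assuming the 72 real-valued BiQFrFT log-cumulant features (X) and the patch labels (y) are already computed; the classifier and the sweep over subset sizes are illustrative choices, not the paper's exact protocol.

```python
# Hedged sketch: mutual-information feature selection over precomputed
# BiQFrFT log-cumulant features, scored by cross-validated classification error.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_and_score(X, y, k):
    """Keep the k features most informative about the labels, return (indices, error)."""
    selector = SelectKBest(score_func=mutual_info_classif, k=k)
    X_sel = selector.fit_transform(X, y)
    acc = cross_val_score(SVC(), X_sel, y, cv=5).mean()
    return selector.get_support(indices=True), 1.0 - acc

# Sweep the subset size to find the one minimising the classification error:
# best = min((select_and_score(X, y, k) for k in range(4, 73, 4)), key=lambda r: r[1])
```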
International Geoscience and Remote Sensing Symposium | 2015
Anamaria Radoi; Radu Tanase; Mihai Datcu
Satellite image time series are a valuable resource for enhancing land exploitation while respecting natural cycles, analyzing urban expansion and its positive and negative effects, limiting the unhealthy pace of deforestation, understanding natural hazards, and so on. In this context, understanding only the changes in multitemporal images is not sufficient. This paper aims to correlate multi-level change detection techniques with image semantic segmentation methods in order to build a hierarchy of changes for each semantic class. In this way, we are able to provide statistics regarding the levels of change undergone by a certain area. The methods are demonstrated with examples involving bi-temporal Landsat images.
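A minimal sketch of how a multi-level change map and a semantic segmentation map can be combined into per-class change statistics; the array names and the assumption of co-registered label images are illustrative, not taken from the paper.

```python
# Hedged sketch: per-class change statistics from two co-registered label maps.
# `change_map` holds change levels (0..n_levels-1), `class_map` semantic classes.
import numpy as np

def per_class_change_statistics(change_map, class_map, n_levels, n_classes):
    """Return an (n_classes, n_levels) matrix of pixel fractions per class."""
    stats = np.zeros((n_classes, n_levels))
    for c in range(n_classes):
        mask = (class_map == c)
        if mask.any():
            counts = np.bincount(change_map[mask], minlength=n_levels)
            stats[c] = counts / counts.sum()
    return stats

# Example: stats[c, l] is the fraction of pixels of semantic class c
# that experienced change level l between the two Landsat acquisitions.
```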
Archive | 2014
Radu Tanase; Anamaria Radoi; Florin-Andrei Georgescu; Mihai Datcu; Dan Raducanu
International Symposium on Signals, Circuits and Systems | 2017
Radu Tanase; Constantin Cazacu; Daniela Faur; Dragos Ioan Sacaleanu; Mihai Datcu
Archive | 2016
Florin-Andrei Georgescu; Radu Tanase; Mihai Datcu; Dan Raducanu
International Conference on Image Processing | 2015
Radu Tanase; Corina Vaduva; Mihai Datcu; Dan Raducanu
Archive | 2015
Radu Tanase; Mihai Datcu; Dan Raducanu