
Publication


Featured research published by Florin-Andrei Georgescu.


IEEE Geoscience and Remote Sensing Letters | 2016

Feature Extraction for Patch-Based Classification of Multispectral Earth Observation Images

Florin-Andrei Georgescu; Corina Vaduva; Dan Raducanu; Mihai Datcu

Recently, various patch-based approaches have emerged for high and very high resolution multispectral image classification and indexing. This comes as a consequence of the most important particularity of multispectral data: objects are represented using several spectral bands that equally influence the classification process. In this letter, by using a patch-based approach, we are aiming at extracting descriptors that capture both spectral information and structural information. Using both the raw texture data and the high spectral resolution provided by the latest sensors, we propose enhanced image descriptors based on Gabor, spectral histograms, spectral indices, and bag-of-words framework. This approach leads to a scene classification that outperforms the results obtained when employing the initial image features. Experimental results on a WorldView-2 scene and also on a test collection of tiles created using Sentinel-2 data are presented. A detailed assessment of speed and precision was provided in comparison with state-of-the-art techniques. The broad applicability is guaranteed as the performances obtained for the two selected data sets are comparable, facilitating the exploration of previous and newly launched satellite missions.
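
For readers who want a concrete picture of the patch-based descriptor described above, the sketch below combines Gabor texture statistics with per-band spectral histograms for a single multispectral patch. It is only an illustration of the general idea, not the authors' implementation; the patch layout, filter frequencies, reflectance range and bin count are assumptions.

```python
# A minimal sketch (not the authors' code) of a patch-based descriptor that
# combines Gabor texture responses with per-band spectral histograms.
# Assumptions: patch is (H, W, B) with reflectances roughly in [0, 1].
import numpy as np
from skimage.filters import gabor

def patch_descriptor(patch, frequencies=(0.1, 0.2, 0.4), bins=16):
    """patch: (H, W, B) multispectral patch with B spectral bands."""
    features = []
    # Texture part: mean/std of the Gabor magnitude on the band average.
    gray = patch.mean(axis=2)
    for f in frequencies:
        real, imag = gabor(gray, frequency=f)
        mag = np.hypot(real, imag)
        features.extend([mag.mean(), mag.std()])
    # Spectral part: a normalized histogram per band.
    for b in range(patch.shape[2]):
        hist, _ = np.histogram(patch[..., b], bins=bins, range=(0.0, 1.0))
        features.extend(hist / max(hist.sum(), 1))
    return np.asarray(features, dtype=np.float64)
```

In the pipeline described by the abstract, such patch descriptors would then be quantized into a bag-of-words codebook to obtain the scene-level representation used for classification.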


IEEE Geoscience and Remote Sensing Letters | 2017

New MPEG-7 Scalable Color Descriptor Based on Polar Coordinates for Multispectral Earth Observation Image Analysis

Florin-Andrei Georgescu; Dan Raducanu; Mihai Datcu

Continuously expanding high-resolution and very high resolution multispectral image collections, provided by remote sensing satellites, require specific methods and techniques for data analysis and understanding. Even though there are several patch-based approaches for image classification and indexing, none of them are integrated within a standard. With the goal of developing an MPEG-7-compliant descriptor for patch-based multispectral earth observation image classification and indexing, we propose a new feature extraction method able to extract maximum information from all the available spectral bands that Sentinel-2, the latest generation of remote sensing satellites, provides. Using the polar coordinate transformation of the reflectance values, we obtain illumination invariant features, which can be used along with the scalable color descriptor present in the MPEG-7 standard. Our method also proves to enhance land cover classification of areas affected by clouds and their shadows, while on cloud-free areas it provides classification results similar to those of the homogeneous texture descriptor (HTD), the spectral histogram (SH), concatenated HTD and SH features, spectral indices (SIs), and bag-of-words-based descriptors such as bag-of-SIs and bag-of-spectral-values.
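
The illumination-invariance argument can be made concrete with a small sketch: if every band of a pixel's reflectance vector is scaled by the same illumination factor, the hyperspherical (polar) angles of that vector are unchanged, only the radius changes. The function below is one interpretation of that idea, not the authors' exact transformation.

```python
# A minimal sketch, assuming the invariance comes from keeping only the
# angular part of each pixel's reflectance vector in hyperspherical
# coordinates. Band values and the angle convention are illustrative.
import numpy as np

def spectral_angles(pixel):
    """pixel: 1-D array of B reflectance values; returns B-1 angles."""
    v = np.asarray(pixel, dtype=np.float64)
    angles = []
    for i in range(len(v) - 1):
        # Angle between band i and the norm of the remaining bands.
        rest = np.linalg.norm(v[i + 1:])
        angles.append(np.arctan2(rest, v[i]))
    return np.asarray(angles)

# Scaling the whole vector (a uniform illumination change) leaves the
# angles intact, which is the invariance exploited by the descriptor:
p = np.array([0.21, 0.35, 0.18, 0.42])
assert np.allclose(spectral_angles(p), spectral_angles(2.5 * p))
```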


Advanced Concepts for Intelligent Vision Systems | 2015

Dictionary-Based Compact Data Representation for Very High Resolution Earth Observation Image Classification

Corina Văduva; Florin-Andrei Georgescu; Mihai Datcu

In the context of fast growing data archives, with continuous changes in volume and diversity, information mining has proven to be a difficult, yet highly recommended task. The first and perhaps the most important part of the process is data representation for efficient and reliable image classification. This paper presents a new approach for describing the content of Earth Observation Very High Resolution images and compares it with traditional representations based on specific features. The benefit of data compression is exploited in order to express the scene content in terms of dictionaries. The image is represented as a distribution of recurrent patterns, removing redundant information, but keeping all the explicit features, such as spectral, texture and context information. Further, a data domain analysis is performed using a Support Vector Machine, aiming to compare the influence of data representation on semantic scene annotation. WorldView-2 data and a reference map are used for algorithm evaluation.
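
The overall pipeline, a dictionary of recurrent patterns, a per-image distribution over dictionary words, and SVM classification, can be sketched as follows. Here k-means stands in for the compression-derived dictionary, so the names and parameters are illustrative assumptions rather than the paper's method.

```python
# A minimal sketch of the general pipeline described above, with k-means
# used as a stand-in for the compression-based dictionary: each tile becomes
# a histogram of dictionary-word occurrences, and an SVM maps histograms to
# semantic labels. Parameters are illustrative, not from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC  # used in the commented training step below

def build_dictionary(patch_vectors, n_words=64, seed=0):
    """patch_vectors: (N, D) flattened image patches sampled from the archive."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(patch_vectors)

def tile_histogram(dictionary, tile_patch_vectors):
    """Represent one tile as the normalized distribution of its dictionary words."""
    words = dictionary.predict(tile_patch_vectors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()

# Training on labelled tiles then reduces to a standard SVM:
# clf = SVC(kernel="rbf").fit(train_histograms, train_labels)
```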


Archive | 2014

Using Biquaternions Algebra and Joint Time Frequency Analysis Towards a New PolSAR Data Indexing Method

Radu Tanase; Anamaria Radoi; Florin-Andrei Georgescu; Mihai Datcu; Dan Raducanu

This paper presents a Near-Real-Time multi-GPU accelerated solution of the ωk Algorithm for Synthetic Aperture Radar (SAR) data focusing, obtained in Stripmap SAR mode. Starting from an input raw data, the algorithm subdivides it in a grid of a configurable number of bursts along track. A multithreading CPU-side support is made available in order to handle each graphic device in parallel. Then each burst is assigned to a separate GPU and processed including Range Compression, Stolt Mapping via ChirpZ and Azimuth Compression steps. We prove the efficiency of our algorithm by using Sentinel-1 raw data (approx. 3.3 GB) on a commodity graphics card; the single-GPU solution is approximately 4x faster than the industrial multi-core CPU implementation (General ACS SAR Processor, GASP), without significant loss of quality. Using a multi-GPU system, the algorithm is approximately 6x faster with respect to the CPU processor.

For decades, field help in case of disasters on the Earth’s surface - like floods, fires or earthquakes - is supported by the analysis of remotely sensed data. In recent years, the monitoring of vehicles, buildings or areas fraught with risk has become another major task for satellite-based crisis intervention. Since these scenarios are unforeseen and time-critical, they require a fast and well coordinated reaction. If useful information is extracted out of image data in realtime directly on board a spacecraft, the timespan between image acquisition and an appropriate reaction can be shortened significantly. Furthermore, on board image analysis allows data of minor interest, e.g. cloud-contaminated scenes, to be discarded and/or treated with lower priority, which leads to an optimized usage of storage and downlink capacity. This paper describes the modular application framework of VIMOS, an on board image processing experiment for remote sensing applications. Special focus will be on resource management, safety and modular commandability.

Gaia is an ESA cornerstone mission, which was successfully launched December 2013 and commenced operations in July 2014. Within the Gaia Data Processing and Analysis consortium, Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources and nearly 4 billion associated time series (photometric, spectrophotometric, and spectroscopic), encoding information in over 800 billion observations during the 5 years of the mission, resulting in a petabyte scale analytical problem. In this article, we briefly describe the solutions we developed to address the challenges of time series variability analysis: from the structure for a distributed data-oriented scientific collaboration to architectural choices and specific components used. Our approach is based on Open Source components with a distributed, partitioned database as the core to handle incrementally: ingestion, distributed processing, analysis, results and export in a constrained time window.

The seamless mosaicing of massive very high resolution imagery addresses several aspects related to big data from space. Data volume is directly proportional to the size of the input data, i.e., order of several TeraPixels for a continent. Data velocity derives from the fact that the input data is delivered over several years to meet maximum cloud contamination constraints with the considered satellites. Data variety results from the need to collect and integrate various ancillary data for cloud detection, land/sea mask delineation, and adaptive colour balancing. This paper details how these 3 aspects of big data are handled and illustrates them for the creation of a seamless pan-European mosaic from 2.5m imagery (Land Monitoring/Urban Atlas Copernicus CORE 03 data set).

The current development of satellite imagery means that a great volume of images acquired globally has to be understood in a fast and precise manner. Processing this large quantity of information comes at the cost of finding unsupervised algorithms to fulfill these tasks. Change detection is one of the main issues when talking about the analysis of satellite image time series (SITS). In this paper, we propose a method to analyze changes in SITS based on binary descriptors and on the Hamming distance, regarded as a similarity metric. In order to render an automatic and completely unsupervised technique towards solving this problem, the obtained distances are quantized into change levels using the Lloyd-Max’s algorithm. The experiments are carried on 11 Landsat images at 30 meters spatial resolution, covering an area of approximately 59 × 51 km2 over the surroundings of Bucharest, Romania, and containing information from six subbands of frequency.

The Euclid Archive System prototype is a functional information system which is used to address the numerous challenges in the development of fully functional data processing system for Euclid. The prototype must support the highly distributed nature of the Euclid Science Ground System, with Science Data Centres in at least eight countries. There are strict requirements both on data quality control and traceability of the data processing. Data volumes will be greater than 10 Pbyte, with the actual volume being dependent on the amount of reprocessing required.

In the space domain, all scientific and technological developments are accompanied by a growth of the number of data sources. More specifically, the world of observation knows this very strong acceleration and the demand for information processing follows the same pace. To meet this demand, the problems associated with non-interoperability of data must be efficiently resolved upstream and without loss of information. We advocate the use of linked data technologies to integrate heterogeneous and schema-less data that we aim to publish in the 5 stars scale in order to foster their re-use. By proposing the 5 stars data model, Tim Berners-Lee drew the perfect roadmap for the production of high quality linked data. In this paper, we present a technological framework that allows to go from raw, scattered and heterogeneous data to structured data with a well-defined and agreed upon semantics, interlinked with other datasets for their common objects.

Reference data sets, necessary to the advancement of the field of object recognition by providing a point of comparison for different algorithms, are prevalent in the field of multimedia. Although sharing the same basic object recognition problem, in the field of remote sensing there is a need for specialized reference data sets. This paper would like to open the topic for discussion, by taking a first attempt at creating a reference data set for a satellite image. In doing so, important differences between annotating photographic and satellite images are highlighted, along with their impact on the creation of a reference data set. The results are discussed with a view toward creating a future methodology for the manual annotation of satellite images.

The future atmospheric composition Sentinel missions will generate two orders of magnitude more data than the current missions and the operational processing of these big data is a big challenge. The trace gas retrieval from remote sensing data usually requires high-performance radiative transfer model (RTM) simulations and the RTM are usually the bottleneck for the operational processing of the satellite data. To date, multi-core CPUs and also Graphical Processing Units (GPUs) have been used for highly intensive parallel computations. In this paper, we are comparing multi-core and GPU implementations of an RTM based on the discrete ordinate solution method. With GPUs, we have achieved a 20x-40x speed-up for the multi-stream RTM, and 50x speed-up for the two-stream RTM with respect to the original single-threaded CPU codes. Based on these performance tests, an optimal workload distribution scheme between GPU and CPU is proposed. Finally, we discuss the performance obtained with the multi-core-CPU and GPU implementations of the RTM.

The effective use of Big Data in current and future scientific missions requires intelligent data handling systems which are able to interface the user to complicated distributed data collections. We review the WISE Concept of Scientific Information Systems and the WISE solutions for the storage and processing as applied to Big Data.

Interactive visual data mining, where the user plays a key role in learning process, has gained high attention in data mining and human-machine communication. However, this approach needs Dimensionality Reduction (DR) techniques to visualize image collections. Although the main focus of DR techniques lays on preserving the structure of the data, the occlusion of images and inefficient usage of display space are their two main drawbacks. In this work, we propose to use Non-negative Matrix Factorization (NMF) to reduce the dimensionality of images for immersive visualization. The proposed method aims to preserve the structure of data and at the same time reduce the occlusion between images by defining regularization terms for NMF. Experimental validations performed on two sets of image collections show the efficiency of the proposed method in respect to controlling the trade-off between structure preserving and less occluded visualization.

This article provides a short overview about the TanDEM-X mission, its objectives and the payload ground segment (PGS) based on data management, processing systems and long term archive. Due to the large data volume of the acquired and processed products a main challenge in the operation of the PGS is to handle the required data throughput, which is a new dimension for the DLR PGS. To achieve this requirement, several solutions were developed and coordinated. Some of them were more technical nature whereas others optimized the workflows.

Clustering of Earth Observation (EO) images has gained a high amount of attention in remote sensing and data mining. Here, each image is represented by a high-dimensional feature vector which could be computed as the results of coding algorithms of extracted local descriptors or raw pixel values. In this work, we propose to learn the features using discriminative Nonnegative Matrix factorization (DNMF) to represent each image. Here, we use the label of some images to produce new representation of images with more discriminative property. To validate our algorithm, we apply the proposed algorithm on a dataset of Synthetic Aperture Radar (SAR) and compare the results with the results of state-of-the-art techniques for image representation. The results confirm the capability of the proposed method in learning discriminative features leading to higher accuracy in clustering.
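
Among the abstracts above, the satellite image time series change-detection idea (binary descriptors compared with the Hamming distance, with the distances then quantized into change levels) lends itself to a short sketch. The version below uses a plain median threshold as the binary descriptor and a 1-D k-means in place of the Lloyd-Max quantizer, both simplifications chosen for brevity rather than fidelity to the paper.

```python
# A simplified sketch of Hamming-distance change scoring between two
# co-registered images of a time series, with scores quantized into discrete
# change levels. A 1-D k-means plays the role of the Lloyd-Max quantizer
# here; the binary descriptor is a plain intensity threshold.
import numpy as np
from sklearn.cluster import KMeans

def binary_descriptor(patch):
    """Binarize a patch against its own median intensity."""
    return (patch > np.median(patch)).ravel()

def hamming(a, b):
    return np.count_nonzero(a != b) / a.size

def change_levels(patches_t1, patches_t2, n_levels=4, seed=0):
    """Return a discrete change level per patch pair."""
    d = np.array([hamming(binary_descriptor(p1), binary_descriptor(p2))
                  for p1, p2 in zip(patches_t1, patches_t2)])
    km = KMeans(n_clusters=n_levels, random_state=seed, n_init=10)
    return km.fit_predict(d.reshape(-1, 1))
```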


Archive | 2014

A Framework for Benchmarking of Feature Extraction Methods in Earth Observation Image Analysis

Florin-Andrei Georgescu; Corina Vaduva; Mihai Datcu; Dan Răducanu

In the last few years, thanks to projects like TELEIOS, the linked open data cloud has been rapidly populated with geospatial data some of it describing Earth Observation products (e.g., CORINE Land Cover, Urban Atlas). The abundance of this data can prove very useful to the new missions (e.g., Sentinels) as a means to increase the usability of the millions of images and EO products that are expected to be produced by these missions. In this paper, we explain the relevant opportunities by demonstrating how the process of knowledge discovery from TerraSAR-X images can be improved using linked open data and Sextant, a tool for browsing and exploration of linked geospatial data, as well as the creation of thematic maps.

Dimensionality reduction for visualization is widely used in visual data mining where the data is represented by high dimensional features. However, this leads to have an unbalanced and occluded distribution of visual data in display space giving rise to difficulties in browsing images. In this paper, we propose an approach to the visualization of images in a 3D display space in such a way that: (1) images are not occluded and the provided space is used efficiently; (2) similar images are positioned close together. An immersive virtual environment is employed as a 3D display space. Experiments are performed on an optical image dataset represented by color features. A library of dimensionality reduction is employed to reduce the dimensionality to 3D. The results confirm that the proposed technique can be used in immersive visual data mining for exploring and browsing large-scale datasets.

In this paper, we evaluate sample selection strategies based on optimum experimental design for SAR image classification. Traditionally, support vector machine active learning is widely used by selecting the samples close to the decision surface. Recently, new methods based on optimum experimental design have been developed. To gain a complete understanding of these selection strategies, a comparative study on three approaches, transductive experimental design, manifold adaptive experimental design and locally linear reconstruction, has been performed for SAR image classification using different features. Among the three approaches, we show that manifold adaptive experimental design performs best and stably in terms of both accuracy and computational complexity.

Large volume of detailed features of land covers, provided by High-Resolution Earth Observation (EO) images, has attracted the interests to assess the discovery of these features by Content-Based Image Retrieval systems. In this paper, we perform Latent Dirichlet Allocation (LDA) on the Bag-of-Words (BoW) representation of collections of EO images to discover their high-level features, so-called topics. To assess the discovered topics, the images are represented based on the occurrence of different topics, we name it Bag-of-Topics (BoT). Then, the BoT model is compared to the BoW model of images based on the given human-annotations of the data. In our experiments, we compare the classification accuracy resulted by BoT and BoW representations of two different EO datasets, a Synthetic Aperture Radar (SAR) dataset and a multi-spectral satellite dataset. Moreover, we provide visualizations of feature space for better perceiving the changes in the discovered information by BoT and BoW models. Experimental results demonstrate that the dimensionality of the data can be reduced by BoT representation of images; while it either causes no significant reduction in the classification accuracy or even increase the accuracy by sufficient number of topics.

In the context of Earth Observation (EO), image information retrieval systems have gained importance as a way to explore terabytes of archive data. Concurrently, evaluation of these systems becomes a topic. Evaluation has typically been conducted in the form of metrics such as Precision Recall measures, with more recent approaches attempting to include the user in the evaluation process. This paper presents a more user centered evaluation of a CBIR tool in an EO context. The evaluation methodology involved open ended user feedback, which was then inductively categorized, and its distribution and content were analyzed. Results are presented, with conclusions indicating certain aspects of the user experience cannot be obtained from metrics alone, but can be complementary to metrics.

This paper presents SAR patch categorization based on feature descriptors within the dual tree complex wavelet transform using non-parametric features, which were estimated for each wavelet based subband, which was additionally transformed using a Fourier transform. Spectral properties of wavelet transform were characterized using the first and second moments, Kolmogorov Sinai entropy and coding gain within an oriented dual tree complex wavelet transform (2D ODTCWT). A database with 2000 images representing 20 different classes with 100 images per class was used for estimation of classification efficiency. A window size for estimation feature parameters was estimated. A supervised learning stage was implemented with support vector machine using 10% and 20% of the test images per class. The experimental results showed that the non-parametric features achieved 94.3% accuracy, when 20% of the database was used for supervised training.

This paper presents an application of visual data mining technique to Earth-Observation images for exploring very large image archives. We present a visual data mining workstation solution and create some use cases in order to demonstrate its functionality. This tool allows interactive exploration and analysis of very large, high complexity, and non-visual data sets stored into a database by using human-machine communication. The tool relies on image processing components that transform the image content to primitive feature vectors and a graphical user interface, which allows the exploration of the entire image archive. The use cases are based on Synthetic Aperture Radar images, digital orthophotos and photos in-situ.

α-trees provide a hierarchical representation of an image into partitions of regions with increasing heterogeneity. This model, inspired from the single-linkage paradigm, has recently been revisited for grayscale images and has been successfully used in the field of remote sensing. This article shows how this representation can be adapted to more complex data, here hyperspectral images, according to different strategies. We know that the measure of distance between two neighbouring pixels is a key element for the quality of the underlying tree, but usual metrics are not satisfying. We show here that a relevant solution to understand hyperspectral data relies on the prior learning of the metric to be used and the exploitation of domain knowledge.

The multitude of sensors used to acquire Earth Observation (EO) images have led to the creation of an extremely various collection of data. Along with appropriate methods able to work with great amount of data, the information retrieval process requires algorithms to cope with a range of input imagery. Even if the geometry and the manner of creating Synthetic Aperture Radar (SAR) images are totally different than multispectral data, there are attempts of finding a common ground such that optical image indexing algorithms can be applied for SAR data and vice versa. Moreover, new concepts must be defined in order to obtain satisfying results, enabling measurements and comparisons between the extracted features [4]. Regarding this idea, the goal is to develop an application capable to join feature extraction algorithms and classification algorithms. Its success will sustain the integration of a reliable EO data search engine. This paper presents a framework for feature extraction and classification aiming to support EO image annotation. Weber Local Descriptors (WLD), Gabor filter and Support Vector Machine (SVM) are combined in order to define an application to be tested on both SAR and optical data.

We introduce a map algebra based on a cochain extension of the Linear Algebraic Representation (LAR), used to efficiently represent and query geometric and physical information through sparse matrix algebra. LAR, based on standard algebraic topology methods, supports all incidence structures, including enumerative (images), decompositive (meshes) and boundary (CAD) representations, is dimension-independent and not restricted to regular complexes. This algebraic representation enjoys a neat mathematical format, being based on chains, the domains of discrete integration, and cochains, the discrete prototype of differential forms, so naturally integrating the geometric shape with the supported physical properties, and provides a mechanism for strongly typed representation of all physical quantities associated with images. It is easy to show that k-cochains form a linear vector space over k-cells, which means that they can be used as basic objects in a rich and virtually unlimited calculus of physical properties.

In this paper, we present a knowledge-driven content-based information mining system for data fusion in Big Data. The tool combines, at pixel level, the unsupervised clustering results of different number of features, extracted from different image types, with a user given semantic concepts in order to calculate the posterior probability that allows the final search. The system is able to learn different semantic labels based on Bayesian networks and retrieve the related images with only a few user interactions, greatly optimizing the computational costs and over performing existing similar systems in various orders of magnitude.
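
The Bag-of-Topics representation mentioned above can also be illustrated briefly: LDA is fitted on bag-of-words count vectors of image tiles, and each tile is then described by its topic distribution. The data and parameters below are synthetic placeholders, not the experiments reported in the abstract.

```python
# A minimal sketch of the Bag-of-Topics (BoT) idea: LDA on bag-of-words
# counts, then each tile is represented by its topic mixture. The random
# counts and the parameter values are illustrative assumptions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
bow = rng.integers(0, 20, size=(200, 512))      # 200 tiles, 512 visual words

lda = LatentDirichletAllocation(n_components=10, random_state=0)
bot = lda.fit_transform(bow)                    # (200, 10) topic distributions

# 'bot' is far lower-dimensional than 'bow' and can be fed to the same
# classifier used for the bag-of-words baseline.
```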


IEEE Access | 2018

Understanding Heterogeneous EO Datasets: A Framework for Semantic Representations

Corina Vaduva; Florin-Andrei Georgescu; Mihai Datcu


International Geoscience and Remote Sensing Symposium | 2017

Visual data mining applied on earth observation datasets

Andreea Griparis; Florin-Andrei Georgescu; Mihai Datcu


Archive | 2017

Land Cover Classification using Sentinel-2 Data

Florin-Andrei Georgescu; Corina Vaduva; Mihai Datcu


Archive | 2016

Patch-based Image Classification for Sentinel-1 and Sentinel-2 Earth Observation Image Data Products

Florin-Andrei Georgescu; Radu Tanase; Mihai Datcu; Dan Raducanu


Archive | 2015

Gabor and Weber Local Descriptors performance in multispectral Earth Observation image data analysis

Florin-Andrei Georgescu; Mihai Datcu; Dan Raducanu

Collaboration


Dive into Florin-Andrei Georgescu's collaborations.

Top Co-Authors

Mihai Datcu
École Polytechnique Fédérale de Lausanne

Dan Raducanu
Military Technical Academy

Corina Vaduva
Politehnica University of Bucharest

Radu Tanase
Politehnica University of Bucharest

Anamaria Radoi
Politehnica University of Bucharest

Andreea Griparis
Politehnica University of Bucharest