Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gintautas Palubinskas is active.

Publication


Featured research published by Gintautas Palubinskas.


IEEE Transactions on Signal Processing | 2000

Bayesian approaches to phase unwrapping: theoretical study

Giovanni Nico; Gintautas Palubinskas; Mihai Datcu

The problem of phase unwrapping of two-dimensional (2-D) phase signals has gained considerable interest. It deals with the problem of estimating (reconstructing) an absolute phase from the observation of its noisy principal (wrapped) values. This is an ill-posed problem, since many possible solutions correspond to a given observation. Many phase unwrapping algorithms have been proposed, relying on different constraints on the phase signal sampling process or on the nature (e.g., smoothness, regularity) of the phase signal. We look at these algorithms from the Bayesian point of view (estimation theory) and analyze the role of the prior assumptions, studying their equivalence to the regularization constraints already in use. This study leads to the development of two new phase unwrapping algorithms which are able to work under quite difficult conditions of aliasing and noise. The theoretical study of the analyzed schemes is illustrated by experiments on synthetic phase signals.
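
As a minimal illustration of the problem itself (not of the Bayesian algorithms developed in the paper), the sketch below wraps a synthetic 1-D phase signal into its principal values and recovers it by naive integration of the wrapped differences, which only works when the phase changes slowly enough between samples and the noise is mild.

```python
import numpy as np

# Minimal illustration of the phase-unwrapping problem (not the Bayesian
# algorithms of the paper): wrap a smooth 1-D phase signal into (-pi, pi]
# and recover it with naive integration of the wrapped phase differences.

def wrap(phi):
    """Map phase values to the principal interval (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

def unwrap_1d(psi):
    """Naive 1-D unwrapping: integrate the wrapped phase differences.
    Works only when the true phase changes by less than pi per sample
    (no aliasing) and the noise is mild."""
    dpsi = wrap(np.diff(psi))
    return psi[0] + np.concatenate(([0.0], np.cumsum(dpsi)))

if __name__ == "__main__":
    x = np.linspace(0, 1, 200)
    true_phase = 12 * np.pi * x**2            # smooth absolute phase
    noisy = true_phase + 0.1 * np.random.randn(x.size)
    wrapped = wrap(noisy)                      # observed principal (wrapped) values
    estimate = unwrap_1d(wrapped)
    print("max abs error:", np.max(np.abs(estimate - noisy)))
```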


International Journal of Image and Data Fusion | 2010

Image acquisition geometry analysis for the fusion of optical and radar remote sensing data

Gintautas Palubinskas; Peter Reinartz; Richard Bamler

Fusion of optical and radar remote sensing data has recently become a topic of active discussion in various application areas, though the results are not always satisfactory. In this article, we analyse some disturbing effects that arise when fusing orthoimages from sensors with different acquisition geometries. These effects are caused by errors in the digital elevation models (DEM) used for image orthorectification and by 3-D objects in the scene that are not accounted for in the DEM. We analyse how these effects influence the ground displacement in orthoimages produced from optical and radar data. Further, we propose sensor formations with acquisition geometry parameters which allow ground displacements between the different orthoimages to be minimised or compensated for, creating good prerequisites for subsequent fusion in specific application areas, e.g. matching, filling data gaps and classification. To demonstrate the potential of the proposed approach, two pairs of optical-radar data were acquired over an urban area, the city of Munich, Germany. The first collection of WorldView-1 and TerraSAR-X (TS-X) data followed the proposed recommendations for the acquisition geometry parameters, whereas the second collection of IKONOS and TS-X data was acquired with arbitrary parameters. The experiment fully confirmed our expectations. Moreover, it opens new possibilities for optical and radar image fusion.
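
The following back-of-the-envelope sketch uses the standard textbook geometry (not the paper's full analysis) to show how an unmodelled object height, or a DEM error of the same size, displaces a point in optical and radar orthoimages; the angle values are illustrative assumptions.

```python
import math

# Back-of-the-envelope sketch (standard textbook geometry, not the paper's
# full analysis): horizontal displacement in an orthoimage caused by an
# unmodelled object height h (or a DEM error of the same size).
#
# Optical (central projection): the object top is shifted away from the
# sensor by roughly h * tan(view_angle).
# SAR (range projection): the object top is shifted towards the sensor
# (layover) by roughly h / tan(incidence_angle).

def optical_displacement(h_m, view_angle_deg):
    return h_m * math.tan(math.radians(view_angle_deg))

def sar_displacement(h_m, incidence_angle_deg):
    return h_m / math.tan(math.radians(incidence_angle_deg))

if __name__ == "__main__":
    h = 20.0  # e.g. a 20 m building not contained in the DEM
    for view, inc in [(5.0, 25.0), (20.0, 45.0), (30.0, 55.0)]:  # illustrative angles
        print(f"view {view:4.1f} deg / incidence {inc:4.1f} deg: "
              f"optical {optical_displacement(h, view):5.1f} m, "
              f"SAR {sar_displacement(h, inc):5.1f} m")
```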


IEEE Transactions on Geoscience and Remote Sensing | 2009

Processors for ALOS Optical Data: Deconvolution, DEM Generation, Orthorectification, and Atmospheric Correction

Peter Schwind; Mathias Schneider; Gintautas Palubinskas; Tobias Storch; Rupert Müller; Rudolf Richter

The German Aerospace Center (DLR) is responsible for the development of prototype processors for PRISM and AVNIR-2 data under a contract of the European Space Agency. The PRISM processor comprises the radiometric correction, an optional deconvolution to improve image quality, the generation of a digital elevation model, and orthorectification. The AVNIR-2 processor comprises radiometric correction, orthorectification, and atmospheric correction over land. Here, we present the methodologies applied during these processing steps as well as the results achieved using the processors.
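
As a hedged illustration of one elementary building block of such radiometric processing (the DLR processors use more elaborate algorithms, e.g. atmospheric correction over land), the sketch below applies the standard conversion from at-sensor radiance to top-of-atmosphere reflectance; the gain, offset and solar irradiance values are placeholders, not ALOS calibration constants.

```python
import math

# Standard conversion from digital numbers to at-sensor radiance and then to
# top-of-atmosphere (TOA) reflectance. The calibration gain/offset and solar
# irradiance below are placeholder values, NOT ALOS constants.

def dn_to_radiance(dn, gain, offset):
    """At-sensor radiance in W / (m^2 sr um)."""
    return gain * dn + offset

def radiance_to_toa_reflectance(radiance, esun, sun_elevation_deg, earth_sun_dist_au=1.0):
    """TOA reflectance via the usual pi * L * d^2 / (ESUN * cos(theta_s)) formula."""
    sun_zenith = math.radians(90.0 - sun_elevation_deg)
    return math.pi * radiance * earth_sun_dist_au**2 / (esun * math.cos(sun_zenith))

if __name__ == "__main__":
    L = dn_to_radiance(dn=180, gain=0.588, offset=0.0)              # placeholder calibration
    rho = radiance_to_toa_reflectance(L, esun=1824.0, sun_elevation_deg=45.0)
    print(f"radiance {L:.1f} W/(m^2 sr um), TOA reflectance {rho:.3f}")
```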


Urban Remote Sensing Joint Event | 2011

Multi-resolution, multi-sensor image fusion: general fusion framework

Gintautas Palubinskas; Peter Reinartz

Multi-resolution image fusion, also known as pan-sharpening, aims to include spatial information from a high-resolution image, e.g. a panchromatic or Synthetic Aperture Radar (SAR) image, in a low-resolution image, e.g. a multi-spectral or hyper-spectral image, while preserving the spectral properties of the low-resolution image. A signal processing view of this problem allowed us to perform a systematic classification of most known multi-resolution image fusion approaches and resulted in a General Framework for image Fusion (GFF) which is well suited to the fusion of multi-sensor data such as optical-optical and optical-radar imagery. Examples are presented for WorldView-1/2 and TerraSAR-X data.
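
In the same spirit (a schematic illustration, not the authors' GFF implementation), the sketch below keeps the low spatial frequencies of an upsampled multispectral band and takes the high frequencies from the high-resolution image by combining their Fourier spectra with an ideal filter.

```python
import numpy as np
from scipy import ndimage

# Schematic frequency-domain fusion (illustrative, not the authors' GFF code):
# keep the low frequencies of the upsampled multispectral band and take the
# high frequencies from the panchromatic/SAR image.

def fuse_fft(band_lowres, hires, ratio=4):
    up = ndimage.zoom(band_lowres, ratio, order=1)           # upsample MS band
    F_up, F_hi = np.fft.fft2(up), np.fft.fft2(hires)
    h, w = up.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = (np.abs(fy) < 0.5 / ratio) & (np.abs(fx) < 0.5 / ratio)  # ideal filter
    fused = np.where(lowpass, F_up, F_hi)                    # combine the spectra
    return np.real(np.fft.ifft2(fused))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms_band = rng.random((64, 64))       # synthetic low-resolution band
    pan = rng.random((256, 256))         # synthetic high-resolution image
    print(fuse_fft(ms_band, pan).shape)  # (256, 256)
```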


IEEE Geoscience and Remote Sensing Letters | 2007

Radar Signatures of a Passenger Car

Gintautas Palubinskas; Hartmut Runge

Upcoming synthetic aperture radar (SAR) satellites such as TerraSAR-X and Radarsat-2 offer high spatial image resolution and dual receive antenna capabilities, which open new opportunities for worldwide traffic monitoring applications. If the radar cross section (RCS) of the vehicles is strong enough, they can be detected in the SAR data, and their speed can be measured. For system performance prediction and algorithm development, it is therefore indispensable to know the RCS of typical passenger cars. The geometry parameters that have to be considered are the radar look direction, the incidence angle, and the vehicle orientation. In this letter, the radar signatures of non-moving or parked cars are presented. They are measured experimentally from airborne experimental SAR (E-SAR) data collected during flight campaigns in 2005 and 2006 with multiple overflights at different aircraft headings. The radar signatures could be measured for the whole range of aspect angles from 0° to 180° and with high angular resolution due to the large synthetic aperture length of the E-SAR radar sensor. The analysis for one type of passenger car and particular incidence angles showed that the largest RCS values, and thus the greatest chance of detecting the vehicles, occur when the car is seen from the front, back, and side. RCS values for slanted views are much lower and are therefore less suitable for car detection. The measurements were performed in the X-band (9.6 GHz) with VV polarization and at incidence angles of 41.5° and 42.5°. The derived radar signature profile can also be used for the verification of RCS simulation studies.
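
As a rough sketch of how a point-target RCS can be read off calibrated SAR imagery (this is the generic integral method applied to synthetic data, not the E-SAR calibration procedure used in the letter), one can sum the calibrated backscatter over a window around the target and subtract the surrounding clutter contribution.

```python
import numpy as np

# Generic point-target RCS estimate on synthetic data (not the E-SAR
# calibration procedure): integrate calibrated backscatter over a window
# around the target and subtract the expected clutter contribution.

def point_target_rcs(sigma0, target_slice, clutter_slice, pixel_area_m2):
    """sigma0: 2-D array of calibrated backscatter (m^2/m^2, linear scale).
    target_slice / clutter_slice: tuples of slices selecting the two regions."""
    target = sigma0[target_slice]
    clutter_mean = sigma0[clutter_slice].mean()
    # RCS in m^2: (target energy minus expected clutter energy) times pixel area
    return (target.sum() - clutter_mean * target.size) * pixel_area_m2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.exponential(0.05, size=(100, 100))    # synthetic clutter background
    img[48:52, 48:52] += 2.0                        # synthetic bright point target
    rcs = point_target_rcs(img,
                           (slice(45, 55), slice(45, 55)),
                           (slice(0, 20), slice(0, 20)),
                           pixel_area_m2=0.5 * 0.5)
    print(f"estimated RCS: {rcs:.2f} m^2 ({10 * np.log10(rcs):.1f} dBsm)")
```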


International Conference on Pattern Recognition | 1998

An unsupervised clustering method using the entropy minimization

Gintautas Palubinskas; Xavier Descombes; Frithjof Kruggel

We address the problem of unsupervised clustering using a Bayesian framework. Entropy is used to define the prior, which makes it possible to overcome the problems of specifying the number of clusters and of initializing their centers in advance. A deterministic algorithm derived from the standard k-means algorithm is proposed and compared with simulated annealing algorithms. The robustness of the proposed method is shown on a magnetic resonance image database containing 65 volumetric (3-D) images.
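
The paper's deterministic algorithm is not reproduced here; as a generic sketch of the underlying idea, the code below scores a k-means fit with an added entropy term over the cluster proportions and compares the combined cost for different numbers of clusters (the weight lam is an illustrative assumption).

```python
import numpy as np

# Generic sketch (not the paper's algorithm): score a k-means clustering by a
# within-cluster error term plus an entropy term over the cluster proportions,
# and compare the combined cost across different numbers of clusters.

def kmeans(x, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    return centers, d.argmin(axis=1)          # final assignment to the centers

def entropy_cost(x, centers, labels, lam=5.0):
    sse = ((x - centers[labels]) ** 2).sum()                    # data-fit term
    p = np.bincount(labels, minlength=len(centers)) / len(x)    # cluster proportions
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return sse + lam * len(x) * entropy                         # penalised cost

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    data = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (0.0, 3.0, 6.0)])
    for k in range(2, 7):
        centers, labels = kmeans(data, k)
        print(k, round(entropy_cost(data, centers, labels), 1))
```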


Journal of Applied Remote Sensing | 2013

Fast, simple and good pan-sharpening method

Gintautas Palubinskas

Pan-sharpening of optical remote sensing multispectral imagery aims to include spatial information from a high-resolution image (high frequencies) in a low-resolution image (low frequencies) while preserving the spectral properties of the low-resolution image. From a signal processing view, a general fusion filtering framework (GFF) can be formulated which is well suited to the fusion of multi-resolution and multi-sensor data such as optical-optical and optical-radar imagery. To reduce computation time, a simple and fast variant of GFF, the high-pass filtering method (HPFM), is proposed, which performs filtering in the signal domain and thus avoids time-consuming FFT computations. A new joint quality measure, based on the combination of a spectral and a spatial measure with a proper normalization of the ranges of the variables, is proposed for quality assessment. Quality and speed of six pan-sharpening methods, namely component substitution (CS), Gram-Schmidt (GS) sharpening, Ehlers fusion, Amélioration de la Résolution Spatiale par Injection de Structures, GFF, and HPFM, were evaluated on WorldView-2 satellite remote sensing data. Experiments showed that the HPFM method outperforms all other fusion methods used in this study, even its parent method GFF. Moreover, it is more than four times faster than the GFF method and competitive with the CS and GS methods in speed.
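
A minimal spatial-domain sketch of this kind of high-pass injection (illustrative only, not the published HPFM implementation) could look as follows: the detail extracted from the panchromatic image with a simple box filter is added to each upsampled multispectral band.

```python
import numpy as np
from scipy import ndimage

# Minimal spatial-domain high-pass fusion sketch (illustrative, not the
# published HPFM code): detail from the panchromatic image is injected into
# each upsampled multispectral band.

def hpf_pansharpen(ms, pan, ratio=4, size=5, weight=1.0):
    """ms: (bands, h, w) low-resolution multispectral image,
    pan: (h*ratio, w*ratio) high-resolution panchromatic image."""
    detail = pan - ndimage.uniform_filter(pan, size=size)     # high frequencies
    sharpened = [ndimage.zoom(b, ratio, order=3) + weight * detail for b in ms]
    return np.stack(sharpened)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((4, 64, 64))            # synthetic 4-band low-resolution image
    pan = rng.random((256, 256))            # synthetic high-resolution image
    print(hpf_pansharpen(ms, pan).shape)    # (4, 256, 256)
```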


Journal of Applied Remote Sensing | 2012

Analysis and selection of pan-sharpening assessment measures

Aliaksei Makarau; Gintautas Palubinskas; Peter Reinartz

Pan-sharpening of remote sensing multispectral imagery directly influences the accuracy of interpretation, classification, and other data mining methods. Different tasks of multispectral image analysis and processing require specific properties of the input pan-sharpened multispectral data, such as spectral and spatial consistency, as well as different levels of complexity of the pan-sharpening method. The quality of a pan-sharpened image is assessed using quantitative measures. Generally, the quantitative measures for pan-sharpening assessment are borrowed from other areas of image processing (e.g., image similarity indexes), but their applicability, i.e. whether a measure provides a correct and undistorted assessment of pan-sharpened imagery, is not checked and proven. Whether a given quantitative measure should be used for pan-sharpening assessment is therefore still an open research question, and some measures may produce distorted quality assessments, which puts their suitability for pan-sharpened imagery assessment into question. Our aim is to perform a statistical analysis of widely employed measures for remote sensing imagery pan-sharpening assessment and to show which of the measures are the most suitable for use. To find and prove which measures are most suitable, sets of multispectral images are processed by the general fusion framework method (GFF), a general image fusion method, with varying parameters. Varying the method parameter values allows one to produce imagery with predefined quality (i.e., spatial and spectral consistency) for a subsequent statistical analysis of the assessment measures. Using imagery from several widely used multispectral sensors (Landsat 7 ETM+, IKONOS, and WorldView-2) allows the available quality assessment measures to be assessed and compared, illustrating which of them are most suitable for each satellite. Experimental analysis illustrates the adequate assessment decisions produced by the selected measures for the results of representative pan-sharpening methods.
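
For illustration, two measures widely used in this literature, the spectral angle mapper (SAM) for spectral consistency and the correlation coefficient for spatial similarity, can be computed as follows (the paper analyses a larger set of measures).

```python
import numpy as np

# Two measures widely used in the pan-sharpening literature, sketched for
# illustration: the spectral angle mapper (SAM) between a fused and a
# reference multispectral image, and the Pearson correlation between a fused
# band and the panchromatic image.

def sam_degrees(fused, reference):
    """Mean spectral angle in degrees; both images are (bands, h, w)."""
    f = fused.reshape(fused.shape[0], -1)
    r = reference.reshape(reference.shape[0], -1)
    cos = (f * r).sum(0) / (np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

def correlation(a, b):
    """Pearson correlation coefficient between two single-band images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((4, 128, 128))                         # synthetic reference image
    fused = ref + 0.05 * rng.standard_normal(ref.shape)     # synthetic "fused" image
    pan = ref.mean(axis=0)                                  # synthetic panchromatic image
    print(f"SAM: {sam_degrees(fused, ref):.2f} deg, "
          f"corr(band 0, pan): {correlation(fused[0], pan):.3f}")
```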


International Geoscience and Remote Sensing Symposium | 2008

Detection of Traffic Congestion in Optical Remote Sensing Imagery

Gintautas Palubinskas; Franz Kurz; Peter Reinartz

A new approach for traffic congestion detection in time series of optical digital camera images is proposed. It is well suited to deriving various traffic parameters such as vehicle density, average vehicle velocity, the beginning and end of congestion, and the length of congestion, and to other traffic monitoring applications. The method is based on vehicle detection on a road segment by change detection between two images with a short time lag, the use of a priori information such as a road database, vehicle sizes and road parameters, and a simple linear traffic model based on the spacing between vehicles. The estimated velocity profiles for experimental data acquired by an airborne optical remote sensing sensor, the 3K camera system, coincide well with the reference measurements.
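
The paper's exact model parameters are not reproduced here; as an illustrative sketch, the classic linear (Greenshields-type) speed-density relation below shows how a vehicle count on a road segment can be turned into a velocity estimate and a congestion flag (all numerical values are assumptions).

```python
# Illustrative sketch of deriving traffic parameters from detected vehicles.
# The linear speed-density relation is the classic Greenshields model, used
# here as an assumption; all numerical values are placeholders.

def traffic_state(n_vehicles, segment_km, mean_vehicle_len_m=5.0,
                  v_free_kmh=100.0, min_gap_m=2.0):
    density = n_vehicles / segment_km                     # vehicles per km
    # jam density from vehicle length plus a minimal standstill gap
    density_jam = 1000.0 / (mean_vehicle_len_m + min_gap_m)
    velocity = max(0.0, v_free_kmh * (1.0 - density / density_jam))
    congested = velocity < 30.0                           # illustrative threshold
    return density, velocity, congested

if __name__ == "__main__":
    for n in (10, 60, 120):
        density, velocity, congested = traffic_state(n, segment_km=1.0)
        print(f"{n:3d} vehicles/km -> {velocity:5.1f} km/h, congested: {congested}")
```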


Urban Remote Sensing Joint Event | 2011

Interpretation of SAR images in urban areas using simulated optical and radar images

Junyi Tao; Gintautas Palubinskas; Peter Reinartz; Stefan Auer

Because of their all-weather, day-and-night data acquisition capability, high-resolution spaceborne synthetic aperture radar (SAR) sensors play an important role in remote sensing applications such as earth mapping. However, the visual interpretation of SAR images is usually difficult, especially for urban areas. This paper presents a method for visually interpreting SAR images by means of optical and SAR images simulated from digital elevation models (DEM) derived from LiDAR data. The simulated images are automatically geocoded and enable a direct comparison with the real SAR image. The simulation concept is demonstrated for the city center of Munich, where the comparison with TerraSAR-X data shows good similarity. The simulated optical image can be used for the direct and quick identification of objects in the corresponding SAR image. Additionally, the simulated SAR image can separate multiple reflections that are mixed in the real SAR image, thus enabling easier interpretation of an urban scene.

Collaboration


Dive into Gintautas Palubinskas's collaborations.

Top co-author: Mihai Datcu (German Aerospace Center)