Hannes Kazianka
Alpen-Adria-Universität Klagenfurt
Publications
Featured research published by Hannes Kazianka.
Computers & Geosciences | 2011
Hannes Kazianka; Jürgen Pilz
Classical Bayesian spatial interpolation methods are based on the Gaussian assumption and therefore lead to unreliable results when applied to extreme-valued data. In particular, they give incorrect estimates of the prediction uncertainty. Copulas have recently attracted much attention in spatial statistics and are used as a flexible alternative to traditional methods for non-Gaussian spatial modeling and interpolation. We adopt this methodology and show how it can be incorporated in a Bayesian framework by assigning priors to all model parameters. In the absence of simple analytical expressions for the joint posterior distribution, we propose a Metropolis-Hastings algorithm to obtain posterior samples. The posterior predictive density is approximated by averaging the plug-in predictive densities. Furthermore, we discuss the deficiencies of the existing spatial copula models with regard to modeling extreme events. It is shown that the non-Gaussian χ²-copula model suffers from the same lack of tail dependence as the Gaussian copula and thus offers no advantage over the latter with respect to modeling extremes. We illustrate the proposed methodology by analyzing a dataset here referred to as the Helicopter dataset, which includes strongly skewed radioactivity measurements in the city of Oranienburg, Germany.
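To make the averaging step concrete: given posterior samples from the Metropolis-Hastings run, the posterior predictive density at an unobserved location is approximated by the mean of the plug-in predictive densities evaluated at those samples. The sketch below only illustrates this generic pattern; the log-posterior and plug-in density shown are toy placeholders, not the copula model of the paper.

```python
import numpy as np

# Hypothetical stand-ins for the model-specific pieces; the paper's copula
# likelihood, priors and conditional predictive density would be plugged in here.
def log_posterior(theta, data):
    # toy Gaussian target in place of copula log-likelihood + log-priors
    return -0.5 * np.sum((theta - data.mean()) ** 2)

def plugin_predictive_density(z0, theta, data):
    # toy Gaussian plug-in predictive density of Z(x0) given data and theta
    return np.exp(-0.5 * (z0 - theta[0]) ** 2) / np.sqrt(2 * np.pi)

def metropolis_hastings(data, n_iter=5000, step=0.5, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    logp = log_posterior(theta, data)
    samples = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(dim)   # random-walk proposal
        logp_prop = log_posterior(proposal, data)
        if np.log(rng.uniform()) < logp_prop - logp:          # accept/reject step
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())
    return np.array(samples)

def posterior_predictive(z0_grid, data, samples, burn_in=1000):
    # average the plug-in predictive densities over the retained posterior samples
    kept = samples[burn_in:]
    return np.mean(
        [plugin_predictive_density(z0_grid, th, data) for th in kept], axis=0
    )

data = np.random.default_rng(1).normal(size=50)
samples = metropolis_hastings(data)
density = posterior_predictive(np.linspace(-3, 3, 61), data, samples)
```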
Archive | 2010
Hannes Kazianka; Jürgen Pilz
It is common practice in geostatistics to use the variogram to describe the spatial dependence structure of the underlying random field. However, the variogram is sensitive to outlying observations and strongly influenced by the marginal distribution of the random field. As an alternative to spatial modeling using the variogram, we consider describing the spatial correlation by means of copula functions. We present three methods for performing spatial interpolation using copulas. By exploiting the relationship between bivariate copulas and indicator covariances, the first method performs indicator kriging and disjunctive kriging. As a second method, we propose a simple kriging of the rank-transformed data (see the sketch below). The third method is a plug-in Bayes predictor, where the predictive distribution is calculated using the conditional copula given the observed data and the model parameters. We show that the latter approach generalizes the frequently applied trans-Gaussian kriging. Finally, we report on the results obtained for the so-called Joker data set from the spatial interpolation comparison SIC2004.
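The second of the three methods can be sketched as follows: the observations are converted to normal scores through their empirical ranks, simple kriging is applied to the scores, and the prediction is mapped back through the empirical quantile function. The exponential correlation model with a fixed range used below is an assumption made purely for illustration under a Gaussian copula, not the chapter's actual parameter choices.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import norm

def rank_transform(z):
    # empirical ranks -> uniform scores -> normal scores
    n = len(z)
    u = (np.argsort(np.argsort(z)) + 1) / (n + 1)
    return norm.ppf(u)

def simple_kriging_ranks(coords, z, x0, corr_range=2.0):
    y = rank_transform(z)                                   # normal scores
    c = np.exp(-cdist(coords, coords) / corr_range)         # exponential correlation
    c0 = np.exp(-cdist(coords, x0[None, :]) / corr_range).ravel()
    weights = np.linalg.solve(c, c0)                        # simple-kriging weights
    y0 = weights @ y                                        # predicted normal score
    u0 = norm.cdf(y0)                                       # back to the uniform scale
    return np.quantile(z, u0)                               # empirical quantile back-transform

coords = np.random.default_rng(0).uniform(0, 10, size=(30, 2))
z = np.random.default_rng(1).lognormal(size=30)             # skewed toy data
print(simple_kriging_ranks(coords, z, x0=np.array([5.0, 5.0])))
```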
Stochastic Environmental Research and Risk Assessment | 2013
Hannes Kazianka
The present paper reports on the use of copula functions to describe the distribution of discrete spatial data, e.g. count data from environmental mapping or areal data analysis. In particular, we consider approaches to parameter point estimation and propose a fast method to perform approximate spatial prediction in copula-based spatial models with discrete marginal distributions. We assess the goodness of the resulting parameter estimates and predictors under different spatial settings and guide the analyst on which approach to apply for the data at hand. Finally, we illustrate the methodology by analyzing the well-known Lansing Woods data set. Software that implements the methods proposed in this paper is freely available in Matlab language on the author’s website.
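For orientation, a copula with discrete margins no longer yields a density directly; probabilities are obtained from rectangle probabilities of the copula. In the bivariate case this is the standard identity

```latex
P(Z_1 = z_1,\ Z_2 = z_2)
  = C\bigl(F_1(z_1), F_2(z_2)\bigr)
  - C\bigl(F_1(z_1^-), F_2(z_2)\bigr)
  - C\bigl(F_1(z_1), F_2(z_2^-)\bigr)
  + C\bigl(F_1(z_1^-), F_2(z_2^-)\bigr),
```

where F_i(z^-) denotes the left limit P(Z_i < z). In d dimensions the analogous sum runs over 2^d vertices, which is the usual computational bottleneck for copula models with discrete margins.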
Stochastic Environmental Research and Risk Assessment | 2013
Hannes Kazianka
The spatialCopula toolbox contains a set of Matlab functions that provides utilities for copula-based analysis of spatially referenced data, a topic that has recently attracted much attention in spatial statistics. These tools have been developed to support the workflow in parameter estimation, spatial interpolation and visualization. They offer flexible and user-friendly software for dealing with non-Gaussian and extreme value data that possibly contain a spatial trend or geometric anisotropy. The objective of this paper is to give an introduction to the concept behind the software and to outline the functionality of the toolbox. We illustrate its usefulness by analyzing a data set here referred to as the Gomel data set, which includes moderately skewed radioactivity measurements in the region of Gomel, Belarus. The source codes are freely available in Matlab language on the author’s website (fam.tuwien.ac.at/~hakazian/software.html).
Computational Statistics & Data Analysis | 2012
Hannes Kazianka
The issue of objective prior specification for the parameters in the normal compositional model is considered within the context of statistical analysis of linearly mixed structures in image processing. In particular, the Jeffreys prior for the vector of fractional abundances in the case of a known covariance matrix is derived. If an additional unknown variance parameter is present, the Jeffreys prior and the reference prior are computed and it is proven that the resulting posterior distributions are proper. Markov chain Monte Carlo strategies are proposed to efficiently sample from the posterior distributions and the priors are compared on the grounds of the frequentist properties of the resulting Bayesian inferences. The default Bayesian analysis is illustrated by a dataset taken from fluorescence spectroscopy.
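For reference, the Jeffreys prior used here is the standard one defined through the Fisher information of the sampling model; the concrete form for the normal compositional model is derived in the paper and is not reproduced in this generic statement.

```latex
\pi_J(\theta) \propto \sqrt{\det I(\theta)},
\qquad
I(\theta)_{ij} = -\,\mathbb{E}_\theta\!\left[
  \frac{\partial^2 \log p(y \mid \theta)}{\partial \theta_i \,\partial \theta_j}
\right].
```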
Statistical Methods and Applications | 2014
Muhammad Mohsin; Hannes Kazianka; Jürgen Pilz; Albrecht Gebhardt
This paper introduces a new bivariate exponential distribution, called the Bivariate Affine-Linear Exponential distribution, to model moderately negatively dependent data. The construction and characteristics of the proposed bivariate distribution are presented along with estimation procedures for the model parameters based on maximum likelihood and objective Bayesian analysis. We derive the Jeffreys prior and discuss its frequentist properties based on a simulation study and MCMC sampling techniques. A real data set of mercury concentration in largemouth bass from Florida lakes is used to illustrate the methodology.
Journal of Applied Statistics | 2011
Hannes Kazianka; Michael Mulyk; Jürgen Pilz
In this paper, we study a new Bayesian approach for the analysis of linearly mixed structures. In particular, we consider the case of hyperspectral images, which have to be decomposed into a collection of distinct spectra, called endmembers, and a set of associated proportions for every pixel in the scene. This problem, often referred to as spectral unmixing, is usually considered on the basis of the linear mixing model (LMM). In unsupervised approaches, the endmember signatures have to be calculated by an endmember extraction algorithm, which generally relies on the assumption that there are pure (unmixed) pixels contained in the image. In practice, this assumption may not hold for highly mixed data and consequently the extracted endmember spectra differ from the true ones. A way out of this dilemma is to consider the problem under the normal compositional model (NCM). Contrary to the LMM, the NCM treats the endmembers as random Gaussian vectors rather than deterministic quantities. Existing Bayesian approaches for estimating the proportions under the NCM are restricted to the case in which the covariance matrix of the Gaussian endmembers is a multiple of the identity matrix. Consequently, such a model is not suitable when the variance differs from one spectral channel to another, which is a common phenomenon in practice. In this paper, we first propose a Bayesian strategy for the estimation of the mixing proportions under the assumption of varying variances in the spectral bands. Then we generalize this model to handle the case of a completely unknown covariance structure. For both algorithms, we present Gibbs sampling strategies and compare their performance with other state-of-the-art unmixing routines on synthetic as well as on real hyperspectral fluorescence spectroscopy data.
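The contrast between the two models can be stated compactly. The notation below is generic (pixel spectra y_i mixed from R endmembers) and only restates the textbook definitions; the covariance parameterizations considered in the paper (per-band variances and a fully unknown covariance) are refinements of the second line.

```latex
% Linear mixing model (LMM): fixed endmember matrix M
y_i = M a_i + n_i, \qquad a_i \ge 0,\ \ \mathbf{1}^\top a_i = 1,\ \
n_i \sim \mathcal{N}(0, \sigma^2 I)

% Normal compositional model (NCM): random Gaussian endmembers
y_i = \sum_{k=1}^{R} a_{ik}\, e_{ik}, \qquad
e_{ik} \sim \mathcal{N}(\mu_k, \Sigma_k)
```

The LMM treats the endmember spectra as fixed columns of M, while the NCM draws them as Gaussian vectors, which relaxes the requirement that pure pixels be present in the image.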
GfKl | 2008
Hannes Kazianka; Raimund Leitner; Jürgen Pilz
Supervised classification methods require reliable and consistent training sets. In image analysis, where class labels are often assigned to the entire image, the manual generation of pixel-accurate class labels is tedious and time-consuming. We present an independent component analysis (ICA)-based method to generate these pixel-accurate class labels with minimal user interaction. The algorithm is applied to the detection of skin cancer in hyperspectral images. Using this approach it is possible to remove artifacts caused by sub-optimal image acquisition. We report on the classification results obtained for the hyperspectral skin cancer data set with 300 images using support vector machines (SVM) and model-based discriminant analysis (MclustDA, MDA).
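A rough sketch of such a pipeline, written with scikit-learn's FastICA and SVC as generic stand-ins (the component selection, thresholding and artifact handling below are hypothetical simplifications, not the labeling rules of the paper):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def label_pixels_via_ica(cube, component=0, threshold=None):
    """Derive approximate pixel labels from one ICA component of a hyperspectral cube.

    cube: array of shape (height, width, bands). The choice of component and
    threshold here is a hypothetical illustration, not the paper's procedure.
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    sources = FastICA(n_components=3, random_state=0).fit_transform(pixels)
    score = sources[:, component]
    if threshold is None:
        threshold = np.median(score)
    return pixels, (score > threshold).astype(int)   # crude binary pixel labels

# Toy cube standing in for a hyperspectral image
cube = np.random.default_rng(0).normal(size=(32, 32, 20))
X, y = label_pixels_via_ica(cube)

# Train a pixel-wise SVM on the automatically generated labels
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.score(X, y))
```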
Computers & Geosciences | 2013
Muhammad Mohsin; Hannes Kazianka; Jürgen Pilz
Modeling the acidity in rainfall at certain locations is a complex task because of different environmental conditions for different rainfall regimes and the large variability in the covariates involved. In this paper, the concentrations of acidity and major ions in rainfall in the UK are analyzed by assuming a bivariate pseudo-Gamma distribution. The model parameters are estimated by using the maximum likelihood method and the goodness of fit is checked. Furthermore, the non-informative Jeffreys prior for the distribution parameters is derived and a hybrid Gibbs sampling strategy is proposed to sample the corresponding posterior for conducting an objective Bayesian analysis. Finally, related quantities such as the deposition flux density are derived; the general pattern of the observed data appears to follow the fitted densities closely.
Stochastic Environmental Research and Risk Assessment | 2010
Hannes Kazianka; Jürgen Pilz