Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stefan Kindermann is active.

Publication


Featured research published by Stefan Kindermann.


Multiscale Modeling & Simulation | 2005

Deblurring and Denoising of Images by Nonlocal Functionals

Stefan Kindermann; Stanley Osher; Peter W. Jones

This paper investigates the use of regularization functionals with nonlocal correlation terms for the problem of image denoising and image deblurring. These functionals are expressed as integrals over the Cartesian product of the pixel space. We show that the class of neighborhood filters can be described in this framework. Using these functionals, we consider the functional analytic properties of some of these neighborhood filters and show how they can be seen as regularization terms using a smoothed version of the Prokhorov metric. Moreover, we define a nonlocal variant of the well-known bounded variation regularization, which does not suffer from the staircase effect. We show existence of a minimizer of the corresponding regularization functional for the denoising and deblurring problem, and we present some numerical examples comparing the nonlocal version to bounded variation regularization and the nonlocal means filter.
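As a schematic illustration (our notation, not a formula quoted from the paper), nonlocal regularization functionals of the kind studied here take the form

    J(u) = \int_{\Omega \times \Omega} g\left( \frac{|u(x) - u(y)|^2}{h^2} \right) w(|x - y|) \, dx \, dy,

where g is a shape function, w a spatial window, and h a filtering parameter; particular choices of g and w recover neighborhood filters such as nonlocal means.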


Evolving Systems | 2015

Generalized smart evolving fuzzy systems

Edwin Lughofer; Carlos Cernuda; Stefan Kindermann; Mahardhika Pratama

In this paper, we propose a new methodology for learning evolving fuzzy systems (EFS) from data streams in terms of on-line regression/system identification problems. It comes with enhanced dynamic complexity reduction steps, acting on model components and on the input structure, and employs generalized fuzzy rules in arbitrarily rotated position. It is thus termed Gen-Smart-EFS (GS-EFS), short for generalized smart evolving fuzzy systems. Equipped with a new projection concept for high-dimensional kernels onto one-dimensional fuzzy sets, our approach is able to provide equivalent conventional TS fuzzy systems with axis-parallel rules, thus maintaining interpretability when inferring new query samples. The on-line complexity reduction on rule level integrates a new merging concept based on a combined adjacency–homogeneity relation between two clusters (rules). On input structure level, complexity reduction is motivated by a combined statistical-geometric concept and acts in a smooth and soft manner by incrementally adapting feature weights: features may get smoothly out-weighted over time (→ soft on-line dimension reduction) but also may become reactivated at a later stage. Out-weighted features will contribute little to the rule evolution criterion, which prevents the generation of unnecessary rules and reduces over-fitting due to the curse of dimensionality. The criterion relies on a newly developed re-scaled Mahalanobis distance measure for assuring monotonicity between feature weights and distance values. Gen-Smart-EFS is evaluated on high-dimensional real-world (streaming) data sets and compared with other well-known (evolving) fuzzy systems approaches. The results show improved accuracy with lower rule base complexity as well as smaller rule length when using Gen-Smart-EFS.
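As a hedged illustration of the feature-weighting idea (our own sketch; the paper's re-scaled Mahalanobis measure differs in detail), a feature-weighted Mahalanobis-type distance could look as follows.

```python
import numpy as np

def weighted_mahalanobis(x, center, cov_inv, feature_weights):
    """Sketch of a feature-weighted Mahalanobis-type distance.

    Down-weighted features (weight near 0) contribute little, which
    mirrors the soft on-line dimension reduction described above.
    Illustrative reconstruction, not the paper's exact measure.
    """
    d = feature_weights * (x - center)          # soft feature weighting
    return float(np.sqrt(d @ cov_inv @ d))      # Mahalanobis-type norm

# Hypothetical usage: a rule (cluster) in 3-D input space
x = np.array([1.0, 2.0, 0.5])
center = np.array([0.8, 1.5, 0.0])
cov_inv = np.eye(3)                             # identity for simplicity
weights = np.array([1.0, 0.9, 0.1])             # third feature out-weighted
print(weighted_mahalanobis(x, center, cov_inv, weights))
```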


IEEE Transactions on Fuzzy Systems | 2010

SparseFIS: Data-Driven Learning of Fuzzy Systems With Sparsity Constraints

Edwin Lughofer; Stefan Kindermann

In this paper, we deal with a novel data-driven learning method [sparse fuzzy inference systems (SparseFIS)] for Takagi-Sugeno (T-S) fuzzy systems, extended by including rule weights. Our learning method consists of three phases: the first phase conducts a clustering process in the input/output feature space with iterative vector quantization and projects the obtained clusters onto 1-D axes to form the fuzzy sets (centers and widths) in the antecedent parts of the rules. Hereby, the number of clusters (= rules) is predefined and denotes a kind of upper bound on a reasonable granularity. The second phase optimizes the rule weights in the fuzzy systems with respect to a least-squares error measure by applying a sparsity-constrained steepest-descent optimization procedure. Depending on the sparsity threshold, the weights of many or few rules can be forced toward 0, thereby switching off (eliminating) some rules (rule selection). The third phase estimates the linear consequent parameters by a regularized sparsity-constrained optimization procedure for each rule separately (local learning approach). Sparsity constraints are applied in order to force linear parameters to be 0, triggering a feature-selection mechanism per rule. Global feature selection is achieved whenever the linear parameters of some features are (near) 0 in each rule. The method is evaluated on high-dimensional data from industrial processes and on benchmark datasets from the Internet, and compared with well-known batch-training methods in terms of accuracy and complexity of the fuzzy systems.
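Below is a minimal sketch of the idea behind the second phase, sparsity-constrained steepest descent on rule weights for a least-squares objective; the ISTA-style soft-thresholding step and all identifiers are our illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sparse_rule_weights(Phi, y, lam=0.1, step=None, iters=500):
    """Least squares on rule activations Phi (n_samples x n_rules) with a
    sparsity-promoting soft threshold after each gradient step.
    Weights driven to 0 correspond to switched-off rules."""
    n, m = Phi.shape
    w = np.zeros(m)
    if step is None:
        # safe step size: inverse of the squared spectral norm of Phi
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y)            # least-squares gradient
        w = w - step * grad                     # steepest-descent step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return w

# Hypothetical usage: 100 samples, 10 candidate rules, only 3 truly active
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 10))
w_true = np.zeros(10)
w_true[[1, 4, 7]] = [1.5, -2.0, 0.8]
y = Phi @ w_true + 0.01 * rng.standard_normal(100)
print(np.round(sparse_rule_weights(Phi, y), 2))  # most weights come out near 0
```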


Inverse Problems | 2000

Identification of the diffusion coefficient in a one-dimensional parabolic equation

Victor Isakov; Stefan Kindermann

This paper deals with the problem of the identification of the diffusion coefficient in a parabolic equation. The inverse problem is formulated as a nonlinear operator equation in Hilbert spaces. Continuity and differentiability of the corresponding operator are shown. In the one-dimensional case, uniqueness and conditional stability results are obtained by using the heat equation transform. Finally, the problem is solved numerically by the iteratively regularized Gauss-Newton method.
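For orientation (standard textbook form, assumed rather than quoted from the paper), the iteratively regularized Gauss-Newton method for a nonlinear operator equation F(q) = u^\delta iterates

    q_{k+1} = q_k + \left( F'(q_k)^* F'(q_k) + \alpha_k I \right)^{-1} \left( F'(q_k)^* \big( u^\delta - F(q_k) \big) + \alpha_k (q_0 - q_k) \right),

with a decreasing sequence of regularization parameters \alpha_k > 0; here q is the unknown diffusion coefficient and u^\delta the noisy data.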


Inverse Problems | 2008

The quasi-optimality criterion for classical inverse problems

Frank Bauer; Stefan Kindermann

The quasi-optimality criterion chooses the regularization parameter in inverse problems without requiring knowledge of the noise level. It is well known that this cannot yield convergence for ill-posed problems in the worst case. In this paper, we establish conditions providing lower bounds on the approximation error and the propagated noise error, such that these terms can be estimated from above and below by a geometric series. Using these, we show convergence and optimal-order error bounds for Tikhonov regularization with the quasi-optimality criterion, both for deterministic problems and for stochastic noise.
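A minimal sketch of how the criterion is commonly implemented (our illustration for a linear Tikhonov problem; names and grid are assumptions, not code from the paper): on a geometric grid of parameters, choose the one minimizing the difference between consecutive regularized solutions.

```python
import numpy as np

def quasi_optimality(A, y, alphas):
    """Pick the Tikhonov parameter by the quasi-optimality criterion:
    minimize ||x_{alpha_{k+1}} - x_{alpha_k}|| over a geometric grid.
    No knowledge of the noise level is required."""
    AtA, Aty = A.T @ A, A.T @ y
    xs = [np.linalg.solve(AtA + a * np.eye(A.shape[1]), Aty) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k = int(np.argmin(diffs))
    return alphas[k], xs[k]

# Hypothetical usage on a mildly ill-conditioned system
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)) @ np.diag(1.0 / np.arange(1, 21) ** 2)
x_true = rng.standard_normal(20)
y = A @ x_true + 1e-3 * rng.standard_normal(50)
alphas = [1e-1 * 0.5 ** k for k in range(25)]   # geometric grid
alpha, x = quasi_optimality(A, y, alphas)
print(alpha, np.linalg.norm(x - x_true))
```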


Applicable Analysis | 2010

Improved and extended results for enhanced convergence rates of Tikhonov regularization in Banach spaces

Andreas Neubauer; Torsten Hein; Bernd Hofmann; Stefan Kindermann; Ulrich Tautenhahn

Even if the recent literature on enhanced convergence rates for Tikhonov regularization of ill-posed problems in Banach spaces shows substantial progress, not all factors influencing the best possible convergence rates under sufficiently strong smoothness assumptions were clearly determined. In particular, it was an open problem whether the structure of the residual term can limit the rates. For residual norms of power type in the functional to be minimized for obtaining regularized solutions, the latest rate results for nonlinear problems by Neubauer [On enhanced convergence rates for Tikhonov regularization of non-linear ill-posed problems in Banach spaces, Inverse Prob. 25 (2009), p. 065009] indicate an apparent qualification of the method caused by the residual norm exponent p. The new message of the present article is that the optimal rates are independent of that exponent in the range 1 ≤ p < ∞. However, on the one hand, the smoothness of the image space influences the rates, and on the other hand, the best possible rates require specific choices of the regularization parameters α > 0. While for all p > 1 the regularization parameters have to decay to zero with some prescribed speed depending on p when the noise level tends to zero in order to obtain the best rates, the limiting case p = 1 shows some curious behaviour: there, the α-values must asymptotically freeze at a fixed positive value, characterized by the properties of the solution, as the noise level decreases.
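In standard notation (our summary, not a quotation from the article), the regularized solutions discussed here are minimizers of a Tikhonov functional with a power-type residual term,

    x_\alpha^\delta \in \operatorname{argmin}_x \; \frac{1}{p} \| F(x) - y^\delta \|^p + \alpha R(x), \qquad 1 \le p < \infty,

where R is a convex penalty in the Banach-space setting; the article's point is that the attainable rates do not depend on the exponent p, while the parameter choice \alpha = \alpha(\delta) does.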



IEEE Transactions on Ultrasonics Ferroelectrics and Frequency Control | 2008

Estimation of the surface normal velocity of high frequency ultrasound transducers

Stefan J. Rupitsch; Stefan Kindermann; Bernhard G. Zagar



Journal of Scientific Computing | 2006

Denoising by BV-duality

Stefan Kindermann; Stanley Osher; Jinjun Xu



Inverse Problems | 2005

Convergence rates in the Prokhorov metric for assessing uncertainty in ill-posed problems

Heinz W. Engl; Andreas Hofinger; Stefan Kindermann



Journal of Computational Physics | 2009

Reconstruction of shapes and impedance functions using few far-field measurements

Lin He; Stefan Kindermann; Mourad Sini


Collaboration


Dive into Stefan Kindermann's collaborations.

Top Co-Authors

Andreas Neubauer (Johannes Kepler University of Linz)
Edwin Lughofer (Johannes Kepler University of Linz)
Carmeliza Navasca (University of Alabama at Birmingham)
Heinz W. Engl (Johannes Kepler University of Linz)
Ronny Ramlau (Johannes Kepler University of Linz)
Bernd Hofmann (Chemnitz University of Technology)
Ctirad Matonoha (Academy of Sciences of the Czech Republic)
Andreas Marn (Graz University of Technology)
Bernhard G. Zagar (Johannes Kepler University of Linz)
Davide Lengani (Graz University of Technology)