
Publication


Featured research published by Jochen Garcke.


Computing | 2001

Data mining with sparse grids

Jochen Garcke; Michael Griebel; Michael Thess

We present a new approach to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. To be precise, we suggest using the sparse grid combination technique [42], where the classification problem is discretized and solved on a certain sequence of conventional grids with uniform mesh sizes in each coordinate direction. The sparse grid solution is then obtained from the solutions on these different grids by linear combination. In contrast to other sparse grid techniques, the combination method is simpler to use and can be parallelized in a natural and straightforward way. We describe the sparse grid combination technique for the classification problem in terms of the regularization network approach, give implementational details, and discuss the complexity of the algorithm. It turns out that the method scales only linearly with the number of instances, i.e. the amount of data to be classified. Finally, we report on the quality of the classifier built by our new method. Here we consider standard test problems from the UCI repository and problems with large synthetic data sets in up to 9 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.
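The grid sequence and coefficients behind the combination technique are compact enough to sketch. The following minimal Python illustration (a sketch, not the authors' code) enumerates the grids of level n in d dimensions and counts the unknowns, recovering the O(h_n^{-1} n^{d-1}) scaling quoted above:

from itertools import product
from math import comb, prod

def combination_grids(n, d):
    """Yield (level multi-index l, coefficient) for the classical combination
    technique of level n in d dimensions: the sparse grid solution is
    sum over q of (-1)^q * C(d-1, q) * sum of partial solutions u_l with
    |l|_1 = n + (d-1) - q, where grid l has mesh size 2^(-l_i) in direction i."""
    for q in range(d):
        coeff = (-1) ** q * comb(d - 1, q)
        for l in product(range(1, n + 1), repeat=d):
            if sum(l) == n + (d - 1) - q:
                yield l, coeff

n, d = 6, 3
grids = list(combination_grids(n, d))
unknowns = sum(prod(2 ** li + 1 for li in l) for l, _ in grids)
print(f"{len(grids)} partial grids, {unknowns:,} unknowns in total")
print(f"versus {(2 ** n + 1) ** d:,} unknowns on the full level-{n} grid")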


SIAM Journal on Scientific Computing | 2009

Multivariate Regression and Machine Learning with Sums of Separable Functions

Gregory Beylkin; Jochen Garcke; Martin J. Mohlenkamp

We present an algorithm for learning (or estimating) a function of many variables from scattered data. The function is approximated by a sum of separable functions, following the paradigm of separated representations. The central fitting algorithm is linear in both the number of data points and the number of variables and, thus, is suitable for large data sets in high dimensions. We present numerical evidence for the utility of these representations. In particular, we show that our method outperforms other methods on several benchmark data sets.
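The separated-representation paradigm can be illustrated with a small alternating-least-squares sketch; the fixed polynomial basis per coordinate and the rank R chosen below are assumptions for illustration, not the authors' algorithm:

import numpy as np

def evaluate(coeffs, X):
    """f(x) = sum_r prod_j g_{r,j}(x_j), with each univariate factor g_{r,j}
    stored as p polynomial coefficients: coeffs has shape (R, d, p)."""
    R, d, p = coeffs.shape
    acc = np.ones((X.shape[0], R))
    for j in range(d):
        acc *= np.vander(X[:, j], p) @ coeffs[:, j].T   # (N, R)
    return acc.sum(axis=1)

def als_step(coeffs, X, y, j):
    """Refit the factors of direction j with all others fixed: the model is
    then linear in coeffs[:, j], so one least-squares solve suffices. The
    cost is linear in the number of data points N and in the dimension d."""
    R, d, p = coeffs.shape
    others = np.ones((X.shape[0], R))
    for k in range(d):
        if k != j:
            others *= np.vander(X[:, k], p) @ coeffs[:, k].T
    V = np.vander(X[:, j], p)
    A = (others[:, :, None] * V[:, None, :]).reshape(len(X), R * p)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    coeffs[:, j] = sol.reshape(R, p)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (2000, 4))
y = np.sin(X.sum(axis=1))                    # target function of d = 4 variables
coeffs = 0.1 * rng.normal(size=(3, 4, 5))    # rank R = 3, degree-4 polynomials
for _ in range(25):
    for j in range(4):
        als_step(coeffs, X, y, j)
print("training MSE:", np.mean((evaluate(coeffs, X) - y) ** 2))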


Archive | 2012

Sparse Grids in a Nutshell

Jochen Garcke

Under suitable regularity assumptions, the technique of sparse grids makes it possible to overcome the curse of dimensionality, which prevents the use of classical numerical discretization schemes in more than three or four dimensions. The approach is obtained from a multi-scale basis by a tensor product construction and subsequent truncation of the resulting multiresolution series expansion. This entry-level article gives an introduction to sparse grids and the sparse grid combination technique.
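To see the effect of the truncation, it suffices to count degrees of freedom. A small sketch (a regular sparse grid without boundary points, a common convention assumed here) comparing sparse and full grids:

from itertools import product
from math import prod

def sparse_grid_points(n, d):
    """Points of the regular sparse grid of level n in d dimensions:
    hierarchical increments with |l|_1 <= n + d - 1 contribute 2^(l_j - 1)
    points per direction (odd indices only, no boundary points)."""
    return sum(
        prod(2 ** (lj - 1) for lj in l)
        for l in product(range(1, n + 1), repeat=d)
        if sum(l) <= n + d - 1
    )

n = 8
for d in (2, 4, 6):
    print(d, sparse_grid_points(n, d), (2 ** n - 1) ** d)
# the sparse grid grows like O(2^n * n^(d-1)); the full grid like O(2^(n*d))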


International Conference on Machine Learning | 2006

Regression with the optimised combination technique

Jochen Garcke

We consider the sparse grid combination technique for regression, which we regard as a problem of function reconstruction in some given function space. We use a regularised least squares approach, discretised by sparse grids and solved using the so-called combination technique, where a certain sequence of conventional grids is employed. The sparse grid solution is then obtained by adding the partial solutions with combination coefficients that depend on the grids involved. This approach shows instabilities in certain situations and is not guaranteed to converge with higher discretisation levels. In this article we apply the recently introduced optimised combination technique, which repairs these instabilities. Now the combination coefficients also depend on the function to be reconstructed, resulting in a non-linear approximation method which achieves very competitive results. We show that the computational complexity of the improved method still scales only linearly with the number of data points.
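A much simplified stand-in for the optimised coefficients can be sketched as follows; the paper's opticom method minimises in a regularised scalar product, whereas this sketch simply fits the coefficients to the training data by least squares, which already makes the combined method depend non-linearly on the data:

import numpy as np

def optimised_coefficients(P, y):
    """P: (N, m) matrix whose column i holds the i-th partial-grid solution
    evaluated at the N data points; returns coefficients c minimising
    ||P c - y||_2. Unlike the classical fixed coefficients, c now depends
    on the target values y, i.e. on the function to be reconstructed."""
    c, *_ = np.linalg.lstsq(P, y, rcond=None)
    return c

# the combined prediction at new points is then sum_i c[i] * f_i(x)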


Journal of Scientific Computing | 2013

An Adaptive Sparse Grid Semi-Lagrangian Scheme for First Order Hamilton-Jacobi Bellman Equations

Olivier Bokanowski; Jochen Garcke; Michael Griebel; Irene Klompmaker

We propose a semi-Lagrangian scheme using a spatially adaptive sparse grid to deal with non-linear time-dependent Hamilton-Jacobi Bellman equations. We focus in particular on front propagation models in higher dimensions which are related to control problems. We test the numerical efficiency of the method on several benchmark problems up to space dimension d=8, and give evidence of convergence towards the exact viscosity solution. In addition, we study how the complexity and precision scale with the dimension of the problem.
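The semi-Lagrangian building block is easy to isolate in 1-D on a regular grid (the paper's contribution, the spatially adaptive sparse grid in high dimensions, is not attempted here). A toy sketch for the front propagation equation v_t + c|v_x| = 0:

import numpy as np

def sl_step(v, x, dt, c):
    """One semi-Lagrangian step for v_t + c * |v_x| = 0: follow the
    characteristic for each admissible control a in {-1, 0, 1}, interpolate
    v at the foot point, and take the minimum (the HJB optimisation)."""
    return np.minimum.reduce(
        [np.interp(x - c * dt * a, x, v) for a in (-1.0, 0.0, 1.0)]
    )

x = np.linspace(-2.0, 2.0, 401)
v = np.abs(x) - 0.5                 # zero level set: the front {|x| = 0.5}
for _ in range(100):
    v = sl_step(v, x, dt=0.005, c=1.0)
# after t = 0.5 the front should have expanded to |x| = 1.0
right = x[x > 0]
print(right[np.argmin(np.abs(v[x > 0]))])   # roughly 1.0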


Archive | 2012

Sparse Grids and Applications

Jochen Garcke; Michael Griebel

In the past decade, there has been growing interest in the numerical treatment of high-dimensional problems. It is well known that classical numerical discretization schemes fail in more than three or four dimensions due to the curse of dimensionality. The technique of sparse grids helps overcome this problem to some extent under suitable regularity assumptions. This discretization approach is obtained from a multi-scale basis by a tensor product construction and subsequent truncation of the resulting multiresolution series expansion. This LNCSE volume collects the proceedings of the Workshop on Sparse Grids and Applications held in Bonn in May 2011. The selected articles present recent advances in the mathematical understanding and analysis of sparse grid discretization. Aspects arising from applications are given particular attention.


Computing | 2009

Fitting multidimensional data using gradient penalties and the sparse grid combination technique

Jochen Garcke; Markus Hegland

Sparse grids, combined with gradient penalties, provide an attractive tool for regularised least squares fitting. It has previously been found that the combination technique, which builds a sparse grid function from a linear combination of approximations on partial grids, is not as effective here as it is in the case of elliptic partial differential equations. We argue that this is due to the irregular and random data distribution, as well as the ratio of the number of data points to the grid resolution. These effects are investigated both in theory and in experiments. As part of this investigation we also show how overfitting arises when the mesh size goes to zero. We conclude with a study of modified “optimal” combination coefficients that prevent the amplification of the sampling noise that occurs with the original combination coefficients.
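The one-dimensional ingredient of such a fit, regularised least squares with a gradient penalty over hat functions, can be sketched as below; the grid resolution m and penalty weight lam are illustrative choices, and the multivariate combination-technique machinery of the paper is omitted:

import numpy as np

def fit_1d(x, y, m, lam):
    """Minimise (1/N) sum_i (f(x_i) - y_i)^2 + lam * int |f'|^2 over
    piecewise linear f on a uniform grid with m cells on [0, 1]."""
    nodes = np.linspace(0.0, 1.0, m + 1)
    h = 1.0 / m
    # hat-function design matrix B[i, j] = phi_j(x_i)
    j = np.minimum((x / h).astype(int), m - 1)
    t = x / h - j
    B = np.zeros((len(x), m + 1))
    B[np.arange(len(x)), j] = 1.0 - t
    B[np.arange(len(x)), j + 1] = t
    # stiffness matrix of int |f'|^2 for hat functions: tridiag(-1, 2, -1)/h
    L = (2.0 * np.eye(m + 1) - np.eye(m + 1, k=1) - np.eye(m + 1, k=-1)) / h
    L[0, 0] = L[m, m] = 1.0 / h
    c = np.linalg.solve(B.T @ B / len(x) + lam * L, B.T @ y / len(x))
    return nodes, c

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 300)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=300)
nodes, c = fit_1d(x, y, m=64, lam=1e-4)
# as lam -> 0 and the mesh size -> 0, the fit interpolates the noise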


Knowledge Discovery and Data Mining | 2001

Data mining with sparse grids using simplicial basis functions

Jochen Garcke; Michael Griebel

Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [49]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [28], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This makes it possible to handle more dimensions, and the algorithm needs fewer operations per data point. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.
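A sketch of simplicial interpolation on the Kuhn triangulation of a uniform grid follows (the index conventions and grid layout are assumptions of this sketch, not the paper's code); each evaluation touches only d+1 vertices instead of the 2^d needed for d-linear interpolation:

import numpy as np

def simplicial_interp(values, x):
    """Piecewise linear interpolation on the Kuhn triangulation.
    values: d-dimensional array of nodal values on a uniform grid;
    x: query point in index coordinates (0 <= x_k <= shape_k - 1)."""
    base = np.minimum(np.floor(x).astype(int), np.array(values.shape) - 2)
    frac = x - base
    order = np.argsort(-frac)            # sort coordinates by fractional part
    f = frac[order]
    # barycentric weights: 1 - f(1), f(1) - f(2), ..., f(d)
    w = np.concatenate(([1.0], f)) - np.concatenate((f, [0.0]))
    node, result = base.copy(), w[0] * values[tuple(base)]
    for k, dim in enumerate(order):      # walk the simplex vertex by vertex
        node[dim] += 1
        result += w[k + 1] * values[tuple(node)]
    return result

# exact for linear data: f(i, j, k) = 2i - j + 0.5k
vals = np.fromfunction(lambda i, j, k: 2 * i - j + 0.5 * k, (5, 5, 5))
print(simplicial_interp(vals, np.array([1.3, 2.7, 0.9])))   # 0.35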


International Conference on Conceptual Structures | 2013

Analysis of Car Crash Simulation Data with Nonlinear Machine Learning Methods

Bastian Bohn; Jochen Garcke; Rodrigo Iza-Teran; Alexander Paprotny; Benjamin Peherstorfer; Ulf Schepsmeier; Clemens-August Thole

Nowadays, product development in the car industry relies heavily on numerical simulation, which is used, for example, to explore the influence of design parameters on the weight, costs or functional properties of new car models. Car engineers spend a considerable amount of their time analyzing these influences by inspecting the resulting simulations one at a time. Here, we propose using methods from machine learning to semi-automatically analyze the resulting finite element data and thereby significantly assist in the overall engineering process. We combine clustering and nonlinear dimensionality reduction to show that the method is able to automatically detect parameter-dependent structural instabilities or reveal bifurcations in the time-dependent behavior of beams. In particular we study recent nonlinear and sparse grid approaches. Our examples demonstrate the strong potential of our approach for reducing the data analysis effort in the engineering process, and emphasize the need for nonlinear methods for such tasks.
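The clustering-plus-dimensionality-reduction pipeline can be sketched on synthetic stand-in data (not the crash simulations, and with scikit-learn's spectral embedding standing in for the specific methods studied in the paper):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
n_runs, n_nodes = 200, 300
modes = rng.normal(size=(2, n_nodes))   # two deformation modes of a beam
branch = rng.integers(0, 2, n_runs)     # which bifurcation branch a run took
amp = rng.uniform(0.5, 1.5, n_runs)     # run-dependent load intensity
runs = amp[:, None] * modes[branch] + 0.05 * rng.normal(size=(n_runs, n_nodes))

emb = SpectralEmbedding(n_components=2).fit_transform(runs)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)
agree = (labels == branch).mean()
print(max(agree, 1 - agree))            # ~1.0 up to label permutation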


International Conference on Large Scale Scientific Computing | 2001

On the Parallelization of the Sparse Grid Approach for Data Mining

Jochen Garcke; Michael Griebel

Recently we presented a new approach [5, 6] to the classification problem arising in data mining. It is based on the regularization network approach, but in contrast to other methods which employ ansatz functions associated to data points, we use basis functions coming from a grid in the usually high-dimensional feature space for the minimization process. Here, to cope with the curse of dimensionality, we employ so-called sparse grids. To be precise, we use the sparse grid combination technique [11], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method scales only linearly with the number of data points and is well suited for data mining applications where the amount of data is very large, but the dimension of the feature space is moderately high. The computations on the different grids of the sequence are independent of each other and can therefore be done in parallel already at a coarse-grain level. A second, fine-grain level of parallelization can be introduced on each grid through threading on shared-memory multiprocessor computers. We describe the sparse grid combination technique for the classification problem, discuss the two levels of parallelization, and report on results for a 10-dimensional data set.
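The coarse-grain level maps directly onto a process pool, since the problem on each grid is independent. A sketch with placeholder per-grid work (not the paper's solver):

from concurrent.futures import ProcessPoolExecutor
from itertools import product
from math import comb

def combination_grids(n, d):
    """(level multi-index, coefficient) pairs of the combination technique."""
    for q in range(d):
        c = (-1) ** q * comb(d - 1, q)
        for l in product(range(1, n + 1), repeat=d):
            if sum(l) == n + (d - 1) - q:
                yield l, c

def solve_on_grid(task):
    """Placeholder for discretising and solving the regularised
    classification problem on one anisotropic grid of the sequence."""
    l, c = task
    return c, float(sum(l))        # a real solver would return grid values

if __name__ == "__main__":
    tasks = list(combination_grids(6, 3))
    with ProcessPoolExecutor() as pool:        # coarse grain: one task per grid
        parts = list(pool.map(solve_on_grid, tasks))
    combined = sum(c * s for c, s in parts)    # linear combination step
    print(len(tasks), "grids solved in parallel; combined value:", combined)
    # a second, fine-grain level would thread the linear algebra
    # inside solve_on_grid on shared-memory machines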

Collaboration


Dive into Jochen Garcke's collaborations.

Top Co-Authors

Michael Griebel
Technical University of Berlin

Markus Hegland
Australian National University

Alexander Paprotny
Technical University of Berlin

Alvaro Aguilera
Dresden University of Technology

Irene Klompmaker
Technical University of Berlin

Richard Grunzke
Dresden University of Technology