Klaus Glashoff
University of Hamburg
Publications
Featured research published by Klaus Glashoff.
Computer Graphics Forum | 2013
Artiom Kovnatsky; Michael M. Bronstein; Alexander M. Bronstein; Klaus Glashoff; Ron Kimmel
The use of Laplacian eigenbases has been shown to be fruitful in many computer graphics applications. Today, state-of-the-art approaches to shape analysis, synthesis, and correspondence rely on these natural harmonic bases, which allow the use of classical tools from harmonic analysis on manifolds. However, many applications involving multiple shapes are hindered by the fact that Laplacian eigenbases computed independently on different shapes are often incompatible with each other. In this paper, we propose constructing common approximate eigenbases for multiple shapes using approximate joint diagonalization algorithms, taking as input a set of corresponding functions (e.g., indicator functions of stable regions) on the shapes. We illustrate the benefits of the proposed approach on tasks from shape editing, pose transfer, correspondence, and similarity.
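The core idea can be illustrated on a toy case. The sketch below, a hypothetical NumPy example and not the paper's algorithm, constructs two symmetric "Laplacian-like" matrices that share an exact common eigenbasis; the eigenbasis of a generic linear combination then diagonalizes both at once. The paper's approximate joint diagonalization minimizes the same off-diagonal energy when, as for real shapes, no exact common basis exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Two symmetric matrices built to share one orthonormal eigenbasis Q
# (the exactly compatible case; real shape Laplacians are only
# approximately compatible).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
L1 = Q @ np.diag(rng.uniform(0.0, 5.0, n)) @ Q.T
L2 = Q @ np.diag(rng.uniform(0.0, 5.0, n)) @ Q.T

def offdiag_energy(A):
    """Squared Frobenius norm of the off-diagonal part of A."""
    return float(np.sum(A**2) - np.sum(np.diag(A)**2))

# The eigenbasis of a generic linear combination jointly diagonalizes
# both matrices; joint diagonalization algorithms minimize this same
# off-diagonal energy in the approximate setting.
_, V = np.linalg.eigh(L1 + 0.5 * L2)

print(offdiag_energy(V.T @ L1 @ V) < 1e-8)  # True
print(offdiag_energy(V.T @ L2 @ V) < 1e-8)  # True
```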
european conference on computer vision | 2016
Artiom Kovnatsky; Klaus Glashoff; Michael M. Bronstein
Numerous problems in computer vision, pattern recognition, and machine learning are formulated as optimization with manifold constraints. In this paper, we propose the Manifold Alternating Directions Method of Multipliers (MADMM), an extension of the classical ADMM scheme for manifold-constrained non-smooth optimization problems. To our knowledge, MADMM is the first generic non-smooth manifold optimization method. We showcase our method on several challenging problems in dimensionality reduction, non-rigid correspondence, multi-modal clustering, and multidimensional scaling.
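The splitting behind MADMM can be sketched on a small instance. The example below is an assumed, simplified illustration (not code from the paper): it minimizes the non-smooth objective ||X||_1 over orthogonal matrices by keeping a manifold variable X and a free copy Z tied by the constraint Z = X. Here the manifold subproblem happens to have a closed form (polar projection); in general MADMM would run a few smooth manifold-optimization steps instead.

```python
import numpy as np

def polar_projection(A):
    """Closest orthogonal matrix to A in Frobenius norm (polar factor)."""
    W, _, Vt = np.linalg.svd(A)
    return W @ Vt

def soft_threshold(A, t):
    """Proximal operator of the l1 norm (entrywise shrinkage)."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

# ADMM-style splitting for  min ||X||_1  s.t.  X orthogonal:
# X lives on the manifold, Z handles the non-smooth term, U is the dual.
rng = np.random.default_rng(1)
n, rho = 4, 5.0
X = polar_projection(rng.standard_normal((n, n)))
Z, U = X.copy(), np.zeros((n, n))

for _ in range(200):
    # X-step: manifold subproblem (closed-form projection in this toy case)
    X = polar_projection(Z - U)
    # Z-step: non-smooth part handled by its proximal operator
    Z = soft_threshold(X + U, 1.0 / rho)
    # dual update
    U = U + X - Z

# The manifold constraint holds exactly at every iterate
print(np.allclose(X.T @ X, np.eye(n), atol=1e-8))  # True
```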
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015
Davide Eynard; Artiom Kovnatsky; Michael M. Bronstein; Klaus Glashoff; Alexander M. Bronstein
We construct an extension of spectral and diffusion geometry to multiple modalities through simultaneous diagonalization of Laplacian matrices. This naturally extends classical data analysis tools based on spectral geometry, such as diffusion maps and spectral clustering. We provide several synthetic and real examples of manifold learning, object classification, and clustering, showing that the joint spectral geometry better captures the inherent structure of multi-modal data. We also show the relation of many previous approaches for multimodal manifold analysis to our framework.
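A minimal stand-in for the multi-modal idea: build one graph Laplacian per modality over the same points and analyze them jointly. The sketch below simply sums the Laplacians before spectral clustering, which is a crude surrogate for the simultaneous diagonalization used in the paper; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20                              # 10 points per cluster, two clusters
labels = np.array([0] * 10 + [1] * 10)

def modality_graph(within=1.0, across=0.05, jitter=0.02):
    """Symmetric similarity matrix with two-block structure plus noise."""
    W = np.where(labels[:, None] == labels[None, :], within, across)
    W = W + jitter * rng.random((n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    return W

# One unnormalized graph Laplacian per modality
Ls = []
for _ in range(2):
    W = modality_graph()
    Ls.append(np.diag(W.sum(axis=1)) - W)

# Joint analysis: here just the sum of the Laplacians
L_joint = sum(Ls)
_, eigvecs = np.linalg.eigh(L_joint)   # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                # second-smallest eigenvector
pred = (fiedler > np.median(fiedler)).astype(int)

# The sign pattern of the Fiedler vector recovers the two clusters
agree = max(np.mean(pred == labels), np.mean(pred != labels))
print(agree >= 0.9)  # True on this clean toy example
```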
Review of Symbolic Logic | 2010
Klaus Glashoff
Since Frege’s predicate logical transcription of Aristotelian categorical logic, the standard semantics of Aristotelian logic considers terms as standing for sets of individuals. From a philosophical standpoint, this extensional model poses problems: There exist serious doubts that Aristotle’s terms were meant to refer always to sets, that is, entities composed of individuals. Classical philosophy up to Leibniz and Kant had a different view on this question—they looked at terms as standing for concepts (“Begriffe”). In 1972, Corcoran presented a formal system for Aristotelian logic containing a calculus of natural deduction, while, with respect to semantics, he still made use of an extensional interpretation. In this paper we deal with a simple intensional semantics for Corcoran’s syntax—intensional in the sense that no individuals are needed for the construction of a complete Tarski model of Aristotelian syntax. Instead, we view concepts as containing or excluding other, “higher” concepts—corresponding to the idea which Leibniz used in the construction of his characteristic numbers. Thus, this paper is an addendum to Corcoran’s work, furnishing his formal syntax with an adequate semantics which is free from presuppositions which have entered into modern interpretations of Aristotle’s theory via predicate logic.
Computing | 1972
Klaus Glashoff
The general convex programming problem f(x) = Min! with constraints x ∈ Q, g(x) ∈ Y in reflexive Banach spaces is treated as a two-step optimization problem and solved with the aid of the method of regularization. This yields a penalty method for convex programming problems with infinitely many constraints. The fictitious solutions appearing in connection with ill-posed problems are identified. In some cases we obtain a representation of the associated multipliers that permits their approximate calculation.
international conference on 3d vision | 2016
Davide Eynard; Emanuele Rodolà; Klaus Glashoff; Michael M. Bronstein
Classical formulations of the shape matching problem involve the definition of a matching cost that directly depends on the action of the desired map when applied to some input data. Such formulations are typically one-sided: they seek a mapping from one shape to the other, but not vice versa. In this paper we consider an unbiased formulation of this problem, in which we solve simultaneously for a low-distortion map relating the two given shapes and for its inverse. We phrase the problem in the spectral domain using the language of functional maps, resulting in an especially compact and efficient optimization problem. The benefits of the proposed regularization are especially evident in the scarce-data setting, where we demonstrate highly competitive results with respect to the state of the art.
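The map-and-inverse coupling can be shown on synthetic data. The sketch below is an assumed toy, not the paper's solver: corresponding descriptors in two spectral bases are related by a ground-truth functional map, each direction is estimated by least squares, and on noiseless data the two estimates are automatically inverse to each other. The paper's contribution is to enforce this coupling jointly when data is scarce and noisy.

```python
import numpy as np

rng = np.random.default_rng(3)
k, m = 5, 40                     # spectral basis size, number of descriptors

# Synthetic spectral coefficients of corresponding functions:
# A on shape X, B on shape Y, related by a ground-truth map C0.
A = rng.standard_normal((k, m))
C0 = np.linalg.qr(rng.standard_normal((k, k)))[0]   # invertible k x k map
B = C0 @ A

# One-sided least-squares estimates: C maps X -> Y, D maps Y -> X
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T      # C A ~ B
D = np.linalg.lstsq(B.T, A.T, rcond=None)[0].T      # D B ~ A

# In the unbiased formulation C and D are optimized jointly with a
# coupling term ||C D - I||; on noiseless data the coupling already holds.
print(np.allclose(C @ D, np.eye(k), atol=1e-8))  # True
```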
international conference on 3d imaging, modeling, processing, visualization & transmission | 2012
Klaus Glashoff; Michael M. Bronstein
The classical Tomasi-Kanade method for Structure from Motion (SfM), based on factorizing the measurement matrix with the SVD, is known to perform poorly in the presence of occlusions and outliers. In this paper, we present an efficient approach that deals with both problems at the same time. We use an augmented-Lagrangian alternating minimization method to solve a robust version of the matrix factorization problem iteratively. Experiments on synthetic and real data show the computational efficiency and good convergence of the method, which make it compare favorably with other approaches to the SfM problem.
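A closely related robust decomposition (not the paper's exact algorithm) can be sketched with standard augmented-Lagrangian iterations for robust PCA: split the measurement matrix into a low-rank part and a sparse outlier part by alternating two proximal steps with a dual update. All constants below (lam, mu, growth factor) follow common robust-PCA defaults and are assumptions of this sketch.

```python
import numpy as np

def svt(A, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(A, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

# Low-rank "measurement" matrix corrupted by 5% gross outliers
rng = np.random.default_rng(4)
n, r = 30, 2
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05
S_true[mask] = 10.0 * rng.standard_normal(mask.sum())
M = L_true + S_true

# Augmented-Lagrangian iterations for
#   min ||L||_* + lam * ||S||_1   s.t.   L + S = M
lam = 1.0 / np.sqrt(n)
mu = 1.25 / np.linalg.norm(M, 2)
Y = np.zeros((n, n)); S = np.zeros((n, n)); L = np.zeros((n, n))
for _ in range(200):
    L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
    S = shrink(M - L + Y / mu, lam / mu)     # sparse-outlier update
    Y = Y + mu * (M - L - S)                 # dual ascent on the constraint
    mu = min(mu * 1.5, 1e7)
    if np.linalg.norm(M - L - S) < 1e-7 * np.linalg.norm(M):
        break

rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
print(rel_err < 1e-2)
```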
Archive | 1983
Klaus Glashoff; Sven-Åke Gustafson
This chapter will be devoted to the study of the problem pairs (P)-(D) and (PA)-(DA) in a special but important case, namely when the moment-generating functions a_1, …, a_n form a so-called Chebyshev system. The most well-known instance of such a system is a_r(s) = s^(r-1), r = 1, …, n, on a closed and bounded real interval. In all the linear optimization problems to be treated in this chapter, the structure of the nonlinear system can be determined from the outset, which simplifies the numerical treatment considerably in comparison to a direct application of the three-phase algorithm.
Archive | 1983
Klaus Glashoff; Sven-Åke Gustafson
This and the next chapter are devoted to the presentation of the simplex algorithm for the numerical solution of linear optimization problems. This very important scheme was developed by Dantzig around 1950. We will see that the simplex algorithm consists of a sequence of exchange steps. A special algorithm, related to the simplex algorithm and also based on exchange steps, was used in 1934 by Remez for the calculation of best approximations in the uniform norm. His procedure is described in Cheney (1966).
Archive | 1983
Klaus Glashoff; Sven-Åke Gustafson
Optimization problems are encountered in many branches of technology, in science, and in economics as well as in our daily life. They appear in so many different shapes that it is useless to attempt a uniform description of them or even try to classify them according to one principle or another. In the present section we will introduce a few general concepts which occur in all optimization problems. Simple examples will elucidate the presentation.