Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hans Ulrich Simon is active.

Publication


Featured research published by Hans Ulrich Simon.


Journal of Computer and System Sciences | 1995

Robust Trainability of Single Neurons

K.-U. Höffgen; Hans Ulrich Simon; K. S. Van Horn

It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We want to analyze how the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that the problem of learning a probably almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); we obtain the same hardness result for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristical learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
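The positive result mentioned above (single neurons are efficiently trainable on halfspaces) can be illustrated with a small stdlib-only sketch. The paper's argument uses linear programming; the classical perceptron rule below is a simpler, hedged stand-in for training a single McCulloch-Pitts neuron, shown on a hypothetical separable sample (AND of two bits):

```python
def train_neuron(X, y, epochs=100):
    """Perceptron updates for a single McCulloch-Pitts neuron: on each
    mistake, move (w, b) toward the misclassified example. This converges
    whenever the labels are realizable by a halfspace."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, t in zip(X, y):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != t:
                mistakes += 1
                w = [wi + t * xi for wi, xi in zip(w, x)]
                b += t
        if mistakes == 0:          # consistent halfspace found
            break
    return w, b

# AND of two bits is a halfspace, so training succeeds.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_neuron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
print(preds == y)  # True
```

The hardness results in the abstract concern exactly the opposite regime: once the target is not a halfspace, no such efficient procedure can even approximate the minimum error (unless RP = NP).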


Theoretical Computer Science | 2000

Contrast-optimal k out of n secret sharing schemes in visual cryptography

Thomas Hofmeister; Matthias Krause; Hans Ulrich Simon

Visual cryptography and (k, n)-visual secret sharing schemes were introduced by Naor and Shamir in [NaSh1]. A sender wishing to transmit a secret message distributes n transparencies among n recipients, where the transparencies contain seemingly random pictures. A (k, n)-scheme achieves the following situation: If any k recipients stack their transparencies together, then a secret message is revealed visually. On the other hand, if only k - 1 recipients stack their transparencies, or analyze them by any other means, they are not able to obtain any information about the secret message.
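As an illustration of the mechanism described above, here is a minimal sketch of the classical Naor-Shamir (2, 2)-scheme, assuming binary pixels (0 = white, 1 = black) and a pixel expansion of two subpixels; stacking transparencies corresponds to a bitwise OR:

```python
import random

def share_pixel(secret_bit):
    """Split one secret pixel into two 2-subpixel shares. Each share alone
    is a uniformly random pattern (one dark, one light subpixel), so it
    leaks nothing; stacking (OR) reveals the secret bit."""
    a = random.choice([(0, 1), (1, 0)])
    b = a if secret_bit == 0 else tuple(1 - s for s in a)
    return a, b

def stack(a, b):
    """Physically stacking transparencies: a subpixel is dark if it is
    dark on either share."""
    return tuple(x | y for x, y in zip(a, b))

secret = [1, 0, 1, 1, 0]
shares = [share_pixel(p) for p in secret]
# Stacked black pixels are fully dark (2 dark subpixels); white ones are
# only half dark (1 dark subpixel) -- this contrast is what the eye sees.
recovered = [1 if sum(stack(a, b)) == 2 else 0 for a, b in shares]
print(recovered == secret)  # True
```

The contrast (fully dark vs. half dark) is the quantity the paper optimizes for general (k, n)-schemes.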


SIAM Journal on Discrete Mathematics | 1990

On Approximate Solutions for Combinatorial Optimization Problems

Hans Ulrich Simon

The usefulness of a special kind of approximability-preserving transformations (called continuous reductions) among combinatorial optimization problems is demonstrated. One common measure for the approximability of an optimization problem is its best performance ratio. This parameter attains the same value for two problems (up to a bounded factor) whenever they are mutually related by continuous reductions. Therefore, lower and upper bounds or gap-theorems valid for a particular problem are transferred along reduction chains. In this paper, continuous reductions are used for the analysis of several basic combinatorial problems including graph coloring, consistent deterministic finite automaton, covering by cliques, covering by complete bipartite subgraphs, independent set, set packing, and others. The results obtained and the methods involved are a contribution towards a systematic classification of NP-complete problems with regard to their approximability.


Conference on Learning Theory | 2007

Stability of k-means clustering

Shai Ben-David; Dávid Pál; Hans Ulrich Simon

We consider the stability of k-means clustering problems. Clustering stability is a common heuristics used to determine the number of clusters in a wide variety of clustering applications. We continue the theoretical analysis of clustering stability by establishing a complete characterization of clustering stability in terms of the number of optimal solutions to the clustering optimization problem. Our results complement earlier work of Ben-David, von Luxburg and Pal, by settling the main problem left open there. Our analysis shows that, for probability distributions with finite support, the stability of k-means clusterings depends solely on the number of optimal solutions to the underlying optimization problem for the data distribution. These results challenge the common belief and practice that view stability as an indicator of the validity, or meaningfulness, of the choice of a clustering algorithm and number of clusters.
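The stability heuristic discussed above can be sketched in a few lines: run k-means from several random initializations (an illustrative stand-in for the paper's resampling framework) and measure pairwise agreement of the resulting clusterings. This is a toy on hypothetical 1-D data, not the paper's formal analysis:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 1-D points; returns a cluster label
    per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: (p - centers[j]) ** 2)
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def agreement(l1, l2):
    """Fraction of point pairs on which two clusterings agree
    (same cluster vs. different cluster), so label permutations
    do not matter."""
    n = len(l1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    same = sum((l1[i] == l1[j]) == (l2[i] == l2[j]) for i, j in pairs)
    return same / len(pairs)

# Two well-separated groups: k = 2 has an essentially unique optimal
# solution, so every run finds the same partition (stability 1.0).
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
runs = [kmeans(pts, 2, seed=s) for s in range(5)]
print(min(agreement(runs[0], r) for r in runs[1:]))  # 1.0 for this data
```

With multiple symmetric optimal solutions the runs would disagree, which is exactly the characterization the abstract describes: stability reflects the number of optimal solutions, not the "meaningfulness" of k.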


Foundations of Software Technology and Theoretical Computer Science | 2001

Relations Between Communication Complexity, Linear Arrangements, and Computational Complexity

Jürgen Forster; Matthias Krause; Satyanarayana V. Lokam; Rustam Mubarakzjanov; Niels Schmitt; Hans Ulrich Simon

Recently, Forster [7] proved a new lower bound on probabilistic communication complexity in terms of the operator norm of the communication matrix. In this paper, we want to exploit the various relations between the communication complexity of distributed Boolean functions, geometric questions related to half space representations of these functions, and the computational complexity of these functions in various restricted models of computation. In order to widen the range of applicability of Forster's bound, we start with the derivation of a generalized lower bound. We present a concrete family of distributed Boolean functions where the generalized bound leads to a linear lower bound on the probabilistic communication complexity (and thus to an exponential lower bound on the number of Euclidean dimensions needed for a successful half space representation), whereas the old bound fails. We move on to a geometric characterization of the well-known communication complexity class C-PP in terms of half space representations achieving a large margin. Our characterization hints at a close connection between the bounded-error model of probabilistic communication complexity and the area of large margin classification. In the final section of the paper, we describe how our techniques can be used to prove exponential lower bounds on the size of depth-2 threshold circuits (with still some technical restrictions). Similar results can be obtained for read-k-times randomized ordered binary decision diagrams and related models.


Information & Computation | 1982

A tight Ω(log log n)-bound on the time for parallel RAM's to compute nondegenerated Boolean functions

Hans Ulrich Simon

A function f : {0, 1}^n → {0, 1} is said to depend on dimension i iff there exists an input vector x such that f(x) differs from f(x^(i)), where x^(i) agrees with x in every dimension except i. In this case x is said to be critical for f with respect to i. Function f is called nondegenerated iff it depends on all n dimensions. The main result of this paper is that for each nondegenerated function f : {0, 1}^n → {0, 1} there exists an input vector x which is critical with respect to at least Ω(log n) dimensions. A function achieving this bound is presented. Together with earlier results from Cook and Dwork ("Proceedings, 14th ACM Symp. on Theory of Computing," 1982) and Reischuk (IBM Research Report No. RJ 3431, 1982) it can be concluded that a parallel RAM requires at least Ω(log log n) steps to compute f.
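The definitions above are easy to check exhaustively for small n. The sketch below computes, for a given f, the maximum number of critical dimensions over all inputs, which is the quantity the Ω(log n) bound concerns; the example function (OR) is a hypothetical choice for illustration:

```python
from itertools import product

def critical_dims(f, n, x):
    """Dimensions i such that flipping bit i of x changes f(x),
    i.e., x is critical for f with respect to i."""
    return [i for i in range(n)
            if f(x) != f(tuple(b ^ (j == i) for j, b in enumerate(x)))]

def max_criticality(f, n):
    """Max over all inputs x of the number of critical dimensions at x.
    A nondegenerated f depends on every dimension, but not necessarily
    at a single input; this measures the best single input."""
    return max(len(critical_dims(f, n, x))
               for x in product((0, 1), repeat=n))

# OR depends on every dimension, and the all-zero input is critical
# with respect to all n of them at once.
n = 4
f_or = lambda x: int(any(x))
print(max_criticality(f_or, n))  # 4
```

The theorem says this maximum can never drop below Ω(log n) for nondegenerated functions, and that some function attains that order of magnitude.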


Conference on Learning Theory | 1992

Robust trainability of single neurons

Klaus-Uwe Höffgen; Hans Ulrich Simon

We investigate the problem of learning concepts by presenting labeled and randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch-Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We want to analyze how fast the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that a neuron cannot efficiently find its probably almost optimal adjustment (unless RP = NP). If the weights and the threshold of the neuron have a fixed constant bound on their coding length, the situation is even worse: there is in general no polynomial-time training method which bounds the resulting prediction error of the neuron by k·opt for a fixed constant k (unless RP = NP). Other variants of learning more complex concepts than halfspaces by single neurons are also investigated. We show that neither heuristical learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).


Conference on Learning Theory | 1993

General bounds on the number of examples needed for learning probabilistic concepts

Hans Ulrich Simon

Given a p-concept class C, we define two important functions d_C(γ) and d̄_C(γ) (related to the notion of γ-shattering). We prove a lower bound of Ω((d_C(γ) − 1)/(γε²)) on the number of examples required for learning C with an (ε, γ)-good model of probability. We prove similar lower bounds for some other learning models like learning with ε-bounded absolute (or quadratic) difference or learning with a γ-good decision rule. For the class ND of nondecreasing p-concepts on the real domain, d_ND(γ) = Θ(1/γ). It can be shown that the resulting lower bounds for learning ND (within the models in consideration) are tight to within a logarithmic factor. In order to get the "almost-matching" upper bounds, we introduce a new method for designing learning algorithms: dynamic partitioning of the domain by use of splitting trees. The paper also contains a discussion of the gaps between the general lower bounds and the corresponding general upper bounds. It can be shown that, under very mild conditions, these gaps are quite narrow.


Computing and Combinatorics Conference | 1997

Contrast-Optimal k out of n Secret Sharing Schemes in Visual Cryptography

Thomas Hofmeister; Matthias Krause; Hans Ulrich Simon

Visual cryptography and (k, n)-visual secret sharing schemes were introduced by Naor and Shamir in [NaSh1]. A sender wishing to transmit a secret message distributes n transparencies among n recipients, where the transparencies contain seemingly random pictures. A (k, n)-scheme achieves the following situation: If any k recipients stack their transparencies together, then a secret message is revealed visually. On the other hand, if only k - 1 recipients stack their transparencies, or analyze them by any other means, they are not able to obtain any information about the secret message.


SIAM Journal on Computing | 1992

On learning ring-sum-expansions

Paul Fischer; Hans Ulrich Simon

The problem of learning ring-sum-expansions from examples is studied. Ring-sum-expansions (RSE) are representations of Boolean functions over the base {∧, ⊕, 1}, i.e., as exclusive-ors (parities) of AND-monomials, which are exactly the multivariate polynomials over GF(2).
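As a small illustration of the representation itself (not of the learning algorithms in the paper), the sketch below evaluates an RSE given as a list of monomials, where each monomial is a tuple of variable indices ANDed together and the monomials are XORed:

```python
from itertools import product

def eval_rse(monomials, x):
    """Evaluate a ring-sum-expansion at input x: XOR over monomials,
    where each monomial is the AND of the listed variables and the
    empty monomial () denotes the constant 1."""
    val = 0
    for m in monomials:
        val ^= all(x[i] for i in m)   # AND of the monomial's variables
    return int(val)

# x0 XOR (x1 AND x2), written as an RSE over variables x0..x2.
rse = [(0,), (1, 2)]
truth = [eval_rse(rse, x) for x in product((0, 1), repeat=3)]
print(truth)  # [0, 0, 0, 1, 1, 1, 1, 0]
```

Because XOR is addition and AND is multiplication over GF(2), this is just polynomial evaluation, which is why RSE form a natural normal form for Boolean functions.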

Collaboration


Dive into Hans Ulrich Simon's collaboration.

Top Co-Authors

Paul Fischer

Technical University of Dortmund

Andreas Birkendorf

Technical University of Dortmund

Norbert Klasner

Technical University of Dortmund
