Wolfgang Konen
Cologne University of Applied Sciences
Publications
Featured research published by Wolfgang Konen.
IEEE Transactions on Computers | 1993
Martin Lades; Jan C. Vorbrüggen; Joachim M. Buhmann; Jörg Lange; C. von der Malsburg; Rolf P. Würtz; Wolfgang Konen
An object recognition system based on the dynamic link architecture, an extension to classical artificial neural networks (ANNs), is presented. The dynamic link architecture exploits correlations in the fine-scale temporal structure of cellular signals to group neurons dynamically into higher-order entities. These entities represent a rich structure and can code for high-level objects. To demonstrate the capabilities of the dynamic link architecture, a program was implemented that can recognize human faces and other objects from video images. Memorized objects are represented by sparse graphs, whose vertices are labeled by a multiresolution description in terms of a local power spectrum, and whose edges are labeled by geometrical distance vectors. Object recognition can be formulated as elastic graph matching, which is performed here by stochastic optimization of a matching cost function. The implementation on a transputer network achieved recognition of human faces and office objects from gray-level camera images. The performance of the program is evaluated by a statistical analysis of recognition results from a portrait gallery comprising images of 87 persons.
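For illustration, a minimal sketch of an elastic-graph-matching cost: a node term rewarding similarity of corresponding feature vectors ("jets") and an edge term penalizing distortion of the geometrical distance vectors. The cosine similarity and all data below are placeholders, not the Gabor-based multiresolution jets of the paper; in the paper this cost is minimized by stochastic optimization over the placement of the image graph.

    import numpy as np

    def jet_similarity(j1, j2):
        # cosine similarity between two feature vectors ("jets");
        # a stand-in for the Gabor-jet similarity used in the paper
        return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

    def matching_cost(model_jets, image_jets, model_pos, image_pos, edges, lam=0.1):
        # node term: dissimilarity of corresponding jets
        node = -sum(jet_similarity(mj, ij) for mj, ij in zip(model_jets, image_jets))
        # edge term: squared distortion of the geometrical distance vectors
        edge = sum(np.sum(((image_pos[a] - image_pos[b]) - (model_pos[a] - model_pos[b])) ** 2)
                   for a, b in edges)
        return node + lam * edge

    rng = np.random.default_rng(0)
    model_jets = rng.random((4, 8)); image_jets = rng.random((4, 8))
    model_pos = rng.random((4, 2));  image_pos = model_pos + 0.05 * rng.random((4, 2))
    print(matching_cost(model_jets, image_jets, model_pos, image_pos,
                        edges=[(0, 1), (1, 2), (2, 3)]))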
Neural Computation | 1993
Wolfgang Konen; Christoph von der Malsburg
A large attraction of neural systems lies in their promise of replacing programming by learning. A problem with many current neural models is that learning time explodes for realistically large input patterns. This is a problem inherent in a notion of learning that is based almost entirely on statistical estimation. We propose here a different learning style in which significant relations in the input pattern are recognized and expressed by the unsupervised self-organization of dynamic links. The power of this mechanism is due to the very general a priori principle of conservation of topological structure. We demonstrate this style with a system that learns to classify mirror symmetric pixel patterns from single examples.
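As a toy illustration of the classification task itself (not of the dynamic-link mechanism), the following sketch builds a mirror symmetric pixel pattern and checks the symmetry directly; all sizes and values are invented.

    import numpy as np

    def is_mirror_symmetric(pattern):
        # a 1-D pixel pattern is mirror symmetric if it equals its reversal
        return bool(np.array_equal(pattern, pattern[::-1]))

    rng = np.random.default_rng(0)
    half = rng.integers(0, 2, size=8)
    symmetric = np.concatenate([half, half[::-1]])   # constructed to be symmetric
    random_pattern = rng.integers(0, 2, size=16)     # almost surely asymmetric
    print(is_mirror_symmetric(symmetric), is_mirror_symmetric(random_pattern))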
FGR | 1995
Jörg Kopecz; Wolfgang Konen; Ekkehard Schulze-Krüger
We present a biometric access control device based on the identification of human faces. The system combines a console for semi-automated image acquisition with the necessary algorithms for face recognition. Facial features are stored in a relatively compact data format (1.6 kB). ZN-Face runs on a Pentium 90 without any special accelerator hardware, on which it performs image acquisition, face localization and identification in less than 3 seconds. ZN-Face not only allows robust identification of stored persons (despite changes in facial expression or size), but also reliable rejection of unknown persons. With an acceptance criterion that safely rejects all unknown persons, we achieve an identification rate above 99% (FRR < 1%). ZN Bochum GmbH has sold more than 100 licences to various institutions and companies, among them the Kremlin in Moscow. ZN Bochum GmbH holds the relevant patents for ZN-Face.
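For reference, a minimal sketch of how a false rejection rate (FRR) and false acceptance rate (FAR) follow from a similarity threshold; the scores and the threshold below are made-up placeholders, not values from the ZN-Face system.

    import numpy as np

    def frr_far(genuine_scores, impostor_scores, threshold):
        # FRR: fraction of known persons whose score falls below the threshold
        frr = float(np.mean(np.asarray(genuine_scores) < threshold))
        # FAR: fraction of unknown persons whose score reaches the threshold
        far = float(np.mean(np.asarray(impostor_scores) >= threshold))
        return frr, far

    print(frr_far(genuine_scores=[0.92, 0.88, 0.95, 0.75],
                  impostor_scores=[0.30, 0.41, 0.55],
                  threshold=0.70))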
Evolutionary Intelligence | 2012
Patrick Koch; Bernd Bischl; Oliver Flasch; Thomas Bartz-Beielstein; Claus Weihs; Wolfgang Konen
Kernel-based methods like Support Vector Machines (SVM) have been established as powerful techniques in machine learning. The idea of SVM is to perform a mapping from the input space to a higher-dimensional feature space using a kernel function, so that a linear learning algorithm can be employed. However, the burden of choosing the appropriate kernel function is usually left to the user. It can easily be shown that the accuracy of the learned model highly depends on the chosen kernel function and its parameters, especially for complex tasks. In order to obtain a good classification or regression model, an appropriate kernel function in combination with optimized pre- and post-processed data must be used. To circumvent these obstacles, we present two solutions for optimizing kernel functions: (a) automated hyperparameter tuning of kernel functions combined with an optimization of pre- and post-processing options by Sequential Parameter Optimization (SPO) and (b) evolving new kernel functions by Genetic Programming (GP). We review modern techniques for both approaches, comparing their different strengths and weaknesses. We apply tuning to SVM kernels for both regression and classification. Automatic hyperparameter tuning of standard kernels and pre- and post-processing options always yielded systems with excellent prediction accuracy on the considered problems. Especially SPO-tuned kernels led to much better results than all other tested tuning approaches. Regarding GP-based kernel evolution, our method rediscovered multiple standard kernels, but no significant improvements over standard kernels were obtained.
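A minimal sketch of kernel hyperparameter tuning, with a plain random search over the RBF kernel parameters (C, gamma) standing in for Sequential Parameter Optimization; the data set is synthetic and the search ranges are invented.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    rng = np.random.default_rng(1)

    best = (None, -np.inf)
    for _ in range(20):
        C = 10 ** rng.uniform(-2, 3)        # log-uniform search space for C
        gamma = 10 ** rng.uniform(-4, 1)    # log-uniform search space for gamma
        acc = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
        if acc > best[1]:
            best = ((C, gamma), acc)

    print("best (C, gamma):", best[0], "cv accuracy:", round(best[1], 3))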
International Symposium on Neural Networks | 2010
Patrick Koch; Wolfgang Konen; Kristine Hein
Slow Feature Analysis (SFA) has been established as a robust and versatile technique from the neurosciences to learn slowly varying functions from quickly changing signals. Recently, the method has also been applied to classification tasks. Here we apply SFA for the first time to a time series classification problem originating from gesture recognition. The gestures used in our experiments are based on acceleration signals of the Bluetooth Wiimote controller (Nintendo). We show that SFA achieves results comparable to the well-known Random Forest predictor in shorter computation time, given a sufficient number of training patterns. However, and this is new for SFA classification, we discovered that SFA requires the number of training patterns to be strictly greater than the dimension of the nonlinear function space. If too few patterns are available, we find that the model constructed by SFA severely overfits and leads to high test set errors. We analyze the reasons for overfitting and present a new solution based on parametric bootstrap to overcome this problem.
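For orientation, a minimal linear Slow Feature Analysis sketch: whiten the signal, then take the directions with the smallest variance of the temporal derivative. The nonlinear expansion and the parametric-bootstrap fix from the paper are not shown, and the test signal is synthetic.

    import numpy as np

    def linear_sfa(x):
        # x: array of shape (time, dims)
        x = x - x.mean(axis=0)
        d, E = np.linalg.eigh(np.cov(x, rowvar=False))
        W = E / np.sqrt(d)               # whitening matrix (scaled eigenvectors)
        z = x @ W                        # whitened signal
        dz = np.diff(z, axis=0)          # temporal derivative
        d2, U = np.linalg.eigh(np.cov(dz, rowvar=False))
        return z @ U                     # columns ordered from slowest to fastest

    t = np.linspace(0, 20, 2000)
    signal = np.column_stack([np.sin(t) + 0.1 * np.sin(40 * t), np.sin(40 * t)])
    print(linear_sfa(signal)[:3])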
International Conference on Artificial Neural Networks | 1996
Wolfgang Konen
We report on an application of neural face recognition algorithms to a task with relevance to forensic investigations: the software tool PHANTOMAS (phantom automatic search) makes it possible to compare facial line drawings (the German “Phantomzeichnung”) with gray-level images of faces. In addition to normal (textual) database search actions, this software tool allows picture-to-picture searches. We present first results on the evaluation of a benchmark on this task. The ranking quality of PHANTOMAS places the true match belonging to a given drawing, on average, within the top 2.7% of the database (N=103). It is shown that this is comparable to human performance on the same data material. Computation time makes it feasible to search online in large databases (N ≈ 10000). With the same algorithm it is also possible to classify complex characteristics in faces or facial line drawings, which we demonstrate with the example of gender classification.
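A small sketch of the reported ranking measure: for each query drawing, the position of the true match in the similarity-sorted gallery, expressed as a fraction of the database size. The similarity scores below are random placeholders, not PHANTOMAS outputs.

    import numpy as np

    def mean_normalized_rank(sim, true_idx):
        # sim[q, g] = similarity of query drawing q to gallery image g
        ranks = []
        for q, t in enumerate(true_idx):
            order = np.argsort(-sim[q])                      # best match first
            ranks.append((np.where(order == t)[0][0] + 1) / sim.shape[1])
        return float(np.mean(ranks))

    rng = np.random.default_rng(0)
    sim = rng.random((5, 103))
    print(mean_normalized_rank(sim, true_idx=[0, 1, 2, 3, 4]))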
Computer Aided Surgery | 1998
Wolfgang Konen; Martin Scholz; S. Tombrock
We developed a navigation support system for endoscopic interventions that allows three-dimensional (3-D) information to be extracted from endoscopic video data and superimposed onto the live video sequences. The endoscope is coupled to a position measurement system and a video camera as components of a calibrated system. In this article we show that the radial distortions of the wide-angle endoscopic lens system can be successfully corrected and that an overall accuracy of about 0.7 mm is achievable. Tracking on live endoscopic video sequences allows accurate 3-D depth data to be obtained from multiple camera views.
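To illustrate the distortion-correction step, a sketch using the common polynomial model r_u = r_d (1 + k1 r_d^2 + k2 r_d^4); the coefficients and image points below are placeholders, not calibration values from the paper.

    import numpy as np

    def undistort(points, centre, k1, k2):
        # shift to the distortion centre, scale radially, shift back
        p = points - centre
        r2 = np.sum(p ** 2, axis=1, keepdims=True)    # squared radius per point
        return centre + p * (1 + k1 * r2 + k2 * r2 ** 2)

    pts = np.array([[100.0, 120.0], [320.0, 240.0], [500.0, 60.0]])
    print(undistort(pts, centre=np.array([320.0, 240.0]), k1=1e-7, k2=1e-13))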
Parallel Problem Solving from Nature | 2012
Markus Thill; Patrick Koch; Wolfgang Konen
Learning complex game functions is still a difficult task. We apply temporal difference learning (TDL), a well-known variant of the reinforcement learning approach, in combination with n-tuple networks to the game Connect-4. Our agent is trained just by self-play. It is able, for the first time, to consistently beat the optimal-playing Minimax agent (in game situations where a win is possible). The n-tuple network induces a powerful feature space: it is not necessary to design specific features by hand, because the agent learns to select the right ones. We believe that the n-tuple network is an important ingredient for the overall success and identify several aspects that are relevant for achieving high-quality results. The architecture is sufficiently general to be applied to similar reinforcement learning tasks as well.
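A minimal sketch of an n-tuple value function with a TD(0) weight update; the board encoding, tuple layout and learning parameters are invented for illustration and much simpler than the Connect-4 agent in the paper.

    import numpy as np

    class NTupleNet:
        def __init__(self, tuples, n_values=3):
            self.tuples = tuples                                  # lists of board-cell indices
            self.luts = [np.zeros(n_values ** len(t)) for t in tuples]
            self.n_values = n_values

        def _index(self, board, t):
            # interpret the cell values along tuple t as a base-n_values number
            idx = 0
            for cell in t:
                idx = idx * self.n_values + board[cell]           # cell value in {0, 1, 2}
            return idx

        def value(self, board):
            return sum(lut[self._index(board, t)] for lut, t in zip(self.luts, self.tuples))

        def td_update(self, board, next_board, reward, alpha=0.01, gamma=1.0):
            # TD(0) error and tabular weight update for every tuple
            delta = reward + gamma * self.value(next_board) - self.value(board)
            for lut, t in zip(self.luts, self.tuples):
                lut[self._index(board, t)] += alpha * delta

    net = NTupleNet(tuples=[[0, 1, 2], [2, 3, 4]])
    s, s_next = [0, 1, 2, 0, 1], [1, 1, 2, 0, 1]
    net.td_update(s, s_next, reward=0.0)
    print(net.value(s))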
Applied Soft Computing | 2015
Patrick Koch; Tobias Wagner; Michael Emmerich; Thomas Bäck; Wolfgang Konen
Highlights: The Kriging-based EGO techniques performed better than the baseline LHS approach. The use of re-interpolation is crucial to cope with noise. Repeats can be necessary but also decrease the number of possible infill points.
Recent research revealed that model-assisted parameter tuning can improve the quality of supervised machine learning (ML) models. The tuned models were especially found to generalize better and to be more robust compared to other optimization approaches. However, the advantages of the tuning often came along with high computation times, meaning a real burden for employing tuning algorithms. While the training with a reduced number of patterns can be a solution to this, it is often connected with decreasing model accuracies and increasing instabilities and noise. Hence, we propose a novel approach defined by a two-criteria optimization task, where both the runtime and the quality of ML models are optimized. Because the budgets for this optimization task are usually very restricted in ML, the surrogate-assisted Efficient Global Optimization (EGO) algorithm is adapted. In order to cope with noisy experiments, we apply two hypervolume indicator based EGO algorithms with smoothing and re-interpolation of the surrogate models. The techniques do not need replicates. We find that these EGO techniques can outperform traditional approaches such as Latin hypercube sampling (LHS), as well as EGO variants with replicates.
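To make the two driving quantities concrete, a sketch of Pareto dominance for (runtime, error) pairs and of the 2-D hypervolume with respect to a reference point; the points and the reference are invented, and the Kriging surrogate with its infill criterion is not shown.

    import numpy as np

    def pareto_front(points):
        # keep every point that is not dominated by another (both objectives minimized)
        pts = np.asarray(points)
        keep = [not any(np.all(q <= p) and np.any(q < p) for q in pts) for p in pts]
        return pts[keep]

    def hypervolume_2d(front, ref):
        # area dominated by the front and bounded by the reference point
        f = front[np.argsort(front[:, 0])]
        hv, prev_y = 0.0, ref[1]
        for x, y in f:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
        return hv

    pts = [(10.0, 0.30), (25.0, 0.12), (40.0, 0.10), (30.0, 0.35)]   # (runtime, error)
    front = pareto_front(pts)
    print(front, hypervolume_2d(front, ref=np.array([60.0, 0.5])))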
Genetic and Evolutionary Computation Conference | 2011
Wolfgang Konen; Patrick Koch; Oliver Flasch; Thomas Bartz-Beielstein; Martina Friese; Boris Naujoks
The complex, often redundant and noisy data in real-world data mining (DM) applications frequently lead to inferior results when out-of-the-box DM models are applied. A tuning of parameters is essential to achieve high-quality results. In this work we aim at tuning parameters of the preprocessing and the modeling phase conjointly. The framework TDM (Tuned Data Mining) was developed to facilitate the search for good parameters and the comparison of different tuners. It is shown that tuning is of great importance for high-quality results. Surrogate-model-based tuning utilizing the Sequential Parameter Optimization Toolbox (SPOT) is compared with other tuners (CMA-ES, BFGS, LHD) and evidence is found that SPOT is well suited for this task. In benchmark tasks such as the Data Mining Cup (DMC), tuned models achieve remarkably better ranks than their untuned counterparts.
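A hedged sketch of tuning preprocessing and modeling parameters conjointly. A naive random search stands in for SPOT or CMA-ES, and the data set, pipeline and parameter ranges are invented, not taken from the TDM framework.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=20, random_state=0)
    rng = np.random.default_rng(2)

    best = (None, -np.inf)
    for _ in range(15):
        params = {
            "n_components": int(rng.integers(2, 15)),   # preprocessing parameter
            "C": 10 ** rng.uniform(-2, 3),              # model parameters
            "gamma": 10 ** rng.uniform(-4, 1),
        }
        model = make_pipeline(StandardScaler(),
                              PCA(n_components=params["n_components"]),
                              SVC(C=params["C"], gamma=params["gamma"]))
        acc = cross_val_score(model, X, y, cv=5).mean()
        if acc > best[1]:
            best = (params, acc)

    print("best parameters:", best[0], "cv accuracy:", round(best[1], 3))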