Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wolfram Schenck is active.

Publication


Featured research published by Wolfram Schenck.


Cognitive Science | 2008

Bootstrapping Cognition from Behavior--A Computerized Thought Experiment

Ralf Möller; Wolfram Schenck

We show that simple perceptual competences can emerge from an internal simulation of action effects and are thus grounded in behavior. A simulated agent learns to distinguish between dead ends and corridors without needing to represent these concepts in the sensory domain. Initially, the agent is only endowed with a simple value system and the means to extract low-level features from an image. In the interaction with the environment, it acquires a visuo-tactile forward model that allows the agent to predict how the visual input changes under its movements, and whether movements will lead to a collision. From short-term predictions based on the forward model, the agent learns an inverse model. The inverse model in turn produces suggestions about which actions should be simulated in long-term predictions, and long-term predictions eventually give rise to the perceptual ability.


Biological Cybernetics | 2005

Learning visuomotor transformations for gaze-control and grasping

Heiko Hoffmann; Wolfram Schenck; Ralf Möller

To reach for and grasp an object, visual information about it must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target’s position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity of having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
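
The pattern-completion step of the arm controller can be illustrated with a deliberately simplified density model: a single joint Gaussian over sensor and motor variables. This is a sketch only; the original work uses a richer density model, and the linear plant below is purely hypothetical. Conditioning the Gaussian on the sensory part fills in the missing motor part:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sensorimotor data: target position x (2-D) and arm "posture" q (2-D)
# related linearly plus noise. Real setups are nonlinear; this is a toy stand-in.
x = rng.uniform(-1.0, 1.0, size=(5000, 2))
A = np.array([[0.8, -0.3], [0.4, 0.9]])
q = x @ A.T + 0.01 * rng.normal(size=(5000, 2))

# Fit a single joint Gaussian over z = (x, q).
z = np.hstack([x, q])
mu = z.mean(axis=0)
S = np.cov(z, rowvar=False)

# Pattern completion: given a target x*, the missing motor part is filled in
# with the conditional mean E[q | x*] = mu_q + S_qx S_xx^{-1} (x* - mu_x).
def complete(x_star):
    S_xx = S[:2, :2]
    S_qx = S[2:, :2]
    return mu[2:] + S_qx @ np.linalg.solve(S_xx, x_star - mu[:2])

x_star = np.array([0.5, -0.2])
q_hat = complete(x_star)
```

With a mixture instead of a single Gaussian, the same conditioning trick yields multimodal completions, which is how a density model can represent redundant arm postures for one target.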


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2007

Spectral contrasts for landmark navigation

Thomas Kollmeier; Frank Röben; Wolfram Schenck; Ralf Möller

Visual robot navigation in outdoor environments would benefit from an illumination-independent representation of images. We explore how such a representation, comprising a black skyline of objects in front of a white sky, can be obtained from dual-channel spectral contrast measures. Light from sky and natural objects under different conditions of illumination was analyzed by five spectral channels: ultraviolet, blue, green, red, and near infrared. Linear discriminant analysis was applied to determine the optimal linear separation between sky and object points. A statistical comparison shows that contrasts with large differences in the wavelength of the two channels, specifically ultraviolet-infrared, blue-infrared, and ultraviolet-red, yield the best separation. Within a single channel, the best separation was obtained for ultraviolet light. The gain in separation quality when all five channels were included is relatively small.
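
The channel-contrast separation can be sketched with a Fisher linear discriminant on synthetic two-channel data. The class means, scales, and channel labels below are invented for illustration and are not taken from the measurements in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for two spectral channels (e.g. ultraviolet and near infrared):
# sky points are bright in UV relative to IR, object points the other way round.
sky = rng.normal(loc=[0.8, 0.3], scale=0.08, size=(400, 2))
obj = rng.normal(loc=[0.4, 0.6], scale=0.08, size=(400, 2))

# Fisher linear discriminant: w = Sw^{-1} (m_sky - m_obj), with Sw the
# pooled within-class scatter.
m_sky, m_obj = sky.mean(axis=0), obj.mean(axis=0)
Sw = np.cov(sky, rowvar=False) + np.cov(obj, rowvar=False)
w = np.linalg.solve(Sw, m_sky - m_obj)

# Classify by projecting onto w and thresholding at the midpoint of the means.
thresh = w @ (m_sky + m_obj) / 2.0
pred_sky = sky @ w > thresh
pred_obj = obj @ w > thresh
accuracy = (pred_sky.sum() + (~pred_obj).sum()) / 800.0
```

Thresholding the projection turns each pixel into "sky" or "object", i.e. the black-skyline-on-white-sky representation described in the abstract.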


Simulation of Adaptive Behavior | 2007

Training and Application of a Visual Forward Model for a Robot Camera Head

Wolfram Schenck; Ralf Möller

Visual forward models predict future visual data from the previous visual sensory state and a motor command. The adaptive acquisition of visual forward models in robotic applications is plagued by the high dimensionality of visual data which is not handled well by most machine learning and neural network algorithms. Moreover, the forward model has to learn which parts of the visual output are really predictable and which are not because they lack any corresponding part in the visual input. In the present study, a learning algorithm is proposed which solves both problems. It relies on predicting the mapping between pixel positions in the visual input and output instead of directly forecasting visual data. The mapping is learned by matching corresponding regions in the visual input and output while exploring different visual surroundings. Unpredictable regions are detected by the lack of any clear correspondence. The proposed algorithm is applied successfully to a robot camera head under additional distortion of the camera images by a retinal mapping. Two future applications of the final visual forward model are proposed, saccade learning and a task from the domain of eye-hand coordination.
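
The core idea, predicting a mapping between pixel positions rather than pixel values, can be sketched as follows. The mapping here is a hypothetical pure image shift standing in for the learned, command-dependent mapping, and output pixels without a source in the input are flagged as unpredictable:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((32, 32))  # current camera image

# A visual forward model of this kind stores, per motor command, a mapping
# from output pixel positions to input pixel positions. The learned mapping
# is replaced here by a known shift of (dy, dx) = (3, -5) pixels.
dy, dx = 3, -5
H, W = img.shape
ys, xs = np.mgrid[0:H, 0:W]
src_y, src_x = ys - dy, xs - dx

# Pixels whose source falls outside the input have no correspondence in the
# input image: these are the unpredictable regions the model must detect.
valid = (src_y >= 0) & (src_y < H) & (src_x >= 0) & (src_x < W)
pred = np.full_like(img, np.nan)
pred[valid] = img[src_y[valid], src_x[valid]]
```

Because the mapping is over positions, the same machinery works unchanged under a retinal distortion: the stored source coordinates simply become non-uniform.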


Proceedings of the Eighth Neural Computation and Psychology Workshop | 2004

Staged learning of saccadic eye movements with a robot camera head

Wolfram Schenck; Ralf Möller

In motor learning, two main problems arise: the missing teacher signal, and the necessity to explore high-dimensional sensorimotor spaces. Several solutions have been proposed, all of them limited in some respect. In the present work, an alternative learning mechanism is developed for the example of saccade control, implemented on a stereo vision robot camera head. This approach relies on two main principles: averaging over imperfect learning examples, and learning in multiple stages. In each stage, a saccade controller is trained with a set of imperfect learning examples. Afterwards, the output of this controller serves as the starting point for the creation of a new training set of better quality. By the repetition of these steps, the controller’s performance can be incrementally improved without the need to search from scratch for the rare learning examples of very good quality.
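
A minimal sketch of the staged scheme, assuming a one-dimensional plant with unknown gain and a linear controller (both invented for illustration): each stage keeps only those exploratory trials that improved on the current controller's output and refits the controller gain by averaging over these imperfect examples.

```python
import numpy as np

rng = np.random.default_rng(3)

k = 2.0  # unknown plant gain: image shift = k * motor command
c = 0.1  # initial controller gain, far from the ideal 1/k = 0.5

for stage in range(20):
    examples = []
    for _ in range(500):
        o = rng.uniform(0.5, 1.0)       # retinal target offset
        m1 = c * o                      # command from current controller
        m2 = m1 + rng.normal(0.0, 0.1)  # noisy exploratory correction
        # Keep the trial only if the correction reduced the residual offset:
        # an imperfect, but better-than-before, learning example.
        if abs(o - k * m2) < abs(o - k * m1):
            examples.append(m2 / o)
    if examples:
        c = float(np.mean(examples))    # averaging over imperfect examples
```

No teacher ever supplies the correct command; the accepted trials are merely better than the current controller, and averaging plus restaging drives the gain toward the ideal value.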


International Journal of Neural Systems | 2010

Coupled Singular Value Decomposition of a Cross-Covariance Matrix

Alexander Kaiser; Wolfram Schenck; Ralf Möller

We derive coupled on-line learning rules for the singular value decomposition (SVD) of a cross-covariance matrix. In coupled SVD rules, the singular value is estimated alongside the singular vectors, and the effective learning rates for the singular vector rules are influenced by the singular value estimates. In addition, we use a first-order approximation of Gram-Schmidt orthonormalization as a decorrelation method for the estimation of multiple singular vectors and singular values. Experiments on synthetic data show that coupled learning rules converge faster than Hebbian learning rules and that the first-order approximation of Gram-Schmidt orthonormalization produces more precise estimates and better orthonormality than the standard deflation method.
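
A hedged sketch of one plausible coupled rule for the principal singular triplet, not the exact update equations from the paper: the singular value estimate sigma is learned alongside u and v and feeds back into both vector updates. The test matrix is constructed with known singular structure so the result can be checked:

```python
import numpy as np

rng = np.random.default_rng(4)

def unit(x):
    return x / np.linalg.norm(x)

# Cross-covariance matrix with known singular structure:
# singular values 2.0 and 0.5, orthonormal singular vectors in P and Q.
P, _ = np.linalg.qr(rng.normal(size=(4, 2)))
Q, _ = np.linalg.qr(rng.normal(size=(3, 2)))
A = 2.0 * np.outer(P[:, 0], Q[:, 0]) + 0.5 * np.outer(P[:, 1], Q[:, 1])

# Coupled rule: at the fixed point A v = sigma u, A^T u = sigma v, and
# sigma = u^T A v, which forces u and v onto unit-norm singular vectors.
u, v = unit(rng.normal(size=4)), unit(rng.normal(size=3))
sigma, gamma = 1.0, 0.01
for _ in range(10000):
    Av, Atu = A @ v, A.T @ u
    u = u + gamma * (Av - sigma * u)
    v = v + gamma * (Atu - sigma * v)
    sigma = sigma + gamma * (u @ A @ v - sigma)
```

Estimating sigma explicitly is what removes the need for a separate normalization step: the feedback term -sigma*u plays the role that explicit renormalization plays in plain Hebbian rules.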


Adaptive Behavior | 2013

Solving the correspondence problem in stereo vision by internal simulation

Alexander Kaiser; Wolfram Schenck; Ralf Möller

We present a computational model for object matching in a pair of stereo images based on internal sensorimotor simulation. In our study, we use pairs of retinal images, i.e. the resolution is higher towards the image center and low in the periphery, which stem from two cameras, each one mounted on a pan–tilt unit (PTU). The internal simulation is driven by two internal models: a saccade controller (SC) which generates a fixation movement to a certain point in either image, and a visual forward model (VFM) that models the effect of camera movements (by the PTU) on the image. The SC takes as sensory input the current position of a salient point (in image coordinates) and generates a motor command that would lead to the fixation of that point. The VFM takes as sensory input a current camera image and a motor command, i.e. a saccade, and generates an image that appears as if the saccade had been executed. By using the internal models, the salient objects are virtually fixated in both images. These fixated views are matched against each other using a simple difference-based matching approach. The performance of the model is evaluated through a large number of experiments on an image database and compared to a widely used approach from computer vision. In addition, a comparison on a commonplace scene is presented.
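
The final difference-based matching step can be sketched in isolation. The virtual fixation by SC and VFM is replaced here by ready-made, hypothetically "fixated" patches with a known ground-truth pairing; each left view is matched to the right view with the smallest sum of squared differences:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for fixated views: small patches centred on candidate objects.
left_views = [rng.random((16, 16)) for _ in range(4)]

# The right camera sees the same objects in a different order, with noise.
perm = [2, 0, 3, 1]
right_views = [left_views[i] + 0.05 * rng.normal(size=(16, 16)) for i in perm]

def match(view, candidates):
    """Index of the candidate with the smallest sum of squared differences."""
    return int(np.argmin([np.sum((view - c) ** 2) for c in candidates]))

matches = [match(lv, right_views) for lv in left_views]
```

Virtual fixation is what makes such a naive pixel-difference measure viable: once both views are (internally) centred on the same object, no descriptor engineering is needed.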


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

Early Evaluation of the “Infinite Memory Engine” Burst Buffer Solution

Wolfram Schenck; Salem El Sayed; Maciej Foszczynski; Wilhelm Homberg; D. Pleiter

Hierarchical storage architectures are required to meet both capacity and bandwidth requirements of future high-end storage systems. In this paper we present the results of an evaluation of an emerging technology, DataDirect Networks’ (DDN) Infinite Memory Engine (IME). IME makes it possible to place a fast buffer in front of a large-capacity storage system. We collected benchmarking data with IOR and with the HPC application NEST. The IOR bandwidth results show how well the network bandwidth towards such a fast buffer can be exploited compared to the external storage system. The NEST benchmarks clearly demonstrate that IME can reduce I/O-induced load imbalance between MPI ranks to a minimum while speeding up I/O as a whole by a considerable factor.


Connection Science | 2011

Kinematic motor learning

Wolfram Schenck

This paper focuses on adaptive motor control in the kinematic domain. Several motor-learning strategies from the literature are adapted to kinematic problems: ‘feedback-error learning’, ‘distal supervised learning’, and ‘direct inverse modelling’ (DIM). One of these learning strategies, DIM, is significantly enhanced by combining it with abstract recurrent neural networks. Moreover, a newly developed learning strategy (‘learning by averaging’) is presented in detail. The performance of these learning strategies is compared on different learning tasks on two simulated robot setups (a robot camera head and a planar arm). The results indicate a general superiority of DIM if combined with abstract recurrent neural networks. Learning by averaging shows consistent success if the motor task is constrained by special requirements.
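
For a non-redundant, monotone kinematic plant, direct inverse modelling reduces to a simple recipe: explore motor commands, observe outcomes, and regress command on outcome. The plant and the polynomial learner below are stand-ins chosen for illustration; with redundant arms this naive regression over outcomes breaks down, which is where the combination with abstract recurrent neural networks comes in.

```python
import numpy as np

rng = np.random.default_rng(6)

# Kinematic "plant": motor command m (e.g. a joint angle) -> end effector
# position s. Monotone on the explored range, so DIM is well posed here.
def plant(m):
    return np.sin(m)

# DIM: explore commands, observe outcomes, then fit a model mapping
# outcome back to command (a polynomial regression as a stand-in learner).
m = rng.uniform(-1.2, 1.2, size=2000)
s = plant(m)
coeffs = np.polyfit(s, m, deg=7)

def inverse_model(s_target):
    return np.polyval(coeffs, s_target)

s_goal = 0.6
m_cmd = inverse_model(s_goal)
```

The learned inverse can then be queried for any reachable goal; executing the returned command on the plant should reproduce the goal closely.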


International Conference on Cloud Computing | 2018

A Case Study on Benchmarking IoT Cloud Services

Kevin Grünberg; Wolfram Schenck

The Internet of Things (IoT) is on the rise, forming networks of sensors, machines, cars, household appliances, and other physical items. In the industrial area, machines in assembly lines are connected to the internet for quick data exchange and coordinated operation. Cloud services are an obvious choice for data integration and processing in the (industrial) IoT. However, manufacturing machines have to exchange data in close to real-time in many use cases, requiring small round-trip times (RTT) in the communication between device and cloud. In this study, two of the major IoT cloud services, Microsoft Azure IoT Hub and Amazon Web Services IoT, are benchmarked in the area of North Rhine-Westphalia (Germany) regarding RTT, varying factors like time of day, day of week, location, inter-message interval, and additional data processing in the cloud. The results show significant performance differences between the cloud services and a considerable impact of some of the aforementioned factors. In conclusion, as soon as (soft) real-time conditions come into play, it is highly advisable to carry out benchmarking in advance to identify an IoT cloud workflow which meets these conditions.
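
A benchmarking loop of this kind boils down to timing many round trips and reporting robust statistics. The sketch below assumes nothing about a particular cloud SDK: the round trip is any blocking callable, and is replaced here by a sleep instead of a real network exchange.

```python
import statistics
import time

def benchmark_rtt(round_trip, n=50):
    """Measure round-trip times of a callable and summarise them in ms.

    In a real setup, `round_trip` would publish a message to the IoT
    service and block until the echo arrives; here any callable works
    as a stand-in.
    """
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        round_trip()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "max_ms": samples[-1],
    }

# Stand-in round trip: sleep a few milliseconds instead of hitting the network.
stats = benchmark_rtt(lambda: time.sleep(0.005), n=20)
```

Reporting percentiles rather than just the mean matters for (soft) real-time use: a workflow is only acceptable if its tail latencies, not merely its average, stay within bounds.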

Collaboration


Dive into Wolfram Schenck's collaborations.

Top Co-Authors

Wilhelm Homberg
Forschungszentrum Jülich

D. Pleiter
Forschungszentrum Jülich

Salem El Sayed
Forschungszentrum Jülich