
Publications


Featured research published by O. Baruth.


Journal of Neural Engineering | 2005

Tunable retina encoders for retina implants: why and how

Rolf Eckmiller; D. Neumann; O. Baruth

Current research towards retina implants for partial restoration of vision in blind humans with retinal degenerative dysfunctions focuses on implant and stimulation experiments and technologies. In contrast, our approach takes the availability of an epiretinal multi-electrode neural interface for granted and studies the conditions for successful joint information processing of both retinal prosthesis and brain. Our proposed learning retina encoder (RE) includes information processing modules to simulate the complex mapping operation of parts of the 5-layered neural retina and to provide an iterative, perception-based dialog between RE and human subject. Alternative information processing technologies in the learning RE are described, which allow an individual optimization of the RE mapping operation by means of iterative tuning with learning algorithms in a dialog between the implant-wearing subject and the RE. The primate visual system is modeled by a retina module (RM) composed of spatio-temporal (ST) filters and a central visual system module (VM). RM performs a first mapping of an optical pattern P1 in the physical domain onto a retinal output vector R1(t) in a neural domain, whereas VM performs a second mapping of R1(t) in the neural domain onto a visual percept P2 in the perceptual domain. Retinal ganglion cell properties represent non-invertible ST filters in RE, which generate ambiguous output signals. VM generates visual percepts only if the corresponding R1(t) is properly encoded, contains sufficient information, and can be disambiguated. Based on the learning RE and the proposed visual system model, a novel retina encoder (RE*) is proposed, which considers both ambiguity removal and miniature eye movements during fixation. Our simulation results suggest that VM requires miniature eye movements under control of the visual system to retrieve unambiguous percepts P2 corresponding to P1. For retina implant applications, RE* can be tuned to generate optimal ganglion cell codes for epiretinal stimulation.
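The RM → VM pipeline described above can be sketched with a toy spatial filter bank. Everything below is an illustrative assumption, not the authors' implementation: the difference-of-Gaussians kernel is a standard textbook model of a ganglion cell's receptive field, and the sizes and sigma values are made up for the example.

```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s):
    """Difference-of-Gaussians receptive field, a standard spatial model
    of a retinal ganglion cell (parameter values are illustrative)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - surround

def retina_module(p1, kernels):
    """RM stand-in: map an optical pattern P1 onto a retinal output
    vector R1, one scalar filter response per model ganglion cell."""
    return np.array([np.sum(k * p1) for k in kernels])

p1 = np.zeros((9, 9))
p1[4, 4] = 1.0                                    # a single bright pixel
kernels = [dog_kernel(9, 1.0, 2.5), dog_kernel(9, 1.5, 3.0)]
r1 = retina_module(p1, kernels)                   # low-dimensional neural code
```

A real RE would use hundreds of tunable spatio-temporal filters; the temporal filter component and the VM mapping onto percepts are omitted here.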


international conference on neural information processing | 2004

Neural Information Processing Efforts to Restore Vision in the Blind

Rolf Eckmiller; O. Baruth; D. Neumann

Retina implants are among the most advanced and truly ‘visionary’ man-machine interfaces. Such neural prostheses for retinally blind humans with previous visual experience require technical information processing modules (in addition to implanted microcontact arrays for communication with the remaining intact central visual system) to simulate the complex mapping operation of the 5-layered retina and to generate a parallel, asynchronous data stream of neural impulses corresponding to a given optical input pattern. In this paper we propose a model of the human visual system from the information science perspective. We describe the unique information processing approaches implemented in a learning Retina Encoder (RE), which functionally mimics parts of the central human retina and which allows an individual optimization of the RE mapping operation by means of iterative tuning using learning algorithms in a dialog between the implant-wearing subject and the RE.


international joint conference on neural networks | 2006

On Human Factors for Interactive Man-Machine Vision: Requirements of the Neural Visual System to transform Objects into Percepts

Rolf Eckmiller; O. Baruth; D. Neumann

This paper combines recent findings from neuroscience of the primate visual system, neural computation simulations, and dialog-based man-machine tuning experiments in order to define the properties that are essential for novel interactive image acquisition and display systems for human visual augmentation challenges, e.g. (a) interactive imagery analysis, (b) visual prosthetics, and (c) generation of realistic, dynamically explorable objects. To elicit visual percepts in humans, technical systems require a detailed consideration of the sensory and motor parameters of the human visual system, as well as learning control capabilities that optimize visual percepts in a feedback loop between the technical system and the human.


international symposium on neural networks | 2003

Retina encoder tuning and data encryption for learning retina implants

O. Baruth; Rolf Eckmiller; D. Neumann

A retina encoder (RE) as part of a visual prosthesis (retina implant) for blind subjects with retinal degenerative disorders was implemented as an array of 256 tunable spatio-temporal (ST) filters to map visual patterns P1 onto encoded output patterns as a functional approximation of retinal information processing. These RE output signals were fed into a visual system module (VM) to simulate the mapping (by the central visual system) of the retinal output onto visual percepts P2, which were visualized on a monitor. Alternative tuning strategies for roaming within the large spatio-temporal state space of RE were developed and tested. Initially, VM, as a neural network, was trained to generate a P2 very similar to a selected set of P1s by feeding the corresponding RE outputs into VM. RE tuning was tested by presenting a given small set of P1s to the serial coupling of RE and VM and by monitoring a modified Hamming distance between P2 and P1 while the ST parameters were tuned. Human volunteers with normal vision were asked to approximate P2 to P1 directly, by random search, or by means of dialog-based tuning with an evolutionary algorithm (EA). Typically, dialog-based EA tuning reached the optimal similarity between P2 and P1 (limited by the quality of VM) within fewer than 200 iterations. In contrast, manual tuning required considerably more iteration steps and converged to a lower similarity between P2 and P1. Structure and function of our tunable RE offer not only an optimization of visual perception but may also serve as a technical encryption unit (EU) in connection with the human central visual system as the biological decryption unit (DU); an important feature to meet the authentication requirements of active medical devices.
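The dialog-based EA tuning loop can be caricatured as a (1+1)-style hill climber that keeps whichever parameter vector yields a percept closer to P1 under a Hamming-type distance. Everything below is an assumption for illustration only: the bit-flip stand-in for the RE → VM chain, the one-gene-at-a-time resampling mutation, and the parameter count are not the paper's method.

```python
import random

def hamming(p, q):
    """Bit-disagreement count, standing in for the paper's modified
    Hamming distance between percept P2 and input pattern P1."""
    return sum(a != b for a, b in zip(p, q))

def encode_decode(p1, params):
    """Toy stand-in for the RE -> VM chain: a filter parameter above
    0.5 corrupts (flips) the corresponding bit of the percept."""
    return [b ^ (w > 0.5) for b, w in zip(p1, params)]

def tune(p1, iters=1200, seed=0):
    """Minimal (1+1)-evolutionary tuning of the ST parameter vector:
    resample one gene at a time, keep strict improvements only."""
    rng = random.Random(seed)
    best = [rng.random() for _ in p1]
    best_d = hamming(encode_decode(p1, best), p1)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(cand))] = rng.random()   # mutate one gene
        d = hamming(encode_decode(p1, cand), p1)
        if d < best_d:                                  # "dialog" step: keep the better percept
            best, best_d = cand, d
    return best_d

p1 = [1, 0, 1, 1, 0, 0, 1, 0] * 2
residual = tune(p1)    # remaining P1/P2 distance after tuning
```

The real experiment replaced `hamming` with a human perceptual judgment, which is exactly why a derivative-free method like an EA was needed.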


international conference on artificial neural networks | 2006

Development of a neural net-based, personalized secure communication link

D. Neumann; Rolf Eckmiller; O. Baruth

This paper describes a novel ultra-secure, unidirectional communication channel for use in public communication networks, which is based on (a) learning algorithms in combination with neural nets for fabrication of a unique pair of modules for encryption and decryption, (b) learning algorithms in combination with decision trees for the decryption process, (c) signal transformation from spatial to temporal patterns by means of ambiguous spatio-temporal (ST) filters, (d) the absence of public or private keys, and (e) the requirement of biometric data from one of the users, both for generation of the pair of hardware/software modules and for decryption by the receiver. To achieve these features we have implemented an encryption unit (EU) using ST filters for encryption and a decryption unit (DU) using learning algorithms and decision trees for decryption.
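The spatial-to-temporal idea behind the channel can be illustrated with a deliberately crude stand-in: a keyed permutation that emits a spatial pattern's pixels as a time series, with a matched module pair fabricated from a shared seed (standing in for the biometric-derived data). This sketch is far weaker than the paper's neural-net/decision-tree scheme and is an assumption for illustration only.

```python
import random

def make_pair(seed, n):
    """Fabricate a matched encryption/decryption module pair from a
    shared seed (stand-in for the paper's biometric data)."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    inverse = [0] * n
    for t, i in enumerate(order):
        inverse[i] = t                      # pixel i appears at time slot t
    return order, inverse

def encrypt(spatial, order):
    """Spatial -> temporal transform: emit the pattern's pixels as a
    time series in a keyed order (a crude 'ambiguous ST filter')."""
    return [spatial[i] for i in order]

def decrypt(temporal, inverse):
    """Receiver-side reconstruction using the matched module."""
    return [temporal[t] for t in inverse]

msg = [0, 1, 1, 0, 1, 0, 0, 1]
enc_key, dec_key = make_pair("user-biometric-hash", len(msg))
assert decrypt(encrypt(msg, enc_key), dec_key) == msg
```

The point is only the architecture: transmitter and receiver hold a fabricated module pair, and no key travels with the message.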


Archive | 2008

Towards Learning Retina Implants: How to Induce Visual Percepts with Electrical Stimulation Patterns

Rolf Eckmiller; O. Baruth; Stefan Borbe

We studied the conditions for joint information processing of a learning retina implant and the central visual system in humans with normal vision in preparation for future retina implant applications in the blind. The visual system was modeled by a retina module (RM) as a learning Retina Encoder (RE) with spatio-temporal (ST) filters and a central visual system module (VM). RE performs a mapping of an optical pattern P1 from the physical domain onto a neural domain, whereas VM performs a mapping from the neural domain onto the perceptual domain and yields a visual percept P2. Our simulation results suggest that the elicitation of ‘Gestalt’ percepts may be improved by dialog-based RE tuning with evolutionary algorithms and by simulated miniature eye movements. However, considerable efforts in neuroinformatics are still needed to elucidate not only the algorithmic representation of data in the neural domain but also its enigmatic mapping onto the perceptual domain.


international symposium on neural networks | 2007

Portable Biomimetic Retina for Learning, Perception-based Image Acquisition

Rolf Eckmiller; Rolf Schatten; O. Baruth

We developed a portable biomimetic retina for blind subjects with retinal defects. (1) To simplify the visual prosthetic function, image segmentation of input patterns P1 was provided by a set of line elements with specific lengths and orientations. (2) The segmented images were mapped by a filter module (FM: an array of tunable spatio-temporal (ST) filters) onto a data stream as future stimulation signals for the human central visual system. (3) The foveal region of the central visual system was simulated by an inverter module (IM) to test the generation of a visual percept P2 of a given P1 before the entire system is applied to blind humans. (4) The parameter vector (PV) of FM could be modified interactively by the human user with evolutionary algorithms (EA) based on a perceptual comparison. (5) Two small displays for separate presentation of P1 and the simulated percept P2 were integrated in a lightweight head mount and were combined with a 3-D acceleration sensor (AS) for head movement detection. (6) Subjects with normal vision were able to tune FM in a perception-based dialog exclusively by means of specific small head movements and to iteratively select the best three out of six possible percepts P2 until the output of IM, P2, became identical to a given P1.


international symposium on neural networks | 2004

Concerning the mapping of ambiguous retinal output vectors onto unambiguous visual percepts

Rolf Eckmiller; D. Neumann; O. Baruth

Summary form only given. From a systems theory and computational neuroscience perspective, the primate foveal visual system in the photopic range consists of a retina module as a large ensemble of spatio-temporal (ST) filters represented by the receptive field (RF) properties of mostly P- and M-ganglion cells feeding into a corresponding central visual system module (VM). VM in turn elicits visual percepts P2 corresponding to optical input patterns P1. Human visual perception, which transcends neuroscience and biophysics, is considered here as the result of a sequence of two unidirectional mapping operations. The paper outlines a novel retina encoder (RE*) for mapping of optical patterns P1 onto vectors of ambiguous output signals; RE* serves both as retina module simulator and as neuroprosthetic retinal replacement. The paper also identifies essential requirements for the mapping of an ambiguous signal vector onto an unambiguous pattern. It also discusses perceptual consequences of a low-dimensional (e.g. 100) vector of multiple ganglion cell activity generated by RE* in blind subjects with an epiretinal, learning retina implant, vs. the high-dimensional vector of single ganglion cell activity generated by the human retina during normal vision.
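The ambiguity argument can be made concrete with a toy 1-D filter bank: non-overlapping local sums are non-invertible, so distinct patterns can share one retinal code, while sampling the same filters across small shifts of the input (simulated miniature eye movements) separates them. The patterns, filter width, and shift values below are illustrative assumptions, not taken from the paper.

```python
def retina_code(p, width=3):
    """Non-invertible filter bank: non-overlapping local sums, so many
    input patterns map onto the same output vector (ambiguity)."""
    return [sum(p[i:i + width]) for i in range(0, len(p), width)]

def shift(p, s):
    """Circularly shift a pattern, mimicking a miniature eye movement."""
    return p[-s:] + p[:-s] if s else p

a = [1, 0, 0, 0, 0, 1]
b = [0, 0, 1, 1, 0, 0]
assert retina_code(a) == retina_code(b)              # one fixation: ambiguous

def codes_with_tremor(p, shifts=(0, 1, 2)):
    """Sample the same filters at several small shifts of the pattern;
    the extra views make the code unambiguous for these two patterns."""
    return [retina_code(shift(p, s)) for s in shifts]

assert codes_with_tremor(a) != codes_with_tremor(b)  # tremor disambiguates
```

This mirrors the abstract's claim only in miniature: a low-dimensional ambiguous code becomes decodable once the visual system controls how the input is resampled over time.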


Investigative Ophthalmology & Visual Science | 2003

Pattern Encoding and Data Encryption in Learning Retina Implants

O. Baruth; D. Neumann; R.E. Eckmiller


Integrated Computer-aided Engineering | 2007

Combination of biometric data and learning algorithms for both generation and application of a secure communication link

D. Neumann; Rolf Eckmiller; O. Baruth
