
Publications


Featured research published by Josef Skrzypek.


IEEE Transactions on Software Engineering | 1992

A software environment for studying computational neural systems

Edmond Mesrobian; Josef Skrzypek

UCLA-SFINX is a neural network simulation environment that enables users to simulate a wide variety of neural network models at various levels of abstraction. A network specification language enables users to construct arbitrary network structures. Small, structurally irregular networks can be modeled by explicitly defining each neuron and its corresponding connections. Very large networks with regular connectivity patterns can be specified implicitly using array constructs. Graphics support, based on the X Window System, is provided to visualize simulation results. Details of the simulation environment are described, and simulation examples are presented to demonstrate SFINX's capabilities.
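The SFINX specification language itself is not reproduced in this summary, so the short Python sketch below only mirrors the contrast drawn above: a hypothetical explicit, neuron-by-neuron description of a small irregular net versus a single shared array rule for a large regular layer. The cell names, kernel weights, and grid size are assumptions made for illustration, not SFINX syntax.

```python
# Hypothetical analogue of the two specification styles described above;
# not SFINX code. Cell names, weights, and sizes are illustrative only.
import numpy as np

# Explicit style: a small, structurally irregular net, one neuron and
# one connection at a time.
neurons = ["bipolar", "horizontal", "ganglion"]
connections = {("bipolar", "ganglion"): 1.0,
               ("horizontal", "bipolar"): -0.5}

# Implicit style: a 64x64 layer whose units all share one local
# connectivity pattern, expressed as a single array rule.
grid = (64, 64)
local_kernel = np.array([[0.0, 0.2, 0.0],
                         [0.2, 0.2, 0.2],
                         [0.0, 0.2, 0.0]])

def implicit_layer_update(pre):
    """Apply the shared kernel everywhere: one rule covers 4096 units."""
    post = np.zeros(grid)
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            post += local_kernel[dy + 1, dx + 1] * np.roll(pre, (dy, dx), axis=(0, 1))
    return post

print(len(neurons), len(connections))                       # 3 2
print(implicit_layer_update(np.random.rand(*grid)).shape)   # (64, 64)
```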


Computer Vision and Pattern Recognition | 1992

Neural network models for illusory contour perception

Josef Skrzypek; Brian Ringer

A physiologically motivated model of illusory contour perception is examined by simulating a neural network architecture that was tested with gray-level images. The results indicate that a model that combines a bottom-up feature aggregation strategy with recurrent processing is best suited for describing this type of perceptual completion.
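As a rough illustration of the two ingredients named above, the toy 1-D sketch below combines a fixed bottom-up edge signal with a few recurrent passes of lateral support, so that activity appears in the gap between aligned fragments. It is an assumed, heavily simplified stand-in, not the simulated architecture from the paper.

```python
# Toy 1-D illustration (not the paper's model): bottom-up edge responses
# plus recurrent lateral support fill in the gap between two fragments.
import numpy as np

edges = np.array([0, 1, 1, 0, 0, 0, 1, 1, 0], dtype=float)  # fragment - gap - fragment

activity = edges.copy()
for _ in range(20):                                          # recurrent passes
    lateral = np.zeros_like(activity)
    lateral[1:-1] = 0.5 * (activity[:-2] + activity[2:])     # nearest-neighbor support
    activity = np.clip(edges + 0.6 * lateral, 0.0, 1.0)      # bottom-up input + recurrent term

print(np.round(activity, 2))   # gap positions now carry nonzero "completion" activity
```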


Systems, Man and Cybernetics | 1990

Dynamics of clustering multiple backpropagation networks

W.P. Lincoln; Josef Skrzypek

It is known that the synergistic effects of clustering multiple backpropagation nets improve supervised learning, generalization, fault tolerance, and self-organization with respect to a comparably complex nonclustered system. A model that captures the underlying reasons for the synergy of clustering is outlined. The underlying ideas can apply to any net trained with a supervised learning rule.
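The sketch below shows the generic clustering idea in miniature: several small backpropagation nets are trained independently on XOR and their outputs are averaged. The task, net size, and learning rate are arbitrary illustrative choices and do not reproduce the model outlined in the paper.

```python
# Minimal clustering sketch: average the outputs of several independently
# trained backpropagation nets (assumed toy task and hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_net(epochs=4000, lr=2.0, hidden=4):
    """Train one small 2-4-1 net by plain backpropagation."""
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)
        out = sigmoid(H @ W2 + b2)
        d_out = (out - Y) * out * (1 - out)                  # output-layer error term
        d_hid = (d_out @ W2.T) * H * (1 - H)                 # backpropagated hidden error
        W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)
    return lambda x: sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

cluster = [train_net() for _ in range(5)]                    # five independently trained nets
ensemble = np.mean([net(X) for net in cluster], axis=0)      # simple output averaging
print(np.round(ensemble, 2))                                 # usually close to [0, 1, 1, 0]
```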


Systems, Man and Cybernetics | 1990

Lightness constancy: connectionist architecture for controlling sensitivity

Josef Skrzypek

A simple yet biologically plausible neuronal architecture that can account for the phenomenon of lightness constancy is explored. The model uses a hierarchical structure of overlapping operators at multiple levels of resolution. All operators have receptive fields organized concentrically into two antagonistic zones of center and surround. This center/surround (C/S) organization allows the computing structure to adaptively adjust thresholds without prior knowledge of the intensity distribution. The principal idea is to have larger operators set the thresholds for smaller ones; local and global averages are used to shift the I-R curves. The architecture is based on the simple principles of convergence, divergence, and inhibition. The result is a computationally robust, regular structure that can simplify implementation in VLSI.
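A toy 1-D numerical sketch of the threshold-setting idea, under assumed window sizes and signals: a large averaging operator supplies the surround level that shifts the operating point of a small center operator, so the rectified output encodes local contrast rather than absolute intensity. It is illustrative only, not the paper's multi-resolution architecture.

```python
# Illustrative 1-D center/surround sketch: the larger operator sets the
# threshold for the smaller one (window sizes and signal are assumptions).
import numpy as np

rng = np.random.default_rng(1)
intensity = 100.0 + 20.0 * rng.standard_normal(128)     # 1-D "image" row

def box_average(signal, radius):
    """Local mean over a window of the given radius (crude operator)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

center = box_average(intensity, radius=2)               # small operator
surround = box_average(intensity, radius=16)            # large operator sets the threshold
response = np.maximum(center - surround, 0.0)           # C/S antagonism: shifted operating point

print(np.round(response[:6], 1))                        # rectified local-contrast signal
```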


Archive | 1992

Light Sensitivity in Cones is Affected by the Feedback from Horizontal Cells

Josef Skrzypek

We examined the effect of annular illumination on the resetting of the I-R relation measured intracellularly in tiger salamander cones. Our results suggest that peripheral illumination contributes to the cellular mechanism of adaptation, mediated by a neural circuit involving a feedback synapse from horizontal cells to cones. The effect is to unsaturate the membrane potential of a fully hyperpolarized cone by “instantaneously” shifting the cone's I-R curves along the intensity axis to be in register with the ambient light level of the periphery. An equivalent electrical circuit with three different transmembrane channels (leakage, photocurrent, and feedback) was used to model the static behavior of a cone. SPICE simulation showed that interactions between the feedback and the light-sensitive conductance can shift the I-R curves along the intensity domain, provided that the phototransduction mechanism is not saturated during the maximally hyperpolarized light response.
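The values below are assumed for illustration only; the sketch simply evaluates the steady-state membrane potential of a three-conductance model (leakage, photocurrent, and feedback) to show, qualitatively, how adding a feedback conductance pulls a strongly hyperpolarized cone away from saturation. It is not the SPICE model used in the paper.

```python
# Steady-state sketch of a three-conductance cone membrane (leakage,
# photocurrent, feedback). All parameter values are illustrative guesses.
import numpy as np

E_leak, E_photo, E_fb = -70.0, 0.0, -40.0        # assumed reversal potentials (mV)
g_leak = 1.0                                      # fixed leak conductance (arbitrary units)

def cone_voltage(intensity, g_fb):
    """Membrane potential for a given light intensity and feedback conductance."""
    g_photo = 2.0 / (1.0 + intensity)             # light closes photo channels (toy form)
    g_total = g_leak + g_photo + g_fb
    return (g_leak * E_leak + g_photo * E_photo + g_fb * E_fb) / g_total

I = np.logspace(-2, 3, 6)                         # test intensities
print(np.round(cone_voltage(I, g_fb=0.0), 1))     # no feedback: saturates toward E_leak
print(np.round(cone_voltage(I, g_fb=1.5), 1))     # with feedback: bright-light response unsaturated
```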


OE LASE'87 and EO Imaging Symp (January 1987, Los Angeles) | 1987

UCLA PUNNS -- A Neural Network Machine for Computer Vision

David Gungner; Josef Skrzypek

The sequential processing paradigm limits current solutions for computer vision by restricting the number of functions which naturally map onto Von Neumann computing architectures. A variety of physical computing structures underlie the massive parallelism inherent in many visual functions. Therefore, further advances in general purpose vision must assume inseparability of function from structure. To combine function and structure we are investigating connectionist architectures using PUNNS (Perception Using Neural Network Simulation). Our approach is inspired and constrained by the analysis of visual functions that are computed in the neural networks of living things. PUNNS represents a massively parallel computer architecture which is evolving to allow the execution of certain visual functions in constant time, regardless of the size and complexity of the image. Due to the complexity and cost of building a neural net machine, a flexible neural net simulator is needed to invent, study and understand the behavior of complex vision algorithms. Some of the issues involved in building a simulator are how to compactly describe the interconnectivity of the neural network, how to input image data, how to program the neural network, and how to display the results of the network. This paper describes the implementation of PUNNS. Simulation examples and a comparison of PUNNS to other neural net simulators will be presented.


Simulation | 1992

A general purpose simulation environment for neural models

Edmond Mesrobian; Josef Skrzypek

Current interest in neural networks has produced a diverse set of algorithms and architectures that vary in connectivity pattern, temporal behavior, update rules, and convergence properties. We have designed a flexible simulation system that can support the implementation of a wide range of neural network approaches. The UCLA-SFINX simulator is especially suited for the exploration of structured, irregular, and layered connectivity patterns. Functions, such as those in early vision, are modeled using the regular connectivity of center/surround antagonistic receptive fields and can be implemented as the difference of concentric Gaussians. Higher-level cognitive functions, such as supervised and unsupervised learning, have more irregular, dynamic connectivity structures and update mechanisms that are also supported. To visualize weight spaces, input/output training sets, image data, or other network characteristics, SFINX provides an X Window System-based graphical output that assists in rapidly assessing the consequences of altering connectivity patterns, parameter tuning, and other experiments.
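As a concrete reading of the difference-of-concentric-Gaussians receptive field mentioned above, here is a small sketch with assumed sigmas and kernel size; it is not SFINX code.

```python
# Center/surround receptive field as a difference of concentric Gaussians
# (DoG); kernel size and sigmas are assumed values for illustration.
import numpy as np

def dog_kernel(size=15, sigma_center=1.0, sigma_surround=3.0):
    """2-D DoG kernel, each Gaussian normalized to unit sum before subtraction."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2))
    surround = np.exp(-r2 / (2 * sigma_surround**2))
    return center / center.sum() - surround / surround.sum()

kernel = dog_kernel()
print(round(float(kernel.sum()), 6))   # ~0.0: balanced center/surround antagonism
```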


International Journal of Pattern Recognition and Artificial Intelligence | 1992

Lightness constancy from luminance contrast

Josef Skrzypek; David Gungner

To discount effects of uneven illumination, we have designed and tested a neural network that can adaptively control light sensitivity at the photosensor level. Our neural network architecture simulates the ON channel response of the visual system using multiple layers of hexagonally arranged nodes having partially overlapping receptive fields of different spatial frequencies. Feedforward connections are excitatory, while feedback pathways subserve lateral inhibition. The outputs of these nodes are combined so as to maximize the signal-to-noise ratio while providing constant feedback that resets photosensor thresholds to maintain high sensitivity. A sparse primitive interpolation technique was applied to the ensemble output of the sensitivity control module to determine if it sufficiently encodes surface reflectance. The motivation is to determine to what extent the ratio principle, as captured by the sensitivity control system, explains the lightness constancy phenomenon and whether the information contained within an ON channel response is adequate to reconstruct surface lightness. Our connectionist architecture can account for many characteristics attributed to the lightness constancy phenomenon observed in biological systems. The results suggest that our module maintains high sensitivity across a large range of intensities without interfering with the transmission of visual information embedded in the spatial discontinuities of intensity. However, the amplitude of the luminance derivative as encoded in ON channel responses is not sufficient to approximate surface reflectance.
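A minimal numerical sketch of the ratio principle referred to above, with an assumed feedback time constant: pooled feedback resets a common threshold, so the rectified ON-type output for the same reflectance pattern is essentially unchanged when the illumination is scaled a hundredfold. It is a toy stand-in, not the paper's multilayer architecture.

```python
# Toy sensitivity-control sketch: feedback resets the threshold so the ON
# output depends on intensity ratios, not absolute level (assumed values).
import numpy as np

rng = np.random.default_rng(2)
pattern = rng.uniform(0.8, 1.2, 32)                  # fixed reflectance pattern

def on_response(mean_intensity, steps=50, alpha=0.2):
    stimulus = mean_intensity * pattern              # same pattern under different illumination
    threshold = 0.0
    for _ in range(steps):                           # pooled feedback resets the threshold
        threshold += alpha * (stimulus.mean() - threshold)
    return np.maximum(stimulus - threshold, 0.0) / threshold   # rectified, ratio-scaled ON output

print(np.round(on_response(1.0)[:5], 3))
print(np.round(on_response(100.0)[:5], 3))           # 100x brighter input, essentially the same output
```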


Journal of Intelligent Manufacturing | 1993

Neural architecture for robotic vision sensor that discounts uneven illumination in a manufacturing environment

Josef Skrzypek

Computer vision algorithms for inspection or pick-and-place operations often depend on spatially uniform illumination of a workplace. This necessitates expensive lighting fixtures. To discount effects of uneven illumination we designed and tested a neural network that can adaptively control light sensitivity at the photosensor level. Our neural network architecture consists of multiple layers with hexagonally arranged nodes. All nodes have partially overlapping receptive fields of different spatial frequencies. Feedforward connections are excitatory while feedback pathways subserve lateral inhibition. The outputs of these nodes are combined so as to maximize the signal-to-noise ratio while constantly resetting thresholds to maintain high sensitivity. Our connectionist architecture can account for many characteristics attributed to the lightness constancy phenomenon observed in biological systems. The results suggest that our module maintains high sensitivity over the whole domain of intensities without interfering with transmission of visual information embedded in spatial discontinuities of intensity.


Systems, Man and Cybernetics | 1990

Categorizing visual stimuli: specification of a neural network architecture

Valter Rodrigues; Josef Skrzypek

The problem of categorizing visual stimuli on the basis of a hierarchical structure of basic, superordinate, and subordinate categories is addressed. A specification for a simplified neural network architecture that uses a uniform linear measure to determine similarity of common features and dissimilarity of distinctive features is derived. The hierarchy is mapped onto a neural network structure in which input-level cells correspond to activities generated by exemplars and output cells correspond to basic level categories. The network can be used for visual categorization at all three levels of abstraction and for the particular case of recognition. Experimental results on the XOR problem and letter recognition have shown that by introducing similarity and dissimilarity in cell activation the network exhibits superior convergence behavior for the backpropagation algorithm.
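To make the uniform linear measure concrete, the sketch below scores binary feature vectors by counting common active features and subtracting weighted distinctive ones; the weights and example features are assumptions for illustration, not the paper's encoding.

```python
# Linear similarity/dissimilarity score on binary feature vectors
# (weights and feature sets are illustrative assumptions).
import numpy as np

def category_match(x, prototype, w_common=1.0, w_distinct=0.5):
    """Common active features minus weighted distinctive features."""
    common = np.sum((x == 1) & (prototype == 1))
    distinct = np.sum(x != prototype)
    return w_common * common - w_distinct * distinct

bird_prototype = np.array([1, 1, 1, 0])   # e.g. wings, feathers, flies, barks (assumed)
robin   = np.array([1, 1, 1, 0])
penguin = np.array([1, 1, 0, 0])
dog     = np.array([0, 0, 0, 1])

for name, x in [("robin", robin), ("penguin", penguin), ("dog", dog)]:
    print(name, category_match(x, bird_prototype))   # robin > penguin > dog
```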

Collaboration


Dive into Josef Skrzypek's collaborations.

Top Co-Authors

David Gungner, University of California
Michael Stiber, University of Washington
Brian Ringer, University of California
E. Mesrobian, University of California
W.P. Lincoln, University of California