Jacob M. J. Murre
Leiden University
Publications
Featured research published by Jacob M. J. Murre.
Neural Networks | 1992
Jacob M. J. Murre; R. Hans Phaf; Gezinus Wolters
A new procedure (CALM: Categorizing and Learning Module) is introduced for unsupervised learning in modular neural networks. The work described addresses a number of problems in connectionist modeling, such as lack of speed, lack of stability, inability to learn either with or without supervision, and the inability to both discriminate between and generalize over patterns. CALM is a single module that can be used to construct larger networks. A CALM module consists of pairs of excitatory Representation- and inhibitory Veto-nodes, and an Arousal-node. Because of the fixed internal wiring pattern of a module, the Arousal-node is sensitive to the novelty of the input pattern. The activation of the Arousal-node determines two psychologically motivated types of learning operating in the module: elaboration learning, which implies a high learning rate and the distribution of nonspecific, random activations in the module, and activation learning, which has only base rate learning without random activations. The learning rule used is a modified version of a rule described by Grossberg. The workings of CALM networks are illustrated in a number of simulations. It is shown that a CALM module quickly reaches a categorization, even with new patterns. Though categorization and learning are relatively fast compared to other models, CALM modules do not suffer from excessive plasticity. They are also shown to be capable of both discriminating between and generalizing over patterns. When presented with a pattern set exceeding the number of Representation-nodes, similar patterns are assigned to the same node. Multi-modular simulations showed that with supervised learning an average of 1.6 presentations sufficed to learn the EXOR function. Moreover, an unsupervised learning version of the McClelland and Rumelhart model successfully simulated a word superiority effect. It is concluded that the incorporation of psychologically and biologically plausible structural and functional characteristics, like modularity, unsupervised (competitive) learning, and a novelty dependent learning rate, may contribute to solving some of the problems often encountered in connectionist modeling.
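As a rough, illustrative sketch only: the following Python mimics one update step of a CALM-like module. The module's fixed internal wiring, the Arousal-node dynamics, and the modified Grossberg rule are all replaced here by crude stand-ins, so this is a reading aid, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def calm_step(x, W, base_rate=0.05, high_rate=0.5, noise=0.1):
    """One strongly simplified update of a CALM-like module.

    x: input pattern (1-D array); W: input weights (R-nodes x inputs).
    The real module's internal R/V wiring and learning rule differ."""
    r = np.clip(W @ x, 0.0, None)              # excitatory R-node input
    r = np.clip(r - 0.5 * r.max(), 0.0, None)  # crude Veto-node inhibition
    total = r.sum()
    # Arousal as a novelty proxy: low when one R-node clearly wins.
    arousal = 1.0 - (r.max() / total if total > 0 else 0.0)
    if arousal > 0.5:                    # elaboration learning:
        lr = high_rate                   # high rate plus nonspecific
        r = r + noise * rng.random(r.shape)  # random activations
    else:                                # activation learning:
        lr = base_rate                   # base-rate learning, no noise
    winner = int(np.argmax(r))
    # Hebb-like pull of the winner's weights toward the input; a
    # stand-in for the modified Grossberg rule used in the paper.
    W[winner] += lr * (x - W[winner])
    return winner, arousal
```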
IEEE Transactions on Neural Networks | 1993
Jacob M. J. Murre
A performance analysis is presented that focuses on the achievable speedup of a neural network implementation and on the optimal size of a processor network (transputers or multicomputers that communicate in a comparable manner). For fully and randomly connected neural networks the topology of the processor network can only have a small, constant effect on the iteration time. With randomly connected neural networks, even severely limiting node fan-in does little to reduce the communication overhead. The class of modular neural networks is studied as a separate case, which is shown to have better implementation characteristics. On the basis of implementation constraints, it is argued that randomly connected neural networks cannot be realistic models of the brain.
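As a toy illustration of this style of analysis (not the paper's actual cost model; all constants below are invented), iteration time can be written as compute work shared across p processors plus a communication term, and minimized over p:

```python
def iteration_time(n_conn, n_act, p, t_conn=1.0, t_comm=0.5, leaving_frac=1.0):
    """Toy cost model: connection updates are shared across p
    processors; activations that must leave their home processor are
    broadcast, which on a bus-like interconnect costs time growing
    with p. leaving_frac ~ 1.0 for fully/randomly connected networks,
    smaller for modular ones."""
    compute = t_conn * n_conn / p
    comm = t_comm * leaving_frac * n_act * p
    return compute + comm

def optimal_p(n_conn, n_act, max_p=512, **kw):
    """Processor count minimizing the modeled iteration time."""
    return min(range(1, max_p + 1),
               key=lambda p: iteration_time(n_conn, n_act, p, **kw))

# Under this toy model, a modular network (fewer activations leaving
# each processor) supports a larger useful processor count:
print(optimal_p(1_000_000, 1_000))                    # randomly connected
print(optimal_p(1_000_000, 1_000, leaving_frac=0.1))  # modular
```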
Microprocessors and Microsystems | 1994
Jan N. H. Heemskerk; Jaap Hoekstra; Jacob M. J. Murre; Leon H. J. G. Kemna; Patrick Hudson
This paper discusses the main architectural issues, the implementation, and the performance of a parallel neurocomputer, the Brain-Style Processor or BSP400. The project is a feasibility study for larger parallel neurocomputers. The design principles are hardware modularity, simple processors, and in situ (local) learning. The modular approach of the design ensures extensibility of the present version. The BSP400 consists of 25 modules (boards), each containing 16 simple 8-bit single-chip computers. The module boards are connected to a dedicated connection network. The architectural configuration of the BSP400 supports local activation and learning rules. The ability to communicate activations with the outside world in real time makes the BSP400 particularly suited for real-world applications. The present version implements a modular type of neural network, CALM (Categorizing and Learning Module). In this implementation of CALM, activations are transmitted as single bits, but an internal representation of one byte is kept for both activations and weights. The system has a capacity of 400 processing elements and 32,000 connections. Even with slow and simple processing elements, it still achieves a speed of 6.4 million connections per second for a non-learning CALM network. Some small network simulation studies carried out on the BSP400 are reported. A comparison with the Mark III and Mark IV design studies is made.
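A back-of-envelope check of the figures reported above (all numbers taken from the abstract):

```python
pes = 400             # processing elements
connections = 32_000  # total connections
cps = 6.4e6           # connections per second, non-learning CALM network

print(cps / pes)          # ~16,000 connection updates per second per PE
print(cps / connections)  # ~200 full-network evaluations per second
```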
Archive | 1990
Jacob M. J. Murre; Steven E. Kleynenberg
The MetaNet network environment allows users to build and examine modular neural networks, and to specify and run complex simulations. It consists of a graphical editor, a network compiler and a graphical (de)compiler, a network specification language (MetaNet), and hardware drivers. Its requirements are based on experience with a text-based network environment, which has been in use at our department since early 1988. Using the environment requires minimal programming experience. Currently, the system is implemented only on PCs. Off-loading of calculation processes to other machines is achieved through hardware drivers. It is possible to convert MetaNet code into ANSI C for direct compilation to stand-alone applications on any machine.
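The abstract does not show any MetaNet syntax, so the following is a purely hypothetical illustration (plain Python, not MetaNet) of the kind of modular-network description such an environment might consume; every name in it is invented:

```python
# Hypothetical modular-network description; NOT actual MetaNet syntax.
network_spec = {
    "modules": {"input": 8, "hidden": 4, "output": 2},  # name -> size
    "connections": [("input", "hidden"), ("hidden", "output")],
}

def validate(spec):
    """Check that every connection refers to a declared module."""
    modules = spec["modules"]
    for src, dst in spec["connections"]:
        assert src in modules and dst in modules, (src, dst)
    return True

validate(network_spec)
```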
international conference on artificial neural networks | 1992
Bart L. M. Happel; Jacob M. J. Murre
It is argued that modular constraints on network structure might provide an important architectural principle for overcoming limitations in the learning and generalization ability of current neural network approaches. A modular network algorithm called CALM is used to implement modular constraints on network connectivity. A genetic algorithm is applied to search for suitable modular network architectures for a handwritten digit recognition task. The simulation results indicate that efficient neural information-processing mechanisms emerge from genetically established modular neural network architectures.
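The abstract does not specify the genome encoding, fitness function, or genetic operators, so all concrete choices in the following sketch are assumptions; it only illustrates the general shape of a genetic search over modular architectures:

```python
import random

random.seed(0)

def random_genome(n_modules=4, max_size=8):
    """A genome: module sizes plus a binary inter-module link mask."""
    sizes = [random.randint(1, max_size) for _ in range(n_modules)]
    links = [[random.randint(0, 1) for _ in range(n_modules)]
             for _ in range(n_modules)]
    return sizes, links

def fitness(genome):
    """Toy objective. In the paper, fitness would come from training
    the decoded CALM network on handwritten digit recognition."""
    sizes, links = genome
    return -abs(sum(sizes) - 16) - sum(map(sum, links))

def mutate(genome, rate=0.1):
    sizes, links = genome
    sizes = [max(1, s + random.choice([-1, 1])) if random.random() < rate
             else s for s in sizes]
    links = [[1 - b if random.random() < rate else b for b in row]
             for row in links]
    return sizes, links

def evolve(pop_size=20, generations=50):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return max(pop, key=fitness)
```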
international conference on artificial neural networks | 1991
Jacob M. J. Murre
A performance analysis is presented that focuses on the achievable speedup of an implementation and on the optimal size of a processor network. It is shown that for fully and randomly connected networks the topology of the transputer network can only have a small, constant effect on the iteration time. The class of modular neural networks is studied as a separate case, and is shown to have better implementation characteristics.
Archive | 1990
Jacob M. J. Murre; R. Hans Phaf; Gezinus Wolters
CALM (Categorizing And Learning Module) forms a basic unit for the construction of multi-modular networks suited for supervised and unsupervised learning. Its design is guided by considerations of a practical, neurobiological, and psychological nature. In CALM networks, modules may or may not be interconnected. If modules are linked, the interconnections are modifiable according to a modified Hebb rule. Bidirectional (nonsymmetric) connections are possible. Categorization and learning speed in a CALM module depend on the novelty of a local activation pattern. Single-module simulations indicate that module convergence and pattern discrimination of new inputs are learned relatively fast. A CALM module can both discriminate and generalize, depending on module size and the structure of the pattern set. CALM networks have, among other things, been applied to pattern (character) recognition tasks and to the modeling of psychological experiments on the dissociation of explicit and implicit memory. For the construction and evaluation of CALM networks a high-level tool has been developed. CALM is currently also being implemented on a 400-processor parallel machine.
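The modified Hebb rule itself is not given in this abstract; as a clearly labeled stand-in, a novelty-dependent Hebbian update on inter-module connections could look like the following, where the learning rate rises with pattern novelty:

```python
import numpy as np

def hebb_update(w, pre, post, novelty, base=0.01, gain=0.5, decay=0.001):
    """Stand-in for CALM's modified Hebb rule (assumed form, not the
    paper's). The learning rate scales with novelty (0 = familiar,
    1 = new); a small decay keeps weights bounded.

    pre, post: pre-/post-synaptic activation vectors; w: weight matrix
    of shape (len(post), len(pre))."""
    lr = base + gain * novelty
    return w + lr * np.outer(post, pre) - decay * w
```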
international conference on artificial neural networks | 1992
Jan N. H. Heemskerk; Jacob M. J. Murre; Arend Melissant; Mirko Pelgrom; Patrick Hudson
A parallel architecture for implementing massive neural networks, called MindShape, is presented. MindShape is the successor to the Brain-Style Processor, a 400-processor neurocomputer based on the principle of modularity. The MindShape machine consists of Neural Processing Elements (NPEs) and Communication Elements (CEs) organized in what we have called a fractal architecture. The architecture is by definition scalable, permitting the implementation of very large networks consisting of many thousands of nodes. Through simulations of data-communication flow on different architectures, and through implementation studies of VLSI hardware on a chip simulator, the specific requirements of the CEs and the NPEs have been investigated.
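The abstract does not define the fractal architecture in detail. Assuming, purely for illustration, a recursively self-similar k-ary tree with NPEs at the leaves and CEs at the internal nodes, the communication distance between two NPEs can be sketched as:

```python
def hops(src, dst, fanout=4):
    """CE-level hops between two leaf NPEs in a complete tree with the
    given fanout (an assumed topology, not MindShape's actual one)."""
    count = 0
    while src != dst:     # climb until both paths meet at a common CE
        src //= fanout
        dst //= fanout
        count += 2        # one link up on one side, one link down
    return count

# Hop count grows only logarithmically with the number of NPEs, one
# way a recursively structured machine can remain scalable.
print(hops(0, 3), hops(0, 255))
```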
Neural Networks | 1994
Bart L. M. Happel; Jacob M. J. Murre
international symposium on neural networks | 1991
Jacob M. J. Murre; Jan N. H. Heemskerk; S.E. Kleynenberg; Patrick Hudson