Publication


Featured research published by George N. Reeke.


Neuroscience | 1994

VALUE-DEPENDENT SELECTION IN THE BRAIN: SIMULATION IN A SYNTHETIC NEURAL MODEL

K. J. Friston; Giulio Tononi; George N. Reeke; Olaf Sporns; Gerald M. Edelman

Many forms of learning depend on the ability of an organism to sense and react to the adaptive value of its behavior. Such value, if reflected in the activity of specific neural structures (neural value systems), can selectively increase the probability of adaptive behaviors by modulating synaptic changes in the circuits relevant to those behaviors. Neuromodulatory systems in the brain are well suited to carry out this process since they respond to evolutionarily important cues (innate value), broadcast their responses to widely distributed areas of the brain through diffuse projections, and release substances that can modulate changes in synaptic strength. The main aim of this paper is to show that, if value-dependent modulation is extended to the inputs of neural value systems themselves, initially neutral cues can acquire value. This process has important implications for the acquisition of behavioral sequences. We have used a synthetic neural model to illustrate value-dependent acquisition of a simple foveation response to a visual stimulus. We then examine the improvement that ensues when the connections to the value system are themselves plastic and thus become able to mediate acquired value. Using a second-order conditioning paradigm, we demonstrate that auditory discrimination can occur in the model in the absence of direct positive reinforcement and even in the presence of slight negative reinforcement. The discriminative responses are accompanied by value-dependent plasticity of receptive fields, as reflected in the selective augmentation of unit responses to valuable sensory cues. We then consider the time-course during learning of the responses of the value system and the transfer of these responses from one sensory modality to another. Finally, we discuss the relation of value-dependent learning to models of reinforcement learning. 
The results obtained from these simulations can be directly related to various reported experimental findings and provide additional support for the application of selectional principles to the analysis of brain and behavior.
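The value-dependent rule described above can be sketched as reward-gated Hebbian plasticity: a weight changes only when pre- and postsynaptic activity coincide with a scalar value signal, and applying the same rule to the value system's own inputs lets an initially neutral cue acquire value through pairing. A minimal hypothetical sketch (the one-unit value system, names, and constants are illustrative, not the paper's synaptic model):

```python
def value_modulated_update(w, pre, post, value, lr=0.1):
    """Hebbian change gated by a scalar value signal: weights grow
    only when pre- and postsynaptic activity coincide with positive
    value, and shrink under negative value."""
    return [wi + lr * value * pre_i * post for wi, pre_i in zip(w, pre)]

# Second-order-conditioning sketch: if the value system's own input
# weights are plastic, a neutral cue paired with an innately valued
# cue acquires the ability to drive the value system by itself.
v_in = [0.0, 1.0]            # value-system weights: [neutral cue, innate cue]
for _ in range(50):
    cue = [1.0, 1.0]         # the two cues co-occur during training
    value = sum(wi * ci for wi, ci in zip(v_in, cue))   # value response
    v_in = value_modulated_update(v_in, cue, post=1.0, value=value, lr=0.02)

acquired = v_in[0]           # > 0: the neutral cue now carries acquired value
```

After training, presenting the neutral cue alone drives the value system, which is the precondition for the auditory discrimination without direct reinforcement that the paper demonstrates.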


Proceedings of the IEEE | 1990

Synthetic neural modeling: the 'Darwin' series of recognition automata

George N. Reeke; Olaf Sporns; Gerald M. Edelman

The authors describe how the theory of neuronal group selection (TNGS) can form the basis for an approach to computer modeling of the nervous system. Three examples of synthetic neural modeling are discussed. Darwin I was designed to examine the process of pattern recognition and some general factors relating to degeneracy and amplification in selective systems. Darwin II introduced recognition units with some of the properties of neuronal groups, connected in reentrant networks to permit exploration of aspects of the recognition process leading to categorization, generalization, and associative memory. Darwin III is a behaving automaton whose behavior is not programmed but results from its encounter with events in its world under constraints of neuronal and synaptic selection. The oculomotor, reaching, touch, and categorization subsystems of the Darwin III system are discussed. Darwin III is compared to models based on neurobiology, models based on artificial intelligence, and connectionist models.


Annals of the New York Academy of Sciences | 1984

Selective networks and recognition automata

George N. Reeke; Gerald M. Edelman

The results we have presented demonstrate that a network based on a selective principle can function in the absence of forced learning or an a priori program to give recognition, classification, generalization, and association. While Darwin II is not a model of any actual nervous system, it does set out to solve one of the same problems that evolution had to solve--the need to form categories in a bottom-up manner from information in the environment, without incorporating the assumptions of any particular observer. The key features of the model that make this possible are (1) Darwin II incorporates selective networks whose initial specificities enable them to respond without instruction to unfamiliar stimuli; (2) degeneracy provides multiple possibilities of response to any one stimulus, at the same time providing functional redundancy against component failure; (3) the output of Darwin II is a pattern of response, making use of the simultaneous responses of multiple degenerate groups to avoid the need for very high specificity and the combinatorial disaster that would imply; (4) reentry within individual networks vitiates the limitations described by Minsky and Papert for a class of perceptual automata lacking such connections; and (5) reentry between intercommunicating networks with different functions gives rise to new functions, such as association, that either one alone could not display. The two kinds of network are roughly analogous to the two kinds of category formation that people use: Darwin, corresponding to the exemplar description of categories, and Wallace, corresponding to the probabilistic matching description of categories. These principles lead to a new class of pattern-recognizing machine of which Darwin II is just an example. There are a number of obvious extensions to this work that we are pursuing. 
These include giving Darwin II the capability to deal with stimuli that are in motion, an ability that probably precedes the ability of biological organisms to deal with stationary stimuli, giving it the capability to deal with multiple stimulus objects through some form of attentional mechanism, and giving it a means to respond directly and to receive feedback from the world so that it can learn conventionally. Already, however, we have shown that a working pattern-recognition automaton can be built based on a selective principle. This development promises ultimately to show us how to build recognizing machines without programs and to provide a sound basis for the study of both natural and artificial intelligence.
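The selective principle in points (1)–(3) can be illustrated in a few lines: a repertoire of units with varied random initial specificities responds to unfamiliar stimuli without instruction, and units that happen to respond strongly are selectively amplified, so recognition emerges without a programmed category. A hypothetical sketch, far simpler than Darwin II's reentrant networks (all names and constants are made up):

```python
import random

def selective_recognition(stimuli, n_units=200, epochs=20, seed=0):
    """Selection on variation: start from a degenerate repertoire of
    randomly specified units, then amplify whichever unit responds
    most strongly to each stimulus.  No category is programmed in."""
    rng = random.Random(seed)
    dim = len(stimuli[0])
    # varied initial specificities (random weight vectors)
    units = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for s in stimuli:
            # many units respond to each stimulus (degeneracy)
            responses = [sum(w * x for w, x in zip(u, s)) for u in units]
            best = max(range(n_units), key=lambda i: responses[i])
            # selective amplification of the already-responsive unit
            units[best] = [w + 0.1 * x for w, x in zip(units[best], s)]
    return units

stimuli = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
units = selective_recognition(stimuli)
# after selection, some unit responds strongly to each familiar stimulus
best_resp = max(sum(w * x for w, x in zip(u, stimuli[0])) for u in units)
```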


Advances in Experimental Medicine and Biology | 1975

Structure and Function of Concanavalin A

George N. Reeke; Joseph W. Becker; Bruce A. Cunningham; John L. Wang; Ichiro Yahara; Gerald M. Edelman

Lectins have been extensively used to analyze a variety of fundamental processes in cell biology. In conjunction with our studies on the cell surface and mitosis, we have determined the amino acid sequence and three-dimensional structure of concanavalin A (Con A), the mitogenic lectin from the jack bean. Knowledge of the structure has been helpful in interpreting experiments on lymphocyte mitogenesis and the effects of Con A on cell surface receptor mobility. Con A subunits of molecular weight 25,500 are folded into dome-like structures of maximum dimensions 42 × 40 × 39 Å. The domes are related by 222 symmetry to form roughly tetrahedral tetramers. Each subunit contains two large antiparallel pleated sheets, and subunits are joined to form dimers and tetramers by interactions involving one of these pleated sheets. We have examined the binding of a variety of carbohydrates to Con A and have obtained preliminary data which suggest that there are differences in the saccharide-binding behavior of Con A in solution and in the crystalline state. Dimeric chemical derivatives of Con A have been prepared and shown to have biological activities different from those of the native tetrameric protein. Under different conditions, native Con A exhibits two antagonistic activities on the lymphoid cell surface: the induction of cap formation by its own receptors and the inhibition of the mobility of a variety of receptors, including its own receptors. The dimeric derivative, succinyl-Con A, is just as effective a mitogen as the native lectin, but it lacks the ability to modulate cell surface receptor mobility. The data suggest that neither extensive immobilization of cell surface receptors nor cap formation is required for cell stimulation. Further studies on modulation of receptor translocation suggest the hypothesis that there exists a connecting network of colchicine-sensitive proteins that links receptors of different kinds and mediates their rearrangement.
The degree of connectivity of this postulated network appears to be altered by changes in the state of attachment of various surface receptors to the network. Thus the network might provide the cell with a means of transmitting signals such as the stimulus for mitosis by lectins or antigens.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Network model of top-down influences on local gain and contextual interactions in visual cortex

Valentin Piëch; Wu Li; George N. Reeke; Charles D. Gilbert

Significance: Perceptual grouping links line segments that define object contours and distinguishes them from background contours. This process is reflected in the responses to contours of neurons in primary visual cortex (V1), and depends on long-range horizontal cortical connections. We present a network model, based on an interaction between recurrent inputs to V1 and intrinsic connections within V1, which accounts for task-dependent changes in the properties of V1 neurons. The model simulates top-down modulation of effective connectivity of intrinsic cortical connections among biophysically realistic neurons. It quantitatively reproduces the magnitude and time course of the facilitation of V1 neuronal responses to contours.

The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours.
It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
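The core mechanism, top-down gain scaling the effective strength of intrinsic horizontal connections, can be caricatured with two rate units and a saturating nonlinearity (which keeps the dynamics stable even for large inputs, echoing the phase-plane result). This is a hypothetical toy, not the paper's biophysically realistic network; the weights and gain values are made up:

```python
import math

def simulate(g_topdown, steps=200, dt=0.05):
    """Two-unit rate model: a 'contour' unit receives lateral input
    from a collinear neighbor through a horizontal connection whose
    effective strength is scaled by a top-down gain term."""
    r_contour, r_neighbor = 0.0, 0.0
    drive = 1.0                       # feedforward stimulus drive
    w_lateral = 0.8                   # intrinsic horizontal weight
    for _ in range(steps):
        # compute both inputs before updating either rate
        in_c = drive + g_topdown * w_lateral * r_neighbor
        in_n = drive + g_topdown * w_lateral * r_contour
        # saturating rate function keeps the dynamics stable
        r_contour += dt * (-r_contour + math.tanh(in_c))
        r_neighbor += dt * (-r_neighbor + math.tanh(in_n))
    return r_contour

baseline = simulate(g_topdown=0.0)    # no top-down modulation
attended = simulate(g_topdown=1.0)    # top-down gain enables facilitation
# attended > baseline: contour facilitation appears only when top-down
# gain switches the effective horizontal connectivity on
```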


Journal of Molecular Biology | 1974

Favin, a crystalline lectin from Vicia faba

John L. Wang; Joseph W. Becker; George N. Reeke; Gerald M. Edelman

A lectin from the fava bean (Vicia faba) has been purified and crystallized in a form suitable for high-resolution crystallographic structure analysis. This protein binds glucose- and mannose-like saccharides, and it is mitogenic for lymphocytes. The fava lectin crystallizes in the orthorhombic space group P212121, with unit-cell dimensions a = 90.0 Å, b = 89.3 Å, and c = 67.4 Å. The mass of protein in the asymmetric unit is 53,000 daltons, corresponding to the molecular weight of the protein in solution.


Neural Computation | 2004

Estimating the temporal interval entropy of neuronal discharge

George N. Reeke; Allan D. Coop

To better understand the role of timing in the function of the nervous system, we have developed a methodology that allows the entropy of neuronal discharge activity to be estimated from a spike train record when it may be assumed that successive interspike intervals are temporally uncorrelated. The so-called interval entropy obtained by this methodology is based on an implicit enumeration of all possible spike trains that are statistically indistinguishable from a given spike train. The interval entropy is calculated from an analytic distribution whose parameters are obtained by maximum likelihood estimation from the interval probability distribution associated with a given spike train. We show that this approach reveals features of neuronal discharge not seen with two alternative methods of entropy estimation. The methodology allows for validation of the obtained data models by calculation of confidence intervals for the parameters of the analytic distribution and the testing of the significance of the fit between the observed and analytic interval distributions by means of Kolmogorov-Smirnov and Anderson-Darling statistics. The method is demonstrated by analysis of two different data sets: simulated spike trains evoked by either Poissonian or near-synchronous pulsed activation of a model cerebellar Purkinje neuron and spike trains obtained by extracellular recording from spontaneously discharging cultured rat hippocampal neurons.
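As a simplified illustration of the approach, fitting an analytic interval distribution by maximum likelihood and then computing its differential entropy in closed form, the sketch below uses an exponential interval model, whose MLE and entropy are both one-liners. The paper itself fits richer analytic distributions and validates them with Kolmogorov-Smirnov and Anderson-Darling statistics; nothing here is the paper's actual estimator:

```python
import math

def interval_entropy_exponential(isis):
    """Differential entropy (bits) of an exponential interval model
    fitted by maximum likelihood to observed interspike intervals.
    The MLE scale is mean(ISI); the exponential's differential
    entropy is 1 + ln(mean) nats."""
    mean_isi = sum(isis) / len(isis)     # MLE for the exponential scale
    h_nats = 1.0 + math.log(mean_isi)
    return h_nats / math.log(2)          # convert nats to bits

# Slower discharge (longer mean interval) has higher interval entropy.
h_fast = interval_entropy_exponential([0.01] * 200)   # ~100 Hz train
h_slow = interval_entropy_exponential([0.10] * 200)   # ~10 Hz train
```

An exponential model ignores interval shape entirely, so it cannot distinguish regular from irregular trains with the same mean rate; doing that requires a richer analytic family (e.g., gamma), which is why validating the fitted model matters.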


Proceedings of the National Academy of Sciences of the United States of America | 2011

Quantitative descriptions of generalized arousal, an elementary function of the vertebrate brain

Amy Wells Quinkert; Vivek Vimal; Zachary M. Weil; George N. Reeke; Nicholas D. Schiff; Jayanth R. Banavar; Donald W. Pfaff

We review a concept of the most primitive, fundamental function of the vertebrate CNS, generalized arousal (GA). Three independent lines of evidence indicate the existence of GA: statistical, genetic, and mechanistic. Here we ask, is this concept amenable to quantitative analysis? Answering in the affirmative, four quantitative approaches have proven useful: (i) factor analysis, (ii) information theory, (iii) deterministic chaos, and (iv) application of a Gaussian equation. It strikes us that, to date, not just one but at least four different quantitative approaches seem necessary for describing different aspects of scientific work on GA.


IEEE International Conference on High Performance Computing, Data and Analytics | 1987

Selective Neural Networks and Their Implications for Recognition Automata

George N. Reeke; Gerald M. Edelman; Dan Sulzbach

Higher mental functions require the prior ability to categorize objects and events according to sensory signals reaching the brain. The neuronal group selection theory postulates that this ability arises from a kind of Darwinian selection operating in somatic time on groups of interconnected neurons. These groups develop with varied and overlapping abilities to respond to patterns of input at their synapses. Groups that contribute to responses having adaptive value for the organism undergo modifications in the efficacies of their synaptic connections that enhance their future responses to similar stimuli. Computer models of automata based on these principles can carry out simple tasks involving recognition, categorization, generalization, and visual tracking. A general program for implementing such models is presented.


Computational Intelligence and Neuroscience | 2012

Quantitative tools for examining the vocalizations of juvenile songbirds

Cameron D. Wellock; George N. Reeke

The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing.
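A typicality measure of the kind WSPR uses, scoring one sample against a model built from many reference samples rather than comparing pairs, can be sketched by modeling each spectral bin as an independent Gaussian over the reference set and scoring a new frame by its mean log-likelihood. This is a hypothetical stand-in, not the paper's WSPR definition:

```python
import math

def typicality(frame, reference_frames):
    """Mean log-likelihood of one spectral frame under per-bin
    Gaussians estimated from a reference set.  Higher scores mean
    the frame is more typical of the set."""
    n = len(reference_frames)
    score = 0.0
    for j in range(len(frame)):
        vals = [f[j] for f in reference_frames]
        mu = sum(vals) / n
        var = sum((v - mu) ** 2 for v in vals) / n + 1e-6  # regularized
        score += (-0.5 * math.log(2 * math.pi * var)
                  - (frame[j] - mu) ** 2 / (2 * var))
    return score / len(frame)

refs = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
typical = typicality([1.0, 2.0], refs)      # close to the reference set
atypical = typicality([5.0, -3.0], refs)    # far from it
# typical > atypical: the score separates familiar from unfamiliar frames
```

Because the score compares against the whole reference set at once, it degrades gracefully for highly variable juvenile song, where no single pairwise match is reliable.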

Collaboration


Dive into George N. Reeke's collaboration.

Top Co-Authors

Gerald M. Edelman
The Neurosciences Institute

Olaf Sporns
Indiana University Bloomington