
Publications


Featured research published by Xiangbao Wu.


Neural Networks | 2005

2005 Special issue: Interpreting hippocampal function as recoding and forecasting

William B. Levy; Ashlie B. Hocking; Xiangbao Wu

A model of hippocampal function, centered on region CA3, reproduces many of the cognitive and behavioral functions ascribed to the hippocampus. Where there is precise stimulus control and detailed quantitative data, this model reproduces the quantitative behavioral results. Underlying the model is a recoding conjecture of hippocampal computational function. The expanded conjecture includes a special role for randomization and, as recoding progresses with experience, the occurrence of sequence learning and sequence compression. These functions support the putative higher-order hippocampal function, i.e. production of representations readable by a linear decoder and suitable for both neocortical storage and forecasting. Simulations confirm the critical importance of randomly driven recoding and the neurocognitive relevance of sequence learning and compression. Two forms of sequence compression exist, on-line and off-line compression: both are conjectured to support neocortical encoding of context and declarative memory as described by .


Biological Cybernetics | 1996

Context codes and the effect of noisy learning on a simplified hippocampal CA3 model

Xiangbao Wu; Robert A. Baxter; William B. Levy

This paper investigates how noise affects a minimal computational model of the hippocampus and, in particular, region CA3. The architecture and physiology employed are consistent with the known anatomy and physiology of this region. Here, we use computer simulations to demonstrate and quantify the ability of this model to create context codes in sequential learning problems. These context codes are mediated by local context neurons, which are analogous to hippocampal place-coding cells. These local context neurons endow the network with many of its problem-solving abilities. Our results show that the network encodes context on its own and then uses context to solve sequence prediction under ambiguous conditions. Noise during learning affects performance, and it also affects the development of context codes. The relationship between noise and performance in a sequence prediction task is simple and corresponds to a disruption of local context neuron firing. As noise exceeds the signal, sequence completion and local context neuron firing are both lost. For the parameters investigated, extra learning trials and slower learning rates do not overcome either of the effects of noise. The results are consistent with the important role played, in this hippocampal model, by local context neurons in sequence prediction and in disambiguation across time.
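
As one way to make the idea of a local context code concrete, the minimal Python sketch below measures, for each neuron in a simulated firing raster, its longest contiguous run of active time steps; the mean of those runs is a simple proxy for context-code length. The raster format and the function name are our own assumptions for illustration, not code from the paper.

```python
import numpy as np

def local_context_spans(raster):
    """Longest run of consecutive time steps each neuron fires.

    raster: (neurons, time) binary array of firing during a simulated
    sequence. Neurons with long contiguous runs play the role of the
    local context neurons described above, and their run lengths give
    one simple measure of context-code length.
    """
    spans = np.zeros(raster.shape[0], dtype=int)
    for i, row in enumerate(raster):
        run = best = 0
        for z in row:
            run = run + 1 if z else 0  # extend the current run or reset it
            best = max(best, run)
        spans[i] = best
    return spans
```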


Biological Cybernetics | 2005

The formation of neural codes in the hippocampus: trace conditioning as a prototypical paradigm for studying the random recoding hypothesis

William B. Levy; A. Sanyal; Xiangbao Wu; Paul Rodriguez; David W. Sullivan

The trace version of classical conditioning is used as a prototypical hippocampal-dependent task to study the recoding sequence prediction theory of hippocampal function. This theory conjectures that the hippocampus is a random recoder of sequences and that, once formed, the neuronal codes are suitable for prediction. As such, a trace conditioning paradigm, which requires a timely prediction, seems by far the simplest of the behaviorally-relevant paradigms for studying hippocampal recoding. Parameters that affect the formation of these random codes include the temporal aspects of the behavioral/cognitive paradigm and certain basic characteristics of hippocampal region CA3 anatomy and physiology such as connectivity and activity. Here we describe some of the dynamics of code formation and describe how biological and paradigmatic parameters affect the neural codes that are formed. In addition to a backward cascade of coding neurons, we point out, for the first time, a higher-order dynamic growing out of the backward cascade—a particular forward and backward stabilization of codes as training progresses. We also observe that there is a performance compromise involved in the setting of activity levels due to the existence of three behavioral failure modes. Each of these behavioral failure modes exists in the computational model and, presumably, natural selection produced the compromise performance observed by psychologists. Thus, examining the parametric sensitivities of the codes and their dynamic formation gives insight into the constraints on natural computation and into the computational compromises ensuing from these constraints.
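
A trace conditioning trial has a simple temporal layout that can be written down directly. The sketch below, a hypothetical illustration rather than the paper's actual input encoding, builds one trial as a binary time-by-input matrix: CS neurons on, then a stimulus-free trace interval, then US neurons on.

```python
import numpy as np

def trace_trial(n_inputs, cs_ids, us_ids, cs_len, trace_len, us_len):
    """One trace-conditioning trial as a binary input sequence.

    Rows are time steps, columns are externally driven input neurons.
    CS neurons fire for cs_len steps, followed by a stimulus-free trace
    interval of trace_len steps, then US neurons fire for us_len steps.
    """
    T = cs_len + trace_len + us_len
    x = np.zeros((T, n_inputs), dtype=int)
    x[:cs_len, cs_ids] = 1                # CS period
    x[cs_len + trace_len:, us_ids] = 1    # US period after the trace
    return x

# Example: an 8-neuron CS for 10 steps, a 20-step trace, a 5-step US.
trial = trace_trial(100, list(range(8)), list(range(90, 98)),
                    cs_len=10, trace_len=20, us_len=5)
```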


Biological Cybernetics | 1998

A neural network solution to the transverse patterning problem depends on repetition of the input code

Xiangbao Wu; Joanna Tyrcha; William B. Levy

Using computer simulations, this paper investigates how input codes affect a minimal computational model of the hippocampal region CA3. Because encoding context seems to be a function of the hippocampus, we have studied problems that require learning context for their solution. Here we study a hippocampally dependent, configural learning problem called transverse patterning. Previously, we showed that the network does not produce long local context codings when the sequential input patterns are orthogonal, and it fails to solve many context-dependent problems in such situations. Here we show that this need not be the case if we assume that the input changes more slowly than a processing interval. Stuttering, i.e., repeating inputs, allows the network to create long local context firings even for orthogonal inputs. With these long local context firings, the network is able to solve the transverse patterning problem. Without stuttering, transverse patterning is not learned. Because stuttering is so useful, we investigate the relationship between the stuttering repetition length and relative context length in a simple, idealized sequence prediction problem. The relative context length, defined as the average length of the local context codes divided by the stuttering length, interacts with activity levels and has an optimal stuttering repetition length. Moreover, the increase in average context length can reach this maximum without loss of relative capacity. Finally, we note that stuttering is an example of maintained or introduced redundancy that can improve neural computations.
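
Stuttering itself is a one-line transformation of the input sequence. The sketch below shows that transformation together with the relative context length defined in the abstract; local_context_spans refers to the measurement sketch given under the 1996 abstract above, and all names here are illustrative assumptions.

```python
def stutter(sequence, k):
    """Repeat each input pattern k consecutive times ('stuttering').

    Holding each pattern for k steps, rather than one, lets recurrent
    activity accumulate, which supports longer local context firings
    even when successive input patterns are orthogonal.
    """
    return [pattern for pattern in sequence for _ in range(k)]

def relative_context_length(spans, k):
    """Mean local-context span (e.g. from local_context_spans above)
    divided by the stuttering length k, as defined in the abstract."""
    return spans.mean() / k
```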


Network: Computation in Neural Systems | 2000

Controlling activity fluctuations in large, sparsely connected random networks

A. C. Smith; Xiangbao Wu; William B. Levy

Controlling activity in recurrent neural network models of brain regions is essential both to enable effective learning and to reproduce the low activities that exist in some cortical regions such as hippocampal region CA3. Previous studies of sparse, random, recurrent networks constructed with McCulloch–Pitts neurons used probabilistic arguments to set the parameters that control activity. Here, we extend this work by adding an additional, biologically appropriate, parameter to control the magnitude and stability of activity oscillations. The new constant can be considered to be the rest conductance in a shunting model or the threshold when subtractive inhibition is used. This new parameter is critical for large networks run at low activity levels. Importantly, extreme activity fluctuations that act to turn large networks totally on or totally off can now be avoided. We also show how the size of external input activity interacts with this parameter to affect network activity. Then the model based on fixed weights is extended to estimate activities in networks with distributed weights. Because the theory provides accurate control of activity fluctuations, the approach can be used to design a predictable amount of pseudorandomness into deterministic networks. Such nonminimal fluctuations improve learning in simulations trained on the transitive inference problem.
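
A generic form of the shunting idea, assuming divisive feedback inhibition scaled by total activity plus a fixed rest term, looks as follows. The paper's exact update rule and parameter names may differ, so treat this as a sketch of the mechanism rather than the model itself.

```python
import numpy as np

def shunting_update(W, z, K_I, K_R, theta=0.5):
    """One step of divisive (shunting) competition with a rest term.

    W:    recurrent weight matrix; z: current binary activity vector
    K_I:  feedback-inhibition gain, scaled by total network activity
    K_R:  fixed rest-conductance-like constant (assumed > 0); raising it
          damps the large activity oscillations that can otherwise switch
          a big, sparse network fully on or fully off
    """
    e = W @ z                        # recurrent excitation per neuron
    total = z.sum()                  # total activity drives feedback inhibition
    y = e / (e + K_I * total + K_R)  # shunting (divisive) competition
    return (y >= theta).astype(float)
```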


Neurocomputing | 2001

Simulating symbolic distance effects in the transitive inference problem

Xiangbao Wu; William B. Levy

The hippocampus is needed to store memories that are reconfigurable. Therefore, a hippocampal-like computational model should be able to solve transitive inference (TI) problems. By turning TI into a problem of sequence learning (stimuli-decisions-outcome), a sequence-learning, hippocampal-like neural network solves the TI problem. In the transitive inference problem studied here, a network simulation begins by learning six pairwise relationships: A>B, B>C, C>D, D>E, E>F, and F>G, where the underlying relationship is the linear string A>B>C>D>E>F>G. The simulation is then tested with the novel pairs: B?D, C?E, D?F, B?E, C?F, B?F, and A?G. The symbolic distance effect, found in animal and human experiments, is reproduced by the network simulations. That is, the simulations give stronger decodings for B>F than for B>E or C>F, and decodings for B>E and C>F are stronger than for B>D, C>E, or D>F.
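
The reduction of TI to sequence learning can be illustrated with a toy encoding. In the sketch below (our own hypothetical encoding, not the paper's), each trial becomes a three-token sequence of stimulus pair, decision, and outcome; with this lettering the rewarded item is always the alphabetically earlier one.

```python
def ti_trial(pair, choice):
    """Encode one transitive-inference trial as a short symbol sequence.

    With the hierarchy A>B>...>G, the rewarded item in any pair is the
    alphabetically earlier letter, so min() picks the correct choice in
    this particular lettering. The stimuli-decision-outcome structure
    turns pairwise choice into a sequence to be learned and completed.
    """
    left, right = pair
    outcome = 'reward' if choice == min(left, right) else 'no_reward'
    return [left + right, 'choose_' + choice, outcome]

print(ti_trial(('A', 'B'), 'A'))   # ['AB', 'choose_A', 'reward']
print(ti_trial(('B', 'D'), 'D'))   # ['BD', 'choose_D', 'no_reward']
```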


CNS '97: Proceedings of the Sixth Annual Conference on Computational Neuroscience: Trends in Research, 1998 | 1998

A Hippocampal-like neural network model solves the transitive inference problem

Xiangbao Wu; William B. Levy

Both rats and humans can solve configural learning problems. Based on lesion experiments in rats, configural learning is regarded as a hippocampally dependent function [1] when reconfigurability is critical. We have previously shown that a hippocampal-like neural network model [2,3] solves the configural problem of transitive inference (TI) [4]. Here we confirm this result and investigate the robustness of this demonstration as a function of network activity levels.


Neurocomputing | 2005

Increasing CS and US longevity increases the learnable trace interval

Xiangbao Wu; William B. Levy

It has been hypothesized that increasing conditioned stimulus (CS) longevity affects performance on trace conditioning. Using a hippocampal model, we find that increasing CS and unconditioned stimulus (US) longevity increases the learnable trace interval. Our simulations show that, over a modest range, the maximal learnable trace interval is approximately a linear function of CS/US longevity.
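
In terms of the trace_trial sketch given under the 2005 Biological Cybernetics abstract above, CS/US longevity simply means occupying more time steps per trial. The sketch below shows that mapping plus the reported linear trend, with the coefficients left as free parameters since the paper's fitted values are not given here.

```python
# Longevity maps onto cs_len/us_len in the trace_trial sketch above:
# a longer-lasting CS and US simply occupy more time steps per trial.
short_cs = trace_trial(100, list(range(8)), list(range(90, 98)),
                       cs_len=5,  trace_len=20, us_len=5)
long_cs  = trace_trial(100, list(range(8)), list(range(90, 98)),
                       cs_len=15, trace_len=20, us_len=15)

def max_learnable_trace(longevity, a, b):
    """The reported trend, valid over a modest range only: the maximal
    learnable trace interval grows roughly linearly with CS/US
    longevity. The coefficients a and b are free parameters here,
    not values taken from the paper."""
    return a * longevity + b
```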


Neurocomputing | 1999

Enhancing the performance of a hippocampal model by increasing variability early in learning

Xiangbao Wu; William B. Levy

Using computer simulations of a minimal computational model of hippocampal region CA3, this report investigates how randomization during training alters learned performance. The transitive inference problem is employed for this purpose. Randomizing just the initial network state at the beginning of each training trial profoundly affects learning. That is, no randomization makes the problem unlearnable, while a moderate amount of randomized activity optimizes network performance. These results suggest a way to alter learning that may be tested in neuropsychological experiments.
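
Randomizing just the initial state is easy to picture: before each trial, a random subset of neurons is set active. The sketch below is an assumed, minimal version of that manipulation; the fraction used and the function signature are illustrative, not the paper's.

```python
import numpy as np

def random_initial_state(n, fraction, rng):
    """Randomize the network state at the start of a training trial.

    A random subset (about `fraction` of the n neurons) starts active
    before the first input arrives. Per the abstract, no randomization
    makes the problem unlearnable, while a moderate amount optimizes
    learned performance.
    """
    z = np.zeros(n)
    active = rng.choice(n, size=int(fraction * n), replace=False)
    z[active] = 1.0
    return z

# e.g. start a 512-neuron network with about 5% of neurons active
z0 = random_initial_state(512, 0.05, np.random.default_rng(1))
```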


Neurocomputing | 2000

Using computational simulations to discover optimal training paradigms

Aaron P. Shon; Xiangbao Wu; William B. Levy

The organization of training is an important determinant of how well subjects learn a cognitive task. To understand why different training schedules produce different learned performance, we used a hippocampal model to compare three training paradigms for the hippocampally dependent cognitive task called transverse patterning. Simulations reproduce training effects seen in humans and rats. As in behavioral studies, progressive training produces robust learning while random training renders the task essentially unlearnable. The simulations predict that a third training paradigm, called staged learning, will produce more robust learning on average than the progressive paradigm used in published behavioral studies. Possible mechanisms underlying performance differences between paradigms are investigated and discussed.
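
The three schedules can be sketched as trial-order generators for the three transverse-patterning problems (A+ vs. B, B+ vs. C, C+ vs. A). Everything below is an assumed reconstruction from the abstract; in particular, the 'staged' variant is one plausible reading, not the paper's exact protocol.

```python
import random

PAIRS = ['A+B-', 'B+C-', 'C+A-']   # the three transverse-patterning problems

def progressive(trials_per_stage):
    """Introduce problems cumulatively: AB alone, then AB and BC, then all three."""
    schedule = []
    for stage in range(1, len(PAIRS) + 1):
        pool = PAIRS[:stage]
        schedule += [random.choice(pool) for _ in range(trials_per_stage)]
    return schedule

def random_training(n_trials):
    """Interleave all three problems at random from the start."""
    return [random.choice(PAIRS) for _ in range(n_trials)]

def staged(trials_per_stage):
    """One plausible reading of 'staged' training: each problem trained
    in its own block before moving on. The paper's exact staged protocol
    may differ; this is an assumption."""
    return [p for p in PAIRS for _ in range(trials_per_stage)]
```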

Collaboration


Top co-authors of Xiangbao Wu and their affiliations.

Anthony J. Greene
University of Wisconsin–Milwaukee

Ashlie B. Hocking
University of Virginia Health System

A. P. Shon
University of Virginia

A. Sanyal
University of Virginia