The Dynamics of Bimodular Continuous Attractor Neural Networks with Static and Moving Stimuli
Min Yan, Wen-Hao Zhang, He Wang, and K. Y. Michael Wong
Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, P. R. China
Department of Mathematics, University of Pittsburgh, USA
Hong Kong University of Science and Technology Shenzhen Research Institute, Shenzhen 518057, China
(Dated: October 17, 2019)

The brain achieves multisensory integration by combining the information received from different sensory inputs to yield inferences with higher speed or accuracy. We consider a bimodular neural network in which each module processes one modality of sensory input and interacts with the other. The dynamics of excitatory and inhibitory couplings between the two modules are studied with static and moving stimuli. The modules exhibit non-trivial interactive behaviors depending on the input strengths, their disparity and speed (for moving inputs), and the inter-modular couplings. They give rise to a family of models applicable to causal inference problems in neuroscience. They also provide a model for the motion-bounce illusion experiment, yielding results consistent with experiments and predicting their robustness.
I. INTRODUCTION
The human brain is sophisticated and advanced. It performs computations efficiently [1-3]. The brain receives inputs from the surrounding environment all the time via different sensory modalities, e.g., visual, auditory, olfactory, and vestibular. Experiments showed that different cortical regions in the brain are not completely isolated from each other, and there exist interactions between different sensory modalities. When the brain is processing information, it is able to combine cues coming from different sensory modalities, producing responses with higher accuracy or speed. In addition, this kind of multisensory integration can also give rise to some interesting behaviors, such as sensory illusions and response enhancement.

Various models have been built to elucidate the information processing mechanism of the brain [4-10]. In this paper, we study the dynamics of bimodular Continuous Attractor Neural Networks (CANNs) [11-17]. CANNs have gained widespread attention due to the translational invariance of their neuronal interactions, which endows the networks with the ability to hold a continuous family of stationary states [10-13]. This feature enables the network to track a moving stimulus continuously, providing a convincing model of continuous information processing in the brain.

Single-module CANNs have been studied extensively [14-17]. Nevertheless, our brain receives signals from more than one channel, such as the visual, auditory, vestibular, and olfactory channels. Experiments have found that the brain is organized in different modules, each playing a certain role in processing the information the brain receives [18, 19]. However, different modules in the brain interact with each other, enabling it to integrate the information it collects to get a comprehensive picture of the surroundings [20, 21].
Multisensory information processing has been investigated extensively in areas such as visual-auditory [22], visual-vestibular [23], and other combinations [24]. In this paper, we generalize the single-module CANN to a bimodular structure to simulate and explore the interactions between different sensory modalities in the brain. Compared with single-module CANNs, bimodular networks are able to process information coming from different sensory modalities separately or simultaneously. As shown in this paper, the bimodular CANNs respond diversely when distinct inputs are applied. Furthermore, the couplings between the two neural modules also play vital roles during information processing, especially when tracking a moving stimulus in one modality and a static stimulus in the other [25-27].

We compared the theoretical predictions of the bimodular CANNs with experiments in which different sensory modalities are involved during information processing, e.g., sensory illusions [28-30]. In this paper we study the motion-bounce illusion experiment, which incorporates the visual and auditory sensory modalities [22, 31, 32], and show that sensory inputs from one modality can affect the perception of stimuli in another modality, consistent with experimental results.
II. NETWORK ARCHITECTURE

A. Single Layer CANNs
We first describe single-module CANNs which process a one-dimensional stimulus. The stimulus can be regarded as the position or moving direction of an object, a head direction, or another continuous variable. Each neuron in the network has its own preferred stimulus (direction), hence the whole network can encode all the stimulus information in the population of neurons. We use U(x,t) to denote the synaptic input at time t to the neurons whose preferred stimulus is x, with x ∈ [0, 2π). The dynamics of U(x,t) is [14-16]

\[
\tau \frac{\partial U(x,t)}{\partial t} = -U(x,t) + \rho \int_{-\infty}^{\infty} J(x,x')\, r(x',t)\, dx' + I^{\rm ext}(x,t), \tag{1}
\]

where τ is a time constant controlling the rate at which the synaptic input decays to the total input of the neuron, typically of the order of 1 ms. The function I^ext(x,t) denotes the external input to the network at time t and position x, and ρ is the density of neurons. The couplings between the neurons located at positions x and x' are Gaussian functions J(x,x'):

\[
J(x,x') = \frac{J_0}{\sqrt{2\pi}\, a} \exp\!\left[-\frac{(x-x')^2}{2a^2}\right], \tag{2}
\]

where a defines the interaction range among the neurons. It can be seen from Eq. (2) that the coupling is translationally invariant: it depends only on the displacement x - x' (modulo 2π for angular variables). This endows the network with the ability to support a continuous family of attractors. The function r(x,t) denotes the firing rate at time t and position x:

\[
r(x,t) = \frac{[U(x,t)]_+^2}{1 + k\rho \int_{-\infty}^{\infty} [U(x',t)]_+^2\, dx'}, \tag{3}
\]

in which [U]_+ ≡ max(U, 0), and k is the global inhibition, which controls the extent to which the firing rate saturates [33].

Before applying external stimuli to the neural network, we first consider the intrinsic dynamics of the CANNs without external inputs (I^ext = 0). For 0 < k < k_c ≡ ρJ_0²/(8√(2π) a) and for a ≪ π, the CANNs support a continuous family of stationary states (plotted in Fig. 1), denoted as

\[
\widetilde{U}(x|z) = U_0 \exp\!\left[-\frac{(x-z)^2}{4a^2}\right], \tag{4}
\]
\[
\widetilde{r}(x|z) = r_0 \exp\!\left[-\frac{(x-z)^2}{2a^2}\right], \tag{5}
\]

where U_0 = [1 + (1 - k/k_c)^{1/2}] J_0/(4√π a k), r_0 = [1 + (1 - k/k_c)^{1/2}]/(2√(2π) a k ρ), and z is a free parameter denoting the peak position of the Gaussian bump.

FIG. 1. The stationary states of the CANNs when there is no external input. They are also the solutions of Eq. (1) under I^ext(x,t) = 0.

To simplify, we rescale the parameters: Ũ = ρJ_0 U, Ĩ^ext = ρJ_0 I^ext, r̃ = (ρJ_0)² r, k̃ = √(2π) a k/(ρJ_0²). Then Eq. (1) and Eq. (3) can be rewritten as

\[
\tau \frac{\partial \widetilde{U}(x,t)}{\partial t} = -\widetilde{U}(x,t) + \int_{-\infty}^{\infty} \frac{J(x,x')}{J_0}\, \widetilde{r}(x',t)\, dx' + \widetilde{I}^{\rm ext}(x,t), \tag{6}
\]
\[
\widetilde{r}(x,t) = \frac{[\widetilde{U}(x,t)]_+^2}{1 + \dfrac{\widetilde{k}}{\sqrt{2\pi}\, a} \int_{-\infty}^{\infty} dx'\, [\widetilde{U}(x',t)]_+^2}. \tag{7}
\]

B. Bimodular CANNs
Now we generalize the single-module CANNs to a bimodular structure [34]. Since experiments have found that in the brain the different sensory modalities interact with each other during the information encoding process [35-39], we add couplings between the two modules in the bimodular CANNs. The network architecture is shown in Fig. 2. For simplicity, consider the case that the neurons are evenly distributed in the two modules. Each neuron has its own preferred stimulus, indicated by the arrows in the neurons. Generalizing Eq. (6), the dynamical equations of the bimodular CANNs model are (for convenience, we use U, r, k, I, and J(x,x') to denote the rescaled variables Ũ, r̃, k̃, Ĩ, and J(x,x')/J_0 in the remainder of the paper)

\[
\tau \frac{\partial U_1(x,t)}{\partial t} = -U_1(x,t) + \omega_{11} \int_{-\infty}^{\infty} J(x,x')\, r_1(x',t)\, dx' + \omega_{12} \int_{-\infty}^{\infty} J_{12}(x,x')\, r_2(x',t)\, dx' + I_1^{\rm ext}(x,t),
\]
\[
\tau \frac{\partial U_2(x,t)}{\partial t} = -U_2(x,t) + \omega_{22} \int_{-\infty}^{\infty} J(x,x')\, r_2(x',t)\, dx' + \omega_{21} \int_{-\infty}^{\infty} J_{21}(x,x')\, r_1(x',t)\, dx' + I_2^{\rm ext}(x,t). \tag{8}
\]

The recurrent coupling strength within module 1 (module 2) is denoted by ω₁₁ (ω₂₂). The coupling from module 1 (module 2) to module 2 (module 1) is denoted by ω₂₁ (ω₁₂). For the coupling functions between the two modules, we again adopt Gaussian functions similar to that in Eq. (2):

\[
J_{ij}(x,x') = \frac{1}{\sqrt{2\pi}\, b} \exp\!\left[-\frac{(x-x')^2}{2b^2}\right], \qquad i, j \in \{1, 2\},\; i \neq j, \tag{9}
\]

where b denotes the coupling width between the two modules.

FIG. 2. The bimodular CANNs architecture.

The firing rates (responses) in each module of the bimodular CANNs are calculated as

\[
r_1(x,t) = \frac{[U_1(x,t)]_+^2}{1 + \dfrac{k}{\sqrt{2\pi}\, a} \int_{-\infty}^{\infty} dx'\, [U_1(x',t)]_+^2}, \qquad
r_2(x,t) = \frac{[U_2(x,t)]_+^2}{1 + \dfrac{k}{\sqrt{2\pi}\, a} \int_{-\infty}^{\infty} dx'\, [U_2(x',t)]_+^2}. \tag{10}
\]

There are also two external inputs, I_1^ext and I_2^ext, to the two modules respectively, which are set to be independent of each other. In this paper, both external stimuli take Gaussian forms:

\[
I_1^{\rm ext} = I_1 \exp\!\left[-\frac{(x-z_1)^2}{4a^2}\right], \qquad
I_2^{\rm ext} = I_2 \exp\!\left[-\frac{(x-z_2)^2}{4a^2}\right]. \tag{11}
\]

Here I₁ and I₂ denote the magnitudes of the external inputs, and x denotes the positions of the neurons. The central positions of the inputs are denoted by z₁ and z₂, which can be constants, or variables depending on the moving velocities, e.g., z₂ = vt.

III. STATIC INPUTS

A. Dependence on Inter-Modular Couplings
Different inter-modular couplings can give rise to various dynamics of the neural network [26, 34]. As one example, Fig. 3 presents the firing rates (responses) of the network when there are only unidirectional couplings from module 1 to 2. The profile of the firing rate at an instant has a bump shape. External input I_1^ext is a moving stimulus, and I_2^ext is static. As shown in Fig. 3(b), after the response of module 2 has been established at around position π, corresponding to its own input, it is soon dragged away by the external input I_1^ext due to the excitatory coupling ω₂₁.

FIG. 3. The firing rates (responses) of the bimodular CANNs with only unidirectional couplings from module 1 to 2: ω₂₁ = 0.1, ω₁₂ = 0. The external inputs have equal magnitudes I₁ = I₂. I_1^ext moves at a velocity of 0.02 rad/ms, and I_2^ext is fixed at position π. Blue dotted lines indicate the trajectories of the central positions of the inputs.

In general, bumps in the modules receiving excitatory inter-modular couplings are attracted, whereas those receiving inhibitory couplings are repelled. This gives rise to the various behaviors illustrated in Fig. 4. In Fig. 4(a), the bump in module 2 is attracted by module 1 to around the position of external input 1, which results from the stronger inter-modular coupling ω₂₁.

In Fig. 4(b), the repulsive and inhibitory effects from module 2 to module 1 are evident in the responses of module 1, which are pushed away from its input position. In the beginning, the bump in module 2 is attracted by that in module 1 and deviates from the position of I_2^ext. Nevertheless, soon after the bump in module 1 is suppressed by module 2, the attraction effect disappears, and the bump in module 2 returns to follow its own input. In addition, the whole network takes a longer time to reach the steady state compared with that in Fig. 4(a).

Figure 4(c) presents the case when both inter-modular couplings are strongest, with ω₂₁ excitatory and ω₁₂ inhibitory. In module 1, the responses are severely inhibited from the beginning, which also indirectly weakens the attractive effect from module 1 to module 2. After a short period, the responses in module 1 are completely inhibited, while the responses in module 2 are quickly established, stable and strong, almost unaffected by module 1, since the responses in module 1 are inhibited so quickly.

Figure 4(d) chooses both inter-modular couplings to be inhibitory. The responses in the two modules are comparable in magnitude, but are weakened by the inhibitory effects. Besides, both bumps deviate slightly from their own input positions, resulting from the repulsion brought by the inhibitory inter-modular couplings.

Figure 4(e) illustrates the situation when both inter-modular couplings are weak. The bump in module 1 is attracted towards input 2 due to the excitatory ω₁₂. Likewise, the bump in module 2 is pushed away from its own input, given the inhibitory ω₂₁. However, the strengths of the responses are counterintuitive: module 1 receives excitatory couplings from module 2, yet its responses are weaker than those of module 2, which receives inhibitory couplings from module 1. Since the dynamics of the bimodular CANNs under weak inter-modular couplings deviate from this simple expectation, we study them further in Fig. 6.

Figures 4(f) and 4(g) share the same excitatory inter-modular coupling ω₁₂. Comparing the responses in the two figures, it can be concluded that the network with the stronger inhibitory inter-modular coupling ω₂₁ takes a longer time to reach the stable state. Besides, since the excitatory coupling ω₁₂ is much stronger than the inhibitory coupling ω₂₁, the responses in module 1 in both figures are attracted approximately to input 2, and the strength of the responses is also enhanced by the excitatory inter-modular coupling. The bump in module 2 almost follows input 2 in both cases, and the inhibitory effect from module 1 is mainly reflected in the bump height, which is clearly smaller than that of module 1.

FIG. 4. The firing rates (responses) of the bimodular CANNs under various inter-modular couplings. The external inputs have equal magnitudes I₁ = I₂; I_1^ext is fixed at position 3π/2, and I_2^ext is fixed at position π. Blue dotted lines indicate the positions of the inputs.

FIG. 5. Network behavior showing inhibitory effects at low disparity and bias effects at high disparity. The firing rates (responses) of the bimodular CANNs with ω₂₁ = -0.1 and ω₁₂ = 0.1. I_1^ext is located within [0.9π, 1.5π], and I_2^ext is fixed at position π. Blue dotted lines indicate the positions of the external inputs. The disparities between the two external inputs (L₁ - L₂) in each column are listed at the top. (a) External inputs I₁ = I₂ = 0.4. (b) External inputs I₁ = I₂ = 0.5. (c) External inputs I₁ = I₂ = 0.6. (d) External inputs I₁ = I₂ = 0.7.

B. Bias Effects
As shown in Fig. 4(e), when the inter-modular couplings ω₁₂ and ω₂₁ are both weak, especially when one of the couplings is inhibitory and the other excitatory, the network behavior may appear anomalous. Figure 4(e) shows that module 1, receiving an excitatory inter-modular coupling, has weaker responses than module 2, which receives an inhibitory inter-modular coupling. In Fig. 5, we study further the network behavior with the same couplings as those in Fig. 4(e) (ω₁₂ = 0.1, ω₂₁ = -0.1), with I_1^ext located within [0.9π, 1.5π] and I_2^ext fixed at π.

A careful inspection of Fig. 5 for different disparities at different input strengths reveals that the network behavior is determined by the interplay of two effects. The first is the inhibitory effect, mainly effective at low disparity, as shown in Figs. 5(a)-5(d). Under this condition the inhibitory inter-modular couplings suppress the responses in module 2, to the extent that they are totally suppressed after a short time for the weaker input strengths in Figs. 5(a)-5(c). Even for the stronger input strengths in Fig. 5(d), when module 2 manages to sustain a stable response, its amplitude is still weaker than that of module 1.

The second is the bias effect, mainly operative at higher disparity, which explains the anomalous behavior in Fig. 4(e). Due to the inter-modular couplings, the peak positions of the bumps in both modules are either attracted to or repelled from the respective stimulus positions. When the disparity is within the range of the inter-modular couplings, the tendency to displace the bumps increases with the disparity [15]. This displacement weakens the efficacy of the input stimuli and results in a reduction of the amplitude. Interestingly, due to the nonlinear dependence of the firing rate on the synaptic input, the bias due to an excitatory interaction is stronger than that of an inhibitory one. Thus the responses in module 1 (receiving excitatory interactions from module 2) have a stronger bias and a weaker amplitude.

FIG. 6. The 'inverse effectiveness' in the bimodular CANNs. The y-axis denotes the maximum firing rate of the steady states of module 1. ω₁₂ = 0.1, ω₂₁ = -0.1. Square lines stand for the maximum firing rates in module 1 when it receives only the external input I_1^ext and no inputs from the other module. Circle lines denote the maximum firing rates in module 1 when it receives only the inputs from module 2 via the couplings, with I_1^ext = 0. Star lines record the maximum firing rates of the steady states in module 1 when it receives both I_1^ext and the inputs from module 2 via the couplings. Disparity = 0.

Another observation about the network behavior is the principle of inverse effectiveness [40]. It states that the network response to the combination of two inputs is weaker than the sum of the responses to the individual inputs; that is, network responses are sub-additive. This has been considered as evidence of divisive normalization of network responses, and was illustrated in bimodular networks with excitatory couplings [23]. As illustrated in Fig. 6, this principle is also valid for bimodular networks with excitatory and inhibitory couplings in the respective directions.
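The sub-additivity can already be seen in the divisive normalization of Eq. (10) alone: since the denominator grows with the total activity, the response to two superposed synaptic inputs falls short of the sum of the responses to the individual inputs, provided the inputs are strong enough for the normalization term to matter. A minimal numerical check (our own sketch with illustrative parameters, independent of the full network dynamics):

```python
import numpy as np

# Illustrative check of sub-additivity in divisive normalization:
# r(U) = [U]_+^2 / (1 + c * integral of [U]_+^2), cf. Eq. (10).
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
a = 0.5
c = 0.5 / (np.sqrt(2.0 * np.pi) * a)   # illustrative inhibition factor

def rate(U):
    Up2 = np.maximum(U, 0.0)**2
    return Up2 / (1.0 + c * np.sum(Up2) * dx)

# Two synaptic input profiles of moderate strength at the same position,
# e.g. a direct external input and a cross-modular one.
U_A = 2.0 * np.exp(-(x - np.pi)**2 / (4.0 * a**2))
U_B = 2.0 * np.exp(-(x - np.pi)**2 / (4.0 * a**2))

r_combined = rate(U_A + U_B).max()
r_sum = rate(U_A).max() + rate(U_B).max()
print(r_combined < r_sum)  # True: combined response < sum of responses
```

Note that for very weak inputs the quadratic numerator dominates instead, so the degree of sub-additivity depends on the input strength; the figure referenced above probes this dependence in the full network.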
C. Center of Mass Positions
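The readout used in this subsection is the center of mass of the firing-rate profile. Since x is an angular variable on [0, 2π), a convenient implementation (our own construction; the paper does not specify one) is the circular mean of the neurons' preferred positions weighted by their rates:

```python
import numpy as np

def center_of_mass(r, x):
    """Circular center of mass of a firing-rate profile r(x), x in [0, 2*pi).

    Weighting the unit phasors exp(i*x) by the rates avoids the wrap-around
    problem at 0/2*pi that a naive weighted average of x would suffer from.
    """
    z = np.sum(r * np.exp(1j * x))
    return np.angle(z) % (2.0 * np.pi)

# A Gaussian bump centered at 1.5*pi is recovered correctly even though a
# naive average of x weighted by r would be biased for bumps near 0 or 2*pi.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
d = np.minimum(np.abs(x - 1.5 * np.pi), 2.0 * np.pi - np.abs(x - 1.5 * np.pi))
r = np.exp(-d**2 / 0.5)
print(center_of_mass(r, x))  # ~ 4.712 (= 1.5*pi)
```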
In the convention of population coding, the brain infers the input positions by computing the center of mass of the firing rates of the neurons [41, 42]. Hence, it is convenient to illustrate the attraction and repulsion effects due to inter-modular couplings by tracing the center-of-mass positions of the bumps. Their dependence on the disparity is plotted in Fig. 7, showing that the attraction and repulsion effects are strongest at low disparity, while the bumps become effectively independent at high disparity. In Fig. 7, we consider the case that the position of I_2^ext is fixed, and its amplitude is fixed to be sufficiently strong for the network to reach a steady state in a short time, but also sufficiently moderate for the network to exhibit competition effects.

In Fig. 7(a), both inter-modular couplings are weakly excitatory; namely, the two modules attract and excite each other. Therefore the output positions of module 1 are much closer to I_2^ext (position π) when I_1^ext is weaker than I_2^ext. When I₁ increases to 0.6, the output position begins to approach its own input, and the dotted output curves approach the diagonal dashed line, which indicates that the output position of module 1 is approaching its own input. However, attracted by module 2, the output position of module 1 cannot fully overlap with its stimulus.

Figure 7(b) shows the situation when the coupling from module 1 to module 2 is excitatory while the coupling in the reverse direction is inhibitory. When the external input I_1^ext is weaker than I_2^ext, the repulsion effect arising from the inhibitory coupling dominates the behavior, so that the output positions of module 1 are pushed away from the corresponding input positions. This effect becomes gradually more prominent as the I_1^ext position approaches position π, and reaches its maximum when I_1^ext is applied at π.
As the external input I₁ increases to 0.7, equal to the strength of I_2^ext, input 1 becomes strong enough to act against the repulsion from module 2, pulling the output positions of module 1 towards its own input positions.

In Fig. 7(c), the roles of the couplings in Fig. 7(b) are exchanged. Now module 1 inhibits module 2, and module 2 attracts module 1. Similarly, when input 1 is weaker than input 2, the inhibition acting on module 2 is also weak. Then input 2 is able to exert its attraction on module 1, resulting in the output positions of module 1 staying close to the input-2 position π. For clarity and accuracy, Fig. 7(d) magnifies the dotted curves in Fig. 7(c) for the amplitude of I_1^ext in the range 0.1 to 0.4. When the amplitude I₁ of I_1^ext increases to 0.5 and 0.6, the output position in module 1 remains affected by input 2 at low disparity, but eventually jumps discontinuously to track input 1 at high disparity. For I₁ lying between 0.6 and 0.7, we can find a continuous variation of the output position when the disparity changes. When I₁ becomes 0.7 or above, it has sufficient strength to suppress the responses in module 2 at low disparity, but responses appear in module 2 at high disparity, resulting in a jump from fully tracking input 1 to output positions between inputs 1 and 2.

D. Relevance to Causal Inference
The behaviors of the proposed bimodular networks are relevant to models of causal inference in the brain [43]. Causal inference refers to the process of inferring whether or not an event A is caused by another event B. In normative models of causal inference for two channels using a model-averaging strategy, cues from the channels are integrated at low disparity, resulting in an averaged prediction. However, when the disparity is too high, the stimuli of the individual channels are inferred to be independent. This picture is valid for a wide range of prior distributions, and the resultant inference resembles the output position in Fig. 7(a) for bimodular networks connected by excitatory couplings.

FIG. 7. The center of mass of the responses in module 1 (denoted as the output position) versus the input position of module 1 under different weak inter-modular couplings. External input I_2^ext is fixed at position π with amplitude 0.7. I₁ increases from 0.1 to 1.0, indicated by the colorbar. (a) Inter-modular couplings ω₁₂ = ω₂₁ = 0.1. (b) Inter-modular couplings ω₂₁ = 0.1, ω₁₂ = -0.1. (c) Inter-modular couplings ω₂₁ = -0.1, ω₁₂ = 0.1. (d) Magnification of the curves in (c) for I_1^ext amplitudes in the range [0.1, 0.4].

Bimodular networks with a pair of excitatory and inhibitory inter-modular couplings also belong to the same class of causal inference models. These models use a Bayesian framework which consists of the prior distribution of the bimodular stimuli and the likelihood distribution of the cues generated from the stimuli. There are cases in which the input from one channel is subordinate to the other. For example, the likelihood distribution of the subordinate channel may be correlated with that of the other channel and have a higher uncertainty. As shown in the Appendix, the optimal network structure in this case consists of a pair of excitatory and inhibitory couplings, and the module with the subordinate input is similar to module 1 in Fig. 7(c), yielding the same output as module 2 at low disparity.

IV. MOVING INPUTS

A. Dynamical Behaviors

FIG. 8. The firing rates (responses) of the bimodular CANNs with a static and a moving stimulus. I_1^ext is static and located at position π. I_2^ext is a moving stimulus with velocity v = 0.01 rad/ms. Blue dotted lines indicate the input trajectories. M₁ denotes module 1, and M₂ denotes module 2. (a), (d) and (g) The inter-modular coupling configurations. (b), (e) and (h) The firing rates (responses) of module 1 under the different inter-modular couplings. (c), (f) and (i) The firing rates (responses) of module 2 under the different inter-modular couplings.
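The moving stimuli in this section follow Eq. (11) with a center advancing linearly in time, z(t) = z₀ + vt (modulo 2π). A minimal generator of such an input (our own sketch; the parameter values are illustrative):

```python
import numpy as np

def moving_input(x, t, I0=0.7, v=0.01, a=0.5, z0=0.0):
    """Gaussian input of Eq. (11) whose center moves at velocity v (rad/ms).

    The center z(t) = z0 + v*t is wrapped onto the ring, and the wrapped
    displacement keeps the profile continuous across the 0/2*pi boundary.
    """
    z = (z0 + v * t) % (2.0 * np.pi)
    d = (x - z + np.pi) % (2.0 * np.pi) - np.pi
    return I0 * np.exp(-d**2 / (4.0 * a**2))

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
I_t0 = moving_input(x, t=0.0)      # centered at 0 rad
I_t100 = moving_input(x, t=100.0)  # centered near 1 rad after 100 ms
print(x[np.argmax(I_t100)])
```

Feeding this time-dependent profile into the Euler integration of Eq. (8) in place of a constant I^ext reproduces the tracking scenarios discussed below.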
In practice, the information input to the brain is typically dynamical. In unimodular networks, a rich spectrum of behaviors is already observed [4]. Therefore it is instructive to study the processing of moving stimuli in bimodular CANNs. Following the previous section, we consider external inputs of moderate strengths.

In Fig. 8, the external inputs I_1^ext and I_2^ext are applied from time t ≥ 0, indicated by the blue dotted lines. Input 1 remains static while input 2 begins to move thereafter. Figures 8(b) and 8(c) (8(e) and 8(f)) correspond to the inter-modular couplings in Fig. 8(a) (8(d)). As Figs. 8(a) and 8(d) show, module 1 is heavily influenced by module 2, owing to the excitatory inter-modular coupling ω₁₂. Under the excitatory ω₁₂, the responses in module 1 in Figs. 8(b) and 8(e) oscillate around the input-1 position. There are some differences in the beginning stage of the dynamics of Figs. 8(b) and 8(e). In Fig. 8(b), since ω₁₂ and ω₂₁ are both excitatory, the responses in module 1 at the beginning are attracted by input 2. After input 2 moves away, its attractive effect on module 1 is reduced, and the bump in module 1 moves back to position π, completing a cycle of oscillation. On the other hand, in Fig. 8(e), ω₂₁ is inhibitory. Hence, the bump in module 2 is inhibited at the beginning, which also gives module 1 the chance to follow input 1 tightly, free from the attractive effect. Once the responses in module 2 are established, the bump in module 1 is immediately repelled from its own input, and when input 2 moves too far away to exert its influence, the bump in module 1 again returns to position π. In Figs. 8(c) and 8(f), the couplings from module 1 to module 2 (ω₂₁) are different, while the influenced sites are the same (around π), where external input 1 is located. Therefore in Fig. 8(c), the responses of module 2 around position π are enhanced, resulting from the positive coupling ω₂₁. In Fig. 8(f), ω₂₁ is inhibitory, thus the responses of module 2 are inhibited around position π.

Figures 8(a)-8(f) show that the static and moving stimuli can interact with each other via the couplings between the two modules. In addition, the competition between the external inputs is apparent in module 1, where input 1 acts directly and input 2 acts via the inter-modular couplings. In Figs. 8(b) and 8(e), the bump in module 1 follows the moving input 2 at regular intervals, giving rise to the oscillation patterns. In order to illustrate the competition, we present a relatively extreme situation, shown in Figs. 8(g)-8(i).

Figures 8(g)-8(i) show the dynamics of the bimodular CANNs under a stronger excitatory inter-modular coupling (ω₁₂) accompanied by a weaker inhibitory coupling (ω₂₁), as shown in Fig. 8(g). The amplitude of the static input is much stronger than that of the moving input. Now the dynamics of the network are totally different from those in Figs. 8(d)-8(f). In Fig. 8(i), on account of the strong inhibitory coupling from module 1, the responses in module 2 around position π are almost fully suppressed. Only after input 2 has moved away, or before it arrives at position π, can stable and strong responses be built up in module 2. However, influenced by the strong attraction from module 2, the responses in module 1 can only sustain their static state while the responses in module 2 are inhibited. When the responses in module 2 are rebuilt, they again attract the responses in module 1, inducing module 1 to track the moving I_2^ext instead of its own static input I_1^ext.

B. Phase Diagrams
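The phase classification used in this subsection compares the variance of the bump's center-of-mass trace around each input trajectory, as quantified by the mean square deviations of Eq. (12). A sketch of the classifier (our own implementation), assuming a center-of-mass trace x_m(t) has already been extracted from a simulation:

```python
import numpy as np

def tracking_deviation(x_m, t, v):
    """Variance of x_m(t) - v*t, cf. the mean square deviations of Eq. (12).

    Displacements are wrapped onto the ring so that a bump steadily
    following a moving input at a constant lag gives a small deviation.
    """
    d = (x_m - v * t + np.pi) % (2.0 * np.pi) - np.pi
    return np.mean(d**2) - np.mean(d)**2

def classify(x_m, t, v_m, v_s=0.0):
    """'tracking' if module m follows its own moving input, else 'pinned'."""
    sigma_m = tracking_deviation(x_m, t, v_m)
    sigma_s = tracking_deviation(x_m, t, v_s)
    return "tracking" if sigma_m < sigma_s else "pinned"

# Synthetic example traces (illustrative): one bump follows the moving input
# at a constant lag, the other stays at the static input position pi.
t = np.arange(0.0, 400.0, 1.0)
v_m = 0.01
trace_tracking = (v_m * t - 0.1) % (2.0 * np.pi)
trace_pinned = np.full_like(t, np.pi)
print(classify(trace_tracking, t, v_m))  # tracking
print(classify(trace_pinned, t, v_m))    # pinned
```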
The different cases in Fig. 8 illustrate the effects of thecompetition between two external inputs and the effectsof inter-modular couplings on the dynamical behaviors.To obtain a more comprehensive picture, we introducethe tracking mean square deviations with respect to thestatic and moving inputs as references. A comparison oftheir magnitudes reveals whether the responses are track-ing the static or moving inputs. Below, we denote themodules receiving static and moving inputs as modules s and m respectively. Since module m receives a mov-ing stimulus, we particularly focus on the mean squaredeviations in module m , σ s = (cid:104) ( x m ( t ) − v s t ) (cid:105) t − (cid:104) x m ( t ) − v s t (cid:105) t ,σ m = (cid:104) ( x m ( t ) − v m t ) (cid:105) t − (cid:104) x m ( t ) − v m t (cid:105) t , (12)where x m ( t ) denotes the center of mass of the responsesin module m , v m ( v s ) indicates the moving velocities oftwo external inputs, and v s = 0. (cid:104)· · ·(cid:105) t represents averageover time. σ m and σ s denote the mean square deviationsof the responses in module m with respect to two externalinput positions to the network. When σ s is less than σ m , it means the module m is tracking the static inputmore than its own moving input. Otherwise, it tracksthe moving input more.Figure 9 shows the phase diagrams of tracking behav-iors. Three kinds of couplings are listed at upper left cor-ner, respectively. Excitatory inter-modular couplings aredenoted by arrows, and inhibitory couplings are denotedby red circles. We also pick some points with the samemoving velocities, but various moving input strengths asexamples of the responses in module m shown in Fig. 9.The bump in module s is effectively pinned to the staticinput in this parameter range and will not be shown. Thecorresponding data points are marked in Figs. 
9(a), 9(d)and 9(j) respectively by black stars.In three groups of couplings, the module m cannottrack its own moving stimulus when the stimulus is rel-atively weak or the input moves too fast. The responseis pinned by the static input. This is referred to as the pinned phase (see Figs. 9(b) and 9(k)). As the movingvelocity increases, stronger moving stimulus is needed toovercome the static interactions from the other module.When the moving input strength is sufficiently strong,module m is able to catch up with the moving input.This is the tracking phase with σ m < σ s (see Figs. 9(c)and 9(l)). In Figs. 9(a) and (j), in which module s ex-cites module m , the phase boundaries are similar, withthe pinned phase at low strength of the moving input(see Figs. 9(b) and 9(k)) and the tracking phase at highstrength (see Figs. 9(c) and 9(l)).On the other hand, when the module s inhibits mod-ule m , the phase boundaries in Fig. 9(d) are differentfrom the other two cases and an unpinned phase existsat intermediate strength of the moving input. In Fig.9(d), module m cannot build up stable and strong re-sponses when the moving stimulus is very weak. This is FIG. 9. The phase diagrams of the dynamical behaviors in module m with moving stimulus ((a), (d) and (j)) and networkbehaviors at selected locations. The static input I ext is fixed at the amplitude of 0.7, applied at position π , and the amplitudesof all couplings are fixed at 0.1. (b), (c) The firing rates (responses) of the bimodular CANN in (a) when I ext moves at speedof 0 . I ext is at the amplitude of 1 and 3 respectively. (e), (f), (g), (h) and (i) The firing rates (responses) of thebimodular CANN in (d) when I ext moves at speed of 0 . I ext is at the amplitude of 0.2, 1, 2 and 4 respectively.(k) and (l) The firing rates (responses) of the bimodular CANN in (j) when I ext moves at speed of 0 . I ext is atthe amplitude of 1 and 3 respectively. weak response phase . 
Furthermore, dueto the inhibitory inter-modular couplings ω ms and theweak moving input, the responses are suppressed tem-porarily when the moving bump passes by the inhibitorystatic input (see Fig. 9(e)). This region of temporarysuppression even extends slightly beyond the boundaryof the weak response phase.As the strength of moving stimulus increases, module m is able to build strong and stable responses. Due to therepulsion by the static input, the bump is repelled fromthe static input and drift with a low velocity, resultingin the unpinned phase . The drift velocity has the samedirection as that of the moving input, with the bumpattracted forward towards the moving input when thelatter is ahead, or attracted backward towards the mov-ing input when the latter is behind (see Fig. 9(f)). Thebump motion is heavily affected by the presence of thestatic input, which forms a barrier to the bump motion,causing the drift of the bump to slow down and reversefrom forward attraction to backward attraction.As the moving input strength continues to increase,the bump trajectory follows the moving input closer. Asa result, the forwardly-attracted segment of the bumptrajectory and the backwardly-attracted segment becomedisconnected, A discontinuous jump of the center of massof the bump can be observed (see Fig. 9(g)). On fur-ther increase of the moving input strength, the movingbump is able to overcome the barrier of the static input,and the two segments of the bump trajectory reconnect.However, the reconnection takes place in the forward di-rection, in contrast to the backward connection when themoving input strength is weak (see Fig. 9(h)). Hence,the bump is able to catch up with the moving input, andthe network enters the tracking phase (see Fig. 
9(i)) with σ_m < σ_s.

Figure 9 summarizes the tracking dynamics of the bimodular CANNs under weak inter-modular couplings. As we have shown, a pair of inhibitory and excitatory weak inter-modular couplings can give rise to a rich spectrum of behaviors due to the competition between the direct external input and the indirect input through the inter-modular couplings.

V. SENSORY ILLUSION
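The competition between the direct moving input and the static input from the other module can be illustrated with a minimal one-dimensional CANN simulation. The sketch below is an assumption-laden stand-in, not the paper's exact model: it uses a standard divisive-inhibition CANN in the spirit of refs. [11-15], with the influence of module s entering as an extra Gaussian input at position π scaled by the coupling w_ms; all parameter values are illustrative.

```python
import numpy as np

def simulate_cann(v_move=0.002, A_move=2.0, A_static=0.7, w_ms=-0.1,
                  N=128, k=0.5, a=0.5, tau=1.0, dt=0.1, steps=4000):
    """Minimal 1D CANN sketch (assumed form, after refs. [11-15]), not the
    paper's exact dynamical equations.  tau dU/dt = -U + J r + I_ext, with
    divisive inhibition r = U_+^2 / (1 + k <U_+^2>).  The direct stimulus is
    a Gaussian bump moving at speed v_move; the static input from module s
    enters as a Gaussian at x = pi scaled by the coupling w_ms."""
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)

    def gauss(c):  # Gaussian bump on the ring, centred at c
        d = np.angle(np.exp(1j * (x - c)))
        return np.exp(-d**2 / (2 * a**2))

    J = np.array([gauss(c) for c in x]) / N  # translation-invariant coupling
    U = np.zeros(N)
    centers = []
    for t in range(steps):
        z = -np.pi / 2 + v_move * t * dt           # moving stimulus position
        I_ext = A_move * gauss(z) + w_ms * A_static * gauss(np.pi)
        r = np.maximum(U, 0.0)**2
        r = r / (1.0 + k * r.sum() / N)            # divisive inhibition
        U += dt / tau * (-U + J @ r + I_ext)
        centers.append(np.angle(np.sum(r * np.exp(1j * x))))  # bump centre
    return np.array(centers)
```

Comparing the bump centre with the stimulus position distinguishes the regimes: with the default strong moving input the bump follows the stimulus closely (tracking-like behavior), while a much weaker moving input combined with a pinning static coupling should leave the bump near π (pinned-like behavior).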
Understanding the rich dynamics of bimodular networks can assist us in studying multisensory information processing in neural circuits. The brain receives different kinds of signals via distinct senses from the surrounding environment, and generates appropriate responses after integration of the received information. There have been extensive studies focusing on the multisensory integration of different modes of signals, such as visual-vestibular [36, 37], visual-auditory [38, 44], and so forth [19, 24, 35, 39, 45, 46]. In this paper, we take the 'Motion-Bounce Illusion' experiment [30, 32, 47], which incorporates visual and auditory signals, as an example to elucidate how bimodular CANNs explain the experiment.

A sketch of the 'Motion-Bounce Illusion' experiment is shown in Fig. 10. The subject first sees two balls located at points A and B respectively. When the experiment starts, the two balls begin to move towards the diagonal points C and D respectively with the same velocities. In test 1, when the two balls meet each other at the center point O, they keep their original velocities and moving directions, moving to the destination points. In test 2, when the two balls meet each other at the center point O, a brief auditory input, sounding like 'tink', is presented concurrently, while the two balls still keep their original speeds and moving directions, moving towards points C and D respectively.
FIG. 10. A sketch of the 'Motion-Bounce Illusion' experiment.
According to the experimental results, the majority of observers reported in test 1 that they perceived the two balls streaming through each other, rather than colliding or bouncing off, when they met at the center point O. In test 2, although the motions of the balls are the same, a considerable fraction of observers reported that they perceived the two balls bouncing off each other instead of streaming through. That is, the trajectories of the two balls become '><', different from the 'X' shape in test 1.

We use a bimodular CANN to model the visual and auditory modules. In the visual module, we have a moving input with two peaks approaching each other with the same velocity. In the auditory module, we have a momentary static input simulating the brief auditory 'tink'. To quantify the perception of 'streaming through' and 'bouncing off', we introduce two reference patterns to represent them. The single-bump profile in Fig. 11(a) corresponds to the situation in which the two balls overlap with each other completely at the meeting point O. The two-bump profile in Fig. 11(b) represents the situation in which the two balls bounce off so that the observers can see two balls at the meeting point O. Based on these two reference patterns, we are able to calculate the bouncing ratio (BR) defined by

BR = \frac{P_B}{P_S + P_B}, \qquad (13)

where P_S and P_B are the projections of the network responses at the meeting point onto the respective reference patterns (S: streaming through, B: bouncing off), computed by

P_B = \sum_{i=1}^{N} \frac{R_i M_i^B}{|M^B|}, \qquad P_S = \sum_{i=1}^{N} \frac{R_i M_i^S}{|M^S|}, \qquad (14)

in which R_i denotes the response of neuron i in the visual module at the meeting point O, N is the number of neurons in each module, and M_i^B and M_i^S denote the reference patterns at neuron i.

FIG. 11. The reference patterns in simulations of the Motion-Bounce Illusion. (a) Reference pattern of streaming through. (b) Reference pattern of bouncing off. The central minimum is 3% of the two maxima.
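As a concrete illustration, Eqs. (13)-(14) can be evaluated directly. The snippet below uses illustrative Gaussian reference patterns; the profile width, bump separation, and neuron count are assumptions for the sketch, not the paper's values.

```python
import numpy as np

def bouncing_ratio(R, M_S, M_B):
    """Bouncing ratio of Eqs. (13)-(14): project the visual-module response
    R at the meeting point onto the 'streaming' (M_S) and 'bouncing' (M_B)
    reference patterns, then BR = P_B / (P_S + P_B)."""
    P_S = R @ M_S / np.linalg.norm(M_S)
    P_B = R @ M_B / np.linalg.norm(M_B)
    return P_B / (P_S + P_B)

# Illustrative reference patterns on a ring of N neurons (assumed Gaussian
# profiles; widths and separation are illustrative choices).
N = 180
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
a = 0.3
M_S = np.exp(-x**2 / (2 * a**2))                  # single bump at point O
M_B = (np.exp(-(x - 0.8)**2 / (2 * a**2))
       + np.exp(-(x + 0.8)**2 / (2 * a**2)))      # two separated bumps

print(bouncing_ratio(M_S, M_S, M_B))  # single-bump response: BR below 0.5
print(bouncing_ratio(M_B, M_S, M_B))  # two-bump response: BR above 0.5
```

A response resembling the single-bump pattern yields a BR well below 0.5 (streaming percept), while a two-bump-like response pushes the BR above 0.5 (bouncing percept), which is how the BR values in Fig. 12 discriminate the two perceptions.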
In the simulations, following the experiments [47], for the visual module we set the moving velocities of the two visual cues to both be 0.02 rad/ms. The duration of the auditory signal is 6 ms, and it is presented 6 ms before the two balls collide. Both the auditory and visual inputs have the same amplitude of 1.2. We tried different couplings from the visual module to the auditory module, and found that only under excitatory ω_AV couplings can the network model simulate the experiments well; therefore we set ω_AV = 0.1. This also indicates that in this sensory illusion experiment, the functional inter-modular couplings from the visual module to the auditory module in the brain are likely also excitatory. The rest of the network parameters are the same as in Fig. 9(d). We obtain the simulation results shown in Fig. 12. Each point denotes a BR value when the two balls meet at the center point O. As the 'tink' sound increases the perception of bouncing off, ω_VA is set to be negative; since the visual cues have no inhibitory effects on audition, the couplings from the visual to the auditory modality ω_AV are set to be excitatory. When the magnitude of ω_VA increases, the BR values also increase, indicating that the observers are more likely to sense the 'bouncing illusion'. Furthermore, the BR values remain effectively independent of the magnitudes of the auditory inputs.

We compare our simulation results with the 'Motion-Bounce Illusion' experimental results [30-32, 47]. In the experiment, the bouncing ratio increases by around 80% when the 'tink' sound is present, improving notably when compared with the case in the absence of the auditory
input. In the simulation, the increase of the bouncing ratios is around 50%, which is comparable to the experimental results. Therefore bimodular CANNs can be a useful modeling tool for comparison with experiments.

FIG. 12. The BRs under different inter-modular couplings and auditory input strengths. The region enclosed by the dashed lines denotes the simulation results without auditory inputs.
VI. CONCLUSIONS
We have generalized the study of unimodular CANNs to bimodular CANNs, endowing the network with the capacity to incorporate two sensory modalities. The inter-modular couplings in bimodular CANNs play important roles in determining the dynamics of the network (Fig. 3 and Fig. 4). Excitatory inter-modular couplings enhance and attract the responses of the other module, while inhibitory inter-modular couplings lead to suppressing and repelling effects, for both static and moving inputs. The network behavior is determined by the interplay of the input strengths, their disparity, speed (for moving inputs) and the inter-modular couplings. The most interesting case is the bimodular CANN with a pair of excitatory and inhibitory inter-modular couplings. For static inputs at high disparity, it exhibits anomalous behavior, with the inhibited module producing a stronger-than-expected output compared with the excited module. For static and moving inputs applied to the excited and inhibited modules respectively, a series of drifting responses with continuous and discontinuous evolution occurs as the moving input strength increases, finally arriving at the tracking phase.

We have shown that bimodular networks are relevant to issues in neuroscience and neural information processing. In the study of static inputs, they are useful in modeling causal inference. Bimodular networks connected by excitatory inter-modular couplings yield integrated outputs at low disparity and segregated outputs at high disparity. This provides a neural substrate for causal inference based on a wide range of prior distributions. Bimodular networks with a pair of excitatory and inhibitory inter-modular couplings can also be used to model causal inference in which one channel is subordinate to the other. In this paper we have not discussed bimodular networks with inhibitory couplings, but they are already important components in models of competitive decision making [48-50].
Using bimodular networks with dynamical inputs, we have also modeled multisensory psychophysics experiments such as the motion-bounce illusion experiment, and predict that the psychophysical effect is robust over a wide range of magnitudes of the auditory input.

Multisensory interaction has been an important issue which has been studied extensively. Figuring out how the brain processes multisensory signals is an important topic, not only in modeling the functions of the brain, but also in the technological applications of neural computation. It has been commonly recognized that excitatory couplings between modules are important when the brain deals with different channels of signals that are correlated [51], and that inhibitory couplings are important when the brain processes signals that are uncorrelated or anti-correlated [23, 52]. Experiments integrating visual and vestibular signals in the monkey's brain have found 'congruent' and 'opposite' cells [36], showing that the ways the neural system responds to signals with different disparities can be rather diverse. In a recently proposed model explaining the functions of the congruent and opposite cells in Bayes-optimal inference, the inter-modular couplings play an important role. Recent work also showed that whether the network structure achieving Bayes-optimal performance incorporates excitatory or inhibitory couplings depends on the prior distribution of the signals [53]. While most studies focus on the steady-state behaviors of the neural system, our work shows that dynamical and temporal behaviors are also important, and the transient behaviors of the neural system may also be useful in conveying information between the sensory modalities. Experiments based on temporal integration, such as the motion-bounce illusion experiment, can also be designed to further study multisensory information processing.
Acknowledgments
This work is supported by grants from the Research Grants Council of Hong Kong (grant numbers 16322616, 16306817 and 16302419).

[1] D. J. Amit, Modeling Brain Function: The World of Attractor Neural Networks (Cambridge University Press, Cambridge, UK, 1992).
[2] W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski, Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, Cambridge, UK, 2014).
[3] P. Dayan and L. F. Abbott, Theoretical Neuroscience, Vol. 806 (2001).
[4] R. Ben-Yishai, D. Hansel, and H. Sompolinsky, Traveling waves and the processing of weakly tuned inputs in a cortical network module, J. Comput. Neurosci. 4(1), 57-77 (1997).
[5] H. R. Wilson and J. D. Cowan, Excitatory and inhibitory interactions in localized populations of model neurons, Biophys. J. 12(1), 1-24 (1972).
[6] S. I. Amari, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybern. 27(2), 77-87 (1977).
[7] J. H. Maunsell and D. C. Van Essen, Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation, J. Neurophysiol. 49(5), 1127-1147 (1983).
[8] S. Deneve, P. E. Latham and A. Pouget, Reading population codes: a neural implementation of ideal observers, Nat. Neurosci. 2(8), 740-745 (1999).
[9] A. Samsonovich and B. L. McNaughton, Path integration and cognitive mapping in a continuous attractor neural network model, J. Neurosci. 17(15), 5900-5920 (1997).
[10] M. Camperi and X. J. Wang, A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability, J. Comput. Neurosci. 5(4), 383-405 (1998).
[11] S. Wu, S. I. Amari and H. Nakahara, Population coding and decoding in a neural field: a computational study, Neural Comput. 14(5), 999-1026 (2002).
[12] S. Wu and S. I. Amari, Computing with continuous attractors: Stability and online aspects, Neural Comput. 17(10), 2215-2239 (2005).
[13] S. Wu, K. Hamaguchi and S. I. Amari, Dynamics and computation of continuous attractors, Neural Comput. 20(4), 994-1025 (2008).
[14] C. C. A. Fung, K. Y. M. Wong and S. Wu, Dynamics of neural networks with continuous attractors, EPL 84(1), 18002 (2008).
[15] C. C. A. Fung, K. Y. M. Wong and S. Wu, A moving bump in a continuous manifold: A comprehensive study of the tracking dynamics of continuous attractor neural networks, Neural Comput. 22(3), 752-792 (2010).
[16] C. C. A. Fung, K. Y. M. Wong, H. Wang and S. Wu, Dynamical synapses enhance neural information processing: gracefulness, accuracy, and mobility, Neural Comput. 24(5), 1147-1185 (2012).
[17] C. C. A. Fung, H. Wang, K. Lam, K. Y. M. Wong, and S. Wu, Resolution enhancement in neural networks with dynamical synapses, Front. Comput. Neurosci. 7 (2013).
[18] C. S. Zhou, L. Zemanova, G. Zamora, C. C. Hilgetag and J. Kurths, Hierarchical organization unveiled by functional connectivity in complex brain networks, Phys. Rev. Lett. 97(23), 238103 (2006).
[19] J. Driver and T. Noesselt, Multisensory interplay reveals crossmodal influences on 'sensory-specific' brain regions, neural responses, and judgments, Neuron 57(1), 11-23 (2008).
[20] C. R. Fetsch, G. C. DeAngelis and D. E. Angelaki, Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons, Nat. Rev. Neurosci. 14(6), 429-442 (2013).
[21] T. R. Stanford, S. Quessy and B. E. Stein, Evaluating the operations underlying multisensory integration in the cat superior colliculus, J. Neurosci. 25(28), 6499-6508 (2005).
[22] P. M. Jaekl and L. R. Harris, Auditory-visual temporal integration measured by shifts in perceived temporal location, Neurosci. Lett. 417(3), 219-224 (2007).
[23] W. H. Zhang, A. Chen, M. J. Rasch and S. Wu, Decentralized multisensory information integration in neural systems, J. Neurosci. 36(2), 532-547 (2016).
[24] M. O. Ernst and M. S. Banks, Humans integrate visual and haptic information in a statistically optimal fashion, Nature 415(6870), 429-433 (2002).
[25] C. C. A. Fung, K. Y. M. Wong and S. Wu, Tracking dynamics of two-dimensional continuous attractor neural networks, J. Phys. Conf. Ser. 197(1), 012017 (2009).
[26] C. C. A. Fung, K. Y. M. Wong, H. Z. Mao and S. Wu, Fluctuation-response relation unifies dynamical behaviors in neural fields, Phys. Rev. E 92(2), 022801 (2015).
[27] H. Wang, K. Lam, C. C. A. Fung, K. Y. M. Wong and S. Wu, Rich spectrum of neural field dynamics in the presence of short-term synaptic depression, Phys. Rev. E 92(3), 032908 (2015).
[28] L. Shams, Y. Kamitani and S. Shimojo, Illusions: What you see is what you hear, Nature 408(6814), 788 (2000).
[29] L. Shams, Y. Kamitani and S. Shimojo, Visual illusion induced by sound, Brain Res. Cogn. Brain Res. 14(1), 147-152 (2002).
[30] S. Shimojo and L. Shams, Sensory modalities are not separate modalities: plasticity and interactions, Curr. Opin. Neurobiol. 11(4), 505-509 (2001).
[31] S. Watkins, L. Shams, S. Tanaka, J. D. Haynes and G. Rees, Sound alters activity in human V1 in association with illusory visual perception, NeuroImage 31(3), 1247-1256 (2006).
[32] R. Sekuler, A. B. Sekuler and R. Lau, Sound alters visual motion perception, Nature 385(6614), 308 (1997).
[33] M. Carandini and D. J. Heeger, Normalization as a canonical neural computation, Nat. Rev. Neurosci. 13(1), 51-62 (2012).
[34] W. H. Zhang and S. Wu, Neural information processing with feedback modulations, Neural Comput. 24(7), 1695-1721 (2012).
[35] L. Shams and A. R. Seitz, Benefits of multisensory learning, Trends Cogn. Sci. 12(11), 411-417 (2008).
[36] Y. Gu, D. E. Angelaki and G. C. DeAngelis, Neural correlates of multisensory cue integration in macaque MSTd, Nat. Neurosci. 11(10), 1201-1210 (2008).
[37] K. Dokka, G. C. DeAngelis and D. E. Angelaki, Multisensory integration of visual and vestibular signals improves heading discrimination in the presence of a moving object, J. Neurosci. 35(40), 13599-13607 (2015).
[38] S. Molholm, W. Ritter, D. C. Javitt and J. J. Foxe, Multisensory visual-auditory object recognition in humans: a high-density electrical mapping study, Cereb. Cortex 14(4), 452-465 (2004).
[39] C. R. Fetsch, A. Pouget, G. C. DeAngelis and D. E. Angelaki, Neural correlates of reliability-based cue weighting during multisensory integration, Nat. Neurosci. 15(1), 146-154 (2012).
[40] T. Ohshiro, D. E. Angelaki and G. C. DeAngelis, A normalization model of multisensory integration, Nat. Neurosci. 14(6), 775 (2011).
[41] A. Pouget, K. Zhang, S. Deneve and P. E. Latham, Statistically efficient estimation using population coding, Neural Comput. 10(2), 373-401 (1998).
[42] S. Wu, H. Nakahara and S. I. Amari, Population coding with correlation and an unfaithful model, Neural Comput. 13(4), 775-797 (2001).
[43] L. Shams and U. R. Beierholm, Causal inference in perception, Trends Cogn. Sci. 14(9), 425-432 (2010).
[44] A. R. Seitz, R. Kim and L. Shams, Sound facilitates visual learning, Curr. Biol. 16(14), 1422-1427 (2006).
[45] W. D. Hairston, M. T. Wallace, J. W. Vaughan, B. E. Stein, J. L. Norris and J. A. Schirillo, Visual localization ability influences cross-modal bias, J. Cogn. Neurosci. 15(1), 20-29 (2003).
[46] B. Odegaard, D. R. Wozny and L. Shams, The effects of selective and divided attention on sensory precision and integration, Neurosci. Lett. 614, 24-28 (2016).
[47] K. Watanabe, Crossmodal interaction in humans (Doctoral dissertation, California Institute of Technology) (2001).
[48] X. J. Wang, Probabilistic decision making by slow reverberation in cortical circuits, Neuron 36(5), 955-968 (2002).
[49] X. J. Wang, Decision making in recurrent neuronal circuits, Neuron 60(2), 215-234 (2008).
[50] C. T. Wang, C. T. Lee, X. J. Wang, and C. C. Lo, Top-down modulation on perceptual decision with balanced inhibition through feedforward and feedback inhibitory neurons, PLOS ONE 8(4), e62379 (2013).
[51] R. S. Kim, A. R. Seitz and L. Shams, Benefits of stimulus congruency for multisensory facilitation of visual learning, PLOS ONE 3(1), e1532 (2008).
[52] W. H. Zhang, H. Wang, K. Y. M. Wong and S. Wu, 'Congruent' and 'Opposite' neurons: Sisters for multisensory integration and segregation, NeurIPS, pp. 3180-3188 (2016).
[53] H. Wang, W. H. Zhang, K. Y. M. Wong and S. Wu, How the prior information shapes neural networks for optimal multisensory integration, 14th International Symposium on Neural Networks (ISNN), Sapporo, Japan (2017).

APPENDIX: CAUSAL INFERENCE IN AN OPPOSITELY COUPLED BIMODULAR NETWORK
Following [23], we consider a generic prior of two real-valued stimuli s_1 and s_2 described by

p(s_1, s_2) = \mathcal{N}(s_1 - s_2; 0, \sigma_s^2), \qquad (A1)

where \mathcal{N}(0, \sigma_s^2) is a normal distribution with mean 0 and variance \sigma_s^2. Instead of the independent likelihood considered in [23], we focus on the case that the cues z_1 and z_2 are generated by the stimuli given by the correlated likelihood

p(z_1, z_2 | s_1, s_2) \propto \exp\left[ -\frac{1}{2} (Z - S)^T C^{-1} (Z - S) \right], \qquad (A2)

where c_{ij} = \langle (z_i - s_i)(z_j - s_j) \rangle. Using Bayes' rule, the posterior probability is given by

p(s_1, s_2 | z_1, z_2) \propto p(z_1, z_2 | s_1, s_2)\, p(s_1, s_2). \qquad (A3)

In a bimodular network, the posterior estimate of s_1 is given by

p(s_1 | z_1, z_2) = \int ds_2\, p(s_1, s_2 | z_1, z_2). \qquad (A4)

Noting that the integrand is a Gaussian function, we obtain the mean and variance of the posterior distribution given by

\hat{s}_1 = \frac{(c_{22} - c_{12} + \sigma_s^2) z_1 + (c_{11} - c_{12}) z_2}{c_{11} + c_{22} - 2c_{12} + \sigma_s^2}, \qquad (A5)

\hat{\sigma}_1^2 = \frac{c_{11} c_{22} - c_{12}^2 + c_{11} \sigma_s^2}{c_{11} + c_{22} - 2c_{12} + \sigma_s^2}. \qquad (A6)

The posterior mean of s_2 can be obtained similarly. To relate the inference of a module to its direct input and the inference of the other module, we have

\hat{s}_1 = \frac{\sigma_s^2}{\sigma_s^2 + c_{11} - c_{12}} z_1 + \frac{c_{11} - c_{12}}{\sigma_s^2 + c_{11} - c_{12}} \hat{s}_2, \qquad (A7)

\hat{s}_2 = \frac{\sigma_s^2}{\sigma_s^2 + c_{22} - c_{12}} z_2 + \frac{c_{22} - c_{12}}{\sigma_s^2 + c_{22} - c_{12}} \hat{s}_1. \qquad (A8)

Note that there is an important difference with the case of independent likelihoods, in which c_{12} = 0. Instead of having \hat{s}_2 positively weighted in \hat{s}_1 and vice versa, there exist likelihood functions in which c_{11} - c_{12} and c_{22} - c_{12} have opposite signs. For example, for the following correlated noise, input 1 is subordinate to input 2,

s_1 - z_1 = t_1 + 2t_2, \quad s_2 - z_2 = t_2, \quad t_1, t_2 \sim \mathcal{N}(0, \tfrac{1}{2}). \qquad (A9)

This results in

c_{11} - c_{12} = \tfrac{3}{2}, \quad c_{22} - c_{12} = -\tfrac{1}{2}. \qquad (A10)

Next, we will show that this setting can be implemented by a bimodular network in which the couplings from module 2 to 1 are excitatory, and the couplings from module 1 to 2 are inhibitory. Consider network solutions of the form, for i = 1, 2,

U_i(x_i) = U_i \exp\left[ -\frac{(x_i - \hat{s}_i)^2}{4a^2} \right]. \qquad (A11)

Substituting the solution into Eq. (8) and integrating over x_1 and x_2 in the first and second equations respectively, we obtain

U_1 = \frac{\omega_{11} U_1}{\sqrt{2} B_1} + \frac{\omega_{12} U_2}{\sqrt{2} B_2} + I_1, \qquad (A12)

U_2 = \frac{\omega_{21} U_1}{\sqrt{2} B_1} + \frac{\omega_{22} U_2}{\sqrt{2} B_2} + I_2, \qquad (A13)

where B_i = 1 + kU_i^2/(8\sqrt{2\pi}\, a), i = 1, 2. Substituting Eq. (A11) into Eq. (8), multiplying both sides by x_1 and x_2 in the first and second equations respectively, integrating over x_1 and x_2 in the respective equations, and using Eqs. (A12) and (A13), we obtain

\hat{s}_1 = \frac{\omega_{12} U_2}{\omega_{12} U_2 + \sqrt{2} B_1 I_1} \hat{s}_2 + \frac{\sqrt{2} B_1 I_1}{\omega_{12} U_2 + \sqrt{2} B_1 I_1} z_1, \qquad (A14)

\hat{s}_2 = \frac{\omega_{21} U_1}{\omega_{21} U_1 + \sqrt{2} B_2 I_2} \hat{s}_1 + \frac{\sqrt{2} B_2 I_2}{\omega_{21} U_1 + \sqrt{2} B_2 I_2} z_2. \qquad (A15)

Comparing these equations with Eqs. (A7) and (A8), we see that when the causal inference of an input is subordinate to another, such as in the example of Eq. (A9), the network implementation can be achieved by having an excitatory \omega_{12} and an inhibitory \omega_{21}.
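The closed forms of Eqs. (A5)-(A8) can be checked numerically against the direct 2×2 Gaussian computation. The snippet below assumes the disparity prior \mathcal{N}(s_1 - s_2; 0, \sigma_s^2) and an example covariance with c_{11} - c_{12} > 0 and c_{22} - c_{12} < 0; the specific values (c_{11} = 5/2, c_{12} = 1, c_{22} = 1/2, \sigma_s^2 = 1.3) are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

# Likelihood covariance C and disparity-prior precision L; the posterior of
# (s1, s2) given (z1, z2) is Gaussian with precision C^{-1} + L and mean
# (C^{-1} + L)^{-1} C^{-1} z.
c11, c12, c22, sig2 = 2.5, 1.0, 0.5, 1.3
C = np.array([[c11, c12], [c12, c22]])
L = np.array([[1.0, -1.0], [-1.0, 1.0]]) / sig2   # prior on s1 - s2
z = np.array([0.7, -0.4])

# Direct computation of the posterior mean
P = np.linalg.inv(C) + L
mu = np.linalg.solve(P, np.linalg.inv(C) @ z)

# Closed form of Eq. (A5)
s1_hat = ((c22 - c12 + sig2) * z[0] + (c11 - c12) * z[1]) \
         / (c11 + c22 - 2 * c12 + sig2)
assert np.isclose(mu[0], s1_hat)

# Eq. (A7): s1_hat re-expressed through z1 and the other module's estimate
s2_hat = ((c11 - c12 + sig2) * z[1] + (c22 - c12) * z[0]) \
         / (c11 + c22 - 2 * c12 + sig2)
lhs = (sig2 * z[0] + (c11 - c12) * s2_hat) / (sig2 + c11 - c12)
assert np.isclose(lhs, s1_hat)
print("posterior means agree:", mu[0], s1_hat)
```

Since c_{22} - c_{12} < 0 here, the weight of \hat{s}_1 in \hat{s}_2 is negative while the weight of \hat{s}_2 in \hat{s}_1 is positive, which is the opposite-sign regime that the excitatory-inhibitory coupling pair implements.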