When Noise meets Chaos: Stochastic Resonance in Neurochaos Learning
Harikrishnan NB, Nithin Nagaraj
Consciousness Studies Programme, National Institute of Advanced Studies, Indian Institute of Science Campus, Bengaluru, India. [email protected], [email protected]
February 3, 2021

ABSTRACT
Chaos and noise are ubiquitous in the brain. Inspired by the chaotic firing of neurons and the constructive role of noise in neuronal models, we for the first time connect chaos, noise and learning. In this paper, we demonstrate the Stochastic Resonance (SR) phenomenon in Neurochaos Learning (NL). SR manifests at the level of a single neuron of NL and enables efficient subthreshold signal detection. Furthermore, SR is shown to occur in single and multiple neuronal NL architectures for classification tasks - both on simulated and real-world spoken digit datasets. Intermediate levels of noise in neurochaos learning enable peak performance in classification tasks, thus highlighting the role of SR in AI applications, especially in brain-inspired learning architectures.
Keywords: Neurochaos Learning, ChaosNet, Machine Learning, Stochastic Resonance, Artificial Neural Network
The discipline of 'Artificial Intelligence' (AI) originated with the aim of building computer systems that mimic the human brain. This involves the interplay of neuroscience and computational/mathematical models. Over the years since the inception of AI, both neuroscience and computational approaches have expanded their boundaries. This in turn shifted the focus of AI from building systems by exploiting the properties of the brain to a mere engineering point of view, i.e., 'what works is ultimately all that really matters' [1]. Engineering approaches like optimization and hyperparameter tuning evaluate AI purely from a performance point of view. This particular view greatly limits the original motivation of AI. In this research, we use two key ideas from neuroscience, namely Chaos and Stochastic Resonance, to develop novel machine learning algorithms.

With the current understanding, there are nearly 86 billion neurons [2] in the human brain. They interact with each other to form a complex network of neurons. These neurons are inherently non-linear and are found to exhibit a fluctuating neural response to the same stimulus on different trials. The fluctuating neural response is in part due to (a) the inherent chaotic nature of neurons [3]. Chaotic neurons are sensitive to initial states and thus show fluctuating behaviour owing to varying initial neural activity at the start of each trial. The second source of fluctuating behaviour can be attributed to (b) neuronal noise and interference [4]. Noise has its effect everywhere from the perception of sensory signals to the generation of motor responses [4]. Thus, noise poses a challenge as well as a benefit to information processing.

Research on noise can be traced back to the experiments of Robert Brown in 1827 [5]. The Scottish botanist observed under a microscope the irregular movement of pollen particles on the surface of a film of water, and tried to investigate the reason behind these irregular fluctuations.
This phenomenon is known as Brownian motion. The problem was successfully solved by Albert Einstein in 1905 [6]. The noise produced by Brownian motion is termed brown noise or red noise; the term 'brown' credits Robert Brown for his key observations and for laying out the experiments needed to understand Brownian motion. In another interesting development, in 1912 the Dutch physicist Geertruida de Haas-Lorentz, the first woman in noise theory, viewed electrons as Brownian particles. This inspired the Swedish physicist Gustav Adolf Ising in 1926 to explain why galvanometers cannot be cascaded indefinitely to increase amplification [7]. The next leap in noise research was brought about by J. B. Johnson and H. Nyquist: during 1927-1928, Johnson published his well known thermal voltage noise formula and derived the formula theoretically in collaboration with Nyquist [8, 9]. The field took a turn in 1948 with the groundbreaking work of Claude Elwood Shannon, who created the field called Information Theory [10]. In his 1948 paper titled "A Mathematical Theory of Communication", Shannon solved the problem of reliably transmitting a message through an unreliable (noisy) channel. Shannon showed that any communication channel can be modeled in terms of bandwidth and noise, where bandwidth is the range of electromagnetic frequencies required to transmit a signal and noise is an unwanted signal that disrupts the original signal. He further showed how to calculate the maximum rate at which data can be sent through a channel with particular bandwidth and noise characteristics with zero error. This is called the channel capacity or Shannon limit [10]. All of this research, especially Shannon's work, considered noise as an unwanted signal that adversely affects communication.
But in the second half of the 20th century, the constructive advantage of noise in signal detection, along with the advantages of noise observed in physiological experiments, led to the birth of Stochastic Resonance. The term Stochastic Resonance (SR) was first used in the context of noise-optimized systems by Roberto Benzi [11] in 1980, with regard to a discussion on climate variations and variability. Benzi introduced SR in connection with the explanation of a periodicity of about 100,000 years found in the power spectrum of paleoclimatic variations over the last 700,000 years [11]. The energy balance models failed to explain this phenomenon. Benzi, in his paper titled "Stochastic Resonance in Climatic Change" [11], suggested that the combination of internal stochastic perturbations with external periodic forcing due to the earth's orbital variations is the reason behind the ice age cycle. The paper further suggests that neither the stochastic perturbations nor the periodic forcing alone can reproduce the strong peak found at this periodicity. Thus, the effect produced by this cooperation of noise and periodic forcing was termed Stochastic Resonance by Benzi. Even though this explanation is still a subject of debate, the definition of the term SR evolved in the coming years. SR finds application in climate modelling [12], electronic circuits [13], neural models [14], chemical reactions [15], etc.

The original use of the 'resonance' part of SR comes from the plot of output signal to noise ratio (SNR), which exhibits a single maximum for an intermediate intensity of noise [16]. A motivating idea for developing SR based electronic devices or neuromorphic systems comes from the brain, because the brain far outperforms electronic devices in terms of low power consumption and robustness to noise and neural interference. SR in the brain and nervous system could thus serve as a motivation for the design of robust machine learning architectures and electronic systems. The observation of SR in neural models was first published in 1991 [14]. Research in SR accelerated when a 1993 Nature article reported the presence of SR in physiological experiments on crayfish mechanoreceptors [17]. In the same year, yet another highly cited paper - SR in a neuronal model [18] - became widely popular. These studies triggered the expansion of SR into mathematical models of neurons, biological experiments, behavioural experiments (especially in paddle fish [19]), noise enhanced cochlear implants [20] and computations [21].
Despite the wide popularity of SR, only a few papers have focused on the application of SR in machine learning (ML) and deep learning (DL) [22, 23]. Current ML and DL algorithms assume ideal working conditions, i.e., that the input data is noiseless. But in practice, there is unavoidable noise that distorts the measurements made by sensors. Hence, there is a research gap from both theoretical and implementation points of view as far as ML/DL algorithms are concerned. We define SR as noise enhanced signal processing [16]. In a nutshell, for SR to occur, the following four elements are required:

1. An information carrying signal.
2. Noise added to the input signal.
3. A non-linear system.
4. A performance measure which captures the relationship between the output and input with respect to varying noise intensity. For classification tasks, we shall use the F1-score as a measure to capture the performance of the non-linear system with respect to varying noise intensities.

In this work, we highlight how SR is inherently used in the recently proposed Neurochaos Learning (NL) [24] architecture, namely ChaosNet [25]. The sections of the paper are arranged as follows: Section 2 describes NL and how SR naturally manifests in NL. Section 3 highlights the empirical evidence of SR in NL for both simulated and real world datasets. Section 4 deals with the conclusion and future work.
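The four elements above can be illustrated with a minimal toy example - not the NL architecture itself, but the classic threshold detector from the SR literature: a subthreshold sine wave (element 1), additive Gaussian noise (element 2), a hard threshold as the non-linear system (element 3), and the output-input correlation as the performance measure (element 4). All function names and parameter values below are illustrative.

```python
import numpy as np

def threshold_detector(signal, noise_std, threshold=0.5, seed=0):
    """Pass a subthreshold signal plus Gaussian noise through a hard
    threshold and return the correlation between the binary output and
    the clean signal (the performance measure of element 4)."""
    rng = np.random.default_rng(seed)
    noisy = signal + rng.normal(0.0, noise_std, size=signal.shape)
    out = (noisy > threshold).astype(float)   # the non-linear system
    if out.std() == 0:                        # nothing ever crossed threshold
        return 0.0
    return float(np.corrcoef(out, signal)[0, 1])

t = np.linspace(0, 4, 2000)
x = 0.2 * (np.sin(2 * np.pi * t) + 1)         # subthreshold: max 0.4 < 0.5

scores = {s: threshold_detector(x, s) for s in (0.01, 0.15, 1.5)}
```

With too little noise the threshold is never crossed (score 0), with too much noise the crossings are essentially random, and an intermediate noise level yields the best output-input correlation - the SR signature.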
Inspired by the presence of neural chaos (neurochaos) in the brain [3, 26], we have recently proposed two machine learning architectures, namely ChaosNet [25] and Neurochaos-SVM [24], which we term Neurochaos Learning or NL. The inspiration for the term Neurochaos [3] comes from the chaotic behaviour exhibited at different spatiotemporal scales in the brain [3, 26]. In this paper, we uncover the inherent constructive use of noise in NL. The crux of this paper can be represented by the flow diagram in Figure 1.

Figure 1: Neurochaos Learning exhibits Stochastic Resonance for signal detection and classification.

A noisy signal is input to NL (ChaosNet) for the purposes of either signal detection (subthreshold) or classification (from multiple classes). In the diagram, we depict three possibilities - low, medium and high noise. As it turns out, the performance of NL for both signal detection and classification is found to be optimum for an intermediate or medium noise intensity. The performance of NL is quantified using the cross correlation coefficient (rho) for signal detection and the macro F1-score for classification. Thus, we claim that SR is inherent in NL and is responsible for its superior performance. The rest of this paper showcases this idea using evidence from experiments conducted on both simulated as well as real-world examples.

ChaosNet, a NL architecture, is distinctly different from other machine learning algorithms. It consists of a layer of chaotic neurons, each of which is a 1D GLS (Generalized Lüroth Series) map, C_GLS : [0, 1) -> [0, 1), defined as follows:

C_GLS(x) = x / b,              for 0 <= x < b,
C_GLS(x) = (1 - x) / (1 - b),  for b <= x < 1,

where x is in [0, 1) and 0 < b < 1. Figure 2 depicts the ChaosNet (NL) architecture. Each GLS neuron starts firing upon encountering a stimulus (normalized input data sample) and halts when the neural firing matches the stimulus. From the neural trace of each GLS neuron, we extract the following features - firing time, firing rate, energy of the chaotic trajectory, and entropy of the symbolic sequence of the chaotic trajectory [24]. The ChaosNet architecture provided in Figure 2 uses a very simple decision function - the cosine similarity of the mean representation vector of each class with the chaos based features extracted from the test data instances [25]. The classifier provided in Figure 2 need not be restricted to SVM or cosine similarity; NL features can be combined with state of the art DL methods. Refer to [24, 25] for a detailed explanation of the working of the method. It is important to note that ChaosNet does not make use of the celebrated backpropagation algorithm (which conventional DL uses), and yet yields state-of-the-art performance for classification [25]. Very recently, ChaosNet has been successfully extended to fit a continual learning framework [27].
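The GLS map and the four features can be sketched in a few lines. This is a simplified reading of [24, 25]: here the neuron halts when the trace comes within a small tolerance eps of the stimulus, and the symbolic sequence is the binary itinerary of the trace relative to b; the parameter values are illustrative, not the ones used later in the paper.

```python
import numpy as np

def gls_map(x, b):
    """1D GLS skew-tent map: x/b on [0, b), (1 - x)/(1 - b) on [b, 1)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def neural_trace(stimulus, q=0.34, b=0.499, eps=0.01, max_iter=10_000):
    """Fire the GLS neuron from initial activity q until the trace comes
    within eps of the stimulus (the halting rule); return the trace."""
    trace = [q]
    while abs(trace[-1] - stimulus) >= eps and len(trace) < max_iter:
        trace.append(gls_map(trace[-1], b))
    return np.array(trace)

def chaos_features(trace, b=0.499):
    """Firing time, firing rate, energy, and entropy of the binary
    symbolic sequence of the trace (0 below b, 1 at or above b)."""
    symbols = (trace >= b).astype(int)
    p1 = symbols.mean()
    entropy = -sum(p * np.log2(p) for p in (p1, 1 - p1) if p > 0)
    return {
        "firing_time": len(trace),
        "firing_rate": float(p1),
        "energy": float(np.sum(trace ** 2)),
        "entropy": float(entropy),
    }

features = chaos_features(neural_trace(0.7))
```

Because the skew-tent map is chaotic and ergodic on [0, 1), the trace eventually enters the eps-band around any stimulus, so the firing time is finite for eps > 0.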
Having described the ChaosNet NL architecture (Figure 2), we begin our study by exploring the functioning of a single GLS neuron in this architecture. In order to understand the SR effect in a single GLS neuron, we consider a hypothetical example of the feeding pattern of a cannibalistic species with a food chain depicted in Figure 3. Specifically, we consider the problem of survival of a single organism X of this species whose size is 0.5 units. All those organisms whose size is <= 0.5 units are prey for X, while those whose size is > 0.5 units are its predators. Thus X needs to solve a binary classification problem. To this end, we assume that X has a single GLS neuron that fires upon receiving a stimulus from the environment. The stimulus, say light reflected off the approaching organism, encodes the size of that organism, and we assume that it could be any value in the range [0, 1]. The problem now reduces to determining whether the stimulus corresponds to the label 'prey' (Class-0) or the label 'predator' (Class-1).
Figure 2: Neurochaos Learning (NL) Architecture: ChaosNet [25] is an instance of the NL architecture. The input layer consists of GLS neurons. When the r-th stimulus (normalized) x_r is input to the r-th GLS neuron C_r, it starts firing chaotically, giving a neural trace. The firing stops when this neural trace matches the stimulus. Features are extracted from the neural trace and then passed on to a classifier.

An example of several stimuli received by the organism X is depicted in Figure 4a, where Class-0 (Prey) and Class-1 (Predator) are marked with black stars and red circles respectively.

Figure 3: Food chain of a (hypothetical) cannibalistic species. The organism X feeds on other organisms of its species (prey) or is fed on by other organisms of its species (predator) depending on their size. The organism X has to decide whether the approaching one is food (prey) or death (predator) - a binary classification problem. It has one internal GLS neuron which fires upon receiving a stimulus, which is light reflected off the approaching organism (either predator or prey) that encodes its size.

In order to solve the binary classification problem for survival, organism X employs the ChaosNet
NL algorithm [25]. Briefly, this is accomplished as follows. The single GLS neuron of organism X fires chaotically from an initial neural activity (of q units) until it matches the value of the stimulus. This chaotic neuronal trace encodes all the necessary information which the organism X needs to determine whether the stimulus is Class-0 or Class-1. In particular, the following four features are used - firing time, firing rate, energy and entropy of the neural trace. In the training phase, where the labels are assumed to be known, the organism X learns the mean representation vector (of the aforementioned 4 features) for each class (80 instances per class), and in the testing phase (20 instances per class), the organism X estimates the label. This (80, 20) split is the standard training-test distribution that we find in machine learning applications. From an evolutionary point of view, the interesting question to ask is - how can natural selection optimize the biological aspects of organism X in order for it to have the highest possible survival rate, where it is able to correctly classify the stimulus as predator or prey? Assuming that the organism cannot afford more than one internal neuron (each neuron comes with a biological cost), the only available biological features are - A) the type of GLS neuron and B) the initial neural activity (which corresponds to memory) of this single internal neuron. The type of GLS neuron is determined by the value of the parameter b (we assume the skew-tent map, which is a type of 1D GLS map) and the initial neural activity is represented by q. Both b and q in our example can take values in the range (0, 1). In our discussion so far, we have omitted a very important factor, namely the noise that is inherent in the environment. Noise, as we very well know, is unavoidable, and the stimulus (the light reflected off the approaching organism) is inevitably distorted by ambient noise.
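The training and testing phases described above can be sketched with the mean-representation-vector and cosine-similarity decision rule of ChaosNet [25]. The toy 4-dimensional "chaos features" below are synthetic stand-ins for firing time, firing rate, energy and entropy (their values are illustrative, not from the paper), and the accuracy is computed by resubstitution on the training set for brevity.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def fit(features, labels):
    """Training phase: one mean representation vector per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(means, f):
    """Testing phase: label of the most cosine-similar class mean."""
    return max(means, key=lambda c: cosine(means[c], f))

# Synthetic stand-ins for the 4 chaos features of prey (Class-0) and
# predator (Class-1) stimuli; 80 instances per class, as in the text.
rng = np.random.default_rng(0)
prey = rng.normal([0.2, 0.8, 0.3, 0.9], 0.02, size=(80, 4))
predator = rng.normal([0.9, 0.3, 0.8, 0.2], 0.02, size=(80, 4))
X = np.vstack([prey, predator])
y = np.array([0] * 80 + [1] * 80)

means = fit(X, y)
acc = np.mean([predict(means, f) == c for f, c in zip(X, y)])
```

Because the two class directions are well separated here, the cosine rule classifies every instance correctly; in NL the separation must instead come from the chaos based features themselves.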
In our efforts to optimize b and q to give the highest survival rate for organism X, we are also interested in asking the question - is the presence of ambient noise always detrimental to the decision making process of the GLS neuron, irrespective of the level of noise? (In reality, it is possible for nature to evolve different types of neurons - which is very much the case with the brain. For simplicity, however, we restrict ourselves to the skew-tent map in this example.)
In other words, one would naively expect that even the presence of a tiny amount of noise can only degrade the performance of the chaotic GLS neuron, thereby reducing the survival rate of organism X (not to speak of high levels of noise, where the performance is expected to be much worse).

To answer the above two questions - one pertaining to the best possible values of q and b that maximize the survival rate, and the other to the role of the level of noise on the performance of the GLS neuron for decision making - we need to solve an optimization problem. To this end, we performed a five fold cross-validation on the dataset in Figure 4a with q = 0.25, b = 0.96, noise intensity varied from 0.001 to 1 in fixed steps, and an (80, 20) training-test split. The ChaosNet algorithm is run with this single GLS neuron (refer to [24, 25] for further details). We report an average accuracy of 100% for an intermediate range of noise intensities starting at 0.248 (Figure 4b). An intermediate amount of noise, in fact, maximizes the survival rate of organism X.

Our hypothetical example is not very far from reality. In the case of the feeding behaviour of paddle fish [19] and in the physiological experiments on crayfish mechanoreceptors [17], stochastic resonance has been reported. We conjecture that it may very well be the case of noise interacting with neurochaos leading to stochastic resonance in these real-world biological examples as well.

We found that SR is exhibited for several other settings of q and b as well, but these are not reported here for lack of space. The particular choice of q = 0.25 and b = 0.96 is motivated by the fact that SR enables 100% accuracy for these settings (with an intermediate level of noise, as seen in Figure 4b). However, such performance is not unique to this particular choice of q and b.

Figure 4: SR in ChaosNet with a single GLS neuron: (a) Prey-Predator dataset for organism X. (b) Average Accuracy (%) vs. noise intensity shows stochastic resonance.

Traditionally, SR is shown to be exhibited in bistable systems where there is an element of periodic forcing and a stochastic component [28]. This notion is used to detect signals which are sub-threshold: a combination of a sub-threshold periodic signal and noise provides a peak in the detection of the frequency of the periodic signal. In this section, we demonstrate SR in signal detection using the ChaosNet NL architecture.

There are different ways to quantify SR for signal detection. Some of the commonly used techniques are the cross-correlation coefficient, signal to noise ratio, and mutual information [29]. In this work, we use the cross correlation coefficient as a measure to evaluate the SR behaviour in
ChaosNet for signal detection.
We consider a low amplitude information-carrying periodic sinusoidal signal x(t) = A(sin(pi*t) + 1), where A = 0.2, defined over a finite time interval, with a detection threshold x_th = 0.5. Clearly the low amplitude signal is below the threshold and hence the signal goes undetected (Figure 5a). We pass x(t) + eta(t) as input to the ChaosNet architecture with a single GLS neuron with q = 0.35 and b = 0.65. Here, eta is noise which follows a uniform distribution over [-epsilon, +epsilon], where epsilon is varied from 0.001 to 1. From the ChaosNet architecture we extract the normalized firing time, y(t), shown in Figure 5b, which closely tracks x(t). We then compute the cross correlation coefficient rho between y(t) and x(t) for various noise intensities (epsilon) as shown in Figure 5c. We see that, for an intermediate amount of noise intensity, rho attains a maximum value of 0.971, while for lower and higher noise intensities we get lower values of rho (this is because y(t) fails to track x(t) in these cases, as demonstrated in Figures 6a and 6c). This is the classic phenomenon of stochastic resonance. Thus, we have demonstrated SR in NL (ChaosNet) with a single GLS neuron in both classification and signal detection scenarios.
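A rough reimplementation of this detection experiment is sketched below. We make one simplifying assumption: the GLS neuron fires only for suprathreshold samples, halting when the chaotic trace comes within a fixed tolerance of the noisy sample; the exact matching rule of [25] may differ, and all parameter values are illustrative. With negligible noise the subthreshold signal never fires the neuron (rho = 0 by the convention above), while moderate noise produces a positive correlation with the clean signal.

```python
import numpy as np

def gls_map(x, b):
    """Skew-tent GLS map on [0, 1)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, q=0.35, b=0.65, tol=0.01, max_iter=5000):
    """Iterations until the chaotic trace from q matches the stimulus."""
    x, n = q, 0
    while abs(x - stimulus) >= tol and n < max_iter:
        x = gls_map(x, b)
        n += 1
    return n

def detection_rho(eps, seed=0, x_th=0.5):
    """Correlation between the normalized firing-time trace y(t) and the
    clean subthreshold signal x(t), for uniform noise in [-eps, eps]."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 4, 1000)
    x = 0.2 * (np.sin(np.pi * t) + 1)              # subthreshold: max 0.4
    noisy = np.clip(x + rng.uniform(-eps, eps, t.size), 0, 0.999)
    y = np.array([firing_time(s) if s > x_th else 0.0 for s in noisy])
    if y.std() == 0:                               # neuron never fired
        return 0.0
    y = (y - y.min()) / (y.max() - y.min())        # normalized firing time
    return float(np.corrcoef(y, x)[0, 1])

rhos = {eps: detection_rho(eps) for eps in (0.01, 0.2, 1.0)}
```

With eps = 0.2 the threshold is crossed only near the peaks of x(t), so the firing events (and hence y(t)) are positively correlated with the signal.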
Figure 5: SR in signal detection using ChaosNet NL. (a) Sub-threshold signal x(t) with threshold x_th = 0.5. The signal cannot be detected since it is below the threshold. (b) In the presence of an intermediate amount of noise added to the input signal, the single internal neuron in ChaosNet NL detects the weak amplitude signal and also preserves its frequency. (c) Cross correlation coefficient between y(t) and x(t) vs. noise intensity, demonstrating SR. Whenever the variance of y(t) was zero, we take rho = 0.

We now move on to demonstrating SR in NL with more than one GLS neuron on both simulated and real-world datasets. Normalization is done as follows: y = (Y - min(Y)) / (max(Y) - min(Y)).
Figure 6: Intermediate noise is good for signal detection in NL. (a) With low noise, y(t) fails to track x(t) and rho is low. (b) With intermediate noise, rho is high. (c) With high noise, rho is again low. Whenever the variance of y(t) was zero, we take rho = 0.

We consider a binary classification task of separating data instances belonging to two concentric circles which are either non-overlapping (CCD) or overlapping (OCCD). The governing equations for OCCD are as follows:

f1 = r_i cos(theta) + alpha * eta,   (1)
f2 = r_i sin(theta) + alpha * eta,   (2)

where i = 1, 2 indexes the two classes with radii r_1 and r_2, theta is varied from 0 to 360 degrees, alpha scales the noise, and eta ~ N(mu, sigma), a normal distribution with mu = 0 and sigma = 1. For CCD, the value of alpha used in equations (1) and (2) is set to a smaller value so that the two circles do not overlap.

Figure 7: Simulated datasets for binary classification. (a) Concentric Circle Data (CCD). (b) Overlapping Concentric Circle Data (OCCD).
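Equations (1) and (2) can be simulated directly. The radii r1, r2 and the values of alpha below are illustrative assumptions (the exact values are not stated here); alpha = 0.1 makes the classes overlap (OCCD-like), while alpha = 0 keeps them separated (CCD-like).

```python
import numpy as np

def concentric_circles(r1=0.6, r2=0.4, alpha=0.1, n=360, seed=0):
    """Generate the two-class circle data of equations (1)-(2):
    f1 = r_i cos(theta) + alpha*eta, f2 = r_i sin(theta) + alpha*eta,
    with eta ~ N(0, 1). alpha controls the class overlap."""
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(np.arange(n))          # theta from 0 to 360 degrees
    data, labels = [], []
    for cls, r in enumerate((r1, r2)):
        f1 = r * np.cos(theta) + alpha * rng.normal(0.0, 1.0, n)
        f2 = r * np.sin(theta) + alpha * rng.normal(0.0, 1.0, n)
        data.append(np.column_stack([f1, f2]))
        labels.append(np.full(n, cls))
    return np.vstack(data), np.concatenate(labels)

X_occd, y_occd = concentric_circles(alpha=0.1)   # overlapping (OCCD)
X_ccd, y_ccd = concentric_circles(alpha=0.0)     # non-overlapping (CCD)
```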
Table 1: Dataset details of the synthetically generated Concentric Circle Data (CCD) and Overlapping Concentric Circle Data (OCCD).
We first perform a five fold cross-validation to determine the appropriate noise intensities for the CCD and OCCD classification tasks. The noise intensity (epsilon) was varied from 0.001 to 1 in fixed steps. The values of q and b are fixed to 0.21 and 0.96 respectively for CCD. In the case of OCCD, q and b are fixed to 0.23 and 0.97 respectively.

• CCD: For an intermediate noise intensity epsilon, a maximum average macro F1-score of 0.907 is obtained in five fold cross-validation (Figure 8a).
• OCCD: For an intermediate noise intensity epsilon, a maximum average macro F1-score of 0.73 is obtained in five fold cross-validation (Figure 8b).
Figure 8: SR in ChaosNet NL. (a) Average macro F1-score vs. noise intensity for Concentric Circle Data (CCD). (b) Average macro F1-score vs. noise intensity for Overlapping Concentric Circle Data (OCCD). Both plots indicate local as well as global SR.

Inspecting Figures 8a and 8b, we can make the following observations:

1. Average F1-scores achieve global maxima for an intermediate amount of noise intensity (epsilon) in both cases (CCD and OCCD), indicating global SR.
2. We observe several local maxima of average F1-scores in the above plots, which we define as local stochastic resonance or local SR.
3. There could be multiple local SR in such tasks.
4. It is also possible that multiple global SR exist for different settings of b, q and epsilon.
5. Such rich behaviour of multiple SR (local and global) is due to the properties of chaos.

In this section, we empirically show that SR is exhibited in ChaosNet NL even for a real world classification task. The task is to identify spoken digits from a human speaker; particularly, to classify the speech data into classes 0 to 9.
We considered the openly available Free Spoken Digit Dataset (FSDD) (https://github.com/Jakobovski/free-spoken-digit-dataset). The dataset consists of voice recordings of spoken digits from 0 to 9 by six speakers. Out of the six speakers, for the purposes of this study, we considered the voice samples of only one speaker, named 'Jackson'. We have 50 spoken digit recordings for each of the ten classes (0 to 9), sampled at 8 kHz. The audio recordings are trimmed to remove the silence at the beginnings and ends. The maximum and minimum lengths of the data instances are 7038 and 2753 samples respectively. In order to have the same length of data across all instances, we only considered the first 2753 samples. The normalized Fourier coefficients of each data instance were extracted and input to ChaosNet NL.

We did a five fold cross-validation using 400 data instances (40 data instances per class) to determine the optimum noise intensity that yields the best performance. The noise intensity was varied from 0.001 to 1. With b = 0.499, the best average macro F1-score is obtained for an intermediate noise intensity. Figure 9 depicts the average macro F1-score of ChaosNet NL with varying noise intensities, and once again the familiar SR is seen (with local and global SR as defined in the previous section).

Figure 9: SR in ChaosNet NL for the spoken digit classification task. Average macro F1-score vs. noise intensity.
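The preprocessing described above - trim, truncate every recording to 2753 samples, and take normalized Fourier coefficients - can be sketched as follows. The random array stands in for one trimmed FSDD recording; the choice of FFT magnitude and min-max scaling to [0, 1] (so the values are valid GLS inputs) is our assumption about the exact normalization.

```python
import numpy as np

FIXED_LEN = 2753   # length of the shortest trimmed recording

def fft_features(recording, n_samples=FIXED_LEN):
    """Truncate a recording to a fixed length and return its FFT
    magnitude coefficients, min-max scaled to [0, 1]."""
    x = np.asarray(recording, dtype=float)[:n_samples]
    mag = np.abs(np.fft.rfft(x))
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)

# stand-in for one trimmed 8 kHz recording of a spoken digit
rng = np.random.default_rng(0)
features = fft_features(rng.normal(size=4000))
```

Every recording then maps to a feature vector of the same fixed length (n_samples // 2 + 1 real-FFT coefficients), which is what the GLS input layer requires.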
Unlike in traditional machine learning algorithms, stochastic resonance is not only found in the neurochaos learning architecture (ChaosNet) but is also seen to contribute to peak performance in classification tasks. This was demonstrated for NL with a single neuron as well as with multiple neurons, and for both simulated and real-world classification tasks. Even a simple sub-threshold signal detection task using NL exhibited strong SR.

The question that we would now like to address is - why is there SR in NL? To answer this, we observe a single neural trace (or trajectory) of a single GLS neuron (of NL) corresponding to a single stimulus. Figure 10 shows the neural trace (in black) for three scenarios - (a) zero noise, (b) high noise and (c) medium noise - along with the noisy stimulus (stimulus + noise) in blue. The GLS neuron stops firing only when the neural activity matches the stimulus. The stopping time and the noise intensity are inversely proportional. For zero noise, the GLS neuron fires indefinitely without stopping, since the probability of the neural activity becoming exactly equal to the stimulus is zero. For a very high noise intensity (epsilon close to 1), the neural trace matches the stimulus in a very short time, yielding a very low firing time.
Figure 10: Why does SR in NL yield peak performance? (a) With zero noise, the neural trace is indefinite and hence the firing time is undefined (infinite). (b) With very high noise, the neural trace matches the stimulus in a very short time, yielding a very low firing time. (c) For a medium level of noise, there is a sufficient length of neural trace, which enables meaningful features to be extracted for learning. This allows stimuli from different instances and different classes to have diverse lengths of neural traces with distinct features that enable peak classification performance in NL.
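The inverse relationship between noise intensity and firing time can be seen in a few lines, if we model the noise as widening the band within which the trace is deemed to match the stimulus - our simplified reading of the mechanism described above, with illustrative parameter values.

```python
import numpy as np

def gls_map(x, b=0.499):
    """Skew-tent GLS map on [0, 1)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, eps, q=0.34, max_iter=200_000):
    """Iterations until the chaotic trace from q enters the eps-band
    around the stimulus. As eps -> 0 this diverges (an exact match has
    probability zero); for large eps it drops to zero."""
    x, n = q, 0
    while abs(x - stimulus) >= eps and n < max_iter:
        x = gls_map(x)
        n += 1
    return n

# firing time shrinks (weakly monotonically) as the noise band widens,
# since a wider band strictly contains every narrower one
times = [firing_time(0.7, eps) for eps in (0.001, 0.05, 0.4)]
```

At medium eps the trace is long enough to carry discriminative features but still halts, which is the regime where NL performs best.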
Noise is always contextual. A universal mathematical definition of noise, without contextual consideration, does not exist. Noise cannot always be treated as an unwanted signal. Stochastic Resonance is one such counter-intuitive phenomenon where the constructive role of noise is seen to contribute a performance boost in certain non-linear systems. In this study, we highlight for the first time how stochastic resonance naturally manifests in the ChaosNet neurochaos learning architecture for classification and signal detection. Future work involves considering NL with spiking neuronal models instead of GLS neurons, and exploring whether SR has a similar role in performance.

Our study paves the way for potentially unravelling how learning happens in the human brain. There is an ample amount of empirical evidence to suggest that chaos is inherent at the neuronal level in the brain, as well as at different spatiotemporal scales [3, 26]. Also, given the enormous complexity of brain networks, neuronal noise and interference are unavoidable [30]. In spite of these challenges, the brain does a phenomenal job in learning - currently unmatched by ANNs if power consumption is factored in (the brain is known to operate at remarkably low power [31]). We conjecture that a similar principle is at work in both ChaosNet and the human brain: noise optimizes the selection of diverse chaotic neural traces with distinct features, which enables efficient learning. SR seems to be not only inevitable but indispensable in cognitive systems.

The code used for the study of SR in NL is available here: https://github.com/HarikrishnanNB/stochastic_resonance_and_nl.

Acknowledgment
Harikrishnan N. B. thanks "The University of Trans-Disciplinary Health Sciences and Technology (TDU)" for permitting this research as part of the PhD programme. The authors gratefully acknowledge the financial support of Tata Trusts. We dedicate this work to the founder and Chancellor of Amrita Vishwa Vidyapeetham - Sri Mata Amritanandamayi Devi (AMMA), who continuously inspires us by her dedication and commitment in serving humanity with love and compassion.
References

[1] Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-inspired artificial intelligence. Neuron, 95(2):245-258, 2017.
[2] Frederico AC Azevedo, Ludmila RB Carvalho, Lea T Grinberg, José Marcelo Farfel, Renata EL Ferretti, Renata EP Leite, Wilson Jacob Filho, Roberto Lent, and Suzana Herculano-Houzel. Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513(5):532-541, 2009.
[3] Henri Korn and Philippe Faure. Is there chaos in the brain? II. Experimental evidence and related models. Comptes Rendus Biologies, 326(9):787-840, 2003.
[4] A Aldo Faisal, Luc PJ Selen, and Daniel M Wolpert. Noise in the nervous system. Nature Reviews Neuroscience, 9(4):292-303, 2008.
[5] Robert Brown. XXVII. A brief account of microscopical observations made in the months of June, July and August 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. The Philosophical Magazine, 4(21):161-173, 1828.
[6] Albert Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik, 4, 1905.
[7] Gustaf Ising. LXXIII. A natural limit for the sensibility of galvanometers. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 1(4):827-834, 1926.
[8] Harry Nyquist. Thermal agitation of electric charge in conductors. Physical Review, 32(1):110, 1928.
[9] John Bertrand Johnson. Thermal agitation of electricity in conductors. Physical Review, 32(1):97, 1928.
[10] Claude E Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423, 1948.
[11] Roberto Benzi, Giorgio Parisi, Alfonso Sutera, and Angelo Vulpiani. Stochastic resonance in climatic change. Tellus, 34(1):10-16, 1982.
[12] Roberto Benzi, Giorgio Parisi, Alfonso Sutera, and Angelo Vulpiani. Stochastic resonance in climatic change. Tellus, 34(1):10-16, 1982.
[13] Stéphan Fauve and F Heslot. Stochastic resonance in a bistable system. Physics Letters A, 97(1-2):5-7, 1983.
[14] Adi Bulsara, EW Jacobs, Ting Zhou, Frank Moss, and Laszlo Kiss. Stochastic resonance in a single neuron model: Theory and analog simulation. Journal of Theoretical Biology, 152(4):531-555, 1991.
[15] David S Leonard and LE Reichl. Stochastic resonance in a chemical reaction. Physical Review E, 49(2):1734, 1994.
[16] Mark D McDonnell, Nigel G Stocks, Charles EM Pearce, and Derek Abbott. Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization. Cambridge University Press, 2008.
[17] John K Douglass, Lon Wilkens, Eleni Pantazelou, and Frank Moss. Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance. Nature, 365(6444):337-340, 1993.
[18] André Longtin. Stochastic resonance in neuron models. Journal of Statistical Physics, 70(1-2):309-327, 1993.
[19] David F Russell, Lon A Wilkens, and Frank Moss. Use of behavioural stochastic resonance by paddle fish for feeding. Nature, 402(6759):291-294, 1999.
[20] Monita Chatterjee and Mark E Robert. Noise enhances modulation sensitivity in cochlear implant listeners: Stochastic resonance in a prosthetic sensory system? Journal of the Association for Research in Otolaryngology, 2(2):159-171, 2001.
[21] Adi R Bulsara, Anna Dari, William L Ditto, K Murali, and Sudeshna Sinha. Logical stochastic resonance. Chemical Physics, 375(2-3):424-434, 2010.
[22] Shuhei Ikemoto, Fabio DallaLibera, and Koh Hosoda. Noise-modulated neural networks as an application of stochastic resonance. Neurocomputing, 277:29-37, 2018.
[23] Achim Schilling, Richard Gerum, Alexandra Zankl, Holger Schulze, Claus Metzner, and Patrick Krauss. Intrinsic noise improves speech recognition in a computational model of the auditory pathway. bioRxiv, 2020.
[24] NB Harikrishnan and Nithin Nagaraj. Neurochaos inspired hybrid machine learning architecture for classification. Pages 1-5. IEEE, 2020.
[25] Harikrishnan Nellippallil Balakrishnan, Aditi Kathpalia, Snehanshu Saha, and Nithin Nagaraj. ChaosNet: A chaos based artificial neural network architecture for classification. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(11):113125, 2019.
[26] Philippe Faure and Henri Korn. Is there chaos in the brain? I. Concepts of nonlinear dynamics and methods of investigation. Comptes Rendus de l'Académie des Sciences - Series III - Sciences de la Vie, 324(9):773-793, 2001.
[27] Touraj Laleh, Mojtaba Faramarzi, Irina Rish, and Sarath Chandar. Chaotic continual learning. 2020.
[28] Bruno Andò and Salvatore Graziani. Stochastic Resonance: Theory and Applications. Springer Science & Business Media, 2000.
[29] Aruneema Das, NG Stocks, A Nikitin, and EL Hines. Quantifying stochastic resonance in a single threshold detector for random aperiodic signals. Fluctuation and Noise Letters, 4(02):L247-L265, 2004.
[30] Gabriela Czanner, Sridevi V Sarma, Demba Ba, Uri T Eden, Wei Wu, Emad Eskandar, Hubert H Lim, Simona Temereanca, Wendy A Suzuki, and Emery N Brown. Measuring the signal-to-noise ratio of a neuron. Proceedings of the National Academy of Sciences, 112(23):7141-7146, 2015.
[31] Ferris Jabr. Does thinking really hard burn more calories?