Modeling the Hallucinating Brain: A Generative Adversarial Framework
Masoumeh Zareh, Mohammad Hossein Manshaei, Sayed Jalal Zahabi
A PREPRINT, FEBRUARY 2021
Abstract—This paper looks into the modeling of hallucination in the human brain. Hallucinations are known to be causally associated with malfunctions in the interaction of the different brain areas involved in perception. Focusing on visual hallucination and its underlying causes, we identify an adversarial mechanism between the parts of the brain that are responsible for visual perception. We then show how the characterized adversarial interactions in the brain can be modeled by a generative adversarial network.
Index Terms—Generative Adversarial Network (GAN), Brain, Hallucination.
I. INTRODUCTION

The brain is an essential organ of the body for information processing and memory. Discovering the functionality of the brain has therefore always been a challenge in neuroscience, one that has drawn special attention in the past two decades. So far, various aspects of the brain's functionality and structure have been identified. Moreover, the symptoms of different brain-related neurological disorders have been revealed, and for many of them effective drugs for treatment or symptom control are now available.

The functionality of each particular area of the brain, and the connectivity between different areas, are essential for reacting to different stimulating input signals [1], [2]. Neurotransmitters serve as a means of connecting the different areas of the brain, allowing them to interact for information processing [1], [2]. Factors such as aging or neurological disorders can lead to certain kinds of brain damage. One of the known symptoms of many brain diseases is hallucination. Hallucinations can occur in a wide range of diseases, such as schizophrenia, Parkinson's disease, Alzheimer's disease, migraines, brain tumors, and epilepsy.

Hallucinations are the unpredictable experience of perceptions without corresponding sources in the external world. There are five types of hallucinations: auditory, visual, tactile, olfactory, and gustatory. Visual hallucinations occur in numerous ophthalmologic, neurologic, medical, and psychiatric disorders [3]. They are common in Parkinson's disease, with a reported prevalence as high as 74% after 20 years of the disease [4]. The positive symptoms of schizophrenia include hallucinations, delusions, and racing thoughts. Focusing on hallucination, in this paper we propose an artificial intelligence (AI) framework for modeling visual hallucinations (VHs).

Today, probabilistic mathematical and AI techniques have come to assist neuroscientists in analyzing brain functionality. These include deep learning (DL), reinforcement learning (RL), and generative adversarial networks (GANs) [5]. For instance, in [6] neural mechanisms have been studied via probabilistic inference methods. The brain's structural and functional systems are seen to possess features of complex networks [7]. It has also been shown that neurons, as agents, can partially understand their environment, make decisions, and control their internal organs [8]. Moreover, Yamins et al. use deep hierarchical neural networks to study computational models of sensory systems, especially the sensory and visual cortices [9].

Recently, utilizing the idea of the generative adversarial network (GAN), Gershman has proposed an adversarial framework for probabilistic computation in the brain [10]. There, he explains the psychological and neural evidence for this framework, and how a breakdown of the generator and discriminator could lead to the delusions observed in some mental disorders. GANs, which were introduced by Goodfellow et al. in 2014 [5], are generative models that allow for the

The authors are with the Department of Electrical and Computer Engineering, Isfahan University of Technology, 841568311, Isfahan, Iran. E-mail: [email protected], [email protected], [email protected]
generation of new synthetic data instances similar to the training data. It has been mentioned in [10] that the idea of the adversarial framework can potentially be applied to other symptoms, such as hallucinations. Inspired by this remark, in this paper we seek evidence and provide a methodology for how the GAN mechanism can be employed as an adversarial framework for modeling the hallucinations observed in some mental disorders (such as Parkinson's disease and schizophrenia).

Inference is the task of ascertaining the probability of each potential cause given an observation [11]. Approximate inference algorithms fall into two families: Monte Carlo algorithms and variational algorithms [6]. While computational neuroscientists often pursue approximate inference by exploring biological implementations of Monte Carlo and variational methods, our approach here, inspired by [10], is to model VHs through an adversarial inference setup. Adversarial inference has some important advantages over standard Monte Carlo and variational approaches. First, it can be applied to more complex models. Second, inference is more efficient than with standard Monte Carlo algorithms, and it can use more flexible approximate posteriors than standard variational algorithms [10]. Moreover, GAN-based adversarial learning techniques directly learn a generative model to construct high-quality data [12], and are therefore usually more realistic than variational approaches.

Figure 1. The overall framework of the generative adversarial network (GAN) architecture. The architecture contains a generative network and a discriminative network. The generator produces a new image from random inputs. This generated image is sent to the discriminator alongside real images. The discriminator takes input images and classifies them into two classes: real and fake.
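One way to see where this flexibility comes from is the classifier trick at the heart of the GAN discriminator: a classifier trained to separate data samples from model samples implicitly estimates the density ratio between the two distributions. The following minimal sketch is our own illustration, not part of the paper; the one-dimensional Gaussians and the logistic discriminator are hypothetical choices made so that the true log-density ratio, log p(x) − log q(x) = −2x + 2, is known in closed form and can be compared against what the discriminator learns.

```python
import math, random

random.seed(0)

# Toy setup (ours, not from the paper): "real" samples from p = N(0, 1),
# "generated" samples from q = N(2, 1). The true log-density ratio is
# log p(x) - log q(x) = -2x + 2, so a logistic discriminator
# D(x) = sigmoid(w*x + b) should learn w close to -2 and b close to 2.
p_samples = [random.gauss(0.0, 1.0) for _ in range(1000)]  # labeled "real"
q_samples = [random.gauss(2.0, 1.0) for _ in range(1000)]  # labeled "fake"

w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):  # full-batch gradient ascent on the GAN value function
    gw = gb = 0.0
    for x in p_samples:                      # E_p[log D(x)] term
        d = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (1.0 - d) * x
        gb += 1.0 - d
    for x in q_samples:                      # E_q[log(1 - D(x))] term
        d = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw -= d * x
        gb -= d
    n = len(p_samples) + len(q_samples)
    w += lr * gw / n
    b += lr * gb / n

print(round(w, 2), round(b, 2))  # approaches (-2, 2) up to sampling noise
```

The trained logit w·x + b approximates log p(x) − log q(x); adversarial training uses this learned signal in place of an explicit likelihood, which is what lets it scale to models where Monte Carlo or variational machinery would be awkward.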
This paper looks into the evidence within the neurobiology and neuropsychiatry of the human brain, aiming at developing a generative adversarial framework for approximate inference in the hallucinating brain. In Section II, we briefly review the idea of the GAN as a preliminary. In Section III, we point out the relevant evidence within the mechanism of visual hallucinations. We then develop our framework for visual hallucinations in Section IV. Finally, we discuss the challenges of this framework in Section V.

II. GAN IN BRIEF
A generative adversarial network (GAN) is a generative model in which a generator network (GN) and a discriminator network (DN) contest with each other in an adversarial setting (Fig. 1). In this setting, the DN and the GN play a two-player minimax game. GANs can be used for both semi-supervised and unsupervised learning [13]. The common analogy for a GAN is to think of the GN as an art forger and the DN as an art expert. The forger creates forgeries in order to produce realistic images, and the expert receives both forgeries and real (authentic) images and aims to tell them apart. Both are trained simultaneously and in competition with each other, as shown in Fig. 1.

In the language of statistical learning, on the discriminator side, the DN has a training set consisting of samples drawn from the distribution p_data and learns to represent an estimate of this distribution. As a result, the DN's task is to classify a given input as real
or fake. On the generator side, the GN learns to map noise variables z onto samples that look as genuine as possible, where the noise variables follow the prior distribution p_z(z). In this way, the GN and the DN contest in a two-player minimax game: the DN intends to maximize the probability of distinguishing between the real samples and those generated by the GN, while the GN aims to minimize the probability of its fake samples being detected by the DN. The corresponding objective function can be written as:

    min_G max_D  E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]    (1)

Indeed, with this ability to generate synthesized data, GANs come to our aid in many applications, such as super-resolution, image caption generation, and data imputation, in which the lack of sufficient real data has been a challenge. In this paper, however, we benefit from GANs from a modeling perspective. In particular, we take advantage of the GAN adversarial framework as a basis for modeling visual hallucinations. In the next section, we briefly review what hallucination refers to in view of the brain's neurology.

Figure 2. Functional anatomy of a healthy human brain with regard to vision.

III. HALLUCINATION
In a healthy brain, when a human sees an object, several brain areas interact together, and it is as a result of these interactions between different areas of the brain that the human perceives the object. For example, Fig. 2 shows the functional anatomy of a healthy human brain with regard to vision. As shown in the figure, information passes from the retina via the optic nerve and optic tract to the lateral geniculate nucleus (LGN) in the thalamus. From there, the signals project via the optic radiation to the primary visual cortex, whose cells process simple local attributes such as the orientation of lines and edges. From the primary visual cortex, information is organized into two parallel hierarchical processing streams [4]:

1) The ventral stream, which identifies the features of objects and passes them from the primary visual cortex to the inferior temporal cortex.
2) The dorsal stream, which processes spatial relations between objects and projects through the primary visual cortex to the superior temporal and parietal cortices.

Finally, the prefrontal cortex areas (such as the inferior frontal gyrus and the medial prefrontal cortex) analyze the data received from the other areas from real and fake points of view.

If the connectivity between any of the above brain areas is disrupted, humans cannot understand the object or may perceive it falsely. A relatively common form of memory distortion arises when individuals must discriminate items they
have seen from those they have imagined (reality monitoring) [14]. In some neurological diseases, individuals cannot discriminate whether an item was imagined or perceived. In this regard, hallucinations are defined as the unpredictable experience of perceptions without corresponding sources in the external world [15].

Now, in order to model the interaction of different brain areas with regard to hallucinations, we look into the known or suggested causes of the incidence of hallucinations. In particular, some studies show that hyperdopaminergic activity in the hippocampus produces hallucinations in schizophrenia [16], [17]. Also, a grey matter volume reduction is seen in Parkinson's disease patients with visual hallucinations, involving occipito-parietal areas associated with visual functions [18]. Hippocampal-region dysfunction and abnormalities in GABA (γ-aminobutyric acid) and DA (dopamine) function are seen to play a role in causing this disease [19]. An abnormal cortical dopamine D1/D2 activation ratio may be related to altered GABA and glutamate transmission [20].

In order to model hallucination, we consider the areas of the brain involved in hallucination, according to previous relevant studies [4], [17]. Visual hallucinations in Parkinson's disease are caused by overactivity of the Default Mode Network (DMN) and Ventral Attention Network (VAN), and underactivity of the Dorsal Attention Network (DAN) [4]. The VAN mediates the switch between the DAN and the DMN. Overactivity of the DMN and VAN reinforces false images, which the DAN fails to check when it is underactive [4]. Moreover, in functional neuroimaging studies, patients with visual hallucinations showed decreased cerebral activation in occipital, parietal, and temporoparietal regions, and increased frontal activation in the region of the frontal eye fields [21]. It is important to note that brain connections are not static but dynamic: they change all the time.
Given the aforementioned areas involved in hallucinations, and the effect of neurotransmitters on the connectivity between different areas of the brain, one can conclude that an imbalance between dopamine, acetylcholine, and other neurotransmitters is involved in the pathogenesis of visual hallucinations. Inspired by all of the above, in Section IV we present a theoretical GAN-based generative model for hallucinations, which highlights the functional importance of the brain areas, their connections, and the neurotransmitters.

IV. MODELING HALLUCINATION WITH GAN

This section presents a model for hallucination in the framework of generative models. Individuals use a number of different types of retrieved information (e.g., perceptual detail, information about cognitive operations) to determine whether an item was imagined or perceived. As explained in the previous section, a breakdown in the connectivity of neural networks and the dysfunction of some brain areas are known to result in visual hallucinations. In this condition, some brain areas, especially the occipital lobe, the visual cortex, and the parietal area, change their mechanisms. Specifically, they process imperfect visual input data and send their output to other areas. This mimics the role of the GN in a GAN, trying to alter the visual input data in order to deceive the other areas responsible for discriminating between reality and imagination (resembling the DN in the GAN setup). In particular, some cortical areas, especially the prefrontal cortex and the inferior frontal gyrus, process the input to determine whether an item was imagined or perceived. As mentioned in Section III, perturbations in some neurotransmitters, especially dopamine, impact the functionality of these areas. As a result, these areas cannot correctly classify the input as imagined or perceived. This imperfect functionality thus initiates a contest between the distinguishing region and the falsifying region, which function in an adversarial setup.

Putting the two aforementioned sides together, the adversarial interaction between the mentioned areas of the brain can be viewed as a GAN. Table I summarizes the correspondence between the elements of the hallucinating brain and their counterparts within the relevant GAN model. Consequently, the hallucinating human brain's vision system resembles a GAN [5]. The generative adversarial perspective, unlike Bayesian models, suggests a broad hypothesis about the origin of hallucination content (via an abnormal generator), just as it does for delusions.
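This correspondence can be made concrete with a toy simulation. The sketch below is our own illustration, not part of the paper's model: the one-dimensional "percept" distributions and the additive log-odds bias are hypothetical choices. Veridical percepts are drawn from p_data = N(0, 1) and imagined content from p_g = N(1.5, 1); the reality monitor is the GAN-optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)), and a neurotransmitter-imbalance-like impairment is modeled as a bias shifting its log-odds toward "real".

```python
import math, random

random.seed(1)

# Toy illustration (ours, not from the paper): "percepts" are 1-D numbers.
# Veridical input follows p_data = N(0, 1); the generator side (occipital,
# visual-cortex, and parietal areas) emits imagined content from p_g = N(1.5, 1).

def pdf(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def monitor(x, bias=0.0):
    """Reality-monitor analogue (prefrontal cortex / inferior frontal gyrus):
    the GAN-optimal discriminator D*(x) = p_data / (p_data + p_g), with `bias`
    added to its log-odds as a hypothetical stand-in for impaired function."""
    logit = math.log(pdf(x, 0.0)) - math.log(pdf(x, 1.5)) + bias
    return 1.0 / (1.0 + math.exp(-logit))

def imagined_accepted_as_real(bias, n=20000):
    """Fraction of generated ("imagined") percepts the monitor labels real."""
    return sum(monitor(random.gauss(1.5, 1.0), bias) > 0.5 for _ in range(n)) / n

healthy = imagined_accepted_as_real(bias=0.0)
impaired = imagined_accepted_as_real(bias=3.0)
print(healthy, impaired)  # the biased monitor accepts far more imagined input
```

With zero bias, only a minority of imagined percepts pass as real; biasing the monitor's log-odds makes most imagined content be accepted as real, which is the adversarial failure mode this section associates with visual hallucination.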
The GN formalizes the functionality of the occipital lobe, visual cortex, and parietal area in the hallucinating brain. Likewise, the DN directly formalizes the functionality of the prefrontal cortex and the inferior frontal gyrus, together with the ideas about reality monitoring that have been applied to hallucinations [10].

Table I
GAN AND BRAIN WITH HALLUCINATION

Attribute               | GAN               | Brain with Hallucination
------------------------|-------------------|--------------------------------------------------
Generator               | Neural network    | Occipital lobe, visual cortex, and parietal area
Discriminator           | Neural network    | Prefrontal cortex and inferior frontal gyrus
Input of discriminator  | Images            | Signal data
Output of discriminator | Fake or real      | Imagination or real
Input of generator      | Noise             | Nothing or noise
Output of generator     | Fake image        | Imagination
Neuron                  | Artificial neuron | Interneurons and pyramidal neurons

V. DISCUSSION
In this paper, we explored the neurobiology of hallucinations from a modeling perspective. In this regard, we developed a generative adversarial framework for modeling visual hallucinations. Specifically, we demonstrated the mappings between the areas of a hallucinating brain and the elements of a GAN. On the neurological side, dopamine is critical for reinforcing actions, consistent with behavioral learning theories, while several other neuromodulators are implicated in creating new memories. Neurotransmitters are therefore vital for the brain areas to react in concert. Any perturbation in the functioning of the neurotransmitters, such as that in visual hallucinations, changes the mechanisms of different brain areas. This leads to an adversarial mechanism among the responsible brain areas. Focusing on this phenomenon, the present study raises the intriguing possibility that the areas of a hallucinating brain interact with each other through an adversarial mechanism that can be modeled by a GAN.

This is of course a first step, and questions on the role of imagination in this setup remain to be explored. Specifically, questions such as how imagination can become involved in learning (imagination-based learning), and in the modeled adversarial interactions, are yet to be answered in future research. Adversarially learned inference [22] can be used as one particular approach in such future studies. In particular, adversarially learned inference uses imagination to drive learning, exemplifying a broader class of imagination-based learning models that have been studied in cognitive science [10]. Another broad issue concerns how to evaluate the performance of the model and check the functional and structural constraints. A further interesting direction for future work is therefore to seek a suitable evaluation method, which would allow for model validation as an important step.
Finally, the possibility of generalizing the proposed adversarial framework to other types of hallucination would also be of interest.

VI. CONCLUSION

In the context of modeling functions of the human brain, we presented a model for the hallucinating brain. Focusing on visual hallucinations and some of their so-far-known neurological causes, we characterized an adversarial mechanism between different areas of the brain. We then showed how this adversarial setup can be modeled by a GAN. The proposed model can be viewed as a first step toward an addendum to the results of [10], providing evidence on how the idea of the generative adversarial brain can be extended to hallucinations as well.

REFERENCES

[1] D. A. Berg, L. Belnoue, H. Song, and A. Simon, “Neurotransmitter-mediated control of neurogenesis in the adult vertebrate brain,” Development, vol. 140, no. 12, pp. 2548–2561, 2013.
[2] D. Purves, G. J. Augustine, D. Fitzpatrick, W. C. Hall, A.-S. LaMantia, J. O. McNamara, and S. Williams, “Neuroscience, vol. 3,” 2004.
[3] S. Ali, M. Patel, J. Avenido, S. Jabeen, W. J. Riley, and M. MBA, “Hallucinations: Common features and causes,” Current Psychiatry, vol. 10, no. 11, pp. 22–29, 2011.
[4] R. S. Weil, A. E. Schrag, J. D. Warren, S. J. Crutch, A. J. Lees, and H. R. Morris, “Visual dysfunction in Parkinson’s disease,” Brain, vol. 139, no. 11, pp. 2827–2843, 2016.
[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[6] S. J. Gershman and J. M. Beck, “Complex probabilistic inference,” Computational Models of Brain and Behavior, vol. 453, 2017.
[7] E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis of structural and functional systems,” Nature Reviews Neuroscience, vol. 10, no. 3, pp. 186–198, 2009.
[8] M. Zareh, M. H. Manshaei, M. Adibi, and M. A. Montazeri, “Neurons and astrocytes interaction in neuronal network: A game-theoretic approach,” Journal of Theoretical Biology, vol. 470, pp. 76–89, 2019.
[9] D. L. Yamins and J. J. DiCarlo, “Using goal-driven deep learning models to understand sensory cortex,” Nature Neuroscience, vol. 19, no. 3, pp. 356–365, 2016.
[10] S. J. Gershman, “The generative adversarial brain,” Frontiers in Artificial Intelligence, vol. 2, p. 18, 2019.
[11] K. Friston, “Learning and inference in the brain,” Neural Networks, vol. 16, no. 9, pp. 1325–1352, 2003.
[12] Y. Dandi, H. Bharadhwaj, A. Kumar, and P. Rai, “Generalized adversarially learned inference,” arXiv preprint arXiv:2006.08089, 2020.
[13] L. Lan, L. You, Z. Zhang, Z. Fan, W. Zhao, N. Zeng, Y. Chen, and X. Zhou, “Generative adversarial networks and its applications in biomedical informatics,” Frontiers in Public Health, vol. 8, p. 164, 2020.
[14] E. A. Kensinger and D. L. Schacter, “Neural processes underlying memory attribution on a reality-monitoring task,” Cerebral Cortex, vol. 16, no. 8, pp. 1126–1133, 2006.
[15] R. M. Bilder, “The neuroscience of hallucinations,” 2013.
[16] D. A. Silbersweig, E. Stern, C. Frith, C. Cahill, A. Holmes, S. Grootoonk, J. Seaward, P. McKenna, S. E. Chua, L. Schnorr et al., “A functional neuroanatomy of hallucinations in schizophrenia,” Nature, vol. 378, no. 6553, pp. 176–179, 1995.
[17] A. A. Moustafa, J. K. Garami, J. Mahlberg, J. Golembieski, S. Keri, B. Misiak, and D. Frydecka, “Cognitive function in schizophrenia: conflicting findings and future directions,” Reviews in the Neurosciences, vol. 27, no. 4, pp. 435–448, 2016.
[18] B. Ramirez-Ruiz, M.-J. Martí, E. Tolosa, M. Gimenez, N. Bargallo, F. Valldeoriola, and C. Junque, “Cerebral atrophy in Parkinson’s disease patients with visual hallucinations,” European Journal of Neurology, vol. 14, no. 7, pp. 750–756, 2007.
[19] A. A. Moustafa, S. Keri, M. M. Herzallah, C. E. Myers, and M. A. Gluck, “A neural model of hippocampal–striatal interactions in associative learning and transfer generalization in various neurological and psychiatric patients,” Brain and Cognition, vol. 74, no. 2, pp. 132–144, 2010.
[20] N. Császár, G. Kapócs, and I. Bókkon, “A possible key role of vision in the development of schizophrenia,” Reviews in the Neurosciences, vol. 30, no. 4, pp. 359–379, 2019.
[21] N. J. Diederich, G. Fénelon, G. Stebbins, and C. G. Goetz, “Hallucinations in Parkinson disease,” Nature Reviews Neurology, vol. 5, no. 6, p. 331, 2009.
[22] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville, “Adversarially learned inference,” arXiv preprint arXiv:1606.00704, 2016.