Haiguang Wen
Purdue University
Publications
Featured research published by Haiguang Wen.
Cerebral Cortex | 2017
Haiguang Wen; Junxing Shi; Yizhen Zhang; Kun-Han Lu; Jiayue Cao; Zhongming Liu
A convolutional neural network (CNN) driven by image recognition has been shown to explain cortical responses to static pictures in ventral-stream areas. Here, we further showed that such a CNN could reliably predict and decode functional magnetic resonance imaging (fMRI) data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated to describe the bidirectional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream but also the dorsal stream, albeit to a lesser degree; single-voxel responses were visualized as the specific pixel patterns that drove them, revealing the distinct representations of individual cortical locations; and cortical activation was synthesized from natural images at high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and they highlight the value of deep learning as an all-in-one model of the visual cortex for understanding and decoding natural vision.
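The voxel-wise encoding models described above can be illustrated with a minimal ridge-regression sketch; the CNN features, responses, and dimensions below are simulated stand-ins, not the paper's actual data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: T time points, k CNN features, v voxels.
T, k, v = 200, 10, 5
F = rng.standard_normal((T, k))                      # CNN feature time series
W_true = rng.standard_normal((k, v))                 # ground-truth weights
Y = F @ W_true + 0.1 * rng.standard_normal((T, v))   # simulated fMRI responses

# Encoding model: ridge regression from features to each voxel (closed form).
lam = 1.0
W = np.linalg.solve(F.T @ F + lam * np.eye(k), F.T @ Y)

# Evaluate on held-out data: predict responses, then correlate per voxel.
F_test = rng.standard_normal((50, k))
Y_test = F_test @ W_true + 0.1 * rng.standard_normal((50, v))
Y_pred = F_test @ W
r = [np.corrcoef(Y_pred[:, i], Y_test[:, i])[0, 1] for i in range(v)]
mean_r = float(np.mean(r))
```

A decoding model reverses the mapping, regressing feature representations on fMRI signals; the same closed form applies with the roles of features and responses swapped.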
NeuroImage | 2017
Lauren Marussich; Kun-Han Lu; Haiguang Wen; Zhongming Liu
Despite the wide applications of functional magnetic resonance imaging (fMRI) to mapping brain activation and connectivity in cortical gray matter, it has rarely been used to study white-matter function. In this study, we investigated the spatiotemporal characteristics of fMRI data within the white matter acquired from humans both in the resting state and while watching a naturalistic movie. Using independent component analysis (ICA) and hierarchical clustering, resting-state fMRI data in the white matter were de-noised and decomposed into spatially independent components, which were further assembled into hierarchically organized axonal fiber bundles. Interestingly, such components were partly reorganized during natural vision. Relative to the resting state, the visual task specifically induced a stronger degree of temporal coherence within the optic radiations, as well as significant correlations between the optic radiations and multiple cortical visual networks. Therefore, fMRI contains rich functional information about the activity and connectivity within white matter at rest and during tasks, challenging the conventional practice of treating white-matter signals as noise or artifacts.
Highlights:
- ICA applied to white-matter fMRI signals reveals reproducible and hierarchical patterns.
- White-matter ICA components are mostly preserved, but are in part distinct between the resting state and the task state.
- The distinction is specific to the axonal fibers involved in task execution.
- White-matter fMRI data are not noise or artifacts, but instead are signals of likely neuronal origin.
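The within-bundle temporal coherence mentioned above can be quantified, in the simplest case, as the mean pairwise correlation among voxel time series; the data below are simulated and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_coherence(signals):
    """Mean pairwise correlation across voxel time series (rows) --
    one simple index of temporal coherence within a fiber bundle."""
    c = np.corrcoef(signals)
    upper = c[np.triu_indices_from(c, k=1)]
    return float(upper.mean())

# Simulated bundle: during a "task", voxels share a common driven component;
# at "rest", the same voxels fluctuate independently (illustrative only).
T, n = 300, 8
shared = rng.standard_normal(T)
rest = rng.standard_normal((n, T))
task = 0.7 * shared + 0.7 * rng.standard_normal((n, T))

coh_rest = temporal_coherence(rest)
coh_task = temporal_coherence(task)
```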
Brain Topography | 2016
Haiguang Wen; Zhongming Liu
Neurophysiological field-potential signals consist of both arrhythmic and rhythmic patterns, indicative of fractal and oscillatory dynamics that likely arise from distinct mechanisms. Here, we present a new method, irregular-resampling auto-spectral analysis (IRASA), to separate the fractal and oscillatory components in the power spectrum of a neurophysiological signal according to their distinct temporal and spectral characteristics. In this method, we irregularly resampled the neural signal by a set of non-integer factors and statistically summarized the auto-power spectra of the resampled signals to separate the fractal component from the oscillatory component in the frequency domain. We tested this method on simulated data and demonstrated that IRASA could robustly separate the fractal component from the oscillatory component. In addition, applications of IRASA to macaque electrocorticography and human magnetoencephalography data revealed a greater power-law exponent of fractal dynamics during sleep compared to wakefulness. The temporal fluctuation in the broadband power of the fractal component revealed characteristic dynamics within and across the eyes-closed, eyes-open, and sleep states. These results demonstrate the efficacy and potential applications of this method in analyzing electrophysiological signatures of large-scale neural circuit activity. We expect that the proposed method, or future variations of it, will allow for more specific characterization of the differential contributions of oscillatory and fractal dynamics to the distributed neural processes underlying various brain functions.
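The core of IRASA can be sketched in a few lines of NumPy. This is a simplified illustration of the idea (fractal dynamics are self-similar under non-integer resampling, while an oscillatory peak shifts away from its frequency); the resampling factors, test signal, and interpolation-based resampling are assumptions, not the paper's exact implementation:

```python
import numpy as np

def spectrum(x, fs):
    """Auto-power spectrum via the real FFT."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return freqs, power

def irasa(x, fs, hset=(1.1, 1.3, 1.5, 1.7, 1.9)):
    """Simplified IRASA: resample by h and 1/h, geometric-mean the paired
    spectra (which suppresses oscillatory peaks but preserves the fractal
    power law), then take the median across factors as the fractal estimate."""
    n = len(x)
    t = np.arange(n)
    freqs, p_orig = spectrum(x, fs)
    estimates = []
    for h in hset:
        x_up = np.interp(np.arange(0, n - 1, 1 / h), t, x)  # resample by h
        x_dn = np.interp(np.arange(0, n - 1, h), t, x)      # resample by 1/h
        fu, pu = spectrum(x_up, fs)
        fd, pd = spectrum(x_dn, fs)
        gm = np.sqrt(np.interp(freqs, fu, pu) * np.interp(freqs, fd, pd))
        estimates.append(gm)
    fractal = np.median(estimates, axis=0)
    oscillatory = p_orig - fractal
    return freqs, fractal, oscillatory

# Toy signal: 10 Hz oscillation plus white noise, sampled at 100 Hz.
fs, n = 100.0, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(7)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n)
freqs, fractal, osc = irasa(x, fs)
peak = int(np.argmin(np.abs(freqs - 10)))
```

At the 10 Hz bin, the oscillatory component should dominate the fractal estimate, since resampling moves the peak to 10/h and 10h Hz in the paired spectra.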
The Journal of Neuroscience | 2016
Haiguang Wen; Zhongming Liu
Spontaneous activity observed with resting-state fMRI is widely used to uncover the brain's intrinsic functional networks in health and disease. Although many networks appear modular and specific, global and nonspecific fMRI fluctuations also exist; they pose a challenge, but also present an opportunity, for characterizing and understanding brain networks. Here, we used a multimodal approach to investigate the neural correlates of the global fMRI signal in the resting state. Like fMRI, resting-state power fluctuations of broadband, arrhythmic (or scale-free) macaque electrocorticography and human magnetoencephalography activity were correlated globally. The power fluctuations of scale-free human electroencephalography (EEG) were coupled with the global component of simultaneously acquired resting-state fMRI, with the global hemodynamic change lagging the broadband spectral change of EEG by about 5 s. The levels of global and nonspecific fluctuation and synchronization in scale-free population activity also varied across, and depended on, arousal states. Together, these results suggest that the neural origin of global resting-state fMRI activity is the broadband power fluctuation in scale-free population activity observable with macroscopic electrical or magnetic recordings. Moreover, the global fluctuation in neurophysiological and hemodynamic activity is likely modulated through diffuse neuromodulation pathways that govern arousal states and vigilance levels. SIGNIFICANCE STATEMENT This study provides new insights into the neural origin of resting-state fMRI. The results demonstrate that the broadband power fluctuation of scale-free electrophysiology is globally synchronized and directly coupled with the global component of spontaneous fMRI signals, in contrast to the modularly synchronized fluctuations in oscillatory neural activity. These findings lead to a new hypothesis: scale-free and oscillatory neural processes account, respectively, for the global and modular patterns of functional connectivity observed with resting-state fMRI.
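The ~5 s hemodynamic lag reported above is the kind of quantity a lagged cross-correlation recovers; the sketch below uses synthetic signals with a built-in 5-sample delay, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def best_lag(x, y, max_lag):
    """Lag (in samples) at which y best correlates with x shifted forward --
    a crude estimator of a fixed hemodynamic delay."""
    corrs = [np.corrcoef(x[:len(x) - l], y[l:])[0, 1]
             for l in range(max_lag + 1)]
    return int(np.argmax(corrs))

# Simulated: the "global fMRI signal" is a delayed, noisy copy of the
# "EEG broadband power" time course (all names illustrative).
eeg_power = rng.standard_normal(500)
delay = 5
fmri_global = np.roll(eeg_power, delay) + 0.2 * rng.standard_normal(500)

lag = best_lag(eeg_power, fmri_global, max_lag=10)
```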
PLOS ONE | 2016
Kun-Han Lu; Shao-Chin Hung; Haiguang Wen; Lauren Marussich; Zhongming Liu
Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activity that is highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that the widely distributed and highly reproducible fMRI responses were attributable primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked a significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that varying gaze behavior affected cortical responses in the peripheral part of V1 and in the oculomotor network, with only minor effects on response reproducibility over extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly drove reproducible fMRI responses across widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals' gaze behavior should be treated as potential confounding factors in order to properly interpret cortical activity that supports natural vision.
Human Brain Mapping | 2018
Junxing Shi; Haiguang Wen; Yizhen Zhang; Kuan Han; Zhongming Liu
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to its different layers, allowing spatial representations to be remembered and accumulated over time. The extended model, a recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, the dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.
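The key architectural move, adding recurrent connections so that spatial features are accumulated over time, can be sketched minimally as follows; the layer sizes and random weights are hypothetical, and this is a generic recurrent unit rather than the paper's trained RNN:

```python
import numpy as np

rng = np.random.default_rng(3)

def recurrent_layer(frames_feat, W_in, W_rec):
    """Minimal recurrent unit on top of per-frame (CNN-like) features:
    the hidden state mixes the current frame with the previous state,
    giving the layer a memory of past frames."""
    h = np.zeros(W_rec.shape[0])
    states = []
    for x in frames_feat:
        h = np.tanh(W_in @ x + W_rec @ h)  # recurrence carries the past
        states.append(h.copy())
    return np.array(states)

# Hypothetical sizes: 20 frames, 16 features per frame, 8 hidden units.
feats = rng.standard_normal((20, 16))
W_in = 0.1 * rng.standard_normal((8, 16))
W_rec = 0.5 * rng.standard_normal((8, 8))

states = recurrent_layer(feats, W_in, W_rec)
memoryless = recurrent_layer(feats, W_in, np.zeros((8, 8)))  # no recurrence
```

With `W_rec` zeroed, the layer reduces to a feedforward (CNN-like) transform of each frame, which is the contrast the paper draws.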
bioRxiv | 2017
Kuan Han; Haiguang Wen; Junxing Shi; Kun-Han Lu; Yizhen Zhang; Zhongming Liu
Goal-driven, feedforward-only convolutional neural networks (CNNs) have been shown to predict and decode cortical responses to natural images or videos. Here, we explored an alternative deep neural network, the variational auto-encoder (VAE), as a computational model of the visual cortex. We trained a VAE with a five-layer encoder and a five-layer decoder to learn visual representations from a diverse set of unlabeled images. Inspired by the "free-energy" principle in neuroscience, we modeled the brain's bottom-up and top-down pathways using the VAE's encoder and decoder, respectively. Following these conceptual relationships, we used the VAE to predict or decode cortical activity observed with functional magnetic resonance imaging (fMRI) from three human subjects passively watching natural videos. Compared to the CNN, the VAE yielded relatively lower accuracy in predicting the fMRI responses to the video stimuli, especially for higher-order ventral visual areas. However, the VAE offered a more convenient strategy for decoding the fMRI activity to reconstruct the video input: first converting the fMRI activity to the VAE's latent variables, and then converting the latent variables to the reconstructed video frames through the VAE's decoder. This strategy was more advantageous than alternative decoding methods, e.g., partial least squares regression, in reconstructing both the spatial structure and the color of the visual input. Findings from this study support the notion that the brain, at least in part, bears a generative model of the visual world.
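The two-stage decoding strategy described above (fMRI → latent variables → decoder → frames) can be sketched with a stand-in linear "decoder"; every size, weight, and signal below is simulated, and the real VAE decoder is of course nonlinear and learned:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sizes: v voxels, z latent dims, d pixels, T training samples.
v, z, d, T = 40, 8, 64, 300

decoder_W = rng.standard_normal((d, z))  # stands in for the trained VAE decoder
def decode(latent):
    return np.tanh(decoder_W @ latent)

# Simulated training data: latent codes and the fMRI patterns they evoke.
Z = rng.standard_normal((T, z))
B_true = rng.standard_normal((v, z))
fmri = Z @ B_true.T + 0.1 * rng.standard_normal((T, v))

# Stage 1: learn a linear map from fMRI activity to the latent variables.
M, *_ = np.linalg.lstsq(fmri, Z, rcond=None)

# Stage 2: push a new fMRI pattern through the map, then the decoder.
z_new = rng.standard_normal(z)
y_new = z_new @ B_true.T
frame = decode(y_new @ M)          # reconstruction from fMRI
target = decode(z_new)             # reconstruction from the true latent
similarity = float(np.corrcoef(frame, target)[0, 1])
```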
Scientific Reports | 2017
Yizhen Zhang; Gang Chen; Haiguang Wen; Kun-Han Lu; Zhongming Liu
Musical imagery is the human experience of imagining music without actually hearing it. The neural basis of this mental ability is unclear, especially for musicians capable of engaging in accurate and vivid musical imagery. Here, we created a visualization of an 8-minute symphony as a silent movie and used it as a real-time cue for musicians to continuously imagine the music during repeated and synchronized sessions of functional magnetic resonance imaging (fMRI). The activations and networks evoked by musical imagery were compared with those elicited when the subjects directly listened to the same music. Musical imagery and musical perception resulted in overlapping activations at the anterolateral belt and Wernicke's area, where the responses were correlated with the auditory features of the music. Whereas Wernicke's area interacted within the intrinsic auditory network during musical perception, it was involved in much more complex networks during musical imagery, showing positive correlations with the dorsal attention network and the motor-control network and negative correlations with the default-mode network. Our results highlight the important role of Wernicke's area in forming vivid musical imagery through bilateral and anti-correlated network interactions, challenging the conventional view of segregated and lateralized processing of music versus language.
Scientific Reports | 2018
Haiguang Wen; Junxing Shi; Wei Chen; Zhongming Liu
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped the human cortical representations of 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. Across the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. At a finer scale specific to each cluster, object representations revealed sub-clusters supporting further categorization. This hierarchical clustering of category representations was contributed mostly by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy for characterizing the cortical organization and representations of visual features for rapid categorization.
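The hierarchical clustering of category representations can be illustrated with a toy average-linkage procedure on simulated response patterns; the two recovered groups stand in, loosely, for clusters like biological versus non-biological objects, and nothing below reproduces the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

def agglomerate(patterns, n_clusters):
    """Toy average-linkage agglomerative clustering of response patterns
    (rows) using correlation distance."""
    clusters = [[i] for i in range(len(patterns))]
    dist = 1 - np.corrcoef(patterns)
    def cdist(a, b):
        return float(np.mean([dist[i, j] for i in a for j in b]))
    while len(clusters) > n_clusters:
        # Merge the closest pair of clusters.
        pairs = [(cdist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two simulated "categories": patterns within a group share a common template.
base_a, base_b = rng.standard_normal((2, 50))
group_a = base_a + 0.3 * rng.standard_normal((3, 50))
group_b = base_b + 0.3 * rng.standard_normal((3, 50))

clusters = agglomerate(np.vstack([group_a, group_b]), n_clusters=2)
result = sorted(sorted(c) for c in clusters)
```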
NeuroImage | 2018
Haiguang Wen; Junxing Shi; Wei Chen; Zhongming Liu
Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (i.e., building encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement has limited prior studies to a few subjects, making it difficult to generalize findings across subjects or to a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as priors and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. As a proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. The results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features.
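The Bayesian transfer idea, using models trained on other subjects as a prior and refining them with limited target-subject data, can be sketched as ridge regression shrunk toward a prior weight vector; the sizes, noise level, and prior below are illustrative assumptions, not the paper's actual inference procedure:

```python
import numpy as np

rng = np.random.default_rng(6)

k = 20
w_pop = rng.standard_normal(k)                    # weights from other subjects
w_target = w_pop + 0.1 * rng.standard_normal(k)   # target subject is similar

def fit(X, y, lam, w_prior):
    """Ridge regression shrunk toward a prior weight vector -- a simple
    Bayesian (MAP) way to transfer an encoding model to a new subject."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat),
                           X.T @ y + lam * w_prior)

# Very little data from the target subject (fewer samples than features).
X = rng.standard_normal((15, k))
y = X @ w_target + 0.3 * rng.standard_normal(15)

w_with_prior = fit(X, y, lam=5.0, w_prior=w_pop)
w_no_prior = fit(X, y, lam=5.0, w_prior=np.zeros(k))

err_prior = float(np.linalg.norm(w_with_prior - w_target))
err_plain = float(np.linalg.norm(w_no_prior - w_target))
```

With limited target data, shrinking toward the population prior should recover the target subject's weights more accurately than shrinking toward zero.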