
Publication


Featured research published by Huihui Zhou.


Neuron | 2009

Millisecond-Timescale Optical Control of Neural Dynamics in the Nonhuman Primate Brain

Xue Han; Xiaofeng Qian; Jacob Bernstein; Huihui Zhou; Giovanni Talei Franzesi; Patrick Stern; Roderick T. Bronson; Ann M. Graybiel; Robert Desimone; Edward S. Boyden

To understand how brain states and behaviors are generated by neural circuits, it would be useful to be able to perturb precisely the activity of specific cell types and pathways in the nonhuman primate nervous system. We used lentivirus to target the light-activated cation channel channelrhodopsin-2 (ChR2) specifically to excitatory neurons of the macaque frontal cortex. Using a laser-coupled optical fiber in conjunction with a recording microelectrode, we showed that activation of excitatory neurons resulted in well-timed excitatory and suppressive influences on neocortical neural networks. ChR2 was safely expressed, and could mediate optical neuromodulation, in primate neocortex over many months. These findings highlight a methodology for investigating the causal role of specific cell types in nonhuman primate neural computation, cognition, and behavior, and open up the possibility of a new generation of ultraprecise neurological and psychiatric therapeutics via cell-type-specific optical neural control prosthetics.


Frontiers in Systems Neuroscience | 2011

A high-light sensitivity optical neural silencer: development and application to optogenetic control of non-human primate cortex.

Xue Han; Brian Y. Chow; Huihui Zhou; Nathan Cao Klapoetke; Amy S. Chuong; Reza Rajimehr; Aimei Yang; Michael V. Baratta; Jonathan Winkle; Robert Desimone; Edward S. Boyden

Technologies for silencing the electrical activity of genetically targeted neurons in the brain are important for assessing the contribution of specific cell types and pathways toward behaviors and pathologies. Recently we found that archaerhodopsin-3 from Halorubrum sodomense (Arch), a light-driven outward proton pump, when genetically expressed in neurons, enables them to be powerfully, transiently, and repeatedly silenced in response to pulses of light. Because of the impressive characteristics of Arch, we explored the optogenetic utility of opsins with high sequence homology to Arch, from archaea of the Halorubrum genus. We found that the archaerhodopsin from Halorubrum strain TP009, which we named ArchT, could mediate photocurrents of similar maximum amplitude to those of Arch (∼900 pA in vitro), but with a >3-fold improvement in light sensitivity over Arch, most notably in the optogenetic range of 1–10 mW/mm², equating to a >2× increase in brain tissue volume addressed by a typical single optical fiber. Upon expression in mouse or rhesus macaque cortical neurons, ArchT expressed well on neuronal membranes, including excellent trafficking for long distances down neuronal axons. The high light sensitivity prompted us to explore ArchT use in the cortex of the rhesus macaque. Optical perturbation of ArchT-expressing neurons in the brain of an awake rhesus macaque resulted in a rapid and complete (∼100%) silencing of most recorded cells, with suppressed cells achieving a median firing rate of 0 spikes/s upon illumination. A small population of neurons showed increased firing rates at long latencies following the onset of light stimulation, suggesting the existence of a mechanism of network-level neural activity balancing. The powerful net suppression of activity suggests that ArchT silencing technology might be of great use not only in the causal analysis of neural circuits but also in therapeutic applications.


Progress in Brain Research | 2009

Long-range neural coupling through synchronization with attention.

Georgia G. Gregoriou; Stephen J. Gotts; Huihui Zhou; Robert Desimone

In a crowded visual scene, we typically employ attention to select stimuli that are behaviorally relevant. Two likely cortical sources of top-down attentional feedback to cortical visual areas are the prefrontal (PFC) and posterior parietal (PPC) cortices. Recent neurophysiological studies show that areas in PFC and PPC process signals about the locus of attention earlier than in extrastriate visual areas and are therefore likely to mediate attentional selection. Moreover, attentional selection appears to be mediated in part by neural synchrony between neurons in PFC/PPC and early visual areas, with phase relationships that seem optimal for increasing the impact of the top-down inputs to the visual cortex.


Neuron | 2016

Pulvinar-Cortex Interactions in Vision and Attention

Huihui Zhou; Robert John Schafer; Robert Desimone

The ventro-lateral pulvinar is reciprocally connected with the visual areas of the ventral stream that are important for object recognition. To understand the mechanisms of attentive stimulus processing in this pulvinar-cortex loop, we investigated the interactions between the pulvinar, area V4, and IT cortex in a spatial-attention task. Sensory processing and the influence of attention in the pulvinar appeared to reflect its cortical inputs. However, pulvinar deactivation led to a reduction of attentional effects on firing rates and gamma synchrony in V4, a reduction of sensory-evoked responses and overall gamma coherence within V4, and severe behavioral deficits in the affected portion of the visual field. Conversely, pulvinar deactivation caused an increase in low-frequency cortical oscillations, often associated with inattention or sleep. Thus, cortical interactions with the ventro-lateral pulvinar are necessary for normal attention and sensory processing and for maintaining the cortex in an active state.


Vision Research | 2009

Cognitively directed spatial selection in the frontal eye field in anticipation of visual stimuli to be discriminated

Huihui Zhou; Kirk G. Thompson

Single-neuron activity was recorded in the frontal eye field (FEF) of monkeys trained to perform a difficult luminance discrimination task. The appearance of a cue stimulus informed the monkeys of the locations of two gray luminance stimuli that would appear within 500–1500 ms. The monkeys were rewarded for making a saccade to the brighter of the two luminance stimuli, or, if they were the same luminance, for making a saccade to the cue stimulus. Sixty percent (51/85) of FEF neurons exhibited elevated activity when the cue informed the monkeys that one of the luminance stimuli would appear in their response field (RF). This spatially selective anticipatory activity occurred without any visual stimulus appearing in their RF and was not related to saccade choice or latency. The responses of 27 of the anticipatory neurons (32% of the total sample) were also incompatible with the hypothesis that the activity represents saccade probability, because they did not exhibit elevated activity for the cue stimulus, which was the most probable saccade target. Behaviorally, monkeys exhibited improved perception at cued locations compared with unpredictable locations. These results provide physiological evidence that FEF serves an important role in endogenous spatial attention in addition to its well-known role in saccade production.


International Journal of Neural Systems | 2016

Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

Mengfan Li; Wei Li; Huihui Zhou

Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with distinct features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses indicate that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the meanings of the visual stimuli and help them concentrate more effectively on their mental activities.
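The abstract reports accuracy and information transfer rate (ITR) side by side but does not state how the ITR values were computed. Purely as a point of reference, the sketch below implements the standard Wolpaw ITR formula for an N-choice selection; the class count and selection time in the usage line are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: the standard Wolpaw ITR formula for an
# N-choice BCI. The paper's own ITR computation is not described in
# the abstract; all parameter values below are hypothetical.
import math

def wolpaw_itr(accuracy: float, n_classes: int, seconds_per_selection: float) -> float:
    """Return the information transfer rate in bits per minute."""
    p = accuracy
    bits = math.log2(n_classes)  # bits available per selection
    if 0.0 < p < 1.0:
        # Penalty for imperfect accuracy (Wolpaw definition).
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits * 60.0 / seconds_per_selection

# Hypothetical usage: 93.25% accuracy, 8 selectable targets, 2 s per selection.
print(round(wolpaw_itr(0.9325, 8, 2.0), 2))
```

Under this definition, raising accuracy at a fixed selection time raises the bits conveyed per selection, which is the direction in which the reported accuracy and ITR gains move together.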


Nature Neuroscience | 2017

Corrigendum: Opportunities and challenges in modeling human brain disorders in transgenic primates

Charles Jennings; Rogier Landman; Yang Zhou; Jitendra Sharma; Julia Hyman; J. Anthony Movshon; Zilong Qiu; Angela C. Roberts; Anna W. Roe; Xiaoqin Wang; Huihui Zhou; Liping Wang; Feng Zhang; Robert Desimone; Guoping Feng

Nat. Neurosci. 19, 1123–1130 (2016); published online 26 August 2016; corrected after print 29 August 2016. In the version of this article initially published online, the first author's name appeared as "Charles Jennings", without middle initial; it has been changed to "Charles G Jennings". Another author's name appeared in the author list as "Angela Roberts", also without middle initial; it has been changed to "Angela C Roberts".


Computational Intelligence and Neuroscience | 2017

Object Extraction in Cluttered Environments via a P300-Based IFCE

Xiaoqian Mao; Wei Li; Huidong He; Bin Xian; Ming Zeng; Huihui Zhou; Linwei Niu; Genshe Chen

One of the fundamental issues for robot navigation is to extract an object of interest from an image. The biggest challenges for extracting objects of interest are how to use a machine to model the objects in which a human is interested and how to extract them quickly and reliably under varying illumination conditions. This article develops a novel method for segmenting an object of interest in a cluttered environment by combining a P300-based brain–computer interface (BCI) and an improved fuzzy color extractor (IFCE). The induced P300 potential identifies the corresponding region of interest and provides the target of interest to the IFCE. The classification results not only reflect the user's intent but also deliver the associated seed pixel and fuzzy parameters for extracting the specific objects in which the human is interested. Then, the IFCE is used to extract the corresponding objects. The results show that the IFCE delivers better performance than a backpropagation (BP) network or the traditional FCE. The use of a P300-based IFCE provides a reliable solution for assisting a computer in identifying an object of interest within images taken under varying illumination intensities.
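The abstract describes the pipeline only at a high level: the P300 response selects a region of interest, which supplies a seed pixel and fuzzy parameters, and the IFCE then extracts the object around that seed. As a rough illustration of that last step only, the snippet below shows a generic seed-based fuzzy color extraction by region growing; it is not the authors' IFCE, and the function name, tolerance parameter, and plain RGB distance are assumptions made here for clarity.

```python
# Minimal sketch, not the authors' IFCE: grow a region from a seed pixel,
# keeping neighbors whose RGB color stays within a fuzzy tolerance of the
# seed color. The tolerance and distance measure are illustrative choices.
from collections import deque
import numpy as np

def fuzzy_color_extract(image: np.ndarray, seed: tuple, tolerance: float = 30.0) -> np.ndarray:
    """Return a boolean mask of the region grown from `seed` (row, col)."""
    h, w, _ = image.shape
    seed_color = image[seed].astype(float)
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if np.linalg.norm(image[nr, nc].astype(float) - seed_color) <= tolerance:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```

In the system the abstract describes, the seed pixel and fuzzy parameters would come from the P300 classification rather than being supplied by hand.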


The Journal of Neuroscience | 2003

Foveal Versus Full-Field Visual Stabilization Strategies for Translational and Rotational Head Movements

Dora E. Angelaki; Huihui Zhou; Min Wei


Journal of Neurophysiology | 2002

Motor Scaling By Viewing Distance of Early Visual Motion Signals During Smooth Pursuit

Huihui Zhou; Min Wei; Dora E. Angelaki

Collaboration


Dive into Huihui Zhou's collaborations.

Top Co-Authors

Robert Desimone, National Institutes of Health
Wei Li, Tsinghua University
Edward S. Boyden, Massachusetts Institute of Technology
Mengfan Li, Hebei University of Technology
Aimei Yang, Massachusetts Institute of Technology
Amy S. Chuong, Massachusetts Institute of Technology
Brian Y. Chow, University of Pennsylvania
Dora E. Angelaki, Baylor College of Medicine
Jonathan Winkle, Massachusetts Institute of Technology