
Publication


Featured research published by Jiajuan Liu.


Proceedings of the National Academy of Sciences of the United States of America | 2013

An integrated reweighting theory of perceptual learning

Barbara Anne Dosher; Pamela E. Jeter; Jiajuan Liu; Zhong-Lin Lu

Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system.
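The two-level architecture this abstract describes can be sketched in a few lines of Python. This is an illustrative toy, not the published model: the channel count, learning rate, location names, and initialization below are invented, and the actual integrated reweighting theory uses normalized orientation- and spatial-frequency-tuned representations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, lr = 8, 0.05          # toy channel count and Hebbian learning rate

# One weight vector per retinal location, plus a shared location-invariant one
weights = {
    "loc_A": rng.normal(0, 0.01, n_channels),
    "loc_B": rng.normal(0, 0.01, n_channels),
    "invariant": rng.normal(0, 0.01, n_channels),
}

def decide(activation, location):
    """Decision pools evidence from location-specific and invariant levels."""
    return weights[location] @ activation + weights["invariant"] @ activation

def hebbian_update(activation, location, category):
    """Reweight both levels toward the trained category (+1.0 or -1.0)."""
    weights[location] += lr * category * activation
    weights["invariant"] += lr * category * activation

stim = np.zeros(n_channels)
stim[2] = 1.0                     # stimulus driving one orientation channel
before = decide(stim, "loc_B")
for _ in range(50):
    hebbian_update(stim, "loc_A", category=1.0)   # train only at loc_A
after = decide(stim, "loc_B")
# Transfer: the decision at the untrained loc_B shifts, carried entirely
# by the shared invariant weights; loc_B's own weights never changed
```

In this sketch, location transfer falls out of the shared "invariant" weight vector, matching the abstract's claim that transfer over retinal positions is mediated by location-independent representations.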


Vision Research | 2010

Modeling mechanisms of perceptual learning with augmented Hebbian re-weighting

Zhong-Lin Lu; Jiajuan Liu; Barbara Anne Dosher

Using the external noise plus training paradigm, we have consistently found that two independent mechanisms, stimulus enhancement and external noise exclusion, support perceptual learning in a range of tasks. Here, we show that re-weighting of stable early sensory representations through Hebbian learning (Petrov et al., 2005, 2006) can generate performance patterns that parallel a large range of empirical data: (1) perceptual learning reduced contrast thresholds at all levels of external noise in peripheral orientation identification (Dosher & Lu, 1998, 1999), (2) training with low noise exemplars transferred to performance in high noise, while training with exemplars embedded in high external noise transferred little to performance in low noise (Dosher & Lu, 2005), and (3) pre-training in high external noise only reduced subsequent learning in high external noise, whereas pre-training in zero external noise left very little additional learning in all the external noise conditions (Lu et al., 2006). In the augmented Hebbian re-weighting model (AHRM), perceptual learning strengthens or maintains the connections between the most closely tuned visual channels and a learned categorization structure, while it prunes or reduces inputs from task-irrelevant channels. Reducing the weights on irrelevant channels reduces the contributions of external noise and additive internal noise. Manifestation of stimulus enhancement or external noise exclusion depends on the initial state of internal noise and connection weights at the beginning of a learning task. Both mechanisms reflect re-weighting of stable early sensory representations.
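The pruning mechanism described above can be illustrated with a minimal simulation. This is a sketch, not the published AHRM: the channel count, noise level, and learning rate are invented. Only one channel carries the category signal, so Hebbian updates driven by the feedback-corrected response grow its weight while the noise-only channels' weights stay near zero, excluding the noise they carry from the decision.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, lr = 2000, 0.01
w = rng.normal(0, 0.05, 4)       # channel 0 is task-relevant; 1-3 carry only noise

for _ in range(n_trials):
    category = rng.choice([-1.0, 1.0])
    act = rng.normal(0.0, 0.3, 4)     # external + internal noise in every channel
    act[0] += category                # only channel 0 signals the category
    # Hebbian update; with trial-by-trial feedback the post-synaptic term
    # is the true category (without it, one would use np.tanh(w @ act))
    w += lr * category * act

# The noise-only weights perform an unbiased random walk around zero, so
# their noise is progressively excluded from the decision variable
```

The relevant channel's weight grows steadily because its activation correlates with the post-synaptic term; the irrelevant channels' activations do not, which is the "external noise exclusion" the abstract attributes to reweighting.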


Journal of Vision | 2010

Augmented Hebbian reweighting: interactions between feedback and training accuracy in perceptual learning.

Jiajuan Liu; Zhong-Lin Lu; Barbara Anne Dosher

Feedback plays an interesting role in perceptual learning. The complex pattern of empirical results concerning the role of feedback in perceptual learning rules out both a pure supervised mode and a pure unsupervised mode of learning and leads some researchers to the proposal that feedback may change the learning rate through top-down control but does not act as a teaching signal in perceptual learning (M. H. Herzog & M. Fahle, 1998). In this study, we tested the predictions of an augmented Hebbian reweighting model (AHRM) of perceptual learning (A. Petrov, B. A. Dosher, & Z.-L. Lu, 2005), in which feedback influences the effective rate of learning by serving as an additional input and not as a direct teaching signal. We investigated the interactions between feedback and training accuracy in a Gabor orientation identification task over six training days. The accelerated stochastic approximation method was used to track threshold contrasts at particular performance accuracy levels throughout training. Subjects were divided into 4 groups: high training accuracy (85% correct) with and without feedback, and low training accuracy (65%) with and without feedback. Contrast thresholds improved in the high training accuracy condition, independent of the feedback condition. However, thresholds improved in the low training accuracy condition only in the presence of feedback but not in the absence of feedback. The results are both qualitatively and quantitatively consistent with the predictions of the augmented Hebbian learning model and are not consistent with pure supervised error correction or pure Hebbian learning models.
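The threshold-tracking procedure mentioned above can be sketched as a generic accelerated stochastic approximation staircase run against a hypothetical observer. The psychometric function, slope, starting point, and step size below are invented for illustration; the published procedure's exact parameterization may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_correct(log_c, thresh=-1.0, slope=3.0, guess=0.5):
    """Hypothetical observer: logistic psychometric function in log contrast."""
    return guess + (1.0 - guess) / (1.0 + np.exp(-slope * (log_c - thresh)))

target = 0.85                     # track the 85%-correct threshold contrast
log_c, step = 0.0, 0.5            # starting log contrast and initial step size
reversals, last_err = 0, 0.0

for n in range(1, 301):
    correct = rng.random() < p_correct(log_c)
    err = float(correct) - target            # positive after a correct trial
    if n > 1 and np.sign(err) != np.sign(last_err):
        reversals += 1
    last_err = err
    denom = n if n <= 2 else 2 + reversals   # accelerated SA: shrink with reversals
    log_c -= (step / denom) * err            # correct -> lower the contrast

# log_c approaches the contrast yielding 85% correct for this observer
# (analytically about -0.72 in log units for these made-up parameters)
```

Shrinking the step with the reversal count rather than the raw trial count is what "accelerates" the procedure: large corrective steps persist while the staircase is still far from threshold, and steps only shrink once responses start alternating around the target accuracy.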


Vision Research | 2014

Modeling trial by trial and block feedback in perceptual learning.

Jiajuan Liu; Barbara Anne Dosher; Zhong-Lin Lu

Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions but not in others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but through different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweighting Model (AHRM). Specifically, three major factors in the model drive performance improvement: external trial-by-trial feedback when available, the self-generated output serving as internal feedback when no external feedback is available, and adaptive criterion control based on block feedback. By simulating a comprehensive feedback study (Herzog & Fahle, 1997), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning.
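The three factors named in this abstract can be sketched as follows. The function names, the sign-based internal feedback, and the linear bias correction are assumed shapes for illustration; the actual AHRM equations differ in detail.

```python
import numpy as np

def post_activation(decision_out, external_feedback=None):
    """Post-synaptic term for the Hebbian update: external trial-by-trial
    feedback when present, else the self-generated (sign of the) output."""
    if external_feedback is not None:
        return float(external_feedback)      # teaching-like input, +1.0 / -1.0
    return float(np.sign(decision_out))      # internal feedback from own response

def update_bias(bias_weight, frac_resp_A, gain=0.02):
    """Adaptive criterion control from block feedback: if one response
    dominates a block, shift the bias weight to rebalance responses."""
    return bias_weight - gain * (frac_resp_A - 0.5)
```

For example, with no external feedback a decision output of 0.3 yields a post-synaptic term of +1.0, whereas external feedback of -1.0 would override it; and a block with 80% "A" responses nudges the bias weight downward. False feedback disrupts learning in this scheme because the overriding external term no longer correlates with the stimulus-driven activations.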


Vision Research | 2012

Mixed training at high and low accuracy levels leads to perceptual learning without feedback.

Jiajuan Liu; Zhong-Lin Lu; Barbara Anne Dosher

In this study, we investigated whether mixing easy and difficult trials can lead to learning in the difficult conditions. We hypothesized that while feedback is necessary for significant learning in training regimes consisting solely of low-accuracy trials, training mixtures with a sufficient proportion of high-accuracy trials would lead to significant learning without feedback. Thirty-six subjects were divided into one experimental group, in which high-accuracy training trials were mixed with low-accuracy trials without feedback, and five control groups: high and low accuracy training mixed in the presence of feedback, and high-high and low-low accuracy mixtures each run with and without feedback. Contrast thresholds improved significantly in the low accuracy condition in the presence of high training accuracy trials (the high-low mixture group) in the absence of feedback, although no significant learning was found in the low accuracy condition in the low-low mixture group without feedback. Moreover, the magnitude of improvement in low accuracy trials without feedback in the high-low training mixture is comparable to that in the high accuracy training without feedback condition and to those obtained in the presence of trial-by-trial external feedback. The results are both qualitatively and quantitatively consistent with the predictions of the Augmented Hebbian Re-Weighting model. We conclude that mixed training at high and low accuracy levels can lead to perceptual learning at low training accuracy levels without feedback.


Journal of Vision | 2015

Augmented Hebbian reweighting accounts for accuracy and induced bias in perceptual learning with reverse feedback.

Jiajuan Liu; Barbara Anne Dosher; Zhong-Lin Lu

Using an asymmetrical set of vernier stimuli (-15″, -10″, -5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (-5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Ewald, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning.


Journal of Vision | 2015

An integrated reweighting theory accounts for the role of task precision in transfer of perceptual learning for similar orientation tasks.

Jiajuan Liu; Barbara Anne Dosher; Zhong-Lin Lu

Specificity is one of the hallmark findings of perceptual learning. One of the factors influencing the extent of specificity is the difficulty of the training task or, alternatively, the precision of the transfer task. For example, the specificity of perceptual learning is higher for more precise (±5°) than less precise (±12°) orientation discrimination transfer tasks at a new reference angle and retinal location, essentially independent of the precision of the training task (see also second-order or first-order motion direction tasks). Recently, an integrated reweighting theory (IRT) was developed to account for the degree of specificity over position. The IRT reweights evidence from both location-independent and location-specific representations to the decision unit to account for transfer and specificity. Here we develop the predictions of the IRT for the effects of judgment precision on transfer, using a visual front end of normalized spatial-frequency- and orientation-tuned channels and Hebbian reweighting to the decision unit, augmented by feedback and criterion correction. The exact details of the experiment are reprised to generate simulated model predictions. The IRT correctly predicts that specificity depends upon the precision of the transfer task, relatively independent of the precision of the training task. In sum, when the training and transfer tasks involve the same kinds of judgments but use stimuli that are rotationally symmetric, the degree of specificity is primarily driven by the precision of the transfer task. A more precise judgment in the transfer task is more demanding and so shows more specificity and less transfer. The IRT model can also be used to make predictions about a number of related phenomena in perceptual learning. Meeting abstract presented at VSS 2015.


Journal of Vision | 2010

Augmented Hebbian Learning Accounts for the Complex Pattern of Effects of Feedback in Perceptual Learning

Jiajuan Liu; Zhong-Lin Lu; Barbara Anne Dosher


Journal of Vision | 2010

Augmented Hebbian learning accounts for the Eureka effect in perceptual learning

Jiajuan Liu; Zhong-Lin Lu; Barbara Anne Dosher


Journal of Vision | 2011

Multi-location Augmented Hebbian Re-Weighting Accounts for Transfer of Perceptual Learning following Double Training

Jiajuan Liu; Zhong-Lin Lu; Barbara Anne Dosher

Collaboration


Dive into Jiajuan Liu's collaborations.

Top Co-Authors


Wilson Chu

University of Southern California


Pamela E Jeter

Johns Hopkins University
