Publication


Featured research published by Hongjing Lu.


Psychological Review | 2008

Bayesian Generic Priors for Causal Learning

Hongjing Lu; Alan L. Yuille; Mimi Liljeholm; Patricia W. Cheng; Keith J. Holyoak

The article presents a Bayesian model of causal learning that incorporates generic priors--systematic assumptions about abstract properties of a system of cause-effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes--causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
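As a rough illustration of the kind of model described above, the following Python sketch couples a noisy-OR generating function with a simple prior that favors a weak background cause and a strong candidate cause. The contingency data, the exact form of the prior, and the hyperparameter alpha are invented for this example; they are a simplified stand-in for the SS generic priors rather than the published model.

```python
import numpy as np

# Grid over causal strengths: w0 = background cause, w1 = candidate cause.
w = np.linspace(0.01, 0.99, 99)
W0, W1 = np.meshgrid(w, w, indexing="ij")

def noisy_or(w0, w1, c):
    """P(effect) when independent generative influences combine (noisy-OR):
    the background cause is always present; the candidate cause is present iff c == 1."""
    return 1.0 - (1.0 - w0) * (1.0 - w1) ** c

# Hypothetical contingency data: (cause present?, effect present?, count of trials).
data = [(1, 1, 16), (1, 0, 4), (0, 1, 4), (0, 0, 16)]

def log_likelihood(W0, W1, data):
    ll = np.zeros_like(W0)
    for c, e, n in data:
        p = noisy_or(W0, W1, c)
        ll += n * np.log(p if e else 1.0 - p)
    return ll

# Illustrative stand-in for a "sparse and strong" prior: favors a weak
# background cause and a strong candidate cause (alpha sets its sharpness).
alpha = 5.0
log_prior = -alpha * W0 - alpha * (1.0 - W1)

log_post = log_prior + log_likelihood(W0, W1, data)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mean strength of the candidate cause.
print("E[w1 | data] =", (post * W1).sum())
```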


Frontiers in Psychology | 2013

A predictive coding perspective on autism spectrum disorders

Jeroen J. A. van Boxtel; Hongjing Lu

A commentary on: When the world becomes ‘too real’: a Bayesian explanation of autistic perception, by Pellicano, E., and Burr, D. (2012). Trends Cogn. Sci. 16, 504–510.

In a recent article entitled “When the world becomes ‘too real’: a Bayesian explanation of autistic perception,” Elizabeth Pellicano and David Burr (Pellicano and Burr, 2012b) introduce an intriguing new hypothesis, a Bayesian account, concerning the possible origins of perceptual deficits in Autism Spectrum Disorder (ASD). This Bayesian account explains why ASD impacts perception in systematic ways, but it does not clearly explain how. Most prominently, the Bayesian account lacks connections to the neural computations performed by the brain, and does not provide mechanistic explanations for ASD (Rust and Stocker, 2010; Colombo and Series, 2012). Nor does the Bayesian account explain the biological origin of the “prior”, the essential addition of the Bayesian models. In Marr's terminology (Marr, 1982), Pellicano and Burr's paper proposes a computational-level explanation for ASD, but not an account at the other two levels, representation and implementation. We propose that a predictive coding framework (schematized in Figure 1) may fill the gap and generate a testable framework open to further experimental investigations.

Figure 1. A schematic representation of the predictive coding framework. Input arrives from the sensory organs and is processed in a “low-level” area. This processed information is sent to a higher area. Based on this input the higher area tries ...

In Pellicano and Burr's general Bayesian approach, perception is based on the integration of stimulus information (encapsulated in the likelihood) and regularizing (contextual) information based on previous experience (the “prior”). Often, the prior draws perception away from the veridical stimulus characteristics [e.g., people perceive a Kanizsa triangle above three circles, instead of three pac-men; see Figure 1 in Pellicano and Burr (2012b)]. Pellicano and Burr suggest that people with ASD have weak priors compared to the typically developing population, explaining a key finding that autistic observers are less influenced by contextual information and hence see the world more accurately (as it actually is), because their perception is less modulated by experience. This Bayesian account provides an explanation for the bias favoring local over global processing in ASD.

The predictive coding framework provides a natural implementation of the prior used in the Bayesian model proposed by Pellicano and Burr. In predictive coding schemes, higher brain areas attempt to “explain” input from lower brain areas, and then project these predictions down to lower areas, where the predicted sensory information is subtracted from the input (i.e., predicted information is discounted). This feedback operates in a hierarchical manner (Figure 1), and the predictions fed back to lower areas constitute the (empirically derived) “priors” (Feldman and Friston, 2010). Such empirical priors have been computationally implemented (Rao and Ballard, 1999; Feldman and Friston, 2010), and thus are open to experimental scrutiny. An added advantage of this framework is that it naturally explains the often-observed decrease in global processing in people with ASD, and the concomitant increase in local processing (Happe and Frith, 2006; Mottron et al., 2006).

The predictive coding framework also provides an elegant way to implement both endogenous (top-down) and exogenous (bottom-up) attention within the same framework. The framework can therefore guide detailed investigations of whether perceptual deficits in ASD are due to malfunctioning of certain higher-level brain areas, or instead due to an attentional bias toward lower-level stimulus characteristics (Plaisted, 2001; Mottron et al., 2006). Exogenous attention is linked to the prediction error in the predictive coding framework. Specifically, when the predictions (“priors”) do not match the input, expectations are violated, and a prediction error (i.e., the difference between the expected and the observed sensory information) is generated at lower levels. The prediction error constitutes a “surprise” (Feldman and Friston, 2010), which can be thought of as a trigger for exogenous attention. With decreased high-level processing in ASD (e.g., Brosnan et al., 2004; Happe and Frith, 2006), predictions are presumably less precise (or less strong, i.e., hypo-priors; Pellicano and Burr, 2012b), and thus prediction errors (“surprises”) will increase. As a result, the sensory systems of people with ASD will be constantly bombarded by new “surprises,” and hence overloaded with sensory stimulation. Endogenous attention can also be readily included in the predictive coding framework as a modulation of feedforward information (as explained in Feldman and Friston, 2010). Empirical evidence for such modulation exists (Zhang and Luck, 2009). Within the predictive coding framework, decreased influence of higher visual areas on perception, manifested in decreased activity (e.g., Belmonte et al., 2004; Schultz, 2005) or decreased (functional) connectivity (Just et al., 2004; Liu et al., 2011), could be due to decreased functioning of higher levels, or alternatively to decreased endogenous modulation of attention (Mottron et al., 2006), or both. Developing quantitative computational models may help us disentangle these possibilities.

The predictive coding framework may also provide valuable insights into the developmental origins of ASD. Because of the recurrent nature of the predictive coding framework, it is possible that a dysfunction at one level causes a dysfunction at another level, which in turn feeds back to create a vicious circle. If this cycle occurs during development, it could potentially spiral out of control, contributing to ASD. Such scenarios go beyond a simple Bayesian account based on priors and likelihoods and could be investigated with computational models in the future (Rao and Ballard, 1999; Feldman and Friston, 2010).

Finally, in a recent comment on Pellicano and Burr's paper, Brock (2012) suggested that instead of hypo-priors, one may assume that people with ASD have reduced sensory noise. Although this is theoretically possible, Pellicano and Burr countered (Pellicano and Burr, 2012a) that there is in fact experimental evidence for increased neural noise in ASD. We would add that the hypothesis of reduced sensory noise also predicts reduced variance in intra-individual perceptual responses to identical (visual) stimuli, whereas a hypo-prior would be associated with an increase in variance. Although the literature on this issue is not extensive, intra-individual response time variability is reportedly greater in ASD than in the typical population (Geurts et al., 2008).

In summary, the predictive coding framework complements the Bayesian approach introduced by Pellicano and Burr, providing a general account of why certain perceptual, and potentially social (cf. Kilner et al., 2007), deficits exist, and how biological substrates and computational mechanisms can give rise to these deficits in ASD.
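To make the prediction-and-error idea concrete, here is a minimal, hypothetical sketch (in Python; not the model from any cited paper): a single higher-level stage infers the causes of a sensory input by reducing prediction error, and a precision parameter sets the pull toward prior expectations. A "hypo-prior" corresponds to a small value of that parameter, so the inferred percept follows the sensory input more closely. All weights and values are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single higher-level stage with 2 latent causes generates an 8-dimensional
# sensory input through weights U (all values are toy/illustrative).
U = rng.normal(size=(8, 2))
x = U @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=8)   # noisy sensory input

def infer(x, U, prior_precision, n_steps=500, lr=0.02):
    """Infer latent causes r by gradient descent on the prediction error.
    prior_precision sets the pull toward the prior expectation (zero here):
    a small value (a "hypo-prior") lets r follow the sensory input closely,
    while a large value keeps the percept near prior expectations."""
    r = np.zeros(U.shape[1])
    for _ in range(n_steps):
        error = x - U @ r                    # prediction error passed up the hierarchy
        r += lr * (U.T @ error - prior_precision * r)
    return r, error

r_strong, err_strong = infer(x, U, prior_precision=1.0)
r_hypo, err_hypo = infer(x, U, prior_precision=0.01)

# With a weak prior the top-down prediction matches the input more closely
# (perception is more "veridical"); with a strong prior the percept is pulled
# toward prior expectations, leaving a larger residual error.
print("residual |x - prediction|, strong prior:", np.linalg.norm(err_strong))
print("residual |x - prediction|, hypo-prior:  ", np.linalg.norm(err_hypo))
```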


Journal of Vision | 2013

A biological motion toolbox for reading, displaying, and manipulating motion capture data in research settings.

Jeroen J. A. van Boxtel; Hongjing Lu

Biological motion research is an increasingly active field, with a great potential to contribute to a wide range of applications, such as behavioral monitoring/motion detection in surveillance situations, intention inference in social interactions, and diagnostic tools in autism research. In recent years, a large amount of motion capture data has become freely available online, potentially providing rich stimulus sets for biological motion research. However, there currently does not exist an easy-to-use tool to extract, present and manipulate motion capture data in the MATLAB environment, which many researchers use to program their experiments. We have developed the Biomotion Toolbox, which allows researchers to import motion capture data in a variety of formats, to display actions using Psychtoolbox 3, and to manipulate action displays in specific ways (e.g., inversion, three-dimensional rotation, spatial scrambling, phase-scrambling, and limited lifetime). The toolbox was designed to allow researchers with a minimal level of MATLAB programming skills to code experiments using biological motion stimuli.
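The Biomotion Toolbox itself is written for MATLAB and Psychtoolbox 3. Purely for illustration, the Python sketch below shows what manipulations such as inversion, spatial scrambling, and phase scrambling amount to on a toy point-light array; the data, function names, and parameters are invented for this example and are not the toolbox's API.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy point-light action: frames x joints x (x, y) coordinates.
# Real data would come from motion capture files (e.g., BVH or C3D).
n_frames, n_joints = 120, 13
t = np.linspace(0, 2 * np.pi, n_frames)[:, None]
action = np.stack([np.cos(t + np.arange(n_joints)),      # x positions
                   np.sin(t + np.arange(n_joints))],     # y positions
                  axis=-1)

def invert(action):
    """Flip the display upside down (negate the vertical coordinate)."""
    out = action.copy()
    out[..., 1] *= -1
    return out

def spatial_scramble(action, spread=2.0):
    """Displace each joint's whole trajectory by a random offset,
    destroying global form while preserving local joint motion."""
    offsets = rng.uniform(-spread, spread, size=(1, action.shape[1], 2))
    return action + offsets

def phase_scramble(action):
    """Start each joint's trajectory at a random frame (circular shift),
    disrupting the relative timing between joints."""
    out = np.empty_like(action)
    for j in range(action.shape[1]):
        out[:, j] = np.roll(action[:, j], rng.integers(action.shape[0]), axis=0)
    return out

inverted = invert(action)
scrambled = spatial_scramble(action)
dephased = phase_scramble(action)
```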


Journal of Experimental Psychology: General | 2010

Analogical and Category-Based Inference: A Theoretical Integration with Bayesian Causal Models.

Keith J. Holyoak; Hee Seung Lee; Hongjing Lu

A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.
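As a small worked example of the causal generating function assumed in this line of work (independent generative causes combining by noisy-OR, with independent preventive causes each blocking the effect), the snippet below computes the probability of an effect in a hypothetical target situation. The strengths used are invented; the theory's actual predictions also depend on how source knowledge constrains the target causal model.

```python
def p_effect(generative, preventive):
    """Probability of the effect when independent generative causes combine
    by noisy-OR and each independent preventive cause blocks the effect
    with its own strength."""
    p = 1.0
    for w in generative:
        p *= (1.0 - w)
    p = 1.0 - p                      # noisy-OR over generative causes
    for w in preventive:
        p *= (1.0 - w)               # each preventive cause blocks the effect
    return p

# Hypothetical target: one generative cause inferred from the source (strength 0.8)
# plus one preventive cause present in the target (strength 0.4).
print(p_effect(generative=[0.8], preventive=[0.4]))   # 0.48
```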


Frontiers in Psychology | 2013

Impaired Global, and Compensatory Local, Biological Motion Processing in People with High Levels of Autistic Traits

Jeroen J. A. van Boxtel; Hongjing Lu

People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD.


Journal of Vision | 2010

Structural processing in biological motion perception

Hongjing Lu

To investigate the basis for biological motion perception, structural and motion information were manipulated independently in a dynamic display using a novel stimulus with multiple apertures. Performance was compared in discrimination of global motion (translation and rotation) and biological motion. When structural information in the display was eliminated but motion information was intact, human observers were able to perceive global motion yet were at chance in discriminating walking direction of biological movement. In contrast, when the display provided even noisy and impoverished structural information, walking direction became identifiable. The present findings thus provide direct psychophysical evidence that motion information is insufficient and structural information is necessary for the identification of walking direction in biological movement. These findings imply that computational models must utilize a structural representation of the human body to account for perception of biological movements.


Journal of Vision | 2006

Computing dynamic classification images from correlation maps.

Hongjing Lu; Zili Liu

We used Pearson's correlation to compute dynamic classification images of biological motion in a point-light display. Observers discriminated whether a human figure that was embedded in dynamic white Gaussian noise was walking forward or backward. Their responses were correlated with the Gaussian noise fields frame by frame, across trials. The resultant correlation map gave rise to a sequence of dynamic classification images that were clearer than those from either the standard method of A. J. Ahumada and J. Lovell (1971) or the optimal weighting method of R. F. Murray, P. J. Bennett, and A. B. Sekuler (2002). Further, the correlation coefficients of all the point lights were similar to each other when overlapping pixels between forward and backward walkers were excluded. This pattern is consistent with the hypothesis that the point-light walker is represented in a global manner, as opposed to a fixed subset of point lights being more important than others. We conjecture that the superior performance of the correlation map may reflect inherent nonlinearities in processing biological motion, which are incompatible with the assumptions underlying the previous methods.
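A simplified simulation of the correlation-map procedure is sketched below in Python. Synthetic noise fields and a hypothetical template-matching observer stand in for real psychophysical data; each noise pixel, frame by frame, is correlated with the binary response across trials to yield a dynamic classification image.

```python
import numpy as np

rng = np.random.default_rng(2)

n_trials, n_frames, n_pixels = 2000, 10, 64
# Gaussian noise field added to the stimulus on every trial (trials x frames x pixels).
noise = rng.normal(size=(n_trials, n_frames, n_pixels))

# Hypothetical observer: decisions driven by a fixed spatiotemporal template
# plus internal noise (a stand-in for real observer responses).
template = rng.normal(size=(n_frames, n_pixels))
decision = np.einsum('tfp,fp->t', noise, template) + rng.normal(size=n_trials)
responses = (decision > 0).astype(float)          # 1 = "forward", 0 = "backward"

# Dynamic classification images: Pearson correlation between each noise pixel
# (per frame) and the binary response, computed across trials.
noise_z = (noise - noise.mean(axis=0)) / noise.std(axis=0)
resp_z = (responses - responses.mean()) / responses.std()
classification_images = np.einsum('tfp,t->fp', noise_z, resp_z) / n_trials

print(classification_images.shape)   # one correlation map per frame
```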


Psychological Science | 2013

Physical and Biological Constraints Govern Perceived Animacy of Scrambled Human Forms

Steven M. Thurman; Hongjing Lu

Point-light animations of biological motion are perceived quickly and spontaneously, giving rise to an irresistible sensation of animacy. However, the mechanisms that support judgments of animacy based on biological motion remain unclear. The current study demonstrates that animacy ratings increase when a spatially scrambled animation of human walking maintains consistency with two fundamental constraints: the direction of gravity and congruency between the directions of intrinsic and extrinsic motion. Furthermore, using a reverse-correlation method, we show that observers employ structural templates, or form-based “priors,” reflecting the prototypical mammalian body plan when attributing animacy to scrambled human forms. These findings reveal that perception of animacy in scrambled biological motion involves not only analysis of local intrinsic motion, but also its congruency with global extrinsic motion and global spatial structure. Thus, they suggest a strong influence of prior knowledge about characteristic features of creatures in the natural environment.


Journal of Vision | 2012

Two forms of aftereffects induced by transparent motion reveal multilevel adaptation

Alan L. F. Lee; Hongjing Lu

Visual adaptation produces remarkable perceptual aftereffects. However, it remains unclear what basic neural mechanisms underlie visual adaptation and how these adaptation-induced neural changes are related to perceptual aftereffects. To address these questions, we examined transparent motion adaptation and traced the effects of adaptation through the motion processing hierarchy. We found that, after adapting to a bidirectional transparent motion display, observers perceived two radically different motion aftereffects (MAEs): segregated and integrated MAEs, depending on testing locations. The segregated MAE yielded an aftereffect opposite to one of the adapting directions in the transparent motion stimulus. Our results revealed that the segregated MAE relies on the integration of local adaptation effects. In contrast, the integrated MAE yielded an aftereffect opposite to the average of the adapting directions. We found that the integrated MAE was dominant at non-adapted locations but was reduced when local adaptation effects were weakened. These results suggest that the integrated MAE is elicited by a combination of two mechanisms: adaptation-induced changes at a high-level processing stage and the integration of local adaptation effects. We conclude that distinct perceptual aftereffects can be observed due to adaptation-induced neural changes at different processing levels, supporting the general hypothesis of multilevel adaptation in the visual hierarchy.


Journal of Vision | 2007

Motion perceptual learning: When only task-relevant information is learned

Xuan Huang; Hongjing Lu; Bosco S. Tjan; Yifeng Zhou; Zili Liu

The classic view that perceptual learning is information selective and goal directed has been challenged by recent findings showing that subthreshold and task-irrelevant information can induce perceptual learning. This study demonstrates a limit on task-irrelevant learning as exposure to suprathreshold task-irrelevant signals failed to induce perceptual learning. In each trial, two random-dot motion stimuli were presented in a two-alternative forced-choice task. Observers either decided which of the two contained a coherent motion signal (detection task), or whether the coherent motion direction was clockwise or counterclockwise relative to a reference direction (discrimination task). Whereas the exact direction of the coherent motion signal was irrelevant to the detection task, detection of the coherent motion signal was necessary for the discrimination task. We found that the detection trainees improved only their detection but not discrimination sensitivity, whereas the discrimination trainees improved both. Therefore, the importance of task relevance was demonstrated in both detection and discrimination learning. Furthermore, both detection and discrimination training along a single pedestal direction transferred to a broad range of pedestal directions. The profile of the discrimination transfer (as a function of pedestal direction) narrowed for the discrimination trainees.

Collaboration


Dive into Hongjing Lu's collaborations.

Top Co-Authors

Zili Liu
University of California

Alan L. Yuille
Johns Hopkins University

Alan L. F. Lee
University of California

Yujia Peng
University of California

Song-Chun Zhu
University of California