Grégoire Borst
Paris Descartes University
Publications
Featured research published by Grégoire Borst.
Memory & Cognition | 2008
Grégoire Borst; Stephen M. Kosslyn
The research reported in the present article investigates whether information is represented the same way in both visual mental imagery and the early phases of visual perception. In Experiment 1, the same participants scanned over patterns of dots in a mental image (with images based on a just-seen pattern), during perception, and in an iconic image. The time to scan increasing distances increased at comparable rates in the three tasks. However, in Experiment 2, when mental images were created from information stored in long-term memory, participants scanned more slowly in the mental image condition. Nevertheless, the rates of scanning in the perceptual tasks were highly correlated with the rates of scanning in the imagery tasks in both experiments. The results provide evidence that mental images and perceived stimuli are represented similarly and can be processed in the same way.
Memory & Cognition | 2010
Amandine Afonso; Alan Blum; Brian F. G. Katz; Philippe Tarroux; Grégoire Borst; Michel Denis
When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants.
Frontiers in Psychology | 2014
Olivier Houdé; Grégoire Borst
Jean Piaget underestimated the cognitive capabilities of infants, preschoolers, and elementary schoolchildren, and overestimated the capabilities of adolescents and even adults, whose reasoning is often biased by illogical intuitions and overlearned strategies (i.e., “fast thinking” in Daniel Kahneman’s words). The crucial question now is to understand why, despite the rich precocious knowledge about physical and mathematical principles observed over the last three decades in infants and young children, older children, adolescents, and even adults are nevertheless so often poor reasoners. We propose that inhibition of less sophisticated solutions (or heuristics) by the prefrontal cortex is a domain-general executive ability that supports children’s conceptual insights associated with more advanced Piagetian stages, such as number conservation and class inclusion. Moreover, this executive ability remains critical throughout life, and even adults may sometimes need “prefrontal pedagogy” in order to learn to inhibit intuitive heuristics (or biases) in deductive reasoning tasks. Here we highlight some of the discoveries from our lab in the field of cognitive development, relying on two methodologies for measuring inhibitory control: brain imaging and mental chronometry (i.e., the negative priming paradigm). We also show that this new approach opens an avenue for re-examining persistent errors in standard classroom-learning tasks.
Memory & Cognition | 2012
Grégoire Borst; Giorgio Ganis; William L. Thompson; Stephen M. Kosslyn
Although few studies have systematically investigated the relationship between visual mental imagery and visual working memory, work on the effects of passive visual interference has generally demonstrated a dissociation between the two functions. In four experiments, we investigated a possible commonality between the two functions: We asked whether both rely on depictive representations. Participants judged the visual properties of letters using visual mental images or pictures of unfamiliar letters stored in short-term memory. Participants performed both tasks with two different types of interference: sequences of unstructured visual masks (consisting of randomly changing white and black dots) or sequences of structured visual masks (consisting of fragments of letters). The structured visual noise contained elements of depictive representations (i.e., shape fragments arrayed in space), and hence should interfere with stored depictive representations; the unstructured visual noise did not contain such elements, and thus should not interfere as much with such stored representations. Participants did in fact make more errors in both tasks with sequences of structured visual masks. Various controls converged in demonstrating that in both tasks participants used representations that depicted the shapes of the letters. These findings not only constrain theories of visual mental imagery and visual working memory, but also have direct implications for why some studies have failed to find that dynamic visual noise interferes with visual working memory.
Psychonomic Bulletin & Review | 2015
Grégoire Borst; Emmanuel Ahr; Margot Roell; Olivier Houdé
Mirror generalization is detrimental for identifying letters with lateral mirror-image counterparts (‘b/d’). In the present study, we investigated whether the discrimination of such letters in expert readers might be rooted in the ability to inhibit the mirror-generalization process. In our negative priming paradigm, participants judged whether two letters were identical on the prime and whether two animals (or buildings) were identical on the probe. In Experiment 1, participants required more time to determine that two animals (but not two buildings) were mirror images of each other when preceded by letters with mirror-image counterparts than by letters without mirror-image counterparts (‘a/h’). In Experiment 2, we replicated the results with different letters without mirror-image counterparts and with the type of probe stimuli (animal or building) manipulated as a within-subject factor. Our results suggest that expert readers never completely “unlearn” the mirror-generalization process and still need to inhibit this heuristic to overcome mirror errors.
Quarterly Journal of Experimental Psychology | 2010
Grégoire Borst; Stephen M. Kosslyn
In this article, we report a new image-scanning paradigm that allowed us to measure objectively individual differences in spatial mental imagery, specifically imagery for location. Participants were asked to determine whether an arrow was pointing at a dot using a visual mental image of an array of dots. The degree of precision required to discriminate “yes” from “no” trials was varied. In Experiment 1, the time to scan increasing distances, as well as the number of errors, increased when greater precision was required to make a judgement. Experiment 2 replicated those results while controlling for possible biases. When greater precision is required, the accuracy of the spatial image becomes increasingly important, and hence the effect of precision in the task reflects the accuracy of the image. In Experiment 3, this measure was shown to be related to scores on the Paper Folding test, the Paper Form Board test, and the visuospatial items of Raven’s Advanced Progressive Matrices, but not to scores on questionnaires measuring object-based mental imagery. Thus, we provide evidence that classical standardized spatial tests rely on spatial mental imagery but not on object mental imagery.
European Journal of Cognitive Psychology | 2010
Grégoire Borst; Rogier A. Kievit; William L. Thompson; Stephen M. Kosslyn
When participants take part in mental imagery experiments, are they using their “tacit knowledge” of perception to mimic what they believe should occur in the corresponding perceptual task? Two experiments were conducted to examine whether such an account can be applied to mental imagery in general. These experiments both examined tasks that required participants to “mentally rotate” stimuli. In Experiment 1, instructions led participants to believe that they could reorient shapes in one step or avoid reorienting the shapes altogether. Regardless of instruction type, response times increased linearly with increasing rotation angles. In Experiment 2, participants first observed novel objects rotating at different speeds, and then performed a mental rotation task with those objects. The speed of perceptually demonstrated rotation did not affect the speed of mental rotation. We argue that tacit knowledge cannot explain mental imagery results in general, and that in particular the mental rotation effect reflects the nature of the underlying internal representation and processes that transform it, rather than participants’ pre-existing knowledge.
Psychological Research-psychologische Forschung | 2011
Katie Lewis; Grégoire Borst; Stephen M. Kosslyn
In two experiments, we used a temporal integration task to investigate visual mental images based on information in short-term memory or generated from information stored in long-term memory (LTM). We specifically asked whether the two sorts of images rely on depictive representations. If mental images rely on depictive representations, then it should be possible to combine mental images and visual percepts into a single representation that preserves the spatial layout of the display. To demonstrate this, participants were asked to generate mental images and then combine them with visual percepts of grids that were partially filled with different numbers of dots. Participants were asked to determine which cell remained empty when the two grids were combined. We contrasted the predictions of propositional or verbal-description theories with those of depictive theories, and report findings that support the claim that mental images, whether based on information in short-term memory or retrieved from LTM, depict information.
Neuropsychologia | 2010
Grégoire Borst; Stephen M. Kosslyn
Two types of representations can be used to specify spatial relations: Coordinate spatial relations representations specify the precise distance between two objects, whereas categorical spatial relations representations assign a category (such as above or below) to specify a spatial relation between two objects. Computer simulation models suggest that coordinate spatial relations representations should be easier to encode if one attends to a relatively large region of space, whereas categorical spatial relations should be easier to encode if one attends to a relatively small region of space. We tested these predictions. To vary the scope of attention, we asked participants to focus on the local or global level of Navon letters, and immediately afterwards had them decide whether a dot was within 2.54 cm of a bar (coordinate judgment) or was above or below the bar (categorical judgment). Participants were faster in the coordinate task after they had just focused on the global level of a Navon letter, whereas they were faster in the categorical task after they had just focused on the local level. Although we did not test the hemispheric lateralization of these effects, these findings have direct implications for theories of why the cerebral hemispheres differ in their relative ease of encoding the two kinds of spatial relations.
American Psychologist | 2011
Grégoire Borst; William L. Thompson; Stephen M. Kosslyn
Traditionally, characterizations of the macrolevel functional organization of the human cerebral cortex have focused on the left and right cerebral hemispheres. However, the idea of left-brain versus right-brain functions has been shown to be an oversimplification. We argue here that a top-bottom divide, rather than a left-right divide, is a more fruitful way to organize human cortical brain functions. However, current characterizations of the functions of the dorsal (top) and ventral (bottom) systems have rested on dichotomies, namely “where” versus “what” and “how” versus “what.” We propose that characterizing the two systems in terms of the information processing they carry out leads to a better macrolevel organization of cortical function; specifically, we hypothesize that the dorsal system is driven by expectations and processes sequences, relations, and movement, whereas the ventral system categorizes stimuli in parallel, focuses on individual events, and processes object properties (such as shape in vision and pitch in audition). To test this hypothesis, we reviewed over 100 relevant studies in the human neuroimaging and neuropsychological literatures and coded them on 11 variables, some of which characterized our hypothesis and some of which characterized the previous dichotomies. The results of forward stepwise logistic regressions supported our characterization of the two systems and showed that this model predicted the empirical findings better than either the traditional dichotomies or a left-right difference.