Publications


Featured research published by Allison A. Brennan.


Psychonomic Bulletin & Review | 2010

Looking versus seeing: Strategies alter eye movements during visual search

Marcus R. Watson; Allison A. Brennan; Alan Kingstone; James T. Enns

Visual search can be made more efficient by adopting a passive cognitive strategy (i.e., letting the target “pop” into mind) rather than by trying to actively guide attention. In the present study, we examined how this strategic benefit is linked to eye movements. Results show that participants using a passive strategy wait longer before beginning to move their eyes and make fewer saccades than do active participants. Moreover, the passive advantage stems from more efficient use of the information in a fixation, rather than from a wider attentional window. Individual difference analyses indicate that strategies also change the way eye movements are related to search success, with a rapid saccade rate predicting success among active participants, and fewer and larger amplitude saccades predicting success among passive participants. A change in mindset, therefore, alters how oculomotor behaviors are harnessed in the service of visual search.


Psychonomic Bulletin & Review | 2015

When two heads are better than one: Interactive versus independent benefits of collaborative cognition

Allison A. Brennan; James T. Enns

Previous research has shown that two heads working together can outperform one working alone, but whether such benefits result from social interaction or from the statistical facilitation of independent responses is not clear. Here we apply Miller’s (Cognitive Psychology, 14, 247–279, 1982; Ulrich, Miller & Schröter, Behavior Research Methods, 39(2), 291–302, 2007) race model inequality (RMI) to distinguish between these two possibilities. Pairs of participants completed a visual enumeration task, both as independent individuals and as two members of a team. The results showed that team performance exceeded the efficiency of two individuals working independently, indicating that interpersonal interaction underlies the collaborative gains in this task. This interpretation was bolstered by analyses showing that the magnitude of the collaborative benefit was positively mediated by the strength of social affiliation and by the similarity of verbal communication among team members. This research serves as a proof-of-concept that Miller’s RMI can differentiate between interactive versus independent effects of collaborative cognition. Moreover, the finding that social affiliation and communication similarity each contribute to the collaborative benefit suggests new avenues of research for establishing the mechanisms supporting collaborative cognition.
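
The race model inequality referenced above bounds how fast a team could respond if its members contributed independently: the cumulative response-time distribution of the pair can be no greater than the sum of the two individual distributions. Below is a minimal Python sketch of how that bound can be checked; the function name and data are illustrative assumptions, not material from the paper.

import numpy as np

def race_model_violation(rt_a, rt_b, rt_team, t_grid):
    # Miller's race model inequality: F_team(t) <= F_A(t) + F_B(t).
    # Positive return values at any time t indicate performance beyond
    # what independent (statistically facilitated) responding allows.
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    f_a, f_b, f_team = cdf(rt_a, t_grid), cdf(rt_b, t_grid), cdf(rt_team, t_grid)
    bound = np.minimum(f_a + f_b, 1.0)   # independent-race upper bound
    return f_team - bound

# Hypothetical reaction times in milliseconds (illustration only)
t = np.arange(200, 1200, 10)
violation = race_model_violation([510, 480, 530], [495, 560, 505],
                                 [430, 450, 470], t)
print("max violation:", violation.max())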


NeuroImage | 2017

Teams on the same wavelength perform better: Inter-brain phase synchronization constitutes a neural substrate for social facilitation

Caroline Szymanski; Ana Pesquita; Allison A. Brennan; Dionysios Perdikis; James T. Enns; Timothy R. Brick; Viktor Müller; Ulman Lindenberger

Working together feels easier with some people than with others. We asked participants to perform a visual search task either alone or with a partner while simultaneously measuring each participant's EEG. Local phase synchronization and inter-brain phase synchronization were generally higher when subjects jointly attended to a visual search task than when they attended to the same task individually. Some dyads searched the visual display more efficiently and made faster decisions when working as a team, whereas others did not benefit from working together. These inter-team differences in behavioral performance gain in the visual search task were reliably associated with inter-team differences in local and inter-brain phase synchronization. Our results suggest that phase synchronization constitutes a neural correlate of social facilitation, and may help to explain why some teams perform better than others.

Highlights: EEG hyperscanning is performed during individual and joint attention. Local and inter-brain phase synchronization increase during joint attention. Phase synchronization correlates with behavioral team performance. Phase synchronization may function as a neural substrate of social facilitation.
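
Inter-brain phase synchronization of the kind studied here is commonly quantified with a phase-locking value between two signals. The Python sketch below shows one standard way to compute it from the analytic signal; it illustrates the general measure and is not necessarily the exact pipeline used by Szymanski et al.

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    # Phase-locking value in [0, 1]: 1 means the phase difference between
    # the two signals (e.g., one EEG channel per brain) is constant over time.
    phase_x = np.angle(hilbert(x))   # instantaneous phase via the analytic signal
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic 10 Hz signals with a fixed phase offset (illustration only)
t = np.arange(0, 4, 1 / 250)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(round(phase_locking_value(x, y), 2))   # approaches 1 for strongly locked signals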


Attention, Perception, & Psychophysics | 2011

Person perception informs understanding of cognition during visual search

Allison A. Brennan; Marcus R. Watson; Alan Kingstone; James T. Enns

Does person perception—the impressions we form from watching others—hold clues to the mental states of people engaged in cognitive tasks? We investigated this with a two-phase method: In Phase 1, participants searched on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2, other participants rated the searchers’ video-recorded behavior. The results showed that blind raters are sensitive to individual differences in search proficiency and search strategy, as well as to environmental factors affecting search difficulty. Also, different behaviors were linked to search success in each setting: Eye movement frequency predicted successful search on a computer screen; head movement frequency predicted search success in an office. In both settings, an active search strategy and positive emotional expressions were linked to search success. These data indicate that person perception informs cognition beyond the scope of performance measures, offering the potential for new measurements of cognition that are both rich and unobtrusive.


Experimental Brain Research | 2012

Treadmill experience mediates the perceptual-motor aftereffect of treadmill walking

Allison A. Brennan; Jonathan Z. Bakdash; Dennis R. Proffitt

People have a lifetime of experience in which to calibrate their self-produced locomotion with the resultant optical flow. In contrast to walking across the ground, however, walking on a treadmill produces minimal optical flow, and consequently a perceptual-motor aftereffect results. We demonstrate that the magnitude of this perceptual-motor aftereffect—measured by forward drift while attempting to march in place following treadmill walking—decreases as experience walking on a treadmill is acquired over time. Experience with treadmill walking enables walking in this context to become sufficiently distinguished from walking in other contexts. Consequently, two distinct perceptual-motor calibration states are maintained, each linked to the context in which walking occurs. Experience with treadmill walking maintains perceptual-motor calibration accuracy in both walking contexts, despite changes to the relationship between perception and action.


Experimental Brain Research | 2013

Isolating shape from semantics in haptic-visual priming

Ana Pesquita; Allison A. Brennan; James T. Enns; Salvador Soto-Faraco

The exploration of a familiar object by hand can benefit its identification by eye. What is unclear is how much this multisensory cross-talk reflects shared shape representations versus generic semantic associations. Here, we compare several simultaneous priming conditions to isolate the potential contributions of shape and semantics in haptic-to-visual priming. Participants explored a familiar object manually (haptic prime) while trying to name a visual object that was gradually revealed in increments of spatial resolution. Shape priming was isolated in a comparison of identity priming (shared semantic category and shape) with category priming (same category, but different shapes). Semantic priming was indexed by the comparisons of category priming with unrelated haptic primes. The results showed that both factors mediated priming, but that their relative weights depended on the reliability of the visual information. Semantic priming dominated in Experiment 1, when participants were free to use high-resolution visual information, but shape priming played a stronger role in Experiment 2, when participants were forced to respond with less reliable visual information. These results support the structural description hypothesis of haptic-visual priming (Reales and Ballesteros in J Exp Psychol Learn Mem Cogn 25:644–663, 1999) and are also consistent with the optimal integration theory (Ernst and Banks in Nature 415:429–433, 2002), which proposes a close coupling between the reliability of sensory signals and their weight in decision making.


PLOS ONE | 2015

What’s in a Friendship? Partner Visibility Supports Cognitive Collaboration between Friends

Allison A. Brennan; James T. Enns

Not all cognitive collaborations are equally effective. We tested whether friendship and communication influenced collaborative efficiency by randomly assigning participants to complete a cognitive task with a friend or non-friend, while visible to their partner or separated by a partition. Collaborative efficiency was indexed by comparing each pair’s performance to an optimal individual performance model of the same two people. The outcome was a strong interaction between friendship and partner visibility. Friends collaborated more efficiently than non-friends when visible to one another, but a partition that prevented pair members from seeing one another reduced the collaborative efficiency of friends and non-friends to a similar lower level. Secondary measures suggested that verbal communication differences, but not psychophysiological arousal, contributed to these effects. Analysis of covariance indicated that females contributed more than males to overall levels of collaboration, but that the interaction of friendship and visibility was independent of that effect. These findings highlight the critical role of partner visibility in the collaborative success of friends.
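
The abstract does not spell out the optimal individual performance model, so the Python sketch below uses a simple stand-in: the faster of the two members on matched trials serves as the individual baseline, and efficiency is the ratio of that baseline to the pair's performance. Treat it as a hypothetical illustration rather than the authors' method.

import numpy as np

def collaborative_efficiency(team_rt, solo_rt_a, solo_rt_b):
    # Hypothetical index: best-individual baseline divided by team performance.
    # Values above 1 mean the pair outperformed its better member.
    baseline = np.minimum(solo_rt_a, solo_rt_b)   # faster member, trial by trial
    return np.mean(baseline) / np.mean(team_rt)

# Hypothetical response times in milliseconds (illustration only)
print(collaborative_efficiency(np.array([820, 790, 760]),
                               np.array([900, 880, 870]),
                               np.array([950, 840, 910])))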


Archive | 2017

Attention in action and perception: Unitary or separate mechanisms of selectivity?

James T. Enns; Allison A. Brennan; Robert L. Whitwell

What is the relation between the two-visual-streams hypothesis and selective visual attention? In this chapter, we first consider this question at a theoretical level before presenting an example of work from our lab that examines the question: Under what conditions does the emotional content of a visual object influence visually guided action? Previous research has demonstrated that fear can influence perception, both consciously and unconsciously, but it is unclear when fear influences visually guided action. The study tested participants with varying degrees of spider phobia on two visually guided pointing tasks, while manipulating the emotional valence of the target (positive and negative) and the cognitive load of the participant (single vs. dual task). Participants rapidly moved their finger from a home position to a suddenly appearing target image on a touch screen. The images were emotionally negative (e.g., spiders and scorpions) or positive (e.g., flowers and food). In order to test the effect of emotional valence on the online control of the reach, the target either remained static or jumped to a new location. In both the single and dual tasks, a stream of digits was presented on the screen near the finger's starting location, but only in the dual task were participants asked to identify a letter somewhere in the stream. In the single task, increased fear of spiders reduced the speed and accuracy of the movement. In the dual task, increased fear impaired letter identification, but pointing actions were now equally efficient for low- and high-fear participants. These results imply that the finger's autopilot is influenced by emotional content only when attention can be fully devoted to the identification of the emotion-evoking images. As such, the results support the view that the mechanisms of selection are not the same in the two visual streams.


Canadian Journal of Experimental Psychology | 2017

Lifespan changes in attention revisited: everyday visual search

Allison A. Brennan; Alison J. Bruderer; Teresa Liu-Ambrose; Todd C. Handy; James T. Enns

This study compared visual search under everyday conditions among participants across the life span (healthy participants in 4 groups, with average ages of 6 years, 8 years, 22 years, and 75 years, and 1 group averaging 73 years with a history of falling). The task involved opening a door and stepping into a room to find 1 of 4 everyday objects (apple, golf ball, coffee can, toy penguin) visible on shelves. The background for this study included 2 well-cited laboratory studies that pointed to different cognitive mechanisms underlying each end of the U-shaped pattern of visual search over the life span (Hommel et al., 2004; Trick & Enns, 1998). The results recapitulated some of the main findings of the laboratory studies (e.g., a U-shaped function, dissociable factors for maturation and aging), but there were several unique findings. These included large differences in the baseline salience of common objects at different ages, visual eccentricity effects that were unique to aging, and visual field effects that interacted strongly with age. These findings highlight the importance of studying cognitive processes in more natural settings, where factors such as personal relevance, life history, and bodily contributions to cognition (e.g., limb, head, and body movements) are more readily revealed.


Behavioral and Brain Sciences | 2016

But is it social? How to tell when groups are more than the sum of their members

Allison A. Brennan; James T. Enns

Failure to distinguish between statistical effects and genuine social interaction may lead to unwarranted conclusions about the role of self-differentiation in group function. We offer an introduction to these issues from the perspective of recent research on collaborative cognition.

Collaboration


Allison A. Brennan's most frequent co-authors and their affiliations.

Top Co-Authors

James T. Enns | University of British Columbia
Alan Kingstone | University of British Columbia
Marcus R. Watson | University of British Columbia
Ana Pesquita | University of British Columbia
Alison J. Bruderer | University of Northern British Columbia
Geniva Liu | University of British Columbia
Kim-Long Ngan Ta | University of British Columbia