Publication


Featured research published by Caitlyn McColeman.


Topics in Cognitive Science | 2017

Using Video Game Telemetry Data to Research Motor Chunking, Action Latencies, and Complex Cognitive-Motor Skill Learning

Joseph J. Thompson; Caitlyn McColeman; Ekaterina R. Stepanova; Mark R. Blair

Many theories of complex cognitive-motor skill learning are built on the notion that basic cognitive processes group actions into easy-to-perform sequences. The present work examines predictions derived from laboratory-based studies of motor chunking and motor preparation using data collected from the real-time strategy video game StarCraft 2. We examined 996,163 action sequences in the telemetry data of 3,317 players across seven levels of skill. As predicted, the latency to the first action (thought to be the beginning of a chunked sequence) is delayed relative to the other actions in the group. Other predictions, inspired by the memory drum theory of Henry and Rogers, received only weak support.


PLOS ONE | 2014

Learning-Induced Changes in Attentional Allocation during Categorization: A Sizable Catalog of Attention Change as Measured by Eye Movements

Caitlyn McColeman; Jordan I. Barnes; Lihan Chen; Kimberly Meier; R. Calen Walshe; Mark R. Blair

Learning how to allocate attention properly is essential for success at many categorization tasks. Advances in our understanding of learned attention are stymied by a chicken-and-egg problem: there are no theoretical accounts of learned attention that predict patterns of eye movements, making data collection difficult to justify, and there are not enough datasets to support the development of a rich theory of learned attention. The present work addresses this by reporting five measures relating to the overt allocation of attention across 10 category learning experiments: accuracy, probability of fixating irrelevant information, number of fixations to category features, the amount of change in the allocation of attention (using a new measure called Time Proportion Shift, TIPS), and a measure of the relationship between attention change and erroneous responses. Using these measures, the data suggest that eye movements are not substantially connected to error in most cases and that aggregate trial-by-trial attention change is generally stable across a number of changing task variables. The data presented here provide a target for computational models that aim to account for changes in overt attentional behaviors across learning.


Visual Cognition | 2013

The relationship between saccade velocity, fixation duration, and salience in category learning

Caitlyn McColeman; Mark R. Blair

Information in our visual environment enters the processing stream through a series of fixations punctuated by saccades. Patterns of a participant's eye movements can indicate his or her perceived salience of parts of the environment, or the endogenous priority to attend to certain elements of the visual space. Previous work has shown interesting interactions between low-level visual attention and higher-level processing. For example, research has shown that fixation latencies to task-relevant items are longer than fixations to irrelevant parts of the environment (Blair, Watson, Walshe, & Maj, 2009), and reports from van Zoest and colleagues (van Zoest, Donk, & Theeuwes, 2004; van Zoest, Hunt, & Kingstone, 2010) indicate that saccades under conscious control are slower than those that are automatically deployed. In this study, we measured the influence of salient distractors on eye movements (fixation durations and saccade speeds) during learning in a category learning task, thus exploring the influence of salience and task knowledge in parallel. The category learning task was relatively complex in that it required the participant to learn which features were important in predicting category membership, how to combine the features to make a correct category decision, and how to optimize their time and energy by making fixations only to information that informed their category decision. To this end, we explore how …


International Conference of Design, User Experience, and Usability | 2017

Design-Based Evidence Collection and Evidence-Based Design (DEED) Model

Caitlyn McColeman; Robin Barrett; Mark R. Blair

The DEED (design-based evidence collection and evidence-based design) model offers a structure in which designers and scientists can effectively support one another in the development of both design and knowledge. The model offers one possible implementation of the applied and basic combined strategy of research [1]. DEED offers a design strategy that …


Cognitive Science | 2011

A Tale of Two Processes: Categorization Accuracy and Attentional Learning Dissociate with Imperfect Feedback

Caitlyn McColeman; Aaron Ancell; Mark R. Blair


Cognitive Science | 2014

RLAttn: An actor-critic model of eye movements during category learning

Jordan I. Barnes; Caitlyn McColeman; Ekaterina R. Stepanova; Mark R. Blair; R. Calen Walshe


Cognitive Science | 2014

Task relevance moderates saccade velocities to spatially separated cues

Caitlyn McColeman; Mark R. Blair


Journal of Vision | 2013

The Influence of Salient Distractors over the Course of a Category Learning Task

Caitlyn McColeman; Mark R. Blair


Cognitive Science | 2015

During category learning, top-down and bottom-up processes battle for control of the eyes

Caitlyn McColeman; Mark R. Blair


Archive | 2011

Category learning with imperfect feedback

Caitlyn McColeman; Aaron Ancell; Mark R. Blair

Collaboration


Dive into Caitlyn McColeman's collaborations.

Top Co-Authors

Kimberly Meier

University of British Columbia


Lihan Chen

Simon Fraser University
