
Publication


Featured research published by Adam M. Larson.


Visual Cognition | 2010

The natural/man-made distinction is made before basic-level distinctions in scene gist processing

Lester C. Loschky; Adam M. Larson

What level of categorization occurs first in scene gist processing, basic level or the superordinate “natural” versus “man-made” distinction? The Spatial Envelope model of scene classification and human gist recognition (Oliva & Torralba, 2001) assumes that the superordinate distinction is made prior to basic-level distinctions. This assumption contradicts the claim that categorization occurs at the basic level before the superordinate level (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). The present study tests this assumption of the Spatial Envelope model by having viewers categorize briefly flashed and masked scenes after varying amounts of processing time. The results show that at early processing times (SOA < 72 ms), (1) viewers were more sensitive to the superordinate distinction than to basic-level distinctions, and (2) basic-level distinctions crossing the superordinate natural/man-made boundary were treated as a superordinate distinction. Both results support the assumption of the Spatial Envelope model and challenge the idea of basic-level primacy.


Journal of Experimental Psychology: Human Perception and Performance | 2014

The spatiotemporal dynamics of scene gist recognition.

Adam M. Larson; Tyler E. Freeman; Ryan Ringer; Lester C. Loschky

Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image's basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space, within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward.
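The window/scotoma paradigm mentioned above presents scene content only inside a central circular region (window) or only outside it (scotoma), with the remainder replaced by neutral gray. Below is a minimal sketch of how such a stimulus can be constructed, assuming a hard-edged circular mask and an 8-bit gray fill; the published stimuli may have used blended edges and different radii.

import numpy as np

def window_scotoma(scene, radius_px, center=None, mode="window", gray=128):
    """scene: uint8 array (H, W, 3); returns the masked stimulus."""
    h, w = scene.shape[:2]
    cy, cx = center if center is not None else (h // 2, w // 2)
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2
    keep = inside if mode == "window" else ~inside
    out = np.full_like(scene, gray)      # neutral gray everywhere...
    out[keep] = scene[keep]              # ...except the kept (window or non-scotoma) region
    return out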


Memory & Cognition | 2016

The relative roles of visuospatial and linguistic working memory systems in generating inferences during visual narrative comprehension

Joseph P. Magliano; Adam M. Larson; Karyn Higgs; Lester C. Loschky

This study investigated the relative roles of visuospatial versus linguistic working memory (WM) systems in the online generation of bridging inferences while viewers comprehend visual narratives. We contrasted these relative roles in the visuospatial primacy hypothesis versus the shared (visuospatial & linguistic) systems hypothesis, and tested them in 3 experiments. Participants viewed picture stories containing multiple target episodes consisting of a beginning state, a bridging event, and an end state, respectively, and the presence of the bridging event was manipulated. When absent, viewers had to infer the bridging-event action to comprehend the end-state image. A pilot study showed that after viewing the end-state image, participants’ think-aloud protocols contained more inferred actions when the bridging event was absent than when it was present. Likewise, Experiment 1 found longer viewing times for the end-state image when the bridging-event image was absent, consistent with viewing times revealing online inference generation processes. Experiment 2 showed that both linguistic and visuospatial WM loads attenuated the inference viewing time effect, consistent with the shared systems hypothesis. Importantly, however, Experiment 3 found that articulatory suppression did not attenuate the inference viewing time effect, indicating that (sub)vocalization did not support online inference generation during visual narrative comprehension. Thus, the results support a shared-systems hypothesis in which both visuospatial and linguistic WM systems support inference generation in visual narratives, with the linguistic WM system operating at a deeper level than (sub)vocalization.


PLOS ONE | 2015

What would Jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension

Lester C. Loschky; Adam M. Larson; Joseph P. Magliano; Tim J. Smith

What is the relationship between film viewers’ eye movements and their film comprehension? Typical Hollywood movies induce strong attentional synchrony: most viewers look at the same things at the same time. Thus, we asked whether film viewers’ eye movements would differ based on their understanding (the mental model hypothesis) or whether any such differences would be overwhelmed by viewers’ attentional synchrony (the tyranny of film hypothesis). To investigate this question, we manipulated the presence/absence of prior film context and measured resulting differences in film comprehension and eye movements. Viewers watched a 12-second James Bond movie clip, ending just as a critical predictive inference should be drawn that Bond’s nemesis, “Jaws,” would fall from the sky onto a circus tent. The No-context condition saw only the 12-second clip, but the Context condition also saw the preceding 2.5 minutes of the movie before seeing the critical 12-second portion. Importantly, the Context condition viewers were more likely to draw the critical inference and were more likely to perceive coherence across the entire six-shot sequence (as shown by event segmentation), indicating greater comprehension. Viewers’ eye movements showed strong attentional synchrony in both conditions as compared to a chance-level baseline, but only small differences between conditions. Specifically, the Context condition viewers showed slightly, but significantly, greater attentional synchrony and lower cognitive load (as shown by fixation probability) during the critical first circus tent shot. Thus, overall, the results were more consistent with the tyranny of film hypothesis than the mental model hypothesis. These results suggest the need for a theory that encompasses processes from the perception to the comprehension of film.
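Attentional synchrony in studies like this one is often quantified as the spatial dispersion of viewers' gaze at each film frame, compared against a chance-level baseline. The sketch below shows one generic way to do this (per-frame mean distance to the group gaze centroid, with a baseline built by circularly shifting each viewer's gaze trace in time); it illustrates the general measure, not necessarily the exact analysis used in the paper.

import numpy as np

def gaze_dispersion(gaze):
    """gaze: array (n_viewers, n_frames, 2) of x/y gaze positions.
    Returns the per-frame mean distance to the group centroid
    (lower dispersion = stronger attentional synchrony)."""
    centroid = np.nanmean(gaze, axis=0, keepdims=True)           # (1, n_frames, 2)
    return np.nanmean(np.linalg.norm(gaze - centroid, axis=-1), axis=0)

def chance_baseline(gaze, n_perm=200, seed=0):
    """Chance-level dispersion: circularly shift each viewer's trace by a random
    lag so gaze is no longer time-locked to the film, then recompute dispersion."""
    rng = np.random.default_rng(seed)
    n_viewers, n_frames, _ = gaze.shape
    perms = np.empty((n_perm, n_frames))
    for p in range(n_perm):
        shifted = np.stack([np.roll(gaze[v], rng.integers(n_frames), axis=0)
                            for v in range(n_viewers)])
        perms[p] = gaze_dispersion(shifted)
    return perms.mean(axis=0)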


Eye Tracking Research & Applications | 2012

Using ScanMatch scores to understand differences in eye movements between correct and incorrect solvers on physics problems

Adrian Madsen; Adam M. Larson; Lester C. Loschky; N. Sanjay Rebello

Using a ScanMatch algorithm, we investigate scan-path differences between subjects who answer physics problems correctly and incorrectly. This algorithm bins a saccade sequence spatially and temporally, recodes this information to create a sequence of letters representing fixation location, duration, and order, and compares two sequences to generate a similarity score. We recorded eye movements of 24 individuals on six physics problems containing diagrams with areas consistent with a novice-like response and areas of high perceptual salience. We calculated average ScanMatch similarity scores comparing correct solvers to one another (C-C), incorrect solvers to one another (I-I), and correct solvers to incorrect solvers (C-I). We found statistically significant differences between the C-C and I-I comparisons on only one of the problems. This seems to imply that top-down processes relying on incorrect domain knowledge, rather than bottom-up processes driven by perceptual salience, determine the eye movements of incorrect solvers.
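A rough sketch of the encode-and-align idea behind ScanMatch (Cristino et al., 2010), not the authors' implementation: fixations are binned into a spatial grid, each cell is repeated in proportion to fixation duration, and two sequences are compared with a Needleman-Wunsch aligner whose substitution score falls off with the distance between grid cells. Grid size, temporal bin, and scoring values below are illustrative assumptions.

import numpy as np

def encode(fixations, screen=(1024, 768), grid=(8, 6), time_bin=50):
    """fixations: list of (x, y, duration_ms). Returns a sequence of grid-cell
    indices, each repeated once per `time_bin` ms of fixation duration."""
    seq = []
    for x, y, dur in fixations:
        col = min(int(x / screen[0] * grid[0]), grid[0] - 1)
        row = min(int(y / screen[1] * grid[1]), grid[1] - 1)
        seq.extend([row * grid[0] + col] * max(1, round(dur / time_bin)))
    return seq

def similarity(seq_a, seq_b, grid=(8, 6), gap=-1.0):
    """Needleman-Wunsch global alignment score, normalized by the longer
    sequence; the substitution reward decreases with distance between cells."""
    max_dist = np.hypot(grid[0] - 1, grid[1] - 1)
    def sub(a, b):
        ra, ca = divmod(a, grid[0])
        rb, cb = divmod(b, grid[0])
        return 1.0 - 2.0 * np.hypot(ra - rb, ca - cb) / max_dist   # in [-1, 1]
    n, m = len(seq_a), len(seq_b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = np.arange(n + 1) * gap
    score[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i, j] = max(score[i - 1, j - 1] + sub(seq_a[i - 1], seq_b[j - 1]),
                              score[i - 1, j] + gap,
                              score[i, j - 1] + gap)
    return score[n, m] / max(n, m)

Pairwise scores within the correct-solver group (C-C), within the incorrect group (I-I), and across groups (C-I) can then be averaged as described in the abstract above.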


Physics Education Research Conference | 2010

How does visual attention differ between experts and novices on physics problems?

Adrian Carmichael; Adam M. Larson; Elizabeth Gire; Lester C. Loschky; N. Sanjay Rebello

Research in many disciplines has used eye-tracking technology to investigate the differences in the visual attention of experts and novices. For example, it has been shown that experts in art and chess spend more time than novices looking at relevant information. Thus, it may be helpful to give novices more direct insight into the way experts allocate their visual attention, for example using attentional cueing techniques. However, not much is known about how experts allocate their attention on physics problems. More specifically, we look at physics problems where the critical information needed to answer the problem is contained in a diagram. This study uses eye movements to investigate how the allocation of visual attention differs between experts and novices on these types of physics problems. We find that in several problems tested, those who answer a question correctly spend more time looking at thematically relevant areas, while those who answer incorrectly spend more time looking at perceptually salient areas.


Frontiers in Psychology | 2014

Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

Amy Rouinfar; Elise Agra; Adam M. Larson; N. Sanjay Rebello; Lester C. Loschky

This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.


Visual Cognition | 2014

Blur detection is unaffected by cognitive load

Lester C. Loschky; Ryan Ringer; Aaron Johnson; Adam M. Larson; Mark Neider; Arthur F. Kramer

Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, apparently blur detection in real-world scene images is unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
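The abstract says blur thresholds were measured with adaptive threshold estimation but does not name the procedure; a simple 2-down/1-up staircase (converging near 70.7% detection) is shown below purely as an illustrative stand-in. The step size, starting blur level, and reversal count are assumptions, and present_trial is a hypothetical callback standing in for a full trial (gaze-contingent blur presentation plus the concurrent N-back task).

def staircase_threshold(present_trial, start_sigma=4.0, step=0.5, n_reversals=8):
    """present_trial(sigma) runs one trial at Gaussian-blur level `sigma` and
    returns True if the observer reported seeing blur. Returns the threshold
    estimate as the mean blur level at the final reversals."""
    sigma, streak, direction = start_sigma, 0, -1    # start by making blur weaker
    reversals = []
    while len(reversals) < n_reversals:
        if present_trial(sigma):
            streak += 1
            if streak == 2:                          # two detections in a row: reduce blur
                streak = 0
                if direction == +1:
                    reversals.append(sigma)          # direction change = reversal
                direction = -1
                sigma = max(0.0, sigma - step)
        else:                                        # one miss: increase blur
            streak = 0
            if direction == -1:
                reversals.append(sigma)
            direction = +1
            sigma += step
    return sum(reversals[-6:]) / len(reversals[-6:])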


Archive | 2014

Commonalities and Differences in Eye Movement Behavior When Exploring Aerial and Terrestrial Scenes

Sebastian Pannasch; Jens R. Helmert; Bruce C. Hansen; Adam M. Larson; Lester C. Loschky

Eye movements can provide fast and precise insights into ongoing mechanisms of attention and information processing. In free exploration of natural scenes, it has repeatedly been shown that fixation durations increase over time, while saccade amplitudes decrease. This gaze behavior has been explained as a shift from ambient (global) to focal (local) processing as a means to efficiently understand different environments. In the current study, we analyzed eye movement behavior during the inspection of terrestrial and aerial views of real-world scene images. Our results show that the ambient to focal strategy is preserved across both perspectives. However, there are several perspective-related differences: For aerial views, the first fixation duration is prolonged, showing immediate processing difficulties. Furthermore, fixation durations and saccade amplitudes are longer throughout the overall time of scene exploration, showing continued difficulties that affect both processing of information and image scanning strategies. The temporal and spatial scanning of aerial views is also less similar between observers than for terrestrial scenes, suggesting an inability to use normal scanning patterns. The observed differences in eye movement behavior when inspecting terrestrial and aerial views suggest an increased processing effort for visual information that deviates from our everyday experiences.
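The ambient-to-focal shift described above is typically read off two simple summary statistics: mean fixation duration and mean outgoing-saccade amplitude as a function of time into the trial. A minimal sketch under assumed field names and a 1-second bin width:

import numpy as np

def ambient_focal_profile(fixations, bin_ms=1000):
    """fixations: chronologically ordered dicts with 'onset' (ms), 'duration' (ms),
    'x', 'y'. Returns {time_bin: (mean_fixation_duration, mean_saccade_amplitude)};
    ambient viewing shows short fixations and large saccades, focal viewing the reverse."""
    per_bin = {}
    for fix, nxt in zip(fixations, fixations[1:]):
        amp = np.hypot(nxt['x'] - fix['x'], nxt['y'] - fix['y'])    # outgoing saccade
        per_bin.setdefault(int(fix['onset'] // bin_ms), []).append((fix['duration'], amp))
    return {b: (float(np.mean([d for d, _ in v])), float(np.mean([a for _, a in v])))
            for b, v in sorted(per_bin.items())}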


Journal of Vision | 2015

When Does Scene Categorization Inform Action Recognition?

Adam M. Larson; Melinda Lee

When comprehending a film, viewers rapidly construct a working memory representation of the narrative called an event model. These models encode the story location first (Kitchen vs. Park), followed by the character's action (Cooking vs. Washing dishes) (Larson, Hendry, & Loschky, 2012). This time course for scene and action categorization was also supported by recent research showing that action recognition is better when the action is embedded in a real scene than on a gray background. However, this benefit was not present at early processing times (< 50 ms SOA) (Larson et al., 2013). This suggests that scene and action recognition are functionally isolated processes at early processing times. However, this conclusion may be an artifact of the design used. Namely, actions from the same scene category were presented in blocks, allowing participants in the gray background condition to predict the scene category that would be presented without relying on the scene's perceptual information. If true, then presenting actions in a random sequence should eliminate this advantage. Participants were assigned to one of three different viewing conditions. Actions were presented either in their original scene background, on a neutral gray background, or on a texture background generated from the original scene (Portilla & Simoncelli, 2000). Visual masking was used to control processing time, which varied from 24 to 365 ms SOA. Afterwards, a valid or invalid action-category post-cue was presented, requiring participants to make a Yes/No response. The results show no difference between the original and gray background conditions at early processing times (< 50 ms SOA), but both conditions were better than the texture background. After 50 ms SOA, performance for the original background was greater than for the gray and texture conditions. The data indicate that sufficient scene categorization processing (~50 ms SOA) is required before it can inform action categorization. Meeting abstract presented at VSS 2015.
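The valid/invalid post-cue with a Yes/No response is the standard setup for a signal detection analysis: a "yes" to a valid cue is a hit, a "yes" to an invalid cue is a false alarm, and sensitivity can be summarized as d'. Below is a minimal sketch using the log-linear correction of Hautus (1995) for extreme rates; the abstract does not state which correction or sensitivity measure was actually used, and the counts in the example are hypothetical.

from scipy.stats import norm

def d_prime(hits, n_valid, false_alarms, n_invalid):
    """Sensitivity (d') from hit and false-alarm counts, with a log-linear
    correction so rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (n_valid + 1)
    fa_rate = (false_alarms + 0.5) / (n_invalid + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: 42 hits on 48 valid-cue trials, 10 false alarms on 48 invalid-cue trials
print(d_prime(42, 48, 10, 48))   # approximately 1.9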

Collaboration


Dive into Adam M. Larson's collaborations.

Top Co-Authors

Ryan Ringer (Kansas State University)

Amy Rouinfar (Kansas State University)

Elise Agra (Kansas State University)

Joseph P. Magliano (Northern Illinois University)

Mark Neider (University of Central Florida)