Publication


Featured research published by Lester C. Loschky.


Studies in Second Language Acquisition | 1994

Comprehensible Input and Second Language Acquisition: What Is the Relationship?

Lester C. Loschky

This study attempts to test aspects of the input hypothesis (Krashen, 1980, 1983, 1985) and Long's modification of it (Long, 1980, 1983a, 1985). Specifically, it experimentally tests the hypothesis that both input and interactional modifications facilitate second language acquisition, using Japanese as the target language. Three experimental groups were differentiated in terms of input and interaction conditions: (1) unmodified input with no interaction, (2) premodified input with no interaction, and (3) unmodified input with the chance for negotiated interaction. The groups were compared in terms of (a) their degree of comprehension of the input and (b) their subsequent retention of vocabulary items and acquisition of two Japanese locative structures. The results indicated that moment-to-moment comprehension was highest for the negotiated interaction group, whereas there was no significant difference between the two noninteraction groups. Furthermore, no correlation was found between differences in moment-to-moment comprehension and gains in vocabulary recognition and acquisition of structures, though significant gains on both measures were found for all three groups. Discussion of these findings centers on the relationship between comprehension and acquisition.


Human Factors | 2003

Gaze-contingent multiresolutional displays: an integrative review

Eyal M. Reingold; Lester C. Loschky; George W. McConkie; David M. Stampe

Gaze-contingent multiresolutional displays (GCMRDs) center high-resolution information on the user's gaze position, matching the user's area of interest (AOI). Image resolution and details outside the AOI are reduced, lowering the requirements for processing resources and transmission bandwidth in demanding display and imaging applications. This review provides a general framework within which GCMRD research can be integrated, evaluated, and guided. GCMRDs (or “moving windows”) are analyzed in terms of (a) the nature of their images (i.e., “multiresolution,” “variable resolution,” “space variant,” or “level of detail”), and (b) the movement of the AOI (i.e., “gaze contingent,” “foveated,” or “eye slaved”). We also synthesize the known human factors research on GCMRDs and point out important questions for future research and development. Actual or potential applications of this research include flight, medical, and driving simulators; virtual reality; remote piloting and teleoperation; infrared and indirect vision; image transmission and retrieval; telemedicine; video teleconferencing; and artificial vision systems.
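The core GCMRD idea described above — full resolution at the gaze point, coarser rendering with increasing eccentricity — can be sketched in a few lines. This is a minimal illustration, not the implementation from the review; the function names, the pixels-per-degree parameter, and the ring boundaries are all hypothetical:

```python
import math

def eccentricity_deg(px, py, gx, gy, ppd):
    """Angular distance in degrees between a pixel (px, py) and the
    gaze point (gx, gy), using a small-angle approximation.
    ppd = display pixels per degree of visual angle (assumed known)."""
    return math.hypot(px - gx, py - gy) / ppd

def resolution_level(ecc_deg, rings=(2.0, 5.0, 10.0)):
    """Map eccentricity to a discrete level of detail:
    0 = full resolution inside the AOI, larger = coarser.
    The ring edges here are arbitrary example values."""
    for level, edge in enumerate(rings):
        if ecc_deg <= edge:
            return level
    return len(rings)
```

A renderer would call `resolution_level` per tile or per pixel each frame, re-centering the AOI on every new gaze sample — which is what makes the display "gaze contingent."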


Visual Cognition | 2005

The limits of visual resolution in natural scene viewing

Lester C. Loschky; George W. McConkie; Jian Yang; Michael E. Miller

We examined the limits of visual resolution in natural scene viewing, using a gaze-contingent multiresolutional display having a gaze-centred area-of-interest and decreasing resolution with eccentricity. Twelve participants viewed high-resolution scenes in which gaze-contingent multiresolutional versions occasionally appeared for single fixations. Both detection of image degradation (five filtering levels plus a no-area-of-interest control) in the gaze-contingent multiresolutional display, and eye fixation durations, were well predicted by a model of eccentricity-dependent contrast sensitivity. The results also illuminate the time course of detecting image filtering. Detection did not occur for fixations below 100 ms, and reached asymptote for fixations above 200 ms. Detectable filtering lengthened fixation durations by 160 ms, and interference from an imminent manual response occurred by 400-450 ms, often lengthening the next fixation. We provide an estimate of the limits of visual resolution in natural scene viewing useful for theories of scene perception, and help bridge the literature on spatial vision and eye movement control.
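The "model of eccentricity-dependent contrast sensitivity" mentioned above can be illustrated with one standard formulation from this literature, Geisler and Perry's (1998) contrast-threshold model; the exact parameter fit used in the paper may differ, so treat the constants below as illustrative assumptions:

```python
import math

# Illustrative parameters from Geisler & Perry (1998); the paper's
# own model fit may use different values.
ALPHA = 0.106    # spatial-frequency decay constant
E2 = 2.3         # half-resolution eccentricity (degrees)
CT0 = 1.0 / 64   # minimum contrast threshold

def contrast_threshold(f_cpd, ecc_deg):
    """Contrast threshold for spatial frequency f_cpd (cycles/degree)
    at retinal eccentricity ecc_deg (degrees)."""
    return CT0 * math.exp(ALPHA * f_cpd * (ecc_deg + E2) / E2)

def cutoff_frequency(ecc_deg):
    """Highest resolvable spatial frequency at a given eccentricity,
    i.e., the f at which the threshold reaches maximum contrast (1.0)."""
    return E2 * math.log(1.0 / CT0) / (ALPHA * (ecc_deg + E2))
```

Because `cutoff_frequency` falls steeply with eccentricity, filtering that stays below this cutoff at every eccentricity should be undetectable — the logic behind predicting detection of the gaze-contingent degradation described in the abstract.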


Attention, Perception, & Psychophysics | 2005

Eye movements serialize memory for objects in scenes

Gregory J. Zelinsky; Lester C. Loschky

A gaze-contingent short-term memory paradigm was used to obtain forgetting functions for realistic objects in scenes. Experiment 1 had observers freely view nine-item scenes. After observers’ gaze left a predetermined target, they could fixate from 1–7 intervening nontargets before the scene was replaced by a spatial probe at the target location. The task was then to select the target from four alternatives. A steep recency benefit was found over the 1–2 intervening object range that declined into an above-chance prerecency asymptote over the remainder of the forgetting function. In Experiment 2, we used sequential presentation and variable delays to explore the contributions of decay and extrafoveal processes to these behaviors. We conclude that memory for objects in scenes, when serialized by fixation sequence, shows recency and prerecency effects that are similar to isolated objects presented sequentially over time. We discuss these patterns in the context of the serial order memory literature and object file theory.


Eye Tracking Research & Applications | 2000

User performance with gaze contingent multiresolutional displays

Lester C. Loschky; George W. McConkie

One way to economize on bandwidth in single-user head-mounted displays is to put high-resolution information only where the user is currently looking. This paper summarizes results from a series of 6 studies investigating spatial, resolutional, and temporal parameters affecting perception and performance in such eye-contingent multi-resolutional displays. Based on the results of these studies, suggestions are made for the design of eye-contingent multi-resolutional displays.


ACM Transactions on Multimedia Computing, Communications, and Applications | 2007

How late can you update gaze-contingent multiresolutional displays without detection?

Lester C. Loschky; Gary S. Wolverton

This study investigated perceptual disruptions in gaze-contingent multiresolutional displays (GCMRDs) due to delays in updating the center of highest resolution after an eye movement. GCMRDs can be used to save processing resources and transmission bandwidth in many types of single-user display applications, such as virtual reality, video-telephony, simulators, and remote piloting. The current study found that image update delays as late as 60 ms after an eye movement did not significantly increase the detectability of image blur and/or motion transients due to the update. This is good news for designers of GCMRDs, since 60 ms is ample time to update many GCMRDs after an eye movement without disrupting perception. The study also found that longer eye movements led to greater blur and/or transient detection due to moving the eyes further into the low-resolution periphery, effectively reducing the image resolution at fixation prior to the update. In GCMRD applications where longer saccades are more likely (e.g., displays with relatively large distances between objects), this problem could be overcome by increasing the size of the region of highest resolution.
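The two practical takeaways above — updates landing within roughly 60 ms of saccade end go unnoticed, and displays with longer saccades need a larger high-resolution region — can be sketched as simple checks. These are illustrative helpers with hypothetical names and a hypothetical scaling factor, not code from the study:

```python
def within_update_window(saccade_end_ms, update_done_ms, window_ms=60):
    """True if the display update completes within the post-saccade
    window (default ~60 ms, per the study's finding) in which updates
    went undetected."""
    return 0 <= update_done_ms - saccade_end_ms <= window_ms

def aoi_radius_deg(base_radius_deg, typical_saccade_deg, scale=0.5):
    """Enlarge the region of highest resolution for applications where
    longer saccades are expected; `scale` is an arbitrary tuning
    factor, not a value from the paper."""
    return base_radius_deg + scale * typical_saccade_deg
```

A GCMRD scheduler could log `within_update_window` per saccade to verify the system meets the deadline under load.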


Journal of Experimental Psychology: Human Perception and Performance | 2007

The Importance of Information Localization in Scene Gist Recognition

Lester C. Loschky; Amit Sethi; Daniel J. Simons; Tejaswi N. Pydimarri; Daniel Ochs; Jeremy L. Corbeille

People can recognize the meaning or gist of a scene from a single glance, and a few recent studies have begun to examine the sorts of information that contribute to scene gist recognition. The authors of the present study used visual masking coupled with image manipulations (randomizing phase while maintaining the Fourier amplitude spectrum; random image structure evolution [RISE]; J. Sadr & P. Sinha, 2004) to explore whether and when unlocalized Fourier amplitude information contributes to gist perception. In 4 experiments, the authors found that differences between scene categories in the Fourier amplitude spectrum are insufficient for gist recognition or gist masking. Whereas the global 1/f spatial frequency amplitude spectra of scenes plays a role in gist masking, local phase information is necessary for gist recognition and for the strongest gist masking. Moreover, the ability to recognize the gist of a target image was influenced by mask recognizability, suggesting that conceptual masking occurs even at the earliest stages of scene processing.


Visual Cognition | 2010

The natural/man-made distinction is made before basic-level distinctions in scene gist processing

Lester C. Loschky; Adam M. Larson

What level of categorization occurs first in scene gist processing, basic level or the superordinate “natural” versus “man-made” distinction? The Spatial Envelope model of scene classification and human gist recognition (Oliva & Torralba, 2001) assumes that the superordinate distinction is made prior to basic-level distinctions. This assumption contradicts the claim that categorization occurs at the basic level before the superordinate level (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). The present study tests this assumption of the Spatial Envelope model by having viewers categorize briefly flashed and masked scenes after varying amounts of processing time. The results show that early levels of processing (SOA < 72 ms) (1) produced greater sensitivity to the superordinate distinction than to basic-level distinctions, and (2) caused basic-level distinctions crossing the superordinate natural/man-made boundary to be treated as superordinate distinctions. Both results support the assumption of the Spatial Envelope model and challenge the idea of basic-level primacy.


Behavior Research Methods, Instruments, & Computers | 2002

Perception onset time during fixations in free viewing

George W. McConkie; Lester C. Loschky

In this study, we investigated when visual perception begins in fixations. During picture viewing, the picture was degraded at the beginning of selected saccades and changed back to the original after varying intervals. Participants manually responded whenever they detected changes. The change-backs were undetected when they occurred <6 msec after the end of the saccade, marked by the peak of the overshoot in dual Purkinje image eyetracker data, and detection reached asymptote 32 msec after that marker. Eye velocity at the change-back time also affected detection likelihood. Apparently, perception begins around the time at which the eyes stop rotating at the end of a saccade, giving a psychological justification for measuring fixation durations from then. This also specifies the deadline for gaze-contingent display changes to occur without detectable image motion. Investigators using the dual Purkinje image eyetracker should consider the peak of the overshoot as the fixation onset time and measure intrafixational presentation times from then.


Attention, Perception, & Psychophysics | 2010

The Role of Higher Order Image Statistics in Masking Scene Gist Recognition

Lester C. Loschky; Bruce C. Hansen; Amit Sethi; Tejaswi N. Pydimarri

In the present article, we investigated whether higher order image statistics, which are known to be carried by the Fourier phase spectrum, are sufficient to affect scene gist recognition. In Experiment 1, we compared the scene gist masking strength of four masking image types that varied in their degrees of second- and higher order relationships: normal scene images, scene textures, phase-randomized scene images, and white noise. Masking effects were the largest for masking images that possessed significant higher order image statistics (scene images and scene textures) as compared with masking images that did not (phase-randomized scenes and white noise), with scene image masks yielding the largest masking effects. In a control study, we eliminated all differences in the second-order statistics of the masks, while maintaining differences in their higher order statistics by comparing masking by scene textures rather than by their phase-randomized versions, and showed that the former produced significantly stronger gist masking. Experiments 2 and 3 were designed to test whether conceptual masking could account for the differences in the strength of the scene texture and phase-randomized masks used in Experiment 1, and revealed that the recognizability of scene texture masks explained just 1% of their masking variance. Together, the results suggest that (1) masks containing the higher order statistical structure of scenes are more effective at masking scene gist processing than are masks lacking such structure, and (2) much of the disruption of scene gist recognition that one might be tempted to attribute to conceptual masking is due to spatial masking.

Collaboration


Dive into Lester C. Loschky's collaborations.

Top Co-Authors

Ryan Ringer

Kansas State University

John Hutson

Kansas State University

Joseph P. Magliano

Northern Illinois University

Mark Neider

University of Central Florida