Publications


Featured research published by Kyle R. Cave.


Journal of Experimental Psychology: Human Perception and Performance | 1989

Guided search: an alternative to the feature integration model for visual search.

Jeremy M. Wolfe; Kyle R. Cave; Susan L. Franzel

Subjects searched sets of items for targets defined by conjunctions of color and form, color and orientation, or color and size. Set size was varied and reaction times (RT) were measured. For many unpracticed subjects, the slopes of the resulting RT × Set Size functions are too shallow to be consistent with Treisman's feature integration model, which proposes serial, self-terminating search for conjunctions. Searches for triple conjunctions (Color × Size × Form) are easier than searches for standard conjunctions and can be independent of set size. A guided search model similar to Hoffman's (1979) two-stage model can account for these data. In the model, parallel processes use information about simple features to guide attention in the search for conjunctions. Triple conjunctions are found more efficiently than standard conjunctions because three parallel processes can guide attention more effectively than two.
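
The closing claim, that three guiding feature processes locate a conjunction target more efficiently than two, can be made concrete with a toy simulation. The sketch below is an illustration under stated assumptions, not Wolfe, Cave, and Franzel's model or code: the display composition, the one-unit-per-matching-feature activation rule, the Gaussian noise level, and the "items inspected" stand-in for reaction time are all invented for demonstration.

```python
import random

def guided_search(items, target_features, noise=0.75):
    """Toy Guided Search sketch (not the authors' simulation).

    A parallel stage gives each item one unit of top-down activation per
    target feature it shares, plus Gaussian noise standing in for
    bottom-up activation. A serial stage then inspects items in order of
    decreasing activation until it reaches the target. The number of
    items inspected is a crude stand-in for reaction time.
    """
    def is_target(item):
        return all(item[f] == v for f, v in target_features.items())

    activation = [
        sum(item[f] == v for f, v in target_features.items())
        + random.gauss(0, noise)
        for item in items
    ]
    order = sorted(range(len(items)), key=lambda i: -activation[i])
    for inspected, i in enumerate(order, start=1):
        if is_target(items[i]):
            return inspected
    return len(items)

# Target is the unique red, X-shaped, large item; every distractor shares
# exactly one feature with it.
random.seed(0)
distractors = (
    [{"color": "red", "shape": "O", "size": "small"}] * 6
    + [{"color": "green", "shape": "X", "size": "small"}] * 6
    + [{"color": "green", "shape": "O", "size": "large"}] * 6
)
display = distractors + [{"color": "red", "shape": "X", "size": "large"}]

trials = 500
double = sum(guided_search(display, {"color": "red", "shape": "X"})
             for _ in range(trials)) / trials
triple = sum(guided_search(display, {"color": "red", "shape": "X", "size": "large"})
             for _ in range(trials)) / trials
print(f"mean items inspected, Color x Shape conjunction:        {double:.2f}")
print(f"mean items inspected, Color x Shape x Size conjunction: {triple:.2f}")
```

Under these assumptions the triple-conjunction target exceeds every distractor by two units of guidance rather than one, so the serial stage usually reaches it sooner, which is the pattern the abstract describes.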


Cognitive Psychology | 1990

Modeling the role of parallel processing in visual search

Kyle R. Cave; Jeremy M. Wolfe

Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and resolves these problems. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.


Journal of Cognitive Neuroscience | 1989

Why are “what” and “where” processed by separate cortical visual systems? A computational investigation

Jay G. Rueckl; Kyle R. Cave; Stephen M. Kosslyn

In the primate visual system, the identification of objects and the processing of spatial information are accomplished by different cortical pathways. The computational properties of this two-systems design were explored by constructing simplifying connectionist models. The models were designed to simultaneously classify and locate shapes that could appear in multiple positions in a matrix, and the ease of forming representations of the two kinds of information was measured. Some networks were designed so that all hidden nodes projected to all output nodes, whereas others had the hidden nodes split into two groups, with some projecting to the output nodes that registered shape identity and the remainder projecting to the output nodes that registered location. The simulations revealed that splitting processing into separate streams for identifying and locating a shape led to better performance only under some circumstances. Provided that enough computational resources were available in both streams, split networks were able to develop more efficient internal representations, as revealed by detailed analyses of the patterns of weights between connections.
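
A minimal sketch may help make the shared-versus-split comparison concrete. Everything specific below (the 4x4 grid, the three 2x2 shape patterns, the nine placements, the hidden-layer size, and the plain backpropagation training loop) is a hypothetical stand-in rather than the networks reported by Rueckl, Cave, and Kosslyn; only the structural contrast between all hidden units feeding both output groups and hidden units dedicated to "what" or "where" follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "what/where" task: a 2x2 pattern placed somewhere in a 4x4 grid.
SHAPES = [np.array([[1, 1], [1, 1]]),   # block
          np.array([[1, 0], [0, 1]]),   # diagonal
          np.array([[1, 1], [0, 0]])]   # bar
POSITIONS = [(r, c) for r in range(3) for c in range(3)]   # 9 placements

def make_example(shape_id, pos_id):
    grid = np.zeros((4, 4))
    r, c = POSITIONS[pos_id]
    grid[r:r + 2, c:c + 2] = SHAPES[shape_id]
    what = np.eye(len(SHAPES))[shape_id]        # identity outputs
    where = np.eye(len(POSITIONS))[pos_id]      # location outputs
    return grid.flatten(), np.concatenate([what, where])

X, Y = map(np.array, zip(*[make_example(s, p)
                           for s in range(3) for p in range(9)]))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(hidden=8, split=False, epochs=5000, lr=1.0):
    """Train a one-hidden-layer network with backpropagation. If split,
    half the hidden units feed only the 'what' outputs (columns 0-2) and
    half feed only the 'where' outputs (columns 3-11)."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1]))
    mask = np.ones_like(W2)
    if split:
        mask[:hidden // 2, 3:] = 0   # first half of hidden units: 'what' only
        mask[hidden // 2:, :3] = 0   # second half: 'where' only
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        O = sigmoid(H @ (W2 * mask))
        dO = (Y - O) * O * (1 - O)
        dH = (dO @ (W2 * mask).T) * H * (1 - H)
        W2 += lr * (H.T @ dO) / len(X)
        W1 += lr * (X.T @ dH) / len(X)
    return np.mean((Y - sigmoid(sigmoid(X @ W1) @ (W2 * mask))) ** 2)

print("final MSE, shared hidden layer:", train(split=False))
print("final MSE, split hidden layer: ", train(split=True))
```

Masking half of the hidden-to-output weights is just a compact way to express the split; whether the split network ends up with the lower error depends on how many hidden units each stream is given, which mirrors the paper's point about computational resources.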


Psychonomic Bulletin & Review | 1999

Visuospatial attention: Beyond a spotlight model

Kyle R. Cave; Narcisse P. Bichot

Much of the research in visual attention has been driven by the spotlight metaphor. This metaphor has been useful over many years for generating experimental questions in attention research. However, theories and models of visual selection have reached such a level of complexity that debate now centers around more specific questions about the nature of attention. In this review, the general question “Is visual attention like a spotlight?” is broken down into seven specific questions concerning the nature of visual attention, and the evidence relevant to each is examined. The answers to these specific questions provide important clues about why visual selection is necessary and what role attention plays in visual cognition.


Cognition | 1984

Individual differences in mental imagery ability: A computational analysis

Stephen M. Kosslyn; Jennifer L. Brunn; Kyle R. Cave; Roger W. Wallach

This study addressed two questions: Is mental imagery ability an undifferentiated general skill, or is it composed of a number of relatively distinct subabilities? Further, if imagery is not an undifferentiated general ability, can its structure be understood in terms of the processing components posited by the Kosslyn and Shwartz theory of imagery representation? A set of tasks was devised, and a model was specified for each task. These models invoked different combinations of the processing components posited in the general theory. Fifty people were tested on the tasks, and their scores on each measure were then correlated. A very wide range of correlation coefficients was found, which suggests that the subjects were not simply good or poor at imagery in general. In addition, the similarity of each pair of processing models was computed by considering the number of shared processing components. The correlations among scores were then compared to the predicted similarities in processing and were found to be highly related. A common sense theory was also considered, based on the idea that people differ on variables such as ‘image vividness’ and ‘image control’. The Kosslyn and Shwartz theory proved a better predictor of the results than the common sense theory. Thus, imagery ability is not an undifferentiated general skill, and the underlying components bear a strong correspondence to those posited by the theory. Various additional analyses supported these conclusions.
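
The logic of that analysis (correlate task scores across people, then compare those correlations with similarities predicted from shared processing components) can be sketched as follows. The tasks, component names, and simulated scores below are hypothetical examples invented for illustration, not the battery or data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: each imagery task draws on a subset of underlying
# processing components, and a person's score on a task is the sum of their
# component abilities plus noise. (Invented tasks and components.)
COMPONENTS = ["generate", "inspect", "rotate", "maintain"]
TASK_MODELS = {
    "image_generation": {"generate", "maintain"},
    "image_scanning":   {"generate", "inspect"},
    "mental_rotation":  {"rotate", "maintain"},
    "form_comparison":  {"inspect", "rotate"},
}

n_people = 50
ability = rng.normal(size=(n_people, len(COMPONENTS)))   # per-person component abilities
scores = np.column_stack([
    ability[:, [COMPONENTS.index(c) for c in comps]].sum(axis=1)
    + rng.normal(scale=0.5, size=n_people)
    for comps in TASK_MODELS.values()
])

# Observed inter-task correlations across the 50 simulated people.
obs = np.corrcoef(scores, rowvar=False)

# Predicted similarity of each task pair = number of shared components.
tasks = list(TASK_MODELS)
pred = np.array([[len(TASK_MODELS[a] & TASK_MODELS[b]) for b in tasks]
                 for a in tasks])

# Relate observed correlations to predicted similarities (off-diagonal pairs).
iu = np.triu_indices(len(tasks), k=1)
fit = np.corrcoef(obs[iu], pred[iu])[0, 1]
print("correlation between observed and predicted task similarity:", round(fit, 2))
```

A high value of the final correlation is what would support a componential account over an undifferentiated "good or bad at imagery" account, which is the comparison the study reports.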


Attention Perception & Psychophysics | 1999

Top-down and bottom-up attentional control: on the nature of interference from a salient distractor.

Min-Shik Kim; Kyle R. Cave

In two experiments using spatial probes, we measured the temporal and spatial interactions between top-down control of attention and bottom-up interference from a salient distractor in visual search. The subjects searched for a square among circles, ignoring color. Probe response times showed that a color singleton distractor could draw attention to its location in the early stage of visual processing (before a 100-msec stimulus onset asynchrony [SOA]), but only when the color singleton distractor was located far from the target. Apparently the bottom-up activation of the singleton distractor’s location is affected early on by local interactions with nearby stimulus locations. Moreover, probe results showed that a singleton distractor did not receive attention after extended practice. These results suggest that top-down control of attention is possible at an early stage of visual processing. In the long-SOA condition (150-msec SOA), spatial attention selected the target location over distractor locations, and this tendency occurred with or without extended practice.


Neuron | 1999

The psychophysical evidence for a binding problem in human vision.

Jeremy M. Wolfe; Kyle R. Cave

… Illusory conjunctions occurred with both letters and abstract shapes and included all the features tested (color, shape, size, and solidity). Treisman and Schmidt concluded that when attention is not available to combine features correctly, they can be put together to form …


Journal of Experimental Psychology: Human Perception and Performance | 1990

Limitations on the parallel guidance of visual search: color × color and orientation × orientation conjunctions

Jeremy M. Wolfe; Karen P. Yu; Marian I. Stewart; Amy D. Shorter; Stacia R. Friedman-Hill; Kyle R. Cave

In visual search, it is much more difficult to find a conjunction of 2 colors or 2 orientations than a Color × Orientation or Color × Shape conjunction. The result is not limited to particular colors or shapes. Two colors cannot occupy the same spatial location in Color × Color searches. However, Experiments 6 and 7 show that Color × Shape searches remain efficient even if the color and shape are spatially separated. Our guided search model suggests that in searches for Color × Shape, a parallel color module can guide attention toward the correct color, whereas the shape module guides attention toward the correct shape. Together these 2 sources of guidance lead attention to the target. However, if a target is red and green among red-blue and green-blue distractors, it is not possible to guide search independently toward red items and green items or away from all blue items.


Attention Perception & Psychophysics | 1998

Spatial selection via feature-driven inhibition of distractor locations

Nicholas J. Cepeda; Kyle R. Cave; Narcisse P. Bichot; Min-Shik Kim

The allocation of spatial attention was measured with detection probes at different locations. Response times were faster for probes at the location of the target digit, which subjects reported, than at the locations of distractor digits, which they ignored. Probes at blank locations between stimuli produced fast responses, indicating that selection was accomplished by inhibiting distractor locations but not other areas. Unlike earlier studies using location cuing with simpler stimuli, these experiments showed no attentional differences across horizontal or vertical midlines. Attention varied little with distance from the target, although blank locations far from the target were somewhat less attended than were those near the target, and attention was only slightly affected by expectations for stimulus location. This task demonstrates a form of feature-driven spatial attention, in which locations with objects lacking target features are inhibited.


Journal of Experimental Psychology: General | 1989

Varieties of size-specific visual selection

Kyle R. Cave; Stephen M. Kosslyn

We compared the time needed to evaluate stimuli of varying sizes. When subjects expect an upcoming stimulus to be a certain size, response time increases with the disparity between expected and actual size. There are, however, 2 size adjustment processes, and they reflect 2 types of visual selection. In the first, a shape-specific image representation is used to separate a visual object from a superimposed distractor. These representations require the type of slow size scaling demonstrated in imagery experiments. The second size scaling process is faster and not shape-specific. At any given time the visual system is set to process information at a particular scale, and that scale can be adjusted to match an object's size. Because both selection mechanisms depend on size, they probably occur at a relatively low, spatially organized processing level. These findings lead to a new explanation for results that had been taken as evidence for attentional selection at the level of object representations.

Collaboration


Dive into Kyle R. Cave's collaborations.

Top Co-Authors

Nick Donnelly
University of Southampton

Tamaryn Menneer
University of Southampton

Michael J. Stroud
University of Massachusetts Amherst

Jeremy M. Wolfe
Brigham and Women's Hospital

Zhe Chen
University of Canterbury

Luke Phillips
University of Southampton