
Publication


Featured research published by James F. Juola.


Attention Perception & Psychophysics | 1969

Processing time as influenced by the number of elements in a visual display.

Richard C. Atkinson; J. E. Holmgren; James F. Juola

In a visual-detection experiment, a display of several letters was presented, and S was to report the presence or absence of a given target letter. The results are clearly incompatible with a self-terminating visual-scanning process as hypothesized by Sternberg (1967). Two models are considered, a serial exhaustive scanning process and a parallel exhaustive process, but findings from the present study do not provide a basis for differentiating between them.
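The two scanning models contrasted here make different quantitative predictions about how mean response time should grow with display size. The sketch below is purely illustrative and not from the paper; it uses hypothetical base-time and per-item comparison-time parameters to show the classic pattern: an exhaustive scan predicts equal set-size slopes on target-present and target-absent trials, whereas a self-terminating scan predicts a present-trial slope about half the absent-trial slope.

```python
# Illustrative sketch of predicted mean RTs (ms) under two visual-scanning models.
# All parameters are hypothetical, chosen only to show the qualitative slope pattern.

BASE_RT = 400.0      # residual time for encoding and responding (ms), assumed
SCAN_TIME = 30.0     # time to compare one display element (ms), assumed

def exhaustive_rt(n_items: int) -> float:
    """Exhaustive scan: every element is checked, whether the target is present or not."""
    return BASE_RT + SCAN_TIME * n_items

def self_terminating_rt(n_items: int, target_present: bool) -> float:
    """Self-terminating scan: stops at the target, so present trials check
    (n + 1) / 2 elements on average; absent trials still check all n."""
    checked = (n_items + 1) / 2 if target_present else n_items
    return BASE_RT + SCAN_TIME * checked

if __name__ == "__main__":
    for n in (1, 3, 5):
        print(f"n={n}: exhaustive (present or absent) = {exhaustive_rt(n):.0f} ms, "
              f"self-terminating present = {self_terminating_rt(n, True):.0f} ms, "
              f"self-terminating absent = {self_terminating_rt(n, False):.0f} ms")
```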


Attention Perception & Psychophysics | 1990

Voluntary allocation versus automatic capture of visual attention

C. Bruce Warner; James F. Juola; Hideya Koshino

Is there a difference in the kind of attention elicited by an abrupt-onset peripheral cue and that elicited by an instruction (e.g., a central arrow cue) to move attention to a peripheral location? In Experiment 1, we found that peripheral cues are no more effective in orienting attention than are central cues. No evidence was found for separable attentional systems consisting of a volitional response to central cues and an automatic response triggered only by peripheral cues. Rather, an identical or similar attentional process seems to be activated by either type of cue, although perhaps in different ways. Peripheral cues seem to have an automatic component, however, in that once attention is engaged by a peripheral cue, it cannot easily be disengaged and refocused elsewhere. In Experiment 2, after several sessions of practice, subjects were able to circumvent automatic attentional capture by an abrupt-onset peripheral cue and to volitionally redirect the focus of attention. Thus, attentional capture by abrupt-onset stimuli is not strongly automatic.


Attention Perception & Psychophysics | 2008

Audiovisual synchrony and temporal order judgments: Effects of experimental method and stimulus type

Rob van Eijk; Armin Kohlrausch; James F. Juola; Steven van de Par

When an audio-visual event is perceived in the natural environment, a physical delay will always occur between the arrival of the leading visual component and that of the trailing auditory component. This natural timing relationship suggests that the point of subjective simultaneity (PSS) should occur at an auditory delay greater than or equal to 0 msec. A review of the literature suggests that PSS estimates derived from a temporal order judgment (TOJ) task differ from those derived from a synchrony judgment (SJ) task, with (unnatural) auditory-leading PSS values reported mainly for the TOJ task. We report data from two stimulus types that differed in terms of complexity, namely, (1) a flash and a click and (2) a bouncing ball and an impact sound. The same participants judged the temporal order and synchrony of both stimulus types, using three experimental methods: (1) a TOJ task with two response categories (“audio first” or “video first”), (2) an SJ task with two response categories (“synchronous” or “asynchronous”; SJ2), and (3) an SJ task with three response categories (“audio first,” “synchronous,” or “video first”; SJ3). Both stimulus types produced correlated PSS estimates with the SJ tasks, but the estimates from the TOJ procedure were uncorrelated with those obtained from the SJ tasks. These results suggest that the SJ task should be preferred over the TOJ task when the primary interest is in perceived audio-visual synchrony.
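The PSS comparison described here rests on fitting response proportions as a function of audiovisual delay. As a minimal sketch with invented data, and not the authors' actual analysis, the snippet below estimates a TOJ-style PSS as the 50% point of a cumulative Gaussian fitted to "video first" proportions, and an SJ-style PSS as the peak of a Gaussian fitted to "synchronous" proportions.

```python
# Minimal sketch of PSS estimation from made-up response proportions.
# Positive SOA = audio delayed relative to video (ms). Not the paper's data or method.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)

# Hypothetical proportion of "video first" responses (TOJ task).
p_video_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])

# Hypothetical proportion of "synchronous" responses (SJ2 task).
p_sync = np.array([0.10, 0.40, 0.75, 0.90, 0.85, 0.60, 0.15])

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function; mu is the TOJ-based PSS."""
    return norm.cdf(x, loc=mu, scale=abs(sigma))

def gauss(x, mu, sigma, amp):
    """Scaled Gaussian; mu is the SJ-based PSS (peak of perceived synchrony)."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(toj_pss, _), _ = curve_fit(cum_gauss, soa, p_video_first, p0=[0.0, 50.0])
(sj_pss, _, _), _ = curve_fit(gauss, soa, p_sync, p0=[0.0, 80.0, 1.0])

print(f"TOJ-based PSS = {toj_pss:.1f} ms, SJ-based PSS = {sj_pss:.1f} ms")
```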


Attention Perception & Psychophysics | 1971

Recognition time for information stored in long-term memory

James F. Juola; I. Fischler; C. T. Wood; Richard C. Atkinson

Two experiments were performed to determine the effects of number of words in a target set (varying from 10 to 26) and the nature of distractor words on the latency of both positive and negative recognition responses. Before the test phase, S memorized a list of words and then was tested with a series of single words. To each presentation S made a positive or negative response to indicate whether or not the word was a member of the memorized target list. Response latency was observed to be an increasing function of memory list length. Negative response latency also was greater if distractor words were visually or semantically similar to specific target words. The results were analyzed in terms of a modified signal detection model. It is assumed that S makes a subjective judgment of the familiarity of a test item and on that basis decides either to respond immediately or to delay the response until a search of the memorized list can be executed. Several different models of the search process are considered and evaluated against latency measures and error data.
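The decision model outlined in this abstract assumes two familiarity criteria: test items above a high criterion draw a fast positive response, items below a low criterion draw a fast negative response, and items in between force a slower search of the memorized list, which makes latency grow with list length. The simulation below is an illustrative sketch under assumed parameter values, not the fitted model from the paper.

```python
# Illustrative simulation of a two-criterion familiarity model for recognition.
# All parameters (familiarity distributions, criteria, stage times) are assumed.
import random

HIGH_C, LOW_C = 1.0, -1.0                   # familiarity criteria (assumed)
FAST_RT, SEARCH_RT_PER_ITEM = 450.0, 35.0   # ms, assumed

def trial(is_target: bool, list_length: int, rng: random.Random):
    """Return (response, rt_ms) for one recognition test trial."""
    # Targets are assumed to be more familiar on average than distractors.
    familiarity = rng.gauss(1.2 if is_target else -1.2, 1.0)
    if familiarity > HIGH_C:
        return "yes", FAST_RT               # fast response on familiarity alone
    if familiarity < LOW_C:
        return "no", FAST_RT
    # Intermediate familiarity: fall back on a search of the memorized list,
    # so latency increases with list length, as in the observed data.
    return ("yes" if is_target else "no",
            FAST_RT + SEARCH_RT_PER_ITEM * list_length)

if __name__ == "__main__":
    rng = random.Random(1)
    for n in (10, 18, 26):
        rts = [trial(True, n, rng)[1] for _ in range(2000)]
        print(f"list length {n}: mean positive RT = {sum(rts) / len(rts):.0f} ms")
```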


Attention Perception & Psychophysics | 1975

The “ventriloquist effect”: Visual dominance or response bias?

Chong S. Choe; Robert B. Welch; Robb M. Gilford; James F. Juola

The interaction between vision and audition was investigated using a signal detection method. A light and tone were presented either in the same location or in different locations along the horizontal plane, and the subjects responded with same-different judgments of stimulus location. Three modes of stimulus presentation were used: simultaneous presentation of the light and tone, tone first, and light first. For the latter two conditions, the interstimulus interval was either 0.7 or 2.0 sec. A statistical decision model was developed which distinguished between the perceptual and decision processes. The results analyzed within the framework of this model suggested that the apparent interaction between vision and audition is due to shifts in decision criteria rather than perceptual change.
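Separating perceptual change from shifts in decision criteria is the standard signal-detection move of estimating sensitivity (d') and response bias independently. The sketch below is a generic illustration with made-up hit and false-alarm rates, not the paper's own model; it shows how two conditions can differ mainly in criterion while d' stays roughly constant.

```python
# Generic signal-detection sketch: a criterion shift changes "same" response
# rates without changing sensitivity (d'). The rates below are invented.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate: float, fa_rate: float):
    """Standard equal-variance Gaussian SDT estimates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical "same location" judgment rates under two presentation orders.
conditions = {
    "simultaneous": (0.85, 0.30),   # (hit rate, false-alarm rate), assumed
    "tone first":   (0.92, 0.45),
}

for name, (hits, fas) in conditions.items():
    d, c = d_prime_and_criterion(hits, fas)
    print(f"{name:12s}  d' = {d:.2f}  criterion = {c:.2f}")
```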


Memory & Cognition | 1982

Dimensions of lexical coding in Chinese and English

Hsuan-Chih Chen; James F. Juola

Phonology and orthography are closely related in some languages, such as English, and they are nearly unrelated in others, such as Chinese. The effects of these differences were assessed in a study of the roles of phonemic, graphemic, and semantic information on lexical coding and memory for Chinese logographs and English words. Some of the stimuli in the two languages were selected such that the natural confounding between phonemic and graphemic information in English was matched in the set of Chinese words used. An initial scaling study indicated that this attempt to equate degree of phonemic-graphemic confounding was successful. A second experiment used a recognition memory task for English and Chinese words with separate subject groups of native speakers of the two languages. Subjects were to select one of a pair of test words that was phonemically, graphemically, or semantically similar to a word on a previously studied list. Differences in the dimensions of lexical coding in memory were demonstrated in significant Stimulus Type by Decision Type interactions in the recognition data. Chinese-speaking subjects responded most rapidly and accurately in the graphemic recognition task, whereas performance was generally equivalent in all three tasks for the English-speaking subjects. Alphabetic and logographic writing systems apparently activate different coding and memory mechanisms such that logographic characters produce significantly more visual information in memory, whereas alphabetic words result in a more integrated code involving visual, phonological, and semantic information.


Journal of Experimental Child Psychology | 1978

The Development of Visual Information Processing Skills Related to Reading.

James F. Juola; Margaret Schadler; Robert J. Chabot; Mark W. McCaughey

Literate adults can use their familiarity with specific words and their knowledge of English orthography to facilitate word recognition processes. The development of word superiority effects in visual perception was investigated in the present study using a search task with kindergarten (5.7 years old), second-grade (8.0 years old), and fourth-grade (10.0 years old) children, and with college students. The search task consisted of the visual presentation of a target letter followed by a three-, four-, or five-letter display. The target letter was included in the display on half the trials, and the displays were common words, orthographically regular pseudowords, and irregular nonwords. Although response times decreased with age, the three oldest groups showed similar effects for the size and structure of the displays. That is, response times increased linearly with the number of display letters, and responses were faster for word and pseudoword displays than for nonwords. The data for the kindergarten children showed evidence for the use of a different search strategy, and they did not respond differentially to the three types of displays. The results are discussed in terms of the implications for developmental models of visual search and word superiority effects in visual perception.
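The linear increase of response time with the number of display letters is usually summarized as a per-letter search slope estimated by a straight-line fit for each group. The sketch below illustrates that fit with invented group means; none of the numbers come from the study.

```python
# Illustrative sketch: estimate per-letter search slopes by fitting a line to
# mean RT (ms) as a function of display size. All mean RTs below are invented.
import numpy as np

display_sizes = np.array([3, 4, 5], dtype=float)

# Hypothetical group means for word displays (ms), decreasing with age.
group_means = {
    "kindergarten": np.array([2100.0, 2450.0, 2800.0]),
    "second grade": np.array([1300.0, 1420.0, 1540.0]),
    "fourth grade": np.array([1000.0, 1090.0, 1180.0]),
    "college":      np.array([620.0, 680.0, 740.0]),
}

for group, rts in group_means.items():
    slope, intercept = np.polyfit(display_sizes, rts, deg=1)
    print(f"{group:13s} search rate = {slope:.0f} ms/letter "
          f"(intercept = {intercept:.0f} ms)")
```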


Attention Perception & Psychophysics | 1977

Isolating visual units in the perception of words and nonwords

Glen A. Taylor; Timothy J. Miller; James F. Juola

In three experiments, reaction times for same-different judgments were obtained for pairs of words, pronounceable nonwords (pseudowords), and unpronounceable nonwords. The stimulus strings were printed either in a single letter case or in one of several mixtures of upper- and lowercase letters. In Experiment 1, the stimuli were common one- and two-syllable words; in Experiment 2, the stimuli included both words and pseudowords; and in Experiment 3, words, pseudowords, and nonwords were used. The functional visual units for each string type were inferred from the effects that the number and placement of letter case transitions had on “same” reaction time judgments. The evidence indicated a preference to encode strings in terms of multiletter perceptual units if they are present in the string. The data also suggested that whole words can be used as functional visual units, although the extent of their use depends on contextual parameters such as knowledge that a word will be presented.


Attention Perception & Psychophysics | 1995

Tradeoffs between attentional effects of spatial cues and abrupt onsets

James F. Juola; Hideya Koshino; C. Bruce Warner

We determined the relative effectiveness and tradeoffs among central, peripheral, and abrupt onset cues in directing attention to a potential target character. Central cues were arrows located at the fixation point, whereas peripheral cues were arrows occurring about 3° away from fixation, near the location of a potential target. These were contrasted with the abrupt onset of an ambiguous part of a character, which later was filled in to reveal a target or a distractor item. Each trial included an arrow cue and an abrupt onset cue, and both expected cue validities and cue-character SOAs were varied factorially. The results showed that, in general, abrupt onsets captured attention more effectively than either central or peripheral arrow cues. However, tradeoffs among separate cue effects indicated that the power of abrupt onsets to capture attention automatically could be overridden by a high-validity spatial cue presented in advance of the onset character. Tradeoffs between the effects of central and abrupt onset cues were additive, indicating that endogenous and exogenous cues have their main effects at different levels in the visual attention system. Peripheral cues and abrupt onsets showed mainly interactive effects, however, consistent with the idea that both types of cues have exogenous components that affect a common pool of attentional resources.
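Reading additivity versus interaction from these tradeoffs follows additive-factors logic: if two cues act at different levels, their effects on mean RT should simply sum, while cues drawing on a common resource pool produce a combined effect that departs from the sum. The sketch below uses invented 2 x 2 cell means to show how the interaction contrast distinguishes the two patterns; it is not an analysis of the paper's data.

```python
# Additive-factors sketch: the interaction contrast from a 2x2 table of mean
# RTs (ms) is near zero when two cue effects are additive. Cell means are invented.

def interaction_contrast(cells: dict) -> float:
    """Onset-cue effect when the arrow cue is valid, minus the same effect when
    the arrow cue is invalid; a value near zero indicates additive effects."""
    return (cells[("valid", "valid")] - cells[("valid", "invalid")]) \
         - (cells[("invalid", "valid")] - cells[("invalid", "invalid")])

# Keys are (arrow cue validity, abrupt-onset cue validity).
additive_pattern = {
    ("valid", "valid"): 420.0, ("valid", "invalid"): 470.0,
    ("invalid", "valid"): 460.0, ("invalid", "invalid"): 510.0,
}
interactive_pattern = {
    ("valid", "valid"): 420.0, ("valid", "invalid"): 440.0,
    ("invalid", "valid"): 460.0, ("invalid", "invalid"): 540.0,
}

print("additive contrast    =", interaction_contrast(additive_pattern), "ms")
print("interactive contrast =", interaction_contrast(interactive_pattern), "ms")
```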


Journal of Experimental Child Psychology | 1980

Whole-word units are used before orthographic knowledge in perceptual development

Mark W. McCaughey; James F. Juola; Margaret Schadler; Nicklas J. Ward

A visual search task for target letters in multiletter displays was used to investigate information-processing differences between college students and pre-second-grade children (mean age = 7 years, 4 months). The stimulus displays consisted of single words, pronounceable pseudowords, and unpronounceable nonwords varying in length from three to five letters. The mean response times for indicating whether or not a target letter occurred in the display increased with the number of display letters for both groups, although there were apparent differences between groups in the rate of search and type of search strategy used. Pre-second-grade children responded faster to word displays than to pseudoword and nonword displays, indicating that familiar letter strings could be processed faster than unfamiliar strings regardless of whether or not the latter were consistent with rules of English orthography. In contrast, college students processed words and pseudowords about equally well, and both resulted in faster responses than nonwords. As reading skills develop, children apparently come to process familiar words differently from other letter strings. Only after a significant sight-word vocabulary is established do children seem to recognize the regularities of standard English orthography and make use of this knowledge to facilitate perceptual processes.

Collaboration


Dive into James F. Juola's collaborations.

Top Co-Authors

Raymond Cuijpers, Eindhoven University of Technology
Elena Torta, Eindhoven University of Technology
Rob van Eijk, Eindhoven University of Technology
David van der Pol, Eindhoven University of Technology