Joseph W. Houpt
Wright State University
Publications
Featured research published by Joseph W. Houpt.
Behavior Research Methods | 2014
Joseph W. Houpt; Leslie M. Blaha; John P. McIntire; Paul R. Havig; James T. Townsend
Systems factorial technology (SFT) comprises a set of powerful nonparametric models and measures, together with a theory-driven experimental methodology termed the double factorial paradigm (DFP), for assessing the cognitive information-processing mechanisms that support the processing of multiple sources of information in a given task (Townsend and Nozawa, Journal of Mathematical Psychology 39:321–359, 1995). We provide an overview of the model-based measures of SFT, together with a tutorial on designing a DFP experiment to take advantage of all SFT measures in a single experiment. Illustrative examples highlight the breadth of applicability of these techniques across psychology. We further introduce and demonstrate a new R package for performing SFT analyses.
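For orientation across the entries below, the two core SFT measures have standard forms (a sketch following Townsend and Nozawa, 1995; S denotes a survivor function, H(t) = -ln S(t) the cumulative hazard, and subscripts index the channels or the low/high salience conditions of the DFP):

\[ C_{\mathrm{OR}}(t) = \frac{H_{AB}(t)}{H_{A}(t) + H_{B}(t)}, \qquad \mathrm{SIC}(t) = \bigl[S_{LL}(t) - S_{LH}(t)\bigr] - \bigl[S_{HL}(t) - S_{HH}(t)\bigr]. \]

Here C(t) = 1 is the benchmark of unlimited-capacity, independent, parallel processing, and the shape of the SIC over time distinguishes serial, parallel, and coactive architectures.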
Behavior Research Methods | 2013
Devin Michael Burns; Joseph W. Houpt; James T. Townsend; Michael J. Endres
Workload capacity, an important concept in many areas of psychology, describes processing efficiency across changes in workload. The capacity coefficient is a function across time that provides a useful measure of this construct. Until now, most analyses of the capacity coefficient have focused on the magnitude of this function, and often only in terms of a qualitative comparison (greater than or less than one). This work explains how a functional extension of principal components analysis can capture the time-extended information of these functional data, using a small number of scalar values chosen to emphasize the variance between participants and conditions. This approach provides many possibilities for a more fine-grained study of differences in workload capacity across tasks and individuals.
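A minimal sketch of the functional PCA idea, assuming each participant's or condition's capacity coefficient has already been estimated on a shared time grid (function and variable names are hypothetical; the published method involves additional smoothing and weighting choices):

import numpy as np

def functional_pca(capacity, n_components=2):
    """capacity: (n_curves, n_timepoints) array of estimated C(t) curves,
    one row per participant/condition, evaluated on a common time grid."""
    mean_curve = capacity.mean(axis=0)
    centered = capacity - mean_curve
    # Rows of vt are principal component functions; u * s gives each
    # curve's scores on those components.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    components = vt[:n_components]
    var_explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, components, mean_curve, var_explained

Each curve is then summarized by a few scalar scores, which can be compared across tasks and individuals as the abstract describes.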
Memory & Cognition | 2015
Andrew Heathcote; James R. Coleman; Ami Eidels; James M. Watson; Joseph W. Houpt; David L. Strayer
We examined the role of dual-task interference in working memory using a novel dual two-back task that requires a redundant-target response (i.e., a response that neither the auditory nor the visual stimulus occurred two back versus a response that one or both occurred two back) on every trial. Comparisons with performance on single two-back trials (i.e., with only auditory or only visual stimuli) showed that dual-task demands reduced both speed and accuracy. Our task design enabled a novel application of Townsend and Nozawa’s (Journal of Mathematical Psychology 39: 321–359, 1995) workload capacity measure, which revealed that the decrement in dual two-back performance was mediated by the sharing of a limited amount of processing capacity. Relative to most other single and dual n-back tasks, performance measures for our task were more reliable, due to the use of a small stimulus set that induced a high and constant level of proactive interference. For a version of our dual two-back task that minimized response bias, accuracy was also more strongly correlated with complex span than has been found for most other single and dual n-back tasks.
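A rough sketch of how this workload capacity measure can be estimated from response times (illustrative only; function names are ours, and production analyses typically use Nelson-Aalen hazard estimators and account for error trials):

import numpy as np

def cumulative_hazard(rts, grid):
    """Plug-in estimate of H(t) = -log S(t) at each grid point."""
    rts = np.asarray(rts)
    surv = np.array([(rts > t).mean() for t in grid])
    return -np.log(np.clip(surv, 1e-6, 1.0))  # clip avoids log(0)

def capacity_or(rt_redundant, rt_single_a, rt_single_b, grid):
    """OR (redundant-target) capacity coefficient C(t) = H_AB / (H_A + H_B)."""
    h_ab = cumulative_hazard(rt_redundant, grid)
    denom = (cumulative_hazard(rt_single_a, grid)
             + cumulative_hazard(rt_single_b, grid))
    with np.errstate(divide="ignore", invalid="ignore"):
        c = h_ab / denom
    return np.where(denom > 0, c, np.nan)

Values below 1, as found for the dual two-back task here, indicate that the two stimulus streams share a limited pool of processing capacity.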
Frontiers in Psychology | 2015
Michael J. Endres; Joseph W. Houpt; Chris Donkin; Peter R. Finn
Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (target) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response time, and LBA model results indicated that memory search and retrieval processes were facilitated under redundant-target conditions but inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, the groups did differ in the overall efficiency of memory search and retrieval processes. The results suggest that redundant information reliably facilitates and inhibits the efficiency, or speed, of working memory search, and that these effects are independent of more general limits and individual differences in the capacity, or space, of working memory.
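The LBA model mentioned here can be sketched compactly in simulation form (parameter values are placeholders; fitting the model requires its closed-form likelihood, not shown):

import numpy as np

rng = np.random.default_rng(1)

def simulate_lba(n, v=(1.0, 0.7), s=0.3, A=0.5, b=1.0, t0=0.2):
    """Simulate n trials of a two-accumulator Linear Ballistic Accumulator:
    start points ~ Uniform(0, A), drift rates ~ Normal(v_i, s) (resampled
    until at least one is positive), linear race to threshold b, plus
    non-decision time t0."""
    choices, rts = np.empty(n, dtype=int), np.empty(n)
    for i in range(n):
        drifts = rng.normal(v, s)
        while not np.any(drifts > 0):
            drifts = rng.normal(v, s)
        starts = rng.uniform(0, A, size=len(v))
        with np.errstate(divide="ignore"):
            finish = np.where(drifts > 0, (b - starts) / drifts, np.inf)
        choices[i] = np.argmin(finish)
        rts[i] = finish[choices[i]] + t0
    return choices, rts

Drift rates map onto the "information processing efficiency" estimates referenced in the abstract: higher mean drift yields faster, more accurate responding.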
Systems Factorial Technology: A Theory Driven Methodology for the Identification of Perceptual and Cognitive Mechanisms | 2017
Joseph W. Houpt; Devin Michael Burns
Systems Factorial Technology (SFT) is a well-defined approach based on rigorous mathematical definitions of constructs and derivations of measures. Inferences about cognitive processing based on the Survivor Interaction Contrast (SIC) and capacity coefficients (C(t)) are broad, allowing the rejection of entire classes of models (e.g., all serial processes), because the approach relies on so few parametric assumptions. Although this generality is a strength of the framework, one drawback is that it complicates data analysis. Models based on specific parametric assumptions, such as Linear Ballistic Accumulator models, can be evaluated based on the likelihood of the observed data precisely because they make such strong assumptions. Hence, the challenge in analyzing data within the SFT framework is to develop statistical analyses that do not compromise the generality of the core theory.
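To make the inference problem concrete, here is a bare-bones empirical SIC estimate from the four double-factorial conditions (hypothetical inputs; the statistical analyses discussed in this chapter go well beyond such a point estimate):

import numpy as np

def survivor(rts, grid):
    """Empirical survivor function S(t) = P(RT > t) on a time grid."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in grid])

def empirical_sic(rt_ll, rt_lh, rt_hl, rt_hh, grid):
    """SIC(t) = [S_LL - S_LH] - [S_HL - S_HH]; subscripts give the
    low/high salience of the two channels in the double factorial design."""
    return ((survivor(rt_ll, grid) - survivor(rt_lh, grid))
            - (survivor(rt_hl, grid) - survivor(rt_hh, grid)))

Because such estimates are noisy functions of time, deciding whether an observed wiggle reflects a genuinely positive or negative SIC is exactly the statistical challenge the chapter describes.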
Behavior Research Methods | 2017
Joseph W. Houpt; Daniel R. Little
The extent to which distracting information influences decisions can be informative about the nature of the underlying cognitive and perceptual processes. A recent paper introduced resilience, a response-time-based measure for quantifying the degree of interference (or facilitation) from distracting information. Although resilience is a statistical measure, the original analysis was limited to qualitative comparisons between different model predictions. In this paper, we demonstrate how statistical procedures from workload capacity analysis can be applied to the new resilience functions. In particular, we present an approach to null-hypothesis testing of resilience functions and a method based on functional principal components analysis for analyzing differences in the functional form of the resilience functions across participants and conditions.
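As a sketch of the object under analysis, the resilience function is defined analogously to the capacity coefficient but with target-plus-distractor conditions in the denominator (our paraphrase; H denotes a cumulative hazard, T a target, and D a distractor in the indicated channel):

\[ R(t) = \frac{H_{TT}(t)}{H_{TD}(t) + H_{DT}(t)}. \]

Values near 1 indicate that distractors neither help nor hurt relative to the independent-channels baseline, which is why the null-hypothesis and functional PCA machinery developed for capacity coefficients transfers naturally.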
Vision Research | 2016
Robert X. D. Hawkins; Joseph W. Houpt; Ami Eidels; James T. Townsend
While there is widespread agreement among vision researchers on the importance of some local aspects of visual stimuli, such as hue and intensity, there is no general consensus on a full set of basic sources of information used in perceptual tasks or on how they are processed. Gestalt theories place particular value on emergent features, which are based on the higher-order relationships among elements of a stimulus rather than on local properties. Thus, arbitrating between different accounts of features is an important step toward distinguishing local from Gestalt theories of perception in general. In this paper, we present the capacity coefficient from Systems Factorial Technology (SFT) as a quantitative approach for formalizing and rigorously testing predictions made by local and Gestalt theories of features. As a simple, easily controlled domain for testing this approach, we focus on the local feature of location and the emergent features of orientation and proximity in a pair of dots. We introduce a redundant-target change detection task to compare our capacity measure on (1) trials where the configuration of the dots changed along with their location against (2) trials where the amount of local location change was exactly the same but there was no change in the configuration. Our results, in conjunction with our modeling tools, favor the Gestalt account of emergent features. We conclude by suggesting several candidate information-processing models that incorporate emergent features, which follow from our approach.
Psychological Methods | 2017
Joseph W. Houpt; Andrew Heathcote; Ami Eidels
The question of cognitive architecture—how cognitive processes are temporally organized—has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages.
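One way to picture a nonparametric Bayesian treatment of the SIC is a Bayesian bootstrap over the empirical survivor functions, sketched below with hypothetical names; this illustrates the general idea and is not the encompassing-prior procedure developed in the paper:

import numpy as np

rng = np.random.default_rng(7)

def weighted_survivor(rts, weights, grid):
    """Survivor function under Dirichlet (Bayesian bootstrap) weights."""
    return np.array([weights[rts > t].sum() for t in grid])

def sic_posterior(rt_ll, rt_lh, rt_hl, rt_hh, grid, n_draws=1000):
    """Posterior draws of SIC(t), one Bayesian bootstrap per condition."""
    conds = [np.asarray(r) for r in (rt_ll, rt_lh, rt_hl, rt_hh)]
    draws = np.empty((n_draws, len(grid)))
    for i in range(n_draws):
        s = [weighted_survivor(r, rng.dirichlet(np.ones(len(r))), grid)
             for r in conds]
        draws[i] = (s[0] - s[1]) - (s[2] - s[3])
    return draws  # summarize with, e.g., pointwise credible bands

From the draws one can read off posterior probabilities that the SIC is positive or negative over any time window, which is the kind of graded evidence the abstract contrasts with purely visual assessment.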
Cognitive Research: Principles and Implications | 2016
Elizabeth Fox; Joseph W. Houpt
Multi-spectral imagery can enhance decision-making by supplying multiple complementary sources of information. However, overloading an observer with information can impair decision-making. Hence, it is critical to assess multi-spectral image displays using human performance. Accuracy and response times (RTs) are fundamental for assessment, although without sophisticated empirical designs they offer little information about why performance is better or worse. Systems factorial technology (SFT) is a framework for study design and analysis that examines observers' processing mechanisms, not just overall performance. In the current work, we use SFT to compare a display presenting two sensor images side by side with a display presenting a single composite image. In our first experiment, the SFT results indicated that both display approaches suffered from limited workload capacity, more so for the composite imagery. In the second experiment, we examined the change in observer performance over multiple days of practice. Participants' accuracy and RTs improved with training, but their capacity limitations were unaffected. Using SFT, we found that the capacity limitation was not due to an inefficient serial examination of the imagery by the participants. There are two clear implications of these results: observers are less efficient with multi-spectral images than with single images, and the side-by-side display of source images is a viable alternative to composite imagery. SFT was necessary for these conclusions because it provided an appropriate mechanism for comparing single-source images to multi-spectral images and because it ruled out serial processing as the source of the capacity limitation.
Frontiers in Psychology | 2015
Joseph W. Houpt; Bethany L. Sussman; James T. Townsend; Sharlene D. Newman
Developmental dyslexia is a complex and heterogeneous disorder characterized by unexpected difficulty in learning to read. Although it is considered to be biologically based, the degree of variation has made the nature and locus of dyslexia difficult to ascertain. Hypotheses regarding the cause have ranged from low-level perceptual deficits to higher-order cognitive deficits, such as phonological processing and visual-spatial attention. We applied the capacity coefficient, a measure derived from a mathematical cognitive model of response times, to assess how efficiently participants processed different classes of stimuli. The capacity coefficient was used to test the extent to which individuals with dyslexia can be distinguished from normally reading individuals based on their ability to take advantage of word, pronounceable non-word, consonant-sequence, or unfamiliar contexts when categorizing character strings. Within-subject variability of the capacity coefficient across character-string types was fairly regular across normally reading adults and consistent with a previous study of word perception with the capacity coefficient: words and pseudowords were processed at super capacity and unfamiliar character strings at limited capacity. Two distinct patterns were observed in individuals with dyslexia. One group had a profile similar to the normally reading adults, while the other group showed very little variation in capacity across string types. It is possible that these individuals used a similar strategy for all four string types and were able to generalize this strategy when processing unfamiliar characters. This difference across dyslexia groups may be used to identify subtypes of the disorder and suggests significant differences in word-level processing among these subtypes. This approach may therefore be useful in further delineating among types of dyslexia, which in turn may lead to a better understanding of the etiologies of dyslexia.