Publication


Featured research published by Ami Eidels.


Psychonomic Bulletin & Review | 2011

Workload capacity spaces: A unified methodology for response time measures of efficiency as workload is varied

James T. Townsend; Ami Eidels

Increasing the number of available sources of information may impair or facilitate performance, depending on the capacity of the processing system. Tests performed on response time distributions are proving to be useful tools in determining the workload capacity (as well as other properties) of cognitive systems. In this article, we develop a framework and relevant mathematical formulae that represent different capacity assays (Miller’s race model bound, Grice’s bound, and Townsend’s capacity coefficient) in the same space. The new space allows a direct comparison between the distinct bounds and the capacity coefficient values and helps explicate the relationships among the different measures. An analogous common space is proposed for the AND paradigm, relating the capacity index to the Colonius–Vorberg bounds. We illustrate the effectiveness of the unified spaces by presenting data from two simulated models (standard parallel, coactive) and a prototypical visual detection experiment. A conversion table for the unified spaces is provided.
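
For reference, the measures named above have standard definitions in the response-time literature (Townsend & Nozawa, 1995; Colonius & Vorberg, 1994; Townsend & Wenger, 2004); the LaTeX below restates those standard forms rather than anything specific to this paper. Write F_X for the response-time distribution function, S_X = 1 - F_X for the survivor function, and X in {A, B, AB} for the two single-target conditions and the double-target condition. For the OR (first-terminating) paradigm:

  F_{AB}(t) \le F_A(t) + F_B(t)   % Miller's race model bound
  F_{AB}(t) \ge \max\{F_A(t),\, F_B(t)\}   % Grice's bound
  C_{OR}(t) = \frac{H_{AB}(t)}{H_A(t) + H_B(t)}, \qquad H_X(t) = -\ln S_X(t)

and for the AND (exhaustive) paradigm:

  \max\{F_A(t) + F_B(t) - 1,\, 0\} \le F_{AB}(t) \le \min\{F_A(t),\, F_B(t)\}   % Colonius–Vorberg bounds
  C_{AND}(t) = \frac{K_A(t) + K_B(t)}{K_{AB}(t)}, \qquad K_X(t) = \ln F_X(t)

C(t) = 1 marks the unlimited-capacity, independent, parallel benchmark; values above and below 1 indicate super- and limited capacity, respectively.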


Psychonomic Bulletin & Review | 2010

Converging measures of workload capacity.

Ami Eidels; Chris Donkin; Scott D. Brown; Andrew Heathcote

Does processing more than one stimulus concurrently impede or facilitate performance relative to processing just one stimulus? This fundamental question about workload capacity was surprisingly difficult to address empirically until Townsend and Nozawa (1995) developed a set of nonparametric analyses called systems factorial technology. We develop an alternative parametric approach based on the linear ballistic accumulator decision model (Brown & Heathcote, 2008), which uses the model’s parameter estimates to measure processing capacity. We show that these two methods have complementary strengths, and that, in a data set where participants varied greatly in capacity, the two approaches provide converging evidence.
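
As a complement to the abstract above, here is a minimal sketch (in Python, with illustrative placeholder data rather than anything from the paper) of how the nonparametric side of such an analysis, the capacity coefficient C_OR(t), can be estimated from raw correct response times; the linear ballistic accumulator fitting step is omitted.

import numpy as np

def cumulative_hazard(rts, grid):
    # Empirical cumulative hazard H(t) = -log S(t), using the empirical
    # survivor function of the observed response times.
    rts = np.asarray(rts, dtype=float)
    surv = np.array([(rts > t).mean() for t in grid])
    surv = np.clip(surv, 1e-6, 1.0)  # avoid log(0) in the far tail
    return -np.log(surv)

def capacity_or(rt_double, rt_single_a, rt_single_b, grid):
    # OR (minimum-time) capacity coefficient:
    # C_OR(t) = H_AB(t) / (H_A(t) + H_B(t)).
    h_ab = cumulative_hazard(rt_double, grid)
    denom = cumulative_hazard(rt_single_a, grid) + cumulative_hazard(rt_single_b, grid)
    return np.divide(h_ab, denom, out=np.full_like(denom, np.nan), where=denom > 0)

# Illustrative data: two independent exponential channels racing in
# parallel (a standard parallel model), which should give C_OR(t) near 1.
rng = np.random.default_rng(0)
rt_a = rng.exponential(0.4, 2000)                      # single target A
rt_b = rng.exponential(0.5, 2000)                      # single target B
rt_ab = np.minimum(rng.exponential(0.4, 2000),
                   rng.exponential(0.5, 2000))         # redundant target
grid = np.linspace(0.05, 1.5, 50)
print(capacity_or(rt_ab, rt_a, rt_b, grid).round(2))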


Attention Perception & Psychophysics | 2008

Studying visual search using systems factorial methodology with target–distractor similarity as the factor

Mario Fific; James T. Townsend; Ami Eidels

Systems factorial technology (SFT) is a theory-driven set of methodologies oriented toward identification of basic mechanisms, such as parallel versus serial processing, of perception and cognition. Studies employing SFT in visual search with small display sizes have repeatedly shown decisive evidence for parallel processing. The first strong evidence for serial processing was recently found in short-term memory search, using target-distractor (T-D) similarity as a key experimental variable (Townsend & Fifić, 2004). One of the major goals of the present study was to employ T-D similarity in visual search to learn whether this mode of manipulating processing speed would affect the parallel versus serial issue in that domain. The result was a surprising and regular departure from ordinary parallel or serial processing. The most plausible account at present relies on the notion of positively interacting parallel channels.
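
The SFT diagnostic applied in this kind of factorial design is the survivor interaction contrast (Townsend & Nozawa, 1995). Writing S_{XY}(t) for the survivor function of response time when the two items or channels are at salience levels X and Y (H = high, i.e. fast; L = low, i.e. slow), as manipulated here via T-D similarity, the standard form is

  SIC(t) = \bigl[S_{LL}(t) - S_{LH}(t)\bigr] - \bigl[S_{HL}(t) - S_{HH}(t)\bigr].

Under the usual catalog of signatures, serial first-terminating processing predicts SIC(t) = 0 for all t, serial exhaustive a negative-then-positive S-shape with zero total area, parallel first-terminating an everywhere-positive function, parallel exhaustive an everywhere-negative function, and coactivation a negative-then-positive function with positive total area; the departure from ordinary parallel or serial processing described above refers to results that do not fit this catalog cleanly.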


Journal of Experimental Psychology: Human Perception and Performance | 2008

Where similarity beats redundancy: the importance of context, higher order similarity, and response assignment.

Ami Eidels; James T. Townsend; James R. Pomerantz

People are especially efficient in processing certain visual stimuli such as human faces or good configurations. It has been suggested that topology and geometry play important roles in configural perception. Visual search is one area in which configurality seems to matter. When either of 2 target features leads to a correct response and the sequence includes trials in which either or both targets are present, the result is a redundant-target paradigm. It is common for such experiments to find faster performance with the double target than with either alone, something that is difficult to explain with ordinary serial models. This redundant-targets study uses figures that can be dissimilar in their topology and geometry and manipulates the stimulus set and the stimulus–response assignments. The authors found that the combination of higher order similarity (e.g., topological) among the features in the stimulus set and response assignment can effectively overpower or facilitate the redundant-target effect, depending on the exact nature of the former characteristics. Several reasonable models of redundant-targets performance are falsified. Parallel models with the potential for channel interactions are supported by the data.


Memory & Cognition | 2015

Working Memory's Workload Capacity

Andrew Heathcote; James R. Coleman; Ami Eidels; James M. Watson; Joseph W. Houpt; David L. Strayer

We examined the role of dual-task interference in working memory using a novel dual two-back task that requires a redundant-target response (i.e., a response that neither the auditory nor the visual stimulus occurred two back versus a response that one or both occurred two back) on every trial. Comparisons with performance on single two-back trials (i.e., with only auditory or only visual stimuli) showed that dual-task demands reduced both speed and accuracy. Our task design enabled a novel application of Townsend and Nozawa’s (Journal of Mathematical Psychology 39: 321–359, 1995) workload capacity measure, which revealed that the decrement in dual two-back performance was mediated by the sharing of a limited amount of processing capacity. Relative to most other single and dual n-back tasks, performance measures for our task were more reliable, due to the use of a small stimulus set that induced a high and constant level of proactive interference. For a version of our dual two-back task that minimized response bias, accuracy was also more strongly correlated with complex span than has been found for most other single and dual n-back tasks.


Attention Perception & Psychophysics | 2015

Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies

Ami Eidels; James T. Townsend; Howard C. Hughes; Lacey A. Perry

This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391–418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471–478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321–359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.
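
The double factorial paradigm statistic referred to here, at the level of means, is the mean interaction contrast (with the same factorial labelling as the survivor interaction contrast discussed elsewhere on this page):

  MIC = \bigl[\overline{RT}_{LL} - \overline{RT}_{LH}\bigr] - \bigl[\overline{RT}_{HL} - \overline{RT}_{HH}\bigr],

which is zero for serial models, positive for parallel first-terminating and coactive models, and negative for parallel exhaustive models. On the description in the abstract, the Shaw-style statistic is formed in the same way, but with accuracy (detection probability) in each factorial cell in place of mean response time.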


Systems Factorial Technology: A Theory-Driven Methodology for the Identification of Perceptual and Cognitive Mechanisms | 2017

Bridge-Building: SFT Interrogation of Major Cognitive Phenomena

Daniel Algom; Daniel Fitousi; Ami Eidels

In early studies employing the SFT, the stimuli were simple visual signals, mainly dots, lines, or letters of the alphabet. Although this feature facilitated focusing on theory development, it has reduced the impact of SFT on mainstream cognitive science. The goal of this chapter is to reconnect the SFT to cognitive psychology via an SFT-guided examination of four major phenomena of current cognitive science: the Stroop and Garner effects in attention, the Size-Congruity effect in numerical cognition, and the Redundant-Target effect in speeded signal detection. We show that, in each case, the SFT analysis led to novel insights, reformulating old problems and challenging established theories.


Psychonomic Bulletin & Review | 2014

The resurrection of Tweedledum and Tweedledee: Bimodality cannot distinguish serial and parallel processes

Paul Williams; Ami Eidels; James T. Townsend

Simultaneously presented signals may be processed in serial or in parallel. One potentially valuable indicator of a system’s characteristics may be the appearance of multimodality in the response time (RT) distributions. It is known that standard serial models can predict multimodal RT distributions, but it is unknown whether multimodality is diagnostic of serial systems, or whether alternative architectures, such as parallel ones, can also make such predictions. We demonstrate via simulations that a multimodal RT distribution is not sufficient by itself to rule out parallel self-terminating processing, even with limited trial numbers. These predictions are discussed within the context of recent data indicating the existence of multimodal distributions in visual search.
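
A minimal sketch (in Python) of the kind of simulation described above; the architectures are a parallel self-terminating race and a standard serial model, but the finish-time distributions, mixture structure, and trial counts are illustrative placeholders rather than the models used in the paper.

import numpy as np

rng = np.random.default_rng(1)
n_trials = 1000   # deliberately modest, mirroring realistic trial counts
t0 = 0.2          # residual (non-decision) time

# Illustrative parallel self-terminating search: on each trial a single
# target falls in one of two locations, each served by its own parallel
# channel, and processing stops as soon as the target's channel finishes.
# When the channels differ strongly in speed, the RTs pooled over trials
# form a mixture that can look bimodal even though the architecture is
# parallel.
target_in_fast = rng.random(n_trials) < 0.5
rt_fast = t0 + rng.gamma(2.0, 0.08, n_trials)   # fast-channel finish times
rt_slow = t0 + rng.gamma(2.0, 0.45, n_trials)   # slow-channel finish times
rt_parallel_st = np.where(target_in_fast, rt_fast, rt_slow)

# A standard serial model for comparison: stage completion times add.
rt_serial = t0 + rng.gamma(2.0, 0.08, n_trials) + rng.gamma(2.0, 0.45, n_trials)

# Histograms (or formal dip tests / mixture fits) can then be inspected
# for multimodality in each simulated distribution.
hist_par, edges = np.histogram(rt_parallel_st, bins=30)
hist_ser, _ = np.histogram(rt_serial, bins=edges)
print(hist_par)
print(hist_ser)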


Cognitive Psychology | 2017

Breaking the rules in perceptual information integration

Maxim Bushmakin; Ami Eidels; Andrew Heathcote

We develop a broad theoretical framework for modelling difficult perceptual information integration tasks under different decision rules. The framework allows us to compare coactive architectures, which combine information before it enters the decision process, with parallel architectures, where logical rules combine independent decisions made about each perceptual source. For both architectures we test the novel hypothesis that participants break the decision rules on some trials, making a response based on only one stimulus even though task instructions require them to consider both. Our models take account of not only the decisions made but also the distribution of the time that it takes to make them, providing an account of speed-accuracy tradeoffs and response biases occurring when one response is required more often than another. We also test a second novel hypothesis, that the nature of the decision rule changes the evidence on which choices are based. We apply the models to data from a perceptual integration task with near-threshold stimuli under two different decision rules. The coactive architecture was clearly rejected in favor of logical rules. The logical-rule models were shown to provide an accurate account of all aspects of the data, but only when they allow for response bias and the possibility for subjects to break those rules. We discuss how our framework can be applied more broadly, and its relationship to Townsend and Nozawa's (1995) Systems Factorial Technology.
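
In the terms used here, the parallel logical-rule architectures can be summarised as follows (a standard characterisation, with notation introduced for this note rather than taken from the paper): if D_A and D_B are the times of the independent decisions about the two sources and t_0 is residual time, a target response under an OR rule can be issued as soon as the first source yields a target decision, whereas an AND rule requires both,

  T_{OR} = t_0 + \min(D_A, D_B), \qquad T_{AND} = t_0 + \max(D_A, D_B),

while a coactive architecture pools the evidence from both sources into a single accumulator before any decision is made, so there is only one decision time. The rule-breaking hypothesis tested above corresponds to a proportion of trials on which the response is based on D_A or D_B alone, whatever the nominal rule.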


Vision Research | 2016

Can Two Dots Form a Gestalt? Measuring Emergent Features with the Capacity Coefficient

Robert X. D. Hawkins; Joseph W. Houpt; Ami Eidels; James T. Townsend

While there is widespread agreement among vision researchers on the importance of some local aspects of visual stimuli, such as hue and intensity, there is no general consensus on a full set of basic sources of information used in perceptual tasks or how they are processed. Gestalt theories place particular value on emergent features, which are based on the higher-order relationships among elements of a stimulus rather than local properties. Thus, arbitrating between different accounts of features is an important step in arbitrating between local and Gestalt theories of perception in general. In this paper, we present the capacity coefficient from Systems Factorial Technology (SFT) as a quantitative approach for formalizing and rigorously testing predictions made by local and Gestalt theories of features. As a simple, easily controlled domain for testing this approach, we focus on the local feature of location and the emergent features of Orientation and Proximity in a pair of dots. We introduce a redundant-target change detection task to compare our capacity measure on (1) trials where the configuration of the dots changed along with their location against (2) trials where the amount of local location change was exactly the same, but there was no change in the configuration. Our results, in conjunction with our modeling tools, favor the Gestalt account of emergent features. We conclude by suggesting several candidate information-processing models that incorporate emergent features, which follow from our approach.

Collaboration


Dive into Ami Eidels's collaborations.

Top Co-Authors


James T. Townsend

Indiana University Bloomington

Joseph W. Houpt

Indiana University Bloomington
