
Publications


Featured research published by Michael J. Schoelles.


Psychological Review | 2006

The soft constraints hypothesis: A rational analysis approach to resource allocation for interactive behavior

Wayne D. Gray; Chris R. Sims; Wai Tat Fu; Michael J. Schoelles

The soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved, in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the sub-1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior.
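A minimal sketch of the cost-benefit reasoning behind SCH, assuming illustrative timing parameters that are not values from the paper: the strategy with the lower expected time cost is selected, whether it leans on memory or on perceptual-motor effort.

```python
# Minimal sketch of the soft-constraints idea: at the sub-second level, pick the
# strategy whose expected time cost is lowest, rather than always protecting memory.
# All durations and probabilities below are illustrative assumptions.

STRATEGIES = {
    # memory-intensive: encode several target features at once, risk re-encoding on failure
    "memory":           {"encode_ms": 900, "access_ms": 50,  "failure_p": 0.20, "recover_ms": 900},
    # perceptual-motor-intensive: glance back at the display whenever a feature is needed
    "perceptual_motor": {"encode_ms": 300, "access_ms": 400, "failure_p": 0.02, "recover_ms": 300},
}

def expected_cost_ms(s, accesses=4):
    """Expected time to complete one interactive episode under a strategy."""
    per_access = s["access_ms"] + s["failure_p"] * s["recover_ms"]
    return s["encode_ms"] + accesses * per_access

best = min(STRATEGIES, key=lambda name: expected_cost_ms(STRATEGIES[name]))
for name, s in STRATEGIES.items():
    print(f"{name:16s} expected cost: {expected_cost_ms(s):.0f} ms")
print("SCH prediction: choose", best)
```

Under these made-up numbers the memory-intensive strategy happens to win; the point of the sketch is only that the choice follows the temporal tradeoff rather than a fixed bias toward conserving memory.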


Proceedings of the IEEE | 2002

Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction

Zoran Duric; Wayne D. Gray; Ric Heishman; Fayin Li; Azriel Rosenfeld; Michael J. Schoelles; Christian D. Schunn; Harry L. Wechsler

This paper describes technology and tools for intelligent human-computer interaction (IHCI) in which human cognitive, perceptual, motor and affective factors are modeled and used to adapt the H-C interface. IHCI emphasizes that human behavior encompasses both apparent human behavior and the hidden mental state behind behavioral performance. IHCI expands on the interpretation of human activities, known as W4 (what, where, when, who). While W4 only addresses the apparent perceptual aspect of human behavior, the W5+ technology for IHCI described in this paper also addresses the why and how questions, whose solution requires recognizing specific cognitive states. IHCI integrates parsing and interpretation of nonverbal information with a computational cognitive model of the user which, in turn, feeds into processes that adapt the interface to enhance operator performance and provide for rational decision-making. The technology proposed is based on a general four-stage interactive framework, which moves from parsing the raw sensory-motor input, to interpreting the user's motions and emotions, to building an understanding of the user's current cognitive state. It then diagnoses various problems in the situation and adapts the interface appropriately. The interactive component of the system improves processing at each stage. Examples of perceptual, behavioral, and cognitive tools are described throughout the paper. Adaptive and intelligent HCI are important for novel applications of computing, including ubiquitous and human-centered computing.
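A rough sketch of the four-stage loop the paper describes; every class and function name here is a hypothetical placeholder, not the system's actual API.

```python
# Sketch of the four-stage W5+ processing loop: parse raw sensory-motor input,
# interpret motions/emotions, infer the user's cognitive state, adapt the interface.
# Names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SensorFrame:
    gaze_xy: tuple        # raw eye-tracker sample
    face_landmarks: list  # raw face-camera features
    mouse_xy: tuple       # raw cursor position

def parse_sensors(frame: SensorFrame) -> dict:
    # Stage 1: parse raw sensory-motor input into low-level features.
    return {"fixation": frame.gaze_xy, "expression": "neutral", "cursor": frame.mouse_xy}

def interpret(features: dict) -> dict:
    # Stage 2: interpret the user's motions and emotions (what, where, when, who).
    return {"attending_to": "radar_panel", "affect": features["expression"]}

def infer_cognitive_state(interpretation: dict, model_state: dict) -> dict:
    # Stage 3: combine observations with a computational cognitive model of the user
    # to answer the why and how questions (e.g., overload, confusion).
    overloaded = model_state.get("pending_goals", 0) > 3
    return {"workload": "high" if overloaded else "normal", **interpretation}

def adapt_interface(cognitive_state: dict) -> str:
    # Stage 4: diagnose problems and adapt the interface to support the operator.
    return "suppress low-priority alerts" if cognitive_state["workload"] == "high" else "no change"

frame = SensorFrame(gaze_xy=(412, 230), face_landmarks=[], mouse_xy=(400, 250))
action = adapt_interface(infer_cognitive_state(interpret(parse_sensors(frame)),
                                               model_state={"pending_goals": 5}))
print(action)
```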


Behavior Research Methods Instruments & Computers | 2001

Argus: A suite of tools for research in complex cognition

Michael J. Schoelles; Wayne D. Gray

Argus simulates a radar-like target classification task. It was developed to support research in measuring and modeling cognitive workload. Argus is used in both single-subject and team modes. However, the Argus system is more than just a simulated task environment. Argus features flexible experimenter control over cognitive workload, as well as extensive data collection and data playback facilities to support the iterative nature of research in complex behaviors. In addition, embodied computational models interact with Argus using the same interface as do human subjects. In this paper, we describe these features, as well as the task simulation. In addition, we describe how the system has been used for experimentation. We conclude with a comparison of Argus with other complex task environments.
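A minimal sketch, under assumed names, of the design point that embodied models and human subjects drive Argus through the same interface: the task environment only sees a participant that observes the screen and returns an action.

```python
# Sketch of an agent-agnostic task interface: the task environment calls the same
# observe/act methods whether the participant is a human or an embodied model.
# All class and field names are hypothetical, not Argus's actual API.

from typing import Protocol

class Participant(Protocol):
    def next_action(self, screen: dict) -> dict: ...

class HumanSubject:
    def next_action(self, screen: dict) -> dict:
        # In the real system this would come from mouse/keyboard events.
        return {"click": (120, 80)}

class EmbodiedModel:
    def next_action(self, screen: dict) -> dict:
        # A cognitive model sees the same screen description and emits motor commands.
        nearest = min(screen["targets"], key=lambda t: t["range"])
        return {"click": nearest["xy"]}

def run_trial(task_state: dict, participant: Participant) -> dict:
    screen = {"targets": task_state["targets"]}   # identical presentation path for both
    return participant.next_action(screen)

state = {"targets": [{"xy": (120, 80), "range": 40}, {"xy": (300, 200), "range": 95}]}
print(run_trial(state, HumanSubject()))
print(run_trial(state, EmbodiedModel()))
```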


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Non-intrusive measurement of workload in real-time

Markus Guhe; Wayne D. Gray; Michael J. Schoelles; Wenhui Liao; Zhiwei Zhu; Qiang Ji

We present a new method to measure workload that offers several advantages. First, it uses non-intrusive means: cameras and a mouse. Second, the workload is measured in real-time. Third, the setup is comparably cheap: the cameras and sensors are off-the-shelf components. Fourth, we go beyond measuring performance and demonstrate that just using such measures does not suffice to measure workload. Fifth, by using a Bayesian network to assess the workload from the various manifest measures, the model adapts itself to the individual user as well as to a particular task. Sixth, we use a cognitive computational model to explain the cognitive mechanisms that cause the differences in workload and performance.
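A hedged sketch of the inference step: a two-state workload variable combined with conditionally independent observations, a naive-Bayes simplification of the paper's Bayesian network; all probabilities below are illustrative, not fitted values.

```python
# Toy posterior over workload given several non-intrusive measures, assuming the
# observations are conditionally independent given workload. Numbers are made up.

PRIOR = {"high": 0.3, "low": 0.7}

# P(observation present | workload state)
LIKELIHOOD = {
    "pupil_dilation":   {"high": 0.8, "low": 0.30},
    "blink_suppressed": {"high": 0.7, "low": 0.35},
    "slow_mouse":       {"high": 0.6, "low": 0.20},
}

def posterior_workload(observations: dict) -> dict:
    """observations maps measure name -> True/False (present/absent)."""
    unnorm = {}
    for state, p in PRIOR.items():
        for measure, present in observations.items():
            like = LIKELIHOOD[measure][state]
            p *= like if present else (1.0 - like)
        unnorm[state] = p
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

print(posterior_workload({"pupil_dilation": True, "blink_suppressed": True, "slow_mouse": False}))
```

A full Bayesian network would also let the conditional probability tables adapt to the individual user and task, which is the adaptivity claimed in the abstract; this sketch only shows the evidence-combination step.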


Acta Psychologica | 2003

Using Brunswikian theory and a longitudinal design to study how hierarchical teams adapt to increasing levels of time pressure

Leonard Adelman; Sheryl L. Miller; DeVere Henderson; Michael J. Schoelles

Brunswikian theory and a longitudinal design were used to study how three-person, hierarchical teams adapted to increasing levels of time pressure and, thereby, to understand why previous team research has not necessarily found a direct relationship between team processes and performance with increasing time pressure. We obtained four principal findings. First, team members initially adapted to increasing time pressure without showing any performance decrements by accelerating their cognitive processing, increasing the amount of their implicit coordination by sending more information without being asked and, to a lesser extent, filtering (omitting) certain activities. Second, teams began and continued to perform the task differently with increasing time pressure, yet often achieved comparable levels of performance. Third, time pressure did affect performance because there was a level of time pressure beyond which performance could not be maintained, although that level differed for different teams. And, fourth, some adaptation strategies were more effective than others at the highest time pressure level. Taken together, these findings support the Brunswikian perspective that one should not necessarily expect a direct relationship between team processes and performance with increasing time pressure, because teams adapt their processes in different, yet often equally effective ways, in an effort to maintain high and stable performance.


Archive | 2011

Determining the Number of Simulation Runs: Treating Simulations as Theories by Not Sampling Their Behavior

Frank E. Ritter; Michael J. Schoelles; Karen S. Quigley; Laura Cousino Klein

How many times should a simulation be run to generate valid predictions? With a deterministic simulation, the answer is simply once. With a stochastic simulation, the answer is more complex. Different researchers have proposed and used different heuristics. A review of the models presented at a conference on cognitive modeling illustrates the range of solutions and problems in this area. We present the argument that because the simulation is a theory, not data, it should not so much be sampled as run enough times to provide stable predictions of performance and the variance of performance. This applies to pure simulations as well as to human-in-the-loop simulations. We demonstrate the importance of running the simulation until it has stable performance as defined by the effect size of interest. When runs are expensive we suggest a minimum number of runs based on power calculations; when runs are inexpensive we suggest a maximum necessary number of runs. We also suggest how to adjust the number of runs for different effect sizes of interest.
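A sketch of the two rules of thumb in this argument, using standard formulas with illustrative thresholds: a power-based minimum number of runs for a given effect size, and a stopping rule that keeps running the model until its predictions stabilize. The specific functions and tolerances are assumptions, not the chapter's procedure.

```python
# (a) When runs are expensive: take a minimum n from a power calculation for the
#     effect size of interest (normal approximation to the two-sample t-test).
# (b) When runs are cheap: keep running until the running mean of the prediction
#     changes by less than a tolerance.

import math
import random
import statistics

def min_runs_power(effect_size_d, alpha=0.05, power=0.80):
    """Approximate n per group: n ~= 2 * (z_{1-alpha/2} + z_power)^2 / d^2."""
    z = statistics.NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / effect_size_d ** 2)

def runs_until_stable(simulate, tol=0.01, batch=50, max_runs=10_000):
    """Run the stochastic model until the running mean of its output stabilizes."""
    samples, prev_mean = [], None
    while len(samples) < max_runs:
        samples.extend(simulate() for _ in range(batch))
        mean = statistics.fmean(samples)
        if prev_mean is not None and abs(mean - prev_mean) < tol:
            return len(samples), mean
        prev_mean = mean
    return len(samples), statistics.fmean(samples)

print("min runs per condition for d = 0.5:", min_runs_power(0.5))   # ~63-64, a medium effect
print(runs_until_stable(lambda: random.gauss(10.0, 2.0)))           # toy stochastic model
```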


Behavior Research Methods | 2005

ProtoMatch: a tool for analyzing high-density, sequential eye gaze and cursor protocols.

Christopher W. Myers; Michael J. Schoelles

ProtoMatch is a software tool for integrating and analyzing fixed-location and movement eye gaze and cursor data. It provides a comprehensive collection of protocol analysis tools that support sequential data analyses for eye fixations and scanpaths as well as for cursor “fixations” (dwells at one location) and “cursorpaths” (movements between locations). ProtoMatch is modularized software that integrates both eye gaze and cursor protocols into a unified stream of data and provides an assortment of filters and analyses. ProtoMatch subsumes basic analyses (i.e., fixation duration, number of fixations, etc.) and introduces a method of objectively computing the similarity between scanpaths or cursorpaths using sequence alignment. The combination of filters, basic analyses, and sequence alignment in ProtoMatch provides researchers with a versatile system for performing both confirmatory and exploratory sequential data analyses (Sanderson & Fisher, 1994).
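A sketch of scanpath comparison by sequence alignment, in the spirit of ProtoMatch's similarity measure: each fixation is encoded by its area of interest (AOI) and two scanpaths are compared with a normalized string-edit distance. ProtoMatch's exact scoring scheme may differ; this is only the general technique.

```python
# Normalized edit-distance similarity between two AOI-labeled fixation sequences.

def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two label sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]

def scanpath_similarity(a, b):
    """1.0 = identical AOI sequences, 0.0 = maximally different."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

path1 = ["radar", "radar", "menu", "target", "feedback"]   # fixation AOIs, in order
path2 = ["radar", "menu", "target", "target", "feedback"]
print(scanpath_similarity(path1, path2))                    # 0.6 under this scoring
```

The same comparison applies unchanged to cursorpaths, since they are also just ordered sequences of locations.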


Cognitive Systems Research | 2005

Adapting to the task environment: Explorations in expected value

Wayne D. Gray; Michael J. Schoelles; Chris R. Sims

Small variations in how a task is designed can lead humans to trade off one set of strategies for another. In this paper we discuss our failure to model such tradeoffs in the Blocks World task using ACT-R's default mechanism for selecting the best production among competing productions. ACT-R's selection mechanism, its expected value equation, has had many successes (see, for example [Anderson, J. R., & Lebiere, C. (Eds.). (1998). Atomic components of thought. Hillsdale, NJ: Lawrence Erlbaum Associates.]) and a recognized strength of this approach is that, across a wide variety of tasks, it tends to produce models that adapt to their task environment about as fast as humans adapt. (This congruence with human behavior is in marked contrast to other popular ways of computing the utility of alternative choices; for example, Reinforcement Learning or most Connectionist learning methods.) We believe that the failure to model the Blocks World task stems from the requirement in ACT-R that all actions must be counted as a binary success or failure. In Blocks World, as well as in many other circumstances, actions can be met with mixed success or partial failure. Working within ACT-R's expected value equation we replace the binary success/failure judgment with three variations on a scalar one. We then compare the performance of each alternative with ACT-R's default scheme and with the human data. We conclude by discussing the limits and generality of our attempts to replace ACT-R's binary scheme with a scalar credit assignment mechanism.
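A sketch of the contrast the paper draws, assuming ACT-R's classic expected-value form E = P*G - C with P estimated from success and failure counts; the scalar variant shown here is an illustration of the idea of partial credit, not the paper's exact formulation.

```python
# Binary credit assignment (every firing counted as success or failure) versus a
# scalar variant that accumulates partial credit in [0, 1]. Values are illustrative.

G = 20.0  # value of the goal, as in ACT-R's E = P*G - C utility form

class BinaryProduction:
    def __init__(self, cost):
        self.successes, self.failures, self.cost = 1.0, 1.0, cost  # small priors
    def record(self, outcome):
        # Outcome is forced into a binary success/failure judgment.
        if outcome >= 0.5:
            self.successes += 1
        else:
            self.failures += 1
    def expected_value(self):
        p = self.successes / (self.successes + self.failures)
        return p * G - self.cost

class ScalarProduction(BinaryProduction):
    def record(self, outcome):
        # Outcome is treated as partial credit: mixed success shifts P gradually.
        self.successes += outcome
        self.failures += 1.0 - outcome

binary, scalar = BinaryProduction(cost=2.0), ScalarProduction(cost=2.0)
for outcome in [0.7, 0.6, 0.4, 0.8]:   # actions met with mixed success
    binary.record(outcome)
    scalar.record(outcome)
print("binary E:", binary.expected_value())
print("scalar E:", scalar.expected_value())
```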


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2000

The Eye Blink as a Physiological Indicator of Cognitive Workload

Deborah A. Boehm-Davis; Wayne D. Gray; Michael J. Schoelles

This report explores the hypothesis that eye blinks serve as a sort of “mental punctuation” during completion of a complex task. Specifically, it examines the rates of eye blinks while people are engaged in a complex decision-making task. The data suggest that eye blinks are suppressed while people are engaged in cognitive processing, and that they rebound once that processing has been completed.


Behavior Research Methods | 2014

Simplifying the interaction between cognitive models and task environments with the JSON Network Interface

Ryan M. Hope; Michael J. Schoelles; Wayne D. Gray

Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler’s effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems.
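A hedged sketch of the general pattern the interface enables: the model and the task software exchange newline-delimited JSON messages over a TCP socket, so neither side needs to be written in the other's language. The host, port, and message fields below are illustrative placeholders, not the actual JNI message schema.

```python
# Toy model-side client: connect to a (hypothetical) task-environment endpoint,
# read JSON events describing the display, and send JSON actions back.

import json
import socket

HOST, PORT = "127.0.0.1", 9000   # hypothetical task-environment endpoint

def send_message(sock, msg: dict):
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

def read_message(sock_file):
    line = sock_file.readline()
    return json.loads(line) if line else None

with socket.create_connection((HOST, PORT)) as sock:
    with sock.makefile("r", encoding="utf-8") as incoming:
        send_message(sock, {"type": "connect", "model": "ACT-R"})
        while True:
            event = read_message(incoming)       # e.g. a screen update from the task
            if event is None or event.get("type") == "trial-end":
                break
            if event.get("type") == "display":
                # The cognitive model would encode the display, decide, and act here.
                send_message(sock, {"type": "keypress", "key": "f"})
```

Because the wire format is plain JSON over a socket, the experiment software can stay exactly as the human subjects see it, which is the point made in the abstract.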

Collaboration


Top co-authors of Michael J. Schoelles:

Wayne D. Gray (Rensselaer Polytechnic Institute)
Christopher W. Myers (Air Force Research Laboratory)
Frank E. Ritter (Pennsylvania State University)
Laura Cousino Klein (Pennsylvania State University)
Markus Guhe (Rensselaer Polytechnic Institute)
Vladislav D. Veksler (Rensselaer Polytechnic Institute)
Andrew L. Reifers (Pennsylvania State University)