Publications


Featured research published by Steven C. Sutherland.


NeuroImage | 2013

Temporal prediction errors modulate cingulate-insular coupling.

Roberto Limongi; Steven C. Sutherland; Jian Zhu; Michael E. Young; Reza Habib

Prediction error (i.e., the difference between the expected and the actual outcome of an event) mediates adaptive behavior. Activity in the anterior mid-cingulate cortex (aMCC) and in the anterior insula (aINS) is associated with the commission of prediction errors under uncertainty. We propose a dynamic causal model of effective connectivity (i.e., neuronal coupling) between the aMCC, the aINS, and the striatum in which the task context drives activity in the aINS and temporal prediction errors modulate extrinsic cingulate-insular connections. Using functional magnetic resonance imaging, we scanned 15 participants while they performed a temporal prediction task. They observed visual animations and predicted when a stationary ball would begin moving after being contacted by another moving ball. To induce uncertainty-driven prediction errors, we introduced spatial gaps and temporal delays between the balls. Classical and Bayesian fMRI analyses provided evidence that the aMCC-aINS system, along with the striatum, responds not only when humans predict whether a dynamic event will occur but also when it will occur. Our results reveal that the insula is the entry port of a three-region pathway involved in the processing of temporal predictions. Moreover, prediction errors, rather than attentional demands, task difficulty, or task duration, exert an influence on the aMCC-aINS system. Prediction errors weaken the effect of the aMCC on the aINS. Finally, our computational model provides a way forward to characterize the physiological parallel of temporal prediction errors elicited in dynamic tasks.
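
For context, dynamic causal modeling of this kind rests on the standard bilinear state equation; the mapping of regions and inputs to the matrices below is an assumed reading of the abstract, not a detail taken from the paper:

\dot{x} = \Big( A + \sum_{j} u_{j} B^{(j)} \Big) x + C u

Here x would hold the neuronal states of the three regions (aMCC, aINS, striatum), A the intrinsic coupling among them, each B^{(j)} the modulation of specific connections by experimental input u_j (here, temporal prediction errors modulating the cingulate-insular connection), and C the driving inputs (here, task context entering the aINS).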


Psychonomic Bulletin & Review | 2009

The spatiotemporal distinctiveness of direct causation

Michael E. Young; Steven C. Sutherland

The launching effect, in which people judge one object to have caused another to move immediately after contact, is often described as the prototype of direct causation. The special status of this interaction may be due to its psychophysical distinctiveness, and this property may underlie the formation of causality as a conceptual category. This hypothesis was tested by having participants judge the relative similarity of pairs of events that either had no spatial gap or delay (direct launching) or had small gaps and/or short delays. Direct launching was much easier to discriminate from launches involving small gaps or delays. In a follow-up experiment, participants made similar judgments for a noncausal event.


Behavior Research Methods | 2012

Rich stimulus sampling for between-subjects designs improves model selection

Michael E. Young; James J. Cole; Steven C. Sutherland

The choice of stimulus values to test in any experiment is a critical component of good experimental design. This study examines the consequences of random and systematic sampling of data values for the identification of functional relationships in experimental settings. Using Monte Carlo simulation, uniform random sampling was compared with systematic sampling of two, three, four, or N equally spaced values along a single stimulus dimension. Selection of the correct generating function (a logistic or a linear model) was improved with each increase in the number of levels sampled, with N equally spaced values and random stimulus sampling performing similarly. These improvements came at a small cost in the precision of the parameter estimates for the generating function.
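
A minimal sketch of this kind of simulation in Python (the generating parameters, noise level, sample sizes, and the use of AIC for model selection are illustrative assumptions, not details from the paper):

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def logistic(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * x)))

def linear(x, a, b):
    return a + b * x

def aic(y, yhat, k):
    # Gaussian-error AIC computed from the residual sum of squares
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def recovery_rate(sampler, n_obs=60, n_sims=300):
    # Fraction of simulated data sets in which the true (logistic)
    # generating function beats the linear model by AIC.
    wins = 0
    for _ in range(n_sims):
        x = sampler(n_obs)
        y = logistic(x, -3.0, 6.0) + rng.normal(0.0, 0.1, n_obs)
        try:
            p_log, _ = curve_fit(logistic, x, y, p0=[0.0, 1.0], maxfev=5000)
            p_lin, _ = curve_fit(linear, x, y)
        except RuntimeError:
            continue  # rare non-converging fits count against recovery
        if aic(y, logistic(x, *p_log), 2) < aic(y, linear(x, *p_lin), 2):
            wins += 1
    return wins / n_sims

def systematic(n_levels):
    # n_levels equally spaced stimulus values, cycled across observations
    return lambda n: np.tile(np.linspace(0.0, 1.0, n_levels), n // n_levels + 1)[:n]

def uniform_random(n):
    # uniform random sampling over the same stimulus range
    return rng.uniform(0.0, 1.0, n)

for label, s in [("2 levels", systematic(2)), ("3 levels", systematic(3)),
                 ("4 levels", systematic(4)), ("random", uniform_random)]:
    print(label, recovery_rate(s))

With only two levels, the linear and logistic fits are nearly indistinguishable; adding levels, or sampling randomly, gives the logistic curvature a chance to show, which matches the pattern the study reports.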


Human Factors in Computing Systems | 2015

The Goal of Scoring: Exploring the Role of Game Performance in Educational Games

Casper Harteveld; Steven C. Sutherland

In this paper, the role of game performance as an assessment tool is explored and an approach is presented for designing and assessing learning-centered game scores. In recent years, attention has shifted from focusing on games for learning to games for assessment. The research question this paper addresses is how valid games are as an assessment tool and, more specifically, how valid the use of game scores is as a measure of assessment. To explore this use, we looked at the role of game performance in a game whose goals were designed based on its learning objectives. We hypothesized that, because of this design, the scores could be used as a measure of learning. The results of our mixed-methods study confirmed this hypothesis. However, the scores were influenced by factors such as computer skills and age. Further analysis revealed that the design of the game and the game-based training also had an influence. These insights will help in designing better predictive game scores in the future.


KSII Transactions on Internet and Information Systems | 2016

Effects of the Advisor and Environment on Requesting and Complying With Automated Advice

Steven C. Sutherland; Casper Harteveld; Michael E. Young

Given the rapid technological advances in our society and the increase in artificial and automated advisors with whom we interact on a daily basis, it is becoming increasingly necessary to understand how users interact with and why they choose to request and follow advice from these types of advisors. More specifically, it is necessary to understand errors in advice utilization. In the present study, we propose a methodological framework for studying interactions between users and automated or other artificial advisors. Specifically, we propose the use of virtual environments and the tarp technique for stimulus sampling, ensuring sufficient sampling of important extreme values and the stimulus space between those extremes. We use this proposed framework to identify the impact of several factors on when and how advice is used. Additionally, because these interactions take place in different environments, we explore the impact of where the interaction takes place on the decision to interact. We varied the cost of advice, the reliability of the advisor, and the predictability of the environment to better understand the impact of these factors on the overutilization of suboptimal advisors and underutilization of optimal advisors. We found that less predictable environments, more reliable advisors, and lower costs for advice led to overutilization, whereas more predictable environments and less reliable advisors led to underutilization. Moreover, once advice was received, users took longer to make a final decision, suggesting less confidence and trust in the advisor when the reliability of the advisor was lower, the environment was less predictable, and the advice was not consistent with the environmental cues. These results contribute to a more complete understanding of advice utilization and trust in advisors.
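
The tarp technique mentioned above could look roughly like the sketch below; this is an assumed reading (anchor the extremes of each stimulus dimension, then fill the interior randomly), not a verified specification of the procedure:

import numpy as np

rng = np.random.default_rng(0)

def tarp_sample(n, lo=0.0, hi=1.0):
    # Assumed reading of the tarp technique: always include both extremes
    # of the dimension, then spread the remaining samples randomly between
    # them so the interior of the stimulus space is covered as well.
    if n < 2:
        raise ValueError("need at least 2 samples to pin both extremes")
    interior = rng.uniform(lo, hi, n - 2)
    return np.concatenate(([lo, hi], interior))

print(np.sort(tarp_sample(10)))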


Intelligent User Interfaces | 2017

Design of Playful Authoring Tools for Social and Behavioral Science

Casper Harteveld; Nolan Manning; Farah Abu-Arja; Rick Menasce; Dean Thurston; Gillian Smith; Steven C. Sutherland

Playful environments are increasingly being used for conducting research. This makes a game platform for authoring research studies, and for teaching how to conduct research, a natural next step. In this paper, we discuss Mad Science, a playful platform being developed to allow users to create behavioral experiments. We discuss iterations of the authoring tools, including lessons learned, and the need for AI assistance to guide and teach users.


Proceedings of the 2017 ACM Workshop on Theory-Informed User Modeling for Tailoring and Personalizing Interfaces | 2017

Personalized Gaming for Motivating Social and Behavioral Science Participation

Casper Harteveld; Steven C. Sutherland

Game-like environments are increasingly used for conducting research because of the affordances such environments offer. However, the problem remains that these environments treat all users the same. To address this, personalization is necessary. In this paper, we discuss the need to personalize gamified research environments to motivate participation, illustrated by a playful platform called Mad Science, which is being developed to allow users to create social and behavioral studies. This discussion is informed both by the platform's affordances and use thus far and by existing theories of player motivation, and it contributes to theory-informed approaches to (gamified) personalization technologies.


Hawaii International Conference on System Sciences | 2016

Standing on the Shoulders of Citizens: Exploring Gameful Collaboration for Creating Social Experiments

Casper Harteveld; Amy J. Stahl; Gillian Smith; Cigdem Talgar; Steven C. Sutherland

There exists a gap in knowledge between scientists and the larger non-scientist public. As a result, much of the research information provided to the public, information that should inform their decisions, is often misunderstood. To eliminate, or at the very least minimize, this gap, there is a need to educate non-scientists about research methods and experimental design. To address this need, we have created a digital game, Mad Science, that allows non-scientists to create and participate in experiments to better understand research methods. The current study analyzes the results of a paper prototyping session in which non-scientists were asked to create experiments using the tools and scaffolding provided in the game. Participants were able to create playable scenarios and testable experiments. However, our results suggest a need for further AI support and scaffolding to address common areas of confusion and to facilitate the experimental design process.


Human Factors in Computing Systems | 2015

The Role of Environmental Predictability and Costs in Relying on Automation

Steven C. Sutherland; Casper Harteveld; Michael E. Young

There is a growing need to understand how automated decision aids are implemented and relied upon by users. Past research has focused on factors associated with the user and the automation technology to explain reliance. The purpose of the present study was to determine how the predictability of the environment affects reliance. In this paper, we present the results of an experiment using a digital game in which participants had access to a free environmental cue of varying predictive validity. Some participants also had access to automated advice at varying costs. We found that participants underutilized automated advice in more predictable environments and when advice was more costly; however, when costs were low and the environment was less predictable, participants tended to overutilize automated advice. These findings provide insights toward a more complete model of automation use and offer a framework for understanding automation biases by comparing automation use to a model of optimality.
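
As a toy illustration of the optimality benchmark such a comparison implies (the payoff structure and decision rule here are expository assumptions, not the study's model), advice is worth requesting only when its expected accuracy gain outweighs its cost:

# Toy benchmark for requesting paid advice versus acting on a free cue.
# All quantities are illustrative assumptions, not parameters from the study.

def expected_value(p_correct, payoff, cost=0.0):
    # Expected payoff of acting on a source that is correct with
    # probability p_correct, given a fixed reward per correct decision.
    return p_correct * payoff - cost

def should_request_advice(cue_validity, advisor_reliability, payoff, advice_cost):
    # Optimal policy: buy advice only if it beats the free cue in expectation.
    return (expected_value(advisor_reliability, payoff, advice_cost)
            > expected_value(cue_validity, payoff))

# A reliable advisor is worth a small fee in an unpredictable environment...
print(should_request_advice(0.6, 0.9, payoff=10.0, advice_cost=1.0))   # True
# ...but not in a predictable one, where the free cue already suffices.
print(should_request_advice(0.95, 0.9, payoff=10.0, advice_cost=1.0))  # False

Overutilization then corresponds to requesting advice when this inequality fails, and underutilization to skipping advice when it holds.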


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Design of a Wearable Stress Monitoring Tool for Intensive Care Unit Nursing: Functional Information Requirements Analysis

Mahnoosh Sadeghi; Kunal Khanade; Farzan Sasangohar; Steven C. Sutherland

Nurses are the last line of defense against preventable medical errors; however, they suffer from poor systems design and human factors issues (e.g., long shifts, dynamic workload, stressful situations, and fatigue) that contribute to a reduced quality of care. A smart nursing system based on physiological monitoring is being designed to help nurses and their managers communicate efficiently, reduce interruptions that affect critical task performance, and monitor acute stress and fatigue levels. This paper documents the systematic process of deriving information requirements through a group-participatory usability study conducted with nurses working in various Southeastern Texas hospitals. Information requirements derived from these studies include: a need for access to patients' vital signs and laboratory results, memory aid tools for various critical nursing tasks, and options to call for help and to reduce interruptions during critical tasks. The system shows promise for meeting these requirements.

Collaboration


Top co-authors of Steven C. Sutherland.

James J. Cole
Southern Illinois University Carbondale

Angie Avera
University of Houston–Clear Lake

Eric A. Jacobs
Southern Illinois University Carbondale

Jian Zhu
Southern Illinois University Carbondale