Publication


Featured research published by Stephen B. Gilbert.


Neuropsychologia | 2012

The temporal dynamics of medial and lateral frontal neural activity related to proactive cognitive control

Robert West; Kira Bailey; Brandy N. Tiernan; Wutthigrai Boonsuk; Stephen B. Gilbert

The neural correlates of proactive cognitive control were examined in two experiments using the counting Stroop task and a computerized Blackjack task in combination with event-related brain potentials (ERPs). The primary objective of the study was to determine whether slow wave activity related to proactive control would be observed in the two tasks. Consistent with the existing literature, transient components of the ERPs (i.e., medial frontal negativity and feedback related negativity) were observed over the medial frontal region in both tasks that were related to stimulus congruency and feedback processing, respectively. The medial frontal ERPs in both tasks were modeled with a pair of equivalent current dipoles placed along the anterior to posterior axis of the cingulate. Most importantly, slow wave activity was observed that differentiated incongruent trials from congruent trials after the response in the counting Stroop task, and losses from wins and ties in the Blackjack task. In the Blackjack task, a pair of dipoles in the left lateral frontal and posterior regions modeled the slow wave activity. These data reveal that updating goal representations that support proactive cognitive control may require several hundred milliseconds, in contrast to conflict or outcome monitoring, which is associated with transient medial frontal neural activity.


IEEE Transactions on Visualization and Computer Graphics | 2015

Virtual Training: Learning Transfer of Assembly Tasks

Patrick E. Carlson; Anicia Peters; Stephen B. Gilbert; Judy M. Vance; Andy Luse

In training assembly workers in a factory, there are often barriers such as cost and lost productivity due to shutdown. The use of virtual reality (VR) training has the potential to reduce these costs. This research compares virtual bimanual haptic training with traditional physical training and their effectiveness for learning transfer. In a mixed experimental design, participants were assigned to either virtual or physical training and trained by assembling a wooden burr puzzle as many times as possible during a twenty-minute period. After training, participants were tested using the physical puzzle and were retested again after two weeks. All participants were trained using brightly colored puzzle pieces. To examine the effect of color, testing involved the assembly of colored physical parts and natural wood-colored physical pieces. Spatial ability, as measured using a mental rotation test, was shown to correlate with the number of assemblies participants were able to complete during training. While physical training outperformed virtual training, after two weeks the virtually trained participants actually improved their test assembly times. The results suggest that the color of the puzzle pieces helped the virtually trained participants remember the assembly process.


ASME-AFM 2009 World Conference on Innovative Virtual Reality | 2009

Sparsh UI: A Multi-Touch Framework for Collaboration and Modular Gesture Recognition

Prasad Ramanahally; Stephen B. Gilbert; Thomas Niedzielski; Desirée Velázquez; Cole Anagnost

Most current multi-touch libraries provide support to recognize touch input from particular hardware and seldom support complex gestures. For rapid prototyping and development of multi-touch applications, particularly for collaboration across multiple disparate devices, there is a need for a framework that can support an array of multi-touch hardware, provide gesture processing, be cross-platform compatible, and allow applications to be developed in the desired programming language. In this paper we present criteria for evaluating a multi-touch library and “Sparsh UI,” an open source multi-touch library that is a novel attempt to address these issues by enabling developers to easily develop multi-touch applications. We also compare Sparsh UI with other multi-touch libraries and describe several Sparsh-based applications, including BasePlate, a system for collaborative virtual assembly.
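
As a rough illustration of the layering such a framework implies (hardware adapters feeding raw touch points into a gesture processor that applications subscribe to), the Python sketch below shows one way the pieces could fit together. The class and method names are hypothetical and do not reflect Sparsh UI's actual API.

```python
# Hypothetical sketch of the adapter -> gesture-processor -> application layering
# described above; names and APIs are illustrative, not Sparsh UI's own.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TouchPoint:
    touch_id: int
    x: float          # normalized 0..1, device-independent coordinates
    y: float
    state: str        # "down", "move", or "up"

class GestureProcessor:
    """Turns raw touch points from any hardware adapter into gesture events."""
    def __init__(self) -> None:
        self._listeners: Dict[str, List[Callable[[dict], None]]] = {}
        self._active: Dict[int, TouchPoint] = {}

    def subscribe(self, gesture: str, callback: Callable[[dict], None]) -> None:
        self._listeners.setdefault(gesture, []).append(callback)

    def feed(self, points: List[TouchPoint]) -> None:
        for p in points:
            if p.state == "up":
                self._active.pop(p.touch_id, None)
                continue
            prev = self._active.get(p.touch_id)
            self._active[p.touch_id] = p
            # One moving finger is treated as a "drag" gesture.
            if prev and p.state == "move" and len(self._active) == 1:
                self._emit("drag", {"dx": p.x - prev.x, "dy": p.y - prev.y})
        # Two simultaneous fingers are treated as a "zoom" gesture (spread).
        if len(self._active) == 2:
            a, b = list(self._active.values())
            spread = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
            self._emit("zoom", {"spread": spread})

    def _emit(self, gesture: str, event: dict) -> None:
        for cb in self._listeners.get(gesture, []):
            cb(event)

# An application registers for gesture events and never talks to the touch
# hardware directly, which is the point of the hardware abstraction.
proc = GestureProcessor()
proc.subscribe("drag", lambda e: print("drag", e))
proc.subscribe("zoom", lambda e: print("zoom", e))
proc.feed([TouchPoint(1, 0.20, 0.20, "down"), TouchPoint(1, 0.25, 0.20, "move")])
```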


IEEE Virtual Reality Conference | 2012

Puzzle assembly training: Real world vs. virtual environment

Mike Oren; Patrick E. Carlson; Stephen B. Gilbert; Judy M. Vance

While training participants to assemble a 3D wooden burr puzzle, we compared results of training in a stereoscopic, head tracked virtual assembly environment utilizing haptic devices and data gloves with real world training. While virtual training took participants about three times longer, the group that used the virtual environment was able to assemble the physical test puzzle about three times faster than the group trained with the physical puzzle. We present several possible cognitive explanations for these results and our plans for future exploration of the factors that improve the effectiveness of virtual process training over real world experience.


Artificial Intelligence in Education | 2018

Designing Adaptive Instruction for Teams: a Meta-Analysis

Robert A. Sottilare; C. Shawn Burke; Eduardo Salas; Anne M. Sinatra; Joan H. Johnston; Stephen B. Gilbert

The goal of this research was the development of a practical architecture for the computer-based tutoring of teams. This article examines the relationship of team behaviors as antecedents to successful team performance and learning during adaptive instruction guided by Intelligent Tutoring Systems (ITSs). Adaptive instruction is a training or educational experience tailored by artificially intelligent, computer-based tutors with the goal of optimizing learner outcomes (e.g., knowledge and skill acquisition, performance, enhanced retention, accelerated learning, or transfer of skills from instructional environments to work environments). The core contribution of this research was the identification of behavioral markers associated with the antecedents of team performance and learning, thus enabling the development and refinement of teamwork models in ITS architectures. Teamwork focuses on the coordination, cooperation, and communication among individuals to achieve a shared goal. For ITSs to optimally tailor team instruction, tutors must have key insights about both the team and the learners on that team. To aid the modeling of teams, we examined the literature to evaluate the relationship of teamwork behaviors (e.g., communication, cooperation, coordination, cognition, leadership/coaching, and conflict) with team outcomes (learning, performance, satisfaction, and viability) as part of a large-scale meta-analysis of the ITS, team training, and team performance literature. While ITSs have been used infrequently to instruct teams, the goal of this meta-analysis was to make team tutoring more ubiquitous by: identifying significant relationships between team behaviors and effective performance and learning outcomes; developing instructional guidelines for team tutoring based on these relationships; and applying these team tutoring guidelines to the Generalized Intelligent Framework for Tutoring (GIFT), an open source architecture for authoring, delivering, managing, and evaluating adaptive instructional tools and methods. In doing this, we have designed a domain-independent framework for the adaptive instruction of teams.


IT Professional | 2013

Capturing Cognitive Fingerprints from Keystroke Dynamics

J.M. Chang; Chi-Chen Fang; Kuan-Hsing Ho; N. Kelly; Pei-Yuan Wu; Yixiao Ding; Chris C. N. Chu; Stephen B. Gilbert; Ahmed E. Kamal; Sun-Yuan Kung

Conventional authentication systems identify a user only at the entry point. Keystroke dynamics can continuously authenticate users by their typing rhythms without extra devices. This article presents a new feature called cognitive typing rhythm (CTR) to continuously verify the identities of computer users. Two machine learning techniques, SVM and KRR, have been developed for the system. The best results from experiments conducted with 1,977 users show a false-rejection rate of 0.7 percent and a false-acceptance rate of 5.5 percent. CTR therefore constitutes a cognitive fingerprint for continuous authentication. Its effectiveness has been verified through a large-scale dataset. This article is part of a special issue on security.
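
For readers unfamiliar with the pipeline, the sketch below shows the general shape of keystroke-dynamics verification with an SVM: extract timing features from key events, train a classifier for the legitimate user, and report false-rejection and false-acceptance rates. The dwell/flight-time features and the synthetic data are illustrative assumptions, not the CTR feature set or the experimental setup from the article.

```python
# Minimal keystroke-dynamics verification sketch (assumed features, synthetic data).
import numpy as np
from sklearn.svm import SVC

def timing_features(events):
    """events: list of (key, press_time, release_time) for one typing sample.
    Returns dwell times (how long each key is held) and flight times (gaps
    between keys), truncated to a fixed-length vector.
    (Shown for reference; the demo below uses synthetic vectors instead.)"""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array(dwell[:10] + flight[:9])

rng = np.random.default_rng(0)
# Fake data: the genuine user types with shorter, more regular timings than impostors.
genuine = rng.normal(0.10, 0.02, size=(200, 19))
impostor = rng.normal(0.16, 0.05, size=(200, 19))
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)          # 1 = genuine, 0 = impostor

clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])   # train on half the data
pred = clf.predict(X[1::2])
true = y[1::2]
frr = np.mean(pred[true == 1] == 0)          # genuine samples wrongly rejected
far = np.mean(pred[true == 0] == 1)          # impostor samples wrongly accepted
print(f"FRR={frr:.3f}  FAR={far:.3f}")
```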


Artificial Intelligence in Education | 2015

Authoring Effective Embedded Tutors: An Overview of the Extensible Problem Specific Tutor (xPST) System.

Stephen B. Gilbert; Stephen B. Blessing; Enruo Guo

The Extensible Problem Specific Tutor (xPST) allows authors who are not cognitive scientists and not programmers to quickly create an intelligent tutoring system that provides instruction akin to a model-tracing tutor. Furthermore, this instruction is overlaid on existing software, so that the learner’s interface does not have to be made from scratch. The xPST architecture allows for extending its capabilities by the addition of plug-ins that communicate with additional third-party software. After reviewing this general architecture, we describe three major implementations that we have created using the xPST system, each using different third-party software as the learner’s interface. We have conducted three evaluations of authors using xPST to create tutoring content, and these are considered in turn. These evaluations show that xPST authors can quickly learn the system, and can efficiently produce successful embedded instruction.
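
A minimal sketch of the model-tracing style of feedback that xPST emulates is shown below: each tutored step in the existing interface carries an expected answer and an ordered list of hints, and student input is checked against that step. The data structures and names are hypothetical and are not xPST's actual authoring language.

```python
# Hypothetical model-tracing-style step checker; not xPST's real representation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    name: str                    # widget or field in the existing interface
    answer: str                  # expected correct input
    hints: List[str] = field(default_factory=list)
    done: bool = False

class Tutor:
    def __init__(self, steps: List[Step]) -> None:
        self.steps = {s.name: s for s in steps}

    def check(self, step_name: str, student_input: str) -> str:
        step = self.steps.get(step_name)
        if step is None:
            return "This part of the interface is not tutored."
        if student_input == step.answer:
            step.done = True
            return "Correct."
        # Wrong answer: give the next unused hint, bottoming out in the answer.
        return step.hints.pop(0) if step.hints else f"The answer is {step.answer}."

tutor = Tutor([Step("numerator", "3", ["Add the two numerators."]),
               Step("denominator", "4", ["The denominators already match."])])
print(tutor.check("numerator", "2"))   # -> hint
print(tutor.check("numerator", "3"))   # -> Correct.
```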


Human Factors in Computing Systems | 2012

The impact of three interfaces for 360-degree video on spatial cognition

Wutthigrai Boonsuk; Stephen B. Gilbert; Jonathan W. Kelly

In this paper, we describe an experiment designed to evaluate the effectiveness of three interfaces for surveillance or remote control using live 360-degree video feeds from a person or vehicle in the field. Video feeds are simulated using a game engine. While locating targets within a 3D terrain using a 2D 360-degree interface, participants indicated perceived egocentric directions to targets and later placed targets on an overhead view of the terrain. Interfaces were compared based on target-finding and map-placement performance. Results suggest that 1) non-seamless interfaces with visual boundaries facilitate spatial understanding, 2) correct perception of self-to-object relationships is not correlated with understanding of object-to-object relationships within the environment, and 3) increased video game experience corresponds with better spatial understanding of an environment observed in 360 degrees. This work can assist researchers of panoramic video systems in evaluating the optimal interface for observation and teleoperation of remote systems.
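
As a concrete example of the kind of mapping a 2D 360-degree interface performs, the hypothetical helper below converts an egocentric target direction into a horizontal position on a panoramic strip; the compared interfaces differ mainly in whether that strip is presented seamlessly or split into bounded panels. This is an illustration, not code from the study.

```python
# Hypothetical mapping from egocentric azimuth to a column of a 360-degree strip.
import math

def azimuth_to_column(target_x, target_y, viewer_x, viewer_y, heading_deg,
                      strip_width_px=1440):
    """Map the direction from viewer to target into a pixel column of a
    360-degree strip whose left and right edges wrap at +/-180 degrees
    relative to the viewer's heading."""
    azimuth = math.degrees(math.atan2(target_y - viewer_y, target_x - viewer_x))
    relative = (azimuth - heading_deg + 180.0) % 360.0 - 180.0   # -180..180
    return int((relative + 180.0) / 360.0 * strip_width_px)

# A target at +90 degrees relative azimuth lands three quarters of the way
# across a 1440-px strip (column 1080).
print(azimuth_to_column(0.0, 1.0, 0.0, 0.0, heading_deg=0.0))
```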


Intelligent Tutoring Systems | 2008

Evaluating an Authoring Tool for Model-Tracing Intelligent Tutoring Systems

Stephen B. Blessing; Stephen B. Gilbert

We have been creating an authoring tool, the Cognitive Model SDK, which allows non-cognitive scientists and non-programmers to produce a cognitive model for model-tracing tutors [1, 2]. The SDK is in use by developers at Carnegie Learning to produce their commercial Cognitive Tutors for math. However, it has never been evaluated with regard to the strong claim that non-cognitive scientists and non-programmers could, without much effort, produce useful cognitive models with it. The research presented here shows that this can be done, using a task that past researchers have used [3]. The models are evaluated across several metrics to see what characteristics of either the models or their creators may distinguish better models from worse ones. The goal of this work is to establish a baseline for future work examining how cognitive modeling can be opened up to a wider class of people.


ACM Transactions on Applied Perception | 2013

Space perception in virtual environments: Displacement from the center of projection causes less distortion than predicted by cue-based models

Jonathan W. Kelly; Melissa Burton; Brice Pollock; Eduardo Rubio; Michael Curtis; Julio de la Cruz; Stephen B. Gilbert; Eliot Winer

Virtual reality systems commonly include both monocular and binocular depth cues, which have the potential to provide viewers with a realistic impression of spatial properties of the virtual environment. However, when multiple viewers share the same display, only one viewer typically receives the projectively correct images. All other viewers experience the same images despite displacement from the center of projection (CoP). Three experiments evaluated perceptual distortions caused by displacement from the CoP and compared those percepts to predictions of models based on monocular and binocular viewing geometry. Leftward and rightward displacement from the CoP caused virtual angles on the ground plane to be judged as larger and smaller, respectively, compared to judgments from the CoP. Backward and forward displacement caused rectangles on the ground plane to be judged as larger and smaller in depth, respectively, compared to judgments from the CoP. Judgment biases were in the same direction as cue-based model predictions but of smaller magnitude. Displacement from the CoP had asymmetric effects on perceptual judgments, unlike model predictions. Perceptual distortion occurred with monocular cues alone but was exaggerated when binocular cues were added. The results are grounded in terms of practical implications for multiuser virtual environments.
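
The cue-based (monocular) predictions referenced above follow from simple projective geometry; a sketch of that geometry, in our own notation rather than the paper's, is:

```latex
% Monocular viewing-geometry sketch (our notation, not the paper's):
% a ground-plane point P is rendered for the center of projection C and
% appears at screen point S; a viewer displaced to V perceives the point
% where the ray from V through S re-intersects the ground plane.
S  = C + t\,(P - C), \qquad t \ \text{such that } S \ \text{lies on the screen plane},
\\
P' = V + s\,(S - V), \qquad s \ \text{such that } P' \ \text{lies on the ground plane}.
% If V = C, then P' = P; as V moves away from C, P' shifts and stretches,
% which is the distortion against which the angle and depth judgments are compared.
```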
