Publication


Featured research published by Brett Roads.


Human Factors in Computing Systems | 2016

Designing Engaging Games Using Bayesian Optimization

Mohammad M. Khajah; Brett Roads; Robert V. Lindsey; Yun-En Liu; Michael C. Mozer

We use Bayesian optimization methods to design games that maximize user engagement. Participants are paid to try a game for several minutes, at which point they can quit or continue to play voluntarily with no further compensation. Engagement is measured by player persistence, projections of how long others will play, and a post-game survey. Using Gaussian process surrogate-based optimization, we conduct efficient experiments to identify game design characteristics, specifically those influencing difficulty, that lead to maximal engagement. We study two games requiring trajectory planning; the difficulty of each is determined by a three-dimensional continuous design space. Two of the design dimensions manipulate the game in a user-transparent manner (e.g., the spacing of obstacles), the third in a subtle and possibly covert manner (incremental trajectory corrections). Converging results indicate that overt difficulty manipulations are effective in modulating engagement only when combined with the covert manipulation, suggesting the critical role of a user's self-perception of competence.
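
The loop sketched below illustrates the kind of Gaussian-process surrogate-based optimization the abstract describes, applied to a three-dimensional continuous design space. It is a minimal reconstruction, not the authors' code: the parameter names, the expected-improvement acquisition, and the placeholder engagement_measure objective are assumptions standing in for experiments with human players.

```python
# Hedged sketch: GP surrogate-based Bayesian optimization over a 3-D continuous
# game-design space (two overt difficulty knobs plus one covert correction knob).
# engagement_measure() is a stand-in for the paper's behavioral measurements.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
BOUNDS = np.array([[0.0, 1.0]] * 3)  # obstacle spacing, gap width, covert correction gain

def engagement_measure(design):
    """Placeholder objective: in the study, an experiment with paid participants."""
    return -np.sum((design - 0.6) ** 2) + 0.05 * rng.standard_normal()

def expected_improvement(gp, X_cand, y_best, xi=0.01):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Seed with a few random designs, then iterate: fit the surrogate, pick the
# candidate with the highest expected improvement, run it, append the result.
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(5, 3))
y = np.array([engagement_measure(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    candidates = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(2000, 3))
    next_design = candidates[np.argmax(expected_improvement(gp, candidates, y.max()))]
    X = np.vstack([X, next_design])
    y = np.append(y, engagement_measure(next_design))

print("most engaging design found:", X[np.argmax(y)])
```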


PLOS ONE | 2016

Using Highlighting to Train Attentional Expertise.

Brett Roads; Michael C. Mozer; Thomas A. Busey

Acquiring expertise in complex visual tasks is time consuming. To facilitate the efficient training of novices on where to look in these tasks, we propose an attentional highlighting paradigm. Highlighting involves dynamically modulating the saliency of a visual image to guide attention along the fixation path of a domain expert who had previously viewed the same image. In Experiment 1, we trained naive subjects via attentional highlighting on a fingerprint-matching task. Before and after training, we asked subjects to freely inspect images containing pairs of prints and determine whether the prints matched. Fixation sequences were automatically scored for the degree of expertise exhibited using a Bayesian discriminative model of novice and expert gaze behavior. Highlighted training causes gaze behavior to become more expert-like not only on the trained images but also on transfer images, indicating generalization of learning. In Experiment 2, to control for the possibility that the increase in expertise is due to mere exposure, we trained subjects via highlighting of fixation sequences from novices, not experts, and observed no transition toward expertise. In Experiment 3, to determine the specificity of the training effect, we trained subjects with expert fixation sequences from images other than the one being viewed, which preserves coarse-scale statistics of expert gaze but provides no information about fine-grain features. Observing at least a partial transition toward expertise, we obtain only weak evidence that the highlighting procedure facilitates the learning of critical local features. We discuss possible improvements to the highlighting procedure.
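
As a rough illustration of how fixation sequences might be "automatically scored for the degree of expertise exhibited," the sketch below fits separate Gaussian class-conditional models to expert and novice fixation features and scores a new sequence by its posterior probability under the expert model. The feature set (x, y, duration), the Gaussian form, and the independence assumption are illustrative choices, not the paper's exact discriminative model.

```python
# Hedged sketch of expertise scoring: fit Gaussian class-conditional densities to
# fixation features pooled from experts and from novices, then compute the
# posterior probability that a new fixation sequence came from the expert model.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_model(fixations):
    """fixations: (n, d) array of per-fixation features, e.g. [x, y, duration]."""
    mean = fixations.mean(axis=0)
    cov = np.cov(fixations, rowvar=False) + 1e-6 * np.eye(fixations.shape[1])
    return multivariate_normal(mean=mean, cov=cov)

def expert_posterior(sequence, expert_model, novice_model, prior_expert=0.5):
    """P(expert | fixation sequence), treating fixations as independent draws."""
    log_odds = (np.log(prior_expert) - np.log(1 - prior_expert)
                + expert_model.logpdf(sequence).sum()
                - novice_model.logpdf(sequence).sum())
    return 1.0 / (1.0 + np.exp(-log_odds))

# Toy data standing in for pooled expert/novice fixations and a test sequence.
rng = np.random.default_rng(1)
expert_model = fit_class_model(rng.normal([0.4, 0.5, 0.25], 0.05, size=(500, 3)))
novice_model = fit_class_model(rng.normal([0.5, 0.5, 0.40], 0.15, size=(500, 3)))
test_sequence = rng.normal([0.42, 0.5, 0.27], 0.06, size=(30, 3))
print("P(expert):", expert_posterior(test_sequence, expert_model, novice_model))
```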


Cognitive Science | 2017

Improving Human-Machine Cooperative Classification Via Cognitive Theories of Similarity

Brett Roads; Michael C. Mozer

Acquiring perceptual expertise is slow and effortful. However, untrained novices can accurately make difficult classification decisions (e.g., skin-lesion diagnosis) by reformulating the task as a similarity judgment. Given a query image and a set of reference images, individuals are asked to select the best matching reference. When references are suitably chosen, the procedure yields an implicit classification of the query image. To optimize reference selection, we develop and evaluate a predictive model of similarity-based choice. The model builds on existing psychological literature and accommodates stochastic, dynamic shifts of attention among visual feature dimensions. We perform a series of human experiments with two stimulus types (rectangles, faces) and nine classification tasks to validate the model and to demonstrate the model's potential to boost performance. Our system achieves high accuracy for participants who are naive as to the classification task, even when the classification task switches from trial to trial.
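
A minimal sketch of a similarity-based choice model in this spirit appears below, loosely following classic exemplar models (e.g., the generalized context model): similarity decays exponentially with an attention-weighted distance, choices follow the Luce rule, and attention weights are resampled to mimic stochastic shifts of attention. The specificity constant, the Dirichlet resampling, and the stimulus coding are assumptions for illustration, not the published model.

```python
# Hedged sketch of similarity-based choice with stochastic attention over
# feature dimensions; not the paper's fitted model.
import numpy as np

rng = np.random.default_rng(2)

def similarity(query, reference, attention, c=4.0):
    """Exponential-decay similarity over an attention-weighted city-block distance."""
    return np.exp(-c * np.sum(attention * np.abs(query - reference)))

def choice_probabilities(query, references, alpha=(2.0, 2.0), n_samples=500):
    """Luce-choice probabilities, averaged over sampled attention states."""
    probs = np.zeros(len(references))
    for _ in range(n_samples):
        attention = rng.dirichlet(alpha)            # stochastic attention allocation
        sims = np.array([similarity(query, r, attention) for r in references])
        probs += sims / sims.sum()                  # Luce choice rule
    return probs / n_samples

# Example: a query rectangle (width, height) and two references with known labels.
query = np.array([0.55, 0.30])
references = np.array([[0.60, 0.25],   # reference from category A
                       [0.20, 0.70]])  # reference from category B
print("P(choose each reference):", choice_probabilities(query, references))
```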


PLOS ONE | 2016

Correction: Using Highlighting to Train Attentional Expertise.

Brett Roads; Michael C. Mozer; Thomas A. Busey

The following information is missing from the Funding section: Publication of this article was funded by the University of Colorado Boulder Libraries Open Access Fund.


Journal of Vision | 2015

Visual Classification Expertise without Training

Brett Roads; Michael C. Mozer

Decision-support systems have been built to assist individuals in categorizing a visual stimulus by presenting the stimulus next to two or more reference images with known category labels. Such systems transform the task of categorization into the task of similarity judgment. Decision-support systems are playing an increasing role in diverse applications such as commercial software products, human-in-the-loop computer vision, and citizen science projects. To explore the capabilities of decision support, Mechanical Turk experiments were devised in which participants make a sequence of similarity judgments between a test exemplar and four reference exemplars. Experiment 1 used rectangles that vary along the dimensions of width and height. Judgments from this experiment were used to select and parameterize a model of human forced-choice selection. This model was used to optimize the choice of reference exemplars for specific categorization tasks. Experiment 2 evaluated implicit classification accuracy on three different categorization tasks, each corresponding to a different decision boundary in the 2D space of rectangles: vertical, horizontal, and diagonal. High classification accuracy (M = 91%, SD = 2%) was achieved even though the three implicit tasks were intermixed and subjects had no awareness that they were performing specific categorization tasks. (From their perspective, the task was similarity judgment.) Through intelligent selection of exemplars, naïve individuals can be guided to make correct classification decisions. Further experiments calibrated the forced-choice selection model (and the reference exemplars chosen based on this model) to individual participants and used more complex and naturalistic stimuli, yielding further encouraging results. Meeting abstract presented at VSS 2015.
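
The sketch below shows one way a forced-choice selection model could be used to pick reference exemplars so that similarity judgments implicitly classify a query rectangle: assume a simple softmax-of-distance choice model, then search candidate reference pairs for the one that maximizes expected accuracy against a known decision boundary. The choice model, the grid of candidates, and the boundary are illustrative assumptions, not the calibrated model from the abstract.

```python
# Hedged sketch of reference-exemplar selection for implicit classification in a
# 2-D rectangle space (width, height); all modeling choices are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(3)

def choice_prob(query, references, c=6.0):
    """P(select each reference) via a softmax of negative Euclidean distances."""
    d = np.linalg.norm(references - query, axis=1)
    s = np.exp(-c * d)
    return s / s.sum()

def expected_accuracy(references, ref_labels, queries, true_labels):
    """Mean probability that the chosen reference carries the query's true label."""
    acc = 0.0
    for q, lab in zip(queries, true_labels):
        p = choice_prob(q, references)
        acc += p[ref_labels == lab].sum()
    return acc / len(queries)

# Diagonal decision boundary: category 1 iff height > width.
queries = rng.uniform(0, 1, size=(200, 2))
true_labels = (queries[:, 1] > queries[:, 0]).astype(int)

# Candidate references on a coarse grid, labeled by the same boundary.
grid = np.array(list(itertools.product(np.linspace(0.1, 0.9, 5), repeat=2)))
grid_labels = (grid[:, 1] > grid[:, 0]).astype(int)

# Exhaustively score every (category-0, category-1) reference pair.
best = max(
    ((i, j) for i in np.flatnonzero(grid_labels == 0) for j in np.flatnonzero(grid_labels == 1)),
    key=lambda ij: expected_accuracy(grid[list(ij)], grid_labels[list(ij)], queries, true_labels),
)
print("best reference pair:", grid[list(best)])
```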


International Conference on Image Processing | 2017

Learning to generate images with perceptual similarity metrics

Jake Snell; Karl Ridgeway; Renjie Liao; Brett Roads; Michael C. Mozer; Richard S. Zemel


Journal of Vision | 2018

Towards using human-surrogate models to optimize training sequences during visual category learning

Brett Roads; Michael C. Mozer


Archive | 2017

Learning to Generate Images with Perceptual Similarity Metrics (Supplementary Material)

Jake Snell; Karl Ridgeway; Renjie Liao; Brett Roads; Michael C. Mozer; Richard S. Zemel


Archive | 2017

Learning to Generate Images with Perceptual Similarity Metrics (Poster)

Jake Snell; Karl Ridgeway; Renjie Liao; Brett Roads; Michael C. Mozer; Richard S. Zemel


Journal of Vision | 2017

The Easy-to-Hard Advantage with Real-World Visual Categories

Brett Roads; Buyun Xu; June K. Robinson; James W. Tanaka

Collaboration


Dive into Brett Roads's collaborations.

Top Co-Authors

Michael C. Mozer (University of Colorado Boulder)

Karl Ridgeway (University of Colorado Boulder)

Renjie Liao (The Chinese University of Hong Kong)

Thomas A. Busey (Indiana University Bloomington)

Buyun Xu (University of Victoria)

Mohammad M. Khajah (University of Colorado Boulder)