Publications


Featured research published by Rick P. Thomas.


Memory & Cognition | 2015

Using a model of hypothesis generation to predict eye movements in a visual search task.

Daniel R. Buttaccio; Nicholas D. Lange; Rick P. Thomas; Michael R. Dougherty

We used a model of hypothesis generation (called HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008) to make predictions regarding the deployment of attention (as assessed via eye movements) afforded by the cued recall of target characteristics before the onset of a search array. On each trial, while being eye-tracked, participants were first presented with a memory prompt that was diagnostic regarding the target’s color in a subsequently presented search array. We assume that the memory prompts led to the generation of hypotheses (i.e., potential target characteristics) from long-term memory into working memory to guide attentional processes and oculomotor behavior. However, given that multiple hypotheses might be generated in response to a prompt, it has been unclear how the focal hypothesis (i.e., the hypothesis that exerts the most influence on search) affects search behavior. We tested two possibilities using first fixation data, with the assumption that the first item fixated within a search array was the focal hypothesis. We found that a model assuming that the first item generated into working memory guides overt attentional processes was most consistent with the data at both the aggregate and single-participant levels of analysis.
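
As a loose illustration of the model comparison described above, the sketch below implements only the winning assumption: the first hypothesis generated into working memory guides the first fixation. The association strengths and the sampling rule are illustrative assumptions, not the HyGene architecture of Thomas et al. (2008).

```python
# Hypothetical sketch: sample hypotheses (target colors) from long-term memory
# in proportion to assumed cue-color association strengths, and predict that
# the first hypothesis to enter working memory receives the first fixation.
# The activation values below are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)

activation = {"red": 0.6, "blue": 0.3, "green": 0.1}  # assumed ecology

def generate_into_wm(activation, n_samples=5):
    """Sample with replacement; record the order in which distinct
    hypotheses first enter working memory."""
    colors = list(activation)
    probs = np.array(list(activation.values()))
    probs = probs / probs.sum()
    wm = []
    for _ in range(n_samples):
        hypothesis = rng.choice(colors, p=probs)
        if hypothesis not in wm:
            wm.append(hypothesis)
    return wm

wm = generate_into_wm(activation)
print("working-memory order:", wm)
print("predicted first fixation: the item colored", wm[0])
```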


British Journal of Mathematical and Statistical Psychology | 2017

Order-constrained linear optimization

Joe W. Tidwell; Michael R. Dougherty; Jeffrey S. Chrabaszcz; Rick P. Thomas

Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data.
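
As a rough sketch of the two-stage idea on simulated data: among candidate weight vectors, first maximize the ordinal fit (Kendall's τ between predicted and observed values), then break ties by least-squares fit. This is an assumption-laden toy, not the published algorithm; it brute-forces the weight direction in two dimensions, exploiting the fact that Kendall's τ is invariant to the scale of the weights.

```python
# Toy OCLO-style fit (illustrative only): tau-maximize, then LS tiebreak.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, 0.5]) + rng.standard_t(df=2, size=n)  # fat-tailed noise

def tau_fit(w):
    """Ordinal fit: Kendall's tau between predictions and outcomes."""
    tau, _ = kendalltau(X @ w, y)
    return tau

def ls_fit(w):
    """Negative squared error after rescaling predictions to y's units."""
    D = np.c_[X @ w, np.ones(n)]
    b, *_ = np.linalg.lstsq(D, y, rcond=None)
    return -np.sum((D @ b - y) ** 2)

# brute-force the weight direction (tau is scale-free, so only direction matters)
angles = np.linspace(0.0, np.pi, 360)
candidates = np.c_[np.cos(angles), np.sin(angles)]
taus = np.array([tau_fit(w) for w in candidates])
tied = candidates[np.isclose(taus, taus.max(), atol=1e-3)]  # tau-maximizing set
w_oclo = max(tied, key=ls_fit)                              # LS tiebreak
print("OCLO-style direction:", w_oclo, " best tau:", round(taus.max(), 3))
```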


Sociological Methodology | 2015

An Introduction to the General Monotone Model with Application to Two Problematic Data Sets

Michael R. Dougherty; Rick P. Thomas; Ryan P. Brown; Jeffrey S. Chrabaszcz; Joe W. Tidwell

We argue that the mismatch between data and analytical methods, along with common practices for dealing with “messy” data, can lead to inaccurate conclusions. Specifically, using previously published data on racial bias and culture of honor, we show that manifest effects, and therefore theoretical conclusions, are highly dependent on how researchers decide to handle extreme scores and nonlinearities when data are analyzed with traditional approaches. Within least-squares (LS) approaches, statistical effects appeared or disappeared on the basis of the inclusion or exclusion of as little as 1.5% (3 of 198) of the data, and highly predictive variables were masked by nonlinearities. We then demonstrate a new statistical modeling technique called the general monotone model (GeMM) and show that it has a number of desirable properties that may make it more appropriate for modeling messy data: It is more robust to extreme scores, less affected by outlier analyses, and more robust to violations of linearity on both the response and predictor variables compared with a variety of well-established statistical algorithms, and it frequently possesses greater statistical power. We argue that using procedures that make fewer assumptions about the data, such as GeMM, can lessen the need for researchers to use data-editing strategies (e.g., to apply transformations or to conduct outlier analyses) on their data to satisfy often unrealistic statistical assumptions, leading to more consistent and accurate conclusions about data than traditional approaches of data analysis.
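
The robustness claim can be made concrete with a toy demonstration (simulated data, not the paper's): replacing 1.5% of a sample, 3 of 200 points, with extreme scores reverses an OLS slope while leaving the ordinal association that GeMM targets nearly unchanged.

```python
# Illustrative demo: a few extreme scores flip an OLS effect; Kendall's tau,
# the ordinal criterion behind GeMM, barely moves. Data are simulated.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.3 * x + rng.normal(size=200)  # true positive relation

def ols_slope(x, y):
    return np.polyfit(x, y, 1)[0]

# contaminate 1.5% of the sample (3 of 200 points) with extreme scores
xc, yc = x.copy(), y.copy()
xc[:3], yc[:3] = 4.0, -25.0

print("OLS slope, clean vs contaminated: ",
      round(ols_slope(x, y), 3), round(ols_slope(xc, yc), 3))
print("Kendall tau, clean vs contaminated:",
      round(kendalltau(x, y)[0], 3), round(kendalltau(xc, yc)[0], 3))
```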


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Cue Use in Distal Autism Spectrum Assessment: A Lens Model Analysis of the Efficacy of Telehealth Technologies

David A. Illingworth; Rick P. Thomas; Agata Rozga; Christopher J. Smith

Recent developments in the field of telehealth suggest that novel technologies may ameliorate patients’ limited access to clinicians capable of conducting ASD assessments (Koch, 2006). Specifically, studies have shown that parents can capture informative behaviors that aid in autism assessment by using phone-based applications, and use of these videos results in diagnoses that are consistent with those of clinicians who interact with the same child in person (Nazneen et al., 2015; Smith et al., in press). It remains unknown how clinicians make use of the information gleaned from videos uploaded to a store-and-forward system. Given that clinicians and physicians often exhibit bias in their use of available information, we sought to understand how cues were utilized when direct contact with or observation of the patient is not possible. We used lens model analyses to evaluate one store-and-forward approach: the Naturalistic Observation Diagnostic Assessment (NODA; Smith et al., 2009). Brunswik’s (1952, 1955) lens model provides a computational approach to evaluating use of information while formulating decisions (Karelaia & Hogarth, 2008), such as in the assessment and diagnosis of ASD in children. The parents of 51 children used the NODA procedure to upload four 10-minute videos depicting the child’s behavior in familiar in-home scenarios. Eleven children were typically developing, and the remaining 40 were seeking an autism evaluation. Each child was observed twice: one clinician performed a standard in-person assessment (IPA), while the other performed an assessment via videos uploaded to the NODA tool. Observations for 65 classes of behavior (e.g., limited conversation, speaking volume too loud, lack of peer play, echolalia, lining up toys, preoccupation with activity) were clustered into eight nominal variables representing the seven sub-criteria associated with ASD (American Psychiatric Association, 2013) and an additional criterion for behavior labeled as typical. We computed a count for each ASD variable that represented the frequency with which the NODA clinician used the label when tagging the videos. Three pairs of linear regressions were run to estimate the weight clinicians placed on observations associated with each sub-criterion for ASD. Each pair of regressions consisted of one analysis where NODA tag counts were regressed onto the decision made by the IPA clinician and another that regressed NODA tag counts onto the NODA clinician’s decision. The three sets of regressions modeled the clinicians’ use of cues as an equal-weight strategy, a conjunctive strategy, and a disjunctive strategy, respectively. Our results suggest that clinicians consistently derive their decisions from a limited number of the cues available to them, as no analysis found more than two classes of observation to be predictive of diagnosis. Specifically, we found that IPA and NODA clinicians appeared to adopt a conjunctive rule and relied most heavily on the number of typical behaviors observed. We also found a high level of agreement between the IPA and NODA clinicians with respect to use of information and diagnosis. These findings suggest that there is no dearth of information available to clinicians for distal ASD assessment when observations are made through pre-recorded video provided by parents via the NODA system, as compared to IPA. The results of the reported study illustrate the promise of telehealth technology adoption for distal patient assessment and diagnosis.
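
The abstract does not give the exact functional forms of the three strategy models, so the following is a hedged reconstruction on simulated tag counts: purely as assumptions, the equal-weight, conjunctive, and disjunctive strategies are operationalized as the sum, minimum, and maximum of the counts, each regressed onto a toy diagnosis variable.

```python
# Hypothetical lens-model-style regressions on simulated NODA tag counts.
# The composites (sum/min/max) are assumed operationalizations, not the paper's.
import numpy as np

rng = np.random.default_rng(3)
tags = rng.poisson(lam=2.0, size=(51, 8))                 # 51 children, 8 criteria
diagnosis = (tags[:, :7].sum(axis=1) > 14).astype(float)  # toy clinician decision

def regress(composite, outcome):
    """Ordinary least squares of the decision on one composite cue."""
    D = np.c_[composite, np.ones(len(outcome))]
    b, *_ = np.linalg.lstsq(D, outcome, rcond=None)
    return b  # slope, intercept

for name, composite in [("equal weight", tags.sum(axis=1)),
                        ("conjunctive", tags.min(axis=1)),
                        ("disjunctive", tags.max(axis=1))]:
    slope, intercept = regress(composite, diagnosis)
    print(f"{name:12s} slope={slope:+.3f} intercept={intercept:+.3f}")
```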


PLOS ONE | 2018

Long-term serial position effects in cue-based inference

Ashley Lawrence; Rick P. Thomas; Michael R. Dougherty

An important theoretical question in decision making concerns the nature of cue generation: What mechanism drives the generation of cues used to make inferences? Most models of decision making assume that the properties of cues, often cue validity, initiate a set of dynamic pre-decision processes. In two studies, we test how memory accessibility affects cue use by manipulating both ecological cue validity and cue accessibility in a stock-forecasting task. Cue accessibility was manipulated through the pattern of accurate cue discriminations within experiment blocks of the learning phase of the experiments. Specifically, we manipulated the serial positions in which the cues accurately discriminated while holding overall cue validity constant. At test, participants preferred cues that discriminated early in the learning phase—a kind of primacy effect. The findings suggest that cue use is influenced by memory retrieval mechanisms and that cue use is not solely determined by cue validity. The results have implications for the development of computational models of heuristic decision-making.


Quarterly Journal of Experimental Psychology | 2017

Does constraining memory maintenance reduce visual search efficiency?

Daniel R. Buttaccio; Nicholas D. Lange; Rick P. Thomas; Michael R. Dougherty

We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Expectations Influence Visual Search Performance

Carolyn Hartzell; Rick P. Thomas

When the number of visual search targets is unknown, observers in all domains are more likely to miss targets after finding an initial target—a costly effect known as satisfaction of search in radiology (Tuddenham, 1962) or, more recently, subsequent search misses (SSMs; Cain, Adamo, & Mitroff, 2013). Many explanations for SSMs focus on perceptual characteristics of the targets and distractors or the working memory resources used to store target information (Cain et al., 2013; Körner & Gilchrist, 2007). We argue and test the idea that higher-level cognition can significantly affect search behavior and search errors, particularly for subsequent search misses. Specifically, we present evidence that expectations generated from long-term memory drive visual search and can account for search behavior both before and after an initial target is located. During the study, participants first learned an environmental ecology for cue and target sets. This is similar to a radiologist’s acquired experience with the relationship between symptoms on a patient’s medical chart and possible abnormalities in an x-ray. After training, participants were presented with just the cue and then were eye-tracked as they searched for one or two targets in a field of distractors. Using hypothesis-guided search as a theoretical foundation (Buttaccio, Lange, Thomas, & Dougherty, 2014), we predicted that initial search behavior would be driven by the hypotheses most highly associated with the cue, where hypotheses are defined as possible target characteristics – in this case, color. We predicted that search occurring after the first target would be driven by the hypotheses most highly correlated with the first target found. Measures of search behavior included response times, fixations by object color, and miss errors for secondary targets.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Price as Information: Incidental Search Costs Affect Decisions to Terminate Information Search and Valuations of Information Sources

David A. Illingworth; Rick P. Thomas

This study investigated the influence of incidental search costs on decisions to terminate information acquisition and the valuation of information sources. Participants who paid for information in a sequential hypothesis-testing, medical diagnosis task terminated search earlier than those who acquired information without incidental costs. This finding is consistent with research in various search domains that demonstrates the aversive nature of costs. Participants exhibited a preference for highly diagnostic tests over those possessing low diagnosticity when cost distributions required participants to pay the same price to view the outcome of each test. However, participants showed a preference for tests that were more expensive than their alternatives when costs differed between tests. This finding suggests that incidental costs influence the valuation of an information source during search. Our observation is consistent with a cost-quality substitution heuristic, where acquisition costs become a surrogate for usefulness when estimating the quality of tests.
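
For readers unfamiliar with the term, a test's diagnosticity can be illustrated with a likelihood ratio; the probabilities below are assumed values for illustration, not taken from the study.

```python
# Diagnosticity as a likelihood ratio: outcomes of a diagnostic test are much
# more probable under one hypothesis than the other. Values are hypothetical.
def likelihood_ratio(p_pos_given_disease, p_pos_given_healthy):
    """LR of a positive result; values far from 1.0 indicate high diagnosticity."""
    return p_pos_given_disease / p_pos_given_healthy

print("high-diagnosticity test:", likelihood_ratio(0.90, 0.10))  # 9.0
print("low-diagnosticity test: ", likelihood_ratio(0.55, 0.45))  # ~1.22
```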


Journal of Applied Research in Memory and Cognition | 2018

Assessment of Expert Performance Compared Across Professional Domains

Rick P. Thomas; Ashley Lawrence


Journal of Behavioral Decision Making | 2018

Integrating Fast and Frugal Heuristics with a Model of Memory-based Cue Generation

Ashley Lawrence; Rick P. Thomas; Michael R. Dougherty

Collaboration


Dive into Rick P. Thomas's collaborations.

Top Co-Authors

Ashley Lawrence (Georgia Institute of Technology)
David A. Illingworth (Georgia Institute of Technology)
Agata Rozga (Georgia Institute of Technology)
Carolyn Hartzell (Georgia Institute of Technology)