Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Allaire K. Welk is active.

Publication


Featured research published by Allaire K. Welk.


International Journal of Cyber Behavior, Psychology and Learning | 2015

Will the Phisher-Men Reel You In?: Assessing Individual Differences in a Phishing Detection Task

Allaire K. Welk; Kyung Wha Hong; Olga A. Zielinska; Rucha Tembe; Emerson R. Murphy-Hill; Christopher B. Mayhorn

Some authors suggest that regardless of how good security technology is, it is the “people problem” that must be overcome for successful cybersecurity (West, Mayhorn, Hardee, & Mendel, 2009). While security threats to the average computer user might take a variety of forms such as viruses or worms delivered via nefarious websites or USB drives, identity theft tactics such as phishing are becoming increasingly problematic and common. Phishing is a technology-based, social engineering tactic where attackers attempt to appear as authorized sources to target individuals and obtain personal and/or sensitive information. The current research aims to explore how individuals differ in phishing susceptibility within the context of a real-world, email-related decision-making task.


Symposium and Bootcamp on the Science of Security | 2016

Differences in trust between human and automated decision aids

Carl J. Pearson; Allaire K. Welk; William A. Boettcher; Roger C. Mayer; Sean Streck; Joseph Simons-Rudolph; Christopher B. Mayhorn

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

A Temporal Analysis of Persuasion Principles in Phishing Emails

Olga A. Zielinska; Allaire K. Welk; Christopher B. Mayhorn; Emerson R. Murphy-Hill

Eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University were assessed by two reviewers for Cialdini’s six principles of persuasion: authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation. A correlational analysis of email characteristics by year revealed that the persuasion principles of commitment/consistency and scarcity have increased over time, while the principles of reciprocation and social proof have decreased over time. Authority and liking/similarity revealed mixed results, with certain characteristics increasing and others decreasing. Results from this study can inform user training about phishing emails and help make cybersecurity software more effective.


Symposium and Bootcamp on the Science of Security | 2015

Exploring expert and novice mental models of phishing

Olga A. Zielinska; Allaire K. Welk; Christopher B. Mayhorn; Emerson R. Murphy-Hill

Mental models are internal representations of a concept or system that develop with experience. By rating pairs of concepts on the strength of their relationship, networks can be created showing an in-depth analysis of how information is organized. We asked novice and expert computer users to rate 10 terms related to the prevention of phishing. Expert mental models were more complex with more links between concepts. Relatedness ratings provide quantifiable network displays of mental models of novices and experts that cannot be seen through interviews. This information could provide a basis for future research on how mental models could be used to determine phishing vulnerability and the effectiveness of phishing training.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Exploring Expert and Novice Mental Models of Phishing

Olga A. Zielinska; Allaire K. Welk; Christopher B. Mayhorn; Emerson R. Murphy-Hill

Experience influences actions people take in protecting themselves against phishing. One way to measure experience is through mental models. Mental models are internal representations of a concept or system that develop with experience. By rating pairs of concepts on the strength of their relationship, networks can be created through Pathfinder, showing an in-depth analysis of how information is organized. Researchers had novice and expert computer users rate three sets of terms related to phishing. The terms were divided into three categories: prevention of phishing, trends and characteristics of phishing attacks, and the consequences of phishing. Results indicated that expert mental models were more complex with more links between concepts. Specifically, experts had sixteen, thirteen, and fifteen links in the networks describing the prevention, trends, and consequences of phishing, respectively; however, novices only had eleven, nine, and nine links in the networks describing prevention, trends, and consequences of phishing, respectively. These preliminary results provide quantifiable network displays of mental models of novices and experts that cannot be seen through interviews. This information could provide a basis for future research on how mental models could be used to determine phishing vulnerability and the effectiveness of phishing training.
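The Pathfinder scaling step the abstracts lean on can be sketched directly: treat weak relatedness as large distance, then prune every direct link that some indirect path can beat. This is a minimal PFNET (r = ∞, q = n − 1) sketch; the concepts and ratings below are invented for illustration, not data from the study.

```python
def pathfinder_links(dist):
    """Keep a direct link only if no indirect path beats it: PFNET(r=inf, q=n-1).
    dist is a symmetric matrix of distances (larger = less related)."""
    n = len(dist)
    m = [row[:] for row in dist]
    # Floyd-Warshall variant: replace (min, +) with (min, max) to get the
    # minimax path distance between every pair of concepts.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                m[i][j] = min(m[i][j], max(m[i][k], m[k][j]))
    # A link survives when the direct distance equals the minimax distance.
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] == m[i][j]]

concepts = ["phishing", "password", "firewall", "email"]
# Hypothetical relatedness ratings on a 0-9 scale (invented, not study data);
# convert to distances so strongly related pairs end up close together.
ratings = [[0, 6, 4, 8],
           [6, 0, 5, 3],
           [4, 5, 0, 2],
           [8, 3, 2, 0]]
dist = [[0 if i == j else 10 - ratings[i][j] for j in range(4)]
        for i in range(4)]

links = pathfinder_links(dist)
for i, j in links:
    print(concepts[i], "--", concepts[j])
```

The number of surviving links is the complexity measure the abstracts compare: under this pruning, an expert's denser, more consistent ratings yield more links than a novice's.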


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

I Was Blind but Now I See: A Manipulation of Task Relevance on Inattentional Blindness

Allaire K. Welk; James H. Creager; Douglas J. Gillan

Previous inattentional blindness research suggests that unexpected events may be cognitively processed at some level, but typically have difficulty reaching consciousness. The present study aims to investigate if and how primary-task relevance of an unexpected event contributes to individuals’ ability to thoroughly process and subsequently identify the event. Participants performed a dynamic, computer-based target-monitoring task, in which unexpected events occasionally occurred. Primary task performance and the ability to correctly identify unexpected changes were recorded. Results indicate that when unexpected events contained information that was relevant to the primary target-monitoring task, they were more frequently identified. Additionally, in conditions of task relevance, participants were more confident in their ability to recognize unexpected events accurately. Applications of these data include interface design, improved safety in highway design, and enhanced training programs for dynamic visual monitoring tasks such as air traffic control.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

The Oddball Effect and Inattentional Blindness: How Unexpected Events Influence Our Perceptions Of Time

Thomas A. Stokes; Allaire K. Welk; Olga A. Zielinska; Douglas J. Gillan

Inattentional blindness is a phenomenon in which an unexpected event goes unnoticed (at a conscious level) during a demanding task. The oddball effect is a perceptual phenomenon whereby novel or unexpected stimuli result in longer perceived time durations. The two phenomena, inattentional blindness and the oddball effect, seem to have no surface relationship; however, they share an important commonality: both occur in the presence of unexpected events. The present research aims to connect the two bodies of work and examine if and how the oddball effect manifests itself within an inattentional blindness paradigm. The results of this research have important implications, including understanding the effect of unexpected events on conscious attention and how the conscious processing of the event influences time perception. Results may also inform the design of systems that support tasks that require keeping track of elapsed duration when unexpected events may occur.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

In Automation We Trust? Identifying Varying Levels of Trust in Human and Automated Information Sources

Carl J. Pearson; Allaire K. Welk; Christopher B. Mayhorn

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that trust is an antecedent to reliance, and often influences how individuals prioritize and integrate information presented from a human and/or automated information source. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measured how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information regarding which route was safest from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs and systems for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Differences in Mental Model Development Among Psychology and Engineering Students of a Human Factors Course

Lawton Pybus; Allaire K. Welk; Douglas J. Gillan

How does a human factors practitioner’s primary field of study affect the way he or she conceives of human factors concepts? Previous work has studied how mental models develop over the course of instruction, and how experts structure human factors knowledge. The present study longitudinally assessed mental models of human factors among students from psychology majors and students from engineering majors. Participants rated the relatedness of pairs of concepts for two units: one theoretical, and one applied. These data were used to produce Pathfinder networks for comparison. Results showed that students from the two majors held different mental models of the same concepts before and after instruction. Unexpected findings may indicate a possible application for mental model assessment: diagnosing issues in course design. Limitations, conclusions, and suggestions for future research are discussed.


Symposium and Bootcamp on the Science of Security | 2015

All signals go: investigating how individual differences affect performance on a medical diagnosis task designed to parallel a signals intelligence analyst task

Allaire K. Welk; Christopher B. Mayhorn

Signals intelligence analysts play a critical role in the United States government by providing essential information regarding potential threats to national security to government leaders. Analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. The current study aimed to evaluate how individual differences influence performance in an Internet search-based medical diagnosis task designed to simulate a signals analyst task. The individual differences of interest included working memory capacity and previous experience with elements of the task, specifically health literacy, prior experience using the Internet, and prior experience conducting Internet searches. Preliminary results indicated that working memory significantly predicted performance on this medical diagnosis task; conversely, medical literacy, prior experience using the Internet, and Internet search experience were not significant predictors of performance. These results support previous research and provide additional evidence that working memory capacity greatly influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not. These findings suggest that working memory capacity should be considered when screening individuals for signals intelligence analyst positions. Future research should aim to generalize these findings within a broader sample of individuals, ideally utilizing a task that directly replicates those performed by intelligence analysts.

Collaboration


Dive into Allaire K. Welk's collaborations.

Top Co-Authors

Christopher B. Mayhorn, North Carolina State University
Olga A. Zielinska, North Carolina State University
Emerson R. Murphy-Hill, North Carolina State University
Douglas J. Gillan, North Carolina State University
Carl J. Pearson, North Carolina State University
Thomas A. Stokes, North Carolina State University
James H. Creager, North Carolina State University
Jim Witschey, North Carolina State University
Joseph Simons-Rudolph, North Carolina State University
Kyung Wha Hong, North Carolina State University