Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kristen Greene is active.

Publication


Featured research published by Kristen Greene.


human factors in computing systems | 2007

Usability of voting systems: baseline data for paper, punch cards, and lever machines

Michael D. Byrne; Kristen Greene; Sarah P. Everett

In the United States, computer-based voting machines are rapidly replacing older technologies. While this has the potential to improve usability, particularly in terms of accessibility, the only way to know whether usability has improved is to have baseline data on the usability of traditional technologies. We report an experiment assessing the usability of punch cards, lever machines, and two forms of paper ballot. There were no differences in ballot completion time among the four methods, but there were substantial effects on error rate: the paper ballots were superior to the other methods, and error rate interacted with voter age. Subjective usability was assessed with the System Usability Scale and showed a slight advantage for bubble-style paper ballots. Overall, paper ballots were found to be particularly usable, which raises important technological and policy issues.
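
A note on the scale mentioned above: the System Usability Scale (SUS) is a standard ten-item questionnaire with a conventional 0-100 scoring rule. The following is a minimal sketch of that conventional scoring, not code from the study, and the example ratings are hypothetical.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten item responses.

    `responses` is a list of ten integers in 1..5 (strongly disagree..strongly agree),
    in questionnaire order. Odd-numbered items are positively worded, even-numbered
    items negatively worded; the standard rescaling maps the summed contributions to 0..100.
    """
    if len(responses) != 10 or any(r < 1 or r > 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... positive; 2,4,6,... negative
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: one participant's (hypothetical) ratings of a bubble-style paper ballot.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```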


human factors in computing systems | 2008

Electronic voting machines versus traditional methods: improved preference, similar performance

Sarah P. Everett; Kristen Greene; Michael D. Byrne; Dan S. Wallach; Kyle Derr; Daniel Sandler; Ted Torous

In the 2006 U.S. election, it was estimated that over 66 million people would be voting on direct recording electronic (DRE) systems in 34% of the nation's counties [8]. Although these computer-based voting systems have been widely adopted, they have not been empirically proven to be more usable than their predecessors. The series of studies reported here compares usability data from a DRE with those from more traditional voting technologies (paper ballots, punch cards, and lever machines). Results indicate that there was little difference between the DRE and these older methods in efficiency or effectiveness. However, in terms of user satisfaction, the DRE was significantly better than the older methods. Paper ballots also performed well, but participants were much more satisfied with their experiences voting on the DRE. The disconnect between subjective and objective usability has potential policy ramifications.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006

Measuring the Usability of Paper Ballots: Efficiency, Effectiveness, and Satisfaction

Sarah P. Everett; Michael D. Byrne; Kristen Greene

The Help America Vote Act (HAVA) of 2002 secured funding for improvements to election administration, including upgrading older voting systems to meet new guidelines. To determine whether new voting systems are improvements over existing ones, information is needed on the usability of the older, traditional systems. This study was designed as a first step in addressing the need for usability data on existing voting systems. Three traditional paper ballots were empirically evaluated to collect baseline data that can later be compared with newer, electronic voting systems. Usability was evaluated using the three International Organization for Standardization (ISO) metrics suggested by the National Institute of Standards and Technology (NIST): effectiveness, efficiency, and satisfaction. All three ballot types (bubble, arrow, and open response) produced reasonable levels of efficiency. The three ballot types did not produce different levels of effectiveness, but the overall error rate was higher than would be expected. On satisfaction, voters were clearly most satisfied with their experience with the bubble ballot.
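
For readers unfamiliar with the three ISO metrics named here, the sketch below shows one way effectiveness (error rate), efficiency (completion time), and satisfaction (SUS score) could be summarized from per-participant ballot records. The field names and values are hypothetical; this is not the study's data or analysis code.

```python
from statistics import mean

# Hypothetical per-participant records for one ballot type; not data from the study.
# Each record: contests voted in error, total contests, completion time (s), SUS score.
ballots = [
    {"errors": 1, "contests": 21, "time_s": 210, "sus": 78},
    {"errors": 0, "contests": 21, "time_s": 185, "sus": 85},
    {"errors": 2, "contests": 21, "time_s": 240, "sus": 70},
]

# Effectiveness: proportion of contests completed without error.
error_rate = sum(b["errors"] for b in ballots) / sum(b["contests"] for b in ballots)
effectiveness = 1 - error_rate

# Efficiency: mean ballot completion time.
efficiency_s = mean(b["time_s"] for b in ballots)

# Satisfaction: mean SUS score.
satisfaction = mean(b["sus"] for b in ballots)

print(f"effectiveness={effectiveness:.3f}, mean time={efficiency_s:.0f}s, mean SUS={satisfaction:.1f}")
```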


international conference on cross-cultural design | 2014

Development of a Scale to Assess the Linguistic and Phonological Difficulty of Passwords

Jennifer C. Romano Bergstrom; Stefan A. Frisch; David Charles Hawkins; Joy Hackenbracht; Kristen Greene; Mary F. Theofanos; Brian Griepentrog

Institutions often require or recommend that their employees use secure, system-generated passwords. It is not clear how well linguistic and phonological language properties map onto complex, randomly-generated passwords. Passwords containing a mix of letters, numbers, and other symbol characters may or may not be similar to common patterns in spoken or written English. The Linguistic Phonological Difficulty (LPD) scoring rubric was created by considering the extent to which a string of characters in a password resembles ordinary spoken or written language patterns. LPD is a score calculated through a six-rule process that considers these spoken and written patterns of English as well as memory load. These rules can be applied to any password. Our research explores mapping linguistic and phonological language properties onto complex randomly generated passwords to assess behavioral performance.
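
The six LPD rules themselves are defined in the paper and are not reproduced here; the toy scorer below only illustrates the general idea of rule-based scoring over a password string, using made-up heuristics for character classes, consonant runs, and length.

```python
import re

def pronounceability_penalty(password: str) -> int:
    """Toy heuristic for how far a password strays from ordinary English patterns.

    This is NOT the LPD rubric from the paper; it is a hypothetical illustration of
    rule-based scoring over a character string. Higher scores suggest a string that
    is harder to verbalize and remember.
    """
    score = 0
    # Character classes beyond lowercase letters add load.
    score += 1 if re.search(r"\d", password) else 0
    score += 1 if re.search(r"[^A-Za-z0-9]", password) else 0
    score += 1 if re.search(r"[A-Z]", password) and re.search(r"[a-z]", password) else 0
    # Long consonant runs are hard to pronounce as syllables.
    score += 2 * len(re.findall(r"[bcdfghjklmnpqrstvwxz]{4,}", password.lower()))
    # Length contributes to memory load.
    score += max(0, len(password) - 8) // 4
    return score

for pw in ["monkey", "Tr0ub4dor&3", "m#o)fp^2aRf207"]:
    print(pw, pronounceability_penalty(pw))
```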


international conference on human computer interaction | 2015

The Authentication Equation: A Tool to Visualize the Convergence of Security and Usability of Text-Based Passwords

Cathryn A. Ploehn; Kristen Greene

Password management is a ubiquitous struggle of the modern human. Despite usability playing a vital role in authentication, many password policies and requirements focus on security without sufficient consideration of human factors. In fact, security and usability needs are often in contention. Until an improved authentication method beyond character input is implemented on a large scale, developing new methodologies for balancing competing requirements is vital. This research project focused on building a data visualization tool to explore password usability and security metrics. The visualization tool integrates various measurements of passwords, enabling exploration of the intersection of their usability and security components. The tool is based on insight from previously gathered data from usability studies conducted at the United States National Institute of Standards and Technology. It also leverages web technologies to flexibly display data sets computed from sets of passwords. The tool is available at https://github.com/usnistgov/DataVis.
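
As an illustration of the kind of per-password measurements such a tool could ingest, the sketch below computes a crude security proxy (keyspace entropy in bits) and a crude usability proxy (character-class shifts while typing). These are simplistic stand-ins for illustration only, not the metrics used by the NIST tool.

```python
import math
import string

def keyspace_entropy_bits(password: str) -> float:
    """Crude security proxy: log2 of the keyspace implied by the character classes used."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password): pool += 10
    if any(c in string.punctuation for c in password): pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def usability_proxy(password: str) -> int:
    """Crude usability proxy: number of switches between character classes while typing."""
    def cls(c):
        if c in string.ascii_lowercase: return "lower"
        if c in string.ascii_uppercase: return "upper"
        if c in string.digits: return "digit"
        return "symbol"
    return sum(1 for a, b in zip(password, password[1:]) if cls(a) != cls(b))

for pw in ["correcthorse", "P@ssw0rd!", "m#o)fp^2aRf207"]:
    print(f"{pw:>16}  entropy={keyspace_entropy_bits(pw):5.1f} bits  class-shifts={usability_proxy(pw)}")
```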


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Public Safety Communication User Needs: Voices of First Responders

Shanee T. Dawkins; Kristen Greene; Michelle Potts Steves; Mary F. Theofanos; Yee-Yin Choong; Susanne M. Furman; Sandra Spickard Prettyman

The public safety community is transitioning from land mobile radios to a communications technology ecosystem including a variety of broadband data sharing platforms. Successful deployment and adoption of new communications technology relies on efficient and effective user interfaces based on understanding first responder needs, requirements, and contexts of use; human factors research is needed to examine these factors. As such, this paper presents initial qualitative research results via semi-structured interviews with 133 first responders across the U.S. While there are similarities across disciplines, results show there is no easy “one size fits all” communications technology solution. To facilitate trust in new communications technology, solutions must be dependable, easy to use for first responders, and meet their communication needs through the application of user-centered design principles. During this shift in public safety communications technology, the time is now to leverage existing human factors expertise to influence emerging technology for public safety.


Proceedings 2018 Workshop on Usable Security | 2018

User Context: An Explanatory Variable in Phishing Susceptibility

Kristen Greene; Michelle Potts Steves; Mary F. Theofanos; Jennifer Kostick

Extensive research has been performed to examine the effectiveness of phishing defenses, but much of this research was performed in laboratory settings. In contrast, this work presents 4.5 years of workplace-situated, embedded phishing email training exercise data, focusing on the last three phishing exercises, which included participant feedback. The sample was an operating unit of approximately 70 staff members within a U.S. government research institution. A multiple-methods assessment approach revealed that an individual's work context is the lens through which email cues are interpreted. Not only do clickers and non-clickers attend to different cues, they interpret the same cues differently depending on the alignment of the user's work context and the premise of the phishing email. Clickers were concerned about consequences arising from not clicking, such as failing to be responsive. In contrast, non-clickers were concerned about consequences of clicking, such as downloading malware. This finding firmly identifies the alignment of user context and phishing attack premise as a significant explanatory factor in phishing susceptibility. We present additional findings that have actionable operational security implications. The long-term, embedded, and ecologically valid conditions surrounding these phishing exercises provided the crucial elements necessary for these findings to surface and be confirmed.

Keywords: decision-making, embedded phishing awareness training, user-centered approach, survey instrument, long-term assessment, operational data, trial deployment, network security, security defenses


NIST Interagency/Internal Report (NISTIR) - 8216 | 2018

Voices of First Responders - Identifying Public Safety Communication Problems: Findings from User-Centered Interviews, Phase 1, Volume 1

Yee-Yin Choong; Shanee T. Dawkins; Susanne M. Furman; Kristen Greene; Sandra Spickard Prettyman; Mary F. Theofanos



IEEE Computer | 2018

No Phishing beyond This Point

Kristen Greene; Michelle Potts Steves; Mary F. Theofanos

As phishing continues to evolve, what’s your organization doing to stay off the hook?


NIST Interagency/Internal Report (NISTIR) - 8194 | 2017

Exploratory Lens Model of Decision-Making in a Potential Phishing Attack Scenario

Franklin Tamborello; Kristen Greene

Phishing, the transmission of a message spoofing a legitimate sender about a legitimate subject with intent to perform malicious activity, causes a tremendous and rapidly increasing amount of damage to information systems and users annually. This project implements an exploratory computational model of user decision making in a potential phishing attack scenario. The model demonstrates how contextual factors, such as the match between a message's subject matter and current work concerns, and personality factors, such as conscientiousness, contribute to users' decisions to comply with or ignore message requests.
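
As a rough illustration of how such factors might combine, the sketch below folds a context-match score and a conscientiousness score into a compliance probability via a logistic function. The functional form and weights are hypothetical and are not the computational model implemented in NISTIR 8194.

```python
import math

def p_comply(context_match: float, conscientiousness: float,
             w_context: float = 3.0, w_consc: float = -1.5, bias: float = -1.0) -> float:
    """Illustrative probability that a user complies with a phishing request.

    A toy logistic combination of two factors highlighted in the report: how closely
    the message's subject matter matches current work concerns (0..1) and the user's
    conscientiousness (0..1). Weights and form are hypothetical, for illustration only.
    """
    x = bias + w_context * context_match + w_consc * conscientiousness
    return 1.0 / (1.0 + math.exp(-x))

# A message matching a pressing work concern, read by a less conscientious user,
# yields a higher predicted compliance probability.
print(p_comply(context_match=0.9, conscientiousness=0.2))  # ~0.80
print(p_comply(context_match=0.1, conscientiousness=0.9))  # ~0.11
```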

Collaboration


Dive into Kristen Greene's collaboration.

Top Co-Authors

Mary F. Theofanos
National Institute of Standards and Technology

Yee-Yin Choong
National Institute of Standards and Technology

Paul A. Grassi
National Institute of Standards and Technology

Brian C. Stanton
National Institute of Standards and Technology

Michelle Potts Steves
National Institute of Standards and Technology

Susanne M. Furman
National Institute of Standards and Technology