
Publication


Featured research published by Justin Scott Giboney.


Decision Support Systems | 2015

User Acceptance of Knowledge-Based System Recommendations: Explanations, Arguments, and Fit

Justin Scott Giboney; Susan A. Brown; Paul Benjamin Lowry; Jay F. Nunamaker

Knowledge-based systems (KBS) can potentially enhance individual decision-making. Yet recommendations from KBS continue to be met with resistance. This is particularly troubling in the context of deception detection (e.g., border control), in which humans are accurate only about half the time. In this study, we examine how the fit between KBS explanations and users’ internal explanations influences acceptance of KBS recommendations. We leverage cognitive fit theory (CFT) to explain why fit is important for user acceptance of KBS evaluations. We also compare the predictions of CFT to those of the person-environment fit (PEF) paradigm. The two theories make conflicting predictions about the outcomes of fit when it comes to KBS explanations: CFT predicts that explanations with higher cognitive fit will have more influence and be evaluated faster, whereas PEF predicts that individuals will take more time evaluating explanations with greater fit. In our deception detection scenario, we find support for CFT in the sense that people are influenced more by cognitively fitting explanations; however, PEF is supported in the sense that people take more time to evaluate such explanations.


Journal of Language and Social Psychology | 2016

Which Spoken Language Markers Identify Deception in High-Stakes Settings? Evidence From Earnings Conference Calls

Judee K. Burgoon; William J. Mayew; Justin Scott Giboney; Aaron C. Elkins; Kevin Moffitt; Bradley Dorn; Michael D. Byrd; Lee Spitzley

Quarterly conference calls where corporate executives discuss earnings that are later found to be misreported offer an excellent test bed for determining whether automated linguistic and vocalic analysis tools can identify potentially fraudulent utterances in prepared versus unscripted remarks. Earnings conference calls from a company that restated its financial reports and was accused of making misleading statements were annotated as restatement-relevant (or not) and as prepared (presentation) or unprepared (Q&A) responses. We submitted more than 1,000 utterances to automated analysis to identify distinct linguistic and vocalic features that characterize various types of utterances. Restatement-related utterances differed significantly on many vocal and linguistic dimensions. These results support the value of language and vocal features in identifying potentially fraudulent utterances and suggest an important interplay between unscripted responses and rehearsed statements.
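To make concrete what automated linguistic analysis of annotated utterances can look like, here is a minimal Python sketch that counts two illustrative marker classes (hedging terms and first-person pronouns) per utterance and compares their average rates across restatement-relevant and non-relevant groups. The marker lists, example utterances, and feature definitions are assumptions made for illustration only; the study's actual feature set was far richer and also included vocalic measurements.

from collections import Counter
from statistics import mean

# Illustrative marker lexicons (assumed, not the study's feature set).
HEDGES = {"maybe", "perhaps", "possibly", "somewhat", "approximately"}
FIRST_PERSON = {"i", "me", "my", "we", "us", "our"}

def linguistic_features(utterance):
    # Tokenize naively and compute rates of the two marker classes.
    words = utterance.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "word_count": len(words),
        "hedge_rate": sum(counts[w] for w in HEDGES) / total,
        "first_person_rate": sum(counts[w] for w in FIRST_PERSON) / total,
    }

# Hypothetical utterances annotated as restatement-relevant (True) or not (False).
utterances = [
    ("We possibly saw somewhat weaker demand than we expected.", True),
    ("Revenue grew twelve percent on strong product sales.", False),
]

groups = {True: [], False: []}
for text, restatement_relevant in utterances:
    groups[restatement_relevant].append(linguistic_features(text))

for label, feats in groups.items():
    print(label, "mean hedge rate:", round(mean(f["hedge_rate"] for f in feats), 3))

In the actual study, features like these would be computed for each annotated utterance and then compared statistically across the prepared and unscripted groups.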


Intelligence and Security Informatics | 2012

Establishing a foundation for automated human credibility screening

Jay F. Nunamaker; Judee K. Burgoon; Nathan W. Twyman; Jeffrey Gainer Proudfoot; Ryan M. Schuetzler; Justin Scott Giboney

Automated human credibility screening is an emerging research area that has potential for high impact in fields as diverse as homeland security and accounting fraud detection. Systems that conduct interviews and make credibility judgments can provide objectivity, improved accuracy, and greater reliability to credibility assessment practices, but such systems still need to be built. This study establishes a foundation for developing automated systems for human credibility screening.


Management Information Systems Quarterly | 2017

Creating High-Value Real-World Impact through Systematic Programs of Research

Jay F. Nunamaker; Nathan W. Twyman; Justin Scott Giboney; Robert O. Briggs

Issues and Opinions. Management Information Systems Quarterly, 2017, No. 2, pg. 335.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2016

Application of Expectancy Violations Theory to communication with and judgments about embodied agents during a decision-making task

Judee K. Burgoon; Joseph A. Bonito; Paul Benjamin Lowry; Sean L. Humpherys; Gregory D. Moody; James Eric Gaskin; Justin Scott Giboney

Because users treat embodied agents (EAs) as social actors, users hold expectations about human-to-EA communication (HtEAC) similar to those in human-to-human communication. This study extends Expectancy Violations Theory (EVT) to examine how different forms of interfaces that confirm or violate user expectations affect the communication process, social judgments, ability to influence, and accuracy of recall associated with HtEAC. Positive violations of expectancy are acts or characteristics of the EA that are unexpected but evaluated favorably by the human partner. Results suggest that when the EA deviates from expectations, effects on the HtEAC process and related outcomes can be more pronounced. EAs evaluated as positive violations had more favorable effects on task attractiveness than other human or EA interaction partners. As predicted by EVT, EA interactions that were positively evaluated elicited more perceived connectedness, feelings of being understood/receptivity, and dependability than those EA interactions evaluated negatively. However, negative violations did not produce worse outcomes than negative confirmations. EVT offers a useful lens for examining the communication effects of HtEAC and points to benefits of creating EAs that evoke positive violations of expectancy.


Computers & Security | 2016

The Security Expertise Assessment Measure (SEAM)

Justin Scott Giboney; Jeffrey Gainer Proudfoot; Sanjay Goel; Joseph S. Valacich

Hackers pose a continuous and unrelenting threat. Industry and academic researchers alike can benefit from a greater understanding of how hackers engage in criminal behavior. A limiting factor of hacker research is the inability to verify that self-proclaimed hackers participating in research actually possess their purported knowledge and skills. This paper develops and validates a conceptual-expertise-based tool, which we call SEAM, that can be used to discriminate between novice and expert hackers. This tool has the potential to provide information systems researchers with two key capabilities: (1) maximizing the generalizability of hacking research by verifying the legitimacy of hackers involved in data collections, and (2) segmenting samples of hackers into different groups based on skill, thereby allowing more granular analyses and insights. This paper reports on samples from four different groups: security experts, students, security workers, and Amazon Mechanical Turk hackers. SEAM was able to differentiate between levels of security expertise in different populations (e.g., experts and student novices). We also provide norm development by measuring security workers and Amazon Mechanical Turk hackers.
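To illustrate the second capability, segmenting a sample by skill, the hypothetical Python sketch below scores each participant's responses against an answer key and splits the sample into expert and novice groups at an assumed cutoff. The items, answer key, scoring rule, and threshold are invented for illustration and do not reflect SEAM's validated instrument, which is described in the paper itself.

from dataclasses import dataclass

# Assumed answer key and cutoff, for illustration only.
ANSWER_KEY = ["b", "d", "a", "c"]
EXPERT_CUTOFF = 0.75  # assumed proportion-correct threshold

@dataclass
class Participant:
    pid: str
    answers: list  # responses to conceptual-expertise items

def expertise_score(p):
    # Proportion of items answered in line with the key.
    correct = sum(a == k for a, k in zip(p.answers, ANSWER_KEY))
    return correct / len(ANSWER_KEY)

def segment(sample):
    # Split the sample into expert and novice groups at the cutoff.
    groups = {"expert": [], "novice": []}
    for p in sample:
        label = "expert" if expertise_score(p) >= EXPERT_CUTOFF else "novice"
        groups[label].append(p.pid)
    return groups

sample = [
    Participant("p01", ["b", "d", "a", "c"]),  # all items correct
    Participant("p02", ["a", "d", "a", "b"]),  # half correct
]
print(segment(sample))  # {'expert': ['p01'], 'novice': ['p02']}

A conceptual-expertise instrument like SEAM would replace the toy answer key with validated items, but the downstream segmentation and group-level analysis follow this general pattern.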


Hawaii International Conference on System Sciences | 2010

The Value of Distrust in Computer-Based Decision-Making Groups

Paul Benjamin Lowry; Justin Scott Giboney; Ryan M. Schuetzler; Jacob Richardson; Tom Gregory; John Romney; Bonnie Brinton Anderson

During crises, relief agency commanders have to make decisions in a complex and uncertain environment, requiring them to continuously adapt to unforeseen environmental changes. In the process of adaptation, the commanders depend on information management systems for information. Yet there are still numerous reports of situations in which commanders had to make decisions based on incomplete, outdated, or incorrect information, indicating poor information quality. In many of these situations, poor information quality can be attributed to an information management process that is incapable of adapting to external (environmental) changes and internal (team) information needs. Using dynamic capability theory and the findings of a case study, this paper presents four principles for information management adaptability: (1) maintain and update team memory, (2) dedicate resources for environmental scanning, (3) maximize the number of alternative information sources, and (4) integrate forecasting and backcasting methods into the information management process.


Journal of Management Information Systems | 2016

Special Issue: Information Systems for Deception Detection

Jay F. Nunamaker; Judee K. Burgoon; Justin Scott Giboney

JAY F. NUNAMAKER, JR. (corresponding author) is Regents and Soldwedel Professor of MIS, Computer Science, and Communication at the University of Arizona. He is director of the Center for the Management of Information and the Center for Border Security and Immigration. He received his Ph.D. in operations research and systems engineering from Case Institute of Technology and obtained his professional engineer’s license in 1965. He specializes in the fields of systems analysis and design, collaboration technology, and deception detection. He has been inducted into the Design Science Hall of Fame and received the LEO Award for Lifetime Achievement from the Association for Information Systems. He has published over 368 journal articles, book chapters, books, and refereed proceedings papers, and has cofounded five spin-off companies based on his research.


Journal of Management Information Systems | 2015

Guest Editors’ Introduction: On the Contributions of Applied Science/Engineering Research to Information Systems

Robert O. Briggs; Jay F. Nunamaker; Justin Scott Giboney

JAY F. NUNAMAKER, JR. is Regents and Soldwedel Professor of MIS, Computer Science, and Communication, and director of the Center for the Management of Information and the National Center for Border Security at the University of Arizona. He received his Ph.D. in operations research and systems engineering from Case Institute of Technology, an M.S. and B.S. in engineering from the University of Pittsburgh, and a B.S. from Carnegie Mellon University. He received his professional engineer’s license in 1965. He was inducted into the Design Science Hall of Fame in May 2008. He received the LEO Award for Lifetime Achievement from the Association for Information Systems (AIS) and was elected a Fellow of the AIS. He was featured in the July 1997 issue of Forbes magazine on technology as one of the eight key innovators in information technology. He is widely published, with an h-index of 60. He specializes in systems analysis and design, collaboration technology, and deception detection. The commercial product, GroupSystems’ ThinkTank, based on his research, is often referred to as the gold standard for structured collaboration systems. He founded the MIS Department at the University of Arizona in 1974 and served as department head for eighteen years.


Journal of Management Information Systems | 2016

Special Issue: Designing Tools to Answer Great Information Systems Research Questions

Justin Scott Giboney; Robert O. Briggs; Jay F. Nunamaker

JUSTIN SCOTT GIBONEY is an assistant professor in Information Security and Digital Forensics at the University at Albany. He received his Ph.D. in management information systems from the University of Arizona. His research focuses on behavioral information security, deception detection, expert systems, and meta-analytic processes. He emphasizes design science research and system building to solve real-world problems with technology. He has published in International Journal of Human–Computer Studies, Communications of the AIS, and Decision Support Systems, as well as in cybersecurity, management, and psychology journals.

Collaboration


Dive into Justin Scott Giboney's collaboration.

Top Co-Authors

Ryan M. Schuetzler

University of Nebraska Omaha

Robert O. Briggs

University of Nebraska–Lincoln
