Publications


Featured research published by Douglas P. Twitchell.


Journal of Management Information Systems | 2004

A Comparison of Classification Methods for Predicting Deception in Computer-Mediated Communication

Lina Zhou; Judee K. Burgoon; Douglas P. Twitchell; Tiantian Qin; Jay F. Nunamaker

The increased chance of deception in computer-mediated communication and the potential risk of taking action based on deceptive information call for automatic detection of deception. To achieve the ultimate goal of automatic prediction of deception, we selected four common classification methods and empirically compared their performance in predicting deception. The deception and truth data were collected during two experimental studies. The results suggest that all four methods were promising for predicting deception with cues to deception. Among them, neural networks exhibited consistent performance and were robust across test settings. The comparisons also highlighted the importance of selecting relevant input variables and removing noise in order to enhance the performance of classification methods. The selected cues offer both methodological and theoretical contributions to the body of deception and information systems research.
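
A minimal sketch of this kind of method comparison, assuming scikit-learn is available: several off-the-shelf classifiers are scored by cross-validation on a table of cue features. The cue vectors, labels, and specific model choices below are illustrative placeholders, not the study's data or methods.

```python
# Sketch: comparing off-the-shelf classifiers on deception-cue features.
# The feature vectors and labels are random placeholders, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # rows = messages, columns = cue values
y = rng.integers(0, 2, size=200)     # 1 = deceptive, 0 = truthful (placeholder)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "neural network": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```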


Hawaii International Conference on System Sciences | 2003

An exploratory study into deception detection in text-based computer-mediated communication

Lina Zhou; Douglas P. Twitchell; Tiantian Qin; Judee K. Burgoon; Jay F. Nunamaker

Deception is an everyday occurrence across all communication media. The expansion of the Internet has significantly increased the amount of textual communication received and stored by individuals and organizations. Inundated with massive amounts of textual information transmitted through computer-mediated communication (CMC), people remain largely unsuccessful and inefficient in detecting deceptive messages. Creating an automated tool that could help people flag possibly deceptive messages in CMC is desirable, but first it is necessary to understand the cues deceivers use in text. This study focuses on identifying the deceptive cues used in a textual CMC environment. Thirty dyads (n = 14 truthful, n = 16 deceptive) completed the Desert Survival Problem. The findings demonstrate significant differences between the content of truthful and deceptive messages, and several cues were significantly more prevalent in messages written by deceivers.
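
As a rough illustration of comparing cue values between truthful and deceptive messages (not the study's cue set or transcripts), the sketch below computes a few simple surface cues and applies a two-sample t-test to one of them, assuming SciPy is available.

```python
# Sketch: simple textual cues and a group-difference test.
# Cue definitions and example messages are illustrative assumptions only.
from scipy import stats

def cue_vector(message: str) -> dict:
    """Return a few surface cues for one message."""
    words = message.lower().split()
    modifiers = {"very", "really", "quite", "extremely"}
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "modifier_rate": sum(w in modifiers for w in words) / max(len(words), 1),
    }

truthful = ["I think the mirror is the most useful item.",
            "We should keep the water and the flashlight."]
deceptive = ["Honestly the sand is really very extremely important to keep.",
             "I am quite sure we really do not need the water at all."]

t_counts = [cue_vector(m)["word_count"] for m in truthful]
d_counts = [cue_vector(m)["word_count"] for m in deceptive]

# Two-sample t-test on a single cue; the study examines many cues.
t_stat, p_value = stats.ttest_ind(t_counts, d_counts, equal_var=False)
print(f"word_count: t = {t_stat:.2f}, p = {p_value:.3f}")
```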


Intelligence and Security Informatics | 2003

A longitudinal analysis of language behavior of deception in e-mail

Lina Zhou; Judee K. Burgoon; Douglas P. Twitchell

The detection of deception is a promising but challenging task. Previous exploratory research on deception in computer-mediated communication found that language cues were effective in differentiating deceivers from truthtellers. However, whether and how these language cues change over time remains an open issue. In this paper, we investigate the effect of time on cues to deception in an empirical study. The preliminary results show that some cues to deception change over time while others do not, and an explanation for the lack of change in the latter cases is provided. In addition, we show that the number and type of cues to deception vary over time, and we suggest when cues to deception are best investigated in an ongoing e-mail exchange.
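
A small sketch of how one cue could be tracked across time periods, assuming pandas is available; the messages, day indices, and the choice of word count as the cue are assumptions for illustration, not the study's data.

```python
# Sketch: tracking a cue over time for deceivers vs. truthtellers.
# Messages, day labels, and the word-count cue are illustrative only.
import pandas as pd

emails = pd.DataFrame({
    "day":      [1, 1, 2, 2, 3, 3],
    "deceiver": [True, False, True, False, True, False],
    "text":     ["short note",
                 "a slightly longer truthful reply",
                 "a much more elaborate and wordy deceptive message here",
                 "ok",
                 "even more words piled on by the deceiver this time",
                 "brief honest answer"],
})
emails["word_count"] = emails["text"].str.split().str.len()

# Mean cue value per day and role; systematic change across days would
# suggest the cue is time-dependent.
trend = emails.groupby(["day", "deceiver"])["word_count"].mean().unstack()
print(trend)
```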


Hawaii International Conference on System Sciences | 2005

An Approach for Intent Identification by Building on Deception Detection

Judee K. Burgoon; Mark Adkins; John Kruse; Matthew L. Jensen; Thomas O. Meservy; Douglas P. Twitchell; Amit V. Deokar; Jay F. Nunamaker; Shan Lu; Gabriel Tsechpenakis; Dimitris N. Metaxas; Robert Younger

Past research in deception detection at the University of Arizona has guided the investigation of intent detection. A theoretical foundation and model for the analysis of intent detection are proposed. Available test beds for intent analysis are discussed, and two proof-of-concept studies exploring nonverbal communication within the context of deception detection and intent analysis are shared.


Intelligence and Security Informatics | 2004

Using Speech Act Profiling for Deception Detection

Douglas P. Twitchell; Jay F. Nunamaker; Judee K. Burgoon

The rising use of synchronous text-based computer-mediated communication (CMC), such as chat rooms and instant messaging, in government agencies and the business world presents a potential risk to these organizations. There are currently no methods for visualizing or analyzing these persistent conversations to detect deception. Speech act profiling is a method for analyzing and visualizing online conversations, and this paper shows its use for distinguishing online conversations that express uncertainty, an indicator of deception.


IEEE Transactions on Intelligent Transportation Systems | 2009

Detecting Concealment of Intent in Transportation Screening: A Proof of Concept

Judee K. Burgoon; Douglas P. Twitchell; Matthew L. Jensen; Thomas O. Meservy; Mark Adkins; John Kruse; Amit V. Deokar; Gabriel Tsechpenakis; Shan Lu; Dimitris N. Metaxas; Jay F. Nunamaker Jr.; Robert Younger

Transportation and border security systems have a common goal: to allow law-abiding people to pass through security and to detain those who intend to do harm. Understanding how intent is concealed and how it might be detected should help in attaining this goal. In this paper, we introduce a multidisciplinary theoretical model of intent concealment along with three verbal and nonverbal automated methods for detecting intent: message feature mining, speech act profiling, and kinesic analysis. This paper also reviews a program of empirical research supporting this model, including several previously published studies and the results of a proof-of-concept study. These studies support the model by showing that aspects of intent can be detected at a rate that is higher than chance. Finally, this paper discusses the implications of these findings in an airport-screening scenario.
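
The abstract does not spell out how the three channels are combined, so the sketch below shows only one possible fusion rule: each detector emits a normalized score and a weighted average produces a single screening score. The scores, weights, and threshold are assumptions for illustration, not the paper's model.

```python
# Sketch: fusing scores from three hypothetical detectors into one
# screening score. Weights and threshold are assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class DetectorScores:
    message_features: float   # 0..1 score from message feature mining
    speech_acts: float        # 0..1 score from speech act profiling
    kinesics: float           # 0..1 score from kinesic (body movement) analysis

def fused_risk(scores: DetectorScores, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted average of the three channel scores."""
    parts = (scores.message_features, scores.speech_acts, scores.kinesics)
    return sum(w * s for w, s in zip(weights, parts))

traveler = DetectorScores(message_features=0.7, speech_acts=0.5, kinesics=0.8)
risk = fused_risk(traveler)
print(f"risk = {risk:.2f}", "-> secondary screening" if risk > 0.6 else "-> pass")
```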


Hawaii International Conference on System Sciences | 2004

Speech act profiling: a probabilistic method for analyzing persistent conversations and their participants

Douglas P. Twitchell; Jay F. Nunamaker

The increase in persistent conversations in the form of chat and instant messaging (IM) has presented new opportunities for researchers. This paper describes a method for evaluating and visualizing persistent conversations by creating a speech act profile for conversation participants using speech act theory and concepts from fuzzy logic. This method can be used either to score a participant based on possible intentions or to create a visual map of those intentions. Transcripts from the Switchboard corpus, which have been marked up with speech act labels from the 42-tag SWBD-DAMSL tag set, are used to train language models and a modified hidden Markov model (HMM) that together yield probabilities for each speech act type for a given sentence. Rather than choosing the speech act with the maximum probability and assigning it to the sentence, the probabilities are aggregated for each conversation participant, creating a set of speech act profiles that can be visualized as radar graphs. Several example profiles are shown along with possible interpretations. The profiles provide an overall picture of a conversation and may be useful in various analyses of persistent conversations, including information retrieval, deception detection, and online technical support monitoring.
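
A minimal sketch of the aggregation step described above: per-utterance probability distributions over speech act tags, which the paper obtains from language models and a modified HMM but which are hard-coded here, are summed and normalized per participant to form a profile. The five-tag set below is a stand-in for the 42-tag SWBD-DAMSL set.

```python
# Sketch: building speech act profiles by aggregating per-utterance
# tag distributions. Tags and probabilities are illustrative stand-ins.
from collections import defaultdict

TAGS = ["statement", "opinion", "yes-no-question", "agreement", "hedge"]

# (speaker, probability distribution over TAGS) for each utterance
utterances = [
    ("A", [0.60, 0.20, 0.10, 0.05, 0.05]),
    ("A", [0.10, 0.10, 0.10, 0.10, 0.60]),
    ("B", [0.70, 0.10, 0.10, 0.05, 0.05]),
    ("B", [0.20, 0.50, 0.10, 0.10, 0.10]),
]

profiles = defaultdict(lambda: [0.0] * len(TAGS))
for speaker, dist in utterances:
    # Aggregate whole distributions rather than assigning the argmax tag.
    for i, p in enumerate(dist):
        profiles[speaker][i] += p

for speaker, totals in profiles.items():
    norm = sum(totals)
    profile = {tag: round(t / norm, 2) for tag, t in zip(TAGS, totals)}
    print(speaker, profile)   # each profile could be drawn as a radar graph
```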


IEEE Transactions on Professional Communication | 2012

An Examination of Deception in Virtual Teams: Effects of Deception on Task Performance, Mutuality, and Trust

Christie M. Fuller; Kent Marett; Douglas P. Twitchell

Research Problem: This study investigates the impact of deception on task performance in virtual teams. While the advantages of virtual teams in organizations have been well studied, as the use of these teams expands, organizations must acknowledge the potential for negative consequences of team member actions.

Research Questions: (1) How does deceptive communication influence the outcomes of virtual group collaboration? (2) How does perceived deception affect individual perceptions of the virtual team itself, such as perceived trustworthiness and mutuality?

Literature Review: We developed a model of the impact of deception on virtual team outcomes based on (1) the conclusion from the virtual teams literature that trust and mutuality are vital to team development, (2) the proposition of Interpersonal Deception Theory that deception will be perceived by team members, and (3) the conclusion from the literature on interpersonal deception and trust that deception will affect the outcomes of an interaction, including trust, mutuality, and ultimately team performance. The model suggests that deceptive communication negatively impacts task performance. Deceptive communication is also expected to affect perceived deception both within and between groups, and the model further proposes that perceived deception will negatively impact both perceived trustworthiness and mutuality.

Methodology: In an experiment, virtual teams of three members participated in a group decision-making task in which team members had to cooperate to search a grid for enemy camps and then collaborate on a strike plan, with half of the teams containing a deceptive team member. Two hundred seventeen subjects were recruited from courses at three universities, and five experimental sessions were conducted across two semesters in computer labs at those universities. Following the virtual team experiment, subjects completed surveys related to the key constructs. Analysis of variance and linear regression were used to test the hypotheses.

Results and Discussion: Deception has a negative impact on task performance by virtual teams. Participants perceived deception when it was present, and perceived deception led to decreased mutuality and trust among team members. These findings suggest that organizations that use virtual teams must be aware of and prepared to deal with negative behaviors such as deception. The generalizability of these findings is potentially limited by the use of student subjects in a laboratory setting. Future research may extend these findings by incorporating additional variables that have been found to be important to virtual team outcomes or by studying the current model in a longitudinal design.
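
A brief sketch of the kind of analysis named in the methodology, assuming SciPy is available: a one-way ANOVA comparing task performance between conditions and a simple regression of trust on perceived deception. All numbers are fabricated illustration values, not the study's data.

```python
# Sketch: ANOVA and linear regression of the sort used to test hypotheses.
# All values below are fabricated for illustration, not study data.
import numpy as np
from scipy import stats

# Task performance for teams without and with a deceptive member
performance_control   = np.array([8.1, 7.5, 9.0, 8.3, 7.9])
performance_deception = np.array([6.2, 5.9, 7.1, 6.5, 6.0])

f_stat, p_value = stats.f_oneway(performance_control, performance_deception)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Does perceived deception predict trust?
perceived_deception = np.array([1.2, 2.5, 3.1, 4.0, 4.8, 2.0, 3.5])
trust               = np.array([6.5, 5.8, 5.0, 4.1, 3.6, 6.0, 4.7])
slope, intercept, r, p, se = stats.linregress(perceived_deception, trust)
print(f"regression: slope = {slope:.2f}, r^2 = {r**2:.2f}, p = {p:.3f}")
```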


Hawaii International Conference on System Sciences | 2005

StrikeCOM: A Multi-Player Online Strategy Game for Researching and Teaching Group Dynamics

Douglas P. Twitchell; Karl Wiers; Mark Adkins; Judee K. Burgoon; Jay F. Nunamaker

StrikeCOM is a multi-player online strategy game designed to generate discourse that aids in examining the development of group processes, shared awareness, and communication in distributed and face-to-face groups. The game mimics C3ISR (Command, Control, Communication, Intelligence, Surveillance, Reconnaissance) scenarios and information gathering in group activities, and it is most commonly used to create group communication and interaction across multiple communication modes. Built on a Java-based collaborative server platform, the game is available for use in almost any computing environment. StrikeCOM has been used as a research tool to study leadership and deception in group decision making, and the U.S. Department of Defense uses it to teach Network Centric Warfare to battle commanders. Use of StrikeCOM over the last two years has yielded a number of lessons learned, including using simple, familiar game interfaces, providing full and immediate feedback, and creating a flexible technical design to meet shifting research and teaching needs.


Enabling Technologies for Simulation Science | 2004

Advances in automated deception detection in text-based computer-mediated communication

Mark Adkins; Douglas P. Twitchell; Judee K. Burgoon; Jay F. Nunamaker

The Internet has provided criminals, terrorists, spies, and other threats to national security with a means of communication. At the same time, it also provides for the possibility of detecting and tracking their deceptive communication. Recent advances in natural language processing, machine learning, and deception research have created an environment where automated and semi-automated deception detection of text-based computer-mediated communication (CMC, e.g., email, chat, instant messaging) is a reachable goal. This paper reviews two methods for discriminating between deceptive and non-deceptive messages in CMC. First, Document Feature Mining uses document features or cues in CMC messages combined with machine learning techniques to classify messages according to their deceptive potential. The method, which is most useful in asynchronous applications, also allows for the visualization of potential deception cues in CMC messages. Second, Speech Act Profiling, a method for quantifying and visualizing synchronous CMC, has shown promise in aiding deception detection. The methods may be combined and are intended to be part of a suite of tools for automating deception detection.
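
As a sketch of the cue-visualization idea mentioned for Document Feature Mining, the snippet below flags tokens from a small cue lexicon in a message. The lexicon, cue categories, and example message are hypothetical stand-ins for the features a trained model would actually weight.

```python
# Sketch: flagging candidate deception cues in a message for review.
# The lexicon and message are hypothetical, not the paper's cue set.
CUE_LEXICON = {
    "honestly": "emphasis marker",
    "really": "modifier",
    "never": "absolute term",
    "everyone": "group reference",
}

def highlight_cues(message: str):
    """Return (token, cue_type) pairs for tokens found in the lexicon."""
    flagged = []
    for raw in message.lower().split():
        token = raw.strip(".,;:!?")
        if token in CUE_LEXICON:
            flagged.append((token, CUE_LEXICON[token]))
    return flagged

msg = "Honestly, everyone really agrees this shipment was never opened."
for token, cue in highlight_cues(msg):
    print(f"{token:10s} -> {cue}")
```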

Collaboration


Dive into Douglas P. Twitchell's collaborations.

Top Co-Authors

Kent Marett

Mississippi State University
