Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bennett Kleinberg is active.

Publication


Featured research published by Bennett Kleinberg.


PLOS ONE | 2015

Memory detection 2.0: the first web-based memory detection test

Bennett Kleinberg; Bruno Verschuere

There is accumulating evidence that reaction times (RTs) can be used to detect recognition of critical (e.g., crime) information. A limitation of this research base is its reliance upon small samples (average n = 24) and indications of publication bias. To advance RT-based memory detection, we report upon the development of the first web-based memory detection test. Participants in this research (Study 1: n = 255; Study 2: n = 262) tried to hide two high-salient (birthday, country of origin) and two low-salient (favourite colour, favourite animal) autobiographical details. RTs allowed the detection of concealed autobiographical information and, as predicted, did so more successfully than error rates and more for high-salient than for low-salient items. While much remains to be learned, memory detection 2.0 seems to offer an interesting new platform to efficiently and validly conduct RT-based memory detection research.
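The abstract does not spell out the scoring procedure, so the sketch below is only an illustration of how RT-based concealed information detection is commonly quantified: responses to concealed (probe) items are compared with matched irrelevant items, and a per-participant standardized RT difference serves as the detection score. The function name and the example numbers are hypothetical.

# Illustrative Python sketch of a common RT-based CIT score (an assumption,
# not necessarily the exact analysis used in the paper).
import numpy as np

def cit_effect_size(probe_rts, irrelevant_rts):
    """Standardized probe-minus-irrelevant RT difference; larger values suggest recognition."""
    probe_rts = np.asarray(probe_rts, dtype=float)
    irrelevant_rts = np.asarray(irrelevant_rts, dtype=float)
    pooled_sd = np.sqrt((probe_rts.var(ddof=1) + irrelevant_rts.var(ddof=1)) / 2)
    return (probe_rts.mean() - irrelevant_rts.mean()) / pooled_sd

# Hypothetical participant who responds roughly 80 ms slower to probe items
probes = [720, 760, 810, 700, 780]
irrelevants = [640, 660, 700, 630, 650]
print(round(cit_effect_size(probes, irrelevants), 2))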


Journal of Forensic Sciences | 2016

ID-check: Online concealed information test reveals true identity

Bruno Verschuere; Bennett Kleinberg

The Internet has already changed people's lives considerably and is likely to drastically change forensic research. We developed a web-based test to reveal concealed autobiographical information. Initial studies identified a number of conditions that affect diagnostic efficiency. By combining these moderators, this study investigated the full potential of the online ID-check. Participants (n = 101) tried to hide their identity and claimed a false identity in a reaction time-based Concealed Information Test. Half of the participants were presented with personal details (e.g., first name, last name, birthday), whereas the others only saw irrelevant details. Results showed that participants' true identity could be detected with high accuracy (AUC = 0.98; overall accuracy: 86–94%). Online memory detection can reliably and validly detect whether someone is hiding their true identity. This suggests that online memory detection might become a valuable tool for forensic applications.


Memory | 2017

Assessing autobiographical memory: the web-based autobiographical Implicit Association Test

Bruno Verschuere; Bennett Kleinberg

By assessing the association strength with TRUE and FALSE, the autobiographical Implicit Association Test (aIAT) [Sartori, G., Agosta, S., Zogmaister, C., Ferrara, S. D., & Castiello, U. (2008). How to accurately detect autobiographical events. Psychological Science, 19, 772–780. doi:10.1111/j.1467-9280.2008.02156.x] aims to determine which of two contrasting statements is true. To efficiently run well-powered aIAT experiments, we propose a web-based aIAT (web-aIAT). Experiment 1 (n = 522) is a web-based replication of the first published aIAT study [Sartori et al., 2008; Experiment 1]. We conclude that the replication was successful, as the web-based aIAT could accurately detect which of two playing cards participants chose (AUC = .88; hit rate = 81%). In Experiment 2 (n = 424), we investigated whether the use of affirmative versus negative sentences may partly explain the variability in aIAT accuracy findings. The aIAT could detect the chosen card when using affirmative (AUC = .90; hit rate = 81%), but not when using negative sentences (AUC = .60; hit rate = 53%). The web-based aIAT seems to be a valuable tool to facilitate aIAT research and may help to further identify moderators of the test's accuracy.
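As a rough illustration of the aIAT's decision logic: the candidate statement that is paired with TRUE in the participant's faster block is classified as the autobiographically true one, and the hit rate is the proportion of correctly classified participants. Published aIAT analyses typically use an IAT-style D-score, so the simplified rule and the example RTs below are assumptions.

# Simplified Python sketch of an aIAT classification rule (illustrative only).
import numpy as np

def aiat_decision(rt_statement_a_with_true, rt_statement_b_with_true):
    """Return which statement ('A' or 'B') is judged true: the faster congruent block wins."""
    return "A" if rt_statement_a_with_true < rt_statement_b_with_true else "B"

def hit_rate(decisions, ground_truth):
    """Proportion of participants classified correctly."""
    return float(np.mean([d == g for d, g in zip(decisions, ground_truth)]))

# Hypothetical mean block RTs (ms) for three participants
decisions = [aiat_decision(820, 950), aiat_decision(1010, 880), aiat_decision(760, 900)]
print(decisions, hit_rate(decisions, ["A", "B", "A"]))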


Proceedings of the Second Workshop on Computational Approaches to Deception Detection | 2016

Using the verifiability of details as a test of deception: A conceptual framework for the automation of the verifiability approach

Bennett Kleinberg; Galit Nahari; Bruno Verschuere

The Verifiability Approach (VA) is a promising new approach to deception detection. It extends existing verbal credibility assessment tools by asking interviewees to provide statements rich in verifiable detail. Details that (i) have been experienced with an identifiable person, (ii) have been witnessed by an identifiable person, or (iii) have been recorded through technology are labelled as verifiable. With only minimal modifications of information-gathering interviews, this approach has yielded remarkable classification accuracies. Currently, the VA relies on extensive manual annotation by human coders. Aiming to extend the VA's applicability, we present work in progress on automated VA scoring. We provide a conceptual outline of two automation approaches: one based on the Linguistic Inquiry and Word Count (LIWC) software and the other on rule-based shallow parsing and named entity recognition. Differences between the two approaches and possible future steps for an automated VA are discussed.
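A toy sketch of the first automation route, dictionary-based word counting, follows. LIWC itself is proprietary software, so the tiny category lists and the scoring function below are illustrative stand-ins rather than the actual approach outlined in the paper.

# Toy Python sketch of a LIWC-style dictionary word count (illustrative categories only).
import re

CATEGORIES = {
    "perceptual": {"saw", "heard", "watched", "noticed"},
    "spatial": {"in", "at", "behind", "above", "near"},
    "temporal": {"yesterday", "tuesday", "morning", "later", "pm"},
}

def category_rates(statement):
    """Per-category word counts normalized by statement length."""
    tokens = re.findall(r"[a-z']+", statement.lower())
    n = max(len(tokens), 1)
    return {cat: sum(t in words for t in tokens) / n for cat, words in CATEGORIES.items()}

print(category_rates("I saw her at the cafe on Tuesday morning and heard the band play."))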


Journal of Forensic Sciences | 2018

Using Named Entities for Computer‐Automated Verbal Deception Detection

Bennett Kleinberg; Maximilian Mozes; Arnoud Arntz; Bruno Verschuere

There is an increasing demand for automated verbal deception detection systems. We propose named entity recognition (NER; i.e., the automatic identification and extraction of information from text) to model three established theoretical principles: (i) truth tellers provide accounts that are richer in detail, (ii) truthful accounts contain more contextual references (specific persons, locations, and times), and (iii) deceivers tend to withhold potentially checkable information. We test whether NER captures these theoretical concepts and can automatically identify truthful versus deceptive hotel reviews. We extracted the proportion of named entities with two NER tools (spaCy and Stanford's NER) and compared the discriminative ability to a lexicon word count approach (LIWC) and a measure of sentence specificity (Speciteller). Named entities discriminated truthful from deceptive hotel reviews above chance level and outperformed the lexicon approach and sentence specificity. This investigation suggests that named entities may be a useful addition to existing automated verbal deception detection approaches.
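A minimal sketch of the named-entity feature described in this abstract, using spaCy (one of the two NER tools mentioned): the proportion of tokens that fall inside named entities per review, averaged within each class. The model choice, preprocessing, and example reviews are assumptions.

# Python sketch: proportion of named-entity tokens per review with spaCy.
import spacy
import numpy as np

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def named_entity_proportion(text):
    """Share of tokens in the review that belong to a named entity."""
    doc = nlp(text)
    entity_tokens = sum(len(ent) for ent in doc.ents)
    return entity_tokens / max(len(doc), 1)

# Hypothetical example reviews
truthful = ["We stayed at the Hilton on Michigan Avenue in July; ask for room 1204."]
deceptive = ["The hotel was amazing, the staff were wonderful, and everything was perfect."]

print(np.mean([named_entity_proportion(t) for t in truthful]),
      np.mean([named_entity_proportion(t) for t in deceptive]))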


Archive | 2018

Detecting Concealed Information on a Large Scale: Possibilities and Problems

Bennett Kleinberg; Yaloe van der Toolen; Arnoud Arntz; Bruno Verschuere

There is an increasing demand for deception detection at scale. In situations in which larger numbers of people need to be tested, traditional deception-detection methods are limited because they often require extensive testing sessions or lack the flexibility to transfer to novel contexts. The aim of this chapter is to discuss the potential for large-scale applications of several deception-detection methods. Specifically, we evaluate the theoretical foundations of the methods, the potential for quick data-collection procedures, and the flexibility of implementing the methods in different contexts. Each method (behavioral observation, reaction times, speech analysis, verbal content, thermal imaging) is outlined, evaluated, and discussed with regard to its suitability for large-scale purposes. There is no substantial evidence to support the validity of behavioral observation or speech analysis. Reaction times and verbal content analysis are promising for future large-scale applications. To reach their full potential, the key challenges for both methods are a fast data-collection process (e.g., remote online testing) and near real-time assessment of the data. We close this chapter with an outlook on possible new applications, including the integration of methods (i.e., multimodal approaches).


Acta Psychologica | 2018

Using more different and more familiar targets improves the detection of concealed information

Kristina Suchotzki; Jan De Houwer; Bennett Kleinberg; Bruno Verschuere

When embedded among a number of plausible irrelevant options, the presentation of critical (e.g., crime-related or autobiographical) information is associated with a marked increase in response time (RT). This RT effect crucially depends on the inclusion of a target/non-target discrimination task, with targets being a dedicated set of items that require a unique response (press YES; for all other items press NO). Targets may be essential because they share a feature, familiarity, with the critical items. Whereas irrelevant items have not been encountered before, critical items are known from the event or the facts of the investigation. Target items are usually learned before the test and thereby made familiar to the participants. Hence, familiarity-based responding needs to be inhibited on the critical items and may therefore explain the RT increase on those items. This leads to the hypothesis that the more participants rely on familiarity, the more pronounced the RT increase on critical items may be. We explored two ways to increase familiarity-based responding: (1) increasing the number of different target items, and (2) using familiar targets. In two web-based studies (n = 357 and n = 499), both the number of different targets and the use of familiar targets facilitated concealed information detection. The effect of the number of different targets was small yet consistent across both studies; the effect of target familiarity was large in both studies. Our results support the role of familiarity-based responding in the Concealed Information Test and point to ways to improve the validity of the Concealed Information Test.


Journal of Social Structure | 2017

Web-based text anonymization with Node.js: Introducing NETANOS (Named entity-based Text Anonymization for Open Science)

Bennett Kleinberg; Maximilian Mozes

NETANOS (Named Entity-based Text ANonymization for Open Science) is a natural language processing software that anonymizes texts by identifying and replacing named entities. The key feature of NETANOS is that the anonymization preserves critical context that allows for secondary linguistic analyses on anonymized texts. Consider the example string "Max and Ben spent more than 1000 hours on writing the software. They started in August 2016 in Amsterdam." While coarse anonymization such as simple "XXX" replacement would suffice to mask the true content of the string, essential text properties are lost that are needed for secondary analyses. For example, content-based deception detection approaches rely on the number of specific times and dates to differentiate between deceptive and truthful texts (Warmelink et al. 2013).

The architecture of NETANOS relies on two software libraries capable of identifying named entities: (1) the Stanford Named Entity Recognizer (NER) (Finkel, Grenager, and Manning 2005), integrated with the ner Node.js package (Srivastava 2016), and (2) the NLP-compromise JavaScript frontend library (Kelly 2016). Both libraries are used in a layered architecture to identify persons (e.g., "Max", "Ben"), locations (e.g., "Amsterdam", "Munich"), organizations (e.g., "Google"), dates (e.g., "August 2016"), and values (e.g., "42"). Specifically, the text anonymization is achieved with the following stepwise procedure: (1) the input string is analyzed by Stanford's NER, identifying organizations, locations, persons, and dates; (2) all identified entities are replaced with their context-preserving anonymized versions; (3) NLP-compromise's named entity recognition tool is applied to identify potentially remaining, unrecognized entities.

Besides the key feature of context-preserving text anonymization, NETANOS also provides three alternative anonymization types:

• Context-preserving anonymization (key feature): Identified named entity types are replaced with a composite string consisting of the entity type and the corresponding index of occurrence. "[PERSON_1] and [PERSON_2] spent more than [DATE/TIME_1] on writing the software. They started in [DATE/TIME_2] in [LOCATION_1]."

• Named entity-based replacement: Identified entities are replaced with a different, randomly chosen named entity of the same type. "Barry and Rick spent more than 997 hours on writing the software. They started in January 14 2016 in Odessa."

• Non-context-preserving anonymization: This replacement type is inspired by the anonymization procedure suggested by the UK Data Service (UK Data Service, n.d.). It replaces all strings having a capital first letter and all numeric values with XXX. "XXX and XXX spent more than XXX hours on writing the software. XXX started in XXX XXX in XXX."

• Combined, non-context-preserving anonymization: The context-preserving replacement is used to identify candidates for replacement, which are then replaced with the procedure of the non-context-preserving replacement. "XXX and XXX spent more than XXX XXX on writing the software. XXX started in XXX XXX in XXX."

Note that all replacements are applied globally across the input string.
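NETANOS itself is a Node.js tool built on Stanford NER and NLP-compromise; the short Python/spaCy sketch below only mirrors the context-preserving, indexed-placeholder replacement idea for illustration and is not the tool's actual code.

# Python/spaCy sketch of context-preserving anonymization (illustrative, not NETANOS code).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def anonymize(text):
    """Replace each named entity with an indexed placeholder such as [PERSON_1]."""
    doc = nlp(text)
    counters, mapping = {}, {}
    for ent in doc.ents:
        if ent.text not in mapping:
            counters[ent.label_] = counters.get(ent.label_, 0) + 1
            mapping[ent.text] = "[{}_{}]".format(ent.label_, counters[ent.label_])
    out = text
    for original in sorted(mapping, key=len, reverse=True):  # longest first to avoid partial overlaps
        out = out.replace(original, mapping[original])
    return out

print(anonymize("Max and Ben spent more than 1000 hours on the software. They started in August 2016 in Amsterdam."))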


Journal of Applied Research in Memory and Cognition | 2015

RT-based memory detection: Item saliency effects in the single-probe and the multiple-probe protocol

Bruno Verschuere; Bennett Kleinberg; Kalliopi Theocharidou


Journal of Applied Research in Memory and Cognition | 2016

The role of motivation to avoid detection in reaction time-based concealed information detection

Bennett Kleinberg; Bruno Verschuere

Collaboration


Dive into Bennett Kleinberg's collaboration.

Top Co-Authors

Arnoud Arntz

University of Amsterdam


Lara Warmelink

University of Portsmouth
