Publication


Featured research published by Jaume Masip.


Psychology, Crime & Law | 2005

The detection of deception with the reality monitoring approach: a review of the empirical evidence

Jaume Masip; Siegfried Ludwig Sporer; Eugenio Garrido; Carmen Herrero

One of the verbal approaches to the detection of deceit is based on research on human memory that tries to identify the characteristics that differentiate between internal and external memories (reality monitoring). This approach has attempted to extrapolate the contributions of reality monitoring (RM) research to the deception area. In this paper, we have attempted to review all available studies conducted in several countries in order to yield some general conclusions concerning the discriminative power of this approach. Regarding individual criteria, the empirical results are not very encouraging: few criteria discriminate significantly across studies, and there are several variables that moderate their effect. Some of the contradictory findings may have emerged because of differences in the operationalizations and procedures used across individual studies. However, more promising results have been reported in recent studies, and the approach as a whole appears to discriminate above chance level, reaching accuracy rates that are similar to those of criteria-based content analysis (CBCA). Some suggestions for future research are made.


International Journal of Psychology | 2004

Police officers' credibility judgments: Accuracy and estimated ability

Eugenio Garrido; Jaume Masip; Carmen Herrero

A study was conducted to examine Spanish police officers’ and nonofficers’ lie- and truth-detection accuracy, as well as their estimated detection ability. The participants were 121 police officers and 146 undergraduates who watched videotaped truthful and deceptive statements. They had to indicate: (1) whether each statement was truthful or deceptive, and (2) how good police officers were, in comparison with the general population, at detecting the truthfulness or deceptiveness of a statement. Results indicate that police officers’ accuracy was not higher than that of nonofficers; rather, while the officers reached an accuracy rate close to chance probability, the undergraduates surpassed that probability. Officers had a very strong tendency to judge the statements as deceptive; this made them less accurate than the students in judging the truthful accounts, while both groups reached a similar accuracy when judging the deceptive ones. Both occupational samples considered that the police are more capable ...


Personality and Social Psychology Review | 2015

Are Computers Effective Lie Detectors? A Meta-Analysis of Linguistic Cues to Deception

Valerie Hauch; Iris Blandón-Gitlin; Jaume Masip; Siegfried Ludwig Sporer

This meta-analysis investigates linguistic cues to deception and whether these cues can be detected with computer programs. We integrated operational definitions for 79 cues from 44 studies where software had been used to identify linguistic deception cues. These cues were allocated to six research questions. As expected, the meta-analyses demonstrated that, relative to truth-tellers, liars experienced greater cognitive load, expressed more negative emotions, distanced themselves more from events, expressed fewer sensory–perceptual words, and referred less often to cognitive processes. However, liars were not more uncertain than truth-tellers. These effects were moderated by event type, involvement, emotional valence, intensity of interaction, motivation, and other moderators. Although the overall effect size was small, theory-driven predictions for certain cues received support. These findings not only further our knowledge about the usefulness of linguistic cues to detect deception with computers in applied settings but also elucidate the relationship between language and deception.


International Journal of Psychology | 2004

Facial appearance and impressions of ‘credibility’: The effects of facial babyishness and age on person perception

Jaume Masip; Eugenio Garrido; Carmen Herrero

The babyface overgeneralization effect consists of perceiving that people whose facial features resemble those of children have childlike traits, and treating them accordingly. This experiment sought to replicate the US findings with a South-European sample, to examine the impact of facial maturity on impressions of truthfulness, and to examine the influence of age on person perception. Three hundred and twenty-four Spanish undergraduates were shown a photograph and had to rate it on a series of behavioural-tendency and trait scales measuring honesty, truthfulness, strength, dominance, intelligence, naivety, and warmth. The photographs were babyfaced, intermediate, and mature-faced computer-manipulated versions of three pictures of the same individual at three different ages. Results indicate that the experimental manipulations significantly affected most of the dependent variables. Babyfaced individuals were perceived as the most truthful, and children as the most deceitful. However, when the deceit concerned a ...


Scandinavian Journal of Psychology | 2015

The source of the truth bias: Heuristic processing?

Chris N. H. Street; Jaume Masip

People believe others are telling the truth more often than they actually are; this is called the truth bias. Surprisingly, when a speaker is judged at multiple points across their statement, the truth bias declines. Previous claims argue this is evidence of a shift from (biased) heuristic processing to (reasoned) analytical processing. In four experiments we contrast the heuristic-analytic model (HAM) with alternative accounts. In Experiment 1, the decrease in truth responding was not the result of speakers appearing more deceptive, but was instead attributable to the raters’ processing style. Yet contrary to HAMs, across three experiments we found the decline in bias was not related to the amount of processing time available (Experiments 1-3) or the communication channel (Experiment 2). In Experiment 4 we found support for a new account: that the bias reflects whether raters perceive the statement to be internally consistent.


Journal of Language and Social Psychology | 2010

What Did You Just Call Me? European and American Ratings of the Valence of Ethnophaulisms

Diana R. Rice; Dominic Abrams; Constantina Badea; Gerd Bohner; Andrea Carnaghi; Lyudmila I. Dementi; Kevin Durkin; Bea Ehmann; Gordon Hodson; Dogan Kokdemir; Jaume Masip; Aidan Moran; Margit E. Oswald; J.W. Ouwerkerk; Rolf Reber; Jonathan E. Schroeder; Katerina Tasiopoulou; Jerzy Trzebinski

Previous work has examined the relative valence (positivity or negativity) of ethnophaulisms (ethnic slurs) targeting European immigrants to the United States. However, this relied on contemporary judgments made by American researchers. The present study examined valence judgments made by citizens from the countries examined in previous work. Citizens of 17 European nations who were fluent in English rated ethnophaulisms targeting their own group as well as ethnophaulisms targeting immigrants from England. American students rated ethnophaulisms for all 17 European nations, providing a comparison from members of the host society. Ratings made by the European judges were (a) consistent with those made by the American students and (b) internally consistent for raters’ own country and for the common target group of the English. Following discussion of relevant methodological issues, the authors examine the theoretical significance of their results.


Frontiers in Psychology | 2016

Strategic Interviewing to Detect Deception: Cues to Deception across Repeated Interviews

Jaume Masip; Iris Blandón-Gitlin; Carmen del Hoyo Martínez; Carmen Herrero; Izaskun Ibabe

Previous deception research on repeated interviews found that liars are not less consistent than truth tellers, presumably because liars use a “repeat strategy” to be consistent across interviews. The goal of this study was to design an interview procedure to overcome this strategy. Innocent participants (truth tellers) and guilty participants (liars) had to convince an interviewer that they had performed several innocent activities rather than committing a mock crime. The interview focused on the innocent activities (alibi), contained specific central and peripheral questions, and was repeated after 1 week without forewarning. Cognitive load was increased by asking participants to reply quickly. The liars’ answers to both central and peripheral questions were significantly less accurate, less consistent, and more evasive than the truth tellers’ answers. Logistic regression analyses yielded classification rates ranging from around 70% (with consistency as the predictor variable), through 85% (with evasive answers as the predictor variable), to over 90% (with an improved measure of consistency that incorporated evasive answers as the predictor variable, as well as with response accuracy as the predictor variable). These classification rates were higher than the interviewers’ accuracy rate (54%).
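As a rough illustration of how a classification rate of this kind can be obtained, the sketch below fits a logistic regression with a single predictor and estimates the cross-validated proportion of correctly classified participants. It is not the authors' analysis: the predictor name, group sizes, and data are invented for illustration.

```python
# Minimal sketch, not the authors' analysis: estimating a classification rate
# from a logistic regression with one predictor (here, a hypothetical
# "consistency" score). All values below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 50  # hypothetical participants per group

# Assumed synthetic scores: truth tellers somewhat more consistent than liars.
consistency = np.concatenate([rng.normal(0.70, 0.10, n),   # liars
                              rng.normal(0.85, 0.10, n)])  # truth tellers
veracity = np.concatenate([np.zeros(n), np.ones(n)])       # 0 = liar, 1 = truth teller

# Cross-validated proportion of correctly classified participants.
rate = cross_val_score(LogisticRegression(), consistency.reshape(-1, 1),
                       veracity, cv=5, scoring="accuracy").mean()
print(f"Estimated classification rate: {rate:.0%}")
```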


Psychological Reports | 2010

Regression toward the Mean or Heuristic Processing in Detecting Deception?: Reply to Elaad (2010)

Jaume Masip; Eugenio Garrido; Carmen Herrero

Masip et al. (2009) conducted a study in which observers had to make truth–lie judgments at the beginning, middle, or end of a series of videotaped statements. They found a decline in truth judgments over time and explained this finding in terms of information processing mode. Recently, Elaad (2010) challenged this explanation and contended that the decline could be a result of regression toward the mean. In the present paper, it is argued that because Masip et al. took multiple Moment 1 judgments over time and then averaged across judgments, regression toward the mean was extremely unlikely. Furthermore, the decrease in truth judgments was found under several separate conditions; this cannot be explained by random fluctuations alone. Finally, Masip et al.’s data were reanalyzed adjusting for the effects of regression toward the mean. The outcomes of these analyses were the same as those reported in the original article.


Psychological Assessment | 2017

Can credibility criteria be assessed reliably? A meta-analysis of criteria-based content analysis.

Valerie Hauch; Siegfried Ludwig Sporer; Jaume Masip; Iris Blandón-Gitlin

This meta-analysis synthesizes research on interrater reliability of Criteria-Based Content Analysis (CBCA). CBCA is an important component of Statement Validity Assessment (SVA), a forensic procedure used in many countries to evaluate whether statements (e.g., of sexual abuse) are based on experienced or fabricated events. CBCA contains 19 verbal content criteria, which are frequently adapted for research on detecting deception. A total of k = 82 hypothesis tests revealed acceptable interrater reliabilities for most CBCA criteria, as measured with various indices (except Cohen’s kappa). However, results were largely heterogeneous, necessitating moderator analyses. Blocking analyses and meta-regression analyses on Pearson’s r resulted in significant moderators for research paradigm, intensity of rater training, type of rating scale used, and the frequency of occurrence (base rates) for some CBCA criteria. The use of CBCA summary scores is discouraged. Implications for research vs. field settings, for future research and for forensic practice in the United States and Europe are discussed.
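For readers unfamiliar with the agreement index singled out above, here is a minimal sketch of computing Cohen's kappa for one CBCA criterion coded by two raters; the codings are invented, not data from the meta-analysis.

```python
# Minimal sketch with invented codings (not data from the meta-analysis):
# Cohen's kappa for one CBCA criterion scored as present (1) or absent (0)
# by two raters across ten statements.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
print(f"Cohen's kappa: {kappa:.2f}")
```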


Estudios de Psicología | 2004

Lie detection by measuring voice stress: A critical review [La detección de la mentira mediante la medida de la tensión en la voz: una revisión crítica]

Jaume Masip; Eugenio Garrido; Carmen Herrero

Voice stress analyzers are devices that supposedly detect the absence of certain microtremors in the voice, which would indicate that the speaker is experiencing stress. From this perspective it is further assumed that every liar is tense, so voice stress analyzers are marketed as lie detectors. In this paper we present the history of these devices and the theoretical basis on which they claim to rest, and then examine the empirical research conducted to answer four key questions: (a) are there vocal characteristics that change when the speaker experiences stress?, (b) are there vocal characteristics that change when the speaker lies?, (c) do voice stress analyzers detect stress?, and (d) do they detect lies? The answers that research has given to these questions seriously call into question the use of voice stress analyzers as lie detectors.

Collaboration


Dive into Jaume Masip's collaborations.

Top Co-Authors

Izaskun Ibabe

University of the Basque Country

Elise Fenn

Claremont Graduate University
