Jens F. Beckmann
Durham University
Publication
Featured research published by Jens F. Beckmann.
Journal of Education and Training | 2009
Robert E. Wood; Jens F. Beckmann; Damian P. Birney
Purpose – The purpose of this paper is to consider how simulations are increasingly used in training programs for the development of skills such as leadership. However, the requirements of leadership development go beyond the development of task-specific procedural knowledge or expertise that simulations have typically been used to develop. Leadership requires flexibility in the application of knowledge developed through simulations and the creation of linkages to behavioral execution skills needed to utilize that knowledge effectively in real-world settings. Design/methodology/approach – The successful acquisition of flexible expertise and the related execution skills requires instructional techniques that manage cognitive load, delay automatization of responses, and provide diversity in simulated experiences to ensure richness of the mental models developed while working on simulations. The successful transfer of that knowledge to real-world settings requires supplemental instructional techniques that li...
Frontiers in Psychology | 2016
Edward Cripps; Robert E. Wood; Nadin Beckmann; John W. Lau; Jens F. Beckmann; Sally Cripps
A Bayesian technique with analyses of within-person processes at the level of the individual is presented. The approach is used to examine whether the patterns of within-person responses on a 12-trial simulation task are consistent with the predictions of ITA theory (Dweck, 1999). ITA theory states that the performance of an individual with an entity theory of ability is more likely to spiral down following a failure experience than the performance of an individual with an incremental theory of ability. This is because entity theorists interpret failure experiences as evidence of a lack of ability which they believe is largely innate and therefore relatively fixed; whilst incremental theorists believe in the malleability of abilities and interpret failure experiences as evidence of more controllable factors such as poor strategy or lack of effort. The results of our analyses support ITA theory at both the within- and between-person levels of analyses and demonstrate the benefits of Bayesian techniques for the analysis of within-person processes. These include more formal specification of the theory and the ability to draw inferences about each individual, which allows for more nuanced interpretations of individuals within a personality category, such as differences in the individual probabilities of spiraling. While Bayesian techniques have many potential advantages for the analyses of processes at the level of the individual, ease of use is not one of them for psychologists trained in traditional frequentist statistical techniques.
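The abstract above describes drawing Bayesian inferences about each individual, such as an individual's probability of spiraling down after failure. The sketch below is not the authors' model; it is a minimal illustration, under a deliberately simple conjugate Beta-Binomial assumption with invented counts, of how trial-level data from one person can update a per-person probability estimate.

```python
# Hedged illustration (not the authors' actual model): Bayesian updating of
# one individual's probability of a performance drop ("spiraling") following
# failure feedback. All numbers below are invented for the example.

# Beta(1, 1) prior: a flat prior, i.e. no initial belief either way.
alpha_prior, beta_prior = 1.0, 1.0

# Hypothetical trial-level data for one person: of 8 post-failure trials,
# performance dropped on 6 and held or improved on 2.
drops, no_drops = 6, 2

# Conjugate Beta-Binomial update: add successes/failures to the prior counts.
alpha_post = alpha_prior + drops
beta_post = beta_prior + no_drops

# Posterior mean of this person's spiraling probability.
posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)
```

Because the update is per person, the same computation run over each participant's trial sequence yields individual posterior probabilities, which is the kind of individual-level inference the abstract contrasts with purely between-person analysis.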
Diagnostica | 2000
Jens F. Beckmann
Abstract. We analyse response-latency behaviour when answering items in intelligence tests administered without time limits. Previous findings show that incorrect answers are given later than correct ones. This "incorrect > correct phenomenon" is examined with respect to both its generality and its universality in a sample of N = 169 school students (mean age 14;6 years). Participants completed a figural series test, a number series test, and a test of verbal analogies. The "incorrect > correct phenomenon" was replicated (generality). However, the difference between incorrect and correct latencies decreases with declining test performance (restricted universality). The demonstrated cross-situational consistency of latency behaviour, together with only moderate positive correlations between latency and test performance, suggests a potentially additional, diagnostically relevant indicator. The interpretation...
Computers in Human Behavior | 2015
Nadin Beckmann; Jens F. Beckmann; Damian P. Birney; Robert E. Wood
Highlights:
- We address the issue of underutilisation of learning opportunities in simulations.
- 71 professionals took part in an experiment using a management simulation.
- Peer interactions were structured to encourage hypothesis-testing strategies.
- A simple manipulation of how learners interact with the simulation affected learning.
- Evidence for proximal, distal and deliberation learning effects is presented.

Whilst micro-worlds or simulations have increasingly been used in higher education settings, students do not always benefit as expected from these learning opportunities. Using an experimental-control group design, we tested the effectiveness of structuring the task environment so as to encourage learners to approach simulations more systematically. Seventy-one professionals who participated in a postgraduate-level management program worked on a management simulation either individually (n=35) or in dyads (n=36) while exploring the simulation (exploration phase). Peer interactions in the shared learning condition were structured so that learners were encouraged to employ hypothesis-testing strategies. All participants then completed the simulation again individually so as to demonstrate what they had learned (performance phase). Baseline measures of cognitive ability and personality were also collected. Learners who explored the simulation in the shared learning condition outperformed their counterparts who explored the simulation individually. A simple manipulation of the way learners interacted with the simulation facilitated learning. Improved deliberation is discussed as a potential cause of this effect, and preliminary evidence for it is provided. This study lends further evidence that the effectiveness of learning using simulations is co-determined by characteristics of the learning environment.
Zeitschrift Fur Padagogische Psychologie | 2000
Jens F. Beckmann; Heike Dobat
Abstract: We examine whether learning tests justify their claim to validity, namely to measure intellectual ability under the aspect of its potential for change. This is done, first, through comparison with "competing" diagnostic approaches (traditional status diagnostics, diagnostics of domain-specific prior knowledge) and, second, by testing the independent contribution of learning tests to the prediction of school learning outcomes. Two validation studies are reported, involving 166 seventh-grade students (concurrent validation) and 130 eighth-grade students (predictive validation). Learning outcomes in computer-based, standardised, curriculum-related learning programmes served as external validity criteria. The results show that learning tests prove their worth in predicting future learning outcomes. Moreover, learning tests are shown to explain criterion variance that is independent of prior knowledge and intelligence status. ...
Journal of Intelligence | 2017
Jens F. Beckmann; Natassia Goode
In this paper we discuss how the lack of a common framework in Complex Problem Solving (CPS) creates a major hindrance to a productive integration of findings and insights gained in its more than 40-year history of research. We propose a framework that anchors complexity within the tri-dimensional variable space of Person, Task and Situation. Complexity is determined by the number of information cues that need to be processed in parallel. What constitutes an information cue depends on the kind of task, the system or CPS scenario used, and the task environment (i.e., situation) in which the task is performed. Difficulty is conceptualised as a person’s subjective reflection of complexity. Using an existing data set of N = 294 university students’ problem-solving performances, we test the assumption derived from this framework that particular system features such as the number of variables (NoV) or the number of relationships (NoR) are inappropriate indicators of complexity. We do so by contrasting control performance across four systems that differ in these attributes. Results suggest that for controlling systems (task) with semantically neutral embedment (situation), the maximum number of dependencies any of the output variables has is a promising indicator of this task’s complexity.
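The indicators contrasted in the abstract above can be made concrete with a small computation. The sketch below uses a hypothetical CPS system structure (the variable names and edges are assumptions, not taken from the paper) to show how NoV, NoR, and the maximum number of dependencies of any output variable are derived from a causal structure.

```python
# Illustrative sketch (hypothetical system, not from the paper): deriving
# three candidate complexity indicators from a CPS system's causal structure.

# Edges as (source, target) pairs; inputs A-C drive outputs X-Z,
# and Z additionally depends on its own previous state.
edges = [("A", "X"), ("B", "X"), ("C", "X"),  # X depends on three cues
         ("A", "Y"),                          # Y depends on one cue
         ("Z", "Z")]                          # Z has an autoregressive link

variables = {"A", "B", "C", "X", "Y", "Z"}
outputs = {"X", "Y", "Z"}

nov = len(variables)   # number of variables (NoV)
nor = len(edges)       # number of relationships (NoR)

# Maximum number of dependencies of any output variable: the largest count
# of incoming links -- the indicator the abstract flags as promising for
# semantically neutral systems.
in_degree = {v: sum(1 for _, target in edges if target == v) for v in outputs}
max_dependencies = max(in_degree.values())

print(nov, nor, max_dependencies)
```

Two systems can share NoV and NoR yet differ in this maximum in-degree, which is why, on the framework's account, counting variables or relationships alone can misrepresent how many information cues must be processed in parallel.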
Frontiers in Psychology | 2017
Jens F. Beckmann; Damian P. Birney; Natassia Goode
In this paper we argue that a synthesis of findings across the various sub-areas of research in complex problem solving, and consequently progress in theory building, is hampered by an insufficient differentiation of complexity and difficulty. In the proposed framework of person, task, and situation (PTS), complexity is conceptualized as a quality that is determined by the cognitive demands that the characteristics of the task and the situation impose. Difficulty represents the quantifiable level of a person’s success in dealing with such demands. We use the well-documented “semantic effect” as an exemplar for testing some of the conceptual assumptions derived from the PTS framework. We demonstrate how a differentiation between complexity and difficulty can help take us beyond a potentially too narrowly defined psychometric perspective and subsequently gain a better understanding of the cognitive mechanisms behind this effect. In an empirical study, a total of 240 university students were randomly allocated to one of four conditions. The four conditions resulted from crossing the semanticity level of the variable labels used in the CPS system (high vs. low) with two instruction conditions for how to explore the CPS system’s causal structure (starting with the assumption that all relationships between variables existed vs. starting with the assumption that none of the relationships existed). The variation in the instruction aimed at inducing knowledge acquisition processes of either (1) systematic elimination of presumptions, or (2) systematic compilation of a mental representation of the causal structure underpinning the system. Results indicate that (a) it is more complex to adopt a “blank slate” perspective under high semanticity, as it requires processes of inhibiting prior assumptions, and (b) it seems more difficult to employ a systematic heuristic when testing against presumptions.
In combination, situational characteristics, such as the semanticity of variable labels, have the potential to trigger qualitatively different tasks. Failing to differentiate between ‘task’ and ‘situation’ as independent sources of complexity and treating complexity and difficulty synonymously threaten the validity of performance scores obtained in CPS research.
Journal of Cognitive Education and Psychology | 2014
Jens F. Beckmann
In this article, I reflect on potential reasons for the seemingly persistent impression that dynamic testing has not delivered on its promise. Potential reasons are embedded in a paradox. On the one hand, validity-related expectations toward dynamic tests seem too broad. This includes fuzziness in defining the diagnostic target constructs, simplistic quantitative focus on conventional validity indices, and overgeneralized expectations regarding incremental validity. At the same time, the focus on dynamic testing seems too narrow. By introducing three tests of cognitive flexibility, I exemplify that dynamic testing has potential which goes beyond the assessment of learning potential in specific subpopulations. My ambition is to help in addressing potential users’ misconceptions about dynamic testing productively.
Psychologische Rundschau | 2003
Jürgen Guthke; Jens F. Beckmann; Karl H. Wiedl
Abstract. "Dynamic testing" (DT) is increasingly discussed in the international literature as an alternative to the so-called status test, above all in the field of intelligence diagnostics. The best-known variant of DT is the so-called learning test. Its typical feature is the systematic incorporation of feedback, prompts, and "training sections" into the testing process. This is expected to yield incremental validity over status tests, particularly for so-called underprivileged groups and examinees with an "irregular learning history". This review first outlines the approach, then reports primarily recent empirical studies on so-called dynamic validation, and finally shows that the concept appears transferable beyond intelligence measurement to the diagnostics of other personality characteristics, in the sense of a "psychodiagnostics of intra-individual variability" achieved through a more experimental approach to testing.
Educational Review | 2018
Julian Elliott; Wilma C. M. Resing; Jens F. Beckmann
Abstract. This paper updates a review of dynamic assessment in education by the first author, published in this journal in 2003. It notes that the original review failed to examine the important conceptual distinction between dynamic testing (DT) and dynamic assessment (DA). While both approaches seek to link assessment and intervention, the former is of particular interest for academic researchers in psychology, whose focus is upon the study of reasoning and problem-solving. In contrast, those working in the area of dynamic assessment, often having a practitioner orientation, tend to be particularly concerned to explore the ways by which assessment data can inform educational practice. It is argued that while some authors have considered the potential value of DA in assisting classification, or in predicting future performance, the primary contribution of this approach would seem to be in guiding intervention. Underpinning this is the view that DA can shed light on the operation of underlying cognitive processes that are impairing learning. However, recent research has not supported the belief that deficient cognitive/executive functions can be identified and ameliorated in ways that subsequently result in academic progress. Where gains in such processes/functions have sometimes been found in laboratory training studies, these have tended not to transfer meaningfully to classroom contexts. The review concludes by pointing out that DA continues to be supported primarily on the basis of case studies, and notes that the 2003 call for research that systematically examines the relationship between assessment and intervention has yet to be realised.