Publication


Featured research published by Claudia Harsch.


Language Assessment Quarterly | 2011

Designing and Scaling Level-Specific Writing Tasks in Alignment With the CEFR: A Test-Centered Approach

Claudia Harsch; André A. Rupp

The Common European Framework of Reference (CEFR; Council of Europe, 2001) provides a competency model that is increasingly used as a point of reference for comparing language examinations. Nevertheless, aligning examinations to the CEFR proficiency levels remains a challenge. In this article, we propose a new, level-centered approach to designing and aligning writing tasks in line with the CEFR levels. Much work has been done on assessing writing via tasks spanning several levels of proficiency, but little research exists on a level-specific approach, in which one task targets one specific proficiency level. In our study, situated in a large-scale assessment project where such a level-specific approach was employed, we investigate the influence of the design factors (tasks, assessment criteria, raters, and student proficiency) on the variability of ratings, using descriptive statistics, generalizability theory, and multifaceted Rasch modeling. Results show that the level-specific approach yields plausible inferences about task difficulty, rater harshness, rating criteria difficulty, and student distribution. Moreover, Rasch analyses show a high level of consistency between a priori task classifications in terms of CEFR levels and empirical task difficulty estimates. This allows for a test-centered approach to standard setting by suggesting empirically grounded cut scores in line with the CEFR proficiency levels targeted by the tasks.
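
For orientation, a standard formulation of the many-facet Rasch model with facets for students, tasks, raters, and rating criteria is given below; this is a general textbook form, not necessarily the exact specification estimated in the study.

```latex
% Many-facet Rasch model (rating-scale form), general sketch:
%   theta_n  ability of student n      delta_i  difficulty of task i
%   alpha_j  harshness of rater j      beta_m   difficulty of criterion m
%   tau_k    threshold between adjacent score categories k-1 and k
\log\!\left(\frac{P_{nijmk}}{P_{nijm(k-1)}}\right)
  = \theta_n - \delta_i - \alpha_j - \beta_m - \tau_k
```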


Language Testing | 2016

Comparing C-Tests and Yes/No Vocabulary Size Tests as Predictors of Receptive Language Skills

Claudia Harsch; Johannes Hartig

Placement and screening tests serve important functions, not only with regard to placing learners at appropriate levels of language courses but also with a view to maximizing the effectiveness of administering test batteries. We examined two widely reported formats suitable for these purposes, the discrete decontextualized Yes/No vocabulary test and the embedded contextualized C-test format, in order to determine which format explains more variance in measures of listening and reading comprehension. Our data stem from a large-scale assessment of over 3000 students in the German secondary educational context; the four measures relevant to our study were administered to a subsample of 559 students. Using regression analysis on observed scores and structural equation modeling (SEM) at the latent level, we found that the C-test outperforms the Yes/No format under both methodological approaches. The contextualized nature of the C-test appears to explain large amounts of variance in measures of receptive language skills. The C-test, being a reliable, economical, and robust measure, appears to be an ideal candidate for placement and screening purposes. In a secondary strand of our study, we also explored different scoring approaches for the Yes/No format. We found that using the hit rate and the false-alarm rate as two separate indicators yielded the most reliable results. These indicators can be interpreted as a measure of vocabulary breadth and as a guessing factor, respectively, and they make it possible to control for guessing.
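
To make the scoring idea concrete, here is a minimal sketch of scoring a Yes/No vocabulary test with separate hit and false-alarm rates; the function and variable names are illustrative assumptions, not taken from the study.

```python
def score_yes_no(responses: dict[str, bool], real_words: set[str]) -> dict[str, float]:
    """Score a Yes/No vocabulary test.

    `responses` maps each presented item to True if the test taker
    claimed to know it; `real_words` marks which items are real words,
    the remainder being pseudowords.
    """
    real = [w for w in responses if w in real_words]
    pseudo = [w for w in responses if w not in real_words]
    # Hit rate: proportion of real words claimed as known
    # (interpretable as a measure of vocabulary breadth).
    hit_rate = sum(responses[w] for w in real) / len(real)
    # False-alarm rate: proportion of pseudowords claimed as known
    # (interpretable as a guessing factor).
    false_alarm_rate = sum(responses[w] for w in pseudo) / len(pseudo)
    return {"hit_rate": hit_rate, "false_alarm_rate": false_alarm_rate}

# Example: four real words and two pseudowords.
resp = {"house": True, "garden": True, "window": False, "bridge": True,
        "flimp": True, "sworg": False}
print(score_yes_no(resp, {"house", "garden", "window", "bridge"}))
# {'hit_rate': 0.75, 'false_alarm_rate': 0.5}
```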


Language Assessment Quarterly | 2014

General Language Proficiency Revisited: Current and Future Issues

Claudia Harsch

This article explores a number of key issues that emerged during the panel discussion following the General Language Proficiency Symposium at the Language Testing Forum (LTF) 2010, which celebrated the 30th anniversary of the LTF. These issues should be of interest to a wider audience, as they express current concerns in the testing community in light of 30 years of research and development: whether language proficiency can be seen as unitary or divisible; the role and use of the Common European Framework (CEF) proficiency scales and levels when reporting test scores; the issue of equivalence between high-stakes tests, including in relation to the CEF; and a general demand for developing assessment literacy among test users and stakeholders. Based on a summary of the LTF discussion, the article provides a state-of-the-art review of relevant research addressing these key issues, followed by a future agenda for researching the reviewed areas.


Assessment in Education: Principles, Policy & Practice | 2013

Comparing holistic and analytic scoring methods: issues of validity and reliability

Claudia Harsch; Guido Martin

This paper explores issues of rating quality when assessing writing in a level-specific approach, i.e. with tasks targeting specific proficiency levels. We investigate whether holistic scores can mask deviances in how the underlying descriptors are interpreted and applied by raters, as such deviances could compromise rating validity. We conducted a study with six raters, comparing a holistic approach with a combined approach in which an analytic score for each descriptor was collected alongside the holistic judgements. The results confirmed the initial hypothesis that holistic scores may mask deviances in how descriptors are applied. To monitor rating quality and enhance rating validity, we therefore recommend a complementary approach combining holistic scores with analytic, descriptor-focused scores. We illustrate the applicability of this approach through its successful implementation during rater training. Our findings contribute towards enhancing rater consistency and improving rating validity.
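
As an illustration of the combined approach, the hypothetical sketch below compares a rater's holistic score against the mean of their descriptor-level analytic scores and flags cases where the two diverge; the paper does not prescribe this particular computation.

```python
from statistics import mean

def flag_masked_deviance(holistic: float, analytic: list[float],
                         tolerance: float = 0.5) -> bool:
    """Return True if a holistic score diverges from the mean of the
    analytic, descriptor-focused scores by more than `tolerance`,
    suggesting the holistic score may mask deviant descriptor use."""
    return abs(holistic - mean(analytic)) > tolerance

# A rater assigns holistic level 4, but their descriptor scores average 2.75:
print(flag_masked_deviance(4.0, [3, 2, 3, 3]))  # True -> worth revisiting in rater training
```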


Language and Intercultural Communication | 2016

Enhancing student experiences abroad: the potential of dynamic assessment to develop student interculturality

Claudia Harsch; Matthew E. Poehner

Educational institutions are acknowledging the requirements of a globalized world on students’ mobility, interculturality, and language skills by offering study-abroad programmes. These need to be accompanied by procedures to assess student needs prior to and during their time abroad as well as upon their return. In the exploratory study reported here, we use Dynamic Assessment (DA) to examine international students’ interculturality and learning needs when interpreting Critical Incidents (CIs). DA integrates teaching, learning, and assessment by providing mediation to reveal the cognitive processes behind student performance. Four empirically derived incidents were presented to 13 international students studying in the UK. Students worked in pairs and were asked to interpret the CIs. Appropriate interpretations of the CIs involved identifying the issues from different perspectives; anticipating emotions, behaviour and problems for different participants; and negotiating situations and solutions. Interaction data were qualitatively analysed for instances of relevant (meta-)cognitive processes and emerging learning. Results indicate the potential of DA to provide an empirical diagnosis of student interculturality and learning needs, and thus a starting point for the design of enrichment programmes to optimize students’ development while abroad by meeting students where they are and moving them towards the desired abilities and objectives.


Language Assessment Quarterly | 2015

What Are We Aligning Tests to When We Report Test Alignment to the CEFR?

Claudia Harsch; Johannes Hartig

The study reported here investigates the validity of judgments made when aligning tests to the Common European Framework of Reference (CEFR). Listening tests operationalizing pre-defined difficulty-determining characteristics were to be aligned to CEFR levels. We employed a modified version of the item-descriptor matching method: ten judges stated the CEFR descriptors they thought each item operationalized and assigned a global CEFR level per item. We compared agreement on the CEFR level judgments and on the CEFR descriptors quoted. Analyzing the relationship between CEFR level judgments and item ratings of difficulty-determining characteristics shed light on further criteria the judges employed, and follow-up interviews helped to triangulate the findings by examining judges’ perceptions of the alignment procedure. We found that judges relied on different criteria and CEFR descriptors to varying degrees, interpreting CEFR levels differently; there seemed to be little comparability in which aspects judges used to form their global CEFR judgments. If an alignment does not take into account the meaning of the CEFR levels as expressed by existing CEFR descriptors, this raises issues with alignment validity, and hence with the validity of test-score interpretation and use. Given the impact of using CEFR-aligned tests for high-stakes purposes, this article aims to shed more light on what assigning a CEFR level to a test actually means.
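
For illustration, a minimal sketch of quantifying judge consensus on per-item CEFR level assignments follows; this is a hypothetical agreement index, not the authors' analysis.

```python
from collections import Counter

def modal_agreement(judgements: list[str]) -> float:
    """Proportion of judges whose CEFR level for an item matches the
    modal (most frequently assigned) level for that item."""
    level, count = Counter(judgements).most_common(1)[0]
    return count / len(judgements)

# Ten judges assign a global CEFR level to one listening item:
item_levels = ["B1", "B1", "B2", "B1", "A2", "B2", "B1", "B1", "B2", "B1"]
print(modal_agreement(item_levels))  # 0.6 -> only modest consensus on the level
```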


Language Assessment Quarterly | 2018

How Suitable Is the CEFR for Setting University Entrance Standards?

Claudia Harsch

The discussion takes up the common theme across the seven contributions to this special issue, namely the CEFR’s suitability as a basis for setting university entrance standards. The special issue offers insights into this theme from a multitude of contexts, languages, and perspectives, drawing on data from stakeholders, students, tests and their documentation, as well as student performances and teacher judgements. The seven papers are first discussed with a view to examining current practices of setting entrance requirements and the suitability of the CEFR as a means of comparison between different tests and contexts. The discussion then acknowledges that the CEFR alone cannot guarantee that different institutions and stakeholders will use it in a comparable way and arrive at comparable interpretations when employing its proficiency scales. To overcome these challenges, the discussion considers the ways of dealing with discrepancies between different tests and contexts that are suggested by the seven contributions, with a particular focus on how they answer earlier calls for further research. Finally, the discussion outlines ways forward to improve practices and stimulate future research in the realm of setting university entrance requirements.


Archive | 2017

Multidimensional Structures of Competencies: Focusing on Text Comprehension in English as a Foreign Language

Johannes Hartig; Claudia Harsch

The project “Modeling competencies with multidimensional item-response-theory models” examined different psychometric models for student performance in English as a foreign language. On the basis of re-analyses of data from completed large-scale assessments, a new test of reading and listening comprehension was constructed. The items within this test use the same text material for both reading and listening tasks, thus allowing a closer examination of the relations between the abilities required for comprehending written and spoken texts. Furthermore, item characteristics (e.g., cognitive demands and response format) were systematically varied, allowing us to disentangle the effects of these characteristics on item difficulty and dimensional structure. This chapter presents results on the properties of the newly developed test: both reading and listening comprehension can be reliably measured (rel = .91 for reading and .86 for listening). The abilities in the two sub-domains prove to be highly correlated yet empirically distinguishable, with a latent correlation of .84. Although the listening items are more difficult in absolute terms (fewer correct answers), the difficulties of the same items in their reading and listening versions are highly correlated (r = .84). Implications of the results for measuring language competencies in educational contexts are discussed.
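
The kind of check behind the reported r = .84 can be sketched as follows; the difficulty values are invented for illustration and are not the project’s data.

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length lists of values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# IRT difficulty estimates for the same items in each presentation mode:
reading_b   = [-1.2, -0.4, 0.1, 0.8, 1.5]
listening_b = [-0.7,  0.0, 0.6, 1.1, 2.0]  # listening uniformly harder
print(round(pearson_r(reading_b, listening_b), 2))  # high, close to 1 here
```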


Assessing Writing | 2012

Adapting CEF-descriptors for rating purposes: Validation by a combined rater training and scale revision approach

Claudia Harsch; Guido Martin


Archive | 2010

Empirische und inhaltliche Analyse lokaler Abhängigkeiten im C-Test [Empirical and content-based analysis of local dependencies in the C-test]

Claudia Harsch; Johannes Hartig

Collaboration


Dive into Claudia Harsch's collaborations.

Top Co-Authors


Hans Anand Pant

Humboldt University of Berlin


Jenny Frenzel

Humboldt University of Berlin


Matthew E. Poehner

Pennsylvania State University
