Madhabi Chatterji
Columbia University
Publications
Featured research published by Madhabi Chatterji.
Journal of Educational Psychology | 2006
Madhabi Chatterji
This study estimated reading achievement gaps in different ethnic, gender, and socioeconomic groups of 1st graders in the U.S. compared with specific reference groups and identified statistically significant correlates and moderators of early reading achievement. A subset of 2,296 students nested in 184 schools from the Early Childhood Longitudinal Study (ECLS) kindergarten to 1st-grade cohort was analyzed with hierarchical linear models. With child-level background differences controlled, significant 1st-grade reading differentials were found in African American children (-0.51 SD units below Whites), boys (-0.31 SD units below girls), and children from high-poverty households (-0.61 to -1.0 SD units below well-to-do children). In all 3 comparisons, the size of the reading gaps increased from kindergarten entry to 1st grade. Reading level at kindergarten entry was a significant child-level correlate, related to poverty status. At the school level, class size and elementary teacher certification rate were significant reading correlates in 1st grade. Cross-level interactions indicated that reading achievement in African American children was moderated by the schools that students attended, with attendance rates and reading time at home explaining the variance.
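The gaps in this abstract are reported in SD units, i.e., standardized mean differences. As a minimal illustrative sketch (the group means and pooled standard deviation below are hypothetical, not values taken from the study), such a gap can be computed as:

```python
def standardized_gap(mean_focal: float, mean_reference: float, pooled_sd: float) -> float:
    """Gap in SD units: (focal group mean - reference group mean) / pooled SD."""
    return (mean_focal - mean_reference) / pooled_sd

# Hypothetical reading scores: focal group mean 48.0, reference group mean 53.1,
# pooled standard deviation 10.0, giving a gap of -0.51 SD units.
gap = standardized_gap(48.0, 53.1, 10.0)
print(round(gap, 2))  # -0.51
```

A negative value means the focal group scores below the reference group; the hierarchical linear models in the study additionally adjust these estimates for child- and school-level covariates.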
Educational Researcher | 2004
Madhabi Chatterji
Federal policy tools for gathering evidence on “What Works” in education, such as the What Works Clearinghouse’s (WWC) standards, emphasize randomized field trials as the preferred method for generating scientific evidence on the effectiveness of educational programs. This article argues instead for extended-term mixed-method (ETMM) designs. Emphasizing the need to consider temporal factors in gaining thorough understandings of programs as they take hold in organizational or community settings, the article asserts that formal study of contextual and site-specific variables with multiple research methods is a necessary prerequisite to designing sound field experiments for making generalized causal inferences. A theoretical rationale and five guiding principles for ETMM designs are presented, with suggested revisions to the WWC’s standards.
American Journal of Evaluation | 2007
Madhabi Chatterji
Drawing on a literature review, this article argues that a simplistic distinction between strong and weak evidence, hinging on the use of randomized controlled trials (RCTs), the federal "gold standard" for generating rigorous evidence on social programs and policies, is not tenable for evaluative studies of complex field interventions such as those found in education. It introduces instead the concept of grades of evidence, illustrating how the choice of research designs, coupled with the rigor with which they can be executed under field conditions, progressively affects evidence quality. It argues that evidence from effectiveness research should be graded on different design dimensions, accounting for both the conceptualization and the execution aspects of a study. Well-implemented, phased designs using multiple research methods carry the highest potential to yield the best grade of evidence on the effects of complex field interventions.
Educational Researcher | 2008
Madhabi Chatterji
Traditional methods for preparing systematic reviews and syntheses of effectiveness studies rely on a limited set of methodological criteria for including studies, and those studies measure and report effects too narrowly to advance the mission of evidence-based practice. This article discusses why and how the criteria for study selection, evidence screening, and synthesis need to be broadened when education programs are investigated for effects.
Health Education & Behavior | 2014
Madhabi Chatterji; Lawrence W. Green; Shiriki Kumanyika
This article summarizes a comprehensive, systems-oriented framework designed to improve the use of a wide variety of evidence sources to address population-wide obesity problems. The L.E.A.D. framework (for Locate the evidence, Evaluate the evidence, Assemble the evidence, and inform Decisions), developed by an expert consensus committee convened by the Institute of Medicine, is broadly applicable to complex, community-wide health problems. The article explains how to use the framework, presenting an evidence typology that helps specify relevant research questions and includes examples of how particular research methodologies and sources of evidence relate to questions that stem from decision-maker needs. The utility of a range of quantitative, qualitative, and mixed method designs and data sources for assembling a broad and credible evidence base is discussed, with a call for ongoing “evidence generation” to fill information gaps using the recommended systems perspective.
Educational and Psychological Measurement | 2002
Madhabi Chatterji; Christina Sentovich; John M. Ferron; Gianna Rendina-Gobioff
Scores from the Teacher Readiness for Educational Reforms (TRER) instrument were validated using a six-phase, iterative model. Initial conceptualization, content validation, and pilot testing yielded a 61-item instrument with seven subdomains (Phases 1-4). Exploratory work (Phase 5) using principal axis factor extraction supported a five-factor structure (Data Set 1, n = 393). Further exploration with a five-factor free-path model and a more constrained structural model yielded satisfactory fit values (Bentler's comparative fit indices of .94 and .93, respectively). Deletion or collapsing of items in Phase 5 yielded a refined TRER with 43 items. Confirmatory work (Phase 6) with a new data set (n = 392) showed little slippage in fit. Cronbach's alpha values ranged from .78 to .96 across final subdomain scores.
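The subdomain reliabilities reported above (.78 to .96) are Cronbach's alpha values. As a hedged sketch of how alpha is computed from item-level scores (the item data below are invented for illustration, not TRER data):

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a scale.

    `items` holds one list per item, each containing that item's scores
    for the same respondents, in the same order.
    """
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def sample_var(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / sample_var(totals))

# Two perfectly correlated items give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

Alpha increases with the number of items and with the average inter-item correlation, which is why the longer, refined subdomains can reach values near .96.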
Quality Assurance in Education | 2014
Oren Pizmony-Levy; James Harvey; William H. Schmidt; Richard Noonan; Laura C. Engel; Michael J. Feuer; Henry Braun; Carla Santorno; Iris C. Rotberg; Paul Ash; Madhabi Chatterji; Judith Torney-Purta
Purpose – This paper presents a moderated discussion on popular misconceptions, benefits and limitations of International Large-Scale Assessment (ILSA) programs, clarifying how ILSA results could be more appropriately interpreted and used in public policy contexts in the USA and elsewhere in the world.
Design/methodology/approach – To bring key issues, points-of-view and recommendations on the theme to light, the method used is a "moderated policy discussion". Nine commentaries were invited to represent voices of leading ILSA scholars/researchers and measurement experts, juxtaposed against views of prominent leaders of education systems in the USA that participate in ILSA programs. The discussion is excerpted from a recent blog published by Education Week. It is moderated with introductory remarks from the guest editor and concluding recommendations from an ILSA researcher who did not participate in the original blog. References and author biographies are presented at the end of the article.
Findings – To...
Quality Assurance in Education | 2014
Jade Caines; Beatrice L. Bridglall; Madhabi Chatterji
Purpose – This policy brief discusses validity and fairness issues that could arise when test-based information is used for making "high stakes" decisions at an individual level, such as the certification of teachers or other professionals, admitting students into higher education programs and colleges, or making immigration-related decisions for prospective immigrants. To assist test developers, affiliated researchers and test users in enhancing levels of validity and fairness with these particular types of test score interpretations and uses, this policy brief summarizes an "argument-based approach" to validation given by Kane.
Design/methodology/approach – This policy brief is based on a synthesis of conference proceedings and a review of selected pieces of extant literature. To that synthesis, the authors add practitioner-friendly examples with their own analysis of key issues. They conclude by offering recommendations for test developers and test users.
Findings – The authors conclude that validity is a complex and evolving construct, especially when considering issues of fairness in individual testing contexts. Kane's argument-based approach offers an accessible framework through which test makers can accumulate evidence to evaluate inferences and arguments related to decisions to be made with test scores. Perspectives of test makers, researchers, test takers and decision-makers must all be incorporated into constructing coherent "validity arguments" to guide the test development and validation processes.
Originality/value – Standardized test use for individual-level decisions is gradually spreading to various regions of the world, but understandings of validity are still uneven among key stakeholders of such testing programs. By translating complex information on test validation, validity and fairness issues with all concerned stakeholders in mind, this policy brief attempts to address the communication gaps noted to exist among these groups by Kane.
Quality Assurance in Education | 2014
Edmund W. Gordon; Michael V. McGill; Deanna Iceman Sands; Kelley M. Kalinich; James W. Pellegrino; Madhabi Chatterji
Purpose – The purpose of this article is to present alternative views on the theory and practice of formative assessment (FA), or assessment to support teaching and learning in classrooms, with the purpose of highlighting its value in education and informing discussions on educational assessment policy.
Methodology/approach – The method used is a "moderated policy discussion". The six invited commentaries on the theme represent perspectives of leading scholars and measurement experts juxtaposed against voices of prominent school district leaders from two education systems in the USA. The discussion is moderated with introductory and concluding remarks from the guest editor and is excerpted from a recent blog published by Education Week. References and author biographies are presented at the end of the article.
Findings – While current assessment policies in the USA push for greater accountability in schools by increasing large scale testing of students, the authors underscore the importance of FA integrat...
Quality Assurance in Education | 2014
W. James Popham; David C. Berliner; Neal M. Kingston; Susan H. Fuhrman; Steven M. Ladd; Jeffrey Charbonneau; Madhabi Chatterji
Purpose – Against a backdrop of high-stakes assessment policies in the USA, this paper explores the challenges, promises and the "state of the art" with regard to designing standardized achievement tests and educational assessment systems that are instructionally useful. Authors deliberate on the consequences of using inappropriately designed tests, and in particular tests that are insensitive to instruction, for teacher and/or school evaluation purposes.
Methodology/approach – The method used is a "moderated policy discussion". The six invited commentaries represent voices of leading education scholars and measurement experts, juxtaposed against views of a prominent leader and nationally recognized teacher from two American education systems. The discussion is moderated with introductory and concluding remarks from the guest editor, and is excerpted from a recent blog published by Education Week. References and author biographies are presented at the end of the article.
Findings – In the education assess...