Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Robert W. Lissitz is active.

Publication


Featured research published by Robert W. Lissitz.


Journal of Research in Science Teaching | 1999

Estimating the Impact of Instructional Practices on Student Achievement in Science

Clare Von Secker; Robert W. Lissitz

This study used a hierarchical linear model (HLM) to estimate direct and indirect effects of instructional practices recommended by the National Science Education Standards on individual achievement. Three pedagogical reforms—namely, providing more opportunities for laboratory inquiry, increasing emphasis on critical thinking, and reducing the amount of teacher-centered instruction—were expected to account for variability in school mean achievement and explain why gender, racial-ethnic status, and socioeconomic status have more influence on achievement of students in some schools than in others. Results suggest that whereas the instructional policies recommended by the authors of the Standards may be associated with higher achievement overall, they are equally likely to have the unintended consequence of contributing to greater achievement gaps among students with different demographic profiles. Theoretical expectations about the impact of instructional practices on academic excellence and equity require further evaluation.
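The multilevel intuition behind HLM can be sketched numerically: a hierarchical model is warranted when a sizable share of score variance lies between schools rather than within them. The sketch below, with invented school data, computes that share as an intraclass correlation; it is an illustration of the idea, not the model used in the study.

```python
import numpy as np

# Invented example data: four student scores per school.
scores = {
    "school_a": np.array([48.0, 52.0, 50.0, 54.0]),
    "school_b": np.array([60.0, 64.0, 62.0, 58.0]),
    "school_c": np.array([40.0, 44.0, 42.0, 46.0]),
}

grand_mean = np.mean([s for g in scores.values() for s in g])
# Between-school variance: spread of school means around the grand mean.
between = np.mean([(g.mean() - grand_mean) ** 2 for g in scores.values()])
# Within-school variance: average spread of students inside each school.
within = np.mean([g.var() for g in scores.values()])

icc = between / (between + within)
print(round(icc, 3))  # most of the variance here lies between schools
```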


Educational Researcher | 2007

A Suggested Change in Terminology and Emphasis Regarding Validity and Education

Robert W. Lissitz; Karen Samuelsen

This article raises a number of questions about the current unified theory of test validity that has construct validity at its center. The authors suggest a different way of conceptualizing the problem of establishing validity by considering whether the focus of the investigation of a test is internal to the test itself or focuses on constructs and relationships that are external to the test. They also consider whether the perspective on the test examination is theoretical or practical. The resulting taxonomy, encompassing both investigative focus and perspective, serves to organize a reconceptualization of the field of validity studies. The authors argue that this approach, together with changes in the rest of the terminology regarding validity, leads to a more understandable and usable model.


Journal of Teacher Education | 1987

Measurement Training for School Personnel: Recommendations and Reality

William D. Schafer; Robert W. Lissitz

Using evidence from a national survey and earlier literature, Schafer and Lissitz note that the time spent by teachers in assessment activities is reflected neither in their evaluation skills nor in their preparation programs. Prospective teachers receive limited preparation in assessment procedures despite the existence of well-developed recommendations from professional groups vis-a-vis assessment objectives for preprofessional training. The authors suggest that curricular change be evidenced in teacher preparation programs to ensure that teachers have adequate assessment skills.


Review of Educational Research | 1986

IRT Test Equating: Relevant Issues and a Review of Recent Research

Gary Skaggs; Robert W. Lissitz

The application of item response theory (IRT) methodology to test equating has been a research topic of considerable interest in the past 2 decades. Despite the volume of research, it has been difficult to draw conclusions and make generalizations because different studies have used different types of tests, different types of samples, and different methods for assessing the accuracy of equating results. The purpose of this paper is threefold: (a) to review some of the major studies thus far and synthesize their results, (b) to discuss what questions are as yet unanswered and what problems exist with research methodology, and (c) to provide direction for future research. Whereas earlier research focused on comparing equating methods and IRT models, recent research has addressed such statistical concerns as standard errors of equating, parameter stability, and robustness of IRT models to violations of their assumptions. A major finding from the research so far is that it is unreasonable to expect a single equating method to provide the best results for equating all types of tests. Future research must determine how conditions, such as multidimensionality and test content, affect IRT equating.
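One classical linking step reviewed in this literature can be sketched concretely: mean-sigma linking places two forms' IRT difficulty estimates on a common scale via a linear transformation. The item difficulties and transformation constants below are invented for illustration.

```python
import numpy as np

# Invented form-X difficulties for five common (anchor) items.
b_x = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])

# Suppose the same items, calibrated on form Y's scale, differ by a
# linear transformation b_y = A * b_x + B (constants invented).
A_true, B_true = 1.1, 0.25
b_y = A_true * b_x + B_true

# Mean-sigma linking recovers the constants from means and SDs.
A = b_y.std() / b_x.std()
B = b_y.mean() - A * b_x.mean()
print(round(A, 3), round(B, 3))  # recovers 1.1 and 0.25
```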


Applied Psychological Measurement | 1988

Effect of Examinee Ability on Test Equating Invariance.

Gary Skaggs; Robert W. Lissitz

Previous research on the application of IRT methodology to vertical test equating has demonstrated conflicting results about the degree of invariance shown by these methods with respect to examinee ability. The purpose of this study was to examine IRT equating invariance by simulating the vertical equating of two tests under varying conditions. Rasch, three-parameter, and equipercentile equating methods were compared. Six equating cases, using different sets of item parameters, were replicated based on examinee samples of low, medium, or high ability or where ability was matched to the difficulty level of the test. The results showed that all three methods were reasonably invariant to examinee ability level under all conditions imposed. This suggests that multidimensionality is likely to be the cause of the lack of invariance found in real datasets. Index terms: examinee ability; invariance in item response theory; item response theory, equating; item response theory, invariance; test equating; vertical equating.


Applied Psychological Measurement | 2000

An Evaluation of the Accuracy of Multidimensional IRT Linking

Yuan H. Li; Robert W. Lissitz

Most multidimensional item response theory (MIRT) parameter estimation programs solve the identification problem by requiring that multidimensional traits be distributed as multivariate normal, MVN(0, I). Three types of MIRT linking methods were evaluated, which are based on a composite transformation that changes the linked group’s reference system into the base group’s reference system: an orthogonal Procrustes rotation, a translation transformation, and a single dilation. The results indicate that the best MIRT linking method was an unbiased, effective, and consistent estimator that produced accurate estimates of transformation parameters when errors in estimation of item parameters were purposely manipulated. This method was capable of successfully recovering item parameters under model-fitting conditions.
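The orthogonal Procrustes rotation named in the abstract can be sketched with plain linear algebra: given item-parameter matrices for a linked group and a base group, the best-fitting rotation comes from an SVD. The data below are invented; this is an illustration of the rotation step only, not of the full composite transformation the study evaluated.

```python
import numpy as np

def procrustes_rotation(linked, base):
    """Orthogonal matrix T minimizing ||linked @ T - base|| (Frobenius)."""
    u, _, vt = np.linalg.svd(linked.T @ base)
    return u @ vt

rng = np.random.default_rng(0)
base = rng.random((20, 2))  # invented: 20 items, 2 dimensions

# Rotate the base reference system by a known angle to create the
# "linked" group's system, then try to recover the alignment.
angle = 0.3
true_rot = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
linked = base @ true_rot.T

T = procrustes_rotation(linked, base)
aligned = linked @ T
print(np.allclose(aligned, base))  # True: the rotation is recovered
```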


Applied Psychological Measurement | 1986

An exploration of the robustness of four test equating models

Gary Skaggs; Robert W. Lissitz

This Monte Carlo study explored how four commonly used test equating methods (linear, equipercentile, and item response theory methods based on the Rasch and three-parameter models) responded to tests of different psychometric properties. The four methods were applied to generated data sets where mean item difficulty and discrimination as well as level of chance scoring were manipulated. In all cases, examinee ability was matched to the level of difficulty of the tests. The results showed the Rasch model not to be very robust to violations of the equal discrimination and non-chance scoring assumptions. There were also problems with the three-parameter model, but these were due primarily to estimation and linking problems. The recommended procedure for tests similar to those studied is the equipercentile method.
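The equipercentile method the study recommends has a simple core: a form-X score is mapped to the form-Y score with the same percentile rank. A minimal sketch with invented simulated score distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented number-correct scores on two 40-item forms; form X is easier.
x_scores = rng.binomial(40, 0.55, size=5000)
y_scores = rng.binomial(40, 0.50, size=5000)

def equate(score, x, y):
    """Form-Y equivalent of a form-X score via matched percentile ranks."""
    p = np.mean(x <= score)      # percentile rank of the score on form X
    return np.quantile(y, p)     # score at the same percentile on form Y

# An average form-X score maps to a lower form-Y score,
# since form X is the easier form.
print(equate(22, x_scores, y_scores))
```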


Applied Psychological Measurement | 2012

Exploring the Full-information Bifactor Model in Vertical Scaling with Construct Shift

Ying Li; Robert W. Lissitz

To address the lack of attention to construct shift in item response theory (IRT) vertical scaling, a multigroup, bifactor model was proposed to model the common dimension for all grades and the grade-specific dimensions. Bifactor model estimation accuracy was evaluated through a simulation study with manipulated factors of percentage of common items, sample size, and degree of construct shift. In addition, the unidimensional IRT (UIRT) model, which ignores construct shift, was also estimated to represent current practice. It was found that (a) bifactor models were well recovered overall, though the grade-specific dimensions were not as well recovered as the general dimension; (b) item discrimination parameter estimates were overestimated in UIRT models due to the effect of construct shift; (c) the person parameters of UIRT models were less accurately estimated than those of bifactor models; (d) group mean parameter estimates from UIRT models were less accurate than those of bifactor models; and (e) a large effect due to construct shift was found for the group mean parameter estimates of UIRT models. A real data analysis provided an illustration of how bifactor models can be applied to problems involving vertical scaling with construct shift. General procedures for testing practice were recommended and discussed.
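The loading structure a bifactor model imposes can be shown in miniature: every item loads on the general dimension shared across grades, plus exactly one grade-specific dimension. The loading values below are invented for illustration and are not estimates from the study.

```python
import numpy as np

#                  general  grade-1  grade-2
loadings = np.array([
    [0.7, 0.4, 0.0],   # grade-1 item
    [0.6, 0.5, 0.0],   # grade-1 item
    [0.8, 0.0, 0.3],   # grade-2 item
    [0.7, 0.0, 0.4],   # grade-2 item
])

general = loadings[:, 0]
specific = loadings[:, 1:]

# Every item loads on the general factor; each item loads on exactly
# one grade-specific factor.
print((general > 0).all(), ((specific != 0).sum(axis=1) == 1).all())  # True True
```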


Teaching and Teacher Education | 1986

Stress in Student Teachers: A Multidimensional Scaling Analysis of Elicited Stressful Situations.

Naomi Kaunitz; Arnold R. Spokane; Robert W. Lissitz; William Strein

Previous studies of student-teacher stress listed common concerns or stresses, rank ordered them, or at best factor-analyzed ratings. Although useful in a descriptive sense, such studies did not promote analytic understanding of the problems student teachers face. The present study employed multidimensional scaling (MDS) analysis of elicited stressful situations in an effort to (a) spatially represent, on the basis of underlying dimensions, those situations student teachers find stressful, (b) label these dimensions statistically, and (c) examine the relationship of the underlying dimensions to reported personal strain. Thirty-one student teachers generated a total of 95 situations they considered stressful. These situations served to define stimuli for the paired-comparisons procedure, completed by an additional 44 student teachers. These paired comparisons results were input for MDS, and results revealed some support for three underlying dimensions: professional-personal; threat-non-threat; and professional disaster-personal crisis. Attempts to label these dimensions statistically were not satisfactory. Although no more stressed than average professionals, student teachers who could not separate personal and professional stressors did report increased personal strain. Implications for the dynamics of the student teaching experience are explored.
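The MDS step can be sketched in its classical (Torgerson) form: double-center a dissimilarity matrix and take the top eigenvectors as coordinates. The four-point configuration below is invented to show that a Euclidean dissimilarity matrix is recovered exactly; the study's actual input was paired-comparison judgments.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n-by-n dissimilarity matrix D in k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]           # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Invented 2-D configuration: the corners of a unit square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

X = classical_mds(D)
# The embedding's pairwise distances match the input dissimilarities.
print(np.allclose(np.linalg.norm(X[:, None] - X[None, :], axis=-1), D))  # True
```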


Archive | 2015

An Empirical Study of the Impact of the Choice of Persistence Models in Value Added Modeling upon Teacher Effect Estimates

Yong Luo; Hong Jiao; Robert W. Lissitz

The application of value added modeling (VAM) in educational settings has been gaining momentum in the past decade or so, driven by interest in using test scores to evaluate teachers or schools, and a large number of VAM models are now available to researchers and practitioners. Despite this variety, McCaffrey et al. (2004) summarized the relations among them and concluded that many can be viewed as special cases of persistence models. In persistence models, student scores are calculated based on the sum of teacher effects across years. Since different students may change teachers every year and have different membership in multiple group units, such models are also referred to as “multiple membership” models (Browne et al. 2001; Rasbash and Browne 2001). Persistence models differ from each other in the value of the persistence parameter, which ranges from 0 to 1 and denotes how teacher effects in the current year persist into subsequent years, whether they vanish, persist undiminished, or diminish. The Variable Persistence (VP) model (Lockwood et al. 2007; McCaffrey et al. 2004) is considered more flexible because it estimates the persistence parameter freely, while other persistence models constrain its value to be either 0 or 1.
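The role of the persistence parameter can be shown with a toy computation: a student's cumulative teacher contribution is the sum of each year's teacher effect, with earlier effects downweighted by the persistence parameter raised to the number of intervening years. The effect values are invented; this illustrates the weighting scheme only, not any particular estimated model.

```python
# Invented teacher effects experienced by one student in years 1..3.
teacher_effects = [0.4, -0.1, 0.3]

def cumulative_effect(effects, alpha):
    """Summed teacher contribution at the final year, with persistence
    parameter alpha in [0, 1] applied per intervening year."""
    T = len(effects)
    return sum(alpha ** (T - 1 - t) * e for t, e in enumerate(effects))

# Complete persistence (alpha = 1): all effects count undiminished.
print(round(cumulative_effect(teacher_effects, 1.0), 10))  # 0.6
# No persistence (alpha = 0): only the current teacher's effect remains.
print(round(cumulative_effect(teacher_effects, 0.0), 10))  # 0.3
```

The Variable Persistence model in effect estimates alpha from the data, while the constrained models fix it at one of these two endpoints.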

Collaboration


Dive into Robert W. Lissitz's collaborations.

Top Co-Authors

Ying Li

American Institutes for Research
Gary Skaggs

University of Maryland