Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Clarence D. Kreiter is active.

Publication


Featured research published by Clarence D. Kreiter.


Anatomical Record - Advances in Integrative Anatomy and Evolutionary Biology | 2001

Comparison of a virtual microscope laboratory to a regular microscope laboratory for teaching histology.

Tonya Harris; Timothy Leaven; Paul M. Heidger; Clarence D. Kreiter; James Duncan; Fred R. Dick

Emerging technology now exists to digitize a gigabyte of information from a glass slide, save it in a highly compressed file format, and deliver it over the web. By accessing these images with a standard web browser and viewer plug‐in, a computer can emulate a real microscope and glass slide. Using this new technology, the immediate aims of our project were to digitize the glass slides from urinary tract, male genital, and endocrine units and implement them in the Spring 2000 Histology course at the University of Iowa, and to carry out a formative evaluation of the virtual slides of these three units in a side‐by‐side comparison with the regular microscope laboratory. The methods and results of this paper will describe the technology employed to create the virtual slides, and the formative evaluation carried out in the course. Anat Rec (New Anat) 265:10–14, 2001.


Anatomical Record - Advances in Integrative Anatomy and Evolutionary Biology | 2002

Integrated approach to teaching and testing in histology with real and virtual imaging

Paul M. Heidger; Fred R. Dee; Daniel Consoer; Timothy Leaven; James Duncan; Clarence D. Kreiter

The University of Iowa College of Medicine histology teaching laboratory incorporates extensive Web‐ and computer‐based teaching modalities, including the Virtual Microscope (VM), as emerging learning aids in histology and pathology laboratory instruction. We report here our experience in offering a multiple resource‐based approach to laboratory instruction while retaining the opportunity and requirement of examining actual microscopic slide preparations with the microscope. Acceptance of this approach has been high among our students and faculty, and performance levels established over years of teaching histology by traditional means have been maintained. Anat Rec (New Anat) 269:107–112, 2002.


Academic Medicine | 2004

Medical students' use of information resources: is the digital age dawning?

Michael W. Peterson; Jane A. Rowat; Clarence D. Kreiter; Jess Mandel

Purpose: One of the many challenges clinicians face is applying growing medical knowledge to specific patients; however, there is a gap between clinicians' information needs and information delivery. Digital information resources could potentially bridge this gap. Because most medical students are exposed to per…


Academic Medicine | 2005

The psychometric properties of five scoring methods applied to the script concordance test.

Andrew C. Bland; Clarence D. Kreiter; Joel A. Gordon

Purpose: The Script Concordance Test (SCT) is designed to measure cognitive ability related to successful clinical decision making. An SCT's usefulness for medical education depends on establishing its construct validity. The SCT's present construct relates examinees' scores to experts' response patterns, which does not require a single-best-answer format. Because medical education assessments do require a single best answer, the authors compared the psychometric properties of two aggregate scoring methods with three single-best-answer scoring methods for an SCT. Method: A nephrology SCT was developed and administered to 85 examinees. Examinees' scores derived from a key developed using eight experts and a traditional aggregate scoring method on a five-point Likert-based scale were compared with four alternate scoring methods (one method eliminated the multipoint Likert-type scale and three eliminated the Likert-type scale and employed single-best-answer scoring). Results: Two of the four alternate scoring methods performed as well as the traditional Likert-type aggregate scoring method. Scores from all five methods were highly intercorrelated. In addition, each method produced scores similarly correlated with level of experience, and none exhibited an intermediate effect. Conclusions: Single-best-answer scoring with three answer choices produced results similar to aggregate scoring on a Likert-type scale. Because SCT items appear to assess an examinee's understanding of the interrelatedness of medical knowledge, single-best-answer scoring on an SCT may be valid as an educational assessment. More research is needed to assess differential validity compared with multiple-choice question exams and the predictive validity related to clinical performance.
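The two scoring families compared in this study can be illustrated with a minimal sketch. The item data, function names, and normalization below are hypothetical, not the authors' actual scoring key; classic SCT aggregate scoring gives partial credit in proportion to how many panel experts chose the same option, while single-best-answer scoring gives all-or-nothing credit for the modal expert choice.

```python
from collections import Counter

def aggregate_score(expert_picks, examinee_pick):
    """Partial credit proportional to the fraction of experts who chose the
    same option, normalized so the modal expert answer earns 1.0."""
    counts = Counter(expert_picks)
    modal = max(counts.values())
    return counts.get(examinee_pick, 0) / modal

def single_best_score(expert_picks, examinee_pick):
    """All-or-nothing credit for the single most popular expert choice."""
    counts = Counter(expert_picks)
    best = counts.most_common(1)[0][0]
    return 1.0 if examinee_pick == best else 0.0

# Hypothetical item: eight experts respond on a 5-point Likert scale (-2..+2)
experts = [1, 1, 1, 2, 1, 0, 2, 1]
print(aggregate_score(experts, 2))   # 2 of 8 experts agree -> 2/5 = 0.4
print(single_best_score(experts, 2)) # modal answer is 1 -> 0.0
```

An examinee's total is then the sum of item scores under whichever method is in use; the study's finding was that a three-option single-best-answer variant tracked the aggregate method closely.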


Human Pathology | 2009

Competency assessment of residents in surgical pathology using virtual microscopy

Leslie A. Bruch; Barry R. De Young; Clarence D. Kreiter; Thomas H. Haugen; Timothy Leaven; Fred R. Dee

Our goal was to develop an efficient and reliable performance-based virtual slide competency examination in general surgical pathology that objectively measures pathology residents' morphologic diagnostic skill. A Perl-scripted MySQL database was used to develop the test editor and test interface. Virtual slides were created with the Aperio ScanScope. The examination consisted of 20 questions using 20 virtual slides. Slides were chosen to represent general surgical pathology specimens from a variety of organ systems. The examination was administered in a secure environment and was completed in 1 to 1.5 hours. Examination reliability, as an indicator of the test's ability to discriminate between trainee ability levels, was excellent (r = 0.84). The linear correlation coefficient of virtual slide competency examination score versus months of surgical pathology training was 0.83 (P = .0001). The learning curve was much steeper early in training. Correlation of virtual slide competency examination performance with residents' performance on the 64-item Resident In-Service Examination surgical pathology subsection was 0.70. Correlation of virtual slide competency examination performance with global end-of-rotation ratings was 0.28. This pilot implementation demonstrates that it is possible to create a short, reliable performance-based assessment tool for measuring morphologic diagnostic skill using a virtual slide competency examination. Furthermore, the examination as implemented in our program will be a valid measure of an individual resident's progress in morphologic competency. Virtual slide technology and computer accessibility have advanced to the point that the virtual slide competency examination model implemented in our program could have applicability across multiple residency programs.


Teaching and Learning in Medicine | 2007

A Validity Generalization Perspective on the Ability of Undergraduate GPA and the Medical College Admission Test to Predict Important Outcomes

Clarence D. Kreiter; Yuka Kreiter

Background: Research on the validity of using the Medical College Admission Test (MCAT) and undergraduate grade point average (GPA) for selection to medical school has produced conflicting interpretations. There is debate regarding the degree to which coefficients diminish over the course of educational and professional outcomes and disagreement over whether these two measures can predict clinical performance. Purpose: To summarize and interpret the validity literature using validity generalization techniques that account for measurement error. Methods: Validity generalization techniques were used to summarize MCAT and undergraduate GPA validity research. A meta-analysis was performed to evaluate validity coefficients for two outcome domains across educational and professional attainment levels. Results: The ability to predict academic performance decreases slightly for written tests. For clinical performance assessments, existing research does not allow an assessment of change across training levels. However, relevant studies suggest that MCAT and undergraduate GPA have a positive predictive relationship with clinical skills. Conclusion: A validity generalization perspective of the literature supports the use of MCAT and undergraduate GPA for selection to medical school.


Medical Education | 2009

The validity of performance-based measures of clinical reasoning and alternative approaches.

Clarence D. Kreiter; George R. Bergus

Context  The development of a valid and reliable measure of clinical reasoning ability is a prerequisite to advancing our understanding of clinically relevant cognitive processes and to improving clinical education. A record of problem‐solving performances within standardised and computerised patient simulations is often implicitly assumed to reflect clinical reasoning skills. However, the validity of this measurement method for assessing clinical reasoning is open to question.


Medical Education | 2013

Threats to validity in the use and interpretation of script concordance test scores

Matthew Lineberry; Clarence D. Kreiter; Georges Bordage

Recent reviews have claimed that the script concordance test (SCT) methodology generally produces reliable and valid assessments of clinical reasoning and that the SCT may soon be suitable for high‐stakes testing.


Medical Teacher | 2011

Research in assessment: Consensus statement and recommendations from the Ottawa 2010 conference

Lambert Schuwirth; Jerry A. Colliver; Larry D. Gruppen; Clarence D. Kreiter; Stewart Mennin; Hirotaka Onishi; Louis N. Pangaro; Charlotte Ringsted; David B. Swanson; Cees Van der Vleuten; Michaela Wagner-Menghin

Medical education research in general is a young scientific discipline which is still finding its own position among the scientific disciplines. It is rooted in both the biomedical sciences and the social sciences, each with its own scientific language. A more unique feature of medical education (and assessment) research is that it has to be both locally and internationally relevant. This is not always easy and sometimes leads to purely idiographic descriptions of an assessment procedure with insufficient general lessons or generalised scientific knowledge being generated, or vice versa. For medical educational research, a plethora of methodologies is available to cater to many different research questions. This article contains consensus positions and suggestions on various elements of medical education (assessment) research. Overarching is the position that without a good theoretical underpinning and good knowledge of the existing literature, good research and sound conclusions are impossible to produce, and that there is no inherently superior methodology: the best methodology is the one most suited to answering the research question unambiguously. Although the positions should not be perceived as dogmas, they should be taken as very serious recommendations. Topics covered are: types of research, theoretical frameworks, designs and methodologies, instrument properties or psychometrics, costs/acceptability, ethics, infrastructure and support.


Academic Medicine | 1999

Evaluating the usefulness of computerized adaptive testing for medical in-course assessment.

Clarence D. Kreiter; Kristi J. Ferguson; Larry D. Gruppen

PURPOSE This study investigated the feasibility of converting an existing computer-administered, in-course internal medicine test to an adaptive format. METHOD A 200-item internal medicine extended matching test was used for this research. Parameters were estimated with commercially available software with responses from 621 examinees. A specially developed simulation program was used to retrospectively estimate the efficiency of the computer-adaptive exam format. RESULTS It was found that the average test length could be shortened by almost half with measurement precision approximately equal to that of the full 200-item paper-and-pencil test. However, computer-adaptive testing with this item bank provided little advantage for examinees at the upper end of the ability continuum. An examination of classical item statistics and IRT item statistics suggested that adding more difficult items might extend the advantage to this group of examinees. CONCLUSIONS Medical item banks presently used for in-course assessment might be advantageously employed in adaptive testing. However, it is important to evaluate the match between the items and the measurement objective of the test before implementing this format.

Collaboration


Dive into Clarence D. Kreiter's collaborations.

Top Co-Authors

Kristi J. Ferguson
Roy J. and Lucille A. Carver College of Medicine

George R. Bergus
Roy J. and Lucille A. Carver College of Medicine

Catherine Solow
Roy J. and Lucille A. Carver College of Medicine

Marcy E. Rosenbaum
Roy J. and Lucille A. Carver College of Medicine