Publication


Featured research published by Raja Subhiyah.


Anesthesia & Analgesia | 2002

Development and analysis of a new certifying examination in perioperative transesophageal echocardiography.

Solomon Aronson; Aggie Butler; Raja Subhiyah; Richard E. Buckingham; Michael K. Cahalan; Steven Konstandt; Jonathan B. Mark; Robert M. Savage; Joseph S. Savino; Jack S. Shanewise; John Smith; Daniel M. Thys

A key element in developing a process to determine knowledge and ability in applying perioperative echocardiography has been an examination. We report on the development of a certifying examination in perioperative echocardiography. In addition, we tested the hypothesis that examination performance is related to clinical experience in echocardiography. Since 1995, more than 1200 participants have taken the examination, and more than 70% have passed. Overall examination performance was positively related to more than 3 months of training (or equivalent) in echocardiography and to the performance and interpretation of at least six examinations per week. We concluded that the certifying examination in perioperative echocardiography is a valid tool for helping determine individual knowledge in the application of perioperative echocardiography.


Academic Medicine | 2004

Using the NBME self-assessments to project performance on USMLE Step 1 and Step 2: impact of test administration conditions.

Amy Sawhill; Aggie Butler; Douglas R. Ripkey; David B. Swanson; Raja Subhiyah; John Thelman; William Walsh; Kathleen Z. Holtzman; Kathy Angelucci

Problem Statement and Background. This study examined the extent to which performance on the NBME® Comprehensive Basic Science Self-Assessment (CBSSA) and NBME Comprehensive Clinical Science Self-Assessment (CCSSA) can be used to project performance on USMLE Step 1 and Step 2 examinations, respectively. Method. Subjects were 1,156 U.S./Canadian medical students who took either (1) the CBSSA and Step 1, or (2) the CCSSA and Step 2, between April 2003 and January 2004. Regression analyses examined the relationship between each self-assessment and corresponding USMLE Step as a function of test administration conditions. Results. The CBSSA explained 62% of the variation in Step 1 scores, while the CCSSA explained 56% of Step 2 score variation. In both samples, Standard-Paced conditions produced better estimates of future Step performance than Self-Paced ones. Conclusions. Results indicate that self-assessment examinations provide an accurate basis for predicting performance on the associated Step with some variation in predictive accuracy across test administration conditions.
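
As a concrete illustration of the projection idea in this abstract, here is a minimal Python sketch that regresses Step scores on self-assessment scores and reports R-squared. It is not the study's model: the synthetic scores, scales, and coefficients below are hypothetical, and only the general approach of linear regression and variance explained is taken from the abstract.

# Illustrative sketch (not the study's actual model): project a USMLE Step 1
# score from an NBME CBSSA score with simple linear regression and report
# R^2 as the share of Step-score variance explained. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical paired scores: self-assessment (CBSSA) and subsequent Step 1.
cbssa = rng.normal(500, 100, size=300)                 # self-assessment scores
step1 = 0.35 * cbssa + rng.normal(45, 14, size=300)    # synthetic Step 1 scores

model = LinearRegression().fit(cbssa.reshape(-1, 1), step1)
r_squared = model.score(cbssa.reshape(-1, 1), step1)   # proportion of variance explained

projected = model.predict(np.array([[550.0]]))         # projection for a CBSSA score of 550
print(f"R^2 = {r_squared:.2f}, projected Step 1 = {projected[0]:.0f}")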


Evaluation & the Health Professions | 2007

Convergence Between Cluster Analysis and the Angoff Method for Setting Minimum Passing Scores on Credentialing Examinations

Brian J. Hess; Raja Subhiyah; Carolyn Giordano

Cluster analysis can be a useful statistical technique for setting minimum passing scores on high-stakes examinations by grouping examinees into homogeneous clusters based on their responses to test items. It has been most useful for supplementing data or validating minimum passing scores determined from expert judgment approaches, such as the Ebel and Nedelsky methods. However, there is no evidence supporting how well cluster analysis converges with the modified Angoff method, which is frequently used in medical credentialing. Therefore, the purpose of this study is to investigate the efficacy of cluster analysis for validating Angoff-derived minimum passing scores. Data are from 652 examinees who took a national credentialing examination based on a content-by-process test blueprint. Results indicate a high degree of consistency in minimum passing score estimates derived from the modified Angoff and cluster analysis methods. However, the stability of the estimates from cluster analysis across different samples was modest.
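
The general logic of the comparison can be sketched in a few lines of Python: cluster examinees on their item responses, place a cut score between adjacent clusters, and compare it with a modified-Angoff cut computed from judges' item ratings. The data, the two-cluster solution, and the midpoint rule below are assumptions for illustration, not the procedure reported in the article.

# Hypothetical illustration: a cluster-based cut score versus a modified-Angoff cut.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_examinees, n_items = 652, 200

# Synthetic scored responses (1 = correct); real analyses would use operational data.
ability = rng.normal(0, 1, n_examinees)
difficulty = rng.normal(0, 1, n_items)
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((n_examinees, n_items)) < prob).astype(int)

# Cluster examinees into two groups on their response vectors and put the cut
# score midway between the mean total scores of the two clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(responses)
totals = responses.sum(axis=1)
means = sorted(totals[labels == k].mean() for k in (0, 1))
cluster_cut = np.mean(means)

# Modified Angoff: each judge estimates, per item, the probability that a
# minimally competent examinee answers correctly; the cut is the summed mean.
angoff_ratings = rng.uniform(0.4, 0.8, size=(10, n_items))   # 10 hypothetical judges
angoff_cut = angoff_ratings.mean(axis=0).sum()

print(f"cluster-based cut = {cluster_cut:.1f}, Angoff cut = {angoff_cut:.1f}")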


The Journal of Physician Assistant Education | 2004

Confirmatory Factor Analysis of the NCCPA Physician Assistant National Recertifying Examination

Brian J. Hess; Raja Subhiyah

Purpose: The purpose of this study was to apply confirmatory factor analysis (CFA) methodology to support the structure of the Physician Assistant National Recertifying Examination (PANRE) for both primary care and surgical physician assistant (PA) examinee populations. Method: Examinee data from 950 first‐time takers (680 primary care and 270 surgical PA examinees) of PANRE were selected. LISREL 8 software was used to determine whether one general ability factor (ie, ability to apply general, primary care knowledge) accounted for most of the variability in both the task and organ system scores. Results: Results indicated that one general ability factor accounted for most of the variability in scores obtained from both the task and organ system dimensions of the test blueprint. Invariance tests indicated that the general ability factor structure was equivalent across the two populations, indicating that PANRE scores are interpretable in a way that is consistent with the specifications defined in the content blueprint. While the surgical PA examinees on average did not score as high on PANRE as the primary care PA examinees, the difference should not be attributed to the structural design or content specifications of the PANRE. Conclusions: The present study demonstrated the applicability of factor analysis for validating the structural design and scoring model for the PANRE across populations of specialized PA examinees; only a single general ability primarily explains variation in responses to individual test questions, and subsequently explains overall PANRE performance. Even though PANRE is designed to measure application of general medical knowledge in primary care, recertifying surgical PA examinees do not appear to be at a disadvantage when taking PANRE.
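
For readers unfamiliar with the modeling step, the sketch below fits a one-factor model by maximum likelihood as a rough stand-in for the LISREL analysis described above. The subscores and data are synthetic and the plain ML fit is an assumption for illustration; only the single-general-ability-factor structure comes from the abstract.

# Minimal one-factor confirmatory model fitted by maximum likelihood:
# the implied covariance is Sigma = lambda * lambda' + Psi (diagonal).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, p = 950, 6                              # examinees x subscores (e.g., task / organ-system areas)

# Synthetic subscores driven by one general ability plus unique error.
general = rng.normal(0, 1, n)
loadings_true = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])
scores = general[:, None] * loadings_true + rng.normal(0, 0.6, (n, p))
S = np.cov(scores, rowvar=False)           # observed covariance matrix

def ml_discrepancy(theta):
    lam, psi = theta[:p], np.exp(theta[p:])        # exp keeps unique variances positive
    sigma = np.outer(lam, lam) + np.diag(psi)      # implied covariance under one factor
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

theta0 = np.concatenate([np.full(p, 0.5), np.zeros(p)])
fit = minimize(ml_discrepancy, theta0, method="L-BFGS-B")
print("estimated loadings:", np.round(np.abs(fit.x[:p]), 2))   # sign of the factor is arbitrary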


The Journal of Physician Assistant Education | 2004

Building a Certification Examination Program, Part I: Development and Administration

Brian J. Hess; Scott Arbet; Raja Subhiyah

Certification examination programs are designed for the distinct purpose of awarding a credential. This credential attests that the professional has achieved predetermined standards of knowledge and performance. Because the assessment or examination process is one of the most important components of a good certification program, every effort should be made to ensure that the exam yields valid scores and meets the highest standards in the industry, and that this philosophy and practice are made known to key stakeholders so there is no doubt regarding the reputation of the assessment instrument.1


The Journal of Physician Assistant Education | 2003

Building a Certification Examination, Part II: Scoring and Standard Setting

Raja Subhiyah; Brian J. Hess; Scott Arbet

The end product of a certification examination such as the Physician Assistant National Certifying Exam (PANCE) is an outcome of pass or fail. This outcome is based on the score that a candidate obtains on the test. Every effort is made to ensure that the examination score is an appropriate measure of knowledge required for practice. The validity of the inference made from the score to the pass/fail decision is a function of the quality of the score and of the passing standard. To start with, we must have good questions that reflect concepts and knowledge important for practice. Then, we must have well-balanced forms of the test that represent the required knowledge in appropriate proportions as defined by the test blueprint. Next, we must have standardized conditions for administering the test. We must also have a solid and accurate scoring system. Finally, we must have a fair and equitable standard for passing the examination. Part I of this article documents the first three requisites. This part documents scoring and standard setting.
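
A deliberately simple sketch of the last two requisites, scoring and the passing standard, is given below: score responses against a key, place raw scores on a reporting scale, and compare them with a cut score. The answer key, scale constants, and cut score are hypothetical and are not PANCE values.

# Hypothetical scoring and pass/fail decision: number-correct scoring, a linear
# reporting scale, and a cut score set through standard setting (e.g., Angoff).
import numpy as np

key = np.array(list("ABCDA" * 24))                        # hypothetical 120-item answer key
rng = np.random.default_rng(3)
responses = rng.choice(list("ABCD"), size=(5, key.size))  # five hypothetical candidates

raw = (responses == key).sum(axis=1)                      # number-correct scoring
scaled = np.round(200 + 2.5 * raw).astype(int)            # hypothetical linear reporting scale
passing_standard = 350                                    # hypothetical cut score

for raw_score, scale_score in zip(raw, scaled):
    decision = "PASS" if scale_score >= passing_standard else "FAIL"
    print(f"raw {raw_score:3d}  scaled {scale_score:3d}  {decision}")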


Annals of Internal Medicine | 2002

The In-Training Examination in Internal Medicine: An Analysis of Resident Performance over Time

Richard A. Garibaldi; Raja Subhiyah; Mary E. Moore; Herbert S. Waxman


Journal of Educational Measurement | 1995

Scoring a Performance-Based Assessment by Modeling the Judgments of Experts

Brian E. Clauser; Raja Subhiyah; Ronald J. Nungester; Douglas R. Ripkey; Stephen G. Clyman; Danette W. McKinley


Clinical Gastroenterology and Hepatology | 2003

Knowledge base evaluation of medicine residents on the gastroenterology service: Implications for competency assessments by faculty.

Joseph C. Kolars; Furman S. McDonald; Raja Subhiyah; Randall S. Edson


Journal of the American Society of Echocardiography | 2001

Concept, development, administration, and analysis of a certifying examination in echocardiography for physicians.

Arthur E. Weyman; Aggie Butler; Raja Subhiyah; Christopher P. Appleton; Edward A. Geiser; Stephen A. Goldstein; Mary Etta King; Sanjiv Kaul; Arthur J. Labovitz; Michael H. Picard; Thomas J. Ryan; Jack S. Shanewise

Collaboration


Dive into Raja Subhiyah's collaborations.

Top Co-Authors

Brian J. Hess
American Board of Internal Medicine

Richard A. Garibaldi
University of Connecticut Health Center

Aggie Butler
National Board of Medical Examiners

Carolyn Giordano
National Board of Medical Examiners

David B. Swanson
National Board of Medical Examiners

Douglas R. Ripkey
National Board of Medical Examiners

Herbert S. Waxman
American College of Physicians