Publication


Featured research published by Aarti P. Bellara.


Journal of Teacher Education | 2013

The Current State of Assessment Education: Aligning Policy, Standards, and Teacher Education Curriculum

Christopher DeLuca; Aarti P. Bellara

In response to the existing accountability movement in the United States, a plethora of educational policies and standards have emerged at various levels to promote teacher assessment competency, with a focus on preservice assessment education. However, despite these policies and standards, research has shown that beginning teachers continue to maintain low competency levels in assessment. Limited assessment education that is potentially misaligned to assessment standards and classroom practices has been identified as one factor contributing to a lack of assessment competency. Accordingly, the purpose of this study was to analyze the alignment between teacher education accreditation policies, professional standards for teacher assessment practice, and preservice assessment course curriculum. Through a curriculum alignment methodology involving two policy documents, two professional standards documents, and syllabi from 10 Florida-based, Council for Accreditation of Teacher Education–certified teacher education programs, the results of this study serve to identify points of alignment and misalignment across policies, standards, and curricula. The study concludes with a discussion on the current state of assessment education with implications for enhancing teacher preparation in this area and future research on assessment education.


Journal of Research on Leadership Education | 2014

The Hidden Curriculum: Candidate Diversity in Educational Leadership Preparation

Zorka Karanxha; Vonzell Agosto; Aarti P. Bellara

The authors describe a process of self-assessment attuned to equity and justice in the policies and practices that affect student diversity, namely, those associated with the selection of candidates. The disproportionate rate of rejection for applicants from underrepresented groups and the unsystematic process of applicant selection operated as hidden curriculum affecting the opportunities for the program to enhance meaningful relationships among diverse groups of students. The authors describe institutional and sociopolitical conditions, and individual actions reflecting a faculty’s will to policy. Faculty efforts supported and challenged systemic change to increase racial and ethnic diversity among aspiring educational administrators.


Multivariate Behavioral Research | 2015

How Do Propensity Score Methods Measure Up in the Presence of Measurement Error? A Monte Carlo Study

Patricia Rodríguez de Gil; Aarti P. Bellara; Rheta E. Lanehart; Reginald S. Lee; Eun Sook Kim; Jeffrey D. Kromrey

Considering that the absence of measurement error in research is a rare phenomenon and its effects can be dramatic, we examine the impact of measurement error on propensity score (PS) analysis used to minimize selection bias in behavioral and social observational studies. A Monte Carlo study was conducted to explore the effects of measurement error on the treatment effect and balance estimates in PS analysis across seven different PS conditioning methods. In general, the results indicate that even low levels of measurement error in the covariates lead to substantial bias in estimates of treatment effects and concomitant reduction in confidence interval coverage across all methods of conditioning on the PS.
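For readers unfamiliar with the mechanics, the sketch below shows roughly what one replication of such a simulation can look like in Python, using inverse-probability-of-treatment weighting as the conditioning method (one of several methods a study like this examines). A true confounder drives both selection and the outcome, but the propensity score is fit to an error-prone version of it. The variable names, reliability value, and effect size are illustrative assumptions, not values taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One illustrative replication: a confounder X drives both treatment assignment
# and the outcome, but the analyst observes only a noisy version W = X + e.
# Reliability and effect sizes are hypothetical, not from the study.
rng = np.random.default_rng(42)
n, true_effect, reliability = 2000, 0.5, 0.8

x = rng.normal(size=n)                                   # true confounder
w = x + rng.normal(scale=np.sqrt((1 - reliability) / reliability), size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-x)))            # selection depends on true X
y = true_effect * treat + x + rng.normal(size=n)         # outcome model

def iptw_estimate(covariate):
    """Estimate the treatment effect by weighting on a logistic propensity score."""
    ps = LogisticRegression().fit(covariate.reshape(-1, 1), treat)
    ps = ps.predict_proba(covariate.reshape(-1, 1))[:, 1]
    wts = np.where(treat == 1, 1 / ps, 1 / (1 - ps))     # IPTW weights
    return (np.average(y[treat == 1], weights=wts[treat == 1])
            - np.average(y[treat == 0], weights=wts[treat == 0]))

print("PS on true covariate:", round(iptw_estimate(x), 3))  # close to true_effect
print("PS on error-prone W :", round(iptw_estimate(w), 3))  # residual confounding biases this upward
```

Running many such replications and varying the reliability of the covariates is the basic logic behind a Monte Carlo study of this kind.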


Race Ethnicity and Education | 2015

Battling Inertia in Educational Leadership: CRT Praxis for Race Conscious Dialogue

Vonzell Agosto; Zorka Karanxha; Aarti P. Bellara

The purpose of this article is to illustrate how institutional racism is mediated by faculty negotiating power and privilege in the selection of Black (African American) women into an educational leadership preparation program. Critical race theory (CRT) praxis is used to analyze the faculty dynamics in the candidate selection process, situated in a race-neutral institutional culture. This reflective case study of an educational leadership department draws on qualitative data, such as field notes from faculty conversations, experiential knowledge, and memos, and quantitative data describing the disproportionate rejection of Black women applying to an educational leadership program in the US. Efforts to confront the race-neutral process, motivated by the higher rejection rate of Black women in comparison to their white counterparts, prompted some faculty to engage in race-conscious discourse. Faculty in departments of educational leadership who provoke race-conscious dialogue on how they are implicated in institutional racism will likely face risks they will need to (em)brace for the battle against inertia.


Archive | 2014

Modeling Social Justice Educational Leadership: Self-Assessment for Equity (SAFE)

Zorka Karanxha; Vonzell Agosto; Aarti P. Bellara

In this chapter we present a model of self-assessment for equity (SAFE) to guide faculty in leadership preparation programs concerned with equity and justice. The conceptual framework for this model derives from a review of the literature on social justice in educational leadership preparation and addresses the gap in the literature on the evaluation of educational leadership programs. We discuss how we conceptualize and implement SAFE in the context of the mission statement guiding the educational leadership program in which we work.


Journal of Educational and Behavioral Statistics | 2018

Does the Package Matter? A Comparison of Five Common Multilevel Modeling Software Packages

D. Betsy McCoach; Graham G. Rifenbark; Sarah D. Newton; Xiaoran Li; Janice Kooken; Dani Yomtov; Anthony J. Gambino; Aarti P. Bellara

This study compared five common multilevel modeling software packages (HLM 7, Mplus 7.4, R lme4 1.1-12, Stata 14.1, and SAS 9.4) via Monte Carlo simulation to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. The simulated data included slope variances that were zero or near zero in the population for two of the five random slopes. Generally, when yielding admissible solutions, all five software packages produced comparable and reasonably unbiased parameter estimates. However, noticeable differences among the five packages arose in terms of speed, convergence rates, and the production of standard errors for random effects, especially when the variances of these effects were zero in the population. The results of this study suggest that applied researchers should carefully consider which random effects they wish to include in their models. In addition, nonconvergence rates vary across packages, and models that fail to converge in one package may converge in another.
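As a concrete illustration of the kind of model being compared, the sketch below fits a two-level model with randomly varying slopes using Python's statsmodels, which is not one of the five packages in the study. The data are simulated so that one slope variance is zero in the population, the condition under which boundary estimates and convergence warnings are most likely. All names and values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative two-level data: 50 groups of 30 observations. The slope of x1
# varies across groups; the slope variance of x2 is zero in the population,
# mirroring the near-boundary conditions that trigger convergence problems.
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(50), 30)
x1 = rng.normal(size=groups.size)               # predictor with a randomly varying slope
x2 = rng.normal(size=groups.size)               # predictor whose slope variance is zero
u0 = rng.normal(scale=0.5, size=50)[groups]     # random intercepts
u1 = rng.normal(scale=0.3, size=50)[groups]     # random slopes for x1
y = 2 + (1 + u1) * x1 + 0.5 * x2 + u0 + rng.normal(size=groups.size)

df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "g": groups})

# Random intercept plus random slopes for both x1 and x2; estimating a variance
# that is truly zero often produces boundary estimates or a convergence warning.
model = smf.mixedlm("y ~ x1 + x2", df, groups=df["g"], re_formula="~x1 + x2")
fit = model.fit(method="lbfgs")
print(fit.summary())
```

Refitting the same simulated data in several packages, and tallying how often each converges and how close the variance estimates come to the generating values, is the essence of the comparison described above.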


American Journal of Clinical Pathology | 2016

Does Taking the Fellowship In-Service Hematopathology Examination and Performance Relate to Success on the American Board of Pathology Hematology Examination?

Sara A. Monaghan; Raymond E. Felgar; Melissa Kelly; Asma M. Ali; John Anastasi; Aarti P. Bellara; Henry M. Rinder; Rachel L. Sargent; Jay Wagner; Steven H. Swerdlow; Rebecca L. Johnson

OBJECTIVES: The biannual Fellow In-Service Hematopathology Examination (FISHE) assesses knowledge in five content areas. We examined how taking the FISHE, and performance on it, relate to outcomes on the first attempted American Board of Pathology Hematology subspecialty certifying examination (ABP-HE). METHODS: Pass rates of ABP-HE candidates who took the spring FISHE and of those who did not were compared. The likelihood of fellows passing the ABP-HE based on their percentiles on the FISHE was also assessed. RESULTS: ABP-HE candidates who took the spring FISHE had a higher pass rate (96.4%) than those who did not (76.1%, P < .001). Spring FISHE performance, including total percentile and percentiles in four of five FISHE content areas, was only a weak predictor of passing the ABP-HE. CONCLUSIONS: Candidates who take the spring FISHE do better on the ABP-HE than those who do not. Most fellows passed the first attempted ABP-HE regardless of FISHE performance. Whether this is due to fellows using the FISHE as a self-evaluation tool to identify and then correct knowledge deficiencies remains to be determined.
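For illustration only, the pass-rate comparison reported above is the kind of analysis a two-by-two chi-square test handles. The counts in the sketch below are hypothetical and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts (NOT the study's data), shown only to illustrate
# the mechanics of comparing pass rates between two groups of candidates.
#               passed  failed
took_fishe = [90, 10]
no_fishe = [60, 20]

chi2, p, dof, expected = chi2_contingency([took_fishe, no_fishe])
print(f"pass rate, took FISHE: {took_fishe[0] / sum(took_fishe):.1%}")
print(f"pass rate, no FISHE:   {no_fishe[0] / sum(no_fishe):.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```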


Archive | 2012

Propensity Score Analysis and Assessment of Propensity Score Approaches Using SAS® Procedures

Rheta E. Lanehart; Patricia Rodríguez de Gil; Eun Sook Kim; Aarti P. Bellara; Reginald S. Lee


The Teacher Educator | 2013

Pedagogies for Preservice Assessment Education: Supporting Teacher Candidates' Assessment Literacy Development

Christopher DeLuca; Teresa Chavez; Aarti P. Bellara; Chunhua Cao


Journal of Education and Training Studies | 2013

Youth Engagement in Electoral Activities: A Collaborative Evaluation of a Civic Education Project

Michael Berson; Liliana Rodríguez-Campos; Connie Walker-Egea; Corina Owens; Aarti P. Bellara

Collaboration


Dive into Aarti P. Bellara's collaborations.

Top Co-Authors

Eun Sook Kim, University of South Florida
Jeffrey D. Kromrey, University of South Florida
Vonzell Agosto, University of South Florida
Zorka Karanxha, University of South Florida
Diep Nguyen, University of South Florida
Susan T. Hibbard, University of South Florida
Reginald S. Lee, University of South Florida