Maria E. Hernández Finch
Ball State University
Publication
Featured research published by Maria E. Hernández Finch.
Educational and Psychological Measurement | 2013
W. Holmes Finch; Maria E. Hernández Finch
The assessment of test data for the presence of differential item functioning (DIF) is a key component of instrument development and validation. Among the many methods that have been used successfully in such analyses is the mixture modeling approach. Using this approach to identify the presence of DIF has been touted as potentially superior for gaining insights into the etiology of DIF, as compared to using intact groups. Recently, researchers have expanded on this work to incorporate multilevel mixture modeling, for cases in which examinees are nested within schools. The current study further expands on this multilevel mixture modeling for DIF detection by using a multidimensional multilevel mixture model that incorporates multiple measured dimensions, as well as the presence of multiple subgroups in the population. This model was applied to a national sample of third-grade students who completed math and language tests. Results of the analysis demonstrate that the multidimensional model provides more complete information regarding the nature of DIF than do separate unidimensional models.
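The mixture-modeling idea behind this line of work can be sketched in miniature: recover latent classes from item scores without using known group labels, then ask which item behaves most differently across the recovered classes. The sketch below is a deliberately simplified unidimensional, single-level toy using simulated continuous item scores and scikit-learn's GaussianMixture; it is not the multidimensional multilevel model the paper actually fits, and all simulated quantities are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate 5 continuous item scores for two latent classes (n = 400 each).
# Item 0 carries DIF: its mean shifts by an extra 1.5 in class B beyond
# the overall ability difference shared by all items.
ability_a = rng.normal(0.0, 1.0, size=(400, 1))
ability_b = rng.normal(0.5, 1.0, size=(400, 1))
items_a = ability_a + rng.normal(0, 0.5, size=(400, 5))
items_b = ability_b + rng.normal(0, 0.5, size=(400, 5))
items_b[:, 0] += 1.5  # DIF on item 0 only

X = np.vstack([items_a, items_b])

# Recover two latent classes without the true labels, then inspect how
# each item's mean differs between the recovered classes.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
gap = X[labels == 1].mean(axis=0) - X[labels == 0].mean(axis=0)
print(np.round(np.abs(gap), 2))  # item 0 should show the largest gap
```

The key property this illustrates is that the groups driving DIF are estimated from the data rather than taken as intact, observed groups.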
Journal of Educational and Psychological Consultation | 2016
Janay B. Sander; Maria E. Hernández Finch; Eric E. Pierson; Jared A. Bishop; Rachel L. German; Claire E. Wilmoth
This is a consensual qualitative research study of university faculty perceptions of methods and tools for teaching students the professional competency area of school-based psychological consultation, with special attention to cultural competence. The participants (n = 7) included faculty of school psychology programs located in the Northeast, South, Midwest, and Mountain regions of the United States. Participants were from programs serving urban, suburban, and rural settings and represented a wide range of consultation backgrounds, experiences, and theoretical orientations. The analysis revealed three major themes: general coverage of consultation skills and content, university tension with school setting needs, and specific hurdles and solutions to diversity training. This study also provided ideas on how trainers might overcome some of the barriers to addressing diversity.
Journal of Educational and Psychological Consultation | 2016
Janay B. Sander; Maria E. Hernández Finch; Markeda Newell
The purpose of this special issue is to highlight current research on multicultural consultation in school psychology, broadly defined. Given the need to advance multicultural consultation, articles addressing research, practice, and training issues are included. The articles in this issue collectively bring to the fore the importance of voice and perspective in the culturally sensitive consultation process. For example, trainers must consider not only the voices of their students, but also their own voices in how they teach consultation. Researchers have to consider their own voices as well as the voices of the participants and how those participants might perceive a dominant or more powerful group, including researchers. Therefore, these articles reflect the complex nature of engaging in culturally sensitive consultation, due in large measure to the process of being able to incorporate differing, sometimes competing, perspectives and then using that information to transform consultation services, research, and teaching.
Gifted Child Quarterly | 2014
Maria E. Hernández Finch; Kristie L. Speirs Neumeister; Virginia H. Burney; Audra L. Cook
This study provides baseline data to assist researchers in conducting future studies exploring the developmental trajectories of young gifted learners on measures of cognitive ability and achievement. The study includes common neuropsychological tests associated with preliteracy and the early-reading process as well as markers for inattention and executive functioning skills. Using a sample of kindergarteners identified as gifted, the results indicated that despite intelligence quotient scores in the very superior range and high means on traditional achievement measures, great variability was observed within the sample on several benchmarking measures of cognitive, academic, neuropsychological, and executive functioning. Additionally, only an average mean score on a visual–motor processing neuropsychological measure was obtained. Four neuropsychological measures provided important loadings in canonical correlations with achievement: Oromotor Sequences, Repetition of Nonsense Words, Beery-Buktenica Developmental Test of Visual-Motor Integration scores, and Speeded Naming. In addition to providing baseline data on these measures, the results also offer support for defining giftedness as a developmental process.
International Journal of Quantitative Research in Education | 2016
W. Holmes Finch; Maria E. Hernández Finch; Melissa Singh
Missing data are a major issue with which researchers working on large-scale assessments must contend. Such research efforts frequently collect a wide array of variables, including dichotomous, ordinal, nominal, normal, skewed, and count variables. This variation in data distributions renders many recommended methods for missing data imputation less than optimal, because they assume a single joint probability model for all variables. This simulation study compared four imputation methods: random forest imputation (RF), multivariate imputation by chained equations (MICE), and combinations of the two methods using either recursive partitioning trees (MICE-RPT) or random forests (MICE-RF). Results reveal that data imputed with RF, MICE, MICE-RF, and MICE-RPT yield more accurate parameter estimates than data treated with listwise deletion (LD), and that MICE-RF and MICE-RPT are associated with more accurate estimates than MICE or RF alone. Implications of these results and recommendations for practice are discussed.
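A rough Python analogue of this comparison can be sketched with scikit-learn's IterativeImputer, which implements chained-equations imputation; swapping in a RandomForestRegressor as the per-variable model approximates the MICE-RF idea. The data-generating model, missingness rate, and estimator settings below are illustrative assumptions, not the study's actual simulation design.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Simulate three correlated variables, then punch ~20% missing values.
n = 300
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
x3 = -0.5 * x1 + 0.4 * x2 + rng.normal(scale=0.6, size=n)
X_true = np.column_stack([x1, x2, x3])
X_miss = X_true.copy()
mask = rng.random(X_miss.shape) < 0.2
X_miss[mask] = np.nan

# Chained-equations imputation with a linear per-variable model
# (MICE-like default) versus a random forest per-variable model
# (MICE-RF-like).
mice_like = IterativeImputer(random_state=0).fit_transform(X_miss)
rf_like = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    random_state=0,
).fit_transform(X_miss)

rmses = {}
for name, X_imp in [("linear", mice_like), ("forest", rf_like)]:
    rmses[name] = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2))
    print(f"{name} RMSE on imputed cells: {rmses[name]:.3f}")
```

Because the true values behind the holes are known here, imputation accuracy can be scored directly, mirroring how simulation studies of this kind evaluate competing methods.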
Frontiers in Psychology | 2018
Holmes Finch; Brian F. French; Maria E. Hernández Finch
A primary underlying assumption for researchers using a psychological scale is that scores are comparable across individuals from different subgroups within the population. In the absence of invariance, the validity of these scores for inferences about individuals may be questionable. Factor invariance testing refers to the methodological approach to assessing whether specific factor model parameters are indeed equivalent across groups. Though much research has investigated the performance of several techniques for assessing invariance, very little work has examined how these methods perform under small-sample and non-normally distributed latent trait conditions. Therefore, the purpose of this simulation study was to compare invariance assessment Type I error and power rates between (a) the normal-based maximum likelihood estimator, (b) a skewed-t distribution maximum likelihood estimator, (c) Bayesian estimation, and (d) the generalized structured component analysis model. The study focused on a 1-factor model. Results of the study demonstrated that the maximum likelihood estimator was robust to violations of normality of the latent trait, and that the Bayesian and generalized component models may be useful in particular situations. Implications of these findings for research and practice are discussed.
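The simulation logic here, estimating an empirical Type I error rate by repeatedly testing a true null hypothesis, can be illustrated with a far simpler analogue than invariance testing: a two-sample t-test applied to skewed data. The sketch below shares only the structure of such a study; the distribution, sample sizes, and replication count are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def empirical_type_i(n_per_group, n_reps=2000, alpha=0.05):
    """Proportion of replications in which a two-sample t-test rejects
    a true null (both groups drawn from the same skewed distribution)."""
    rejections = 0
    for _ in range(n_reps):
        a = rng.chisquare(df=3, size=n_per_group)  # skewed, identical dists
        b = rng.chisquare(df=3, size=n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_reps

rates = {n: empirical_type_i(n) for n in (20, 100)}
for n, rate in rates.items():
    print(f"n = {n}: empirical Type I error ≈ {rate:.3f}")
```

An empirical rate near the nominal alpha indicates robustness to the non-normality; inflation or deflation would signal that the procedure's error control breaks down, which is exactly what studies like this check for each estimator.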
Journal of Experimental Education | 2017
W. Holmes Finch; Maria E. Hernández Finch
Single-subject (SS) designs are popular in educational and psychological research. There exist several statistical techniques designed to analyze such data and to address the question of whether an intervention has the desired impact. Recently, researchers have suggested that generalized additive models (GAMs) might be useful for modeling nonlinear effects that are common with SS designs. This study sought to extend the use of GAM from SS to a research design in which individuals may be placed in separate groups and receive different interventions. Results of the simulation study found that using a mixed model form of GAM (GAMM) resulted in higher power for detecting actual effects in the population than was true for either GAM or a Bayesian GAM estimator. Thus, GAMMs are recommended for use with SS designs when interventions are expected to induce nonlinear relationships between time and the outcome variable and individuals receive different treatments.
International Journal of Testing | 2016
W. Holmes Finch; Maria E. Hernández Finch; Brian F. French
Differential item functioning (DIF) assessment is key in score validation. When DIF is present, scores may not accurately reflect the construct of interest for some groups of examinees, leading to incorrect conclusions from the scores. Given rising immigration and the increased reliance of educational policymakers on cross-national assessments such as the Programme for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), and the Progress in International Reading Literacy Study (PIRLS), DIF with regard to native language is of particular interest in this context. However, given differences in language and culture, assuming similar cross-national DIF may lead to mistaken assumptions about the impact of immigration status and native language on test performance. The purpose of this study was to use model-based recursive partitioning (MBRP) to investigate uniform DIF in PIRLS items across European nations. Results demonstrated that DIF based on mother's language was present for several items on a PIRLS assessment, but that the patterns of DIF were not the same across all nations.
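Model-based recursive partitioning itself has no standard Python implementation, so the sketch below illustrates the target concept, uniform DIF, with a simpler and widely used alternative: a logistic-regression likelihood-ratio test for a group effect on item responses at equal ability. All simulated quantities (sample size, effect sizes, the group variable) are assumptions for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Simulated dichotomous item: the response depends on ability and, under
# uniform DIF, also on group membership at equal ability levels.
n = 1000
ability = rng.normal(size=n)
group = rng.integers(0, 2, size=n)   # e.g., test language spoken at home
logit = 1.2 * ability - 0.8 * group  # -0.8 = uniform DIF against group 1
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def loglik(X, y):
    # Near-unpenalized logistic fit (large C), scored by log-likelihood.
    m = LogisticRegression(C=1e6).fit(X, y)
    p = m.predict_proba(X)[:, 1]
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Likelihood-ratio test: does adding the group term improve model fit?
ll_base = loglik(ability.reshape(-1, 1), y)
ll_full = loglik(np.column_stack([ability, group]), y)
lr_stat = 2 * (ll_full - ll_base)
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR = {lr_stat:.1f}, p = {p_value:.4g}")  # small p flags uniform DIF
```

MBRP generalizes this idea: instead of testing one prespecified grouping variable, it searches over covariates (such as nation or home language) for splits where the item model's parameters change.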
Psychology in the Schools | 2015
Maria E. Hernández Finch; W. Holmes Finch; Constance E. McIntosh; Cynthia M. Thomas; Erin Maughan
Archive | 2014
W. Holmes Finch; Maria E. Hernández Finch; Lauren E. Moss