Edward W. Wolfe
Virginia Tech
Publications
Featured research published by Edward W. Wolfe.
American Journal of Evaluation | 2008
Patrick D. Converse; Edward W. Wolfe; Xiaoting Huang; Frederick L. Oswald
This study examines response rates for mixed-mode survey implementation involving mail and e-mail/Web components. Using Dillman's Tailored Design Method, 1,500 participants were sent a survey either (a) via mail with a follow-up contact via e-mail that directed them to a Web-based questionnaire or (b) via e-mail that directed them to a Web-based questionnaire with a follow-up contact via mail. Results indicate that these mixed-mode procedures produce moderately high response rates. However, the mail survey tended to be more effective than the e-mail/Web survey when serving either as the initial contact or as the follow-up contact. These results suggest that survey implementation involving mail followed by e-mail/Web, or even mail-only approaches, may result in larger samples than implementation involving e-mail/Web followed by mail.
Research Quarterly for Exercise and Sport | 2006
Nicholas D. Myers; Deborah L. Feltz; Kimberly S. Maier; Edward W. Wolfe; Mark D. Reckase
This study provided initial validity evidence for multidimensional measures of coaching competency derived from the Coaching Competency Scale (CCS). Data were collected from intercollegiate men's (n = 8) and women's (n = 13) soccer and women's ice hockey teams (n = 11). The total number of athletes was 585. Within teams, a multidimensional internal model was retained in which motivation, game strategy, technique, and character building comprised the dimensions of coaching competency. Some redundancy among the dimensions was observed. Internal reliabilities ranged from very good to excellent. Practical recommendations for the CCS are given in the Discussion section.
Measurement in Physical Education and Exercise Science | 2005
Nicholas D. Myers; Edward W. Wolfe; Deborah L. Feltz
This study extends validity evidence for the Coaching Efficacy Scale (CES; Feltz, Chase, Moritz, & Sullivan, 1999) by evaluating the psychometric properties of the instrument using previously collected data on high school and college coaches from the United States. Data were fitted to a multidimensional item response theory model. Results offered some supporting evidence concerning validity based on the fit of a multidimensional conceptualization of coaching efficacy (i.e., motivation, game strategy, technique, and character building) as compared to a unidimensional conceptualization of coaching efficacy (i.e., total coaching efficacy), the fit of the majority of items to the measurement model, the internal consistency of coaching efficacy estimates, and the precision of total coaching efficacy estimates. However, concerns exist relating to the rating scale structure, the precision of multidimensional coaching efficacy estimates, and the misfit of a few items to the measurement model. Practical recommendations both for future research with the CES and for the development of a revised instrument are provided.
Research Quarterly for Exercise and Sport | 2008
Nicholas D. Myers; Deborah L. Feltz; Edward W. Wolfe
This study extended validity evidence for measures of coaching efficacy derived from the Coaching Efficacy Scale (CES) by testing the rating scale categorizations suggested in previous research. Previous research provided evidence for the effectiveness of a four-category (4-CAT) structure for high school and collegiate sports coaches; it also suggested that a five-category (5-CAT) structure may be effective for youth sports coaches, because they may be more likely to endorse categories on the lower end of the scale. Coaches of youth sports (N = 492) responded to the CES items with a 5-CAT structure. Across rating scale category effectiveness guidelines, 32 of 34 indicators (94%) provided support for this structure. Data were condensed to a 4-CAT structure by collapsing responses in Category 1 (CAT-1) and Category 2 (CAT-2), as sketched below. Across rating scale category effectiveness guidelines, 25 of 26 indicators (96%) provided support for this structure. Findings provided confirmatory, cross-validation evidence for both the 5-CAT and 4-CAT structures. For empirical, theoretical, and practical reasons, the authors concluded that the 4-CAT structure was preferable to the 5-CAT structure when CES items are used to measure coaching efficacy. This conclusion is based on the findings of this confirmatory study and the more exploratory findings of Myers, Wolfe, and Feltz (2005).
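The collapsing step might look like the following minimal sketch (illustrative only; the 1-5 coding of categories and the example responses are assumptions, not taken from the article):

import numpy as np

# Hypothetical 5-category CES responses coded 1-5 (illustrative data only).
responses_5cat = np.array([1, 2, 2, 3, 5, 4, 1])

# Collapse CAT-1 and CAT-2 into a single category; original categories 3-5
# shift down by one, yielding a 4-category structure coded 1-4.
responses_4cat = np.where(responses_5cat <= 2, 1, responses_5cat - 1)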
Measurement in Physical Education and Exercise Science | 2006
Nicholas D. Myers; Edward W. Wolfe; Deborah L. Feltz; Randall D. Penfield
This study (a) provided a conceptual introduction to differential item functioning (DIF), (b) introduced the multifaceted Rasch rating scale model (MRSM) and an associated statistical procedure for identifying DIF in rating scale items, and (c) applied this procedure to previously collected data from American coaches who responded to the Coaching Efficacy Scale (CES; Feltz, Chase, Moritz, & Sullivan, 1999). In this study, an item displayed DIF if coaches from different groups were more or less likely to endorse that item once coaches were matched on the efficacy of interest, where Motivation, Game Strategy, Technique, and Character Building efficacies defined coaching efficacy. Coach gender and level coached were selected as the grouping variables. None of the Technique and Character Building items exhibited DIF based on coach gender or level coached. One of the Motivation items and one of the Game Strategy items exhibited DIF based on coach gender. Two of the Motivation items exhibited DIF based on level coached.
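For readers unfamiliar with the MRSM, one generic way to write a Rasch rating scale model with an item-by-group term for DIF detection (a sketch only; the article's exact parameterization may differ) is

\ln\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = \theta_n - \delta_i - \tau_k - \gamma_{ig},

where \theta_n is coach n's level on the efficacy of interest, \delta_i is the difficulty of item i, \tau_k is the threshold for rating scale category k, and \gamma_{ig} is an item-by-group interaction; an item is flagged for DIF when \gamma_{ig} differs meaningfully across groups (here, coach gender or level coached).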
Research Quarterly for Exercise and Sport | 2006
Nicholas D. Myers; Edward W. Wolfe; Kimberly S. Maier; Deborah L. Feltz; Mark D. Reckase
This study extended validity evidence for multidimensional measures of coaching competency derived from the Coaching Competency Scale (CCS; Myers, Feltz, Maier, Wolfe, & Reckase, 2006) by examining use of the original rating scale structure and testing how measures related to satisfaction with the head coach within teams and between teams. Motivation, game strategy, technique, and character building comprised the dimensions of coaching competency. Data were collected from athletes (N = 585) nested within intercollegiate men's (g = 8) and women's (g = 13) soccer and women's ice hockey (g = 11) teams (G = 32). Validity concerns were observed for the original rating scale structure and the predicted positive relationship between motivation competency and satisfaction with the coach between teams. Validity evidence was offered for a condensed post hoc rating scale and the predicted relationship between motivation competency and satisfaction with the coach within teams.
Educational and Psychological Measurement | 2008
Randall D. Penfield; Nicholas D. Myers; Edward W. Wolfe
Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed, and the mathematical relationship between the three forms is established. Parametric and contingency table approaches for assessing the three forms of invariance are presented, and the application of the parametric and contingency table approaches to a real data set is described. The invariance effect estimates observed for the parametric and contingency table approaches are consistent with the theoretical equivalence of the two approaches.
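As background, a standard statement of the PCM (a sketch; notation may differ from the article) gives the probability that person n with ability \theta_n responds in category x of item i with step parameters \delta_{ik} as

P(X_{ni} = x) = \frac{\exp\left[\sum_{k=0}^{x}(\theta_n - \delta_{ik})\right]}{\sum_{h=0}^{m_i}\exp\left[\sum_{k=0}^{h}(\theta_n - \delta_{ik})\right]}, \qquad \sum_{k=0}^{0}(\theta_n - \delta_{ik}) \equiv 0,

so the forms of invariance discussed in the article can be framed as statements about which quantities derived from these parameters (individual steps, the item as a whole, or cumulative thresholds) remain equal across examinee groups.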
Educational and Psychological Measurement | 2007
Edward W. Wolfe; Steven G. Viger; Denis W. Jarvinen; Jay Linksman
As large-scale accountability testing becomes more refined, statewide standards are being created so that teachers and students can create learning and assessment targets that are aligned with statewide testing systems. An important hurdle in assisting teachers in their efforts to create standards-aligned classroom assessments is creating feelings of comfort and confidence in the teachers as they learn the relevant skills. Hence, an important component of a professional development program designed to foster these skills is an instrument that yields measures useful for planning development activities for the teachers. This article summarizes a validation study of scores from the Teacher Assessment Efficacy Scale. The analyses indicated support for the six dimensions around which the items were developed, showed that subscale scores exhibit adequate reliabilities, and showed that gains are realized in ways one would expect when teachers engage in professional development activities designed to increase their proficiency in creating standards-aligned classroom assessments.
International Journal of Audiology | 2007
Antony Joseph; Jerry L. Punch; Mark R. Stephenson; Nigel Paneth; Edward W. Wolfe; William J. Murphy
This experiment investigated the effect of small-group versus individual hearing loss prevention (HLP) training on the attenuation performance of passive insert-type hearing protection devices (HPDs). A subject-fit (SF) methodology, which gave naive listeners access only to the instructions printed on the HPD product label, was used to determine real-ear attenuation at threshold (REAT) at third-octave noise bands between 125 and 8000 Hz. REAT measurements were augmented by use of the Hearing Loss Prevention Attitude-Belief (HLPAB) survey, a field-tested self-assessment tool developed by the National Institute for Occupational Safety and Health (NIOSH). Participants were randomly assigned to one of four experimental groups, consisting of 25 listeners each, in a controlled behavioral-intervention trial. There were two types of HPDs (formable and premolded) and two training formats (individual and small group). A short multimedia program, including a practice session, was presented to all 100 listeners. Results showed that training had a significant effect on real-ear attenuation and attitude for both HPDs, but, importantly, there was no difference between small-group and individual training.
Applied Psychological Measurement | 2008
Edward W. Wolfe
The SAS macro RBF.sas (Rasch bootstrap fit) uses a bootstrap procedure (Efron, 1981; Hesterberg, Moore, Monaghan, Clipson, & Epstein, 2005) to estimate critical values for item and person fit statistics, as well as additional item quality indices, produced by the WINSTEPS computer program (Linacre, 2007), Version 3.63.2 or later. Specifically, the program computes bootstrap estimates of several percentile values for four person and item fit statistics (the unweighted and weighted mean square and the standardized unweighted and weighted mean square; Wright & Masters, 1982; Wright & Stone, 1979), item-total score correlations, item slope estimates, and item lower asymptote estimates. The macro first estimates parameters for the dichotomous, rating scale, or partial credit Rasch model for a user-specified data set. Based on these estimated parameters, it then creates B simulated data sets (with B specified by the user) containing the same numbers of items and persons, estimates parameters and computes the target indices for each simulated data set, and compiles those indices across the B simulated data sets. Finally, the 2.5th, 5th, 95th, and 97.5th percentiles of each target index are computed for each simulated data set, and the averaged values are output to a text file as bootstrap critical values, along with flags indicating whether individual persons or items in the original data set are more extreme than these critical values.
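As a rough illustration of the bootstrap logic described above (not the RBF.sas macro itself; estimate_rasch, simulate_data, and fit_statistic are hypothetical placeholders for a Rasch estimation routine, a data simulator, and a fit-statistic calculator), the procedure might be sketched as follows:

import numpy as np

def bootstrap_fit_criticals(fit_statistic, estimate_rasch, simulate_data,
                            responses, B=100):
    # Estimate Rasch parameters for the observed person-by-item data.
    item_params, person_params = estimate_rasch(responses)

    # Simulate B data sets from those parameters, re-estimate, and collect
    # the 2.5th, 5th, 95th, and 97.5th percentiles of the fit statistic
    # within each simulated data set.
    percentile_sets = []
    for _ in range(B):
        simulated = simulate_data(item_params, person_params)
        sim_item_params, sim_person_params = estimate_rasch(simulated)
        stats = fit_statistic(simulated, sim_item_params, sim_person_params)
        percentile_sets.append(np.percentile(stats, [2.5, 5, 95, 97.5]))

    # Average the percentile cut points across the B replicates to obtain
    # bootstrap critical values.
    criticals = np.mean(percentile_sets, axis=0)

    # Flag observed items (or persons) whose fit statistics fall outside the
    # outermost critical values.
    observed = fit_statistic(responses, item_params, person_params)
    flagged = (observed < criticals[0]) | (observed > criticals[3])
    return criticals, flagged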