Schuyler W. Huck
University of Tennessee
Publications
Featured research published by Schuyler W. Huck.
Research Quarterly for Exercise and Sport | 1986
B. Don Franks; Schuyler W. Huck
Responses invited. While the issues raised in the following paper by Franks and Huck are not new ones, reminding readers to consider these points is important. Exactly how a scholar handles these issues depends on the questions asked, the resources available, and other considerations. I invite reaction to this paper. If any are received, I will offer Franks and Huck the opportunity to respond and publish the set together in a subsequent issue under “Dialogue.”
Journal of Experimental Education | 1973
Schuyler W. Huck; Howard M. Sandler
In analyzing the data associated with a Solomon Four-Group Design, the posttest scores are initially subjected to a 2 × 2 factorial ANOVA, with the two main effects being (a) pretest versus no pretest and (b) treatment versus no treatment. Campbell and Stanley (2) maintain that if this analysis yields non-significant F-ratios for both the main effect of pretesting and the pretesting-treatment interaction, it might be advantageous to reanalyze the data from the two pretested groups with an analysis of covariance (using pretest and posttest scores as the covariate and criterion variables, respectively). Assuming a high pretest-posttest correlation, the more powerful covariance analysis might pick up a significant treatment effect which was not found in the initial analysis. Although Campbell and Stanley were correct in noting that this use of covariance must be preceded by a non-significant pretesting-treatment interaction, the present authors argue that the covariance analysis is completely valid even if there is a ...
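As a concrete illustration of the two-step analysis described above, the following sketch runs a 2 × 2 factorial ANOVA on hypothetical posttest scores and then, for the two pretested groups only, an analysis of covariance with the pretest as covariate. The data, the variable names, and the use of Python's statsmodels are illustrative assumptions, not drawn from the original article.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical scores for the four Solomon groups, in this row order:
# (pretested, treated), (pretested, untreated),
# (unpretested, treated), (unpretested, untreated).
data = pd.DataFrame({
    "pretested": [1] * 10 + [0] * 10,
    "treated":   ([1] * 5 + [0] * 5) * 2,
    "pretest":   [52, 48, 55, 50, 49, 51, 47, 53, 50, 46] + [None] * 10,
    "posttest":  [61, 57, 66, 60, 58, 54, 50, 57, 53, 49,
                  63, 58, 64, 59, 60, 52, 49, 55, 51, 48],
})

# Step 1: 2 x 2 factorial ANOVA on the posttest scores.
factorial = smf.ols("posttest ~ C(pretested) * C(treated)", data=data).fit()
print(sm.stats.anova_lm(factorial, typ=2))

# Step 2: if the pretesting main effect and the pretesting-by-treatment
# interaction are non-significant, reanalyze the two pretested groups with
# the pretest serving as the covariate (ANCOVA).
pretested_groups = data[data["pretested"] == 1]
ancova = smf.ols("posttest ~ pretest + C(treated)", data=pretested_groups).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```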
Physiology & Behavior | 1989
Kathleen A. Lawler; Schuyler W. Huck; Linda B. Smalley
This research is an assessment of the physiological correlates of Type A behavior in college-aged women. Subjects were monitored while they took a midterm statistics examination; the dependent variables were systolic and diastolic blood pressure, heart rate, and heart rate variability. Type A or B behavior was assessed with the student form of the Jenkins Activity Survey. The results indicated that Type A women had higher levels of systolic blood pressure and heart rate, and lower levels of heart rate variability. Thus, when the stressor was a genuine examination, Type A behavior in young women was associated with increased physiological response levels compared to Type Bs, a finding consistent with the hypothesis that Type A behavior is associated with sympathetic nervous system activity.
Human Movement Science | 2015
David D. Laughlin; Jeffrey T. Fairbrother; Craig A. Wrisberg; Arya Alami; Leslee A. Fisher; Schuyler W. Huck
This study examined the self-control behaviors of participants learning a 3-ball cascade juggle. Participants chose when they would receive one of four types of instructional assistance: (a) instructions; (b) video demonstration; (c) knowledge of performance (KP); and (d) knowledge of results (KR). Juggling proficiency was divided into three categories based on catches per attempt during retention and transfer testing. In general, participants decreased their requests for instructions and video demonstration throughout acquisition. For the most proficient performers, requests for KR increased over practice. Post-experimental interviews revealed that participants requested KR after primarily good attempts and KP after both good and bad attempts. Participant-reported reasons for requesting feedback included the confirmation of success (KR) and identification of technique flaws (KP). Overall, the findings suggest that self-control behaviors are more complex than previously demonstrated and that participants use self-control differently depending upon the type of assistance available, individual preferences, and learning needs.
Journal of Experimental Education | 1973
Schuyler W. Huck; James D. Long
The efficacy of behavioral objectives in improving student achievement was assessed in a college setting. The Ss were nineteen senior and graduate students enrolled in a research course in educational psychology. Ss randomly assigned to two groups were separated before treatment. One group received a list of precise instructional objectives, while the other group discussed an unrelated topic. The two groups were reunited, exposed to the same lecture, and then administered a 12-item quiz covering the day’s lesson. Results of an analysis of covariance revealed that behavioral objectives had a desirable effect on student achievement.
Educational and Psychological Measurement | 1978
Schuyler W. Huck; Robert G. Malgady
In an earlier article published in this journal, Gordon (1973) demonstrated how to compute an ANOVA F-ratio from nothing more than a table of means and standard deviations. Here, it is shown how to accomplish this same goal in two-factor designs. To point out the value of the simple formulas presented, a table from a recently published article is found to contain a large error.
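The sketch below gives one standard way to recover the F-ratios of a balanced two-factor ANOVA from nothing more than the cell means, cell standard deviations, and the common cell size. It is consistent with the kind of computation the abstract describes, though not necessarily the exact formulas Huck and Malgady present, and the 2 × 2 table of summary statistics at the end is hypothetical.

```python
import numpy as np

def two_factor_f_from_summary(means, sds, n):
    """F-ratios for a balanced a x b design, reconstructed from the table of
    cell means, cell standard deviations, and the (common) cell size n."""
    means = np.asarray(means, dtype=float)   # shape (a, b)
    sds = np.asarray(sds, dtype=float)       # shape (a, b)
    a, b = means.shape
    grand = means.mean()

    # Between-cells sums of squares built from the cell means alone.
    ss_a = b * n * ((means.mean(axis=1) - grand) ** 2).sum()
    ss_b = a * n * ((means.mean(axis=0) - grand) ** 2).sum()
    ss_cells = n * ((means - grand) ** 2).sum()
    ss_ab = ss_cells - ss_a - ss_b

    # Within-cells (error) sum of squares built from the cell SDs alone.
    ss_within = ((n - 1) * sds ** 2).sum()

    df_a, df_b = a - 1, b - 1
    df_ab = df_a * df_b
    df_within = a * b * (n - 1)
    ms_within = ss_within / df_within

    return {
        "F_A": (ss_a / df_a) / ms_within,
        "F_B": (ss_b / df_b) / ms_within,
        "F_AB": (ss_ab / df_ab) / ms_within,
    }

# Hypothetical 2 x 2 table of cell means and SDs with n = 10 per cell.
print(two_factor_f_from_summary(
    means=[[12.0, 15.0], [11.0, 18.0]],
    sds=[[3.1, 2.8], [3.0, 3.2]],
    n=10,
))
```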
Educational and Psychological Measurement | 1992
Schuyler W. Huck
For many who deal with correlation (as students, teachers, or applied researchers), the connection between group heterogeneity and the magnitude of Pearson's r is difficult to pin down. Confusion abounds because factors that increase score variability do not have a uniform effect on r. Three such factors are considered in this paper, with the point made that an increase in σx (and/or in σy) can be associated with an increase or a decrease in r, or possibly with no change in r whatsoever! A helpful distinction is also drawn between (a) properties of the persons or objects that define one's population of interest and (b) properties of the numbers assigned to those persons or objects (or a sample of them) as a result of the measurement process.
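The following sketch illustrates the abstract's central point with purely hypothetical numbers: two different ways of enlarging the spread of the X scores that push Pearson's r in opposite directions.

```python
import numpy as np

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

# A small illustrative data set (hypothetical values), nearly collinear.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
print("baseline:      sd(x) = %.2f, r = %.3f" % (x.std(ddof=1), pearson_r(x, y)))

# Adding extreme cases that follow the same trend increases sd(x) and r.
x_on = np.append(x, [-1.0, 7.0])
y_on = np.append(y, [-1.0, 7.0])
print("on-trend:      sd(x) = %.2f, r = %.3f" % (x_on.std(ddof=1), pearson_r(x_on, y_on)))

# Adding extreme cases that run against the trend also increases sd(x),
# yet r drops sharply.
x_off = np.append(x, [-1.0, 7.0])
y_off = np.append(y, [7.0, -1.0])
print("against-trend: sd(x) = %.2f, r = %.3f" % (x_off.std(ddof=1), pearson_r(x_off, y_off)))
```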
Journal of Educational and Behavioral Statistics | 1985
Schuyler W. Huck; Sheldon B. Clark; Gipsie B. Ranney
Classroom demonstrations, if well designed, can help students gain insights into statistical concepts and phenomena. Unfortunately, however, some instructors choose not to use this instructional device for fear that the data generated will turn out to be “uncooperative”; other instructors use demonstrations but use them unscientifically, ending up with data sets that either yield no insights or constitute “overkill.” After discussing four kinds of demonstrations for which a “proper N” can and should be computed, we present three possible approaches for determining how much data are needed for the demonstration to have a reasonable probability of success. Examples from the literature are used to illustrate the need for a more scientific approach to this form of instruction.
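One way to compute a “proper N” of the kind discussed above is to simulate the demonstration and estimate its probability of success at candidate sample sizes. The sketch below does this for an assumed scenario, a two-group comparison with a 0.8 SD effect demonstrated via a t test; it is an illustrative approach, not necessarily one of the three approaches presented in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def success_probability(n_per_group, effect_size, reps=2000, alpha=0.05):
    """Estimate, by simulation, the probability that a two-group classroom
    demonstration 'works' (yields p < alpha), assuming normal scores
    separated by the given standardized effect size."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

# Search for a 'proper N' per group, given an assumed effect of 0.8 SD and a
# desired success probability of about 0.90.
for n in range(5, 61, 5):
    p = success_probability(n, effect_size=0.8)
    print(f"n per group = {n:2d}: estimated success probability = {p:.2f}")
```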
Journal of Experimental Education | 1972
Schuyler W. Huck
The analysis of covariance serves the functions of (a) adjusting group means on a dependent variable to account for mean differences among the groups on a concomitant variable, and (b) increasing power by “explaining away” part of the within-group variability. Although most educational researchers are familiar with the first function of covariance, many are unaware of the second. The author shows, through a simple numerical example, how covariance can be useful even when treatment groups have identical covariate means. Data from an actual experiment are used to demonstrate the same point. Finally, the assumption of random assignment of Ss is discussed to support the author’s contention that the main value of covariance is increased power through reduced within-group variability.
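The numerical sketch below parallels the author's point: the two groups have identical covariate means, so the covariate cannot adjust the group means, yet including it sharply reduces the within-group (error) variability and raises the F-ratio for the treatment effect. The data and the use of Python's statsmodels are hypothetical illustrations, not the example from the article.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: both groups have identical covariate means, so the
# covariate cannot "adjust" the group means -- it can only soak up
# within-group variability in y.
data = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "x":     [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],     # same covariate mean in A and B
    "y":     [3.1, 4.9, 7.2, 8.8, 11.0,          # roughly y = 2x + noise
              4.0, 6.1, 8.0, 10.2, 11.9],        # plus about +1 point for group B
})

anova = smf.ols("y ~ C(group)", data=data).fit()
ancova = smf.ols("y ~ x + C(group)", data=data).fit()

print("ANOVA (no covariate):")
print(sm.stats.anova_lm(anova, typ=2))
print("ANCOVA (x as covariate):")
print(sm.stats.anova_lm(ancova, typ=2))
```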
Educational Psychologist | 1979
Robert G. Malgady; Josephine A. Amato; Schuyler W. Huck
A common statistical error in educational psychology involves the failure to treat language materials (e.g., words, sentences, prose passages) as a random effect in the analysis of variance. The fallacy of treating language items as a fixed effect limits the generalizability of research findings to the particular items used in an experiment, thus calling into question the scientific worth of such studies. A review of social science citations suggests that few researchers are aware of the problem outside such areas as psycholinguistics, verbal learning, and cognitive psychology. A review of 35 studies in educational psychology revealed that this error was committed in 32 of the 35 studies. A possible methodological solution to the problem is discussed, along with several statistical solutions. The language-as-fixed-effect fallacy is discussed as a special case of a more general problem involving the identification of relevant sampling variables re...
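For readers unfamiliar with the statistical side of this issue, the sketch below shows one classic remedy from the psycholinguistics literature: computing a by-subjects F (F1), a by-items F (F2, which treats items as random), and combining them into min F' = F1·F2 / (F1 + F2). The simulated data and variable names are hypothetical, and this is only one of several possible solutions, not necessarily the one the authors recommend.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical recall scores: 20 subjects x 12 words, with the 12 words split
# evenly between two conditions (e.g., concrete vs. abstract words).
n_subjects, n_items = 20, 12
condition = np.repeat([0, 1], n_items // 2)              # condition of each word
scores = (rng.normal(10, 2, (n_subjects, 1))             # subject effect
          + rng.normal(0, 1.5, (1, n_items))             # word (item) effect
          + 1.0 * condition                              # condition effect
          + rng.normal(0, 1, (n_subjects, n_items)))     # residual noise

# F1: by-subjects analysis (items treated as fixed) -- each subject's mean per
# condition, compared with a paired t-test; F1 = t squared.
subj_means_c0 = scores[:, condition == 0].mean(axis=1)
subj_means_c1 = scores[:, condition == 1].mean(axis=1)
f1 = stats.ttest_rel(subj_means_c1, subj_means_c0).statistic ** 2

# F2: by-items analysis (items treated as random) -- each word's mean over
# subjects, compared with an independent-groups t-test; F2 = t squared.
item_means = scores.mean(axis=0)
f2 = stats.ttest_ind(item_means[condition == 1],
                     item_means[condition == 0]).statistic ** 2

# min F' combines the two analyses and is never larger than either F alone.
min_f_prime = (f1 * f2) / (f1 + f2)
print(f"F1 = {f1:.2f}, F2 = {f2:.2f}, min F' = {min_f_prime:.2f}")
```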