Edwin T. Cornelius
Ohio State University
Publications
Featured research published by Edwin T. Cornelius.
Journal of Applied Psychology | 1979
Phillip J. Decker; Edwin T. Cornelius
Several recruiting sources for obtaining new workers, used by an insurance company, a bank, and a professional abstracting service, were compared in terms of their relationship to later job survival. Types of jobs studied included clerical, managerial, professional, and sales. Significant differences were found among the various recruiting sources in their relationship to later employee turnover. These findings are consistent with previous empirical results and suggest the importance of the area for further research. The empirical relationship between recruiting source and subsequent tenure with an organization has been investigated and reported in two separate articles (Gannon, 1971; Reid, 1972). The results suggest that applicants referred through informal methods (e.g., recommended by friends, relatives, or other employees) tend to remain with the organization longer than applicants recruited through formal methods (e.g., newspaper advertising and employment agencies). The purpose of this study was to test the generalizability of the findings from the Gannon and Reid studies to samples of employees collected several years later, in order to assess the desirability of further psychological investigation of this phenomenon.
Organizational Behavior and Human Performance | 1982
Karen S. Lyness; Edwin T. Cornelius
This study compared three judgment strategies used to determine composite performance ratings, based on information varied along three, six, or nine dimensions in a factorial design. College students (N = 270) rated written descriptions of the performance of hypothetical college instructors, using numerical rating scales and salary increase estimates. The entire procedure was repeated on two occasions. The three judgment strategies were compared in terms of intrarater reliability across occasions, intrarater convergence across rating methods, and interrater agreement, using both correlations and mean absolute deviations as dependent measures. The results based on mean absolute deviations supported the predicted overall superiority of the decomposed judgment strategy with combination by algorithm. The results based on correlations indicated that a simple holistic strategy was as effective as the decomposed judgment approach. Both sets of results indicated that a decomposed judgment strategy followed by a clinical overall evaluation is a particularly ineffective method for making performance evaluations. This finding has important implications since the decomposed-clinical strategy is similar to the approach often used in actual performance rating situations.
Psychometrika | 1977
Robert C. MacCallum; Edwin T. Cornelius
A Monte Carlo study was carried out to investigate the ability of ALSCAL to recover true structure inherent in simulated proximity measures. The nature of the simulated data varied according to (a) number of stimuli, (b) number of individuals, (c) number of dimensions, and (d) level of random error. Four aspects of recovery were studied: (a) SSTRESS, (b) recovery of true distances, (c) recovery of stimulus dimensions, and (d) recovery of individual weights. Results indicated that all four measures were rather strongly affected by random error. Also, SSTRESS improved with fewer stimuli in more dimensions, but the other three indices behaved in the opposite fashion. Most importantly, it was found that the number of individuals, over the range studied, did not have a substantial effect on any of the four measures of recovery. Practical implications and suggestions for further research are discussed.
Applied Psychological Measurement | 1979
Robert C. MacCallum; Edwin T. Cornelius; Timothy F. Champney
Several questions are raised concerning differences between traditional metric multiple regression, which assumes all variables to be measured on interval scales, and nonmetric multiple regression, which treats variables measured on any scale. Both models are applied to 30 derivation and cross-validation samples drawn from two sets of empirical data composed of ordinally scaled variables. Results indicate that the nonmetric model is, on the average, far superior in fitting derivation samples but that it exhibits much more shrinkage than the metric model. The metric technique fits better than the nonmetric in cross-validation samples. In addition, results produced by the nonmetric model are more unstable across repeated samples. A probable cause of these results is presented, and the need for further research is discussed.
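The shrinkage described here is the drop in fit that occurs when regression weights estimated in one sample are applied unchanged to a second sample from the same population. As a rough illustration only (the article's nonmetric, monotone-regression model is not reproduced, and all data below are simulated), a minimal Python sketch of derivation-sample versus cross-validation fit for the metric model:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(y, y_hat):
    """Proportion of criterion variance reproduced by the predictions."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def sample(n):
    """Draw one simulated sample of four predictors and a noisy criterion."""
    X = rng.normal(size=(n, 4))
    y = X @ np.array([0.5, 0.3, 0.2, 0.1]) + rng.normal(scale=1.0, size=n)
    return np.column_stack([np.ones(n), X]), y  # prepend an intercept column

# Derivation sample: estimate the (metric) least-squares weights.
X_der, y_der = sample(40)
b, *_ = np.linalg.lstsq(X_der, y_der, rcond=None)

# Cross-validation sample: apply the derivation-sample weights unchanged.
X_val, y_val = sample(40)

print("derivation R^2:      ", round(r_squared(y_der, X_der @ b), 3))
print("cross-validation R^2:", round(r_squared(y_val, X_val @ b), 3))  # usually lower: shrinkage
```

In the study, this derivation-to-cross-validation gap was much larger for the nonmetric model, which is why the metric model cross-validated better despite its poorer fit in the derivation samples.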
Applied Psychological Measurement | 1986
Edwin T. Cornelius
CORRECTR corrects an observed correlation for range restriction, for unreliability in either measure, or for both. CORRECTR also builds confidence intervals around both the observed and the estimated true validities at the 90%, 95%, and 99% levels. The program applies the corrections in the sequential order advocated by Hunter, Schmidt, and Jackson (1982), and it can perform any number of corrections without restarting. Although the program is particularly useful for researchers and practitioners conducting test validation studies, it is appropriate in any setting in which an estimate is needed of the true relationship between two variables, unaffected by measurement error and range restriction. The program asks the user for the observed correlation, the type of correction desired, and other data, depending upon the application. For instance, the estimated degree of range restriction is entered if the user desires correction for range restriction; reliability estimates, and whether these estimates are based on population values or are estimated from the same sample, are entered if the user desires corrections for unreliability. Sample sizes are entered if the user wants confidence intervals calculated. Output includes the observed correlation, the corrected correlation, and the three confidence intervals for both correlations. An option allows for printed output as well as output on the screen.
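The corrections described are standard psychometric formulas. The Python sketch below is a minimal illustration, not the CORRECTR program itself: it applies the classical correction for attenuation and the Thorndike Case II correction for direct range restriction, and builds confidence intervals with the Fisher z transform. The function names, the order in which the two corrections are applied, and the use of Fisher z intervals are illustrative assumptions rather than details taken from the program.

```python
import math

def correct_for_unreliability(r, rxx=1.0, ryy=1.0):
    """Disattenuate an observed correlation for measurement error in either
    variable (classical correction for attenuation)."""
    return r / math.sqrt(rxx * ryy)

def correct_for_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction, where
    u = SD(unrestricted) / SD(restricted) on the predictor."""
    return (r * u) / math.sqrt(1.0 + r ** 2 * (u ** 2 - 1.0))

def fisher_ci(r, n, z_crit=1.96):
    """Confidence interval for a correlation via the Fisher z transform.
    z_crit = 1.645, 1.96, 2.576 gives roughly 90%, 95%, 99% coverage.
    Applying this to an already-corrected r is only a rough approximation."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Illustrative run: observed validity .30, criterion reliability .80,
# range-restriction ratio u = 1.5, sample of 120 cases.
r_obs, n = 0.30, 120
r_true = correct_for_range_restriction(correct_for_unreliability(r_obs, ryy=0.80), u=1.5)
print(round(r_true, 3), fisher_ci(r_obs, n), fisher_ci(r_true, n))
```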
Applied Psychological Measurement | 1986
Edwin T. Cornelius
It is often necessary to generate observed cases according to a bivariate distribution with a known population correlation. This capability is essential for Monte Carlo work in a variety of areas and is also useful for instructors of statistics who need to generate examples for teaching purposes. TESTR is an interactive program that allows users to do this on a microcomputer. TESTR uses the random number function (RND) available in BASIC and applies the correction specified by Gilder (1980, p. 30) to produce normally distributed z scores. Each z score is then used to generate
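Gilder's (1980) correction and the BASIC implementation are not reproduced here, but the underlying idea can be sketched: two independent standard normal deviates are combined so that the resulting pair has a chosen population correlation. The Python below uses the textbook conditional-normal construction rather than TESTR's method; the function name, rho, and sample size are illustrative.

```python
import math
import random

def correlated_pair(rho):
    """Return one (x, y) pair from a bivariate normal distribution with
    zero means, unit variances, and population correlation rho."""
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    return z1, rho * z1 + math.sqrt(1.0 - rho ** 2) * z2

# Draw a sample and check that the sample correlation is near rho.
rho, n = 0.60, 5000
pairs = [correlated_pair(rho) for _ in range(n)]
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
sxy = sum((x - mx) * (y - my) for x, y in pairs)
sxx = sum((x - mx) ** 2 for x, _ in pairs)
syy = sum((y - my) ** 2 for _, y in pairs)
print(sxy / math.sqrt(sxx * syy))  # should be close to 0.60
```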
Applied Psychological Measurement | 1979
Edwin T. Cornelius
EXTAB2 is a computer program designed to calculate any number of empirical, institutional-type expectancy tables from a set of data (Lawshe & Bolda, 1958; Lawshe, Bolda, Brune, & Auclair, 1958). The user may specify tables to be constructed on the basis of the bivariate relationship between pairs of variables in the input array, or may specify that multiple variables are to be combined into a composite and compared to a criterion variable. In the latter instance, EXTAB2 allows the user to
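As a minimal sketch of what an empirical expectancy table contains (not the EXTAB2 program itself), the Python below ranks cases on a predictor, splits them into score bands, and reports the percentage of cases in each band that meet the criterion. The band scheme, names, and data are hypothetical.

```python
def expectancy_table(predictor, success, n_bands=5):
    """Empirical, institutional-type expectancy table: the percentage of
    cases in each predictor score band that meet the criterion."""
    ranked = sorted(zip(predictor, success), key=lambda pair: pair[0])
    band_size = max(1, len(ranked) // n_bands)
    table = []
    for b in range(n_bands):
        start = b * band_size
        stop = len(ranked) if b == n_bands - 1 else start + band_size
        chunk = ranked[start:stop]
        if not chunk:
            continue
        pct_success = 100.0 * sum(s for _, s in chunk) / len(chunk)
        table.append((chunk[0][0], chunk[-1][0], round(pct_success, 1)))
    return table  # (band low score, band high score, % meeting the criterion)

# Hypothetical data: a test score and a 0/1 "satisfactory performance" flag.
scores  = [52, 61, 47, 70, 66, 58, 74, 49, 63, 68, 55, 71, 60, 45, 77]
success = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
for low, high, pct in expectancy_table(scores, success, n_bands=3):
    print(f"scores {low}-{high}: {pct}% successful")
```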
Journal of Applied Psychology | 1980
Edwin T. Cornelius; Karen S. Lyness
Journal of Applied Psychology | 1976
Barry Alan Friedman; Edwin T. Cornelius
Personnel Psychology | 1979
Edwin T. Cornelius; Theodore J. Carron; Marianne N. Collins