John Tisak
Bowling Green State University
Publication
Featured research published by John Tisak.
Psychometrika | 1990
William Meredith; John Tisak
As a method for representing development, latent trait theory is presented in terms of a statistical model containing individual parameters and a structure on both the first and second moments of the random variables reflecting growth. Maximum likelihood parameter estimates and associated asymptotic tests follow directly. These procedures may be viewed as an alternative to standard repeated measures ANOVA and to first-order auto-regressive methods. As formulated, the model encompasses cohort-sequential designs and allows for period or practice effects. A numerical illustration using data initially collected by Nesselroade and Baltes is presented.
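The core idea of this growth-curve approach can be sketched in a minimal simulation (all data and parameter values below are hypothetical, and the per-person least-squares step is only a crude stand-in for the paper's maximum likelihood estimation): each person has their own intercept and slope, and the mean trajectory is recovered by fitting a line to each person's repeated measures and averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 200
times = np.arange(5.0)                           # 5 equally spaced occasions

# Simulate individual growth curves: y_it = a_i + b_i * t + e_it
intercepts = rng.normal(10.0, 2.0, n_people)     # person-specific levels
slopes = rng.normal(1.5, 0.5, n_people)          # person-specific growth rates
errors = rng.normal(0.0, 1.0, (n_people, len(times)))
y = intercepts[:, None] + slopes[:, None] * times + errors

# Fit a straight line to each person's trajectory by least squares,
# then average: the means estimate the group-level growth parameters.
X = np.column_stack([np.ones_like(times), times])
coefs = np.linalg.lstsq(X, y.T, rcond=None)[0]   # shape (2, n_people)
mean_intercept, mean_slope = coefs.mean(axis=1)
print(round(mean_intercept, 2), round(mean_slope, 2))
```

With 200 simulated people the averaged estimates land close to the generating values (10 and 1.5), illustrating why individual-level growth parameters, rather than occasion means alone, are the natural target of such models.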
Personality and Individual Differences | 2002
Carlla S. Smith; Simon Folkard; Robert A. Schmieder; Luis F Parra; Evelien Spelten; Helena Almiral; R.N Sen; S Sahu; Lisa M. Perez; John Tisak
Morningness, or the preference for morning or evening activities, is an individual difference in circadian rhythms with potential applications in optimizing work schedules, sports performance, and academic achievement. This study addressed some neglected issues in morningness research. First, we propose a morningness self-report measure, the Preferences Scale, to remedy deficiencies in existing scale content and format. Second, because little is known about group or population differences in morningness, we collected data from university students in six countries. Both classical and structural equation modeling (SEM) analyses indicated that the psychometric properties of the Preferences Scale are adequate and comparable with an established morningness instrument, the Composite Scale. The SEM analyses also showed considerable group consistency in the morningness construct. However, mean differences were found across countries, suggesting that people in more temperate climates perceive themselves to be more morning-oriented than their peers in less temperate climates.
Journal of Management | 1994
John Tisak; Carlla S. Smith
We define difference scores as the difference between distinct but conceptually linked constructs. This definition should not be confused with change scores, or the difference between a single construct measured at two or more points in time. In the disciplines of education and human development, the attack against difference scores has stemmed from their use for assessing change on multiple measurements of some within-person characteristic (e.g., changes in abilities or skills) over time, usually in response to some type of treatment. Critics note that these change or difference scores must have some variability to function as good predictors (or outcomes), which they often do not, and that they frequently correlate with the initial level of the characteristic measured. As a consequence of these problems, several researchers (e.g., Cronbach & Furby, 1970; Lord, 1958; Werts & Linn, 1970) suggest that difference measures should be abandoned in favor of other techniques, such as residualized gain scores and regression-based estimates of change (Cronbach & Furby, 1970). Other researchers (e.g., Rogosa, Brandt & Zimowski, 1982; Rogosa & Willett, 1983; Zimmerman, Brotohusodo & Williams, 1981), however, disagree with this position, claiming that difference scores provide unique information on intraindividual change and should not be dismissed simply because they may not always be useful. The historical arguments against difference scores that have arisen in educational and developmental research, however, often do not directly translate to management research. For example, there are notable distinctions between the difference scores criticized by psychometricians and the difference scores used by organizational researchers. Traditional psychometric arguments have mostly concerned change scores, or scores on identical variables over time. These measures are usually single pre- and post-scores collected from individual subjects.
The difference scores collected by organizational researchers are often composite (multiple item), multiple source measures collected at a single point in time. Many of the measurement concerns about single item, single source,
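The reliability concern at the center of this debate follows from a standard classical-test-theory result: the reliability of a difference score D = X − Y depends on the component reliabilities and, crucially, on the correlation between the components. A small function makes the relationship concrete (the numeric inputs below are hypothetical):

```python
def difference_score_reliability(sx, sy, rxx, ryy, rxy):
    """Classical-test-theory reliability of the difference D = X - Y.

    sx, sy   : standard deviations of the two component measures
    rxx, ryy : reliabilities of the components
    rxy      : correlation between the components
    """
    true_var = sx**2 * rxx + sy**2 * ryy - 2 * rxy * sx * sy
    total_var = sx**2 + sy**2 - 2 * rxy * sx * sy
    return true_var / total_var

# Equal-variance components: intercorrelation drags reliability down.
print(round(difference_score_reliability(1, 1, 0.80, 0.80, 0.50), 3))  # 0.6
print(round(difference_score_reliability(1, 1, 0.80, 0.80, 0.00), 3))  # 0.8
```

The example shows the critics' point (two components each with reliability .80 yield a difference with reliability .60 when they correlate .50) as well as the authors' rejoinder: when components are weakly correlated, the difference score's reliability need not be poor.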
Journal of Organizational Behavior | 1997
Carlla S. Smith; John Tisak; Susan E. Hahn; Robert A. Schmieder
Subjective or perceived control over job-related activities or events is a frequently measured construct in organizational stress research. Karasek (1985) assessed perceived control as both decision authority and skill discretion at work (job decision latitude). Ganster (1989b; Dwyer and Ganster, 1991) developed a multidimensional or general measure of worker control, as well as a specific measure of work predictability. Because little published psychometric data exist for these scales, we investigated the item-level measurement properties of Karasek's and Ganster's measures. We hypothesized two separate, two-factor solutions: decision authority and skill discretion for the job decision latitude scale, and general control and predictability for the work control scale. The dimensionality of both measures was assessed in multiple, independent samples using confirmatory factor analyses (LISREL) with maximum likelihood estimation. Simultaneous solutions across samples were used to determine the fit of the factor models to the data. The hypothesized two-factor solutions were confirmed for both Karasek's and Ganster's scales, although item refinement is indicated. We also investigated the relative independence between Karasek's and Ganster's scales and found a lack of independence between the general control and decision authority items in one sample.
Psychological Methods | 2000
John Tisak; Marie S. Tisak
The constancy or change of an attribute is important to most substantive areas of psychology. During the past decade, 2 independent methodological schools have developed statistical models for the depiction of longitudinal research. One, which might be called the European school, has created latent state-trait models. Alternatively, the American school has formulated models that go by the rubric of latent curve analysis or latent growth models. In this article, the authors integrate both approaches into a detailed unified latent curve and latent state-trait model (LC-LSTM) that includes the significant features from both schools. From the LC-LSTM framework, the permanency and ephemerality of psychological measures are discussed and the concepts of stability and reliability are reformulated. In addition, a comprehensive illustration on organization commitment is presented.
Aggressive Behavior | 1998
Dushka Crane-Ross; Marie S. Tisak; John Tisak
The goal of the current study was to determine whether aggressive and conventional rule-violating behaviors could be predicted by social-cognitive beliefs and values regarding aggression and conventional rule violations. The extent to which adolescents (N = 398; grades 9 through 12) engaged in both aggressive behavior and conventional school rule violations was assessed using self-ratings and peer nominations. Results indicated that aggressive and conventional rule-violating behaviors were predicted by (1) beliefs about the legitimacy of aggressive and convention-violating behavior; (2) values placed on the expected outcomes of these acts, such as negative self-evaluations, peer disapproval, and tangible rewards; and (3) beliefs about the effects of these acts on others. Furthermore, the results indicated that aggressive and conventional transgressions were predicted better by beliefs and values within the same social-cognitive domain than across domains. In contrast to females, male students committed more aggressive acts and conventional rule violations and reported beliefs and values that were more supportive of aggressive behavior and conventional rule-violating behavior. However, gender differences in beliefs and values were greater for aggressive acts than for conventional acts. The results support the need to distinguish between behavioral domains when attempting to predict social behavior.
Journal of Applied Psychology | 1991
Carlla S. Smith; John Tisak; Todd Bauman; Elizabeth Green
The fidelity of an English-to-Japanese translation of a circadian rhythm questionnaire was examined through simultaneous factor analysis in several populations (Jöreskog, 1971a). Results indicate significant differences in item responses between populations, although between-population convergence was obtained on one factor. Back translations revealed both major and minor content discrepancies between the original and translated scales, which preclude clearly separating linguistic or semantic and population differences in item responses. Within-population results based on classic measurement techniques were compared with results based on structural equation techniques (in the American sample only); each technique led to different data-based conclusions. On the basis of the structural equation results, refinements in the source (English) scale items are suggested.
Applied Psychological Measurement | 1996
John Tisak; Marie S. Tisak
The concepts of reliability and validity and their associated coefficients typically have been restricted to a single measurement occasion. This paper describes dynamic generalizations of reliability and validity that will incorporate longitudinal or developmental models, using latent curve analysis. Initially a latent curve model is formulated to depict change. This longitudinal model is then incorporated into the classical definitions of reliability and validity. This approach permits the separation of constancy or change from the indexes of reliability and validity. Statistical estimation and hypothesis testing can be achieved using standard structural equation modeling computer programs. These longitudinal models of reliability and validity are demonstrated on sociological and psychological data. Index terms: concurrent validity, dynamic models, dynamic true score, latent curve analysis, latent trajectory, predictive validity, reliability, validity.
Journal of Management | 1994
John Tisak; Carlla S. Smith
In his position paper, Edwards critiqued several of our comments concerning the reliability and validity of difference scores. We believe our differences of opinion occur not only because Edwards has endorsed historical arguments against difference scores, but also because he conceptualizes certain issues quite differently than we do. We address his major points of criticism and then reiterate (and perhaps clarify) our position. Edwards assumes that it is reasonable to assert a priori that difference scores will often exhibit poor reliabilities because the conditions under which poor reliabilities can occur (i.e., unreliable and highly positively intercorrelated component measures) are very common in empirical research. Although these circumstances may be common, they should not be sufficient to condemn the use of difference scores a priori because reliability may be empirically investigated and because, as we suggested, reliabilities can be improved. We take exception to Edwards’s statements, “. . . the reliability of a difference score should be evaluated not only in an absolute sense, but also in relation to viable alternatives, such as using both component measures jointly in multiple regression analysis . . . . If a difference score exhibits adequate reliability, then it is almost certain that its components will exhibit superior reliabilities, indicating that the latter should be used in place of the former.” To us, this presumes that the difference and component measures in question are conceptually interchangeable, a blanket assumption we are unwilling to make. For example, the concept of role conflict obtained from the differences between subordinate and supervisor job ratings is not the same as conceptualizations of the components of subordinate and supervisor job ratings. Also, we do not agree, given adequate difference score reliabilities, that difference scores should be discarded because their component measures show higher reliabilities. 
What about the theory being tested or research goals? Finally, notice that we and Edwards (1994) agree that response surfaces do not eliminate reliability problems. We disagree with Edwards's suggestion that the reliabilities of profile similarity measures can be "problematic" because dimensions are often formed by large numbers of heterogeneous items. Our position was never that
Multivariate Behavioral Research | 1996
Lance E. Anderson; Eugene F. Stone-Romero; John Tisak
The results of moderated multiple regression (MMR) are highly affected by the unreliability of the predictor variables (regressors). Errors-in-variables regression (EIVR) may remedy this problem as it corrects for measurement error in the regressors, and thus provides less biased parameter estimates. However, little is known about the properties of the EIVR estimators in the moderator variable context. The present study used simulation methods to compare the moderator variable detection capabilities of MMR and EIVR. Specifically, the study examined the bias and mean squared error of the MMR and EIVR estimates under varying conditions of sample size, reliability of the predictor variables, and intercorrelations among the predictor variables. Findings showed that EIVR estimates are superior to MMR estimates when sample size is high (i.e., at least 250) and the reliabilities of the predictors are high (i.e., rij ≥ .65). However, MMR appears to be the better strategy when reliabilities or sample size are low.
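The attenuation that motivates this comparison can be reproduced in a small simulation (the parameter values and the simple OLS implementation below are hypothetical illustrations, not the study's design): adding classical measurement error to the regressors biases the ordinary MMR estimate of the interaction term toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_b3 = 5000, 0.5                      # large n isolates bias from noise

# Latent predictors and an outcome with a genuine interaction effect
x1, x2 = rng.normal(size=(2, n))
y = 1.0 * x1 + 1.0 * x2 + true_b3 * x1 * x2 + rng.normal(scale=0.5, size=n)

def mmr_interaction(reliability):
    """OLS interaction estimate when regressors carry measurement error."""
    err_sd = np.sqrt((1 - reliability) / reliability)  # sets rxx to target
    o1 = x1 + rng.normal(scale=err_sd, size=n)         # observed versions
    o2 = x2 + rng.normal(scale=err_sd, size=n)
    X = np.column_stack([np.ones(n), o1, o2, o1 * o2])
    return np.linalg.lstsq(X, y, rcond=None)[0][3]

b3_clean = mmr_interaction(0.999)   # nearly error-free regressors
b3_noisy = mmr_interaction(0.65)    # reliability at the study's cutoff
print(round(b3_clean, 2), round(b3_noisy, 2))
```

With near-perfect reliability the interaction estimate sits near the true 0.5; at reliability .65 for each component it shrinks markedly (roughly by the product of the two reliabilities), which is the bias that errors-in-variables regression is designed to correct.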