
Publication


Featured research published by Malcolm James Ree.


Journal of Leadership & Organizational Studies | 2003

Leading Generation X: Do the Old Rules Apply?

Raul O. Rodriguez; Mark T. Green; Malcolm James Ree

The purpose of this study was to quantify generational preferences for leadership behavior. The dependent variables were preferences for leadership behaviors associated with five generational themes: (a) Fulfillment, (b) Flexibility, (c) Technology, (d) Monetary Benefits, and (e) Work Environment. The independent variables were (a) generation, (b) ethnicity, and (c) education. Quantitative data were gathered through completed surveys from 805 participants using a cross-sectional design. A MANOVA was used to test the effects of the independent variables on the dependent variables. Significant differences were found at the p < .05 level, including a difference in preference for leadership behavior between Baby Boomers and Generation X.


The Scientific Study of General Intelligence: Tribute to Arthur R. Jensen | 2003

The Ubiquitous Role of g in Training

Malcolm James Ree; Thomas R. Carretta; Mark T. Green

Publisher Summary: This chapter presents the literature examining relations between general cognitive ability, g, and an early part of occupational performance: training. The chapter begins with an explication of a common problem in the examination of human characteristics, confusing constructs with methods. It also reviews theories about the configuration (structure) of ability and evidence regarding the near identity of factor structure across sex and ethnic groups. Additionally, the concepts of specific abilities, non-cognitive characteristics, and knowledge, and their theoretical relations to training performance, are introduced. The chapter then examines the predictive validity of g for training and the incremental validity of specific cognitive abilities, job knowledge, and personality. Path models from studies examining causal relations among g, job knowledge, and training performance are also reviewed, and findings on differential validity and predictive bias are presented. The chapter concludes with an examination of the value of g as a predictor of organizational effectiveness.


Military Psychology | 2010

Factor Structure of the Air Force Officer Qualifying Test Form S: Analysis and Comparison with Previous Forms

Fritz Drasgow; Christopher D. Nye; Thomas R. Carretta; Malcolm James Ree

Due to its importance for assignment and classification in the U.S. Air Force, the Air Force Officer Qualifying Test (AFOQT) has received a substantial amount of research. Recently, the AFOQT was revised to reduce administrative burden and test-taker fatigue. However, the new version, the AFOQT Form S, was implemented without explicitly examining the latent structure of the exam. The current study examined the factor structure of Form S as well as its measurement equivalence across race- and sex-based groups. Results indicated that a bifactor model with a general intelligence factor and five content-specific factors fit the best. The measurement equivalence of the AFOQT across gender and racial/ethnic groups was also supported.


Military Psychology | 2000

Basic Attributes Test (BAT) Retest Performance

Thomas R. Carretta; Warren E. Zelenski; Malcolm James Ree

The Basic Attributes Test (BAT) contributes to a U.S. Air Force pilot selection composite known as the Pilot Candidate Selection Method (PCSM). When PCSM was operationally implemented in 1993, no retests were permitted on the BAT. To determine the effects of retesting on mean score change and reliability, the BAT was administered to 477 college students who were then retested after 2 weeks, 3 months, or 6 months. Several important findings were observed. First, about 70% of the students exhibited score improvement on retest, regardless of length of retest interval. Those who performed poorly on the 1st test generally exhibited larger improvements than those who performed well on the 1st test. Second, practice effects diminished as the length of the retest interval increased. For a 6-month retest interval, it was expected that the mean increase in PCSM scores would be about 6 percentile points. The results suggest that BAT retests could be permitted no less than 6 months after initial testing. Third, and very important, BAT scores demonstrated acceptable reliability. The reliability of the psychomotor composite ranged from .775 to .800, and the reliabilities for the other subtests ranged from .474 to .871.


International Journal of Selection and Assessment | 2011

The Observation of Incremental Validity Does Not Always Mean Unique Contribution to Prediction

Malcolm James Ree; Thomas R. Carretta

Statistical analyses require proper interpretation. Misinterpretation leads to a lack of understanding of the relationships among variables; worse, it can lead researchers and practitioners to infer the presence of a source of variance that is not present. This is especially true in regression, where increased predictiveness from an additional variable may be due to either common or specific variance. In many instances, erroneous interpretation leads to erroneous attribution of the source of the improved prediction. Three examples are provided, and methods for detecting specific variance are suggested.
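The distinction the abstract draws can be illustrated with a small simulation (a hypothetical sketch, not one of the paper's own examples): two tests that measure the same construct with independent error show incremental validity over one another, even though the second test carries no unique construct variance — the gain in R-squared comes entirely from shared variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent ability g drives both predictors and the criterion (toy model).
g = rng.standard_normal(n)
x1 = g + rng.standard_normal(n)   # test 1: g plus independent error
x2 = g + rng.standard_normal(n)   # test 2: same construct, new error only
y = g + rng.standard_normal(n)    # criterion

def r_squared(y, predictors):
    """R^2 from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_x1 = r_squared(y, [x1])        # theoretically 1/4
r2_both = r_squared(y, [x1, x2])  # theoretically 1/3
print(f"R^2 (x1 alone): {r2_x1:.3f}")
print(f"R^2 (x1 + x2):  {r2_both:.3f}")
# Delta-R^2 > 0 although x2 measures nothing beyond g: the incremental
# validity reflects common (shared) variance, not a unique source.
```

Observing a positive Delta-R-squared here and concluding that x2 taps a distinct ability would be exactly the misattribution the article warns against.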


Archives of Clinical Neuropsychology | 2001

Premorbid IQ estimates from a multiple aptitude test battery: regression vs. equating.

Daniel R. Orme; Malcolm James Ree; Paul Rioux

Estimation of premorbid abilities remains an integral part of neuropsychological evaluations. Several methods of indirect estimation have been suggested in the literature. Many of these methods are based in prediction via linear regression. Unfortunately, linear regression has the well-reported tendency to underpredict high IQ scores and overpredict low IQ scores. This can be shown to be an unavoidable statistical artifact of linear regression. We demonstrate a procedure to estimate premorbid IQ without the regression artifact. The procedure has two steps: confirmation of construct equivalence and psychometric equating. An example using real data is presented which shows the regression to the mean problem with prediction and compares it to the results from equating.
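The regression artifact the authors describe is easy to reproduce in a toy simulation (illustrative only; the variable names and noise levels are assumptions, not the paper's data): regression predictions have a smaller standard deviation than the criterion, so high scores are systematically pulled toward the mean, whereas equipercentile equating preserves the score distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Two imperfectly correlated measures of the same ability (toy data).
true_iq = rng.normal(100, 15, n)
test_a = true_iq + rng.normal(0, 8, n)   # multiple-aptitude battery score
test_b = true_iq + rng.normal(0, 8, n)   # IQ measure to be estimated

# Linear regression prediction of test_b from test_a.
slope, intercept = np.polyfit(test_a, test_b, 1)
pred_reg = intercept + slope * test_a

# Equipercentile equating: map each test_a score to the test_b score
# at the same percentile rank.
ranks = np.argsort(np.argsort(test_a)) / (n - 1)
pred_eq = np.quantile(test_b, ranks)

# Regression shrinks the predicted distribution (SD = r * SD(test_b)),
# so estimates at the high end are pulled toward the mean; equating
# reproduces the distributional shape of test_b.
print(f"SD of test_b:                 {test_b.std():.1f}")
print(f"SD of regression predictions: {pred_reg.std():.1f}")
print(f"SD of equated scores:         {pred_eq.std():.1f}")
```

The shrunken spread of the regression predictions is the "unavoidable statistical artifact" the abstract refers to; the equated scores retain the full range of the criterion.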


Personality and Individual Differences | 2003

Salvaging Construct Equivalence Through Equating

Malcolm James Ree; Thomas R. Carretta; James A. Earles

Turban, Sanders, Francis, and Osburn (1989) provided a two-step procedure for selecting a replacement for a currently used test without an expensive validation study. The two steps are confirmatory factor analysis and impact analysis. We evaluated this two-step procedure and found that it was possible to apply it and find that the replacement test was not acceptable. We provide an example of just such a negative outcome that was salvaged by the extra step of equipercentile equating. This step, added to Turban et al., required no additional investment other than an equating analysis on the extant data. We caution that equating does not create construct equivalence, but is a necessary procedure when tests measure identical constructs with differing distributional shapes.


Management Decision | 2003

Does sex of the leader and subordinate influence a leader’s disciplinary decisions?

Robert D. Bisking; Malcolm James Ree; Mark T. Green; Lamar Odom

This study, conducted in 2002, investigated the impact of sex on a leader's decisions in employee disciplinary situations. All leaders would like to believe that they make fair and impartial decisions, yet some of the most difficult decisions leaders make involve people (i.e., subordinates), because careers may be at risk. This research examined the impact sex may have on decisions made by leaders in four disciplinary scenarios: sexual harassment, drug test violation, insubordination, and theft. A scenario-based survey instrument, developed by the author, and the Bem Sex-Role Inventory (BSRI) short form were used in data collection. The data indicated that the sex of the employee influenced decision making, whereas the sex of the leader (i.e., the decision maker) did not. It was further determined that the BSRI Femininity and Masculinity scores were not accurate predictors of disciplinary actions.


International Journal of Selection and Assessment | 2016

Training Affects Variability in Training Performance Both Within and Across Jobs

Thomas R. Carretta; Malcolm James Ree; Mark S. Teachout

A partial test of a model of training performance variability was conducted. The study examined variability in cognitive ability and performance in job-specific training. Several studies have found mean differences in cognitive ability across jobs, and the variability in training outcomes among individuals within a job has been shown to vary across jobs; reduced variability in training outcomes is a measure of training effectiveness. For this study, data were grouped by job over several years. Participants were 116,310 enlistees enrolled in 108 US Air Force training specialties. Aptitude was measured by a verbal/math composite derived from the US military enlistment test, the Armed Services Vocational Aptitude Battery; training performance was assessed by written tests of job-related knowledge. Predictive validity of the verbal/math composite ranged from .124 to .836 across jobs, with a weighted mean of .691. Substantial differences were observed in the mean and variability of aptitude across jobs. Trainees in jobs with high aptitude requirements had higher mean aptitude and were less variable on aptitude than those in jobs with lower aptitude requirements; they also had higher mean training performance scores and were less variable on performance. Finally, training performance was much less variable than aptitude. Training reduced variability among trainees within jobs, producing a more homogeneous set of trainees on trained content, which is beneficial to on-the-job training. Support was found for part of the model.


Military Psychology | 2010

Instrument Development With Web Surveys and Multiple Imputations

LinChiat Chang; Lucinda Z. Frost; Susan Chao; Malcolm James Ree

This article outlines the development of a tool to assess psychosocial needs in U.S. Air Force (USAF) units. Instrument development began with the construction of an item taxonomy, pretesting procedures (Q-sort; cognitive interviews), and a pilot web survey to document internal consistency and test-retest reliability, followed by two large-scale Web surveys to derive a viable factor structure. Because certain items (e.g., child care) were not relevant to all respondents, multiple imputations to replace missing values were necessary before factor analyses could proceed. Exploratory factor analyses of data from the first Web survey revealed an eight-factor solution; and data from the second Web survey confirmed that the eight-factor solution could be replicated on an independent sample. Developed in accordance with psychometric principles, the final assessment tool tapped a range of psychosocial factors spanning quality of work life, community life, family functioning, and personal well-being.
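The imputation step described above can be sketched roughly as follows (a simplified hot-deck-style illustration on toy data, not the authors' actual procedure): missing item responses are replaced with random draws from each item's observed responses, the analysis input is computed on each completed dataset, and the results are pooled before factoring.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy survey: 200 respondents x 4 items on a 1-5 scale, with some
# items skipped (NaN), e.g. questions not relevant to a respondent.
data = rng.integers(1, 6, size=(200, 4)).astype(float)
data[rng.random(data.shape) < 0.15] = np.nan

m = 5                          # number of imputations
pooled = np.zeros((4, 4))
for _ in range(m):
    filled = data.copy()
    for j in range(filled.shape[1]):
        missing = np.isnan(filled[:, j])
        observed = filled[~missing, j]
        # Replace missing values with draws from the item's observed responses.
        filled[missing, j] = rng.choice(observed, missing.sum())
    pooled += np.corrcoef(filled, rowvar=False)
pooled /= m                    # average correlation matrix across imputations
print(np.round(pooled, 2))     # complete input for a subsequent factor analysis
```

Pooling across several imputed datasets, rather than imputing once, keeps the extra uncertainty from the missing responses from being ignored in the downstream factor analysis.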

Collaboration


Dive into Malcolm James Ree's collaborations.

Top Co-Authors

Mark T. Green | Our Lady of the Lake University
Paul D. Retzlaff | University of Northern Colorado
Phyllis Duncan | Our Lady of the Lake University
Lee J. Konczak | University of Washington
Mark S. Teachout | University of the Incarnate Word
Paul W. Thayer | North Carolina State University
Robert G. Jones | Missouri State University
S. Bartholomew Craig | North Carolina State University