Publication


Featured research published by Jason W. Osborne.


Educational Psychology | 2006

Stereotype Threat, Identification with Academics, and Withdrawal from School: Why the Most Successful Students of Colour Might Be Most Likely to Withdraw.

Jason W. Osborne; Christopher Walker

Claude Steele’s stereotype threat hypothesis posits that when there are negative stereotypes about the intellectual capacity of certain (stigmatised) groups, members of that group suffer aversive consequences; group members who are most strongly identified with the stigmatised domain in question (e.g., intellectual or academic ability) are those most likely to suffer the effects of stereotype threat. In education, it is widely held that personal investment in schooling should lead to more positive outcomes. However, highly‐invested individuals will most keenly experience the negative effects of stigma. Thus, among students of colour (who suffer a stigma of intellectual inferiority), those most at risk for withdrawing from school could be those most invested in schooling. This hypothesis was tested by measuring identification with academics among a group of incoming students at a racially diverse inner‐city high school in the Midwest USA. Regardless of race, the students who most strongly identified with academics (i.e., those who valued academics and considered them central to the self) had higher GPAs, lower levels of absenteeism, and fewer behavioural referrals. However, among students of colour the most strongly identified were more likely to withdraw, whereas identification with academics did not significantly influence the withdrawal of Caucasian students. These results highlight the importance of providing a supportive environment that diffuses stereotype threat for all students, even those who appear to be academically successful.


Educational Psychology | 2007

Linking Stereotype Threat and Anxiety

Jason W. Osborne

Claude Steele’s stereotype threat hypothesis has attracted significant attention in recent years. This study tested one of the main tenets of his theory, namely that stereotype threat increases individual anxiety levels and thereby hurts performance, using real‐time measures of physiological arousal. Subjects were randomly assigned to either high or low stereotype threat conditions involving a challenging mathematics task while physiological measures of arousal were recorded. Results showed significant physiological reactance (skin conductance, skin temperature, blood pressure) as a function of the stereotype threat manipulation. These findings are consistent with the argument that stereotype threat manipulations either increase or decrease situation‐specific anxiety, and they hold significant implications for thinking about fair assessment and testing practices in academic settings.


Journal of Obstetric, Gynecologic, & Neonatal Nursing | 2004

Validity and Reliability of the Neonatal Skin Condition Score

Carolyn H. Lund; Jason W. Osborne

OBJECTIVE: To demonstrate the validity and reliability of the Neonatal Skin Condition Score (NSCS) used in the Association of Women's Health, Obstetric and Neonatal Nurses (AWHONN) and the National Association of Neonatal Nurses (NANN) neonatal skin care evidence-based practice project.
SETTING: NICU and well-baby units in 27 hospitals located throughout the United States.
PARTICIPANTS: Site coordinators (N = 27) and neonates (N = 1,006) observed during both the pre- and postimplementation phases of the original neonatal skin care project.
METHOD: To assess reliability, two consecutive NSCS assessments on a single infant were analyzed. Site coordinators were contacted after the original project was concluded. Sites indicating that a single nurse scored all infant skin observations provided data that were used to evaluate intrarater reliability; sites using more than one nurse to score skin observations provided data that were used to assess interrater reliability. To assess validity, the following variables were used from the original data set: the NSCS, with three subscales for dryness, erythema, and breakdown; birth weight in grams; number of skin score observations for each infant; and the prevalence of infection, defined as a positive blood culture.
RESULTS: For intrarater reliability, 16 sites used a single nurse for all NSCS assessments (475 total assessments). For interrater reliability, 11 sites used multiple raters (531 total assessments). The NSCS demonstrated adequate reliability for each of the three subscales and for the total score, with percent agreement between scores ranging from 68.7% to 85.4% (intrarater) and 65.9% to 89% (interrater); all kappas were significant at p < .001 and were in the moderate range for reliability. The validity of the NSCS was demonstrated by the findings that smaller infants were 6 times more likely to have erythema (χ2(6) = 109.55, p < .0001) and approximately twice as likely to have the most severe breakdown (χ2(6) = 108.01, p < .0001). Infants with more observations (longer length of stay) had higher skin scores (odds ratio = 1.21, p < .0001), and an increased probability of infection was noted for infants with higher skin scores (odds ratio = 2.25, p < .0001).
CONCLUSIONS: The NSCS is reliable when used by single and multiple raters to assess neonatal skin condition, even across weight groups and racial groups. Validity of the NSCS was demonstrated by confirmation of the relationship of the skin condition scores with birth weight, number of observations, and prevalence of infection.
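The reliability portion of the analysis reduces to agreement statistics on paired ratings of the same infants. Below is a minimal sketch of computing percent agreement and Cohen's kappa, assuming scikit-learn is available; the rating vectors are invented for illustration and are not study data.

```python
# Sketch: percent agreement and Cohen's kappa for paired NSCS ratings.
# The rating vectors below are invented examples, not study data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Two consecutive assessments of the same infants (total NSCS scores, 3-9).
first_rating = np.array([3, 4, 5, 5, 6, 7, 4, 3, 8, 5])
second_rating = np.array([3, 4, 5, 6, 6, 7, 4, 4, 8, 5])

percent_agreement = np.mean(first_rating == second_rating) * 100
kappa = cohen_kappa_score(first_rating, second_rating)

print(f"Percent agreement: {percent_agreement:.1f}%")
print(f"Cohen's kappa: {kappa:.2f}")
```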


Frontiers in Psychology | 2010

Challenges for quantitative psychology and measurement in the 21st century.

Jason W. Osborne

Quantitative researchers exist in the exciting nexus where knowledge is created from raw data. Through quantitative study of the human condition, we hope to gain insight into basic, fascinating questions that humans have pondered for millennia. We (and the quantitative psychologists that have preceded us) are therefore optimists above all else. We believe that through systematic, rigorous study, we are able to gain insight into behavior, psychological processes, and important outcomes that ultimately can benefit the world and its inhabitants. Yet the promise of quantitative study of psychology is also one of its greatest challenges: demonstrating in a convincing way that quantification of behavioral, cognitive, biological, and psychological processes is valid, and that the analyses we subject the numbers to are honest efforts at elucidation rather than obfuscation.


Frontiers in Psychology | 2011

Random Responding from Participants is a Threat to the Validity of Social Science Research Results

Jason W. Osborne; Margaret Blanchard

Research in the social sciences often relies upon the motivation and goodwill of research participants (e.g., teachers, students) to do their best on low-stakes assessments of the effects of interventions. Research participants who are unmotivated to perform well can engage in random responding on outcome measures, which can cause substantial mis-estimation of effects, biasing results toward the null hypothesis. Data from a recent educational intervention study served as an example of this problem: participants identified as random responders showed substantially lower scores than other participants on tests during the study and failed to show growth in scores from pre- to post-test, while those not engaging in random responding showed much higher scores and significant growth over time. Furthermore, the hypothesized differences across instructional method were masked when random responders were retained in the sample but were significant when they were removed. We remind researchers in the social sciences to screen their outcome measures for random responding in order to improve the odds of detecting effects of their interventions.
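One simple way to screen for random responding on a multiple-choice outcome measure is to ask whether each participant's score is distinguishable from chance-level guessing. The sketch below illustrates that idea with a binomial test; the item count, guessing rate, and scores are assumptions for illustration, not the screening rule used in the study.

```python
# Sketch: flag participants whose test scores are consistent with guessing.
# Item count, chance rate, and the scores dict are illustrative assumptions.
from scipy.stats import binomtest

N_ITEMS = 40      # number of multiple-choice items on the outcome measure
P_CHANCE = 0.25   # expected accuracy when guessing among 4 options

scores = {"p01": 31, "p02": 12, "p03": 9, "p04": 27}  # items answered correctly

for pid, n_correct in scores.items():
    # One-sided test: is this score significantly above chance?
    result = binomtest(n_correct, N_ITEMS, P_CHANCE, alternative="greater")
    status = "keep" if result.pvalue < 0.05 else "possible random responder"
    print(f"{pid}: {n_correct}/{N_ITEMS} correct, p = {result.pvalue:.3f} -> {status}")
```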


Educational Psychology | 2008

Sweating the small stuff in educational psychology: how effect size and power reporting failed to change from 1969 to 1999, and what that means for the future of changing practices

Jason W. Osborne

Methodologists have written for years about the importance of attending to detail in quantitative research, yet there has been little research investigating methodological practice in the social sciences. This study assessed the extent to which innovations and practices are adopted by researchers voluntarily. In particular, I use the case of power analysis and effect size reporting as the primary example, but I also examine other reporting behaviours. Results show that while observed power and effect sizes in the educational psychology literature tend to be strong, researchers do not seem eager to adopt practices such as reporting effect sizes and power, nor do they tend to report testing of assumptions or the quality of their measurement. There is room for much improvement in how we attend to the basics of quantitative research, and it does not appear that persuasion and professional communication are effective in changing practice.
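For context, the two quantities at issue are straightforward to compute and report. Below is a minimal sketch of a standardized effect size (Cohen's d) and the corresponding power of a two-group comparison, assuming statsmodels is available; the group statistics are invented for illustration.

```python
# Sketch: Cohen's d and post-hoc power for a two-group comparison.
# Group means, SDs, and sample sizes are invented for illustration.
import numpy as np
from statsmodels.stats.power import TTestIndPower

mean1, mean2 = 3.4, 3.0
sd1, sd2 = 1.1, 1.2
n1, n2 = 60, 60

# Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (mean1 - mean2) / pooled_sd

power = TTestIndPower().power(effect_size=d, nobs1=n1, alpha=0.05,
                              ratio=n2 / n1, alternative="two-sided")
print(f"Cohen's d = {d:.2f}, power = {power:.2f}")
```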


Review of General Psychology | 2004

Identification with academics and violence in schools

Jason W. Osborne

The goal of this article is to present a theoretical model integrating identification with academics, motivation and engagement behaviors, academic outcomes, and violent or deviant behavior. Four different scenarios are presented in which students might be prone to engage in violent or undesirable behavior as a consequence of low or maladaptively high levels of identification with academics. The types of violent behavior likely to result from each scenario are discussed. Because domain identification has been shown to be malleable, this theoretical perspective leads directly to specific actions that might reduce the probability of undesirable behavior, and these actions are discussed.


Frontiers in Psychology | 2013

Is data cleaning and the testing of assumptions relevant in the 21st century?

Jason W. Osborne

You must understand fully what your assumptions say and what they imply. You must not claim that the “usual assumptions” are acceptable due to the robustness of your technique unless you really understand the implications and limits of this assertion in the context of your application. And you must absolutely never use any statistical method without realizing that you are implicitly making assumptions, and that the validity of your results can never be greater than that of the most questionable of these (Vardeman and Morris, 2003, p. 26).
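As a concrete illustration of the passage quoted above, here is a minimal sketch of making two "usual assumptions" explicit (normality within groups and homogeneity of variance) before comparing group means. The data are simulated, and the particular tests are one common choice, not a prescription from the paper.

```python
# Sketch: check two common assumptions before a two-group mean comparison.
# Data are simulated; this is illustrative, not the paper's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=80)
group_b = rng.normal(loc=54, scale=10, size=80)

# Normality within each group (Shapiro-Wilk)
for name, values in (("A", group_a), ("B", group_b)):
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk, group {name}: W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance (Levene's test)
w, p = stats.levene(group_a, group_b, center="median")
print(f"Levene: W = {w:.3f}, p = {p:.3f}")

# Only once the assumptions look tenable does the t-test carry its usual meaning.
t, p = stats.ttest_ind(group_a, group_b)
print(f"t-test: t = {t:.3f}, p = {p:.3f}")
```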


Manual Therapy | 2012

The Örebro Musculoskeletal Screening Questionnaire: Validation of a modified primary care musculoskeletal screening tool in an acute work injured population

Charles Philip Gabel; Markus Melloh; Brendan Burkett; Jason W. Osborne; Michael Yelland

The original Örebro Musculoskeletal Pain Questionnaire (original-ÖMPQ) was developed to identify patients at risk of developing persistent back pain problems and is also advocated for work-injured musculoskeletal populations. It has been critiqued for its informal, non-clinimetric development process and narrow focus. A modified version, the Örebro Musculoskeletal Screening Questionnaire (ÖMSQ), evolved from the original-ÖMPQ to broaden its application and improve practicality. This study evaluated and validated the ÖMSQ's clinimetric characteristics and predictive ability in a single-stage prospective observational cohort of 143 workers with acute musculoskeletal injuries recruited from ten Australian physiotherapy clinics. Baseline ÖMSQ scores were recorded concurrently with functional status and problem severity outcomes, then compared at six months along with absenteeism, costs, and recovery time to 80% of pre-injury functional status. The ÖMSQ demonstrated face and content validity with high reliability (ICC(2,1) = 0.978, p < 0.001). The score range was broad (40-174 ÖMSQ points) with a normalised distribution. Factor analysis revealed a six-factor model with internal consistency α = 0.82 (construct range α = 0.26-0.83). Practical characteristics included completion and scoring times (7.5 min), missing responses (5.6%), and Flesch-Kincaid readability (sixth-grade level, 70% reading ease). Cut-off scores for predictive ability were 114 ÖMSQ points for absenteeism, functional impairment, problem severity, and high cost; 83 for no absenteeism; and 95 for low cost. Baseline ÖMSQ scores correlated strongly with recovery time to 80% functional status (r = 0.73, p < 0.01). The ÖMSQ was validated prospectively in an acute work-injured musculoskeletal population. Its cut-off scores retain the predictive intent of the original-ÖMPQ and allow clinicians and insurers to identify patients at potentially high or low risk of unfavourable outcomes.
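The reported cut-offs lend themselves to a simple triage rule at intake. A minimal sketch follows, assuming the decision logic shown below; the exact combination of cut-offs and the sample scores are illustrative, not the authors' published algorithm.

```python
# Sketch: triage baseline OMSQ scores against the reported cut-offs.
# The decision logic and sample scores are illustrative assumptions.
HIGH_RISK_CUTOFF = 114     # absenteeism, functional impairment, severity, high cost
LOW_COST_CUTOFF = 95
NO_ABSENTEEISM_CUTOFF = 83

def triage(omsq_points: int) -> str:
    """Map a baseline OMSQ score to a coarse risk label."""
    if omsq_points >= HIGH_RISK_CUTOFF:
        return "high risk of unfavourable outcome"
    if omsq_points <= NO_ABSENTEEISM_CUTOFF:
        return "low risk (no absenteeism expected)"
    if omsq_points <= LOW_COST_CUTOFF:
        return "likely low cost"
    return "intermediate risk"

for score in (68, 90, 101, 130):
    print(f"OMSQ {score}: {triage(score)}")
```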


Journal of Obstetric, Gynecologic, & Neonatal Nursing | 2011

The SUCCESS Program for Smoking Cessation for Pregnant Women

Susan A. Albrecht; Karen Kelly‐Thomas; Jason W. Osborne; Semhar Ogbagaber

The Association of Women's Health, Obstetric, and Neonatal Nurses (AWHONN) developed an evidence-based practice program, Setting Universal Cessation Counseling Education and Screening Standards (SUCCESS), to educate nurses and other health care practitioners about smoking cessation interventions, increase the number of practitioners providing smoking cessation interventions, and deliver a smoking cessation intervention program to childbearing women who smoke. The development, implementation, and outcomes of the SUCCESS program are described.

Collaboration


Dive into Jason W. Osborne's collaborations.

Top Co-Authors

Charles Philip Gabel
University of the Sunshine Coast

Margaret Blanchard
North Carolina State University

Brendan Burkett
University of the Sunshine Coast

John L. Nietfeld
North Carolina State University

Li Cao
University of West Georgia

Amy Overbay
North Carolina State University