Publication


Featured research published by Joshua Wilson.


Behavioral Disorders | 2014

Writing Performance of Students with Emotional and/or Behavioral Disabilities

Nicholas A. Gage; Joshua Wilson; Ashley S. MacSuga-Gage

Students with emotional and/or behavioral disabilities (E/BD), including students with emotional disturbance and attention deficit hyperactivity disorder, receiving special education services perform significantly worse on academic performance measures than same-age peers. Researchers have focused on reading and math performance, while less is known about the writing performance of students with E/BD. We examined the writing performance of students with E/BD and compared their writing performance with that of students without disabilities. In addition, we examined the mediating effect of reading performance on differential writing performance for students with E/BD and typical peers. A sample of 114 students with E/BD was compared with both a full sample of 3,187 typical students and 114 students matched with the E/BD students using propensity score matching. Students were compared on their writing and reading performance on the Connecticut State Mastery Test. Results indicate that students with E/BD perform significantly worse than propensity score-matched peers in writing. Mediation analyses indicate that reading performance accounts for ~60% of the total variance of writing performance for students with E/BD. Implications and future directions for researchers are discussed.
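
The matching step described above can be illustrated with a minimal sketch: 1-to-1 nearest-neighbor propensity score matching on simulated data. This is not the study's code; the covariates, score distributions, and matching rule are all assumptions made for the example.

```python
# A minimal, hypothetical sketch of 1-to-1 nearest-neighbor propensity score
# matching, as described in the abstract above. All covariates and values
# are simulated for illustration; this is not the study's code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated covariates (say, a reading score and age) for the two groups.
n_ebd, n_typical = 114, 3187
X = np.vstack([
    rng.normal([45.0, 10.0], [10.0, 1.0], size=(n_ebd, 2)),      # E/BD group
    rng.normal([55.0, 10.0], [10.0, 1.0], size=(n_typical, 2)),  # typical group
])
is_ebd = np.r_[np.ones(n_ebd), np.zeros(n_typical)].astype(int)

# Step 1: estimate each student's propensity score, P(E/BD | covariates).
propensity = LogisticRegression().fit(X, is_ebd).predict_proba(X)[:, 1]

# Step 2: greedily match each E/BD student to the typical student with the
# closest propensity score, without replacement.
pool = list(np.where(is_ebd == 0)[0])
matches = {}
for i in np.where(is_ebd == 1)[0]:
    j = min(pool, key=lambda k: abs(propensity[k] - propensity[i]))
    matches[i] = j
    pool.remove(j)

print(f"matched comparison group size: {len(matches)}")  # 114
```

Matching each E/BD student to a typical student with a similar propensity score yields the 114-student comparison group, so that the subsequent writing comparison is less confounded by the covariates used in the model.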


Archive | 2012

Examining the Validity of Single-Occasion, Single-Genre, Holistically Scored Writing Assessments

Natalie G. Olinghouse; Tanya Santangelo; Joshua Wilson

This chapter presents a study exploring two key assumptions underlying the use of scores generated from single-genre, single-occasion, holistically scored writing assessments in large-scale assessments for evaluating progress toward state writing standards: (1) a student's writing ability across genres is sufficiently stable as to allow performance in a single genre (e.g., story) to represent a student's performance in alternative, unassessed genres (e.g., persuasive or informative); and (2) a holistic score is a valid and adequate measure of the multiple writing abilities it purports to represent. One hundred and five 5th-grade students completed three compositions: story, persuasive, and informative. Each composition was scored for essential genre elements, paragraph conventions and construction, sentence construction and conventions, and vocabulary. The results suggest that these writing assessments may provide limited information about a student's writing performance across the range of skills represented in a state's writing content standards. Keywords: holistic quality score; informative writing; large-scale writing assessments; performance assessment; persuasive writing; single-genre, single-occasion, holistically scored writing assessments; story writing


Elementary School Journal | 2015

Academic Standards for Writing: To What Degree Do Standards Signpost Evidence-Based Instructional Practices and Interventions?

Gary A. Troia; Natalie G. Olinghouse; Ya Mo; Lisa Hawkins; Rachel A. Kopke; Angela Chen; Joshua Wilson; Kelly A. Stewart

Though writing plays an important role in academic, social, and economic success, typical writing instruction generally does not reflect evidence-based practices (EBPs). One potential reason for this is limited signposting of EBPs in standards. We analyzed the content of writing standards from a representative sample of states and the Common Core State Standards (CCSS) for writing and language to determine to what degree EBPs were signposted, variability of this signposting, and the overlap of practices signposted in states’ standards and the CCSS. We found a few practices signposted fairly consistently (e.g., isolated components of writing process instruction) and others rarely so (e.g., use of text models), as well as great variability across standards, with some covering almost half of the EBPs and others far fewer. Only a few states’ writing standards overlapped considerably with the CCSS. We discuss the implications of these findings for teacher professional development and for evaluating standards.


Journal of School Psychology | 2018

Universal screening with automated essay scoring: Evaluating classification accuracy in grades 3 and 4

Joshua Wilson

The adoption of the Common Core State Standards and its associated assessments has placed increased focus on writing performance. Consequently, weak writers may be at risk of failing Common Core English language arts (ELA) assessments. Thus, the current study sampled a diverse group of third- and fourth-grade students (n=100 and 130, respectively) who were administered Fall and Spring writing screeners using the procedures of a Direct Assessment of Writing (DAW). Results were used to predict whether students did or did not attain grade-level standards as evaluated by the summative Smarter Balanced ELA assessment. Writing screeners were scored using the Project Essay Grade (PEG) automated essay scoring system. ROC curve analysis and logistic regression were used to evaluate screening models. Area under the ROC curve (AUC) values for grade 3 were in the acceptable range (Fall=0.74, Spring=0.75). AUCs approached or fell within the excellent range for grade 4 (Fall=0.79, Spring=0.83). Sensitivity-based and d-based cutpoints were selected, and measures of diagnostic accuracy, including sensitivity and specificity, are reported. Results indicate that automatically scored DAW has promise for universal screening for writing risk.
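
As an illustration of the screening analysis described above (not the study's actual code), here is a minimal Python sketch on simulated data: fit a logistic regression to an automated essay score, compute the AUC, and choose a sensitivity-based cutpoint. The variable names, score distributions, and the .90 sensitivity target are assumptions made for the example.

```python
# An illustrative sketch (not the study's code) of ROC-based screening
# evaluation: logistic regression on an automated essay score, AUC, and a
# sensitivity-based cutpoint. Data and the .90 target are simulated/assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Simulated fall screener scores and end-of-year outcome
# (1 = did not meet the grade-level ELA standard).
n = 130
essay_score = rng.normal(20.0, 5.0, n).reshape(-1, 1)
at_risk = ((essay_score.ravel() + rng.normal(0.0, 5.0, n)) < 18.0).astype(int)

# Fit the screening model and get predicted risk probabilities.
model = LogisticRegression().fit(essay_score, at_risk)
risk = model.predict_proba(essay_score)[:, 1]

print(f"AUC = {roc_auc_score(at_risk, risk):.2f}")

# Sensitivity-based cutpoint: the highest threshold whose sensitivity
# (true-positive rate) is at least .90; report its specificity as well.
fpr, tpr, thresholds = roc_curve(at_risk, risk)
meets_target = tpr >= 0.90
cut = thresholds[meets_target][0]
print(f"cutpoint = {cut:.2f}, "
      f"sensitivity = {tpr[meets_target][0]:.2f}, "
      f"specificity = {1 - fpr[meets_target][0]:.2f}")
```

Choosing the highest threshold that still reaches the target sensitivity favors catching nearly all at-risk writers while keeping specificity as high as possible, which is the usual trade-off in universal screening.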


Workshop on Innovative Use of NLP for Building Educational Applications | 2015

Using PEGWriting to Support the Writing Motivation and Writing Quality of Eighth-Grade Students: A Quasi-Experimental Study

Joshua Wilson; Trish Martin

A quasi-experimental study compared the effects of feedback condition on eighth-grade students’ writing motivation and writing achievement. Four classes of eighth-graders were assigned to a combined feedback condition, in which they received feedback on their writing from their teacher and from an automated essay evaluation (AEE) system called PEGWriting®. Four other eighth-grade classes were assigned to a teacher feedback condition, in which they received feedback solely from their teacher via GoogleDocs. Students in the combined PEGWriting + teacher feedback condition received feedback more quickly and reported being more likely to solve problems in their writing. Writing quality was comparable across conditions, even though teachers in the PEGWriting condition spent less time providing feedback than teachers in the GoogleDocs condition. Results suggest that PEGWriting enabled teachers to offload certain aspects of the feedback process and promoted greater independence and persistence for students.


Learning Disabilities: A Contemporary Journal | 2014

Does Automated Feedback Improve Writing Quality?

Joshua Wilson; Natalie G. Olinghouse; Gilbert N. Andrada


Computers & Education | 2016

Automated essay evaluation software in English Language Arts classrooms

Joshua Wilson; Amanda Czik


Computers in Human Behavior | 2017

Presentation, expectations, and experience: Sources of student perceptions of automated writing evaluation

Rod D. Roscoe; Joshua Wilson; Adam C. Johnson; Christopher R. Mayra


Assessing Writing | 2016

Comparing the accuracy of different scoring methods for identifying sixth graders at risk of failing a state writing assessment

Joshua Wilson; Natalie G. Olinghouse; D. Betsy McCoach; Tanya Santangelo; Gilbert N. Andrada

Collaboration


Dive into Joshua Wilson's collaborations.

Top Co-Authors

Gary A. Troia (Michigan State University)
Rod D. Roscoe (Arizona State University)
Ya Mo (Michigan State University)
Rachel A. Kopke (Michigan State University)
Amanda Czik (University of Delaware)
Angela Chen (Michigan State University)