
Publications


Featured research published by Joshua Raclaw.


Research on Language and Social Interaction | 2016

Providing Epistemic Support for Assessments Through Mobile-Supported Sharing Activities

Joshua Raclaw; Jessica S. Robles; Stephen M. DiDomenico

Abstract: This article examines how participants in face-to-face conversation employ mobile phones as a resource for social action. We focus on what we call mobile-supported sharing activities, in which participants use a mobile phone to share text or images with others by voicing text aloud from their mobile or providing others with visual access to the device’s display screen. Drawing from naturalistic video recordings, we focus on how mobile-supported sharing activities invite assessments by providing access to an object that is not locally accessible to the participants. Such practices make relevant coparticipants’ assessment of these objects and allow for different forms of coparticipation across sequence types. We additionally examine how the organization of assessments during these sharing activities displays sensitivity to preference structure. The analysis illustrates the relevance of embodiment, local objects, and new communicative technologies to the production of action in copresent interaction. Data are in American English.


Proceedings of the National Academy of Sciences of the United States of America | 2018

Low agreement among reviewers evaluating the same NIH grant applications

Elizabeth L. Pier; Markus Brauer; Amarette Filut; Anna Kaatz; Joshua Raclaw; Mitchell J. Nathan; Cecilia E. Ford; Molly Carnes

Significance: Scientific grant peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination in allocating funding, little research has explored how reviewers derive their assigned ratings for the applications they review or whether this assessment is consistent when the same application is evaluated by different sets of reviewers. We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

Abstract: Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers’ ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers “translated” a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.


Journal of Pragmatics | 2017

Laughter and the management of divergent positions in peer review interactions

Joshua Raclaw; Cecilia E. Ford

In this paper we focus on how participants in peer review interactions use laughter as a resource as they publicly report divergence of evaluative positions, divergence that is typical in the give and take of joint grant evaluation. Using the framework of conversation analysis, we examine the infusion of laughter and multimodal laugh-relevant practices into sequences of talk in meetings of grant reviewers deliberating on the evaluation and scoring of high-level scientific grant applications. We focus on a recurrent sequence in these meetings, what we call the score-reporting sequence, in which the assigned reviewers first announce the preliminary scores they have assigned to the grant. We demonstrate that such sequences are routine sites for the use of laugh practices to navigate the initial moments in which divergence of opinion is made explicit. In the context of meetings convened for the purposes of peer review, laughter thus serves as a valuable resource for managing the socially delicate but institutionally required reporting of divergence and disagreement that is endemic to meetings where these types of evaluative tasks are a focal activity.


Archive | 2014

Queer Excursions: Retheorizing Binaries in Language, Gender, and Sexuality

Lal Zimman; Jennifer L Davis; Joshua Raclaw


Research Evaluation | 2017

‘Your comments are meaner than your score’: score calibration talk influences intra- and inter-panel variability during scientific grant peer review

Elizabeth L. Pier; Joshua Raclaw; Anna Kaatz; Markus Brauer; Molly Carnes; Mitchell J. Nathan; Cecilia E. Ford


Archive | 2015

Meetings as Interactional Achievements: A Conversation Analytic Perspective

Joshua Raclaw; Cecilia E. Ford; Joseph A. Allen; Nale Lehmann-Willenbrock; Steven G. Rogelberg


Archive | 2014

Opposites attract: Retheorizing binaries in language, gender, and sexuality

Jennifer L Davis; Lal Zimman; Joshua Raclaw


Language & Communication | 2018

Doing being an ordinary technology and social media user

Jessica S. Robles; Stephen M. DiDomenico; Joshua Raclaw


Wisconsin Center for Education Research | 2015

Studying the Study Section: How Group Decision Making in Person and via Videoconferencing Affects the Grant Peer Review Process. WCER Working Paper No. 2015-6.

Elizabeth L. Pier; Joshua Raclaw; Mitchell J. Nathan; Anna Kaatz; Molly Carnes; Cecilia E. Ford


The International Encyclopedia of Language and Social Interaction | 2015

Conversation Analysis, Overview

Joshua Raclaw

Collaboration


Joshua Raclaw's most frequent co-authors and their affiliations.

Top Co-Authors

Cecilia E. Ford (University of Wisconsin-Madison)
Anna Kaatz (University of Wisconsin-Madison)
Elizabeth L. Pier (University of Wisconsin-Madison)
Mitchell J. Nathan (University of Wisconsin-Madison)
Molly Carnes (University of Wisconsin-Madison)
Stephen M. DiDomenico (State University of New York System)
Lal Zimman (University of Colorado Boulder)
Markus Brauer (University of Wisconsin-Madison)
Amarette Filut (University of Wisconsin-Madison)