PLoS Computational Biology | 2021

Validation and tuning of in situ transcriptomics image processing workflows with crowdsourced annotations

Abstract


Recent advancements in in situ methods, such as multiplexed in situ RNA hybridization and in situ RNA sequencing, have deepened our understanding of the way biological processes are spatially organized in tissues. Automated image processing and spot-calling algorithms for analyzing in situ transcriptomics images have many parameters that must be tuned for optimal detection. Ground truth datasets (images in which there is very high confidence in the accuracy of the detected spots) are essential for evaluating these algorithms and tuning their parameters. We present a first-of-its-kind open-source toolkit and framework for in situ transcriptomics image analysis that incorporates crowdsourced annotations, alongside expert annotations, as a source of ground truth for the analysis of in situ transcriptomics images. The kit includes tools for preparing images for crowdsourced annotation so that workers can annotate them reliably, performing quality control (QC) on worker annotations, extracting candidate parameters for spot-calling algorithms from sample images, tuning parameters for spot-calling algorithms, and evaluating spot-calling algorithms and worker performance. These tools are wrapped in a modular pipeline with a flexible structure that allows users to take advantage of crowdsourced annotations from any source of their choice. We tested the pipeline using real and synthetic in situ transcriptomics images and annotations from the Amazon Mechanical Turk system obtained via Quanti.us. Using real images from in situ experiments and simulated images produced by one of the tools in the kit, we studied worker sensitivity to spot characteristics and established rules for annotation QC. We explored and demonstrated the use of ground truth generated in this way for validating spot-calling algorithms and tuning their parameters, and confirmed that consensus crowdsourced annotations are a viable substitute for expert-generated ground truth for these purposes.
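As a concrete illustration of the consensus-building and evaluation steps the abstract describes, the sketch below clusters crowdsourced click annotations into consensus ground-truth spots and then scores a set of detected spots against them with distance-thresholded matching. This is a minimal sketch only: the function names (consensus_spots, precision_recall), the DBSCAN-based consensus rule, and all thresholds are illustrative assumptions, not the toolkit's actual API.

```python
# Hypothetical sketch: derive "consensus" ground-truth spots from crowdsourced
# click annotations, then score a spot-calling result against them.
# All names, thresholds, and the DBSCAN consensus rule are illustrative
# assumptions, not the published toolkit's interface.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN


def consensus_spots(worker_xy, eps=3.0, min_workers=3):
    """Cluster (x, y) clicks from many workers; keep clusters on which at
    least `min_workers` annotators agree, and return cluster centroids."""
    labels = DBSCAN(eps=eps, min_samples=min_workers).fit_predict(worker_xy)
    return np.array([worker_xy[labels == k].mean(axis=0)
                     for k in set(labels) if k != -1])


def precision_recall(detected_xy, truth_xy, max_dist=3.0):
    """Greedy one-to-one matching of detections to ground-truth spots
    within `max_dist` pixels; returns (precision, recall)."""
    if len(detected_xy) == 0 or len(truth_xy) == 0:
        return 0.0, 0.0
    tree = cKDTree(truth_xy)
    dists, idx = tree.query(detected_xy)       # nearest truth spot per detection
    matched, tp = set(), 0
    for d, i in sorted(zip(dists, idx)):       # closest pairs matched first
        if d <= max_dist and i not in matched:
            matched.add(i)
            tp += 1
    return tp / len(detected_xy), tp / len(truth_xy)


# Toy example: 5 workers each click near 2 true spots, plus one stray click.
rng = np.random.default_rng(0)
truth = np.array([[20.0, 20.0], [60.0, 45.0]])
clicks = np.vstack([truth + rng.normal(0, 0.8, truth.shape) for _ in range(5)]
                   + [[[90.0, 90.0]]])
gt = consensus_spots(clicks)
print(precision_recall(detected_xy=truth, truth_xy=gt))  # -> (1.0, 1.0)
```

A parameter-tuning loop of the kind described above could then grid-search a spot caller's parameters and keep the setting that maximizes F1 against the consensus spots.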

Volume 17
Pages e1009274
DOI 10.1371/journal.pcbi.1009274
Language English
Journal PLoS Computational Biology
