Transplantation | 2021

Toward Improved and Standardized Diagnostic Pipelines in Transplantation.


Abstract


A recent article describing significant differences in how individual laboratories preprocess and analyze functional MRI data reiterates the fact that, with complex datasets and techniques, there is no consensus or single “best” approach to data analysis and subsequent interpretation. In the age of “big data” built on genome sequences, complex imaging techniques, and higher-throughput data analysis, standardization and precise phenotyping of clinical samples are critical. In organ transplantation, phenotyping of the allograft by histology is still the “gold standard” for the diagnosis of rejection after transplantation and often dictates the subsequent course of clinical care. The transplant biopsy is nevertheless an imperfect gold standard because of inherent operational and data interpretation limitations. It also has limited utility because it is ill-suited for serial monitoring of patients. Moreover, the scoring and grading of transplant biopsies show substantial interobserver variation. Finally, the biopsy is invasive and a lagging indicator of the rejection process.

Nonetheless, the transplant biopsy remains the standard of care for diagnosis and has been continually refined by the Banff working group. Because the Banff classification of transplant biopsies represents an expert consensus on histology by key opinion leaders, a molecular diagnostics component was added to improve the utility and objectivity of the biopsy. It is therefore worthwhile to rethink the role of the biopsy as the primary diagnostic and to include it as one component of a larger diagnostic toolbox. The “for cause” biopsy is a reaction to other clinical evidence of the rejection process; however, damage to the organ has already been set in motion by the time histological rejection is confirmed. One logical solution to this problem has been the use of surveillance/protocol biopsies. Unfortunately, surveillance biopsies are performed in fewer than half of all US transplant centers (34% of transplant recipients), and even high-volume centers (>200 transplants per y) do not routinely perform them. In addition, because of their invasive nature and the associated costs and risks, surveillance biopsies are usually performed only twice a year. This has provided a window of opportunity for noninvasive or semi-invasive diagnostic tests that are amenable to routine, frequent monitoring.

The advent of molecular techniques for rejection diagnosis has changed the landscape of kidney transplantation diagnostics. Public databases increasingly populated with high-throughput data from multiple sources (tissue, blood, and urine), together with now-ubiquitous bioinformatic algorithms, computational infrastructure, and the expertise to handle large datasets, have generated great potential to leverage these data to identify posttransplant dysfunction. However, noninvasive or semi-invasive techniques based on urine and, specifically, whole blood, although convenient, must be approached with a healthy dose of skepticism. Several limitations stand in the way of the successful development, validation, and implementation of such diagnostic tools, and these inherent problems leave them far from ready to replace a biopsy. Microarray or RNA-sequencing data collected from whole blood or peripheral blood mononuclear cells show high variability even between patients with the same graft status; and, even with the latest machine learning and deep learning tools for classification, predictive accuracy is underwhelming.
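To illustrate how such classifiers are typically evaluated, the sketch below (Python with scikit-learn; the expression matrix, labels, and model settings are simulated placeholders rather than data from any cited study) estimates cross-validated accuracy for a hypothetical whole-blood rejection classifier. Performance on independently collected cohorts, discussed below, remains the more demanding test.

```python
# Minimal sketch: cross-validated evaluation of a whole-blood rejection
# classifier. X, y, and all parameters are simulated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical cohort: 120 samples x 2000 genes of normalized expression,
# with binary graft status (1 = rejection, 0 = stable) from biopsy.
X = rng.normal(size=(120, 2000))
y = rng.integers(0, 2, size=120)

clf = RandomForestClassifier(n_estimators=300, random_state=0)

# Stratified cross-validation gives an internal accuracy estimate; only
# testing on independently collected cohorts shows whether patient-to-
# patient variability erodes real-world performance.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```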
Additional confounders include patient demographics, treatment regimens, and nonstandardized sample collection and data generation methods, all of which inhibit the clear elucidation of the biologic mechanisms orchestrating the rejection process. For example, transcriptional differences in whole blood, a heterogeneous mixture of numerous cell subsets, may reflect changes in the proportions of the individual cell types rather than mirroring the true biology driving rejection; and despite several cellular deconvolution methods in the literature, the results of this approach have been mixed. The multifactorial nature of the variability requires consideration of clinical (eg, patient characteristics, treatment regimen), biologic (eg, cell composition, level of inflammation), and technical (eg, RNA quality, batch effects) sources, each handled with a different bioinformatic solution (eg, inclusion as model covariates, CIBERSORT, and ComBat, respectively). The true test of the efficacy of these approaches will be the measured accuracy of the predicted classes in independently collected patient samples.

Mirroring the use of machine learning tools in diagnostic development, digital pathology applies deep learning algorithms, which are well suited to analyzing and classifying digital images of the biopsies themselves. In contrast to the Banff grading system, which was developed based on …
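As a purely illustrative sketch of the kind of deep learning model used in digital pathology, the snippet below fine-tunes a pretrained convolutional network on biopsy image tiles. It assumes PyTorch with torchvision ≥ 0.13 and network access for the pretrained weights, uses random tensors and labels in place of real slides, and does not represent any specific published pipeline.

```python
# Minimal sketch: fine-tuning a pretrained CNN to classify biopsy image
# tiles (eg, rejection vs no rejection). Tiles and labels are random
# placeholders standing in for digitized whole-slide images.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Transfer learning: start from ImageNet weights and replace the final
# layer with a 2-class head for the diagnostic task.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Hypothetical batch of 8 RGB tiles (224x224) cut from digitized slides,
# labeled with the biopsy-level diagnosis.
tiles = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for _ in range(3):  # a few illustrative steps, not a full training run
    optimizer.zero_grad()
    loss = criterion(model(tiles), labels)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.3f}")
```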

DOI 10.1097/TP.0000000000003438
