TieNet: Text-Image Embedding Network for Common Thorax Disease Classification and Reporting in Chest X-rays
Xiaosong Wang*, Yifan Peng*, Le Lu, Zhiyong Lu, Ronald M. Summers
Department of Radiology and Imaging Sciences, Clinical Center, and National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20892
{xiaosong.wang, yifan.peng, le.lu, luzh, mohammad.bagheri, rms}@nih.gov
*Both authors contributed equally.
Abstract
Chest X-rays are one of the most common radiological examinations in daily clinical routines. Reporting thorax diseases using chest X-rays is often an entry-level task for radiologist trainees. Yet, reading a chest X-ray image remains a challenging job for learning-oriented machine intelligence, due to (1) the shortage of large-scale machine-learnable medical image datasets, and (2) the lack of techniques that can mimic the high-level reasoning of human radiologists, which requires years of knowledge accumulation and professional training. In this paper, we show that clinical free-text radiological reports can be utilized as a priori knowledge for tackling these two key problems. We propose a novel Text-Image Embedding network (TieNet) for extracting distinctive image and text representations. Multi-level attention models are integrated into an end-to-end trainable CNN-RNN architecture for highlighting the meaningful text words and image regions. We first apply TieNet to classify chest X-rays by using both image features and text embeddings extracted from associated reports. The proposed auto-annotation framework achieves high accuracy (over 0.9 on average in AUCs) in assigning disease labels for our hand-labeled evaluation dataset. Furthermore, we transform the TieNet into a chest X-ray reporting system. It simulates the reporting process and can output disease classification and a preliminary report together. The classification results are significantly improved (6% increase on average in AUCs) compared to the state-of-the-art baseline on an unseen and hand-labeled dataset (OpenI).
1. Introduction
In the last decade, challenging tasks in computer vision have gone through different stages, from sole image classification to multi-category, multi-instance classification/detection/segmentation, and on to more complex cognitive tasks that involve understanding and describing the relationships of object instances inside images or videos. The rapid and significant performance improvement is partly driven by the public accessibility of large-scale image and video datasets with quality annotations, e.g., the ImageNet [8], PASCAL VOC [10], MS COCO [22], and Visual Genome [18] datasets. In particular, ImageNet pre-trained deep Convolutional Neural Network (CNN) models [15, 19, 21] have become an essential basis (indeed an advantage) for many higher-level tasks, e.g., Recurrent Neural Network (RNN) based image captioning [34, 17, 30, 11], Visual Question Answering [36, 42, 38, 27], and instance relationship extraction [16, 14, 6].

Figure 1. Overview of the proposed automated chest X-ray reporting framework. A multi-level attention model is introduced.

On the contrary, there are few publicly available large-scale image datasets in the medical image domain. Conventional means of annotating natural images, e.g., crowd-sourcing, cannot be applied to medical images because these tasks often require years of professional training and domain knowledge. On the other hand, radiological raw data (e.g., images, clinical annotations, and radiological reports) have been accumulated in many hospitals' Picture Archiving and Communication Systems (PACS) for decades. The main challenge is how to transform those retrospective radiological data into a machine-learnable format. Accomplishing this with chest X-rays represents a major milestone in the medical-imaging community [35].

Different from current deep learning models, radiologists routinely observe multiple findings when they read medical images and compile radiological reports. One main reason is that these findings are often correlated. For instance, liver metastases can spread to regional lymph nodes or other body parts. By obtaining and maintaining a holistic picture of relevant clinical findings, a radiologist will be able to make a more accurate diagnosis. To the best of our knowledge, developing a universal or multi-purpose CAD framework, which is capable of detecting multiple disease types in a seamless fashion, remains a challenging task. However, such a framework is a crucial part of building an automatic radiological diagnosis and reporting system.

Toward this end, we investigate how free-text radiological reports can be exploited as a priori knowledge using an innovative text-image embedding network. We apply this novel system in two different scenarios. We first introduce a new framework for auto-annotation of chest X-rays by using both image features and text embeddings extracted from associated reports. Multi-level attention models are integrated into an end-to-end trainable CNN-RNN architecture for highlighting the meaningful text words and image regions. In addition, we convert the proposed annotation framework into a chest X-ray reporting system (as shown in Figure 1). The system simulates the real-world reporting process by outputting disease classification and generating a preliminary report simultaneously.
The text embedding learned from the retrospective reports is integrated into the model as a priori knowledge, and the joint learning framework boosts the performance of both tasks in comparison to the previous state-of-the-art.

Our contributions are fourfold: (1) we propose the Text-Image Embedding Network, a multi-purpose, end-to-end trainable, multi-task CNN-RNN framework; (2) we show how raw report data, together with paired images, can be utilized to produce meaningful attention-based image and text representations using the proposed TieNet; (3) we outline how the developed text and image embeddings are able to boost the auto-annotation framework and achieve extremely high accuracy for chest X-ray labeling; and (4) we present a novel image classification framework which takes images as the sole input, but uses the paired text-image representations from training as a prior knowledge injection, in order to produce improved classification scores and preliminary report generation.

Importantly, we validate our approach on three different datasets, and TieNet improves the image classification result (6% increase on average in area under the curve (AUC) over all disease categories) in comparison to the state-of-the-art on an unseen and hand-labeled dataset (OpenI [7]) from another institute. Our multi-task training scheme helps not only image classification but also report generation, producing reports with higher BLEU scores than the baseline method.
2. Related work
Computer-Aided Detection (CADe) and Diagnosis (CADx) have long been a major research focus in medical image processing [5]. In recent years, deep learning models have started to outperform conventional statistical learning approaches in various tasks, such as automated classification of skin lesions [9], detection of liver lesions [4], and detection of pathological-image findings [40]. However, current CADe methods typically target one particular type of disease or lesion, such as lung nodules, colon polyps, or lymph nodes [24].

Wang et al. [35] provide a recent and prominent exception, where they introduced a large-scale chest X-ray dataset by processing images and their paired radiological reports (extracted from their institutional PACS database) with natural language processing (NLP) techniques. The publicly available dataset (https://nihcc.app.box.com/v/ChestXray-NIHCC/) contains over 100,000 front-view chest X-ray images of more than 30,000 unique patients. However, radiological reports contain richer information than simple binary disease labels, e.g., disease location and severity, which should be exploited in order to fully leverage existing PACS datasets. Thus, we differ from Wang et al.'s approach by leveraging this rich text information in order to produce an enhanced system for chest X-ray CADx.

In terms of visual captioning, our work is close to [37, 33, 29, 38, 27]. Xu et al. [37] first introduced the sequence-to-sequence model and spatial attention model into the image captioning task. They conditioned the long short-term memory (LSTM) decoder on different parts of the input image during each decoding step, and the attention signal was determined by the previous hidden state and CNN features. Vinyals et al. [33] cast the syntactic parsing problem as a sequence-to-sequence learning task by linearizing the parsing tree. Pedersoli et al. [29] allowed a direct association between caption words and image regions. More recently, multi-attention models [38, 27] extract salient regions and words from both image and text and then combine them for better representations of the pair. In the medical imaging domain, Shin et al. [32] proposed to correlate the entire image or saliency regions with MeSH terms. Promising results [41] have also been reported in summarizing the findings in pathology images using task-oriented reports during training.

Figure 2. Framework of the proposed chest X-ray auto-annotation and reporting framework. Multi-level attentions are introduced to produce saliency-encoded text and image embeddings.

The difference between our model and theirs lies in that we employ multi-attention models with a mixture of image and text features in order to provide more salient and meaningful embeddings for the image classification and report generation tasks.

Apart from visual attention, text-based attention has also been increasingly applied in deep learning for NLP [2, 26, 31]. It attempts to relieve one potential problem that the traditional encoder-decoder framework faces, namely that the input may be long or very information-rich and selective encoding is not possible. The attention mechanism eases this problem by allowing the decoder to refer back to the input sequence [39, 23, 25]. To this end, our work closely follows [23], where an interpretable sentence embedding is extracted by introducing self-attention. Our model pairs both the attention-based image and text representations from training as a prior knowledge injection to produce improved classification scores.
3. Text-Image Embedding Network
The radiological report is a summary of all the clinical findings and impressions determined during examination of a radiography study. A sample report is shown in Figure 1. It usually contains richer information than just disease keywords; it may also contain negation and uncertainty statements. In the 'findings' section, a list of normal and abnormal observations is given for each part of the body examined in the image. Attributes of the disease patterns, e.g., specific location and severity, are also noted. Furthermore, critical diagnosis information is often presented in the 'impression' section by considering all findings, patient history, and previous studies. Suspicious findings may prompt recommendations for additional or follow-up imaging studies. As such, reports consist of a challenging mixture of information, and a key task for machine learning is to extract the parts useful for particular applications.

In addition to mining the disease keywords [35] as a summarization of the radiological reports, we want to learn a text embedding that captures the richer information contained in raw reports. Figure 2 illustrates the proposed Text-Image Embedding Network. We first introduce the foundation of TieNet, which is an end-to-end trainable CNN-RNN architecture. Afterwards, we discuss two enhancements we develop and integrate, i.e., the attention-encoded text embedding (AETE) and saliency weighted global average pooling (SW-GAP). Finally, we outline the joint learning loss function used to optimize the framework.
As shown in Figure 2, our end-to-end trainable CNN-RNN model takes as input an image $I$ and a sequence of 1-of-$V$ encoded words

$$S = \{w_1, \ldots, w_T\}, \quad w_t \in \mathbb{R}^V, \qquad (1)$$

where $w_t$ is a vector standing for the $d_w$-dimensional word embedding of the $t$-th word in the report, $V$ is the size of the vocabulary, and $T$ is the length of the report. The initial CNN component uses layers borrowed from ImageNet pre-trained models for image classification, e.g., ResNet-50 (from Conv1 to Res5c). The CNN component additionally includes a convolutional layer (transition layer) to manipulate the spatial grid size and feature dimension.

Our RNN is based on Xu et al.'s visual spatial attention model [37] for image captioning. The convolutional activations from the transition layer, denoted as $X$, initialize the RNN's hidden state, where a fully-connected embedding, $\phi(X)$, maps the size-$d_X$ transition-layer activations to the LSTM state space of dimension $d_h$. In addition, $X$ is also used as one of the RNN's inputs. Following Xu et al. [37], our sequence-to-sequence model includes a deterministic and soft visual spatial attention, $a_t$, that is multiplied element-wise with $X$ before the latter is input to the RNN. At each time step, the RNN also outputs the subsequent attention map, $a_{t+1}$.

In addition to the soft-weighted visual features, the RNN also accepts the current word at each time step as input. We adopt standard LSTM units [13] for the RNN. The transition to the next hidden state can then be denoted as

$$h_t = LSTM([w_t, a_t, X], h_{t-1}). \qquad (2)$$

The LSTM produces the report by generating one word at each time step conditioned on a context vector, i.e., the previous hidden state, the previously generated words, and the convolutional features $X$, whose dimension is $D \times D \times C$. Here $D = 16$ and $C = 1024$ denote the spatial and channel dimensions, respectively. Once the model is trained, reports for a new image can be generated by sequentially sampling $w_t \sim p(w_t \mid h_t)$ and updating the state using Equation 2.

The end-to-end trainable CNN-RNN model provides a powerful means to process both text and images. However, our goal is also to obtain interpretable global text and visual embeddings for the purposes of classification. For this reason, we introduce two key enhancements in the form of the AETE and SW-GAP.

To compute a global text representation, we use an approach that closely follows [23]. More specifically, we use attention to combine the most salient portions of the RNN hidden states. Let $H = (h_1, \ldots, h_T)$ be the $d_h \times T$ matrix of all the hidden states. The attention mechanism outputs an $r \times T$ matrix of weights $G$ as

$$G = \mathrm{softmax}(W_{s2} \tanh(W_{s1} H)), \qquad (3)$$

where $r$ is the number of global attentions we want to extract from the sentence, and $W_{s1}$ and $W_{s2}$ are $s$-by-$d_h$ and $r$-by-$s$ matrices, respectively. $s$ is a hyperparameter governing the dimensionality, and therefore the maximum rank, of the attention-producing process.

With the attention calculated, we compute an $r \times d_h$ embedding matrix, $M = GH$, which in essence executes $r$ weighted sums across the $T$ hidden states, aggregating them into $r$ representations. Each row of $G$, denoted $g_i$ ($i \in \{1, \ldots, r\}$), indicates how much each hidden state contributes to the final embedded representation in $M$. We can thus draw a heat map for each row of the embedding matrix $M$ (see Figure 4 for examples).
This way of visualization gives hints on what is encoded in each part of the embedding, adding an extra layer of interpretation.

To provide a final global text embedding of the sentences in the report, the AETE executes max-over-$r$ pooling across $M$, producing an embedding vector $\hat{X}_{AETE}$ of size $d_h$.

In addition to using attention to provide a more meaningful text embedding, our goal is also to produce improved visual embeddings for classification. For this purpose, we re-use the attention mechanism $G$, except that we perform a max-over-$r$ operation, producing a sequence of saliency values, $g_t$ ($t = 1, \ldots, T$), one for each word $w_t$. These saliency values are used to weight and select the spatial attention maps, $a_t$, generated at each time step:

$$a_{ws}(x, y) = \sum_t a_t(x, y) \ast g_t. \qquad (4)$$

This map encodes all spatial saliency regions guided by the text attention. We use this map to highlight the spatial regions of $X$ carrying more meaningful information:

$$\hat{X}_{SW\text{-}GAP}(c) = \sum_{(x, y)} a_{ws}(x, y) \ast X(x, y, c), \qquad (5)$$

where $x, y \in \{1, \ldots, D\}$ and $\hat{X}_{SW\text{-}GAP}$ is a 1-by-$C$ vector representing the global visual information, guided by both text- and visual-based attention. The lower part of Figure 2 illustrates an example of this pooling strategy.
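To make Equations (3)-(5) concrete, below is a minimal NumPy sketch of the AETE and SW-GAP computations. The array names (hidden_states, spatial_attn, transition_feat) and the example dimension values are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative shapes (not the released configuration): T words, d_h LSTM size,
# r global attentions, s attention dimension, D x D x C transition-layer grid.
T, d_h, r, s, D, C = 30, 256, 5, 2000, 16, 1024

hidden_states = np.random.randn(d_h, T)        # H: d_h x T LSTM hidden states
spatial_attn = np.random.rand(T, D, D)         # a_t: one D x D attention map per word
transition_feat = np.random.randn(D, D, C)     # X: transition-layer activations

W_s1 = np.random.randn(s, d_h) * 0.01
W_s2 = np.random.randn(r, s) * 0.01

# Eq. (3): r x T attention weights over the hidden states.
G = softmax(W_s2 @ np.tanh(W_s1 @ hidden_states), axis=1)

# M = G H: r weighted sums of hidden states (r x d_h).
M = G @ hidden_states.T

# AETE: max-over-r pooling gives the global text embedding (d_h,).
x_aete = M.max(axis=0)

# Word saliencies g_t: max-over-r of the attention matrix (T,).
g = G.max(axis=0)

# Eq. (4): saliency-weighted sum of the spatial attention maps (D x D).
a_ws = np.tensordot(g, spatial_attn, axes=1)

# Eq. (5): SW-GAP pools X with a_ws, giving the global visual embedding (C,).
x_swgap = np.tensordot(a_ws, transition_feat, axes=([0, 1], [0, 1]))

# Concatenated representation later fed to the final classification layer.
x_joint = np.concatenate([x_aete, x_swgap])
```

The last line anticipates the concatenation described next; in the actual network these operations run on learned, not random, tensors.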
With global representations computed for both the image and the report, these must be combined to produce the final classification. To accomplish this, we concatenate the two representations, $\hat{X} = [\hat{X}_{AETE}; \hat{X}_{SW\text{-}GAP}]$, and use a final fully-connected layer to produce the output for multi-label classification. The intuition behind our model is that the connection between the CNN and RNN networks benefits the training of both, because the image activations can be adjusted for the text embedding task and salient image features can be extracted by pooling based on high text saliency.

In a similar fashion as Wang et al. [35], we define an $M$-dimensional disease label vector $y = [y_1, \ldots, y_m, \ldots, y_M]$, $y_m \in \{0, 1\}$, for each case, where $M = 15$ is the number of classes. $y_m$ indicates the presence of a pathology or 'no finding' (of the listed disease categories) in the image. Here, we adopt the NLP-mined labels provided by [35] as the 'ground truth' during training.

The instance numbers for the different disease categories are highly unbalanced, ranging from hundreds to dozens of thousands. In addition to the positive/negative balancing introduced in [35], we add weights to instances associated with different categories:

$$L_m(f(I, S), y) = \beta_P \sum_{y_m = 1} -\ln(f(I, S)) \cdot \lambda_m + \beta_N \sum_{y_m = 0} -\ln(1 - f(I, S)) \cdot \lambda_m, \qquad (6)$$

where $\beta_P = \frac{|N|}{|P| + |N|}$ and $\beta_N = \frac{|P|}{|P| + |N|}$. $|P|$ and $|N|$ are the total numbers of images with at least one disease and with no diseases, respectively. $\lambda_m = (Q - Q_m)/Q$ is a set of precomputed class-wise weights, where $Q$ and $Q_m$ are the total number of images and the number of images with disease label $m$. $\lambda_m$ will be larger if the number of instances from class $m$ is small.

Because TieNet can also generate text reports, we also optimize the RNN generative model loss [37], $L_R$. Thus the overall loss is composed of two parts, the sigmoid cross-entropy loss $L_C$ for multi-label classification and the loss $L_R$ from the RNN generative model [37]:

$$L_{overall} = \alpha L_C + (1 - \alpha) L_R, \qquad (7)$$

where $\alpha$ is added to balance the large difference between the two loss types.

One straightforward application of TieNet is the auto-annotation task, i.e., mining image classification labels. By omitting the generation of sequential words, we accumulate and back-propagate only the classification loss to obtain better text-image embeddings for image classification. Here, we use the NLP-mined disease labels as 'ground truth' in training; in effect, we learn a mapping between the input image-report pairs and the image labels. The report text often contains more easy-to-learn features than the image side. The contribution of both sources to the final classification prediction should therefore be balanced, either by controlling the feature dimensions or by dropping part of the 'easy-to-learn' data during training.
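As a minimal sketch of the weighting scheme in Equations (6) and (7), the snippet below computes the balanced, class-weighted cross-entropy and the joint loss for one batch. The tensor names and the value of alpha are illustrative assumptions, not the published training configuration.

```python
import numpy as np

def tienet_losses(probs, labels, q_per_class, num_pos_images, num_neg_images,
                  report_loss, alpha=0.85):
    """probs, labels: (batch, M) arrays of sigmoid outputs and 0/1 NLP-mined labels.
    q_per_class: (M,) count of training images carrying each label (Q_m).
    num_pos_images / num_neg_images: |P| and |N| over the training set.
    report_loss: scalar RNN generative loss L_R. alpha is an assumed value."""
    eps = 1e-7
    total_images = num_pos_images + num_neg_images            # Q = |P| + |N|
    lam = (total_images - q_per_class) / total_images          # lambda_m, larger for rare classes
    beta_p = num_neg_images / total_images                     # |N| / (|P| + |N|)
    beta_n = num_pos_images / total_images                     # |P| / (|P| + |N|)

    pos_term = beta_p * (-np.log(probs + eps)) * labels              # terms with y_m = 1
    neg_term = beta_n * (-np.log(1.0 - probs + eps)) * (1 - labels)  # terms with y_m = 0
    l_c = ((pos_term + neg_term) * lam).sum(axis=1).mean()           # Eq. (6), averaged over batch

    return alpha * l_c + (1.0 - alpha) * report_loss                 # Eq. (7)
```

A call with batch-shaped arrays returns the scalar joint loss that would be back-propagated through both the classification and generation branches.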
For a more difficult but real-world scenario, we transform the text-image embedding network to serve as a unified system for image classification and report generation when only an unseen image is available. During training, both image and report are fed in and the two separate losses stated above are computed, i.e., the loss for image classification and the loss for sequence-to-sequence modeling. At test time, only the image is required as input. The generated text carries the learned text embedding recorded in the LSTM units, which is later used in the final image classification task. The generative model we integrate into the text-image embedding network is the key to associating an image with its attention-encoded text embedding.
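The inference path described above can be sketched as a simple greedy decoding loop: the image alone drives the LSTM, the generated words stand in for the report, and the resulting embeddings feed the classifier. The function names (lstm_step, attend, classify_from_embeddings), the greedy word choice, and the maximum length are assumptions for illustration; the actual system may decode differently.

```python
def generate_and_classify(image_features, h0, embed_word, lstm_step, attend,
                          classify_from_embeddings, start_id, end_id, max_len=60):
    """Test-time loop: only the image is given; a report is generated, and its
    hidden states supply the text embedding used for classification."""
    h, word_id = h0, start_id
    hidden_states, attn_maps, words = [], [], []
    for _ in range(max_len):
        a_t = attend(h, image_features)                 # soft spatial attention for this step
        h, word_logits = lstm_step(embed_word(word_id), a_t, image_features, h)  # Eq. (2)
        word_id = int(word_logits.argmax())             # greedy choice of the next word
        hidden_states.append(h)
        attn_maps.append(a_t)
        words.append(word_id)
        if word_id == end_id:                           # stop at the end-of-report token
            break
    # AETE / SW-GAP are applied to the generated sequence exactly as in training.
    disease_probs = classify_from_embeddings(hidden_states, attn_maps, image_features)
    return words, disease_probs
```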
4. Dataset
ChestX-ray14 [35] is a recently released benchmark dataset for common thorax disease classification and localization. It provides 14 disease labels that can be observed in chest X-rays, i.e., Atelectasis, Cardiomegaly, Effusion, Infiltration, Mass, Nodule, Pneumonia, Pneumothorax, Consolidation, Edema, Emphysema, Fibrosis, Pleural Thickening, and Hernia. The NLP-mined labels are used as 'ground truth' for model training throughout the experiments. We adopt the patient-level data splits published with the data (https://nihcc.app.box.com/v/ChestXray-NIHCC).

Hand-labeled: In addition to the NLP-mined labels, we randomly selected 900 reports from the testing set and had two radiologists annotate the 14 categories of findings for evaluation purposes. A trial set of 30 reports was first used to synchronize the annotation criteria between the two annotators. Then, each report was independently annotated by both annotators. We used the inter-rater agreement (IRA) to measure the consistency between the two observers; the resulting Cohen's kappa is 84.3%. Afterwards, the final decision on inconsistent cases was adjudicated between the two observers.
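For reference, inter-rater agreement of this kind can be computed per finding with scikit-learn's Cohen's kappa; this is only a sketch of the measurement with made-up label vectors, not the annotation data used in the paper.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary annotations for one finding (e.g., Atelectasis)
# from the two radiologists over the same set of reports.
rater_a = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
rater_b = [1, 0, 1, 1, 1, 0, 0, 1, 0, 0]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa for this finding: {kappa:.3f}")
```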
OpenI [7] is a publicly available radiography dataset collected from multiple institutes by Indiana University. Using the OpenI API, we retrieved 3,851 unique radiology reports and 7,784 associated frontal/lateral images, where each OpenI report is annotated with key concepts (MeSH words) including body parts, findings, and diagnoses. For consistency, we use the same 14 categories of findings as above in the experiments. In our experiments, only 3,643 unique front-view images and the corresponding reports are selected and evaluated.
5. Experiments
Report vocabulary:
We use all 15,472 unique words in the training set that appear at least twice. Words that appear less frequently are replaced by a special out-of-vocabulary token, and the start and the end of each report are marked with special ⟨START⟩ and ⟨END⟩ tokens. The pre-trained word embedding vectors were learned on PubMed articles using the gensim word2vec implementation (https://radimrehurek.com/gensim/models/word2vec.html) with the dimensionality set to 200. The word embedding vectors are further updated along with the other LSTM parameters during training.
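A minimal sketch of this preprocessing, assuming a list of tokenized reports named corpus; the gensim call follows the current (4.x) API with vector_size rather than the older size argument, and the toy corpus only illustrates the steps described above.

```python
from collections import Counter
from gensim.models import Word2Vec

corpus = [["findings", "heart", "size", "normal"],          # hypothetical tokenized reports
          ["impression", "no", "acute", "disease"]]

# Keep words that appear at least twice; map the rest to an OOV token.
counts = Counter(w for report in corpus for w in report)
vocab = {w for w, c in counts.items() if c >= 2}

def encode(report):
    return ["<START>"] + [w if w in vocab else "<OOV>" for w in report] + ["<END>"]

encoded = [encode(r) for r in corpus]

# 200-dimensional word2vec embeddings; the paper trains these on PubMed articles,
# whereas this toy corpus only demonstrates the call.
w2v = Word2Vec(sentences=encoded, vector_size=200, min_count=1, workers=2)
vector = w2v.wv["<START>"]     # embedding vectors are later fine-tuned with the LSTM
```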
Evaluation Metrics:
To compare with previous state-of-the-art works, we choose different evaluation metrics for different tasks so as to maintain consistency with the results reported in previous works. Receiver Operating Characteristic (ROC) curves are plotted for each disease category to measure image classification performance, and Areas Under the Curve (AUCs) are then computed, which summarize the overall performance across different operating points. To assess the quality of the generated reports, BLEU scores [28], METEOR [3], and ROUGE-L [20] are computed between the original and generated reports. These measures reflect the word-overlap statistics between two text corpora. However, we believe their ability to show the actual accuracy of overlapping disease words (together with their attributes) between two text corpora is limited.
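The sketch below shows how these metrics could be computed with common libraries (scikit-learn for per-class AUC, NLTK for corpus-level BLEU); the arrays and report tokens are placeholders, and the exact tokenization and smoothing used in the paper are not specified here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from nltk.translate.bleu_score import corpus_bleu

# Per-class AUCs for multi-label classification (placeholder arrays).
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]])   # 4 images, 3 classes
y_score = np.array([[0.9, 0.2, 0.1], [0.3, 0.7, 0.2],
                    [0.8, 0.1, 0.6], [0.2, 0.3, 0.7]])
aucs = [roc_auc_score(y_true[:, m], y_score[:, m]) for m in range(y_true.shape[1])]

# Corpus-level BLEU-1 and BLEU-2 between original and generated reports.
references = [[["no", "acute", "disease"]]]            # one list of references per hypothesis
hypotheses = [["no", "acute", "pulmonary", "disease"]]
bleu1 = corpus_bleu(references, hypotheses, weights=(1.0,))
bleu2 = corpus_bleu(references, hypotheses, weights=(0.5, 0.5))
print(aucs, bleu1, bleu2)
```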
Training:
The LSTM model contains a 256-dimensional cell, and $s = 2000$ is used in $W_{s1}$ and $W_{s2}$ for generating the attention weights $G$. During training, we use 0.5 dropout on the MLP and 0.0001 for L2 regularization. We use the Adam optimizer with a mini-batch size of 32 and a constant learning rate of 0.001. In addition, our self-attention LSTM has a hidden layer with 350 units. We choose the matrix embedding to have 5 rows (the $r$), and a coefficient of 1 for the penalization term. All models are trained until convergence is achieved, and the hyper-parameters used for testing are selected according to the best validation set performance.

Our text-image embedding network is implemented based on TensorFlow [1] and Tensorpack (https://github.com/ppwwyyxx/tensorpack/). The ImageNet pre-trained model, i.e., ResNet-50 [12], is obtained from the Caffe model zoo and converted into a TensorFlow-compatible format. The proposed network takes the weights from the pre-trained model and keeps them fixed during training. The other layers in the network are trained from scratch. In a similar fashion as introduced in [35], we reduce the mini-batch size to fit the entire model into each GPU, while accumulating the gradients over a number of iterations and across a number of GPUs for better training performance. The DCNN models are trained on a Dev-Box Linux server with 4 Titan X GPUs.

Figure 3 illustrates the ROC curves for image classification performance with 3 different inputs, evaluated on 3 different testing sets: the ChestX-ray14 testing set (ChestX-ray14), the hand-labeled set (Hand-labeled), and the OpenI set (OpenI). Separate curves are plotted for each disease category and for 'No finding'. Here, two different auto-annotation frameworks are trained with different inputs, i.e., taking reports only (R) and taking image-report pairs (I+R) as inputs. When only the reports are used, the framework does not have the saliency weighted global average pooling path. In this way, we can get a sense of how the features from the text path and the image path individually contribute to the final classification prediction.

We train the proposed auto-annotation framework using the training and validation sets from the ChestX-ray14 dataset and test it on all three testing sets, i.e., ChestX-ray14, Hand-labeled, and OpenI. Table 1 shows the AUC values for each class computed from the ROC curves shown in Figure 3. The auto-annotation framework achieves high performance on both ChestX-ray14 and Hand-labeled, i.e., over 0.87 in AUC with reports alone as the input and over 0.90 in AUC with image-report pairs, on the sample-number weighted average (wAVG). The combination of image and report demonstrates a clear advantage in this task. In addition, the auto-annotation framework trained on ChestX-ray14 performs equivalently well on OpenI. This indicates that a model trained on a large-scale image dataset can be generalized to unseen data from other institutes. The model trained solely on images also generalizes well to datasets from other sources; in this case, both the proposed method and the one in [35] are able to perform equally well on all three testing sets.

When TieNet is switched to an automatic disease classification and reporting system, it takes a single image as the input and is capable of outputting a multi-label prediction and a corresponding radiological report together.
The ROC curves on the right in Figure 3 and Table 1 show the image classification performance produced by the multi-purpose reporting system. The AUCs from our TieNet (I+GR) demonstrate a consistent improvement in wAVG AUC over all disease categories across all three datasets. The multi-label classification framework [35] serves as a baseline model that also takes solely the images. Furthermore, the performance improvement achieved on the Hand-labeled and OpenI datasets (with ground-truth image labels) is even larger than the gain on ChestX-ray14 (with NLP-mined labels). This indicates that TieNet is able to learn more meaningful and richer text embeddings directly from the raw reports and to correct the inconsistency between embedded features and erroneously mined labels. Table 2 shows that the generated reports from our proposed system obtain higher scores on all evaluation metrics in comparison to the baseline image captioning model [37].

Table 1. Evaluation of image classification results (AUCs) on the ChestX-ray14, Hand-labeled, and OpenI datasets. Performances are reported for four methods, i.e., multi-label classification based on Report (R), Image + Report (I+R), Image [35], and Image + Generative Report (I+GR).
Figure 3. A comparison of classification performance with different testing inputs, i.e., Report (R), Image+Report (I+R), and Image+Generative Report (I+GR).
This may be because the gradients from the RNN are back-propagated to the CNN part, and the adjustment of the image features from the transition layer benefits the report generation task. Figure 4 illustrates 4 sample results from the proposed automatic classification and reporting system (more examples are given in Appendix A). Original images are shown along with the classification predictions, original reports, and generated reports. Text-attended words are also highlighted over the generated reports.
Figure 4. Four sample image classification predictions (P) along with original and generated reports. Text attentions are highlighted over the generated text. Correct predictions are marked in green, false predictions in red, and missed predictions in blue.

Table 2. Evaluation of generated reports on the ChestX-ray14 testing set using BLEU, METEOR, and ROUGE-L.
Metric     Captioning [37]   TieNet I+GR
BLEU-1     0.2391            0.2860
BLEU-2     0.1248            0.1597
BLEU-3     0.0861            0.1038
BLEU-4     0.0658            0.0736
METEOR     0.1024            0.1076
ROUGE-L    0.1988            0.2263

If looking at the generated reports alone, we find that they all read well. However, the described diseases may not truly appear in the images. For example, 'Atelectasis' is correctly recognized in sample A, but 'Effusion' is missed. 'Effusion' (not too far from the negation word 'without') is erroneously highlighted in sample B, but the system is still able to correctly classify the image as 'No finding'. In sample D, the generated report misses 'Mass' while it correctly describes the metastasis in the lung. One promising finding is that the false predictions ('Mass' and 'Consolidation') in sample C can actually be observed in the image (verified by a radiologist) but were somehow not noted in the original report, which indicates that our proposed network can to some extent associate the image appearance with the text description.
6. Conclusion
Automatically extracting machine-learnable annotations from retrospective data remains a challenging task, for which images and reports are the two main useful sources. Here, we proposed a novel text-image embedding network integrated with multi-level attention models. TieNet is implemented as an end-to-end CNN-RNN architecture for learning a blend of distinctive image and text representations. We demonstrated and discussed the pros and cons of including radiological reports in both the auto-annotation and reporting tasks. While significant improvements have been achieved in multi-label disease classification, there is still much room to improve the quality of the generated reports. For future work, we will extend TieNet to include multiple RNNs for learning not only disease words but also their attributes, and to further correlate them and the image findings with the descriptions in the generated reports.
Acknowledgements
This work was supported by the Intramural Research Programs of the NIH Clinical Center and National Library of Medicine. Thanks to Adam Harrison and Shazia Dharssi for proofreading the manuscript. We are also grateful to NVIDIA Corporation for the GPU donation.

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. 2016.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), pages 1–15, 2015.
[3] S. Banerjee and A. Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005.
[4] A. Ben-Cohen, I. Diamant, E. Klang, M. Amitai, and H. Greenspan. Fully convolutional network for liver segmentation and lesions detection. In International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, pages 77–85, 2016.
[5] G. Chartrand, P. M. Cheng, E. Vorontsov, M. Drozdzal, S. Turcotte, C. J. Pal, S. Kadoury, and A. Tang. Deep learning: a primer for radiologists. Radiographics, 37(7):2113–2131, 2017.
[6] B. Dai, Y. Zhang, and D. Lin. Detecting visual relationships with deep relational networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3076–3086, 2017.
[7] D. Demner-Fushman, M. D. Kohli, M. B. Rosenman, S. E. Shooshan, L. Rodriguez, S. Antani, G. R. Thoma, and C. J. McDonald. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association, 23(2):304–310, 2015.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255, 2009.
[9] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118, 2017.
[10] M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, 2015.
[11] Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng. Semantic compositional networks for visual captioning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–13, 2017.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[14] R. Hu, M. Rohrbach, J. Andreas, T. Darrell, and K. Saenko. Modeling relationships in referential expressions with compositional modular networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1115–1124, 2017.
[15] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 675–678, 2014.
[16] J. Johnson, A. Karpathy, and L. Fei-Fei. DenseCap: Fully convolutional localization networks for dense captioning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4565–4574, 2016.
[17] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):664–676, 2017.
[18] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. S. Bernstein, and F.-F. Li. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. 2016.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[20] C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8, pages 1–8, Barcelona, Spain, 2004.
[21] M. Lin, Q. Chen, and S. Yan. Network in network. In International Conference on Learning Representations (ICLR), pages 1–10, 2014.
[22] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), pages 740–755, 2014.
[23] Z. Lin, M. Feng, C. N. dos Santos, M. Yu, B. Xiang, B. Zhou, and Y. Bengio. A structured self-attentive sentence embedding. In International Conference on Learning Representations (ICLR), pages 1–15, 2017.
[24] J. Liu, D. Wang, L. Lu, Z. Wei, L. Kim, E. B. Turkbey, B. Sahiner, N. Petrick, and R. M. Summers. Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks. Medical Physics, 44(9):4630–4642, 2017.
[25] Y. Liu, C. Sun, L. Lin, and X. Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. 2016.
[26] F. Meng, Z. Lu, M. Wang, H. Li, W. Jiang, and Q. Liu. Encoding source language with convolutional neural network for machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 20–30, 2015.
[27] H. Nam, J.-W. Ha, and J. Kim. Dual attention networks for multimodal reasoning and matching. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 299–307, 2017.
[28] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318, 2002.
[29] M. Pedersoli, T. Lucas, C. Schmid, and J. Verbeek. Areas of attention for image captioning. In International Conference on Computer Vision (ICCV), pages 1–22, 2017.
[30] B. Plummer, L. Wang, C. Cervantes, J. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k Entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In International Conference on Computer Vision (ICCV), 2015.
[31] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 379–389, 2015.
[32] H.-C. Shin, K. Roberts, L. Lu, D. Demner-Fushman, J. Yao, and R. M. Summers. Learning to read chest X-rays: recurrent neural cascade model for automated image annotation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2497–2506, 2016.
[33] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700, 2015.
[34] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3156–3164, 2015.
[35] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers. ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2097–2106, 2017.
[36] Q. Wu, P. Wang, C. Shen, A. Dick, and A. van den Hengel. Ask me anything: Free-form visual question answering based on knowledge from external sources. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–5, 2016.
[37] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (ICML), pages 2048–2057, 2015.
[38] D. Yu, J. Fu, T. Mei, and Y. Rui. Multi-level attention networks for visual question answering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–9, 2017.
[39] W. Ling, Y. Tsvetkov, S. Amir, R. Fermandez, C. Dyer, A. W. Black, I. Trancoso, and C.-C. Lin. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1367–1372, 2015.
[40] Z. Zhang, P. Chen, M. Sapkota, and L. Yang. TandemNet: Distilling knowledge from medical images using diagnostic reports as optional semantic references. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 320–328. Springer, 2017.
[41] Z. Zhang, Y. Xie, F. Xing, M. McGough, and L. Yang. MDNet: A semantically and visually interpretable medical image diagnosis network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6428–6436, 2017.
[42] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7W: Grounded question answering in images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
A. More Experimental Results
In this section, we present 20 more classification and reporting results (cases E–X) from the proposed TieNet, in addition to the four examples (cases A–D) shown in the main paper. Sample images are illustrated along with the associated classification predictions (P), original reports, and generated reports. Text attentions are highlighted with different saturation levels over the generated text; darker red means a higher text attention weight. Correct classification predictions are marked in green, false predictions in red, and missed predictions in blue.
Image Sample cases P A t e l ec t a s i s E ff u s i o n N o f i nd i n g N o du l e P n e u m o t h o r ax M a ss C o n s o li d a t i o n M a ss Original report f i nd i ng s : a s i ng l e a p v i e w o f t h e c h e s t d e m on s t r a t e s i n c r ea s i ng b i b a s il a r i n t e r s titi a l op ac iti e s w it h d ec r ea s e d ov e r a ll ae r a ti on . i n c r ea s i ng b l un ti ng o f r i gh t c o s t oph r e n i c a ng l e . … i m p r e ss i on : i n c r ea s i ng b i b a s il a r a t e l ec t a s i s w it h po ss i b l e d e v e l op m e n t o f r i gh t p l e u r a l e ff u s i on . N o r m a l no e v i d e n ce o f l ung i n f ilt r a t e . f i nd i ng s : h ea r t a nd m e d i a s ti nu m un c h a ng e d . m u lti p l e l ung nodu l e s . e v i d e n ce o f r ece n t l e f t c h e s t s u r g e r y w it h l e f t c h e s t t ub e i n p l ace . v e r y s m a ll l e f t a p i ca l pn e u m o t ho r a x . l ung s un c h a ng e d , no e v i d e n ce o f ac u t e i n f ilt r a t e s . i m p r e ss i on : s t a b l e c h e s t . f i nd i ng s : l a r g e l e f t s up r a h il a r a nd i n fr a h il a r m a ss e s a s w e ll a s t h e w e ll c i r c u m s c r i b e d nodu l e t h e l e v e l o f t h e a o r ti c knob . t h e r i gh t i n fr a h il a r m a ss a s w e ll . no e ff u s i on . i m p r e ss i on : m e t a s t a ti c l ung d i s ea s e . Generated Report f i nd i ng s : a s i ng l e a p v i e w o f t h e c h e s t d e m on s t r a t e s un c h a ng e d b il a t e r a l r e ti c u l a r op ac iti e s , c on s i d e r a t e l ec t a s i s . c on ti nu e d l e f t b a s il a r a t e l ec t a s i s . no e v i d e n ce o f d e v e l op i ng i n f ilt r a t e . t h e ca r d i ac a nd m e d i a s ti n a l c on t ou r s a r e s t a b l e . i m p r e ss i on : no e v i d e n ce o f d e v e l op i ng i n f ilt r a t e . f i nd i ng s : p a a nd l a t e r a l v i e w s o f t h e c h e s t d e m on s t r a t e l ung s t h a t a r e c l ea r w it hou t f o ca l m a ss , i n f ilt r a t e o r e ff u s i on . ca r d i o m e d i a s ti n a l s il hou e tt e i s no r m a l s i ze a nd c on t ou r . pu l m on a r y v a s c u l a r it y i s no r m a l i n ca li b e r a nd d i s t r i bu ti on . i m p r e ss i on : no e v i d e n ce o f ac u t e pu l m on a r y p a t ho l ogy f i nd i ng s : p a a nd l a t e r a l v i e w s o f t h e c h e s t d e m on s t r a t e un c h a ng e d b il a t e r a l c h e s t t ub e s . a g a i n pu l m on a r y nodu l e s a r e s ee n on t h e r i gh t a nd ca r d i ac s il hou e tt e un c h a ng e d . t h e ca r d i ac a nd m e d i a s ti n a l c on t ou r s a r e s t a b l e . i m p r e ss i on : . b il a t e r a l m a ss e s a nd l e f t l o w e r l ung f i e l d c on s o li d a ti on . . n e w b il a t e r a l l ung m a ss e s . c o m p a r i s on i s t o p r e v i ou s up r i gh t s t udy o f no s i gn i f i ca n t i n t e r v a l c h a ng e i s s ee n i n t h e a pp ea r a n ce o f t h e c h e s t . t h e m e d i a s ti n a l s o f t ti ss u e a nd pu l m on a r y v a s c u l a r it y a r e s t a b l e . t h e r e a r e b l a s ti c bon e l e s i on s i n t h e c h e s t . bon e s , s o f t ti ss u e s a r e no r m a l . t h e l ung f i e l d s a r e c l ea r . t h e r e a r e ca l c i f i e d l y m ph nod e s i n t h e l e f t l o w e r l ung . i m p r e ss i on : . s c l e r o ti c l e s i on s i n t h e l e f t hu m e r a l , c on s i s t e n t w it h m e t a s t a s i s . A B C D Figure 5. 
4 sample image Classification Predictions (P) along with original and generated reports. Text attentions are highlighted over thegenerated text. Correct predication is marked in green, false prediction in red and missing prediction in blue.
Image Sample cases P N o f i nd i n g N o f i nd i n g E ff u s i o n N o f i nd i n g Original report f i nd i ng s : p r e v i ou s l y no t e d r i gh t l o w e r l ob e i n f ilt r a t e s h a v e r e s o l v e d . v e nou s li n e i s no t e d i n s v c . l ung s a r e no w c l ea r . t h e h ea r t s i ze a nd m e d i a s ti n a l c on t ou r a r e no r m a l . i m p r e ss i on : no ac u t e ca r d i opu l m on a r y d i s ea s e i s no t e d . l ung s a r e m od e r a t e l y w e ll - i n f l a t e d . no i n f ilt r a t e s o r e ff u s i on s . i n t e r s titi a l m a r k i ng s a r e upp e r li m it s o f no r m a l . ca r d i o m e d i a s ti n a l s il hou e tt e i s no r m a l . i m p r e ss i on : m od e r a t e r i gh t p l e u r a l e ff u s i on . no pn e u m o t ho r a x . r i gh t c h e s t t ub e w it h ti p a nd s i d e po r t d ee p i n t h e r i gh t c o s t oph r e n i c s u l c u s . f i nd i ng s : no r a d i og r a ph i c a bno r m a liti e s . i m p r e ss i on : s t a b l e c h e s t . Generated Report f i nd i ng s : p a a nd l a t e r a l c h e s t . d i ff u s e bony l e s i on s a r e no t e d . s ugg e s t c o rr e l a ti on w it h c h e s t c t f o r m o r e s e n s iti v e a ss e ss m e n t f o r . m i n i m a li n t e r s titi a l t h i c k e n i ng i s s ee n i n l ung . t h i s i s s t a b l e , bu t s ugg e s t c o rr e l a ti on w it h c h e s t c t o r f o r i n t e r v a l d e v e l op i ng ac u t e pu l m on a r yp a t ho l ogy . no e v i d e n ce o f pn e u m o t ho r a x . no ac u t e i n f ilt r a t e s o r pu l m on a r y i n f ilt r a t e s a r e . . bony s t r u c t u r e s a r e i n t ac t . f i nd i ng s : l ung s c l ea r . no r m a l ca r d i o m e d i a s ti n a l c on t ou r s . no p l e u r a l e ff u s i on s . v a gu e d e n s it y o r a t e l ec t a s i s o r s ca rr i ng i n t h e r i gh t l ung b a s e . i m p r e ss i on : no ac u t e p r o ce ss . f i nd i ng s : t h e b il a t e r a l p l e u r a l e ff u s i on s a r e no t e d . t h e h ea r t i s no r m a l i n s i ze . t h e r e i s no p l e u r a l e ff u s i on . bony s t r u c t u r e s a r e i n t ac t . i m p r e ss i on : n e w p l e u r a l e ff u s i on s . f i nd i ng s : p a a nd l a t e r a l v i e w s o f t h e c h e s t d e m on s t r a t e l ung s t h a t a r e c l ea r w it hou t f o ca l m a ss , i n f ilt r a t e o r e ff u s i on . ca r d i o m e d i a s ti n a l s il hou e tt e i s no r m a l s i ze a nd c on t ou r . pu l m on a r y v a s c u l a r it y i s no r m a l i n li m it s a nd t op no r m a l s i ze ca r d i ac s il hou e tt e . s k e l e t a l s t r u c t u r e s a r e i n t ac t . i m p r e ss i on : no ac u t e l ung i n f ilt r a t e s . E F G H D Figure 6. 4 sample image Classification Predictions (P) along with original and generated reports. Text attentions are highlighted over thegenerated text. Correct predication is marked in green, false prediction in red and missing prediction in blue.
Image Sample cases P E m ph y s e m a I n f il t r a t i o n M a ss C o n s o li d a t i o n N o f i nd i n g N o du l e Original report m i n i m a l - m od e r a t e l e f t n ec k a nd upp e r c h e s t s ub c u t a n e ou s e m phy s e m a un c h a ng e d o r m i n i m a ll y d ec r ea s i ng . m od e r a t e - m a r k e d r i gh t c h e s t , m i n i m a l - m od e r a t e r i gh t n ec k a nd upp e r a bdo m e n s ub c u t a n e ou s e m phy s e m a . f i nd i ng s : i n t e r v a l d e v e l op m e n t o f l e f t upp e r l ob e p a t c hy nodu l a r i n f ilt r a t e i n f e r i o r l y . un c h a ng e d r a d i op a qu e ca t h e t e r c o m p a ti b l e w it h vp s hun t . s t a b l e ca t h e t e r ov e r l y i ng t h e s t o m ac h . c o s t oph r e n i c a ng l e s a r e c l ea r . ca r d i ac a nd m e d i a s ti n a l bo r d e r s a r e w it h i n no r m a l li m it s o f s i ze . i m p r e ss i on : i n t e r v a l d e v e l op m e n t o f l e f t upp e r l ob e p a t c hy nodu l a r i n f ilt r a t e f i nd i ng s : a s i ng l e a p v i e w o f t h e c h e s t d e m on s t r a t e s un c h a ng e d o r m i n i m a ll y i n c r ea s i ng , d e p e nd e n t po s iti on i ng , r i gh t l ung m a ss / c on s o li d a ti on . t h e ca r d i ac a nd m e d i a s ti n a l c on t ou r s a r e s t a b l e . i m p r e ss i on : . un c h a ng e d o r m i n i m a ll y i n c r ea s i ng , d e p e nd e n t po s iti on i ng , r i gh t l ung m a ss / c on s o li d a ti on . . no e v i d e n ce o f d e v e l op i ng i n f ilt r a t e t h e v i s u a li ze d l e f t l ung f i nd i ng s : l ung s a r e w e ll ae r a t e d w it h no e v i d e n ce o f i n f ilt r a t e . ca r d i ac a nd m e d i a s ti n a l bo r d e r s a r e w it h i n no r m a l li m it s o f s i ze . .. . i m p r e ss i on : no e v i d e n ce o f i n f ilt r a t e Generated Report r ea s on f o r s t udy : s / p v a t s c li n i ca l i n f o r m a ti on : a p l a s ti c a n e m i a c h e s t v i e w : c h e s t x -r a y p e rf o r m e d on t h e s a m e d a y . t h e h ea r t a nd m e d i a s ti nu m a r e no r m a l . t h e s ub c u t a n e ou s e m phy s e m a i s s ee n i n t h e r i gh t n ec k a nd n ec k on t h e r i gh t . t h e r e i s un c h a ng e d s ub c u t a n e ou s e m phy s e m a s ee n on t h e r i gh t . r ea s on f o r e x a m ( e n t e r e d by o r d e r i ng c li n i c i a n i n t o c r i s ) : r / o ac u t e , r / opu l m on a r y d i s ea s e i n t e r v a l c h a ng e s no i n t e r v a l c h a ng e a nd s ee n a r ea v a il a b l e o f t h e h ea r t . t h e r e a r e a nodu l a r i n t e r s titi a l i n f ilt r a t e o r s ca rr i ng a s w e ll a s m il d d e g e n e r a ti v e c h a ng e s o f t h e s p i n e . f i nd i ng s i n c l ud e . l e f t l ung c on s o li d a ti on . . l e f t l ung c on s o li d a ti on . F i nd i ng s : h ea r t a nd m e d i a s ti nu m un c h a ng e d . l ung s un c h a ng e d , no e v i d e n ce o f ac u t e i n f ilt r a t e s . nodu l e p r o j ec ti ng on po s t e r i o r r i bpo s t e r i o r l y . o ss e ou s s t r u c t u r e s i n t ac t . i m p r e ss i on : s t a b l e c h e s t . I J K L Figure 7. 4 sample image Classification Predictions (P) along with original and generated reports. Text attentions are highlighted over thegenerated text. Correct predication is marked in green, false prediction in red and missing prediction in blue.
Figure 8. Four sample cases (M-P) showing the classification predictions (P) along with the original and generated reports. Predicted labels cover Effusion, Pneumothorax, Atelectasis, Nodule, Consolidation, and No finding. Text attentions are highlighted over the generated text. Correct predictions are marked in green, false predictions in red, and missing predictions in blue.
Figure 9. Four sample cases (Q-T) showing the classification predictions (P) along with the original and generated reports. Predicted labels cover Mass, Atelectasis, Infiltration, Edema, Nodule, Effusion, Consolidation, Pneumonia, Pneumothorax, and Emphysema. Text attentions are highlighted over the generated text. Correct predictions are marked in green, false predictions in red, and missing predictions in blue.
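The green/red/blue coding used in Figures 7-9 amounts to a set comparison between the predicted disease labels and the ground-truth labels of each image: labels that are predicted and present in the ground truth are correct, labels predicted but absent are false, and ground-truth labels that were never predicted are missing. The snippet below is a minimal sketch of that bookkeeping, not code from TieNet; the function name categorize_predictions and the example labels are purely illustrative.

def categorize_predictions(predicted, ground_truth):
    """Split a set of predicted disease labels into correct / false / missing."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    return {
        "correct": sorted(predicted & ground_truth),   # green in the figures
        "false":   sorted(predicted - ground_truth),   # red in the figures
        "missing": sorted(ground_truth - predicted),   # blue in the figures
    }

# Hypothetical example for one sample case:
print(categorize_predictions(
    predicted=["Effusion", "Pneumothorax", "Nodule"],
    ground_truth=["Effusion", "Atelectasis"],
))
# {'correct': ['Effusion'], 'false': ['Nodule', 'Pneumothorax'], 'missing': ['Atelectasis']}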