COVID-19 identification from volumetric chest CT scans using a progressively resized 3D-CNN incorporating segmentation, augmentation, and class-rebalancing
Md. Kamrul Hasan a,∗, Md. Tasnim Jawad a, Kazi Nasim Imtiaz Hasan b, Sajal Basak Partha b, Md. Masum Al Masba b, Shumit Saha

a Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna-9203, Bangladesh
b Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna-9203, Bangladesh
Abstract
The novel COVID-19 is a global pandemic disease overgrowing worldwide. Computer-aided screening tools with greater sensitivity are imperative for disease diagnosis and prognosis as early as possible. Such a tool can also aid triage for testing and clinical supervision of COVID-19 patients. However, designing an automated tool from non-invasive radiographic images is challenging, as many manually annotated datasets are not yet publicly available, which is the essential core requirement of supervised learning schemes. This article proposes a 3D Convolutional Neural Network (CNN)-based classification approach considering both the inter- and intra-slice spatial voxel information. The proposed system is trained in an end-to-end manner on 3D patches from the whole volumetric CT images to enlarge the number of training samples, performing ablation studies on patch size determination. We integrate progressive resizing, segmentation, augmentations, and class-rebalancing into our 3D network. The segmentation is a critical prerequisite step for COVID-19 diagnosis, enabling the classifier to learn prominent lung features while excluding the outer lung regions of the CT scans. We evaluate all the extensive experiments on a publicly available dataset, named MosMedData, having binary- and multi-class chest CT image partitions. Our experimental results are very encouraging, yielding areas under the ROC curve of 0.914 ± 0.049 and 0.893 ± 0.035 for the binary- and multi-class tasks, respectively, applying 5-fold cross-validations. Our method's promising results delegate it as a favorable aiding tool for clinical practitioners and radiologists to assess COVID-19.

Keywords: COVID-19, 3D convolutional neural network, Volumetric chest CT scans, 3D patches, Progressive resizing.
1. Introduction
Pneumonia of unknown cause discovered in Wuhan, China, was reported to the World Health Organization (WHO) country office in China on 31st December 2019. The pathogen was subsequently designated severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) because of its genetic similarity to the virus behind the SARS outbreak of 2003. On 11th February 2020, WHO named the new disease COVID-19 (Coronavirus disease), which presents as an upper respiratory tract and lung infection [106]. The clinical characteristics of the critical COVID-19 pandemic are bronchopneumonia that causes cough, fever, dyspnea, and acute respiratory distress [14, 60, 100]. According to the WHO reports, COVID-19's general indications are equivalent to those of ordinary flu, including fever, tiredness, dry cough, shortness of breath, aches, pains, and sore throat [48]. Those shared signs make it challenging to recognize the virus at an early stage. COVID-19 is caused by a virus, so antibiotics, which act on bacterial or fungal infections [48, 108], offer no possibility of restricting it. Besides, people suffering from medical complications, like diabetes and chronic respiratory and cardiovascular diseases, are prone to severe COVID-19. An explanatory statement of the Imperial College advised that the affection rate will be more than 90.0%, growing exponentially.
∗ Corresponding author. Department of EEE, KUET, Khulna-9203, Bangladesh.
Email addresses: [email protected] (Md. Kamrul Hasan), [email protected] (Md. Tasnim Jawad), [email protected] (Kazi Nasim Imtiaz Hasan), [email protected] (Sajal Basak Partha), [email protected] (Md. Masum Al Masba)
The incubation period, which is the time between catching the virus and beginning to show indications of the illness, is 1∼14 days, making it remarkably challenging to identify COVID-19 infection at a preliminary stage of an individual's symptoms [48]. The clinical screening test for COVID-19 is the Reverse Transcription Polymerase Chain Reaction (RT-PCR), practicing respiratory exemplars. However, it is a manual, complicated, tiresome, and time-consuming fashion with an estimated true-positive rate of 63.0% [103]. There is also a significant lack of RT-PCR kit inventory, leading to a delay in preventing and curing coronavirus disease [112]. Furthermore, the RT-PCR kit is estimated to cost around 120∼130 USD. It also requires a specially designed biosafety laboratory to house the PCR unit, each of which can cost 15,000 USD or more [1]. Nevertheless, the utilization of a costly screening device with delayed test results makes it more challenging to restrict the disease's spread. Inadequate availability of screening workstations and measurement kits constitutes an enormous hardship for identifying COVID-19 in this pandemic circumstance. In such a situation, the speedy and trustworthy screening of presumed COVID-19 cases is an enormous difficulty for the related personnel.

However, it is observed that most of the COVID-19 incidents have typical properties on radiographic CT and X-ray images, including bilateral, multi-focal, ground-glass opacities with a peripheral or posterior distribution, chiefly in the lower lobes, and early- and late-stage pulmonary consolidation [18, 42, 88, 110]. Those features can be utilized to build a sensitive Computer-aided Diagnosis (CAD) tool to identify COVID-19 pneumonia, which is deemed an automated screening tool [59]. Currently, deep Convolutional Neural Networks (CNNs) allow for building an end-to-end model without requiring manual and time-consuming feature extraction and engineering [57, 58], demonstrating tremendous success in many domains of medical imaging, such as arrhythmia detection [4, 28, 113], skin lesion segmentation and classification [17, 23, 24, 35], breast cancer detection [13, 19, 31], brain disease segmentation and classification [93, 97], pneumonia detection from chest X-ray images [79], fundus image segmentation [34, 94], and lung segmentation [26]. Most recently, various deep CNN-based methods have been published for identifying COVID-19 from X-rays and CT images, summarized in Table 1.

Table 1: Numerous published articles for COVID-19 identification with their respective utilized datasets and performances, exhibiting different metrics such as mSn, mSp, and mF1, respectively, for mean sensitivity, specificity, and F1-score. The mixed datasets indicate that data have come from different open sources.
| Different methods | Datasets | Results |
| A pre-trained 2D MobileNet-v2 [82] architecture on ImageNet [20] was used to extract massive high-dimensional features to classify six different diseases using the fully connected layers [7] | Mixed | — |
| An average rank pooling, multiple-way augmentation, and deep feature fusion-based CNN and graph CNN was developed to fuse individual image-level features and relation-aware features [102] | Wang et al. [102] | — |

Though the results obtained in the current articles are promising, this article makes the following chief contributions:

• Designing a 3D-CNN-based classification network for volumetric CT images, as the 3D networks account for the inter- and intra-slice spatial voxel information while the 2D networks consider only the intra-slice spatial voxel information [37, 44, 52, 89, 114, 118]
• Conducting 3D patch-based classification, as it increases the sample numbers in the smaller datasets, where we perform ablation studies to determine a proper patch size
• Progressively increasing the input patch size of our network up to the original CT size of R × C × S, where the trained network with the patch size of (R/2^(n+1)) × (C/2^(n+1)) × (S/2^(n+1)) is a pre-trained model of the network with the patch size of (R/2^n) × (C/2^n) × (S/2^n)
• Developing an unsupervised lung segmentation pipeline for allowing the classifier to learn salient lung features while omitting the outer lung areas of the CT scans
• Employing class-rebalancing and augmentations, such as intensity- and geometry-based, to develop a general network, although a small dataset is being utilized

The remainder of the article is prepared as follows. Section 2 details the materials and methods practiced in this study, including a brief introduction to the methodology and end-to-end 3D-CNN training. Section 3 describes the experimental operations and their corresponding obtained results. Lastly, Section 4 concludes the article.
2. Materials and Methods
In this section, we describe the materials and methods utilized to conduct the widespread experiments. We summarize the adopted dataset in the first subsection 2.1. The essential integral preprocessing, such as segmentation, augmentation, and class-rebalancing, is reported in the second subsection 2.2. The design of the proposed 3D-CNN-based COVID-19 classifier, along with its training protocol, is explained in the third subsection 2.3. Finally, in the fourth subsection 2.4, we present the hardware used to execute the proposed method and the evaluation criteria.
2.1. The MosMedData dataset

This article's experimentations utilize the publicly available MosMedData dataset administered by municipal hospitals in Moscow, Russia, from March to April 2020 [66]. This dataset includes anonymized human chest CT scans with and without COVID-19-related findings from 1110 studies. The population of MosMedData is distributed as 42% male, 56% female, and 2% other, where the median age of the subjects is 47 years (18∼97 years). All the studies (n = 1110) are distributed into the five categories presented in Table 2. We design two experimental protocols using the MosMedData dataset, binary- and multi-class identification, to evaluate our proposed workflow. In the binary-class evaluation, we use NOR vs. NCP (Novel COVID-19 Positive), where NCP includes the MiNCP-, MoNCP-, SeNCP-, and CrNCP-classes, while in the multi-class evaluation, we use NOR vs. MiNCP vs. MoNCP vs. SeNCP. In the multi-class protocol, we merge the SeNCP- and CrNCP-classes, naming them SeNCP, as CrNCP has only two samples in the MosMedData dataset. We have applied a cross-validation technique to choose training, validation, and testing images, as those are not explicitly given by the data provider.

Table 2: Distribution of the utilized MosMedData dataset for COVID-19 identification with a short class description.
| Class acronym | Description | PPI* | Samples (%) |
| NOR | Not consistent with pneumonia, including COVID-19, and refer to a specialist | − | 254 (22.9) |
| MiNCP | Mild novel COVID-19 positive with ground-glass opacities and follow-up at home using mandatory telemonitoring | ≤ 25% | 684 (61.6) |
| MoNCP | Moderate novel COVID-19 positive with ground-glass opacities and follow-up at home by a primary care physician | 25−50% | 125 (11.3) |
| SeNCP | Severe novel COVID-19 positive with ground-glass opacities and immediate admission to a COVID specialized hospital | 50−75% | 45 (4.1) |
| CrNCP | Critical novel COVID-19 positive with diffuse ground-glass opacities and emergency medical care | ≥ 75% | 2 (0.2) |

PPI*: Pulmonary parenchymal involvement

The class-wise distribution of the MosMedData dataset in Table 2 illustrates that the class distribution is imbalanced. Such an imbalanced class distribution produces an image classifier biased towards the class having more training samples. We apply various rebalancing schemes to develop a generic classifier for COVID-19 identification, even though the dataset is imbalanced.

2.2. Integral preprocessing

The recommended integral preprocessing consists of segmentation, augmentations (both geometry- and intensity-based), and class-rebalancing, which are concisely explained as follows:
Segmentation.
The segmentation, separating an image into regions with similar properties such as gray level, color, texture, brightness, and contrast, is a significant element of an automated detection pipeline [39]. It is also a fundamental prerequisite for COVID-19 identification, as it extracts the lung region and delivers explanatory information about the shapes, structures, and textures. However, this article proposes an unsupervised Lung Segmentation (LS) technique applying different image processing algorithms, as a massive number of annotated COVID-19 images are not available yet in this pandemic situation. Fig. 1 depicts the pipeline of the proposed LS method.
HU transformation → thresholding → removing border blobs → largest area extraction → morphological operations → filling holes
Figure 1: The proposed block diagram of an unsupervised lung segmentation pipeline, without requiring a manually annotated lung region.

The proposed threshold-based LS's primary step is transforming all the CT volumes to Hounsfield units (HU), as it is a quantitative measure of radiodensity for CT scans. We set the HU window to −1000 to −400, as studies show that lung regions lie within that range, which was also used in many articles [56, 85, 101]. The thresholded binary lung masks are then refined to exclude different false-positive regions, such as blobs connected to the image border and other small false-positive areas, and false-negative regions, such as small holes in the lung regions. Firstly, the border-connected regions are eradicated. Secondly, the two largest areas are picked using the region properties algorithm. Thirdly, morphological erosion separates the lung nodules attached to the blood vessels, and morphological closing keeps the nodules attached to the lung wall. Finally, the false-negative regions are removed using binary hole-filling algorithms. Such an unsupervised thresholding-based segmentation method is better in terms of efficiency, taking only a few seconds, and yields utterly reproducible LS.
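To make the pipeline concrete, the following is a minimal slice-wise sketch of the above steps using NumPy, SciPy, and scikit-image. The HU volume is assumed to be already rescaled from the raw DICOM values, and the morphological footprint radii are illustrative assumptions rather than the exact values used in our experiments.

```python
import numpy as np
from scipy import ndimage
from skimage import measure, morphology, segmentation

def segment_lungs(volume_hu):
    """Unsupervised lung segmentation sketch: threshold HU, clean up, fill."""
    # 1) Threshold: lung parenchyma roughly falls within [-1000, -400] HU.
    mask = (volume_hu > -1000) & (volume_hu < -400)
    for i in range(mask.shape[-1]):          # slice-wise 2D refinement
        sl = mask[..., i]
        # 2) Remove blobs connected to the image border.
        sl = segmentation.clear_border(sl)
        # 3) Keep the two largest connected regions (the two lungs).
        labels = measure.label(sl)
        regions = sorted(measure.regionprops(labels), key=lambda r: r.area)[-2:]
        sl = np.isin(labels, [r.label for r in regions])
        # 4) Erosion detaches nodules from vessels; closing keeps wall nodules.
        sl = morphology.binary_erosion(sl, morphology.disk(2))
        sl = morphology.binary_closing(sl, morphology.disk(10))
        # 5) Fill the remaining small holes inside the lung field.
        mask[..., i] = ndimage.binary_fill_holes(sl)
    return mask
```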
Augmentation.
The CNN-based classifiers are profoundly dependent on large data samples to evade overfitting. Lamentably, various medical imaging fields, especially the current COVID-19 pandemic, suffer from an inadequate dataset size, as massive manually annotated training samples are still not available. In such a scenario, augmentations are a very potent preprocessing for increasing the training samples, as they are incredibly discriminative [45]. Data augmentation incorporates methods that magnify the training datasets' size and quality to develop a better CNN classifier [86]. The geometry-based augmentations, including rotations (around the in-plane center (row/2, col/2)) of −30◦, −10◦, 10◦, and 30◦ and height & width shifting by 20%, and the intensity-based augmentations, including gamma correction, adding Gaussian random noise, and Elastic deformation (https://pypi.org/project/elasticdeform/), are applied in this article as a part of the recommended preprocessing.
Two values of gamma (γ) have been used in gamma correction to adjust the luminance of the CT volumes by V_out = V_in^γ, where V_out and V_in individually denote the output and input values of the luminance.
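A hedged sketch of these augmentations on a single CT volume is given below. The gamma values and the noise standard deviation are assumed examples (the exact values are not recoverable here), and SciPy's ndimage routines stand in for whatever augmentation library was actually used.

```python
import numpy as np
from scipy import ndimage

def augment(volume, rng):
    """Apply one geometry- and one intensity-based augmentation (sketch)."""
    # Geometry: rotate around the in-plane centre by one of +/-10, +/-30 degrees,
    # then shift height/width by up to 20% of the corresponding dimension.
    angle = rng.choice([-30, -10, 10, 30])
    volume = ndimage.rotate(volume, angle, axes=(0, 1), reshape=False, order=1)
    shift = [0.2 * volume.shape[0] * rng.uniform(-1, 1),
             0.2 * volume.shape[1] * rng.uniform(-1, 1), 0]
    volume = ndimage.shift(volume, shift, order=1)
    # Intensity: gamma correction V_out = V_in ** gamma on [0, 1]-scaled voxels,
    # followed by additive Gaussian noise.
    gamma = rng.choice([0.7, 1.3])           # assumed example gamma values
    lo, hi = volume.min(), volume.max()
    volume = (volume - lo) / (hi - lo + 1e-8)
    volume = volume ** gamma + rng.normal(0.0, 0.01, volume.shape)
    return np.clip(volume, 0.0, 1.0)

# usage: aug = augment(ct_volume, np.random.default_rng(42))
```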
Rebalancing. The utilized dataset in Table 2 is imbalanced. This situation is pretty obvious in the medical diagnosis field due to the scarcity of massive manually annotated training samples, especially in COVID-19 datasets. Undesired class-biasing occurs in supervised learning systems towards the class with the majority samples. However, we apply two techniques to rebalance the imbalanced class distribution: adding extra CT volumes from the publicly available CC-CCII dataset [115] and weighting the loss function to penalize the overrepresented class. The latter approach rewards extra consideration to the class with minority samples. Here, we estimate the class weight using the portion W_n = N_n/N, where W_n, N, and N_n separately denote the n-th class weight, the total sample number, and the number of samples in the n-th class. We employ both class-rebalancing strategies in the binary-class protocol, whereas only the class-weighting method is adopted in the multi-class protocol.
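As a sketch, the class portion W_n = N_n/N can be computed from the training labels and inverted into per-class loss weights, which is the usual way to make Keras penalize the overrepresented class; the inverse-frequency form below is an assumption, since only the portion itself is defined above.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Portion W_n = N_n / N per class, inverted so that minority classes
    receive larger loss weights (assumed inverse-frequency scheme)."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    portion = counts / counts.sum()           # W_n = N_n / N
    return {n: 1.0 / (n_classes * portion[n]) for n in range(n_classes)}

# usage with Keras: model.fit(x, y, class_weight=class_weights(y_train, 2), ...)
```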
2.3. The proposed 3D-CNN classifier

The deep neural network is a machine learning framework with a wide range of applications, from natural language processing [21] to medical image classification [12], segmentation [12], and registration [25]. In particular, CNNs have become a prevalent technique in the computer vision community. They are practiced in diverse tasks, including object detection [50], classification [22], and localization [63]. The CNN-based deep neural systems are also popularly adopted in the recent pandemic for COVID-19 identification [11, 73] (see Table 1).

Figure 2: The architectural construction of the proposed base network, trained with the smallest 3D patches: stacked blocks of 3D convolution with ReLU, 3D pooling, and 3D batch normalization, followed by a GAP bridge and fully connected layers (FC1, FC2, and FC3 with softmax over the classes). This trained base network is applied as a pre-trained model for the next bigger patches. Best viewed in the color figure.

The stacked 3D convolutional, pooling, and batch normalization layers enable training of the network architectures in less time [89]. The Global Average Pooling (GAP) [61] is used as a bridge layer between the feature extractor and feature classifier modules, converting the feature tensor into a single long continuous linear vector. In GAP, only one feature map is produced for each corresponding category, achieving a more extreme dimensionality compression to evade overfitting [61]. A dropout layer [90] is also employed as a regularizer, which randomly sets half of the activations of the fully connected layers to zero during the training of our network.
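A minimal Keras sketch of a base network in the spirit of Fig. 2 is shown below. The filter widths, block count, and dense size are illustrative assumptions, while the GAP bridge, the 0.5 dropout rate, the softmax head, and the Xavier-normal initialization follow the description in the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def base_network(input_shape=(64, 64, 15, 1), n_classes=2):
    """Sketch of the base 3D-CNN: conv/pool/batch-norm blocks, GAP bridge,
    dropout, and fully connected layers ending in softmax."""
    inp = keras.Input(shape=input_shape)
    x = inp
    for filters in (16, 32, 64):              # assumed widths
        x = layers.Conv3D(filters, 3, padding="same", activation="relu",
                          kernel_initializer="glorot_normal")(x)  # Xavier normal
        x = layers.MaxPooling3D(pool_size=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
    x = layers.GlobalAveragePooling3D()(x)    # bridge: feature tensor -> vector
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)                # half of FC activations zeroed
    out = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inp, out, name="base_3dcnn")
```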
Again, as mentioned earlier, the CNNs are heavily reliant on a massive dataset to bypass overfitting and build a generic network. Annotated medical images are arduous to accumulate, as medical data collection and labeling are confronted with data privacy concerns, requiring time-consuming expert annotations [111]. There are two general resolving directions: one is accumulating more data, such as through crowdsourcing [51] or digging into the present clinical reports [105]; another is investigating how to enhance the achievement of the CNNs with small datasets, which is exceptionally significant because the understanding achieved from the research can mitigate the data insufficiency in the medical imaging fields [111]. Transfer learning is a widely adopted method for advancing the performance of CNNs with inadequate datasets [15]. To our most trustworthy knowledge, there is no public pre-trained 3D-CNN model for COVID-19 identification from volumetric chest images with limited samples. Therefore, we create a pre-trained model by training our base model (see Fig. 2) on the extracted 3D patches from whole chest CT scans (see details in subsection 2.3.2). Then, we double the patches' size and use them for training the modified base network, where we also double the base model's input size by applying a stack of convolutional, pooling, and batch normalization layers (see details in Fig. 3). At the same time, we keep the base model's trained weights for the smaller patches. We repeat enlarging (n-th time) the patch and network sizes until we arrive at the provided CT scans' size, as pictured in Fig. 3.

Figure 3: The proposed progressively resized network's architectural structure, where the base model (the base 3D-CNN classifier of Fig. 2) is trained with the smaller 3D patches, and the network size is sequentially doubled from smaller to larger sizes (progressive resizing for the 1st through the n-th time). The network trained with the smaller patches is the pre-trained model for the next bigger patches. Best viewed in the color figure.
Such training is called progressive resizing [9], where the training begins with smaller image sizes, followed by a progressive expansion of size. This training process is continued until the last patch and network sizes are the same as the initial image dimension.
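The sketch below illustrates one such resizing step under the assumption that every dimension halves cleanly: a small conv/pool/batch-norm stack reduces the doubled input back to the base network's input shape (projected to a single channel so the trained weights plug in unchanged). This is one plausible reading of the construction in Fig. 3 rather than the exact layer composition.

```python
from tensorflow import keras
from tensorflow.keras import layers

def enlarge_network(trained_base, input_shape=(128, 128, 30, 1)):
    """One progressive-resizing step: a 2x-larger input is reduced by an
    extra conv/pool/batch-norm stack and fed to the already-trained base."""
    inp = keras.Input(shape=input_shape)
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling3D(pool_size=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    # Project to one channel so the tensor matches the base network's input
    # (an assumption; the original layer composition is not fully specified).
    x = layers.Conv3D(1, 1, padding="same", activation="relu")(x)
    out = trained_base(x)                     # trained weights are reused
    return keras.Model(inp, out, name="progressively_resized")

# usage: bigger = enlarge_network(base_network(input_shape=(64, 64, 15, 1)))
```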
We first extract five different patches with different sizes (see Fig. 4) to begin the experimentations. We perform ablation studies in subsection 3.1, looking for the best patch size. The weights of the base network in Fig. 2 are initialized with the Xavier normal distribution. The weights of the first progressively resized network are initialized with the weights of the base network. In general, the weights of the network with the patch size of (R/2^n) × (C/2^n) × (S/2^n) are initialized with the weights of the network with the patch size of (R/2^(n+1)) × (C/2^(n+1)) × (S/2^(n+1)) for the original CT volume size of R × C × S. Categorical cross-entropy and accuracy are utilized as the loss function and metric, respectively, for training all the networks in this article. We use the Adam [54] optimizer, with an initial learning rate (LR) and exponential decay rates of β1 = 0.9 and β2 = 0.999.
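Compiling the networks per this protocol might look as follows; the learning-rate value is an assumption (it is not recoverable from the source), whereas β1 = 0.9 and the categorical cross-entropy loss with the accuracy metric are as stated.

```python
from tensorflow import keras

model = base_network()                        # from the sketch above
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4,  # assumed LR value
                                    beta_1=0.9, beta_2=0.999),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```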
2.4. Hardware and evaluation metrics

We execute all the comprehensive experiments on a Windows 10 machine utilizing the Python (with various Keras [27] and image processing APIs) and MATLAB programming languages. The device configurations of the used machine are: an Intel® Core™ i7-7700HQ CPU with 32.0 GB of installed memory (RAM) and a GeForce GTX 1080 GPU with 8.0 GB of memory (GDDR5).

We evaluate all the experimental outcomes by employing numerous metrics, such as recall, precision, and F1-score, to assess them from diverse perspectives. The recall measures the type-II error (a patient having positive COVID-19 characteristics erroneously failing to be identified), whereas the precision estimates the positive predictive value (the portion of absolutely positive identifications amid all the positive identifications). The harmonic mean of the recall and precision is manifested using the F1-score, conferring the tradeoff between these two metrics. Furthermore, we also quantify the prognostication probability of an anonymously selected CT sample using the Receiver Operating Characteristic (ROC) curve with its Area Under the ROC Curve (AUC) value.
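A short scikit-learn sketch of this evaluation for the binary protocol is given below, assuming y_true holds integer labels and y_prob the softmax outputs:

```python
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score

def evaluate(y_true, y_prob):
    """Weighted-average recall, precision, F1, and AUC for the binary task."""
    y_pred = y_prob.argmax(axis=1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted")
    auc = roc_auc_score(y_true, y_prob[:, 1])   # column 1: NCP probability
    return {"recall": recall, "precision": precision, "f1": f1, "auc": auc}
```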
3. Results and Discussion
In this section, the achieved results from the different experiments are reported with a comprehensive discussion. In subsection 3.1, we confer the results of COVID-19 identification utilizing various 3D patches and compare them with the utilization of the original CT images under the same experimental conditions and network. We discuss the results of progressive resizing over a single fixed size in subsection 3.2. We demonstrate the effects of the different proposed preprocessing on COVID-19 identification in subsection 3.3. Finally, in subsection 3.4, we show the results for binary- and multi-class COVID-19 identification applying our proposed network and preprocessing.
3.1. COVID-19 identification utilizing various 3D patches

We extract five different 3D patches, named P1, P2, P3, P4, and P5, having respective sizes of 16 × 16 × 9, 32 × 32 × 12, 64 × 64 × 15, 128 × 128 × 20, and 256 × 256 × 27. The original CT scans, having a size of 512 × 512 × 36, are named P6. The height and width of the patch P5 are half of those of P6, whereas these dimensions of the patch P4 are one-fourth of those of P6, and so on. We extract 2^n patches for an n-th-time reduction of the height and width. Therefore, we train and test our network with 71040, 35520, 17760, 8880, 4440, and 1110 samples for the 3D volumes P1 to P6, respectively.
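A sketch of this patch extraction is shown below; random corner sampling is an assumption, as the exact sampling scheme (random vs. tiled) is not spelled out here.

```python
import numpy as np

def extract_patches(volume, patch_hw, patch_d, n_patches, rng):
    """Sample n_patches 3D patches of size patch_hw x patch_hw x patch_d
    from a CT volume of shape (H, W, D)."""
    H, W, D = volume.shape
    patches = []
    for _ in range(n_patches):
        r = rng.integers(0, H - patch_hw + 1)
        c = rng.integers(0, W - patch_hw + 1)
        s = rng.integers(0, D - patch_d + 1)
        patches.append(volume[r:r + patch_hw, c:c + patch_hw, s:s + patch_d])
    return np.stack(patches)

# e.g. 16 P3 patches per scan: extract_patches(ct, 64, 15, 16, np.random.default_rng(0))
```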
The examples of the extracted patches are shown in Fig. 4, where we select the middle slices of the extracted patches of the same CT scan. The different patches in Fig. 4 show their respective resolutions, where it is seen that the patches P1 and P2 demonstrate very low resolutions. However, the effects of those patch resolutions are judged by classifying the NOR vs. NCP classes (see subsection 2.1).

Figure 4: Examples of the various extracted patches having different sizes, as mentioned earlier, where patches P1 to P6 are displayed in a) to f), respectively. The middle slices of each 3D patch are illustrated for the same sample (study 0258.nii.gz) in the MosMedData dataset. Slices are captured using the ITK-Snap Windows version.

The classification results are presented in Fig. 5 for all the patches (P1 to P5) and the original CT scans (P6), employing our 3D network without any type of preprocessing. The results show that the network fed with the P1 patch outputs COVID-19 identification with a type-II error of about 69%, while P2 produces identification results with a type-II error of about 56%. Although the P1 patch has double the samples, it fails to provide a more class-balanced performance than the P2 patch. This is because of the better resolution in the P2 patch than in the P1 patch (see Fig. 4), as the other experimental settings are constant.
Figure 5: The binary classification results from our 3D-CNN utilizing different 3D-patch sizes, where the bars with dots, horizontal lines, stars, and diagonal hatching respectively denote the recall and precision of the NOR- and NCP-classes. Best viewed in the color figure.

Again, the patch P3 further improves the identification results, with a type-II error of about 54%, and P4 also provides similar results to the P3 patch. It is noteworthy from those experimentations that the P3 or P4 patches have much fewer samples than P1 (4-times and 8-times fewer, respectively); still, they outperform the identification results of the P1 and P2 patches with the same experimental settings. Furthermore, the utilization of patch P5 again reduces the performance. P4 and P5 look visually similar, but P4 has two times the samples of P5. This experiment exposes that having fewer samples also generates class-biased classifiers if the input images are similar in resolution. Finally, the network with the original images (P6) also provides lower COVID-19 identification performance, as with the patch P5 (see Fig. 5). All the experiments show that our network with the P3 or P4 patches has outputted better identification results. Such experimental results undoubtedly prove that both the input resolution and the number of samples play an important role in CNN-based classifiers. We cannot increase the number of samples by taking smaller patch sizes, as they have a shallow resolution, which adversely affects the classifiers.

3.2. Progressive resizing over a single fixed size

The aforementioned results reveal that the utilization of better resolution with more samples increases the performance of the CNN. Therefore, we propose to employ progressive resizing of our proposed 3D-CNN (see details in subsection 2.3). Firstly, we begin training our network with a suitable 3D patch with more training samples from the previous experiments, acting as a base model. Then, we add some CNN layers to the input of the base model with the higher resolution (2-times more in this article), where the base model is adopted as a pre-trained model (see details in subsection 2.3). We repeat this network resizing until we reach the original given CT size (P6). The results for such progressive resizing are presented in the confusion matrices in Table 3 and the ROC curves (with respective AUC values) in Fig. 6.

Table 3: Normalized confusion matrices employing our network with progressive resizing, where we progressively increase the input resolution from P4 to P5 and then to P6 (original resolution). The first matrix (left) is for the resolution P4, the second (middle) for P4 ↦ P5, and the last (right) for P4 ↦ P5 ↦ P6.

| Predicted \ Actual (P4) | NOR | NCP |
| NOR | 24.26% | 13.52% |
| NCP | 75.74% | 86.48% |

| Predicted \ Actual (P4 ↦ P5) | NOR | NCP |
| NOR | 21.08% | 5.38% |
| NCP | 78.92% | 94.62% |

| Predicted \ Actual (P4 ↦ P5 ↦ P6) | NOR | NCP |
| NOR | 39.22% | 9.30% |
| NCP | 60.78% | 90.70% |
The confusion matrix in Table 3, for a more detailed analysis of the identification results, points out that 24.26% of the NOR samples are accurately classified as NOR, whereas 86.48% of the NCP samples are correctly classified as NCP while utilizing the 3D patch P4 with 8880 samples. This training is set as the base model. Now, employing the base model as a pre-trained model, the utilization of the P5 patch, with 4440 samples, decreases the false-negative rate of NCP by 8.14%, although the false-positive rate increases by 3.18% (see Table 3, left and middle). This training is the first-time progressive resizing (P4 ↦ P5). Again, employing P4 ↦ P5 as a pre-trained model, the utilization of P6 (original CT scans), with 1110 samples, increases the false-negative rate of NCP by 3.92%, still less than the baseline false-negative rate of 13.52%. It also decreases the false-positive rate by a margin of 18.14%, which is less than the former two false-positive rates (see all matrices in Table 3). Furthermore, the proposed final progressively resized network (P4 ↦ P5 ↦ P6) obtains an AUC of 0.754, outperforming P4 and P4 ↦ P5 respectively by 17.0% and 7.70% in terms of AUC, as presented in Fig. 6. Although the final progressively resized network (P4 ↦ P5 ↦ P6) has an input of the original CT scans, its performance is far better than the network trained with P6 alone (see Fig. 5). All the above discussions in this subsection experimentally validate the supremacy of progressive resizing for COVID-19 identification over training with single-size input CT scans.

Figure 6: The ROC curves for the progressive resizing of our 3D network: patch P4 (AUC = 0.584), P4 ↦ P5 (AUC = 0.677), and P4 ↦ P5 ↦ P6 (AUC = 0.754). Best viewed in the color figure.

3.3. Effects of different preprocessing

This subsection presents the COVID-19 identification results from our progressively resized 3D network employing different preprocessing, such as augmentation, segmentation, and class-rebalancing. Table 4 bestows the different experimental results, where we explicitly explicate the outcomes of each preprocessing for the COVID-19 identification from volumetric CT scans.
Table 4: The COVID-19 identification results on the MosMedData dataset from our 3D-CNN network utilizing different preprocessing (class-wise and weighted average metrics).

| Different experiments | Recall NOR | Recall NCP | Recall Avg. | Precision NOR | Precision NCP | Precision Avg. |
| Baseline model | 0.137 | 0.983 | 0.789 | 0.700 | 0.793 | 0.772 |
| PRN | 0.392 | 0.907 | 0.789 | 0.556 | 0.834 | 0.770 |
| PRNA | 0.529 | 0.884 | 0.803 | 0.574 | 0.864 | 0.798 |
| PRNS | 0.333 | 0.971 | 0.825 | 0.773 | 0.831 | 0.818 |
| PRNASCR | 0.706 | 0.919 | 0.870 | 0.720 | 0.913 | 0.869 |

PRN: progressively resized network; PRNA and PRNS: PRN with augmentation and segmentation, respectively; PRNASCR: PRN with augmentation, segmentation, and class-rebalancing.

The baseline model, without progressive resizing and inputting the original CT scans (P6), produces low identification consequences, resulting in type-II errors of 86.3% and 1.7% for the NOR- and NCP-classes, respectively.
The class-imbalanced training samples (NOR : NCP = 1 : 3.37), with less intra-class heterogeneity and high inter-class similarity, are the probable causes of providing such a poor result. However, the utilization of different 3D patches improves the intra-class heterogeneity and inter-class similarity, and the appliance of progressive resizing, where the base model acts as a pre-trained model, can mitigate those aforementioned difficulties, which is reflected in the PRN results (see the second row of Table 4). The appliance of the PRN successfully reduces the class-imbalanced results, improving the type-II error of the NOR-class by a margin of 25.5%.

Augmentation. The employment of the different image augmentations, such as random rotation, height & width shifting, gamma correction, adding Gaussian noise, and Elastic deformation (see details in subsection 2.2), with the PRN further improves the COVID-19 identification results, showing a far better class balance (the type-II error of the NOR-class is improved by a margin of 13.7%).

Segmentation. The well-defined segmentation, with less coarseness, is an essential requirement for further identification. The incorporation of segmentation with the PRN further promotes the identification results over the PRN alone, as exposed in Table 4. Several examples of the segmented lungs from our proposed unsupervised pipeline (as described in subsection 2.2) are depicted in Fig. 7 for qualitative evaluation.
Figure 7: Examples of lung segmentation results applying our unsupervised pipeline, as described in subsection 2.2.

However, the COVID-19 identification results incorporating lung segmentation with the PRN reflect its supremacy over the PRN alone in terms of the weighted average type-II error (see Table 4).

Augmentation, Segmentation, and Class-rebalancing. The combination of augmentations, segmentation, and class-rebalancing with the PRN provides the best COVID-19 identification results of this article. This experiment identifies COVID-19 from the chest CT scans with relatively less class imbalance, with a weighted average type-II error of 13.0%.
Figure 8: The ROC curves for the employment of the various preprocessing in our 3D network: baseline (AUC = 0.708), PRN (AUC = 0.754), PRNA (AUC = 0.788), PRNS (AUC = 0.824), and PRNASCR (AUC = 0.897). Best viewed in the color figure.

The proposed PRNASCR model obtains an AUC of 0.897, outperforming the baseline (0.708), PRN (0.754), PRNA (0.788), and PRNS (0.824) models, as depicted in Fig. 8.

3.4. Binary- and multi-class COVID-19 identification

This subsection displays the COVID-19 identification results using our proposed PRNASCR for the binary- and multi-class protocols (see subsection 2.1), utilizing 5-fold cross-validation.

Table 5: The confusion matrices for the COVID-19 identification on the MosMedData dataset from our proposed 3D-CNN network and preprocessing for both the binary-class (left) and multi-class (right) problems.
| Predicted \ Actual (2 classes) | NOR | NCP |
| NOR | 167 (65.75%) | 8 (0.94%) |
| NCP | 87 (34.25%) | 848 (99.06%) |

| Predicted \ Actual (4 classes) | NOR | MiNCP | MoNCP | SeNCP |
| NOR | 188 (74.02%) | 67 (9.80%) | 3 (2.40%) | 2 (4.26%) |
| MiNCP | 62 (24.41%) | 580 (84.80%) | 29 (23.20%) | 13 (27.66%) |
| MoNCP | 3 (1.18%) | 22 (3.22%) | 86 (68.80%) | 1 (2.13%) |
| SeNCP | 1 (0.39%) | 15 (2.18%) | 7 (5.60%) | 31 (65.95%) |

The detailed class-wise performance of our PRNASCR for both the binary- and multi-class protocols is exhibited in the confusion matrices in Table 5 (left) and Table 5 (right), correspondingly. The binary-classification results in Table 5 (left) show that, among the 254 NOR CT samples, the correctly classified samples are 167 (65.75%),
whereas only 87 (34.25%) samples are erroneously classified as NCP (false positives). It is also noteworthy that, among the 856 NCP samples, the rightly classified samples are 848 (99.06%), whereas only 8 (0.94%) samples are wrongly classified as NOR (false negatives). Again, the matrix in Table 5 (right) for the multi-class recognition reveals the FN and FP for the COVID-19 identification, where the numbers of wrongly classified CT images (type-I or type-II errors) are 66/256 (25.78%), 104/684 (15.20%), 39/125 (31.20%), and 16/47 (34.04%), respectively, for the NOR-, MiNCP-, MoNCP-, and SeNCP-classes. Those binary- and multi-class results expose that the NOR-class performance has been improved by an 8.27% margin with the other experimental settings constant. The identification results for the severity prediction (MoNCP vs. SeNCP) confer tremendous success in our pipeline, where barely 5.60% of the MoNCP and 2.13% of the SeNCP samples are prognosticated as the SeNCP- and MoNCP-classes, respectively (see Table 5). Although the overall macro-average AUC of the binary classification defeats the multi-class recognition (see Fig. 9) by a margin of 2.1%, the multi-class protocol also provides less inter-fold variation than the binary-class protocol, as depicted in Fig. 9. However, our approach for COVID-19 identification exhibits praiseworthy achievement, with high AUC values and less inter-fold variation in both of the class protocols.

Figure 9: The ROC curves for the binary- and multi-class identification of COVID-19, applying 5-fold cross-validations. (a) Binary-class: fold AUCs of 0.974, 0.861, 0.941, 0.944, and 0.852 (macro-avg. AUC = 0.914 ± 0.049). (b) Multi-class: fold AUCs of 0.856, 0.918, 0.936, 0.913, and 0.850 (macro-avg. AUC = 0.893 ± 0.035). Best viewed in the color figure.
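The 5-fold protocol behind Fig. 9 can be sketched as follows, with the epoch and batch settings as assumptions; each fold trains a fresh model, and the macro-average AUC with its standard deviation is reported.

```python
import numpy as np
from tensorflow import keras
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(x, y, build_model, n_splits=5):
    """5-fold CV sketch: train a fresh compiled model per fold and report
    the macro-average AUC +/- standard deviation (as in Fig. 9)."""
    aucs = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(x, y):
        model = build_model()                          # fresh weights per fold
        model.fit(x[train_idx], keras.utils.to_categorical(y[train_idx]),
                  epochs=30, batch_size=8, verbose=0)  # assumed schedule
        y_prob = model.predict(x[test_idx])
        aucs.append(roc_auc_score(y[test_idx], y_prob[:, 1]))
    return float(np.mean(aucs)), float(np.std(aucs))
```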
4. Conclusion
During the current COVID-19 pandemic emergency, to mitigate the permanent lung damage due to coronavirus, precise recognition with a negligible false-negative rate is highly essential. This article aimed to design an artificial screening system for automated COVID-19 identification. A progressively resized 3D-CNN classifier is recommended in this study, incorporating lung segmentation, image augmentations, and class-rebalancing. The experimental analysis confirms that training the CNN classifier with suitable smaller patches and progressively increasing the network size enhance the identification results. Furthermore, incorporating the lung segmentation empowers the classifier to learn salient and characteristic COVID-19 features better than utilizing whole chest CT images, driving improved COVID-19 classification performance. Again, the augmentations and class-rebalancing result in improved COVID-19 identification with highly class-balanced recognition, shielding the network from being biased towards a particular overrepresented class. In the future, the proposed pipeline will be employed in other volumetric medical imaging domains to validate its efficacy, versatility, and robustness. We also aim to deploy our trained model to a user-friendly web application for clinical utilization. The proposed system can be an excellent tool for clinicians to fight this deadly epidemic through the quicker and automated screening of COVID-19.
CRediT authorship contribution statement

M. K. Hasan: Conceptualization, Methodology, Software, Formal analysis, Investigation, Visualization, Writing - Review & Editing, Supervision; M. T. Jawad: Software, Validation, Data Curation, Writing - Original Draft; K. N. I. Hasan: Data Curation, Writing - Original Draft; S. B. Partha: Writing - Original Draft; M. M. A. Masba: Writing - Original Draft.
Acknowledgements
None. No funding to declare.
Conflict of Interest
All authors have no conflict of interest to publish this research.
References

[1] A. J. NEWS, 2020.
Bangladesh scientists create $3 kit. Can it help detect COVID-19? https://bit.ly/aj2020corona [Accessed: 14 July 2020].
[2] Abbas, A., Abdelsamea, M.M., Gaber, M.M., 2020a. Classification of covid-19 in chest x-ray images using detrac deep convolutional neural network. arXiv:2003.13815.
[3] Abbas, A., Abdelsamea, M.M., Gaber, M.M., 2020b. Detrac: Transfer learning of class decomposed medical images in convolutional neural networks. IEEE Access 8, 74901–74913.
[4] Acharya, U.R., Oh, S.L., Hagiwara, Y., Tan, J.H., Adam, M., Gertych, A., San Tan, R., 2017. A deep convolutional neural network model to classify heartbeats. Computers in Biology and Medicine 89, 389–396.
[5] Alshazly, H., Linse, C., Barth, E., Martinetz, T., 2020. Explainable covid-19 detection using chest ct scans and deep learning. arXiv:2011.05317.
[6] Angelov, P., Almeida Soares, E., 2020. Explainable-by-design approach for covid-19 classification via ct-scan. medRxiv.
[7] Apostolopoulos, I.D., Aznaouridis, S.I., Tzani, M.A., 2020. Extracting possibly representative covid-19 biomarkers from x-ray images with deep learning approach and image data related to pulmonary diseases. Journal of Medical and Biological Engineering, 1.
[8] Apostolopoulos, I.D., Mpesiana, T.A., 2020. Covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine, 1.
[9] Arani, E., Marzban, S., Pata, A., Zonooz, B., 2021. Rgpnet: A real-time general purpose semantic segmentation, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3009–3018.
[10] Bergstra, J., Bengio, Y., 2012. Random search for hyper-parameter optimization. The Journal of Machine Learning Research 13, 281–305.
[11] Bhattacharya, S., Maddikunta, P.K.R., Pham, Q.V., Gadekallu, T.R., Chowdhary, C.L., Alazab, M., Piran, M.J., et al., 2021. Deep learning and medical image processing for coronavirus (covid-19) pandemic: A survey. Sustainable Cities and Society 65, 102589.
[12] Cai, L., Gao, J., Zhao, D., 2020. A review of the application of deep learning in medical image classification and segmentation. Annals of Translational Medicine 8.
[13] Celik, Y., Talo, M., Yildirim, O., Karabatak, M., Acharya, U.R., 2020. Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recognition Letters.
[14] Chen, N., Zhou, M., Dong, X., Qu, J., Gong, F., Han, Y., Qiu, Y., Wang, J., Liu, Y., Wei, Y., et al., 2020. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in wuhan, china: a descriptive study. The Lancet 395, 507–513.
[15] Cheplygina, V., de Bruijne, M., Pluim, J.P., 2019. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Medical Image Analysis 54, 280–296.
[16] Chollet, F., 2017. Xception: Deep learning with depthwise separable convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258.
[17] Codella, N.C., Nguyen, Q.B., Pankanti, S., Gutman, D.A., Helba, B., Halpern, A.C., Smith, J.R., 2017. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM Journal of Research and Development 61, 5–1.
[18] Corman, V.M., Landt, O., Kaiser, M., Molenkamp, R., Meijer, A., Chu, D.K., Bleicker, T., Brünink, S., Schneider, J., Schmidt, M.L., et al., 2020. Detection of 2019 novel coronavirus (2019-ncov) by real-time rt-pcr. Eurosurveillance 25, 2000045.
[19] Cruz-Roa, A., Basavanhally, A., González, F., Gilmore, H., Feldman, M., Ganesan, S., Shih, N., Tomaszewski, J., Madabhushi, A., 2014. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks, in: Medical Imaging 2014: Digital Pathology, International Society for Optics and Photonics. p. 904103.
[20] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L., 2009. Imagenet: A large-scale hierarchical image database, in: 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE. pp. 248–255.
[21] Deng, L., Liu, Y., 2018. Deep learning in natural language processing. Springer.
[22] Dhruv, P., Naskar, S., 2020.
Image classification using convolutional neural network (cnn) and recurrent neural network (rnn): A review. Machine Learning and Information Processing, 367–381.
[23] Dutta, A., Hasan, M.K., Ahmad, M., 2020. Skin lesion classification using convolutional neural network for melanoma recognition. medRxiv.
[24] Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., Thrun, S., 2017. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115–118.
[25] Fu, Y., Lei, Y., Wang, T., Curran, W.J., Liu, T., Yang, X., 2020. Deep learning in medical image registration: a review. Physics in Medicine & Biology 65, 20TR01.
[26] Gaál, G., Maga, B., Lukács, A., 2020. Attention u-net based adversarial architectures for chest x-ray lung segmentation. arXiv:2003.10304.
[27] Géron, A., 2019. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems. O'Reilly Media.
[28] Hannun, A.Y., Rajpurkar, P., Haghpanahi, M., Tison, G.H., Bourn, C., Turakhia, M.P., Ng, A.Y., 2019. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature Medicine 25, 65.
[29] Hasan, M., Ahamed, M., Ahmad, M., Rashid, M., et al., 2017. Prediction of epileptic seizure by analysing time series eeg signal using k-nn classifier. Applied Bionics and Biomechanics 2017.
[30] Hasan, M., Alam, M., Elahi, M., Toufick, E., Roy, S., Wahid, S.R., et al., 2020a. Cvr-net: A deep convolutional neural network for coronavirus recognition from chest radiography images. arXiv:2007.11993.
[31] Hasan, M., Aleef, T.A., et al., 2019. Automatic mass detection in breast using deep convolutional neural network and svm classifier. arXiv:1907.04424.
[32] Hasan, M.K., Alam, M.A., Dahal, L., Elahi, M.T.E., Roy, S., Wahid, S.R., Marti, R., Khanal, B., 2020b. Challenges of deep learning methods for covid-19 detection using public datasets. medRxiv.
[33] Hasan, M.K., Alam, M.A., Das, D., Hossain, E., Hasan, M., 2020c. Diabetes prediction using ensembling of different machine learning classifiers. IEEE Access 8, 76516–76531.
[34] Hasan, M.K., Alam, M.A., Elahi, M.T.E., Roy, S., Martí, R., 2020d. Drnet: Segmentation and localization of optic disc and fovea from diabetic retinopathy image. Artificial Intelligence in Medicine, 102001.
[35] Hasan, M.K., Dahal, L., Samarakoon, P.N., Tushar, F.I., Martí, R., 2020e. DSNet: Automatic dermoscopic skin lesion segmentation. Computers in Biology and Medicine 120, 103738.
[36] He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
[37] He, X., Wang, S., Shi, S., Chu, X., Tang, J., Liu, X., Yan, C., Zhang, J., Ding, G., 2020. Benchmarking deep learning models and automated model design for covid-19 detection with chest ct scans. medRxiv.
[38] Hemdan, E.E.D., Shouman, M.A., Karar, M.E., 2020. Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in x-ray images. arXiv:2003.11055.
[39] Hesamian, M.H., Jia, W., He, X., Kennedy, P., 2019. Deep learning techniques for medical image segmentation: Achievements and challenges. Journal of Digital Imaging 32, 582–596.
[40] Horry, M.J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., Shukla, N., 2020. Covid-19 detection through transfer learning using multimodal imaging data. IEEE Access 8, 149808–149824.
[41] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H., 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861.
[42] Huang, C., Wang, Y., Li, X., Ren, L., Zhao, J., Hu, Y., Zhang, L., Fan, G., Xu, J., Gu, X., et al., 2020. Clinical features of patients infected with 2019 novel coronavirus in wuhan, china. The Lancet 395, 497–506.
[43] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q., 2017a. Densely connected convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
[44] Huang, X., Shan, J., Vaidya, V., 2017b. Lung nodule detection in ct using 3d convolutional neural networks, in: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), IEEE. pp. 379–383.
[45] Hussain, Z., Gimenez, F., Yi, D., Rubin, D., 2017. Differential data augmentation techniques for medical imaging classification tasks, in: AMIA Annual Symposium Proceedings, American Medical Informatics Association. p. 979.
[46] Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K., 2016. Squeezenet:
Information Sciences 546, 835–857.[51] Jim´enez-S´anchez, A., Albarqouni, S., Mateus, D., 2018. Capsule networks against medical imagingdata challenges, in: Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annota-tion of Biomedical Data and Expert Label Synthesis. Springer, pp. 150–160.[52] Jnawali, K., Arbabshirani, M.R., Rao, N., Patel, A.A., 2018. Deep 3d convolution neural network for ctbrain hemorrhage classification, in: Medical Imaging 2018: Computer-Aided Diagnosis, InternationalSociety for Optics and Photonics. p. 105751C.[53] Khan, A.I., Shah, J.L., Bhat, M.M., 2020. Coronet: A deep neural network for detection and diagnosisof covid-19 from chest x-ray images. Computer Methods and Programs in Biomedicine , 105581.[54] Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv:1412.6980 .[55] Ko, H., Chung, H., Kang, W.S., Kim, K.W., Shin, Y., Kang, S.J., Lee, J.H., Kim, Y.J., Kim, N.Y.,Jung, H., et al., 2020. Covid-19 pneumonia diagnosis using a simple 2d deep learning framework witha single chest ct image: model development and validation. Journal of medical Internet research 22,e19569.[56] Ko, J.P., Betke, M., 2001. Chest ct: automated nodule detection and assessment of change overtime—preliminary experience. Radiology 218, 267–273.[57] Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet classification with deep convolutionalneural networks, in: Advances in neural information processing systems, pp. 1097–1105.[58] LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. nature 521, 436–444.[59] Lee, E.Y., Ng, M.Y., Khong, P.L., 2020. Covid-19 pneumonia: what has ct taught us? The LancetInfectious Diseases 20, 384–385.[60] Li, Q., Guan, X., Wu, P., Wang, X., Zhou, L., Tong, Y., Ren, R., Leung, K.S., Lau, E.H., Wong, J.Y.,et al., 2020. Early transmission dynamics in wuhan, china, of novel coronavirus–infected pneumonia.New England Journal of Medicine .[61] Lin, M., Chen, Q., Yan, S., 2013. Network in network. arXiv:1312.4400 .
[62] Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I., 2017. A survey on deep learning in medical image analysis. Medical Image Analysis 42, 60–88.
[63] Long, Y., Gong, Y., Xiao, Z., Liu, Q., 2017. Accurate object localization in remote sensing images based on convolutional neural networks. IEEE Transactions on Geoscience and Remote Sensing 55, 2486–2498.
[64] Lu, H., Wang, H., Zhang, Q., Yoon, S.W., Won, D., 2019. A 3d convolutional neural network for volumetric image semantic segmentation. Procedia Manufacturing 39, 422–428.
[65] Mahmud, T., Alam, M., Chowdhury, S., Ali, S.N., Rahman, M.M., Fattah, S.A., Saquib, M., et al., 2021. Covtanet: A hybrid tri-level attention based network for lesion segmentation, diagnosis, and severity prediction of covid-19 chest ct scans. arXiv:2101.00691.
[66] Morozov, S., Andreychenko, A., Pavlov, N., Vladzymyrskyy, A., Ledikhova, N., Gombolevskiy, V., Blokhin, I., Gelezhe, P., Gonchar, A., Chernina, V., et al., 2020. Mosmeddata: Chest ct scans with covid-19 related findings. medRxiv.
[67] Narin, A., Kaya, C., Pamuk, Z., 2020. Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks. arXiv:2003.10849.
[68] Nayak, S.R., Nayak, D.R., Sinha, U., Arora, V., Pachori, R.B., 2020. Application of deep learning techniques for detection of covid-19 cases using chest x-ray images: A comprehensive study. Biomedical Signal Processing and Control 64, 102365.
[69] Nour, M., Cömert, Z., Polat, K., 2020. A novel medical diagnosis model for covid-19 infection detection based on deep features and bayesian optimization. Applied Soft Computing 97, 106580.
[70] Öksüz, C., Urhan, O., Güllü, M.K., 2020. Ensemble-cvdnet: A deep learning based end-to-end classification framework for covid-19 detection using ensembles of networks. arXiv:2012.09132.
[71] Ouchicha, C., Ammor, O., Meknassi, M., 2020. Cvdnet: A novel deep learning architecture for detection of coronavirus (covid-19) from chest x-ray images. Chaos, Solitons & Fractals 140, 110245.
[72] Ozkaya, U., Ozturk, S., Barstugan, M., 2020. Coronavirus (covid-19) classification using deep features fusion and ranking technique. arXiv:2004.03698.
[73] Ozsahin, I., Sekeroglu, B., Musa, M.S., Mustapha, M.T., Uzun Ozsahin, D., 2020. Review on diagnosis of covid-19 from chest ct images using artificial intelligence. Computational and Mathematical Methods in Medicine 2020.
[74] Ozturk, T., Talo, M., Yildirim, E.A., Baloglu, U.B., Yildirim, O., Acharya, U.R., 2020. Automated detection of covid-19 cases using deep neural networks with x-ray images. Computers in Biology and Medicine, 103792.
[75] Pathak, Y., Shukla, P.K., Tiwari, A., Stalin, S., Singh, S., Shukla, P.K., 2020. Deep transfer learning based classification model for covid-19 disease.
[90] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R., 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1929–1958.
[91] Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A., 2016. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv:1602.07261.
[92] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A., 2015. Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9.
[93] Talo, M., Yildirim, O., Baloglu, U.B., Aydin, G., Acharya, U.R., 2019. Convolutional neural networks for multi-class brain disease detection using mri images.
Computerized Medical Imaging and Graphics 78, 101673.
[94] Tan, J.H., Fujita, H., Sivaprasad, S., Bhandary, S.V., Rao, A.K., Chua, K.C., Acharya, U.R., 2017. Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network. Information Sciences 420, 66–76.
[95] Tan, M., Le, Q.V., 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv:1905.11946.
[96] Toğaçar, M., Ergen, B., Cömert, Z., 2020. Covid-19 detection using deep learning models to exploit social mimic optimization and structured chest x-ray images using fuzzy color and stacking approaches. Computers in Biology and Medicine, 103805.
[97] Tushar, F.I., Alyafi, B., Hasan, M.K., Dahal, L., 2019. Brain tissue segmentation using neuronet with different pre-processing techniques, in: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), IEEE. pp. 223–227.
[98] Waheed, A., Goyal, M., Gupta, D., Khanna, A., Al-Turjman, F., Pinheiro, P.R., 2020. Covidgan: Data augmentation using auxiliary classifier gan for improved covid-19 detection. IEEE Access 8, 91916–91923.
[99] Walker, P.G., Whittaker, C., Watson, O.J., Baguelin, M., Winskill, P., Hamlet, A., Djafaara, B.A., Cucunubá, Z., Mesa, D.O., Green, W., et al., 2020. The impact of covid-19 and strategies for mitigation and suppression in low- and middle-income countries. Science.
[100] Wang, D., Hu, B., Hu, C., Zhu, F., Liu, X., Zhang, J., Wang, B., Xiang, H., Cheng, Z., Xiong, Y., et al., 2020a. Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in wuhan, china. JAMA 323, 1061–1069.
[101] Wang, J., Li, F., Li, Q., 2009. Automated segmentation of lungs with severe interstitial lung disease in ct. Medical Physics 36, 4592–4599.
[102] Wang, S.H., Govindaraj, V.V., Górriz, J.M., Zhang, X., Zhang, Y.D., 2020b. Covid-19 classification by
Engineering .[111] Yadav, S.S., Jadhav, S.M., 2019. Deep convolutional neural network based medical image classificationfor disease diagnosis. Journal of Big Data 6, 1–18.[112] Yang, T., Wang, Y.C., Shen, C.F., Cheng, C.M., 2020. Point-of-care rna-based diagnostic device forcovid-19.[113] Yıldırım, ¨O., P(cid:32)lawiak, P., Tan, R.S., Acharya, U.R., 2018. Arrhythmia detection using deep con-volutional neural network with long duration ecg signals. Computers in biology and medicine 102,411–420.[114] Yip, S.S., Klanecek, Z., Naganawa, S., Kim, J., Studen, A., Rivetti, L., Jeraj, R., 2020. Performanceand robustness of machine learning-based radiomic covid-19 severity prediction. medRxiv .[115] Zhang, K., Liu, X., Shen, J., Li, Z., Sang, Y., Wu, X., Zha, Y., Liang, W., Wang, C., Wang, K.,et al., 2020. Clinically applicable ai system for accurate diagnosis, quantitative measurements, andprognosis of covid-19 pneumonia using computed tomography. Cell .116] Zhang, X., Zhou, X., Lin, M., Sun, J., 2018. Shufflenet: An extremely efficient convolutional neuralnetwork for mobile devices, in: Proceedings of the IEEE conference on computer vision and patternrecognition, pp. 6848–6856.[117] Zhao, J., Zhang, Y., He, X., Xie, P., 2020. Covid-ct-dataset: a ct scan dataset about covid-19.arXiv:2003.13865 .[118] Zhou, X., Yamada, K., Kojima, T., Takayama, R., Wang, S., Zhou, X., Hara, T., Fujita, H., 2018.Performance evaluation of 2d and 3d deep learning approaches for automatic segmentation of multipleorgans on ct images, in: Medical Imaging 2018: Computer-Aided Diagnosis, International Society forOptics and Photonics. p. 105752C.[119] Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V., 2018. Learning transferable architectures for scalableimage recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition,pp. 8697–8710.