Reliable COVID-19 Detection Using Chest X-ray Images


Authors: Aysen Degerli, Mete Ahishali, Serkan Kiranyaz, Muhammad E. H. Chowdhury, Moncef Gabbouj

Aysen Degerli†, Mete Ahishali†, Serkan Kiranyaz∗, Muhammad E. H. Chowdhury∗, and Moncef Gabbouj†

† Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
∗ Department of Electrical Engineering, Qatar University, Doha, Qatar

ABSTRACT

Coronavirus disease 2019 (COVID-19) has highlighted the need for computer-aided diagnosis with automatic, accurate, and fast algorithms. Recent studies have applied machine learning algorithms for COVID-19 diagnosis over chest X-ray (CXR) images. However, the data scarcity in these studies prevents a reliable evaluation, carries the potential of overfitting, and limits the performance of deep networks. Moreover, these networks can usually discriminate COVID-19 pneumonia from healthy subjects only or, occasionally, from a limited set of pneumonia types. Thus, there is a need for a robust and accurate COVID-19 detector evaluated over a large CXR dataset. To address this need, in this study we propose a reliable COVID-19 detection network, ReCovNet, which can discriminate COVID-19 pneumonia from 14 different thoracic diseases and healthy subjects. To accomplish this, we have compiled the largest COVID-19 CXR dataset, QaTa-COV19, with 124,616 images including 4603 COVID-19 samples. The proposed ReCovNet achieved a detection performance with 98.57% sensitivity and 99.77% specificity.

Index Terms — SARS-CoV-2, COVID-19 Detection, Machine Learning, Deep Learning

1. INTRODUCTION

Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was declared a pandemic by the World Health Organization in March 2020. The disease seriously affects people in high-risk groups (especially the elderly), leading to hospitalization, intubation, and even death [1].
To prevent the spread of the disease, the detection and isolation of infected patients is of the utmost importance. However, the diagnosis of COVID-19 is challenging, since its symptoms, such as fever, cough, fatigue, and breathlessness, resemble those of other viral infections [2]. Therefore, reliable detection of the disease is of significant importance.

The current diagnostic tools to detect COVID-19 are nucleic acid detection with real-time polymerase chain reaction (RT-PCR), computed tomography (CT), and chest X-ray (CXR) imaging. RT-PCR has become the gold standard for COVID-19 diagnosis. However, RT-PCR tests suffer from instability and a high false alarm rate [3]. On the other hand, CT imaging has higher sensitivity than the RT-PCR test and is thus recommended for suspected cases [4]. However, CT imaging has limited sensitivity in early COVID-19 cases [4]. Thus, CXR imaging is widely used for the diagnosis of COVID-19, mainly because of its faster acquisition, lower radiation exposure, and easier accessibility compared to the aforementioned tools [5].

Many studies have utilized Deep Learning (DL) algorithms for COVID-19 detection [6–8]. However, the reliability of these models is under question due to their hidden decision-making process. In fact, the activation maps of the deep models reveal the unreliability of their decision-making process: irrelevant areas on the CXRs outside of the lung area, such as bones, background, or text, affect the decision of the network. Therefore, several studies [9–11] have attempted to prevent deep models from learning from these irrelevant areas with a two-stage approach for COVID-19 detection, processing only the lung areas via lung segmentation in the first stage. In the second stage, only the segmented lung area of the CXRs is given to the deep models as the input.
Although these studies have achieved good performance for COVID-19 detection, data scarcity is their main drawback, as it can yield overfitting and hinders an accurate evaluation. Moreover, the datasets used in these studies encapsulate no or only limited thoracic diseases, i.e., viral and bacterial pneumonia against COVID-19 pneumonia, which makes them unreliable in real-case scenarios for COVID-19 diagnosis.

In this study, to address the aforementioned issues, we propose ReCovNet: a reliable COVID-19 detection network, which is an end-to-end network solution. Instead of detecting COVID-19 directly from the CXR image or the segmented lung area on the CXR, we embed this information into the ReCovNet model by transfer learning from a segmentation network. For this purpose, we initially train the segmentation network and detach its encoder block to construct the ReCovNet model for COVID-19 detection. Additionally, in this work, we extend the QaTa-COV19 dataset that was introduced in our previous study [12]. The extended version of QaTa-COV19 is the largest COVID-19 dataset, with 124,616 images including 4603 COVID-19 samples. The control group CXRs consist of 14 different thoracic diseases and healthy subjects. Moreover, QaTa-COV19 contains a subset of 1065 early COVID-19 cases showing no or limited signs of COVID-19 pneumonia, which makes the diagnosis more challenging. Accordingly, the proposed ReCovNet trained over the largest QaTa-COV19 dataset has an outstanding performance with a reliable diagnosis compared to state-of-the-art deep models. Lastly, the benchmark QaTa-COV19 dataset is publicly shared with the research community¹.

The rest of the paper is organized as follows. In Section 2, we introduce the QaTa-COV19 dataset and give the details of our proposed ReCovNet model along with the state-of-the-art deep models. In Section 3, we report the experimental results, and we conclude the paper in Section 4.
2. MATERIALS AND METHODOLOGY

In this section, we first introduce the benchmark QaTa-COV19 dataset. Then, the state-of-the-art deep models are introduced for COVID-19 diagnosis. Lastly, we propose the ReCovNet model for reliable COVID-19 detection.

¹ The benchmark QaTa-COV19 is publicly shared at the repository https://www.kaggle.com/aysendegerli/qatacov19-dataset.

Fig. 1: The proposed ReCovNet, where transfer learning is performed from the segmentation network initially trained on CXRs for lung segmentation.

2.1. The Benchmark QaTa-COV19 Dataset

The benchmark QaTa-COV19 dataset, compiled by researchers of Qatar University and Tampere University, is so far the largest COVID-19 dataset, including 4603 COVID-19 and 120,013 control group CXRs. The detection task on this dataset is especially challenging, since QaTa-COV19 contains 1065 samples from early COVID-19 cases that show no or limited signs of COVID-19 pneumonia. COVID-19 samples have been collected from publicly available datasets and repositories [10, 13–18] and were preprocessed by excluding low-quality images and any duplicates. The control group images were collected from several datasets: ChestX-ray14 [19], X-rays from pediatric patients [20], and Chest X-rays (Indiana University) [21]. We have used only the bacterial and viral pneumonia CXRs from the pediatric patients to increase the number of pneumonia samples for a challenging diagnosis. Additionally, we included the lateral-view CXRs only from the Chest X-rays (Indiana University) dataset, since all other samples in the control group are frontal-view CXRs, whereas COVID-19 samples include both lateral- and frontal-view CXRs.

Table 1: Details of the QaTa-COV19 dataset.
Data | Training Samples | Augmented | Augmented Training Samples | Test Samples
ChestX-ray14 | 86,524 | ✗ | 86,524 | 25,596
Bacterial Pneumonia | 2130 | ✓ | 5000 | 630
Chest X-rays (Indiana University) | 2816 | ✓ | 5000 | 832
Viral Pneumonia | 1146 | ✓ | 5000 | 339
COVID-19 | 3553 | ✓ | 10,000 | 1050
Total | 96,169 | | 111,524 | 28,447

Table 1 shows the number of samples in the QaTa-COV19 dataset. COVID-19 detection is performed against the control group images, which consist of 14 different thoracic diseases and healthy subjects; therefore, we perform a binary classification. Since the train and test sets of the ChestX-ray14 dataset are predefined, we have randomly split the Chest X-rays (Indiana University), bacterial and viral pneumonia, and COVID-19 CXRs with the same train/test ratio as in [19]. The CXRs in the dataset are resized to 224 × 224 pixels. We have augmented the images, except for the ChestX-ray14 samples, using the ImageDataGenerator in Keras. The images are randomly shifted by 10% both horizontally and vertically, and randomly rotated within a 10-degree range. Lastly, the 'nearest' mode is selected to fill the blank sections.

2.2. COVID-19 Detection with Deep Models

DL algorithms have achieved state-of-the-art results on many computer vision tasks, including COVID-19 detection. Especially during the pandemic, recent studies concluded that DL algorithms with Convolutional Neural Networks can achieve outstanding performance for COVID-19 diagnosis. Nevertheless, the major issue in DL is that supervised deep models require a large amount of data to generalize well to unseen data. Thus, when subjected to data scarcity, such models fail in the testing phase due to overfitting. In this study, our first objective is to investigate the performance of state-of-the-art deep models by transfer learning on the largest COVID-19 dataset: QaTa-COV19.
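The augmentation described in Section 2.1 is performed with Keras's ImageDataGenerator; an equivalent framework-free sketch with SciPy is shown below. The function name `augment` and the fixed random seed are ours, not the paper's.

```python
import numpy as np
from scipy import ndimage

def augment(img, rng, shift_frac=0.1, max_deg=10.0):
    """Random 10% horizontal/vertical shift and rotation within a
    10-degree range; blank sections are filled in 'nearest' mode,
    mirroring the Keras ImageDataGenerator settings in Section 2.1."""
    h, w = img.shape
    dy = rng.uniform(-shift_frac, shift_frac) * h
    dx = rng.uniform(-shift_frac, shift_frac) * w
    shifted = ndimage.shift(img, (dy, dx), mode="nearest")
    angle = rng.uniform(-max_deg, max_deg)
    return ndimage.rotate(shifted, angle, reshape=False, mode="nearest")

rng = np.random.default_rng(0)
cxr = rng.random((224, 224))   # stand-in for a resized CXR
aug = augment(cxr, rng)
```

In Keras, the same settings correspond to `ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, rotation_range=10, fill_mode="nearest")`.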
The state-of-the-art networks are selected as follows:

• DenseNet-121 [22] is a 121-layer deep network that achieves maximum information flow by connecting the layers with additional input nodes.
• ResNet-50 [23] is a 50-layer deep network that introduces residual blocks to prevent gradient vanishing in deep model structures via shortcut connections that merge input and output through the stacked layers.
• Inception-v3 [24] is a deep network with low computational complexity compared to other state-of-the-art deep models. The reduced complexity is ensured by pruning and factorizing operations inside the network.
• Inception-ResNet-v2 [25] unites the structure of the Inception model [24] with residual blocks [23] to achieve state-of-the-art results in computer vision tasks at a lower computational cost.

In order to utilize the deep models in the COVID-19 detection task, we modify their output layers by inserting a global average pooling layer, a fully connected layer with 2 neurons, and a softmax activation function. Transfer learning is performed on the models by initializing their weights with the ImageNet weights.

2.3. ReCovNet: Reliable COVID-19 Detection Network

DL algorithms are often considered black boxes, since their decision-making process is latent. In order to reveal this decision-making process, the authors in [26] proposed the Grad-CAM method, which computes activation maps indicating the areas of the input image considered by the deep model during the classification task. In the COVID-19 detection task, our observations of the activation maps obtained with the Grad-CAM approach show that the state-of-the-art deep models tend to learn and perform the classification from irrelevant areas of the CXRs, such as bones, background, or text. Therefore, the decisions of these models may be considered unreliable for COVID-19 detection.
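The output-layer modification described in Section 2.2 (global average pooling, a 2-neuron fully connected layer, softmax) amounts to the following computation. The 7 × 7 × 1024 feature-map shape is an illustrative assumption for a DenseNet-121 backbone on a 224 × 224 input, and `detection_head` is our name, not the paper's.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def detection_head(feature_map, W, b):
    """Head appended to each backbone: global average pooling over the
    spatial dimensions, then a 2-neuron fully connected layer with a
    softmax producing [p(control group), p(COVID-19)]."""
    pooled = feature_map.mean(axis=(0, 1))  # (H, W, C) -> (C,)
    return softmax(pooled @ W + b)          # (C,) -> (2,)

rng = np.random.default_rng(0)
features = rng.standard_normal((7, 7, 1024))  # assumed backbone output
W = rng.standard_normal((1024, 2)) * 0.01     # illustrative head weights
b = np.zeros(2)
probs = detection_head(features, W, b)
```

In Keras terms this is a `GlobalAveragePooling2D` layer followed by `Dense(2, activation="softmax")` attached to an ImageNet-initialized backbone.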
In order to overcome this unreliability issue, this study proposes ReCovNet: an end-to-end network for reliable COVID-19 detection.

ReCovNet is a deep network that considers the lung areas of the input CXR images to detect COVID-19 pneumonia. The structure of the proposed ReCovNet is given in Fig. 1. Accordingly, to construct ReCovNet, a segmentation network is trained in phase I. The lung segmentation network is a convolutional autoencoder that maps the input image X to its corresponding output mask M: M ← P_{θ,φ}(X). Any deep model can be used as the encoder block of the network, ε_θ. The decoder block of the segmentation network, on the other hand, is similar to the U-Net [27] model except for its U-shaped architecture, in which the low-level features of the encoder block are concatenated with the high-level features at the decoder level. The U-shaped architecture is excluded by removing the skip connections, which perform this concatenation operation. The reason for constructing an encoder-decoder network without skip connections is to avoid contributions from the initial layers, so that the network makes decisions from high-level features that are closer to the segmentation mapping of the input image. Based on our observations, this approach improves the performance of ReCovNet in terms of the reliability observed in the activation maps.

The decoder block of the segmentation network consists of φ ∈ {b_j, w_j}_{j=1}^L with L layers composed of five stages. Each stage consists of a ×2 upsampling layer, followed by two repetitions of a convolutional layer, batch normalization, and a Rectified Linear Unit (ReLU) activation function. The output of the last stage is connected to a convolutional layer with a sigmoid activation function to reconstruct the segmentation mask at the output.
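The decoder stages above can be traced shape-by-shape. The 7 × 7 latent-map size (i.e., an encoder with total stride 32 on a 224 × 224 input) is our assumption, since the paper does not state it.

```python
# Decoder of the segmentation network: five stages, each a x2 upsampling
# followed by two (conv + batch norm + ReLU) blocks, then a final
# convolution with a sigmoid activation producing the 1-channel lung mask.
STAGE_FILTERS = [256, 128, 64, 32, 16]

def decoder_shapes(latent_hw=7):
    """Trace (height, width, channels) through the decoder, assuming a
    7x7 latent map from the encoder (an assumption on our part)."""
    shapes = []
    hw = latent_hw
    for f in STAGE_FILTERS:
        hw *= 2                      # upsampling layer (x2)
        shapes.append((hw, hw, f))   # conv blocks keep the spatial size
    shapes.append((hw, hw, 1))       # final conv + sigmoid -> mask
    return shapes

shapes = decoder_shapes()
```

Under this assumption the five ×2 stages take 7 → 14 → 28 → 56 → 112 → 224, recovering the full input resolution for the mask.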
In order, the numbers of convolutional-layer filters are {256, 128, 64, 32, 16, 1}, with a kernel size of k = (3 × 3). Lastly, training is performed over N samples {x_{s,train}^j, M^j}_{j=1}^N, where x_s and M are the training data and the ground-truth segmentation masks, respectively. The loss function used in training is a hybrid function: the summation of the binary focal and Dice loss functions.

During phase II of the training, we construct the convolutional layers of ReCovNet from ε_θ, which generates the latent features f ← ε_θ(·). Then, f is vectorized and downsampled by attaching a global average pooling layer and a fully connected layer with 2 neurons using a softmax activation function. We perform the classification task with the categorical cross-entropy loss function by training ReCovNet over N samples {x_train^j, y_train^j}_{j=1}^N, where x and y are the training data and ground-truth labels, respectively. During this training phase, ε_θ is not frozen; therefore, the latent features f are further adjusted to the benchmark QaTa-COV19 dataset. Overall, during inference, ReCovNet does not require prior lung segmentation to provide reliable COVID-19 detection. Finally, we propose two versions of the model: ReCovNet-v1 is formed with a DenseNet-121 encoder, due to its good performance in the COVID-19 detection task, and ReCovNet-v2 with a ResNet-50 encoder.

3. EXPERIMENTAL EVALUATION

In this section, the experimental setup is presented. Then, the experimental results are given on the benchmark QaTa-COV19 dataset.

3.1. Experimental Setup

The performance metrics are calculated on the test (unseen) set of the QaTa-COV19 dataset. We consider COVID-19 CXRs as the positive class and control group samples as the negative class.
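The phase-I hybrid segmentation loss described in Section 2.3 (binary focal plus Dice) can be sketched as follows. The focal-loss parameters γ = 2 and α = 0.25 are the common defaults and are our assumption, since the paper does not state them.

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    # Soft Dice loss between a binary mask and a predicted mask in [0, 1].
    inter = (y_true * y_pred).sum()
    return 1.0 - (2.0 * inter + eps) / (y_true.sum() + y_pred.sum() + eps)

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    # Binary focal loss; gamma and alpha are assumed default values.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    pt = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    a = np.where(y_true == 1, alpha, 1.0 - alpha)
    return float((-a * (1.0 - pt) ** gamma * np.log(pt)).mean())

def hybrid_loss(y_true, y_pred):
    # Summation of the binary focal and Dice losses (phase-I training).
    return binary_focal_loss(y_true, y_pred) + dice_loss(y_true, y_pred)

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0   # toy ground-truth lung mask
```

A perfect prediction drives both terms toward zero, while an inverted mask is heavily penalized by both.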
Accordingly, we form the confusion matrix (CM) elements as follows: true positives are the correctly classified COVID-19 samples, false positives are the control group samples misclassified as the positive class, true negatives are the correctly detected control group samples, and false negatives are the COVID-19 samples misclassified as the negative class. The performance metrics are defined as follows: sensitivity is the rate of correctly detected COVID-19 samples among all positive samples, specificity is the ratio of correctly classified control group samples among all negative samples, precision is the rate of correctly classified positive samples among all samples detected as positive class members, and accuracy is the rate of correctly detected samples among all the data. Moreover, we define the F-score as follows:

F(β) = (1 + β²)(precision × sensitivity) / (β² × precision + sensitivity)   (1)

where the harmonic average between precision and sensitivity, obtained with β = 1, is defined as the F1-score. On the other hand, to penalize false negatives more heavily than false positives, the F2-score is defined with β = 2. The major performance metric in COVID-19 detection is sensitivity, since any misdetection of the disease threatens global health; minimizing the false alarm rate (1 − specificity) is also our target.

The networks are implemented using the TensorFlow library on an NVIDIA® GeForce RTX 2080 Ti GPU card. The optimizer choice is Adam with its default momentum parameters. The ReCovNet models are trained for 15 epochs with a learning rate of α = 10⁻⁵ and a batch size of 64. The segmentation networks are trained for 15 epochs with a learning rate of α = 10⁻⁴ and a batch size of 32. We have utilized the Montgomery County X-ray Set [28] and Japanese Society of Radiological Technology (JSRT) [29] datasets to train the segmentation models.
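The metrics defined above, including Eq. (1), can be computed directly from the confusion-matrix counts. The helper below (our naming) reproduces the ReCovNet-v2 row of Table 2 from its confusion matrix in Table 3.

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, accuracy, and F(beta) from
    the confusion-matrix elements defined in Section 3.1."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)

    def f_score(beta):
        # Eq. (1): F(beta) = (1 + beta^2) * P * S / (beta^2 * P + S)
        return ((1 + beta**2) * precision * sensitivity
                / (beta**2 * precision + sensitivity))

    return sensitivity, specificity, precision, accuracy, f_score

# ReCovNet-v2 confusion matrix (Table 3b): TP=1035, FP=63, TN=27334, FN=15.
sens, spec, prec, acc, f = detection_metrics(tp=1035, fp=63, tn=27334, fn=15)
```

These counts yield 98.571% sensitivity, 94.262% precision, 96.369% F1-score, and 97.678% F2-score, matching the ReCovNet-v2 row of Table 2.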
All the images are frontal-view CXRs and have their corresponding ground truths, except for the JSRT dataset; thus, the segmentation masks provided by [30] are used as the ground truths for JSRT [29]. Overall, the lung segmentation dataset contains 385 CXRs. For the performance evaluation, we split this data with a ratio of 80% training to 20% test. Then, the training samples are augmented up to 1000 samples.

Table 2: COVID-19 detection performance results (%) computed over the test (unseen) set of the QaTa-COV19 dataset using four state-of-the-art deep models and the proposed ReCovNet models.

Model | Sensitivity | Specificity | Precision | F1-Score | F2-Score | Accuracy
ResNet-50 | 96.571 | 99.953 | 98.734 | 97.641 | 96.996 | 99.828
Inception-v3 | 94.762 | 99.821 | 95.307 | 95.033 | 94.870 | 99.634
Inception-ResNet-v2 | 94.286 | 99.803 | 94.828 | 94.556 | 94.394 | 99.599
DenseNet-121 | 97.429 | 99.974 | 99.320 | 98.365 | 97.801 | 99.880
ReCovNet-v1 | 97.810 | 99.901 | 97.438 | 97.624 | 97.735 | 99.824
ReCovNet-v2 | 98.571 | 99.770 | 94.262 | 96.369 | 97.678 | 99.726

3.2. Experimental Results

In this section, the performance of the segmentation networks is first investigated. Over the test set of the segmentation dataset, the segmentation model with the DenseNet-121 encoder has achieved 96.12% sensitivity and 98.59% specificity, and the model with the ResNet-50 encoder 97.12% sensitivity and 98.22% specificity for the lung segmentation task.

The COVID-19 detection performance results of the state-of-the-art and ReCovNet models are presented in Table 2. We have observed that each model performs well on COVID-19 detection, with > 94% sensitivity. The best of the state-of-the-art deep models is DenseNet-121, with 97.43% sensitivity and 99.97% specificity. The performance of ReCovNet-v1 is very close to that of DenseNet-121. However, the best sensitivity in COVID-19 detection, 98.57%, is achieved by ReCovNet-v2, which is an outstanding performance for diagnosis on the largest COVID-19 dataset. Moreover, ReCovNet-v2 also holds a high specificity level of 99.77%. Table 3 shows the confusion matrices of the best performing models: DenseNet-121 among the state-of-the-art deep models and ReCovNet-v2 among the proposed networks. The best detection (sensitivity) rate is achieved by ReCovNet-v2, which misses only 15 COVID-19 samples among 1050 images.

Table 3: Confusion matrices of the best performing DenseNet-121 and the proposed ReCovNet-v2 model for COVID-19 detection.

(a) DenseNet-121 (Ground Truth \ Predicted) | Control Group | COVID-19
Control Group | 27390 | 7
COVID-19 | 27 | 1023

(b) ReCovNet-v2 (Ground Truth \ Predicted) | Control Group | COVID-19
Control Group | 27334 | 63
COVID-19 | 15 | 1035

Fig. 2: The activation maps extracted by the Grad-CAM [26] approach for the models. The top two rows are COVID-19 samples, whereas the bottom row is a CXR from the control group images.

The results on the largest COVID-19 dataset, which includes many CXR images from different thoracic diseases, show that deep models can achieve elegant COVID-19 detection performance. However, the activation maps extracted by the Grad-CAM [26] approach reveal the contribution of irrelevant regions, and this is a major issue of these models in COVID-19 diagnosis. To exemplify this issue, we have compared the proposed ReCovNet-v1 and ReCovNet-v2 models with the deep models, as shown in Fig. 2. The activation maps show that the DenseNet-121 and ResNet-50 models clearly obtain information from irrelevant regions of the CXRs, while the proposed models focus on the relevant regions.

4. CONCLUSIONS

The diagnosis of COVID-19 is a crucial task to prevent the further spread of the disease. This study investigates the limitations of state-of-the-art deep models that are trained for COVID-19 detection directly from CXRs.
To address these problems, we propose an end-to-end reliable COVID-19 detection network with pre-trained convolutional layers. We have compiled and publicly shared the largest COVID-19 dataset, QaTa-COV19, which includes 4603 COVID-19 samples and 120,013 CXRs from 14 different thoracic diseases and normal samples. The experimental results over this benchmark dataset have shown that the proposed approach achieves the highest sensitivity level compared to competing methods. We have also demonstrated that the proposed models properly focus their analysis on the relevant region of the CXR, instead of the irrelevant activation observed in the competing models. In our future work, more CXR images will be used to train the lung segmentation models to further increase the reliability of our approach to COVID-19 detection.

5. REFERENCES

[1] World Health Organization, "Coronavirus disease 2019 (COVID-19): situation report, 88," 2020.
[2] T. Singhal, "A review of coronavirus disease-2019 (COVID-19)," The Indian Journal of Pediatrics, vol. 87, no. 4, pp. 281–286, 2020.
[3] A. Tahamtan and A. Ardebili, "Real-time RT-PCR in COVID-19 detection: issues affecting the results," Expert Review of Molecular Diagnostics, vol. 20, no. 5, 2020.
[4] A. Bernheim, X. Mei, M. Huang, Y. Yang, Z. A. Fayad, N. Zhang, K. Diao, B. Lin, X. Zhu, K. Li, et al., "Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection," Radiology, vol. 295, no. 3, pp. 200463, 2020.
[5] D. J. Brenner and E. J. Hall, "Computed tomography—an increasing source of radiation exposure," New England Journal of Medicine, vol. 357, no. 22, pp. 2277–2284, 2007.
[6] N. K. Chowdhury, M. M. Rahman, and M. A. Kabir, "PDCOVIDNet: a parallel-dilated convolutional neural network architecture for detecting COVID-19 from chest X-ray images," Health Information Science and Systems, vol. 8, no. 1, pp. 1–14, 2020.
[7] T. D. Pham, "Classification of COVID-19 chest X-rays with deep learning: new models or fine tuning?," Health Information Science and Systems, vol. 9, no. 1, pp. 1–11, 2020.
[8] M. E. H. Chowdhury, T. Rahman, A. Khandakar, R. Mazhar, M. A. Kadir, Z. B. Mahbub, K. R. Islam, M. S. Khan, A. Iqbal, N. A. Emadi, M. B. I. Reaz, and M. T. Islam, "Can AI help in screening viral and COVID-19 pneumonia?," IEEE Access, vol. 8, pp. 132665–132676, 2020.
[9] M. Z. Alom, M. Rahman, M. S. Nasrin, T. M. Taha, and V. K. Asari, "COVID MTNet: COVID-19 detection with multi-task deep learning approaches," arXiv preprint, 2020.
[10] A. Haghanifar, M. M. Majdabadi, and S. Ko, "COVID-CXNet: Detecting COVID-19 in frontal chest X-ray images using deep learning," arXiv preprint, 2020.
[11] E. Goldstein, D. Keidar, D. Yaron, Y. Shachar, A. Blass, L. Charbinsky, I. Aharony, L. Lifshitz, D. Lumelsky, Z. Neeman, et al., "COVID-19 classification of X-ray images using deep neural networks," arXiv preprint arXiv:2010.01362, 2020.
[12] A. Degerli, M. Ahishali, M. Yamac, S. Kiranyaz, M. E. H. Chowdhury, K. Hameed, T. Hamid, R. Mazhar, and M. Gabbouj, "COVID-19 infection map generation and detection from chest X-ray images," arXiv preprint, 2020.
[13] M. I. Vayá, J. M. Saborit, J. A. Montell, A. Pertusa, A. Bustos, M. Cazorla, J. Galant, X. Barber, D. Orozco-Beltrán, F. Garcia, et al., "BIMCV COVID-19+: a large annotated dataset of RX and CT images from COVID-19 patients," arXiv preprint arXiv:2006.01174, 2020.
[14] "COVID-19 Image Repository," 2020, https://github.com/ml-workgroup/covid-19-image-repository. [Accessed on 12-January-2021].
[15] "COVID-19 DATABASE," 2020, https://www.sirm.org/category/senza-categoria/covid-19/. [Accessed on 12-January-2021].
[16] J. P. Cohen, P. Morrison, and L. Dao, "COVID-19 image data collection," arXiv preprint, 2020.
[17] "COVID-19 Radiography Database," 2020, https://www.kaggle.com/tawsifurrahman/covid19-radiography-database. [Accessed on 12-January-2021].
[18] "Chest Imaging," 2020, https://www.eurorad.org/. [Accessed on 12-January-2021].
[19] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, "ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2097–2106.
[20] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, et al., "Identifying medical diagnoses and treatable diseases by image-based deep learning," Cell, vol. 172, no. 5, pp. 1122–1131, 2018.
[21] "Chest X-rays (Indiana University)," 2020, https://www.kaggle.com/raddar/chest-xrays-indiana-university?select=indiana_reports.csv. [Accessed on 12-January-2021].
[22] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269.
[23] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
[24] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2818–2826.
[25] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 4278–4284.
[26] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626.
[27] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[28] S. Jaeger, S. Candemir, S. Antani, Y.-X. J. Wáng, P.-X. Lu, and G. Thoma, "Two public chest X-ray datasets for computer-aided screening of pulmonary diseases," Quantitative Imaging in Medicine and Surgery, vol. 4, no. 6, pp. 475, 2014.
[29] J. Shiraishi, S. Katsuragawa, J. Ikezoe, T. Matsumoto, T. Kobayashi, K.-i. Komatsu, M. Matsui, H. Fujita, Y. Kodera, and K. Doi, "Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules," American Journal of Roentgenology, vol. 174, no. 1, pp. 71–74, 2000.
[30] B. V. Ginneken, M. B. Stegmann, and M. Loog, "Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database," Medical Image Analysis, vol. 10, no. 1, pp. 19–40, 2006.
