Generative adversarial network for segmentation of motion affected neonatal brain MRI

N. Khalili¹, E. Turk², M. Zreik¹, M.A. Viergever¹,³, M.J.N.L. Benders²,³, and I. Išgum¹,³

¹ Image Sciences Institute, University Medical Center Utrecht, The Netherlands
² Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, The Netherlands
³ Brain Center Rudolf Magnus, University Medical Center Utrecht, The Netherlands

Abstract. Automatic neonatal brain tissue segmentation in preterm born infants is a prerequisite for evaluation of brain development. However, automatic segmentation is often hampered by motion artifacts caused by infant head movements during image acquisition. Methods have been developed to remove or minimize these artifacts during image reconstruction using frequency domain data. However, frequency domain data might not always be available. Hence, in this study we propose a method for removing motion artifacts from already reconstructed MR scans. The method employs a generative adversarial network trained with a cycle consistency loss to transform slices affected by motion into slices without motion artifacts, and vice versa. In the experiments, 40 T2-weighted coronal MR scans of preterm born infants imaged at 30 weeks postmenstrual age were used. All images contained slices affected by motion artifacts hampering automatic tissue segmentation. To evaluate whether correction allows more accurate image segmentation, the images were segmented into 8 tissue classes: cerebellum, myelinated white matter, basal ganglia and thalami, ventricular cerebrospinal fluid, white matter, brain stem, cortical gray matter, and extracerebral cerebrospinal fluid. Images corrected for motion and the corresponding segmentations were qualitatively evaluated using a 5-point Likert scale.
Before correction of motion artifacts, the median image quality and the quality of the corresponding automatic segmentations were assigned grade 2 (poor) and grade 3 (moderate), respectively. After correction of motion artifacts, both improved, to grades 3 and 4, respectively. The results indicate that correction of motion artifacts in the image space using the proposed approach allows accurate segmentation of brain tissue classes in slices affected by motion artifacts.

Keywords: motion correction · convolutional neural network · cycleGAN · neonatal MRI

Accepted in Medical Image Computing and Computer Assisted Intervention 2019

1 Introduction

Important brain development occurs in the last trimester of pregnancy, including brain growth, myelination, and cortical gyrification [9]. Magnetic resonance imaging (MRI) is widely used to non-invasively assess and monitor brain development in preterm infants. Despite the ability of MRI to visualize the neonatal brain, motion artifacts caused by head movement lead to blurry image slices or slices with stripes (see Figure 1). These artifacts hamper image interpretation as well as brain tissue segmentation.

To enable the analysis of images affected by motion artifacts, most studies perform the correction in the frequency domain (k-space) prior to analysis [1,3]. However, frequency domain data is typically not stored and is hence not available after image reconstruction. Recently, Duffy et al. [2] and Pawar et al. [10] proposed to use convolutional neural networks (CNNs) to correct motion-corrupted MRI in already reconstructed scans. The CNNs were trained to correct simulated motion artifacts that were modelled with a predefined formula. This forces the network towards an assumed distribution of artifacts; in practice, however, it is difficult to estimate the real distribution of motion.
Alternatively, a CNN could be trained to generate images without motion artifacts from images with such artifacts. However, this would require training with paired scans, which are rarely available. To solve this, the cycleGAN has recently been proposed to train CNNs for image-to-image translation with unpaired images [12].

In this study, we propose to employ a cycleGAN to generate MR slices without motion artifacts from slices affected by motion artifacts in a set of neonatal brain MR scans. The cycleGAN is trained to transform slices affected by motion artifacts into slices without artifacts, and vice versa. To generate slices corrected for motion artifacts, we applied the trained cycleGAN to motion affected slices, and we hypothesize that images corrected for motion artifacts allow more accurate (automatic) segmentation. To evaluate this, we use a method exploiting a convolutional neural network to segment scans into eight tissue classes. Moreover, we propose to augment the segmentation training data using the cycleGAN, which synthesizes slices with artifacts from slices without artifacts. We demonstrate that the proposed correction of motion artifacts improves image quality and allows accurate automatic segmentation of brain tissue classes in brain MRI of infants. We also show that the proposed data augmentation further improves segmentation results.

2 Data

This study includes 80 T2-weighted MRI scans of preterm born infants scanned at an average of 30.7 ± 1.0 weeks postmenstrual age (PMA). Images were acquired on a Philips Achieva 3T scanner at University Medical Center Utrecht, the Netherlands. The acquired voxel size was 0.34 × 0.34 mm² and the reconstruction matrix was 384 × 384 × 50. The scans were acquired in the coronal plane. In this data set, 60 scans had visible motion artifacts in most of the slices and 20 scans had no visible motion in any slice. Reference segmentations were available for 10 of the 20 scans without motion artifacts. These scans were manually segmented into 8 tissue classes: cerebellum (CB), myelinated white matter (mWM), basal ganglia and thalami (BGT), ventricular cerebrospinal fluid (vCSF), white matter (uWM), brain stem (BS), cortical gray matter (cGM), and extracerebral cerebrospinal fluid (eCSF).

Fig. 1. Examples of coronal slices from T2-weighted MRI acquired in preterm born infants at 30 weeks postmenstrual age, affected by motion artifacts. Structures outside the neonatal cranium have been masked out.

3 Method

Motion artifacts in neonatal brain MR hamper the diagnostic interpretability and precise automatic segmentation of brain tissue classes. To address this, we propose to correct motion artifacts in the reconstructed MR scans using a cycleGAN. Thereafter, to evaluate whether the corrected images are suitable for segmentation of brain tissues, a CNN architecture was trained to segment the brain into eight tissue classes. Furthermore, to improve segmentation performance, we propose to augment the training data by synthesizing images with motion artifacts from images without artifacts using the cycleGAN.

3.1 Artifact correction network

The cycleGAN has been proposed to train image-to-image translation CNNs with unpaired images. Given that obtaining paired scans with and without motion artifacts is difficult, a cycleGAN was trained to transform slices affected by motion into slices without motion artifacts, and vice versa (Figure 2). The network architecture consists of two cycles: a motion correction cycle and a motion generation cycle. The motion correction cycle consists of three networks. The motion correction network (MC) transforms slices affected by motion into slices without motion artifacts.
The motion generation network (MG) reconstructs the generated slices without motion artifacts back into the original image slices. A discriminator CNN (Dis_MC) discriminates between generated and real slices without motion artifacts. While the discriminator distinguishes between generated and real slices without motion artifacts, the generator tries to prevent this by generating images that are indistinguishable to the discriminator. Similarly, the motion generation cycle transforms slices without motion artifacts into slices affected by motion. The network architecture in both cycles is identical. The generator contains 2 convolution layers with a stride of 2, 9 residual blocks [5], and 2 fractionally strided convolutions as proposed in [7]. The discriminator networks use a PatchGAN [6], which classifies 70 × 70 overlapping image patches as fake or real. Two adversarial losses [4] were used, one in the motion correction network and one in the motion generation network. Furthermore, the cycle consistency losses of the motion correction network (MC_cl) and the motion generation network (MG_cl) were weighted by λ and added to the adversarial losses.

Fig. 2. The cycleGAN consists of two cycles: motion correction and motion generation. In the motion correction cycle, the first network is trained to transform slices affected by motion into slices without motion artifacts (MC), the second network is trained to transform the generated slices without motion artifacts back into the original slices (MG), and the third network discriminates between real and synthesized slices without motion artifacts (Dis_MC). In the motion generation cycle, motion is added to slices without motion artifacts (MG), the motion correction network transforms the generated slices back into the original slices (MC), and the discriminator network discriminates between real and fake slices affected by motion artifacts (Dis_MG).
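As an illustration, the combination of adversarial and λ-weighted cycle-consistency terms described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the generators and discriminators are stand-in callables, and the least-squares form of the adversarial loss is an assumption made here for illustration (the paper cites [4] but does not spell out the exact loss form).

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, used for the cycle-consistency terms."""
    return np.mean(np.abs(a - b))

def lsgan_loss(pred, target):
    """Least-squares adversarial loss on discriminator patch outputs
    (an illustrative assumption; [4] uses the original GAN loss)."""
    return np.mean((pred - target) ** 2)

def cyclegan_total_loss(x_motion, y_clean, mc, mg, dis_mc, dis_mg, lam=10.0):
    """Total generator objective: two adversarial terms plus the
    lambda-weighted cycle-consistency terms (MC_cl and MG_cl in the text).
    mc/mg are the motion correction/generation networks; dis_mc/dis_mg
    are the corresponding discriminators."""
    fake_clean = mc(x_motion)    # remove artifacts from a motion slice
    fake_motion = mg(y_clean)    # synthesize artifacts in a clean slice
    # Generators try to make the discriminators output "real" (1.0).
    adv = lsgan_loss(dis_mc(fake_clean), 1.0) + lsgan_loss(dis_mg(fake_motion), 1.0)
    # Each cycle must map back to its original slice.
    cyc = l1(mg(fake_clean), x_motion) + l1(mc(fake_motion), y_clean)
    return adv + lam * cyc
```

With λ = 10, as in the paper's experiments, the cycle-consistency terms dominate the objective, which is what keeps the corrected slice anatomically faithful to its input.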
3.2 Segmentation network

To assess segmentation performance in images affected by motion artifacts, a CNN with a U-net-like architecture was trained to segment images into eight tissue classes. The segmentation network consists of a contracting path and an expanding path. The contracting path consists of ten 3 × 3 convolution layers, each followed by a rectified linear unit (ReLU). After every two convolution layers, the features are downsampled by 2 × 2 max pooling and the number of feature channels is doubled, following the scheme 32, 64, 128, 256, 512. In the expanding path, an upsampling layer is followed by a 2 × 2 convolution that halves the number of feature channels. The result is concatenated with the corresponding features from the contracting path and convolved by two 3 × 3 convolutional layers, each followed by a ReLU. In the final layer, a 1 × 1 convolutional layer maps each component of the feature vector to the desired number of classes. Batch normalization is applied after all convolutional layers to allow for faster convergence. The network was trained with 3D patches of 256 × 256 × 3 voxels, by minimizing a Dice-based loss (the negative of the average Dice coefficient over all classes) between the network output and the manual segmentation.

4 Evaluation

Given that slices affected by motion do not allow accurate manual annotation, motion was synthesized in images using the motion generation network in order to quantitatively evaluate the proposed method. This allows evaluation against the manual annotations performed in images without artifacts. Thereafter, the performance of the segmentation network was evaluated using the Dice coefficient (DC), Hausdorff distance (HD), and mean surface distance (MSD) between the manual reference and the automatically obtained segmentations. The evaluation was performed in 3D.
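As an illustration of the overlap metric above, a per-class Dice coefficient can be computed directly from two label maps. This NumPy sketch (with a hypothetical function name) is not the authors' evaluation code:

```python
import numpy as np

def dice_per_class(pred_labels, ref_labels, n_classes=8):
    """Dice coefficient between automatic and reference label maps,
    computed separately for each of the n_classes tissue classes."""
    scores = []
    for c in range(n_classes):
        pred_c = pred_labels == c
        ref_c = ref_labels == c
        denom = pred_c.sum() + ref_c.sum()
        # Convention assumed here: Dice is 1 when a class is absent in both maps.
        scores.append(2.0 * np.logical_and(pred_c, ref_c).sum() / denom
                      if denom > 0 else 1.0)
    return np.array(scores)
```

Averaging the returned per-class scores gives the "Mean" column reported in Table 1; the surface-based metrics (HD, MSD) additionally require extracting boundary voxels and computing distances between the two surfaces.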
To evaluate the proposed method on images with real motion artifacts, the images and the corresponding automatic segmentations before and after motion correction were qualitatively evaluated using a 5-point Likert scale. Image quality was scored on a scale from 1 to 5, where 1 indicates an uninterpretable image with severe motion artifacts and 5 indicates excellent image quality. Similarly, automatic segmentations were scored 1 when the segmentation failed and 5 when the segmentation was very accurate.

5 Experiments and Results

Prior to analysis, the intracranial volume was extracted from all scans using the Brain Extraction Tool [11]. To train the artifact correction network, 15 scans without motion artifacts and 20 scans with motion artifacts were selected for training. The remaining 5 scans without motion artifacts and 40 scans with motion artifacts were used for testing. From the scans without motion artifacts, 700 slices without visible artifacts were selected. Similarly, from the scans with motion artifacts, 714 slices with visible artifacts were selected. The network was trained with a batch size of 4. Adam [8] was used to minimize the loss function for 100 epochs with a fixed learning rate of 0.00005, and λ was set to 10.

To segment the brain into eight tissue classes, the segmentation network was trained with 5 scans without motion artifacts, selected from the 15 training scans used to train the motion correction network. The segmentation network was trained with a batch size of 6. Adam was used to minimize the loss function for 200 epochs with a learning rate of 0.0001.

In the experiments, we performed quantitative evaluation of the proposed method through evaluation of the brain tissue segmentation. First, to determine the upper limit of the segmentation performance, images without artifacts were segmented (Table 1, top row).
Second, we aimed to evaluate the segmentation performance in images with artifacts. However, motion artifacts are prohibitive for accurate manual annotation; thus, annotations were not available for such images. Hence, the motion generation network was used to synthesize images with artifacts from images without artifacts, for which manual segmentations were available. Segmentation was performed in the synthesized images (Table 1, second row). Third, using the motion correction network, the artifacts were removed from the images with synthesized artifacts, and these were subsequently segmented (Table 1, third row).

In the previous experiments, the segmentation network was trained only with images without motion artifacts, as only those were manually labelled. However, we hypothesized that the performance would improve if the segmentation network were trained with both types of images. Hence, to obtain images affected by motion that can be used for training, we synthesized training images using the motion generation network, similarly to the second experiment. In the fourth experiment, we evaluated the segmentation network trained with augmented training data, i.e. images with and without motion artifacts, on images with synthesized motion artifacts (Table 1, fourth row). Finally, segmentation was performed in images with corrected synthesized artifacts as in the third experiment, with the training data for the segmentation augmented as in the fourth experiment (Table 1, bottom row). The results show that correction of motion artifacts using the motion correction network improves performance (Table 1, second vs. third row). Moreover, the results demonstrate that the performance of the segmentation network improves when the training data is augmented (Table 1, second vs. fourth row and third vs. bottom row).
To qualitatively evaluate the performance of the motion correction network, 40 scans affected by motion artifacts were corrected using the motion correction network. Subsequently, the segmentation network trained with the proposed data augmentation was used to segment the corrected images. Qualitative scoring of the images and segmentations before and after motion correction was performed. The evaluation shows that before correction, the median image quality and the quality of the corresponding automatic segmentations were assigned grade 2 (poor) and 3 (moderate), respectively. After correction of motion artifacts, both improved to grades 3 and 4, respectively. Figure 3 shows examples of images and corresponding segmentations before and after motion correction. This demonstrates that the motion correction network reduces motion artifacts and hence improves the quality of the images and the corresponding segmentations. Moreover, the figure shows that the proposed motion augmentation further improves automatic segmentations.

Table 1. Performance of brain tissue segmentation into eight tissue classes. The evaluation was performed 1) on scans without motion artifacts (Motion Free); 2) on the same scans with motion synthesized using the motion generation network (Motion Synthesized); 3) on scans where the synthesized motion was corrected using the motion correction network (Motion Corrected). The segmentation network was then retrained with motion-augmented scans obtained using the motion generation network, and the evaluation was performed 4) on the scans with synthesized motion (Motion Augmented); 5) on the scans where the synthesized motion was corrected (Motion Corrected & Augmented).

                                    CB     mWM    BGT    vCSF   WM     BS     cGM    eCSF   Mean
Motion Free                  DC     0.90   0.53   0.89   0.84   0.94   0.84   0.67   0.83   0.80
                             HD     44.92  32.97  39.06  23.08  17.25  42.57  18.47  8.60   28.36
                             MSD    0.36   1.85   0.56   0.36   0.20   0.56   0.21   0.23   0.54
Motion Synthesized           DC     0.87   0.38   0.87   0.77   0.90   0.81   0.62   0.75   0.75
                             HD     52.27  53.80  42.93  33.70  21.33  48.18  21.53  22.43  37.02
                             MSD    0.62   4.10   1.04   1.32   0.77   0.92   0.55   1.00   1.29
Motion Corrected             DC     0.90   0.47   0.89   0.83   0.94   0.83   0.68   0.85   0.79
                             HD     45.06  41.93  33.58  22.84  18.25  39.19  18.57  8.90   28.54
                             MSD    0.46   2.07   0.55   0.35   0.20   0.41   0.21   0.16   0.55
Motion Augmented             DC     0.88   0.45   0.88   0.80   0.92   0.81   0.63   0.80   0.77
                             HD     40.19  27.42  28.43  19.27  14.98  30.85  15.03  11.79  23.49
                             MSD    0.46   1.84   0.61   0.39   0.27   0.48   0.27   0.24   0.57
Motion Corrected & Augmented DC     0.91   0.48   0.89   0.84   0.94   0.84   0.67   0.84   0.80
                             HD     45.62  34.52  26.83  17.77  14.40  35.93  17.18  7.63   24.99
                             MSD    0.45   1.89   0.44   0.29   0.19   0.42   0.20   0.17   0.51

6 Discussion and conclusion

We presented a method for correction of motion artifacts in reconstructed brain MR scans of preterm infants using a cycleGAN. We demonstrate that the proposed artifact correction generates images that are more suitable for (automatic) image segmentation. Additionally, we show that training the segmentation network with the proposed data augmentation further improves segmentation performance.

Unlike previous methods that performed motion correction in the frequency domain (k-space), the proposed method corrects motion artifacts in already reconstructed scans. Given that k-space data is typically not available after scans have been reconstructed and stored, the proposed method allows correction in such cases.
To conclude, the results demonstrate that correction of motion artifacts in reconstructed neonatal brain MR scans is feasible. Moreover, the results show that the proposed motion correction allows automatic brain tissue segmentation in scans affected by motion artifacts. This may improve clinical interpretability and the extraction of quantitative markers in images with motion artifacts.

Fig. 3. Examples of slices affected by motion artifacts and the corresponding tissue segmentation in neonatal MRI. 1st column: a motion affected slice; 2nd column: automatic segmentation when the network was trained on slices without motion artifacts; 3rd column: automatic segmentation when the network was trained on slices with augmented motion; 4th column: a motion corrected slice; 5th column: automatic segmentation result on the corrected slice; 6th column: automatic segmentation result on the corrected slice when the network was trained with data augmentation.

References

1. Atkinson, D., Hill, D.L., Stoyle, P.N., Summers, P.E., Keevil, S.F.: Automatic correction of motion artifacts in magnetic resonance images using an entropy focus criterion. IEEE Transactions on Medical Imaging 16(6), 903–910 (1997)
2. Duffy, B.A., Zhang, W., Tang, H., Zhao, L., Law, M., Toga, A.W., Kim, H.: Retrospective correction of motion artifact affected structural MRI images using deep learning of simulated motion (2018)
3. Godenschweger, F., Kägebein, U., Stucht, D., Yarach, U., Sciarra, A., Yakupov, R., Lüsebrink, F., Schulze, P., Speck, O.: Motion correction in MRI of the brain. Physics in Medicine & Biology 61(5), R32 (2016)
4. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems. pp. 2672–2680 (2014)
5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)
6. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1125–1134 (2017)
7. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision. pp. 694–711. Springer (2016)
8. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
9. Kostović, I., Jovanov-Milošević, N.: The development of cerebral connections during the first 20–45 weeks' gestation. In: Seminars in Fetal and Neonatal Medicine. vol. 11, pp. 415–422. Elsevier (2006)
10. Pawar, K., Chen, Z., Shah, N.J., Egan, G.F.: MoCoNet: Motion correction in 3D MPRAGE images using a convolutional neural network approach. arXiv preprint arXiv:1807.10831 (2018)
11. Smith, S.M.: Fast robust automated brain extraction. Human Brain Mapping 17(3), 143–155 (2002)
12. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2223–2232 (2017)