Evaluation of CT Image Synthesis Methods: From Atlas-based Registration to Deep Learning



Andreas D. Lauritzen (1,3), Xenophon Papademetris (1,2), Sergei Turovets (4), and John A. Onofrey (1)

1 Departments of Radiology & Biomedical Imaging, 2 Biomedical Engineering, Yale University, New Haven, CT, USA
{xenophon.papademetris, john.onofrey}@yale.edu
3 Department of Computer Science, University of Copenhagen, Denmark
al@di.ku.dk
4 Neuroinformatics Center, University of Oregon, USA
sergei@cs.uoregon.edu

Abstract. Computed tomography (CT) is a widely used imaging modality for medical diagnosis and treatment. In electroencephalography (EEG), CT imaging is necessary for co-registering with magnetic resonance imaging (MRI) and for creating more accurate head models of brain electrical activity due to its better representation of bone anatomy. Unfortunately, CT imaging exposes patients to potentially harmful ionizing radiation. Image synthesis methods present a solution for avoiding extra radiation exposure. In this paper, we perform image synthesis to create a realistic, synthetic CT image from an MRI of the same subject, and we present a comparison of different image synthesis techniques. Using a dataset of 30 paired MRI and CT image volumes, our results compare image synthesis using deep neural network regression, state-of-the-art adversarial deep learning, and atlas-based synthesis utilizing image registration. We also present a novel synthesis method that combines multi-atlas registration as a prior to deep learning algorithms, in which we perform a weighted addition of synthetic CT images, derived from atlases, to the output of a deep neural network to obtain a residual type of learning.
In addition to evaluating the quality of the synthetic CT images, we also demonstrate that image synthesis methods allow for more accurate bone segmentation using the synthetic CT imaging than would otherwise be possible by segmenting the bone in the MRI directly.

Keywords: image synthesis · MRI · CT · deep learning · segmentation · atlas-based registration

1 Introduction

Magnetic resonance (MR) imaging (MRI) and X-ray computed tomography (CT) provide non-invasive techniques for investigating human anatomy, thus significantly improving the diagnosis and treatment of disease. CT is well-suited for visualizing bone structures, including their location and density. Bone features are essential for many advanced applications, e.g., image-guided radiotherapy and reconstruction in electroencephalography (EEG). CT scans, however, carry a risk of causing cancer in the subject due to ionizing radiation. The additional risk of a subject developing fatal cancer from a CT scan is approximately 1 in 2000 [10]. Notably, young individuals are more susceptible to radiation-induced diseases than adults [4]. MRI, on the other hand, does not expose the subject to ionizing radiation and is considered a safe procedure. MRI further contrasts with CT in that it provides excellent soft-tissue contrast but cannot image bone. These facts motivate the development of methods in which a single image modality is used to synthesize the desired information that could be provided by another imaging modality, with the end goal of making clinical processes more effective and circumventing unnecessary inconveniences for patients.

CT image synthesis was initially developed as a means for attenuation correction in positron emission tomography (PET) [3]. Researchers still revisit registration-based synthesis methods due to their overall good performance and fast prediction times.
Multi-atlas registration methods are capable of estimating the intensity distribution within bone to a good approximation, with prediction times within 5 minutes [17, 1]. Alongside the development of new registration-based methods, the advent of deep learning [5] has inspired a revolution of learning-based methods. The development of versatile models, originating from computer vision, has sparked an interest in deep learning techniques for medical image analysis. Deep learning techniques for cross-modality medical image synthesis first outperformed existing learning-based methods such as k-nearest neighbors when a relatively shallow convolutional neural network (CNN) was used to synthesize 3D PET images from an MRI [6]. Later, a 3D fully convolutional network (FCN) was proposed to synthesize a CT image from MRI [8]. By adopting up-pooling, the FCN preserves structural information such as neighboring pixel values. This FCN outperformed random forest and atlas-based registration methods. Transfer learning also proved useful for synthesizing 2D CT slices from 2D MRI slices [2]. By initializing the model with the learned filters from a network trained on natural images, the final trained model performed better than atlas-based registration methods.

Conditional generative adversarial networks (cGAN) have further refined the estimation of CT images. In particular, an auto-context model (ACM) consisting of three separately trained cGANs was shown to estimate CT images more accurately than an FCN without a discriminator network [9], with prediction times under four minutes. Most recently, an FCN consisting of embedding blocks trained on 3D MRI and CT data, also called a deep embedding CNN (DECNN), has been proposed [19]. By using embedding blocks, the network is forced to output tentative CT images.
This approach outperformed atlas-based registration methods as well as deep learning methods such as CNN and FCN models.

We contribute to this area of research by presenting four methods capable of MRI-to-CT synthesis adapted from previous methods in the recent literature. We propose a novel framework in which multi-atlas registration synthesis serves as a prior to a deep neural network (DNN). We evaluate the synthesis results and give a general comparison of all methods. Finally, we show that segmentation of synthetic CT images is more accurate than learning to identify bone directly from the MRI.

2 Methods

An image is a function that maps $d$-dimensional points, from the set of points $\Omega$ in the image domain, to $m$-component intensity values, $I_\Omega : \mathbb{R}^d \mapsto \mathbb{R}^m$. In image synthesis, we aim to estimate the function $S$ that best approximates the ground-truth CT image $I_{CT}$ from an MRI $I_{MR}$ of the same subject, i.e., $I_{CT} \approx S(I_{MR})$.

2.1 Synthesis Using Multi-Atlas Registration

We perform atlas-based image synthesis of a given subject's MRI $I_{MR}$ by registering a set of $N_{atlas}$ atlases, which consist of co-registered MR and CT image pairs along with gold-standard brain segmentation masks, to this image. Drastic differences in the image field of view, which result in cropped anatomy and missing correspondences between a subject's MRI and the various MR atlas images, make intensity-based image registration challenging when optimizing metrics like normalized mutual information (NMI) [16]. Instead, we perform a more robust surface-based atlas registration, where the brain surface $S_{\text{atlas},i}$ in each atlas is extracted from the segmentation mask and the brain surface $S_{MR}$ in $I_{MR}$ can reliably be found using standard skull-stripping methods [14]. We then register $S_{\text{atlas},i}$, i = 1, ...
, N, to $S_{MR}$ using robust point matching [11], and use the resulting transformations to warp each $I_{\text{CT},i}$ to $I_{MR}$. The transformed atlas CTs are then averaged to form the synthesized CT image.

2.2 Deep Neural Network Regression

We treat image synthesis as a regression problem and fit a deep learning model to the distribution of CT images conditioned on MR images. Specifically, we train a DNN on MR images and backpropagate the error between the output of the DNN and the ground-truth CT image. We adopt the U-Net architecture [12], as it ensures the preservation of high-frequency information while learning an internal representation of the input images. We propose a modified version of the U-Net in which we use 3 max-pooling operations and 3 layers of convolutions (instead of 2) prior to each pooling (or up-sampling) step, for a total of 23 layers. We train the model with randomly sampled patches from the training set. Patch-wise training lowers memory requirements while providing the opportunity to augment the training data and maintain variance in each mini-batch. The $L_2$ loss and an image gradient difference loss (GDL) are used to penalize the error [9]:

$$L(I_{MR}, I_{CT}) = \frac{1}{|\Omega|} \sum_{i \in \Omega} \left( S(I_{MR})(i) - I_{CT}(i) \right)^2 + GDL(S(I_{MR}), I_{CT}) \quad (1)$$

where $\Omega$ is the set of all indices in $S(I_{MR})$, which is the same size as $I_{CT}$. The GDL is defined as the sum of the squared absolute differences between the gradients of images $X$ and $Y$ in each dimension:

$$GDL(X, Y) = \frac{1}{|D||\Omega|} \sum_{d \in D} \sum_{i \in \Omega} |\nabla X_d(i) - \nabla Y_d(i)|^2 \quad (2)$$

where $D$ is the set of dimensions of $X$ and $Y$, and $\nabla X_d$ and $\nabla Y_d$ denote the image gradients with respect to dimension $d$.

A CT prediction is made by extracting overlapping patches from $I_{MR}$, forward-passing all patches, and reconstructing the output.
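The combined loss of Eqs. (1) and (2) can be sketched as follows. This is a minimal NumPy sketch: forward differences (`np.diff`) stand in for the gradient operator, an assumption since the paper does not specify the gradient discretization.

```python
import numpy as np

def gdl(x, y):
    """Gradient difference loss (Eq. 2): mean squared difference between
    the image gradients of x and y, averaged over all dimensions. Forward
    differences approximate the gradient operator (an assumption; the
    paper does not specify the discretization)."""
    total = 0.0
    for d in range(x.ndim):
        total += np.mean((np.diff(x, axis=d) - np.diff(y, axis=d)) ** 2)
    return total / x.ndim

def synthesis_loss(pred, target):
    """Combined L2 + GDL synthesis loss (Eq. 1)."""
    return np.mean((pred - target) ** 2) + gdl(pred, target)
```

In a deep learning framework the same expression would be written with differentiable tensor operations so that it can be backpropagated; the NumPy form above only illustrates the arithmetic.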
The loss tends to increase with distance from the center of a patch, due to less contextual information near the patch borders and the effects of padding. Therefore, the reconstruction function applies a Gaussian mask that weights center pixels more heavily than pixels at the border of each patch.

2.3 Conditional Generative Adversarial Network

A conditional generative adversarial network (cGAN) comprises two networks. The objective of the generator is to learn $S$ by approximating the ground-truth CT images while producing CT images that are indistinguishable to the discriminator. The objective of the discriminator is to classify CT images as either synthesized or true CT images. In this work, the generator is the U-Net (Sec. 2.2). An adversarial loss term is added to the total generator loss $L_G$, and the classification output of the discriminator is penalized with a corresponding adversarial loss term:

$$L_G(I_{MR}, I_{CT}) = \frac{1}{|\Omega|} \sum_{i \in \Omega} \left( S(I_{MR})(i) - I_{CT}(i) \right)^2 + GDL(S(I_{MR}), I_{CT}) + \frac{1}{2} \left( C(S(I_{MR})) - T_{real} \right)^2 \quad (3)$$

$$L_D(I_{MR}, I_{CT}) = \frac{1}{2} \left[ \left( C(I_{CT}) - T_{real} \right)^2 + \left( C(S(I_{MR})) - T_{synthetic} \right)^2 \right] \quad (4)$$

These adversarial loss terms define the cGAN as a least-squares cGAN, which is more stable during training than a regular GAN with binary cross-entropy loss [7]. During training, a full forward pass is performed through $G$ and $D$, and the errors are computed using $L_G$ and $L_D$. The error defined by $L_G$ is backpropagated through $G$, and the error defined by $L_D$ is backpropagated through $D$. The weights of $G$ and $D$ are updated, in that order.

Fig. 1. Our proposed framework for MRI-to-CT image synthesis that incorporates prior information.

2.4 Residual Learning with Atlas Prior

For most applications utilizing synthetic CT images, the primary area of interest is bone features, such as bone location and density. The soft tissue is often less
significant and can be rapidly estimated with atlas-based synthesis. We propose a framework (Fig. 1) capable of synthesizing a CT image $S(I_{MR})$ in three stages: (i) synthesizing a prior synthetic CT image, $S_{atlas}(I_{MR})$, with atlas-based registration; (ii) computing the difference, $S_{DNN}(I_{MR})$, between $S(I_{MR})$ and $S_{atlas}(I_{MR})$ with a DNN; and (iii) combining $S_{atlas}(I_{MR})$ and $S_{DNN}(I_{MR})$ with a weighted addition:

$$S(I_{MR}) = W_1 S_{DNN}(I_{MR}) + W_2 S_{atlas}(I_{MR}) \quad (5)$$

The prior synthetic CT is obtained by affine multi-atlas registration as described in Sec. 2.1. Additionally, the transformation is applied not only to the atlas CT images but also to the reference bone masks, which are later used to define the weights $W_1$ and $W_2$. The aligned bone masks are merged into a single mask and smoothed with a Gaussian kernel parameterized by the standard deviation $\sigma$. The smoothed bone mask, $W_{bone}$, is then saturated with the function:

$$W_{bone}(i) = \begin{cases} 1 & \text{if } I_{bone}(i) > t \\ I_{bone}(i)/t & \text{if } I_{bone}(i) \le t \end{cases}, \quad \forall i \in \Omega_{I_{bone}} \quad (6)$$

where $t$ is a threshold such that $0 \le t \le 1$ and $I_{bone}(i)$ is the intensity value at index $i$ from the set of indices, $\Omega_{I_{bone}}$, in $I_{bone}$. $W_{bone}$ provides an image that roughly locates bone. The weights are computed by the following linear relationship:

$$W = \alpha W_{bone} + \beta \quad (7)$$

where $\alpha$ and $\beta$ are chosen such that $0 \le W_1 \le 1$. These parameters allow tuning of how much weight to place on $S_{DNN}(I_{MR})$ versus $S_{atlas}(I_{MR})$ and can be altered at any stage of training. By using $W_{bone}$ as a weight on the output of the DNN and computing the loss after adding the prior synthetic CT, we force the network to pay less attention to areas outside the head and inside the skull. Again, the DNN is the U-Net (Sec. 2.2), which computes the function $S_{DNN}(I_{MR})$ and is trained in the same manner as the model in Sec. 2.2.
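To make the residual combination concrete, the saturation of Eq. (6) and the weighted addition of Eq. (5) can be sketched in NumPy as follows. This is a minimal sketch: the threshold `t = 0.5` is an illustrative value, and since the text does not pin down the relation between $W_1$ and $W_2$, the choice $W_2 = 1 - W_1$ is assumed here purely for illustration.

```python
import numpy as np

def saturate_bone_mask(smoothed_mask, t=0.5):
    """Saturation function of Eq. (6): intensities above the threshold t
    map to 1; intensities at or below t are scaled linearly into [0, 1].
    t = 0.5 is an illustrative value, not taken from the paper."""
    return np.where(smoothed_mask > t, 1.0, smoothed_mask / t)

def combine(s_dnn, s_atlas, w_bone, alpha=1.0, beta=0.0):
    """Weighted addition of Eq. (5), with W_1 from the linear relation of
    Eq. (7), W = alpha * W_bone + beta. The complementary weight
    W_2 = 1 - W_1 is an assumption made here for illustration only."""
    w1 = np.clip(alpha * w_bone + beta, 0.0, 1.0)
    return w1 * s_dnn + (1.0 - w1) * s_atlas
```

With this weighting, voxels near bone (where $W_{bone} \approx 1$) are dominated by the DNN residual, while soft-tissue and background voxels fall back on the atlas prior.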
3 Results

3.1 Data

The dataset consists of $N = 30$ paired 3D T1-weighted MR and CT images $\{I_{MR,i}, I_{CT,i}\}$ from different pediatric subjects $i = 1, \ldots, N$, acquired retrospectively by data mining the clinical image repository of the Washington University BJC Health System (St. Louis, MO, USA) within the Pediatric Head Modeling project [18]. Institutional Review Boards at both project sites (Philips-Electrical Geodesics, Inc. and the University of Arkansas for Medical Sciences) approved all retrospective research protocols involving human subjects [15]. The subjects range in age from six months to 16 years and five months. For atlas-based synthesis, we use $N_{atlas} = 5$ co-registered MR and CT image volumes with tissue masks, consisting of 3 adult and 2 pediatric subjects (www.egi.com). We spatially normalized all images to a common template space in two phases: (i) inter-subject affine registration of all MR images to the MNI Colin 27 brain reference space using NMI [16], and (ii) rigid intra-subject registration of all $\{I_{MR,i}, I_{CT,i}\}$ pairs using NMI. All images were resliced to template space at isotropic 1 mm³ resolution. We normalized the MR image intensity values by subtracting the mean and dividing by the standard deviation of the intensity values within brain tissue (we used the Colin 27 brain mask to roughly estimate the brain volume in the spatially normalized images).

3.2 Similarity Metrics

We evaluated the quality of the synthesized CT images over the whole head and, since we are most interested in the bone, we also evaluated synthesis quality specifically in the proximity of bone. At prediction time, the synthesized CT image is multiplied by a head mask and a smoothed bone mask to extract the areas of interest.
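The brain-masked intensity normalization described above can be sketched as follows (a minimal sketch in which `brain_mask` is a boolean array standing in for the Colin 27 brain mask):

```python
import numpy as np

def normalize_mri(image, brain_mask):
    """Z-score normalization of MR intensities using statistics computed
    only within the brain mask, as described in Sec. 3.1: subtract the
    mean and divide by the standard deviation of brain-tissue voxels."""
    brain_values = image[brain_mask]
    return (image - brain_values.mean()) / brain_values.std()
```

Restricting the statistics to brain tissue keeps the normalization stable across subjects whose fields of view differ, since background and cropped regions do not influence the mean or standard deviation.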
Given that $X$ and $Y$ are the predicted and ground-truth images, respectively, we evaluate synthesis using the following similarity metrics: (i) peak signal-to-noise ratio (PSNR),

$$\mathrm{PSNR}(X, Y) = 20 \log_{10} \left( \frac{v_{max}}{\sqrt{\frac{1}{|\Omega|} \sum_{i \in \Omega} (Y(i) - X(i))^2}} \right), \quad (8)$$

where $v_{max} = 4096$, as CT images are normalized to the intensity range $[0, 4095]$; (ii) mean structural similarity (MSSIM),

$$\mathrm{MSSIM}(X, Y) = \frac{1}{|\Omega|} \sum_{i \in \Omega} \frac{(2\mu_X \mu_Y + c_1)(2\sigma_{XY} + c_2)}{(\mu_X^2 + \mu_Y^2 + c_1)(\sigma_X^2 + \sigma_Y^2 + c_2)}, \quad (9)$$

where $\mu_I$ and $\sigma_I$ are the mean and standard deviation, respectively, of image $I$, and the constants are $c_1 = (0.01 \cdot (2^{12} - 2))^2$ and $c_2 = (0.03 \cdot (2^{12} - 2))^2$; (iii) mean absolute error (MAE),

$$\mathrm{MAE}(X, Y) = \frac{1}{|\Omega|} \sum_{i \in \Omega} |Y(i) - X(i)|; \quad (10)$$

(iv) Pearson cross-correlation (PCC),

$$\mathrm{PCC}(X, Y) = \frac{\sum_{i \in \Omega} (X(i) - \mu_X)(Y(i) - \mu_Y)}{\sqrt{\sum_{i \in \Omega} (X(i) - \mu_X)^2} \sqrt{\sum_{i \in \Omega} (Y(i) - \mu_Y)^2}}; \quad (11)$$

and, as an additional method of evaluating the quality of the synthesized CT images, we segment both the ground-truth CT image and the synthesized CT image into three classes: air, bone, and soft tissue. We perform the segmentation with k-means clustering of intensity values distributed over 1024 bins; synthesized CT images obtained with deep learning algorithms are not guaranteed to be Hounsfield-scaled, so fixed threshold segmentation might not apply. The bone masks are extracted, and we evaluate them with the Dice overlap metric,

$$\mathrm{DICE}(B_X, B_Y) = \frac{2TP}{2TP + FP + FN}, \quad (12)$$

where $B_X$ and $B_Y$ are the bone masks for the predicted and ground-truth images, respectively, and $TP$, $FP$, and $FN$ are the true positives, false positives, and false negatives, respectively. This metric quantifies how well the bone can be segmented in the synthetic CT images.
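As a reference sketch of the scalar metrics above (PSNR, MAE, and Dice; MSSIM and PCC are omitted for brevity), assuming images are NumPy arrays already normalized to the $[0, 4095]$ range:

```python
import numpy as np

V_MAX = 4096.0  # CT intensities are normalized to [0, 4095] (Sec. 3.2)

def psnr(x, y):
    """Peak signal-to-noise ratio (Eq. 8), in dB."""
    rmse = np.sqrt(np.mean((y - x) ** 2))
    return 20.0 * np.log10(V_MAX / rmse)

def mae(x, y):
    """Mean absolute error (Eq. 10)."""
    return np.mean(np.abs(y - x))

def dice(b_x, b_y):
    """Dice overlap (Eq. 12) between boolean bone masks; 2*TP / (2*TP +
    FP + FN) equals 2|X∩Y| / (|X| + |Y|)."""
    tp = np.logical_and(b_x, b_y).sum()
    return 2.0 * tp / (b_x.sum() + b_y.sum())
```

In practice these would be evaluated inside the head and bone masks described above rather than over the full volume.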
Segmenting the ground-truth CT images with the aforementioned k-means algorithm and computing the Dice overlap with the ground-truth bone mask yields a mean (standard deviation) of 0.91 (0.03) and a median of 0.92. This is the baseline against which we compare our results, where a Dice overlap of 0.91 is the ideal score.

3.3 Experiments

We trained and tested the deep learning models using 6-fold cross-validation with 25 images in the training set and 5 images in the test set. Each epoch, 3200 randomly sampled patches were extracted from the training set and used for training. We compared models using both 2D and 3D patches, with patch sizes of 128 × 128 and 48 × 48 × 24, respectively. We performed data augmentation by randomly flipping the patches in the x and y dimensions. We trained the models for 650 epochs with the ADAM optimizer and a fixed learning rate of 0.0001. For the registration-based multi-atlas image synthesis (Sec. 2.1), we registered the 5 atlas images to each of the 30 test MR images using both affine and non-rigid transformations. For the non-rigid transformations, we used free-form deformation (FFD) [13] with 30 mm B-spline control-point spacing. Synthetic CT images from all methods are displayed in Fig. 2. Tables 1 and 2 show synthesis evaluation results for the head and bone areas, respectively, and Table 3 shows bone segmentation results.

Table 1. Results of MRI-to-CT synthesis with all methods evaluated with PSNR, MAE, MSSIM, and PCC on the head-masked synthetic CT.
Method | PSNR mean (std) / median | MAE mean (std) / median | MSSIM mean (std) / median | PCC mean (std) / median
Atlas, non-rigid | 25.85 (0.95) / 25.82 | 71.66 (13.76) / 71.45 | 0.90 (0.03) / 0.89 | 0.94 (0.02) / 0.94
Atlas, affine | 25.97 (1.01) / 26.00 | 71.61 (14.79) / 69.95 | 0.89 (0.03) / 0.90 | 0.94 (0.02) / 0.94
2D U-Net | 28.01 (2.04) / 27.99 | 44.58 (16.64) / 42.66 | 0.91 (0.03) / 0.91 | 0.951 (0.023) / 0.96
2D U-Net, adversarial | 27.93 (1.94) / 27.92 | 44.75 (16.34) / 42.77 | 0.91 (0.03) / 0.91 | 0.95 (0.02) / 0.95
3D U-Net | 27.90 (1.96) / 27.84 | 44.97 (15.38) / 42.20 | 0.91 (0.03) / 0.91 | 0.95 (0.02) / 0.95
2D U-Net, bone-weighted | 27.38 (1.84) / 27.54 | 56.04 (16.86) / 54.00 | 0.90 (0.03) / 0.91 | 0.95 (0.02) / 0.95

Table 2. Results of MRI-to-CT synthesis with all methods evaluated with PSNR, MAE, MSSIM, and PCC on the bone-masked synthetic CT.

Method | PSNR mean (std) / median | MAE mean (std) / median | MSSIM mean (std) / median | PCC mean (std) / median
Atlas, non-rigid | 38.82 (1.67) / 38.78 | 16.35 (4.11) / 16.08 | 0.93 (0.01) / 0.93 | 0.90 (0.03) / 0.90
Atlas, affine | 38.87 (1.71) / 38.89 | 16.29 (4.23) / 15.88 | 0.93 (0.02) / 0.93 | 0.90 (0.03) / 0.90
2D U-Net | 41.61 (3.02) / 41.52 | 9.09 (4.80) / 8.25 | 0.97 (0.02) / 0.97 | 0.94 (0.03) / 0.94
2D U-Net, adversarial | 41.59 (2.88) / 41.41 | 9.03 (4.50) / 8.31 | 0.97 (0.02) / 0.97 | 0.94 (0.02) / 0.94
3D U-Net | 41.51 (2.98) / 41.53 | 9.21 (4.64) / 8.12 | 0.97 (0.02) / 0.97 | 0.94 (0.02) / 0.94
2D U-Net, bone-weighted | 41.27 (2.77) / 41.19 | 10.06 (4.53) / 9.25 | 0.96 (0.02) / 0.96 | 0.93 (0.03) / 0.94

We also compared segmentation of the bone directly in the MRI by training a U-Net with weighted cross-entropy as the loss function for 160 epochs, with the same parameters as described above (we label this method 2D U-Net, direct MRI segmentation).

4 Discussion and Conclusion

We have performed an extensive evaluation of several methods capable of MR-to-CT image synthesis, with visually good results. The multi-atlas image synthesis methods yielded perceptually less satisfying synthetic images than the deep learning models, as seen in Fig.
2, which was also reflected across all metrics, including segmentation performance. We demonstrated that 3D deep learning models are not better at synthesizing CT images than a corresponding 2D model. The 2D model achieved a mean PSNR of 28.01 ± 2.04 dB evaluated on the head and 41.61 ± 3.02 dB on the area around the bone. The 3D model achieved a PSNR of 27.90 ± 1.96 dB evaluated on the head and 41.51 ± 2.98 dB in the proximity of bone. The 3D model took 54 hours to train, while the 2D model took 16 hours. Predicting an image took 10 seconds with the 2D model and 50-65 seconds with the 3D model. Synthetic CT images produced by the 3D model look visually better along the third axis. The 2D U-Net model and the adversarial 2D U-Net model resulted in very similar metric scores and visually similar images, but the adversarial 2D U-Net model took 30 hours to train. Adopting a 2D U-Net for synthesizing CT was the best method with respect to memory consumption, training time, and prediction time. The method achieved metric scores better than or equal to those of all the other methods and produced visually outstanding CT images, which suggests that the choice of deep neural network architecture plays a crucial role in performance.

Fig. 2. Center slices from the same subject: the MRI, the ground-truth CT, and the synthetic CT from each method.

Table 3. Dice overlap scores for segmenting the synthetic CT images from all methods with k-means clustering, compared to direct MRI segmentation.

Method | Dice mean (std) / median
Atlas, non-rigid | 0.56 (0.08) / 0.56
Atlas, affine | 0.55 (0.08) / 0.53
2D U-Net | 0.64 (0.11) / 0.63
2D U-Net, adversarial | 0.65 (0.10) / 0.64
3D U-Net | 0.64 (0.10) / 0.63
2D U-Net, bone-weighted | 0.63 (0.11) / 0.63
2D U-Net, direct MRI segmentation | 0.63 (0.10) / 0.63
Furthermore, we demonstrated that segmenting synthetic CT images produced by a 2D U-Net resulted in a higher Dice overlap than training a 2D U-Net to segment bone directly from MR images. Still, the Dice overlap score was low (under 0.65 of a possible 0.91). MRI bone segmentation is an ill-posed and challenging problem, and there is room for significant improvement in this area of research. We presented a novel framework for synthesizing CT images. These images are perceptually less satisfying than those of the other methods, as borders from the atlas are visible. In future work, we will improve on this novel method of using atlas priors with deep learning models, so that we can train a model with significantly fewer parameters that is specifically tailored to synthesize bone features and will enable better bone segmentation.

Acknowledgements. This work was supported by the National Institutes of Health (NIH) National Institute of Neurological Disorders and Stroke (NINDS) R44 NS093889. Collection of the data partially used in this work was supported by NIH NINDS R43 NS67726 and the National Institute of Mental Health R44 MH106421. Additionally, this work was supported by grants from the Knud Højgaard Foundation, The Lundbeck Foundation, The Oticon Foundation, and the Family Hede Nielsen Foundation.

References

1. Burgos, N., Guerreiro, F., McClelland, J., Presles, B., Modat, M., Nill, S., Dearnaley, D., deSouza, N., Oelfke, U., Knopf, A.C., Ourselin, S., Cardoso, M.J.: Iterative framework for the joint segmentation and CT synthesis of MR images: application to MRI-only radiotherapy treatment planning. Physics in Medicine and Biology 62, 4237-4253 (2017)
2. Han, X.: MR-based synthetic CT generation using a deep convolutional neural network method. Medical Physics 44, 1408-1419 (2017)
3. Hofmann, M., Steinke, F., Scheel, V., Charpiat, G., Farquhar, J., Aschoff, P., Brady, M., Schölkopf, B., Pichler, B.J.: MRI-based attenuation correction for PET/MRI: a novel approach combining pattern recognition and atlas registration. Journal of Nuclear Medicine 49(11), 1875-1883 (2008)
4. Kutanzi, K.R., Lumen, A.A., Koturbash, I., Miousse, I.R.: Pediatric exposures to ionizing radiation: carcinogenic considerations. International Journal of Environmental Research and Public Health (2016)
5. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436-444 (2015)
6. Li, R., Zhang, W., Suk, H., Wang, L., Li, J., Shen, D., Ji, S.: Deep learning based imaging data completion for improved brain disease diagnosis. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 17(Pt 3), 305-312 (2014)
7. Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z., Smolley, S.P.: Least squares generative adversarial networks. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2813-2821 (2017)
8. Nie, D., Cao, X., Gao, Y., Wang, L., Shen, D.: Estimating CT image from MRI data using 3D fully convolutional networks. In: Deep Learning and Data Labeling for Medical Applications, pp. 170-178. Springer International Publishing (2016)
9. Nie, D., Trullo, R., Lian, J., Petitjean, C., Ruan, S., Wang, Q., Shen, D.: Medical image synthesis with context-aware generative adversarial networks. In: Medical Image Computing and Computer-Assisted Intervention, pp. 417-425. Springer International Publishing (2017)
10. NIH: Computed tomography (CT) scans and cancer (2013), https://www.cancer.gov/about-cancer/diagnosis-staging/ct-scans-fact-sheet#r2 (visited 2018-05-08)
11. Rangarajan, A., Chui, H., Mjolsness, E., Pappu, S., Davachi, L., Goldman-Rakic, P., Duncan, J.: A robust point-matching algorithm for autoradiograph alignment. Medical Image Analysis 1(4), 379-398 (1997)
12. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer (2015)
13. Rueckert, D., Sonoda, L., Hayes, C., Hill, D., Leach, M., Hawkes, D.: Nonrigid registration using free-form deformations: application to breast MR images. IEEE Transactions on Medical Imaging 18(8), 712-721 (1999)
14. Smith, S.M.: Fast robust automated brain extraction. Human Brain Mapping 17(3), 143-155 (2002)
15. Song, J., Morgan, K., Sergei, T., Li, K., Davey, C., Govyadinov, P.: Anatomically accurate head models and their derivatives for dense array EEG source localization. Functional Neurology, Rehabilitation, and Ergonomics 3, 275-294 (2013)
16. Studholme, C., Hill, D.L.G., Hawkes, D.J.: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32(1), 71-86 (1999)
17. Torrado-Carvajal, A., Herraiz, J.L., Alcain, E., Montemayor, A.S., Garcia-Caamaque, L., Hernandez-Tamames, J.A., Rozenholc, Y., Malpica, N.: Fast patch-based pseudo-CT synthesis from T1-weighted MR images for PET/MR attenuation correction in brain studies. Journal of Nuclear Medicine 57(1), 136-143 (2016)
18. Turovets, S.: https://www.pedeheadmod.net/ (2018) (visited 2018-06-21)
19. Xiang, L., Wang, Q., Nie, D., Zhang, L., Jin, X., Qiao, Y., Shen, D.: Deep embedding convolutional neural network for synthesizing CT image from T1-weighted MR image. Medical Image Analysis 47, 31-44 (2018)
