Deep Neural Network Assisted Iterative Reconstruction Method for Low Dose CT

S. Bazrafkan, V. Van Nieuwenhove, J. Soons, J. De Beenhouwer, and J. Sijbers

Abstract—Low Dose Computed Tomography suffers from a high amount of noise and/or undersampling artefacts in the reconstructed image. In the current article, a Deep Learning technique is exploited as a regularization term for the iterative reconstruction method SIRT. While SIRT minimizes the error in the sinogram space, the proposed regularization model additionally steers intermediate SIRT reconstructions towards the desired output. Extensive evaluations demonstrate the superior outcomes of the proposed method compared to state of the art techniques. Comparing the forward projection of the reconstructed image with the original signal shows a higher fidelity to the sinogram space for the current approach amongst other learning based methods.

Index Terms—Low Dose CT Reconstruction, Deep Neural Networks, Iterative Reconstruction Methods.

I. INTRODUCTION

COMPUTED Tomography (CT) is a diagnostic imaging method which operates by acquiring multiple projection images (radiographs) of the object from different angles, after which a 3D image is computed from the set of radiographs. CT employs harmful X-ray radiation, and there is a continuous strive towards reducing the X-ray dose administered to the patient. There are two main approaches to lower the X-ray dose. The first is to reduce the X-ray radiation dose by decreasing the X-ray tube current [1]. This strategy, however, decreases the Signal to Noise Ratio (SNR) of the projection images and hence also of the reconstructed image. Another way to decrease the X-ray exposure is to reduce the number of acquired projection images. In other words, the CT device takes images from fewer angles and, as a result, a lower amount of radiation is applied during the imaging procedure.
This, however, leads to streaking artefacts in the reconstructed image, the severity of which increases with decreasing number of acquired projections. In the past decade, the problem of reconstructing images from a small number of projections has attracted considerable interest in the field of compressed sensing [2]. In particular, it was proven that, if the image is sparse, it can be reconstructed accurately from a small number of measurements with very high probability, as long as the set of measurements satisfies certain randomization properties [3]. In many cases, the image itself is not sparse, yet the boundary of the object is relatively small compared to the total number of pixels. In such cases, sparsity of the gradient image can be exploited by adding a proper regularization term into iterative reconstruction methods [2]. Similarly, sparsity of the gradient image [2], the grey levels [4], or the coefficients in a transform domain [5] can be exploited to reduce limited data CT artefacts. In this work, a reconstruction technique based on an iterative method is presented wherein a Machine Learning technique provides prior knowledge that serves as a regularization to the reconstruction.

S. Bazrafkan, J. De Beenhouwer and J. Sijbers are with imec Visionlab, Department of Physics, University of Antwerp, Antwerp, Belgium, e-mail: {shabab.bazrafkan}, {jan.debeenhouwer}, {jan.sijbers}@uantwerpen.be. V. Van Nieuwenhove and J. Soons are with Agfa NV, Mortsel, Belgium, email: {vincent.vannieuwenhove}, {joris.soons}@agfa.com.

A. Deep Neural Networks

In the last few years, Deep Neural Networks (DNN) have played an important role in developing a new generation of Machine Learning techniques known as Deep Learning. These models, if employed in the right place, are able to provide surprisingly high-quality results, and they have already passed the border of human accuracy on object recognition tasks [6], [7].
DNNs learn the solution from the training data and generalize this solution to new data they haven't seen before. Typical DNNs consist of several processing units such as Convolutional Layers, Fully Connected Dense Layers, Pooling, and Unpooling layers, which take advantage of techniques such as Batch Normalization [7] as a regularization step and skipped connections [8] to keep high frequency information of the input data throughout the network. Currently, we are witnessing a vast number of applications for Neural Networks in several fields of science, including low dose CT reconstruction. In [9], the authors introduce a bank of filters for the FBP method which is learned by a fully connected neural network known as a Multi-Layer Perceptron, and the weighted sum of several FBP reconstructions is returned as the final result. In [10], a k-sparse autoencoder [11] is utilized to learn the priors on the CT data. This model is applied iteratively to perform the reconstruction by minimizing over the learned manifold alongside a data fidelity term using a separable quadratic surrogate (SQS) algorithm. In [12], the authors present an end to end solution wherein the DNN accepts the sinogram space data and generates the reconstructed image at the output. The main problem with this method is the size of the network, which grows rapidly with the input dimension; this is the biggest barrier to a practical implementation. The main reason is the two dense fully connected layers wherein every neuron in each layer is connected to every neuron in the next and previous layers. The first fully connected layer maps the sinogram to a layer with the size of the output image. The second layer maps this image into an image with the same dimension. These two large layers require a fairly large number of samples to train and, still, there are implementation issues considering the required memory.
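To make the memory issue concrete, a back-of-the-envelope count for two such dense layers can be sketched, assuming the geometry used later in this paper (a 20 × 128 sinogram reconstructed to a 128 × 128 image); these sizes are an illustrative assumption, not figures reported in [12]:

```python
# Illustrative parameter count for two dense layers mapping a flattened
# 20x128 sinogram to a 128x128 image (assumed sizes, for scale only).
sino_size = 20 * 128           # flattened sinogram input: 2560
img_size = 128 * 128           # flattened output image: 16384

layer1 = sino_size * img_size  # sinogram -> image-sized layer
layer2 = img_size * img_size   # image-sized layer -> image-sized layer

total = layer1 + layer2
print(total)                   # 310378496 weights
print(total * 4 / 2**30)       # 1.15625 GiB of 32-bit float weights
```

Even at this modest 128 × 128 resolution, the two layers alone hold over 300 million weights; the count scales with the product of input and output sizes, which is what makes larger images impractical.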
In [13], the authors present a framework wherein the denoising is performed in the contourlet space with a network that exploits skipped connection and concatenation layers. The fully convolutional network trained in this work is very large; in fact, one of the convolutional layers alone has more than 1 million learnable parameters. Such a large network is prone to overfitting if there is not enough representative data available. The main advantage of this work is the mathematical background used in the network design. In [14], the wavelet transform of the reconstructed images is used to train the network in order to perform noise reduction. In other words, the wavelet decomposition is first applied to the reconstructed image and the network repairs the sample, while at the output the wavelet recomposition is applied. The wavelet transform seems to induce marginal improvements on the final metrics. Methods presented in [15]–[17] train fully convolutional networks to learn noise reduction for FBP reconstruction scenarios. Results from these approaches look promising, since the artefact patterns are similar throughout the whole database and DNNs have proved efficient in learning these patterns. In the current work, a method similar to these techniques is applied to SIRT reconstruction in a consecutive manner [18]. Other work includes the technique presented in [19], wherein a Generative Adversarial Network (GAN) [20] is used to learn the distribution of high-quality CT images. Low dose images are then repaired by minimizing a perceptual loss derived from the pre-trained VGG network [21], while the repaired image is forced to have a distribution with minimum Wasserstein distance to the learned distribution. The biggest problem with perceptual loss is its trustworthiness for medical purposes. This is a considerably important issue, since a perceptually pleasing image might not represent true diagnostic information.
In the current work, these loss functions are avoided and a Mean Squared Error (MSE) loss has been deployed. The biggest disadvantage of most of the methods described above is the lack of fidelity to the measured sinogram data. This issue is described as follows. The CT device acquires several projections from the object, which together are known as the sinogram signal, and the reconstruction methods compose an image using any of the aforementioned techniques. If a forward projection is simulated from the reconstructed image, most of these methods do not guarantee that the new sinogram is the same as the sinogram measured by the CT device. In other words, the sinogram of the reconstructed image is different from the original captured signal. This, in fact, is a serious issue which is addressed in this work. The proposed approach reduces the loss in sinogram space by taking advantage of the SIRT algorithm, which maintains fidelity to the measured sinogram, and at the same time exploits Deep Learning models to lower the loss in the image space. In the next section, the methodology is described, followed by the results and evaluations given in section III. Conclusions are presented in the last section.

II. METHODOLOGY

A. CT Reconstruction

Let x = (x_j) ∈ R^n denote the discretised image of an object, with n the number of pixels. In a parallel beam projection geometry, projection data is measured along lines l_{θ,t} = {(x, y) ∈ R × R : x cos θ + y sin θ = t}, where θ ∈ [0, π) represents the angle between the line and the y-axis and t represents the coordinate along the projection axis. In practice, a projection is measured at a finite set of projection angles and at a finite set of detector elements, each measuring the integral of the object density along a ray. Let p = (p_i) ∈ R^m denote the measured projection data, with m the total number of detector cells times the number of projection angles.
The Radon transform of the object for a finite set of projection directions can be modelled as a linear operator W, called the projection operator, which maps the image x to the projection data q:

q := W x.   (1)

In Eq. (1), W = (w_ij) is an m × n matrix where w_ij represents the contribution of image pixel j to detector i. The vector q is called the forward projection or sinogram of x. The tomographic reconstruction problem can be modelled as the recovery of x from given projection data p by solving the following system of equations:

W x = p.   (2)

In the remainder of this work, the Simultaneous Iterative Reconstruction Technique (SIRT), as described in [18], will be used to solve Eq. (2). SIRT is an iterative algorithm that finds a solution x̃ such that the weighted squared projection difference ||W x̃ − p||_R = (W x̃ − p)^T R (W x̃ − p) is minimal. R ∈ R^{m×m} is a diagonal matrix that contains the inverse row sums of W: r_ii = 1 / Σ_j w_ij. The update step of SIRT is given by:

x_{k+1} = x_k + C W^T R (p − W x_k),   (3)

where x_k is the k-th iteration of the reconstructed image and C is a diagonal matrix that contains the inverse column sums of the system matrix W: c_jj = 1 / Σ_i w_ij. For the case x_0 = 0, the SIRT algorithm is linear in the sense that the reconstructed image x̄ ∈ R^n is formed by applying a linear transformation to the input vector p of projection data.

Fig. 1. 3 × 3 kernels. Left: 1 dilate. Right: 2 dilate.

B.
Deep Neural Networks

Fully convolutional deep neural networks are models that do not contain any dense layers in their architecture. All layers are convolution, deconvolution, pooling, and unpooling layers, which might exploit batch normalization, dropout [22], and/or skipped connections to improve their output quality. The model used in this article is a fully convolutional network wherein only convolutional layers, with different dilation sizes and intense skipped connections, are used.

1) Convolutional Layers: DNNs are composed of several processing units, wherein the convolutional layers play an important role in most modern designs. In image processing use cases, the convolutional layers are 3-dimensional: width, height, and channels. The network input is also a 3-dimensional signal, in which the samples are stacked in the 4th dimension. While a 4-dimensional kernel maps each layer to the next one, an activation function is applied to the layer output to induce nonlinearity in the model. The following equation describes the mapping of a window in layer m − 1 to a pixel value in layer m:

S_m(x, y, c) = σ( Σ_{k=1}^{n_c^{m−1}} Σ_{j=−[n_w/2]}^{[n_w/2]} Σ_{i=−[n_h/2]}^{[n_h/2]} H_c^m(i, j, k) · S_{m−1}(x − i, y − j, k) ),   (4)

wherein S_m(x, y, c) is the signal at pixel location (x, y) in channel c of layer m, and H_c^m is the kernel associated with channel c of layer m. In other words, this kernel maps every channel in layer m − 1 to channel c in layer m. n_h and n_w are the height and width of the kernel and n_c^{m−1} is the number of channels in layer m − 1. σ is the activation function, which is also known as the nonlinearity of the layer. One of the most beneficial properties of convolutional layers is the idea of dilation in the kernel design [23]. This gives the kernel the opportunity to increase its field of view while keeping a low number of learnable parameters.
The idea is to expand a kernel and fill the void places with zeros, as illustrated in figure 1. There are alternative methods, such as using larger kernels and/or using pooling operations. A larger kernel means a larger number of parameters, which increases the risk of overfitting, and the pooling operation induces blurring in the final results. Dilation is a simple and effective approach to increase the receptive field of the kernel without increasing the number of parameters in the kernel and/or adding pooling layers.

2) Mixed-Scale Dense (MSD) Convolutional Networks: The Mixed-Scale Dense (MSD) Convolutional Network is a fully convolutional network which was first introduced in [24] for image segmentation tasks. Later, in [17], it was used to remove the low-dose CT reconstruction artefacts of the FBP method. The MSD structure is shown in figure 2. This architecture takes advantage of several dilation scales throughout the model, and, because of the single channel convolutions, there are fewer trainable parameters compared to other typical DNNs. In this network, each layer accepts the output of every previous layer concatenated with the input image in the channel dimension. The kernel for each layer consists of a 3 × 3 convolutional operation. Each kernel has a different dilation value, which is specified by the layer number and a value p. In the current work, p = 10. All the layers take advantage of the well known ReLU nonlinearity [25], except the last layer, which exploits a tanh nonlinearity. The work presented in [17] shows the practicality of MSD in removing streaking artefacts of the FBP reconstruction method. The main issue remains the fidelity of the method to the sinogram space, which is addressed in the following section.

C. Proposed Method

As explained in the previous section, the main issue with the current approaches to removing artefacts in CT imaging is the lack of fidelity to the sinogram space.
In other words, there is no guarantee that the sinogram of the reconstructed image matches the original sinogram. Iterative methods such as SIRT are designed to decrease the weighted squared projection distance in the sinogram space. In fact, these methods minimize the distance between the simulated and measured sinogram. On the other hand, these iterative methods do not guarantee any fidelity in the image space. Depending on the size of the solution space, a reconstructed image can be very different from the scanned object even if the measured sinogram is identical to the simulated sinogram. At the same time, a DNN model requires the image to be as similar as possible to its corresponding ground truth image. In other words, Neural Networks induce fidelity in the image space. The proposed idea is to use the output of the DNN as the initial point for the SIRT algorithm. In this approach, the DNN steers SIRT towards producing a more realistic output, while SIRT ensures that the reconstructed results retain sinogram space fidelity. In order to accomplish this, the DNN is utilized as a regularization unit for SIRT. The following equation shows the update stage for SIRT including the regularization term:

x_{k+1} = x_k + C W^T R (p − W x_k) + REG.   (5)

Fig. 2. Mixed-Scale Dense Convolutional Network architecture.

Fig. 3.
Proposed method for low dose CT reconstruction. DNNs regularize the SIRT output.

The idea is to provide a DNN which generates the term 'REG' in such a way that:

x_{k+1} = GT,   (6)

wherein GT is the ground truth of the reconstructed image. It means that the regularization term forces SIRT to provide a perfect reconstruction in the image space. From equations (5) and (6) it follows that:

REG = GT − x_k − C W^T R (p − W x_k).   (7)

Equation (7) implies that the regularization term should provide the residual value between the reconstructed image and the ground truth. The proposed method is illustrated in figure 3. Several DNNs will be trained with the reconstructed image as the input and the residual value as the target. At the inference step, the trained DNN provides the residual value, which is used to generate a new initialization point for SIRT. The regularization is applied once every N SIRT iterations.

D. Database

CPTAC-PDA: The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA)^1 is a publicly available database containing 45786 pancreas images from CPTAC phase 3 patients. It consists of 45 radiology and 77 pathology subjects. This database contains several modalities, including CT, Computed Radiography (CR), and MRI samples. Images are of different sizes, but in the current work they were resized to 128 × 128. Using multiple modalities in the training stage increases the generality of the solution, induced by the different properties of the various imaging techniques.

Visible Human Project CT Datasets: The Visible Human Project CT Datasets^2 contain 2989 images from 10 CT imaging cases. This dataset is publicly available. Images are 512 × 512, while in the current study they were all resized to 128 × 128.
This database contains CT images of the ankle, head, hip, knee, pelvis, and shoulder from both male and female subjects. The male shoulder samples (461 images) were isolated from all training data to be used as the test set.

^1 https://wiki.cancerimagingarchive.net/display/Public/CPTAC-PDA
^2 https://mri.radiology.uiowa.edu/visible_human_datasets.html

The low dose scenario is simulated by taking a limited number of projections of every image in the database. A parallel beam geometry [26] with 20 equidistant projections between 0 and 180 degrees has been utilized to produce the low dose sinogram. The ASTRA Toolbox^3 [27], [28] provides the required tools to simulate the X-ray projections. In this study, the male shoulder samples from the Visible Human Project CT Datasets (461 images) are used as the test set and the rest of the data is employed in the training procedure. 80% of the training dataset is used for network training and the remaining 20% for validation.

E. Training

In order to obtain the framework shown in figure 3, several DNNs are trained consecutively. In this work, the regularization is applied 10 times, so there are 10 different networks trained in figure 3. An MSD convolutional architecture is used for all networks. Each network consists of 51 layers with p = 10. The number of SIRT iterations before each regularization step is N = 10. The training procedure is shown in figure 4, with MaxNet = 10 and MaxEpochs = 100. MaxNet is the number of networks and MaxEpochs is the maximum number of epochs each network is trained. The first DNN's parameters were initialized uniformly in the range [−0.25, 0.25], and each further network was initialized from the model saved for the previous step. This technique, known as transfer learning, has been widely used in various applications, and it is certainly effective in the current problem, wherein the artefacts induced by SIRT in different steps are quite similar.
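The inference pipeline of figure 3 (N SIRT iterations alternated with a DNN correction, Eqs. (3), (5) and (7)) can be sketched with a toy dense projector. Here `dnn_residual` is a hypothetical stand-in for the trained MSD networks (it applies no correction), and all sizes are illustrative rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's quantities (illustrative sizes only):
# W is an m-by-n projection matrix, p the measured sinogram.
n, m = 16, 32
x_true = rng.random(n)
W = rng.random((m, n))
p = W @ x_true

# Diagonal weights of Eq. (3): inverse row sums (R) and column sums (C).
R = 1.0 / W.sum(axis=1)   # r_ii
C = 1.0 / W.sum(axis=0)   # c_jj

def sirt(x, n_iter):
    """Run n_iter SIRT updates: x <- x + C W^T R (p - W x)."""
    for _ in range(n_iter):
        x = x + C * (W.T @ (R * (p - W @ x)))
    return x

def dnn_residual(x):
    """Hypothetical stand-in for a trained network that would predict
    the residual GT - x - C W^T R (p - W x) of Eq. (7).
    Here it returns no correction at all."""
    return np.zeros_like(x)

# Alternate N SIRT iterations with a DNN correction (Fig. 3).
x = np.zeros(n)
N, num_nets = 10, 3
for _ in range(num_nets):
    x = sirt(x, N)
    x = x + dnn_residual(x)   # DNN output becomes the new SIRT init

print(np.linalg.norm(W @ x - p))  # sinogram-space error shrinks
```

With a real denoiser in place of the stub, each correction would pull the intermediate reconstruction towards the ground truth while the subsequent SIRT block restores sinogram consistency.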
The Mean Squared Error is used as the loss function for training, which is given by:

Loss = (1 / (B_s H W)) Σ_{k=1}^{B_s} Σ_{j=1}^{H} Σ_{i=1}^{W} ( O(i, j, k) − t(i, j, k) )^2,   (8)

where W, H, and B_s are the width, height, and batch size of the input signal, respectively, O is the network output, and t is the target. A batch size equal to 10 has been used in this work. An ADAM optimizer [29] has been utilized to update the parameters, with learning rate, β_1, β_2, and ε equal to 0.0001, 0.9, 0.999, and 10^−8, respectively. The MXNET 1.3.0 [30]^4 framework has been used to train the networks and the ASTRA Toolbox [27], [28] was used to perform the SIRT steps. The training was accomplished on one TESLA V100 [31] GPU of a DGX Station [32].

^3 https://www.astra-toolbox.com/

Fig. 4. Training procedure for the proposed framework.

Figures 5a and 5b illustrate the training loss and validation loss for each of the ten trained networks. As shown, the network losses decrease after each SIRT block. This is a sign of the cooperative behaviour of SIRT and the DNN: while SIRT minimizes the error in sinogram space, it also decreases the image space loss thanks to the regularization term provided by the DNN. The same improvement is visible in the validation loss, which indicates the generalization of the method.

III. RESULTS

In this section, the proposed method is evaluated and compared to the state of the art methods in the literature. The term SIRT+DNN is used to represent the current method.
^4 https://mxnet.apache.org/

The other techniques used for comparison are as follows:

1) Model-based methods: Filtered Back Projection (FBP), Simultaneous Iterative Reconstruction Technique (SIRT) [18], Conjugate Gradient Least Squares (CGLS) [33], and Total Variation with adaptive step size (TV adaptive) [34].

2) Learning based methods:

a) AUTOMAP [12]: This is the state of the art implementation of an end to end design for image reconstruction, wherein a single DNN is trained to perform this job. It accepts the sensor signal and returns the reconstructed image. In this work, the low dose sinogram signal is used as input and the original image as the target image to provide the loss function. The AUTOMAP network is trained on the exact same data as SIRT+DNN. The Mean Squared Error is used as the loss function, with an ADAM optimizer updating the network parameters. The learning rate, β_1, β_2, and ε are set to 0.0001, 0.9, 0.999, and 10^−8, respectively. The main disadvantage of this method is the fact that it is a fully data-driven method which does not take advantage of the image acquisition and geometry properties.

b) FBP+DNN: This method utilizes a DNN to learn the artefacts induced by the FBP method. This technique has been widely used in the literature, and in the current evaluations the framework presented in [17] has been employed. In order to provide a fair comparison, the FBP+DNN model has been trained with the same MSD network as SIRT+DNN. The same database is used in the training procedure. The Mean Squared Error is used as the loss function, as declared in [17]. The ADAM optimizer is utilized to update the parameters, with learning rate, β_1, β_2, and ε equal to 0.0001, 0.9, 0.999, and 10^−8, respectively.

c) Neural Network Filtered Back Projection, NNFBP (16, 32, 64) [9]: In this approach, a fully connected neural network is trained to find a set of best filters for the FBP method.
The numbers in parentheses indicate the number of hidden units deployed in the network. In the observations made in [9], it was shown that networks with 16, 32 and 64 hidden units return the results with the highest accuracies; therefore, these three setups have been used in the current evaluation section. Since the training is performed at the pixel level, the test set has been used for training^5. This gives the opportunity to compare the proposed method with the best version of NNFBP on the current data.

The measurements used for evaluation are as follows:

1) Peak Signal to Noise Ratio (PSNR): the ratio between the maximum power of the signal and the power of the noise. This measure is widely used in image comparison for reconstruction use cases. A higher value indicates better reconstruction quality.

^5 http://dmpelt.github.io/pynnfbp/

Fig. 5. Training and validation losses for 10 networks. (a) Training Loss. (b) Validation Loss.

TABLE I
EVALUATIONS IN THE IMAGE SPACE ON PSNR, MSE AND SSIM

                       PSNR            MSE              SSIM
                       µ      σ        µ       σ        µ       σ
Model Based Methods
  FBP                  23.7   1.2      4.4e-3  1.0e-3   0.6015  2.5e-2
  CGLS                 29.3   1.0      1.2e-3  2.6e-4   0.8310  2.2e-2
  SIRT                 28.8   1.1      1.3e-3  3.0e-4   0.8148  2.3e-2
  TV Adaptive          29.7   1.2      1.1e-3  2.6e-4   0.8532  2.1e-2
Learning Based Methods
  FBP+DNN              31.0   1.1      8.1e-4  1.8e-4   0.9020  1.5e-2
  NNFBP16              25.1   1.1      3.2e-3  8.6e-4   0.8050  2.1e-2
  NNFBP32              27.8   1.5      1.8e-3  7.4e-4   0.8350  1.9e-2
  NNFBP64              29.9   0.9      1.0e-3  2.3e-4   0.8542  1.7e-2
  AUTOMAP              28.2   0.8      1.5e-3  2.9e-4   0.8549  1.6e-2
  SIRT+DNN             37.2   1.3      1.9e-4  5.1e-5   0.9805  4.4e-3

2) Mean Squared Error (MSE): represents the power of the noise in the reconstructed image.
A lower value of MSE corresponds to a higher quality reconstruction. Both MSE and PSNR are pixel level measures. In other words, these measures calculate the difference between two images in terms of pixel level grayscale values.

3) Structural Similarity Index (SSIM): a quality measurement presented in [35] wherein two images are compared based on their structural information and not solely the pixel values. The index ranges between zero and one, where zero indicates no similarity and one is a perfect structural match between the reconstructed image and the ground truth.

A. Image Space Evaluations

Table I shows the comparisons between the proposed method and the state of the art methods in the literature. The high PSNR and low MSE indicate a higher accuracy in returning pixel level information for the proposed scheme, and the high SSIM value shows the consistency of SIRT+DNN in keeping the structural information even at very low CT doses. The learning based methods return a higher SSIM value, especially when they are used as an auxiliary step to remove artefacts from the reconstructed image, as in the FBP+DNN and SIRT+DNN techniques. Meanwhile, SIRT+DNN delivers the highest PSNR and lowest MSE compared to the other methods. As shown in figure 3, each SIRT step reduces the loss in the sinogram space, which in practice induces streaking artefacts in the image space, and each DNN step removes these artefacts without considering the consistency in the sinogram space. The mixture of these two steps, applied consecutively, provides a strong tool for returning a high-quality reconstruction in both pixel level and structural information. Figure 6 illustrates the reconstruction results for the provided geometry (20 projections, parallel beam) for the different methods.
Fig. 6. Reconstruction examples taken from the test set alongside their corresponding ground truth.

Fig. 7. Left: soft tissue reconstruction examples. Right: sharp edges examples.

Fig. 8. Processing blocks for measuring sinogram fidelity.

The AUTOMAP method does not introduce any streaking, but the provided images suffer from a tangled artefact. In other words, the details are mostly twisted together, which results in a matte reconstruction. The main reason for this behaviour is that AUTOMAP does not include any geometry information in the reconstruction procedure, and the low dose scenario also increases the uncertainty of the solution. All other methods suffer from severe streaking artefacts, and even in the FBP+DNN scheme, the network is not able to remove all the artefacts.
In other cases, such as CGLS, SIRT, and TV adaptive, a certain amount of blurring is also introduced to the image. Especially in the TV adaptive method, details are merged into a single block due to the total variation term of the loss function. Figure 7 illustrates zoomed-in images which give more elaborate insight into the presented technique. In the left column, a soft tissue block is illustrated. Most of the methods fail to reconstruct the correct low contrast property of the soft tissue. AUTOMAP clearly fails to produce even the slightest structural information. Other methods, such as SIRT, FBP, CGLS, and NNFBP, return a blurry image which lacks any recognizable edges. The next best result is from the TV adaptive method, which produces sharper edges when the contrast is large enough (air/object or bone/soft tissue transitions) but yields severely blurred results in the soft tissue regions. The SIRT+DNN method returns a better output for the soft tissue. There is a similar situation for the right column images, which correspond to the reconstruction of sharp edges. The FBP and NNFBP methods induce a strong level of noise into the image, which is very difficult to remove even with a DNN. This is shown in the FBP+DNN image, in which the black region is closed due to the filtering applied in the DNN step. Again, the proposed SIRT+DNN method produces the best reconstruction compared to the other methods.

B. Sinogram Space Evaluations

As described earlier, most of the learning based reconstruction methods suffer from a lack of fidelity to the sinogram space. In other words, the forward projection of the reconstructed image differs from the original measurements taken from the sensors. In order to overcome this drawback, the proposed method takes advantage of SIRT, which decreases the loss in sinogram space, while the DNN is utilized as a regularization term which introduces image space information into the model.
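The fidelity pipeline of figure 8 amounts to forward-projecting the reconstruction and scoring the result against the measured sinogram. A minimal sketch with a toy projector (all shapes and values here are illustrative assumptions):

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio of est against ref."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Toy projector, measurement, and "reconstruction" (illustrative only).
rng = np.random.default_rng(1)
W = rng.random((32, 16))                          # projection operator
x_true = rng.random(16)
p_measured = W @ x_true                           # what the scanner recorded
x_rec = x_true + 0.01 * rng.standard_normal(16)   # some reconstruction

# Forward-project the reconstruction and compare (Fig. 8).
p_sim = W @ x_rec
fidelity_mse = np.mean((p_measured - p_sim) ** 2)
fidelity_psnr = psnr(p_measured, p_sim)
```

The same comparison (plus SSIM) produces the sinogram-space scores reported in table II.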
Fig. 9. Mean PSNR wrt the background intensity for different methods.
The PSNR, MSE, and SSIM measurements are calculated from the pipeline shown in figure 8. This is done for all the methods and the results are shown in Table II. The model-based techniques such as SIRT and CGLS are designed to explicitly keep the reconstruction sinogram as close as possible to the measurements. The TV technique imposes corrections in the image space, which reduces its fidelity to the sinogram space. The FBP method does not couple back to the sinogram, which is why it returns the worst fidelity among the model-based methods. Among the learning based methods, NNFBP returns the worst values, which indicates that the designed filters do not preserve sinogram fidelity. AUTOMAP and FBP+DNN give the next best results. In fact, the higher value obtained for FBP+DNN compared to the original FBP shows that the DNN is pushing the results towards a higher sinogram fidelity, even though no sinogram loss term is included in the DNN objectives. The proposed SIRT+DNN method produces the best results in sinogram space among the learning based methods. It is also worthwhile to mention that while the DNN does not improve the sinogram fidelity compared to SIRT, it has a significant impact on the image space measurements.

C. Sinogram Noise Evaluation

To investigate the performance of the proposed reconstruction method in terms of the noise in the sinogram, projection data of each phantom image was generated, to which Poisson noise was applied. The intensity of this noise is defined by the incident beam intensity, I0 (further referred to as the background intensity), i.e. the photon count in the incident X-ray beam. Reconstructions were performed using different values for I0.
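The noise model described above can be simulated as follows. This is a hedged sketch (the paper does not give the exact simulation code): the sinogram is assumed to hold line integrals, expected photon counts follow the Beer-Lambert law, and counts are clamped to at least one photon to avoid taking the logarithm of zero.

```python
import numpy as np

def add_poisson_noise(sinogram, I0, rng=None):
    """Apply photon-counting (Poisson) noise to a line-integral sinogram.

    I0 is the incident beam (background) intensity: the expected photon
    count per detector element for an unattenuated ray.
    """
    rng = np.random.default_rng() if rng is None else rng
    expected_counts = I0 * np.exp(-sinogram)   # Beer-Lambert law
    counts = rng.poisson(expected_counts)      # photon-counting noise
    counts = np.maximum(counts, 1)             # avoid log(0)
    return -np.log(counts / I0)                # back to line integrals

sino = np.full((20, 32), 0.5)                  # constant toy sinogram
noisy_lo = add_poisson_noise(sino, I0=1e3, rng=np.random.default_rng(1))
noisy_hi = add_poisson_noise(sino, I0=1e5, rng=np.random.default_rng(1))
# Higher I0 (more photons) means lower noise power in the sinogram.
```

Sweeping `I0` over a range of values reproduces the horizontal axis of figures 9 to 11.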
TABLE II
EVALUATIONS IN THE SINOGRAM SPACE ON PSNR, MSE AND SSIM

                         PSNR (mu / sigma)   MSE (mu / sigma)     SSIM (mu / sigma)
Model Based Methods
  FBP                    39.1  / 1.3         1.3e-4  / 3.1e-5     0.9686 / 5.7e-3
  CGLS                   101.9 / 1.0         6.6e-11 / 1.5e-11    1.0000 / 3.9e-9
  SIRT                   92.2  / 2.1         7.3e-10 / 1.1e-9     1.0000 / 3.1e-7
  TV Adaptive            61.8  / 1.2         6.9e-7  / 1.7e-7     0.9999 / 3.4e-5
Learning Based Methods
  FBP+DNN                47.7  / 1.8         1.8e-5  / 7.3e-6     0.9918 / 3.0e-3
  NNFBP16                28.2  / 1.6         1.6e-3  / 6.2e-4     0.9479 / 1.1e-2
  NNFBP32                28.8  / 1.6         1.4e-3  / 5.6e-4     0.9508 / 2.6e-2
  NNFBP64                28.5  / 1.7         1.5e-3  / 6.4e-4     0.9535 / 1.4e-2
  AUTOMAP                45.4  / 1.2         3.0e-5  / 8.2e-6     0.9923 / 1.6e-3
  SIRT+DNN               68.7  / 0.9         1.4e-7  / 2.8e-8     1.0000 / 5.6e-6

Fig. 10. Mean MSE wrt the background intensity for different methods.
The PSNR, MSE, and SSIM of the reconstructed images as a function of the noise level in the projection images (obtained by varying the background intensity) are illustrated for several methods in figures 9 to 11, respectively. At very low background intensities (high noise power), the learning based methods are able to preserve the structural information better than the model-based methods. Considering pixel level information, SIRT and TV adaptive give higher quality results than FBP and CGLS at high noise power. AUTOMAP gives the most robust results across different noise levels, which is remarkable considering that it is a fully learning based method and no model information was used in developing it. The proposed SIRT+DNN method lies within the NNFBP methods at low background intensities, but it has the highest slope in improving the output with respect to the noise level. In other words, for background intensities higher than 10000, the proposed method returns superior results compared to the other techniques.
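As a reference for the SSIM measure used in these evaluations [35], the snippet below computes a single-window (global) SSIM. Note that standard implementations apply the formula over sliding local windows and average the result, so this is a simplified illustration rather than the exact evaluation code.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM (Wang et al.); real evaluations use local windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()     # cross-covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).random((64, 64))
noise = 0.1 * np.random.default_rng(1).standard_normal((64, 64))
noisy = np.clip(img + noise, 0.0, 1.0)
# SSIM of an image with itself is 1; added noise lowers it.
```

This makes explicit why SSIM tracks structural agreement: the cross-covariance term penalizes decorrelation between reference and reconstruction, independent of a global intensity offset.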
Fig. 11. Mean SSIM wrt the background intensity for different methods.
It is worthwhile to mention that no noise was added to the input samples in the training stage. The results for the learning-based methods will improve by providing a broader range of variations in the training set, including noisy samples.

D. Intermediate Results

As shown in figure 3, the proposed method consists of several SIRT and DNN blocks placed consecutively. In this section, the results after each step are investigated. The considered blocks are divided into two observations, for SIRT and DNN individually. In the current simulations, ten DNNs have been trained, one after each SIRT step, and at the end a final SIRT was applied to the network output. Therefore there are ten DNN and eleven SIRT blocks in total. The output of each of these blocks is calculated for the test set, and the PSNR, MSE and SSIM measures are plotted in figures 12 and 13 after each SIRT and DNN block, respectively. Both pixel value and structural features improve after each SIRT and DNN step. This indicates the cooperative behavior of SIRT and DNN. In other words, these figures show that the improvements induced by the DNN are in the same direction as optimizing the loss in the sinogram space. It is also shown that the early stages of the model play an important role in the whole workflow. The low-quality results after the first SIRT block are strongly improved by the first DNN block, while further along the processing steps, the improvements become more and more marginal. It is also worthwhile to mention that these improvements generate a more detailed reconstruction, which makes the proposed SIRT+DNN method stand out amongst the other reconstruction methods.
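The alternating scheme can be sketched as below. This is a minimal numpy illustration under stated assumptions, not the paper's implementation: a small dense matrix replaces the real projector, the SIRT block uses the standard update x <- x + C A^T R (b - A x) with inverse row and column sums [18], and an identity function stands in for the trained MS-D networks.

```python
import numpy as np

def sirt_block(A, b, x, n_iter=30):
    """One SIRT block: n_iter updates x <- x + C A^T R (b - A x)."""
    R = 1.0 / A.sum(axis=1)            # inverse row sums (one per ray)
    C = 1.0 / A.sum(axis=0)            # inverse column sums (one per pixel)
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

def denoise(x):
    """Placeholder for a trained MS-D network regularizer (identity here)."""
    return x

def sirt_dnn(A, b, n_pairs=10, n_iter=30):
    """Ten SIRT+DNN pairs followed by a final, eleventh SIRT block."""
    x = np.zeros(A.shape[1])
    for _ in range(n_pairs):
        x = denoise(sirt_block(A, b, x, n_iter))
    return sirt_block(A, b, x, n_iter)

# Toy problem: consistent, overdetermined system with a nonnegative matrix.
rng = np.random.default_rng(0)
A = rng.random((40, 16))
x_true = rng.random(16)
b = A @ x_true
x_rec = sirt_dnn(A, b)                 # sinogram residual shrinks over blocks
```

With the identity denoiser this reduces to plain SIRT; in the proposed method each `denoise` call would be a separately trained network, initialized from the previous step via transfer learning as described in the conclusion.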
IV. CONCLUSION

In this article, a technique for low dose CT reconstruction has been proposed wherein a Deep Neural Network is utilized as a regularization term for a classical iterative reconstruction algorithm known as SIRT. Ten Mixed-Scale Dense Convolutional Deep Neural Networks have been employed consecutively after ten SIRT blocks. The first network has been initialized randomly, and further networks take advantage of the transfer learning technique, wherein each network is initialized as the best network from the previous step. In the results section, the proposed method is compared to state of the art techniques in CT image reconstruction, where it shows a superior improvement in PSNR, MSE and SSIM measurements compared to the other methods. Another problem tackled in the proposed technique is the fidelity to the sinogram space. Most of the learning based methods act in the image space, which is blind to the sinogram space, so that the reconstructed image after forward projection differs from the originally measured sinogram. By using the power of SIRT in decreasing the loss in sinogram space and the DNN optimizing the model in image space, SIRT+DNN returns the best sinogram fidelity measures amongst the learning-based methods, alongside superior results in the image space. The proposed technique is also compared to other methods in handling sinogram noise. It is shown that, in general, the learning based models return more structurally correct results at different noise levels compared to model-based techniques, and the proposed method gives better results at background intensities higher than 10000. It is worthwhile to note that the network was not trained on noisy data; adding noise to the training samples will increase the robustness of the model to different noise levels.
The intermediate results have been presented, which support the conclusion that the early stages of the model have the most impact on improving the result; it is also shown that both SIRT and DNN cooperate in returning a satisfactory output in both image and sinogram space. Like every other technique, the presented method suffers from several drawbacks, explained as follows:
1) The current technique is trained on a specific geometry (20 projections, parallel beam geometry) and will not induce the exact same improvements over other geometries. This issue is not limited to the current method but applies to every other learning-based technique. In the evaluation section, all other learning-based techniques are trained on the same geometry to accomplish a fair comparison.
2) The presented technique is slower than the other techniques in the evaluation section. This is expected, since the current method deploys the SIRT and DNN blocks several times iteratively. However, considering the fast improvement of hardware design and the affordability and accessibility of parallel processing machines such as GPUs, this issue is not prohibitive for placing these types of models into the consumer market, even with current technologies.
Future work includes adding sinogram noise to the training data in order to train a more robust model.

ACKNOWLEDGMENT

This work is financially supported by VLAIO (Flemish Agency for Innovation and Entrepreneurship), through the ANNTOM project HBC.2017.0595. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan Xp GPU used for this research.

REFERENCES

[1] C. H. McCollough, M. R. Bruesewitz, and J. M. Kofler Jr, "CT dose reduction and dose management tools: overview of available options," Radiographics, vol. 26, no. 2, pp. 503-512, 2006.
[2] E. Y. Sidky and X.
Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Physics in Medicine and Biology, vol. 53, no. 17, pp. 4777-4807, Sep. 2008.
[3] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[4] K. J. Batenburg and J. Sijbers, "DART: A practical reconstruction algorithm for discrete tomography," IEEE Transactions on Image Processing.
[5] M. Rantala, S. Vanska, S. Jarvenpaa, M. Kalke, M. Lassas, J. Moberg, and S. Siltanen, "Wavelet-based reconstruction for limited-angle X-ray tomography," IEEE Transactions on Medical Imaging, vol. 25, no. 2, pp. 210-217, Feb. 2006.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026-1034.
[7] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.
[8] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[9] D. M. Pelt and K. J. Batenburg, "Fast tomographic reconstruction from limited data using artificial neural networks," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 5238-5251, 2013.
Fig. 12. Test statistics after each SIRT step for PSNR, MSE and SSIM measurements.
Fig. 13. Test statistics after each DNN step for PSNR, MSE and SSIM measurements.
[10] D. Wu, K. Kim, G. El Fakhri, and Q. Li, "Iterative low-dose CT reconstruction with priors trained by artificial neural network," IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2479-2486, 2017.
[11] A. Makhzani and B. J. Frey, "Winner-take-all autoencoders," in Advances in Neural Information Processing Systems, 2015, pp. 2791-2799.
[12] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, "Image reconstruction by domain-transform manifold learning," Nature, vol. 555, no. 7697, p. 487, 2018.
[13] E. Kang, W. Chang, J. Yoo, and J. C. Ye, "Deep convolutional framelet denoising for low-dose CT via wavelet residual network," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1358-1369, 2018.
[14] E. Kang, J. C. Ye et al., "Wavelet domain residual network (WavResNet) for low-dose X-ray CT reconstruction," arXiv preprint arXiv:1703.01383, 2017.
[15] H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, "Low-dose CT with a residual encoder-decoder convolutional neural network," IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2524-2535, 2017.
[16] H. Chen, Y. Zhang, W. Zhang, P. Liao, K. Li, J. Zhou, and G. Wang, "Low-dose CT via convolutional neural network," Biomedical Optics Express, vol. 8, no. 2, pp. 679-694, 2017.
[17] D. Pelt, K. Batenburg, and J. Sethian, "Improving tomographic reconstruction from limited data using mixed-scale dense convolutional neural networks," Journal of Imaging, vol. 4, no. 11, p. 128, 2018.
[18] J. Gregor and T. Benson, "Computational analysis and improvement of SIRT," IEEE Transactions on Medical Imaging, vol. 27, no. 7, pp. 918-924, 2008.
[19] Q. Yang, P. Yan, Y. Zhang, H. Yu, Y. Shi, X. Mou, M. K. Kalra, Y. Zhang, L. Sun, and G.
Wang, "Low dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss," IEEE Transactions on Medical Imaging, 2018.
[20] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672-2680.
[21] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[22] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[23] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," arXiv preprint arXiv:1511.07122, 2015.
[24] D. M. Pelt and J. A. Sethian, "A mixed-scale dense convolutional neural network for image analysis," Proceedings of the National Academy of Sciences, vol. 115, no. 2, pp. 254-259, 2018.
[25] R. H. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung, "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit," Nature, vol. 405, no. 6789, p. 947, 2000.
[26] W. Palenstijn, K. Batenburg, and J. Sijbers, "Performance improvements for iterative electron tomography reconstruction using graphics processing units (GPUs)," Journal of Structural Biology, vol. 176, no. 2, pp. 250-253, 2011.
[27] W. van Aarle, W. J. Palenstijn, J. Cant, E. Janssens, F. Bleichrodt, A. Dabravolski, J. De Beenhouwer, K. J. Batenburg, and J. Sijbers, "Fast and flexible x-ray tomography using the ASTRA toolbox," Optics Express, vol. 24, no. 22, pp. 25129-25147, 2016.
[28] W. van Aarle, W. J. Palenstijn, J. De Beenhouwer, T. Altantzis, S. Bals, K. J. Batenburg, and J.
Sijbers, "The ASTRA toolbox: A platform for advanced algorithm development in electron tomography," Ultramicroscopy, vol. 157, pp. 35-47, 2015.
[29] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[30] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang, "MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems," arXiv preprint arXiv:1512.01274, 2015.
[31] NVIDIA TESLA, "V100 GPU accelerator," NVIDIA, Oct. 2016. [Online]. Available: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf
[32] NVIDIA, "NVIDIA DGX Station: AI workstation for data science teams," 2018. [Online]. Available: https://www.nvidia.com/en-us/data-center/dgx-station/
[33] C. Paige and M. Saunders, "LSQR - an algorithm for sparse linear equations and sparse least-squares," ACM Transactions on Mathematical Software, vol. 8, no. 1, pp. 43-71, 1982.
[34] T. Yokota and H. Hontani, "An efficient method for adapting step-size parameters of primal-dual hybrid gradient method in application to total variation regularization," in 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2017), Kuala Lumpur, Malaysia, Dec. 2017, pp. 973-979.
[35] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli et al., "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
Shabab Bazrafkan received his B.Sc. degree from Urmia University, Urmia, Iran in electrical engineering in 2011, his M.Sc. degree from Shiraz University of Technology (SuTECH) in telecommunication engineering, image processing branch, in 2013, and his Ph.D. from the National University of Ireland, Galway (NUIG) in Deep Learning and Neural Network design in 2018. He is currently a postdoctoral researcher working on low dose CT image reconstruction using machine learning techniques with Vision Lab at the University of Antwerp.

Vincent Van Nieuwenhove received his master's degree in Physics in 2013 at the University of Antwerp, Belgium, with a thesis on statistical processing of functional MRI data. Afterwards, he pursued his PhD at the imec-Vision Lab, University of Antwerp. He received his PhD in Physics in 2017 with a thesis entitled "Model-based reconstruction algorithms for dynamic X-ray CT". In 2018, Vincent joined Agfa NV, Belgium as Research Engineer in 2-3D Medical Imaging.

Joris Soons received his M.Sc. degree in physics from the University of Antwerp, Belgium in 2007. During his PhD (2007-2012) at the lab of biomedical physics (University of Antwerp) and as a postdoctoral researcher (2012-2015) at the Otobiomechanics group (Stanford University, USA), he focused on 3D imaging techniques, modelling and mechanical experiments in biomechanics. Currently he is a researcher in 3D reconstruction and image processing at AGFA NV, Belgium.

Jan De Beenhouwer obtained an M.Sc. in Computer Science Engineering in 2003 from the KU Leuven, Belgium, and a Ph.D. in Biomedical Engineering from the University of Ghent, Belgium in 2008. He was a postdoctoral fellow for 2 years at the same institution prior to joining the Vision Lab at the University of Antwerp, Belgium.
Currently, he is a research professor and leads the ASTRA group in imec-Vision Lab, which focuses on the development of advanced computational methods for tomography as well as new reconstruction techniques that lead to better reconstruction quality compared to classical reconstruction methods. His main interest is in image reconstruction, processing and analysis with a focus on computed tomography and electron tomography.

Jan Sijbers graduated in Physics in 1993. In 1998, he received a PhD in Physics from the University of Antwerp, Belgium, entitled "Signal and Noise Estimation from Magnetic Resonance Images". He was an FWO Postdoc at the University of Antwerp (Belgium) and the Delft University of Technology (the Netherlands) from 2002-2008. In 2010, he was appointed as a senior lecturer at the University of Antwerp, Belgium. In 2014, he became a full professor. He is the head of imec-Vision Lab, a research lab focusing on image reconstruction, processing, and analysis. His main interests are in the domain of Magnetic Resonance Imaging and X-ray Computed Tomography. He is Senior Area Editor of IEEE Transactions on Image Processing as well as Associate Editor of IEEE Transactions on Medical Imaging.