Deep Neural Networks for Non-Linear Model-Based Ultrasound Reconstruction

H. Almansouri*, S.V. Venkatakrishnan†, G.T. Buzzard‡, C.A. Bouman*, H. Santos-Villalobos†

*School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907
†Imaging, Signals and Machine Learning Group, Oak Ridge National Laboratory, Oak Ridge, TN 37831
‡Department of Mathematics, Purdue University, West Lafayette, IN 47907

ABSTRACT

Ultrasound reflection tomography is widely used to image large complex specimens that are only accessible from a single side, such as well systems and nuclear power plant containment walls. Typical methods for inverting the measurement rely on delay-and-sum algorithms that rapidly produce reconstructions, but with significant artifacts. Recently, model-based reconstruction approaches using a linear forward model have been shown to significantly improve image quality compared to the conventional approach. However, even these techniques result in artifacts for complex objects because of the inherent non-linearity of the ultrasound forward model. In this paper, we propose a non-iterative model-based reconstruction method for inverting measurements that are based on non-linear forward models for ultrasound imaging. Our approach involves obtaining an approximate estimate of the reconstruction using a simple linear back-projection and training a deep neural network to refine this to the actual reconstruction. We apply our method to simulated and experimental ultrasound data to demonstrate dramatic improvements in image quality compared to the delay-and-sum approach and the linear model-based reconstruction approach.

I. INTRODUCTION

One-sided ultrasound reflection tomography is vital for non-destructive evaluation (NDE) of large heterogeneous specimens, such as the casing of injection wells and thick concrete walls [1].
A typical system uses an array of transducers to transmit a signal from one sensor and receive at the others (see Fig. 1). The collection of received time-series signals is then processed to reconstruct a cross-section of the object being imaged. Due to the need for rapid reconstructions, full waveform inversion approaches [2] are not practical for ultrasound NDE, and hence analytic algorithms based on a delay-and-sum approach, such as the synthetic aperture focusing technique (SAFT), are routinely used for reconstructions of ultrasound reflection mode data [3]–[5]. Recently, we developed a model-based iterative reconstruction [6] approach using a simplified linear model and demonstrated significant improvements in reconstruction performance compared to SAFT while still being able to produce a reconstruction in near real-time. However, this linear-MBIR (L-MBIR) method still results in artifacts, such as reverberation and shadowing, due to the inherent non-linearity of the ultrasound system. In summary, existing approaches for ultrasound reflection imaging for NDE may result in reconstructions with significant artifacts.

[Footnote: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).]

Fig. 1: Illustration of a typical ultrasound system for non-destructive evaluation. The transducers are used to make pulse-echo measurements which are processed to reconstruct the cross-section.

There have been several recent efforts to use deep convolutional neural networks (CNNs) to address inverse problems in imaging [7]. One class of algorithms applies a two-step, non-iterative approach composed of a simple inversion followed by a CNN to obtain a reconstruction for inverse problems such as tomography [8], [9], MRI [10], [11], photo-acoustic tomography [12], [13], compressed sensing [14], and non-linear optical imaging based on multiple scattering [15]. Alternatively, researchers have adapted variable splitting strategies such as the Plug-and-Play approach [16], [17] to iteratively solve two learned sub-problems, corresponding to a forward-model inversion and a denoising step, in order to determine a fixed point [13], [18]–[22]. In summary, deep-learning based techniques have demonstrated promising results for a variety of inverse problems in imaging.

In this paper, we propose a learning-based approach for ultrasound reflection mode imaging using a non-iterative two-stage strategy. Since the underlying forward model is non-linear, we first obtain a preliminary reconstruction based on the adjoint of a simple linear model for the ultrasound system. We then train a multi-scale deep convolutional neural network to map this initial reconstruction to the true reconstruction. Importantly, the CNN can account for the non-linear and space-varying effects in the ultrasound forward model, as well as for attributes of the prior model that can be used to suppress image artifacts and noise. Once the network is trained, the algorithm can be applied in real time, because both steps can be performed rapidly using GPUs.
We demonstrate that the proposed approach dramatically improves image quality for ultrasound imaging compared to SAFT and L-MBIR by removing artifacts caused by the non-linearity of the system, such as reverberation and shadowing artifacts. The improvements are more evident for image targets buried deep inside the object being inspected. Also, the proposed CNN-based approach is able to reconstruct the specimen's acoustic speeds from the back-projected time-series signals, yielding a quantitative reconstruction, in contrast to existing approaches that are qualitative.

The organization of the rest of this paper is as follows. In Section II we introduce the ultrasound forward model and the conventional linear model used for inversion. In Section III we present details of the proposed inversion algorithm based on a deep neural network. Finally, in Section IV we present our results based on simulated and experimental data.

II. ULTRASOUND FORWARD MODEL AND LINEAR MODEL-BASED INVERSION

Fig. 2: Illustration of a back-projection of one-sided ultrasonic NDE measurements using the system matrix A in Eq. (3). The left image is the ground truth (speed of sound in units of m/s) and the right image is the back-projection of the simulated measurements obtained from the ground truth using an array of 10 transducers and a non-linear wave propagation model. The back-projection suffers from artifacts and does not faithfully reconstruct the object.

The goal of ultrasound reflection mode imaging is to determine the properties of a cross-section being imaged using a transducer array (see Fig. 1).
In particular, ultrasound wave propagation in a medium can be described by a set of coupled partial differential equations [23],

$$\frac{\partial u}{\partial t} = -\frac{1}{\rho_0}\nabla p, \qquad \frac{\partial \rho}{\partial t} = -\rho_0 \nabla\cdot u - u\cdot\nabla\rho_0, \qquad p = c_0^2\left(\rho + d\cdot\nabla\rho_0 - L\rho\right), \tag{1}$$

where $u$ is the acoustic particle velocity, $\rho$ is the acoustic density, $d$ is the acoustic particle displacement, and $L$ is an operator defined by

$$L = -2\alpha_0 c_0^{y-1}\,\frac{\partial}{\partial t}\left(-\nabla^2\right)^{\frac{y}{2}-1} + 2\alpha_0 c_0^{y}\tan\!\left(\frac{\pi y}{2}\right)\left(-\nabla^2\right)^{\frac{y+1}{2}-1},$$

and $0 < y < 3$, $y \neq 1$, is a parameter that controls the behavior of the absorption and dispersion.

For the forward (simulation) model, the inputs to this system of equations are the 2D fields corresponding to $c_0$, the acoustic velocities; $\rho_0$, the ambient densities; and $\alpha_0$, the attenuation. The output is the pressure $p$ measured at the locations $r_j$ of the sensors as a function of time $t$. These measurements are then concatenated to form the measurement vector $y$. Abstractly, we can represent this forward-model relationship as $y = f(c_0, \rho_0, \alpha_0)$. Using these equations we can solve for the pressure at the sensor locations for a given input signal in order to simulate the received signal. However, the inversion of the underlying quantities from the received signals based on this model is challenging because of the complicated and non-linear nature of the forward model.

To address these challenges, we developed a simplified linear model for the measurements [6], given by

$$\tilde{y}_{i,j}(t) = \int_{\mathbb{R}^3} \tilde{A}_{i,j}\left(\tau_{i,j}(\nu), t\right)\,\tilde{x}(\nu)\,d\nu + \tilde{d}_{i,j}(t), \tag{2}$$

where $\tilde{y}_{i,j}$ is the measurement at the transmit-receive pair $(i,j)$, $\nu$ is a point in the field of view, $\tilde{A}_{i,j}$ is a response function that accounts for the time-shift and attenuation of the transmitted pulse, $\tilde{x}$ is the reflection coefficient, $\tau_{i,j}$ is the time delay of the transmitted signal for point $\nu$ and the measurement pair $(i,j)$, and $\tilde{d}_{i,j}$ is the direct arrival signal.
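As an illustration of how the linear model in Eq. (2) can be discretized, the sketch below synthesizes one received trace by summing delayed, attenuated copies of a transmitted pulse over all pixels. The Gaussian-windowed sinusoid, the crude 1/r amplitude factor, and the function name `simulate_linear_measurement` are our illustrative assumptions, not the exact response function Ã developed in [6].

```python
import numpy as np

def simulate_linear_measurement(x, pixel_xy, tx, rx, t, c0=3680.0, f0=52e3):
    """Illustrative discretization of the linear model (Eq. 2):
    each pixel reflects a delayed, attenuated copy of the transmitted pulse.
    x        : (N,) reflection coefficients
    pixel_xy : (N, 2) pixel centers in meters
    tx, rx   : (2,) transmitter / receiver positions in meters
    t        : (T,) sample times in seconds
    """
    # Round-trip time of flight: transmitter -> pixel -> receiver.
    tau = (np.linalg.norm(pixel_xy - tx, axis=1)
           + np.linalg.norm(pixel_xy - rx, axis=1)) / c0          # (N,)
    # Gaussian-windowed sinusoid as a stand-in for the transmitted pulse.
    dt = t[None, :] - tau[:, None]                                # (N, T)
    pulse = np.exp(-(dt * f0) ** 2) * np.cos(2 * np.pi * f0 * dt)
    # Crude 1/r geometric-spreading amplitude factor.
    amp = 1.0 / np.maximum(tau * c0, 1e-3)
    return (x[:, None] * amp[:, None] * pulse).sum(axis=0)        # (T,)

# A single reflector 10 cm below a co-located transmitter/receiver pair
# produces a pulse centered at the round-trip delay 0.2 m / 3680 m/s.
t = np.arange(0, 2e-4, 5e-6)  # 200 kHz sampling, as in Section IV
trace = simulate_linear_measurement(np.array([1.0]), np.array([[0.0, 0.1]]),
                                    np.array([0.0, 0.0]), np.array([0.0, 0.0]), t)
```

Stacking such traces over all transmit-receive pairs gives the measurement vector y on which the matrix A of Eq. (3) operates.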
Using this model, we designed a fast model-based reconstruction approach [6] (L-MBIR) which works by minimizing the cost function

$$\hat{v} \leftarrow \arg\min_{v}\left\{\frac{1}{2}\|y - Av\|_2^2 + R(v)\right\}, \tag{3}$$

where $A$ is a projection matrix which discretizes $\tilde{A}$, $v$ is a vector of reflection coefficients, and $R$ is a Markov random field based regularizer [24]. While this model is simple and significantly improves the reconstructions compared to conventional delay-and-sum approaches like SAFT, the method can result in artifacts in the reconstructed images due to the assumption of linearity. Furthermore, the reflection coefficient may not have a clear quantitative interpretation compared to quantities such as the speed, density, or attenuation in the medium.

III. DEEP NEURAL NETWORK FOR NON-LINEAR ULTRASOUND INVERSION

Since computing exact solutions to (1) is expensive, we propose a two-stage approach to the inversion. In the first step, we leverage our previously introduced linear model and use the $A$ matrix in (3) to estimate an initial reconstruction $\tilde{v} = A^T y$. While this method highlights some of the essential features, such a reconstruction is not quantitative and has severe artifacts due to the non-linearity in the system (see Fig. 2). In order to compensate for these artifacts, we use such an image as input to a deep neural network that has been trained to map this input to the actual image of the desired material properties, such as the speed of sound in the medium. In particular, we use the U-net [25] with skip-connections to learn a mapping of this initial image to the actual reconstruction (see Fig. 3). This architecture is desirable because it has the entire input image in its receptive field and can hence learn features that are globally correlated. Furthermore, the presence of skip-connections ensures that the architecture combines the features from different scales effectively. We will refer to the proposed technique as direct deep learning (DDL) for the rest of the paper.

Fig. 3: Modified U-net architecture used for the reconstructions. The input is a 48×32 image obtained by applying the adjoint of a linear operator to the measurements. Within each stage, we apply a 3×3 convolution followed by batch normalization and a rectified linear unit; 2×2 max-pooling and 2×2 up-convolutions move between stages. The feature maps range from 48×32×64 down to 3×2×1024 at the coarsest scale.

Fig. 4: Example of training phantoms used to train the U-net neural network, and a plot of the training and validation loss vs. epoch.

IV. RESULTS

We compare the proposed DDL algorithm to SAFT [26] and L-MBIR [6]. We used a ten-transducer system with an acquisition geometry in which one of the transducers transmits while the others receive. The transducers are spaced 4 cm apart. The transmitter sends a pulse of duration 50 µs with a carrier frequency of 52 kHz. The receiver collects 263 samples with a sampling frequency of 200 kHz. The received signals were post-processed to eliminate the direct arrival signal.

Table I: Average NRMSE and SSIM for SAFT, L-MBIR, and DDL for reconstructions from the test set after a best least-squares linear fit to the ground truth.

    Method    SAFT      L-MBIR    DDL
    NRMSE     0.0614    0.0666    0.0188
    SSIM      0.5583    0.4147    0.9340
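The stage-one initial estimate of Section III, the back-projection ṽ = A^T y, can be sketched as a delay-and-sum adjoint: each pixel accumulates the received samples at its own round-trip delay over all transmit-receive pairs. The geometry handling and function name `backproject` below are our illustrative assumptions; the paper's A matrix also models pulse shape and attenuation.

```python
import numpy as np

def backproject(y, pixel_xy, tx_pos, rx_pos, t, c0=3680.0):
    """Simplified adjoint used as the stage-one estimate v~ = A^T y.
    y        : (P, T) received time series, one row per transmit/receive pair
    pixel_xy : (N, 2) pixel centers in meters
    tx_pos   : (P, 2) transmitter positions, rx_pos : (P, 2) receivers
    t        : (T,) sample times in seconds
    """
    fs = 1.0 / (t[1] - t[0])
    v = np.zeros(len(pixel_xy))
    for p in range(y.shape[0]):
        # Round-trip delay from this pair to every pixel.
        tau = (np.linalg.norm(pixel_xy - tx_pos[p], axis=1)
               + np.linalg.norm(pixel_xy - rx_pos[p], axis=1)) / c0
        # Sample each trace at the pixel's delay (nearest-neighbor in time).
        idx = np.clip(np.round(tau * fs).astype(int), 0, y.shape[1] - 1)
        v += y[p, idx]
    return v
```

The resulting image, reshaped to the 32×48 grid, is what the trained U-net takes as input.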
The training data set was generated using the k-Wave simulation software with its default boundary conditions [23] and is representative of the type of defects seen while inspecting thick, reinforced concrete walls with embedded steel plates. The density and attenuation were fixed, while the speed of sound varied from pixel to pixel depending on the material of the object. The background of the field of view is concrete with an acoustic speed of 3680 m/s. The steel rebar is represented as circles with speed 5660 m/s. The defects are represented as rectangles of different speeds, possibly containing alkali-silica reaction (ASR) regions [27], [28] with speed 4500 m/s. Cracks are represented as crooked lines with the ASR speed. In order to train the deep neural network, we used 1800 images of size 32×48 pixels for training, 200 for validation, and 200 for testing. Stochastic gradient descent was used to optimize the loss function with batch size 1, learning rate 0.0001, and momentum 0.5. The optimization was performed using the PyTorch library [29]. Fig. 4 shows examples of the training phantoms used to generate the ultrasound training data, along with the training and validation loss curves for the data set.

Samples 1 to 4 in Fig. 5 show reconstructed images from the test set (not used in training) using SAFT, the linear MBIR of (3), and the proposed DDL approach. Notice that the units of each method are different: the units in SAFT, L-MBIR, and DDL are pressure, reflectivity, and speed of sound, respectively. What makes DDL advantageous is that it reconstructs in the same units as the ground truth, which makes the image easy to interpret.
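The training-phantom recipe described above (concrete background at 3680 m/s, circular rebar at 5660 m/s, rectangular defects at 4500 m/s) can be sketched as follows. The object counts, sizes, and placement rules are our illustrative assumptions, not the paper's exact generator, and cracks are omitted for brevity.

```python
import numpy as np

def make_phantom(h=32, w=48, rng=None):
    """Hypothetical speed-of-sound phantom in the spirit of the training set:
    concrete background (3680 m/s), circular steel rebar cross-sections
    (5660 m/s), and a rectangular ASR-like defect (4500 m/s)."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.full((h, w), 3680.0)                  # concrete background
    yy, xx = np.mgrid[0:h, 0:w]
    # One to three circular rebar cross-sections at random positions.
    for _ in range(rng.integers(1, 4)):
        cy, cx = rng.integers(4, h - 4), rng.integers(4, w - 4)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 2 ** 2] = 5660.0
    # One rectangular defect region.
    ry, rx = rng.integers(2, h - 8), rng.integers(2, w - 12)
    img[ry:ry + 5, rx:rx + 8] = 4500.0
    return img
```

Each such 32×48 speed map is fed to the wave simulator to produce the corresponding training measurements, and the back-projected image / speed map pair trains the network.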
Also, notice that while L-MBIR is qualitatively superior to the SAFT reconstruction, it is unable to resolve some of the artifacts caused by reverberations and shadowing due to the linear model for the reflected signal, as shown in samples 1 and 2 of Fig. 5, respectively. In contrast, the proposed DDL approach suppresses these artifacts and results in dramatic improvements in image quality. However, for some weak reflections, DDL-generated artifacts are similar to the features in the training set (e.g. circular objects). Such artifacts may be hard to spot and can pass as actual features in the specimen, such as the bottom-left object in sample 4.

Fig. 5: Comparison of reconstruction results from k-Wave simulated data (samples 1 to 4, from the test set only) and from experimental data (sample 5): the first row is the ground truth, the second row the SAFT reconstruction, the third row the linear MBIR reconstruction, and the fourth row the proposed DDL reconstruction. DDL results in reconstructions with a dramatic reduction in artifacts and is able to image behind occluding objects.

Sample 5 in Fig. 5 shows reconstructed images from experimental data. The experiment is described in detail in [6]. The training of DDL was done with the same k-Wave simulated data, except that the concrete acoustic speed was changed to 2620 m/s, i.i.d. Gaussian noise, N(0, 200²), was added to the ground truth to account for the modeling error, and the direct arrival signals were not eliminated. Notice that the DDL reconstruction significantly improves reconstruction quality by accurately reconstructing the steel rebar as well as the plate, compared to SAFT and L-MBIR.

Table I shows the NRMSE and SSIM of the three approaches and illustrates that the proposed method also results in a significant improvement in the quantitative accuracy of the results.
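The fit-then-score protocol behind the NRMSE values in Table I (a least-squares scale-and-shift of each reconstruction to the ground truth, as in [19], followed by a relative error) can be sketched as below; the function name `nrmse_after_fit` and the toy data in the usage example are ours.

```python
import numpy as np

def nrmse_after_fit(recon, gt):
    """NRMSE after a best least-squares linear fit to the ground truth:
    find scale a and offset b minimizing ||a*recon + b - gt||, then report
    ||x_r - x_g|| / ||x_g|| with x_r the fitted reconstruction."""
    r, g = recon.ravel(), gt.ravel()
    A = np.stack([r, np.ones_like(r)], axis=1)       # design matrix [r, 1]
    (a, b), *_ = np.linalg.lstsq(A, g, rcond=None)   # g ~ a*r + b
    fitted = a * r + b
    return np.linalg.norm(fitted - g) / np.linalg.norm(g)

# A reconstruction that is an exact affine transform of the ground truth
# scores (numerically) zero, since the fit removes scale and offset.
gt = np.linspace(3500.0, 6500.0, 100)
err = nrmse_after_fit(0.002 * gt - 1.0, gt)
```

The fit puts SAFT (pressure), L-MBIR (reflectivity), and DDL (speed of sound) on a common footing despite their different output units.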
A least-squares fit to the ground truth is used to scale and shift each reconstruction to optimize the RMSE for each method, as in [19]. NRMSE is computed as $\|x_r - x_g\| / \|x_g\|$, where $x_g$ is the ground truth and $x_r$ is the best fit of the reconstruction to $x_g$. For SSIM, the ground truth and the best-fit reconstruction are both converted to image intensity using Matlab's mat2gray with the same intensity scale for each.

V. CONCLUSIONS

In this paper, we proposed a method for reflection-mode ultrasound reconstruction using a deep neural network. Our algorithm obtains an initial estimate using a linear back-projection and then uses a trained neural network to map this preliminary reconstruction to the final solution. Using simulated and experimental data, we showed that our algorithm produces a dramatic improvement in reconstruction quality compared to the typically used analytic algorithms as well as iterative algorithms based on linear models.

VI. ACKNOWLEDGMENT

Hani Almansouri and C.A. Bouman were supported by the U.S. Department of Energy. G.T. Buzzard was partially supported by NSF CCF-1763896. S. Venkatakrishnan and Hector Santos-Villalobos were supported by the U.S. Department of Energy's Office of the Under Secretary for Science and Energy under the Subsurface Technology and Engineering Research, Development, and Demonstration (SubTER) Crosscut program, and the Office of Nuclear Energy under the Light Water Reactor Sustainability (LWRS) program.

VII. REFERENCES

[1] K. Hoegh and L. Khazanovich, "Extended synthetic aperture focusing technique for ultrasonic imaging of concrete," NDT & E International, vol. 74, pp. 33–42, 2015.
[2] S. Bernard, V. Monteiller, D. Komatitsch, and P. Lasaygues, "Ultrasonic computed tomography based on full-waveform inversion for bone quantitative imaging," Physics in Medicine & Biology, vol. 62, no. 17, p. 7011, 2017.
[3] Z. Shao, L. Shi, Z. Shao, and J. Cai, "Design and application of a small size SAFT imaging system for concrete structure," Review of Scientific Instruments, vol. 82, no. 7, p. 073708, 2011.
[4] B. J. Engle, J. L. W. Schmerr, and A. Sedov, "Quantitative ultrasonic phased array imaging," AIP Conf. Proc., vol. 1581, no. 7, p. 49, 2014.
[5] G. Dobie, S. G. Pierce, and G. Hayward, "The feasibility of synthetic aperture guided wave imaging to a mobile sensor platform," NDT & E International, vol. 58, no. 7, pp. 10–17, 2013.
[6] H. Almansouri, S. Venkatakrishnan, D. Clayton, Y. Polsky, C. Bouman, and H. Santos-Villalobos, "Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens," in AIP Conference Proceedings, vol. 1949, no. 1. AIP Publishing, 2018, p. 030002.
[7] M. T. McCann, K. H. Jin, and M. Unser, "Convolutional neural networks for inverse problems in imaging: A review," IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 85–95, 2017.
[8] Y. Han and J. C. Ye, "Deep residual learning approach for sparse-view CT reconstruction," in Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine. Fully3D conference organization, 2017.
[9] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, "Deep convolutional neural network for inverse problems in imaging," IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, 2017.
[10] Y. Han, J. Yoo, H. H. Kim, H. J. Shin, K. Sung, and J. C. Ye, "Deep learning with domain adaptation for accelerated projection-reconstruction MR," Magnetic Resonance in Medicine, vol. 80, no. 3, pp. 1189–1205, 2018.
[11] S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, "Accelerating magnetic resonance imaging via deep learning," in Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE, 2016, pp. 514–517.
[12] S. Antholzer, M. Haltmeier, and J. Schwab, "Deep learning for photoacoustic tomography from sparse data," arXiv preprint arXiv:1704.04587, 2017.
[13] A. Hauptmann, F. Lucka, M. Betcke, N. Huynh, J. Adler, B. Cox, P. Beard, S. Ourselin, and S. Arridge, "Model based learning for accelerated, limited-view 3D photoacoustic tomography," IEEE Transactions on Medical Imaging, 2018.
[14] A. Mousavi and R. G. Baraniuk, "Learning to invert: Signal recovery via deep convolutional networks," in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 2272–2276.
[15] Y. Sun, Z. Xia, and U. S. Kamilov, "Efficient and accurate inversion of multiple scattering with deep learning," Optics Express, vol. 26, no. 11, pp. 14678–14688, 2018.
[16] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, "Plug-and-play priors for model based reconstruction," in Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE. IEEE, 2013, pp. 945–948.
[17] S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, "Plug-and-play priors for bright field electron tomography and sparse interpolation," IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 408–423, 2016.
[18] K. Zhang, W. Zuo, S. Gu, and L. Zhang, "Learning deep CNN denoiser prior for image restoration," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 2808–2817.
[19] H. Gupta, K. H. Jin, H. Q. Nguyen, M. T. McCann, and M. Unser, "CNN-based projected gradient descent for consistent CT image reconstruction," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1440–1453, 2018.
[20] J. Rick Chang, C.-L. Li, B. Poczos, B. Vijaya Kumar, and A. C. Sankaranarayanan, "One network to solve them all – solving linear inverse problems using deep projection models," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5888–5897.
[21] J. Adler and O. Öktem, "Learned primal-dual reconstruction," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1322–1332, 2018.
[22] T. Meinhardt, M. Möller, C. Hazirbas, and D. Cremers, "Learning proximal operators: Using denoising networks for regularizing inverse imaging problems," in ICCV, October 2017. [Online]. Available: https://github.com/tum-vision/learn_prox_ops
[23] B. E. Treeby and B. T. Cox, "k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields," Journal of Biomedical Optics, vol. 15, no. 2, p. 021314, 2010.
[24] J.-B. Thibault, K. D. Sauer, C. A. Bouman, and J. Hsieh, "A three-dimensional statistical approach to improved image quality for multislice helical CT," Medical Physics, vol. 34, no. 11, pp. 4526–4544, 2007.
[25] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[26] Z. Shao, L. Shi, Z. Shao, and J. Cai, "Design and application of a small size SAFT imaging system for concrete structure," Review of Scientific Instruments, vol. 82, no. 7, p. 073708, 2011.
[27] P. Barnes and J. Bensted, Structure and Performance of Cements. CRC Press, 2014.
[28] D. W. Hobbs, Alkali-Silica Reaction in Concrete. London, 1988.
[29] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," 2017.
