A CNN-Based Super-Resolution Technique for Active Fire Detection on Sentinel-2 Data


Authors: Massimiliano Gargiulo, Domenico Antonio Giuseppe Dell'Aglio, Antonio Iodice

M. Gargiulo, D. A. G. Dell'Aglio, A. Iodice, D. Riccio, and G. Ruello
University of Naples "Federico II", Italy

Abstract — Remote Sensing applications can benefit from the relatively fine spatial resolution of multispectral (MS) images and the high revisit frequency ensured by the twin Sentinel-2 satellites. Unfortunately, only four of the thirteen bands are provided at the highest resolution of 10 meters; the others are provided at 20 or 60 meters. For instance, the Short-Wave Infrared (SWIR) bands, provided at 20 meters, are very useful for detecting active fires. Aiming at more detailed Active Fire Detection (AFD) maps, we propose a super-resolution data fusion method based on a Convolutional Neural Network (CNN) to bring the SWIR bands to the 10-m spatial resolution. The proposed CNN-based solution achieves better results than alternative methods in terms of several accuracy metrics. Moreover, we test the super-resolved bands from an application point of view by monitoring active fires through classic indices. Advantages and limits of the proposed approach are validated on a specific geographical area (Mount Vesuvius, close to Naples) that was damaged by widespread fires during the summer of 2017.

1. INTRODUCTION

Remote sensing products are exploited more and more in Earth monitoring because of the increasing number of satellites [1]. The European Space Agency has recently launched the twin Sentinel-2 satellites, which acquire global data for different applications such as risk management (floods, subsidence, landslides), land monitoring, water management, soil protection, and so forth [2]. Sentinel-2 data are also useful in burnt area and active fire monitoring, using several algorithms [3].
A plethora of these algorithms is essentially based on thresholding spectral indices involving the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) bands [4, 5, 6], which Sentinel-2 provides at spatial resolutions of 10 m and 20 m, respectively. Therefore, it is common to resort to 20-m resolution indices by simply downscaling the NIR band from 10 m to 20 m. However, following this approach, spatial information from the NIR band is lost. An alternative approach to enhance AFD methods using Sentinel-2 images is to produce the Active Fire Indices (AFIs) by upscaling the SWIR bands from 20 m to 10 m. Beyond the shadow of a doubt, the main issue is the choice of the method to improve the spatial resolution of the SWIR bands. In general, Single Image Super Resolution (SISR) and Super Resolution Data Fusion (SRDF) methods are the two most popular ways to increase the spatial resolution of images. SISR methods do not use additional information from other sources; they rely on the spatial features of the original image to increase its own resolution. On the other hand, SRDF methods (for instance, pan-sharpening) are based on the idea that spatial information from other sources is useful to improve the spatial resolution of the original image [7]. In order to produce 10-m AFIs from Sentinel-2 bands with SRDF methods, the use of all the highest spatial resolution bands is not very beneficial because of their smoke sensitivity. The major contribution comes from the NIR, which is the only band we consider in the SRDF approaches. The rest of the paper is organized as follows. Section 2 describes the study area and the selected dataset. Section 3 gives more details about the methodology, focusing on the proposed CNN-based super-resolution method, hereafter SRNN+, on the spectral fire indices, and on the considered accuracy metrics.
Section 4 summarizes experimental results, placing particular attention on the super-resolved SWIR bands, both in terms of visual inspection and numerical analysis, while conclusions are drawn in Section 5.

2. STUDY AREA AND DATASET

The area under investigation is located at Vesuvius (Fig. 1), a volcano close to Naples, Italy. The choice of the study area is motivated by the presence of a natural park with a huge variety of flora and fauna, considering its limited size [8]. At the beginning of July 2017, hundreds of wildfires ignited and caused damage across Italy, the most serious of which were at Vesuvius. In fact, fires affected the Vesuvius area for several days, and the situation quickly became more dangerous due to adverse climatic conditions (winds and dry weather) [9]. The considered dataset is the Sentinel-2 Level-1C product acquired on 12th July 2017. As we can see in Fig. 1, the area under investigation is mainly covered by heavy smoke (Fig. 1-(b)), which reduces the usability of the 10-m spectral information.

Figure 1: (a) false colour composite (ρ12, ρ11 and ρ8 bands) and (b) RGB image of Vesuvius.

3. METHODOLOGY

3.1. Proposed CNN-based Super-Resolution Fusion

Our goal is to improve the spatial resolution of the SWIR bands using a Convolutional Neural Network (CNN). CNNs have attracted increasing interest in many remote sensing applications, like object detection [10], classification [11], pansharpening [12], and others, because of their capability to approximate complex non-linear functions, benefiting from the reduction in computation time obtained thanks to GPU usage. On the downside, the availability of a large amount of data is required for training. In this work we propose to use a relatively shallow architecture, composed of a cascade of L = 3 convolutional layers.
The first two layers are interleaved with Rectified Linear Unit (ReLU) activations, which ensure fast convergence of the training process [11], while a linear activation function is used in the last layer. The l-th (1 ≤ l ≤ 3) convolutional layer, with N-band input x^(l), yields an M-band output y^(l):

y^(l) = w^(l) ∗ x^(l) + b^(l).

For l = 1, x^(l) is the input of the network; for l = 3, y^(l) is the output of the CNN. The tensor w^(l) is a set of M convolutional N × (K × K) filters, where K × K is the receptive field, while b^(l) is an M-vector of biases. These parameters, Φ_l = {w^(l), b^(l)}, are refined during the training phase. Further information about the CNN architecture can be found in [13]. In supervised learning we need to generate a large number of training samples, i.e., examples of input-target pairs. As reported for the pansharpening case [14], the training samples have to follow Wald's protocol, which means considering as inputs the downsampled PAN-MS pairs and taking as corresponding output the original MS. This approach has inspired our study, where the highest spatial resolution bands play the role of the PAN and the role of the MS is played by the SWIR bands. In our case we consider the training samples x^(1) = (x̃, z̃) as input to the network, where x̃ and z̃ are respectively the lower-resolution versions of the SWIR bands and of the 10-m bands provided by Sentinel-2, while we consider the sharpened SWIR bands as output (y^(3) = x). Furthermore, a cost function and a learning optimization algorithm are required in the learning phase. In more detail, we use the L1-norm as cost function, in place of the L2-norm, to be more effective in error back-propagation when the computed errors are very low [12].
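As a concrete illustration, the three-layer cascade above can be sketched with plain NumPy. The layer widths, the kernel size K, and the random weights below are illustrative placeholders only, not the trained configuration of [13]:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2D convolution of an N-band image x (N, H, W) with
    M filters w (M, N, K, K) and bias b (M,), i.e. y = w * x + b."""
    M, N, K, _ = w.shape
    _, H, W = x.shape
    Ho, Wo = H - K + 1, W - K + 1
    y = np.empty((M, Ho, Wo))
    for m in range(M):
        acc = np.zeros((Ho, Wo))
        for n in range(N):
            for i in range(K):
                for j in range(K):
                    acc += w[m, n, i, j] * x[n, i:i + Ho, j:j + Wo]
        y[m] = acc + b[m]
    return y

def srnn_forward(x, params):
    """Cascade of L = 3 convolutional layers: ReLU after the first
    two, linear activation on the last (the super-resolved bands)."""
    for l, (w, b) in enumerate(params):
        x = conv2d(x, w, b)
        if l < len(params) - 1:  # ReLU on layers 1 and 2 only
            x = np.maximum(x, 0.0)
    return x
```

For example, a 3-band 17 × 17 input patch passed through three 3 × 3 layers shrinks by 2 pixels per valid convolution, yielding an 11 × 11 output.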
Specifically, the loss is computed by averaging the cost function over the training examples at each updating step of the learning process:

L(Φ^(n)) = E[ ‖x − x̂(Φ^(n))‖₁ ],

where x represents the target and x̂ the output of the CNN, dependent on the learnable parameters Φ^(n). In this work we use the ADAM optimization method, an adaptive version of Stochastic Gradient Descent (SGD), which adapts the learning rate for each parameter of the CNN. This method requires very little tuning [15] and minimizes the loss very quickly [16].

3.2. Spectral Fire Indices

The proposed model is evaluated, from the application point of view, by monitoring active fires through the computation of three different spectral indices (Fig. 2), mainly used for this aim in the literature [17, 18, 19] because of their ease of computation. The AFIs [17, 18] are defined on Sentinel-2 data as follows:

AFI1 = ρ12 / ρ8,    AFI2 = ρ11 / ρ8,    AFI3 = ρ12 / ρ11,

where ρ8 is the 10-m spatial resolution NIR band, centered at the wavelength of 0.834 µm, while ρ11 and ρ12 are the 20-m SWIR bands, centered at 1.610 µm and 2.190 µm, respectively. All of these bands represent radiance data at the top of the atmosphere. The choice of these indices is based on their physical properties. Specifically, the conditions AFI1 > 1 and AFI3 > 1 often occur over active fires, while the condition AFI2 < 1 is verified near the fire fronts [19].

Figure 2: Active Fire Indices (AFI1, AFI2, AFI3) related to Vesuvius.
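The index computations and threshold rules above can be sketched as follows (the band arrays and the small epsilon guard against division by zero are illustrative assumptions):

```python
import numpy as np

def active_fire_indices(rho8, rho11, rho12, eps=1e-12):
    """Compute the three AFIs from top-of-atmosphere radiance bands;
    rho11 and rho12 are assumed already resampled to the 10-m grid.
    AFI1 = rho12/rho8, AFI2 = rho11/rho8, AFI3 = rho12/rho11."""
    afi1 = rho12 / (rho8 + eps)
    afi2 = rho11 / (rho8 + eps)
    afi3 = rho12 / (rho11 + eps)
    return afi1, afi2, afi3

def detection_masks(afi1, afi2, afi3):
    """Threshold rules from [19]: AFI1 > 1 and AFI3 > 1 over active
    fires; AFI2 < 1 near the fire fronts."""
    return afi1 > 1.0, afi2 < 1.0, afi3 > 1.0
```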
3.3. Accuracy Metrics

To evaluate the performance when the target image is available (in our case, at 20-m spatial resolution), the proposed method is compared to alternative methods using four reference metrics commonly used for pansharpening [20]:

- Spectral Angle Mapper (SAM), the spectral distortion between a pixel of the reference image and the estimated one [21];
- Universal Image Quality Index (UIQI, or Q-index), an image quality indicator introduced in [22];
- Relative Dimensionless Global Error (also known as ERGAS), which reduces to the root mean square error (RMSE) in the single-band case [7];
- High-frequency Correlation Coefficient (HCC), the correlation coefficient between the high-pass components of two images [13].

For a full-resolution analysis we consider the active fire monitoring application, and all the methods compete with each other in terms of binary classification. To this end we need to define a ground truth on which to compute the main classification metrics. In this context, the ground truth is built with a differential multi-temporal approach, based on thresholding the difference between two cloud-free realizations of the Normalized Difference Vegetation Index (NDVI) on two different dates (before and after the fire event). This ground truth (GT) is affected by noise (small bright pixels), so we have applied a morphological operator (opening) to remove this undesired noisy effect. Thus, we compare this GT with the active fire maps obtained by thresholding the above-mentioned spectral indices. In our case, the AFIs use the bands super-resolved with the different considered approaches. In particular, we consider different thresholds on each of the AFIs to achieve the best detection of the active fires.
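The ground-truth construction described above can be sketched as follows; the NDVI-difference threshold and the structuring element (SciPy's default cross-shaped one) are illustrative assumptions, as the exact values are not reported here:

```python
import numpy as np
from scipy.ndimage import binary_opening

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def burn_ground_truth(nir_pre, red_pre, nir_post, red_post, thr=0.2):
    """Differential multi-temporal GT: threshold the NDVI drop between
    a cloud-free pre-fire and post-fire date, then clean small bright
    (noisy) pixels with a morphological opening."""
    d_ndvi = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
    raw = d_ndvi > thr              # large vegetation loss -> burnt
    return binary_opening(raw)      # erase isolated noisy pixels
```

The opening (an erosion followed by a dilation) removes isolated true pixels while preserving the core of larger burnt regions.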
In order to evaluate the quality of the obtained binary maps, we consider some metrics typically used in classification tasks:

- Precision (P) is the ratio between the correctly predicted positive observations and the total predicted positive ones;
- Recall (R) is the ratio between the correctly predicted positive observations and all the observations in the actual class;
- Intersection over Union (IoU) is the ratio between the overlap area and the union area. The intersection and the union are computed on the predicted positive observations and the positives from the GT.

It is worthwhile to remember that a high precision corresponds to a low false positive rate: the higher the fraction of correctly predicted positives over the total predicted positives, the higher the precision. Instead, a high recall corresponds to a low false negative rate, which means that a higher recall implies a higher detection rate.

4. RESULTS AND DISCUSSION

4.1. Training Phase

Given the lack of sufficient available input-output samples in the present context, we start from a pre-trained CNN solution [13] to train the network's parameters Φ. In [13] a super-resolution technique is considered for the ρ11 band (x = ρ11), and thus we extend an equivalent solution to the ρ12 band (x = ρ12). In particular, to create a pre-trained solution for this band as well, we use the same dataset considered in [13]. After that, we fine-tune the weights of the CNN on Naples images from two different dates close to the target date (specifically June 27th and July 27th). This can be considered a geographical fine-tuning, because we adapt the weights of the CNN to the geometric features of the study area. Then, we test this fine-tuned solution on the date under investigation (July 12th, 2017).
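The three classification metrics defined above can be sketched in a few lines of NumPy, assuming two binary maps of equal shape:

```python
import numpy as np

def classification_scores(pred, gt):
    """Precision, Recall and IoU between a predicted binary fire map
    and the ground-truth map."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # correctly predicted positives
    fp = np.sum(pred & ~gt)     # false alarms
    fn = np.sum(~pred & gt)     # missed fire pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, iou
```

Note that IoU never exceeds the smaller of precision and recall, since its denominator counts both false alarms and misses.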
Once the target date has been set apart for testing, 17 × 17 patches for training are uniformly extracted from the two above-mentioned dates in the remaining segments. Overall, 10k patches are collected from the considered dates and randomly partitioned into 80% for the training phase and 20% for the validation phase. The 8k training patches are grouped into mini-batches of size 32 for the implementation of the ADAM-based training. A fine-tuned solution is considered better than a solution trained from scratch when a large amount of training data is not available, or when the computing power is not sufficient [23]. Eventually, we minimize the L1-norm cost function, defined in the Methodology section, on the training examples using the ADAM learning algorithm. Thus, we set the ADAM default values η = 0.002, β1 = 0.9, and β2 = 0.999, as reported in [24]. In this specific case, the training phase requires 200 epochs (32 × 200 weight updates) performed in a few minutes using GPU cards, while the test can be done in real time.

Figure 3: Detail of the study area obtained by several super-resolution techniques (Nearest Neighbour, z, Bicubic, HPF, GS2-GLP, SRNN+ (Proposed)) to underline the improvement in terms of spectral distortion. In the middle of the first row: z is composed only of RGB bands.

4.2. Comparison between the Super-Resolution Proposal and SISR/SRDF Techniques

In this section, SRNN+ is compared to a pre-trained CNN-based method (SRNN), three popular SRDFs adapted to the Sentinel-2/SWIR problem, namely GS2-GLP [25], HPF [26] and PRACS [25], and also to SISR techniques, namely the Nearest Neighbour (NNI) and bicubic interpolation techniques. The numerical results obtained for the area of interest are reported in the left part of Tab. 1. In the results we consider an average over the SWIR bands.
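The patch partitioning and mini-batch grouping described in the Training Phase can be sketched as follows (the helper name and the random seed are illustrative):

```python
import numpy as np

def make_minibatches(patches, train_frac=0.8, batch_size=32, seed=0):
    """Randomly split the extracted patches into training (80%) and
    validation (20%) sets, then group the training part into
    mini-batches of 32 for the ADAM-based weight updates."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    n_train = int(train_frac * len(patches))
    train, val = patches[idx[:n_train]], patches[idx[n_train:]]
    batches = [train[i:i + batch_size]
               for i in range(0, len(train), batch_size)]
    return batches, val
```

With 8k training patches and a batch size of 32, each epoch performs 250 weight updates.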
In the top part of the table, SRNN+ is compared to the SISR techniques, and the improvement is very remarkable in the HCC metric, which reflects the fact that high-frequency components are much affected by the super-resolution and are mostly localized on boundaries. Moving from the top to the bottom of the table, the proposed SRNN+ method compares favourably against classical fusion methods, which take information from the additional input band ρ8. The penultimate row gives the performance of the pre-trained SRNN model, and even in this case SRNN+ performs slightly better in terms of all the metrics at 20-m resolution. As can be seen, the additional fine-tuning, although relying on few training patches, provides a further gain. To conclude this section, we show in Figs. 3-4 some sample results (at 10 m, without reference) which further confirm the effectiveness of the proposed method.

Figure 4: Detail of the area under investigation obtained by several super-resolution techniques (Nearest Neighbour, z, Bicubic, HPF, GS2-GLP, SRNN+ (Proposed)) to underline the improvement in terms of spectral distortion. In the middle of the first row: z is composed only of RGB bands, which are affected by the smoke presence (in the CNN input, z also includes the ρ8 band).

4.3. Comparison between Different AFIs and Maps

Active fire (AF) is detected by applying the following rule: AFD = AFIk > α, where k ∈ {1, 2, 3}. The performance is computed in terms of Precision, Recall and IoU and reported on the right-hand side of Tab. 1. The numerical results confirm the effectiveness of the proposal, and Figs. 5-6 further confirm the superiority of the proposed method. As we can see in Tab. 1, SRNN+ has the best performance in terms of the precision metric.
In particular, its values are much greater than those of the classic techniques, demonstrating that it benefits from the joint information obtained from the visible bands. The low false alarm rate is well visible in Fig. 6, where an urban detail included in the study area is shown. Fig. 6 refers only to AFI2, but similar results are provided by the other analysed indices. On the other hand, in terms of both recall and IoU, the proposal has worse performance. We suppose this is mainly due to the ground truth used in validation, which probably over-estimates the areas affected by fires. In fact, as we can see in the central column of Fig. 5, the SRNN+ AFIs better delineate these areas, resulting lighter and thinner than those obtained by the other techniques. Furthermore, we can observe from visual inspection that the boundaries are more evident when considering ρ12 and ρ11 than ρ12 and ρ8, even though this determines a lower detection rate on the maps obtained by AFI3 with respect to AFI1.

Methods          | SAM (0)  | Q-index (1) | ERGAS (0) | HCC (1) | Precision (1) | Recall (1) | IoU (1)
NNI              | 0.001960 | 0.9182      | 9.353     | 0.1355  | 0.8329        | 0.5773     | 0.5309
Bicubic          | 0.001964 | 0.9515      | 7.155     | 0.471   | 0.8387        | 0.5900     | 0.5471
HPF [26]         | 0.064590 | 0.9405      | 8.150     | 0.2826  | 0.7799        | 0.5991     | 0.5476
PRACS [25]       | 0.001979 | 0.9535      | 7.057     | 0.5117  | 0.7993        | 0.5987     | 0.5497
GS2-GLP [25]     | 0.050190 | 0.9540      | 7.043     | 0.4694  | 0.8008        | 0.6131     | 0.5571
SRNN [13]        | 0.001963 | 0.9688      | 5.943     | 0.6246  | 0.8373        | 0.5649     | 0.5158
SRNN+ (Proposed) | 0.001956 | 0.9743      | 5.425     | 0.6334  | 0.8414        | 0.5642     | 0.5157

Table 1: Left part: average results (at 20 m) in terms of the main metrics typically used in the pansharpening and super-resolution context; the ideal value of each metric is given in parentheses. Right part: average results in terms of classification metrics.

5. CONCLUSION

In this work we propose SRNN+ to further enhance the spatial resolution of the Sentinel-2 SWIR bands.
For the specific goal (AFD) we fine-tune the weights of the CNN on the geographic study area and then test the proposed approach both in terms of visual quality assessment and AFD capability. Eventually, we show very promising results in terms of super-resolution metrics and also in AFD. The achieved results encourage us to explore different architectural choices and/or learning strategies, and to extend this approach to other remote sensing applications.

Figure 5: In the first row, the RGB image, in which we can observe the presence of the smoke, and the ground truth. Then, from the second row to the bottom: in the first column the false-RGB, in the second AFI1 and AFI3, and in the third the respective maps. Columns: (Ground-Truth), Bicubic, GS2-GLP, SRNN+ (Proposed).

REFERENCES

1. Neha Joshi, Matthias Baumann, Andrea Ehammer, Rasmus Fensholt, Kenneth Grogan, Patrick Hostert, Martin Jepsen, Tobias Kuemmerle, Patrick Meyfroidt, Edward Mitchard, et al., "A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring," Remote Sensing, vol. 8, no. 1, p. 70, 2016.
2. M. Drusch et al., "Sentinel-2: ESA's optical high-resolution mission for GMES operational services," Remote Sensing of Environment, vol. 120, Supplement C, pp. 25–36, 2012, The Sentinel Missions - New Opportunities for Science.
3. Astrid Verhegghen, Hugh Eva, Guido Ceccherini, Frederic Achard, Valery Gond, Sylvie Gourlet-Fleury, and Paolo Cerutti, "The potential of Sentinel satellites for burnt area mapping and monitoring in the Congo Basin forests," Remote Sensing, vol. 8, no. 12, p. 986, 2016.
4. L. Cicala, C. V. Angelino, N. Fiscante, and S. L. Ullo, "Landsat-8 and Sentinel-2 for fire monitoring at a local scale: A case study on Vesuvius," in 2018 IEEE International Conference on Environmental Engineering (EE). IEEE, 2018, pp. 1–6.
5. José Pereira, Emilio Chuvieco, A. Beudoin, and N. Desbois, "Remote sensing of burned areas: A review of remote sensing methods for the study of large wildland fires," Departamento de Geografía, Universidad de Alcalá, pp. 127–184, 1997.
6. Yogesh Kant and K. V. S. Badarinath, "Studies on land surface temperature over heterogeneous areas using AVHRR data," International Journal of Remote Sensing, vol. 21, no. 8, pp. 1749–1756, 2000.
7. Lucien Wald, Data Fusion. Definitions and Architectures - Fusion of Images of Different Spatial Resolutions, Presses de l'Ecole, Ecole des Mines de Paris, Paris, France, 2002, ISBN 2-911762-38-X.
8. Lucio Mascolo, Maurizio Sarti, Ferdinando Nunziata, and Maurizio Migliaccio, "Vesuvius national park monitoring by COSMO-SkyMed PingPong data analysis," in ESA Special Publication, 2013, vol. 713.
9. G. Bovio, M. Marchetti, L. Tonarelli, M. Salis, G. Vacchiano, R. Lovreglio, M. Elia, P. Fiorucci, and D. Ascoli, "Gli incendi boschivi stanno cambiando: cambiamo le strategie per governarli" [Forest fires are changing: let us change the strategies to govern them], Foresta - Rivista di Selvicoltura ed Ecologia Forestale, no. 4, pp. 202–205, 2017.
10. Wei Guo, Wen Yang, Haijian Zhang, and Guang Hua, "Geospatial Object Detection in High Resolution Satellite Images Based on Multi-Scale Convolutional Neural Network," Remote Sensing, vol. 10, no. 1, p. 131, 2018.

Figure 6: In the first row, the RGB image, in which we can observe the absence of the smoke, and the ground truth. Then, from the second row to the bottom: in the first column the false-RGB, in the second AFI2, and in the third the respective map. Columns: (Ground-Truth), Bicubic, GS2-GLP, SRNN+ (Proposed).

11. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1106–1114, 2012.
12. G. Scarpa, S. Vitale, and D.
Cozzolino, "Target-adaptive CNN-based pansharpening," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 9, pp. 5443–5457, Sept. 2018.
13. Massimiliano Gargiulo, Antonio Mazza, Raffaele Gaetano, Giuseppe Ruello, and Giuseppe Scarpa, "A CNN-Based Fusion Method for Super-Resolution of Sentinel-2 Data," IGARSS, 2018.
14. Giuseppe Masi, Davide Cozzolino, Luisa Verdoliva, and Giuseppe Scarpa, "Pansharpening by convolutional neural networks," Remote Sensing, vol. 8, no. 7, p. 594, 2016.
15. Kyle D. Julian and Mykel J. Kochenderfer, "Neural Network Guidance for UAVs," p. 1743, 2017.
16. Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
17. Haiyan Huang, David Roy, Luigi Boschetti, Hankui Zhang, L. Yan, Sanath Kumar, Jose Gomez-Dans, and Jian Li, "Separability Analysis of Sentinel-2A Multi-Spectral Instrument (MSI) Data for Burned Area Discrimination," Remote Sensing, vol. 8, Nov. 2016.
18. Wilfrid Schroeder, Patricia Oliva, Louis Giglio, Brad Quayle, Eckehard Lorenz, and Fabiano Morelli, "Active fire detection using Landsat-8/OLI data," Remote Sensing of Environment, vol. 185, Sept. 2015.
19. A. Barducci, D. Guzzi, P. Marcoionni, and I. Pippi, "Infrared detection of active fires and burnt areas: theory and observations," Infrared Physics & Technology, vol. 43, no. 3-5, pp. 119–125, 2002.
20. P. Jagalingam and Arkal Vittal Hegde, "A review of quality metrics for fused image," Aquatic Procedia, vol. 4, pp. 133–142, 2015.
21. Luciano Alparone, Lucien Wald, Jocelyn Chanussot, Claire Thomas, Paolo Gamba, and Lori Mann Bruce, "Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3012–3021, 2007.
22. Zhou Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp.
81–84, March 2002.
23. Nima Tajbakhsh, Jae Y. Shin, Suryakanth R. Gurudu, R. Todd Hurst, Christopher B. Kendall, Michael B. Gotway, and Jianming Liang, "Convolutional neural networks for medical image analysis: Full training or fine tuning?," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299–1312, 2016.
24. Sebastian Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747, 2016.
25. Gemine Vivone, Luciano Alparone, Jocelyn Chanussot, Mauro Dalla Mura, Andrea Garzelli, Giorgio A. Licciardi, Rocco Restaino, and Lucien Wald, "A critical comparison among pansharpening algorithms," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2565–2586, 2015.
26. P. S. Chavez and J. A. Anderson, "Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic," Photogramm. Eng. Remote Sens., vol. 57, no. 3, pp. 295–303, 1991.
