G-LoG Bi-filtration for Medical Image Classification
Authors: Qingsong Wang, Jiaxing He, Bingzhe Hou
Qingsong Wang, Jiaxing He, Bingzhe Hou, Tieru Wu, Yang Cao, and Cailing Yao

Abstract. Building practical filtrations on objects to detect topological and geometric features is an important task in the field of Topological Data Analysis (TDA). In this paper, leveraging the ability of the Laplacian of Gaussian operator to enhance the boundaries of medical images, we define the G-LoG (Gaussian-Laplacian of Gaussian) bi-filtration to generate features better suited to multi-parameter persistence modules. Modeling volumetric images as bounded functions, we prove that the interleaving distance on the persistence modules obtained from our bi-filtrations on bounded functions is stable with respect to the maximum norm of the bounded functions. Finally, we conduct experiments on the MedMNIST dataset, comparing our bi-filtration against single-parameter filtrations and established deep learning baselines, including Google AutoML Vision, ResNet, AutoKeras and auto-sklearn. Experimental results demonstrate that our bi-filtration significantly outperforms single-parameter filtrations. Notably, a simple Multi-Layer Perceptron (MLP) trained on the topological features generated by our bi-filtration achieves performance comparable to complex deep learning models trained on the original dataset.

1. Introduction

Over the last few years, Topological Data Analysis (TDA), especially persistent homology, has demonstrated significant utility in both theoretical and applied domains (see surveys [41, 29, 7] and the references therein). Persistent homology is generated by a filtration, and different filtrations capture distinct topological features. In the single-parameter setting, four well-known filtrations are commonly used: the Vietoris-Rips, Čech and alpha filtrations for point clouds, and the lower-star filtration for cubical complexes.
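For intuition, the lower-star (sublevel-set) filtration of an image thresholds pixel values at increasing levels and tracks how the topology of the sublevel sets changes. The following minimal sketch (ours, not the paper's implementation; it assumes 4-connectivity and only counts connected components, i.e. Betti-0, via breadth-first search) illustrates this on a toy image:

```python
# Sketch: count connected components of the sublevel set {pixels <= threshold}
# of a 2D image, using BFS over 4-connected pixels (assumption: 4-connectivity).
from collections import deque

import numpy as np


def sublevel_components(image: np.ndarray, threshold: float) -> int:
    """Number of connected components of the sublevel set at `threshold`."""
    mask = image <= threshold
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    components = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                components += 1
                queue = deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return components


# Two dark blobs on a bright background: at a low threshold the sublevel set
# has two components; raising the threshold merges everything into one.
img = np.ones((8, 8))
img[1:3, 1:3] = 0.1   # blob 1
img[5:7, 5:7] = 0.2   # blob 2

betti0_low = sublevel_components(img, 0.5)   # -> 2
betti0_high = sublevel_components(img, 1.0)  # -> 1
```

Persistent homology records exactly when such components appear and merge as the threshold sweeps through all values.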
Since a single filtration often cannot adequately capture enough structure, the concept of multi-parameter persistent homology becomes crucial. Unlike the single-parameter case, Carlsson and Zomorodian proved that there is no analogous complete discrete invariant for multi-parameter modules [8]. Over the years, several methods have been proposed to construct multi-parameter filtrations for point clouds. For instance, Lesnick and McCabe focus on nerve models of subdivision bi-filtrations [22], Alonso et al. proposed the Delaunay bi-filtrations [2], and Corbet et al. proposed the rhomboid bi-filtration [12]. See [5] for more details on these approaches. However, until now, methods that directly construct bi-filtrations from images have remained scarce. In [9], Carrière and Blumberg built bi-filtrations from a dataset of paired images, where each pair comprises imaging data of human tissue samples from patients with breast cancer. In [17], He et al. constructed the mix-GENEO bi-filtration on MNIST and demonstrated the superiority of multi-parameter filtrations over single-parameter ones. Given that GENEO operators require careful selection in specific ways [3], there is a need for more accessible alternatives. Since Gaussian filtering eliminates noise and the Laplacian operator detects edges of images, the Laplacian-of-Gaussian (LoG) has become a cornerstone operator in edge detection and texture enhancement; we therefore propose a simpler and more efficient bi-parameter filtration, named G-LoG. We then evaluate its performance on the MedMNIST (v2) dataset [40].

2020 Mathematics Subject Classification. 55N31, 68T09.
Key words and phrases. Persistent homology, multi-filtration, Gaussian kernels, medical imaging.
Qingsong Wang and Jiaxing He contributed equally to this study.
Figures 1 and 2 illustrate our G-LoG bi-filtration method on 2D and 3D medical images, respectively.

Figure 1. Bi-parameter persistence modules $H_0$ and $H_1$ generated by the G-LoG bi-filtration from a medical tissue image.

Figure 2. Bi-parameter persistence modules $H_0$, $H_1$ and $H_2$ generated by the G-LoG bi-filtration from a 3D medical organ volume.

1.1. Overview. In recent years, deep learning has rapidly become a cornerstone of medical analysis. In medical image analysis, the use of deep learning to discover or learn informative features plays an essential role across a wide range of tasks. Nevertheless, the field still faces enduring challenges, including topological uncertainty [34], issues associated with high-dimensional data [37], interpretability, and the reliance on vast amounts of labeled data [31]. Topological Data Analysis (TDA), a powerful method for characterizing the connectivity and intrinsic features of medical images, has attracted widespread attention.

Topological Data Analysis can be dated back to the 1990s, with methods such as Morse-Smale complexes, Reeb graphs and size theory being introduced and studied. During this period, Ferri, Frosini, Landi and their collaborators also proposed early versions of the rank invariant in degree 0 and of persistent homotopy groups in multi-parameter persistence (see survey [4] and the references therein). Soon after, Edelsbrunner, Letscher, and Zomorodian invented a version of homology theory named persistent homology, which provides insights into the topological features that persist across different scales [15]. Until now, TDA methods have been investigated in several medical fields, including neurology, cardiology, hepatology, genomics and single-cell transcriptomics, drug discovery, and evolutionary and protein structure analysis [33]. For example, Hu et al.
utilized discrete Morse theory to segment fine-grained structures in biomedical images [19], while Shen et al. introduced knot data analysis in [32] for capturing local structures and connectivity in data. Casaclang-Verzosa et al. [10] characterized the natural progression of aortic stenosis using TDA in cardiovascular research.

Persistent homology, as a ubiquitous tool in TDA that can be vectorized into features (such as persistence landscapes [6], persistence images [1], Betti curves [11], kernel methods [30], and B-spline grids [14]), has been successfully used in medical image analysis. For example, Crawford et al. designed the smooth Euler characteristic transform to quantify magnetic resonance images of tumors [13]. Yadav et al. employed persistent homology in histopathological cancer detection [38]. Several works have also considered applications of multi-parameter persistence modules in medical image analysis. Vipond et al. employed the multi-parameter persistence landscape, constructed via radius-codensity bifiltrations, to study immune cell locations in digital histology images from head and neck cancer [36]. Carrière and Blumberg proposed multi-parameter persistence images [9] and utilized this vectorization method to analyze quantitative immunofluorescence images.

The construction of most medical imaging datasets relies on prior knowledge from computer vision or the medical domain, making these datasets difficult to use directly. To avoid this issue, two versions of the MedMNIST dataset [40, 39] have been successively released. To establish a benchmark in this field, the developers employed several types of models as baseline classifiers for MedMNIST: ResNet [18] and various AutoML models (including auto-sklearn [16], AutoKeras [20] and Google AutoML Vision, https://cloud.google.com/automl).
In recent years, many studies have aimed to improve classification performance on MedMNIST by modifying network architectures, such as FPVT [23], MedViT [26] and C-Mixer [35]. In [28], Nuwagira et al. validated the effectiveness of Topo-Med on MedMNIST, showing that integrating topological feature vectors can enhance the accuracy and robustness of deep learning models. Unlike these recent efforts to validate the learning ability of the network, we simply employ a multilayer perceptron (MLP) to classify features extracted from MedMNIST (v2) via multi-parameter persistent homology. In this paper, we use the results from MedMNIST (v2) [40] and Topo-Med [28] as baselines.

1.2. Motivation. Choosing an appropriate multi-parameter filter function plays an important role in classification tasks with multi-parameter persistent homology. If we select an unsuitable filter function $\gamma = (\gamma_1, \gamma_2)$, the resulting multi-parameter persistent homology may yield results comparable to the single-parameter persistent homology induced by $\gamma_1$ and $\gamma_2$ separately, and we then fail to exploit the benefits of multi-parameter persistent homology. We construct a bi-parameter filter function for image processing based on the following three points.

(1) To extract features more suitable for persistent homology, we employ two approaches: first, we use sublevel set functions to capture the original persistent homology features of the image; second, to better adapt the image to classification tasks within the persistent homology framework, we utilize the Laplacian operator to extract image edges.

(2) The level sets of the two parameters we select should intersect.
If the multi-parameter filter functions we construct are "independent", then the multi-parameter filtration is essentially single-parameter in nature. We provide the following example to demonstrate that, under certain conditions, the bi-parameter persistence module along each direction decomposes into the direct sum of single-parameter persistence modules (the relevant definitions for the example are provided in the Preliminaries).

Example 1.1 (Essential single-parameter). Let $M$ and $N$ be two compact, disjoint $n$-dimensional submanifolds of $\mathbb{R}^n$. Let $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}$ be two continuous sublevel set functions satisfying $f|_M \geq 0$, $f|_{\mathbb{R}^n \setminus M} = 0$, $g|_N \geq 0$, $g|_{\mathbb{R}^n \setminus N} = 0$. We provide Figure 3 as an example of this setup.

Figure 3. The preimages of the purple and green regions correspond to the manifolds $M$ and $N$, respectively.

Let $M_i = \{x \in \mathbb{R}^n ;\ f(x) \leq i\}$ and $N_j = \{x \in \mathbb{R}^n ;\ g(x) \leq j\}$, where $i, j \in \mathbb{R}$. Since $\mathbb{R}^n = M_i \cup N_j$, we obtain the following exact sequence, i.e., the Mayer-Vietoris sequence:
$$\cdots \to H_k(M_i \cap N_j) \xrightarrow{\ \Phi\ } H_k(M_i) \oplus H_k(N_j) \xrightarrow{\ \Psi\ } H_k(\mathbb{R}^n) \xrightarrow{\ \partial\ } H_{k-1}(M_i \cap N_j) \to \cdots \to H_0(\mathbb{R}^n) \to 0.$$
When $k \geq 1$, $H_k(\mathbb{R}^n) = 0$, so $H_k(M_i \cap N_j) \cong H_k(M_i) \oplus H_k(N_j)$ for $k \geq 1$. Define the bi-parameter persistence module $(\mathbb{M}, \pi)$ by setting $\mathbb{M}_{i,j}(f,g) = H_*(M_i \cap N_j)$, and define the single-parameter persistence module $(\mathbb{M}(f,g)|_l, \pi(f,g)|_l)$ by
$$(\mathbb{M}(f,g)|_l)_t := \mathbb{M}_{at+b}, \qquad (\pi(f,g)|_l)_{s,t} := \pi(f,g)|_{as+b,\,at+b},$$
where $l \subset \mathbb{R}^n$ is a line parametrized by $t \in \mathbb{R} \mapsto at + b$ with $a \in (\mathbb{R}^+)^n \setminus \{0\}$. Now define single-parameter persistence modules $(\mathbb{M}(f), \pi(f))$ and $(\mathbb{M}(g), \pi(g))$ by $\mathbb{M}_i(f) = H_*(M_i)$ and $\mathbb{M}_j(g) = H_*(N_j)$, respectively. Let $u = (u_1, u_2)$, $v = (v_1, v_2)$.
There exist $l' = a't + b'$, $s'$ and $t'$ such that $(u_1, u_2) = a's' + b'$ and $(v_1, v_2) = a't' + b'$; then we have the following commutative diagram:
$$\begin{array}{ccc}
(\mathbb{M}(f,g)|_{l'})_{s'} & \xrightarrow{\ \pi(f,g)_{s',t'}\ } & (\mathbb{M}(f,g)|_{l'})_{t'} \\
\cong \big\downarrow & & \big\downarrow \cong \\
\mathbb{M}(f)_{u_1} \oplus \mathbb{M}(g)_{u_2} & \xrightarrow{\ \pi(f)_{u_1,v_1} \oplus\, \pi(g)_{u_2,v_2}\ } & \mathbb{M}(f)_{v_1} \oplus \mathbb{M}(g)_{v_2}
\end{array}$$
Hence $\mathbb{M}(f,g)|_{l'} \cong \mathbb{M}(f) \oplus \mathbb{M}(g)$. In this case, the bi-parameter persistence module decomposes exactly into the direct sum of two single-parameter persistence modules along each direction. Consequently, performing classification using this bi-parameter filtration yields no fundamental improvement or difference compared to direct classification using two independent single-parameter persistent homologies.

(3) Driven by motivation (2), the two selected filter functions should be defined such that the intersection of their sublevel sets is non-empty; therefore, we apply Gaussian smoothing to both the voxel values and the Laplacian operator (Laplacian of Gaussian, LoG).

1.3. Contributions. In this paper, we provide a framework to build bi-parameter filtrations on volumes.

• G-LoG bi-filtration and stability: We define the G-LoG bi-filtration and prove that the interleaving distance on the persistence modules obtained from our bi-filtration on bounded functions is stable with respect to the maximum norm of the bounded functions.
• Experimental validation: We conduct experiments on the MedMNIST dataset to assess the performance of our bi-filtration in data-rich environments. Our key findings include:
• Superiority over single-parameter persistent homology: Our results demonstrate that our bi-filtration outperforms single-parameter persistent homology.
• Performance in 2D image classification: We achieve results competitive with several established baseline methods.
• Effectiveness in 3D image classification: For 3D image classification, our method yields competitive performance compared to leading baseline approaches.

To foster further developments at the intersection of multi-parameter persistent homology and medical image analysis, we release our source code at: https://github.com/HeJiaxing-hjx/G-LoG-bifiltration-for-medical-imaging-classification.git.

2. Preliminaries

In this section, we introduce some definitions and properties used in this paper. Let $\mathbb{R}$ be the set of real numbers. For vectors $s = (s_1, \cdots, s_m)$ and $t = (t_1, \cdots, t_m)$ in $\mathbb{R}^m$, there is a natural partial order on $\mathbb{R}^m$ given by $s \leq_m t$ if and only if $s_i \leq t_i$ for all $1 \leq i \leq m$. Consider the sublevel set filtration $X_s^\gamma := \{x \in X \mid \gamma(x) \leq_m s\}$ with the natural inclusions $\iota_{s,t}$, where $\gamma = (\gamma_1, \cdots, \gamma_m)$ is called the filter function. We denote $M_s = H_*(X_s; \mathbb{F})$. In this paper, we only consider $X = \mathbb{R}^n$.

Definition 2.1. A persistence module $(M, \pi)$ is a family of $\mathbb{F}$-modules $\{M_s\}_{s \in \mathbb{R}^m}$ together with homomorphisms $\{\pi_{s,t} : M_s \to M_t\}_{s, t \in \mathbb{R}^m,\ s \leq_m t}$ such that $\pi_{s,u} = \pi_{t,u} \circ \pi_{s,t}$ and $\pi_{s,s} = \mathrm{id}$ for any $s \leq_m t \leq_m u$. We denote $(M, \pi)$ by $M$ in brief, and we call $\{\pi_{s,t}\}$ the transition maps of $M$.

Let $(M, \pi)$, $(M', \pi')$ be two persistence modules.

Definition 2.2.
A persistence morphism $\phi : (M, \pi) \to (M', \pi')$ is a family of linear maps $h_s : M_s \to M'_s$ such that the following diagram commutes for all $s \leq_m t$:
$$\begin{array}{ccc}
M_s & \xrightarrow{\ \pi_{s,t}\ } & M_t \\
h_s \big\downarrow & & \big\downarrow h_t \\
M'_s & \xrightarrow{\ \pi'_{s,t}\ } & M'_t
\end{array}$$
Two persistence modules $M = (M, \pi)$ and $M' = (M', \pi')$ are isomorphic if there exist two morphisms $h_1 : M \to M'$ and $h_2 : M' \to M$ such that both compositions $h_1 \circ h_2$ and $h_2 \circ h_1$ are the identity morphisms on the corresponding persistence modules, where the identity morphism on $M$ is the identity on $M_t$ for all $t$.

Definition 2.3. Let $(M, \pi)$, $(M', \pi')$ be two persistence modules. Their direct sum $(N, \theta)$ is the persistence module whose underlying modules are $N_t = M_t \oplus M'_t$ and, accordingly, $\theta_{s,t} = \pi_{s,t} \oplus \pi'_{s,t}$.

For a persistence module $(M, \pi)$ and $\epsilon \in \mathbb{R}^m$, define a persistence module $(M[\epsilon], \pi[\epsilon])$ by taking $(M[\epsilon])_s = M_{s+\epsilon}$ and $(\pi[\epsilon])_{s,t} = \pi_{s+\epsilon, t+\epsilon}$. This new persistence module is called the $\epsilon$-shift of $M$. The map $\Phi_\epsilon : (M, \pi) \to (M[\epsilon], \pi[\epsilon])$ defined by $\Phi_\epsilon^t = \pi_{t, t+\epsilon}$ is an $\epsilon$-shift morphism of persistence modules.

Now let us introduce the interleaving distance on the space of persistence modules. If $h$ is a homomorphism from $M$ to $M'$, then $h(\epsilon) : M(\epsilon) \to M'(\epsilon)$.

Definition 2.4. Given $\epsilon > 0$, we say that two persistence modules $M$ and $M'$ are $\epsilon$-interleaved if there exist two morphisms $h_1 : M \to M'(\epsilon)$ and $h_2 : M' \to M(\epsilon)$ such that $h_2(\epsilon) \circ h_1 = \Phi^{2\epsilon}_M$ and $h_1(\epsilon) \circ h_2 = \Phi^{2\epsilon}_{M'}$, where $M(\epsilon)$ is the shifted module $\{M_{s+\epsilon}\}_{s \in \mathbb{R}^m}$ with $\epsilon = (\epsilon, \cdots, \epsilon) \in \mathbb{R}^m$, and $\Phi^{2\epsilon}_M$, $\Phi^{2\epsilon}_{M'}$ are the shift morphisms. The interleaving distance between two multi-parameter persistence modules $M$ and $M'$ is defined as
$$d_I(M, M') = \inf\{\epsilon > 0 \mid M \text{ and } M' \text{ are } \epsilon\text{-interleaved}\}.$$
The main property of $d_I$ is that it is stable for multi-parameter filtrations that are obtained from the sublevel sets of functions.
More precisely, given two continuous functions $\gamma^1, \gamma^2 : X \to \mathbb{R}^m$, denote by $M(\gamma^1)$ and $M(\gamma^2)$ the multi-parameter persistence modules obtained from the corresponding filtrations $X^{\gamma^1}$ and $X^{\gamma^2}$; then $d_I(M(\gamma^1), M(\gamma^2)) \leq \|\gamma^1 - \gamma^2\|_\infty$, where
$$\|\gamma\|_\infty = \begin{cases} \sup_{p \in X} \|\gamma(p)\|_\infty = \sup_{p \in X} \max\{|\gamma_1(p)|, \cdots, |\gamma_m(p)|\} & \text{if } X \neq \emptyset, \\ 0 & \text{if } X = \emptyset. \end{cases}$$

Theorem 2.5 ([21]). $d_I$ is stable.

3. G-LoG Bifiltration and its Stability

One reason for the popularity of multi-parameter persistence modules in TDA is that the transformation of a data set into multi-parameter persistence modules is stable (Lipschitz continuous) with respect to the interleaving distance. In this section, we give the definition of our G-LoG bi-filtration and prove its stability.

To construct bi-filtrations from volumes and images, we adopt notation consistent with [3]. We represent data via the function space $\Phi$, defined as a set of real-valued functions $\{\varphi_i\}_i$ mapping from the topological space $\mathbb{R}^n$ to $\mathbb{R}$, with $\|\varphi_1 - \varphi_2\|_\infty = \sup_{x \in \mathbb{R}^n} |\varphi_1(x) - \varphi_2(x)|$. We can now give our definition of the G-LoG bi-filtration. Let $x = (x_1, \cdots, x_n) \in \mathbb{R}^n$, and let $G$ be the Gaussian kernel defined on $\mathbb{R}^n$,
$$G(x_1, \cdots, x_n) = \exp\Big\{-\frac{\sum_{i=1}^n x_i^2}{2\sigma^2}\Big\}. \tag{3.1}$$
We have
$$\triangle G(x_1, \cdots, x_n) = \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2} G(x_1, \cdots, x_n) = \frac{\sum_{i=1}^n x_i^2 - n\sigma^2}{\sigma^4} \exp\Big\{-\frac{\sum_{i=1}^n x_i^2}{2\sigma^2}\Big\}.$$

Definition 3.1. Let $\varphi$ be a continuous function from $\mathbb{R}^n$ to $\mathbb{R}$. We define the G-LoG bi-filtration on $\varphi$ by $\gamma_\varphi = (\gamma^1_\varphi, \gamma^2_\varphi)$, where
$$\gamma^1_\varphi(x) = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} \varphi(x_1 - \alpha_1, \cdots, x_n - \alpha_n)\, G(\alpha_1, \cdots, \alpha_n)\, d\alpha_1 \cdots d\alpha_n,$$
$$\gamma^2_\varphi(x) = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} \varphi(x_1 - \alpha_1, \cdots, x_n - \alpha_n)\, \triangle G(\alpha_1, \cdots, \alpha_n)\, d\alpha_1 \cdots d\alpha_n.$$
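The two kernels of Definition 3.1 can be checked numerically. The sketch below (ours, not the authors' code; $n = 2$, $\sigma = 1$, and a $6\sigma$ truncation of the domain are our choices) builds the unnormalized Gaussian and LoG kernels on a grid and verifies the Gaussian mass from equation (3.2) together with the $L^1$ bound that appears in the stability constant of Theorem 3.2:

```python
# Sketch: discrete G and LoG kernels for n = 2, sigma = 1 (Definition 3.1),
# with a quadrature check of the constants used in Theorem 3.2.
import numpy as np

sigma = 1.0
ax = np.linspace(-6 * sigma, 6 * sigma, 601)  # truncate at 6*sigma (assumption)
h = ax[1] - ax[0]                             # quadrature step
X, Y = np.meshgrid(ax, ax)
R2 = X**2 + Y**2

G = np.exp(-R2 / (2 * sigma**2))              # unnormalized Gaussian kernel (3.1)
LoG = (R2 - 2 * sigma**2) / sigma**4 * G      # Laplacian of Gaussian, n = 2

gauss_mass = G.sum() * h**2   # ~ (2*pi*sigma^2)^(n/2) = 2*pi, equation (3.2)
log_mass = LoG.sum() * h**2   # ~ 0: the LoG kernel has zero total mass
log_l1 = np.abs(LoG).sum() * h**2             # L1 mass of the LoG kernel
# Constant 2*n*(2*pi*sigma^2)^(n/2)/sigma^2 from Theorem 3.2, with n = 2:
bound = 2 * 2 * (2 * np.pi * sigma**2) / sigma**2
```

Here `gauss_mass` approximates $2\pi$, `log_mass` approximates $0$, and `log_l1` stays below `bound`, consistent with the estimates in the proof of Theorem 3.2.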
Then we obtain the following stability of the interleaving distance on $\gamma_{\varphi_1}$ and $\gamma_{\varphi_2}$ with respect to the maximum norm of $\varphi_1$ and $\varphi_2$.

Theorem 3.2. Let $\varphi_1, \varphi_2 : \mathbb{R}^n \to \mathbb{R}$ be two continuous functions and let $M(\gamma_{\varphi_1})$ and $M(\gamma_{\varphi_2})$ be the corresponding multi-parameter persistence modules induced by $\gamma_{\varphi_1} = (\gamma^1_{\varphi_1}, \gamma^2_{\varphi_1})$ and $\gamma_{\varphi_2} = (\gamma^1_{\varphi_2}, \gamma^2_{\varphi_2})$, respectively. Then the following stability inequality holds:
$$d_I(M(\gamma_{\varphi_1}), M(\gamma_{\varphi_2})) \leq \max\Big\{(2\pi\sigma^2)^{\frac{n}{2}},\ \frac{2n(2\pi\sigma^2)^{\frac{n}{2}}}{\sigma^2}\Big\} \cdot \|\varphi_1 - \varphi_2\|_\infty.$$

Proof. Recall that the Gaussian integral yields
$$\int_{\mathbb{R}^n} \exp\Big\{-\frac{\sum_{i=1}^n \alpha_i^2}{2\sigma^2}\Big\}\, d\alpha_1 \cdots d\alpha_n = (2\pi\sigma^2)^{\frac{n}{2}}. \tag{3.2}$$
From this, we obtain
$$\|\gamma^1_{\varphi_1} - \gamma^1_{\varphi_2}\|_\infty \leq \|\varphi_1 - \varphi_2\|_\infty \int_{\mathbb{R}^n} |G(\alpha_1, \ldots, \alpha_n)|\, d\alpha_1 \cdots d\alpha_n = (2\pi\sigma^2)^{\frac{n}{2}} \|\varphi_1 - \varphi_2\|_\infty.$$
Similarly, for the second component of the bi-filtration, we have
$$\|\gamma^2_{\varphi_1} - \gamma^2_{\varphi_2}\|_\infty \leq \|\varphi_1 - \varphi_2\|_\infty \int_{\mathbb{R}^n} \frac{1}{\sigma^4}\Big(\sum_{i=1}^n \alpha_i^2\, G(\alpha) + n\sigma^2 G(\alpha)\Big)\, d\alpha_1 \cdots d\alpha_n.$$
By (3.2), viewing $\alpha_i \sim N(0, \sigma^2)$ and noting that the second moment satisfies $E\big[\sum_{i=1}^n \alpha_i^2\big] = n\sigma^2$, it follows that
$$\frac{1}{\sigma^4}\Big(\int_{\mathbb{R}^n} \sum_{i=1}^n \alpha_i^2 \exp\Big\{-\frac{\sum \alpha_i^2}{2\sigma^2}\Big\}\, d\alpha_1 \cdots d\alpha_n + \int_{\mathbb{R}^n} n\sigma^2 \exp\Big\{-\frac{\sum \alpha_i^2}{2\sigma^2}\Big\}\, d\alpha_1 \cdots d\alpha_n\Big) = \frac{1}{\sigma^4}\Big(n\sigma^2 (2\pi\sigma^2)^{\frac{n}{2}} + n\sigma^2 (2\pi\sigma^2)^{\frac{n}{2}}\Big) = \frac{2n(2\pi\sigma^2)^{\frac{n}{2}}}{\sigma^2}.$$
Consequently, we establish the bound for the combined function $\gamma_\varphi$:
$$\|\gamma_{\varphi_1} - \gamma_{\varphi_2}\|_\infty \leq \max\Big\{(2\pi\sigma^2)^{\frac{n}{2}},\ \frac{2n(2\pi\sigma^2)^{\frac{n}{2}}}{\sigma^2}\Big\} \cdot \|\varphi_1 - \varphi_2\|_\infty.$$
Finally, by Theorem 2.5 on the stability of the interleaving distance, we conclude
$$d_I(M(\gamma_{\varphi_1}), M(\gamma_{\varphi_2})) \leq \|\gamma_{\varphi_1} - \gamma_{\varphi_2}\|_\infty = \max\{\|\gamma^1_{\varphi_1} - \gamma^1_{\varphi_2}\|_\infty,\ \|\gamma^2_{\varphi_1} - \gamma^2_{\varphi_2}\|_\infty\} \leq C \cdot \|\varphi_1 - \varphi_2\|_\infty,$$
where $C = \max\big\{(2\pi\sigma^2)^{\frac{n}{2}},\ \frac{2n(2\pi\sigma^2)^{\frac{n}{2}}}{\sigma^2}\big\}$. □

Remark 3.3.
To build bi-filtrations quickly, we use the approximate multi-parameter persistence modules $\tilde{M}^A_\delta$ defined in [24], where $\delta > 0$. The authors proved that for two continuous functions $f, g : X \to \mathbb{R}^m$, the following stability holds:
$$d_I(\tilde{M}^A_\delta(f), \tilde{M}^A_\delta(g)) \leq \|f - g\|_\infty + \delta.$$
Taking $f$ and $g$ to be $\gamma_{\varphi_1}$ and $\gamma_{\varphi_2}$ respectively, we have
$$d_I(\tilde{M}^A_\delta(\gamma_{\varphi_1}), \tilde{M}^A_\delta(\gamma_{\varphi_2})) \leq \max\Big\{(2\pi\sigma^2)^{\frac{n}{2}},\ \frac{2n(2\pi\sigma^2)^{\frac{n}{2}}}{\sigma^2}\Big\} \cdot \|\varphi_1 - \varphi_2\|_\infty + \delta.$$

4. Experiments

We conduct experiments on the MedMNIST dataset to evaluate the effectiveness of our G-LoG bi-filtration. We compare our results against all baseline machine learning models reported in MedMNIST [40], as well as the results of the single-parameter Topo-Med approach [28]. The bi-parameter cubical filtrations are implemented using the multipers library (https://davidlapous.github.io/multipers/) and the GUDHI library (https://gudhi.inria.fr/). We propose the following pipeline for feature generation by the G-LoG bi-filtration (illustrated in Figure 4):

2D/3D medical image → (G-LoG bi-filtration) → bi-filtered complexes → (homology) → bi-parameter persistence modules → (vectorization) → bi-parameter persistence image → (MLP) → result

Figure 4. Classification pipeline using the G-LoG bi-filtration.

Our bi-filtrations are generated on a laptop equipped with an AMD Ryzen 7 5800H (Radeon Graphics) and 16 GB of RAM. The MLP models are subsequently trained on a system featuring an Intel Core Ultra 9 185H (2.30 GHz) and 32 GB of RAM.

Dataset. MedMNIST is a large-scale, standardized collection of biomedical images inspired by the MNIST format, comprising 12 2D datasets and 6 3D datasets.
All images are preprocessed into a uniform resolution of 28 × 28 (for 2D) or 28 × 28 × 28 (for 3D) and are provided with corresponding classification labels, making them accessible to users without prior domain expertise. Covering the primary data modalities in biomedical imaging, MedMNIST is designed for classification on lightweight 2D and 3D images with various data scales (from 100 to 100,000 samples) and diverse tasks (binary/multi-class classification, ordinal regression and multi-label classification). With approximately 708,000 2D images and 10,000 3D samples in total, this dataset supports a wide range of research and educational needs in biomedical image analysis, computer vision, and machine learning.

Construction of bi-parameter filtrations. First, we convert the 2D color images to grayscale and normalize the pixel values from the range [0, 255] to [0, 1]. Next, we construct the bi-parameter filtration functions. As established in our discussion of motivation (2), if the intersection of the preimages of the two sublevel set functions is too small, the extracted topological features degenerate into those obtained by applying two separate single-parameter filtrations, leading to a reduction in the corresponding feature count. To intuitively illustrate the importance of capturing this intersection, we employ Gaussian and Laplacian-of-Gaussian convolutions as the two sublevel set filtrations on medical images. The parameters for the Gaussian kernels are set to σ = 0 (representing no convolution), 0.5, 1 and 1.5, while the σ for the Laplacian-of-Gaussian (LoG) kernel is fixed at 1. Finally, we construct the bi-parameter persistence modules.
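The preprocessing and smoothing steps above can be sketched as follows. This is a minimal numpy illustration, not the paper's code: the RGB-to-grayscale weights (ITU-R BT.601 luma coefficients), the 3σ kernel truncation, and the zero-padded 'same' boundary handling are all our assumptions, since the paper does not specify them.

```python
# Sketch: grayscale conversion, [0,255] -> [0,1] normalization, and separable
# Gaussian smoothing at the sigma grid used in the experiments.
import numpy as np


def to_grayscale_unit(rgb: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) image with values in [0, 255] to grayscale in [0, 1]."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # BT.601 weights (assumption)
    return gray / 255.0


def gaussian_smooth(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian smoothing; sigma = 0 returns the image unchanged."""
    if sigma == 0:
        return img
    radius = int(3 * sigma)                       # 3*sigma truncation (assumption)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Convolve rows, then columns; 'same' keeps the 28x28 shape (zero-padded).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)


rgb = np.random.default_rng(0).integers(0, 256, size=(28, 28, 3))
gray = to_grayscale_unit(rgb)
smoothed = {s: gaussian_smooth(gray, s) for s in (0, 0.5, 1, 1.5)}
```

The second filtration value, $\gamma^2_\varphi$, would be obtained analogously by convolving with the LoG kernel of Definition 3.1 instead of the normalized Gaussian.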
While the GUDHI library is highly efficient for generating single-parameter cubical complexes, the multipers library [25] provides a user-friendly and high-performance framework for multi-parameter persistent homology, enabling the generation of approximate persistence modules and integrating various vectorization methods. By leveraging both GUDHI [27] and multipers, we construct approximate bi-parameter persistence modules.

Running time. It takes about 0.1 seconds to generate a bi-parameter persistence module from a 28 × 28 medical image, whereas a 28 × 28 × 28 medical volume requires approximately 90 seconds.

Vectorization. We leverage the multipers library to extract vectorized features from the multi-parameter persistence modules. Specifically, we generate Multi-parameter Persistence Images (MPIs) [9] using a Gaussian kernel with a bandwidth of 0.01 and a weight parameter p = 2. For each i-dimensional persistent homology $H_i$, the resulting persistence image has a resolution of 50 × 50. For 2D image data, we generate MPIs for $H_0$ and $H_1$; after concatenation, each 2D image is represented by a 5000-dimensional vector. For 3D volumetric data, we generate MPIs for $H_0$, $H_1$ and $H_2$; by concatenating these images, we obtain a 7500-dimensional vector for each 3D volume.

Network hyperparameters. Consistent with the methodology in [28], we employ a Multi-Layer Perceptron (MLP) with three hidden layers to train on the multi-parameter persistence images. The MLP architecture consists of an input layer, three hidden layers, and an output layer. Specifically, the three hidden layers are: a fully connected layer with 256 neurons and ReLU activation, followed by a fully connected layer with 128 neurons and ReLU activation, and a final hidden layer with 64 neurons and ReLU activation.
The output layer utilizes the Softmax activation function. For model optimization, we use the cross-entropy loss function and the Adam optimizer with a learning rate of 0.001. Training is conducted for 100 epochs with a batch size of 32. Additionally, an early stopping strategy is implemented with a patience of 20, and the model achieving the highest AUC score on the validation set is saved for final evaluation.

Evaluation metrics. Consistent with prior research, we employ the average Area Under the Curve (AUC) and average Accuracy (ACC) as our evaluation metrics. AUC is a threshold-free metric for evaluating continuous prediction scores, while ACC evaluates the discrete prediction labels at a given threshold (or argmax). The computational procedures for these metrics are identical to those described in [40].

4.1. Results. As illustrated in Table 1, Table 2 and Table 3, we compare the performance of our proposed method against the baseline results reported in [40] and [28] on the MedMNIST dataset.

Table 1. Comparison of seven deep learning baselines, single-parameter Topo-Med, and our G-LoG bi-filtration on MedMNIST2D (I). Each cell reports AUC / ACC.

| Method | PathMNIST | ChestMNIST | DermaMNIST | OCTMNIST | PneumoniaMNIST | RetinaMNIST |
|---|---|---|---|---|---|---|
| ResNet-18 (28) | 0.983 / 0.907 | 0.768 / 0.947 | 0.917 / 0.735 | 0.943 / 0.743 | 0.944 / 0.854 | 0.717 / 0.524 |
| ResNet-18 (224) | 0.989 / 0.909 | 0.773 / 0.947 | 0.920 / 0.754 | 0.958 / 0.763 | 0.956 / 0.864 | 0.710 / 0.493 |
| ResNet-50 (28) | 0.990 / 0.911 | 0.769 / 0.947 | 0.913 / 0.735 | 0.952 / 0.762 | 0.948 / 0.854 | 0.726 / 0.528 |
| ResNet-50 (224) | 0.989 / 0.892 | 0.773 / 0.948 | 0.912 / 0.731 | 0.958 / 0.776 | 0.962 / 0.884 | 0.716 / 0.511 |
| auto-sklearn | 0.934 / 0.716 | 0.649 / 0.779 | 0.902 / 0.719 | 0.887 / 0.601 | 0.942 / 0.855 | 0.690 / 0.515 |
| AutoKeras | 0.959 / 0.834 | 0.742 / 0.937 | 0.915 / 0.749 | 0.955 / 0.763 | 0.947 / 0.878 | 0.719 / 0.503 |
| Google AutoML Vision | 0.944 / 0.728 | 0.778 / 0.948 | 0.914 / 0.768 | 0.963 / 0.771 | 0.991 / 0.946 | 0.750 / 0.531 |
| Topo-Med (MLP) | 0.942 / 0.683 | 0.787 / 0.530 | 0.904 / 0.669 | 0.710 / 0.450 | 0.845 / 0.762 | 0.728 / 0.458 |
| Ours (MLP), σ = 0 | 0.955 / 0.753 | 0.606 / 0.947 | 0.809 / 0.703 | 0.859 / 0.522 | 0.907 / 0.817 | 0.611 / 0.518 |
| Ours (MLP), σ = 0.5 | 0.954 / 0.753 | 0.608 / 0.947 | 0.806 / 0.708 | 0.872 / 0.567 | 0.910 / 0.825 | 0.624 / 0.500 |
| Ours (MLP), σ = 1 | 0.940 / 0.708 | 0.610 / 0.947 | 0.815 / 0.700 | 0.869 / 0.548 | 0.906 / 0.825 | 0.643 / 0.498 |
| Ours (MLP), σ = 1.5 | 0.939 / 0.702 | 0.603 / 0.947 | 0.816 / 0.698 | 0.854 / 0.529 | 0.891 / 0.795 | 0.638 / 0.483 |

Table 2. Comparison of seven deep learning baselines, single-parameter Topo-Med, and our G-LoG bi-filtration on MedMNIST2D (II). Each cell reports AUC / ACC.

| Method | BreastMNIST | BloodMNIST | TissueMNIST | OrganAMNIST | OrganCMNIST | OrganSMNIST |
|---|---|---|---|---|---|---|
| ResNet-18 (28) | 0.901 / 0.863 | 0.998 / 0.958 | 0.930 / 0.676 | 0.997 / 0.935 | 0.992 / 0.900 | 0.972 / 0.782 |
| ResNet-18 (224) | 0.891 / 0.833 | 0.998 / 0.963 | 0.933 / 0.681 | 0.998 / 0.951 | 0.994 / 0.920 | 0.974 / 0.778 |
| ResNet-50 (28) | 0.857 / 0.812 | 0.997 / 0.956 | 0.931 / 0.680 | 0.997 / 0.935 | 0.992 / 0.905 | 0.972 / 0.770 |
| ResNet-50 (224) | 0.866 / 0.842 | 0.997 / 0.956 | 0.931 / 0.680 | 0.998 / 0.947 | 0.993 / 0.911 | 0.975 / 0.785 |
| auto-sklearn | 0.836 / 0.803 | 0.984 / 0.878 | 0.828 / 0.532 | 0.963 / 0.762 | 0.976 / 0.829 | 0.945 / 0.672 |
| AutoKeras | 0.871 / 0.831 | 0.998 / 0.961 | 0.941 / 0.703 | 0.994 / 0.905 | 0.990 / 0.879 | 0.974 / 0.813 |
| Google AutoML Vision | 0.919 / 0.861 | 0.998 / 0.966 | 0.924 / 0.673 | 0.990 / 0.886 | 0.988 / 0.877 | 0.964 / 0.749 |
| Topo-Med (MLP) | 0.821 / 0.737 | 0.973 / 0.798 | 0.837 / 0.450 | 0.921 / 0.523 | 0.894 / 0.489 | 0.910 / 0.532 |
| Ours (MLP), σ = 0 | 0.820 / 0.731 | 0.977 / 0.847 | 0.849 / 0.537 | 0.934 / 0.591 | 0.935 / 0.593 | 0.927 / 0.605 |
| Ours (MLP), σ = 0.5 | 0.784 / 0.814 | 0.979 / 0.844 | 0.845 / 0.533 | 0.936 / 0.584 | 0.933 / 0.601 | 0.929 / 0.588 |
| Ours (MLP), σ = 1 | 0.752 / 0.750 | 0.977 / 0.840 | 0.840 / 0.529 | 0.931 / 0.562 | 0.931 / 0.588 | 0.918 / 0.573 |
| Ours (MLP), σ = 1.5 | 0.730 / 0.763 | 0.978 / 0.846 | 0.837 / 0.528 | 0.927 / 0.559 | 0.922 / 0.565 | 0.910 / 0.565 |
Table 3. Comparison of eight deep learning baselines, single-parameter Topo-Med, and our G-LoG bi-filtration on MedMNIST3D. Each cell reports AUC / ACC.

| Method | OrganMNIST3D | NoduleMNIST3D | FractureMNIST3D | AdrenalMNIST3D | VesselMNIST3D | SynapseMNIST3D |
|---|---|---|---|---|---|---|
| ResNet-18+2.5D | 0.977 / 0.788 | 0.838 / 0.835 | 0.587 / 0.451 | 0.718 / 0.772 | 0.748 / 0.846 | 0.634 / 0.696 |
| ResNet-18+3D | 0.996 / 0.907 | 0.863 / 0.844 | 0.712 / 0.508 | 0.827 / 0.721 | 0.874 / 0.877 | 0.820 / 0.745 |
| ResNet-18+ACS | 0.994 / 0.900 | 0.873 / 0.847 | 0.714 / 0.497 | 0.839 / 0.754 | 0.930 / 0.928 | 0.705 / 0.722 |
| ResNet-50+2.5D | 0.974 / 0.769 | 0.835 / 0.848 | 0.552 / 0.397 | 0.732 / 0.763 | 0.751 / 0.877 | 0.669 / 0.735 |
| ResNet-50+3D | 0.994 / 0.883 | 0.875 / 0.847 | 0.725 / 0.494 | 0.828 / 0.745 | 0.907 / 0.918 | 0.851 / 0.795 |
| ResNet-50+ACS | 0.994 / 0.889 | 0.886 / 0.841 | 0.750 / 0.517 | 0.828 / 0.758 | 0.912 / 0.858 | 0.719 / 0.709 |
| auto-sklearn | 0.977 / 0.814 | 0.914 / 0.874 | 0.628 / 0.453 | 0.828 / 0.802 | 0.910 / 0.915 | 0.631 / 0.730 |
| AutoKeras | 0.979 / 0.804 | 0.844 / 0.834 | 0.642 / 0.458 | 0.804 / 0.705 | 0.773 / 0.894 | 0.538 / 0.724 |
| Topo-Med (MLP) | 0.837 / 0.554 | 0.808 / 0.736 | 0.653 / 0.480 | 0.837 / 0.554 | 0.808 / 0.736 | 0.653 / 0.480 |
| Ours (MLP), σ = 0 | 0.958 / 0.684 | 0.564 / 0.774 | 0.697 / 0.554 | 0.742 / 0.789 | 0.813 / 0.887 | 0.783 / 0.796 |
| Ours (MLP), σ = 0.5 | 0.961 / 0.702 | 0.789 / 0.852 | 0.704 / 0.579 | 0.870 / 0.847 | 0.877 / 0.898 | 0.810 / 0.827 |
| Ours (MLP), σ = 1 | 0.953 / 0.656 | 0.838 / 0.852 | 0.753 / 0.579 | 0.867 / 0.829 | 0.915 / 0.908 | 0.805 / 0.813 |
| Ours (MLP), σ = 1.5 | 0.941 / 0.634 | 0.803 / 0.848 | 0.774 / 0.588 | 0.850 / 0.829 | 0.933 / 0.937 | 0.752 / 0.776 |

Results for the 2D datasets. As shown in Table 1 and Table 2, our model outperforms single-parameter persistent homology across all datasets except for the AUC scores on ChestMNIST, DermaMNIST and RetinaMNIST. We observe a performance increase of 5-10% on most datasets, with a notable 41.7% increase in ACC on the Chest dataset.
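For reference, the two metrics reported in these tables can be sketched in a few lines. The snippet below is our own minimal numpy illustration for a binary task, not the benchmark's implementation (which follows [40]); ties in the scores are ignored for brevity.

```python
# Sketch: AUC via the rank (Mann-Whitney) formulation on continuous scores,
# and ACC on thresholded labels, for a binary classification task.
import numpy as np


def auc_binary(scores: np.ndarray, labels: np.ndarray) -> float:
    """Area under the ROC curve; assumes both classes occur, ties ignored."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))


def accuracy(scores: np.ndarray, labels: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of correct discrete predictions at the given threshold."""
    return float(np.mean((scores >= threshold).astype(int) == labels))


labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
# One positive (0.35) is ranked below one negative (0.4), so 3 of the 4
# positive/negative pairs are correctly ordered: AUC = 0.75.
```

For multi-class and multi-label tasks the benchmark averages such scores across classes; the exact procedure is that of [40].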
Although our performance on the 2D datasets does not surpass the best-performing baseline models, our result on the PathMNIST dataset achieves an AUC of 95.5% and an ACC of 75.3%, which outperform auto-sklearn (93.4% AUC and 71.6% ACC). On the ChestMNIST dataset, our ACC reaches 94.7%, surpassing auto-sklearn (77.9%) and AutoKeras (93.7%). This performance matches that of ResNet-18 (28), ResNet-18 (224) and ResNet-50 (28), trailing only ResNet-50 (224) and Google AutoML Vision (94.8%). On the BreastMNIST dataset, our ACC score of 81.4% is higher than both ResNet-50 (28) (81.2%) and auto-sklearn (80.3%). Finally, on the TissueMNIST dataset, our AUC and ACC scores (84.9% and 53.7%) outperform those of auto-sklearn (82.8% AUC and 53.2% ACC).

Across the majority of datasets, σ = 0.5 yields superior results compared to σ = 0, 1 and 1.5. This empirically validates our motivation (2): the necessity of achieving an appropriate intersection of multi-parameter sublevel sets. These results underscore the latent potential of persistent homology, demonstrating that features extracted via this topological approach alone are sufficient to train models.

Results for 3D dataset. Table 3 shows that our method extracts superior features compared to single-parameter persistent homology, while our results remain highly competitive with the baseline methods. We select σ = 0, 0.5, 1, 1.5 to generate bi-parameter persistence features. The AUC and ACC scores for σ = 0 are consistently lower than those for σ = 0.5, 1, 1.5, which again validates our motivation (2). Specifically, our method achieves an AUC of 77.4% and an ACC of 58.8% on FractureMNIST3D; 87.0% and 84.7% on AdrenalMNIST3D; and 93.3% and 93.7% on VesselMNIST3D. On the SynapseMNIST3D dataset, our ACC reaches 82.7%.
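The "intersection of multi-parameter sublevel sets" invoked by motivation (2) can be made concrete with a small sketch. Assuming, as the name G-LoG suggests, that the two filter functions are a Gaussian smoothing of the image and its Laplacian-of-Gaussian response, one step of the sublevel-set bi-filtration at thresholds (t1, t2) is the set {f ≤ t1} ∩ {g ≤ t2}, whose connected components are what H0 bi-parameter persistence tracks. All function names below are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace, label

def g_log_filters(image, sigma=0.5):
    """Two filter functions for a sublevel-set bi-filtration on a grayscale
    image: the Gaussian-smoothed intensity and the edge-enhancing LoG
    response (illustrative stand-ins for the paper's G-LoG construction)."""
    img = image.astype(float)
    return gaussian_filter(img, sigma), gaussian_laplace(img, sigma)

def betti0_of_sublevel_intersection(f, g, t1, t2):
    """Number of connected components of {f <= t1} & {g <= t2}: the H0 Betti
    number of a single bi-filtration step on the cubical grid."""
    mask = (f <= t1) & (g <= t2)
    _, n_components = label(mask)
    return n_components

# A toy 2D "image": two bright blobs on a dark background.
y, x = np.mgrid[0:32, 0:32]
image = np.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 20.0) \
      + np.exp(-((x - 24) ** 2 + (y - 24) ** 2) / 20.0)

f, g = g_log_filters(image, sigma=0.5)
# Scanning a grid of (t1, t2) thresholds traces how components appear and
# merge, which is what bi-parameter persistence summarizes in H0.
grid = [[betti0_of_sublevel_intersection(f, g, t1, t2)
         for t2 in np.linspace(g.min(), g.max(), 4)]
        for t1 in np.linspace(f.min(), f.max(), 4)]
```

Varying σ changes the LoG filter function and hence where the two sublevel families intersect, which is why the experiments sweep σ ∈ {0, 0.5, 1, 1.5}.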
Overall, our approach outperforms the baseline models in both AUC and ACC on the FractureMNIST3D, AdrenalMNIST3D and VesselMNIST3D datasets, while also surpassing the baselines in terms of ACC on the SynapseMNIST3D dataset. Consequently, by selecting appropriate filter functions, multi-parameter persistent homology can extract sufficient geometric and topological features for classification tasks, potentially even surpassing the performance of the original features.

5. Conclusions and future work

In this paper, we introduce a bi-parameter persistence filtration called G-LoG. We theoretically demonstrate that the resulting persistence modules on volumetric images are stable under the interleaving distance with respect to the maximum norm of the image data. The persistence modules induced by the G-LoG bi-filtration are computationally efficient to generate. Our experimental results highlight the significant potential of multi-parameter persistence modules in biomedical image analysis.

In the future, we aim to extend our methodology in two main directions. First, we plan to develop a more robust framework for filtrations with more parameters, such as three-parameter filtrations, to capture even more intricate topological features. We believe that selecting more appropriate filtration parameters can yield superior performance from multi-parameter persistence modules. Second, we intend to integrate our bi-filtration framework into optimization pipelines, enabling its application to a broader range of domains, including computer graphics and end-to-end deep learning architectures.

References

[1] H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson, F. Motta, and L. Ziegelmeier. Persistence images: A stable vector representation of persistent homology. J. Mach. Learn. Res.
, 18(8):1–35, 2017.
[2] Á. J. Alonso, M. Kerber, T. Lam, and M. Lesnick. Delaunay bifiltrations of functions on point clouds. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 4872–4891. SIAM, 2024.
[3] M. G. Bergomi, P. Frosini, D. Giorgi, and N. Quercioli. Towards a topological–geometrical theory of group equivariant non-expansive operators for data analysis and machine learning. Nat. Mach. Intell., 1(9):423–433, 2019.
[4] S. Biasotti, L. De Floriani, B. Falcidieno, P. Frosini, D. Giorgi, C. Landi, L. Papaleo, and M. Spagnuolo. Describing shapes by geometrical-topological properties of real functions. ACM Comput. Surv., 40(4), October 2008.
[5] A. J. Blumberg and M. Lesnick. Stability of 2-parameter persistent homology. Found. Comput. Math., 24(2):385–427, 2024.
[6] P. Bubenik. Statistical topological data analysis using persistence landscapes. J. Mach. Learn. Res., 16:77–102, 2015.
[7] G. Carlsson and M. Vejdemo-Johansson. Topological data analysis with applications. Cambridge University Press, Cambridge, 2021.
[8] G. Carlsson and A. Zomorodian. The theory of multidimensional persistence. Discrete Comput. Geom., 42(1):71–93, 2009.
[9] M. Carrière and A. Blumberg. Multiparameter persistence image for topological machine learning. In Advances in Neural Information Processing Systems, volume 33, pages 22432–22444, New York, 2020. Curran Associates, Inc.
[10] G. Casaclang-Verzosa, S. Shrestha, M. J. Khalil, J. S. Cho, M. Tokodi, S. Balla, M. Alkhouli, V. Badhwar, J. Narula, J. D. Miller, et al. Network tomography for understanding phenotypic presentations in aortic stenosis. JACC: Cardiovascular Imaging, 12(2):236–248, 2019.
[11] Y. M. Chung and A. Lawson. Persistence curves: A canonical framework for summarizing persistence diagrams. Advances in Computational Mathematics, 48(1):6, 2022.
[12] R. Corbet, M. Kerber, M. Lesnick, and G. Osang. Computing the multicover bifiltration. Discrete Comput. Geom., 70(2):376–405, 2023.
[13] L. Crawford, A. Monod, A. X. Chen, S. Mukherjee, and R. Rabadán. Predicting clinical outcomes in glioblastoma: an application of topological and functional data analysis. J. Am. Stat. Assoc., 115(531):1139–1150, 2020.
[14] Z. Dong, H. Lin, C. Zhou, B. Zhang, and G. Li. Persistence B-spline grids: stable vector representation of persistence diagrams based on data fitting. Mach. Learn., 113(3):1373–1420, 2024.
[15] H. Edelsbrunner, D. Letscher, and A. Zomorodian. Topological persistence and simplification. Discrete Comput. Geom., 28(4):511–533, 2002.
[16] M. Feurer, K. Eggensperger, S. Falkner, M. Lindauer, and F. Hutter. Auto-sklearn 2.0: Hands-free AutoML via meta-learning. Journal of Machine Learning Research, 23(261):1–61, 2022.
[17] J. He, B. Hou, T. Wu, and Y. Xin. Mix-GENEO: A flexible filtration for multiparameter persistent homology detects digital images. AIMS Math., 10(10):24153–24178, 2025.
[18] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[19] X. Hu, Y. Wang, L. Fuxin, D. Samaras, and C. Chen. Topology-aware segmentation using discrete Morse theory. In International Conference on Learning Representations, 2021.
[20] H. Jin, Q. Song, and X. Hu. Auto-Keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 1946–1956, 2019.
[21] M. Lesnick. The theory of the interleaving distance on multidimensional persistence modules. Found. Comput. Math., 15(3):613–650, 2015.
[22] M. Lesnick and K. McCabe. Nerve models of subdivision bifiltrations. arXiv preprint, 2024.
[23] J. Liu, Y. Li, G. Cao, Y. Liu, and W. Cao. Feature pyramid vision transformer for MedMNIST classification decathlon. In 2022 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2022.
[24] D. Loiseaux, M. Carrière, and A. J. Blumberg. Multi-parameter module approximation: an efficient and interpretable invariant for multi-parameter persistence modules with guarantees. J. Appl. Comput. Topology, 9(4):1–60, 2025.
[25] D. Loiseaux and H. Schreiber. Multipers: Multiparameter persistence for machine learning. Journal of Open Source Software, 9(103):6773, November 2024.
[26] O. N. Manzari, H. Ahmadabadi, H. Kashiani, S. B. Shokouhi, and A. Ayatollahi. MedViT: a robust vision transformer for generalized medical image classification. Comput. Biol. Med., 157:106791, 2023.
[27] C. Maria, J. D. Boissonnat, M. Glisse, and M. Yvinec. The GUDHI library: Simplicial complexes and persistent homology. In Mathematical Software – ICMS 2014: 4th International Congress, Seoul, South Korea, August 5–9, 2014. Proceedings 4, pages 167–174. Springer, 2014.
[28] B. Nuwagira, K. Caner, K. P. Fan-Hsi, and B. Coskunuzer. Topological machine learning for low data medical imaging. In Machine Learning for Health (ML4H), pages 824–838. PMLR, 2025.
[29] L. Polterovich, D. Rosen, K. Samvelyan, and J. Zhang. Topological persistence in geometry and analysis, volume 74 of University Lecture Series. American Mathematical Society, RI, 2020.
[30] J. Reininghaus, S. Huber, U. Bauer, and R. Kwitt. A stable multi-scale kernel for topological machine learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4741–4748, 2015.
[31] D. R. Sarvamangala and R. V. Kulkarni. Convolutional neural networks in medical image understanding: a survey. Evol. Intell., 15(1):1–22, 2022.
[32] L. Shen, H. Feng, F. Li, F. Lei, J. Wu, and G. Wei. Knot data analysis using multiscale Gauss link integral. Proceedings of the National Academy of Sciences, 121(42):e2408431121, 2024.
[33] Y. Singh, C. M. Farrelly, Q. A. Hathaway, T. Leiner, J. Jagtap, G. E. Carlsson, and B. J. Erickson. Topological data analysis in medical imaging: current state of the art. Insights Imaging, 14(1):58, 2023.
[34] Y. Singh, Q. A. Hathaway, K. Dinakar, L. J. Shaw, B. Erickson, F. Lopez-Jimenez, and D. L. Bhatt. Quantifying the unknowns of plaque morphology: The role of topological uncertainty in coronary artery disease. Mayo Clinic Proceedings: Digital Health, 3(2):100217, 2025.
[35] S. Sun, X. Jia, and Z. Zheng. Complex mixer for MedMNIST classification decathlon. Appl. Intell., 55(16):1–12, 2025.
[36] O. Vipond, J. A. Bull, P. S. Macklin, U. Tillmann, C. W. Pugh, H. M. Byrne, and H. A. Harrington. Multiparameter persistent homology landscapes identify immune cell spatial patterns in tumors. P. Natl. Acad. Sci. USA, 118(41), 2021.
[37] C. Wu and C. A. Hargreaves. Topological machine learning for mixed numeric and categorical data. Int. J. Artif. Intell. T., 30(05):2150025, 2021.
[38] A. Yadav, F. Ahmed, O. Daescu, R. Gedik, and B. Coskunuzer. Histopathological cancer detection with topological signatures. In 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 1610–1619. IEEE, 2023.
[39] J. Yang, R. Shi, and B. Ni. MedMNIST classification decathlon: A lightweight AutoML benchmark for medical image analysis. In IEEE 18th International Symposium on Biomedical Imaging (ISBI), pages 191–195, 2021.
[40] J. Yang, R. Shi, D. Wei, Z. Liu, L. Zhao, B. Ke, H. Pfister, and B. Ni. MedMNIST v2 – a large-scale lightweight benchmark for 2D and 3D biomedical image classification. Sci. Data, 10(1):41, 2023.
[41] A. J. Zomorodian. Topology for computing, volume 16 of Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2005.

Qingsong Wang, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
Email address: qswang21@mails.jlu.edu.cn

Jiaxing He, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
Email address: 547337872@qq.com

Bingzhe Hou, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
Email address: houbz@jlu.edu.cn

Tieru Wu, School of Mathematics, Jilin University, 130012, Changchun, P. R. China; School of Artificial Intelligence, Jilin University, 130012, Changchun, P. R. China
Email address: wutr@jlu.edu.cn

Yang Cao, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
Email address: caoyang@jlu.edu.cn

Cailing Yao, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
Email address: 1290279144@qq.com