Optimized Weighted Voting System for Brain Tumor Classification Using MRI Images


Authors: Ha Anh Vu

1st Ha Anh Vu
Artificial Intelligence Department, FPT University, Ho Chi Minh City, Vietnam
ORCID: 0009-0002-4921-9823

Abstract—The accurate classification of brain tumors from MRI scans is essential for effective diagnosis and treatment planning. This paper presents a weighted ensemble learning approach that combines deep learning and traditional machine learning models to improve classification performance. The proposed system integrates multiple classifiers, including ResNet101, DenseNet121, Xception, CNN-MRI, and ResNet50 with edge-enhanced images, SVM, and KNN with HOG features. A weighted voting mechanism assigns higher influence to models with better individual accuracy, ensuring robust decision-making. Image processing techniques such as Balance Contrast Enhancement, K-means clustering, and Canny edge detection are applied to enhance feature extraction. Experimental evaluations on the Figshare and Kaggle MRI datasets demonstrate that the proposed method achieves state-of-the-art accuracy, outperforming existing models. These findings highlight the potential of ensemble-based learning for improving brain tumor classification, offering a reliable and scalable framework for medical image analysis.

Index Terms—Brain Tumor Classification, MRI, Ensemble Learning, Voting System, Deep Learning, Machine Learning

I. INTRODUCTION

Magnetic Resonance Imaging (MRI) is a critical tool for detecting and diagnosing brain tumors, where accurate classification is essential for guiding treatment decisions. Tumors such as glioma, meningioma, and pituitary tumors exhibit diverse morphological characteristics, including variations in shape, texture, and anatomical location. These differences, together with variations in MRI acquisition conditions, pose challenges for precise classification.
Traditional machine learning models like Support Vector Machines (SVM) and K-Nearest Neighbors (KNN) have shown effectiveness when leveraging handcrafted features such as Histogram of Oriented Gradients (HOG). Meanwhile, deep learning models, particularly Convolutional Neural Networks (CNNs) and ResNet architectures, have demonstrated promising results in medical imaging tasks. However, both approaches have limitations when applied independently, necessitating an integrated strategy to enhance classification performance.

This research proposes a weighted ensemble learning framework that combines multiple classifiers to improve brain tumor classification accuracy and robustness. The ensemble integrates traditional machine learning models with deep learning architectures, leveraging their complementary strengths. A weighted voting mechanism is employed to optimally combine predictions from different classifiers based on their individual performance. Image processing techniques such as Balance Contrast Enhancement (BCET), K-means clustering, and Canny edge detection are applied to enhance feature extraction, improving tumor boundary visibility and classification accuracy. Specifically, SVM and KNN are used for shape and texture analysis, while CNN-MRI, ResNet101, and DenseNet121 process grayscale MRI images to extract deep spatial representations. ResNet50 and Xception are also trained on edge-detected images to enhance the extraction of structural features. Experimental evaluations on the Figshare and Kaggle MRI datasets demonstrate that the proposed ensemble framework achieves classification accuracy exceeding 99%, surpassing existing methods and highlighting its potential for real-world medical applications.

The remainder of this paper is structured as follows: Section II reviews related work on brain tumor classification using single and ensemble models.
Section III details the methodology, including image processing, model selection, and the implementation of the weighted voting mechanism. Section IV presents the classification results and compares them with state-of-the-art approaches. Finally, Section V concludes the paper with key findings and directions for future research.

II. RELATED WORK

Advancements in brain tumor classification have been driven by both single-classifier models and ensemble-based approaches. While individual classifiers, such as deep learning-based Convolutional Neural Networks (CNNs) and traditional machine learning models like Support Vector Machines (SVM) and K-Nearest Neighbors (KNN), have demonstrated strong performance, each has its limitations in handling variations in tumor morphology. Ensemble learning addresses these challenges by integrating multiple models, leveraging their complementary strengths to improve classification accuracy and robustness. This section provides an overview of both single-model and ensemble-based approaches in brain tumor classification.

A. Single Classifiers

CNNs are widely used in medical imaging for their ability to extract hierarchical features from complex datasets. ResNet, VGGNet, and DenseNet have achieved state-of-the-art performance, particularly when fine-tuned with large-scale datasets like ImageNet [1], [2]. Further improvements, such as hyperparameter tuning and architectural enhancements, have significantly boosted classification accuracy [3]–[6]. Recent advancements include sub-region tumor analysis [7], EfficientNet optimization [8], and the TDA framework for segmentation, classification, and severity prediction [9].

Traditional machine learning models remain relevant, especially in resource-limited settings. SVM [10] and KNN [11], combined with feature extraction techniques like HOG and LBP [12], continue to perform well.
Multi-kernel SVM [13], [14] and handcrafted features [15], [16] have sometimes rivaled deep learning methods. Hybrid approaches, such as integrating Canny edge detection with CNNs [17]–[19], further enhance classification accuracy by preserving spatial and textural information.

B. Ensemble Learning

Ensemble learning enhances predictive accuracy by combining multiple classifiers, reducing variance, and improving generalization, making it highly effective in medical imaging. Techniques like bagging, boosting, and voting have been widely explored. In brain tumor classification, Bogacsovics et al. [20] used majority voting with CNNs like ResNet and MobileNet, while Siar et al. [21] applied weighted voting with VGG16 and ResNet50, improving performance. Sterniczuk et al. [22] further validated ensemble methods by integrating multiple deep learning models, leveraging diverse feature extraction techniques for higher accuracy.

Hybrid ensemble models have proven effective in improving accuracy. Munira et al. [23] combined CNNs with Random Forest using a voting mechanism, while Zhao et al. [24] integrated SVM ensembles with PCA for dimensionality reduction. These approaches demonstrate the benefits of combining classifiers for enhanced performance.

Recent advancements in ensemble learning have incorporated attention mechanisms and advanced feature fusion techniques to refine classification accuracy. Abdusalomov et al. [25] enhanced model reliability by integrating the Convolutional Block Attention Module (CBAM) and Bi-directional Feature Pyramid Networks (BiFPN) into an ensemble framework. Roy et al. [26] introduced a GAN-based augmentation technique within an explainable ensemble system, effectively addressing data imbalance issues and improving overall classification performance.

Building on these prior works, this study proposes an enhanced ensemble learning framework that integrates deep and traditional machine learning models.
A previous study by Vu et al. [27] explored an ensemble comprising KNN, SVM, CNN-MRI, and ResNet50, achieving a peak accuracy of 98.36%. In contrast, the proposed method expands upon this by incorporating additional deep learning models (Xception, ResNet101, and DenseNet121), further improving classification accuracy beyond 99%.

III. METHODOLOGY

This study explores the integration of multiple classifiers with diverse data representations. Each model, when trained on a different input type, contributes a unique classification perspective on brain tumors. These variations complement one another within the ensemble system, ultimately improving accuracy. The choice of classifiers and input types is informed by preliminary experiments [18], [27] and their proven effectiveness, as outlined in Section II.

A. MRI Data Representations

The experiments used various data representations, including original images, edge images, and features like HOG [12], SIFT, ORB, EHD, PCA, color histograms, and LBP. Prior research found HOG, edge images, and original images most effective for MRI processing, leading to their selection for this study.

1) Grayscale images: The grayscale MRI images are maintained at their native resolution of 512×512 pixels to preserve their structural fidelity and prevent distortions caused by resizing. The Balance Contrast Enhancement Technique (BCET) [28] improves the visibility of tumor components by enhancing contrast, making key features more distinguishable.

2) Histogram of Oriented Gradients (HOG): HOG [12] is a feature extraction method for brain tumor analysis, capturing shape and edge structures while remaining resilient to contrast and illumination changes. Unlike SIFT and ORB, which focus on key points, HOG preserves global structures crucial for MRI-based classification. Color histograms lack spatial details, EHD struggles with irregular tumor shapes, and LBP is noise-sensitive.
PCA risks losing essential spatial features. HOG effectively balances global and local structures, preserving tumor boundaries and improving classification accuracy.

3) Edge images: Accurate tumor classification relies on well-defined edges to enhance MRI visibility. The pipeline includes grayscale conversion, BCET for contrast enhancement, and K-means clustering to segment the skull, soft tissues, and tumor [18]. Canny edge detection [17] refines tumor boundaries, using a 5×5 Gaussian filter for smoothing, a 3×3 Sobel operator for gradient detection, and non-maximum suppression to retain prominent edges. Double thresholding and hysteresis edge tracking remove false positives. Figure 1 illustrates the process, from the original MRI image to contrast enhancement, segmentation, and final edge-detected output, improving classification accuracy.

Fig. 1: Edge detection process on an MRI image: (a) Original (b) Contrasted (c) Segmented (d) Edge detected.

B. Classifiers

1) K-Nearest Neighbors (KNN): KNN is a non-parametric, instance-based algorithm that classifies samples based on distance similarity [15], [16]. Using HOG features, it captures tumor shape and texture. This study evaluates Euclidean, Manhattan, Minkowski, and Chebyshev distance metrics while varying k from 1 to 10 to optimize classification performance.

2) Support Vector Machines (SVM): SVM is widely used in high-dimensional medical imaging [13], [14], [29], mapping non-linearly separable data to higher dimensions via kernel functions. The evaluation in this research is based on seven kernels (Linear, RBF, Polynomial, Sigmoid, Chi-Square, Laplacian, and Gaussian) for classifying glioma, meningioma, and pituitary tumors.

3) CNN-MRI: Convolutional Neural Networks (CNNs) extract hierarchical features directly from MRI scans [3]–[5]. The CNN-MRI model (see Figure 2) is designed to process full-resolution grayscale MRI images, an approach that proved effective in our previous work [27].
It consists of three convolutional layers, where lower layers focus on detecting edges and textures, while deeper layers identify tumor-specific structures. To mitigate overfitting, the network is trained using the Adam optimizer with a dropout layer (0.5 rate). The final softmax layer classifies tumors into distinct categories.

Fig. 2: The CNN-MRI architecture used in the experiments: Conv2D (32, 3×3) → MaxPooling → Conv2D (64, 3×3) → MaxPooling → Conv2D (128, 3×3) → MaxPooling → Flatten → Dense (128) → Dropout (0.5) → Dense (4) → Softmax, on 512×512 inputs.

4) DenseNet121: DenseNet121 improves feature learning by connecting all layers, enhancing tumor structure recognition while preserving spatial details [30], [31]. Its dense connectivity aids in detecting subtle tumor morphology differences.

5) ResNet: ResNet50 and ResNet101 improve feature learning by addressing the vanishing gradient problem [32], [33]. ResNet50 enhances tumor boundaries using Canny edge detection, while ResNet101, with its deeper architecture, captures textures and semantic patterns for better tumor recognition.

6) Xception: Xception [34] improves brain tumor classification using depthwise separable convolutions, reducing complexity while preserving accuracy. It enhances boundary details in edge-enhanced MRI images and leverages pre-trained ImageNet weights to capture intricate tumor morphology, strengthening ensemble classification.

C. Voting System

Figure 3 depicts the workflow of the voting system, where MRI images undergo preprocessing before classification. Depending on model requirements, images are used in their original form or transformed into alternative representations, such as HOG features or edge-detected images, to enhance feature extraction. Each classifier processes its respective input format and independently predicts the tumor class.

Fig. 3: The voting system overview.
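The vote-combination step of this workflow can be sketched in a few lines of NumPy: each classifier contributes a class-probability vector, and the class with the highest weighted score wins. This is a minimal illustration with made-up confidence values, not the authors' implementation.

```python
import numpy as np

def weighted_vote(probs, weights):
    """Combine per-classifier class-probability vectors by weighted voting.

    probs:   shape (n_classifiers, n_classes); probs[i, c] is classifier i's
             confidence p_{i,c} for class c.
    weights: shape (n_classifiers,); the per-classifier weights w_i.
    Returns the index of the class with the highest weighted score.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    scores = weights @ probs  # scores[c] = sum_i w_i * p_{i,c}
    return int(np.argmax(scores))

# Hypothetical confidences from three classifiers over three tumor classes
# (glioma, meningioma, pituitary); values are invented for illustration.
probs = [
    [0.6, 0.3, 0.1],  # classifier 1
    [0.2, 0.5, 0.3],  # classifier 2
    [0.1, 0.2, 0.7],  # classifier 3
]

print(weighted_vote(probs, [1, 1, 1]))  # equal weights -> class 2
print(weighted_vote(probs, [3, 1, 1]))  # classifier 1 dominates -> class 0
```

As the two calls show, changing the weight vector alone can flip the ensemble's decision, which is why the weight assignment itself is treated as an optimization target.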
A weighted voting mechanism determines the final classification by assigning greater influence to higher-accuracy models, ensuring reliable predictions. This strategy balances classifier strengths while minimizing the impact of weaker models. A mathematical formulation optimally integrates these weighted votes, selecting the class with the highest score as the final prediction. The voting system follows the mathematical formulation:

    y_final = argmax_c ( Σ_{i=1}^{n} w_i · p_{i,c} )    (1)

where y_final represents the predicted tumor class, w_i is the weight assigned to classifier i, and p_{i,c} denotes the confidence score of classifier i for class c.

D. Optimizations

The optimization process refined key hyperparameters for brain tumor classification, using AutoKeras [35] for CNN-MRI and Optuna [36] for the other models. Standardized settings included a learning rate of 0.0001 for stability, a batch size of 32 for efficiency, and dropout (0.3–0.5) with L2 weight decay to prevent overfitting. Activation functions varied: ReLU for most models and Swish for Xception. Optimization strategies included SGD with Momentum for ResNet50, Adam for ResNet101 and DenseNet121, and RMSprop for Xception. Categorical Cross-Entropy was the primary loss function, with Focal Loss used for class imbalance in ResNet101.

Uniform data augmentation techniques (flipping, rotation, zooming, brightness adjustments, and noise injection) were applied to enhance generalization. These optimizations strengthened MRI-based brain tumor classification, ensuring robust performance across architectures.

IV. RESULTS

This study utilizes two well-known MRI brain tumor datasets to improve classification accuracy and generalizability: the Kaggle Brain Tumor MRI dataset¹ and the Figshare Brain Tumor dataset² (see Figure 4).

Fig. 4: Sample images representing each tumor class: (a) Glioma (b) Meningioma (c) Pituitary (d) No Tumor.
The Kaggle dataset [37] comprises 7,023 MRI images categorized into glioma, meningioma, pituitary tumors, and no tumor, with a predefined train-test split of 5,712 training and 1,311 testing images, ensuring balanced class distribution for consistent benchmarking. In contrast, the Figshare dataset contains 3,064 T1-weighted MRI images from 233 patients, divided into glioma, meningioma, and pituitary tumors with a 70%–30% train-test split. Unlike Kaggle, Figshare presents an imbalanced class distribution, introducing additional challenges for classification models.

A. Individual Results

Table I shows that deep learning models consistently outperform traditional classifiers in accuracy and generalization. While KNN and SVM achieved competitive results, their performance depends on hyperparameters. KNN performed best with k = 3 using Euclidean distance, while SVM excelled with a linear kernel.

TABLE I: Classifier performance on the Figshare and Kaggle datasets.

    Model         Figshare Accuracy (%)   Kaggle Accuracy (%)
    KNN           96.41                   97.94
    SVM           95.54                   96.03
    ResNet50      96.63                   98.78
    Xception      97.93                   99.46
    CNN-MRI       95.65                   98.32
    DenseNet121   98.15                   99.60
    ResNet101     98.37                   99.08

Among the deep learning models, ResNet101 and DenseNet121 demonstrated the highest performance, achieving 98.37% and 98.15% on Figshare and 99.08% and 99.60% on Kaggle, respectively. The deeper architecture of ResNet101 enhanced hierarchical feature extraction, while DenseNet121 benefited from feature reuse and efficient gradient propagation. Xception, leveraging depthwise separable convolutions, achieved 97.93% on Figshare and 99.46% on Kaggle, excelling in spatial feature extraction. ResNet50, trained on edge-enhanced images, performed well, particularly on Kaggle, highlighting the effectiveness of boundary-aware classification.

¹ https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset
² https://figshare.com/articles/dataset/brain_tumor_dataset/1512427
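The best traditional-classifier settings reported above (k = 3 with Euclidean distance for KNN, a linear kernel for SVM) can be sketched with scikit-learn. The synthetic feature vectors below stand in for the HOG descriptors extracted from the MRI scans, so the resulting accuracies are illustrative only, not the paper's figures.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for HOG descriptors of three tumor classes
# (real inputs would be HOG features computed from the MRI images).
n_per_class, n_features = 60, 64
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(3)])
y = np.repeat([0, 1, 2], n_per_class)  # glioma, meningioma, pituitary
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# KNN with the best-performing setting reported above: k = 3, Euclidean distance
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean").fit(X_tr, y_tr)

# SVM with the linear kernel that performed best in the experiments;
# probability=True yields the confidence scores a voting ensemble would consume
svm = SVC(kernel="linear", probability=True).fit(X_tr, y_tr)

print(f"KNN accuracy: {knn.score(X_te, y_te):.2f}")
print(f"SVM accuracy: {svm.score(X_te, y_te):.2f}")
```

Enabling `probability=True` on the SVM matters for this paper's setting: the weighted voting formulation consumes per-class confidence scores, not hard labels.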
Overall, models trained on the Kaggle dataset exhibited higher accuracy than those trained on Figshare, mainly due to Kaggle's larger dataset size and more balanced class distribution. In contrast, the imbalanced distribution in Figshare posed additional challenges, particularly affecting the SVM and CNN-MRI models. CNN-MRI, while robust, showed slightly lower accuracy (95.65% on Figshare, 98.32% on Kaggle) than deeper architectures, reflecting its limited feature extraction capability.

B. Voting System Results Before Optimization

All classifiers contributed equally in the no-weight scenario (see Table II), establishing a baseline to assess the ensemble system's performance. Despite no weighting, the system achieved 99.13% accuracy on Figshare and 99.54% on Kaggle. These results confirm that even a simple majority voting mechanism performs effectively, providing a strong benchmark before applying weight optimizations.

TABLE II: Classification scores for the no-weight scenario on the Figshare and Kaggle datasets.

                 Figshare                   Kaggle
    Class        P     R     F1    #       P     R     F1    #
    Glioma       0.97  0.98  0.97  428     1.00  0.99  0.99  300
    Meningioma   0.94  0.93  0.94  213     0.99  0.99  0.99  306
    Pituitary    0.98  0.98  0.99  279     0.99  0.99  1.00  300
    No tumor     –     –     –     –       0.99  1.00  0.99  405
    Accuracy: Figshare = 99.13%, Kaggle = 99.54%

In the incremental-weight scenario (see Table III), classifiers were assigned weights based on their performance, ensuring that higher-performing models had a greater influence on the ensemble's decisions. The weighting scheme ranged from w_1 = 7, w_2 = 6, ..., w_7 = 1. On the Figshare dataset, this approach achieved an accuracy of 98.36%, slightly lower than the no-weight scenario, suggesting that equal contributions from all models may be more effective in this case. However, on the Kaggle dataset, the incremental-weighting strategy maintained a high accuracy of 99.54%, with perfect classification for the no tumor class.
TABLE III: Classification scores for the incremental-weight scenario on the Figshare and Kaggle datasets.

                 Figshare                   Kaggle
    Class        P     R     F1    #       P     R     F1    #
    Glioma       0.98  0.98  0.97  428     1.00  0.99  0.99  300
    Meningioma   0.97  0.97  0.98  213     0.98  1.00  0.99  306
    Pituitary    0.98  0.98  0.99  279     0.99  0.99  1.00  300
    No tumor     –     –     –     –       1.00  1.00  1.00  405
    Accuracy: Figshare = 98.36%, Kaggle = 99.54%

In the highest-weight scenario (see Table IV), the most accurate individual model was assigned a slightly higher weight while incorporating contributions from the other classifiers. On the Figshare dataset, ResNet101 achieved the highest individual accuracy (see Table I), so it was given a greater influence in the ensemble (w_1 = 2, w_2 = 1, w_3 = 1, ..., w_7 = 1), resulting in 99.13% accuracy, matching the performance of the no-weight scenario. Meanwhile, on the Kaggle dataset, DenseNet121 demonstrated the strongest individual performance, and emphasizing its contribution led to an optimized ensemble accuracy of 99.69%, with perfect classification across all tumor classes.

TABLE IV: Classification scores for the highest-weight scenario on the Figshare and Kaggle datasets.

                 Figshare                   Kaggle
    Class        P     R     F1    #       P     R     F1    #
    Glioma       0.97  0.98  0.97  428     1.00  0.99  1.00  300
    Meningioma   0.94  0.93  0.94  213     0.99  1.00  0.99  306
    Pituitary    0.98  0.98  0.99  279     1.00  1.00  1.00  300
    No tumor     –     –     –     –       1.00  1.00  1.00  405
    Accuracy: Figshare = 99.13%, Kaggle = 99.69%

C. Optimized Voting System Results

An automated approach optimized vote weights (0–10), excluding models assigned a weight of 0. The top three configurations achieved 99.46% accuracy on Figshare (Table V) and 99.85% on Kaggle (Table VI). While all reached the same accuracy, they revealed differences in misclassification distribution.

TABLE V: Top three scenarios achieving 99.46% accuracy on the Figshare dataset.
    Scenario  SVM  KNN  ResNet50  Xception  CNN-MRI  DenseNet121  ResNet101
    1         1    3    1         3         2        1            4
    2         1    1    0         1         1        1            2
    3         1    2    1         1         0        1            2

On Figshare, top configurations prioritized ResNet101, the most accurate model, with other classifiers supporting balanced classification. On Kaggle, DenseNet121 and Xception had greater influence, aligning with their superior accuracy. While overall accuracy remained identical across these configurations, weight distributions affected class-specific performance, highlighting the trade-off between accuracy and classification stability.

TABLE VI: Top three scenarios achieving 99.85% accuracy on the Kaggle dataset.

    Scenario  SVM  KNN  ResNet50  Xception  CNN-MRI  DenseNet121  ResNet101
    1         1    1    1         4         2        4            2
    2         0    0    1         2         1        2            1
    3         0    0    1         3         1        2            1

D. Comparison

The comparative analysis of the Figshare and Kaggle datasets highlights the accuracy gains achieved by our ensemble-based voting system. As shown in Table VII, prior single models like the 23-layer CNN (97.80%) and ResNet50 (97.30%) lacked generalization, while ensemble systems (e.g., AlexNet, VGG-16, ResNet50) reached 97.55%. In contrast, our hybrid approach (KNN, SVM, ResNet50, Xception, CNN-MRI, DenseNet121, ResNet101) achieved 99.46%, demonstrating the benefits of integrating deep learning and traditional methods.

Similarly, Table VIII confirms our method's superiority on the Kaggle dataset. While previous CNN and EfficientNet models peaked at 98.33% and ensemble systems (GoogleNet, ShuffleNet, SVM, KNN) at 98.40%, our optimized voting system achieved 99.85%, ensuring improved decision-making and robustness in medical diagnostics.

TABLE VII: Performance comparison of the proposed method with existing approaches on the Figshare dataset.

    Method                   Single Classifier                              Acc. (%)
    Khan et al. [3]          23-layer CNN                                   97.80
    Shnaka et al. [2]        R-CNN                                          94.60
    Momina et al. [38]       ResNet50                                       95.90
    Montoya et al. [39]      ResNet50                                       97.30
    Vu et al. [18]           ResNet50                                       96.53

    Method                   Ensemble System                                Acc. (%)
    Dheepak et al. [13]      SVM with various kernels                       93.00
    Siar et al. [21]         AlexNet, VGG-16, VGG-19, ResNet50              97.55
    Munira et al. [23]       23-layer CNN, Random Forest, SVM               96.52
    Bogacsovics et al. [20]  AlexNet, MobileNetv2, EfficientNet,
                             ShuffleNetv2                                   92.00
    Vu et al. [27]           KNN, SVM, CNN-MRI, ResNet50                    98.36
    Proposed Method          KNN, SVM, ResNet50, Xception, CNN-MRI,
                             DenseNet121, ResNet101                         99.46

TABLE VIII: Performance comparison of the proposed method with existing approaches on the Kaggle dataset.

    Method                   Single Classifier                              Acc. (%)
    Asiri et al. [40]        CNN                                            94.58
    Ishaq et al. [8]         EfficientNet                                   97.40
    Rasheed et al. [4]       CNN                                            97.85
    Rasheed et al. [5]       CNN                                            98.33
    Ramakrishna et al. [41]  EfficientNet                                   98.00

    Method                   Ensemble System                                Acc. (%)
    Roy et al. [26]          SVM, Random Forest, eXtreme Gradient
                             Boosting                                       98.15
    Guzmán et al. [42]       ResNet50, InceptionV3, InceptionResNetV2,
                             Xception, MobileNetV2, EfficientNetB0          97.12
    Bansal et al. [43]       CNN and SVM                                    98.00
    Ali et al. [44]          GoogleNet, ShuffleNet, NasNet-Mobile, LDA,
                             SVM, KNN                                       98.40
    Proposed Method          KNN, SVM, ResNet50, Xception, CNN-MRI,
                             DenseNet121, ResNet101                         99.85

V. CONCLUSION

This paper proposes an ensemble-based classification system that employs a weighted voting mechanism to enhance the accuracy and reliability of MRI-based brain tumor diagnosis. By integrating traditional machine learning models (KNN, SVM) with deep learning architectures (CNN-MRI, ResNet50, Xception, DenseNet121, and ResNet101), the proposed approach effectively leverages diverse feature extraction/representation techniques and classifiers. Experimental evaluations on the Figshare and Kaggle datasets confirm that this ensemble framework outperforms individual classifiers, demonstrating the advantages of model diversity in medical image analysis. The results highlight that optimizing classifier contributions through a weighted voting strategy significantly improves classification accuracy and robustness.
The optimized ensemble achieved 99.46% accuracy on the Figshare dataset and 99.85% on the Kaggle dataset, surpassing both standalone models and prior ensemble-based methods. Analysis of the weighting distribution emphasizes the importance of assigning greater influence to high-performing classifiers, particularly models trained on edge-enhanced images (ResNet50, Xception) and grayscale MRI scans (DenseNet121, ResNet101). However, challenges remain, particularly in distinguishing between glioma and meningioma, indicating the need for further refinement in weighting adaptation and feature extraction. Future work should explore dynamic weighting strategies and advanced feature engineering techniques to enhance classification precision across all tumor categories.

REFERENCES

[1] T. G. Debelee, S. R. Kebede, F. Schwenker, and Z. M. Shewarega, "Deep learning in selected cancers' image analysis—a survey," Journal of Imaging, vol. 6, no. 11, p. 121, 2020.
[2] Z. N. K. Swati, Q. Zhao, M. Kabir, F. Ali, Z. Ali, S. Ahmed, and J. Lu, "Brain tumor classification for MR images using transfer learning and fine-tuning," Computerized Medical Imaging and Graphics, 2019.
[3] M. S. I. Khan et al., "Accurate brain tumor detection using deep convolutional neural network," Computational and Structural Biotechnology Journal, vol. 20, 2022.
[4] Z. Rasheed, Y.-K. Ma, I. Ullah, Y. Y. Ghadi, M. Z. Khan, M. A. Khan, A. Abdusalomov, F. Alqahtani, and A. M. Shehata, "Brain tumor classification from MRI using image enhancement and convolutional neural network techniques," Brain Sciences, vol. 13, no. 9, 2023. [Online]. Available: https://www.mdpi.com/2076-3425/13/9/1320
[5] Z. Rasheed, Y.-K. Ma, I. Ullah, M. Al-Khasawneh, S. S. Almutairi, and M. Abohashrh, "Integrating convolutional neural networks with attention mechanisms for magnetic resonance imaging-based classification of brain tumors," Bioengineering.
[6] E. Irmak, "Multi-classification of brain tumor MRI images using deep convolutional neural network with fully optimized framework," Iranian Journal of Science and Technology, Transactions of Electrical Engineering, vol. 45, pp. 1015–1036, 2021.
[7] J. Cheng, W. Huang, S. Cao, R. Yang, W. Yang, Z. Yun, Z. Wang, and Q. Feng, "Enhanced performance of brain tumor classification via tumor region augmentation and partition," PLOS ONE, Oct. 2015.
[8] A. Ishaq, F. U. M. Ullah, P. Hamandawana, D.-J. Cho, and T.-S. Chung, "Improved EfficientNet architecture for multi-grade brain tumor detection," Electronics, vol. 14, no. 710, 2025.
[9] A. S. Farhan, M. Khalid, and U. Manzoor, "Brain tumour diagnostics and analysis (TDA): Segmentation, classification and interactive interface," Computational AI in Medicine, 2025.
[10] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[11] A. Mucherino, P. J. Papajorgji, and P. M. Pardalos, k-Nearest Neighbor Classification. New York, NY: Springer New York, 2009.
[12] M. Terayama, J. Shin, and W.-D. Chang, "Object detection using histogram of oriented gradients," Oct. 2009.
[13] G. Dheepak, J. A. Christaline, and D. Vaishali, "MEHW-SVM multi-kernel approach for improved brain tumour classification," IET Image Processing, 2023.
[14] T. Sadad, A. Rehman, A. Munir, T. Saba, U. Tariq, N. Ayesha, and R. Abbasi, "Brain tumor detection and multi-classification using advanced deep learning techniques," Microscopy Research and Technique, 2021.
[15] B. B. Pattanaik, K. Anitha, S. Rathore, P. Biswas, P. K. Sethy, and S. K. Behera, "Brain tumor magnetic resonance images classification based machine learning paradigms," Współczesna Onkologia, 2022.
[16] M. Havaei, P.-M. Jodoin, and H. Larochelle, "Efficient interactive brain tumor segmentation as within-brain kNN classification," in Proceedings of the 22nd International Conference on Pattern Recognition. IEEE, Aug. 2014.
[17] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986.
[18] H. A. Vu and S. Vajda, "Advancing brain tumor diagnosis: A hybrid approach using edge detection and deep learning," in Pattern Recognition, A. Antonacopoulos, S. Chaudhuri, R. Chellappa, C.-L. Liu, S. Bhattacharya, and U. Pal, Eds. Cham: Springer Nature Switzerland, 2025, pp. 226–241.
[19] H. A. Vu, "Integrating preprocessing methods and convolutional neural networks for effective tumor detection in medical imaging," 2024.
[20] G. Bogacsovics, B. Harangi, and A. Hajdu, "Developing diverse ensemble architectures for automatic brain tumor classification," Multimedia Tools and Applications, 2023.
[21] M. Siar and M. Teshnehlab, "A combination of feature extraction methods and deep learning for brain tumour classification," IET Image Processing, vol. 15, no. 7, pp. 1444–1456, 2021.
[22] B. Sterniczuk and M. Charytanowicz, "An ensemble transfer learning model for brain tumors classification using convolutional neural networks," Advances in Science and Technology – Research Journal, 2024.
[23] H. A. Munira and M. S. Islam, "Hybrid deep learning models for multi-classification of tumour from brain MRI," Journal of Information Systems Engineering and Business Intelligence, vol. 8, no. 2, pp. 162–174, 2022.
[24] X. Zhao, Y. Yuan, W. Wang, J. Chen, and F. Zhao, "An ensemble model based on voting system for brain tumor classification," in Journal of Physics: Conference Series, vol. 2024, 2021, p. 012010.
[25] A. B. Abdusalomov, M. Mukhiddinov, and T. K. Whangbo, "Brain tumor detection based on deep learning approaches and magnetic resonance imaging," Cancers, vol. 15, no. 16, 2023.
[26] P. Roy, F. M. S. Srijon, and P. Bhowmik, "An explainable ensemble approach for advanced brain tumor classification applying dual-GAN mechanism and feature extraction techniques over highly imbalanced data," PLOS ONE, Sep. 2024.
[27] H. A. Vu, K. Santosh, and S. Vajda, "An expert voting system for brain tumor classification using MRI images," in Proceedings of the Seventh International Conference on Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2024), Bhopal, India, Dec. 2024.
[28] L. J. Guo, "Balance contrast enhancement technique and its application in image colour composition," International Journal of Remote Sensing, vol. 12, no. 10, pp. 2133–2151, 1991.
[29] A. Patle and D. S. Chouhan, "SVM kernel functions for classification," in 2013 International Conference on Advances in Technology and Engineering (ICATE), 2013, pp. 1–9.
[30] H. Mzoughi, I. Njeh, M. B. Slima, N. Farhat, and C. Mhiri, "Deep transfer learning (DTL) based-framework for an accurate multi-classification of MRI brain tumors," 2023 International Conference on Cyberworlds (CW).
[31] M. M. Yapici, R. Karakis, and K. Gurkahraman, "Improving brain tumor classification with deep learning using synthetic data," Computers, Materials and Continua, 2023.
[32] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[33] Q. Zhang, "A novel ResNet101 model based on dense dilated convolution for image classification," SN Applied Sciences, 2021.
[34] N. Benzorgat, K. Xia, and M. N. E. Benzorgat, "Enhancing brain tumor MRI classification with an ensemble of deep learning models and transformer integration," PeerJ Computer Science, 2024.
[35] H. Jin, Q. Song, and X. Hu, "Auto-Keras: An efficient neural architecture search system," 2019.
[36] T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama, "Optuna: A next-generation hyperparameter optimization framework," 2019.
[37] J. Chaki, "Brain tumor MRI dataset," 2023.
[38] M. Masood, T. Nazir, M. Nawaz, A. Mehmood, J. Rashid, H.-Y. Kwon, T. Mahmood, and A. Hussain, "A novel deep learning method for recognition and classification of brain tumors from MRI images," Diagnostics.
[39] S. F. Álvarez Montoya, A. E. Rojas, and L. F. Niño Vásquez, "Classification of brain tumors: A comparative approach of shallow and deep neural networks," SN Computer Science, vol. 5, no. 142, 2024. [Online]. Available: https://doi.org/10.1007/s42979-023-02431-7
[40] A. A. Asiri, A. Shaf, T. Ali, M. Aamir, M. Irfan, and S. Alqahtani, "Enhancing brain tumor diagnosis: an optimized CNN hyperparameter model for improved accuracy and reliability," PeerJ Computer Science, 2024.
[41] M. T. Ramakrishna, K. Pothanaicker, P. Selvaraj, S. B. Khan, V. K. Venkatesan, S. Alzahrani, and M. Alojail, "Leveraging EfficientNetB3 in a deep learning framework for high-accuracy MRI tumor classification," CMC: Computers, Materials & Continua, 2024.
[42] M. A. Gómez-Guzmán, L. Jiménez-Beristaín, E. E. García-Guerrero, and López-Bonilla, "Classifying brain tumors on magnetic resonance imaging by using convolutional neural networks," Electronics, 2023.
[43] S. Bansal, R. S. Jadon, and S. K. Gupta, "A robust hybrid convolutional network for tumor classification using brain MRI image datasets," International Journal of Advanced Computer Science and Applications, 2024.
[44] R. Ali, S. Al-jumaili, A. Duru, O. Ucan, A. Boyaci, and D. Duru, "Classification of brain tumors using MRI images based on convolutional neural network and supervised machine learning algorithms," Oct. 2022.
