
Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding | PLOS ONE

PLOS ONE | Research Article
Published: August 1, 2024 (vol. 19, issue 8, e0306492) | DOI: 10.1371/journal.pone.0306492

Rasool Reddy Kamireddy (Department of ECE, NRI Institute of Technology (Autonomous), Vijayawada, India); Rajesh N. V. P. S. Kandala (School of Electronics Engineering (SENSE), VIT-AP University, Amaravati, Andhra Pradesh, India); Ravindra Dhuli (School of Electronics Engineering (SENSE), VIT-AP University, Amaravati, Andhra Pradesh, India); Srinivasu Polinati (Department of ECE, VIEW, Vishakhapatnam, Andhra Pradesh, India); Kamesh Sonti (Department of ECE, SVEC, Tadepalligudem, Andhra Pradesh, India); Ryszard Tadeusiewicz (Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, Krakow, Poland); Paweł Pławiak (Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Krakow, Poland; Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Gliwice, Poland)

Abstract

Brain tumor detection in clinical applications is a complex and challenging task due to the intricate structures of the human brain. Magnetic Resonance (MR) imaging is widely preferred for this purpose because of its ability to provide detailed images of soft tissue structures, including brain tissue, cerebrospinal fluid, and blood vessels. However, accurately detecting brain tumors in MR images remains an open problem because tumor characteristics such as intensity, texture, size, shape, and location vary widely. To address these issues, we propose a method that combines multi-level thresholding and Convolutional Neural Networks (CNN). First, we enhance the contrast of brain MR images using intensity transformations, which highlight the infected regions. Then, the proposed CNN architecture classifies the enhanced MR images into normal and abnormal categories. Finally, multi-level thresholding based on Tsallis entropy (TE) and differential evolution (DE) detects the tumor region(s) in the abnormal images, and morphological operations refine the result by minimizing distortions introduced by thresholding. Evaluated on the widely used Harvard Medical School (HMS) dataset, the method achieves promising performance: 99.5% classification accuracy and a 92.84% Dice similarity coefficient. Our approach outperforms existing state-of-the-art methods in brain tumor detection and automated disease diagnosis from MR images.

Subject areas: Magnetic resonance imaging; Cancers and neoplasms; Neuroimaging; Imaging techniques; Computer architecture; Malignant tumors; Entropy; Convolution
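The abstract attributes the first stage to intensity transformations without saying which mapping is used. The following is a minimal sketch, assuming a power-law (gamma) transform on 8-bit grayscale slices; the function name enhance_contrast and the default gamma value are illustrative, not taken from the paper:

import numpy as np

def enhance_contrast(image: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    # Power-law (gamma) intensity transformation: s = r ** gamma.
    # gamma < 1 brightens dark tissue, gamma > 1 suppresses it.
    norm = image.astype(np.float64) / 255.0        # assume 8-bit input
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

A gamma below 1 stretches the darker intensity range, which is one simple way to make low-contrast abnormal regions stand out before classification.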
name="citation_reference" content="citation_title=BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model;citation_author=M Toğaçar;citation_author=B Ergen;citation_author=Z Cömert;citation_journal_title=Medical hypotheses;citation_volume=134;citation_number=134;citation_first_page=109531;citation_publication_date=2020;"/> <meta name="citation_reference" content="citation_title=Classification of brain MRI using hyper column technique with convolutional neural network and feature selection method;citation_author=M Toğaçar;citation_author=Z Cömert;citation_author=B Ergen;citation_journal_title=Expert Systems with Applications;citation_volume=149;citation_number=149;citation_first_page=113274;citation_publication_date=2020;"/> <meta name="citation_reference" content="citation_title=Classification of magnetic resonance images for brain tumour detection;citation_author=Y Kurmi;citation_author=V Chaurasia;citation_journal_title=IET Image Processing;citation_volume=14;citation_number=14;citation_issue=12;citation_first_page=2808;citation_last_page=18;citation_publication_date=2020;"/> <meta name="citation_reference" content="citation_title=IDSS-based Two stage classification of brain tumor using SVM;citation_author=S Polepaka;citation_author=CS Rao;citation_author=M Chandra Mohan;citation_journal_title=Health and Technology;citation_volume=10;citation_number=10;citation_issue=1;citation_first_page=249;citation_last_page=58;citation_publication_date=2020;"/> <meta name="citation_reference" content="citation_title=Retracted article: computer-aided detection of brain tumor from magnetic resonance images using deep learning network;citation_author=MM Chanu;citation_author=K Thongam;citation_journal_title=Journal of Ambient Intelligence and Humanized Computing;citation_volume=12;citation_number=12;citation_issue=7;citation_first_page=6911;citation_last_page=22;citation_publication_date=2021;"/> <meta name="citation_reference" content="citation_title=Brain Tumor Classification of MRI Images Using Deep Convolutional Neural Network;citation_author=S Kuraparthi;citation_author=MK Reddy;citation_author=CN Sujatha;citation_author=H Valiveti;citation_author=C Duggineni;citation_author=M Kollati;citation_journal_title=Traitement du Signal;citation_volume=38;citation_number=38;citation_issue=4;citation_publication_date=2021;"/> <meta name="citation_reference" content="citation_title=A data constrained approach for brain tumour detection using fused deep features and SVM;citation_author=PK Sethy;citation_author=SK Behera;citation_journal_title=Multimedia Tools and Applications;citation_volume=80;citation_number=80;citation_issue=19;citation_first_page=28745;citation_last_page=60;citation_publication_date=2021;"/> <meta name="citation_reference" content="citation_title=Towards better segmentation of abnormal part in multimodal images using kernel possibilistic C means particle swarm optimization with morphological reconstruction filters: Combination of KFCM and PSO with morphological filters;citation_author=R Sumathi;citation_author=V Mandadi;citation_journal_title=International Journal of E-Health and Medical Communications (IJEHMC);citation_volume=12;citation_number=12;citation_issue=3;citation_first_page=55;citation_last_page=73;citation_publication_date=2021;"/> <meta name="citation_reference" content="citation_title=Segmenting and classifying MRI multimodal images using cuckoo search optimization and KNN classifier;citation_author=R 
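For the segmentation stage, the abstract pairs Tsallis entropy with differential evolution to select multi-level thresholds. A sketch of that idea, assuming the commonly used pseudo-additive form of the Tsallis criterion and SciPy's general-purpose differential_evolution optimizer; the entropic index q = 0.8 and the search bounds are illustrative choices:

import numpy as np
from scipy.optimize import differential_evolution

def tsallis_objective(thresholds, hist, q=0.8):
    # Sort candidate thresholds and bracket them with the histogram ends.
    t = np.sort(thresholds.astype(int))
    edges = np.concatenate(([0], t, [len(hist)]))
    entropies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = hist[lo:hi]
        P = p.sum()
        if P <= 0:                 # empty class contributes nothing
            entropies.append(0.0)
            continue
        w = p / P                  # within-class probabilities
        entropies.append((1.0 - np.sum(w ** q)) / (q - 1.0))
    entropies = np.array(entropies)
    # Pseudo-additive combination of the class entropies.
    total = entropies.sum() + (1.0 - q) * np.prod(entropies)
    return -total                  # DE minimizes, so negate

def multilevel_thresholds(image, n_thresholds=2, q=0.8, seed=0):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    hist = hist / hist.sum()       # normalize to probabilities
    bounds = [(1, 254)] * n_thresholds
    result = differential_evolution(tsallis_objective, bounds,
                                    args=(hist, q), seed=seed)
    return np.sort(result.x.astype(int))

# Usage: t = multilevel_thresholds(img, n_thresholds=2); the brightest
# class (img >= t[-1]) would serve as the candidate tumor mask.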
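The last two steps the abstract names are morphological clean-up of the thresholded mask and evaluation with the Dice similarity coefficient. A sketch of both using SciPy's binary morphology; the opening/closing sequence and the structuring-element size are assumptions, as the abstract says only "morphological operations":

import numpy as np
from scipy import ndimage

def refine_mask(mask: np.ndarray, size: int = 3) -> np.ndarray:
    # Remove speckle, then fill small gaps left by thresholding.
    se = np.ones((size, size), dtype=bool)   # square structuring element
    opened = ndimage.binary_opening(mask, structure=se)
    return ndimage.binary_closing(opened, structure=se)

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    # Dice similarity coefficient: 2*|A & B| / (|A| + |B|).
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

The reported 92.84% corresponds to this coefficient computed between the refined tumor mask and the ground-truth annotation.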
Array(t),n=0;n<t;++n)e[n]=arguments[n];return e}var u=t("ee"),d=t(33),p="nr@original",l=Object.prototype.hasOwnProperty,h=!1;e.exports=r,e.exports.wrapFunction=s,e.exports.wrapInPlace=c,e.exports.argsToArray=f},{}]},{},["loader",2,17,5,3,4]); ;NREUM.loader_config={accountID:"804283",trustKey:"804283",agentID:"402703674",licenseKey:"cf99e8d2a3",applicationID:"402703674"} ;NREUM.info={beacon:"bam.nr-data.net",errorBeacon:"bam.nr-data.net",licenseKey:"cf99e8d2a3", // Modified this value from the generated script, to pass prod vs dev applicationID: window.location.hostname.includes('journals.plos.org') ? "402703674" : "402694889", sa:1} </script> <!-- End New Relic --> <header> <div id="topslot" class="head-top"> <a id="skip-to-content" tabindex="0" class="button" href="#main-content"> Skip to main content </a> <div class="center"> <div class="title">Advertisement</div> <!-- DoubleClick Ad Zone --> <div class='advertisement' id='div-gpt-ad-1458247671871-0' style='width:728px; height:90px;'> <script type='text/javascript'> googletag.cmd.push(function() { googletag.display('div-gpt-ad-1458247671871-0'); }); </script> </div> </div> </div> <div id="user" class="nav" data-user-management-url="https://community.plos.org"> </div> <div id="pagehdr"> <nav class="nav-main"> <h1 class="logo"> <a href="/plosone/.">PLOS ONE</a> </h1> <section class="top-bar-section"> <ul class="nav-elements"> <li class="multi-col-parent menu-section-header has-dropdown" id="publish"> Publish <div class="dropdown mega "> <ul class="multi-col" id="publish-dropdown-list"> <li class="menu-section-header " id="submissions"> <span class="menu-section-header-title"> Submissions </span> <ul class="menu-section " id="submissions-dropdown-list"> <li> <a href="/plosone/s/getting-started" >Getting Started</a> </li> <li> <a href="/plosone/s/submission-guidelines" >Submission Guidelines</a> </li> <li> <a href="/plosone/s/figures" >Figures</a> </li> <li> <a href="/plosone/s/tables" >Tables</a> </li> <li> <a href="/plosone/s/supporting-information" >Supporting Information</a> </li> <li> <a href="/plosone/s/latex" >LaTeX</a> </li> <li> <a href="/plosone/s/what-we-publish" >What We Publish</a> </li> <li> <a href="/plosone/s/preprints" >Preprints</a> </li> <li> <a href="/plosone/s/revising-your-manuscript" >Revising Your Manuscript</a> </li> <li> <a href="/plosone/s/submit-now" >Submit Now</a> </li> <li> <a href="https://collections.plos.org/s/calls-for-papers" >Calls for Papers</a> </li> </ul> </li> <li class="menu-section-header " id="policies"> <span class="menu-section-header-title"> Policies </span> <ul class="menu-section " id="policies-dropdown-list"> <li> <a href="/plosone/s/best-practices-in-research-reporting" >Best Practices in Research Reporting</a> </li> <li> <a href="/plosone/s/human-subjects-research" >Human Subjects Research</a> </li> <li> <a href="/plosone/s/animal-research" >Animal Research</a> </li> <li> <a href="/plosone/s/competing-interests" >Competing Interests</a> </li> <li> <a href="/plosone/s/disclosure-of-funding-sources" >Disclosure of Funding Sources</a> </li> <li> <a href="/plosone/s/licenses-and-copyright" >Licenses and Copyright</a> </li> <li> <a href="/plosone/s/data-availability" >Data Availability</a> </li> <li> <a href="/plosone/s/complementary-research" >Complementary Research</a> </li> <li> <a href="/plosone/s/materials-software-and-code-sharing" >Materials, Software and Code Sharing</a> </li> <li> <a href="/plosone/s/ethical-publishing-practice" >Ethical Publishing Practice</a> </li> <li> <a 
href="/plosone/s/authorship" >Authorship</a> </li> <li> <a href="/plosone/s/corrections-expressions-of-concern-and-retractions" >Corrections, Expressions of Concern, and Retractions</a> </li> </ul> </li> <li class="menu-section-header " id="manuscript-review-and-publication"> <span class="menu-section-header-title"> Manuscript Review and Publication </span> <ul class="menu-section " id="manuscript-review-and-publication-dropdown-list"> <li> <a href="/plosone/s/criteria-for-publication" >Criteria for Publication</a> </li> <li> <a href="/plosone/s/editorial-and-peer-review-process" >Editorial and Peer Review Process</a> </li> <li> <a href="https://plos.org/resources/editor-center" >Editor Center</a> </li> <li> <a href="/plosone/s/resources-for-editors" >Resources for Editors</a> </li> <li> <a href="/plosone/s/reviewer-guidelines" >Guidelines for Reviewers</a> </li> <li> <a href="/plosone/s/accepted-manuscripts" >Accepted Manuscripts</a> </li> <li> <a href="/plosone/s/comments" >Comments</a> </li> </ul> </li> </ul> <div class="calloutcontainer"> <h3 class="callout-headline">Submit Your Manuscript</h3> <div class="action-contain"> <p class="callout-content"> Discover a faster, simpler path to publishing in a high-quality journal. <em>PLOS ONE</em> promises fair, rigorous peer review, broad scope, and wide readership – a perfect fit for your research every time. </p> <p class="button-contain special"> <a class="button button-default" href="/plosone/static/publish"> Learn More </a> <a class="button-link" href="https://www.editorialmanager.com/pone/default.asp"> Submit Now </a> </p> </div> <!-- opens in siteMenuCalloutDescription --> </div> </div> </li> <li class="menu-section-header has-dropdown " id="about"> <span class="menu-section-header-title"> About </span> <ul class="menu-section dropdown " id="about-dropdown-list"> <li> <a href="/plosone/static/publish" >Why Publish with PLOS ONE</a> </li> <li> <a href="/plosone/s/journal-information" >Journal Information</a> </li> <li> <a href="/plosone/s/staff-editors" >Staff Editors</a> </li> <li> <a href="/plosone/static/editorial-board" >Editorial Board</a> </li> <li> <a href="/plosone/s/section-editors" >Section Editors</a> </li> <li> <a href="/plosone/s/advisory-groups" >Advisory Groups</a> </li> <li> <a href="/plosone/s/find-and-read-articles" >Find and Read Articles</a> </li> <li> <a href="/plosone/s/publishing-information" >Publishing Information</a> </li> <li> <a href="https://plos.org/publication-fees" >Publication Fees</a> </li> <li> <a href="https://plos.org/press-and-media" >Press and Media</a> </li> <li> <a href="/plosone/s/contact" >Contact</a> </li> </ul> </li> <li data-js-tooltip-hover="trigger" class="subject-area menu-section-header"> Browse </li> <script src="/resource/js/vendor/jquery.hoverIntent.js" type="text/javascript"></script> <script src="/resource/js/components/menu_drop.js" type="text/javascript"></script> <script src="/resource/js/components/hover_delay.js" type="text/javascript"></script> <li id="navsearch" class="head-search"> <form name="searchForm" action="/plosone/search" method="get"> <fieldset> <legend>Search</legend> <label for="search">Search</label> <div class="search-contain"> <input id="search" type="text" name="q" placeholder="SEARCH" required/> <button id="headerSearchButton" type="submit" aria-label="Submit search"> <i title="Submit search" class="search-icon"></i> </button> </div> </fieldset> <input type="hidden" name="filterJournals" value="PLoSONE"/> </form> <a id="advSearch" href="/plosone/search"> 
advanced search </a> <script src="/resource/js/components/placeholder_style.js" type="text/javascript"></script> </li> </ul> </section> </nav> </div> </header> <section id="taxonomyContainer"> <script src="/resource/js/taxonomy-browser.js" type="text/javascript"></script> <div id="taxonomy-browser" class="areas" data-search-url="/plosone/browse"> <div class="wrapper"> <div class="taxonomy-header"> Browse Subject Areas <div id="subjInfo">?</div> <div id="subjInfoText"> <p>Click through the PLOS taxonomy to find articles in your field.</p> <p>For more information about PLOS Subject Areas, click <a href="https://github.com/PLOS/plos-thesaurus/blob/master/README.md" target="_blank" title="Link opens in new window">here</a>. </p> </div> </div> <div class="levels"> <div class="levels-container cf"> <div class="levels-position"></div> </div> <a href="#" class="prev"></a> <a href="#" class="next active"></a> </div> </div> <div class="taxonomy-browser-border-bottom"></div> </div> </section> <main id="main-content"> <div class="set-grid"> <header class="title-block"> <script src="/resource/js/components/signposts.js" type="text/javascript"></script> <ul id="almSignposts" class="signposts"> <li id="loadingMetrics"> <p>Loading metrics</p> </li> </ul> <script type="text/template" id="signpostsGeneralErrorTemplate"> <li id="metricsError">Article metrics are unavailable at this time. Please try again later.</li> </script> <script type="text/template" id="signpostsNewArticleErrorTemplate"> <li></li><li></li><li id="tooSoon">Article metrics are unavailable for recently published articles.</li> </script> <script type="text/template" id="signpostsTemplate"> <li id="almSaves"> <%= s.numberFormat(saveCount, 0) %> <div class="tools" data-js-tooltip-hover="trigger"> <a class="metric-term" href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#savedHeader">Save</a> <p class="saves-tip" data-js-tooltip-hover="target"><a href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#savedHeader">Total Mendeley and Citeulike bookmarks.</a></p> </div> </li> <li id="almCitations"> <%= s.numberFormat(citationCount, 0) %> <div class="tools" data-js-tooltip-hover="trigger"> <a class="metric-term" href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#citedHeader">Citation</a> <p class="citations-tip" data-js-tooltip-hover="target"><a href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#citedHeader">Paper's citation count computed by Dimensions.</a></p> </div> </li> <li id="almViews"> <%= s.numberFormat(viewCount, 0) %> <div class="tools" data-js-tooltip-hover="trigger"> <a class="metric-term" href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#viewedHeader">View</a> <p class="views-tip" data-js-tooltip-hover="target"><a href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#viewedHeader">PLOS views and downloads.</a></p> </div> </li> <li id="almShares"> <%= s.numberFormat(shareCount, 0) %> <div class="tools" data-js-tooltip-hover="trigger"> <a class="metric-term" href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#discussedHeader">Share</a> <p class="shares-tip" data-js-tooltip-hover="target"><a href="/plosone/article/metrics?id=10.1371/journal.pone.0306492#discussedHeader">Sum of Facebook, Twitter, Reddit and Wikipedia activity.</a></p> </div> </li> </script> <div class="article-meta"> <div class="classifications"> <p class="license-short" id="licenseShort">Open Access</p> <p class="peer-reviewed" id="peerReviewed">Peer-reviewed</p> <div class="article-type" > <p 
class="type-article" id="artType">Research Article</p> </div> </div> </div> <div class="article-title-etc"> <div class="title-authors"> <h1 id="artTitle"><?xml version="1.0" encoding="UTF-8"?>Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding</h1> <ul class="author-list clearfix" data-js-tooltip="tooltip_container" id="author-list"> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="0" class="author-name" > Rasool Reddy Kamireddy,</a> <div id="author-meta-0" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Conceptualization, Methodology, Writing – original draft </p> <p id="authAffiliations-0"><span class="type">Affiliation</span> Department of ECE, NRI Institute of Technology (Autonomous), Vijayawada, India </p> <div> <p class="orcid" id="authOrcid-0"> <span> <a id="connect-orcid-link" href="https://orcid.org/0000-0001-8256-6224" target="_blank" title="ORCID Registry"> <img id="orcid-id-logo" src="/resource/img/orcid_16x16.png" width="16" height="16" alt="ORCID logo"/> https://orcid.org/0000-0001-8256-6224 </a> </span> </p> </div> <a data-js-tooltip="tooltip_close" class="close" id="tooltipClose0"> &#x02A2F; </a> </div> </li> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="1" class="author-name" > Rajesh N. V. P. S. Kandala <span class="email"> </span>,</a> <div id="author-meta-1" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Formal analysis, Methodology, Validation, Writing – review & editing </p> <p id="authCorresponding-1"> <span class="email">* E-mail:</span> <a href="mailto:kandala.rajesh2014@gmail.com">kandala.rajesh2014@gmail.com</a>, <a href="mailto:rajesh.k@vitap.ac.in">rajesh.k@vitap.ac.in</a> (RNVPSK); <a href="mailto:plawiak@pk.edu.pl">plawiak@pk.edu.pl</a> (PP)</p> <p id="authAffiliations-1"><span class="type">Affiliation</span> School of Electronics Engineering (SENSE), VIT-AP University, Amaravati, Andhra Pradesh, India </p> <div> <p class="orcid" id="authOrcid-1"> <span> <a id="connect-orcid-link" href="https://orcid.org/0000-0003-3751-0453" target="_blank" title="ORCID Registry"> <img id="orcid-id-logo" src="/resource/img/orcid_16x16.png" width="16" height="16" alt="ORCID logo"/> https://orcid.org/0000-0003-3751-0453 </a> </span> </p> </div> <a data-js-tooltip="tooltip_close" class="close" id="tooltipClose1"> &#x02A2F; </a> </div> </li> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="2" class="author-name" > Ravindra Dhuli,</a> <div id="author-meta-2" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Formal analysis, Supervision, Writing – review & editing </p> <p id="authAffiliations-2"><span class="type">Affiliation</span> School of Electronics Engineering (SENSE), VIT-AP University, Amaravati, Andhra Pradesh, India </p> <a data-js-tooltip="tooltip_close" class="close" id="tooltipClose2"> &#x02A2F; </a> </div> </li> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="3" class="author-name" > Srinivasu Polinati,</a> <div id="author-meta-3" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Data curation, Visualization </p> <p id="authAffiliations-3"><span class="type">Affiliation</span> Department of ECE, VIEW, Vishakhapatnam, Andhra Pradesh, India </p> <a data-js-tooltip="tooltip_close" class="close" 
id="tooltipClose3"> &#x02A2F; </a> </div> </li> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="4" class="author-name" > Kamesh Sonti,</a> <div id="author-meta-4" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Software, Validation </p> <p id="authAffiliations-4"><span class="type">Affiliation</span> Department of ECE, SVEC, Tadepalligudem, Andhra Pradesh, India </p> <a data-js-tooltip="tooltip_close" class="close" id="tooltipClose4"> &#x02A2F; </a> </div> </li> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="5" class="author-name" > Ryszard Tadeusiewicz,</a> <div id="author-meta-5" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Investigation, Validation </p> <p id="authAffiliations-5"><span class="type">Affiliation</span> Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, Krakow, Poland </p> <a data-js-tooltip="tooltip_close" class="close" id="tooltipClose5"> &#x02A2F; </a> </div> </li> <li data-js-tooltip="tooltip_trigger" > <a data-author-id="6" class="author-name" > Paweł Pławiak <span class="email"> </span></a> <div id="author-meta-6" class="author-info" data-js-tooltip="tooltip_target"> <p class="roles" id="authRoles"> <span class="type">Roles</span> Investigation, Supervision </p> <p id="authCorresponding-6"> <span class="email">* E-mail:</span> <a href="mailto:kandala.rajesh2014@gmail.com">kandala.rajesh2014@gmail.com</a>, <a href="mailto:rajesh.k@vitap.ac.in">rajesh.k@vitap.ac.in</a> (RNVPSK); <a href="mailto:plawiak@pk.edu.pl">plawiak@pk.edu.pl</a> (PP)</p> <p id="authAffiliations-6"><span class="type">Affiliations</span> Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Krakow, Poland, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Gliwice, Poland </p> <a data-js-tooltip="tooltip_close" class="close" id="tooltipClose6"> &#x02A2F; </a> </div> </li> </ul> <script src="/resource/js/components/tooltip.js" type="text/javascript"></script> </div> <div id="floatTitleTop" data-js-floater="title_author" class="float-title" role="presentation"> <div class="set-grid"> <div class="float-title-inner"> <h1><?xml version="1.0" encoding="UTF-8"?>Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding</h1> <ul id="floatAuthorList" data-js-floater="floated_authors"> <li data-float-index="1">Rasool Reddy Kamireddy,&nbsp; </li> <li data-float-index="2">Rajesh N. V. P. S. 
Kandala,&nbsp; </li> <li data-float-index="3">Ravindra Dhuli,&nbsp; </li> <li data-float-index="4">Srinivasu Polinati,&nbsp; </li> <li data-float-index="5">Kamesh Sonti,&nbsp; </li> <li data-float-index="6">Ryszard Tadeusiewicz,&nbsp; </li> <li data-float-index="7">Paweł Pławiak </li> </ul> </div> <div class="logo-close" id="titleTopCloser"> <img src="/resource/img/logo-plos.png" style="height: 2em" alt="PLOS" /> <div class="close-floater" title="close">x</div> </div> </div> </div> <ul class="date-doi"> <li id="artPubDate">Published: August 1, 2024</li> <li id="artDoi"> <a href="https://doi.org/10.1371/journal.pone.0306492">https://doi.org/10.1371/journal.pone.0306492</a> </li> <li class="flex-spacer"></li> </ul> </div> <div> </div> </header> <section class="article-body"> <ul class="article-tabs"> <li class="tab-title active" id="tabArticle"> <a href="/plosone/article?id=10.1371/journal.pone.0306492" class="article-tab-1">Article</a> </li> <li class="tab-title " id="tabAuthors"> <a href="/plosone/article/authors?id=10.1371/journal.pone.0306492" class="article-tab-2">Authors</a> </li> <li class="tab-title " id="tabMetrics"> <a href="/plosone/article/metrics?id=10.1371/journal.pone.0306492" class="article-tab-3">Metrics</a> </li> <li class="tab-title " id="tabComments"> <a href="/plosone/article/comments?id=10.1371/journal.pone.0306492" class="article-tab-4">Comments</a> </li> <li class="tab-title" id="tabRelated"> <a class="article-tab-5" id="tabRelated-link">Media Coverage</a> <script>$(document).ready(function() { $.getMediaLink("10.1371/journal.pone.0306492").then(function (url) { $("#tabRelated-link").attr("href", url) } ) })</script> </li> </ul> <div class="article-container"> <div id="nav-article"> <ul class="nav-secondary"> <li class="nav-comments" id="nav-comments"> <a href="article/comments?id=10.1371/journal.pone.0306492">Reader Comments</a> </li> <li id="nav-figures"><a href="#" data-doi="10.1371/journal.pone.0306492">Figures</a></li> </ul> <div id="nav-data-linking" data-data-url=""> </div> </div> <script src="/resource/js/components/scroll.js" type="text/javascript"></script> <script src="/resource/js/components/nav_builder.js" type="text/javascript"></script> <script src="/resource/js/components/floating_nav.js" type="text/javascript"></script> <div id="figure-lightbox-container"></div> <script id="figure-lightbox-template" type="text/template"> <div id="figure-lightbox" class="reveal-modal full" data-reveal aria-hidden="true" role="dialog"> <div class="lb-header"> <h1 id="lb-title"><%= articleTitle %></h1> <div id="lb-authors"> <span>Rasool Reddy Kamireddy</span> <span>Rajesh N. V. P. S. 
src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t005" loading="lazy" alt="Table 5" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t006"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t006" loading="lazy" alt="Table 6" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t007"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t007" loading="lazy" alt="Table 7" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t008"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t008" loading="lazy" alt="Table 8" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t009"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t009" loading="lazy" alt="Table 9" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.g007"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.g007" loading="lazy" alt="Fig 7" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.g008"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.g008" loading="lazy" alt="Fig 8" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t010"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t010" loading="lazy" alt="Table 10" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t011"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t011" loading="lazy" alt="Table 11" /> </div> <div class="carousel-item lightbox-figure" data-doi="10.1371/journal.pone.0306492.t012"> <img src="/plosone/article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t012" loading="lazy" alt="Table 12" /> </div> </div> </div> <div class="carousel-control"> <span class="button previous"></span> <span class="button next"></span> </div> <div class="carousel-page-buttons"> </div> </div> </div> <script src="/resource/js/vendor/jquery.touchswipe.js" type="text/javascript"></script> <script src="/resource/js/components/figure_carousel.js" type="text/javascript"></script> <script src="/resource/js/vendor/jquery.dotdotdot.js" type="text/javascript"></script> <div class="article-text" id="artText"> <div xmlns:plos="http://plos.org" class="abstract toc-section abstract-type-"><a id="abstract0" name="abstract0" data-toc="abstract0" class="link-target" title="Abstract"></a><h2>Abstract</h2><div class="abstract-content"><a id="article1.front1.article-meta1.abstract1.p1" name="article1.front1.article-meta1.abstract1.p1" class="link-target"></a><p>Brain tumor detection in clinical applications is a complex and challenging task due to the intricate structures of the human brain. Magnetic Resonance (MR) imaging is widely preferred for this purpose because of its ability to provide detailed images of soft brain tissues, including brain tissue, cerebrospinal fluid, and blood vessels. However, accurately detecting brain tumors from MR images remains an open problem for researchers due to the variations in tumor characteristics such as intensity, texture, size, shape, and location. 
To address these issues, we propose a method that combines multi-level thresholding and Convolutional Neural Networks (CNN). Initially, we enhance the contrast of brain MR images using intensity transformations, which highlight the infected regions in the images. Then, we use the suggested CNN architecture to classify the enhanced MR images into normal and abnormal categories. Finally, we employ multi-level thresholding based on Tsallis entropy (TE) and differential evolution (DE) to detect tumor region(s) from the abnormal images. To refine the results, we apply morphological operations to minimize distortions caused by thresholding. The proposed method is evaluated using the widely used Harvard Medical School (HMS) dataset, and the results demonstrate promising performance with 99.5% classification accuracy and a 92.84% dice similarity coefficient. Our approach outperforms existing state-of-the-art methods in brain tumor detection and automated disease diagnosis from MR images.

Citation: Kamireddy RR, Kandala RNVPS, Dhuli R, Polinati S, Sonti K, Tadeusiewicz R, et al. (2024) Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding. PLoS ONE 19(8): e0306492. https://doi.org/10.1371/journal.pone.0306492

Editor: Toqeer Mahmood, National Textile University, PAKISTAN

Received: January 27, 2024; Accepted: June 18, 2024; Published: August 1, 2024

Copyright: © 2024 Kamireddy et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are available at the provided URL (http://www.med.harvard.edu/AANLIB/), and others are able to access these data in the same manner as the authors. The authors did not have any special access privileges that others would not have.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

1. Introduction

Medical imaging plays a crucial role in clinical settings, providing valuable assistance to radiologists in patient analysis. Within medical imaging, the accurate detection and classification of brain tumors are particularly challenging and of utmost importance. In recent decades, research in medical imaging has flourished across various disciplines, including health monitoring, mathematics, computer science, engineering, and medicine.

Brain tumors are characterized by the abnormal and uncontrolled growth of cells within or around the brain. They are classified as non-malignant (benign) or malignant.
Malignant tumors consist of cancerous cells that can spread to other parts of the body from their point of origin, whereas non-malignant tumors lack cancerous cells and do not metastasize. The World Health Organization (WHO) categorizes brain tumors into four grades, Grade I-IV. Grade I tumors are considered benign and typically pose a minimal threat. Grade II tumors are low-grade malignant tumors with a higher tendency to recur and progress to a higher grade. Grade III and IV tumors are classified as malignant and are generally more aggressive. Grade I tumors usually do not infiltrate nearby brain tissues, while Grade II tumors may occasionally spread into surrounding brain tissue. In contrast, Grade III and IV tumors are more likely to invade other brain tissues or even the spinal cord, making their treatment more challenging and posing significant risks to healthy brain tissues [1]. Therefore, early identification and accurate classification of such brain tumors are of paramount importance in medical imaging.

The advancement of medical imaging in brain tumor detection has been driven by the evolution of various scanning methods. Among these methods, Magnetic Resonance (MR) imaging has emerged as a widely utilized approach for examining brain images due to its exceptional ability to accurately differentiate soft tissues based on the characteristics of abnormal cells, including their location, shape, and size. Consequently, MR imaging plays a pivotal role in precise brain tumor detection [2]. Moreover, MR imaging offers the advantage of being non-invasive and provides multiple images with varying contrast visualizations of the same tissue, thereby furnishing radiologists with additional details during patient diagnosis.

Despite the benefits of MR imaging, clinical practitioners often face challenges, as manually delineating and classifying brain tumors from MR images accurately requires significant expertise and time. This manual process can introduce errors into the tumor detection procedure. To address these limitations, various research investigations have been conducted in recent years to identify and classify brain MR images more effectively, thereby minimizing these issues [3]. In the subsequent section, we review some notable works in this field and summarize the key challenges encountered in brain tumor detection using medical imaging.

2. Literature review

Several studies have proposed different approaches for brain tumor detection using medical imaging. An unsupervised learning method for identifying brain tumors and segmenting tissues in MR images is presented in [4]. The approach uses a clustering technique that combines self-organizing maps (SOM) and fuzzy K-means (FKM) algorithms to extract features from the image.
The identified features are then clustered into different classes based on the image intensity values. An improved automated brain tumor detection system was developed by Arunkumar et al. [5] using K-means clustering and artificial neural networks (ANN). To implement an automated tumor detection model, Lu et al. [6] combined transfer learning with AlexNet. Nagapattinam et al. [7] proposed an automated CAD approach for brain tumor segmentation using genetic and adaptive neuro-fuzzy inference system (ANFIS) techniques.

Using hyper-column methods and attention modules, Toaçar et al. [8] designed the BrainMRNet architecture. A deep-learning method that combines recursive feature elimination (RFE) and support vector machines (SVM) was introduced by Toaçar et al. [9]. An automatic method for brain tumor detection that makes use of support vector machines (SVMs) and multi-layer perceptrons (MLP) is presented in [10]. A two-stage classification framework for brain tumor identification using an SVM classifier is proposed in [11]. The first stage involves identifying tumor and non-tumor regions using an image processing technique called Image Difference of Smoothed Signals (IDSS). In the second stage, features extracted from the tumor regions are fed to the SVM classifier for classification into different tumor types.

An augmentation-based 2D convolutional neural network (CNN) system was proposed by Chanu et al. [12]. A framework for segmenting and classifying brain tumors using deep convolutional neural networks (DCNN) was presented by Kuraparthi et al. [13]. Sethy et al. [14] implemented a deep feature fusion technique to distinguish brain MR images using VGG-16, principal component analysis (PCA), and SVM. In [15], a method for segmenting abnormal regions in multimodal images by combining the kernel possibilistic C-means (KPCM) clustering algorithm, particle swarm optimization (PSO), and morphological reconstruction filters is proposed.

An automated tool for the early identification of brain tumors from multimodal MRI images is developed by combining the Cuckoo Search (CS) optimization algorithm with the K-nearest neighbor (KNN) classifier [16]. An improved multi-view fuzzy c-means clustering (IMV-FCM) algorithm is developed in [17]. The algorithm aims to overcome the limitations of traditional methods by using multiple views of the image to capture the complex features and details of the brain tissues.

The CNN framework was used by Amin et al. [18] to identify and categorize brain tumors.
[<a href="#pone.0306492.ref018" class="ref-tip">18</a>] to identify and categories brain tumors. Arpit Kumar Sharma et al. [<a href="#pone.0306492.ref019" class="ref-tip">19</a>] designed a technique based on the modified ResNet50 architecture and enhanced watershed (EWS) algorithm to distinguish between pathological and normal brain MR scans. In [<a href="#pone.0306492.ref020" class="ref-tip">20</a>, <a href="#pone.0306492.ref021" class="ref-tip">21</a>], new deep-learning-based algorithms for predicting MR-based brain tumors are presented. In [<a href="#pone.0306492.ref022" class="ref-tip">22</a>], a new approach for brain tumor segmentation from MRI images is presented. It combines the SOM and active contour model (ACM) techniques. The proposed approach, SOMACM, initializes the contour using SOM, which the ACM further refines. Wessam et al. [<a href="#pone.0306492.ref023" class="ref-tip">23</a>] designed a classification framework for brain tumors based on variational auto-encoders and CNNs. Remzan et al. [<a href="#pone.0306492.ref024" class="ref-tip">24</a>] developed a deep learning-based automatic tumor detection system.</p> <a id="article1.body1.sec2.p6" name="article1.body1.sec2.p6" class="link-target"></a><p>Rahman et al. [<a href="#pone.0306492.ref025" class="ref-tip">25</a>] proposed a parallel deep CNN (PDCNN) architecture to detect brain tumors in MRI images. Mohsen Ahmadi et al. [<a href="#pone.0306492.ref026" class="ref-tip">26</a>] implemented a CNN and robust principal component analysis (RPCA) based approach for the identification of brain lesion in MR images. Sarah Zuhair Kurdi et al. [<a href="#pone.0306492.ref027" class="ref-tip">27</a>] presented a meta-heuristic optimized CNN (MHO-CNN) architecture for the classification of brain MR images. Abdullah et al. [<a href="#pone.0306492.ref028" class="ref-tip">28</a>] suggested a ML-based approach the identification of brain tumors using MR images. Ghada Saad et al. [<a href="#pone.0306492.ref029" class="ref-tip">29</a>] developed a hybrid approach for the identification of brain abnormality from MR images using shape, and texture features. Jaber Alyami1et al. [<a href="#pone.0306492.ref030" class="ref-tip">30</a>] proposed a novel approach for the localization of brain tumors from MR images using VGG, slap swam approach (SSA), and cubic-based SVM classifier. 
<a href="#pone-0306492-t001">Table 1</a> illustrates the pros and cons of the existing studies.</p> <a class="link-target" id="pone-0306492-t001" name="pone-0306492-t001"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t001"><div class="img-box"><a title="Click for larger image" href="article/figure/image?size=medium&amp;id=10.1371/journal.pone.0306492.t001" data-doi="10.1371/journal.pone.0306492" data-uri="10.1371/journal.pone.0306492.t001"><img src="article/figure/image?size=inline&amp;id=10.1371/journal.pone.0306492.t001" alt="thumbnail" class="thumbnail" loading="lazy"></a><div class="expand"></div></div><div class="figure-inline-download"> Download: <ul><li><a href="article/figure/powerpoint?id=10.1371/journal.pone.0306492.t001"><div class="definition-label">PPT</div><div class="definition-description">PowerPoint slide</div></a></li><li><a href="article/figure/image?download&amp;size=large&amp;id=10.1371/journal.pone.0306492.t001"><div class="definition-label">PNG</div><div class="definition-description">larger image</div></a></li><li><a href="article/figure/image?download&amp;size=original&amp;id=10.1371/journal.pone.0306492.t001"><div class="definition-label">TIFF</div><div class="definition-description">original image</div></a></li></ul></div><div class="figcaption"><span>Table 1. </span> Pros and cons of the state-of-the-art methods.</div><p class="caption_target"></p><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t001"> https://doi.org/10.1371/journal.pone.0306492.t001</a></p></div> <div id="section1" class="section toc-section"><a id="sec003" name="sec003" class="link-target" title="2.1. Research gaps"></a> <h3>2.1. Research gaps</h3> <a id="article1.body1.sec2.sec1.p1" name="article1.body1.sec2.sec1.p1" class="link-target"></a><p>From the above-mentioned conventional brain tumor identification and classification approaches, we identify the following research gaps:</p> <ol class="order"> <li>While some approaches have employed traditional local texture feature extraction methods like local binary patterns (LBPs) [<a href="#pone.0306492.ref011" class="ref-tip">11</a>], these techniques are sensitive to illumination variations, random noise, and rotations. As a result, their robustness and accuracy in detecting brain tumors may be compromised.</li> <li>Certain studies [<a href="#pone.0306492.ref007" class="ref-tip">7</a>, <a href="#pone.0306492.ref010" class="ref-tip">10</a>] have utilized statistical texture features to differentiate between normal and abnormal brain MR images. However, they do not account for the spatial correlation between adjacent pixels, potentially limiting their ability to capture important spatial information relevant to accurate tumor detection.</li> <li>Many existing approaches neglect data augmentation [<a href="#pone.0306492.ref008" class="ref-tip">8</a>, <a href="#pone.0306492.ref009" class="ref-tip">9</a>, <a href="#pone.0306492.ref019" class="ref-tip">19</a>, <a href="#pone.0306492.ref029" class="ref-tip">29</a>], leading to lower classification accuracy. Machine learning and deep learning models often rely on a sufficient amount of data for optimal performance. 
4. Several studies have resorted to pre-trained CNN models like ResNet-50, VGG, and AlexNet [6, 9, 14, 20, 21, 23, 24]. However, employing these models requires a large number of parameters, resulting in increased computational complexity and potentially limiting their practicality in real-time applications.

To address these challenges and enhance brain tumor detection accuracy while mitigating overfitting and reducing computational complexity, we propose a novel framework. This approach incorporates advanced texture feature extraction that considers spatial correlations and integrates data augmentation techniques to boost classification performance. Additionally, our framework introduces strategies to optimize and streamline the training of deep learning models, ensuring efficient and accurate tumor detection in medical imaging. By bridging these research gaps, our approach aims to significantly improve brain tumor diagnosis and classification in clinical settings.

The article is structured as follows: Section 3 presents the proposed techniques. In Section 4, the outcomes of the segmentation and classification of the proposed and state-of-the-art frameworks are compared and analyzed. Section 5 highlights the major findings of the proposed framework. Finally, Section 6 presents the conclusion of the study.

3. Proposed methodology

Detecting and categorizing brain tumors through MRI images can be challenging because of the differences in the characteristics of tumors. In this study, we introduce an approach that utilizes CNN and multi-level thresholding to overcome this issue. Our proposed system comprises two components: classification and segmentation, as shown in Fig 1.
The subsequent sections provide a comprehensive overview of these phases.

Fig 1. The proposed brain tumor segmentation and classification approach.
https://doi.org/10.1371/journal.pone.0306492.g001

3.1. Classification phase

The brain MR image classification process consists of two stages. In the first stage, an intensity transformation operator is used to adjust the contrast of the images. In the second stage, image data augmentation techniques are applied to enhance the model performance by reducing overfitting.

3.1.1. Database.

To evaluate the performance of the proposed model, we collected 264 T2-weighted brain MR images with 256×256 resolution from a publicly available database, namely Harvard Medical School [31], which includes 194 abnormal and 70 normal subjects. However, this dataset may not be sufficient to build an efficient model. Therefore, we further employ the data augmentation described in Section 3.1.3.

3.1.2. Contrast enhancement.

MRI images of the brain often suffer from unwanted information or artifacts, which can occur during the scanning process. In some cases, artifacts present in brain MR images can make it challenging for radiologists to accurately identify or extract the region of interest, especially when abnormalities are present. To address this issue, we utilized an intensity transformation method to increase the contrast of the images and improve their overall quality. For this purpose, we used the MATLAB built-in function imadjust [32]. This function adjusts the contrast of an image by stretching its intensity values to cover the full dynamic range: it maps the input image’s intensities to new values spanning a specified range of pixel values in the output image. Here, we limit the low and high pixel values to 0.01 and 0.99, respectively, which improves the contrast of the brain MR images.
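As a rough illustration of this step (a minimal sketch, not the authors' exact script), the contrast stretch can be written in MATLAB as follows; the input file name is a hypothetical placeholder:

    % Contrast enhancement via intensity transformation (illustrative sketch).
    I = imread('brain_mr_slice.png');   % hypothetical 256x256 T2-weighted slice
    % Map input intensities in [0.01, 0.99] onto the full output range [0, 1],
    % saturating the darkest and brightest pixel values.
    J = imadjust(I, [0.01 0.99], []);
    % An alternative reading of the same limits: saturate the bottom and top 1%
    % of pixel values with stretchlim before remapping.
    % J = imadjust(I, stretchlim(I, [0.01 0.99]), []);
    imshowpair(I, J, 'montage');        % compare original and enhanced images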
</div> <div id="section3" class="section toc-section"><a id="sec008" name="sec008" class="link-target" title="3.1.3 Data augmentation"></a><h4>3.1.3 Data augmentation.</h4><a id="article1.body1.sec3.sec1.sec3.p1" name="article1.body1.sec3.sec1.sec3.p1" class="link-target"></a><p>CNN models depend heavily on the size and diversity of the training data to minimize overfitting. However, many application domains, especially medical image analysis, lack large and diverse datasets. Therefore, to enhance the performance of our proposed framework and minimize overfitting, we utilized image data augmentation techniques. These techniques apply geometric transformation operators, including scaling, translation, reflection, shearing, and rotation, to create a more diverse set of input data, using the configurations reported in [<a href="#pone.0306492.ref033" class="ref-tip">33</a>]. This process expanded the dataset described in Section 3.1.1 to 2376 brain MR images: 1746 abnormal and 630 healthy. After that, we employed the suggested CNN framework on these augmented images to detect abnormal MR images.</p>
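<p>A minimal MATLAB sketch of such a geometric augmentation pipeline is given below; the specific parameter ranges and folder layout are illustrative assumptions, since the exact configurations come from [<a href="#pone.0306492.ref033" class="ref-tip">33</a>]:</p> <pre><code>% Sketch of geometric data augmentation (ranges are assumed, not taken from [33]).
augmenter = imageDataAugmenter( ...
    'RandRotation',     [-15 15], ...    % rotation (degrees)
    'RandScale',        [0.9 1.1], ...   % scaling
    'RandXTranslation', [-10 10], ...    % translation (pixels)
    'RandYTranslation', [-10 10], ...
    'RandXReflection',  true, ...        % reflection
    'RandXShear',       [-10 10]);       % shearing (degrees)
imds  = imageDatastore('enhanced_images', 'IncludeSubfolders', true, ...
                       'LabelSource', 'foldernames');    % hypothetical folder layout
augds = augmentedImageDatastore([256 256], imds, 'DataAugmentation', augmenter);
</code></pre> <p>Note that such a datastore applies the transformations on the fly during training; to reproduce the fixed 2376-image set described above, the transformed images would instead be written to disk once.</p>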
</div> <div id="section4" class="section toc-section"><a id="sec009" name="sec009" class="link-target" title="3.1.4. The proposed CNN architecture"></a><h4>3.1.4. The proposed CNN architecture.</h4><a id="article1.body1.sec3.sec1.sec4.p1" name="article1.body1.sec3.sec1.sec4.p1" class="link-target"></a><p>CNNs are a class of neural network models that have proven effective in image classification and recognition. They usually include convolutional layers (which apply the convolution operation to the input data), activation layers (which introduce non-linearity), batch normalization layers (which enhance network stability), pooling layers (which down-sample the spatial dimensions of the data), fully connected layers (which follow the standard multi-layer perceptron architecture), and a softmax layer (which estimates class probabilities) [<a href="#pone.0306492.ref033" class="ref-tip">33</a>]. CNNs have some critical advantages over traditional neural networks [<a href="#pone.0306492.ref034" class="ref-tip">34</a>], such as:</p> <ul class="bulleted"> <li><strong>Parameter Sharing:</strong> CNNs use shared weights in the convolutional layers, which reduces the number of parameters to learn, mitigating overfitting and increasing efficiency.</li> <li><strong>Spatial Invariance:</strong> CNNs are translation invariant, meaning the network can detect the same feature anywhere in the input.</li> <li><strong>Downsampling:</strong> Pooling layers down-sample the spatial dimensions of the data, reducing the number of parameters to learn and the computational cost, and making the network less prone to overfitting.</li> </ul><a id="article1.body1.sec3.sec1.sec4.p2" name="article1.body1.sec3.sec1.sec4.p2" class="link-target"></a><p>Owing to these properties, CNNs have proven effective in image classification and computer vision tasks. Prior studies have developed conventional deep learning models such as ResNet-50, VGG, and AlexNet to detect abnormalities in brain MR images [<a href="#pone.0306492.ref006" class="ref-tip">6</a>, <a href="#pone.0306492.ref008" class="ref-tip">8</a>, <a href="#pone.0306492.ref012" class="ref-tip">12</a>, <a href="#pone.0306492.ref013" class="ref-tip">13</a>, <a href="#pone.0306492.ref015" class="ref-tip">15</a>, <a href="#pone.0306492.ref024" class="ref-tip">24</a>, <a href="#pone.0306492.ref025" class="ref-tip">25</a>]. However, these models require many parameters, leading to increased computational complexity. To alleviate this issue, we suggest a lightweight CNN architecture that reduces the number of trainable and non-trainable parameters, thereby reducing training time without compromising classification performance.</p> <a id="article1.body1.sec3.sec1.sec4.p3" name="article1.body1.sec3.sec1.sec4.p3" class="link-target"></a><p><a href="#pone-0306492-g002">Fig 2</a> represents the architecture of our lightweight CNN framework. It consists of three fundamental blocks, namely Convolutional, Identity, and Inception, shown in Figs <a href="#pone-0306492-g003">3</a>–<a href="#pone-0306492-g005">5</a>, in addition to zero padding, average pooling, and a softmax layer. Initially, zero padding was applied to preserve the characteristics of the image at the edges and control the dimensions of the output feature map. Then, a sequence of layers was employed to extract salient edge features: a 7×7 convolution layer with 32 filters, batch normalization, ReLU activation, and 3×3 max pooling with stride 2. After that, we used a series of convolutional, identity, and inception blocks with specified filters, namely F1, F2, F3, F4, F5, and F6.
The significance of these blocks is described in the subsequent sections.</p> <a class="link-target" id="pone-0306492-g002" name="pone-0306492-g002"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g002"><div class="figcaption"><span>Fig 2. </span> The suggested CNN framework.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g002">https://doi.org/10.1371/journal.pone.0306492.g002</a></p></div><a class="link-target" id="pone-0306492-g003" name="pone-0306492-g003"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g003"><div class="figcaption"><span>Fig 3. </span> Block diagram of the identity block.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g003">https://doi.org/10.1371/journal.pone.0306492.g003</a></p></div>
<a class="link-target" id="pone-0306492-g004" name="pone-0306492-g004"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g004"><div class="figcaption"><span>Fig 4. </span> Block diagram of the convolution block.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g004">https://doi.org/10.1371/journal.pone.0306492.g004</a></p></div><a class="link-target" id="pone-0306492-g005" name="pone-0306492-g005"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g005"><div class="figcaption"><span>Fig 5. </span> Block diagram of the inception block.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g005">https://doi.org/10.1371/journal.pone.0306492.g005</a></p></div><a id="article1.body1.sec3.sec1.sec4.p4" name="article1.body1.sec3.sec1.sec4.p4" class="link-target"></a><p><em>A</em>. <em>Identity block</em>. Identity networks, also known as identity-based networks or identity mappings, are deep neural network architectures that use identity maps to improve learning.
The primary goal of identity networks is to allow for deeper architectures that are easier to train and converge than traditional deep neural networks. This is achieved by adding identity maps as skip connections that bypass one or more layers and map inputs directly to outputs. The identity maps provide a direct path for gradients to flow during backpropagation, preventing the vanishing or exploding gradients problem and enabling the network to learn more effectively. The concept of identity networks was introduced in the ResNet (Residual Network) architecture, which demonstrated state-of-the-art performance in computer vision tasks. Since then, identity-based architectures have become popular and widely used in various fields, including computer vision, speech recognition, and natural language processing.</p> <a id="article1.body1.sec3.sec1.sec4.p5" name="article1.body1.sec3.sec1.sec4.p5" class="link-target"></a><p>The suggested identity network used in implementing our model is shown in <a href="#pone-0306492-g003">Fig 3</a>. It mainly consists of three convolution modules (two 1×1 and one 3×3) with filter sizes F1, F2, and F3, respectively, and each convolution module is preceded by a batch normalization and a ReLU activation layer. We employed three identity networks in the proposed model with F1 = 32, 64; F2 = 32, 64; F3 = 64, 256 (see the sketch at the end of this subsection).</p> <a id="article1.body1.sec3.sec1.sec4.p6" name="article1.body1.sec3.sec1.sec4.p6" class="link-target"></a><p><em>B</em>. <em>Convolution block</em>. The main motive of the proposed CNN framework is to achieve high accuracy at low computational cost. To meet this criterion, we introduced the ‘Convolution’ module into the presented architecture, represented in <a href="#pone-0306492-g004">Fig 4</a>. The suggested convolution module has four 1×1 convolution blocks with filters F1, F2, and F3, each followed by batch normalization and ReLU activation. The 1×1 convolution was originally utilized in [<a href="#pone.0306492.ref035" class="ref-tip">35</a>] for cross-channel pooling, and was later employed in modern architectures such as GoogLeNet, ResNet, SqueezeNet, and Inception-ResNet because it:</p> <ol class="order"> <li>Reduces the number of feature maps.</li> <li>Reduces the computational cost by shrinking the parameter map.</li> <li>Introduces non-linearity into the network.</li> <li>Creates smaller architectures that retain a high degree of accuracy.</li> </ol><a id="article1.body1.sec3.sec1.sec4.p7" name="article1.body1.sec3.sec1.sec4.p7" class="link-target"></a><p>In the suggested CNN architecture, we used two convolution blocks with F1 = 32, 64; F2 = 32, 64; and F3 = 64, 128, with stride s = 1 and 2.</p> <a id="article1.body1.sec3.sec1.sec4.p8" name="article1.body1.sec3.sec1.sec4.p8" class="link-target"></a><p>The proposed CNN model also includes a 5×5 average pooling layer with stride 3, a 1×1 convolutional layer with 64 filters, and a softmax layer. These layers were incorporated before the outcomes were passed to the segmentation phase, which identifies the affected region of pathological brain MR images.</p>
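<p>To make the block structure concrete, the following MATLAB (Deep Learning Toolbox) sketch assembles one identity block with the first filter setting (F1 = 32, F2 = 32, F3 = 64). The layer names, the assumed input size, and the exact pre-activation ordering are our assumptions based on the description above, not the authors’ code:</p> <pre><code>% Sketch of one identity block (F1 = 32, F2 = 32, F3 = 64); details are assumed.
layers = [
    imageInputLayer([64 64 64], 'Name','in', 'Normalization','none') % block input (assumed size)
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    convolution2dLayer(1, 32, 'Name','conv1x1_a')                    % F1
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')
    convolution2dLayer(3, 32, 'Padding','same', 'Name','conv3x3')    % F2
    batchNormalizationLayer('Name','bn3')
    reluLayer('Name','relu3')
    convolution2dLayer(1, 64, 'Name','conv1x1_b')                    % F3
    additionLayer(2, 'Name','add')];                                 % merges main path and skip
lgraph = layerGraph(layers);
% Identity skip connection: route the block input directly to the addition layer.
lgraph = connectLayers(lgraph, 'in', 'add/in2');
</code></pre> <p>A convolution block would follow the same pattern, except that the shortcut path also carries a 1×1 convolution (with stride 1 or 2), as is standard in ResNet-style designs.</p>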
<a id="article1.body1.sec3.sec1.sec4.p9" name="article1.body1.sec3.sec1.sec4.p9" class="link-target"></a><p><em>C</em>. <em>Inception block</em>. Inception is a type of deep neural network architecture introduced in 2014 by Google researchers. The name “Inception” refers to the architecture’s ability to examine multiple scales or “aspects” of the input data simultaneously, in parallel. This is achieved using multiple branches with different kernel sizes in the convolutional layers, allowing the network to capture features at different scales. The outputs of the branches are then concatenated to form a combined feature representation, which is passed on to the next layer. This design helps reduce overfitting, improves accuracy, and allows for more efficient use of computational resources. The Inception architecture has gained widespread popularity in computer vision, particularly for image classification tasks, and has had a significant impact on the development of deep learning models for a variety of applications.</p> <a id="article1.body1.sec3.sec1.sec4.p10" name="article1.body1.sec3.sec1.sec4.p10" class="link-target"></a><p>The proposed inception network utilized in the implementation of our model is shown in <a href="#pone-0306492-g005">Fig 5</a>. It mainly consists of six convolution modules (four 1×1, one 3×3, and one 5×5) with filter sizes F1, F2, F3, F4, F5, and F6, and each convolution module is followed by a ReLU activation layer. In our model, we applied two inception network architectures with F1 = 32, 64; F2 = 64, 64; F3 = 96, 128; F4 = 16, 16; F5 = 32, 64; and F6 = 32, 32. The entire flow of the proposed classification model is illustrated in Algorithm 1 below.</p>
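<p>Before turning to Algorithm 1, the sketch below assembles one inception block with the first filter setting (F1 = 32, F2 = 64, F3 = 96, F4 = 16, F5 = 32, F6 = 32). The mapping of filters to branches follows the classic GoogLeNet layout and is an assumption on our part:</p> <pre><code>% Sketch of one inception block; the branch/filter mapping is assumed (GoogLeNet-style).
lgraph = layerGraph(imageInputLayer([64 64 64], 'Name','in', 'Normalization','none'));

branch1 = [convolution2dLayer(1, 32, 'Padding','same', 'Name','b1_1x1')   % F1
           reluLayer('Name','b1_relu')];
branch2 = [convolution2dLayer(1, 64, 'Padding','same', 'Name','b2_1x1')   % F2 (reduction)
           reluLayer('Name','b2_relu1')
           convolution2dLayer(3, 96, 'Padding','same', 'Name','b2_3x3')   % F3
           reluLayer('Name','b2_relu2')];
branch3 = [convolution2dLayer(1, 16, 'Padding','same', 'Name','b3_1x1')   % F4 (reduction)
           reluLayer('Name','b3_relu1')
           convolution2dLayer(5, 32, 'Padding','same', 'Name','b3_5x5')   % F5
           reluLayer('Name','b3_relu2')];
branch4 = [maxPooling2dLayer(3, 'Stride',1, 'Padding','same', 'Name','b4_pool')
           convolution2dLayer(1, 32, 'Padding','same', 'Name','b4_1x1')   % F6
           reluLayer('Name','b4_relu')];

lgraph = addLayers(lgraph, branch1);  lgraph = addLayers(lgraph, branch2);
lgraph = addLayers(lgraph, branch3);  lgraph = addLayers(lgraph, branch4);
lgraph = addLayers(lgraph, depthConcatenationLayer(4, 'Name','concat'));

lgraph = connectLayers(lgraph, 'in', 'b1_1x1');
lgraph = connectLayers(lgraph, 'in', 'b2_1x1');
lgraph = connectLayers(lgraph, 'in', 'b3_1x1');
lgraph = connectLayers(lgraph, 'in', 'b4_pool');
lgraph = connectLayers(lgraph, 'b1_relu',  'concat/in1');
lgraph = connectLayers(lgraph, 'b2_relu2', 'concat/in2');
lgraph = connectLayers(lgraph, 'b3_relu2', 'concat/in3');
lgraph = connectLayers(lgraph, 'b4_relu',  'concat/in4');
</code></pre>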
<a id="article1.body1.sec3.sec1.sec4.p13" name="article1.body1.sec3.sec1.sec4.p13" class="link-target"></a><p><strong>Algorithm 1:</strong> MRI Classification and Tumor Segmentation</p> <a id="article1.body1.sec3.sec1.sec4.p14" name="article1.body1.sec3.sec1.sec4.p14" class="link-target"></a><p><strong>Input:</strong> Original T2-weighted MR images: original_images (194 abnormal, 70 normal)</p> <a id="article1.body1.sec3.sec1.sec4.p15" name="article1.body1.sec3.sec1.sec4.p15" class="link-target"></a><p><strong>Output:</strong> Classified images: classified_images (normal/abnormal labels)</p> <a id="article1.body1.sec3.sec1.sec4.p16" name="article1.body1.sec3.sec1.sec4.p16" class="link-target"></a><p><strong>Steps:</strong></p> <a id="article1.body1.sec3.sec1.sec4.p17" name="article1.body1.sec3.sec1.sec4.p17" class="link-target"></a><p> 1. <strong>Pre-processing:</strong> For each image in original_images:</p> <a id="article1.body1.sec3.sec1.sec4.p18" name="article1.body1.sec3.sec1.sec4.p18" class="link-target"></a><p>  • Apply the “imadjust” function to enhance contrast using the intensity limits [0.01, 0.99].</p> <a id="article1.body1.sec3.sec1.sec4.p19" name="article1.body1.sec3.sec1.sec4.p19" class="link-target"></a><p>  • Store the pre-processed images in preprocessed_images.</p> <a id="article1.body1.sec3.sec1.sec4.p20" name="article1.body1.sec3.sec1.sec4.p20" class="link-target"></a><p> 2. <strong>Data Augmentation:</strong></p> <a id="article1.body1.sec3.sec1.sec4.p21" name="article1.body1.sec3.sec1.sec4.p21" class="link-target"></a><p>  • Apply data augmentation techniques (e.g., flipping, scaling, and rotation) to preprocessed_images to create a larger dataset, augmented_images.</p> <a id="article1.body1.sec3.sec1.sec4.p22" name="article1.body1.sec3.sec1.sec4.p22" class="link-target"></a><p>  • Ensure augmented_images maintains a balanced class distribution (normal/abnormal).</p> <a id="article1.body1.sec3.sec1.sec4.p23" name="article1.body1.sec3.sec1.sec4.p23" class="link-target"></a><p> 3. <strong>CNN Model:</strong> Define a lightweight CNN architecture with:</p> <a id="article1.body1.sec3.sec1.sec4.p24" name="article1.body1.sec3.sec1.sec4.p24" class="link-target"></a><p>  • Convolutional blocks for feature extraction.</p> <a id="article1.body1.sec3.sec1.sec4.p25" name="article1.body1.sec3.sec1.sec4.p25" class="link-target"></a><p>  • Identity blocks for improved gradient flow.</p> <a id="article1.body1.sec3.sec1.sec4.p26" name="article1.body1.sec3.sec1.sec4.p26" class="link-target"></a><p>  • Inception blocks for efficient feature reuse.</p> <a id="article1.body1.sec3.sec1.sec4.p27" name="article1.body1.sec3.sec1.sec4.p27" class="link-target"></a><p>  • Zero padding to maintain image size during convolution.</p> <a id="article1.body1.sec3.sec1.sec4.p28" name="article1.body1.sec3.sec1.sec4.p28" class="link-target"></a><p>  • Average pooling for dimensionality reduction.</p> <a id="article1.body1.sec3.sec1.sec4.p29" name="article1.body1.sec3.sec1.sec4.p29" class="link-target"></a><p>  • A softmax layer for final classification (normal/abnormal).</p> <a id="article1.body1.sec3.sec1.sec4.p30" name="article1.body1.sec3.sec1.sec4.p30" class="link-target"></a><p>  • Train the CNN model on augmented_images with an appropriate loss function and optimizer.</p> <a id="article1.body1.sec3.sec1.sec4.p31" name="article1.body1.sec3.sec1.sec4.p31" class="link-target"></a><p> 4.
<strong>Classification:</strong></p> <a id="article1.body1.sec3.sec1.sec4.p32" name="article1.body1.sec3.sec1.sec4.p32" class="link-target"></a><p>  • Use the trained CNN model to classify each image in preprocessed_images.</p> <a id="article1.body1.sec3.sec1.sec4.p33" name="article1.body1.sec3.sec1.sec4.p33" class="link-target"></a><p>  • Store the predicted labels (normal/abnormal) in classified_images.</p> </div> </div> <div id="section2" class="section toc-section"><a id="sec010" name="sec010" class="link-target" title="3.2. Segmentation"></a> <h3>3.2. Segmentation</h3> <a id="article1.body1.sec3.sec2.p1" name="article1.body1.sec3.sec2.p1" class="link-target"></a><p>Segmentation is essential in various image-processing applications, such as medical imaging, content-based image retrieval, and computer vision. In medical imaging, it is imperative to identify the region of interest (ROI) in patients with brain-related diseases. In this study, our approach differentiates the normal and abnormal regions of brain MR images using thresholding and morphological operations.</p> <a id="article1.body1.sec3.sec2.p2" name="article1.body1.sec3.sec2.p2" class="link-target"></a><p>To accomplish this, we first pre-process the brain MR images and estimate the global threshold values using Tsallis entropy-based multi-level thresholding and differential evolution (DE). This allows us to differentiate the affected and unaffected areas of the image using the estimated threshold values. However, this process can leave imperfections in the thresholded image. To address this issue, we perform post-processing using morphological operations [<a href="#pone.0306492.ref032" class="ref-tip">32</a>, <a href="#pone.0306492.ref036" class="ref-tip">36</a>]. This ensures the accuracy of the segmentation results and allows us to identify the ROI for further analysis.</p> <div id="section1" class="section toc-section"><a id="sec011" name="sec011" class="link-target" title="3.2.1. Multi-level Tsallis entropy"></a><h4>3.2.1. Multi-level Tsallis entropy.</h4><a id="article1.body1.sec3.sec2.sec1.p1" name="article1.body1.sec3.sec2.sec1.p1" class="link-target"></a><p>In the context of image processing, entropy captures the precise variations contained in an image. Let <em>J</em> be an input image with dimensions <em>U</em> × <em>V</em>, and let <em>H</em> = (<em>h</em><sub>1</sub>, <em>h</em><sub>2</sub>, <em>h</em><sub>3</sub>…, <em>h</em><sub><em>m</em></sub>) denote the corresponding normalized histogram of <em>J</em>, where <span class="inline-formula"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e001" loading="lazy" class="inline-graphic"></span>; <em>r</em><sub><em>m</em></sub> is the number of occurrences of gray level <em>m</em>, and <em>L</em> is the total number of gray levels of <em>J</em> (usually 256 for 8-bit images).
Now, we partition the image into <em>m</em>+1 classes using <em>m</em> thresholds (<em>T</em>) and then estimate the Tsallis entropy [<a href="#pone.0306492.ref037" class="ref-tip">37</a>] for each partition using <a href="#pone.0306492.e002">Eq (1)</a>: <a name="pone.0306492.e002" id="pone.0306492.e002" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e002" loading="lazy" class="inline-graphic"><span class="note">(1)</span></span> where γ denotes the entropic index, a real-valued parameter.</p> <a name="pone.0306492.e003" id="pone.0306492.e003" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e003" loading="lazy" class="inline-graphic"><span class="note">(2)</span></span><a id="article1.body1.sec3.sec2.sec1.p2" name="article1.body1.sec3.sec2.sec1.p2" class="link-target"></a><p>To attain the optimal threshold values <em>T</em><sub><em>opt</em></sub>, the total Tsallis entropy function must be maximized as follows: <a name="pone.0306492.e004" id="pone.0306492.e004" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e004" loading="lazy" class="inline-graphic"><span class="note">(3)</span></span></p> <a id="article1.body1.sec3.sec2.sec1.p3" name="article1.body1.sec3.sec2.sec1.p3" class="link-target"></a><p>The above task (<a href="#pone.0306492.e004">Eq (3)</a>) is carried out by a population-based meta-heuristic global optimization approach known as Differential Evolution [<a href="#pone.0306492.ref038" class="ref-tip">38</a>]. It is simpler and more efficient than other evolutionary frameworks such as the genetic algorithm (GA) [<a href="#pone.0306492.ref039" class="ref-tip">39</a>] and particle swarm optimization (PSO) [<a href="#pone.0306492.ref040" class="ref-tip">40</a>]. Here, the entropy function is maximized by iteratively improving candidate solutions through an evolutionary procedure. The whole process of DE is illustrated in <a href="#pone-0306492-g006">Fig 6</a>. After thresholding, to refine the outcome, we employed post-processing based on mathematical morphology.</p> <a class="link-target" id="pone-0306492-g006" name="pone-0306492-g006"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g006"><div class="figcaption"><span>Fig 6. </span> Flow diagram of the differential evolution.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g006">https://doi.org/10.1371/journal.pone.0306492.g006</a></p></div>
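<p>A minimal MATLAB sketch of the resulting objective function is given below. It treats the total entropy as the sum of the per-class Tsallis entropies; the pseudo-additive combination term of Eq (2) and the exact normalization are simplifications on our part:</p> <pre><code>function f = tsallisObjective(t, h, gamma)
% Negative total Tsallis entropy of the classes induced by thresholds t.
% h     : normalized image histogram (sums to 1)
% gamma : entropic index (real-valued, assumes gamma ~= 1)
% Simplified sketch: per-class entropies are summed; the pseudo-additive
% cross term of Eq (2) is omitted.
t = sort(max(min(round(t), numel(h) - 1), 1));
edges = [1, t(:)' + 1, numel(h) + 1];           % class boundaries over histogram bins
f = 0;
for k = 1:numel(edges) - 1
    p = h(edges(k):edges(k+1) - 1);
    w = sum(p);
    if w > 0
        p = p / w;                               % class-conditional distribution
        f = f + (1 - sum(p.^gamma)) / (gamma - 1);   % Tsallis entropy of class k
    end
end
f = -f;                                          % the DE driver below minimizes, so negate
end
</code></pre>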
</div> <div id="section2" class="section toc-section"><a id="sec012" name="sec012" class="link-target" title="3.2.2. Post-processing"></a><h4>3.2.2. Post-processing.</h4><a id="article1.body1.sec3.sec2.sec2.p1" name="article1.body1.sec3.sec2.sec2.p1" class="link-target"></a><p>Post-processing is an essential step in the segmentation process to remove imperfections that occur in the thresholded image. It involves morphological operations such as erosion and dilation, taking into account the shape and boundary area of the tumor. By performing these operations on the thresholded image, we can improve the accuracy of tumor detection in brain MR images. A disk-shaped structuring element with a radius of ten is typically used for this purpose. Algorithm 2 outlines the proposed segmentation algorithm.</p> <a id="article1.body1.sec3.sec2.sec2.p2" name="article1.body1.sec3.sec2.sec2.p2" class="link-target"></a><p><strong>Algorithm 2:</strong> The proposed segmentation approach</p> <a id="article1.body1.sec3.sec2.sec2.p3" name="article1.body1.sec3.sec2.sec2.p3" class="link-target"></a><p> 1. Read the enhanced brain MR image, <em>J</em>.</p> <a id="article1.body1.sec3.sec2.sec2.p4" name="article1.body1.sec3.sec2.sec2.p4" class="link-target"></a><p> 2. Perform multi-level Tsallis entropy thresholding using the process outlined in Section 3.2.1.</p> <a id="article1.body1.sec3.sec2.sec2.p5" name="article1.body1.sec3.sec2.sec2.p5" class="link-target"></a><p> 3. Employ the DE approach to optimize the entropy function, with the following parameters:</p> <a id="article1.body1.sec3.sec2.sec2.p6" name="article1.body1.sec3.sec2.sec2.p6" class="link-target"></a><p>  Number of thresholds = 6,</p> <a id="article1.body1.sec3.sec2.sec2.p7" name="article1.body1.sec3.sec2.sec2.p7" class="link-target"></a><p>  Optimization parameters (<em>D</em>) = 12,</p> <a id="article1.body1.sec3.sec2.sec2.p8" name="article1.body1.sec3.sec2.sec2.p8" class="link-target"></a><p>  Population size (<em>NP</em>) = 10 × D,</p> <a id="article1.body1.sec3.sec2.sec2.p9" name="article1.body1.sec3.sec2.sec2.p9" class="link-target"></a><p>  Weighting factor (<em>F</em>) = 0.5,</p> <a id="article1.body1.sec3.sec2.sec2.p10" name="article1.body1.sec3.sec2.sec2.p10" class="link-target"></a><p>  Cross-over probability (<em>CR</em>) = 0.9.</p> <a id="article1.body1.sec3.sec2.sec2.p11" name="article1.body1.sec3.sec2.sec2.p11" class="link-target"></a><p> 4. To obtain an appropriate threshold value (<em>T</em><sub><em>a</em></sub>), take the mean of the three largest threshold values.</p> <a id="article1.body1.sec3.sec2.sec2.p12" name="article1.body1.sec3.sec2.sec2.p12" class="link-target"></a><p> 5. Obtain the segmented image by binarizing with the threshold attained in step 4.</p> <a id="article1.body1.sec3.sec2.sec2.p13" name="article1.body1.sec3.sec2.sec2.p13" class="link-target"></a><p> 6. Finally, apply the post-processing described in Section 3.2.2 to refine the segmentation.</p>
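<p>The following MATLAB sketch strings Algorithm 2 together end to end, reusing the objective function above. Only the DE parameters are taken from the algorithm itself; the entropic index, the generation count, the threshold normalization, and the morphological sequence (opening followed by closing) are our assumptions:</p> <pre><code>% Sketch of Algorithm 2 (assumed details noted inline).
J = imadjust(im2double(imread('abnormal_slice.png')), [0.01 0.99], [0 1]);
h = imhist(J) / numel(J);                  % normalized histogram
gamma = 0.8;                               % entropic index (assumed value)

% --- DE/rand/1/bin with the parameters from Algorithm 2 ---
D = 12; NP = 10*D; F = 0.5; CR = 0.9; G = 100;   % G (generations) assumed
lb = 2; ub = numel(h) - 1;
pop = lb + (ub - lb) * rand(NP, D);
fit = zeros(NP, 1);
for i = 1:NP, fit(i) = tsallisObjective(pop(i,:), h, gamma); end
for g = 1:G
    for i = 1:NP
        r = randperm(NP, 3);                            % donors (may include i; fine for a sketch)
        v = pop(r(1),:) + F * (pop(r(2),:) - pop(r(3),:));   % mutation
        v = min(max(v, lb), ub);
        u = pop(i,:); jr = randi(D);
        mask = rand(1, D) &lt; CR; mask(jr) = true;
        u(mask) = v(mask);                              % binomial crossover
        fu = tsallisObjective(u, h, gamma);
        if fu &lt;= fit(i), pop(i,:) = u; fit(i) = fu; end      % greedy selection
    end
end
[~, b] = min(fit);
t  = sort(pop(b,:), 'descend');
Ta = mean(t(1:3)) / numel(h);              % step 4: mean of the three largest thresholds

% --- Steps 5 and 6: binarization and morphological refinement ---
bw = imbinarize(J, Ta);
se = strel('disk', 10);                    % disk-shaped structuring element, radius 10
bw = imclose(imopen(bw, se), se);          % erosion/dilation pair (assumed order)
</code></pre>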
</div> </div> <div id="section3" class="section toc-section"><a id="sec013" name="sec013" class="link-target" title="3.3. Evaluation measures"></a> <h3>3.3. Evaluation measures</h3> <a id="article1.body1.sec3.sec3.p1" name="article1.body1.sec3.sec3.p1" class="link-target"></a><p>The presented framework is assessed through the following metrics [<a href="#pone.0306492.ref041" class="ref-tip">41</a>]: <a name="pone.0306492.e005" id="pone.0306492.e005" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e005" loading="lazy" class="inline-graphic"><span class="note">(4)</span></span> <a name="pone.0306492.e006" id="pone.0306492.e006" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e006" loading="lazy" class="inline-graphic"><span class="note">(5)</span></span> <a name="pone.0306492.e007" id="pone.0306492.e007" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e007" loading="lazy" class="inline-graphic"><span class="note">(6)</span></span> <a name="pone.0306492.e008" id="pone.0306492.e008" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e008" loading="lazy" class="inline-graphic"><span class="note">(7)</span></span> <a name="pone.0306492.e009" id="pone.0306492.e009" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e009" loading="lazy" class="inline-graphic"><span class="note">(8)</span></span> <a name="pone.0306492.e010" id="pone.0306492.e010" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e010" loading="lazy" class="inline-graphic"><span class="note">(9)</span></span> <a name="pone.0306492.e011" id="pone.0306492.e011" class="link-target"></a><span class="equation"><img src="article/file?type=thumbnail&amp;id=10.1371/journal.pone.0306492.e011" loading="lazy" class="inline-graphic"><span class="note">(10)</span></span> where <em>T</em> = segmented image; <em>T</em><sub><em>G</em></sub> = ground truth; <em>TP</em> = True Positive; <em>TN</em> = True Negative; <em>FP</em> = False Positive; and <em>FN</em> = False Negative.</p>
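<p>Since Eqs (4)–(10) are rendered as images above, the following MATLAB sketch restates the conventional definitions of the confusion-matrix metrics and the Dice similarity coefficient (DSC); the function name is ours, and we assume the paper's equations match these textbook forms:</p> <pre><code>function m = evalMetrics(T, TG)
% Confusion-matrix metrics for a binary segmentation/classification outcome.
% T : predicted binary mask (or label vector); TG : ground truth.
TP = nnz( T &amp;  TG);   FP = nnz( T &amp; ~TG);
TN = nnz(~T &amp; ~TG);   FN = nnz(~T &amp;  TG);
m.TPR      = TP / (TP + FN);                       % sensitivity / recall
m.TNR      = TN / (TN + FP);                       % specificity
m.PPV      = TP / (TP + FP);                       % precision
m.FScore   = 2 * m.PPV * m.TPR / (m.PPV + m.TPR);
m.Accuracy = (TP + TN) / (TP + TN + FP + FN);
m.DSC      = 2 * TP / (2 * TP + FP + FN);          % Dice similarity coefficient
end
</code></pre>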
</div> </div> <div xmlns:plos="http://plos.org" id="section4" class="section toc-section"><a id="sec014" name="sec014" data-toc="sec014" class="link-target" title="4. Experimental results"></a><h2>4. Experimental results</h2><a id="article1.body1.sec4.p1" name="article1.body1.sec4.p1" class="link-target"></a><p>The simulation results of the suggested approach are presented in this section. To test the reliability of our framework, we conducted extensive simulations based on K-fold cross-validation (K-FCV). Generally, K-FCV is a simple and effective technique for reducing overfitting compared to other validation strategies [<a href="#pone.0306492.ref042" class="ref-tip">42</a>]. However, the choice of K is a crucial part of the validation process: a smaller K yields a model with low variance and high bias, whereas a significantly larger K makes the model prone to overfitting. Hence, we set K to 5 as a compromise between reliability and randomness. For ease of comprehension, the simulation results are subdivided into two parts: the first identifies the abnormality of brain MR images, and the second deals with the segmentation of pathological brain MR images.</p> <div id="section1" class="section toc-section"><a id="sec015" name="sec015" class="link-target" title="4.1. Identification of brain abnormality"></a> <h3>4.1. Identification of brain abnormality</h3> <a id="article1.body1.sec4.sec1.p1" name="article1.body1.sec4.sec1.p1" class="link-target"></a><p>We used the suggested CNN framework to predict brain MR image abnormality. Our model was trained on contrast-enhanced, augmented images, allowing it to automatically learn relevant edge details through its hidden layers and backpropagation. To optimize the training process, we utilized a batch size of 64 and trained for 30 epochs. We also experimented with various optimizers to minimize the loss, including SGDM [<a href="#pone.0306492.ref043" class="ref-tip">43</a>], AdaMax [<a href="#pone.0306492.ref044" class="ref-tip">44</a>], Adam [<a href="#pone.0306492.ref044" class="ref-tip">44</a>], Adagrad [<a href="#pone.0306492.ref045" class="ref-tip">45</a>], Adadelta [<a href="#pone.0306492.ref046" class="ref-tip">46</a>], RMSProp [<a href="#pone.0306492.ref047" class="ref-tip">47</a>], and Nadam [<a href="#pone.0306492.ref048" class="ref-tip">48</a>]; the specific hyperparameters for each are listed in <a href="#pone-0306492-t002">Table 2</a>.</p> <a class="link-target" id="pone-0306492-t002" name="pone-0306492-t002"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t002"><div class="figcaption"><span>Table 2. </span> Various optimization algorithms’ parameters considered in this work.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t002">https://doi.org/10.1371/journal.pone.0306492.t002</a></p></div>
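<p>A sketch of the corresponding training call is shown below. Apart from the batch size, epoch count, and optimizer family, every option (learning rate, shuffling, and the lgraph variable holding the full network) is an assumption rather than a value from Table 2:</p> <pre><code>% Training sketch (batch size and epochs from the text; other values assumed).
opts = trainingOptions('adam', ...
    'MiniBatchSize',    64, ...
    'MaxEpochs',        30, ...
    'InitialLearnRate', 1e-3, ...        % assumed; Table 2 lists the actual values
    'Shuffle',          'every-epoch', ...
    'Plots',            'training-progress');
% augds is the augmented datastore from Section 3.1.3; lgraph is the full
% layer graph assembled from the blocks sketched in Section 3.1.4.
net = trainNetwork(augds, lgraph, opts);
</code></pre>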
<a id="article1.body1.sec4.sec1.p2" name="article1.body1.sec4.sec1.p2" class="link-target"></a><p>Tables <a href="#pone-0306492-t003">3</a>–<a href="#pone-0306492-t009">9</a> display the performance of our strategy under the various optimizers. It is evident from these tables that Adadelta underperforms the other optimizers, particularly in identifying healthy brain MR images, owing to the drastic decrease in its learning rate in the later stages of training. Adagrad and RMSProp outperformed Adadelta, reaching about 95% accuracy, but this level is not acceptable for clinical diagnosis. On the other hand, SGDM, Adam, AdaMax, and Nadam produced relatively high accuracy, averaging around 99%. Among these, Adam and Nadam improved the proposed technique’s performance most by effectively minimizing the loss function, achieving 99.66% TPR, 99.2% TNR, 99.71% PPV, 99.52% F-Score, 99.42% AUC, and 99.5% accuracy on the proposed CNN architecture. This is primarily because they slow down when converging toward local minima and reduce high variance.</p> <a class="link-target" id="pone-0306492-t003" name="pone-0306492-t003"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t003"><div class="figcaption"><span>Table 3. </span> Performance measures of the proposed CNN architecture: SGDM optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t003">https://doi.org/10.1371/journal.pone.0306492.t003</a></p></div><a class="link-target" id="pone-0306492-t004" name="pone-0306492-t004"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t004"><div class="figcaption"><span>Table 4. </span> Performance measures of the proposed CNN architecture: Adam optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t004">https://doi.org/10.1371/journal.pone.0306492.t004</a></p></div>
<a class="link-target" id="pone-0306492-t005" name="pone-0306492-t005"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t005"><div class="figcaption"><span>Table 5. </span> Performance measures of the proposed CNN architecture: Adamax optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t005">https://doi.org/10.1371/journal.pone.0306492.t005</a></p></div><a class="link-target" id="pone-0306492-t006" name="pone-0306492-t006"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t006"><div class="figcaption"><span>Table 6. </span> Performance measures of the proposed CNN architecture: Adadelta optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t006">https://doi.org/10.1371/journal.pone.0306492.t006</a></p></div>
<a class="link-target" id="pone-0306492-t007" name="pone-0306492-t007"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t007"><div class="figcaption"><span>Table 7. </span> Performance measures of the proposed CNN architecture: Adagrad optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t007">https://doi.org/10.1371/journal.pone.0306492.t007</a></p></div><a class="link-target" id="pone-0306492-t008" name="pone-0306492-t008"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t008"><div class="figcaption"><span>Table 8. </span> Performance measures of the proposed CNN architecture: Nadam optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t008">https://doi.org/10.1371/journal.pone.0306492.t008</a></p></div>
<a class="link-target" id="pone-0306492-t009" name="pone-0306492-t009"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t009"><div class="figcaption"><span>Table 9. </span> Performance measures of the proposed CNN architecture: RMSProp optimization.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t009">https://doi.org/10.1371/journal.pone.0306492.t009</a></p></div></div> <div id="section2" class="section toc-section"><a id="sec016" name="sec016" class="link-target" title="4.2. Segmentation of abnormal brain MR images"></a> <h3>4.2. Segmentation of abnormal brain MR images</h3> <a id="article1.body1.sec4.sec2.p1" name="article1.body1.sec4.sec2.p1" class="link-target"></a><p>Brain tumor segmentation from MR images aims to differentiate diseased tissue from healthy tissue at the pixel level. To this end, we propose a Tsallis entropy and DE-based multi-level thresholding approach. Initially, the image is partitioned into six classes (class 1, class 2, …, class 6) using randomly initialized threshold values t<sub>1</sub>, t<sub>2</sub>, …, t<sub>6</sub> (t<sub>1</sub> &lt; t<sub>2</sub> &lt; … &lt; t<sub>6</sub>). Afterward, we calculate the Tsallis entropy of each class and then estimate the optimal threshold values with the help of DE. The results of the suggested segmentation model on single and multiple tumors are illustrated in Figs <a href="#pone-0306492-g007">7</a> and <a href="#pone-0306492-g008">8</a>.
Further, to test the impact of the proposed segmentation, we collected 25 infected brain tumor images from the database mentioned in Section 3.1.1; the corresponding results are shown in <a href="#pone-0306492-t010">Table 10</a>, with the highest attained measures highlighted in boldface.</p> <a class="link-target" id="pone-0306492-g007" name="pone-0306492-g007"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g007"><div class="figcaption"><span>Fig 7. </span> Results of the applied segmentation technique for detecting a single tumor, shown in four sets of images.</div><p class="caption_target"><a id="article1.body1.sec4.sec2.fig1.caption1.p1" name="article1.body1.sec4.sec2.fig1.caption1.p1" class="link-target"></a><strong>(a)–(d)</strong> original input images; <strong>(e)–(h)</strong> images after contrast enhancement through intensity transformation to improve tumor visibility; <strong>(i)–(l)</strong> segmentation using multi-level thresholding to distinguish tumor regions from healthy tissue; <strong>(m)–(p)</strong> refined segmented images.</p><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g007">https://doi.org/10.1371/journal.pone.0306492.g007</a></p></div><a class="link-target" id="pone-0306492-g008" name="pone-0306492-g008"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.g008"><div class="figcaption"><span>Fig 8. </span> Results of the applied segmentation technique for detecting multiple tumors, shown in four sets of images.</div><p class="caption_target"><a id="article1.body1.sec4.sec2.fig2.caption1.p1" name="article1.body1.sec4.sec2.fig2.caption1.p1" class="link-target"></a>(a)–(d) original input images; (e)–(h) images after contrast enhancement through intensity transformation to improve tumor visibility; (i)–(l) segmentation using multi-level thresholding to distinguish tumor regions from healthy tissue; (m)–(p) refined segmented images.</p><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.g008">https://doi.org/10.1371/journal.pone.0306492.g008</a></p></div>
<a class="link-target" id="pone-0306492-t010" name="pone-0306492-t010"></a><div class="figure" data-doi="10.1371/journal.pone.0306492.t010"><div class="figcaption"><span>Table 10. </span> Evaluation metrics of the suggested segmentation strategy.</div><p class="caption_object"><a href="https://doi.org/10.1371/journal.pone.0306492.t010">https://doi.org/10.1371/journal.pone.0306492.t010</a></p></div></div> </div> <div xmlns:plos="http://plos.org" id="section5" class="section toc-section"><a id="sec017" name="sec017" data-toc="sec017" class="link-target" title="5. Discussion"></a><h2>5. Discussion</h2><a id="article1.body1.sec5.p1" name="article1.body1.sec5.p1" class="link-target"></a><p>Detecting brain tumors from MR images is a difficult and intricate task, mainly due to the impact of noise, limited data, and the movement of structures within the brain. To resolve these issues, researchers have introduced various methodologies (see Section 2) based on the fundamental steps involved in machine learning; however, these have several shortcomings (see Section 2.1). Hence, in this work, we presented a distinctive methodology for the early diagnosis of brain tumors using a CNN and multi-level image thresholding. <a href="#pone-0306492-t011">Table 11</a> compares the classification performance of the suggested model with the other state-of-the-art approaches discussed in Section 2.
5. Discussion

Detecting brain tumors from MR images is a difficult and intricate task, mainly due to noise, limited data, and the movement of structures within the brain. To resolve these issues, researchers have introduced various methodologies (refer to Section 2) based on the fundamental steps of machine learning, but these approaches have several shortcomings (refer to Section 2.1). Hence, in this work we presented a distinctive methodology for early diagnosis of brain tumors using a CNN and multi-level image thresholding. Table 11 compares the classification performance of the suggested model with the state-of-the-art approaches discussed in Section 2. From this, we note that the suggested diagnosis model attained approximately 1.2% higher accuracy than conventional pre-trained CNN architectures [6, 9, 14, 20, 21, 23, 24] and other deep learning frameworks. Even this modest improvement matters in brain image analysis, since a brain tumor is a life-threatening disease. The significant merits of the presented technique are:

1. Our model directly reads the structure of brain MR images, extracts the hidden details needed to identify abnormal patients, and minimizes human intervention.
2. It requires fewer parameters than pre-trained CNN models: approximately 553,794 (about 0.55 million), of which 549,890 are trainable and 3,904 non-trainable (see the sketch after Table 11).
3. It mitigates over-fitting and improves generalization by initializing the convolutional-layer weights with 'He weight initialization'.
4. It achieves high classification accuracy thanks to image augmentation.
5. It effectively extracts hidden texture details without human intervention.

Table 11. Classification performance of the implemented approach and state-of-the-art works.
https://doi.org/10.1371/journal.pone.0306492.t011
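To make merits 2 and 3 concrete, here is a minimal sketch of a lightweight CNN with He weight initialization in Keras. Only the initializer and the rough parameter budget (about 0.55 million) come from the paper; the specific layer sizes, and Keras itself, are our illustrative assumptions rather than the authors' exact architecture.

```python
# Hedged sketch: a small normal-vs-abnormal classifier whose conv and dense
# kernels use He-normal initialization (merit 3); model.summary() reports
# the trainable / non-trainable parameter split quoted in merit 2.
from tensorflow import keras
from tensorflow.keras import layers

def build_lightweight_cnn(input_shape=(224, 224, 1), n_classes=2):
    he = "he_normal"  # built-in alias for He-normal initialization
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", kernel_initializer=he),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", kernel_initializer=he),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", kernel_initializer=he),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax", kernel_initializer=he),
    ])

model = build_lightweight_cnn()
model.summary()
```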
Similarly, when comparing our method with existing segmentation models, we found a 3% increase in DSC value (Table 12). In medical imaging, this improvement is crucial because it directly improves the reliability of the diagnostic tool. The main motives behind the success of the suggested segmentation model are as follows:

1. Tsallis entropy can analyze non-extensive details.
2. Tsallis entropy takes the correlation between sub-samples into account through its pseudo-additivity property.
3. Differential evolution requires little parameter tuning and converges quickly and accurately toward a global optimum (see the sketch after Table 12).

Table 12. Segmentation performance of the implemented model and state-of-the-art works.
https://doi.org/10.1371/journal.pone.0306492.t012
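A minimal sketch of the segmentation core under stated assumptions: the thresholds are chosen to maximize the summed Tsallis entropy of the intensity regions they define [37], and SciPy's differential_evolution searches for them [38]. The entropic index q = 0.8, the 256-bin histogram, the bounds, and the use of a plain sum (rather than the pseudo-additive combination) in the objective are all illustrative choices; the paper's exact objective and DE settings may differ.

```python
# Hedged sketch: Tsallis-entropy multi-level thresholding optimized by
# differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

def tsallis_objective(thresholds, hist, q=0.8):
    """Negative total Tsallis entropy of the regions cut by `thresholds`."""
    cuts = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        p = hist[lo:hi]
        mass = p.sum()
        if hi <= lo or mass <= 0:
            return np.inf                     # degenerate split; penalize
        pn = p / mass                         # in-region probabilities
        total += (1.0 - np.sum(pn ** q)) / (q - 1.0)
    return -total                             # DE minimizes, so negate

def tsallis_de_thresholds(image, levels=2, q=0.8, seed=0):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    result = differential_evolution(
        tsallis_objective, bounds=[(1, 254)] * levels,
        args=(hist / hist.sum(), q), seed=seed,
    )
    return sorted(int(t) for t in result.x)

img = np.random.randint(0, 256, (128, 128))   # stand-in MR slice
print(tsallis_de_thresholds(img, levels=2))
```

DE is a reasonable fit here because the search space is low-dimensional and the histogram-based objective is cheap to evaluate, which supports motive 3 above.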
5.1. Advantages

Major advantages of the proposed model over the existing approaches discussed above are as follows:

- Efficiency: Lightweight CNN architectures require less training time and fewer computational resources than complex models, making them suitable for deployment on devices with limited processing power.
- Improved Classification Accuracy: A CNN can achieve higher accuracy in classifying normal and abnormal MR images than traditional machine learning methods, giving a more reliable identification of potential tumor cases.
- Better Tumor Segmentation: The Tsallis entropy and DE-based multi-level thresholding approach offers a clear method for segmenting the tumor region at the pixel level. This provides a detailed map of the diseased tissue, aiding treatment planning and surgical procedures.
- Potential for Automation: By automating classification and segmentation, this methodology can significantly reduce the workload of radiologists and improve the efficiency of brain tumor diagnosis.

5.2. Limitations

Every approach has pros and cons; here we list a few limitations of the presented framework:

- Sensitivity to Training Data: Lightweight CNNs can be sensitive to the quality and size of the training dataset. Limited or biased data may hurt the model's generalization.
- Requirement for Pre-processing: The Tsallis entropy and DE-based approach may require specific pre-processing steps for optimal performance, adding complexity to the overall pipeline.
- Potential for Inaccuracy: Thresholding techniques can be susceptible to noise and artifacts in MR images, leading to inaccurate segmentation boundaries.

5.3. Future scope

To address the issues above, we plan to extend this work in the following directions:

- Deep Learning Integration: Explore integrating the Tsallis entropy and DE-based approach with deep learning architectures for potentially improved segmentation accuracy and robustness to noise.
- Real-time Applications: Investigate ways to optimize the methodology for real-time settings, such as image-guided surgery or intra-operative tumor identification.
- Explainable AI: Explore techniques for making the CNN model more interpretable, helping healthcare professionals understand the rationale behind its predictions and build trust in its results.
- Generalizability Studies: Evaluate the generalizability of the methodology on diverse datasets with different types of brain tumors and image acquisition protocols.
- Incorporation of Clinical Data: Incorporate additional clinical data (e.g., patient history) into the model to potentially improve the accuracy of both classification and segmentation.
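Before the conclusion, one note on scoring: the segmentation results in Table 10 and the 0.93 DSC quoted below rest on the Dice similarity coefficient. Here is a minimal sketch of the standard computation on binary masks; the function name and conventions are ours, not the paper's.

```python
# Hedged sketch: Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # 1.0 for two empty masks
```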
6. Conclusion

In this study, we developed a new methodology to distinguish between normal and abnormal MR images and to identify the infected areas in brain tumor images. Preprocessing is used primarily to reduce the effects of unwanted artifacts introduced while capturing MR images. We then applied image augmentation based on geometric transformations to improve the predictive model's performance. Further, we extracted the hidden texture details from the augmented images and classified them as normal or abnormal with the suggested CNN architecture. Finally, MR images of the pathological brain were subjected to multi-level thresholding to isolate the region of interest. The empirical findings show that, compared with existing approaches, the proposed methodology classifies brain MR images as normal or abnormal with 99.5% accuracy and effectively identifies the location of affected regions with a 0.93 DSC. Hence, the presented technique can serve as a powerful tool for MR-based brain tumor classification and identification. In the future, we would like to extend this work to 3-dimensional brain MR images.

Acknowledgments

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the author(s) used ChatGPT 3.0 (partially) to improve the article's style for a technical readership. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

References

1. Louis DN, Ohgaki H, Wiestler OD, Cavenee WK, Burger PC, Jouvet A, et al. The 2007 WHO classification of tumours of the central nervous system. Acta Neuropathologica. 2007;114:97–109. pmid:17618441
2. Kasban H, El-Bendary MA, Salama DH. A comparative study of medical imaging techniques. International Journal of Information Science and Intelligent System. 2015;4(2):37–58.
data-title="A%20comparative%20study%20of%20medical%20imaging%20techniques" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=A+comparative+study+of+medical+imaging+techniques+Kasban+2015" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref3"><span class="order">3. </span><a name="pone.0306492.ref003" id="pone.0306492.ref003" class="link-target"></a>Biratu ES, Schwenker F, Ayano YM, Debelee TG. A survey of brain tumor segmentation and classification algorithms. Journal of Imaging. 2021 Sep 6;7(9):179. pmid:34564105 <ul class="reflinks" data-doi="10.3390/jimaging7090179"><li><a href="https://doi.org/10.3390/jimaging7090179" data-author="doi-provided" data-cit="doi-provided" data-title="doi-provided" target="_new" title="Go to article"> View Article </a></li><li><a href="http://www.ncbi.nlm.nih.gov/pubmed/34564105" target="_new" title="Go to article in PubMed"> PubMed/NCBI </a></li><li><a href="http://scholar.google.com/scholar?q=A+survey+of+brain+tumor+segmentation+and+classification+algorithms+Biratu+2021" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref4"><span class="order">4. </span><a name="pone.0306492.ref004" id="pone.0306492.ref004" class="link-target"></a>Vishnuvarthanan G, Rajasekaran MP, Subbaraj P, Vishnuvarthanan A. An unsupervised learning method with a clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images. Applied Soft Computing. 2016 Jan 1;38:190–212. <ul class="reflinks"><li><a href="#" data-author="Vishnuvarthanan" data-cit="VishnuvarthananG%2C%20RajasekaranMP%2C%20SubbarajP%2C%20VishnuvarthananA.%20An%20unsupervised%20learning%20method%20with%20a%20clustering%20approach%20for%20tumor%20identification%20and%20tissue%20segmentation%20in%20magnetic%20resonance%20brain%20images.%20Applied%20Soft%20Computing.%202016%20Jan%201%3B38%3A190%E2%80%93212." data-title="An%20unsupervised%20learning%20method%20with%20a%20clustering%20approach%20for%20tumor%20identification%20and%20tissue%20segmentation%20in%20magnetic%20resonance%20brain%20images" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=An+unsupervised+learning+method+with+a+clustering+approach+for+tumor+identification+and+tissue+segmentation+in+magnetic+resonance+brain+images+Vishnuvarthanan+2016" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref5"><span class="order">5. </span><a name="pone.0306492.ref005" id="pone.0306492.ref005" class="link-target"></a>Arunkumar N, Mohammed MA, Abd Ghani MK, Ibrahim DA, Abdulhay E, Ramirez-Gonzalez G, de Albuquerque VH. K-means clustering and neural network for object detecting and identifying abnormality of brain tumor. Soft Computing. 2019 Oct;23:9083–96. <ul class="reflinks"><li><a href="#" data-author="Arunkumar" data-cit="ArunkumarN%2C%20MohammedMA%2C%20Abd%20GhaniMK%2C%20IbrahimDA%2C%20AbdulhayE%2C%20Ramirez-GonzalezG%2C%20de%20AlbuquerqueVH.%20K-means%20clustering%20and%20neural%20network%20for%20object%20detecting%20and%20identifying%20abnormality%20of%20brain%20tumor.%20Soft%20Computing.%202019%20Oct%3B23%3A9083%E2%80%9396." 
data-title="K-means%20clustering%20and%20neural%20network%20for%20object%20detecting%20and%20identifying%20abnormality%20of%20brain%20tumor" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=K-means+clustering+and+neural+network+for+object+detecting+and+identifying+abnormality+of+brain+tumor+Arunkumar+2019" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref6"><span class="order">6. </span><a name="pone.0306492.ref006" id="pone.0306492.ref006" class="link-target"></a>Lu S, Lu Z, Zhang YD. Pathological brain detection based on AlexNet and transfer learning. Journal of computational science. 2019 Jan 1;30:41–7. <ul class="reflinks"><li><a href="#" data-author="Lu" data-cit="LuS%2C%20LuZ%2C%20ZhangYD.%20Pathological%20brain%20detection%20based%20on%20AlexNet%20and%20transfer%20learning.%20Journal%20of%20computational%20science.%202019%20Jan%201%3B30%3A41%E2%80%937." data-title="Pathological%20brain%20detection%20based%20on%20AlexNet%20and%20transfer%20learning" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Pathological+brain+detection+based+on+AlexNet+and+transfer+learning+Lu+2019" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref7"><span class="order">7. </span><a name="pone.0306492.ref007" id="pone.0306492.ref007" class="link-target"></a>Nagarathinam E, Ponnuchamy T. Image registration‐based brain tumor detection and segmentation using ANFIS classification approach. International Journal of Imaging Systems and Technology. 2019 Dec;29(4):510–7. <ul class="reflinks"><li><a href="#" data-author="Nagarathinam" data-cit="NagarathinamE%2C%20PonnuchamyT.%20Image%20registration%E2%80%90based%20brain%20tumor%20detection%20and%20segmentation%20using%20ANFIS%20classification%20approach.%20International%20Journal%20of%20Imaging%20Systems%20and%20Technology.%202019%20Dec%3B29%284%29%3A510%E2%80%937." data-title="Image%20registration%E2%80%90based%20brain%20tumor%20detection%20and%20segmentation%20using%20ANFIS%20classification%20approach" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Image+registration%E2%80%90based+brain+tumor+detection+and+segmentation+using+ANFIS+classification+approach+Nagarathinam+2019" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref8"><span class="order">8. </span><a name="pone.0306492.ref008" id="pone.0306492.ref008" class="link-target"></a>Toğaçar M, Ergen B, Cömert Z. BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Medical hypotheses. 2020 Jan 1;134:109531. 
9. Toğaçar M, Cömert Z, Ergen B. Classification of brain MRI using hyper column technique with convolutional neural network and feature selection method. Expert Systems with Applications. 2020;149:113274.
10. Kurmi Y, Chaurasia V. Classification of magnetic resonance images for brain tumour detection. IET Image Processing. 2020;14(12):2808–18.
11. Polepaka S, Rao CS, Chandra Mohan M. IDSS-based two stage classification of brain tumor using SVM. Health and Technology. 2020;10(1):249–58.
data-title="IDSS-based%20Two%20stage%20classification%20of%20brain%20tumor%20using%20SVM" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=IDSS-based+Two+stage+classification+of+brain+tumor+using+SVM+Polepaka+2020" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref12"><span class="order">12. </span><a name="pone.0306492.ref012" id="pone.0306492.ref012" class="link-target"></a>Chanu MM, Thongam K. Retracted article: computer-aided detection of brain tumor from magnetic resonance images using deep learning network. Journal of Ambient Intelligence and Humanized Computing. 2021 Jul;12(7):6911–22. <ul class="reflinks"><li><a href="#" data-author="Chanu" data-cit="ChanuMM%2C%20ThongamK.%20Retracted%20article%3A%20computer-aided%20detection%20of%20brain%20tumor%20from%20magnetic%20resonance%20images%20using%20deep%20learning%20network.%20Journal%20of%20Ambient%20Intelligence%20and%20Humanized%20Computing.%202021%20Jul%3B12%287%29%3A6911%E2%80%9322." data-title="Retracted%20article%3A%20computer-aided%20detection%20of%20brain%20tumor%20from%20magnetic%20resonance%20images%20using%20deep%20learning%20network" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Retracted+article%3A+computer-aided+detection+of+brain+tumor+from+magnetic+resonance+images+using+deep+learning+network+Chanu+2021" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref13"><span class="order">13. </span><a name="pone.0306492.ref013" id="pone.0306492.ref013" class="link-target"></a>Kuraparthi S, Reddy MK, Sujatha CN, Valiveti H, Duggineni C, Kollati M, et al. Brain Tumor Classification of MRI Images Using Deep Convolutional Neural Network. Traitement du Signal. 2021 Aug 1;38(4). <ul class="reflinks"><li><a href="#" data-author="Kuraparthi" data-cit="KuraparthiS%2C%20ReddyMK%2C%20SujathaCN%2C%20ValivetiH%2C%20DuggineniC%2C%20KollatiM%2C%20et%20al.%20Brain%20Tumor%20Classification%20of%20MRI%20Images%20Using%20Deep%20Convolutional%20Neural%20Network.%20Traitement%20du%20Signal.%202021%20Aug%201%3B38%284%29." data-title="Brain%20Tumor%20Classification%20of%20MRI%20Images%20Using%20Deep%20Convolutional%20Neural%20Network" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Brain+Tumor+Classification+of+MRI+Images+Using+Deep+Convolutional+Neural+Network+Kuraparthi+2021" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref14"><span class="order">14. </span><a name="pone.0306492.ref014" id="pone.0306492.ref014" class="link-target"></a>Sethy PK, Behera SK. A data constrained approach for brain tumour detection using fused deep features and SVM. Multimedia Tools and Applications. 2021 Aug;80(19):28745–60. <ul class="reflinks"><li><a href="#" data-author="Sethy" data-cit="SethyPK%2C%20BeheraSK.%20A%20data%20constrained%20approach%20for%20brain%20tumour%20detection%20using%20fused%20deep%20features%20and%20SVM.%20Multimedia%20Tools%20and%20Applications.%202021%20Aug%3B80%2819%29%3A28745%E2%80%9360." 
data-title="A%20data%20constrained%20approach%20for%20brain%20tumour%20detection%20using%20fused%20deep%20features%20and%20SVM" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=A+data+constrained+approach+for+brain+tumour+detection+using+fused+deep+features+and+SVM+Sethy+2021" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref15"><span class="order">15. </span><a name="pone.0306492.ref015" id="pone.0306492.ref015" class="link-target"></a>Sumathi R, Mandadi V. Towards better segmentation of abnormal part in multimodal images using kernel possibilistic C means particle swarm optimization with morphological reconstruction filters: Combination of KFCM and PSO with morphological filters. International Journal of E-Health and Medical Communications (IJEHMC). 2021 May 1;12(3):55–73. <ul class="reflinks"><li><a href="#" data-author="Sumathi" data-cit="SumathiR%2C%20MandadiV.%20Towards%20better%20segmentation%20of%20abnormal%20part%20in%20multimodal%20images%20using%20kernel%20possibilistic%20C%20means%20particle%20swarm%20optimization%20with%20morphological%20reconstruction%20filters%3A%20Combination%20of%20KFCM%20and%20PSO%20with%20morphological%20filters.%20International%20Journal%20of%20E-Health%20and%20Medical%20Communications%20%28IJEHMC%29.%202021%20May%201%3B12%283%29%3A55%E2%80%9373." data-title="Towards%20better%20segmentation%20of%20abnormal%20part%20in%20multimodal%20images%20using%20kernel%20possibilistic%20C%20means%20particle%20swarm%20optimization%20with%20morphological%20reconstruction%20filters%3A%20Combination%20of%20KFCM%20and%20PSO%20with%20morphological%20filters" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Towards+better+segmentation+of+abnormal+part+in+multimodal+images+using+kernel+possibilistic+C+means+particle+swarm+optimization+with+morphological+reconstruction+filters%3A+Combination+of+KFCM+and+PSO+with+morphological+filters+Sumathi+2021" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref16"><span class="order">16. </span><a name="pone.0306492.ref016" id="pone.0306492.ref016" class="link-target"></a>Sumathi R, Venkatesulu M, Arjunan SP. Segmenting and classifying MRI multimodal images using cuckoo search optimization and KNN classifier. IETE Journal of Research. 2023 Sep 7;69(7):3946–53. <ul class="reflinks"><li><a href="#" data-author="Sumathi" data-cit="SumathiR%2C%20VenkatesuluM%2C%20ArjunanSP.%20Segmenting%20and%20classifying%20MRI%20multimodal%20images%20using%20cuckoo%20search%20optimization%20and%20KNN%20classifier.%20IETE%20Journal%20of%20Research.%202023%20Sep%207%3B69%287%29%3A3946%E2%80%9353." data-title="Segmenting%20and%20classifying%20MRI%20multimodal%20images%20using%20cuckoo%20search%20optimization%20and%20KNN%20classifier" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Segmenting+and+classifying+MRI+multimodal+images+using+cuckoo+search+optimization+and+KNN+classifier+Sumathi+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref17"><span class="order">17. </span><a name="pone.0306492.ref017" id="pone.0306492.ref017" class="link-target"></a>Hua L, Gu Y, Gu X, Xue J, Ni T. A novel brain MRI image segmentation method using an improved multi-view fuzzy c-means clustering algorithm. 
18. Dehkordi AA, Hashemi M, Neshat M, Mirjalili S, Sadiq AS. Brain tumor detection and classification using a new evolutionary convolutional neural network. arXiv preprint arXiv:2204.12297. 2022.
19. Sharma AK, Nandal A, Dhaka A, Koundal D, Bogatinoska DC, Alyami H. Enhanced watershed segmentation algorithm-based modified ResNet50 model for brain tumor detection. BioMed Research International. 2022;2022. pmid:35252454
20. Sharma S, Gupta S, Gupta D, Juneja A, Khatter H, Malik S, et al. Deep learning model for automatic classification and prediction of brain tumor. Journal of Sensors. 2022;2022:1–1.
21. Alsaif H, Guesmi R, Alshammari BM, Hamrouni T, Guesmi T, Alzamil A, et al. A novel data augmentation-based brain tumor detection using convolutional neural network. Applied Sciences. 2022;12(8):3773.
<ul class="reflinks"><li><a href="#" data-author="Alsaif" data-cit="AlsaifH%2C%20GuesmiR%2C%20AlshammariBM%2C%20HamrouniT%2C%20GuesmiT%2C%20AlzamilA%2C%20et%20al.%20A%20novel%20data%20augmentation-based%20brain%20tumor%20detection%20using%20convolutional%20neural%20network.%20Applied%20sciences.%202022%20Apr%208%3B12%288%29%3A3773." data-title="A%20novel%20data%20augmentation-based%20brain%20tumor%20detection%20using%20convolutional%20neural%20network" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=A+novel+data+augmentation-based+brain+tumor+detection+using+convolutional+neural+network+Alsaif+2022" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref22"><span class="order">22. </span><a name="pone.0306492.ref022" id="pone.0306492.ref022" class="link-target"></a>Sandhya G, Kande GB, Savithri TS. Tumor segmentation by a self-organizing-map based active contour model (SOMACM) from the brain MRIs. IETE Journal of Research. 2022 Nov 2;68(6):3927–39. <ul class="reflinks"><li><a href="#" data-author="Sandhya" data-cit="SandhyaG%2C%20KandeGB%2C%20SavithriTS.%20Tumor%20segmentation%20by%20a%20self-organizing-map%20based%20active%20contour%20model%20%28SOMACM%29%20from%20the%20brain%20MRIs.%20IETE%20Journal%20of%20Research.%202022%20Nov%202%3B68%286%29%3A3927%E2%80%9339." data-title="Tumor%20segmentation%20by%20a%20self-organizing-map%20based%20active%20contour%20model%20%28SOMACM%29%20from%20the%20brain%20MRIs" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Tumor+segmentation+by+a+self-organizing-map+based+active+contour+model+%28SOMACM%29+from+the+brain+MRIs+Sandhya+2022" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref23"><span class="order">23. </span><a name="pone.0306492.ref023" id="pone.0306492.ref023" class="link-target"></a>Salama WM, Shokry A. A novel framework for brain tumor detection based on convolutional variational generative models. Multimedia Tools and Applications. 2022 May;81(12):16441–54. <ul class="reflinks"><li><a href="#" data-author="Salama" data-cit="SalamaWM%2C%20ShokryA.%20A%20novel%20framework%20for%20brain%20tumor%20detection%20based%20on%20convolutional%20variational%20generative%20models.%20Multimedia%20Tools%20and%20Applications.%202022%20May%3B81%2812%29%3A16441%E2%80%9354." data-title="A%20novel%20framework%20for%20brain%20tumor%20detection%20based%20on%20convolutional%20variational%20generative%20models" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=A+novel+framework+for+brain+tumor+detection+based+on+convolutional+variational+generative+models+Salama+2022" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref24"><span class="order">24. </span><a name="pone.0306492.ref024" id="pone.0306492.ref024" class="link-target"></a>Remzan N, Tahiry K, Farchi A. Brain tumor classification in magnetic resonance imaging images using convolutional neural network. IJECE. 2022 Dec 1;12(6):6664. <ul class="reflinks"><li><a href="#" data-author="Remzan" data-cit="RemzanN%2C%20TahiryK%2C%20FarchiA.%20Brain%20tumor%20classification%20in%20magnetic%20resonance%20imaging%20images%20using%20convolutional%20neural%20network.%20IJECE.%202022%20Dec%201%3B12%286%29%3A6664." 
data-title="Brain%20tumor%20classification%20in%20magnetic%20resonance%20imaging%20images%20using%20convolutional%20neural%20network" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Brain+tumor+classification+in+magnetic+resonance+imaging+images+using+convolutional+neural+network+Remzan+2022" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref25"><span class="order">25. </span><a name="pone.0306492.ref025" id="pone.0306492.ref025" class="link-target"></a>Rahman T, Islam MS. MRI brain tumor detection and classification using parallel deep convolutional neural networks. Measurement: Sensors. 2023 Apr 1;26:100694. <ul class="reflinks"><li><a href="#" data-author="Rahman" data-cit="RahmanT%2C%20IslamMS.%20MRI%20brain%20tumor%20detection%20and%20classification%20using%20parallel%20deep%20convolutional%20neural%20networks.%20Measurement%3A%20Sensors.%202023%20Apr%201%3B26%3A100694." data-title="MRI%20brain%20tumor%20detection%20and%20classification%20using%20parallel%20deep%20convolutional%20neural%20networks" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=MRI+brain+tumor+detection+and+classification+using+parallel+deep+convolutional+neural+networks+Rahman+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref26"><span class="order">26. </span><a name="pone.0306492.ref026" id="pone.0306492.ref026" class="link-target"></a>Ahmadi M, Sharifi A, Jafarian Fard M, Soleimani N. Detection of brain lesion location in MRI images using convolutional neural network and robust PCA. International journal of neuroscience. 2023 Jan 2;133(1):55–66. pmid:33517817 <ul class="reflinks" data-doi="10.1080/00207454.2021.1883602"><li><a href="https://doi.org/10.1080/00207454.2021.1883602" data-author="doi-provided" data-cit="doi-provided" data-title="doi-provided" target="_new" title="Go to article"> View Article </a></li><li><a href="http://www.ncbi.nlm.nih.gov/pubmed/33517817" target="_new" title="Go to article in PubMed"> PubMed/NCBI </a></li><li><a href="http://scholar.google.com/scholar?q=Detection+of+brain+lesion+location+in+MRI+images+using+convolutional+neural+network+and+robust+PCA+Ahmadi+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref27"><span class="order">27. </span><a name="pone.0306492.ref027" id="pone.0306492.ref027" class="link-target"></a>Kurdi SZ, Ali MH, Jaber MM, Saba T, Rehman A, Damaševičius R. Brain tumor classification using meta-heuristic optimized convolutional neural networks. Journal of Personalized Medicine. 2023 Jan 20;13(2):181. pmid:36836415 <ul class="reflinks" data-doi="10.3390/jpm13020181"><li><a href="https://doi.org/10.3390/jpm13020181" data-author="doi-provided" data-cit="doi-provided" data-title="doi-provided" target="_new" title="Go to article"> View Article </a></li><li><a href="http://www.ncbi.nlm.nih.gov/pubmed/36836415" target="_new" title="Go to article in PubMed"> PubMed/NCBI </a></li><li><a href="http://scholar.google.com/scholar?q=Brain+tumor+classification+using+meta-heuristic+optimized+convolutional+neural+networks+Kurdi+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref28"><span class="order">28. 
</span><a name="pone.0306492.ref028" id="pone.0306492.ref028" class="link-target"></a>Asiri AA, Khan B, Muhammad F, Alshamrani HA, Alshamrani KA, Irfan M, et al. Machine learning-based models for magnetic resonance imaging (mri)-based brain tumor classification. Intell. Autom. Soft Comput. 2023 Jan 1;36:299–312. <ul class="reflinks"><li><a href="#" data-author="Asiri" data-cit="AsiriAA%2C%20KhanB%2C%20MuhammadF%2C%20AlshamraniHA%2C%20AlshamraniKA%2C%20IrfanM%2C%20et%20al.%20Machine%20learning-based%20models%20for%20magnetic%20resonance%20imaging%20%28mri%29-based%20brain%20tumor%20classification.%20Intell.%20Autom.%20Soft%20Comput.%202023%20Jan%201%3B36%3A299%E2%80%93312." data-title="Machine%20learning-based%20models%20for%20magnetic%20resonance%20imaging%20%28mri%29-based%20brain%20tumor%20classification" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Machine+learning-based+models+for+magnetic+resonance+imaging+%28mri%29-based+brain+tumor+classification+Asiri+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref29"><span class="order">29. </span><a name="pone.0306492.ref029" id="pone.0306492.ref029" class="link-target"></a>Saad G, Suliman A, Bitar L, Bshara S. Developing a hybrid algorithm to detect brain tumors from MRI images. Egyptian Journal of Radiology and Nuclear Medicine. 2023 Jan 18;54(1):14. <ul class="reflinks"><li><a href="#" data-author="Saad" data-cit="SaadG%2C%20SulimanA%2C%20BitarL%2C%20BsharaS.%20Developing%20a%20hybrid%20algorithm%20to%20detect%20brain%20tumors%20from%20MRI%20images.%20Egyptian%20Journal%20of%20Radiology%20and%20Nuclear%20Medicine.%202023%20Jan%2018%3B54%281%29%3A14." data-title="Developing%20a%20hybrid%20algorithm%20to%20detect%20brain%20tumors%20from%20MRI%20images" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Developing+a+hybrid+algorithm+to+detect+brain+tumors+from+MRI+images+Saad+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref30"><span class="order">30. </span><a name="pone.0306492.ref030" id="pone.0306492.ref030" class="link-target"></a>Alyami J, Rehman A, Almutairi F, Fayyaz AM, Roy S, Saba T, et al. Tumor localization and classification from MRI of brain using deep convolution neural network and Salp swarm algorithm. Cognitive Computation. 2023 Jan 13:1–1. <ul class="reflinks"><li><a href="#" data-author="Alyami" data-cit="AlyamiJ%2C%20RehmanA%2C%20AlmutairiF%2C%20FayyazAM%2C%20RoyS%2C%20SabaT%2C%20et%20al.%20Tumor%20localization%20and%20classification%20from%20MRI%20of%20brain%20using%20deep%20convolution%20neural%20network%20and%20Salp%20swarm%20algorithm.%20Cognitive%20Computation.%202023%20Jan%2013%3A1%E2%80%931." data-title="Tumor%20localization%20and%20classification%20from%20MRI%20of%20brain%20using%20deep%20convolution%20neural%20network%20and%20Salp%20swarm%20algorithm" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Tumor+localization+and+classification+from+MRI+of+brain+using+deep+convolution+neural+network+and+Salp+swarm+algorithm+Alyami+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref31"><span class="order">31. </span><a name="pone.0306492.ref031" id="pone.0306492.ref031" class="link-target"></a>The Whole Brain Atlas [Internet]. 
<a href="http://www.med.harvard.edu">www.med.harvard.edu</a>. [cited 2024 Apr 16]. <a href="http://www.med.harvard.edu/AANLIB">http://www.med.harvard.edu/AANLIB</a> <ul class="find-nolinks"></ul></li><li id="ref32"><span class="order">32. </span><a name="pone.0306492.ref032" id="pone.0306492.ref032" class="link-target"></a>Gonzalez RC. Digital image processing. Pearson education india; 2009. <ul class="find-nolinks"></ul></li><li id="ref33"><span class="order">33. </span><a name="pone.0306492.ref033" id="pone.0306492.ref033" class="link-target"></a>Reddy KR, Dhuli R. A novel lightweight CNN architecture for the diagnosis of brain tumors using MR images. Diagnostics. 2023 Jan 14;13(2):312. pmid:36673122 <ul class="reflinks" data-doi="10.3390/diagnostics13020312"><li><a href="https://doi.org/10.3390/diagnostics13020312" data-author="doi-provided" data-cit="doi-provided" data-title="doi-provided" target="_new" title="Go to article"> View Article </a></li><li><a href="http://www.ncbi.nlm.nih.gov/pubmed/36673122" target="_new" title="Go to article in PubMed"> PubMed/NCBI </a></li><li><a href="http://scholar.google.com/scholar?q=A+novel+lightweight+CNN+architecture+for+the+diagnosis+of+brain+tumors+using+MR+images+Reddy+2023" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref34"><span class="order">34. </span><a name="pone.0306492.ref034" id="pone.0306492.ref034" class="link-target"></a>Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, et al. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of big Data. 2021 Dec;8:1–74. <ul class="reflinks"><li><a href="#" data-author="Alzubaidi" data-cit="AlzubaidiL%2C%20ZhangJ%2C%20HumaidiAJ%2C%20Al-DujailiA%2C%20DuanY%2C%20Al-ShammaO%2C%20et%20al.%20Review%20of%20deep%20learning%3A%20concepts%2C%20CNN%20architectures%2C%20challenges%2C%20applications%2C%20future%20directions.%20Journal%20of%20big%20Data.%202021%20Dec%3B8%3A1%E2%80%9374." data-title="Review%20of%20deep%20learning%3A%20concepts%2C%20CNN%20architectures%2C%20challenges%2C%20applications%2C%20future%20directions" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Review+of+deep+learning%3A+concepts%2C+CNN+architectures%2C+challenges%2C+applications%2C+future+directions+Alzubaidi+2021" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref35"><span class="order">35. </span><a name="pone.0306492.ref035" id="pone.0306492.ref035" class="link-target"></a>Lin M, Chen Q, Yan S. Network in network. arXiv preprint arXiv:1312.4400. 2013 Dec 16. <ul class="find-nolinks"></ul></li><li id="ref36"><span class="order">36. </span><a name="pone.0306492.ref036" id="pone.0306492.ref036" class="link-target"></a>Vincent L. Morphological grayscale reconstruction in image analysis: applications and efficient algorithms. IEEE transactions on image processing. 1993 Apr;2(2):176–201. 
37. Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics. 1988;52:479–87.
38. Price K, Storn RM, Lampinen JA. Differential Evolution: A Practical Approach to Global Optimization. Natural Computing Series. 2005.
39. Tao WB, Tian JW, Liu J. Image segmentation by three-level thresholding based on maximum fuzzy entropy and genetic algorithm. Pattern Recognition Letters. 2003;24(16):3069–78.
40. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks. 1995. Vol. 4, pp. 1942–1948. IEEE.
41. Sokolova M, Lapalme G. A systematic analysis of performance measures for classification tasks. Information Processing & Management. 2009;45(4):427–37.
<ul class="reflinks"><li><a href="#" data-author="Sokolova" data-cit="SokolovaM%2C%20LapalmeG.%20A%20systematic%20analysis%20of%20performance%20measures%20for%20classification%20tasks.%20Information%20processing%20%26%20management.%202009%20Jul%201%3B45%284%29%3A427%E2%80%9337." data-title="A%20systematic%20analysis%20of%20performance%20measures%20for%20classification%20tasks" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=A+systematic+analysis+of+performance+measures+for+classification+tasks+Sokolova+2009" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref42"><span class="order">42. </span><a name="pone.0306492.ref042" id="pone.0306492.ref042" class="link-target"></a>Raschka S. Model evaluation, model selection, and algorithm selection in machine learning. arXiv preprint arXiv:1811.12808. 2018 Nov 13. <ul class="find-nolinks"></ul></li><li id="ref43"><span class="order">43. </span><a name="pone.0306492.ref043" id="pone.0306492.ref043" class="link-target"></a>Bottou L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade: Second Edition 2012 Jan 1 (pp. 421–436). Berlin, Heidelberg: Springer Berlin Heidelberg. <ul class="find-nolinks"></ul></li><li id="ref44"><span class="order">44. </span><a name="pone.0306492.ref044" id="pone.0306492.ref044" class="link-target"></a>Da K. A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014 Dec. <ul class="find-nolinks"></ul></li><li id="ref45"><span class="order">45. </span><a name="pone.0306492.ref045" id="pone.0306492.ref045" class="link-target"></a>Duchi J, Hazan E, Singer Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research. 2011 Jul 1;12(7). <ul class="reflinks"><li><a href="#" data-author="Duchi" data-cit="DuchiJ%2C%20HazanE%2C%20SingerY.%20Adaptive%20subgradient%20methods%20for%20online%20learning%20and%20stochastic%20optimization.%20Journal%20of%20machine%20learning%20research.%202011%20Jul%201%3B12%287%29." data-title="Adaptive%20subgradient%20methods%20for%20online%20learning%20and%20stochastic%20optimization" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Adaptive+subgradient+methods+for+online+learning+and+stochastic+optimization+Duchi+2011" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li><li id="ref46"><span class="order">46. </span><a name="pone.0306492.ref046" id="pone.0306492.ref046" class="link-target"></a>Zeiler MD. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. 2012 Dec 22. <ul class="find-nolinks"></ul></li><li id="ref47"><span class="order">47. </span><a name="pone.0306492.ref047" id="pone.0306492.ref047" class="link-target"></a>Hinton G, Srivastava N, Swersky K. Neural Networks for Machine Learning Lecture 6a Overview of mini—batch gradient descent [Internet]. <a href="http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf">http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf</a> <ul class="find-nolinks"></ul></li><li id="ref48"><span class="order">48. </span><a name="pone.0306492.ref048" id="pone.0306492.ref048" class="link-target"></a>Timothy D. Incorporating nesterov momentum into adam. Natural Hazards. 2016 Feb;3(2):437–53. 
<ul class="reflinks"><li><a href="#" data-author="Timothy" data-cit="TimothyD.%20Incorporating%20nesterov%20momentum%20into%20adam.%20Natural%20Hazards.%202016%20Feb%3B3%282%29%3A437%E2%80%9353." data-title="Incorporating%20nesterov%20momentum%20into%20adam" target="_new" title="Go to article in CrossRef"> View Article </a></li><li><a href="http://scholar.google.com/scholar?q=Incorporating+nesterov+momentum+into+adam+Timothy+2016" target="_new" title="Go to article in Google Scholar"> Google Scholar </a></li></ul></li></ol></div> <div class="ref-tooltip"> <div class="ref_tooltip-content"> </div> </div> </div> </div> </div> </section> <aside class="article-aside"> <!--[if IE 9]> <style> .dload-xml {margin-top: 38px} </style> <![endif]--> <div class="dload-menu"> <div class="dload-pdf"> <a href="/plosone/article/file?id=10.1371/journal.pone.0306492&type=printable" id="downloadPdf" target="_blank">Download PDF</a> </div> <div data-js-tooltip-hover="trigger" class="dload-hover">&nbsp; <ul class="dload-xml" data-js-tooltip-hover="target"> <li><a href="/plosone/article/citation?id=10.1371/journal.pone.0306492" id="downloadCitation">Citation</a></li> <li><a href="/plosone/article/file?id=10.1371/journal.pone.0306492&type=manuscript" id="downloadXml">XML</a> </li> </ul> </div> </div> <div class="aside-container"> <div class="print-article" id="printArticle" data-js-tooltip-hover="trigger"> <a href="#" onclick="window.print(); return false;" class="preventDefault" id="printBrowser">Print</a> </div> <div class="share-article" id="shareArticle" data-js-tooltip-hover="trigger"> Share <ul data-js-tooltip-hover="target" class="share-options" id="share-options"> <li><a href="https://www.reddit.com/submit?url=https%3A%2F%2Fdx.plos.org%2F10.1371%2Fjournal.pone.0306492" id="shareReddit" target="_blank" title="Submit to Reddit"><img src="/resource/img/icon.reddit.16.png" width="16" height="16" alt="Reddit">Reddit</a></li> <li><a href="https://www.facebook.com/share.php?u=https%3A%2F%2Fdx.plos.org%2F10.1371%2Fjournal.pone.0306492&t=Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding" id="shareFacebook" target="_blank" title="Share on Facebook"><img src="/resource/img/icon.fb.16.png" width="16" height="16" alt="Facebook">Facebook</a></li> <li><a href="https://www.linkedin.com/shareArticle?url=https%3A%2F%2Fdx.plos.org%2F10.1371%2Fjournal.pone.0306492&title=Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding&summary=Checkout this article I found at PLOS" id="shareLinkedIn" target="_blank" title="Add to LinkedIn"><img src="/resource/img/icon.linkedin.16.png" width="16" height="16" alt="LinkedIn">LinkedIn</a></li> <li><a href="https://www.mendeley.com/import/?url=https%3A%2F%2Fdx.plos.org%2F10.1371%2Fjournal.pone.0306492" id="shareMendeley" target="_blank" title="Add to Mendeley"><img src="/resource/img/icon.mendeley.16.png" width="16" height="16" alt="Mendeley">Mendeley</a></li> <li><a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fdx.plos.org%2F10.1371%2Fjournal.pone.0306492&text=%23PLOSONE%3A%20Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level thresholding" target="_blank" title="share on Twitter" id="twitter-share-link"><img src="/resource/img/icon.twtr.16.png" width="16" height="16" alt="Twitter">Twitter</a></li> <li><a href="mailto:?subject=Brain MRI detection and classification: Harnessing convolutional neural networks and multi-level 
Subject Areas: Magnetic resonance imaging; Cancers and neoplasms; Neuroimaging; Imaging techniques; Computer architecture; Malignant tumors; Entropy; Convolution
