Search results for: image classification
class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image classification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4553</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image classification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4553</span> Evaluating Classification with Efficacy Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guofan%20Shao">Guofan Shao</a>, <a href="https://publications.waset.org/abstracts/search?q=Lina%20Tang"> Lina Tang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Zhang"> Hao Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The values of image classification accuracy are affected by class size distributions and classification schemes, making it difficult to compare the performance of classification algorithms across different remote sensing data sources and classification systems. Based on the term efficacy from medicine and pharmacology, we have developed the metrics of image classification efficacy at the map and class levels. The novelty of this approach is that a baseline classification is involved in computing image classification efficacies so that the effects of class statistics are reduced. Furthermore, the image classification efficacies are interpretable and comparable, and thus, strengthen the assessment of image data classification methods. We use real-world and hypothetical examples to explain the use of image classification efficacies. The metrics of image classification efficacy meet the critical need to rectify the strategy for the assessment of image classification performance as image classification methods are becoming more diversified. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title="accuracy assessment">accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=efficacy" title=" efficacy"> efficacy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty" title=" uncertainty"> uncertainty</a> </p> <a href="https://publications.waset.org/abstracts/142555/evaluating-classification-with-efficacy-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142555.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4552</span> Review on Effective Texture Classification Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sujata%20S.%20Kulkarni">Sujata S. Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Effective and efficient texture feature extraction and classification is an important problem in image understanding and recognition. This paper gives a review on effective texture classification method. The objective of the problem of texture representation is to reduce the amount of raw data presented by the image, while preserving the information needed for the task. Texture analysis is important in many applications of computer image analysis for classification include industrial and biomedical surface inspection, for example for defects and disease, ground classification of satellite or aerial imagery and content-based access to image databases. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title="compressed sensing">compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title=" texture analysis"> texture analysis</a> </p> <a href="https://publications.waset.org/abstracts/24461/review-on-effective-texture-classification-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">434</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4551</span> Image Classification with Localization Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we will be applying a convolutional neural network of multiple scales at multiple locations in the image in one sliding window. Most translation networks move away from the bounding box around the area of interest. In contrast to this architecture, we consider the problem to be a classification problem where each pixel of the image is a separate section. Image classification is the method of predicting an individual category or specifying by a shoal of data points. Image classification is a part of the classification problem, including any labels throughout the image. The image can be classified as a day or night shot. Or, likewise, images of cars and motorbikes will be automatically placed in their collection. The deep learning of image classification generally includes convolutional layers; the invention of it is referred to as a convolutional neural network (CNN). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4550</span> Hyperspectral Image Classification Using Tree Search Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreya%20Pare">Shreya Pare</a>, <a href="https://publications.waset.org/abstracts/search?q=Parvin%20Akhter"> Parvin Akhter</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Remotely sensing image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. The pixel-wise classification methods fail to take the spatial structure information of an image. Therefore, to improve the performance of classification, spatial information can be integrated into the classification process. In this paper, the multilevel thresholding algorithm based on a modified fuzzy entropy function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function have been optimized by using a new meta-heuristic algorithm based on the Tree-Search algorithm. The segmented image is classified by a large distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, thus becoming more suitable for image classification with large spatial structures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20images" title=" hyperspectral images"> hyperspectral images</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20distribution%20margin" title=" large distribution margin"> large distribution margin</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20fuzzy%20entropy%20function" title=" modified fuzzy entropy function"> modified fuzzy entropy function</a>, <a href="https://publications.waset.org/abstracts/search?q=multilevel%20thresholding" title=" multilevel thresholding"> multilevel thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20search%20algorithm" title=" tree search algorithm"> tree search algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20image%20classification%20using%20tree%20search%20algorithm" title=" hyperspectral image classification using tree search algorithm"> hyperspectral image classification using tree search algorithm</a> </p> <a href="https://publications.waset.org/abstracts/143284/hyperspectral-image-classification-using-tree-search-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143284.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4549</span> Automatic Classification Using Dynamic Fuzzy C Means Algorithm and Mathematical Morphology: Application in 3D MRI Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelkhalek%20Bakkari">Abdelkhalek Bakkari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is a critical step in image processing and pattern recognition. In this paper, we proposed a new robust automatic image classification based on a dynamic fuzzy c-means algorithm and mathematical morphology. The proposed segmentation algorithm (DFCM_MM) has been applied to MR perfusion images. The obtained results show the validity and robustness of the proposed approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic" title=" dynamic"> dynamic</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20c-means" title=" fuzzy c-means"> fuzzy c-means</a>, <a href="https://publications.waset.org/abstracts/search?q=MR%20image" title=" MR image"> MR image</a> </p> <a href="https://publications.waset.org/abstracts/13711/automatic-classification-using-dynamic-fuzzy-c-means-algorithm-and-mathematical-morphology-application-in-3d-mri-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13711.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4548</span> Urban Land Cover from GF-2 Satellite Images Using Object Based and Neural Network Classifications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lamyaa%20Gamal%20El-Deen%20Taha">Lamyaa Gamal El-Deen Taha</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashraf%20Sharawi"> Ashraf Sharawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> China launched satellite GF-2 in 2014. This study deals with comparing nearest neighbor object-based classification and neural network classification methods for classification of the fused GF-2 image. Firstly, rectification of GF-2 image was performed. Secondly, a comparison between nearest neighbor object-based classification and neural network classification for classification of fused GF-2 was performed. Thirdly, the overall accuracy of classification and kappa index were calculated. Results indicate that nearest neighbor object-based classification is better than neural network classification for urban mapping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GF-2%20images" title="GF-2 images">GF-2 images</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction-rectification" title=" feature extraction-rectification"> feature extraction-rectification</a>, <a href="https://publications.waset.org/abstracts/search?q=nearest%20neighbour%20object%20based%20classification" title=" nearest neighbour object based classification"> nearest neighbour object based classification</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20algorithms" title=" segmentation algorithms"> segmentation algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network%20classification" title=" neural network classification"> neural network classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multilayer%20perceptron" title=" multilayer perceptron"> multilayer perceptron</a> </p> <a href="https://publications.waset.org/abstracts/84243/urban-land-cover-from-gf-2-satellite-images-using-object-based-and-neural-network-classifications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4547</span> Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lamyaa%20Gamal%20El-Deen%20Taha">Lamyaa Gamal El-Deen Taha</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashraf%20Sharawi"> Ashraf Sharawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Planet image is a new data source from planet lab. This research is concerned with the assessment of Planet image for land cover mapping. Two pixel based classifiers and one subpixel based classifier were compared. Firstly, rectification of Planet image was performed. Secondly, a comparison between minimum distance, maximum likelihood and neural network classifications for classification of Planet image was performed. Thirdly, the overall accuracy of classification and kappa coefficient were calculated. Results indicate that neural network classification is best followed by maximum likelihood classifier then minimum distance classification for land cover mapping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=planet%20image" title="planet image">planet image</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20cover%20mapping" title=" land cover mapping"> land cover mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=rectification" title=" rectification"> rectification</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network%20classification" title=" neural network classification"> neural network classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multilayer%20perceptron" title=" multilayer perceptron"> multilayer perceptron</a>, <a href="https://publications.waset.org/abstracts/search?q=soft%20classifiers" title=" soft classifiers"> soft classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=hard%20classifiers" title=" hard classifiers"> hard classifiers</a> </p> <a href="https://publications.waset.org/abstracts/89202/assessment-of-planet-image-for-land-cover-mapping-using-soft-and-hard-classifiers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89202.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4546</span> Using Self Organizing Feature Maps for Classification in RGB Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Masoumi">Hassan Masoumi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahad%20Salimi"> Ahad Salimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nazanin%20Barhemmat"> Nazanin Barhemmat</a>, <a href="https://publications.waset.org/abstracts/search?q=Babak%20Gholami"> Babak Gholami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity, multi input and output mapping characteristics. In fact, most feed-forward networks with nonlinear nodal functions have been proved to be universal approximates. In this paper, we propose a new supervised method for color image classification based on self organizing feature maps (SOFM). This algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system entered into RGB image. Experiments with simulated data showed that separability of classes increased when increasing training time. In additional, the result shows proposed algorithms are effective for color image classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=SOFM%20algorithm" title=" SOFM algorithm"> SOFM algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=neighborhood" title=" neighborhood"> neighborhood</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20image" title=" RGB image"> RGB image</a> </p> <a href="https://publications.waset.org/abstracts/26819/using-self-organizing-feature-maps-for-classification-in-rgb-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26819.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4545</span> Selection of Appropriate Classification Technique for Lithological Mapping of Gali Jagir Area, Pakistan </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khunsa%20Fatima">Khunsa Fatima</a>, <a href="https://publications.waset.org/abstracts/search?q=Umar%20K.%20Khattak"> Umar K. Khattak</a>, <a href="https://publications.waset.org/abstracts/search?q=Allah%20Bakhsh%20Kausar"> Allah Bakhsh Kausar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite images interpretation and analysis assist geologists by providing valuable information about geology and minerals of an area to be surveyed. A test site in Fatejang of district Attock has been studied using Landsat ETM+ and ASTER satellite images for lithological mapping. Five different supervised image classification techniques namely maximum likelihood, parallelepiped, minimum distance to mean, mahalanobis distance and spectral angle mapper have been performed on both satellite data images to find out the suitable classification technique for lithological mapping in the study area. Results of these five image classification techniques were compared with the geological map produced by Geological Survey of Pakistan. The result of maximum likelihood classification technique applied on ASTER satellite image has the highest correlation of 0.66 with the geological map. Field observations and XRD spectra of field samples also verified the results. A lithological map was then prepared based on the maximum likelihood classification of ASTER satellite image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASTER" title="ASTER">ASTER</a>, <a href="https://publications.waset.org/abstracts/search?q=Landsat-ETM%2B" title=" Landsat-ETM+"> Landsat-ETM+</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite" title=" satellite"> satellite</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a> </p> <a href="https://publications.waset.org/abstracts/3823/selection-of-appropriate-classification-technique-for-lithological-mapping-of-gali-jagir-area-pakistan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3823.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4544</span> Medical Image Classification Using Legendre Multifractal Spectrum Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Korchiyne">R. Korchiyne</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sbihi"> A. Sbihi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Farssi"> S. M. Farssi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Touahni"> R. Touahni</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Tahiri%20Alaoui"> M. Tahiri Alaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trabecular bone structure is important texture in the study of osteoporosis. Legendre multifractal spectrum can reflect the complex and self-similarity characteristic of structures. The main objective of this paper is to develop a new technique of medical image classification based on Legendre multifractal spectrum. Novel features have been developed from basic geometrical properties of this spectrum in a supervised image classification. The proposed method has been successfully used to classify medical images of bone trabeculations, and could be a useful supplement to the clinical observations for osteoporosis diagnosis. A comparative study with existing data reveals that the results of this approach are concordant. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multifractal%20analysis" title="multifractal analysis">multifractal analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20image" title=" medical image"> medical image</a>, <a href="https://publications.waset.org/abstracts/search?q=osteoporosis" title=" osteoporosis"> osteoporosis</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimension" title=" fractal dimension"> fractal dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=Legendre%20spectrum" title=" Legendre spectrum"> Legendre spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20classification" title=" supervised classification"> supervised classification</a> </p> <a href="https://publications.waset.org/abstracts/15795/medical-image-classification-using-legendre-multifractal-spectrum-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">514</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4543</span> U-Net Based Multi-Output Network for Lung Disease Segmentation and Classification Using Chest X-Ray Dataset</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaiden%20X.%20Schraut">Jaiden X. Schraut</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical Imaging Segmentation of Chest X-rays is used for the purpose of identification and differentiation of lung cancer, pneumonia, COVID-19, and similar respiratory diseases. Widespread application of computer-supported perception methods into the diagnostic pipeline has been demonstrated to increase prognostic accuracy and aid doctors in efficiently treating patients. Modern models attempt the task of segmentation and classification separately and improve diagnostic efficiency; however, to further enhance this process, this paper proposes a multi-output network that follows a U-Net architecture for image segmentation output and features an additional CNN module for auxiliary classification output. The proposed model achieves a final Jaccard Index of .9634 for image segmentation and a final accuracy of .9600 for classification on the COVID-19 radiography database. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chest%20X-ray" title="chest X-ray">chest X-ray</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a> </p> <a href="https://publications.waset.org/abstracts/155537/u-net-based-multi-output-network-for-lung-disease-segmentation-and-classification-using-chest-x-ray-dataset" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155537.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4542</span> Deep Learning-Based Automated Structure Deterioration Detection for Building Structures: A Technological Advancement for Ensuring Structural Integrity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kavita%20Bodke">Kavita Bodke</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Structural health monitoring (SHM) is experiencing growth, necessitating the development of distinct methodologies to address its expanding scope effectively. In this study, we developed automatic structure damage identification, which incorporates three unique types of a building’s structural integrity. The first pertains to the presence of fractures within the structure, the second relates to the issue of dampness within the structure, and the third involves corrosion inside the structure. This study employs image classification techniques to discern between intact and impaired structures within structural data. The aim of this research is to find automatic damage detection with the probability of each damage class being present in one image. Based on this probability, we know which class has a higher probability or is more affected than the other classes. Utilizing photographs captured by a mobile camera serves as the input for an image classification system. Image classification was employed in our study to perform multi-class and multi-label classification. The objective was to categorize structural data based on the presence of cracks, moisture, and corrosion. In the context of multi-class image classification, our study employed three distinct methodologies: Random Forest, Multilayer Perceptron, and CNN. For the task of multi-label image classification, the models employed were Rasnet, Xceptionet, and Inception. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SHM" title="SHM">SHM</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-class%20classification" title=" multi-class classification"> multi-class classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-label%20classification" title=" multi-label classification"> multi-label classification</a> </p> <a href="https://publications.waset.org/abstracts/187844/deep-learning-based-automated-structure-deterioration-detection-for-building-structures-a-technological-advancement-for-ensuring-structural-integrity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">36</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4541</span> Satellite Image Classification Using Firefly Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paramjit%20Kaur">Paramjit Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Harish%20Kundra"> Harish Kundra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the recent years, swarm intelligence based firefly algorithm has become a great focus for the researchers to solve the real time optimization problems. Here, firefly algorithm is used for the application of satellite image classification. For experimentation, Alwar area is considered to multiple land features like vegetation, barren, hilly, residential and water surface. Alwar dataset is considered with seven band satellite images. Firefly Algorithm is based on the attraction of less bright fireflies towards more brightener one. For the evaluation of proposed concept accuracy assessment parameters are calculated using error matrix. With the help of Error matrix, parameters of Kappa Coefficient, Overall Accuracy and feature wise accuracy parameters of user’s accuracy & producer’s accuracy can be calculated. Overall results are compared with BBO, PSO, Hybrid FPAB/BBO, Hybrid ACO/SOFM and Hybrid ACO/BBO based on the kappa coefficient and overall accuracy parameters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=firefly%20algorithm" title=" firefly algorithm"> firefly algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20image%20classification" title=" satellite image classification"> satellite image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=terrain%20classification" title=" terrain classification"> terrain classification</a> </p> <a href="https://publications.waset.org/abstracts/64829/satellite-image-classification-using-firefly-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64829.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4540</span> Scene Classification Using Hierarchy Neural Network, Directed Acyclic Graph Structure, and Label Relations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Po-Jen%20Chen">Po-Jen Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian-Jiun%20Ding"> Jian-Jiun Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung-Wei%20Hsu"> Hung-Wei Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chien-Yao%20Wang"> Chien-Yao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia-Ching%20Wang"> Jia-Ching Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A more accurate scene classification algorithm using label relations and the hierarchy neural network was developed in this work. In many classification algorithms, it is assumed that the labels are mutually exclusive. This assumption is true in some specific problems, however, for scene classification, the assumption is not reasonable. Because there are a variety of objects with a photo image, it is more practical to assign multiple labels for an image. In this paper, two label relations, which are exclusive relation and hierarchical relation, were adopted in the classification process to achieve more accurate multiple label classification results. Moreover, the hierarchy neural network (hierarchy NN) is applied to classify the image and the directed acyclic graph structure is used for predicting a more reasonable result which obey exclusive and hierarchical relations. Simulations show that, with these techniques, a much more accurate scene classification result can be achieved. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=label%20relation" title=" label relation"> label relation</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchy%20neural%20network" title=" hierarchy neural network"> hierarchy neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=scene%20classification" title=" scene classification"> scene classification</a> </p> <a href="https://publications.waset.org/abstracts/66516/scene-classification-using-hierarchy-neural-network-directed-acyclic-graph-structure-and-label-relations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4539</span> Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuge%20Lei">Shuge Lei</a>, <a href="https://publications.waset.org/abstracts/search?q=Haonan%20Hu"> Haonan Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dasheng%20Sun"> Dasheng Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Huabin%20Zhang"> Huabin Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kehong%20Yuan"> Kehong Yuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Dai"> Jian Dai</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Tong"> Yan Tong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It remains challenging to measure reliability from classification results from different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in the breast ultrasound classification models. Experimental results on breast ultrasound clinical datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials. The results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibit low multicenter correlation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20ultrasound%20image%20classification" title="breast ultrasound image classification">breast ultrasound image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20attribution" title=" feature attribution"> feature attribution</a>, <a href="https://publications.waset.org/abstracts/search?q=reliability%20assessment" title=" reliability assessment"> reliability assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=reliability%20optimization" title=" reliability optimization"> reliability optimization</a> </p> <a href="https://publications.waset.org/abstracts/176773/reliable-soup-reliable-driven-model-weight-fusion-on-ultrasound-imaging-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176773.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">85</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4538</span> A Custom Convolutional Neural Network with Hue, Saturation, Value Color for Malaria Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghazala%20Hcini">Ghazala Hcini</a>, <a href="https://publications.waset.org/abstracts/search?q=Imen%20Jdey"> Imen Jdey</a>, <a href="https://publications.waset.org/abstracts/search?q=Hela%20Ltifi"> Hela Ltifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malaria disease should be considered and handled as a potential restorative catastrophe. One of the most challenging tasks in the field of microscopy image processing is due to differences in test design and vulnerability of cell classifications. In this article, we focused on applying deep learning to classify patients by identifying images of infected and uninfected cells. We performed multiple forms, counting a classification approach using the Hue, Saturation, Value (HSV) color space. HSV is used since of its superior ability to speak to image brightness; at long last, for classification, a convolutional neural network (CNN) architecture is created. Clusters of focus were used to deliver the classification. The highlights got to be forbidden, and a few more clamor sorts are included in the information. The suggested method has a precision of 99.79%, a recall value of 99.55%, and provides 99.96% accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transformation" title=" color transformation"> color transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color" title=" HSV color"> HSV color</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20diagnosis" title=" malaria diagnosis"> malaria diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20cells%20images" title=" malaria cells images"> malaria cells images</a> </p> <a href="https://publications.waset.org/abstracts/161232/a-custom-convolutional-neural-network-with-hue-saturation-value-color-for-malaria-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4537</span> Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nidhal%20K.%20Azawi">Nidhal K. Azawi</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Gauch"> John M. Gauch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92-98%. We also show how the removal of noninformative images together with image alignment can aid in the creation of image panoramas and other visualizations of colonoscopy images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colonoscopy%20classification" title="colonoscopy classification">colonoscopy classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20alignment" title=" image alignment"> image alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/92461/automatic-method-for-classification-of-informative-and-noninformative-images-in-colonoscopy-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4536</span> Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Geetika%20Barman">Geetika Barman</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20S.%20Daya%20Sagar"> B. S. Daya Sagar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, we proposed a pixel-wise classification of hyperspectral images using a mathematical morphology operator-based distance metric called “dilation distance” and “erosion distance”. This method involves measuring the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the “dilation distance” is the maximum distance a pixel can be moved without changing its classification, whereas the “erosion distance” is the maximum distance that a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and shape for each class. This article demonstrates how easily the dilation and erosion distance can measure spatial distance compared to other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. The dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of various classes and precisely distinguish between each class. This is illustrated using both toy data and real datasets. Furthermore, we investigated the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. In order to validate, we compared the proposed approach to other existing methods and demonstrated empirically that mathematical operator-based distance metric classification provided competitive results and outperformed some of them. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dilation%20distance" title="dilation distance">dilation distance</a>, <a href="https://publications.waset.org/abstracts/search?q=erosion%20distance" title=" erosion distance"> erosion distance</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20image%20classification" title=" hyperspectral image classification"> hyperspectral image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/166292/classification-of-hyperspectral-image-using-mathematical-morphological-operator-based-distance-metric" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166292.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4535</span> A New Approach for Improving Accuracy of Multi Label Stream Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kunal%20Shah">Kunal Shah</a>, <a href="https://publications.waset.org/abstracts/search?q=Swati%20Patel"> Swati Patel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many real world problems involve data which can be considered as multi-label data streams. Efficient methods exist for multi-label classification in non streaming scenarios. However, learning in evolving streaming scenarios is more challenging, as the learners must be able to adapt to change using limited time and memory. Classification is used to predict class of unseen instance as accurate as possible. Multi label classification is a variant of single label classification where set of labels associated with single instance. Multi label classification is used by modern applications, such as text classification, functional genomics, image classification, music categorization etc. This paper introduces the task of multi-label classification, methods for multi-label classification and evolution measure for multi-label classification. Also, comparative analysis of multi label classification methods on the basis of theoretical study, and then on the basis of simulation was done on various data sets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20relevance" title="binary relevance">binary relevance</a>, <a href="https://publications.waset.org/abstracts/search?q=concept%20drift" title=" concept drift"> concept drift</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20stream%20mining" title=" data stream mining"> data stream mining</a>, <a href="https://publications.waset.org/abstracts/search?q=MLSC" title=" MLSC"> MLSC</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20window%20with%20buffer" title=" multiple window with buffer"> multiple window with buffer</a> </p> <a href="https://publications.waset.org/abstracts/33035/a-new-approach-for-improving-accuracy-of-multi-label-stream-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33035.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">584</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4534</span> Crop Classification using Unmanned Aerial Vehicle Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iqra%20Yaseen">Iqra Yaseen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the well-known areas of computer science and engineering, image processing in the context of computer vision has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, available resources must be used and managed more effectively. Image processing is rapidly growing in the field of agriculture. Many applications have been developed using this approach for crop identification and classification, land and disease detection and for measuring other parameters of crop. Vegetation localization is the base of performing these task. Vegetation helps to identify the area where the crop is present. The productivity of the agriculture industry can be increased via image processing that is based upon Unmanned Aerial Vehicle photography and satellite. In this paper we use the machine learning techniques like Convolutional Neural Network, deep learning, image processing, classification, You Only Live Once to UAV imaging dataset to divide the crop into distinct groups and choose the best way to use it. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLO" title=" YOLO"> YOLO</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/157744/crop-classification-using-unmanned-aerial-vehicle-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4533</span> A General Framework for Knowledge Discovery from Echocardiographic and Natural Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Nandagopalan">S. Nandagopalan</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Pradeep"> N. Pradeep</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers namely physical storage, object identification, knowledge discovery, user level. Techniques such as active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, universal model for image retrieval, Bayesian method for classification, parallel algorithms for image segmentation, etc., were employed. Using the feature vector database that have been efficiently constructed, one can perform various data mining tasks like clustering, classification, etc. with efficient algorithms along with image mining given a query image. All these facilities are included in the framework that is supported by state-of-the-art user interface (UI). The algorithms were tested with actual patient data and Coral image database and the results show that their performance is better than the results reported already. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20contour" title="active contour">active contour</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian" title=" Bayesian"> Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=echocardiographic%20image" title=" echocardiographic image"> echocardiographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20vector" title=" feature vector"> feature vector</a> </p> <a href="https://publications.waset.org/abstracts/42868/a-general-framework-for-knowledge-discovery-from-echocardiographic-and-natural-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42868.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">445</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4532</span> A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Nandagopalan">S. Nandagopalan</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Pradeep"> N. Pradeep</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers namely physical storage, object identification, knowledge discovery, user level. Techniques such as active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, universal model for image retrieval, Bayesian method for classification, parallel algorithms for image segmentation, etc., were employed. Using the feature vector database that have been efficiently constructed, one can perform various data mining tasks like clustering, classification, etc. with efficient algorithms along with image mining given a query image. All these facilities are included in the framework that is supported by state-of-the-art user interface (UI). The algorithms were tested with actual patient data and Coral image database and the results show that their performance is better than the results reported already. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20contour" title="active contour">active contour</a>, <a href="https://publications.waset.org/abstracts/search?q=bayesian" title=" bayesian"> bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=echocardiographic%20image" title=" echocardiographic image"> echocardiographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20vector" title=" feature vector"> feature vector</a> </p> <a href="https://publications.waset.org/abstracts/42632/a-general-framework-for-knowledge-discovery-using-high-performance-machine-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4531</span> Enhanced Image Representation for Deep Belief Network Classification of Hyperspectral Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khitem%20Amiri">Khitem Amiri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Farah"> Mohamed Farah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification is a challenging task and is gaining lots of interest since it helps us to understand the content of images. Recently Deep Learning (DL) based methods gave very interesting results on several benchmarks. For Hyperspectral images (HSI), the application of DL techniques is still challenging due to the scarcity of labeled data and to the curse of dimensionality. Among other approaches, Deep Belief Network (DBN) based approaches gave a fair classification accuracy. In this paper, we address the problem of the curse of dimensionality by reducing the number of bands and replacing the HSI channels by the channels representing radiometric indices. Therefore, instead of using all the HSI bands, we compute the radiometric indices such as NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), etc, and we use the combination of these indices as input for the Deep Belief Network (DBN) based classification model. Thus, we keep almost all the pertinent spectral information while reducing considerably the size of the image. In order to test our image representation, we applied our method on several HSI datasets including the Indian pines dataset, Jasper Ridge data and it gave comparable results to the state of the art methods while reducing considerably the time of training and testing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20images" title="hyperspectral images">hyperspectral images</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20belief%20network" title=" deep belief network"> deep belief network</a>, <a href="https://publications.waset.org/abstracts/search?q=radiometric%20indices" title=" radiometric indices"> radiometric indices</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a> </p> <a href="https://publications.waset.org/abstracts/93458/enhanced-image-representation-for-deep-belief-network-classification-of-hyperspectral-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93458.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4530</span> Automatic Moment-Based Texture Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tudor%20Barbu">Tudor Barbu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using some unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validation indexes. From the resulted pixel classes one determines easily the desired texture regions of the image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=moment-based" title=" moment-based"> moment-based</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title=" texture analysis"> texture analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20classification" title=" automatic classification"> automatic classification</a>, <a href="https://publications.waset.org/abstracts/search?q=validation%20indexes" title=" validation indexes"> validation indexes</a> </p> <a href="https://publications.waset.org/abstracts/3065/automatic-moment-based-texture-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3065.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4529</span> An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ziad%20Abdallah">Ziad Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Oueidat"> Mohamad Oueidat</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20El-Zaart"> Ali El-Zaart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The big demand for image annotation and archiving in the web attracts the researchers to develop many algorithms for this application domain. The existing techniques for IMC have two drawbacks: The description of the elementary characteristics from the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP), which simultaneously handles these limitations. The algorithm uses the histogram of gradients as feature descriptor. It applies the Label Priority Power-set as multi-label transformation to solve the problem of label correlation. The experiment shows that the results of MIML-HOGLPP are better in terms of some of the evaluation metrics comparing with the two existing techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval%20system" title=" information retrieval system"> information retrieval system</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-label" title=" multi-label"> multi-label</a>, <a href="https://publications.waset.org/abstracts/search?q=problem%20transformation" title=" problem transformation"> problem transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20of%20gradients" title=" histogram of gradients"> histogram of gradients</a> </p> <a href="https://publications.waset.org/abstracts/66645/an-improvement-of-multi-label-image-classification-method-based-on-histogram-of-oriented-gradient" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4528</span> Optimizing Machine Learning Through Python Based Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Srinidhi.%20A">Srinidhi. A</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Twinkle%20Hareendran"> Twinkle Hareendran</a>, <a href="https://publications.waset.org/abstracts/search?q=Vriksha%20Prakash"> Vriksha Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper looks in into great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure a robust performance of models. Further, we will discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially includes the preprocessing techniques interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20applications" title=" machine learning applications"> machine learning applications</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a> </p> <a href="https://publications.waset.org/abstracts/193107/optimizing-machine-learning-through-python-based-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">13</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4527</span> Evaluation of Robust Feature Descriptors for Texture Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jia-Hong%20Lee">Jia-Hong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mei-Yi%20Wu"> Mei-Yi Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hsien-Tsung%20Kuo"> Hsien-Tsung Kuo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scaling-invariant image features such as SIFT, SURF and ORB have been successfully used in image retrieval and object recognition. In this paper, we have tried to compare the performance for texture classification using these feature descriptors with k-means clustering. Different classifiers including K-NN, Naive Bayes, Back Propagation Neural Network , Decision Tree and Kstar were applied in three texture image sets - UIUCTex, KTH-TIPS and Brodatz, respectively. Experimental results reveal SIFTS as the best average accuracy rate holder in UIUCTex, KTH-TIPS and SURF is advantaged in Brodatz texture set. BP neuro network works best in the test set classification among all used classifiers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20classification" title="texture classification">texture classification</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20descriptor" title=" texture descriptor"> texture descriptor</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF" title=" SURF"> SURF</a>, <a href="https://publications.waset.org/abstracts/search?q=ORB" title=" ORB"> ORB</a> </p> <a href="https://publications.waset.org/abstracts/11046/evaluation-of-robust-feature-descriptors-for-texture-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11046.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4526</span> Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khairani%20Binti%20Supyan">Khairani Binti Supyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatimah%20Khalid"> Fatimah Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Mas%20Rina%20Mustaffa"> Mas Rina Mustaffa</a>, <a href="https://publications.waset.org/abstracts/search?q=Azreen%20Bin%20Azman"> Azreen Bin Azman</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirul%20Azuani%20Romle"> Amirul Azuani Romle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from various factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning the pre-trained DNNs model. This paper explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset consisted of 6 perennial plant species of 13481 images under various environmental conditions that were used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. Despite ResNet50's superior performance, both VGG16 and InceptionV3 achieved commendable accuracy of 99.67% and 99.37%, respectively. 
The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=perennial%20plants" title="perennial plants">perennial plants</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fine-tuning" title=" fine-tuning"> fine-tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=InceptionV3" title=" InceptionV3"> InceptionV3</a> </p> <a href="https://publications.waset.org/abstracts/182850/optimizing-perennial-plants-image-classification-by-fine-tuning-deep-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182850.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4525</span> INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jianhong%20Xiang">Jianhong Xiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Sun"> Rui Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Linyu%20Wang"> Linyu Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, owing to the continuous improvement of deep learning theory, the Convolutional Neural Network (CNN) has shown superior performance in research on Hyperspectral Image (HSI) classification. Since HSI carries rich spatial-spectral information, utilizing only a single-dimensional or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy on HSI. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose to use multiple 3D convolutional kernels to extract multi-scale feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy due to network degradation. Secondly, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and reduce the complexity of the 3D model.
Due to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification process. The experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and performs the classification task excellently. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=INRAM-3DCNN" title="INRAM-3DCNN">INRAM-3DCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=residual" title=" residual"> residual</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20attention" title=" channel attention"> channel attention</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20image%20classification" title=" hyperspectral image classification"> hyperspectral image classification</a> </p> <a href="https://publications.waset.org/abstracts/177814/inram-3dcnn-multi-scale-convolutional-neural-network-based-on-residual-and-attention-module-combined-with-multilayer-perceptron-for-hyperspectral-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177814.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4524</span> Isolation and Classification of Red Blood Cells in Anemic Microscopic Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jameela%20Ali%20Alkrimi">Jameela Ali Alkrimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Rahim%20Ahmad"> Abdul Rahim Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Azizah%20Suliman"> Azizah Suliman</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. George</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. A lack of RBCs is a condition characterized by a lower-than-normal hemoglobin level; this condition is referred to as 'anemia'. In this study, software was developed to isolate RBCs and to classify anemic RBCs in microscopic images using a machine learning approach. Several features of RBCs were extracted using image processing algorithms, including principal component analysis (PCA). With the proposed method, RBCs were isolated in 34 seconds from an image containing 18 to 27 cells. We also proposed that PCA could be performed to increase the speed and efficiency of classification. Our classifier algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and artificial neural network (ANN), respectively. Classification was evaluated using sensitivity, specificity, and kappa statistical parameters, all of which were high. In conclusion, the classification results were obtained in a shorter time and more efficiently when PCA was used.
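<p class="card-text">The PCA-before-classification idea can be sketched in a few lines, assuming scikit-learn; the feature matrix below is synthetic and merely stands in for the extracted RBC features.</p> <pre><code class="language-python">
# PCA before classification, as the abstract suggests for speed/efficiency.
# The feature matrix is synthetic and stands in for extracted RBC features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # 64 features per segmented cell
y = rng.integers(0, 2, size=200)    # normal vs anemic (synthetic labels)

clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y)
print(clf.score(X, y))
</code></pre>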
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=red%20blood%20cells" title="red blood cells">red blood cells</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-processing%20image%20algorithms" title=" pre-processing image algorithms"> pre-processing image algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=classification%20algorithms" title=" classification algorithms"> classification algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis%20PCA" title=" principal component analysis PCA"> principal component analysis PCA</a>, <a href="https://publications.waset.org/abstracts/search?q=confusion%20matrix" title=" confusion matrix"> confusion matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=kappa%20statistical%20parameters" title=" kappa statistical parameters"> kappa statistical parameters</a>, <a href="https://publications.waset.org/abstracts/search?q=ROC" title=" ROC"> ROC</a> </p> <a href="https://publications.waset.org/abstracts/13133/isolation-and-classification-of-red-blood-cells-in-anemic-microscopic-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">405</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=151">151</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=152">152</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20classification&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div 
style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { 
/*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>