CINXE.COM
Search results for: image classification
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: image classification</title> <meta name="description" content="Search results for: image classification"> <meta name="keywords" content="image classification"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="image classification" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image classification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2463</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image classification</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2463</span> Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hossein%20Nezamabadi-pour">Hossein Nezamabadi-pour</a>, <a href="https://publications.waset.org/search?q=Saeid%20Saryazdi"> Saeid Saryazdi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we present a new and effective image indexing technique that extracts features directly from DCT domain. Our proposed approach is an object-based image indexing. For each block of size 8*8 in DCT domain a feature vector is extracted. Then, feature vectors of all blocks of image using a k-means algorithm is clustered into groups. Each cluster represents a special object of the image. Then we select some clusters that have largest members after clustering. The centroids of the selected clusters are taken as image feature vectors and indexed into the database. Also, we propose an approach for using of proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Object-based%20image%20retrieval" title="Object-based image retrieval">Object-based image retrieval</a>, <a href="https://publications.waset.org/search?q=DCT%20domain" title=" DCT domain"> DCT domain</a>, <a href="https://publications.waset.org/search?q=Image%20indexing" title=" Image indexing"> Image indexing</a>, <a href="https://publications.waset.org/search?q=Image%20classification." title=" Image classification."> Image classification.</a> </p> <a href="https://publications.waset.org/4766/object-based-image-indexing-and-retrieval-in-dct-domain-using-clustering-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4766/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4766/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4766/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4766/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4766/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4766/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4766/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4766/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4766/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4766/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4766.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2025</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2462</span> An Amalgam Approach for DICOM Image Classification and Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=J.%20Umamaheswari">J. Umamaheswari</a>, <a href="https://publications.waset.org/search?q=G.%20Radhamani"> G. Radhamani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper describes about the process of recognition and classification of brain images such as normal and abnormal based on PSO-SVM. Image Classification is becoming more important for medical diagnosis process. In medical area especially for diagnosis the abnormality of the patient is classified, which plays a great role for the doctors to diagnosis the patient according to the severeness of the diseases. In case of DICOM images it is very tough for optimal recognition and early detection of diseases. Our work focuses on recognition and classification of DICOM image based on collective approach of digital image processing. For optimal recognition and classification Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Support Vector Machine (SVM) are used. The collective approach by using PSO-SVM gives high approximation capability and much faster convergence.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Recognition" title="Recognition">Recognition</a>, <a href="https://publications.waset.org/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/search?q=Relaxed%20Median%20Filter" title=" Relaxed Median Filter"> Relaxed Median Filter</a>, <a href="https://publications.waset.org/search?q=Adaptive%20thresholding" title=" Adaptive thresholding"> Adaptive thresholding</a>, <a href="https://publications.waset.org/search?q=clustering%20and%20Neural%20Networks" title=" clustering and Neural Networks"> clustering and Neural Networks</a> </p> <a href="https://publications.waset.org/13370/an-amalgam-approach-for-dicom-image-classification-and-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13370/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13370/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13370/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13370/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13370/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13370/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13370/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13370/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13370/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13370/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13370.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2259</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2461</span> Fuzzy Inference System Based Unhealthy Region Classification in Plant Leaf Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=K.%20Muthukannan">K. Muthukannan</a>, <a href="https://publications.waset.org/search?q=P.%20Latha"> P. Latha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In addition to environmental parameters like rain, temperature diseases on crop is a major factor which affects production quality & quantity of crop yield. Hence disease management is a key issue in agriculture. For the management of disease, it needs to be detected at early stage. So, treat it properly & control spread of the disease. Now a day, it is possible to use the images of diseased leaf to detect the type of disease by using image processing techniques. This can be achieved by extracting features from the images which can be further used with classification algorithms or content based image retrieval systems. In this paper, color image is used to extract the features such as mean and standard deviation after the process of region cropping. The selected features are taken from the cropped image with different image size samples. Then, the extracted features are taken in to the account for classification using Fuzzy Inference System (FIS). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Cropping" title="Image Cropping">Image Cropping</a>, <a href="https://publications.waset.org/search?q=Classification" title=" Classification"> Classification</a>, <a href="https://publications.waset.org/search?q=Color" title=" Color"> Color</a>, <a href="https://publications.waset.org/search?q=Fuzzy%20Rule" title=" Fuzzy Rule"> Fuzzy Rule</a>, <a href="https://publications.waset.org/search?q=Feature%20Extraction." title=" Feature Extraction."> Feature Extraction.</a> </p> <a href="https://publications.waset.org/10001823/fuzzy-inference-system-based-unhealthy-region-classification-in-plant-leaf-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10001823/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10001823/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10001823/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10001823/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10001823/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10001823/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10001823/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10001823/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10001823/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10001823/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10001823.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1889</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2460</span> Automatic Fingerprint Classification Using Graph Theory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mana%20Tarjoman">Mana Tarjoman</a>, <a href="https://publications.waset.org/search?q=Shaghayegh%20Zarei"> Shaghayegh Zarei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Using efficient classification methods is necessary for automatic fingerprint recognition system. This paper introduces a new structural approach to fingerprint classification by using the directional image of fingerprints to increase the number of subclasses. In this method, the directional image of fingerprints is segmented into regions consisting of pixels with the same direction. Afterwards the relational graph to the segmented image is constructed and according to it, the super graph including prominent information of this graph is formed. Ultimately we apply a matching technique to compare obtained graph with the model graphs in order to classify fingerprints by using cost function. Increasing the number of subclasses with acceptable accuracy in classification and faster processing in fingerprints recognition, makes this system superior.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Classification" title="Classification">Classification</a>, <a href="https://publications.waset.org/search?q=Directional%20image" title=" Directional image"> Directional image</a>, <a href="https://publications.waset.org/search?q=Fingerprint" title=" Fingerprint"> Fingerprint</a>, <a href="https://publications.waset.org/search?q=Graph" title=" Graph"> Graph</a>, <a href="https://publications.waset.org/search?q=Super%20graph." title=" Super graph."> Super graph.</a> </p> <a href="https://publications.waset.org/6400/automatic-fingerprint-classification-using-graph-theory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6400/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6400/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6400/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6400/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6400/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6400/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6400/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6400/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6400/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6400/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6400.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3634</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2459</span> Selection of Appropriate Classification Technique for Lithological Mapping of Gali Jagir Area, Pakistan </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Khunsa%20Fatima">Khunsa Fatima</a>, <a href="https://publications.waset.org/search?q=Umar%20K.%20Khattak"> Umar K. Khattak</a>, <a href="https://publications.waset.org/search?q=Allah%20Bakhsh%20Kausar"> Allah Bakhsh Kausar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Satellite images interpretation and analysis assist geologists by providing valuable information about geology and minerals of an area to be surveyed. A test site in Fatejang of district Attock has been studied using Landsat ETM+ and ASTER satellite images for lithological mapping. Five different supervised image classification techniques namely maximum likelihood, parallelepiped, minimum distance to mean, mahalanobis distance and spectral angle mapper have been performed upon both satellite data images to find out the suitable classification technique for lithological mapping in the study area. Results of these five image classification techniques were compared with the geological map produced by Geological Survey of Pakistan. Result of maximum likelihood classification technique applied on ASTER satellite image has highest correlation of 0.66 with the geological map. Field observations and XRD spectra of field samples also verified the results. A lithological map was then prepared based on the maximum likelihood classification of ASTER satellite image.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=ASTER" title="ASTER">ASTER</a>, <a href="https://publications.waset.org/search?q=Landsat-ETM%2B" title=" Landsat-ETM+"> Landsat-ETM+</a>, <a href="https://publications.waset.org/search?q=Satellite" title=" Satellite"> Satellite</a>, <a href="https://publications.waset.org/search?q=Image%20classification." title=" Image classification."> Image classification.</a> </p> <a href="https://publications.waset.org/9996817/selection-of-appropriate-classification-technique-for-lithological-mapping-of-gali-jagir-area-pakistan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9996817/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9996817/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9996817/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9996817/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9996817/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9996817/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9996817/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9996817/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9996817/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9996817/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9996817.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2920</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2458</span> Using Self Organizing Feature Maps for Classification in RGB Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hassan%20Masoumi">Hassan Masoumi</a>, <a href="https://publications.waset.org/search?q=Ahad%20Salimi"> Ahad Salimi</a>, <a href="https://publications.waset.org/search?q=Nazanin%20Barhemmat"> Nazanin Barhemmat</a>, <a href="https://publications.waset.org/search?q=Babak%20Gholami"> Babak Gholami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity, multi input and output mapping characteristics. In fact, most feedforward networks with nonlinear nodal functions have been proved to be universal approximates. In this paper, we propose a new supervised method for color image classification based on selforganizing feature maps (SOFM). This algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system entered into RGB image. Experiments with simulated data showed that separability of classes increased when increasing training time. In additional, the result shows proposed algorithms are effective for color image classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Classification" title="Classification">Classification</a>, <a href="https://publications.waset.org/search?q=SOFM" title=" SOFM"> SOFM</a>, <a href="https://publications.waset.org/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/search?q=RGB%20images." title=" RGB images."> RGB images.</a> </p> <a href="https://publications.waset.org/10002035/using-self-organizing-feature-maps-for-classification-in-rgb-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002035/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002035/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002035/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002035/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002035/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002035/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002035/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002035/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002035/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002035/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002035.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2319</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2457</span> Color Image Segmentation Using SVM Pixel Classification Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=K.%20Sakthivel">K. Sakthivel</a>, <a href="https://publications.waset.org/search?q=R.%20Nallusamy"> R. Nallusamy</a>, <a href="https://publications.waset.org/search?q=C.%20Kavitha"> C. Kavitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The goal of image segmentation is to cluster pixels into salient image regions. Segmentation could be used for object recognition, occlusion boundary estimation within motion or stereo systems, image compression, image editing, or image database lookup. In this paper, we present a color image segmentation using support vector machine (SVM) pixel classification. Firstly, the pixel level color and texture features of the image are extracted and they are used as input to the SVM classifier. These features are extracted using the homogeneity model and Gabor Filter. With the extracted pixel level features, the SVM Classifier is trained by using FCM (Fuzzy C-Means).The image segmentation takes the advantage of both the pixel level information of the image and also the ability of the SVM Classifier. The Experiments show that the proposed method has a very good segmentation result and a better efficiency, increases the quality of the image segmentation compared with the other segmentation methods proposed in the literature.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Segmentation" title="Image Segmentation">Image Segmentation</a>, <a href="https://publications.waset.org/search?q=Support%20Vector%20Machine" title=" Support Vector Machine"> Support Vector Machine</a>, <a href="https://publications.waset.org/search?q=Fuzzy%20C%E2%80%93Means" title=" Fuzzy C–Means"> Fuzzy C–Means</a>, <a href="https://publications.waset.org/search?q=Pixel%20Feature" title=" Pixel Feature"> Pixel Feature</a>, <a href="https://publications.waset.org/search?q=Texture%20Feature" title=" Texture Feature"> Texture Feature</a>, <a href="https://publications.waset.org/search?q=Homogeneity%0D%0Amodel" title=" Homogeneity model"> Homogeneity model</a>, <a href="https://publications.waset.org/search?q=Gabor%20Filter." title=" Gabor Filter."> Gabor Filter.</a> </p> <a href="https://publications.waset.org/10000781/color-image-segmentation-using-svm-pixel-classification-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000781/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000781/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000781/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000781/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000781/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000781/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000781/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000781/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000781/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000781/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000781.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">6747</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2456</span> An Efficient Classification Method for Inverse Synthetic Aperture Radar Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sang-Hong%20Park">Sang-Hong Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes an efficient method to classify inverse synthetic aperture (ISAR) images. Because ISAR images can be translated and rotated in the 2-dimensional image place, invariance to the two factors is indispensable for successful classification. The proposed method achieves invariance to translation and rotation of ISAR images using a combination of two-dimensional Fourier transform, polar mapping and correlation-based alignment of the image. Classification is conducted using a simple matching score classifier. In simulations using the real ISAR images of five scaled models measured in a compact range, the proposed method yields classification ratios higher than 97 %. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Radar" title="Radar">Radar</a>, <a href="https://publications.waset.org/search?q=ISAR" title=" ISAR"> ISAR</a>, <a href="https://publications.waset.org/search?q=radar%20target%20classification" title=" radar target classification"> radar target classification</a>, <a href="https://publications.waset.org/search?q=radar%0Aimaging." title=" radar imaging."> radar imaging.</a> </p> <a href="https://publications.waset.org/8895/an-efficient-classification-method-for-inverse-synthetic-aperture-radar-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8895/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8895/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8895/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8895/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8895/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8895/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8895/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8895/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8895/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8895/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8895.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2194</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2455</span> A Review on Image Segmentation Techniques and Performance Measures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=David%20Libouga%20Li%20Gwet">David Libouga Li Gwet</a>, <a href="https://publications.waset.org/search?q=Marius%20Otesteanu"> Marius Otesteanu</a>, <a href="https://publications.waset.org/search?q=Ideal%20Oscar%20Libouga"> Ideal Oscar Libouga</a>, <a href="https://publications.waset.org/search?q=Laurent%20Bitjoka"> Laurent Bitjoka</a>, <a href="https://publications.waset.org/search?q=Gheorghe%20D.%20Popa"> Gheorghe D. Popa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is a method to extract regions of interest from an image. It remains a fundamental problem in computer vision. The increasing diversity and the complexity of segmentation algorithms have led us firstly, to make a review and classify segmentation techniques, secondly to identify the most used measures of segmentation performance and thirdly, discuss deeply on segmentation philosophy in order to help the choice of adequate segmentation techniques for some applications. To justify the relevance of our analysis, recent algorithms of segmentation are presented through the proposed classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Classification" title="Classification">Classification</a>, <a href="https://publications.waset.org/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/search?q=measures%20of%20performance." title=" measures of performance."> measures of performance.</a> </p> <a href="https://publications.waset.org/10009909/a-review-on-image-segmentation-techniques-and-performance-measures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009909/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009909/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009909/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009909/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009909/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009909/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009909/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009909/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009909/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009909/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2052</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2454</span> A New Method for Image Classification Based on Multi-level Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Samy%20Sadek">Samy Sadek</a>, <a href="https://publications.waset.org/search?q=Ayoub%20Al-Hamadi"> Ayoub Al-Hamadi</a>, <a href="https://publications.waset.org/search?q=Bernd%20Michaelis"> Bernd Michaelis</a>, <a href="https://publications.waset.org/search?q=Usama%20Sayed"> Usama Sayed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a supervised method for color image classification based on a multilevel sigmoidal neural network (MSNN) model. In this method, images are classified into five categories, i.e., “Car", “Building", “Mountain", “Farm" and “Coast". This classification is performed without any segmentation processes. To verify the learning capabilities of the proposed method, we compare our MSNN model with the traditional Sigmoidal Neural Network (SNN) model. Results of comparison have shown that the MSNN model performs better than the traditional SNN model in the context of training run time and classification rate. Both color moments and multi-level wavelets decomposition technique are used to extract features from images. The proposed method has been tested on a variety of real and synthetic images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20classification" title="Image classification">Image classification</a>, <a href="https://publications.waset.org/search?q=multi-level%20neural%20networks" title=" multi-level neural networks"> multi-level neural networks</a>, <a href="https://publications.waset.org/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/search?q=wavelets%20decomposition." title=" wavelets decomposition."> wavelets decomposition.</a> </p> <a href="https://publications.waset.org/9822/a-new-method-for-image-classification-based-on-multi-level-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9822/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9822/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9822/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9822/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9822/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9822/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9822/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9822/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9822/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9822/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1648</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2453</span> Moment Invariants in Image Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jan%20Flusser">Jan Flusser</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to present a survey of object recognition/classification methods based on image moments. We review various types of moments (geometric moments, complex moments) and moment-based invariants with respect to various image degradations and distortions (rotation, scaling, affine transform, image blurring, etc.) which can be used as shape descriptors for classification. We explain a general theory how to construct these invariants and show also a few of them in explicit forms. We review efficient numerical algorithms that can be used for moment computation and demonstrate practical examples of using moment invariants in real applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Object%20recognition" title="Object recognition">Object recognition</a>, <a href="https://publications.waset.org/search?q=degraded%20images" title=" degraded images"> degraded images</a>, <a href="https://publications.waset.org/search?q=moments" title=" moments"> moments</a>, <a href="https://publications.waset.org/search?q=moment%20invariants" title=" moment invariants"> moment invariants</a>, <a href="https://publications.waset.org/search?q=geometric%20invariants" title=" geometric invariants"> geometric invariants</a>, <a href="https://publications.waset.org/search?q=invariants%20to%20convolution" title=" invariants to convolution"> invariants to convolution</a>, <a href="https://publications.waset.org/search?q=moment%20computation." title=" moment computation."> moment computation.</a> </p> <a href="https://publications.waset.org/8899/moment-invariants-in-image-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8899/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8899/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8899/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8899/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8899/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8899/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8899/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8899/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8899/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8899/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8899.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3923</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2452</span> Data Oriented Model of Image: as a Framework for Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20Habibizad%20Navin">A. Habibizad Navin</a>, <a href="https://publications.waset.org/search?q=A.%20Sadighi"> A. Sadighi</a>, <a href="https://publications.waset.org/search?q=M.%20Naghian%20Fesharaki"> M. Naghian Fesharaki</a>, <a href="https://publications.waset.org/search?q=M.%20Mirnia"> M. Mirnia</a>, <a href="https://publications.waset.org/search?q=M.%20Teshnelab"> M. Teshnelab</a>, <a href="https://publications.waset.org/search?q=R.%20Keshmiri"> R. Keshmiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a new data oriented model of image. Then a representation of it, ADBT, is introduced. The ability of ADBT is clustering, segmentation, measuring similarity of images etc, with desired precision and corresponding speed.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Data%20oriented%20modelling" title="Data oriented modelling">Data oriented modelling</a>, <a href="https://publications.waset.org/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/search?q=ADBT%20and%20image%20processing." title=" ADBT and image processing."> ADBT and image processing.</a> </p> <a href="https://publications.waset.org/97/data-oriented-model-of-image-as-a-framework-for-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/97/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/97/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/97/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/97/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/97/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/97/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/97/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/97/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/97/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/97/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/97.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1800</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2451</span> Image Classification and Accuracy Assessment Using the Confusion Matrix, Contingency Matrix, and Kappa Coefficient</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=F.%20F.%20Howard">F. F. Howard</a>, <a href="https://publications.waset.org/search?q=C.%20B.%20Boye"> C. B. Boye</a>, <a href="https://publications.waset.org/search?q=I.%20Yakubu"> I. Yakubu</a>, <a href="https://publications.waset.org/search?q=J.%20S.%20Y.%20Kuma"> J. S. Y. Kuma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>One of the ways that could be used for the production of land use and land cover maps by a procedure known as image classification is the use of the remote sensing technique. Numerous elements ought to be taken into consideration, including the availability of highly satisfactory Landsat imagery, secondary data and a precise classification process. The goal of this study was to classify and map the land use and land cover of the study area using remote sensing and Geospatial Information System (GIS) analysis. The classification was done using Landsat 8 satellite images acquired in December 2020 covering the study area. The Landsat image was downloaded from the USGS. The Landsat image with 30 m resolution was geo-referenced to the WGS_84 datum and Universal Transverse Mercator (UTM) Zone 30N coordinate projection system. A radiometric correction was applied to the image to reduce the noise in the image. This study consists of two sections: the Land Use/Land Cover (LULC) and Accuracy Assessments using the confusion and contingency matrix and the Kappa coefficient. The LULC classifications were vegetation (agriculture) (67.87%), water bodies (0.01%), mining areas (5.24%), forest (26.02%), and settlement (0.88%). The overall accuracy of 97.87% and the kappa coefficient (K) of 97.3% were obtained for the confusion matrix. While an overall accuracy of 95.7% and a Kappa coefficient of 0.947 were obtained for the contingency matrix, the kappa coefficients were rated as substantial; hence, the classified image is fit for further research.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Confusion%20Matrix" title="Confusion Matrix">Confusion Matrix</a>, <a href="https://publications.waset.org/search?q=contingency%20matrix" title=" contingency matrix"> contingency matrix</a>, <a href="https://publications.waset.org/search?q=kappa%20coefficient" title=" kappa coefficient"> kappa coefficient</a>, <a href="https://publications.waset.org/search?q=land%20used%2F%20land%20cover" title=" land used/ land cover"> land used/ land cover</a>, <a href="https://publications.waset.org/search?q=accuracy%20assessment." title=" accuracy assessment."> accuracy assessment.</a> </p> <a href="https://publications.waset.org/10013249/image-classification-and-accuracy-assessment-using-the-confusion-matrix-contingency-matrix-and-kappa-coefficient" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013249/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013249/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013249/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013249/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013249/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013249/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013249/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013249/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013249/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013249/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013249.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">254</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2450</span> Image Spam Detection Using Color Features and K-Nearest Neighbor Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=T.%20Kumaresan">T. Kumaresan</a>, <a href="https://publications.waset.org/search?q=S.%20Sanjushree"> S. Sanjushree</a>, <a href="https://publications.waset.org/search?q=C.%20Palanisamy"> C. Palanisamy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Image spam is a kind of email spam where the spam text is embedded with an image. It is a new spamming technique being used by spammers to send their messages to bulk of internet users. Spam email has become a big problem in the lives of internet users, causing time consumption and economic losses. The main objective of this paper is to detect the image spam by using histogram properties of an image. Though there are many techniques to automatically detect and avoid this problem, spammers employing new tricks to bypass those techniques, as a result those techniques are inefficient to detect the spam mails. In this paper we have proposed a new method to detect the image spam. Here the image features are extracted by using RGB histogram, HSV histogram and combination of both RGB and HSV histogram. Based on the optimized image feature set classification is done by using k- Nearest Neighbor(k-NN) algorithm. Experimental result shows that our method has achieved better accuracy. From the result it is known that combination of RGB and HSV histogram with k-NN algorithm gives the best accuracy in spam detection.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=File%20Type" title="File Type">File Type</a>, <a href="https://publications.waset.org/search?q=HSV%20Histogram" title=" HSV Histogram"> HSV Histogram</a>, <a href="https://publications.waset.org/search?q=k-NN" title=" k-NN"> k-NN</a>, <a href="https://publications.waset.org/search?q=RGB%20Histogram" title=" RGB Histogram"> RGB Histogram</a>, <a href="https://publications.waset.org/search?q=Spam%20Detection." title=" Spam Detection."> Spam Detection.</a> </p> <a href="https://publications.waset.org/10000193/image-spam-detection-using-color-features-and-k-nearest-neighbor-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000193/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000193/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000193/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000193/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000193/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000193/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000193/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000193/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000193/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000193/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000193.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2142</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2449</span> Hybrid Color-Texture Space for Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hassan%20El%20Maia">Hassan El Maia</a>, <a href="https://publications.waset.org/search?q=Ahmed%20Hammouch"> Ahmed Hammouch</a>, <a href="https://publications.waset.org/search?q=Driss%20Aboutajdine"> Driss Aboutajdine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This work presents an approach for the construction of a hybrid color-texture space by using mutual information. Feature extraction is done by the Laws filter with SVM (Support Vectors Machine) as a classifier. The classification is applied on the VisTex database and a SPOT HRV (XS) image representing two forest areas in the region of Rabat in Morocco. The result of classification obtained in the hybrid space is compared with the one obtained in the RGB color space.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Color" title="Color">Color</a>, <a href="https://publications.waset.org/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/search?q=laws%20filter" title=" laws filter"> laws filter</a>, <a href="https://publications.waset.org/search?q=mutual%20information" title=" mutual information"> mutual information</a>, <a href="https://publications.waset.org/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/search?q=hybrid%20space." title=" hybrid space."> hybrid space.</a> </p> <a href="https://publications.waset.org/8321/hybrid-color-texture-space-for-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8321/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8321/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8321/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8321/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8321/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8321/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8321/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8321/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8321/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8321/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8321.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1826</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2448</span> Automatic Moment-Based Texture Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tudor%20Barbu">Tudor Barbu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Then, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validation indexes. From the resulted pixel classes one determines easily the desired texture regions of the image.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20segmentation" title="Image segmentation">Image segmentation</a>, <a href="https://publications.waset.org/search?q=moment-based%20texture%20analysis" title=" moment-based texture analysis"> moment-based texture analysis</a>, <a href="https://publications.waset.org/search?q=automatic%20classification" title=" automatic classification"> automatic classification</a>, <a href="https://publications.waset.org/search?q=validity%20indexes." title=" validity indexes."> validity indexes.</a> </p> <a href="https://publications.waset.org/9996875/automatic-moment-based-texture-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9996875/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9996875/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9996875/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9996875/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9996875/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9996875/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9996875/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9996875/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9996875/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9996875/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9996875.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2379</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2447</span> Fuzzy Based Visual Texture Feature for Psoriasis Image Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=G.%20Murugeswari">G. Murugeswari</a>, <a href="https://publications.waset.org/search?q=A.%20Suruliandi"> A. Suruliandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper proposes a rotational invariant texture feature based on the roughness property of the image for psoriasis image analysis. In this work, we have applied this feature for image classification and segmentation. The fuzzy concept is employed to overcome the imprecision of roughness. Since the psoriasis lesion is modeled by a rough surface, the feature is extended for calculating the Psoriasis Area Severity Index value. For classification and segmentation, the Nearest Neighbor algorithm is applied. We have obtained promising results for identifying affected lesions by using the roughness index and severity level estimation.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fuzzy%20texture%20feature" title="Fuzzy texture feature">Fuzzy texture feature</a>, <a href="https://publications.waset.org/search?q=psoriasis" title=" psoriasis"> psoriasis</a>, <a href="https://publications.waset.org/search?q=roughness%20feature" title=" roughness feature"> roughness feature</a>, <a href="https://publications.waset.org/search?q=skin%20disease." title=" skin disease."> skin disease.</a> </p> <a href="https://publications.waset.org/10000824/fuzzy-based-visual-texture-feature-for-psoriasis-image-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000824/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000824/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000824/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000824/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000824/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000824/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000824/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000824/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000824/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000824/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2116</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2446</span> A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Nandagopalan">S. Nandagopalan</a>, <a href="https://publications.waset.org/search?q=N.%20Pradeep"> N. Pradeep</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers namely physical storage, object identification, knowledge discovery, user level. Techniques such as active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, universal model for image retrieval, Bayesian method for classification, parallel algorithms for image segmentation, etc., were employed. Using the feature vector database that have been efficiently constructed, one can perform various data mining tasks like clustering, classification, etc. with efficient algorithms along with image mining given a query image. All these facilities are included in the framework that is supported by state-of-the-art user interface (UI). The algorithms were tested with actual patient data and Coral image database and the results show that their performance is better than the results reported already. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Active%20Contour" title="Active Contour">Active Contour</a>, <a href="https://publications.waset.org/search?q=Bayesian" title=" Bayesian"> Bayesian</a>, <a href="https://publications.waset.org/search?q=Echocardiographic%0D%0Aimage" title=" Echocardiographic image"> Echocardiographic image</a>, <a href="https://publications.waset.org/search?q=Feature%20vector." title=" Feature vector."> Feature vector.</a> </p> <a href="https://publications.waset.org/10003112/a-general-framework-for-knowledge-discovery-using-high-performance-machine-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003112/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003112/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003112/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003112/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003112/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003112/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003112/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003112/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003112/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003112/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003112.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1713</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2445</span> An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ziad%20Abdallah">Ziad Abdallah</a>, <a href="https://publications.waset.org/search?q=Mohamad%20Oueidat"> Mohamad Oueidat</a>, <a href="https://publications.waset.org/search?q=Ali%20El-Zaart"> Ali El-Zaart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The big demand for image annotation and archiving in the web attracts the researchers to develop many algorithms for this application domain. The existing techniques for IMC have two drawbacks: The description of the elementary characteristics from the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP), which simultaneously handles these limitations. The algorithm uses the histogram of gradients as feature descriptor. It applies the Label Priority Power-set as multi-label transformation to solve the problem of label correlation. The experiment shows that the results of MIML-HOGLPP are better in terms of some of the evaluation metrics comparing with the two existing techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Data%20mining" title="Data mining">Data mining</a>, <a href="https://publications.waset.org/search?q=information%20retrieval%20system" title=" information retrieval system"> information retrieval system</a>, <a href="https://publications.waset.org/search?q=multi-label" title=" multi-label"> multi-label</a>, <a href="https://publications.waset.org/search?q=problem%20transformation" title=" problem transformation"> problem transformation</a>, <a href="https://publications.waset.org/search?q=histogram%20of%20gradients." title=" histogram of gradients."> histogram of gradients.</a> </p> <a href="https://publications.waset.org/10006395/an-improvement-of-multi-label-image-classification-method-based-on-histogram-of-oriented-gradient" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10006395/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10006395/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10006395/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10006395/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10006395/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10006395/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10006395/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10006395/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10006395/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10006395/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10006395.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1316</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2444</span> Input Textural Feature Selection By Mutual Information For Multispectral Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mounir%20Ait%20kerroum">Mounir Ait kerroum</a>, <a href="https://publications.waset.org/search?q=Ahmed%20Hammouch"> Ahmed Hammouch</a>, <a href="https://publications.waset.org/search?q=Driss%20Aboutajdine"> Driss Aboutajdine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture information plays increasingly an important role in remotely sensed imagery classification and many pattern recognition applications. However, the selection of relevant textural features to improve this classification accuracy is not a straightforward task. This work investigates the effectiveness of two Mutual Information Feature Selector (MIFS) algorithms to select salient textural features that contain highly discriminatory information for multispectral imagery classification. The input candidate features are extracted from a SPOT High Resolution Visible(HRV) image using Wavelet Transform (WT) at levels (l = 1,2). The experimental results show that the selected textural features according to MIFS algorithms make the largest contribution to improve the classification accuracy than classical approaches such as Principal Components Analysis (PCA) and Linear Discriminant Analysis (LDA). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Feature%20Selection" title="Feature Selection">Feature Selection</a>, <a href="https://publications.waset.org/search?q=Texture" title=" Texture"> Texture</a>, <a href="https://publications.waset.org/search?q=Mutual%20Information" title=" Mutual Information"> Mutual Information</a>, <a href="https://publications.waset.org/search?q=Wavelet%20Transform" title="Wavelet Transform">Wavelet Transform</a>, <a href="https://publications.waset.org/search?q=SVM%20classification" title=" SVM classification"> SVM classification</a>, <a href="https://publications.waset.org/search?q=SPOT%20Imagery." title=" SPOT Imagery."> SPOT Imagery.</a> </p> <a href="https://publications.waset.org/10231/input-textural-feature-selection-by-mutual-information-for-multispectral-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10231/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10231/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10231/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10231/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10231/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10231/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10231/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10231/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10231/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10231/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10231.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1554</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2443</span> An Improved k Nearest Neighbor Classifier Using Interestingness Measures for Medical Image Mining</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=J.%20Alamelu%20Mangai">J. Alamelu Mangai</a>, <a href="https://publications.waset.org/search?q=Satej%20Wagle"> Satej Wagle</a>, <a href="https://publications.waset.org/search?q=V.%20Santhosh%20Kumar"> V. Santhosh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p class="Abstract" style="text-indent:10.2pt">The exponential increase in the volume of medical image database has imposed new challenges to clinical routine in maintaining patient history, diagnosis, treatment and monitoring. With the advent of data mining and machine learning techniques it is possible to automate and/or assist physicians in clinical diagnosis. In this research a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization and classification. In the classification phase, the performance of the traditional kNN k nearest neighbor classifier is improved using a feature weighting scheme and a distance weighted voting instead of simple majority voting. Feature weights are calculated using the interestingness measures used in association rule mining. Experiments on the retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57 % to 92.85 %.<o:p></o:p></p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Medical%20Image%20Mining" title="Medical Image Mining">Medical Image Mining</a>, <a href="https://publications.waset.org/search?q=Data%20Mining" title=" Data Mining"> Data Mining</a>, <a href="https://publications.waset.org/search?q=Feature%20Weighting" title=" Feature Weighting"> Feature Weighting</a>, <a href="https://publications.waset.org/search?q=Association%20Rule%20Mining" title=" Association Rule Mining"> Association Rule Mining</a>, <a href="https://publications.waset.org/search?q=k%20nearest%20neighbor%20classifier." title=" k nearest neighbor classifier."> k nearest neighbor classifier.</a> </p> <a href="https://publications.waset.org/16638/an-improved-k-nearest-neighbor-classifier-using-interestingness-measures-for-medical-image-mining" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/16638/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/16638/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/16638/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/16638/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/16638/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/16638/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/16638/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/16638/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/16638/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/16638/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/16638.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3308</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2442</span> Evaluation of Robust Feature Descriptors for Texture Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jia-Hong%20Lee">Jia-Hong Lee</a>, <a href="https://publications.waset.org/search?q=Mei-Yi%20Wu"> Mei-Yi Wu</a>, <a href="https://publications.waset.org/search?q=Hsien-Tsung%20Kuo"> Hsien-Tsung Kuo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scaling-invariant image features such as SIFT, SURF and ORB have been successfully used in image retrieval and object recognition. In this paper, we have tried to compare the performance for texture classification using these feature descriptors with k-means clustering. Different classifiers including K-NN, Naive Bayes, Back Propagation Neural Network , Decision Tree and Kstar were applied in three texture image sets - UIUCTex, KTH-TIPS and Brodatz, respectively. Experimental results reveal SIFTS as the best average accuracy rate holder in UIUCTex, KTH-TIPS and SURF is advantaged in Brodatz texture set. BP neuro network works best in the test set classification among all used classifiers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Texture%20classification" title="Texture classification">Texture classification</a>, <a href="https://publications.waset.org/search?q=texture%20descriptor" title=" texture descriptor"> texture descriptor</a>, <a href="https://publications.waset.org/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/search?q=SURF" title=" SURF"> SURF</a>, <a href="https://publications.waset.org/search?q=ORB." title=" ORB."> ORB.</a> </p> <a href="https://publications.waset.org/10003623/evaluation-of-robust-feature-descriptors-for-texture-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003623/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003623/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003623/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003623/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003623/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003623/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003623/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003623/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003623/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003623/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003623.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1601</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2441</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method accounts for a convolutional layer capable of automatically learning correlation between neighbouring pixels. In the current form, Convolutional Neural Network (CNN) will learn features based on an image's content instead of the structural features of the image. The layer is particularly designed to subdue an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The paper was assessed on latest natural and computer generated images, and it was concluded that it performs better than the current state of the art methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20forensics" title="Image forensics">Image forensics</a>, <a href="https://publications.waset.org/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/search?q=convolutional%20neural%20networks." title=" convolutional neural networks."> convolutional neural networks.</a> </p> <a href="https://publications.waset.org/10009593/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009593/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009593/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009593/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009593/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009593/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009593/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009593/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009593/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009593/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009593/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009593.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1175</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2440</span> Use of Segmentation and Color Adjustment for Skin Tone Classification in Dermatological Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=F.%20Duarte">F. Duarte</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The work aims to evaluate the use of classical image processing methodologies towards skin tone classification in dermatological images. The skin tone is an important attribute when considering several factor for skin cancer diagnosis. Currently, there is a lack of clear methodologies to classify the skin tone based only on the dermatological image. In this work, a recent released dataset with the label for skin tone was used as reference for the evaluation of classical methodologies for segmentation and adjustment of color space for classification of skin tone in dermatological images. It was noticed that even though the classical methodologies can work fine for segmentation and color adjustment, classifying the skin tone without proper control of the acquisition of the sample images ended being very unreliable.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Segmentation" title="Segmentation">Segmentation</a>, <a href="https://publications.waset.org/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/search?q=skin%20tone" title=" skin tone"> skin tone</a>, <a href="https://publications.waset.org/search?q=Fitzpatrick." title=" Fitzpatrick."> Fitzpatrick.</a> </p> <a href="https://publications.waset.org/10013880/use-of-segmentation-and-color-adjustment-for-skin-tone-classification-in-dermatological-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013880/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013880/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013880/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013880/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013880/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013880/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013880/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013880/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013880/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013880/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013880.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">18</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2439</span> Evolving a Fuzzy Rule-Base for Image Segmentation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20Borji">A. Borji</a>, <a href="https://publications.waset.org/search?q=M.%20Hamidi"> M. Hamidi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A new method for color image segmentation using fuzzy logic is proposed in this paper. Our aim here is to automatically produce a fuzzy system for color classification and image segmentation with least number of rules and minimum error rate. Particle swarm optimization is a sub class of evolutionary algorithms that has been inspired from social behavior of fishes, bees, birds, etc, that live together in colonies. We use comprehensive learning particle swarm optimization (CLPSO) technique to find optimal fuzzy rules and membership functions because it discourages premature convergence. Here each particle of the swarm codes a set of fuzzy rules. During evolution, a population member tries to maximize a fitness criterion which is here high classification rate and small number of rules. Finally, particle with the highest fitness value is selected as the best set of fuzzy rules for image segmentation. Our results, using this method for soccer field image segmentation in Robocop contests shows 89% performance. Less computational load is needed when using this method compared with other methods like ANFIS, because it generates a smaller number of fuzzy rules. Large train dataset and its variety, makes the proposed method invariant to illumination noise <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Comprehensive%20learning%20Particle%20Swarmoptimization" title="Comprehensive learning Particle Swarmoptimization">Comprehensive learning Particle Swarmoptimization</a>, <a href="https://publications.waset.org/search?q=fuzzy%20classification." title=" fuzzy classification."> fuzzy classification.</a> </p> <a href="https://publications.waset.org/3386/evolving-a-fuzzy-rule-base-for-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3386/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3386/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3386/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3386/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3386/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3386/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3386/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3386/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3386/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3386/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1956</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2438</span> Dynamic Clustering using Particle Swarm Optimization with Application in Unsupervised Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mahamed%20G.H.%20Omran">Mahamed G.H. Omran</a>, <a href="https://publications.waset.org/search?q=Andries%20P%20Engelbrecht"> Andries P Engelbrecht</a>, <a href="https://publications.waset.org/search?q=Ayed%20Salman"> Ayed Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A new dynamic clustering approach (DCPSO), based on Particle Swarm Optimization, is proposed. This approach is applied to unsupervised image classification. The proposed approach automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference. The algorithm starts by partitioning the data set into a relatively large number of clusters to reduce the effects of initial conditions. Using binary particle swarm optimization the "best" number of clusters is selected. The centers of the chosen clusters is then refined via the Kmeans clustering algorithm. The experiments conducted show that the proposed approach generally found the "optimum" number of clusters on the tested images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Clustering%20Validation" title="Clustering Validation">Clustering Validation</a>, <a href="https://publications.waset.org/search?q=Particle%20Swarm%20Optimization" title=" Particle Swarm Optimization"> Particle Swarm Optimization</a>, <a href="https://publications.waset.org/search?q=Unsupervised%20Clustering" title=" Unsupervised Clustering"> Unsupervised Clustering</a>, <a href="https://publications.waset.org/search?q=Unsupervised%20Image%20Classification." title=" Unsupervised Image Classification."> Unsupervised Image Classification.</a> </p> <a href="https://publications.waset.org/11937/dynamic-clustering-using-particle-swarm-optimization-with-application-in-unsupervised-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11937/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11937/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11937/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11937/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11937/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11937/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11937/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11937/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11937/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11937/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11937.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2454</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2437</span> Evaluation of Classifiers Based On I2C Distance for Action Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Lei%20Zhang">Lei Zhang</a>, <a href="https://publications.waset.org/search?q=Tao%20Wang"> Tao Wang</a>, <a href="https://publications.waset.org/search?q=Xiantong%20Zhen"> Xiantong Zhen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Naive Bayes Nearest Neighbor (NBNN) and its variants, i,e., local NBNN and the NBNN kernels, are local feature-based classifiers that have achieved impressive performance in image classification. By exploiting instance-to-class (I2C) distances (instance means image/video in image/video classification), they avoid quantization errors of local image descriptors in the bag of words (BoW) model. However, the performances of NBNN, local NBNN and the NBNN kernels have not been validated on video analysis. In this paper, we introduce these three classifiers into human action recognition and conduct comprehensive experiments on the benchmark KTH and the realistic HMDB datasets. The results shows that those I2C based classifiers consistently outperform the SVM classifier with the BoW model.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Instance-to-class%20distance" title="Instance-to-class distance">Instance-to-class distance</a>, <a href="https://publications.waset.org/search?q=NBNN" title=" NBNN"> NBNN</a>, <a href="https://publications.waset.org/search?q=Local%20NBNN" title=" Local NBNN"> Local NBNN</a>, <a href="https://publications.waset.org/search?q=NBNN%20kernel." title=" NBNN kernel."> NBNN kernel.</a> </p> <a href="https://publications.waset.org/12474/evaluation-of-classifiers-based-on-i2c-distance-for-action-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12474/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12474/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12474/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12474/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12474/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12474/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12474/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12474/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12474/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12474/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1659</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2436</span> Image Segmentation Using 2-D Histogram in RGB Color Space in Digital Libraries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a href="https://publications.waset.org/search?q=Ouanan%20Mohammed"> Ouanan Mohammed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an unsupervised color image segmentation method. It is based on a hierarchical analysis of 2-D histogram in RGB color space. This histogram minimizes storage space of images and thus facilitates the operations between them. The improved segmentation approach shows a better identification of objects in a color image and, at the same time, the system is fast. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20segmentation" title="Image segmentation">Image segmentation</a>, <a href="https://publications.waset.org/search?q=hierarchical%20analysis" title=" hierarchical analysis"> hierarchical analysis</a>, <a href="https://publications.waset.org/search?q=2-D%20histogram" title=" 2-D histogram"> 2-D histogram</a>, <a href="https://publications.waset.org/search?q=Classification." title=" Classification."> Classification.</a> </p> <a href="https://publications.waset.org/10003798/image-segmentation-using-2-d-histogram-in-rgb-color-space-in-digital-libraries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003798/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003798/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003798/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003798/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003798/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003798/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003798/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003798/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003798/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003798/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003798.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1626</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2435</span> ANN-Based Classification of Indirect Immuno Fluorescence Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=P.%20Soda">P. Soda</a>, <a href="https://publications.waset.org/search?q=G.Iannello"> G.Iannello</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper we address the issue of classifying the fluorescent intensity of a sample in Indirect Immuno-Fluorescence (IIF). Since IIF is a subjective, semi-quantitative test in its very nature, we discuss a strategy to reliably label the image data set by using the diagnoses performed by different physicians. Then, we discuss image pre-processing, feature extraction and selection. Finally, we propose two ANN-based classifiers that can separate intrinsically dubious samples and whose error tolerance can be flexibly set. Measured performance shows error rates less than 1%, which candidates the method to be used in daily medical practice either to perform pre-selection of cases to be examined, or to act as a second reader.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Artificial%20neural%20networks" title="Artificial neural networks">Artificial neural networks</a>, <a href="https://publications.waset.org/search?q=computer%20aided%20diagnosis" title=" computer aided diagnosis"> computer aided diagnosis</a>, <a href="https://publications.waset.org/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/search?q=indirect%20immuno-fluorescence" title=" indirect immuno-fluorescence"> indirect immuno-fluorescence</a>, <a href="https://publications.waset.org/search?q=pattern%20recognition." title=" pattern recognition."> pattern recognition.</a> </p> <a href="https://publications.waset.org/8631/ann-based-classification-of-indirect-immuno-fluorescence-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8631/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8631/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8631/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8631/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8631/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8631/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8631/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8631/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8631/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8631/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8631.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1569</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2434</span> Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mrs.K.Kavitha">Mrs.K.Kavitha</a>, <a href="https://publications.waset.org/search?q=S.Arivazhagan"> S.Arivazhagan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A spatial classification technique incorporating a State of Art Feature Extraction algorithm is proposed in this paper for classifying a heterogeneous classes present in hyper spectral images. The classification accuracy can be improved if and only if both the feature extraction and classifier selection are proper. As the classes in the hyper spectral images are assumed to have different textures, textural classification is entertained. Run Length feature extraction is entailed along with the Principal Components and Independent Components. A Hyperspectral Image of Indiana Site taken by AVIRIS is inducted for the experiment. Among the original 220 bands, a subset of 120 bands is selected. Gray Level Run Length Matrix (GLRLM) is calculated for the selected forty bands. From GLRLMs the Run Length features for individual pixels are calculated. The Principle Components are calculated for other forty bands. Independent Components are calculated for next forty bands. As Principal & Independent Components have the ability to represent the textural content of pixels, they are treated as features. The summation of Run Length features, Principal Components, and Independent Components forms the Combined Features which are used for classification. SVM with Binary Hierarchical Tree is used to classify the hyper spectral image. Results are validated with ground truth and accuracies are calculated.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Multi-class" title="Multi-class">Multi-class</a>, <a href="https://publications.waset.org/search?q=Run%20Length%20features" title=" Run Length features"> Run Length features</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/search?q=ICA" title=" ICA"> ICA</a>, <a href="https://publications.waset.org/search?q=classification%20and%20Support%20Vector%20Machines." title=" classification and Support Vector Machines."> classification and Support Vector Machines.</a> </p> <a href="https://publications.waset.org/11395/combined-feature-based-hyperspectral-image-classification-technique-using-support-vector-machines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11395/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11395/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11395/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11395/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11395/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11395/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11395/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11395/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11395/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11395/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11395.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1523</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=82">82</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=83">83</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20classification&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>