CINXE.COM
Search results for: color feature extraction
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: color feature extraction</title> <meta name="description" content="Search results for: color feature extraction"> <meta name="keywords" content="color feature extraction"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="color feature extraction" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="color feature extraction"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4178</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: color feature extraction</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4178</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection recognized as the primary step in most of the applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection applications greatly relies on the two components: feature extraction and classification method. Skin color is the most vital information used for skin detection purpose. However, color feature alone sometimes could not handle images with having same color distribution with skin color. A color feature of pixel-based does not eliminate the skin-like color due to the intensity of skin and skin-like color fall under the same distribution. Hence, the statistical color analysis will be exploited such mean and standard deviation as an additional feature to increase the reliability of skin detector. In this paper, we studied the effectiveness of statistical color feature for human skin detection. Furthermore, the paper analyzed the integrated color and texture using eight classifiers with three color spaces of RGB, YCbCr, and HSV. The experimental results show that the integrating statistical feature using Random Forest classifier achieved a significant performance with an F1-score 0.969. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4177</span> An Automated System for the Detection of Citrus Greening Disease Based on Visual Descriptors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sidra%20Naeem">Sidra Naeem</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayesha%20Naeem"> Ayesha Naeem</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahar%20Rahim"> Sahar Rahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Nawaz%20Qadri"> Nadia Nawaz Qadri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Citrus greening is a bacterial disease that causes considerable damage to citrus fruits worldwide. Efficient method for this disease detection must be carried out to minimize the production loss. This paper presents a pattern recognition system that comprises three stages for the detection of citrus greening from Orange leaves: segmentation, feature extraction and classification. Image segmentation is accomplished by adaptive thresholding. The feature extraction stage comprises of three visual descriptors i.e. shape, color and texture. From shape feature we have used asymmetry index, from color feature we have used histogram of Cb component from YCbCr domain and from texture feature we have used local binary pattern. Classification was done using support vector machines and k nearest neighbors. The best performances of the system is Accuracy = 88.02% and AUROC = 90.1% was achieved by automatic segmented images. Our experiments validate that: (1). Segmentation is an imperative preprocessing step for computer assisted diagnosis of citrus greening, and (2). The combination of shape, color and texture features form a complementary set towards the identification of citrus greening disease. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=citrus%20greening" title="citrus greening">citrus greening</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/98969/an-automated-system-for-the-detection-of-citrus-greening-disease-based-on-visual-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4176</span> RGB Color Based Real Time Traffic Sign Detection and Feature Extraction System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kay%20Thinzar%20Phu">Kay Thinzar Phu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lwin%20Lwin%20Oo"> Lwin Lwin Oo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In an intelligent transport system and advanced driver assistance system, the developing of real-time traffic sign detection and recognition (TSDR) system plays an important part in recent research field. There are many challenges for developing real-time TSDR system due to motion artifacts, variable lighting and weather conditions and situations of traffic signs. Researchers have already proposed various methods to minimize the challenges problem. The aim of the proposed research is to develop an efficient and effective TSDR in real time. This system proposes an adaptive thresholding method based on RGB color for traffic signs detection and new features for traffic signs recognition. In this system, the RGB color thresholding is used to detect the blue and yellow color traffic signs regions. The system performs the shape identify to decide whether the output candidate region is traffic sign or not. Lastly, new features such as termination points, bifurcation points, and 90’ angles are extracted from validated image. This system uses Myanmar Traffic Sign dataset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20thresholding%20based%20on%20RGB%20color" title="adaptive thresholding based on RGB color">adaptive thresholding based on RGB color</a>, <a href="https://publications.waset.org/abstracts/search?q=blue%20color%20detection" title=" blue color detection"> blue color detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=yellow%20color%20detection" title=" yellow color detection"> yellow color detection</a> </p> <a href="https://publications.waset.org/abstracts/77127/rgb-color-based-real-time-traffic-sign-detection-and-feature-extraction-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77127.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4175</span> Local Texture and Global Color Descriptors for Content Based Image Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tajinder%20Kaur">Tajinder Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Anu%20Bala"> Anu Bala</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images a new algorithm meant for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines the color and texture features which are extracted the global and local information of the image. The local texture feature is extracted by using local binary patterns (LBP), which are evaluated by taking into consideration of local difference between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used which is calculated by RGB (red, green, and blue) spaces separately. In this paper, the combination of color and texture features are proposed for content-based image retrieval. The performance of the proposed method is tested on Corel 1000 database which is the natural database. The results after being investigated show a significant improvement in terms of their evaluation measures as compared to LBP and CH. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20patterns" title=" local binary patterns"> local binary patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/25503/local-texture-and-global-color-descriptors-for-content-based-image-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25503.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">366</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4174</span> An Ensemble-based Method for Vehicle Color Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saeedeh%20Barzegar%20Khalilsaraei">Saeedeh Barzegar Khalilsaraei</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoocheher%20Kelarestaghi"> Manoocheher Kelarestaghi</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshad%20Eshghi"> Farshad Eshghi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper, four CNNs, with different architectures well-performing in different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better than each CNN individually in vehicle color identification. The evaluation results in terms of overall average accuracy and accuracy variance show the proposed method’s outperformance compared to the state-of-the-art rivals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vehicle%20Color%20Recognition" title="Vehicle Color Recognition">Vehicle Color Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Ensemble%20Algorithm" title="Ensemble Algorithm">Ensemble Algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Stack%20Generalization" title="Stack Generalization">Stack Generalization</a>, <a href="https://publications.waset.org/abstracts/search?q=Convolutional%20Neural%20Network" title="Convolutional Neural Network">Convolutional Neural Network</a> </p> <a href="https://publications.waset.org/abstracts/146909/an-ensemble-based-method-for-vehicle-color-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">85</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4173</span> A Survey of Feature Selection and Feature Extraction Techniques in Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samina%20Khalid">Samina Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Shamila%20Nasreen"> Shamila Nasreen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dimensionality reduction as a preprocessing step to machine learning is effective in removing irrelevant and redundant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection and feature extraction methods with respect to efficiency and effectiveness. In the field of machine learning and pattern recognition, dimensionality reduction is important area, where many approaches have been proposed. In this paper, some widely used feature selection and feature extraction techniques have analyzed with the purpose of how effectively these techniques can be used to achieve high performance of learning algorithms that ultimately improves predictive accuracy of classifier. An endeavor to analyze dimensionality reduction techniques briefly with the purpose to investigate strengths and weaknesses of some widely used dimensionality reduction methods is presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=age%20related%20macular%20degeneration" title="age related macular degeneration">age related macular degeneration</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection%20feature%20subset%20selection%20feature%20extraction%2Ftransformation" title=" feature selection feature subset selection feature extraction/transformation"> feature selection feature subset selection feature extraction/transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=FSA%E2%80%99s" title=" FSA’s"> FSA’s</a>, <a href="https://publications.waset.org/abstracts/search?q=relief" title=" relief"> relief</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation%20based%20method" title=" correlation based method"> correlation based method</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/abstracts/search?q=ICA" title=" ICA"> ICA</a> </p> <a href="https://publications.waset.org/abstracts/6168/a-survey-of-feature-selection-and-feature-extraction-techniques-in-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6168.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">496</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4172</span> SCNet: A Vehicle Color Classification Network Based on Spatial Cluster Loss and Channel Attention Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fei%20Gao">Fei Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinyang%20Dong"> Xinyang Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Yisu%20Ge"> Yisu Ge</a>, <a href="https://publications.waset.org/abstracts/search?q=Shufang%20Lu"> Shufang Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Libo%20Weng"> Libo Weng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vehicle color recognition plays an important role in traffic accident investigation. However, due to the influence of illumination, weather, and noise, vehicle color recognition still faces challenges. In this paper, a vehicle color classification network based on spatial cluster loss and channel attention mechanism (SCNet) is proposed for vehicle color recognition. A channel attention module is applied to extract the features of vehicle color representative regions and reduce the weight of nonrepresentative color regions in the channel. The proposed loss function, called spatial clustering loss (SC-loss), consists of two channel-specific components, such as a concentration component and a diversity component. The concentration component forces all feature channels belonging to the same class to be concentrated through the channel cluster. The diversity components impose additional constraints on the channels through the mean distance coefficient, making them mutually exclusive in spatial dimensions. In the comparison experiments, the proposed method can achieve state-of-the-art performance on the public datasets, VCD, and VeRi, which are 96.1% and 96.2%, respectively. In addition, the ablation experiment further proves that SC-loss can effectively improve the accuracy of vehicle color recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20transportation" title=" intelligent transportation"> intelligent transportation</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition" title=" vehicle color recognition"> vehicle color recognition</a> </p> <a href="https://publications.waset.org/abstracts/132381/scnet-a-vehicle-color-classification-network-based-on-spatial-cluster-loss-and-channel-attention-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132381.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4171</span> Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lae-Jeong%20Park">Lae-Jeong Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents a method that utilizes figure-ground color segmentation to extract effective global feature in terms of false positive reduction in the head-shoulder detection. Conventional detectors that rely on local features such as HOG due to real-time operation suffer from false positives. Color cue in an input image provides salient information on a global characteristic which is necessary to alleviate the false positives of the local feature based detectors. An effective approach that uses figure-ground color segmentation has been presented in an effort to reduce the false positives in object detection. In this paper, an extended version of the approach is presented that adopts separate multipart foregrounds instead of a single prior foreground and performs the figure-ground color segmentation with each of the foregrounds. The multipart foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds being optimized by a search algorithm. A classifier is constructed with the feature that consists of a set of the multiple resulting segmentations. Experimental results show that the presented method can discriminate more false positive than the single prior shape-based classifier as well as detectors with the local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pedestrian%20detection" title="pedestrian detection">pedestrian detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20segmentation" title=" color segmentation"> color segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=false%20positive" title=" false positive"> false positive</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/61932/reduction-of-false-positives-in-head-shoulder-detection-based-on-multi-part-color-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61932.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">281</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4170</span> Contrast Enhancement of Color Images with Color Morphing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javed%20Khan">Javed Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Aamir%20Saeed%20Malik"> Aamir Saeed Malik</a>, <a href="https://publications.waset.org/abstracts/search?q=Nidal%20Kamel"> Nidal Kamel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarat%20Chandra%20Dass"> Sarat Chandra Dass</a>, <a href="https://publications.waset.org/abstracts/search?q=Azura%20Mohd%20Affandi"> Azura Mohd Affandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low contrast images can result from the wrong setting of image acquisition or poor illumination conditions. Such images may not be visually appealing and can be difficult for feature extraction. Contrast enhancement of color images can be useful in medical area for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into normalized RGB color space. Adaptive histogram equalization technique is applied to each of the three channels of normalized RGB color space. The corresponding channels in the original image (low contrast) and that of contrast enhanced image with adaptive histogram equalization (AHE) are morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results of the proposed technique are analyzed using cumulative variance and contrast improvement factor measures. The results are also compared with decorrelation stretch. Both subjective and quantitative analysis demonstrates that the proposed techniques outperform the other techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20enhacement" title="contrast enhacement">contrast enhacement</a>, <a href="https://publications.waset.org/abstracts/search?q=normalized%20RGB" title=" normalized RGB"> normalized RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20histogram%20equalization" title=" adaptive histogram equalization"> adaptive histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=cumulative%20variance." title=" cumulative variance."> cumulative variance.</a> </p> <a href="https://publications.waset.org/abstracts/42755/contrast-enhancement-of-color-images-with-color-morphing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4169</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have been increased rapidly for daily smart production, living and learning systems in recent years. This paper proposed a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), bag of words model (BoW), and support vector machine (SVM) by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an important feature for visual thing recognition. With color-based SIFT features and SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with color-assistant SIFT SVM classifier achieves higher recognition rate than that with the traditional gray SIFT and SVM classification in various situations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4168</span> Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyi%20Pyar%20Zaw">Kyi Pyar Zaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Zin%20Mar%20Kyu"> Zin Mar Kyu </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is the process of converting a text image file into editable and searchable text file. Feature Extraction is the heart of any character recognition system. The character recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features for one character are used in character recognition. Basically, there are three steps of character recognition such as character segmentation, feature extraction and classification. In segmentation step, horizontal cropping method is used for line segmentation and vertical cropping method is used for character segmentation. In the Feature extraction step, features are extracted in two ways. The first way is that the 8 features are extracted from the entire input character using eight direction chain code frequency extraction. The second way is that the input character is divided into 16 blocks. For each block, although 8 feature values are obtained through eight-direction chain code frequency extraction method, we define the sum of these 8 feature values as a feature for one block. Therefore, 16 features are extracted from that 16 blocks in the second way. We use the number of holes feature to cluster the similar characters. We can recognize the almost Myanmar common characters with various font sizes by using these features. All these 25 features are used in both training part and testing part. In the classification step, the characters are classified by matching the all features of input character with already trained features of characters. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chain%20code%20frequency" title="chain code frequency">chain code frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20matching" title=" features matching"> features matching</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/77278/myanmar-character-recognition-using-eight-direction-chain-code-frequency-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77278.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4167</span> A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Alimoussa">Mohamed Alimoussa</a>, <a href="https://publications.waset.org/abstracts/search?q=Alice%20Porebski"> Alice Porebski</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Vandenbroucke"> Nicolas Vandenbroucke</a>, <a href="https://publications.waset.org/abstracts/search?q=Rachid%20Oulad%20Haj%20Thami"> Rachid Oulad Haj Thami</a>, <a href="https://publications.waset.org/abstracts/search?q=Sana%20El%20Fkihi"> Sana El Fkihi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color and texture are highly discriminant visual cues that provide an essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors which generate a high dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset from an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection is focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithm uses feature clustering as an analysis that comes before feature ranking. Indeed, after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies; as an initial step that comes before the search, binded and combined with the search or as the search alternative and replacement. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non redundant feature subset: at each step, a feature is selected and features of the same cluster are removed and thus not considered thereafter. This allows to significantly speed up the selection process since large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm binded and combined with the search. Experiments using a combination of two well known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurence Matrices (RSCCMs) and features extracted from Local Binary patterns (LBP) image histograms, on five color texture data sets, Outex, NewBarktex, Parquet, Stex and USPtex demonstrate the efficiency of our method compared to seven of the state of the art methods in terms of accuracy and computation time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title="feature selection">feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20texture%20classification" title=" color texture classification"> color texture classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20clustering" title=" feature clustering"> feature clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20LBP" title=" color LBP"> color LBP</a>, <a href="https://publications.waset.org/abstracts/search?q=chromatic%20cooccurrence%20matrix" title=" chromatic cooccurrence matrix"> chromatic cooccurrence matrix</a> </p> <a href="https://publications.waset.org/abstracts/128745/a-feature-clustering-based-sequential-selection-approach-for-color-texture-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4166</span> A Computer-Aided System for Tooth Shade Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zuhal%20Kurt">Zuhal Kurt</a>, <a href="https://publications.waset.org/abstracts/search?q=Meral%20Kurt"> Meral Kurt</a>, <a href="https://publications.waset.org/abstracts/search?q=Bilge%20T.%20Bal"> Bilge T. Bal</a>, <a href="https://publications.waset.org/abstracts/search?q=Kemal%20Ozkan"> Kemal Ozkan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, shade matching procedure was implemented by dentists visual perception with the help of shade guides. Since many factors influence visual perception; tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. Subjective nature of this process has lead to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysing systems are used for instrumental shade selection. Instrumental devices have advantages that readings are quantifiable, can obtain more rapidly and simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, translucent structure and irregular surfaces of teeth lead to defects on measurement with these devices. Also between the results acquired by devices with different measurement principles may make inconsistencies. So, its obligatory to search for new methods for dental shade matching process. A computer-aided system device; digital camera has developed rapidly upon today. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure has a much cheaper process than the use of traditional contact-type color measurement devices. Digital cameras can be taken by the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken from teeth show morphology and color texture of teeth. In last decades, a new method was recommended to compare the color of shade tabs taken by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used as concatenated. Recently using methods of feature extraction techniques are based on shape description and not used color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors with color information are extended by concatenating color descriptor with the shape descriptor, that descriptor will be effective on visual object recognition and classification task. Therefore, the color descriptor is to be used in combination with a shape descriptor it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method is remain reliable under variation of photometric changes, geometrical changes and variation of image quality. So, coloring local feature extraction methods are used to extract features, and also the Scale Invariant Feature Transform (SIFT) descriptor used to for shape description in the proposed method. After the combination of these descriptors, the state-of-art descriptor named by Color-SIFT will be used in this study. Finally, the image feature vectors obtained from quantization algorithm are fed to classifiers such as Nearest Neighbor (KNN), Naive Bayes or Support Vector Machines (SVM) to determine label(s) of the visual object category or matching. In this study, SVM are used as classifiers for color determination and shade matching. Finally, experimental results of this method will be compared with other recent studies. It is concluded from the study that the proposed method is remarkable development on computer aided tooth shade determination system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classifiers" title="classifiers">classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20determination" title=" color determination"> color determination</a>, <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20system" title=" computer-aided system"> computer-aided system</a>, <a href="https://publications.waset.org/abstracts/search?q=tooth%20shade%20matching" title=" tooth shade matching"> tooth shade matching</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/51113/a-computer-aided-system-for-tooth-shade-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51113.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4165</span> An Automatic Feature Extraction Technique for 2D Punch Shapes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Awais%20Ahmad%20Khan">Awais Ahmad Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Emad%20Abouel%20Nasr"> Emad Abouel Nasr</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20M.%20A.%20Hussein"> H. M. A. Hussein</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdulrahman%20Al-Ahmari"> Abdulrahman Al-Ahmari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sheet-metal parts have been widely applied in electronics, communication and mechanical industries in recent decades; but the advancement in sheet-metal part design and manufacturing is still behind in comparison with the increasing importance of sheet-metal parts in modern industry. This paper presents a methodology for automatic extraction of some common 2D internal sheet metal features. The features used in this study are taken from Unipunch ™ catalogue. The extraction process starts with the data extraction from STEP file using an object oriented approach and with the application of suitable algorithms and rules, all features contained in the catalogue are automatically extracted. Since the extracted features include geometry and engineering information, they will be effective for downstream application such as feature rebuilding and process planning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=internal%20features" title=" internal features"> internal features</a>, <a href="https://publications.waset.org/abstracts/search?q=punch%20shapes" title=" punch shapes"> punch shapes</a>, <a href="https://publications.waset.org/abstracts/search?q=sheet%20metal" title=" sheet metal"> sheet metal</a> </p> <a href="https://publications.waset.org/abstracts/45001/an-automatic-feature-extraction-technique-for-2d-punch-shapes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45001.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">615</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4164</span> Comparative Analysis of Feature Extraction and Classification Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20L.%20Ujjwal">R. L. Ujjwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhishek%20Jain"> Abhishek Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of computer vision, most facial variations such as identity, expression, emotions and gender have been extensively studied. Automatic age estimation has been rarely explored. With age progression of a human, the features of the face changes. This paper is providing a new comparable study of different type of algorithm to feature extraction [Hybrid features using HAAR cascade & HOG features] & classification [KNN & SVM] training dataset. By using these algorithms we are trying to find out one of the best classification algorithms. Same thing we have done on the feature selection part, we extract the feature by using HAAR cascade and HOG. This work will be done in context of age group classification model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=age%20group" title=" age group"> age group</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a> </p> <a href="https://publications.waset.org/abstracts/58670/comparative-analysis-of-feature-extraction-and-classification-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4163</span> Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20A.%20El-Iskandarani"> Mohamed A. El-Iskandarani</a>, <a href="https://publications.waset.org/abstracts/search?q=Guitar%20M.%20Shawkat"> Guitar M. Shawkat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, the amount of available multimedia data is continuously on the rise. The need to find a required image for an ordinary user is a challenging task. Content based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, textures, etc. However, there is a gap between low-level visual features and semantic meanings required by applications. The typical method of bridging the semantic gap is through the automatic image annotation (AIA) that extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by Firefly and Bayesian method is proposed. Firstly, images are segmented using the maximum variance intra cluster and Firefly algorithm, which is a swarm-based approach with high convergence speed, less computation rate and search for the optimal multiple threshold. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and less complexity. Experiments are performed using Corel Database. The results show that the proposed system is better than traditional ones for automatic image annotation and retrieval. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20annotation" title=" image annotation"> image annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/18552/automatic-multi-label-image-annotation-system-guided-by-firefly-algorithm-and-bayesian-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4162</span> Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hung-Sheng%20Lin">Hung-Sheng Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng-Hsuan%20Li"> Cheng-Hsuan Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods such as principal component analysis (PCA), linear discriminate analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminate analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve the target detection or the image classification of hyperspectral images. The double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and have good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here consider both the local information and other more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high dimensional inference problems, particularly in small data-size situation. Hence, an improved estimator by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on the real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=kernel%20method" title=" kernel method"> kernel method</a>, <a href="https://publications.waset.org/abstracts/search?q=double%20nearest%20proportion%20feature%20extraction" title=" double nearest proportion feature extraction"> double nearest proportion feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=kernel%20double%20nearest%20feature%20extraction" title=" kernel double nearest feature extraction"> kernel double nearest feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/54639/kernel-based-double-nearest-proportion-feature-extraction-for-hyperspectral-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54639.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">344</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4161</span> Field-Programmable Gate Arrays Based High-Efficiency Oriented Fast and Rotated Binary Robust Independent Elementary Feature Extraction Method Using Feature Zone Strategy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huang%20Bai-Cheng">Huang Bai-Cheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When deploying the Oriented Fast and Rotated Binary Robust Independent Elementary Feature (BRIEF) (ORB) extraction algorithm on field-programmable gate arrays (FPGA), the access of global storage for 31×31 pixel patches of the features has become the bottleneck of the system efficiency. Therefore, a feature zone strategy has been proposed. Zones are searched as features are detected. Pixels around the feature zones are extracted from global memory and distributed into patches corresponding to feature coordinates. The proposed FPGA structure is targeted on a Xilinx FPGA development board of Zynq UltraScale+ series, and multiple datasets are tested. Compared with the streaming pixel patch extraction method, the proposed architecture obtains at least two times acceleration consuming extra 3.82% Flip-Flops (FFs) and 7.78% Look-Up Tables (LUTs). Compared with the non-streaming one, the proposed architecture saves 22.3% LUT and 1.82% FF, causing a latency of only 0.2ms and a drop in frame rate for 1. Compared with the related works, the proposed strategy and hardware architecture have the superiority of keeping a balance between FPGA resources and performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=ORB" title=" ORB"> ORB</a>, <a href="https://publications.waset.org/abstracts/search?q=FPGA%20implementation" title=" FPGA implementation"> FPGA implementation</a> </p> <a href="https://publications.waset.org/abstracts/158130/field-programmable-gate-arrays-based-high-efficiency-oriented-fast-and-rotated-binary-robust-independent-elementary-feature-extraction-method-using-feature-zone-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158130.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4160</span> Cigarette Smoke Detection Based on YOLOV3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Li">Wei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuo%20Yang"> Tuo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to satisfy the real-time and accurate requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technology based on the combination of deep learning and color features was proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke area in the image is extracted. Secondly, combined with the efficiency of cigarette smoke detection and the problem of network overfitting, a network model for cigarette smoke detection was designed according to YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective, and the accuracy of cigarette smoke detection is up to 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=cigarette%20smoke%20detection" title=" cigarette smoke detection"> cigarette smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction" title=" color feature extraction"> color feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/159151/cigarette-smoke-detection-based-on-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4159</span> Experimental Characterization of the Color Quality and Error Rate for an Red, Green, and Blue-Based Light Emission Diode-Fixture Used in Visible Light Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20F.%20Gutierrez">Juan F. Gutierrez</a>, <a href="https://publications.waset.org/abstracts/search?q=Jesus%20M.%20Quintero"> Jesus M. Quintero</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Sandoval"> Diego Sandoval</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An important feature of LED technology is the fast on-off commutation, which allows data transmission. Visible Light Communication (VLC) is a wireless method to transmit data with visible light. Modulation formats such as On-Off Keying (OOK) and Color Shift Keying (CSK) are used in VLC. Since CSK is based on three color bands uses red, green, and blue monochromatic LED (RGB-LED) to define a pattern of chromaticities. This type of CSK provides poor color quality in the illuminated area. This work presents the design and implementation of a VLC system using RGB-based CSK with 16, 8, and 4 color points, mixing with a steady baseline of a phosphor white-LED, to improve the color quality of the LED-Fixture. The experimental system was assessed in terms of the Color Rendering Index (CRI) and the Symbol Error Rate (SER). Good color quality performance of the LED-Fixture was obtained with an acceptable SER. The laboratory setup used to characterize and calibrate an LED-Fixture is described. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=VLC" title="VLC">VLC</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20lighting" title=" indoor lighting"> indoor lighting</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quality" title=" color quality"> color quality</a>, <a href="https://publications.waset.org/abstracts/search?q=symbol%20error%20rate" title=" symbol error rate"> symbol error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20shift%20keying" title=" color shift keying"> color shift keying</a> </p> <a href="https://publications.waset.org/abstracts/158336/experimental-characterization-of-the-color-quality-and-error-rate-for-an-red-green-and-blue-based-light-emission-diode-fixture-used-in-visible-light-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4158</span> Automatic Extraction of Water Bodies Using Whole-R Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nikhat%20Nawaz">Nikhat Nawaz</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Srinivasulu"> S. Srinivasulu</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Kesava%20Rao"> P. Kesava Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Feature extraction plays an important role in many remote sensing applications. Automatic extraction of water bodies is of great significance in many remote sensing applications like change detection, image retrieval etc. This paper presents a procedure for automatic extraction of water information from remote sensing images. The algorithm uses the relative location of R-colour component of the chromaticity diagram. This method is then integrated with the effectiveness of the spatial scale transformation of whole method. The whole method is based on water index fitted from spectral library. Experimental results demonstrate the improved accuracy and effectiveness of the integrated method for automatic extraction of water bodies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=chromaticity" title=" chromaticity"> chromaticity</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20index" title=" water index"> water index</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20library" title=" spectral library"> spectral library</a>, <a href="https://publications.waset.org/abstracts/search?q=integrated%20method" title=" integrated method "> integrated method </a> </p> <a href="https://publications.waset.org/abstracts/2097/automatic-extraction-of-water-bodies-using-whole-r-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">384</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4157</span> Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ke%20Ren">Ke Ren</a>, <a href="https://publications.waset.org/abstracts/search?q=Huiruo%20Shi"> Huiruo Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Linsen%20Li"> Linsen Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Baoshuai%20Wang"> Baoshuai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Zhou"> Yu Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since some discriminative features are required for ground moving targets classification, we propose a new feature extraction method based on micro-Doppler signature. Firstly, the time-frequency analysis of measured data indicates that the time-frequency spectrograms of the three kinds of ground moving targets, i.e., single walking person, two people walking and a moving wheeled vehicle, are discriminative. Then, a three-dimensional time-frequency feature vector is extracted from the time-frequency spectrograms to depict these differences. At last, a Support Vector Machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy to categorize ground moving targets into the three kinds of the measured data is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler feature. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=micro-doppler" title="micro-doppler">micro-doppler</a>, <a href="https://publications.waset.org/abstracts/search?q=time-frequency%20analysis" title=" time-frequency analysis"> time-frequency analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=radar%20target%20classification" title=" radar target classification"> radar target classification</a> </p> <a href="https://publications.waset.org/abstracts/66995/time-frequency-feature-extraction-method-based-on-micro-doppler-signature-of-ground-moving-targets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66995.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">405</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4156</span> Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multi-vehicle tracking, based on feature selection, the tracking system efficiently tracks vehicles in a video with minimal error in combination with color attribution, which focuses on presenting a simple and fast, yet accurate and robust solution to the problem such as inaccurately and untimely responses of statistics-based adaptive traffic control system in the intersection scenario. In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step, the detection result provided by virtual loops were post-processed and then used as the input for the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework. And the approbatory online feature selection algorithms are integrated on the mature ACT system with good compatibility. The proposed feature selection methods and multi-vehicle tracking method are evaluated on KITTI datasets and show efficient vehicle tracking performance when compared to the other state-of-the-art approaches in the same category. And the system performs excellently on the video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-vehicle%20tracking" title=" multi-vehicle tracking"> multi-vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20attribution" title=" color attribution"> color attribution</a> </p> <a href="https://publications.waset.org/abstracts/136438/real-time-multi-vehicle-tracking-application-at-intersections-based-on-feature-selection-in-combination-with-color-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4155</span> Feature Evaluation Based on Random Subspace and Multiple-K Ensemble</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaehong%20Yu">Jaehong Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Seoung%20Bum%20Kim"> Seoung Bum Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. The unsupervised feature selection can be categorized as feature subset selection and feature ranking method, and we focused on unsupervised feature ranking methods which evaluate the features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve their higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined with the ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20analysis" title="clustering analysis">clustering analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple-k%20ensemble" title=" multiple-k ensemble"> multiple-k ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20subspace-based%20feature%20evaluation" title=" random subspace-based feature evaluation"> random subspace-based feature evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20feature%20ranking" title=" unsupervised feature ranking"> unsupervised feature ranking</a> </p> <a href="https://publications.waset.org/abstracts/52081/feature-evaluation-based-on-random-subspace-and-multiple-k-ensemble" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52081.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4154</span> A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Zhang">Hui Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Tian"> Ye Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang%20Ye"> Fang Ye</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziming%20Guo"> Ziming Guo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, communication signal automatic modulation recognition methods are mainly divided into two major categories. One is the maximum likelihood hypothesis testing method based on decision theory, the other is a statistical pattern recognition method based on feature extraction. Now, the most commonly used is a statistical pattern recognition method, which includes feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for the communication signal based on the improved Holder cloud feature. And the extreme learning machine (ELM) is used which aims at the problem of the real-time in the modern warfare to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low SNR environment, and uses the improved cloud model to obtain more stable Holder cloud features and the performance of the algorithm is improved. This algorithm addresses the problem that a simple feature extraction algorithm based on Holder coefficient feature is difficult to recognize at low SNR, and it also has a better recognition accuracy. The results of simulations show that the approach in this paper still has a good classification result at low SNR, even when the SNR is -15dB, the recognition accuracy still reaches 76%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication%20signal" title="communication signal">communication signal</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=Holder%20coefficient" title=" Holder coefficient"> Holder coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=improved%20cloud%20model" title=" improved cloud model"> improved cloud model</a> </p> <a href="https://publications.waset.org/abstracts/101463/a-communication-signal-recognition-algorithm-based-on-holder-coefficient-characteristics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101463.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4153</span> Statistical Feature Extraction Method for Wood Species Recognition System </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Iz%27aan%20Paiz%20Bin%20Zamri">Mohd Iz'aan Paiz Bin Zamri</a>, <a href="https://publications.waset.org/abstracts/search?q=Anis%20Salwa%20Mohd%20Khairuddin"> Anis Salwa Mohd Khairuddin</a>, <a href="https://publications.waset.org/abstracts/search?q=Norrima%20Mokhtar"> Norrima Mokhtar</a>, <a href="https://publications.waset.org/abstracts/search?q=Rubiyah%20Yusof"> Rubiyah Yusof</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at custom checkpoints to avoid mislabeling of timber which will results to loss of income to the timber industry. The system focuses on analyzing the statistical pores properties of the wood images. This paper proposed a fuzzy-based feature extractor which mimics the experts’ knowledge on wood texture to extract the properties of pores distribution from the wood surface texture. The proposed feature extractor consists of two steps namely pores extraction and fuzzy pores management. The total number of statistical features extracted from each wood image is 38 features. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts’ interpretation on wood texture which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy" title=" fuzzy"> fuzzy</a>, <a href="https://publications.waset.org/abstracts/search?q=inspection%20system" title=" inspection system"> inspection system</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=macroscopic%20images" title=" macroscopic images"> macroscopic images</a> </p> <a href="https://publications.waset.org/abstracts/36415/statistical-feature-extraction-method-for-wood-species-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36415.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4152</span> Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ugochukwu%20Ejike%20Akpudo">Ugochukwu Ejike Akpudo</a>, <a href="https://publications.waset.org/abstracts/search?q=Jank-Wook%20Hur"> Jank-Wook Hur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research studies on the emerging condition-based maintenance concept- Prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to the uproar in research studies on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based and the more robust time-frequency based features from these vibrational signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of emerging deep classification and regression algorithms like the locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that the LLE as a comprehensive feature extraction technique yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs against the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electromagnetic%20fuel%20pumps" title="electromagnetic fuel pumps">electromagnetic fuel pumps</a>, <a href="https://publications.waset.org/abstracts/search?q=comprehensive%20feature%20extraction" title=" comprehensive feature extraction"> comprehensive feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=condition%20assessment" title=" condition assessment"> condition assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=locally%20linear%20embedding" title=" locally linear embedding"> locally linear embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title=" feature fusion"> feature fusion</a> </p> <a href="https://publications.waset.org/abstracts/111870/comprehensive-feature-extraction-for-optimized-condition-assessment-of-fuel-pumps" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111870.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4151</span> Video Text Information Detection and Localization in Lecture Videos Using Moments </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belkacem%20Soundes">Belkacem Soundes</a>, <a href="https://publications.waset.org/abstracts/search?q=Guezouli%20Larbi"> Guezouli Larbi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a robust and accurate method for text detection and localization over lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture video shows significant degradation mainly related to acquisition conditions, camera motion and environmental changes resulting in low quality videos. Hence, affecting feature extraction and description efficiency. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. Method consists of a three-step process: Slide region detection and segmentation; Feature extraction and non-text filtering. For robust and effective features extraction moment functions are used. Two distinct types of moments are used: orthogonal and non-orthogonal. For orthogonal Zernike Moments, both Pseudo Zernike moments are used, whereas for non-orthogonal ones Hu moments are used. Expressivity and description efficiency are given and discussed. Proposed approach shows that in general, orthogonal moments show high accuracy in comparison to the non-orthogonal one. Pseudo Zernike moments are more effective than Zernike with better computation time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title="text detection">text detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20localization" title=" text localization"> text localization</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=pseudo%20zernike%20moments" title=" pseudo zernike moments"> pseudo zernike moments</a> </p> <a href="https://publications.waset.org/abstracts/109549/video-text-information-detection-and-localization-in-lecture-videos-using-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">151</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4150</span> Color Fusion of Remote Sensing Images for Imparting Fluvial Geomorphological Features of River Yamuna and Ganga over Doon Valley </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20S.%20Jagadeesh%20Kumar">P. S. Jagadeesh Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Tracy%20Lin%20Huan"> Tracy Lin Huan</a>, <a href="https://publications.waset.org/abstracts/search?q=Rebecca%20K.%20Rossi"> Rebecca K. Rossi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanmin%20Yuan"> Yanmin Yuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianpei%20Li"> Xianpei Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fiscal growth of any country hinges on the prudent administration of water resources. The river Yamuna and Ganga are measured as the life line of India as it affords the needs for life to endure. Earth observation over remote sensing images permits the precise description and identification of ingredients on the superficial from space and airborne platforms. Multiple and heterogeneous image sources are accessible for the same geographical section; multispectral, hyperspectral, radar, multitemporal, and multiangular images. In this paper, a taxonomical learning of the fluvial geomorphological features of river Yamuna and Ganga over doon valley using color fusion of multispectral remote sensing images was performed. Experimental results exhibited that the segmentation based colorization technique stranded on pattern recognition, and color mapping fashioned more colorful and truthful colorized images for geomorphological feature extraction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fusion" title="color fusion">color fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=geomorphology" title=" geomorphology"> geomorphology</a>, <a href="https://publications.waset.org/abstracts/search?q=fluvial%20processes" title=" fluvial processes"> fluvial processes</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral%20images" title=" multispectral images"> multispectral images</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/87961/color-fusion-of-remote-sensing-images-for-imparting-fluvial-geomorphological-features-of-river-yamuna-and-ganga-over-doon-valley" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87961.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4149</span> Feature Extraction of MFCC Based on Fisher-Ratio and Correlated Distance Criterion for Underwater Target Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Han%20Xue">Han Xue</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Lanyue"> Zhang Lanyue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to seek more effective feature extraction technology, feature extraction method based on MFCC combined with vector hydrophone is exposed in the paper. The sound pressure signal and particle velocity signal of two kinds of ships are extracted by using MFCC and its evolution form, and the extracted features are fused by using fisher-ratio and correlated distance criterion. The features are then identified by BP neural network. The results showed that MFCC, First-Order Differential MFCC and Second-Order Differential MFCC features can be used as effective features for recognition of underwater targets, and the fusion feature can improve the recognition rate. Moreover, the results also showed that the recognition rate of the particle velocity signal is higher than that of the sound pressure signal, and it reflects the superiority of vector signal processing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vector%20information" title="vector information">vector information</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20MFCC" title=" differential MFCC"> differential MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20feature" title=" fusion feature"> fusion feature</a>, <a href="https://publications.waset.org/abstracts/search?q=BP%20neural%20network" title=" BP neural network "> BP neural network </a> </p> <a href="https://publications.waset.org/abstracts/33608/feature-extraction-of-mfcc-based-on-fisher-ratio-and-correlated-distance-criterion-for-underwater-target-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33608.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=139">139</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=140">140</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>