CINXE.COM

Search results for: Ensemble Classifier

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Ensemble Classifier</title> <meta name="description" content="Search results for: Ensemble Classifier"> <meta name="keywords" content="Ensemble Classifier"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Ensemble Classifier" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Ensemble Classifier"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 525</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Ensemble Classifier</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">525</span> Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muthukumarasamy%20Govindarajan">Muthukumarasamy Govindarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection of unwanted, unsolicited mails called spam from email is an interesting area of research. It is necessary to evaluate the performance of any new spam classifier using standard data sets. Recently, ensemble-based classifiers have gained popularity in this domain. In this research work, an efficient email filtering approach based on ensemble methods is addressed for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA) as base classifiers along with different ensemble methods. The experimental results show that the ensemble classifier was performing with accuracy greater than individual classifiers, and also hybrid model results are found to be better than the combined models for the e-mail dataset. The proposed ensemble-based classifiers turn out to be good in terms of classification accuracy, which is considered to be an important criterion for building a robust spam classifier. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy" title="accuracy">accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=arcing" title=" arcing"> arcing</a>, <a href="https://publications.waset.org/abstracts/search?q=bagging" title=" bagging"> bagging</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Naive%20Bayes" title=" Naive Bayes"> Naive Bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=sentiment%20mining" title=" sentiment mining"> sentiment mining</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/112240/sentiment-analysis-of-ensemble-based-classifiers-for-e-mail-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">524</span> Evaluation of Ensemble Classifiers for Intrusion Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Govindarajan">M. Govindarajan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the major developments in machine learning in the past decade is the ensemble method, which finds highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed with homogeneous ensemble classifier using bagging and heterogeneous ensemble classifier using arcing and their performances are analyzed in terms of accuracy. A Classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by the means of standard datasets of intrusion detection. The main originality of the proposed approach is based on three main parts: preprocessing phase, classification phase, and combining phase. A wide range of comparative experiments is conducted for standard datasets of intrusion detection. The performance of the proposed homogeneous and heterogeneous ensemble classifiers are compared to the performance of other standard homogeneous and heterogeneous ensemble methods. The standard homogeneous ensemble methods include Error correcting output codes, Dagging and heterogeneous ensemble methods include majority voting, stacking. The proposed ensemble methods provide significant improvement of accuracy compared to individual classifiers and the proposed bagged RBF and SVM performs significantly better than ECOC and Dagging and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also heterogeneous models exhibit better results than homogeneous models for standard datasets of intrusion detection.&nbsp; <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble" title=" ensemble"> ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=radial%20basis%20function" title=" radial basis function"> radial basis function</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a> </p> <a href="https://publications.waset.org/abstracts/43650/evaluation-of-ensemble-classifiers-for-intrusion-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">523</span> Breast Cancer Survivability Prediction via Classifier Ensemble</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Al-Badrashiny">Mohamed Al-Badrashiny</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelghani%20Bellaachia"> Abdelghani Bellaachia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a classifier ensemble approach for predicting the survivability of the breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components; features selection and classifier ensemble components. The features selection component divides the features in SEER database into four groups. After that it tries to find the most important features among the four groups that maximizes the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models different set of features from SEER through the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found is by using the decision tree, Bayesian network, and Na&uml;ıve Bayes algorithms for the underlying classifiers and Na&uml;ıve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data of SEER (period of 1973-2002). It gives 87.39% weighted average F-score compared to 85.82% and 81.34% of the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held out unseen test set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classifier%20ensemble" title="classifier ensemble">classifier ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer%20survivability" title=" breast cancer survivability"> breast cancer survivability</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=SEER" title=" SEER"> SEER</a> </p> <a href="https://publications.waset.org/abstracts/42621/breast-cancer-survivability-prediction-via-classifier-ensemble" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42621.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">522</span> Machine Learning Predictive Models for Hydroponic Systems: A Case Study Nutrient Film Technique and Deep Flow Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kritiyaporn%20Kunsook">Kritiyaporn Kunsook</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning algorithms (MLAs) such us artificial neural networks (ANNs), decision tree, support vector machines (SVMs), Naïve Bayes, and ensemble classifier by voting are powerful data driven methods that are relatively less widely used in the mapping of technique of system, and thus have not been comparatively evaluated together thoroughly in this field. The performances of a series of MLAs, ANNs, decision tree, SVMs, Naïve Bayes, and ensemble classifier by voting in technique of hydroponic systems prospectively modeling are compared based on the accuracy of each model. Classification of hydroponic systems only covers the test samples from vegetables grown with Nutrient film technique (NFT) and Deep flow technique (DFT). The feature, which are the characteristics of vegetables compose harvesting height width, temperature, require light and color. The results indicate that the classification performance of the ANNs is 98%, decision tree is 98%, SVMs is 97.33%, Naïve Bayes is 96.67%, and ensemble classifier by voting is 98.96% algorithm respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title="artificial neural networks">artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20tree" title=" decision tree"> decision tree</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=na%C3%AFve%20Bayes" title=" naïve Bayes"> naïve Bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20classifier%20by%20voting" title=" ensemble classifier by voting"> ensemble classifier by voting</a> </p> <a href="https://publications.waset.org/abstracts/91070/machine-learning-predictive-models-for-hydroponic-systems-a-case-study-nutrient-film-technique-and-deep-flow-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91070.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">372</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">521</span> Multi-Sensor Target Tracking Using Ensemble Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhekisipho%20Twala">Bhekisipho Twala</a>, <a href="https://publications.waset.org/abstracts/search?q=Mantepu%20Masetshaba"> Mantepu Masetshaba</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramapulana%20Nkoana"> Ramapulana Nkoana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multiple classifier systems combine several individual classifiers to deliver a final classification decision. However, an increasingly controversial question is whether such systems can outperform the single best classifier, and if so, what form of multiple classifiers system yields the most significant benefit. Also, multi-target tracking detection using multiple sensors is an important research field in mobile techniques and military applications. In this paper, several multiple classifiers systems are evaluated in terms of their ability to predict a system’s failure or success for multi-sensor target tracking tasks. The Bristol Eden project dataset is utilised for this task. Experimental and simulation results show that the human activity identification system can fulfill requirements of target tracking due to improved sensors classification performances with multiple classifier systems constructed using boosting achieving higher accuracy rates. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20classifier" title="single classifier">single classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-target%20tracking" title=" multi-target tracking"> multi-target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20classifiers" title=" multiple classifiers"> multiple classifiers</a> </p> <a href="https://publications.waset.org/abstracts/140816/multi-sensor-target-tracking-using-ensemble-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">520</span> Random Subspace Ensemble of CMAC Classifiers </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somaiyeh%20Dehghan">Somaiyeh Dehghan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Reza%20Kheirkhahan%20Haghighi"> Mohammad Reza Kheirkhahan Haghighi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid growth of domains that have data with a large number of features, while the number of samples is limited has caused difficulty in constructing strong classifiers. To reduce the dimensionality of the feature space becomes an essential step in classification task. Random subspace method (or attribute bagging) is an ensemble classifier that consists of several classifiers that each base learner in ensemble has subset of features. In the present paper, we introduce Random Subspace Ensemble of CMAC neural network (RSE-CMAC), each of which has training with subset of features. Then we use this model for classification task. For evaluation performance of our model, we compare it with bagging algorithm on 36 UCI datasets. The results reveal that the new model has better performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20subspace" title=" random subspace"> random subspace</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble" title=" ensemble"> ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=CMAC%20neural%20network" title=" CMAC neural network"> CMAC neural network</a> </p> <a href="https://publications.waset.org/abstracts/14371/random-subspace-ensemble-of-cmac-classifiers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14371.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">519</span> Rank-Based Chain-Mode Ensemble for Binary Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chongya%20Song">Chongya Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Kang%20Yen"> Kang Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexander%20Pons"> Alexander Pons</a>, <a href="https://publications.waset.org/abstracts/search?q=Jin%20Liu"> Jin Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of machine learning, the ensemble has been employed as a common methodology to improve the performance upon multiple base classifiers. However, the true predictions are often canceled out by the false ones during consensus due to a phenomenon called &ldquo;curse of correlation&rdquo; which is represented as the strong interferences among the predictions produced by the base classifiers. In addition, the existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis on our experiment results, we conclude that the two problems are caused by some inherent deficiencies in the approach of consensus. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ a well-known benchmark data set NSL-KDD (the improved version of dataset KDDCup99 produced by University of New Brunswick) to make comparisons between the proposed and 8 common ensemble algorithms. Particularly, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in terms of the improvements toward the accuracy and reliability upon the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is proved to be a more effective ensemble solution than the traditional consensus approach, which outperforms the 8 ensemble algorithms by 20% on almost all compared metrices which include accuracy, precision, recall, F1-score and area under receiver operating characteristic curve. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=consensus" title="consensus">consensus</a>, <a href="https://publications.waset.org/abstracts/search?q=curse%20of%20correlation" title=" curse of correlation"> curse of correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=imbalance%20classification" title=" imbalance classification"> imbalance classification</a>, <a href="https://publications.waset.org/abstracts/search?q=rank-based%20chain-mode%20ensemble" title=" rank-based chain-mode ensemble"> rank-based chain-mode ensemble</a> </p> <a href="https://publications.waset.org/abstracts/112891/rank-based-chain-mode-ensemble-for-binary-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112891.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">518</span> A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Addin%20Osman">Addin Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Anwar%20Ali%20Yahya"> Anwar Ali Yahya</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Basit%20Kamal"> Mohammed Basit Kamal </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Datasets or collections are becoming important assets by themselves and now they can be accepted as a primary intellectual output of a research. The quality and usage of the datasets depend mainly on the context under which they have been collected, processed, analyzed, validated, and interpreted. This paper aims to present a collection of program educational objectives mapped to student&rsquo;s outcomes collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of this data is a notoriously tedious, time consuming process. In addition, it requires experts in the area, which are mostly not available. It has been shown the operational settings under which the collection has been produced. The collection has been cleansed, preprocessed, some features have been selected and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. At the end, the collection has been benchmarked using nine of the most widely used supervised multiclass classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measurements (Accuracy, Hamming Loss, Micro-F, Macro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets have achieved encouraging performance compared to other experimented multi-label classification methods. The Classifier Chains method has shown the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ABET" title="ABET">ABET</a>, <a href="https://publications.waset.org/abstracts/search?q=accreditation" title=" accreditation"> accreditation</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmark%20collection" title=" benchmark collection"> benchmark collection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=program%20educational%20objectives" title=" program educational objectives"> program educational objectives</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20outcomes" title=" student outcomes"> student outcomes</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20multi-class%20classification" title=" supervised multi-class classification"> supervised multi-class classification</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title=" text mining"> text mining</a> </p> <a href="https://publications.waset.org/abstracts/91604/a-dataset-of-program-educational-objectives-mapped-to-abet-outcomes-data-cleansing-exploratory-data-analysis-and-modeling" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">517</span> Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyed%20Mehdi%20Ghezi">Seyed Mehdi Ghezi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesam%20Hasanpoor"> Hesam Hasanpoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title="ensemble learning">ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20signals" title=" brain signals"> brain signals</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization%20methods" title=" optimization methods"> optimization methods</a>, <a href="https://publications.waset.org/abstracts/search?q=influential%20features" title=" influential features"> influential features</a>, <a href="https://publications.waset.org/abstracts/search?q=influential%20electrodes" title=" influential electrodes"> influential electrodes</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-classifiers" title=" meta-classifiers"> meta-classifiers</a> </p> <a href="https://publications.waset.org/abstracts/177312/methods-for-enhancing-ensemble-learning-or-improving-classifiers-of-this-technique-in-the-analysis-and-classification-of-brain-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177312.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">516</span> Multi-Class Text Classification Using Ensembles of Classifiers </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Syed%20Basit%20Ali%20Shah%20Bukhari">Syed Basit Ali Shah Bukhari</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20%20Qiang"> Yan Qiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Saad%20Abdul%20Rauf"> Saad Abdul Rauf</a>, <a href="https://publications.waset.org/abstracts/search?q=Syed%20Saqlaina%20Bukhari"> Syed Saqlaina Bukhari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text Classification is the methodology to classify any given text into the respective category from a given set of categories. It is highly important and vital to use proper set of pre-processing , feature selection and classification techniques to achieve this purpose. In this paper we have used different ensemble techniques along with variance in feature selection parameters to see the change in overall accuracy of the result and also on some other individual class based features which include precision value of each individual category of the text. After subjecting our data through pre-processing and feature selection techniques , different individual classifiers were tested first and after that classifiers were combined to form ensembles to increase their accuracy. Later we also studied the impact of decreasing the classification categories on over all accuracy of data. Text classification is highly used in sentiment analysis on social media sites such as twitter for realizing people’s opinions about any cause or it is also used to analyze customer’s reviews about certain products or services. Opinion mining is a vital task in data mining and text categorization is a back-bone to opinion mining. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Natural%20Language%20Processing" title="Natural Language Processing">Natural Language Processing</a>, <a href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier" title=" Ensemble Classifier"> Ensemble Classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=Bagging%20Classifier" title=" Bagging Classifier"> Bagging Classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=AdaBoost" title=" AdaBoost"> AdaBoost</a> </p> <a href="https://publications.waset.org/abstracts/123394/multi-class-text-classification-using-ensembles-of-classifiers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">515</span> Classification of Red, Green and Blue Values from Face Images Using k-NN Classifier to Predict the Skin or Non-Skin</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kemal%20Polat">Kemal Polat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, it has been estimated whether there is skin by using RBG values obtained from the camera and k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure. This problem can also be called a big data problem. The Skin dataset was taken from UCI machine learning repository. As the classifier, we have used the k-NN method to handle this big data problem. For k value of k-NN classifier, we have used as 1. To train and test the k-NN classifier, 50-50% training-testing partition has been used. As the performance metrics, TP rate, FP Rate, Precision, recall, f-measure and AUC values have been used to evaluate the performance of k-NN classifier. These obtained results are as follows: 0.999, 0.001, 0.999, 0.999, 0.999, and 1,00. As can be seen from the obtained results, this proposed method could be used to predict whether the image is skin or not. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=k-NN%20classifier" title="k-NN classifier">k-NN classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20or%20non-skin%20classification" title=" skin or non-skin classification"> skin or non-skin classification</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20values" title=" RGB values"> RGB values</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/86538/classification-of-red-green-and-blue-values-from-face-images-using-k-nn-classifier-to-predict-the-skin-or-non-skin" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">514</span> An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Flavio%20Cannavo">Flavio Cannavo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Continuous evaluation of the status of potentially hazardous volcanos plays a key role for civil protection purposes. The importance of monitoring volcanic activity, especially for energetic paroxysms that usually come with tephra emissions, is crucial not only for exposures to the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kind of monitoring networks. Unfavorably, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, continuously measured parameters (e.g. seismic, deformation, infrasonic and geochemical signals) are often not able to fully explain the ongoing phenomenon, thus making the fast volcano state assessment a very puzzling task for the personnel on duty at the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to infer automatically the ongoing volcano status from all the available different kind of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier gives an output about the volcanic status. The ensemble technique allows weighting the single classifier output to combine all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great power for decision-making purposes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20networks" title="Bayesian networks">Bayesian networks</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a>, <a href="https://publications.waset.org/abstracts/search?q=mount%20Etna" title=" mount Etna"> mount Etna</a>, <a href="https://publications.waset.org/abstracts/search?q=volcano%20monitoring" title=" volcano monitoring"> volcano monitoring</a> </p> <a href="https://publications.waset.org/abstracts/67701/an-ensemble-system-of-classifiers-for-computer-aided-volcano-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67701.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">513</span> Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20B.%20Le">C. B. Le</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20N.%20Pham"> V. N. Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as a important issue in the data mining and machine learning community. Different data sources provide information about different data. Therefore, multi-source data linking is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large. This issue is considered a major challenge from multi-source data. Ensemble is a versatile machine learning model in which learning techniques can work in parallel, with big data. Clustering ensemble has been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most of the traditional clustering ensemble approaches are based on single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on the standard sample data set. The experimental results demonstrate the superior performance of the FOMOCE method compared to the existing clustering ensemble methods and multi-source clustering methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20ensemble" title="clustering ensemble">clustering ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-source" title=" multi-source"> multi-source</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-objective" title=" multi-objective"> multi-objective</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20clustering" title=" fuzzy clustering"> fuzzy clustering</a> </p> <a href="https://publications.waset.org/abstracts/136598/fuzzy-optimization-multi-objective-clustering-ensemble-model-for-multi-source-data-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">512</span> Parkinson’s Disease Detection Analysis through Machine Learning Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhtasim%20Shafi%20Kader">Muhtasim Shafi Kader</a>, <a href="https://publications.waset.org/abstracts/search?q=Fizar%20Ahmed"> Fizar Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Annesha%20Acharjee"> Annesha Acharjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning and data mining are crucial in health care, as well as medical information and detection. Machine learning approaches are now being utilized to improve awareness of a variety of critical health issues, including diabetes detection, neuron cell tumor diagnosis, COVID 19 identification, and so on. Parkinson’s disease is basically a disease for our senior citizens in Bangladesh. Parkinson's Disease indications often seem progressive and get worst with time. People got affected trouble walking and communicating with the condition advances. Patients can also have psychological and social vagaries, nap problems, hopelessness, reminiscence loss, and weariness. Parkinson's disease can happen in both men and women. Though men are affected by the illness at a proportion that is around partial of them are women. In this research, we have to get out the accurate ML algorithm to find out the disease with a predictable dataset and the model of the following machine learning classifiers. Therefore, nine ML classifiers are secondhand to portion study to use machine learning approaches like as follows, Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest classifier, XBG Classifier, K Nearest Neighbor Classifier, Support Vector Machine Classifier, and Gradient Boosting Classifier are used. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=naive%20bayes" title="naive bayes">naive bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20boosting" title=" adaptive boosting"> adaptive boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=bagging%20classifier" title=" bagging classifier"> bagging classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20tree%20classifier" title=" decision tree classifier"> decision tree classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest%20classifier" title=" random forest classifier"> random forest classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=XBG%20classifier" title=" XBG classifier"> XBG classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=k%20nearest%20neighbor%20classifier" title=" k nearest neighbor classifier"> k nearest neighbor classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20classifier" title=" support vector classifier"> support vector classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20boosting%20classifier" title=" gradient boosting classifier"> gradient boosting classifier</a> </p> <a href="https://publications.waset.org/abstracts/148163/parkinsons-disease-detection-analysis-through-machine-learning-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148163.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">511</span> Use of Fractal Geometry in Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuad%20M.%20Alkoot">Fuad M. Alkoot</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main component of a machine learning system is the classifier. Classifiers are mathematical models that can perform classification tasks for a specific application area. Additionally, many classifiers are combined using any of the available methods to reduce the classifier error rate. The benefits gained from the combination of multiple classifier designs has motivated the development of diverse approaches to multiple classifiers. We aim to investigate using fractal geometry to develop an improved classifier combiner. Initially we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20geometry" title="fractal geometry">fractal geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classifier" title=" classifier"> classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimension" title=" fractal dimension"> fractal dimension</a> </p> <a href="https://publications.waset.org/abstracts/141274/use-of-fractal-geometry-in-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141274.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">217</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">510</span> Decision Trees Constructing Based on K-Means Clustering Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Loai%20Abdallah">Loai Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Yousef"> Malik Yousef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A domain space for the data should reflect the actual similarity between objects. Since objects belonging to the same cluster usually share some common traits even though their geometric distance might be relatively large. In general, the Euclidean distance of data points that represented by large number of features is not capturing the actual relation between those points. In this study, we propose a new method to construct a different space that is based on clustering to form a new distance metric. The new distance space is based on ensemble clustering (EC). The EC distance space is defined by tracking the membership of the points over multiple runs of clustering algorithm metric. Over this distance, we train the decision trees classifier (DT-EC). The results obtained by applying DT-EC on 10 datasets confirm our hypotheses that embedding the EC space as a distance metric would improve the performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ensemble%20clustering" title="ensemble clustering">ensemble clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20trees" title=" decision trees"> decision trees</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=K%20nearest%20neighbors" title=" K nearest neighbors"> K nearest neighbors</a> </p> <a href="https://publications.waset.org/abstracts/89656/decision-trees-constructing-based-on-k-means-clustering-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89656.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">509</span> Extreme Temperature Response to Solar Radiation Management in Southeast Asia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heri%20Kuswanto">Heri Kuswanto</a>, <a href="https://publications.waset.org/abstracts/search?q=Brina%20Miftahurrohmah"> Brina Miftahurrohmah</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatkhurokhman%20Fauzi"> Fatkhurokhman Fauzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Southeast Asia has experienced rising temperatures and is predicted to reach a 1.5°C increase by 2030, which is earlier than the Paris Agreement target. Solar Radiation Management (SRM) has been proposed as an alternative to combat global warming. This research investigates changes in the annual maximum temperature (TXx) with and without SRM over southeast Asia. We examined outputs from three ensemble members of the Geoengineering Large Ensemble Project (GLENS) experiment for the period 2051 to 2080. One ensemble member generated outputs that significantly deviated from the others, leading to the removal of ensemble 3 from the impact analysis. Our observations indicate that the magnitude of TXx changes with SRM is heterogeneous across countries. We found that SRM significantly reduces TXx levels compared to historical periods. Furthermore, SRM can reduce temperatures by up to 5°C compared to scenarios without SRM, with even more pronounced effects in Thailand, Cambodia, Laos, and Myanmar. This indicates that SRM can mitigate climate change by lowering future TXx levels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=solar%20radiation%20management" title="solar radiation management">solar radiation management</a>, <a href="https://publications.waset.org/abstracts/search?q=GLENS" title=" GLENS"> GLENS</a>, <a href="https://publications.waset.org/abstracts/search?q=extreme" title=" extreme"> extreme</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble" title=" ensemble"> ensemble</a> </p> <a href="https://publications.waset.org/abstracts/193495/extreme-temperature-response-to-solar-radiation-management-in-southeast-asia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193495.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">14</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">508</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a recognition system using this classifier for voice recognition. From a specific set of speakers, we can recognize the speaker&rsquo;s voice. For this purpose, the system uses spectrograms of the voice signals as input to the system, extracts the characteristics and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security system or smart buildings for different types of intelligent devices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">507</span> Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shahin%20Mirshekari">Shahin Mirshekari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammadreza%20Moradi"> Mohammadreza Moradi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Jafari"> Hossein Jafari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehdi%20Jafari"> Mehdi Jafari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Ensaf"> Mohammad Ensaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior performance in predictive accuracy, achieving an R² score near 1.0, and significantly lower values in MSE, MAE, and RMSE. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics in complex pharmaceutical sales datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20process%20regression" title="Gaussian process regression">Gaussian process regression</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20kernels" title=" ensemble kernels"> ensemble kernels</a>, <a href="https://publications.waset.org/abstracts/search?q=bayesian%20optimization" title=" bayesian optimization"> bayesian optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=pharmaceutical%20sales%20analysis" title=" pharmaceutical sales analysis"> pharmaceutical sales analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series%20forecasting" title=" time series forecasting"> time series forecasting</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20analysis" title=" data analysis"> data analysis</a> </p> <a href="https://publications.waset.org/abstracts/181581/enhancing-predictive-accuracy-in-pharmaceutical-sales-through-an-ensemble-kernel-gaussian-process-regression-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181581.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">506</span> Comparing SVM and Naïve Bayes Classifier for Automatic Microaneurysm Detections </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Sopharak">A. Sopharak</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Uyyanonvara"> B. Uyyanonvara</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Barman"> S. Barman </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic retinopathy is characterized by the development of retinal microaneurysms. The damage can be prevented if disease is treated in its early stages. In this paper, we are comparing Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers for automatic microaneurysm detection in images acquired through non-dilated pupils. The Nearest Neighbor classifier is used as a baseline for comparison. Detected microaneurysms are validated with expert ophthalmologists’ hand-drawn ground-truths. The sensitivity, specificity, precision and accuracy of each method are also compared. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=microaneurysm" title=" microaneurysm"> microaneurysm</a>, <a href="https://publications.waset.org/abstracts/search?q=naive%20Bayes%20classifier" title=" naive Bayes classifier"> naive Bayes classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM%20classifier" title=" SVM classifier"> SVM classifier</a> </p> <a href="https://publications.waset.org/abstracts/3939/comparing-svm-and-naive-bayes-classifier-for-automatic-microaneurysm-detections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">505</span> Performance Assessment of Multi-Level Ensemble for Multi-Class Problems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodolfo%20Lorbieski">Rodolfo Lorbieski</a>, <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Modesto%20Nassar"> Silvia Modesto Nassar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking in multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining outputs from the tree-based pair of meta-classifiers (level 1) from Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the meta-classifier level 2. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time for three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, ANOVA three-way test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only in time. The accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as for the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=stacking" title="stacking">stacking</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layers" title=" multi-layers"> multi-layers</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble" title=" ensemble"> ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-class" title=" multi-class"> multi-class</a> </p> <a href="https://publications.waset.org/abstracts/77466/performance-assessment-of-multi-level-ensemble-for-multi-class-problems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">504</span> Feature Evaluation Based on Random Subspace and Multiple-K Ensemble</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaehong%20Yu">Jaehong Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Seoung%20Bum%20Kim"> Seoung Bum Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. The unsupervised feature selection can be categorized as feature subset selection and feature ranking method, and we focused on unsupervised feature ranking methods which evaluate the features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve their higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined with the ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20analysis" title="clustering analysis">clustering analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple-k%20ensemble" title=" multiple-k ensemble"> multiple-k ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20subspace-based%20feature%20evaluation" title=" random subspace-based feature evaluation"> random subspace-based feature evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20feature%20ranking" title=" unsupervised feature ranking"> unsupervised feature ranking</a> </p> <a href="https://publications.waset.org/abstracts/52081/feature-evaluation-based-on-random-subspace-and-multiple-k-ensemble" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52081.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">503</span> Modeling Activity Pattern Using XGBoost for Mining Smart Card Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eui-Jin%20Kim">Eui-Jin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hasik%20Lee"> Hasik Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Su-Jin%20Park"> Su-Jin Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong-Kyu%20Kim"> Dong-Kyu Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Smart-card data are expected to provide information on activity pattern as an alternative to conventional person trip surveys. The focus of this study is to propose a method for training the person trip surveys to supplement the smart-card data that does not contain the purpose of each trip. We selected only available features from smart card data such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations to train the survey data. XGboost, which is state-of-the-art tree-based ensemble classifier, was used to train data from multiple sources. This classifier uses a more regularized model formalization to control the over-fitting and show very fast execution time with well-performance. The validation results showed that proposed method efficiently estimated the trip purpose. GIS data of station and duration of stay at the destination were significant features in modeling trip purpose. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=activity%20pattern" title="activity pattern">activity pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=smart-card" title=" smart-card"> smart-card</a>, <a href="https://publications.waset.org/abstracts/search?q=XGboost" title=" XGboost"> XGboost</a> </p> <a href="https://publications.waset.org/abstracts/80202/modeling-activity-pattern-using-xgboost-for-mining-smart-card-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80202.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">502</span> Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Luthfi">Muhammad Luthfi</a>, <a href="https://publications.waset.org/abstracts/search?q=Sutikno%20Sutikno"> Sutikno Sutikno</a>, <a href="https://publications.waset.org/abstracts/search?q=Purhadi%20Purhadi"> Purhadi Purhadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Weather forecast has necessarily been improved to provide the communities an accurate and objective prediction as well. To overcome such issue, the numerical-based weather forecast was extensively developed to reduce the subjectivity of forecast. Yet the Numerical Weather Predictions (NWPs) outputs are unfortunately issued without taking dynamical weather behavior and local terrain features into account. Thus, NWPs outputs are not able to accurately forecast the weather quantities, particularly for medium and long range forecast. The aim of this research is to aid and extend the development of ensemble forecast for Meteorology, Climatology, and Geophysics Agency of Indonesia. Ensemble method is an approach combining various deterministic forecast to produce more reliable one. However, such forecast is biased and uncalibrated due to its underdispersive or overdispersive nature. As one of the parametric methods, Bayesian Model Averaging (BMA) generates the calibrated ensemble forecast and constructs predictive PDF for specified period. Such method is able to utilize ensemble of any size but does not take spatial correlation into account. Whereas space dependencies involve the site of interest and nearby site, influenced by dynamic weather behavior. Meanwhile, Geostatistical Output Perturbation (GOP) reckons the spatial correlation to generate future weather quantities, though merely built by a single deterministic forecast, and is able to generate an ensemble of any size as well. This research conducts both BMA and GOP to generate the calibrated ensemble forecast for the daily temperature at few meteorological sites nearby Indonesia international airport. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20Model%20Averaging" title="Bayesian Model Averaging">Bayesian Model Averaging</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20forecast" title=" ensemble forecast"> ensemble forecast</a>, <a href="https://publications.waset.org/abstracts/search?q=geostatistical%20output%20perturbation" title=" geostatistical output perturbation"> geostatistical output perturbation</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20weather%20prediction" title=" numerical weather prediction"> numerical weather prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a> </p> <a href="https://publications.waset.org/abstracts/68771/application-of-bayesian-model-averaging-and-geostatistical-output-perturbation-to-generate-calibrated-ensemble-weather-forecast" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68771.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">501</span> Segmentation of Liver Using Random Forest Classifier </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gajendra%20Kumar%20%20Mourya">Gajendra Kumar Mourya</a>, <a href="https://publications.waset.org/abstracts/search?q=Dinesh%20%20Bhatia"> Dinesh Bhatia</a>, <a href="https://publications.waset.org/abstracts/search?q=Akash%20%20Handique"> Akash Handique</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunita%20Warjri"> Sunita Warjri</a>, <a href="https://publications.waset.org/abstracts/search?q=Syed%20Achaab%20Amir"> Syed Achaab Amir </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, Medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable mean for abdominal organ investigation and have been widely studied in the recent years. Diagnosis of liver pathologies is one of the major areas of current interests in the field of medical image processing and is still an open problem. To deeply study and diagnose the liver, segmentation of liver is done to identify which part of the liver is mostly affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer aided segmentation of the Liver is a challenging task due to inter-patient Liver shape and size variability. In this paper, we present a technique for automatic segmenting the liver from CT images using Random Forest Classifier. Random forests or random decision forests are an ensemble learning method for classification that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparing with various other techniques, it was found that Random Forest Classifier provide a better segmentation results with respect to accuracy and speed. We have done the validation of our results using various techniques and it shows above 89% accuracy in all the cases. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20validation" title=" image validation"> image validation</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/77535/segmentation-of-liver-using-random-forest-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">500</span> Measuring Multi-Class Linear Classifier for Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Susilawati%20Mohamad">Fatma Susilawati Mohamad</a>, <a href="https://publications.waset.org/abstracts/search?q=Azizah%20Abdul%20Manaf"> Azizah Abdul Manaf</a>, <a href="https://publications.waset.org/abstracts/search?q=Fadhillah%20Ahmad"> Fadhillah Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Zarina%20Mohamad"> Zarina Mohamad</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Suryani%20Wan%20Awang"> Wan Suryani Wan Awang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A simple and robust multi-class linear classifier is proposed and implemented. For a pair of classes of the linear boundary, a collection of segments of hyper planes created as perpendicular bisectors of line segments linking centroids of the classes or part of classes. Nearest Neighbor and Linear Discriminant Analysis are compared in the experiments to see the performances of each classifier in discriminating ripeness of oil palm. This paper proposes a multi-class linear classifier using Linear Discriminant Analysis (LDA) for image identification. Result proves that LDA is well capable in separating multi-class features for ripeness identification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-class" title="multi-class">multi-class</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20classifier" title=" linear classifier"> linear classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=nearest%20neighbor" title=" nearest neighbor"> nearest neighbor</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20discriminant%20analysis" title=" linear discriminant analysis"> linear discriminant analysis</a> </p> <a href="https://publications.waset.org/abstracts/51310/measuring-multi-class-linear-classifier-for-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">499</span> Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Hari%20Prasath">T. Hari Prasath</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Ithaya%20Rani"> P. Ithaya Rani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of visual technology, recognizing emotions from the face images is a challenging task. Several related methods have not utilized the dynamic facial features effectively for high performance. This paper proposes a method for emotions recognition using dynamic facial features with high performance. Initially, local features are captured by Gabor filter with different scale and orientations in each frame for finding the position and scale of face part from different backgrounds. The Gabor features are sent to the ensemble classifier for detecting Gabor facial features. The region of dynamic features is captured from the Gabor facial features in the consecutive frames which represent the dynamic variations of facial appearances. In each region of dynamic features is normalized using Z-score normalization method which is further encoded into binary pattern features with the help of threshold values. The binary features are passed to Multi-class AdaBoost classifier algorithm with the well-trained database contain happiness, sadness, surprise, fear, anger, disgust, and neutral expressions to classify the discriminative dynamic features for emotions recognition. The developed method is deployed on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and they show significant performance improvement owing to their dynamic features when compared with the existing methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detecting%20face" title="detecting face">detecting face</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-class%20AdaBoost%20classifier" title=" multi-class AdaBoost classifier"> multi-class AdaBoost classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=Z-score%20normalization" title=" Z-score normalization"> Z-score normalization</a> </p> <a href="https://publications.waset.org/abstracts/85005/dynamic-gabor-filter-facial-features-based-recognition-of-emotion-in-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85005.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">498</span> Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an original method for guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution was obtained as a finite closed set of alternative hypotheses, which contains an object of classification with a probability of not less than the specified value. Thus, the classification is represented by a set of hypothetical classes. In this case, the smaller the cardinality of the discrete set of hypothetical classes is, the higher is the classification accuracy. Experiments have shown that if the cardinality of the classifiers ensemble is increased then the cardinality of this set of hypothetical classes is reduced. The problem of the guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers is relevant in the multichannel classification of target events in C-OTDR monitoring systems. Results of suggested approach practical usage to accuracy control in C-OTDR monitoring systems are present. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lipschitz%20classifiers" title="Lipschitz classifiers">Lipschitz classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=confidence%20set" title=" confidence set"> confidence set</a>, <a href="https://publications.waset.org/abstracts/search?q=C-OTDR%20monitoring" title=" C-OTDR monitoring"> C-OTDR monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers%20accuracy" title=" classifiers accuracy"> classifiers accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers%20ensemble" title=" classifiers ensemble"> classifiers ensemble</a> </p> <a href="https://publications.waset.org/abstracts/21073/lipschitz-classifiers-ensembles-usage-for-classification-of-target-events-in-c-otdr-monitoring-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21073.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">497</span> Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Soha%20A.%20Bahanshal">Soha A. Bahanshal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Verdhan"> Vaibhav Verdhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayong%20Kim"> Bayong Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Graduation rates at six-year colleges are becoming a more essential indicator for incoming fresh students and for university rankings. Predicting student graduation is extremely beneficial to schools and has a huge potential for targeted intervention. It is important for educational institutions since it enables the development of strategic plans that will assist or improve students' performance in achieving their degrees on time (GOT). A first step and a helping hand in extracting useful information from these data and gaining insights into the prediction of students' progress and performance is offered by machine learning techniques. Data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis contains students who have graduated in 6 years in the academic year 2017-2018 for science majors. This analysis can be used to predict the graduation of students in the next academic year. Different Predictive modelings such as logistic regression, decision trees, support vector machines, Random Forest, Naïve Bayes, and KNeighborsClassifier are applied to predict whether a student will graduate. These classifiers were evaluated with k folds of 5. The performance of these classifiers was compared based on accuracy measurement. The results indicated that Ensemble Classifier achieves better accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prediction" title="prediction">prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20trees" title=" decision trees"> decision trees</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20model" title=" ensemble model"> ensemble model</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20graduation" title=" student graduation"> student graduation</a>, <a href="https://publications.waset.org/abstracts/search?q=GOT%20graduate%20on%20time" title=" GOT graduate on time"> GOT graduate on time</a> </p> <a href="https://publications.waset.org/abstracts/167620/evaluation-of-machine-learning-algorithms-and-ensemble-methods-for-prediction-of-students-graduation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">496</span> Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Myungjin%20Lee">Myungjin Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Daegun%20Han"> Daegun Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Jongsung%20Kim"> Jongsung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Soojun%20Kim"> Soojun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung%20Soo%20Kim"> Hung Soo Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, the localized heavy rainfall and typhoons are frequently occurred due to the climate change and the damage is becoming bigger. Therefore, we may need a more accurate prediction of the rainfall and runoff. However, the gauge rainfall has the limited accuracy in space. Radar rainfall is better than gauge rainfall for the explanation of the spatial variability of rainfall but it is mostly underestimated with the uncertainty involved. Therefore, the ensemble of radar rainfall was simulated using error structure to overcome the uncertainty and gauge rainfall. The simulated ensemble was used as the input data of the rainfall-runoff models for obtaining the ensemble of runoff hydrographs. The previous studies discussed about the accuracy of the rainfall-runoff model. Even if the same input data such as rainfall is used for the runoff analysis using the models in the same basin, the models can have different results because of the uncertainty involved in the models. Therefore, we used two models of the SSARR model which is the lumped model, and the Vflo model which is a distributed model and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in Han river basin and we obtained one integrated runoff hydrograph which is an optimum runoff hydrograph using the blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), Mean Square Error (MSE). From this study, we could confirm the accuracy of rainfall and rainfall-runoff model using ensemble scenario and various rainfall-runoff model and we can use this result to study flood control measure due to climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement(KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar%20rainfall%20ensemble" title="radar rainfall ensemble">radar rainfall ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=rainfall-runoff%20models" title=" rainfall-runoff models"> rainfall-runoff models</a>, <a href="https://publications.waset.org/abstracts/search?q=blending%20method" title=" blending method"> blending method</a>, <a href="https://publications.waset.org/abstracts/search?q=optimum%20runoff%20hydrograph" title=" optimum runoff hydrograph"> optimum runoff hydrograph</a> </p> <a href="https://publications.waset.org/abstracts/76203/simulation-of-optimal-runoff-hydrograph-using-ensemble-of-radar-rainfall-and-blending-of-runoffs-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76203.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=17">17</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=18">18</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Ensemble%20Classifier&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10