<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: ensemble machine learning</title> <meta name="description" content="Search results for: ensemble machine learning"> <meta name="keywords" content="ensemble machine learning"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="ensemble machine learning" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="ensemble machine learning"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 8638</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: ensemble machine learning</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8638</span> Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20B.%20Le">C. B. Le</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20N.%20Pham"> V. N. Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In modern data analysis, multi-source data appears more and more often in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data.
Therefore, linking multi-source data is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which makes it a major challenge to cluster. Ensemble learning is a versatile machine learning paradigm in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis, called FOMOCE (fuzzy optimization multi-objective clustering ensemble). Firstly, a clustering ensemble mathematical model is introduced, based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods. 
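The clustering ensemble idea described in this abstract can be sketched in a few lines. The toy example below is an illustrative evidence-accumulation scheme, not the FOMOCE algorithm itself: several base clusterings are merged through a co-association matrix, and points that share a cluster in a majority of the base partitions land in the same consensus cluster.

```python
from itertools import combinations

def consensus_clusters(partitions, threshold=0.5):
    """Combine base clusterings (lists of labels, one per partition)
    into one consensus clustering via a co-association matrix."""
    n = len(partitions[0])
    # count, for each pair of points, how many partitions co-cluster them
    together = {}
    for labels in partitions:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                together[(i, j)] = together.get((i, j), 0) + 1
    # connected components over the thresholded co-association graph
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j), count in together.items():
        if count / len(partitions) > threshold:
            parent[find(i)] = find(j)
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]

# three base clusterings that mostly agree on two groups
p1 = [0, 0, 0, 1, 1, 1]
p2 = [1, 1, 1, 0, 0, 0]   # same grouping, different label names
p3 = [0, 0, 1, 1, 1, 1]   # one point assigned differently
print(consensus_clusters([p1, p2, p3]))  # → [0, 0, 0, 1, 1, 1]
```

Real clustering ensembles such as FOMOCE add objective functions, weighting, and fuzzy memberships on top of this basic consensus step.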
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20ensemble" title="clustering ensemble">clustering ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-source" title=" multi-source"> multi-source</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-objective" title=" multi-objective"> multi-objective</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20clustering" title=" fuzzy clustering"> fuzzy clustering</a> </p> <a href="https://publications.waset.org/abstracts/136598/fuzzy-optimization-multi-objective-clustering-ensemble-model-for-multi-source-data-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8637</span> Evaluation of Ensemble Classifiers for Intrusion Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Govindarajan">M. Govindarajan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing; their performances are analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. 
The feasibility and benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to that of other standard homogeneous and heterogeneous ensemble methods; the standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy over the individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on standard intrusion detection datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble" title=" ensemble"> ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=radial%20basis%20function" title=" radial basis function"> radial basis function</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a> </p> <a href="https://publications.waset.org/abstracts/43650/evaluation-of-ensemble-classifiers-for-intrusion-detection" class="btn btn-primary btn-sm">Procedia</a> <a 
href="https://publications.waset.org/abstracts/43650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8636</span> Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyed%20Mehdi%20Ghezi">Seyed Mehdi Ghezi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesam%20Hasanpoor"> Hesam Hasanpoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. 
Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification. 
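The meta-classifier idea in the second approach can be illustrated with a minimal stacking sketch (the data here is hypothetical, not from any of the cited studies): the level-0 classifiers' predictions become input features for a small level-1 learner, here a tiny perceptron that learns which base classifier to trust.

```python
def train_perceptron(features, labels, epochs=20, lr=0.1):
    """Tiny perceptron used as the level-1 meta-classifier."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

# level-0 outputs: each row holds (clf_a, clf_b, clf_c) predictions
meta_X = [(1, 1, 0), (0, 0, 1), (1, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0)]
meta_y = [1, 0, 1, 0, 1, 0]   # true labels; here clf_a happens to be reliable
meta = train_perceptron(meta_X, meta_y)
print([meta(x) for x in meta_X])  # → [1, 0, 1, 0, 1, 0]
```

In this toy setup the meta-classifier learns to weight the first base classifier heavily, which is exactly the behaviour a stacking layer is meant to discover.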
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title="ensemble learning">ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20signals" title=" brain signals"> brain signals</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization%20methods" title=" optimization methods"> optimization methods</a>, <a href="https://publications.waset.org/abstracts/search?q=influential%20features" title=" influential features"> influential features</a>, <a href="https://publications.waset.org/abstracts/search?q=influential%20electrodes" title=" influential electrodes"> influential electrodes</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-classifiers" title=" meta-classifiers"> meta-classifiers</a> </p> <a href="https://publications.waset.org/abstracts/177312/methods-for-enhancing-ensemble-learning-or-improving-classifiers-of-this-technique-in-the-analysis-and-classification-of-brain-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177312.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8635</span> Enhancing 
Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vishal%20Das">Vishal Das</a>, <a href="https://publications.waset.org/abstracts/search?q=Tianyi%20Mao"> Tianyi Mao</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhicheng%20Geng"> Zhicheng Geng</a>, <a href="https://publications.waset.org/abstracts/search?q=Carmen%20Flores"> Carmen Flores</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Pelloso"> Diego Pelloso</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang%20Wang"> Fang Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its sell-in and sell-out time series data, which are forecasted on a weekly and monthly scale for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined such that there is one model per product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how we can use the machine learning development environment on Amazon Web Services (AWS) to explore a set of forecasting models and create business intelligence dashboards that can be used with the existing demand planning tools in Nestlé. 
We explored recent deep neural network (DNN) approaches, which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sell-in%20and%20sell-out%20forecasting" title="sell-in and sell-out forecasting">sell-in and sell-out forecasting</a>, <a href="https://publications.waset.org/abstracts/search?q=demand%20planning" title=" demand planning"> demand planning</a>, <a href="https://publications.waset.org/abstracts/search?q=DeepAR" title=" DeepAR"> DeepAR</a>, <a href="https://publications.waset.org/abstracts/search?q=retail" title=" retail"> retail</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning" title=" ensemble machine learning"> ensemble machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=time-series" title=" time-series"> time-series</a> </p> <a href="https://publications.waset.org/abstracts/169497/enhancing-sell-in-and-sell-out-forecasting-using-ensemble-machine-learning-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right 
rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8634</span> Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sam%20Khozama">Sam Khozama</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20M.%20Mayya"> Ali M. Mayya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer is one of the most common cancer types in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Big cancer datasets need very powerful tools for analysis and prediction. Machine learning and deep learning are two of the most efficient tools for predicting cancer from textual data. In this study, we developed a fusion of a machine learning model and a deep learning model: Long Short-Term Memory (LSTM) and ensemble learning with hyperparameter optimization are used, and score-level fusion combines their outputs to obtain the final prediction. Experiments are done on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved performance by 3.3% compared to the individual models. 
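Score-level fusion of the kind described above reduces to combining the per-model probability scores before thresholding. The sketch below uses a weighted average with illustrative weights and scores (not the paper's values):

```python
def fuse_scores(scores_a, scores_b, w=0.5, threshold=0.5):
    """Score-level fusion: weighted average of two models' probability
    scores, then thresholding to a final class decision."""
    fused = [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]
    return [int(s >= threshold) for s in fused], fused

# illustrative probability scores, e.g. from an ensemble model and an LSTM
ensemble_scores = [0.9, 0.2, 0.6]
lstm_scores = [0.7, 0.4, 0.3]
labels, fused = fuse_scores(ensemble_scores, lstm_scores, w=0.6)
print([round(s, 2) for s in fused])  # → [0.82, 0.28, 0.48]
print(labels)                        # → [1, 0, 0]
```

In practice the fusion weight `w` is itself tuned on a validation set, so that the stronger model dominates the decision.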
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer%20prediction" title=" cancer prediction"> cancer prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/155602/breast-cancer-prediction-using-score-level-fusion-of-machine-learning-and-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8633</span> A Review of Machine Learning for Big Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Devatha%20Kalyan%20Kumar">Devatha Kalyan Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Aravindraj%20D."> Aravindraj D.</a>, <a href="https://publications.waset.org/abstracts/search?q=Sadathulla%20A."> Sadathulla A.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Big data are now rapidly expanding in all engineering and science and many other domains. 
The potential of such large or massive data is undoubtedly significant, but exploiting it requires new ways of thinking and new learning techniques to address the various big data challenges. Machine learning is continuously unleashing its power in a wide range of applications. This paper surveys the latest advances in research on machine learning for big data processing. First, it reviews the machine learning techniques used in recent studies, such as deep learning, representation learning, transfer learning, active learning, and distributed and parallel learning. It then focuses on the challenges of machine learning for big data and their possible solutions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20learning" title="active learning">active learning</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20data" title=" big data"> big data</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/72161/a-review-of-machine-learning-for-big-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72161.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8632</span> Machine Learning Predictive Models for Hydroponic Systems: A Case Study Nutrient Film Technique and Deep Flow Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Kritiyaporn%20Kunsook">Kritiyaporn Kunsook</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning algorithms (MLAs) such as artificial neural networks (ANNs), decision trees, support vector machines (SVMs), Naïve Bayes, and ensemble classifiers by voting are powerful data-driven methods that remain relatively little used for classifying hydroponic systems, and thus have not been thoroughly evaluated together in this field. The performances of this series of MLAs in modeling hydroponic systems are compared based on the accuracy of each model. Classification of hydroponic systems covers only test samples from vegetables grown with the nutrient film technique (NFT) and the deep flow technique (DFT). The features are characteristics of the vegetables: harvesting height, width, temperature, required light, and color. The results indicate classification accuracies of 98% for ANNs, 98% for the decision tree, 97.33% for SVMs, 96.67% for Naïve Bayes, and 98.96% for the ensemble classifier by voting. 
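A voting ensemble like the one evaluated above simply takes the majority label across the base classifiers, sample by sample. The toy comparison below (hypothetical predictions, not the paper's data) shows how the vote can beat each individual model's accuracy:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of samples classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical ground truth and per-classifier predictions
y_true = [0, 1, 1, 0, 1]
preds = {
    "ann":           [0, 1, 1, 0, 0],
    "decision_tree": [0, 1, 0, 0, 1],
    "svm":           [1, 1, 1, 0, 1],
}
# majority vote across the three base classifiers, sample by sample
vote = [Counter(col).most_common(1)[0][0] for col in zip(*preds.values())]
for name, p in preds.items():
    print(name, accuracy(y_true, p))     # each base model: 0.8
print("voting", accuracy(y_true, vote))  # ensemble: 1.0
```

Because the three models err on different samples, the majority vote corrects each individual mistake, which is the mechanism behind the ensemble's 98.96% result in the abstract.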
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title="artificial neural networks">artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20tree" title=" decision tree"> decision tree</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=na%C3%AFve%20Bayes" title=" naïve Bayes"> naïve Bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20classifier%20by%20voting" title=" ensemble classifier by voting"> ensemble classifier by voting</a> </p> <a href="https://publications.waset.org/abstracts/91070/machine-learning-predictive-models-for-hydroponic-systems-a-case-study-nutrient-film-technique-and-deep-flow-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91070.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">372</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8631</span> Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20F.%20Nettleton">David F. 
Nettleton</a>, <a href="https://publications.waset.org/abstracts/search?q=Christian%20Wasiak"> Christian Wasiak</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonas%20Dorissen"> Jonas Dorissen</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Gillen"> David Gillen</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandr%20Tretyak"> Alexandr Tretyak</a>, <a href="https://publications.waset.org/abstracts/search?q=Elodie%20Bugnicourt"> Elodie Bugnicourt</a>, <a href="https://publications.waset.org/abstracts/search?q=Alejandro%20Rosales"> Alejandro Rosales</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the diameter and flexibility of the end product, the medical guidewire, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems. 
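As a rough illustration of an ensemble of regression models for process calibration, the sketch below averages linear models trained on bootstrap resamples of the process data. It is a deliberately simple stand-in for (not an implementation of) the KRLS/SVR ensemble used in the paper, and the toy data is invented:

```python
import random

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def ensemble_regressor(xs, ys, n_models=25, seed=0):
    """Train linear models on bootstrap resamples and average their
    predictions -- the basic model-ensemble idea for regression."""
    rng = random.Random(seed)
    models = []
    while len(models) < n_models:
        idx = [rng.randrange(len(xs)) for _ in xs]
        if len({xs[i] for i in idx}) < 2:   # skip degenerate resamples
            continue
        models.append(fit_linear([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(a * x + b for a, b in models) / n_models

# invented process data: setting (x) vs. measured output (y), exactly y = 2x + 1
xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 9, 11]
predict = ensemble_regressor(xs, ys)
print(round(predict(10), 6))  # → 21.0
```

With noisy real process data the averaged ensemble smooths out the variance of the individual fits, which is what makes it attractive for staying inside a strict output tolerance.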
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=calibration" title="calibration">calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20modeling" title=" data modeling"> data modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20processes" title=" industrial processes"> industrial processes</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/79973/data-modeling-and-calibration-of-in-line-pultrusion-and-laser-ablation-machine-processes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79973.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">298</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8630</span> Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gayathri%20Nagarajan">Gayathri Nagarajan</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20D.%20Dhinesh%20Babu"> L. D. Dhinesh Babu </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Health care is one of the prominent industries that generate voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. 
Compared with other applications, accuracy and fast processing are of higher importance in health care because they relate directly to human life. Although many machine learning techniques and big data solutions are used for efficient processing and prediction of health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with traditional frameworks. Unlike other works that focus on a single technique, our work compares six different machine learning techniques with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results of experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that for the other machine learning techniques accuracy depends largely on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy regardless of dataset characteristics. 
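A minimal, Spark-free sketch of the comparison described above, using scikit-learn's gradient boosted trees and synthetic data in place of Spark MLlib and the benchmark health care datasets (the misclassification error rate is one of the paper's two metrics):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a benchmark health care dataset
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

models = {
    "gradient boosted trees": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
errors = {}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    errors[name] = 1.0 - acc  # misclassification error rate
    print(f"{name}: error={errors[name]:.3f}")
```

Repeating this loop over datasets with different characteristics is how the stability claim for gradient boosted trees would be checked.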
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data%20analytics" title="big data analytics">big data analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning" title=" ensemble machine learning"> ensemble machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20boosted%20trees" title=" gradient boosted trees"> gradient boosted trees</a>, <a href="https://publications.waset.org/abstracts/search?q=Spark%20platform" title=" Spark platform"> Spark platform</a> </p> <a href="https://publications.waset.org/abstracts/74866/gradient-boosted-trees-on-spark-platform-for-supervised-learning-in-health-care-big-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">240</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8629</span> Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Soha%20A.%20Bahanshal">Soha A. Bahanshal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Verdhan"> Vaibhav Verdhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayong%20Kim"> Bayong Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Graduation rates at six-year colleges are becoming a more essential indicator for incoming fresh students and for university rankings. Predicting student graduation is extremely beneficial to schools and has a huge potential for targeted intervention. 
It is important for educational institutions because it enables the development of strategic plans to help students achieve their degrees on time (graduate on time, GOT). Machine learning techniques offer a first step toward extracting useful information from these data and gaining insight into predicting students' progress and performance. Data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis cover science majors who graduated within six years as of the 2017-2018 academic year. This analysis can be used to predict the graduation of students in the next academic year. Several predictive models, such as logistic regression, decision trees, support vector machines, Random Forest, Naïve Bayes, and KNeighborsClassifier, are applied to predict whether a student will graduate. The classifiers were evaluated with 5-fold cross-validation, and their performance was compared on accuracy. The results indicate that an ensemble classifier achieves the best accuracy, about 91.12%. This GOT prediction model should be useful to university administrators and academics in developing measures to assist and boost students' academic performance and to ensure they graduate on time. 
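The ensemble-with-5-fold-evaluation setup described above can be sketched with scikit-learn; the synthetic records below are stand-ins for the (private) student data, and the three base learners are illustrative choices from the paper's list:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for student records (graduate-on-time = class 1)
X, y = make_classification(n_samples=500, n_features=10, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=1)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # majority vote over the base classifiers
)
scores = cross_val_score(ensemble, X, y, cv=5)  # k = 5 folds, as in the paper
print(f"mean accuracy: {scores.mean():.3f}")
```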
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prediction" title="prediction">prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20trees" title=" decision trees"> decision trees</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20model" title=" ensemble model"> ensemble model</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20graduation" title=" student graduation"> student graduation</a>, <a href="https://publications.waset.org/abstracts/search?q=GOT%20graduate%20on%20time" title=" GOT graduate on time"> GOT graduate on time</a> </p> <a href="https://publications.waset.org/abstracts/167620/evaluation-of-machine-learning-algorithms-and-ensemble-methods-for-prediction-of-students-graduation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8628</span> Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muthukumarasamy%20Govindarajan">Muthukumarasamy Govindarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection of unwanted, unsolicited mails called spam from email is an interesting area of research. 
It is necessary to evaluate the performance of any new spam classifier on standard datasets. Recently, ensemble-based classifiers have gained popularity in this domain. This research work addresses an efficient email filtering approach based on ensemble methods for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM), and Genetic Algorithm (GA) as base classifiers along with different ensemble methods. The experimental results show that the ensemble classifiers achieved higher accuracy than the individual classifiers, and that the hybrid models outperformed the combined models on the e-mail dataset. The proposed ensemble-based classifiers perform well in classification accuracy, an important criterion for building a robust spam classifier. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy" title="accuracy">accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=arcing" title=" arcing"> arcing</a>, <a href="https://publications.waset.org/abstracts/search?q=bagging" title=" bagging"> bagging</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Naive%20Bayes" title=" Naive Bayes"> Naive Bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=sentiment%20mining" title=" sentiment mining"> sentiment mining</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/112240/sentiment-analysis-of-ensemble-based-classifiers-for-e-mail-data" class="btn btn-primary btn-sm">Procedia</a> <a 
href="https://publications.waset.org/abstracts/112240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8627</span> Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wade%20Ghribi">Wade Ghribi</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelmoty%20M.%20Ahmed"> Abdelmoty M. Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Said%20Badawy"> Ahmed Said Badawy</a>, <a href="https://publications.waset.org/abstracts/search?q=Belgacem%20Bouallegue"> Belgacem Bouallegue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information from students' learning behavior, particularly to uncover the early symptom of at-risk pupils. On the other hand, data with noise, outliers, and irrelevant information may provide incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable students' performance prediction model for Higher Education Institutions. 
Data were gathered from two systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The records of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings indicate that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques. 
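The hyperparameter fine-tuning step described above can be sketched with scikit-learn's GridSearchCV; the random-forest grid and the synthetic data below are illustrative assumptions, not the paper's actual search space or records:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the merged student-information + e-learning features
X, y = make_classification(n_samples=400, n_features=12, random_state=2)

# Fine-tune ensemble hyperparameters via cross-validated grid search
grid = GridSearchCV(
    RandomForestClassifier(random_state=2),
    param_grid={"n_estimators": [50, 100], "max_depth": [4, 8, None]},
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))
```

The same pattern applies to the other ensembles (Bagging, AdaBoost, XGBoost) by swapping the estimator and its grid.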
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=educational%20data%20mining" title="educational data mining">educational data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20performance%20prediction" title=" student performance prediction"> student performance prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=e-learning" title=" e-learning"> e-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a> </p> <a href="https://publications.waset.org/abstracts/149220/improve-student-performance-prediction-using-majority-vote-ensemble-model-for-higher-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149220.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8626</span> Rank-Based Chain-Mode Ensemble for Binary Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chongya%20Song">Chongya Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Kang%20Yen"> Kang Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexander%20Pons"> Alexander Pons</a>, <a href="https://publications.waset.org/abstracts/search?q=Jin%20Liu"> Jin Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In 
the field of machine learning, ensembles are commonly employed to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the &ldquo;curse of correlation&rdquo;, i.e., strong interference among the predictions produced by the base classifiers. In addition, existing practices still do not effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that both problems are caused by inherent deficiencies in the consensus approach. We therefore create an enhanced ensemble algorithm that adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed algorithm with 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, including accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve. 
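A toy numpy demonstration of the "curse of correlation" described above (this is not the paper's rank-based chain-mode algorithm): when the 22 voters share a common error source, their mistakes coincide instead of cancelling, and plain majority consensus cannot rise above the shared error ceiling:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classifiers = 1000, 22
y_true = rng.integers(0, 2, n_samples)

# Correlated errors: all classifiers share one noise source, so their
# mistakes tend to coincide rather than cancel out in the vote.
shared_flip = rng.random(n_samples) < 0.3          # 30% of samples are "hard"
preds = np.empty((n_classifiers, n_samples), dtype=int)
for i in range(n_classifiers):
    own_flip = rng.random(n_samples) < 0.05        # small independent error
    flip = shared_flip | own_flip
    preds[i] = np.where(flip, 1 - y_true, y_true)

majority = (preds.mean(axis=0) > 0.5).astype(int)
acc = (majority == y_true).mean()
print(f"majority-vote accuracy with correlated errors: {acc:.3f}")
# Despite 22 voters, accuracy stays near the shared ~70% ceiling.
```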
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=consensus" title="consensus">consensus</a>, <a href="https://publications.waset.org/abstracts/search?q=curse%20of%20correlation" title=" curse of correlation"> curse of correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=imbalance%20classification" title=" imbalance classification"> imbalance classification</a>, <a href="https://publications.waset.org/abstracts/search?q=rank-based%20chain-mode%20ensemble" title=" rank-based chain-mode ensemble"> rank-based chain-mode ensemble</a> </p> <a href="https://publications.waset.org/abstracts/112891/rank-based-chain-mode-ensemble-for-binary-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112891.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8625</span> Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nishant%20Rodrigues">Nishant Rodrigues</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicole%20Spanedda"> Nicole Spanedda</a>, <a href="https://publications.waset.org/abstracts/search?q=Chilukuri%20K.%20Mohan"> Chilukuri K. Mohan</a>, <a href="https://publications.waset.org/abstracts/search?q=Arindam%20Chakraborty"> Arindam Chakraborty</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. 
This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge to the efficient implementation of quantum chemistry software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. The integrals were approximated using linear combinations of a small number of moments, and machine learning algorithms were applied to estimate the coefficients of the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, combined with an iterative feature masking approach for input vector compression that identifies a small subset of orbitals whose coefficients suffice for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network. 
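The median-rule fusion over a small ensemble of two-hidden-layer networks can be sketched with scikit-learn on synthetic regression data; the features and targets below are generic stand-ins, not the paper's moment features or integral coefficients:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 5))   # stand-in for moment-based features
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + 0.1 * np.sin(3 * X[:, 0])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small ensemble of two-hidden-layer networks; fuse with the median rule
nets = [
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=s).fit(X_tr, y_tr)
    for s in range(5)
]
fused = np.median([net.predict(X_te) for net in nets], axis=0)
mae = np.abs(fused - y_te).mean()
print(f"median-fused MAE: {mae:.4f}")
```

The median is robust to a single badly converged network, which is the usual motivation for median over mean fusion.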
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quantum%20energy%20calculations" title="quantum energy calculations">quantum energy calculations</a>, <a href="https://publications.waset.org/abstracts/search?q=atomic%20orbitals" title=" atomic orbitals"> atomic orbitals</a>, <a href="https://publications.waset.org/abstracts/search?q=electron-repulsion%20integrals" title=" electron-repulsion integrals"> electron-repulsion integrals</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning" title=" ensemble machine learning"> ensemble machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forests" title=" random forests"> random forests</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/167152/accelerating-quantum-chemistry-calculations-machine-learning-for-efficient-evaluation-of-electron-repulsion-integrals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167152.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">113</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8624</span> Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosa%20Tsegaye%20Aga">Rosa Tsegaye Aga</a>, <a href="https://publications.waset.org/abstracts/search?q=Xuan%20Jiang"> Xuan Jiang</a>, 
<a href="https://publications.waset.org/abstracts/search?q=Pavel%20Vazquez%20Faci"> Pavel Vazquez Faci</a>, <a href="https://publications.waset.org/abstracts/search?q=Siqing%20Liu"> Siqing Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Rayner"> Simon Rayner</a>, <a href="https://publications.waset.org/abstracts/search?q=Endalkachew%20Alemu"> Endalkachew Alemu</a>, <a href="https://publications.waset.org/abstracts/search?q=Markos%0D%0AAbebe"> Markos Abebe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB arises when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science approaches offer new ways to address the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes so that the right treatment can be given to the patient immediately. The study uses whole-genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data for the ML models. Samples from different countries were included to generalize over a large group of TB isolates from different regions of the world; this lets the model learn different behaviors of the TB bacteria and makes it robust. Model training considered three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered to train the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, the merge of F1 and F2. Additionally, an ensemble approach was used: the F1 and F2 datasets were run through the gradient boosting algorithm, the outputs were combined into a single dataset, called the F1F2 ensemble dataset, and a model was trained on it with each of the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates. 
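The two-stage ensemble described above (gradient boosting run separately on F1 and F2, with the merged outputs used to train a final model) resembles stacking; a hedged scikit-learn sketch follows, with random synthetic features standing in for the genomic variant data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the two feature sets (F1: candidate-gene variants,
# F2: predetermined resistance-associated variants)
X, y = make_classification(n_samples=600, n_features=40, random_state=3)
F1, F2 = X[:, :20], X[:, 20:]
F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(F1, F2, y, random_state=3)

# Stage 1: gradient boosting on each feature set separately
gb1 = GradientBoostingClassifier(random_state=3).fit(F1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=3).fit(F2_tr, y_tr)

# Stage 2: merge the stage-1 outputs into an "F1F2 ensemble dataset"
# and train a final model (RF here, echoing the RF + GB conclusion)
stack_tr = np.column_stack([gb1.predict_proba(F1_tr)[:, 1], gb2.predict_proba(F2_tr)[:, 1]])
stack_te = np.column_stack([gb1.predict_proba(F1_te)[:, 1], gb2.predict_proba(F2_te)[:, 1]])
final = RandomForestClassifier(random_state=3).fit(stack_tr, y_tr)
acc = final.score(stack_te, y_te)
print(f"ensemble accuracy: {acc:.3f}")
```

A production version would fit stage 2 on out-of-fold stage-1 predictions to avoid leakage; the in-sample version is kept here for brevity.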
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=MTB" title=" MTB"> MTB</a>, <a href="https://publications.waset.org/abstracts/search?q=WGS" title=" WGS"> WGS</a>, <a href="https://publications.waset.org/abstracts/search?q=drug%20resistant%20TB" title=" drug resistant TB"> drug resistant TB</a> </p> <a href="https://publications.waset.org/abstracts/181519/machine-learning-model-to-predict-tb-bacteria-resistant-drugs-from-tb-isolates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">52</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8623</span> Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rajvir%20Kaur">Rajvir Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Jeewani%20Anupama%20Ginige"> Jeewani Anupama Ginige</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is at the stage of its transition from clinician oriented to technology oriented. Many people around the world die of cancer because the diagnosis of disease was not done at an early stage. 
Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree), and Artificial Neural Networks (ANN). The evaluation is based on the standard metrics Precision (P), Recall (R), F1-score, and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset, whereas on the cervical cancer dataset the Ensemble (Bagged Tree) technique gave the best accuracy (93.1%) compared to the other classifiers. 
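Computing the four evaluation metrics for several classifiers, as done above, can be sketched with scikit-learn; the public breast-cancer dataset bundled with scikit-learn is a convenient stand-in, and the classifier settings are defaults rather than the paper's tuned models:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "bagged trees": BaggingClassifier(random_state=0),  # Ensemble (Bagged Tree)
}
results = {}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    results[name] = {
        "P": precision_score(y_te, y_pred),
        "R": recall_score(y_te, y_pred),
        "F1": f1_score(y_te, y_pred),
        "Acc": accuracy_score(y_te, y_pred),
    }
    print(name, results[name])
```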
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title="artificial neural networks">artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers" title=" classifiers"> classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=cervical%20cancer" title=" cervical cancer"> cervical cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=f-score" title=" f-score"> f-score</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=precision" title=" precision"> precision</a>, <a href="https://publications.waset.org/abstracts/search?q=recall" title=" recall"> recall</a> </p> <a href="https://publications.waset.org/abstracts/92517/comparative-evaluation-of-accuracy-of-selected-machine-learning-classification-techniques-for-diagnosis-of-cancer-a-data-mining-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92517.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">277</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8622</span> A Dynamic Ensemble Learning Approach for Online Anomaly Detection in Alibaba Datacenters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wanyi%20Zhu">Wanyi Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xia%20Ming"> Xia Ming</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Huafeng%20Wang"> Huafeng Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Junda%20Chen"> Junda Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Lu%20Liu"> Lu Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiangwei%20Jiang"> Jiangwei Jiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Guohua%20Liu"> Guohua Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Anomaly detection is an imperative first step for responding to unexpected problems and for assuring high performance and security in large data center management. This paper presents an online anomaly detection system built on an innovative combination of ensemble machine learning and adaptive differentiation algorithms, applied to performance data collected from a continuous monitoring system for multi-tier web applications running in Alibaba data centers. We evaluate the effectiveness and efficiency of this algorithm on production traffic data and compare it with traditional anomaly detection approaches such as static thresholds and other deviation-based detection techniques. The experimental results show that our algorithm correctly identifies the unexpected performance variances of any running application, with an acceptable false positive rate. The proposed approach has already been deployed in real-time production environments to enhance the efficiency and stability of daily data center operations. 
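A deviation-based detector of the kind compared above can be sketched as a rolling z-score over a sliding window (a generic illustration, not Alibaba's algorithm); unlike a static threshold, it tolerates the slow drift in the synthetic latency series below while still flagging the injected spikes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic latency series with a slow drift plus two injected spikes
t = np.arange(2000)
series = 100 + 0.02 * t + rng.normal(0, 2, t.size)
series[700] += 40
series[1500] += 40

window = 100
anomalies = []
for i in range(window, t.size):
    hist = series[i - window:i]
    z = (series[i] - hist.mean()) / (hist.std() + 1e-9)  # deviation score
    if abs(z) > 5:
        anomalies.append(i)

# A static threshold (e.g. series > 130) would misfire on the drifting
# tail of the series; the rolling score adapts to the local baseline.
print("anomalies at:", anomalies)
```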
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alibaba%20data%20centers" title="Alibaba data centers">Alibaba data centers</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20data%20computation" title=" big data computation"> big data computation</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20ensemble%20learning" title=" dynamic ensemble learning"> dynamic ensemble learning</a> </p> <a href="https://publications.waset.org/abstracts/86171/a-dynamic-ensemble-learning-approach-for-online-anomaly-detection-in-alibaba-datacenters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8621</span> Predicting Machine-Down of Woodworking Industrial Machines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Matteo%20Calabrese">Matteo Calabrese</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Cimmino"> Martin Cimmino</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimos%20Kapetis"> Dimos Kapetis</a>, <a href="https://publications.waset.org/abstracts/search?q=Martina%20Manfrin"> Martina Manfrin</a>, <a href="https://publications.waset.org/abstracts/search?q=Donato%20Concilio"> Donato Concilio</a>, <a href="https://publications.waset.org/abstracts/search?q=Giuseppe%20Toscano"> Giuseppe Toscano</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Giovanni%20Ciandrini"> Giovanni Ciandrini</a>, <a href="https://publications.waset.org/abstracts/search?q=Giancarlo%20Paccapeli"> Giancarlo Paccapeli</a>, <a href="https://publications.waset.org/abstracts/search?q=Gianluca%20Giarratana"> Gianluca Giarratana</a>, <a href="https://publications.waset.org/abstracts/search?q=Marco%20Siciliano"> Marco Siciliano</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrea%20Forlani"> Andrea Forlani</a>, <a href="https://publications.waset.org/abstracts/search?q=Alberto%20Carrotta"> Alberto Carrotta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we describe a machine learning methodology for Predictive Maintenance (PdM) applied to woodworking industrial machines. PdM is a prominent strategy comprising the operational techniques and actions required to ensure machine availability and to prevent a machine-down failure. One of the challenges of the PdM approach is the design and development of an embedded smart system to monitor the health status of the machine. The proposed approach allows screening multiple connected machines simultaneously, providing real-time monitoring that can be integrated with maintenance management. This is achieved by applying temporal feature engineering techniques and training an ensemble of classification algorithms to predict the Remaining Useful Lifetime of woodworking machines. The effectiveness of the methodology is demonstrated by testing it on an independent sample of additional woodworking machines that did not present a machine-down event. 
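The pipeline sketched in the abstract — temporal feature engineering over sensor windows, then an ensemble of classifiers flagging machines approaching a machine-down event — might look roughly like this with scikit-learn. The window features, the synthetic vibration signals, and the model choices are assumptions of this sketch, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

def window_features(signal, w=10):
    # Temporal feature engineering: summarize each sliding window by
    # its mean, standard deviation, and linear trend (slope).
    rows = []
    for i in range(len(signal) - w):
        win = signal[i:i + w]
        slope = np.polyfit(np.arange(w), win, 1)[0]
        rows.append([win.mean(), win.std(), slope])
    return np.array(rows)

rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, 300)                         # stable sensor level
degrading = 1.0 + np.linspace(0, 1, 300) + rng.normal(0, 0.05, 300)

X = np.vstack([window_features(healthy), window_features(degrading)])
y = np.array([0] * 290 + [1] * 290)                          # 1 = heading to machine-down

# A small soft-voting ensemble stands in for the paper's classifier ensemble.
clf = VotingClassifier(
    [("lr", LogisticRegression(max_iter=1000)),
     ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

A real deployment would hold out entire machines for evaluation rather than scoring the training windows, as the paper does with its independent sample.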
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=predictive%20maintenance" title="predictive maintenance">predictive maintenance</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=connected%20machines" title=" connected machines"> connected machines</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a> </p> <a href="https://publications.waset.org/abstracts/98461/predicting-machine-down-of-woodworking-industrial-machines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">226</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8620</span> Comparison Study of Machine Learning Classifiers for Speech Emotion Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishwarya%20Ravindra%20Fursule">Aishwarya Ravindra Fursule</a>, <a href="https://publications.waset.org/abstracts/search?q=Shruti%20Kshirsagar"> Shruti Kshirsagar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> At the intersection of artificial intelligence and human-centered computing, this paper delves into speech emotion recognition (SER). It presents a comparative analysis of machine learning models such as K-Nearest Neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, ensemble classifiers, and random forests, applied to SER. 
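A minimal harness for this kind of classifier comparison might look as follows. Real SER work would extract MFCC/ZCR/chroma features from audio; this sketch substitutes a synthetic multi-class dataset so it stays self-contained and runnable.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Stand-in features: a synthetic 8-class problem (one class per emotion)
# replaces the acoustic feature vectors used in the paper.
X, y = make_classification(n_samples=800, n_features=40, n_informative=20,
                           n_classes=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

results = {}
for name, model in [("KNN", KNeighborsClassifier()),
                    ("SVM", SVC()),
                    ("RandomForest", RandomForestClassifier(random_state=0))]:
    results[name] = model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: {results[name]:.2f}")
```

Ranking held-out accuracy across such a loop is the same comparison pattern the paper applies to its four speech corpora.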
The research employs four datasets: CREMA-D, SAVEE, TESS, and RAVDESS. It focuses on extracting salient audio signal features such as Zero Crossing Rate (ZCR), Chroma_stft, Mel Frequency Cepstral Coefficients (MFCC), root mean square (RMS) value, and Mel spectrogram. These features are used to train and evaluate the models’ ability to recognize eight types of emotions from speech: happy, sad, neutral, angry, calm, disgust, fear, and surprise. Among the models, the Random Forest algorithm demonstrated superior performance, achieving approximately 79% accuracy. This suggests its suitability for SER within the parameters of this study. The research contributes to SER by showcasing the effectiveness of various machine learning algorithms and feature extraction techniques. The findings hold promise for the development of more precise emotion recognition systems in the future. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=comparison" title="comparison">comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=ML%20classifiers" title=" ML classifiers"> ML classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=KNN" title=" KNN"> KNN</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20tree" title=" decision tree"> decision tree</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=logistic%20regression" title=" logistic regression"> logistic regression</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20classifiers" title=" ensemble classifiers"> ensemble classifiers</a> </p> <a 
href="https://publications.waset.org/abstracts/185323/comparison-study-of-machine-learning-classifiers-for-speech-emotion-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">45</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8619</span> Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atbin%20Mahabbati">Atbin Mahabbati</a>, <a href="https://publications.waset.org/abstracts/search?q=Jason%20Beringer"> Jason Beringer</a>, <a href="https://publications.waset.org/abstracts/search?q=Matthias%20Leopold"> Matthias Leopold</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To address the global challenges of climate and environmental changes, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET), and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address the demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. 
To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux records, avoiding the limitations of a single algorithm and thereby reducing the error and uncertainty associated with the gap-filling process. In this study, 2013 data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) were used to develop an ensemble machine learning model that combines five feedforward neural networks (FFNNs) with different structures and an eXtreme Gradient Boosting (XGB) algorithm. The FFNNs provided the primary estimations in the first layer, while XGB used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best single FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (around midday and sunrise) as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba was more sensitive than Calperum, where the differences between the ensemble model and the individual FFNNs and XGB were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), because higher plant photosynthesis leads to a larger range of CO₂ exchange. 
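The two-layer stacking described above — several differently structured feedforward networks feeding a gradient-boosting meta-learner — can be sketched with scikit-learn. Plain `GradientBoostingRegressor` stands in for XGBoost here, and the synthetic "flux" data, driver variables, and network sizes are assumptions of the sketch, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Toy flux drivers (e.g. radiation, temperature) and a synthetic CO2 flux.
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)

# Layer 1: several small feedforward nets with different structures.
nets = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=0)
        for h in [(8,), (16,), (8, 8)]]
layer1 = np.column_stack([net.fit(X, y).predict(X) for net in nets])

# Layer 2: a gradient-boosting model consumes the first-layer estimates
# to produce the final gap-filled value (XGBoost in the paper; plain
# scikit-learn gradient boosting stands in here).
stack = GradientBoostingRegressor(random_state=0).fit(layer1, y)
rmse = np.sqrt(np.mean((stack.predict(layer1) - y) ** 2))
print(f"stacked training RMSE: {rmse:.3f}")
```

In practice the meta-learner would be fit on out-of-fold first-layer predictions to avoid leaking training data between the two layers.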
In conclusion, the introduced ensemble model slightly improved the accuracy and robustness of CO₂ flux gap-filling. Ensemble machine learning models can therefore potentially improve data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=carbon%20flux" title="carbon flux">carbon flux</a>, <a href="https://publications.waset.org/abstracts/search?q=Eddy%20covariance" title=" Eddy covariance"> Eddy covariance</a>, <a href="https://publications.waset.org/abstracts/search?q=extreme%20gradient%20boosting" title=" extreme gradient boosting"> extreme gradient boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=gap-filling%20comparison" title=" gap-filling comparison"> gap-filling comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20model" title=" hybrid model"> hybrid model</a>, <a href="https://publications.waset.org/abstracts/search?q=OzFlux%20network" title=" OzFlux network"> OzFlux network</a> </p> <a href="https://publications.waset.org/abstracts/122496/ensemble-machine-learning-approach-for-estimating-missing-data-from-co2-time-series" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8618</span> Modern Machine Learning Conniptions for Automatic Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Jagadeesh%20Kumar">S. 
Jagadeesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a clear overview of recent machine learning practices as employed in modern and prospective automatic speech recognition schemes. The aim is to promote further cross-fertilization between the machine learning and automatic speech recognition communities than has occurred in the past. The manuscript is structured according to the principal machine learning paradigms that are either already established or have the potential to make significant contributions to automatic speech recognition technology. The paradigms presented and discussed in this article include adaptive and multi-task learning, active learning, Bayesian learning, discriminative learning, generative learning, and supervised and unsupervised learning. These learning paradigms are motivated and discussed in the context of automatic speech recognition tools and applications. The manuscript also surveys recent advances in deep learning and learning with sparse representations, with particular emphasis on their continuing significance in the evolution of automatic speech recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20methods" title=" deep learning methods"> deep learning methods</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20archetypes" title=" machine learning archetypes"> machine learning archetypes</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20learning" title=" Bayesian learning"> Bayesian learning</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20and%20unsupervised%20learning" title=" supervised and unsupervised learning"> supervised and unsupervised learning</a> </p> <a href="https://publications.waset.org/abstracts/71467/modern-machine-learning-conniptions-for-automatic-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8617</span> Design of an Ensemble Learning Behavior Anomaly Detection Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdoulaye%20Diop">Abdoulaye Diop</a>, <a href="https://publications.waset.org/abstracts/search?q=Nahid%20Emad"> Nahid Emad</a>, <a href="https://publications.waset.org/abstracts/search?q=Thierry%20Winter"> Thierry Winter</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Hilia"> Mohamed Hilia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data assets 
protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to safeguard their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts: they are mainly individuals with legitimate access to company information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threat of malicious user activity effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context, and we present test results for several detection methods on a representative access control dataset. Some of the explored classifiers achieve up to 99% accuracy. 
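One way to picture the combination of classification-style and graph-based signals is the toy scorer below. The access log, the rarity and volume signals, and the equal weighting are all invented for illustration and are far simpler than the framework the abstract describes.

```python
import numpy as np
from collections import Counter

# Toy access log: (user, resource) events; the last event is the outlier.
events = [("u1", "crm"), ("u1", "mail"), ("u2", "crm"), ("u2", "mail"),
          ("u3", "crm"), ("u3", "mail"), ("u3", "payroll_db")]

# Graph-style signal: rarity of the resource node in the bipartite
# user-resource access graph.
res_counts = Counter(r for _, r in events)
def rarity(resource):
    return 1.0 - res_counts[resource] / len(events)

# Volume signal: z-score of the user's event count versus the population.
user_counts = Counter(u for u, _ in events)
counts = np.array(list(user_counts.values()), dtype=float)
def volume_z(user):
    return abs(user_counts[user] - counts.mean()) / (counts.std() + 1e-9)

# Ensemble: average the two normalized signals into one anomaly score.
def anomaly_score(user, resource):
    return 0.5 * rarity(resource) + 0.5 * min(volume_z(user) / 3.0, 1.0)

print(anomaly_score("u3", "payroll_db"))   # insider touching a rare resource
print(anomaly_score("u1", "crm"))          # routine access scores lower
```

The point of combining heterogeneous signals, as in the paper's ensemble, is that neither the graph view nor the volume view alone separates the insider event this cleanly.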
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cybersecurity" title="cybersecurity">cybersecurity</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20protection" title=" data protection"> data protection</a>, <a href="https://publications.waset.org/abstracts/search?q=access%20control" title=" access control"> access control</a>, <a href="https://publications.waset.org/abstracts/search?q=insider%20threat" title=" insider threat"> insider threat</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20behavior%20analysis" title=" user behavior analysis"> user behavior analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance%20computing" title=" high performance computing"> high performance computing</a> </p> <a href="https://publications.waset.org/abstracts/109933/design-of-an-ensemble-learning-behavior-anomaly-detection-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109933.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8616</span> Scalable Learning of Tree-Based Models on Sparsely Representable Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fares%20Hedayatit">Fares Hedayatit</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnauld%20Joly"> Arnauld Joly</a>, <a href="https://publications.waset.org/abstracts/search?q=Panagiotis%20Papadimitriou"> Panagiotis Papadimitriou</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Many machine learning tasks such as text annotation usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude and it has been incorporated in the current version of scikit-learn, the most popular open source Python machine learning library. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=sparsely%20representable%20data" title=" sparsely representable data"> sparsely representable data</a>, <a href="https://publications.waset.org/abstracts/search?q=tree-based%20models" title=" tree-based models"> tree-based models</a>, <a href="https://publications.waset.org/abstracts/search?q=scalable%20learning" title=" scalable learning"> scalable learning</a> </p> <a href="https://publications.waset.org/abstracts/52853/scalable-learning-of-tree-based-models-on-sparsely-representable-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52853.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">263</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8615</span> Tongue Image Retrieval Based Using Machine Learning</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20FAROOQ">Ahmad FAROOQ</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinfeng%20Zhang"> Xinfeng Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahad%20Sabah"> Fahad Sabah</a>, <a href="https://publications.waset.org/abstracts/search?q=Raheem%20Sarwar"> Raheem Sarwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Traditional Chinese Medicine (TCM), tongue diagnosis is a vital inspection tool. In this study, we explore the potential of machine learning in tongue diagnosis. It begins with the cataloguing of the various classifications and characteristics of the human tongue. We infer 24 kinds of tongues from the material and coating of the tongue, and we identify 21 attributes of the tongue. The next step is to apply machine learning methods to the tongue dataset. We use the Weka machine learning platform to conduct the experiment for performance analysis. The 457 instances of the tongue dataset are used to test the performance of five different machine learning methods, including SVM, Random Forests, Decision Trees, and Naive Bayes. Based on accuracy and Area Under the ROC Curve (AUC), the Support Vector Machine algorithm was shown to be the most effective for tongue diagnosis. 
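A comparable experiment in Python (the paper used Weka) might rank classifiers by cross-validated AUC as below. The synthetic dataset merely mirrors the paper's scale of 457 instances and 21 attributes; the binary label is an assumption of the sketch (e.g. one tongue type versus the rest).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# Mirror the paper's scale: 457 instances, 21 attributes, binary target.
X, y = make_classification(n_samples=457, n_features=21, random_state=0)

aucs = {}
for name, model in [("SVM", SVC()), ("NaiveBayes", GaussianNB())]:
    # 5-fold cross-validated Area Under the ROC Curve per classifier.
    aucs[name] = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name} AUC: {aucs[name]:.3f}")
```

Reporting both accuracy and AUC, as the paper does, guards against a classifier that looks strong only because of class imbalance.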
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20imaging" title="medical imaging">medical imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=tongue" title=" tongue"> tongue</a> </p> <a href="https://publications.waset.org/abstracts/176849/tongue-image-retrieval-based-using-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8614</span> Ensemble of Deep CNN Architecture for Classifying the Source and Quality of Teff Cereal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belayneh%20Matebie">Belayneh Matebie</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Melese"> Michael Melese</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study focuses on addressing the challenges in classifying and ensuring the quality of Eragrostis Teff, a small and round grain that is the smallest cereal grain. Employing a traditional classification method is challenging because of its small size and the similarity of its environmental characteristics. To overcome this, this study employs a machine learning approach to develop a source and quality classification system for Teff cereal. 
Data is collected from various production areas in the Amhara regions, considering two types of cereal (high and low quality) across eight classes. A total of 5,920 images are collected, with 740 images for each class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, are applied to preprocess the data. Convolutional Neural Network (CNN) is then used to extract relevant features and reduce dimensionality. The dataset is split into 80% for training and 20% for testing. Different classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, are employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. The ensemble of FVGG16, FINCV3, and QSCTC using the Max-Voting approach outperforms individual algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teff" title="Teff">Teff</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=max-voting" title=" max-voting"> max-voting</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=RF" title=" RF"> RF</a> </p> <a href="https://publications.waset.org/abstracts/186043/ensemble-of-deep-cnn-architecture-for-classifying-the-source-and-quality-of-teff-cereal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186043.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">53</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">8613</span> Optimizing E-commerce Retention: A Detailed Study of Machine Learning Techniques for Churn Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saurabh%20Kumar">Saurabh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the fiercely competitive landscape of e-commerce, understanding and mitigating customer churn has become paramount for sustainable business growth. This paper presents a thorough investigation into the application of machine learning techniques for churn prediction in e-commerce, aiming to provide actionable insights for businesses seeking to enhance customer retention strategies. We conduct a comparative study of various machine learning algorithms, including traditional statistical methods and ensemble techniques, leveraging a rich dataset sourced from Kaggle. Through rigorous evaluation, we assess the predictive performance, interpretability, and scalability of each method, elucidating their respective strengths and limitations in capturing the intricate dynamics of customer churn. We identified the XGBoost classifier as the best-performing model. Our findings not only offer practical guidelines for selecting suitable modeling approaches but also contribute to the broader understanding of customer behavior in the e-commerce domain. Ultimately, this research equips businesses with the knowledge and tools necessary to proactively identify and address churn, thereby fostering long-term customer relationships and sustaining competitive advantage. 
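A churn-prediction baseline in this spirit can be sketched as follows. scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, and the three churn drivers and their coefficients are invented for the example rather than taken from the Kaggle dataset the paper uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical churn drivers: tenure (months), monthly orders, support tickets.
X = np.column_stack([rng.integers(1, 60, n), rng.poisson(3, n), rng.poisson(1, n)])
# Longer tenure and more orders reduce churn; support tickets increase it.
logit = -0.05 * X[:, 0] - 0.4 * X[:, 1] + 0.8 * X[:, 2] + 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
print(f"test AUC: {auc:.3f}")
```

Evaluating on a held-out split with AUC, rather than accuracy alone, matters for churn because the churned class is usually the minority.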
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=customer%20churn" title="customer churn">customer churn</a>, <a href="https://publications.waset.org/abstracts/search?q=e-commerce" title=" e-commerce"> e-commerce</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20techniques" title=" machine learning techniques"> machine learning techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=predictive%20performance" title=" predictive performance"> predictive performance</a>, <a href="https://publications.waset.org/abstracts/search?q=sustainable%20business%20growth" title=" sustainable business growth"> sustainable business growth</a> </p> <a href="https://publications.waset.org/abstracts/189791/optimizing-e-commerce-retention-a-detailed-study-of-machine-learning-techniques-for-churn-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189791.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8612</span> Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arian%20Hosseini">Arian Hosseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmudul%20Hasan"> Mahmudul Hasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. 
Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy. 
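The "think small, think many" idea above — a cheap screen followed by a verification step, with most inputs exiting early — can be illustrated with a toy cascade. The colour and motion features and the thresholds are invented stand-ins for the paper's lightweight models, not its actual pipeline.

```python
# Stand-ins for two small models: a cheap colour-feature screen and a
# slightly costlier verifier; each returns a violence/explosion score in [0, 1].
def cheap_screen(frame):        # e.g. a narrowed-down colour-histogram score
    return frame["orange_ratio"]

def verifier(frame):            # e.g. a small model run only on flagged frames
    return 0.5 * frame["orange_ratio"] + 0.5 * frame["motion"]

def cascade(frames, t1=0.5, t2=0.6):
    flagged = []
    for f in frames:
        if cheap_screen(f) < t1:    # step 1: most frames exit early (cheap)
            continue
        if verifier(f) >= t2:       # step 2: verification before alerting
            flagged.append(f["id"])
    return flagged

frames = [{"id": 0, "orange_ratio": 0.1, "motion": 0.2},   # benign, filtered early
          {"id": 1, "orange_ratio": 0.7, "motion": 0.3},   # screen fires, verifier rejects
          {"id": 2, "orange_ratio": 0.8, "motion": 0.9}]   # flagged by both
print(cascade(frames))  # → [2]
```

The average cost stays close to the cheap screen's cost because the verifier only runs on the small fraction of frames the screen flags.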
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20classification" title="deep classification">deep classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20moderation" title=" content moderation"> content moderation</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=explosion%20detection" title=" explosion detection"> explosion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/183644/faster-lighter-more-accurate-a-deep-learning-ensemble-for-content-moderation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8611</span> Optimize Data Evaluation Metrics for Fraud Detection Using Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jennifer%20Leach">Jennifer Leach</a>, <a href="https://publications.waset.org/abstracts/search?q=Umashanger%20Thayasivam"> Umashanger Thayasivam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, though, as society’s knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate people. This has led to a simultaneous advancement in the world of fraud. 
Machine learning techniques offer a possible solution to help counter this trend. This research explores how the use of various machine learning techniques can aid in detecting fraudulent activity across two different types of fraudulent data, and the accuracy, precision, recall, and F1 were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which testing split and technique would lead to optimal results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20science" title="data science">data science</a>, <a href="https://publications.waset.org/abstracts/search?q=fraud%20detection" title=" fraud detection"> fraud detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20learning" title=" supervised learning"> supervised learning</a> </p> <a href="https://publications.waset.org/abstracts/149142/optimize-data-evaluation-metrics-for-fraud-detection-using-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8610</span> A Comparative Analysis of Machine Learning Techniques for PM10 Forecasting in Vilnius</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mina%20Adel%20Shokry%20Fahim">Mina Adel Shokry Fahim</a>, <a 
href="https://publications.waset.org/abstracts/search?q=J%C5%ABrat%C4%97%20Su%C5%BEiedelyt%C4%97%20Visockien%C4%97"> Jūratė Sužiedelytė Visockienė</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the growing concern over air pollution (AP), it is clear that this has gained more prominence than ever before. The level of consciousness has increased and a sense of knowledge now has to be forwarded as a duty by those enlightened enough to disseminate it to others. This realisation often comes after an understanding of how poor air quality indices (AQI) damage human health. The study focuses on assessing air pollution prediction models specifically for Lithuania, addressing a substantial need for empirical research within the region. Concentrating on Vilnius, it specifically examines particulate matter concentrations 10 micrometers or less in diameter (PM10). Utilizing Gaussian Process Regression (GPR) and Regression Tree Ensemble, and Regression Tree methodologies, predictive forecasting models are validated and tested using hourly data from January 2020 to December 2022. The study explores the classification of AP data into anthropogenic and natural sources, the impact of AP on human health, and its connection to cardiovascular diseases. The study revealed varying levels of accuracy among the models, with GPR achieving the highest accuracy, indicated by an RMSE of 4.14 in validation and 3.89 in testing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=air%20pollution" title="air pollution">air pollution</a>, <a href="https://publications.waset.org/abstracts/search?q=anthropogenic%20and%20natural%20sources" title=" anthropogenic and natural sources"> anthropogenic and natural sources</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20process%20regression" title=" Gaussian process regression"> Gaussian process regression</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20ensemble" title=" tree ensemble"> tree ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=forecasting%20models" title=" forecasting models"> forecasting models</a>, <a href="https://publications.waset.org/abstracts/search?q=particulate%20matter" title=" particulate matter"> particulate matter</a> </p> <a href="https://publications.waset.org/abstracts/183947/a-comparative-analysis-of-machine-learning-techniques-for-pm10-forecasting-in-vilnius" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183947.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">53</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8609</span> Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaira%20L.%20Kee">Shaira L. Kee</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Aaron%20G.%20Sy"> Michael Aaron G. 
Sy</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20%20Joshua%20%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20AlDahoul"> Nouar AlDahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase detection accuracy and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms outperformed that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improved performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), Residual Network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma" title="deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma">deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma</a> </p> <a href="https://publications.waset.org/abstracts/162765/melanoma-and-non-melanoma-skin-lesion-classification-using-a-deep-learning-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=287">287</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=288">288</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=ensemble%20machine%20learning&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a 
href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div 
class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
