
Search results for: clustering algorithm

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="clustering algorithm"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3997</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: clustering algorithm</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3997</span> ACOPIN: An ACO Algorithm with TSP Approach for Clustering Proteins in Protein Interaction Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jamaludin%20Sallim">Jamaludin Sallim</a>, <a href="https://publications.waset.org/abstracts/search?q=Rozlina%20Mohamed"> Rozlina Mohamed</a>, <a href="https://publications.waset.org/abstracts/search?q=Roslina%20Abdul%20Hamid"> Roslina Abdul Hamid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we proposed an Ant Colony Optimization (ACO) algorithm together with Traveling Salesman Problem (TSP) approach to investigate the clustering problem in Protein Interaction Networks (PIN). We named this combination as ACOPIN. The purpose of this work is two-fold. First, to test the efficacy of ACO in clustering PIN and second, to propose the simple generalization of the ACO algorithm that might allow its application in clustering proteins in PIN. We split this paper to three main sections. First, we describe the PIN and clustering proteins in PIN. Second, we discuss the steps involved in each phase of ACO algorithm. Finally, we present some results of the investigation with the clustering patterns. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ant%20colony%20optimization%20algorithm" title="ant colony optimization algorithm">ant colony optimization algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=searching%20algorithm" title=" searching algorithm"> searching algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=protein%20functional%20module" title=" protein functional module"> protein functional module</a>, <a href="https://publications.waset.org/abstracts/search?q=protein%20interaction%20network" title=" protein interaction network "> protein interaction network </a> </p> <a href="https://publications.waset.org/abstracts/22367/acopin-an-aco-algorithm-with-tsp-approach-for-clustering-proteins-in-protein-interaction-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">611</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3996</span> Flowing Online Vehicle GPS Data Clustering Using a New Parallel K-Means Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Orhun%20Vural">Orhun Vural</a>, <a href="https://publications.waset.org/abstracts/search?q=Oguz%20%20Bayat"> Oguz Bayat</a>, <a href="https://publications.waset.org/abstracts/search?q=Rustu%20Akay"> Rustu Akay</a>, <a href="https://publications.waset.org/abstracts/search?q=Osman%20N.%20Ucan"> Osman N. Ucan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents a new parallel approach clustering of GPS data. Evaluation has been made by comparing execution time of various clustering algorithms on GPS data. This paper aims to propose a parallel based on neighborhood K-means algorithm to make it faster. The proposed parallelization approach assumes that each GPS data represents a vehicle and to communicate between vehicles close to each other after vehicles are clustered. This parallelization approach has been examined on different sized continuously changing GPS data and compared with serial K-means algorithm and other serial clustering algorithms. The results demonstrated that proposed parallel K-means algorithm has been shown to work much faster than other clustering algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=parallel%20k-means%20algorithm" title="parallel k-means algorithm">parallel k-means algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=parallel%20clustering" title=" parallel clustering"> parallel clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering%20algorithms" title=" clustering algorithms"> clustering algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering%20on%20flowing%20data" title=" clustering on flowing data"> clustering on flowing data</a> </p> <a href="https://publications.waset.org/abstracts/86622/flowing-online-vehicle-gps-data-clustering-using-a-new-parallel-k-means-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3995</span> Improved K-Means Clustering Algorithm Using RHadoop with Combiner</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ji%20Eun%20Shin">Ji Eun Shin</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Hoon%20Lim"> Dong Hoon Lim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data clustering is a common technique used in data analysis and is used in many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm aiming to cluster a set of data points to a predefined number of clusters. In this paper, we implement K-means algorithm based on MapReduce framework with RHadoop to make the clustering method applicable to large scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of our map output to decrease the amount of data needed to be processed by reducers. The experimental results demonstrated that K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also showed that our K-means algorithm using RHadoop with combiner was faster than regular algorithm without combiner as the size of data set increases. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=combiner" title=" combiner"> combiner</a>, <a href="https://publications.waset.org/abstracts/search?q=K-means%20clustering" title=" K-means clustering"> K-means clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=RHadoop" title=" RHadoop"> RHadoop</a> </p> <a href="https://publications.waset.org/abstracts/41570/improved-k-means-clustering-algorithm-using-rhadoop-with-combiner" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41570.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">438</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3994</span> Chemical Reaction Algorithm for Expectation Maximization Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Ni">Li Ni</a>, <a href="https://publications.waset.org/abstracts/search?q=Pen%20ManMan"> Pen ManMan</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20KenLi"> Li KenLi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clustering is an intensive research for some years because of its multifaceted applications, such as biology, information retrieval, medicine, business and so on. The expectation maximization (EM) is a kind of algorithm framework in clustering methods, one of the ten algorithms of machine learning. Traditionally, optimization of objective function has been the standard approach in EM. Hence, research has investigated the utility of evolutionary computing and related techniques in the regard. Chemical Reaction Optimization (CRO) is a recently established method. So the property embedded in CRO is used to solve optimization problems. This paper presents an algorithm framework (EM-CRO) with modified CRO operators based on EM cluster problems. The hybrid algorithm is mainly to solve the problem of initial value sensitivity of the objective function optimization clustering algorithm. Our experiments mainly take the EM classic algorithm:k-means and fuzzy k-means as an example, through the CRO algorithm to optimize its initial value, get K-means-CRO and FKM-CRO algorithm. The experimental results of them show that there is improved efficiency for solving objective function optimization clustering problems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chemical%20reaction%20optimization" title="chemical reaction optimization">chemical reaction optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=expection%20maimization" title=" expection maimization"> expection maimization</a>, <a href="https://publications.waset.org/abstracts/search?q=initia" title=" initia"> initia</a>, <a href="https://publications.waset.org/abstracts/search?q=objective%20function%20clustering" title=" objective function clustering"> objective function clustering</a> </p> <a href="https://publications.waset.org/abstracts/54706/chemical-reaction-algorithm-for-expectation-maximization-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">713</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3993</span> Multimodal Optimization of Density-Based Clustering Using Collective Animal Behavior Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kristian%20Bautista">Kristian Bautista</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruben%20A.%20Idoy"> Ruben A. Idoy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A bio-inspired metaheuristic algorithm inspired by the theory of collective animal behavior (CAB) was integrated to density-based clustering modeled as multimodal optimization problem. The algorithm was tested on synthetic, Iris, Glass, Pima and Thyroid data sets in order to measure its effectiveness relative to CDE-based Clustering algorithm. Upon preliminary testing, it was found out that one of the parameter settings used was ineffective in performing clustering when applied to the algorithm prompting the researcher to do an investigation. It was revealed that fine tuning distance δ3 that determines the extent to which a given data point will be clustered helped improve the quality of cluster output. Even though the modification of distance δ3 significantly improved the solution quality and cluster output of the algorithm, results suggest that there is no difference between the population mean of the solutions obtained using the original and modified parameter setting for all data sets. This implies that using either the original or modified parameter setting will not have any effect towards obtaining the best global and local animal positions. Results also suggest that CDE-based clustering algorithm is better than CAB-density clustering algorithm for all data sets. Nevertheless, CAB-density clustering algorithm is still a good clustering algorithm because it has correctly identified the number of classes of some data sets more frequently in a thirty trial run with a much smaller standard deviation, a potential in clustering high dimensional data sets. Thus, the researcher recommends further investigation in the post-processing stage of the algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=metaheuristics" title=" metaheuristics"> metaheuristics</a>, <a href="https://publications.waset.org/abstracts/search?q=collective%20animal%20behavior%20algorithm" title=" collective animal behavior algorithm"> collective animal behavior algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=density-based%20%20clustering" title=" density-based clustering"> density-based clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20optimization" title=" multimodal optimization"> multimodal optimization</a> </p> <a href="https://publications.waset.org/abstracts/94254/multimodal-optimization-of-density-based-clustering-using-collective-animal-behavior-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3992</span> A Fuzzy Kernel K-Medoids Algorithm for Clustering Uncertain Data Objects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Behnam%20Tavakkol">Behnam Tavakkol</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Uncertain data mining algorithms use different ways to consider uncertainty in data such as by representing a data object as a sample of points or a probability distribution. Fuzzy methods have long been used for clustering traditional (certain) data objects. They are used to produce non-crisp cluster labels. For uncertain data, however, besides some uncertain fuzzy k-medoids algorithms, not many other fuzzy clustering methods have been developed. In this work, we develop a fuzzy kernel k-medoids algorithm for clustering uncertain data objects. The developed fuzzy kernel k-medoids algorithm is superior to existing fuzzy k-medoids algorithms in clustering data sets with non-linearly separable clusters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm" title="clustering algorithm">clustering algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20methods" title=" fuzzy methods"> fuzzy methods</a>, <a href="https://publications.waset.org/abstracts/search?q=kernel%20k-medoids" title=" kernel k-medoids"> kernel k-medoids</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertain%20data" title=" uncertain data"> uncertain data</a> </p> <a href="https://publications.waset.org/abstracts/123501/a-fuzzy-kernel-k-medoids-algorithm-for-clustering-uncertain-data-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123501.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3991</span> 3D Mesh Coarsening via Uniform Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuhua%20Lai">Shuhua Lai</a>, <a href="https://publications.waset.org/abstracts/search?q=Kairui%20Chen"> Kairui Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a fast and efficient mesh coarsening algorithm for 3D triangular meshes. Theis approach can be applied to very complex 3D meshes of arbitrary topology and with millions of vertices. The algorithm is based on the clustering of the input mesh elements, which divides the faces of an input mesh into a given number of clusters for clustering purpose by approximating the Centroidal Voronoi Tessellation of the input mesh. Once a clustering is achieved, it provides us an efficient way to construct uniform tessellations, and therefore leads to good coarsening of polygonal meshes. With proliferation of 3D scanners, this coarsening algorithm is particularly useful for reverse engineering applications of 3D models, which in many cases are dense, non-uniform, irregular and arbitrary topology. Examples demonstrating effectiveness of the new algorithm are also included in the paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coarsening" title="coarsening">coarsening</a>, <a href="https://publications.waset.org/abstracts/search?q=mesh%20clustering" title=" mesh clustering"> mesh clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=shape%20approximation" title=" shape approximation"> shape approximation</a>, <a href="https://publications.waset.org/abstracts/search?q=mesh%20simplification" title=" mesh simplification"> mesh simplification</a> </p> <a href="https://publications.waset.org/abstracts/48919/3d-mesh-coarsening-via-uniform-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48919.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3990</span> An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jeugert%20Kujtila">Jeugert Kujtila</a>, <a href="https://publications.waset.org/abstracts/search?q=Kristi%20Hoxhalli"> Kristi Hoxhalli</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramazan%20Dalipi"> Ramazan Dalipi</a>, <a href="https://publications.waset.org/abstracts/search?q=Erjon%20Cota"> Erjon Cota</a>, <a href="https://publications.waset.org/abstracts/search?q=Ardit%20Murati"> Ardit Murati</a>, <a href="https://publications.waset.org/abstracts/search?q=Erind%20Bedalli"> Erind Bedalli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clustering is a versatile instrument in the analysis of collections of data providing insights of the underlying structures of the dataset and enhancing the modeling capabilities. The fuzzy approach to the clustering problem increases the flexibility involving the concept of partial memberships (some value in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised like FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, PCM etc. Each of these algorithms has its own advantages and drawbacks, so none of these algorithms would be able to perform superiorly in all datasets. In this paper we will experimentally compare FCM, GK, GG algorithm and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly we will theoretically dis-cuss the advantages and drawbacks for each of these algorithms and we will describe the hybrid clustering model exploiting the advantages and diminishing the drawbacks of each algorithm. Secondly we will experimentally compare the accuracy of the hybrid model by applying it on several benchmark and synthetic datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20clustering" title="fuzzy clustering">fuzzy clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20c-means%20algorithm%20%28FCM%29" title=" fuzzy c-means algorithm (FCM)"> fuzzy c-means algorithm (FCM)</a>, <a href="https://publications.waset.org/abstracts/search?q=Gustafson-Kessel%20algorithm" title=" Gustafson-Kessel algorithm"> Gustafson-Kessel algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20clustering%20model" title=" hybrid clustering model"> hybrid clustering model</a> </p> <a href="https://publications.waset.org/abstracts/67863/an-experimental-study-on-some-conventional-and-hybrid-models-of-fuzzy-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">514</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3989</span> A Learning-Based EM Mixture Regression Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yi-Cheng%20Tian">Yi-Cheng Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Miin-Shen%20Yang"> Miin-Shen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The mixture likelihood approach to clustering is a popular clustering method where the expectation and maximization (EM) algorithm is the most used mixture likelihood method. In the literature, the EM algorithm had been used for mixture regression models. However, these EM mixture regression algorithms are sensitive to initial values with a priori number of clusters. In this paper, to resolve these drawbacks, we construct a learning-based schema for the EM mixture regression algorithm such that it is free of initializations and can automatically obtain an approximately optimal number of clusters. Some numerical examples and comparisons demonstrate the superiority and usefulness of the proposed learning-based EM mixture regression algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=EM%20algorithm" title=" EM algorithm"> EM algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=mixture%20regression%20model" title=" mixture regression model"> mixture regression model</a> </p> <a href="https://publications.waset.org/abstracts/25163/a-learning-based-em-mixture-regression-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25163.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">510</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3988</span> Spectral Clustering for Manufacturing Cell Formation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yessica%20Nataliani">Yessica Nataliani</a>, <a href="https://publications.waset.org/abstracts/search?q=Miin-Shen%20Yang"> Miin-Shen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cell formation (CF) is an important step in group technology. It is used in designing cellular manufacturing systems using similarities between parts in relation to machines so that it can identify part families and machine groups. There are many CF methods in the literature, but there is less spectral clustering used in CF. In this paper, we propose a spectral clustering algorithm for machine-part CF. Some experimental examples are used to illustrate its efficiency. Overall, the spectral clustering algorithm can be used in CF with a wide variety of machine/part matrices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=group%20technology" title="group technology">group technology</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20formation" title=" cell formation"> cell formation</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20clustering" title=" spectral clustering"> spectral clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=grouping%20efficiency" title=" grouping efficiency"> grouping efficiency</a> </p> <a href="https://publications.waset.org/abstracts/72294/spectral-clustering-for-manufacturing-cell-formation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">405</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3987</span> Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Murat%20Yazici">Murat Yazici</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s. Over time, a variety of anomaly detection techniques have been developed in several research communities. The cluster analysis can be used to detect anomalies. It is the process of associating data with clusters that are as similar as possible while dissimilar clusters are associated with each other. Many of the traditional cluster algorithms have limitations in dealing with data sets containing categorical properties. To detect anomalies in categorical data, fuzzy clustering approach can be used with its advantages. The fuzzy k-Mode (FKM) clustering algorithm, which is one of the fuzzy clustering approaches, by extension to the k-means algorithm, is reported for clustering datasets with categorical values. It is a form of clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated data by using the FKM cluster algorithm. As a significance of the study, the FKM cluster algorithm allows to determine anomalies with their abnormality degree in contrast to numerous anomaly detection algorithms. According to the results, the FKM cluster algorithm illustrated good performance in the anomaly detection of data, including both one anomaly and more than one anomaly. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20k-mode%20clustering" title="fuzzy k-mode clustering">fuzzy k-mode clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=noise" title=" noise"> noise</a>, <a href="https://publications.waset.org/abstracts/search?q=categorical%20data" title=" categorical data"> categorical data</a> </p> <a href="https://publications.waset.org/abstracts/185305/anomaly-detection-based-fuzzy-k-mode-clustering-for-categorical-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185305.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">53</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3986</span> K-Means Clustering-Based Infinite Feature Selection Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyyedeh%20Faezeh%20Hassani%20Ziabari">Seyyedeh Faezeh Hassani Ziabari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sadegh%20Eskandari"> Sadegh Eskandari</a>, <a href="https://publications.waset.org/abstracts/search?q=Maziar%20Salahi"> Maziar Salahi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Infinite Feature Selection (IFS) algorithm is an efficient feature selection algorithm that selects a subset of features of all sizes (including infinity). In this paper, we present an improved version of it, called clustering IFS (CIFS), by clustering the dataset in advance. To do so, first, we apply the K-means algorithm to cluster the dataset, then we apply IFS. In the CIFS method, the spatial and temporal complexities are reduced compared to the IFS method. Experimental results on 6 datasets show the superiority of CIFS compared to IFS in terms of accuracy, running time, and memory consumption. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title="feature selection">feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=infinite%20feature%20selection" title=" infinite feature selection"> infinite feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=graph" title=" graph"> graph</a> </p> <a href="https://publications.waset.org/abstracts/155406/k-means-clustering-based-infinite-feature-selection-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3985</span> Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rishabh%20Srivastav">Rishabh Srivastav</a>, <a href="https://publications.waset.org/abstracts/search?q=Divyam%20%20Sharma"> Divyam Sharma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose an algorithm to cluster categorical data named as ‘Genetic algorithm initialized rough set based fuzzy K-Modes for categorical data’. We propose an amalgamation of the simple K-modes algorithm, the Rough and Fuzzy set based K-modes and the Genetic Algorithm to form a new algorithm,which we hypothesise, will provide better Centroid Model clustering results, than existing standard algorithms. In the proposed algorithm, the initialization and updation of modes is done by the use of genetic algorithms while the membership values are calculated using the rough set and fuzzy logic. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=categorical%20data" title="categorical data">categorical data</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title=" fuzzy logic"> fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=K%20modes%20clustering" title=" K modes clustering"> K modes clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=rough%20sets" title=" rough sets"> rough sets</a> </p> <a href="https://publications.waset.org/abstracts/128558/using-genetic-algorithms-and-rough-set-based-fuzzy-k-modes-to-improve-centroid-model-clustering-performance-on-categorical-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128558.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3984</span> Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alhadi%20Bustaman">Alhadi Bustaman</a>, <a href="https://publications.waset.org/abstracts/search?q=Soeganda%20Formalidin"> Soeganda Formalidin</a>, <a href="https://publications.waset.org/abstracts/search?q=Titin%20Siswantining"> Titin Siswantining</a> </p> <p class="card-text"><strong>Abstract:</strong></p> DNA microarray technology is used to analyze thousand gene expression data simultaneously and a very important task for drug development and test, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithm has been used as an alternative approach to identifying structures from gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to identify normalized matrix of gene expression data followed by Mixed-Clustering algorithm and the Lift algorithm, inspired in the node-deletion and node-addition phases proposed by Cheng and Church based on Agglomerative Hierarchical Clustering (AHC). Experimental study on standard datasets demonstrated the effectiveness of the algorithm in gene expression data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agglomerative%20hierarchical%20clustering%20%28AHC%29" title="agglomerative hierarchical clustering (AHC)">agglomerative hierarchical clustering (AHC)</a>, <a href="https://publications.waset.org/abstracts/search?q=biclustering" title=" biclustering"> biclustering</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20expression%20data" title=" gene expression data"> gene expression data</a>, <a href="https://publications.waset.org/abstracts/search?q=lymphoma" title=" lymphoma"> lymphoma</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition%20%28SVD%29" title=" singular value decomposition (SVD)"> singular value decomposition (SVD)</a> </p> <a href="https://publications.waset.org/abstracts/72889/finding-bicluster-on-gene-expression-data-of-lymphoma-based-on-singular-value-decomposition-and-hierarchical-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3983</span> Multi-Cluster Overlapping K-Means Extension Algorithm (MCOKE)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Said%20Baadel">Said Baadel</a>, <a href="https://publications.waset.org/abstracts/search?q=Fadi%20Thabtah"> Fadi Thabtah</a>, <a href="https://publications.waset.org/abstracts/search?q=Joan%20Lu"> Joan Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clustering involves the partitioning of n objects into k clusters. Many clustering algorithms use hard-partitioning techniques where each object is assigned to one cluster. In this paper, we propose an overlapping algorithm MCOKE which allows objects to belong to one or more clusters. The algorithm is different from fuzzy clustering techniques because objects that overlap are assigned a membership value of 1 (one) as opposed to a fuzzy membership degree. The algorithm is also different from other overlapping algorithms that require a similarity threshold to be defined as a priority which can be difficult to determine by novice users. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means" title=" k-means"> k-means</a>, <a href="https://publications.waset.org/abstracts/search?q=MCOKE" title=" MCOKE"> MCOKE</a>, <a href="https://publications.waset.org/abstracts/search?q=overlapping" title=" overlapping"> overlapping</a> </p> <a href="https://publications.waset.org/abstracts/18638/multi-cluster-overlapping-k-means-extension-algorithm-mcoke" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18638.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">575</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3982</span> Semi-Supervised Hierarchical Clustering Given a Reference Tree of Labeled Documents</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ying%20Zhao">Ying Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingyan%20Bin"> Xingyan Bin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Semi-supervised clustering algorithms have been shown effective to improve clustering process with even limited supervision. However, semi-supervised hierarchical clustering remains challenging due to the complexities of expressing constraints for agglomerative clustering algorithms. This paper proposes novel semi-supervised agglomerative clustering algorithms to build a hierarchy based on a known reference tree. We prove that by enforcing distance constraints defined by a reference tree during the process of hierarchical clustering, the resultant tree is guaranteed to be consistent with the reference tree. We also propose a framework that allows the hierarchical tree generation be aware of levels of levels of the agglomerative tree under creation, so that metric weights can be learned and adopted at each level in a recursive fashion. The experimental evaluation shows that the additional cost of our contraint-based semi-supervised hierarchical clustering algorithm (HAC) is negligible, and our combined semi-supervised HAC algorithm outperforms the state-of-the-art algorithms on real-world datasets. The experiments also show that our proposed methods can improve clustering performance even with a small number of unevenly distributed labeled data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semi-supervised%20clustering" title="semi-supervised clustering">semi-supervised clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchical%0D%0Aagglomerative%20clustering" title=" hierarchical agglomerative clustering"> hierarchical agglomerative clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=reference%20trees" title=" reference trees"> reference trees</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20constraints" title=" distance constraints "> distance constraints </a> </p> <a href="https://publications.waset.org/abstracts/19478/semi-supervised-hierarchical-clustering-given-a-reference-tree-of-labeled-documents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19478.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">547</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3981</span> An Improved K-Means Algorithm for Gene Expression Data Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Billel%20Kenidra">Billel Kenidra</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Benmohammed"> Mohamed Benmohammed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data mining technique used in the field of clustering is a subject of active research and assists in biological pattern recognition and extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar between themselves and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers and a poor choice of centers may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy to initiate K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results prove how the efficiency has been significantly improved. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=microarray%20data%20mining" title="microarray data mining">microarray data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=biological%20pattern%20recognition" title=" biological pattern recognition"> biological pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=partitional%20clustering" title=" partitional clustering"> partitional clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means%20algorithm" title=" k-means algorithm"> k-means algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=centroid%20initialization" title=" centroid initialization"> centroid initialization</a> </p> <a href="https://publications.waset.org/abstracts/83541/an-improved-k-means-algorithm-for-gene-expression-data-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/83541.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3980</span> A Weighted K-Medoids Clustering Algorithm for Effective Stability in Vehicular Ad Hoc Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rejab%20Hajlaoui">Rejab Hajlaoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Tarek%20Moulahi"> Tarek Moulahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Herv%C3%A9%20Guyennet"> Hervé Guyennet</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a highway scenario, the vehicle speed can exceed 120 kmph. Therefore, any vehicle can enter or leave the network within a very short time. This mobility adversely affects the network connectivity and decreases the life time of all established links. To ensure an effective stability in vehicular ad hoc networks with minimum broadcasting storm, we have developed a weighted algorithm based on the k-medoids clustering algorithm (WKCA). Indeed, the number of clusters and the initial cluster heads will not be selected randomly as usual, but considering the available transmission range and the environment size. Then, to ensure optimal assignment of nodes to clusters in both k-medoids phases, the combined weight of any node will be computed according to additional metrics including direction, relative speed and proximity. Empirical results prove that in addition to the convergence speed that characterizes the k-medoids algorithm, our proposed model performs well both AODV-Clustering and OLSR-Clustering protocols under different densities and velocities in term of end-to-end delay, packet delivery ratio, and throughput. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication" title="communication">communication</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm" title=" clustering algorithm"> clustering algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=k-medoids" title=" k-medoids"> k-medoids</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor" title=" sensor"> sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicular%20ad%20hoc%20network" title=" vehicular ad hoc network"> vehicular ad hoc network</a> </p> <a href="https://publications.waset.org/abstracts/69086/a-weighted-k-medoids-clustering-algorithm-for-effective-stability-in-vehicular-ad-hoc-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3979</span> Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20B.%20Le">C. B. Le</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20N.%20Pham"> V. N. Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as a important issue in the data mining and machine learning community. Different data sources provide information about different data. Therefore, multi-source data linking is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large. This issue is considered a major challenge from multi-source data. Ensemble is a versatile machine learning model in which learning techniques can work in parallel, with big data. Clustering ensemble has been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most of the traditional clustering ensemble approaches are based on single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on the standard sample data set. The experimental results demonstrate the superior performance of the FOMOCE method compared to the existing clustering ensemble methods and multi-source clustering methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20ensemble" title="clustering ensemble">clustering ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-source" title=" multi-source"> multi-source</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-objective" title=" multi-objective"> multi-objective</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20clustering" title=" fuzzy clustering"> fuzzy clustering</a> </p> <a href="https://publications.waset.org/abstracts/136598/fuzzy-optimization-multi-objective-clustering-ensemble-model-for-multi-source-data-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3978</span> Hybrid Hierarchical Clustering Approach for Community Detection in Social Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Radhia%20Toujani">Radhia Toujani</a>, <a href="https://publications.waset.org/abstracts/search?q=Jalel%20Akaichi"> Jalel Akaichi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Social Networks generally present a hierarchy of communities. To determine these communities and the relationship between them, detection algorithms should be applied. Most of the existing algorithms, proposed for hierarchical communities identification, are based on either agglomerative clustering or divisive clustering. In this paper, we present a hybrid hierarchical clustering approach for community detection based on both bottom-up and bottom-down clustering. Obviously, our approach provides more relevant community structure than hierarchical method which considers only divisive or agglomerative clustering to identify communities. Moreover, we performed some comparative experiments to enhance the quality of the clustering results and to show the effectiveness of our algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=agglomerative%20hierarchical%20clustering" title="agglomerative hierarchical clustering">agglomerative hierarchical clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=community%20structure" title=" community structure"> community structure</a>, <a href="https://publications.waset.org/abstracts/search?q=divisive%20hierarchical%20clustering" title=" divisive hierarchical clustering"> divisive hierarchical clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20hierarchical%20clustering" title=" hybrid hierarchical clustering"> hybrid hierarchical clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=opinion%20mining" title=" opinion mining"> opinion mining</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network" title=" social network"> social network</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network%20analysis" title=" social network analysis"> social network analysis</a> </p> <a href="https://publications.waset.org/abstracts/63702/hybrid-hierarchical-clustering-approach-for-community-detection-in-social-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63702.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3977</span> A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anuj%20Abraham">Anuj Abraham</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Pappa"> N. Pappa</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Honc"> Daniel Honc</a>, <a href="https://publications.waset.org/abstracts/search?q=Rahul%20Sharma"> Rahul Sharma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for model order reduction of linear time invariant systems is compared with the conventional techniques namely Balanced Truncation, Hankel Norm reduction and Dominant Pole Algorithm (DPA). The proposed hybrid algorithm is known as Clustering Dominant Pole Algorithm (CDPA) is able to compute the full set of dominant poles and its cluster center efficiently. The dominant poles of a transfer function are specific eigenvalues of the state space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through the simulation results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=balanced%20truncation" title="balanced truncation">balanced truncation</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=dominant%20pole" title=" dominant pole"> dominant pole</a>, <a href="https://publications.waset.org/abstracts/search?q=Hankel%20norm" title=" Hankel norm"> Hankel norm</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20reduction" title=" model reduction"> model reduction</a> </p> <a href="https://publications.waset.org/abstracts/17480/a-hybrid-method-for-determination-of-effective-poles-using-clustering-dominant-pole-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17480.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">599</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3976</span> Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Semeh%20Ben%20Salem">Semeh Ben Salem</a>, <a href="https://publications.waset.org/abstracts/search?q=Sami%20Naouali"> Sami Naouali</a>, <a href="https://publications.waset.org/abstracts/search?q=Moetez%20Sallami"> Moetez Sallami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clustering is a well known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can either contain categorical or numeric data. Each type of data has its own specific clustering algorithm. In this context, two algorithms are proposed: the <em>k</em>-means for clustering numeric datasets and the <em>k</em>-modes for categorical datasets. The main encountered problem in data mining applications is clustering categorical dataset so relevant in the datasets. One main issue to achieve the clustering process on categorical values is to transform the categorical attributes into numeric measures and directly apply the <em>k</em>-means algorithm instead the <em>k</em>-modes. In this paper, it is proposed to experiment an approach based on the previous issue by transforming the categorical values into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previously method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are experimented. The obtained results show that our proposed method outperforms the binary method in all cases. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=categorical%20datasets" title=" categorical datasets"> categorical datasets</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20discovery" title=" knowledge discovery"> knowledge discovery</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means" title=" k-means"> k-means</a> </p> <a href="https://publications.waset.org/abstracts/73588/clustering-categorical-data-using-the-k-means-algorithm-and-the-attributes-relative-frequency" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73588.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3975</span> Analysis of ECGs Survey Data by Applying Clustering Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Irum%20Matloob">Irum Matloob</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoab%20Ahmad%20Khan"> Shoab Ahmad Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahim%20Arif"> Fahim Arif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As Indo-pak has been the victim of heart diseases since many decades. Many surveys showed that percentage of cardiac patients is increasing in Pakistan day by day, and special attention is needed to pay on this issue. The framework is proposed for performing detailed analysis of ECG survey data which is conducted for measuring the prevalence of heart diseases statistics in Pakistan. The ECG survey data is evaluated or filtered by using automated Minnesota codes and only those ECGs are used for further analysis which is fulfilling the standardized conditions mentioned in the Minnesota codes. Then feature selection is performed by applying proposed algorithm based on discernibility matrix, for selecting relevant features from the database. Clustering is performed for exposing natural clusters from the ECG survey data by applying spectral clustering algorithm using fuzzy c means algorithm. The hidden patterns and interesting relationships which have been exposed after this analysis are useful for further detailed analysis and for many other multiple purposes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arrhythmias" title="arrhythmias">arrhythmias</a>, <a href="https://publications.waset.org/abstracts/search?q=centroids" title=" centroids"> centroids</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=discernibility%20matrix" title=" discernibility matrix"> discernibility matrix</a> </p> <a href="https://publications.waset.org/abstracts/32782/analysis-of-ecgs-survey-data-by-applying-clustering-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32782.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">351</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3974</span> Decision Trees Constructing Based on K-Means Clustering Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Loai%20Abdallah">Loai Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Yousef"> Malik Yousef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A domain space for the data should reflect the actual similarity between objects. Since objects belonging to the same cluster usually share some common traits even though their geometric distance might be relatively large. In general, the Euclidean distance of data points that represented by large number of features is not capturing the actual relation between those points. In this study, we propose a new method to construct a different space that is based on clustering to form a new distance metric. The new distance space is based on ensemble clustering (EC). The EC distance space is defined by tracking the membership of the points over multiple runs of clustering algorithm metric. Over this distance, we train the decision trees classifier (DT-EC). The results obtained by applying DT-EC on 10 datasets confirm our hypotheses that embedding the EC space as a distance metric would improve the performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ensemble%20clustering" title="ensemble clustering">ensemble clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20trees" title=" decision trees"> decision trees</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=K%20nearest%20neighbors" title=" K nearest neighbors"> K nearest neighbors</a> </p> <a href="https://publications.waset.org/abstracts/89656/decision-trees-constructing-based-on-k-means-clustering-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89656.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3973</span> Ensuring Uniform Energy Consumption in Non-Deterministic Wireless Sensor Network to Protract Networks Lifetime</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vrince%20Vimal">Vrince Vimal</a>, <a href="https://publications.waset.org/abstracts/search?q=Madhav%20J.%20Nigam"> Madhav J. Nigam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Wireless sensor networks have enticed much of the spotlight from researchers all around the world, owing to its extensive applicability in agricultural, industrial and military fields. Energy conservation node deployment stratagems play a notable role for active implementation of Wireless Sensor Networks. Clustering is the approach in wireless sensor networks which improves energy efficiency in the network. The clustering algorithm needs to have an optimum size and number of clusters, as clustering, if not implemented properly, cannot effectively increase the life of the network. In this paper, an algorithm has been proposed to address connectivity issues with the aim of ensuring the uniform energy consumption of nodes in every part of the network. The results obtained after simulation showed that the proposed algorithm has an edge over existing algorithms in terms of throughput and networks lifetime. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wireless%20Sensor%20network%20%28WSN%29" title="Wireless Sensor network (WSN)">Wireless Sensor network (WSN)</a>, <a href="https://publications.waset.org/abstracts/search?q=Random%20Deployment" title=" Random Deployment"> Random Deployment</a>, <a href="https://publications.waset.org/abstracts/search?q=Clustering" title=" Clustering"> Clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=Isolated%20Nodes" title=" Isolated Nodes"> Isolated Nodes</a>, <a href="https://publications.waset.org/abstracts/search?q=Networks%20Lifetime" title=" Networks Lifetime"> Networks Lifetime</a> </p> <a href="https://publications.waset.org/abstracts/71975/ensuring-uniform-energy-consumption-in-non-deterministic-wireless-sensor-network-to-protract-networks-lifetime" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3972</span> A Comparative Study of Multi-SOM Algorithms for Determining the Optimal Number of Clusters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Im%C3%A8n%20Khanchouch">Imèn Khanchouch</a>, <a href="https://publications.waset.org/abstracts/search?q=Malika%20Charrad"> Malika Charrad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Limam"> Mohamed Limam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The interpretation of the quality of clusters and the determination of the optimal number of clusters is still a crucial problem in clustering. We focus in this paper on multi-SOM clustering method which overcomes the problem of extracting the number of clusters from the SOM map through the use of a clustering validity index. We then tested multi-SOM using real and artificial data sets with different evaluation criteria not used previously such as Davies Bouldin index, Dunn index and silhouette index. The developed multi-SOM algorithm is compared to k-means and Birch methods. Results show that it is more efficient than classical clustering methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=SOM" title=" SOM"> SOM</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-SOM" title=" multi-SOM"> multi-SOM</a>, <a href="https://publications.waset.org/abstracts/search?q=DB%20index" title=" DB index"> DB index</a>, <a href="https://publications.waset.org/abstracts/search?q=Dunn%20index" title=" Dunn index"> Dunn index</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouette%20index" title=" silhouette index"> silhouette index</a> </p> <a href="https://publications.waset.org/abstracts/17422/a-comparative-study-of-multi-som-algorithms-for-determining-the-optimal-number-of-clusters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17422.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">599</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3971</span> Improved Color-Based K-Mean Algorithm for Clustering of Satellite Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sangeeta%20Yadav">Sangeeta Yadav</a>, <a href="https://publications.waset.org/abstracts/search?q=Mantosh%20Biswas"> Mantosh Biswas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we proposed an improved color based K-mean algorithm for clustering of satellite Image (SAR). Our method comprises of two stages. The first step is an interactive selection process where users are required to input the number of colors (ncolor), number of clusters, and then they are prompted to select the points in each color cluster. In the second step these points are given as input to K-mean clustering algorithm that clusters the image based on color and Minimum Square Euclidean distance. The proposed method reduces the mixed pixel problem to a great extent. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cluster" title="cluster">cluster</a>, <a href="https://publications.waset.org/abstracts/search?q=ncolor%20method" title=" ncolor method"> ncolor method</a>, <a href="https://publications.waset.org/abstracts/search?q=K-mean%20method" title=" K-mean method"> K-mean method</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20selection%20process" title=" interactive selection process"> interactive selection process</a> </p> <a href="https://publications.waset.org/abstracts/64532/improved-color-based-k-mean-algorithm-for-clustering-of-satellite-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3970</span> Interpretation and Clustering Framework for Analyzing ECG Survey Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Irum%20Matloob">Irum Matloob</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoab%20Ahmad%20Khan"> Shoab Ahmad Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahim%20Arif"> Fahim Arif </a> </p> <p class="card-text"><strong>Abstract:</strong></p> As Indo-Pak has been the victim of heart diseases since many decades. Many surveys showed that percentage of cardiac patients is increasing in Pakistan day by day, and special attention is needed to pay on this issue. The framework is proposed for performing detailed analysis of ECG survey data which is conducted for measuring prevalence of heart diseases statistics in Pakistan. The ECG survey data is evaluated or filtered by using automated Minnesota codes and only those ECGs are used for further analysis which is fulfilling the standardized conditions mentioned in the Minnesota codes. Then feature selection is performed by applying proposed algorithm based on discernibility matrix, for selecting relevant features from the database. Clustering is performed for exposing natural clusters from the ECG survey data by applying spectral clustering algorithm using fuzzy c means algorithm. The hidden patterns and interesting relationships which have been exposed after this analysis are useful for further detailed analysis and for many other multiple purposes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arrhythmias" title="arrhythmias">arrhythmias</a>, <a href="https://publications.waset.org/abstracts/search?q=centroids" title=" centroids"> centroids</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=discernibility%20matrix" title=" discernibility matrix"> discernibility matrix</a> </p> <a href="https://publications.waset.org/abstracts/16194/interpretation-and-clustering-framework-for-analyzing-ecg-survey-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16194.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3969</span> A Polynomial Time Clustering Algorithm for Solving the Assignment Problem in the Vehicle Routing Problem</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lydia%20Wahid">Lydia Wahid</a>, <a href="https://publications.waset.org/abstracts/search?q=Mona%20F.%20Ahmed"> Mona F. Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevin%20Darwish"> Nevin Darwish</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The vehicle routing problem (VRP) consists of a group of customers that needs to be served. Each customer has a certain demand of goods. A central depot having a fleet of vehicles is responsible for supplying the customers with their demands. The problem is composed of two subproblems: The first subproblem is an assignment problem where the number of vehicles that will be used as well as the customers assigned to each vehicle are determined. The second subproblem is the routing problem in which for each vehicle having a number of customers assigned to it, the order of visits of the customers is determined. Optimal number of vehicles, as well as optimal total distance, should be achieved. In this paper, an approach for solving the first subproblem (the assignment problem) is presented. In the approach, a clustering algorithm is proposed for finding the optimal number of vehicles by grouping the customers into clusters where each cluster is visited by one vehicle. Finding the optimal number of clusters is NP-hard. This work presents a polynomial time clustering algorithm for finding the optimal number of clusters and solving the assignment problem. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vehicle%20routing%20problems" title="vehicle routing problems">vehicle routing problems</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering%20algorithms" title=" clustering algorithms"> clustering algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=Clarke%20and%20Wright%20Saving%20Method" title=" Clarke and Wright Saving Method"> Clarke and Wright Saving Method</a>, <a href="https://publications.waset.org/abstracts/search?q=agglomerative%20hierarchical%20clustering" title=" agglomerative hierarchical clustering"> agglomerative hierarchical clustering</a> </p> <a href="https://publications.waset.org/abstracts/85552/a-polynomial-time-clustering-algorithm-for-solving-the-assignment-problem-in-the-vehicle-routing-problem" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">393</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3968</span> A Clustering Algorithm for Massive Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ming%20Liu">Ming Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chong%20Wu"> Chong Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Liu"> Bingquan Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Chen"> Lei Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Internet users have to face the massive amount of textual data every day. Organizing texts into categories can help users dig the useful information from large-scale text collection. Clustering, in fact, is one of the most promising tools for categorizing texts due to its unsupervised characteristic. Unfortunately, most of traditional clustering algorithms lose their high qualities on large-scale text collection. This situation mainly attributes to the high- dimensional vectors generated from texts. To effectively and efficiently cluster large-scale text collection, this paper proposes a vector reconstruction based clustering algorithm. Only the features that can represent the cluster are preserved in cluster’s representative vector. This algorithm alternately repeats two sub-processes until it converges. One process is partial tuning sub-process, where feature’s weight is fine-tuned by iterative process. To accelerate clustering velocity, an intersection based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other process is overall tuning sub-process, where the features are reallocated among different clusters. In this sub-process, the features useless to represent the cluster are removed from cluster’s representative vector. Experimental results on the three text collections (including two small-scale and one large-scale text collections) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vector%20reconstruction" title="vector reconstruction">vector reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=large-scale%20text%20clustering" title=" large-scale text clustering"> large-scale text clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20tuning%20sub-process" title=" partial tuning sub-process"> partial tuning sub-process</a>, <a href="https://publications.waset.org/abstracts/search?q=overall%20tuning%20sub-process" title=" overall tuning sub-process"> overall tuning sub-process</a> </p> <a href="https://publications.waset.org/abstracts/22681/a-clustering-algorithm-for-massive-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22681.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=133">133</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=134">134</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10