<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: sparse Bayesian learning</title> <meta name="description" content="Search results for: sparse Bayesian learning"> <meta name="keywords" content="sparse Bayesian learning"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="sparse Bayesian learning" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="sparse Bayesian learning"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 7574</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: sparse Bayesian learning</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7574</span> A Generalized Sparse Bayesian Learning Algorithm for Near-Field Synthetic Aperture Radar Imaging: By Exploiting Impropriety and Noncircularity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pan%20Long">Pan Long</a>, <a href="https://publications.waset.org/abstracts/search?q=Bi%20Dongjie"> Bi Dongjie</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Xifeng"> Li Xifeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Xie%20Yongle"> Xie Yongle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&amp;E) 
technique. This paper investigates the complex-valued signal processing related to the near-field SAR imaging system, where the measurement data turns out to be noncircular and improper, meaning that the complex-valued data is correlated with its complex conjugate. Furthermore, we discover that the degree of impropriety of the measurement data and that of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed, taking impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain with the help of the noncircularity assumption on the signals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=complex-valued%20signal%20processing" title="complex-valued signal processing">complex-valued signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar" title=" synthetic aperture radar"> synthetic aperture radar</a>, <a href="https://publications.waset.org/abstracts/search?q=2-D%20radar%20imaging" title=" 2-D radar imaging"> 2-D radar imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=compressive%20sensing" title=" compressive sensing"> compressive sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning" title=" sparse Bayesian learning"> sparse Bayesian learning</a> </p> <a href="https://publications.waset.org/abstracts/108404/a-generalized-sparse-bayesian-learning-algorithm-for-near-field-synthetic-aperture-radar-imaging-by-exploiting-impropriety-and-noncircularity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> 
</span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7573</span> Modern Machine Learning Conniptions for Automatic Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Jagadeesh%20Kumar">S. Jagadeesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a clear overview of recent machine learning practices as employed in current automatic speech recognition systems and as pertinent to prospective ones. The aspiration is to promote further cross-pollination between the machine learning and automatic speech recognition communities beyond what has transpired in the past. The manuscript is organized around the chief machine learning paradigms that are either already popular or have the potential to make significant contributions to automatic speech recognition technology. The paradigms presented and discussed in this article include adaptive and multi-task learning, active learning, Bayesian learning, discriminative learning, generative learning, and supervised and unsupervised learning. These learning paradigms are motivated and discussed in the context of automatic speech recognition tools and applications. The manuscript also surveys recent advances in deep learning and learning with sparse representations; further emphasis is placed on their continuing significance in the evolution of automatic speech recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20methods" title=" deep learning methods"> deep learning methods</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20archetypes" title=" machine learning archetypes"> machine learning archetypes</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20learning" title=" Bayesian learning"> Bayesian learning</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20and%20unsupervised%20learning" title=" supervised and unsupervised learning"> supervised and unsupervised learning</a> </p> <a href="https://publications.waset.org/abstracts/71467/modern-machine-learning-conniptions-for-automatic-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7572</span> KSVD-SVM Approach for Spontaneous Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dawood%20Al%20Chanti">Dawood Al Chanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Alice%20Caplier"> Alice Caplier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse representations of signals have received a great deal of attention in recent years. 
In this paper, the interest of using sparse representation as a means of performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expression recognition system is presented. It uses a KSVD-SVM approach made of three main stages: a pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on random projection theory to obtain low-dimensional discriminative and reconstructive features; a dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under- or over-complete dictionaries for sparse coding; and finally a classification stage, which uses an SVM classifier for facial expression recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on both the JAFFE database of static, acted facial expressions and the DynEmo database of dynamic, spontaneous facial expressions exhibit very good recognition rates. 
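The three-stage pipeline described above can be sketched with off-the-shelf components. This is a minimal illustration on synthetic vectors (not real face data), with scikit-learn's MiniBatchDictionaryLearning standing in for the K-SVD dictionary learner; all names and sizes here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))     # stand-in feature vectors (not real faces)
y = rng.integers(0, 3, size=200)    # three expression classes

# Stage 1: random projection to a low-dimensional feature space
proj = GaussianRandomProjection(n_components=64, random_state=0)
X_low = proj.fit_transform(X)

# Stage 2: learn a dictionary and sparse-code the features
# (MiniBatchDictionaryLearning used here in place of K-SVD)
dico = MiniBatchDictionaryLearning(n_components=48, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit(X_low).transform(X_low)  # sparse codes, shape (200, 48)

# Stage 3: classify the sparse codes with a linear SVM
clf = LinearSVC(dual=False).fit(codes, y)
print(codes.shape, clf.score(codes, y))
```

Each code row has at most five nonzero coefficients, so the SVM operates on a genuinely sparse representation.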
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dictionary%20learning" title="dictionary learning">dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20projection" title=" random projection"> random projection</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression" title=" pose and spontaneous facial expression"> pose and spontaneous facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/51683/ksvd-svm-approach-for-spontaneous-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7571</span> Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiuyu%20Dai">Qiuyu Dai</a>, <a href="https://publications.waset.org/abstracts/search?q=Haochong%20Zhang"> Haochong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiangrong%20Liu"> Xiangrong Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diagonal sparse matrix-vector multiplication is a well-studied topic in the fields of scientific computing and big data processing. 
However, when diagonal sparse matrices are stored in DIA format, there can be a significant number of padded zero elements and scattered points, which can lead to a degradation in the performance of the current DIA kernel. This can also lead to excessive consumption of computational and memory resources. In order to address these issues, the authors propose the DIA-Adaptive scheme and its kernel, which leverages the parallel instruction sets on MLU. The researchers analyze the effect of allocating a varying number of threads, clusters, and hardware architectures on the performance of SpMV using different formats. The experimental results indicate that the proposed DIA-Adaptive scheme performs well and offers excellent parallelism. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20method" title="adaptive method">adaptive method</a>, <a href="https://publications.waset.org/abstracts/search?q=DIA" title=" DIA"> DIA</a>, <a href="https://publications.waset.org/abstracts/search?q=diagonal%20sparse%20matrices" title=" diagonal sparse matrices"> diagonal sparse matrices</a>, <a href="https://publications.waset.org/abstracts/search?q=MLU" title=" MLU"> MLU</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20matrix-vector%20multiplication" title=" sparse matrix-vector multiplication"> sparse matrix-vector multiplication</a> </p> <a href="https://publications.waset.org/abstracts/161003/performance-analysis-and-optimization-for-diagonal-sparse-matrix-vector-multiplication-on-machine-learning-unit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161003.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">7570</span> Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fanqiang%20Kong">Fanqiang Kong</a>, <a href="https://publications.waset.org/abstracts/search?q=Chending%20Bian"> Chending Bian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as linear combinations of only a few pure spectral signatures (endmembers) from an available spectral library. However, finding the optimal subset of endmembers for the observed data in a large standard spectral library remains a great challenge, particularly when spatial information is not considered. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method is used as a regularizer for abundance estimation, exploiting the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy. 
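The core sparse-unmixing step for a single pixel can be sketched as a sparse, nonnegative regression against a spectral library. This toy example uses a random library and a nonnegative lasso; NLSSU itself additionally enforces joint sparsity over similar patches and a non-local-means regularizer, which are not shown here:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
A = rng.random((100, 40))               # library: 100 bands x 40 candidate endmembers
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [0.5, 0.3, 0.2]   # only three endmembers truly present
pixel = A @ x_true + 0.001 * rng.normal(size=100)

# Sparse, nonnegative abundance estimate for one pixel
est = Lasso(alpha=1e-4, positive=True, max_iter=50000).fit(A, pixel).coef_
print(np.flatnonzero(est > 1e-3))       # indices of the selected endmembers
```

The `positive=True` constraint reflects the physical nonnegativity of abundances; the lasso penalty drives most library entries to exactly zero.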
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20unmixing" title="hyperspectral unmixing">hyperspectral unmixing</a>, <a href="https://publications.waset.org/abstracts/search?q=simultaneous%20sparse%20representation" title=" simultaneous sparse representation"> simultaneous sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20regression" title=" sparse regression"> sparse regression</a>, <a href="https://publications.waset.org/abstracts/search?q=non-local%20means" title=" non-local means"> non-local means</a> </p> <a href="https://publications.waset.org/abstracts/71689/non-local-simultaneous-sparse-unmixing-for-hyperspectral-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71689.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">245</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7569</span> Factorization of Computations in Bayesian Networks: Interpretation of Factors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Linda%20Smail">Linda Smail</a>, <a href="https://publications.waset.org/abstracts/search?q=Zineb%20Azouz"> Zineb Azouz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Given a Bayesian network relative to a set I of discrete random variables, we are interested in computing the probability distribution P(S) where S is a subset of I. The general idea is to write the expression of P(S) in the form of a product of factors where each factor is easy to compute. 
More importantly, it will be very useful to give an interpretation of each of the factors in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such semantic interpretations is indeed interesting and relevant in the case of large Bayesian networks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20networks" title="Bayesian networks">Bayesian networks</a>, <a href="https://publications.waset.org/abstracts/search?q=D-Separation" title=" D-Separation"> D-Separation</a>, <a href="https://publications.waset.org/abstracts/search?q=level%20two%20Bayesian%20networks" title=" level two Bayesian networks"> level two Bayesian networks</a>, <a href="https://publications.waset.org/abstracts/search?q=factorization%20of%20computation" title=" factorization of computation"> factorization of computation</a> </p> <a href="https://publications.waset.org/abstracts/18829/factorization-of-computations-in-bayesian-networks-interpretation-of-factors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18829.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7568</span> Networked Implementation of Milling Stability Optimization with Bayesian Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christoph%20Ramsauer">Christoph Ramsauer</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaydeep%20Karandikar"> Jaydeep Karandikar</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Tony%20Schmitz"> Tony Schmitz</a>, <a href="https://publications.waset.org/abstracts/search?q=Friedrich%20Bleicher"> Friedrich Bleicher</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machining stability is an important limitation to discrete part machining. In this work, a networked implementation of milling stability optimization with Bayesian learning is presented. The milling process was monitored with a wireless sensory tool holder instrumented with an accelerometer at the Vienna University of Technology, Vienna, Austria. The recorded data from a milling test cut is used to classify the cut as stable or unstable based on the frequency analysis. The test cut result is fed to a Bayesian stability learning algorithm at the University of Tennessee, Knoxville, Tennessee, USA. The algorithm calculates the probability of stability as a function of axial depth of cut and spindle speed and recommends the parameters for the next test cut. The iterative process between two transatlantic locations repeats until convergence to a stable optimal process parameter set is achieved. 
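The iterative test-classify-update-recommend loop described above can be sketched as follows. This is a deliberately minimal model with an independent Beta posterior per (speed, depth) grid cell; the grid values and the selection score are illustrative assumptions, and the actual algorithm shares information across neighboring parameters:

```python
import numpy as np

speeds = np.linspace(2000, 5000, 7)   # spindle speed grid, rpm (illustrative)
depths = np.linspace(0.5, 4.0, 8)     # axial depth of cut grid, mm (illustrative)
alpha = np.ones((7, 8))               # Beta pseudo-counts: stable cuts observed
beta = np.ones((7, 8))                # ...and unstable cuts observed

def recommend():
    """Next test cut: maximize expected material removal (here ~ depth) x P(stable)."""
    p_stable = alpha / (alpha + beta)
    score = p_stable * depths[None, :]
    return np.unravel_index(np.argmax(score), score.shape)

def update(i, j, stable):
    """Fold one frequency-classified test cut back into the posterior."""
    if stable:
        alpha[i, j] += 1
    else:
        beta[i, j] += 1

i, j = recommend()
update(i, j, stable=False)            # suppose this cut chattered (unstable)
i2, j2 = recommend()
print(speeds[i], depths[j], "->", speeds[i2], depths[j2])
```

After the unstable cut, the recommended depth/speed pair moves away from the cell that just failed, mimicking the convergence loop between the two sites.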
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machining%20stability" title="machining stability">machining stability</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor" title=" sensor"> sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a> </p> <a href="https://publications.waset.org/abstracts/135659/networked-implementation-of-milling-stability-optimization-with-bayesian-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135659.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">206</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7567</span> An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Wang">Y. Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those on the corresponding complete graphs. We present an improved iterative algorithm that computes sparse graphs for the TSP from frequency graphs built with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. 
The computation time of the algorithm is <em>O</em>(<em>CN</em><sub>max</sub><em>n</em><sup>2</sup>), where <em>C</em> is the number of iterations, <em>N</em><sub>max</sub> is the maximum number of frequency quadrilaterals containing each edge, and <em>n</em> is the size of the TSP instance. The experimental results showed that the computed sparse graphs generally have fewer than 5<em>n</em> edges for most of these Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that solve the TSP on these sparse graphs will be greatly reduced. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=frequency%20quadrilateral" title="frequency quadrilateral">frequency quadrilateral</a>, <a href="https://publications.waset.org/abstracts/search?q=iterative%20algorithm" title=" iterative algorithm"> iterative algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20graph" title=" sparse graph"> sparse graph</a>, <a href="https://publications.waset.org/abstracts/search?q=traveling%20salesman%20problem" title=" traveling salesman problem"> traveling salesman problem</a> </p> <a href="https://publications.waset.org/abstracts/82737/an-improved-method-to-compute-sparse-graphs-for-traveling-salesman-problem" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82737.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7566</span> Curriculum-Based Multi-Agent Reinforcement Learning for Robotic Navigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Hyeongbok%20Kim">Hyeongbok Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingling%20Zhao"> Lingling Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaohong%20Su"> Xiaohong Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep reinforcement learning has been applied to address various problems in robotics, such as autonomous driving and unmanned aerial vehicles. However, because rewards are sparse and collisions with obstacles are penalized during the navigation mission, the agent may fail to learn the optimal policy or require a long time to converge. Therefore, in this paper, using obstacles and enemy agents, we present a curriculum-based boost learning method to effectively train compound skills during multi-agent reinforcement learning. First, to enable the agents to solve challenging tasks, we gradually increased the learning difficulty by adjusting the reward shaping instead of constructing different learning environments. Then, in a benchmark environment with static obstacles and moving enemy agents, the experimental results showed that the proposed curriculum learning strategy enhanced cooperative navigation and compound collision-avoidance skills in uncertain environments while improving learning efficiency. 
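The idea of raising difficulty through reward shaping rather than new environments can be sketched as a staged penalty schedule. The schedule and constants below are illustrative assumptions, since the abstract does not give the paper's exact shaping terms:

```python
def shaped_reward(base_reward, collided, stage, n_stages=5):
    """Curriculum by reward shaping: the collision penalty ramps up with the
    training stage, so early-stage agents can still explore despite sparse
    rewards. (Illustrative schedule, not the paper's exact shaping.)"""
    max_penalty = 10.0
    penalty = max_penalty * (stage + 1) / n_stages
    return base_reward - (penalty if collided else 0.0)

# A collision costs little at stage 0 and the full penalty at the last stage.
print(shaped_reward(1.0, True, stage=0))   # 1.0 - 2.0 = -1.0
print(shaped_reward(1.0, True, stage=4))   # 1.0 - 10.0 = -9.0
```

The environment itself never changes; only the effective difficulty of avoiding collisions does, which is the design choice the abstract highlights.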
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curriculum%20learning" title="curriculum learning">curriculum learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hard%20exploration" title=" hard exploration"> hard exploration</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-agent%20reinforcement%20learning" title=" multi-agent reinforcement learning"> multi-agent reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=robotic%20navigation" title=" robotic navigation"> robotic navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20reward" title=" sparse reward"> sparse reward</a> </p> <a href="https://publications.waset.org/abstracts/162478/curriculum-based-multi-agent-reinforcement-learning-for-robotic-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162478.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7565</span> An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhidong%20Zhang">Zhidong Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study established a mixed method model in assessing statistics learning with Bayesian network models. There are three variants in exploratory sequential designs. 
There are three linked steps in one of the designs: qualitative data collection and analysis; a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as the content domain. The research examines students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx1 + γx2 + ε, served as a cognitive task for collecting data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained through qualitative cognitive analysis. The data from students’ learning of the ANOVA score model provided evidence to the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ ANOVA score model learning were reported. In brief, this was a mixed methods research design applied to statistics learning assessment. Mixed methods designs open more possibilities for researchers to establish advanced quantitative models starting from a theory-driven qualitative mode. 
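The score model above is a two-predictor linear model, which can be simulated and refit with ordinary least squares. The coefficients and data here are illustrative (not the study's data), and the second slope is read as multiplying a second factor x2, as a two-factor score model suggests:

```python
import numpy as np

# Simulate y = alpha + beta*x1 + gamma*x2 + eps and recover the coefficients.
rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.7 * x2 + 0.1 * rng.normal(size=n)

# Ordinary least squares via the design matrix [1, x1, x2]
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))   # approximately [2.0, 1.5, -0.7]
```

In the study's setting, fitted quantities like these would sit at the leaves of the hierarchical Bayesian network as evidential variables.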
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=exploratory%20sequential%20design" title="exploratory sequential design">exploratory sequential design</a>, <a href="https://publications.waset.org/abstracts/search?q=ANOVA%20score%20model" title=" ANOVA score model"> ANOVA score model</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20network%20model" title=" Bayesian network model"> Bayesian network model</a>, <a href="https://publications.waset.org/abstracts/search?q=mixed%20methods%20research%20design" title=" mixed methods research design"> mixed methods research design</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20analysis" title=" cognitive analysis"> cognitive analysis</a> </p> <a href="https://publications.waset.org/abstracts/102367/an-exploratory-sequential-design-a-mixed-methods-model-for-the-statistics-learning-assessment-with-a-bayesian-network-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7564</span> Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kourosh%20Modarresi">Kourosh Modarresi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The abundance of media channels and devices has given users a variety of options to extract, discover, and explore information in the digital world. 
Since there is often a long and complicated path that a typical user travels before taking any significant action (such as purchasing goods or services), it is critical to know how each node (media channel) in the user's path has contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, “Regularized Singular Value Decomposition” and “Sparse Principal Component Analysis” have been used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over existing approaches. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimedia%20attribution" title="multimedia attribution">multimedia attribution</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20principal%20component" title=" sparse principal component"> sparse principal component</a>, <a href="https://publications.waset.org/abstracts/search?q=regularization" title=" regularization"> regularization</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20significance" title=" feature significance"> feature significance</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20systems" title=" linear systems"> linear systems</a>, <a href="https://publications.waset.org/abstracts/search?q=variable%20shrinkage" title=" variable shrinkage"> variable shrinkage</a> </p> <a href="https://publications.waset.org/abstracts/19533/analysis-of-the-significance-of-multimedia-channels-using-sparse-pca-and-regularized-svd" class="btn 
Procedia</a>">
btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19533.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">309</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7563</span> Identification of Bayesian Network with Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Raouf%20Benmakrelouf">Mohamed Raouf Benmakrelouf</a>, <a href="https://publications.waset.org/abstracts/search?q=Wafa%20Karouche"> Wafa Karouche</a>, <a href="https://publications.waset.org/abstracts/search?q=Joseph%20Rynkiewicz"> Joseph Rynkiewicz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an alternative method to construct a Bayesian Network (BN). This method relies on a convolutional neural network (CNN) classifier, which determines the edges of the network skeleton. We train a CNN on a normalized empirical probability density distribution (NEPDF) for predicting causal interactions and relationships. The goal is to find the optimal Bayesian network structure for causal inference; accordingly, we undertake a search for pair-wise causality under the considered causal assumptions. To avoid unreasonable causal structures, we use a blacklist and a whitelist of causal directions. We tested the method on real data to assess the influence of education on voting intention for the extreme right-wing party. We show that, with this method, we obtain a safer causal structure of variables (Bayesian network) and can identify a variable that satisfies the backdoor criterion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20network" title="Bayesian network">Bayesian network</a>, <a href="https://publications.waset.org/abstracts/search?q=structure%20learning" title=" structure learning"> structure learning</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20search" title=" optimal search"> optimal search</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=causal%20inference" title=" causal inference"> causal inference</a> </p> <a href="https://publications.waset.org/abstracts/151560/identification-of-bayesian-network-with-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151560.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7562</span> Diagnostic Assessment for Mastery Learning of Engineering Students with a Bayesian Network Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhidong%20Zhang">Zhidong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yingchen%20Yang"> Yingchen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, a diagnostic assessment model for Mastery Engineering Learning was established based on a group of undergraduate students who studied in an engineering course. A diagnostic assessment model can examine both students' learning process and report achievement results. 
One unique characteristic is that the diagnostic assessment model can recognize errors and anything blocking students in their learning processes. Feedback is provided to help students learn how to solve the learning problems with alternative strategies, and to help the instructor find alternative pedagogical strategies in the instructional designs. Dynamics is a core course shared by several engineering programs, and its problems are very challenging for engineering students to solve. Thus, knowledge acquisition and problem-solving skills are crucial for student success, and developing an effective and valid assessment model for student learning is of great importance. Diagnostic assessment is such a model, one that can provide effective feedback for both students and the instructor in the mastery of engineering learning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diagnostic%20assessment" title="diagnostic assessment">diagnostic assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=mastery%20learning" title=" mastery learning"> mastery learning</a>, <a href="https://publications.waset.org/abstracts/search?q=engineering" title=" engineering"> engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=bayesian%20network%20model" title=" bayesian network model"> bayesian network model</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20processes" title=" learning processes"> learning processes</a> </p> <a href="https://publications.waset.org/abstracts/95964/diagnostic-assessment-for-mastery-learning-of-engineering-students-with-a-bayesian-network-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95964.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span 
class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7561</span> Employing Bayesian Artificial Neural Network for Evaluation of Cold Rolling Force</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Kooche%20Baghy">P. Kooche Baghy</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Eskandari"> S. Eskandari</a>, <a href="https://publications.waset.org/abstracts/search?q=E.javanmard"> E.javanmard</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this dissertation, a neural network has been used as a predictive means of cold rolling force. The average force imposed on the rollers is regarded as the sole input, and five parameters pertaining to it as the outputs. A feed-forward multilayer perceptron network was selected, together with a Bayesian algorithm based on the feed-forward back-propagation method, chosen because of the noisy data. Of the 585 tests, 470 were used for network training and the remaining 115 served as assessment criteria. Finally, over 30 runs of the MATLAB software, a mean error of 3.84 percent was obtained as the criterion of network learning. This error is on par with that of other approaches, such as numerical and empirical methods, and is therefore acceptable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian" title=" Bayesian"> Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=cold%20rolling" title=" cold rolling"> cold rolling</a>, <a href="https://publications.waset.org/abstracts/search?q=force%20evaluation" title=" force evaluation"> force evaluation</a> </p> <a href="https://publications.waset.org/abstracts/47601/employing-bayesian-artificial-neural-network-for-evaluation-of-cold-rolling-force" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47601.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">443</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7560</span> Sparse Principal Component Analysis: A Least Squares Approximation Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Giovanni%20Merola">Giovanni Merola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings we propose a branch-and-bound search and an iterative elimination algorithm. 
This last algorithm finds sparse solutions with large loadings and can be run without specifying the cardinality of the loadings and the number of components to compute in advance. We give thorough comparisons with the existing sparse PCA methods and several examples on real datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SPCA" title="SPCA">SPCA</a>, <a href="https://publications.waset.org/abstracts/search?q=uncorrelated%20components" title=" uncorrelated components"> uncorrelated components</a>, <a href="https://publications.waset.org/abstracts/search?q=branch-and-bound" title=" branch-and-bound"> branch-and-bound</a>, <a href="https://publications.waset.org/abstracts/search?q=backward%20elimination" title=" backward elimination"> backward elimination</a> </p> <a href="https://publications.waset.org/abstracts/14630/sparse-principal-component-analysis-a-least-squares-approximation-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14630.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7559</span> Scalable Learning of Tree-Based Models on Sparsely Representable Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fares%20Hedayatit">Fares Hedayatit</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnauld%20Joly"> Arnauld Joly</a>, <a href="https://publications.waset.org/abstracts/search?q=Panagiotis%20Papadimitriou"> Panagiotis Papadimitriou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many machine learning tasks such as text annotation usually require training 
over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn, the most popular open-source Python machine learning library. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=sparsely%20representable%20data" title=" sparsely representable data"> sparsely representable data</a>, <a href="https://publications.waset.org/abstracts/search?q=tree-based%20models" title=" tree-based models"> tree-based models</a>, <a href="https://publications.waset.org/abstracts/search?q=scalable%20learning" title=" scalable learning"> scalable learning</a> </p> <a href="https://publications.waset.org/abstracts/52853/scalable-learning-of-tree-based-models-on-sparsely-representable-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52853.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">263</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7558</span> Learning a Bayesian Network for Situation-Aware Smart Home Service: A Case Study with a Robot Vacuum Cleaner</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Eu%20Tteum%20Ha">Eu Tteum Ha</a>, <a href="https://publications.waset.org/abstracts/search?q=Seyoung%20Kim"> Seyoung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jeongmin%20Kim"> Jeongmin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang%20Ryel%20Ryu"> Kwang Ryel Ryu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The smart home environment, backed by IoT (Internet of Things) technologies, enables intelligent services based on awareness of the situation a user is currently in. One convenient sensor for recognizing situations within a home is the smart meter, which can monitor the status of each electrical appliance in real time. This paper aims at learning a Bayesian network that models the causal relationship between the user situations and the status of the electrical appliances. Using such a network, we can infer the current situation based on the observed status of the appliances. However, learning the conditional probability tables (CPTs) of the network requires many training examples that cannot be obtained unless the user situations are closely monitored by some means. This paper proposes a method for learning the CPT entries of the network relying only on user feedback generated occasionally. In our case study with a robot vacuum cleaner, the feedback comes in whenever the user gives the robot an order that deviates from its preprogrammed setting. Given a network with randomly initialized CPT entries, our proposed method uses this feedback information to adjust the relevant CPT entries in the direction of increasing the probability of recognizing the desired situations. Simulation experiments show that our method can rapidly improve the recognition performance of the Bayesian network using a relatively small amount of feedback. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20network" title="Bayesian network">Bayesian network</a>, <a href="https://publications.waset.org/abstracts/search?q=IoT" title=" IoT"> IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a>, <a href="https://publications.waset.org/abstracts/search?q=situation%20-awareness" title=" situation -awareness"> situation -awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=smart%20home" title=" smart home"> smart home</a> </p> <a href="https://publications.waset.org/abstracts/23000/learning-a-bayesian-network-for-situation-aware-smart-home-service-a-case-study-with-a-robot-vacuum-cleaner" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23000.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">523</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7557</span> Financial Assets Return, Economic Factors and Investor&#039;s Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Souissi">Nada Souissi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mourad%20Mroua"> Mourad Mroua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main purpose of this study is to examine the interaction between financial asset volatility, economic factors, and investor's behavioral indicators related to both the company's and the market's stocks for the period from January 2000 to January 2020. 
Using multiple linear regression and Bayesian network modeling, the results show both positive and negative relationships between the investor's psychology index, economic factors, and the predicted stock market return. We reveal that applying a discrete Bayesian network helps identify the different cause-and-effect relationships among the economic and financial variables and the psychology index. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Financial%20asset%20return%20predictability" title="Financial asset return predictability">Financial asset return predictability</a>, <a href="https://publications.waset.org/abstracts/search?q=Economic%20factors" title=" Economic factors"> Economic factors</a>, <a href="https://publications.waset.org/abstracts/search?q=Investor%27s%20psychology%20index" title=" Investor&#039;s psychology index"> Investor&#039;s psychology index</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20approach" title=" Bayesian approach"> Bayesian approach</a>, <a href="https://publications.waset.org/abstracts/search?q=Probabilistic%20networks" title=" Probabilistic networks"> Probabilistic networks</a>, <a href="https://publications.waset.org/abstracts/search?q=Parametric%20learning" title=" Parametric learning"> Parametric learning</a> </p> <a href="https://publications.waset.org/abstracts/123056/financial-assets-return-economic-factors-and-investors-behavioral-indicators-relationships-modeling-a-bayesian-networks-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123056.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7556</span> Channel 
Estimation Using Deep Learning for Reconfigurable Intelligent Surfaces-Assisted Millimeter Wave Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ting%20Gao">Ting Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Mingyue%20He"> Mingyue He</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Reconfigurable intelligent surfaces (RISs) are expected to be an important part of next-generation wireless communication networks due to their potential to reduce the hardware cost and energy consumption of millimeter Wave (mmWave) massive multiple-input multiple-output (MIMO) technology. However, owing to the lack of signal processing abilities of the RIS, the perfect channel state information (CSI) in RIS-assisted communication systems is difficult to acquire. In this paper, the uplink channel estimation for mmWave systems with a hybrid active/passive RIS architecture is studied. Specifically, a deep learning-based estimation scheme is proposed to estimate the channel between the RIS and the user. In particular, the sparse structure of the mmWave channel is exploited to formulate the channel estimation as a sparse reconstruction problem. To this end, the proposed approach is derived to obtain the distribution of non-zero entries in a sparse channel. After that, the channel is reconstructed by utilizing the least-squares (LS) algorithm and compressed sensing (CS) theory. The simulation results demonstrate that the proposed channel estimation scheme is superior to existing solutions even in low signal-to-noise ratio (SNR) environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=channel%20estimation" title="channel estimation">channel estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=reconfigurable%20intelligent%20surface" title=" reconfigurable intelligent surface"> reconfigurable intelligent surface</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20communication" title=" wireless communication"> wireless communication</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/148896/channel-estimation-using-deep-learning-for-reconfigurable-intelligent-surfaces-assisted-millimeter-wave-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7555</span> Sparsity Order Selection and Denoising in Compressed Sensing Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Shamsi">Mahdi Shamsi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tohid%20Yousefi%20Rezaii"> Tohid Yousefi Rezaii</a>, <a href="https://publications.waset.org/abstracts/search?q=Siavash%20Eftekharifar"> Siavash Eftekharifar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Compressed sensing (CS) is a new powerful mathematical theory concentrating on sparse signals which is widely used in signal processing. 
The main idea is to sense sparse signals with far fewer measurements than the Nyquist sampling rate requires, but the reconstruction process becomes nonlinear and more complicated. A common dilemma in sparse signal recovery in CS is the lack of knowledge of the signal's sparsity order, which can be viewed as a model order selection procedure. In this paper, we address the problem of sparsity order estimation in sparse signal recovery. This is of particular interest in situations where the signal sparsity is unknown or the signal to be recovered is only approximately sparse. It is shown that the proposed method also provides a form of signal denoising when the observations are contaminated with noise. Finally, the performance of the proposed approach is evaluated in different scenarios and compared to an existing method, demonstrating its effectiveness in terms of both order selection and denoising. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title="compressed sensing">compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20denoising" title=" data denoising"> data denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20order%20selection" title=" model order selection"> model order selection</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/31470/sparsity-order-selection-and-denoising-in-compressed-sensing-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31470.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">483</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">7554</span> A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cemil%20Turan">Cemil Turan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Shukri%20Salman"> Mohammad Shukri Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or the system is sparse. We recently proposed a sparse transform domain LMS-type algorithm that uses a variable step-size for sparse system identification; the proposed algorithm provided high performance even when the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter taps, which is also a critical issue for the standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments, and the convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to that of other algorithms in sparse system identification settings with different sparsity levels and different numbers of filter taps. Simulations have shown that the proposed algorithm outperforms the other algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20filtering" title="adaptive filtering">adaptive filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20system%20identification" title=" sparse system identification"> sparse system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=TD-LMS%20algorithm" title=" TD-LMS algorithm"> TD-LMS algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=VSSLMS%20algorithm" title=" VSSLMS algorithm"> VSSLMS algorithm</a> </p> <a href="https://publications.waset.org/abstracts/72335/a-transform-domain-function-controlled-vsslms-algorithm-for-sparse-system-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7553</span> Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chia%20Jui%20Hsieh">Chia Jui Hsieh</a>, <a href="https://publications.waset.org/abstracts/search?q=Jyh%20Cheng%20Chen"> Jyh Cheng Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih%20Wei%20Kuo"> Chih Wei Kuo</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruei%20Teng%20Wang"> Ruei Teng Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Woei%20Chyn%20Chu"> Woei Chyn Chu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Compressed sensing (CS) based computed tomographic (CT) reconstruction 
algorithm utilizes total variation (TV) to transform the CT image into a sparse domain and minimizes the L1-norm of the sparse image for reconstruction. Unlike traditional CS based reconstruction, which calculates only the x-coordinate and y-coordinate TV to transform CT images into a sparse domain, we propose a multi-directional TV to transform the tomographic image into a sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS based reconstruction is more accurate. In 2D CT reconstruction, we use eight-directional TV to transform the CT image into a sparse domain; for 3D reconstruction, we use 26-directional TV. This multi-directional sparse transform makes the CS based reconstruction algorithm more powerful at reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform method, we use both the Shepp-Logan phantom and a head phantom as reconstruction targets with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The results show that the multi-directional TV method can reconstruct images with relatively fewer artifacts than the traditional CS based reconstruction algorithm, which calculates only the x-coordinate and y-coordinate TV. We also choose RMSE, PSNR, and UQI as the parameters for quantitative analysis, and on every one of these parameters the proposed multi-directional TV method performs better. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing%20%28CS%29" title="compressed sensing (CS)">compressed sensing (CS)</a>, <a href="https://publications.waset.org/abstracts/search?q=low-dose%20CT%20reconstruction" title=" low-dose CT reconstruction"> low-dose CT reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20variation%20%28TV%29" title=" total variation (TV)"> total variation (TV)</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-directional%20gradient%20operator" title=" multi-directional gradient operator"> multi-directional gradient operator</a> </p> <a href="https://publications.waset.org/abstracts/77716/development-of-a-few-view-computed-tomographic-reconstruction-algorithm-using-multi-directional-total-variation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77716.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7552</span> Optimal Bayesian Control of the Proportion of Defectives in a Manufacturing Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Viliam%20Makis">Viliam Makis</a>, <a href="https://publications.waset.org/abstracts/search?q=Farnoosh%20Naderkhani"> Farnoosh Naderkhani</a>, <a href="https://publications.waset.org/abstracts/search?q=Leila%20Jafari"> Leila Jafari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a model and an algorithm for the calculation of the optimal control limit, average cost, sample size, and the sampling interval for an optimal Bayesian chart to 
control the proportion of defective items produced, using a semi-Markov decision process approach. The traditional p-chart has been widely used for controlling the proportion of defectives in various kinds of production processes for many years. It is well known that traditional non-Bayesian charts are not optimal, but very few optimal Bayesian control charts have been developed in the literature, most of them considering a finite horizon. The objective of this paper is to develop a fast computational algorithm to obtain the optimal parameters of a Bayesian p-chart. The decision problem is formulated in a partially observable framework, and the developed algorithm is illustrated by a numerical example. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20control%20chart" title="Bayesian control chart">Bayesian control chart</a>, <a href="https://publications.waset.org/abstracts/search?q=semi-Markov%20decision%20process" title=" semi-Markov decision process"> semi-Markov decision process</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20control" title=" quality control"> quality control</a>, <a href="https://publications.waset.org/abstracts/search?q=partially%20observable%20process" title=" partially observable process"> partially observable process</a> </p> <a href="https://publications.waset.org/abstracts/49751/optimal-bayesian-control-of-the-proportion-of-defectives-in-a-manufacturing-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49751.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7551</span> The Effect of Institutions on Economic Growth: An Analysis Based on 
Bayesian Panel Data Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Anwar">Mohammad Anwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Shah%20Waliullah"> Shah Waliullah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated panel data regression models, using Bayesian and classical methods to study the impact of institutions on economic growth with data from 1990-2014, focusing on developing countries. Under both the classical and Bayesian methodologies, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used, with a normal-gamma prior for the panel data models. The analysis was done with the WinBUGS14 software. The results showed that the panel data models are valid under the Bayesian methodology, and all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effect model is the best model in the Bayesian estimation of panel data models: it has the lowest standard error compared to the other models. 
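A rough sketch of the conjugate normal-gamma machinery behind such panel models (fixed effects can be encoded as dummy columns in `X`); the hyperparameter values are assumptions for illustration, and the study itself used WinBUGS14 rather than this closed-form update.

```python
import numpy as np

def nig_posterior(X, y, m0, V0, a0, b0):
    """Conjugate normal-inverse-gamma posterior update for the linear
    model y = X b + e, e ~ N(0, sigma^2). Returns posterior mean/cov of b
    and the updated inverse-gamma parameters for sigma^2."""
    V0i = np.linalg.inv(V0)
    Vn = np.linalg.inv(V0i + X.T @ X)          # posterior covariance scale
    mn = Vn @ (V0i @ m0 + X.T @ y)             # posterior mean of b
    an = a0 + len(y) / 2.0
    bn = b0 + 0.5 * (y @ y + m0 @ V0i @ m0 - mn @ np.linalg.inv(Vn) @ mn)
    return mn, Vn, an, bn
```

With a vague prior (large `V0`), the posterior mean collapses to the classical OLS estimate, which is why the Bayesian and classical panel estimates in studies like this one tend to agree when priors are weak.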
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20approach" title="Bayesian approach">Bayesian approach</a>, <a href="https://publications.waset.org/abstracts/search?q=common%20effect" title=" common effect"> common effect</a>, <a href="https://publications.waset.org/abstracts/search?q=fixed%20effect" title=" fixed effect"> fixed effect</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20effect" title=" random effect"> random effect</a>, <a href="https://publications.waset.org/abstracts/search?q=Dynamic%20Random%20Effect%20Model" title=" Dynamic Random Effect Model"> Dynamic Random Effect Model</a> </p> <a href="https://publications.waset.org/abstracts/161692/the-effect-of-institutions-on-economic-growth-an-analysis-based-on-bayesian-panel-data-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7550</span> Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Radhika%20Ranjan%20Roy">Radhika Ranjan Roy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deployment of machine learning (ML)/deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed by low values of performance metrics (e.g., recall, precision, and f₁-score). 
When datasets are imbalanced, which is the usual case for communications networks, performance tends to degrade further. Reducing the dimensionality of the feature sets to increase performance is also a difficult problem. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we investigate the Mahalanobis binary classifier for increasing cyberattack detection performance over communications networks as a proof of concept. We also find that the high-dimensional information in intermediate features, which is underutilized for classification tasks in ML/DL algorithms, is the main contributor to the improved performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, the Mahalanobis distance (MD) classifier offers uniform results for precision, recall, and f₁-score on the unbalanced and sparse NSL-KDD dataset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahalanobis%20distance" title="Mahalanobis distance">Mahalanobis distance</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=NS-KDD" title=" NS-KDD"> NS-KDD</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20intrinsic%20dimensionality" title=" local intrinsic dimensionality"> local intrinsic dimensionality</a>, <a href="https://publications.waset.org/abstracts/search?q=chi-square" title=" chi-square"> chi-square</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20semi-definite" title=" positive semi-definite"> positive semi-definite</a>, <a 
href="https://publications.waset.org/abstracts/search?q=area%20under%20the%20curve" title=" area under the curve"> area under the curve</a> </p> <a href="https://publications.waset.org/abstracts/161865/supervisedunsupervised-mahalanobis-algorithm-for-improving-performance-for-cyberattack-detection-over-communications-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161865.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7549</span> Sparse-View CT Reconstruction Based on Nonconvex L1 &minus; L2 Regularizations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Pour%20Yazdanpanah">Ali Pour Yazdanpanah</a>, <a href="https://publications.waset.org/abstracts/search?q=Farideh%20Foroozandeh%20Shahraki"> Farideh Foroozandeh Shahraki</a>, <a href="https://publications.waset.org/abstracts/search?q=Emma%20Regentova"> Emma Regentova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Reconstruction from sparse-view projections is one of the most important problems in computed tomography (CT), where obtaining a large number of projections is not always available or feasible. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous and non-smooth regularization model. 
The CT reconstruction is formulated as a nonconvex constrained L1 &minus; L2 minimization problem and solved via the difference-of-convex algorithm (DCA) and the alternating direction method of multipliers (ADMM), which generates better results than the L0 or L1 regularizer in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show the benefits of using a nonconvex regularizer in sparse-view CT reconstruction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computed%20tomography" title="computed tomography">computed tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=non-convex" title=" non-convex"> non-convex</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse-view%20reconstruction" title=" sparse-view reconstruction"> sparse-view reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=L1-L2%20minimization" title=" L1-L2 minimization"> L1-L2 minimization</a>, <a href="https://publications.waset.org/abstracts/search?q=difference%20of%20convex%20functions" title=" difference of convex functions"> difference of convex functions</a> </p> <a href="https://publications.waset.org/abstracts/70473/sparse-view-ct-reconstruction-based-on-nonconvex-l1-l2-regularizations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70473.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7548</span> Bayesian Approach for Moving Extremes Ranked Set Sampling</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Said%20Ali%20Al-Hadhrami">Said Ali Al-Hadhrami</a>, <a href="https://publications.waset.org/abstracts/search?q=Amer%20Ibrahim%20Al-Omari"> Amer Ibrahim Al-Omari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, Bayesian estimation for the mean of the exponential distribution is considered using Moving Extremes Ranked Set Sampling (MERSS). Three priors are used: Jeffreys, conjugate, and constant, under both MERSS and Simple Random Sampling (SRS). Some properties of the proposed estimators are investigated. It is found that the suggested estimators using MERSS are more efficient than their counterparts based on SRS. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian" title="Bayesian">Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=efficiency" title=" efficiency"> efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20extreme%20ranked%20set%20sampling" title=" moving extreme ranked set sampling"> moving extreme ranked set sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=ranked%20set%20sampling" title=" ranked set sampling"> ranked set sampling</a> </p> <a href="https://publications.waset.org/abstracts/30733/bayesian-approach-for-moving-extremes-ranked-set-sampling" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30733.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">514</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7547</span> A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siavash%20Eftekharifar">Siavash Eftekharifar</a>, <a href="https://publications.waset.org/abstracts/search?q=Tohid%20Yousefi%20Rezaii"> Tohid Yousefi Rezaii</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Shamsi"> Mahdi Shamsi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to exploit the compressed sensing (CS) method in order to model and compress electrocardiogram (ECG) signals at a high compression ratio. To obtain a sparse representation of the ECG signals, a suitable basis matrix with Gaussian kernels, which are shown to fit ECG signals well, is first constructed. Then the sparse model is extracted by applying an optimization technique. Finally, CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also performed to demonstrate the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal, and the Mean Square Error (MSE) is calculated to evaluate the precision of the proposed compression method. 
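A toy sketch of the pipeline in this abstract: a basis matrix of Gaussian kernels plus a greedy (OMP-style) recovery step. The kernel width, the grid of centres, and the minimal OMP routine are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def gaussian_dictionary(n, centers, width):
    """Basis matrix whose columns are unit-norm Gaussian kernels."""
    t = np.arange(n)[:, None]
    D = np.exp(-0.5 * ((t - np.asarray(list(centers))[None, :]) / width) ** 2)
    return D / np.linalg.norm(D, axis=0)

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: greedily select k atoms of A,
    re-fitting the coefficients by least squares after each selection."""
    r, idx = y.astype(float).copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    s = np.zeros(A.shape[1])
    s[idx] = coef
    return s
```

Compression then amounts to measuring `y = Phi @ (D @ s)` with a random matrix `Phi`, and reconstruction to `omp(Phi @ D, y, k)` followed by `D @ s_hat`, mirroring the greedy reconstruction stage described above.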
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title="compressed sensing">compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG%20compression" title=" ECG compression"> ECG compression</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20kernel" title=" Gaussian kernel"> Gaussian kernel</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/31469/a-new-framework-for-ecg-signal-modeling-and-compression-based-on-compressed-sensing-theory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31469.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7546</span> Bayesian Reliability of Weibull Regression with Type-I Censored Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Al%20Omari%20Moahmmed%20Ahmed">Al Omari Moahmmed Ahmed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the Bayesian framework, we developed an approach using a non-informative prior with covariates and applied the Gauss quadrature method to estimate the covariate parameters and the reliability function of the Weibull regression distribution with Type-I censored data. Under maximum likelihood, the estimators are not available in closed form, although they can be obtained numerically using the Newton-Raphson method. 
The comparison criterion is the MSE, and the performance of these estimators is assessed via simulation over various sample sizes and several specific values of the shape parameter. The results show that the Bayesian method with a non-informative prior outperforms the Maximum Likelihood Estimator. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=non-informative%20prior" title="non-informative prior">non-informative prior</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20method" title=" Bayesian method"> Bayesian method</a>, <a href="https://publications.waset.org/abstracts/search?q=type-I%20censoring" title=" type-I censoring"> type-I censoring</a>, <a href="https://publications.waset.org/abstracts/search?q=Gauss%20quardature" title=" Gauss quadrature"> Gauss quadrature</a> </p> <a href="https://publications.waset.org/abstracts/18728/bayesian-reliability-of-weibull-regression-with-type-i-censored-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18728.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">504</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7545</span> Hybrid Structure Learning Approach for Assessing the Phosphate Laundries Impact </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emna%20Benmohamed">Emna Benmohamed</a>, <a href="https://publications.waset.org/abstracts/search?q=Hela%20Ltifi"> Hela Ltifi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Ben%20Ayed"> Mounir Ben Ayed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Bayesian Network (BN) is one of the most efficient classification methods. 
It is widely used in several fields (e.g., medical diagnostics, risk analysis, bioinformatics research). A BN is a probabilistic graphical model that provides a formalism for reasoning under uncertainty, and this classification method performs well at extracting new knowledge from data. Constructing the model consists of two phases: structure learning and parameter learning. For structure learning, the K2 algorithm is one of the representative data-driven algorithms, based on a score-and-search approach. In addition, integrating expert knowledge into the structure learning process yields higher accuracy. In this paper, we propose a hybrid approach combining an improvement of the K2 algorithm, called the K2 algorithm for Parents and Children search (K2PC), with an expert-driven method for learning the structure of a BN. Evaluation on well-known benchmarks shows that our K2PC algorithm performs better in terms of correct structure detection. A real application of our model demonstrates its efficiency in analyzing the impact of phosphate laundry effluents on the watershed in the Gafsa area (southwestern Tunisia). 
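The objective that K2-style score-and-search algorithms maximise is the Cooper-Herskovits (K2) metric; a minimal log-score sketch for discrete data follows. The function name and interface are illustrative assumptions; the abstract does not specify K2PC's exact scoring implementation.

```python
import numpy as np
from math import lgamma
from itertools import product

def k2_log_score(data, child, parents, arity):
    """Log Cooper-Herskovits (K2) score of `child` given a parent set.
    `data` is an integer array (rows = samples, columns = variables);
    `arity[v]` is the number of states of variable v."""
    r = arity[child]
    score = 0.0
    # product() with no iterables yields one empty config (no parents)
    for cfg in product(*[range(arity[p]) for p in parents]):
        mask = (np.all(data[:, parents] == cfg, axis=1)
                if parents else np.ones(len(data), bool))
        counts = np.bincount(data[mask, child], minlength=r)
        # log[ (r-1)! / (N_ij + r - 1)! * prod_k N_ijk! ]
        score += lgamma(r) - lgamma(counts.sum() + r)
        score += sum(lgamma(c + 1.0) for c in counts)
    return score
```

A K2-style search adds, for each node, the parent that most increases this score until no candidate helps; improvements such as K2PC refine how the candidate parents and children are searched, not the metric itself.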
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20network" title="Bayesian network">Bayesian network</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20knowledge" title=" expert knowledge"> expert knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=structure%20learning" title=" structure learning"> structure learning</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20water%20analysis" title=" surface water analysis"> surface water analysis</a> </p> <a href="https://publications.waset.org/abstracts/119016/hybrid-structure-learning-approach-for-assessing-the-phosphate-laundries-impact" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=252">252</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=253">253</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=sparse%20Bayesian%20learning&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted 
small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
