Search results for: sparsity

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="sparsity"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 25</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: sparsity</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Sparse Signal Restoration Algorithm Based on Piecewise Adaptive Backtracking Orthogonal Least Squares</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Linyu%20Wang">Linyu Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiahui%20Ma"> Jiahui Ma</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianhong%20Xiang"> Jianhong Xiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hanyu%20Jiang"> Hanyu Jiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> the traditional greedy compressed sensing algorithm needs to know the signal sparsity when recovering the signal, but the signal sparsity in the practical application can not be obtained as a priori information, and the recovery accuracy is low, which does not meet the needs of practical application. To solve this problem, this paper puts forward Piecewise adaptive backtracking orthogonal least squares algorithm. The algorithm is divided into two stages. In the first stage, the sparsity pre-estimation strategy is adopted, which can quickly approach the real sparsity and reduce time consumption. In the second stage iteration, the correction strategy and adaptive step size are used to accurately estimate the sparsity, and the backtracking idea is introduced to improve the accuracy of signal recovery. Through experimental simulation, the algorithm can accurately recover the estimated signal with fewer iterations when the sparsity is unknown. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title="compressed sensing">compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=greedy%20algorithm" title=" greedy algorithm"> greedy algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20square%20method" title=" least square method"> least square method</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20reconstruction" title=" adaptive reconstruction"> adaptive reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/161616/sparse-signal-restoration-algorithm-based-on-piecewise-adaptive-backtracking-orthogonal-least-squares" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161616.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Sparsity Order Selection and Denoising in Compressed Sensing Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Shamsi">Mahdi Shamsi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tohid%20Yousefi%20Rezaii"> Tohid Yousefi Rezaii</a>, <a href="https://publications.waset.org/abstracts/search?q=Siavash%20Eftekharifar"> Siavash Eftekharifar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Compressed sensing (CS) is a new powerful mathematical theory concentrating on sparse signals which is widely used in signal processing. The main idea is to sense sparse signals by far fewer measurements than the Nyquist sampling rate, but the reconstruction process becomes nonlinear and more complicated. Common dilemma in sparse signal recovery in CS is the lack of knowledge about sparsity order of the signal, which can be viewed as model order selection procedure. In this paper, we address the problem of sparsity order estimation in sparse signal recovery. This is of main interest in situations where the signal sparsity is unknown or the signal to be recovered is approximately sparse. It is shown that the proposed method also leads to some kind of signal denoising, where the observations are contaminated with noise. Finally, the performance of the proposed approach is evaluated in different scenarios and compared to an existing method, which shows the effectiveness of the proposed method in terms of order selection as well as denoising. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title="compressed sensing">compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20denoising" title=" data denoising"> data denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20order%20selection" title=" model order selection"> model order selection</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/31470/sparsity-order-selection-and-denoising-in-compressed-sensing-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31470.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">483</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Elrewainy">Ahmed Elrewainy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mixing in the hyperspectral imaging occurs due to the low spatial resolutions of the used cameras. The existing pure materials &ldquo;endmembers&rdquo; in the scene share the spectra pixels with different amounts called &ldquo;abundances&rdquo;. Unmixing of the data cube is an important task to know the present endmembers in the cube for the analysis of these images. Unsupervised unmixing is done with no information about the given data cube. Sparsity is one of the recent approaches used in the source recovery or unmixing techniques. The <em>l<sub>1</sub></em>-norm optimization problem &ldquo;basis pursuit&rdquo; could be used as a sparsity-based approach to solve this unmixing problem where the endmembers is assumed to be sparse in an appropriate domain known as dictionary. This optimization problem is solved using proximal method &ldquo;iterative thresholding&rdquo;. The <em>l<sub>1</sub></em>-norm basis pursuit optimization problem as a sparsity-based unmixing technique was used to unmix real and synthetic hyperspectral data cubes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=basis%20pursuit" title="basis pursuit">basis pursuit</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20source%20separation" title=" blind source separation"> blind source separation</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20imaging" title=" hyperspectral imaging"> hyperspectral imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20unmixing" title=" spectral unmixing"> spectral unmixing</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelets" title=" wavelets"> wavelets</a> </p> <a href="https://publications.waset.org/abstracts/74582/sparsity-based-unsupervised-unmixing-of-hyperspectral-imaging-data-using-basis-pursuit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74582.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaan%20Khosla">Shaan Khosla</a>, <a href="https://publications.waset.org/abstracts/search?q=Jon%20Krohn"> Jon Krohn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies for a candidate, and compile a set of similar candidates for any given candidate. While useful to create these models, validating their accuracy in a recommendation context is difficult due to a sparsity of data. In this report, we use network graph data to generate useful representations for candidates and vacancies. We use candidates and vacancies as network nodes and designate a bi-directional link between them based on the candidate interviewing for the vacancy. After using node2vec, the embeddings are used to construct a validation dataset with a ranked order, which will help validate new recommender systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AI" title="AI">AI</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=NLP" title=" NLP"> NLP</a>, <a href="https://publications.waset.org/abstracts/search?q=recruiting" title=" recruiting"> recruiting</a> </p> <a href="https://publications.waset.org/abstracts/153263/talent-to-vec-using-network-graphs-to-validate-models-with-data-sparsity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fanqiang%20Kong">Fanqiang Kong</a>, <a href="https://publications.waset.org/abstracts/search?q=Chending%20Bian"> Chending Bian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint-sparsity is the first property of the abundances, which assumes the adjacent pixels can be expressed as different linear combinations of same materials. The second property is rank-deficiency where the number of endmembers participating in hyperspectral data is very small compared with the dimensionality of spectral library, which means that the abundances matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined <em>l<sub>2,p</sub>-</em>norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms the state-of-the-art algorithms with a better spectral unmixing accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20unmixing" title="hyperspectral unmixing">hyperspectral unmixing</a>, <a href="https://publications.waset.org/abstracts/search?q=joint-sparse" title=" joint-sparse"> joint-sparse</a>, <a href="https://publications.waset.org/abstracts/search?q=low-rank%20representation" title=" low-rank representation"> low-rank representation</a>, <a href="https://publications.waset.org/abstracts/search?q=abundance%20estimation" title=" abundance estimation"> abundance estimation</a> </p> <a href="https://publications.waset.org/abstracts/71439/sparse-unmixing-of-hyperspectral-data-by-exploiting-joint-sparsity-and-rank-deficiency" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">261</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Building Scalable and Accurate Hybrid Kernel Mapping Recommender</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hina%20Iqbal">Hina Iqbal</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustansar%20Ali%20Ghazanfar"> Mustansar Ali Ghazanfar</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandor%20Szedmak"> Sandor Szedmak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recommender systems uses artificial intelligence practices for filtering obscure information and can predict if a user likes a specified item. Kernel mapping Recommender systems have been proposed which are accurate and state-of-the-art algorithms and resolve recommender system’s design objectives such as; long tail, cold-start, and sparsity. The aim of research is to propose hybrid framework that can efficiently integrate different versions— namely item-based and user-based KMR— of KMR algorithm. We have proposed various heuristic algorithms that integrate different versions of KMR (into a unified framework) resulting in improved accuracy and elimination of problems associated with conventional recommender system. We have tested our system on publically available movies dataset and benchmark with KMR. The results (in terms of accuracy, precision, recall, F1 measure and ROC metrics) reveal that the proposed algorithm is quite accurate especially under cold-start and sparse scenarios. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kernel%20Mapping%20Recommender%20Systems" title="Kernel Mapping Recommender Systems">Kernel Mapping Recommender Systems</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20recommender%20systems" title=" hybrid recommender systems"> hybrid recommender systems</a>, <a href="https://publications.waset.org/abstracts/search?q=cold%20start" title=" cold start"> cold start</a>, <a href="https://publications.waset.org/abstracts/search?q=sparsity" title=" sparsity"> sparsity</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20tail" title=" long tail"> long tail</a> </p> <a href="https://publications.waset.org/abstracts/59766/building-scalable-and-accurate-hybrid-kernel-mapping-recommender" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59766.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Unsupervised Learning of Spatiotemporally Coherent Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ross%20Goroshin">Ross Goroshin</a>, <a href="https://publications.waset.org/abstracts/search?q=Joan%20Bruna"> Joan Bruna</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Tompson"> Jonathan Tompson</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Eigen"> David Eigen</a>, <a href="https://publications.waset.org/abstracts/search?q=Yann%20LeCun"> Yann LeCun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20clustering" title=" pattern clustering"> pattern clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=pooling" title=" pooling"> pooling</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification "> classification </a> </p> <a href="https://publications.waset.org/abstracts/29488/unsupervised-learning-of-spatiotemporally-coherent-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29488.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">456</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Sparse Principal Component Analysis: A Least Squares Approximation Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Giovanni%20Merola">Giovanni Merola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings we propose a branch-and-bound search and an iterative elimination algorithm. This last algorithm finds sparse solutions with large loadings and can be run without specifying the cardinality of the loadings and the number of components to compute in advance. We give thorough comparisons with the existing sparse PCA methods and several examples on real datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SPCA" title="SPCA">SPCA</a>, <a href="https://publications.waset.org/abstracts/search?q=uncorrelated%20components" title=" uncorrelated components"> uncorrelated components</a>, <a href="https://publications.waset.org/abstracts/search?q=branch-and-bound" title=" branch-and-bound"> branch-and-bound</a>, <a href="https://publications.waset.org/abstracts/search?q=backward%20elimination" title=" backward elimination"> backward elimination</a> </p> <a href="https://publications.waset.org/abstracts/14630/sparse-principal-component-analysis-a-least-squares-approximation-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14630.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> On Direct Matrix Factored Inversion via Broyden&#039;s Updates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adel%20Mohsen">Adel Mohsen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A direct method based on the good Broyden's updates for evaluating the inverse of a nonsingular square matrix of full rank and solving related system of linear algebraic equations is studied. For a matrix A of order n whose LU-decomposition is A = LU, the multiplication count is O (n3). This includes the evaluation of the LU-decompositions of the inverse, the lower triangular decomposition of A as well as a “reduced matrix inverse”. If an explicit value of the inverse is not needed the order reduces to O (n3/2) to compute to compute inv(U) and the reduced inverse. For a symmetric matrix only O (n3/3) operations are required to compute inv(L) and the reduced inverse. An example is presented to demonstrate the capability of using the reduced matrix inverse in treating ill-conditioned systems. Besides the simplicity of Broyden's update, the method provides a mean to exploit the possible sparsity in the matrix and to derive a suitable preconditioner. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Broyden%27s%20updates" title="Broyden&#039;s updates">Broyden&#039;s updates</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20inverse" title=" matrix inverse"> matrix inverse</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20factorization" title=" inverse factorization"> inverse factorization</a>, <a href="https://publications.waset.org/abstracts/search?q=solution%20of%20linear%20algebraic%20equations" title=" solution of linear algebraic equations"> solution of linear algebraic equations</a>, <a href="https://publications.waset.org/abstracts/search?q=ill-conditioned%20matrices" title=" ill-conditioned matrices"> ill-conditioned matrices</a>, <a href="https://publications.waset.org/abstracts/search?q=preconditioning" title=" preconditioning"> preconditioning</a> </p> <a href="https://publications.waset.org/abstracts/22126/on-direct-matrix-factored-inversion-via-broydens-updates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">479</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Scalable Learning of Tree-Based Models on Sparsely Representable Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fares%20Hedayatit">Fares Hedayatit</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnauld%20Joly"> Arnauld Joly</a>, <a href="https://publications.waset.org/abstracts/search?q=Panagiotis%20Papadimitriou"> Panagiotis Papadimitriou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many machine learning tasks such as text annotation usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude and it has been incorporated in the current version of scikit-learn.org, the most popular open source Python machine learning library. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=sparsely%20representable%20data" title=" sparsely representable data"> sparsely representable data</a>, <a href="https://publications.waset.org/abstracts/search?q=tree-based%20models" title=" tree-based models"> tree-based models</a>, <a href="https://publications.waset.org/abstracts/search?q=scalable%20learning" title=" scalable learning"> scalable learning</a> </p> <a href="https://publications.waset.org/abstracts/52853/scalable-learning-of-tree-based-models-on-sparsely-representable-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52853.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">263</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adeep%20Hande">Adeep Hande</a>, <a href="https://publications.waset.org/abstracts/search?q=Shubham%20Agarwal"> Shubham Agarwal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, and their multilingual model achieved an F1 score of 0.83. Furthermore, the use of pseudolabeling significantly improved the performance of the BERT-based models, with the best results achieved using the pseudolabeling technique. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=large%20language%20models" title="large language models">large language models</a>, <a href="https://publications.waset.org/abstracts/search?q=semi-supervised%20learning" title=" semi-supervised learning"> semi-supervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=sexism%20detection" title=" sexism detection"> sexism detection</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20sparsity" title=" data sparsity"> data sparsity</a> </p> <a href="https://publications.waset.org/abstracts/171717/bridging-the-data-gap-for-sexism-detection-in-twitter-a-semi-supervised-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> A Quantitative Evaluation of Text Feature Selection Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20S.%20Harish">B. S. Harish</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20B.%20Revanasiddappa"> M. B. Revanasiddappa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to rapid growth of text documents in digital form, automated text classification has become an important research in the last two decades. The major challenge of text document representations are high dimension, sparsity, volume and semantics. Since the terms are only features that can be found in documents, selection of good terms (features) plays an very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency and accuracy. In this paper, we present a quantitative analysis of most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tfidf ), Mutual Information (MI), Information Gain (IG), CHISquare (x2), Term Frequency-Relevance Frequency (tfrf ), Term Strength (TS), Ambiguity Measure (AM) and Symbolic Feature Selection (SFS) to classify text documents. We evaluated all the feature selection methods on standard datasets like 20 Newsgroups, 4 University dataset and Reuters-21578. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classifiers" title="classifiers">classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification" title=" text classification "> text classification </a> </p> <a href="https://publications.waset.org/abstracts/28926/a-quantitative-evaluation-of-text-feature-selection-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Efficient Ground Targets Detection Using Compressive Sensing in Ground-Based Synthetic-Aperture Radar (SAR) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gherbi%20Nabil">Gherbi Nabil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection of ground targets in SAR radar images is an important area for radar information processing. In the literature, various algorithms have been discussed in this context. However, most of them are of low robustness and accuracy. To this end, we discuss target detection in SAR images based on compressive sensing. Firstly, traditional SAR image target detection algorithms are discussed, and their limitations are highlighted. Secondly, a compressive sensing method is proposed based on the sparsity of SAR images. Next, the detection problem is solved using Multiple Measurements Vector configuration. Furthermore, a robust Alternating Direction Method of Multipliers (ADMM) is developed to solve the optimization problem. Finally, the detection results obtained using raw complex data are presented. Experimental results on real SAR images have verified the effectiveness of the proposed algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressive%20sensing" title="compressive sensing">compressive sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=raw%20complex%20data" title=" raw complex data"> raw complex data</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar" title=" synthetic aperture radar"> synthetic aperture radar</a>, <a href="https://publications.waset.org/abstracts/search?q=ADMM" title=" ADMM"> ADMM</a> </p> <a href="https://publications.waset.org/abstracts/191958/efficient-ground-targets-detection-using-compressive-sensing-in-ground-based-synthetic-aperture-radar-sar-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">20</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Sentiment Classification of Documents</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Swarnadip%20Ghosh">Swarnadip Ghosh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sentiment Analysis is the process of detecting the contextual polarity of text. In other words, it determines whether a piece of writing is positive, negative or neutral.Sentiment analysis of documents holds great importance in today's world, when numerous information is stored in databases and in the world wide web. An efficient algorithm to illicit such information, would be beneficial for social, economic as well as medical purposes. In this project, we have developed an algorithm to classify a document into positive or negative. Using our algorithm, we obtained a feature set from the data, and classified the documents based on this feature set. It is important to note that, in the classification, we have not used the independence assumption, which is considered by many procedures like the Naive Bayes. This makes the algorithm more general in scope. Moreover, because of the sparsity and high dimensionality of such data, we did not use empirical distribution for estimation, but developed a method by finding degree of close clustering of the data points. We have applied our algorithm on a movie review data set obtained from IMDb and obtained satisfactory results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sentiment" title="sentiment">sentiment</a>, <a href="https://publications.waset.org/abstracts/search?q=Run%27s%20Test" title=" Run&#039;s Test"> Run&#039;s Test</a>, <a href="https://publications.waset.org/abstracts/search?q=cross%20validation" title=" cross validation"> cross validation</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20dimensional%20pmf%20estimation" title=" higher dimensional pmf estimation"> higher dimensional pmf estimation</a> </p> <a href="https://publications.waset.org/abstracts/19401/sentiment-classification-of-documents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19401.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> HR MRI CS Based Image Reconstruction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Krzysztof%20Malczewski">Krzysztof Malczewski</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Magnetic Resonance Imaging (MRI) reconstruction algorithm using compressed sensing is presented in this paper. It is exhibited that the offered approach improves MR images spatial resolution in circumstances when highly undersampled k-space trajectories are applied. Compressed Sensing (CS) aims at signal and images reconstructing from significantly fewer measurements than were conventionally assumed necessary. Magnetic Resonance Imaging (MRI) is a fundamental medical imaging method struggles with an inherently slow data acquisition process. The use of CS to MRI has the potential for significant scan time reductions, with visible benefits for patients and health care economics. In this study the objective is to combine super-resolution image enhancement algorithm with CS framework benefits to achieve high resolution MR output image. Both methods emphasize on maximizing image sparsity on known sparse transform domain and minimizing fidelity. The presented algorithm considers the cardiac and respiratory movements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=super-resolution" title="super-resolution">super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI" title=" MRI"> MRI</a>, <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title=" compressed sensing"> compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse-sense" title=" sparse-sense"> sparse-sense</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a> </p> <a href="https://publications.waset.org/abstracts/6021/hr-mri-cs-based-image-reconstruction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6021.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">430</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eric%20Huang">Eric Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Coleman%20DeLude"> Coleman DeLude</a>, <a href="https://publications.waset.org/abstracts/search?q=Justin%20Romberg"> Justin Romberg</a>, <a href="https://publications.waset.org/abstracts/search?q=Saibal%20Mukhopadhyay"> Saibal Mukhopadhyay</a>, <a href="https://publications.waset.org/abstracts/search?q=Madhavan%20Swaminathan"> Madhavan Swaminathan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is often times cumbersome leading to large storage requirement. This paper proposed a spherical harmonic based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least square problem with a special sparsity constraint. This paper solves this problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RADAR" title="RADAR">RADAR</a>, <a href="https://publications.waset.org/abstracts/search?q=RCS" title=" RCS"> RCS</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance%20computing" title=" high performance computing"> high performance computing</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20scatterer%20model" title=" point scatterer model"> point scatterer model</a> </p> <a href="https://publications.waset.org/abstracts/134722/spherical-harmonic-based-monostatic-anisotropic-point-scatterer-model-for-radar-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134722.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> A Transform Domain Function Controlled VSSLMS Algorithm for Sparse System Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cemil%20Turan">Cemil Turan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Shukri%20Salman"> Mohammad Shukri Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or if the system is sparse. We recently proposed a sparse transform domain LMS-type algorithm that uses a variable step-size for a sparse system identification. The proposed algorithm provided high performance even if the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter tap which is also a critical issue for standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments. Moreover, the convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to different algorithms in a sparse system identification setting of different sparsity levels and different number of filter taps. Simulations have shown that the proposed algorithm has prominent performance compared to the other algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20filtering" title="adaptive filtering">adaptive filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20system%20identification" title=" sparse system identification"> sparse system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=TD-LMS%20algorithm" title=" TD-LMS algorithm"> TD-LMS algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=VSSLMS%20algorithm" title=" VSSLMS algorithm"> VSSLMS algorithm</a> </p> <a href="https://publications.waset.org/abstracts/72335/a-transform-domain-function-controlled-vsslms-algorithm-for-sparse-system-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Novel Recommender Systems Using Hybrid CF and Social Network Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyoung-Jae%20Kim">Kyoung-Jae Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Collaborative Filtering (CF) is a popular technique for the personalization in the E-commerce domain to reduce information overload. In general, CF provides recommending items list based on other similar users’ preferences from the user-item matrix and predicts the focal user’s preference for particular items by using them. Many recommender systems in real-world use CF techniques because it’s excellent accuracy and robustness. However, it has some limitations including sparsity problems and complex dimensionality in a user-item matrix. In addition, traditional CF does not consider the emotional interaction between users. In this study, we propose recommender systems using social network and singular value decomposition (SVD) to alleviate some limitations. The purpose of this study is to reduce the dimensionality of data set using SVD and to improve the performance of CF by using emotional information from social network data of the focal user. In this study, we test the usability of hybrid CF, SVD and social network information model using the real-world data. The experimental results show that the proposed model outperforms conventional CF models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recommender%20systems" title="recommender systems">recommender systems</a>, <a href="https://publications.waset.org/abstracts/search?q=collaborative%20filtering" title=" collaborative filtering"> collaborative filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network%20information" title=" social network information"> social network information</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/36626/novel-recommender-systems-using-hybrid-cf-and-social-network-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36626.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> System Identification in Presence of Outliers </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chao%20Yu">Chao Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Qing-Guo%20Wang"> Qing-Guo Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Zhang"> Dan Zhang </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank, sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while keeping the solution matrix structure and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating low rank property of the involved matrix. The proposed method can make exact detection of outliers in case of no or little noise in output observations. In case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers and so-filtered data enables successful outlier detection with the proposed method while the existing filtering methods fail. Use of recovered “clean” data from the proposed method can give much better parameter estimation compared with that based on the raw data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=outlier%20detection" title="outlier detection">outlier detection</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20decomposition" title=" matrix decomposition"> matrix decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=low-rank%20matrix" title=" low-rank matrix"> low-rank matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=sparsity" title=" sparsity"> sparsity</a>, <a href="https://publications.waset.org/abstracts/search?q=semidefinite%20programming" title=" semidefinite programming"> semidefinite programming</a>, <a href="https://publications.waset.org/abstracts/search?q=interior-point%20methods" title=" interior-point methods"> interior-point methods</a>, <a href="https://publications.waset.org/abstracts/search?q=denoising" title=" denoising"> denoising</a> </p> <a href="https://publications.waset.org/abstracts/13363/system-identification-in-presence-of-outliers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13363.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">307</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Neural Machine Translation for Low-Resource African Languages: Benchmarking State-of-the-Art Transformer for Wolof</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheikh%20Bamba%20Dione">Cheikh Bamba Dione</a>, <a href="https://publications.waset.org/abstracts/search?q=Alla%20Lo"> Alla Lo</a>, <a href="https://publications.waset.org/abstracts/search?q=Elhadji%20Mamadou%20Nguer"> Elhadji Mamadou Nguer</a>, <a href="https://publications.waset.org/abstracts/search?q=Siley%20O.%20Ba"> Siley O. Ba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose two neural machine translation (NMT) systems (French-to-Wolof and Wolof-to-French) based on sequence-to-sequence with attention and transformer architectures. We trained our models on a parallel French-Wolof corpus of about 83k sentence pairs. Because of the low-resource setting, we experimented with advanced methods for handling data sparsity, including subword segmentation, back translation, and the copied corpus method. We evaluate the models using the BLEU score and find that transformer outperforms the classic seq2seq model in all settings, in addition to being less sensitive to noise. In general, the best scores are achieved when training the models on word-level-based units. For subword-level models, using back translation proves to be slightly beneficial in low-resource (WO) to high-resource (FR) language translation for the transformer (but not for the seq2seq) models. A slight improvement can also be observed when injecting copied monolingual text in the target language. Moreover, combining the copied method data with back translation leads to a substantial improvement of the translation quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=backtranslation" title="backtranslation">backtranslation</a>, <a href="https://publications.waset.org/abstracts/search?q=low-resource%20language" title=" low-resource language"> low-resource language</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20machine%20translation" title=" neural machine translation"> neural machine translation</a>, <a href="https://publications.waset.org/abstracts/search?q=sequence-to-sequence" title=" sequence-to-sequence"> sequence-to-sequence</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolof" title=" Wolof"> Wolof</a> </p> <a href="https://publications.waset.org/abstracts/135110/neural-machine-translation-for-low-resource-african-languages-benchmarking-state-of-the-art-transformer-for-wolof" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135110.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> PET Image Resolution Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Krzysztof%20Malczewski">Krzysztof Malczewski</a> </p> <p class="card-text"><strong>Abstract:</strong></p> PET is widely applied scanning procedure in medical imaging based research. It delivers measurements of functioning in distinct areas of the human brain while the patient is comfortable, conscious and alert. This article presents the new compression sensing based super-resolution algorithm for improving the image resolution in clinical Positron Emission Tomography (PET) scanners. The issue of motion artifacts is well known in Positron Emission Tomography (PET) studies as its side effect. The PET images are being acquired over a limited period of time. As the patients cannot hold breath during the PET data gathering, spatial blurring and motion artefacts are the usual result. These may lead to wrong diagnosis. It is shown that the presented approach improves PET spatial resolution in cases when Compressed Sensing (CS) sequences are used. Compressed Sensing (CS) aims at signal and images reconstructing from significantly fewer measurements than were traditionally thought necessary. The application of CS to PET has the potential for significant scan time reductions, with visible benefits for patients and health care economics. In this study the goal is to combine super-resolution image enhancement algorithm with CS framework to achieve high resolution PET output. Both methods emphasize on maximizing image sparsity on known sparse transform domain and minimizing fidelity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PET" title="PET">PET</a>, <a href="https://publications.waset.org/abstracts/search?q=super-resolution" title=" super-resolution"> super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/6017/pet-image-resolution-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdolghani%20Ebrahimi">Abdolghani Ebrahimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Klabjan"> Diego Klabjan</a>, <a href="https://publications.waset.org/abstracts/search?q=Chenxi%20Ge"> Chenxi Ge</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniela%20Ladner"> Daniela Ladner</a>, <a href="https://publications.waset.org/abstracts/search?q=Parker%20Stride"> Parker Stride</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis are collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedure and laboratory tests is being analyzed. A temporal pattern mining technic called Frequent Subgraph Mining (FSM) is being used. Model for End-stage liver disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement of the area under the curve (AUC). The FSM technic itself does not improve the model significantly, but FSM, together with a machine learning technique called an ensemble, further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk for higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods on predicting one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurate than the MELD score. We have also tested the potential of FSM and provided a new perspective of the importance of clinical features. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=liver%20cirrhosis" title=" liver cirrhosis"> liver cirrhosis</a>, <a href="https://publications.waset.org/abstracts/search?q=subgraph%20mining" title=" subgraph mining"> subgraph mining</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20learning" title=" supervised learning"> supervised learning</a> </p> <a href="https://publications.waset.org/abstracts/137686/cirrhosis-mortality-prediction-as-classification-using-frequent-subgraph-mining" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137686.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Bryan">T. Bryan </a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Kepuska"> V. Kepuska</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Kostnaic"> I. Kostnaic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to have higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms. This process identifies segments of audio data that are locally coherent with the seed atoms. Envelope samples are extracted by identifying locally coherent audio data segments with Gabor or gammatone seed atoms, found by matching pursuit. The envelope samples are formed by taking the kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) verses data compression curves are generated for the seed atoms as well as the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare of the output of the SAE with the envelope samples that produced them. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the highest denoising basis vectors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sparse%20dictionary%20learning" title="sparse dictionary learning">sparse dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=autoencoder" title=" autoencoder"> autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20autoencoder" title=" sparse autoencoder"> sparse autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=basis%20vectors" title=" basis vectors"> basis vectors</a>, <a href="https://publications.waset.org/abstracts/search?q=atomic%20decomposition" title=" atomic decomposition"> atomic decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=envelope%20sampling" title=" envelope sampling"> envelope sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=envelope%20samples" title=" envelope samples"> envelope samples</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor" title=" Gabor"> Gabor</a>, <a href="https://publications.waset.org/abstracts/search?q=gammatone" title=" gammatone"> gammatone</a>, <a href="https://publications.waset.org/abstracts/search?q=matching%20pursuit" title=" matching pursuit"> matching pursuit</a> </p> <a href="https://publications.waset.org/abstracts/42586/atomic-decomposition-audio-data-compression-and-denoising-using-sparse-dictionary-feature-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> Partial Least Square Regression for High-Dimentional and High-Correlated Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Abdullah%20Alshahrani">Mohammed Abdullah Alshahrani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. 
Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20square%20regression" title="partial least square regression">partial least square regression</a>, <a href="https://publications.waset.org/abstracts/search?q=genetics%20data" title=" genetics data"> genetics data</a>, <a href="https://publications.waset.org/abstracts/search?q=negative%20filter%20factors" title=" negative filter factors"> negative filter factors</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20dimensional%20data" title=" high dimensional data"> high dimensional data</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20correlated%20data" title=" high correlated data"> high correlated data</a> </p> <a href="https://publications.waset.org/abstracts/185475/partial-least-square-regression-for-high-dimentional-and-high-correlated-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">49</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Event Data Representation Based on Time Stamp for Pedestrian Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuta%20Nakano">Yuta Nakano</a>, <a href="https://publications.waset.org/abstracts/search?q=Kozo%20Kajiwara"> Kozo Kajiwara</a>, <a href="https://publications.waset.org/abstracts/search?q=Atsushi%20Hori"> Atsushi Hori</a>, <a href="https://publications.waset.org/abstracts/search?q=Takeshi%20Fujita"> Takeshi Fujita</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wave of electric vehicles (EV), low-energy-consumption systems have become increasingly important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also known as an event sensor or neuromorphic vision sensor. This sensor has several attractive features, such as a high temporal resolution that can reach 1 Mframe/s and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes, so no signal is produced in areas without any intensity change. 
This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data are never produced in the first place. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal consists of an x-y coordinate, a polarity (+1 or -1), and a timestamp; it carries no intensity such as RGB values. Therefore, existing algorithms cannot be used directly, and a new processing algorithm has to be designed for DVS data. To bridge the difference in data format, most prior work builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNN) for object detection and recognition. However, even with such frames, it is still difficult to achieve good performance because of the missing intensity information. Although polarity is often used in place of RGB pixel intensity, polarity information alone is clearly not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expect this representation to capture the features of moving objects in particular, because the timestamps encode the direction and speed of motion. Using the proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop a surveillance application that detects persons around the car. We consider the DVS an ideal sensor for surveillance because it can run for a long time with low energy consumption in static scenes. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark's. 
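<p>A minimal sketch of such a timestamp-based frame representation is given below: events falling into a time window are accumulated into an image whose pixel values favour the most recent timestamps. The (x, y, polarity, timestamp) tuple layout and the per-pixel "keep the newest" rule are assumptions made for illustration, not the authors' exact format.</p> <pre><code>
import numpy as np

def timestamp_frame(events, height, width, t_start, t_end):
    """Build one frame from events already grouped into the window [t_start, t_end).

    Each event is a tuple (x, y, polarity, timestamp) with integer pixel coordinates.
    Recent events get values close to 1 and old events values close to 0, so the
    frame encodes direction and speed of motion rather than raw polarity alone.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    span = max(t_end - t_start, 1e-9)
    for x, y, _polarity, t in events:
        recency = (t - t_start) / span                               # normalized recency in [0, 1]
        frame[int(y), int(x)] = max(frame[int(y), int(x)], recency)  # keep the newest event
    return frame
</code></pre> <p>Frames built this way can then be stacked over successive windows and fed to a standard CNN detector in place of polarity-count frames.</p>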
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20vision%20sensor" title=" dynamic vision sensor"> dynamic vision sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20representation" title=" data representation"> data representation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20energy%20consumption" title=" low energy consumption"> low energy consumption</a> </p> <a href="https://publications.waset.org/abstracts/164424/event-data-representation-based-on-time-stamp-for-pedestrian-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164424.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">97</span> </span> </div> </div> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> 