
Search results for: gradient descent

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="gradient descent"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 822</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: gradient descent</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">822</span> Dynamic Measurement System Modeling with Machine Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Changqiao%20Wu">Changqiao Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Guoqing%20Ding"> Guoqing Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Chen"> Xin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, ways of modeling dynamic measurement systems are discussed. Specially, for linear system with single-input single-output, it could be modeled with shallow neural network. Then, gradient based optimization algorithms are used for searching the proper coefficients. Besides, method with normal equation and second order gradient descent are proposed to accelerate the modeling process, and ways of better gradient estimation are discussed. It shows that the mathematical essence of the learning objective is maximum likelihood with noises under Gaussian distribution. For conventional gradient descent, the mini-batch learning and gradient with momentum contribute to faster convergence and enhance model ability. Lastly, experimental results proved the effectiveness of second order gradient descent algorithm, and indicated that optimization with normal equation was the most suitable for linear dynamic models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20system%20modeling" title="dynamic system modeling">dynamic system modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=normal%20equation" title=" normal equation"> normal equation</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20order%20gradient%20descent" title=" second order gradient descent"> second order gradient descent</a> </p> <a href="https://publications.waset.org/abstracts/98265/dynamic-measurement-system-modeling-with-machine-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">821</span> Global Convergence of a Modified Three-Term Conjugate Gradient Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belloufi%20Mohammed">Belloufi Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sellami%20Badreddine"> Sellami Badreddine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with a new nonlinear modified three-term conjugate gradient algorithm for solving large-scale unstrained optimization problems. The search direction of the algorithms from this class has three terms and is computed as modifications of the classical conjugate gradient algorithms to satisfy both the descent and the conjugacy conditions. An example of three-term conjugate gradient algorithm from this class, as modifications of the classical and well known Hestenes and Stiefel or of the CG_DESCENT by Hager and Zhang conjugate gradient algorithms, satisfying both the descent and the conjugacy conditions is presented. Under mild conditions, we prove that the modified three-term conjugate gradient algorithm with Wolfe type line search is globally convergent. Preliminary numerical results show the proposed method is very promising. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unconstrained%20optimization" title="unconstrained optimization">unconstrained optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=three-term%20conjugate%20gradient" title=" three-term conjugate gradient"> three-term conjugate gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=sufficient%20descent%20property" title=" sufficient descent property"> sufficient descent property</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20search" title=" line search"> line search</a> </p> <a href="https://publications.waset.org/abstracts/41727/global-convergence-of-a-modified-three-term-conjugate-gradient-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41727.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">820</span> Steepest Descent Method with New Step Sizes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bib%20Paruhum%20Silalahi">Bib Paruhum Silalahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Djihad%20Wungguli"> Djihad Wungguli</a>, <a href="https://publications.waset.org/abstracts/search?q=Sugi%20Guritman"> Sugi Guritman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Steepest descent method is a simple gradient method for optimization. This method has a slow convergence in heading to the optimal solution, which occurs because of the zigzag form of the steps. Barzilai and Borwein modified this algorithm so that it performs well for problems with large dimensions. Barzilai and Borwein method results have sparked a lot of research on the method of steepest descent, including alternate minimization gradient method and Yuan method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modification results against the Barzilai and Borwein method, alternate minimization gradient method and Yuan method for quadratic function cases in terms of the iterations number and the running time. The average results indicate that the steepest descent method with the new step sizes provide good results for small dimensions and able to compete with the results of Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes have faster convergence compared to the other methods, especially for cases with large dimensions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=steepest%20descent" title="steepest descent">steepest descent</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20search" title=" line search"> line search</a>, <a href="https://publications.waset.org/abstracts/search?q=iteration" title=" iteration"> iteration</a>, <a href="https://publications.waset.org/abstracts/search?q=running%20time" title=" running time"> running time</a>, <a href="https://publications.waset.org/abstracts/search?q=unconstrained%20optimization" title=" unconstrained optimization"> unconstrained optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a> </p> <a href="https://publications.waset.org/abstracts/29734/steepest-descent-method-with-new-step-sizes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">540</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">819</span> A New Conjugate Gradient Method with Guaranteed Descent</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Sellami">B. Sellami</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Belloufi"> M. Belloufi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family of methods not only includes the already existing three practical nonlinear conjugate gradient methods, but also has other family of conjugate gradient methods as subfamily. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the two-parameter family of methods. The numerical results show that this method is efficient for the given test problems. In addition, the methods related to this family are uniformly discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unconstrained%20optimization" title="unconstrained optimization">unconstrained optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=conjugate%20gradient%20method" title=" conjugate gradient method"> conjugate gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20search" title=" line search"> line search</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20convergence" title=" global convergence"> global convergence</a> </p> <a href="https://publications.waset.org/abstracts/41734/a-new-conjugate-gradient-method-with-guaranteed-descent" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">818</span> Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhifeng%20Kong">Zhifeng Kong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective of over-fitting when the model has excessive parameters and have gained empirical success in various settings. While a number of theoretical works have been presented to demystify properties of such models, the convergence properties of such models are still far from being thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer partially over-parameterized fully connected networks with the Rectified Linear Unit activation via gradient descent. To our knowledge, this is the first theoretical work to understand convergence properties of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound of the widths of hidden layers and proved linear convergence rate of gradient descent. We also conducted experiments on synthetic and real-world datasets to validate our theory. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=over-parameterization" title="over-parameterization">over-parameterization</a>, <a href="https://publications.waset.org/abstracts/search?q=rectified%20linear%20units%20ReLU" title=" rectified linear units ReLU"> rectified linear units ReLU</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/118561/convergence-analysis-of-training-two-hidden-layer-partially-over-parameterized-relu-networks-via-gradient-descent" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118561.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">817</span> A New Class of Conjugate Gradient Methods Based on a Modified Search Direction for Unconstrained Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belloufi%20Mohammed">Belloufi Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sellami%20Badreddine"> Sellami Badreddine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conjugate gradient methods have played a special role for solving large scale optimization problems due to the simplicity of their iteration, convergence properties and their low memory requirements. In this work, we propose a new class of conjugate gradient methods which ensures sufficient descent. Moreover, we propose a new search direction with the Wolfe line search technique for solving unconstrained optimization problems, a global convergence result for general functions is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate that our proposed methods are preferable and in general superior to the classical conjugate gradient methods in terms of efficiency and robustness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unconstrained%20optimization" title="unconstrained optimization">unconstrained optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=conjugate%20gradient%20method" title=" conjugate gradient method"> conjugate gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=sufficient%20descent%20property" title=" sufficient descent property"> sufficient descent property</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20comparisons" title=" numerical comparisons"> numerical comparisons</a> </p> <a href="https://publications.waset.org/abstracts/41725/a-new-class-of-conjugate-gradient-methods-based-on-a-modified-search-direction-for-unconstrained-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">816</span> An Accelerated Stochastic Gradient Method with Momentum</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Liang%20Liu">Liang Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaopeng%20Luo"> Xiaopeng Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an accelerated stochastic gradient method with momentum. The momentum term is the weighted average of generated gradients, and the weights decay inverse proportionally with the iteration times. Stochastic gradient descent with momentum (SGDM) uses weights that decay exponentially with the iteration times to generate the momentum term. Using exponential decay weights, variants of SGDM with inexplicable and complicated formats have been proposed to achieve better performance. However, the momentum update rules of our method are as simple as that of SGDM. We provide theoretical convergence analyses, which show both the exponential decay weights and our inverse proportional decay weights can limit the variance of the parameter moving directly to a region. Experimental results show that our method works well with many practical problems and outperforms SGDM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=exponential%20decay%20rate%20weight" title="exponential decay rate weight">exponential decay rate weight</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20proportional%20decay%20rate%20weight" title=" inverse proportional decay rate weight"> inverse proportional decay rate weight</a>, <a href="https://publications.waset.org/abstracts/search?q=momentum" title=" momentum"> momentum</a> </p> <a href="https://publications.waset.org/abstracts/133507/an-accelerated-stochastic-gradient-method-with-momentum" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133507.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">815</span> MapReduce Logistic Regression Algorithms with RHadoop</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Byung%20Ho%20Jung">Byung Ho Jung</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Hoon%20Lim"> Dong Hoon Lim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Logistic regression is a statistical method for analyzing a dataset in which there are one or more independent variables that determine an outcome. Logistic regression is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating parameters in the logistic regression based on MapReduce framework with RHadoop that integrates R and Hadoop environment applicable to large scale data. There exist three learning algorithms for logistic regression, namely Gradient descent method, Cost minimization method and Newton-Rhapson's method. The Newton-Rhapson's method does not require a learning rate, while gradient descent and cost minimization methods need to manually pick a learning rate. The experimental results demonstrated that our learning algorithms using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Rhapson's method with gradient descent and cost minimization methods. The results showed that our newton's method appeared to be the most robust to all data tested. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=logistic%20regression" title=" logistic regression"> logistic regression</a>, <a href="https://publications.waset.org/abstracts/search?q=MapReduce" title=" MapReduce"> MapReduce</a>, <a href="https://publications.waset.org/abstracts/search?q=RHadoop" title=" RHadoop"> RHadoop</a> </p> <a href="https://publications.waset.org/abstracts/41569/mapreduce-logistic-regression-algorithms-with-rhadoop" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41569.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">814</span> A New Family of Globally Convergent Conjugate Gradient Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Sellami">B. Sellami</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Laskri"> Y. Laskri</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Belloufi"> M. Belloufi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. In this paper, a new family of conjugate gradient method is proposed for unconstrained optimization. This method includes the already existing two practical nonlinear conjugate gradient methods, which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the Wolfe conditions. The numerical experiments are done to test the efficiency of the new method, which implies the new method is promising. In addition the methods related to this family are uniformly discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conjugate%20gradient%20method" title="conjugate gradient method">conjugate gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20convergence" title=" global convergence"> global convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20search" title=" line search"> line search</a>, <a href="https://publications.waset.org/abstracts/search?q=unconstrained%20optimization" title=" unconstrained optimization"> unconstrained optimization</a> </p> <a href="https://publications.waset.org/abstracts/40381/a-new-family-of-globally-convergent-conjugate-gradient-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40381.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">813</span> Convergence and Stability in Federated Learning with Adaptive Differential Privacy Preservation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rizwan%20Rizwan">Rizwan Rizwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper provides an overview of Federated Learning (FL) and its application in enhancing data security, privacy, and efficiency. FL utilizes three distinct architectures to ensure privacy is never compromised. It involves training individual edge devices and aggregating their models on a server without sharing raw data. This approach not only provides secure models without data sharing but also offers a highly efficient privacy--preserving solution with improved security and data access. Also we discusses various frameworks used in FL and its integration with machine learning, deep learning, and data mining. In order to address the challenges of multi--party collaborative modeling scenarios, a brief review FL scheme combined with an adaptive gradient descent strategy and differential privacy mechanism. The adaptive learning rate algorithm adjusts the gradient descent process to avoid issues such as model overfitting and fluctuations, thereby enhancing modeling efficiency and performance in multi-party computation scenarios. Additionally, to cater to ultra-large-scale distributed secure computing, the research introduces a differential privacy mechanism that defends against various background knowledge attacks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=federated%20learning" title="federated learning">federated learning</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20privacy" title=" differential privacy"> differential privacy</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent%20strategy" title=" gradient descent strategy"> gradient descent strategy</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=stability" title=" stability"> stability</a>, <a href="https://publications.waset.org/abstracts/search?q=threats" title=" threats"> threats</a> </p> <a href="https://publications.waset.org/abstracts/187891/convergence-and-stability-in-federated-learning-with-adaptive-differential-privacy-preservation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187891.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">30</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">812</span> Descent Algorithms for Optimization Algorithms Using q-Derivative</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Geetanjali%20Panda">Geetanjali Panda</a>, <a href="https://publications.waset.org/abstracts/search?q=Suvrakanti%20Chakraborty"> Suvrakanti Chakraborty</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, Newton-like descent methods are proposed for unconstrained optimization problems, which use q-derivatives of the gradient of an objective function. First, a local scheme is developed with alternative sufficient optimality condition, and then the method is extended to a global scheme. Moreover, a variant of practical Newton scheme is also developed introducing a real sequence. Global convergence of these schemes is proved under some mild conditions. Numerical experiments and graphical illustrations are provided. Finally, the performance profiles on a test set show that the proposed schemes are competitive to the existing first-order schemes for optimization problems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Descent%20algorithm" title="Descent algorithm">Descent algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20search%20method" title=" line search method"> line search method</a>, <a href="https://publications.waset.org/abstracts/search?q=q%20calculus" title=" q calculus"> q calculus</a>, <a href="https://publications.waset.org/abstracts/search?q=Quasi%20Newton%20method" title=" Quasi Newton method"> Quasi Newton method</a> </p> <a href="https://publications.waset.org/abstracts/62700/descent-algorithms-for-optimization-algorithms-using-q-derivative" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">811</span> A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tsegay%20Giday%20Woldu">Tsegay Giday Woldu</a>, <a href="https://publications.waset.org/abstracts/search?q=Haibin%20Zhang"> Haibin Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Zhang"> Xin Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yemane%20Hailu%20Fissuh"> Yemane Hailu Fissuh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is well known that nonlinear conjugate gradient method is one of the widely used first order methods to solve large scale unconstrained smooth optimization problems. Because of the low memory requirement, attractive theoretical features, practical computational efficiency and nice convergence properties, nonlinear conjugate gradient methods have a special role for solving large scale unconstrained optimization problems. Large scale optimization problems are with important applications in practical and scientific world. However, nonlinear conjugate gradient methods have restricted information about the curvature of the objective function and they are likely less efficient and robust compared to some second order algorithms. To overcome these drawbacks, the new modified nonlinear conjugate gradient method is presented. The noticeable features of our work are that the new search direction possesses the sufficient descent property independent of any line search and it belongs to a trust region. Under mild assumptions and standard Wolfe line search technique, the global convergence property of the proposed algorithm is established. Furthermore, to test the practical computational performance of our new algorithm, numerical experiments are provided and implemented on the set of some large dimensional unconstrained problems. The numerical results show that the proposed algorithm is an efficient and robust compared with other similar algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conjugate%20gradient%20method" title="conjugate gradient method">conjugate gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20convergence" title=" global convergence"> global convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20scale%20optimization" title=" large scale optimization"> large scale optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=sufficient%20descent%20property" title=" sufficient descent property"> sufficient descent property</a> </p> <a href="https://publications.waset.org/abstracts/102625/a-modified-nonlinear-conjugate-gradient-algorithm-for-large-scale-unconstrained-optimization-problems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">205</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">810</span> Stacking Ensemble Approach for Combining Different Methods in Real Estate Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sol%20Girouard">Sol Girouard</a>, <a href="https://publications.waset.org/abstracts/search?q=Zona%20Kostic"> Zona Kostic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A home is often the largest and most expensive purchase a person makes. Whether the decision leads to a successful outcome will be determined by a combination of critical factors. In this paper, we propose a method that efficiently handles all the factors in residential real estate and performs predictions given a feature space with high dimensionality while controlling for overfitting. The proposed method was built on gradient descent and boosting algorithms and uses a mixed optimizing technique to improve the prediction power. Usually, a single model cannot handle all the cases thus our approach builds multiple models based on different subsets of the predictors. The algorithm was tested on 3 million homes across the U.S., and the experimental results demonstrate the efficiency of this approach by outperforming techniques currently used in forecasting prices. With everyday changes on the real estate market, our proposed algorithm capitalizes from new events allowing more efficient predictions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real%20estate%20prediction" title="real estate prediction">real estate prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a>, <a href="https://publications.waset.org/abstracts/search?q=boosting" title=" boosting"> boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20methods" title=" ensemble methods"> ensemble methods</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20learning" title=" active learning"> active learning</a>, <a href="https://publications.waset.org/abstracts/search?q=training" title=" training"> training</a> </p> <a href="https://publications.waset.org/abstracts/90597/stacking-ensemble-approach-for-combining-different-methods-in-real-estate-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90597.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">277</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">809</span> Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Martins%20Y.%20Otache">Martins Y. Otache</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20J.%20Musa"> John J. Musa</a>, <a href="https://publications.waset.org/abstracts/search?q=Abayomi%20I.%20Kuti"> Abayomi I. Kuti</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustapha%20Mohammed"> Mustapha Mohammed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and the adaptive learning gradient descent: GDM (with momentum) algorithms were employed under different ANN structural configurations: (1) single-hidden layer, and (2) double-hidden layer feedforward back propagation network. Results obtained revealed generally that the gradient descent with momentum (GDM) optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases as compared to the Levenberg- Marquardt (LM) and Bayesian Regularisation (Br) algorithms though learning may not be consummated; i.e., in all instances considering also the prediction of extreme flow conditions for 1-day and 5-day ahead, respectively especially using the ANN model. In specific statistical terms on the average, model performance efficiency using the coefficient of efficiency (CE) statistic were Br: 98%, 94%; LM: 98 %, 95 %, and GDM: 96 %, 96% respectively for training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. 

[808] Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract: This study introduces the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the inverse distance weighting (IDW) model. ML-IDW leverages the processing capabilities of ML-NNs, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW, comparing its performance against conventional spatial interpolation models such as kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. The results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, with lower mean squared error in regression and higher F1 scores in classification.
Keywords: deep learning, multi-layer neural networks, gradient descent, spatial interpolation, inverse distance weighting
Procedia: https://publications.waset.org/abstracts/185810/enhancing-spatial-interpolation-a-multi-layer-inverse-distance-weighting-model-for-complex-regression-and-classification-tasks-in-spatial-data-analysis | PDF: https://publications.waset.org/abstracts/185810.pdf | Downloads: 52
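
For reference, the classical IDW baseline that ML-IDW generalizes weighs known samples by inverse distance raised to a power p (standard formulation; p and eps are free choices):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    """Classical inverse distance weighting.
    xy_known: (K, 2) coordinates, z_known: (K,) values, xy_query: (Q, 2)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)              # nearer points weigh more
    return (w @ z_known) / w.sum(axis=1)  # weighted average per query point
```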
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20neural%20networks" title=" multi-layer neural networks"> multi-layer neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20interpolation" title=" spatial interpolation"> spatial interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20distance%20weighting" title=" inverse distance weighting"> inverse distance weighting</a> </p> <a href="https://publications.waset.org/abstracts/185810/enhancing-spatial-interpolation-a-multi-layer-inverse-distance-weighting-model-for-complex-regression-and-classification-tasks-in-spatial-data-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185810.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">52</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">807</span> Review on Quaternion Gradient Operator with Marginal and Vector Approaches for Colour Edge Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Ben%20Youssef">Nadia Ben Youssef</a>, <a href="https://publications.waset.org/abstracts/search?q=Aicha%20Bouzid"> Aicha Bouzid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gradient estimation is one of the most fundamental tasks in the field of image processing in general, and more particularly for color images since that the research in color image gradient remains limited. The widely used gradient method is Di Zenzo’s gradient operator, which is based on the measure of squared local contrast of color images. The proposed gradient mechanism, presented in this paper, is based on the principle of the Di Zenzo’s approach using quaternion representation. This edge detector is compared to a marginal approach based on multiscale product of wavelet transform and another vector approach based on quaternion convolution and vector gradient approach. The experimental results indicate that the proposed color gradient operator outperforms marginal approach, however, it is less efficient then the second vector approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gradient" title="gradient">gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20image" title=" color image"> color image</a>, <a href="https://publications.waset.org/abstracts/search?q=quaternion" title=" quaternion"> quaternion</a> </p> <a href="https://publications.waset.org/abstracts/141138/review-on-quaternion-gradient-operator-with-marginal-and-vector-approaches-for-colour-edge-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141138.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">234</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">806</span> Mathematical Modeling of the Working Principle of Gravity Gradient Instrument</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Danni%20Cong">Danni Cong</a>, <a href="https://publications.waset.org/abstracts/search?q=Meiping%20Wu"> Meiping Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hua%20Mu"> Hua Mu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaofeng%20He"> Xiaofeng He</a>, <a href="https://publications.waset.org/abstracts/search?q=Junxiang%20Lian"> Junxiang Lian</a>, <a href="https://publications.waset.org/abstracts/search?q=Juliang%20Cao"> Juliang Cao</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaokun%20Cai"> Shaokun Cai</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Qin"> Hao Qin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gravity field is of great significance in geoscience, national economy and national security, and gravitational gradient measurement has been extensively studied due to its higher accuracy than gravity measurement. Gravity gradient sensor, being one of core devices of the gravity gradient instrument, plays a key role in measuring accuracy. Therefore, this paper starts from analyzing the working principle of the gravity gradient sensor by Newton’s law, and then considers the relative motion between inertial and non-inertial systems to build a relatively adequate mathematical model, laying a foundation for the measurement error calibration, measurement accuracy improvement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gravity%20gradient" title="gravity gradient">gravity gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=gravity%20gradient%20sensor" title=" gravity gradient sensor"> gravity gradient sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=accelerometer" title=" accelerometer"> accelerometer</a>, <a href="https://publications.waset.org/abstracts/search?q=single-axis%20rotation%20modulation" title=" single-axis rotation modulation"> single-axis rotation modulation</a> </p> <a href="https://publications.waset.org/abstracts/74776/mathematical-modeling-of-the-working-principle-of-gravity-gradient-instrument" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74776.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">326</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">805</span> A New Modification of Nonlinear Conjugate Gradient Coefficients with Global Convergence Properties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Alhawarat">Ahmad Alhawarat</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20Mamat"> Mustafa Mamat</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Rivaie"> Mohd Rivaie</a>, <a href="https://publications.waset.org/abstracts/search?q=Ismail%20Mohd"> Ismail Mohd</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conjugate gradient method has been enormously used to solve large scale unconstrained optimization problems due to the number of iteration, memory, CPU time, and convergence property, in this paper we find a new class of nonlinear conjugate gradient coefficient with global convergence properties proved by exact line search. The numerical results for our new βK give a good result when it compared with well-known formulas. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conjugate%20gradient%20method" title="conjugate gradient method">conjugate gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=conjugate%20gradient%20coefficient" title=" conjugate gradient coefficient"> conjugate gradient coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20convergence" title=" global convergence"> global convergence</a> </p> <a href="https://publications.waset.org/abstracts/1392/a-new-modification-of-nonlinear-conjugate-gradient-coefficients-with-global-convergence-properties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1392.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">463</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">804</span> Linear Study of Electrostatic Ion Temperature Gradient Mode with Entropy Gradient Drift and Sheared Ion Flows</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Yaqub%20Khan">M. Yaqub Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Usman%20Shabbir"> Usman Shabbir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> History of plasma reveals that continuous struggle of experimentalists and theorists are not fruitful for confinement up to now. It needs a change to bring the research through entropy. Approximately, all the quantities like number density, temperature, electrostatic potential, etc. are connected to entropy. Therefore, it is better to change the way of research. In ion temperature gradient mode with the help of Braginskii model, Boltzmannian electrons, effect of velocity shear is studied inculcating entropy in the magnetoplasma. New dispersion relation is derived for ion temperature gradient mode, and dependence on entropy gradient drift is seen. It is also seen velocity shear enhances the instability but in anomalous transport, its role is not seen significantly but entropy. This work will be helpful to the next step of tokamak and space plasmas. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=entropy" title="entropy">entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=velocity%20shear" title=" velocity shear"> velocity shear</a>, <a href="https://publications.waset.org/abstracts/search?q=ion%20temperature%20gradient%20mode" title=" ion temperature gradient mode"> ion temperature gradient mode</a>, <a href="https://publications.waset.org/abstracts/search?q=drift" title=" drift"> drift</a> </p> <a href="https://publications.waset.org/abstracts/70221/linear-study-of-electrostatic-ion-temperature-gradient-mode-with-entropy-gradient-drift-and-sheared-ion-flows" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">803</span> Torsional Vibration of Carbon Nanotubes via Nonlocal Gradient Theories</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20Arda">Mustafa Arda</a>, <a href="https://publications.waset.org/abstracts/search?q=Metin%20Aydogdu"> Metin Aydogdu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Carbon nanotubes (CNTs) have many possible application areas because of their superior physical properties. Nonlocal Theory, which unlike the classical theories, includes the size dependency. Nonlocal Stress and Strain Gradient approaches can be used in nanoscale static and dynamic analysis. In the present study, torsional vibration of CNTs was investigated according to nonlocal stress and strain gradient theories. Effects of the small scale parameters to the non-dimensional frequency were obtained. Results were compared with the Molecular Dynamics Simulation and Lattice Dynamics. Strain Gradient Theory has shown more weakening effect on CNT according to the Stress Gradient Theory. Combination of both theories gives more acceptable results rather than the classical and stress or strain gradient theory according to Lattice Dynamics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=torsional%20vibration" title="torsional vibration">torsional vibration</a>, <a href="https://publications.waset.org/abstracts/search?q=carbon%20nanotubes" title=" carbon nanotubes"> carbon nanotubes</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlocal%20gradient%20theory" title=" nonlocal gradient theory"> nonlocal gradient theory</a>, <a href="https://publications.waset.org/abstracts/search?q=stress" title=" stress"> stress</a>, <a href="https://publications.waset.org/abstracts/search?q=strain" title=" strain"> strain</a> </p> <a href="https://publications.waset.org/abstracts/48828/torsional-vibration-of-carbon-nanotubes-via-nonlocal-gradient-theories" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48828.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">802</span> Detecting Cyberbullying, Spam and Bot Behavior and Fake News in Social Media Accounts Using Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20D.%20D.%20Chathurangi">M. D. D. Chathurangi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20G.%20K.%20Nayanathara"> M. G. K. Nayanathara</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20M.%20H.%20M.%20M.%20Gunapala"> K. M. H. M. M. Gunapala</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20M.%20R.%20G.%20Dayananda"> G. M. R. G. Dayananda</a>, <a href="https://publications.waset.org/abstracts/search?q=Kavinga%20Yapa%20Abeywardena"> Kavinga Yapa Abeywardena</a>, <a href="https://publications.waset.org/abstracts/search?q=Deemantha%20Siriwardana"> Deemantha Siriwardana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the growing popularity of social media platforms at present, there are various concerns, mostly cyberbullying, spam, bot accounts, and the spread of incorrect information. To develop a risk score calculation system as a thorough method for deciphering and exposing unethical social media profiles, this research explores the most suitable algorithms to our best knowledge in detecting the mentioned concerns. Various multiple models, such as Naïve Bayes, CNN, KNN, Stochastic Gradient Descent, Gradient Boosting Classifier, etc., were examined, and the best results were taken into the development of the risk score system. For cyberbullying, the Logistic Regression algorithm achieved an accuracy of 84.9%, while the spam-detecting MLP model gained 98.02% accuracy. The bot accounts identifying the Random Forest algorithm obtained 91.06% accuracy, and 84% accuracy was acquired for fake news detection using SVM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cyberbullying" title="cyberbullying">cyberbullying</a>, <a href="https://publications.waset.org/abstracts/search?q=spam%20behavior" title=" spam behavior"> spam behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=bot%20accounts" title=" bot accounts"> bot accounts</a>, <a href="https://publications.waset.org/abstracts/search?q=fake%20news" title=" fake news"> fake news</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/186451/detecting-cyberbullying-spam-and-bot-behavior-and-fake-news-in-social-media-accounts-using-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">36</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">801</span> Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashkan%20Zakaryazad">Ashkan Zakaryazad</a>, <a href="https://publications.waset.org/abstracts/search?q=Ekrem%20Duman"> Ekrem Duman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks the transactions in terms of probability of being fraud. In fact, this approach is often criticized, because firms do not care about fraud probability but about the profitability or costliness of detecting a fraudulent transaction. The key contribution in this study is to focus on the profit maximization in the model building step. The artificial neural network proposed in this study works based on profit maximization instead of minimizing the error of prediction. Moreover, some studies have shown that the back propagation algorithm, similar to other gradient–based algorithms, usually gets trapped in local optima and swarm-based algorithms are more successful in this respect. In this study, we train our profit maximization ANN using the Migrating Birds optimization (MBO) which is introduced to literature recently. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title="neural network">neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=profit-based%20neural%20network" title=" profit-based neural network"> profit-based neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20of%20squared%20errors%20%28SSE%29" title=" sum of squared errors (SSE)"> sum of squared errors (SSE)</a>, <a href="https://publications.waset.org/abstracts/search?q=MBO" title=" MBO"> MBO</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a> </p> <a href="https://publications.waset.org/abstracts/31637/profit-based-artificial-neural-network-ann-trained-by-migrating-birds-optimization-a-case-study-in-credit-card-fraud-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">475</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">800</span> Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Selmi">A. Selmi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Bisharat"> A. Bisharat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Using Fourier transform and based on the Mindlin&#39;s 2<sup>nd</sup> gradient model that involves two length scale parameters, the Green&#39;s function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts; the classical Eshelby tensor and a gradient part including the length scale parameters which enable the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green&#39;s function and Eshelby tensor reduce to its analogue based on the classical elasticity. The Eshelby tensor in and outside the inclusion, the volume average of the gradient part and the Eshelby-like tensor are explicitly obtained. Unlike the classical Eshelby tensor, the results show that the components of the new Eshelby tensor vary with the position and the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eshelby%20tensor" title="Eshelby tensor">Eshelby tensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Eshelby-like%20tensor" title=" Eshelby-like tensor"> Eshelby-like tensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Green%E2%80%99s%20function" title=" Green’s function"> Green’s function</a>, <a href="https://publications.waset.org/abstracts/search?q=Mindlin%E2%80%99s%202nd%20gradient%20model" title=" Mindlin’s 2nd gradient model"> Mindlin’s 2nd gradient model</a>, <a href="https://publications.waset.org/abstracts/search?q=spherical%20inclusion" title=" spherical inclusion"> spherical inclusion</a> </p> <a href="https://publications.waset.org/abstracts/95413/green-function-and-eshelby-tensor-based-on-mindlins-2nd-gradient-model-an-explicit-study-of-spherical-inclusion-case" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95413.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">799</span> Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gulfam%20Haider">Gulfam Haider</a>, <a href="https://publications.waset.org/abstracts/search?q=sana%20danish"> sana danish</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). 
The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title="deep neural network">deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=optimizers" title=" optimizers"> optimizers</a>, <a href="https://publications.waset.org/abstracts/search?q=RMsprop" title=" RMsprop"> RMsprop</a>, <a href="https://publications.waset.org/abstracts/search?q=ReLU" title=" ReLU"> ReLU</a>, <a href="https://publications.waset.org/abstracts/search?q=stochastic%20gradient%20descent" title=" stochastic gradient descent"> stochastic gradient descent</a> </p> <a href="https://publications.waset.org/abstracts/169078/investigating-the-influence-of-activation-functions-on-image-classification-accuracy-via-deep-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169078.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">798</span> Flexural Strength Design of RC Beams with Consideration of Strain Gradient Effect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mantai%20Chen">Mantai Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Johnny%20Ching%20Ming%20Ho"> Johnny Ching Ming Ho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The stress-strain relationship of concrete under flexure is one of the essential parameters in assessing ultimate flexural strength capacity of RC beams. Currently, the concrete stress-strain curve in flexure is obtained by incorporating a constant scale-down factor of 0.85 in the uniaxial stress-strain curve. However, it was revealed that strain gradient would improve the maximum concrete stress under flexure and concrete stress-strain curve is strain gradient dependent. Based on the strain-gradient-dependent concrete stress-strain curve, the investigation of the combined effects of strain gradient and concrete strength on flexural strength of RC beams was extended to high strength concrete up to 100 MPa by theoretical analysis. As an extension and application of the authors’ previous study, a new flexural strength design method incorporating the combined effects of strain gradient and concrete strength is developed. 
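For orientation, the classical equivalent rectangular stress block that such parameters generalize takes the form below; the notation is assumed, and in the proposed method the two block parameters become strain-gradient dependent rather than code constants:

```latex
% Classical equivalent rectangular stress block for a singly reinforced
% section (notation assumed): alpha, beta are the block parameters, f_c the
% concrete strength, b the section width, c the neutral axis depth, d the
% effective depth; C is the compression resultant, M_n the moment capacity.
C = \alpha\, f_c\, \beta c\, b,
\qquad
M_n = C\left(d - \frac{\beta c}{2}\right)
```
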
797. Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots
Authors: Meng Wu
Abstract: Motion planning is a common task required of robots. A strategy combining Ant Colony Optimization (ACO) with a gravity gradient inversion algorithm is proposed for the motion planning of mobile robots: to realize an optimal planning strategy, the cost function in ACO is designed based on gravity gradient inversion. Obstacles around the robot cause gravity gradient anomalies, which are detected by a gradiometer installed on the robot. From these anomalies, the gravity gradient inversion algorithm calculates the relative distance and orientation between the mobile robot and the obstacles, and these quantities serve as the cost function in the ACO algorithm. The proposed strategy is validated by simulation and experimental results.
Keywords: motion planning, gravity gradient inversion algorithm, ant colony optimization
Procedia: https://publications.waset.org/abstracts/110462/hybrid-gravity-gradient-inversion-ant-colony-optimization-algorithm-for-motion-planning-of-mobile-robots | PDF: https://publications.waset.org/abstracts/110462.pdf | Downloads: 137

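A hedged sketch of how such a cost function might be wired into ACO follows; the weighting and the interface to the inversion output are invented for illustration, since the abstract does not specify them:

```python
# Illustrative ACO edge cost: path length plus a penalty that grows as the
# obstacle distance recovered by gravity gradient inversion shrinks.
# Weight and functional form are assumptions, not the paper's design.

def edge_cost(length, obstacle_distance, w_obstacle=2.0, eps=1e-6):
    """Cost of one edge; obstacle_distance would come from the inversion."""
    return length + w_obstacle / (obstacle_distance + eps)

# Standard ACO would then use the heuristic desirability eta = 1 / cost
# together with pheromone levels when ants choose the next edge.
print(edge_cost(length=1.0, obstacle_distance=0.5))
```
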
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title="motion planning">motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=gravity%20gradient%20inversion%20algorithm" title=" gravity gradient inversion algorithm"> gravity gradient inversion algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=ant%20colony%20optimization" title=" ant colony optimization"> ant colony optimization</a> </p> <a href="https://publications.waset.org/abstracts/110462/hybrid-gravity-gradient-inversion-ant-colony-optimization-algorithm-for-motion-planning-of-mobile-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110462.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">796</span> A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaobai%20Li">Xiaobai Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Li"> Li Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yujin%20Hu"> Yujin Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Weiming%20Deng"> Weiming Deng</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhe%20Ding"> Zhe Ding</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A size-dependent Euler&ndash;Bernoulli beam model, which accounts for nonlocal stress field, strain gradient field and higher order inertia force field, is derived based on the nonlocal strain gradient theory considering velocity gradient effect. The governing equations and boundary conditions are derived both in dimensional and dimensionless form by employed the Hamilton principle. The analytical solutions based on different continuum theories are compared. The effect of higher order inertia terms is extremely significant in high frequency range. It is found that there exists an asymptotic frequency for the proposed beam model, while for the nonlocal strain gradient theory the solutions diverge. The effect of strain gradient field in thickness direction is significant in low frequencies domain and it cannot be neglected when the material strain length scale parameter is considerable with beam thickness. The influence of each of three size effect parameters on the natural frequencies are investigated. The natural frequencies increase with the increasing material strain gradient length scale parameter or decreasing velocity gradient length scale parameter and nonlocal parameter. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Euler-Bernoulli%20Beams" title="Euler-Bernoulli Beams">Euler-Bernoulli Beams</a>, <a href="https://publications.waset.org/abstracts/search?q=free%20vibration" title=" free vibration"> free vibration</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20order%20inertia" title=" higher order inertia"> higher order inertia</a>, <a href="https://publications.waset.org/abstracts/search?q=Nonlocal%20Strain%20Gradient%20Theory" title=" Nonlocal Strain Gradient Theory"> Nonlocal Strain Gradient Theory</a>, <a href="https://publications.waset.org/abstracts/search?q=velocity%20gradient" title=" velocity gradient"> velocity gradient</a> </p> <a href="https://publications.waset.org/abstracts/60330/a-refined-nonlocal-strain-gradient-theory-for-assessing-scaling-dependent-vibration-behavior-of-microbeams" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">795</span> Ultra-Fast pH-Gradient Ion Exchange Chromatography for the Separation of Monoclonal Antibody Charge Variants</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Robert%20van%20Ling">Robert van Ling</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexander%20Schwahn"> Alexander Schwahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Shanhua%20Lin"> Shanhua Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Ken%20Cook"> Ken Cook</a>, <a href="https://publications.waset.org/abstracts/search?q=Frank%20Steiner"> Frank Steiner</a>, <a href="https://publications.waset.org/abstracts/search?q=Rowan%20Moore"> Rowan Moore</a>, <a href="https://publications.waset.org/abstracts/search?q=Mauro%20de%20Pra"> Mauro de Pra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: Demonstration of fast high resolution charge variant analysis for monoclonal antibody (mAb) therapeutics within 5 minutes. Methods: Three commercially available mAbs were used for all experiments. The charge variants of therapeutic mAbs (Bevacizumab, Cetuximab, Infliximab, and Trastuzumab) are analyzed on a strong cation exchange column with a linear pH gradient separation method. The linear gradient from pH 5.6 to pH 10.2 is generated over time by running a linear pump gradient from 100% Thermo Scientific™ CX-1 pH Gradient Buffer A (pH 5.6) to 100% CX-1 pH Gradient Buffer B (pH 10.2), using the Thermo Scientific™ Vanquish™ UHPLC system. Results: The pH gradient method is generally applicable to monoclonal antibody charge variant analysis. In conjunction with state-of-the-art column and UHPLC technology, ultra fast high-resolution separations are consistently achieved in under 5 minutes for all mAbs analyzed. Conclusion: The linear pH gradient method is a platform method for mAb charge variant analysis. The linear pH gradient method can be easily optimized to improve separations and shorten cycle times. 
Ultra-fast charge variant separation is facilitated with UHPLC that complements, and in some instances outperforms CE approaches in terms of both resolution and throughput. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=charge%20variants" title="charge variants">charge variants</a>, <a href="https://publications.waset.org/abstracts/search?q=ion%20exchange%20chromatography" title=" ion exchange chromatography"> ion exchange chromatography</a>, <a href="https://publications.waset.org/abstracts/search?q=monoclonal%20antibody" title=" monoclonal antibody"> monoclonal antibody</a>, <a href="https://publications.waset.org/abstracts/search?q=UHPLC" title=" UHPLC"> UHPLC</a> </p> <a href="https://publications.waset.org/abstracts/63884/ultra-fast-ph-gradient-ion-exchange-chromatography-for-the-separation-of-monoclonal-antibody-charge-variants" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63884.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">794</span> Unpowered Knee Exoskeleton with Compliant Joints for Stair Descent Assistance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pengfan%20Wu">Pengfan Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoan%20Chen"> Xiaoan Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20He"> Ye He</a>, <a href="https://publications.waset.org/abstracts/search?q=Tianchi%20Chen"> Tianchi Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces the design of an unpowered knee exoskeleton to assist human walking by redistributing the moment of the knee joint during stair descent (SD). Considering the knee moment varying with the knee joint angle and the work of the knee joint is all negative, the custom-built spring was used to convert negative work into the potential energy of the spring during flexion, and the obtained energy work as assistance during extension to reduce the consumption of lower limb muscles. The human-machine adaptability problem was left by traditional rigid wearable due to the knee involves sliding and rotating without a fixed-axis rotation, and this paper designed the two-direction grooves to follow the human-knee kinematics, and the wire spring provides a certain resistance to the pin in the groove to prevent extra degrees of freedom. The experiment was performed on a normal stair by healthy young wearing the device on both legs with the surface electromyography recorded. The results show that the quadriceps (knee extensor) were reduced significantly. 
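The energy bookkeeping behind the design is simple to state. Assuming a linear torsional spring of stiffness k (an idealization; the actual custom-built spring need not be linear), the negative knee work absorbed over a flexion angle Δθ and returned during extension is:

```latex
% Idealized energy stored by a linear torsional spring over flexion angle
% \Delta\theta (assumption: the custom spring need not be linear).
E_{\mathrm{stored}} \;=\; \tfrac{1}{2}\,k\,(\Delta\theta)^2
```
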
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unpowered%20exoskeleton" title="unpowered exoskeleton">unpowered exoskeleton</a>, <a href="https://publications.waset.org/abstracts/search?q=stair%20descent" title=" stair descent"> stair descent</a>, <a href="https://publications.waset.org/abstracts/search?q=knee%20compliant%20joint" title=" knee compliant joint"> knee compliant joint</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20redistribution" title=" energy redistribution"> energy redistribution</a> </p> <a href="https://publications.waset.org/abstracts/115645/unpowered-knee-exoskeleton-with-compliant-joints-for-stair-descent-assistance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">793</span> Facial Expression Recognition Using Sparse Gaussian Conditional Random Field</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammadamin%20Abbasnejad">Mohammadamin Abbasnejad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The analysis of expression and facial Action Units (AUs) detection are very important tasks in fields of computer vision and Human Computer Interaction (HCI) due to the wide range of applications in human life. Many works have been done during the past few years which has their own advantages and disadvantages. In this work, we present a new model based on Gaussian Conditional Random Field. We solve our objective problem using ADMM and we show how well the proposed model works. We train and test our work on two facial expression datasets, CK+, and RU-FACS. Experimental evaluation shows that our proposed approach outperform state of the art expression recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20Conditional%20Random%20Field" title="Gaussian Conditional Random Field">Gaussian Conditional Random Field</a>, <a href="https://publications.waset.org/abstracts/search?q=ADMM" title=" ADMM"> ADMM</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a> </p> <a href="https://publications.waset.org/abstracts/26245/facial-expression-recognition-using-sparse-gaussian-conditional-random-field" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26245.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=27">27</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=28">28</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=gradient%20descent&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
