CINXE.COM
Search results for: deep learning
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: deep learning</title> <meta name="description" content="Search results for: deep learning"> <meta name="keywords" content="deep learning"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="deep learning" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="deep learning"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 8409</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: deep learning</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8259</span> Defect Classification of Hydrogen Fuel Pressure Vessels using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongju%20Kim">Dongju Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Youngjoo%20Suh"> Youngjoo Suh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Kim"> Hyojin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Gyeongyeong%20Kim"> Gyeongyeong Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Acoustic Emission Testing (AET) is widely used to test the structural integrity of an operational hydrogen storage container, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model to identify patterns in acoustic emission (AE) signal data that can be used to classify defects instead. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks, using AE sensor waveforms, is proposed. As hydrogen storage tanks are commonly constructed using carbon fiber reinforced polymer composite (CFRP), a defect classification dataset is collected through a tensile test on a specimen of CFRP with an AE sensor attached. The performance of the classification model, using one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20emission%20testing" title="acoustic emission testing">acoustic emission testing</a>, <a href="https://publications.waset.org/abstracts/search?q=carbon%20fiber%20reinforced%20polymer%20composite" title=" carbon fiber reinforced polymer composite"> carbon fiber reinforced polymer composite</a>, <a href="https://publications.waset.org/abstracts/search?q=one-dimensional%20convolutional%20neural%20network" title=" one-dimensional convolutional neural network"> one-dimensional convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=smote%20data%20augmentation" title=" smote data augmentation"> smote data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/150903/defect-classification-of-hydrogen-fuel-pressure-vessels-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8258</span> Code Embedding for Software Vulnerability Discovery Based on Semantic Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joseph%20Gear">Joseph Gear</a>, <a href="https://publications.waset.org/abstracts/search?q=Yue%20Xu"> Yue Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernest%20Foo"> Ernest Foo</a>, <a href="https://publications.waset.org/abstracts/search?q=Praveen%20Gauravaran"> Praveen Gauravaran</a>, <a href="https://publications.waset.org/abstracts/search?q=Zahra%20Jadidi"> Zahra Jadidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Leonie%20Simpson"> Leonie Simpson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning methods have been seeing an increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations for source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snip- pets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning this input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph at the expense of the information contained by the node in the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=code%20representation" title="code representation">code representation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=source%20code%20semantics" title=" source code semantics"> source code semantics</a>, <a href="https://publications.waset.org/abstracts/search?q=vulnerability%20discovery" title=" vulnerability discovery"> vulnerability discovery</a> </p> <a href="https://publications.waset.org/abstracts/157454/code-embedding-for-software-vulnerability-discovery-based-on-semantic-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157454.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8257</span> Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Nandhini">R. Nandhini</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaurab%20Mudbhari"> Gaurab Mudbhari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized the field with the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the Machine Learning tasks that helped Machine Learning and Deep Learning models to become state-of-the-art models for the field where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This finding examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, resulting in the development of models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. One of the most promising models, which is very much known for its ability to detect spatial hierarchies, is CNN, and it serves as a core model in our study. On the other hand, ViT is another model that also serves as a core model, reflecting a modern classification method that uses a self-attention mechanism which makes them more robust as this self-attention mechanism allows them to lean global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques like k-means, which are used on embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework and its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are pretty well known for using latent embeddings for grouping similar images without requiring for prior label by utilizing the probabilistic clustering method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20clustering" title=" image clustering"> image clustering</a> </p> <a href="https://publications.waset.org/abstracts/194618/advances-in-machine-learning-and-deep-learning-techniques-for-image-classification-and-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194618.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">8</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8256</span> Deep Routing Strategy: Deep Learning based Intelligent Routing in Software Defined Internet of Things.</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zabeehullah">Zabeehullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahim%20Arif"> Fahim Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawar%20Abbas"> Yawar Abbas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Software Defined Network (SDN) is a next genera-tion networking model which simplifies the traditional network complexities and improve the utilization of constrained resources. Currently, most of the SDN based Internet of Things(IoT) environments use traditional network routing strategies which work on the basis of max or min metric value. However, IoT network heterogeneity, dynamic traffic flow and complexity demands intelligent and self-adaptive routing algorithms because traditional routing algorithms lack the self-adaptions, intelligence and efficient utilization of resources. To some extent, SDN, due its flexibility, and centralized control has managed the IoT complexity and heterogeneity but still Software Defined IoT (SDIoT) lacks intelligence. To address this challenge, we proposed a model called Deep Routing Strategy (DRS) which uses Deep Learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that proposed model has achieved high accuracy and low packet loss rate during path selection. Proposed model has also outperformed benchmark routing algorithm (OSPF). Moreover, proposed model provided encouraging results during high dynamic traffic flow. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SDN" title="SDN">SDN</a>, <a href="https://publications.waset.org/abstracts/search?q=IoT" title=" IoT"> IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=DL" title=" DL"> DL</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a>, <a href="https://publications.waset.org/abstracts/search?q=DRS" title=" DRS"> DRS</a> </p> <a href="https://publications.waset.org/abstracts/150674/deep-routing-strategy-deep-learning-based-intelligent-routing-in-software-defined-internet-of-things" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150674.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8255</span> Water Body Detection and Estimation from Landsat Satellite Images Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Devaki">M. Devaki</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20B.%20Jayanthi"> K. B. Jayanthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The identification of water bodies from satellite images has recently received a great deal of attention. Different methods have been developed to distinguish water bodies from various satellite images that vary in terms of time and space. Urban water identification issues body manifests in numerous applications with a great deal of certainty. There has been a sharp rise in the usage of satellite images to map natural resources, including urban water bodies and forests, during the past several years. This is because water and forest resources depend on each other so heavily that ongoing monitoring of both is essential to their sustainable management. The relevant elements from satellite pictures have been chosen using a variety of techniques, including machine learning. Then, a convolution neural network (CNN) architecture is created that can identify a superpixel as either one of two classes, one that includes water or doesn't from input data in a complex metropolitan scene. The deep learning technique, CNN, has advanced tremendously in a variety of visual-related tasks. CNN can improve classification performance by reducing the spectral-spatial regularities of the input data and extracting deep features hierarchically from raw pictures. Calculate the water body using the satellite image's resolution. Experimental results demonstrate that the suggested method outperformed conventional approaches in terms of water extraction accuracy from remote-sensing images, with an average overall accuracy of 97%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=water%20body" title="water body">water body</a>, <a href="https://publications.waset.org/abstracts/search?q=Deep%20learning" title=" Deep learning"> Deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/162827/water-body-detection-and-estimation-from-landsat-satellite-images-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162827.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8254</span> Exploring Deep Neural Network Compression: An Overview</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghorab%20Sara">Ghorab Sara</a>, <a href="https://publications.waset.org/abstracts/search?q=Meziani%20Lila"> Meziani Lila</a>, <a href="https://publications.waset.org/abstracts/search?q=Rubin%20Harvey%20Stuart"> Rubin Harvey Stuart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid growth of deep learning has led to intricate and resource-intensive deep neural networks widely used in computer vision tasks. However, their complexity results in high computational demands and memory usage, hindering real-time application. To address this, research focuses on model compression techniques. The paper provides an overview of recent advancements in compressing neural networks and categorizes the various methods into four main approaches: network pruning, quantization, network decomposition, and knowledge distillation. This paper aims to provide a comprehensive outline of both the advantages and limitations of each method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=model%20compression" title="model compression">model compression</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=pruning" title=" pruning"> pruning</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20distillation" title=" knowledge distillation"> knowledge distillation</a>, <a href="https://publications.waset.org/abstracts/search?q=quantization" title=" quantization"> quantization</a>, <a href="https://publications.waset.org/abstracts/search?q=low-rank%20decomposition" title=" low-rank decomposition"> low-rank decomposition</a> </p> <a href="https://publications.waset.org/abstracts/185803/exploring-deep-neural-network-compression-an-overview" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185803.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">43</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8253</span> Investigation on Behavior of Fixed-Ended Reinforced Concrete Deep Beams </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Heyrani%20Birak">Y. Heyrani Birak</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Hizaji"> R. Hizaji</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Shahkarami"> J. Shahkarami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Reinforced Concrete (RC) deep beams are special structural elements because of their geometry and behavior under loads. For example, assumption of strain- stress distribution is not linear in the cross section. These types of beams may have simple supports or fixed supports. A lot of research works have been conducted on simply supported deep beams, but little study has been done in the fixed-end RC deep beams behavior. Recently, using of fixed-ended deep beams has been widely increased in structures. In this study, the behavior of fixed-ended deep beams is investigated, and the important parameters in capacity of this type of beams are mentioned. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20beam" title="deep beam">deep beam</a>, <a href="https://publications.waset.org/abstracts/search?q=capacity" title=" capacity"> capacity</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforced%20concrete" title=" reinforced concrete"> reinforced concrete</a>, <a href="https://publications.waset.org/abstracts/search?q=fixed-ended" title=" fixed-ended"> fixed-ended</a> </p> <a href="https://publications.waset.org/abstracts/57558/investigation-on-behavior-of-fixed-ended-reinforced-concrete-deep-beams" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57558.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">334</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8252</span> Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=JaeHwan%20Yang">JaeHwan Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Da-Woon%20Jeong"> Da-Woon Jeong</a>, <a href="https://publications.waset.org/abstracts/search?q=Seung-Young%20Kho"> Seung-Young Kho</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong-Kyu%20Kim"> Dong-Kyu Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In terms of ITS, information on link characteristic is an essential factor for plan or operation. But in practical cases, not every link has installed sensors on it. The link that does not have data on it is called “Missing Link”. The purpose of this study is to impute data of these missing links. To get these data, this study applies the machine learning method. With the machine learning process, especially for the deep learning process, missing link data can be estimated from present link data. For deep learning process, this study uses “Recurrent Neural Network” to take time-series data of road. As input data, Dedicated Short-range Communications (DSRC) data of Dalgubul-daero of Daegu Metropolitan Area had been fed into the learning process. Neural Network structure has 17 links with present data as input, 2 hidden layers, for 1 missing link data. As a result, forecasted data of target link show about 94% of accuracy compared with actual data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20estimation" title="data estimation">data estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=link%20data" title=" link data"> link data</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20network" title=" road network"> road network</a> </p> <a href="https://publications.waset.org/abstracts/80183/missing-link-data-estimation-with-recurrent-neural-network-an-application-using-speed-data-of-daegu-metropolitan-area" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80183.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">510</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8251</span> Failure Mechanism in Fixed-Ended Reinforced Concrete Deep Beams under Cyclic Load</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Aarabzadeh">A. Aarabzadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Hizaji"> R. Hizaji</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Reinforced Concrete (RC) deep beams are a special type of beams due to their geometry, boundary conditions, and behavior compared to ordinary shallow beams. For example, assumption of a linear strain-stress distribution in the cross section is not valid. Little study has been dedicated to fixed-end RC deep beams. Also, most experimental studies are carried out on simply supported deep beams. Regarding recent tendency for application of deep beams, possibility of using fixed-ended deep beams has been widely increased in structures. Therefore, it seems necessary to investigate the aforementioned structural element in more details. In addition to experimental investigation of a concrete deep beam under cyclic load, different failure mechanisms of fixed-ended deep beams under this type of loading have been evaluated in the present study. The results show that failure mechanisms of deep beams under cyclic loads are quite different from monotonic loads. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20beam" title="deep beam">deep beam</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclic%20load" title=" cyclic load"> cyclic load</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforced%20concrete" title=" reinforced concrete"> reinforced concrete</a>, <a href="https://publications.waset.org/abstracts/search?q=fixed-ended" title=" fixed-ended"> fixed-ended</a> </p> <a href="https://publications.waset.org/abstracts/56504/failure-mechanism-in-fixed-ended-reinforced-concrete-deep-beams-under-cyclic-load" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56504.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">361</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8250</span> Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayyaz%20Hussain">Ayyaz Hussain</a>, <a href="https://publications.waset.org/abstracts/search?q=Tariq%20Sadad"> Tariq Sadad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer (BC) is one of the widespread ailments among females globally. The early prognosis of BC can decrease the mortality rate. Exact findings of benign tumors can avoid unnecessary biopsies and further treatments of patients under investigation. However, due to variations in images, it is a tough job to isolate cancerous cases from normal and benign ones. The machine learning technique is widely employed in the classification of BC pattern and prognosis. In this research, a deep convolution neural network (DCNN) called AlexNet architecture is employed to get more discriminative features from breast tissues. To achieve higher accuracy, K-nearest neighbor (KNN) classifiers are employed as a substitute for the softmax layer in deep learning. The proposed model is tested on a widely used breast image database called MIAS dataset for experimental purposes and achieved 99% accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title="breast cancer">breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=DCNN" title=" DCNN"> DCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=KNN" title=" KNN"> KNN</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a> </p> <a href="https://publications.waset.org/abstracts/118200/identification-of-breast-anomalies-based-on-deep-convolutional-neural-networks-and-k-nearest-neighbors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118200.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8249</span> Time Series Forecasting (TSF) Using Various Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jimeng%20Shi">Jimeng Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahek%20Jain"> Mahek Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=Giri%20Narasimhan"> Giri Narasimhan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on the learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance with the lowest Mean Average Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RSME = 23.573, 38.131) for most of our single-step and multi-steps predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform the best to predict 3 hours into the future. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=air%20quality%20prediction" title="air quality prediction">air quality prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20algorithms" title=" deep learning algorithms"> deep learning algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series%20forecasting" title=" time series forecasting"> time series forecasting</a>, <a href="https://publications.waset.org/abstracts/search?q=look-back%20window" title=" look-back window"> look-back window</a> </p> <a href="https://publications.waset.org/abstracts/146879/time-series-forecasting-tsf-using-various-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8248</span> Lightweight Hybrid Convolutional and Recurrent Neural Networks for Wearable Sensor Based Human Activity Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sonia%20Perez-Gamboa">Sonia Perez-Gamboa</a>, <a href="https://publications.waset.org/abstracts/search?q=Qingquan%20Sun"> Qingquan Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Zhang"> Yan Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Non-intrusive sensor-based human activity recognition (HAR) is utilized in a spectrum of applications, including fitness tracking devices, gaming, health care monitoring, and smartphone applications. Deep learning models such as convolutional neural networks (CNNs) and long short term memory (LSTM) recurrent neural networks (RNNs) provide a way to achieve HAR accurately and effectively. In this paper, we design a multi-layer hybrid architecture with CNN and LSTM and explore a variety of multi-layer combinations. Based on the exploration, we present a lightweight, hybrid, and multi-layer model, which can improve the recognition performance by integrating local features and scale-invariant with dependencies of activities. The experimental results demonstrate the efficacy of the proposed model, which can achieve a 94.7% activity recognition rate on a benchmark human activity dataset. This model outperforms traditional machine learning and other deep learning methods. Additionally, our implementation achieves a balance between recognition rate and training time consumption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title=" human activity recognition"> human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=inertial%20sensor" title=" inertial sensor"> inertial sensor</a> </p> <a href="https://publications.waset.org/abstracts/131782/lightweight-hybrid-convolutional-and-recurrent-neural-networks-for-wearable-sensor-based-human-activity-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131782.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8247</span> Facial Emotion Recognition with Convolutional Neural Network Based Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Koray%20U.%20Erbas">Koray U. Erbas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increase, it is possible to represent more complex relationships with automatically extracted features. Nowadays Deep Neural Networks (DNNs) are widely used in Computer Vision problems such as; classification, object detection, segmentation image editing etc. In this work, Facial Emotion Recognition task is performed by proposed Convolutional Neural Network (CNN)-based DNN architecture using FER2013 Dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated and ablation study results for Pooling Layer, Dropout and Batch Normalization are presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20based%20FER" title=" deep learning based FER"> deep learning based FER</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a> </p> <a href="https://publications.waset.org/abstracts/128197/facial-emotion-recognition-with-convolutional-neural-network-based-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128197.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8246</span> Defect Identification in Partial Discharge Patterns of Gas Insulated Switchgear and Straight Cable Joint</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chien-Kuo%20Chang">Chien-Kuo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Hsiang%20Lin"> Yu-Hsiang Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Yi-Yun%20Tang"> Yi-Yun Tang</a>, <a href="https://publications.waset.org/abstracts/search?q=Min-Chiu%20Wu"> Min-Chiu Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the trend of technological advancement, the harm caused by power outages is substantial, mostly due to problems in the power grid. This highlights the necessity for further improvement in the reliability of the power system. In the power system, gas-insulated switches (GIS) and power cables play a crucial role. Long-term operation under high voltage can cause insulation materials in the equipment to crack, potentially leading to partial discharges. If these partial discharges (PD) can be analyzed, preventative maintenance and replacement of equipment can be carried out, there by improving the reliability of the power grid. This research will diagnose defects by identifying three different defects in GIS and three different defects in straight cable joints, for a total of six types of defects. The partial discharge data measured will be converted through phase analysis diagrams and pulse sequence analysis. Discharge features will be extracted using convolutional image processing, and three different deep learning models, CNN, ResNet18, and MobileNet, will be used for training and evaluation. Class Activation Mapping will be utilized to interpret the black-box problem of deep learning models, with each model achieving an accuracy rate of over 95%. Lastly, the overall model performance will be enhanced through an ensemble learning voting method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=partial%20discharge" title="partial discharge">partial discharge</a>, <a href="https://publications.waset.org/abstracts/search?q=gas-insulated%20switches" title=" gas-insulated switches"> gas-insulated switches</a>, <a href="https://publications.waset.org/abstracts/search?q=straight%20cable%20joint" title=" straight cable joint"> straight cable joint</a>, <a href="https://publications.waset.org/abstracts/search?q=defect%20identification" title=" defect identification"> defect identification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a> </p> <a href="https://publications.waset.org/abstracts/169443/defect-identification-in-partial-discharge-patterns-of-gas-insulated-switchgear-and-straight-cable-joint" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8245</span> Complex Learning Tasks and Their Impact on Cognitive Engagement for Undergraduate Engineering Students</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anastassis%20Kozanitis">Anastassis Kozanitis</a>, <a href="https://publications.waset.org/abstracts/search?q=Diane%20Leduc"> Diane Leduc</a>, <a href="https://publications.waset.org/abstracts/search?q=Alain%20Stockless"> Alain Stockless</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents preliminary results from a two-year funded research program looking to analyze and understand the relationship between high cognitive engagement, higher order cognitive processes employed in situations of complex learning tasks, and the use of active learning pedagogies in engineering undergraduate programs. A mixed method approach was used to gauge student engagement and their cognitive processes when accomplishing complex tasks. Quantitative data collected from the self-report cognitive engagement scale shows that deep learning approach is positively correlated with high levels of complex learning tasks and the level of student engagement, in the context of classroom active learning pedagogies. Qualitative analyses of in depth face-to-face interviews reveal insights into the mechanisms influencing students’ cognitive processes when confronted with open-ended problem resolution. Findings also support evidence that students will adjust their level of cognitive engagement according to the specific didactic environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20engagement" title="cognitive engagement">cognitive engagement</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20and%20shallow%20strategies" title=" deep and shallow strategies"> deep and shallow strategies</a>, <a href="https://publications.waset.org/abstracts/search?q=engineering%20programs" title=" engineering programs"> engineering programs</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20order%20cognitive%20processes" title=" higher order cognitive processes"> higher order cognitive processes</a> </p> <a href="https://publications.waset.org/abstracts/58498/complex-learning-tasks-and-their-impact-on-cognitive-engagement-for-undergraduate-engineering-students" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58498.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">324</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8244</span> 3D Plant Growth Measurement System Using Deep Learning Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kazuaki%20Shiraishi">Kazuaki Shiraishi</a>, <a href="https://publications.waset.org/abstracts/search?q=Narumitsu%20Asai"> Narumitsu Asai</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsukasa%20Kitahara"> Tsukasa Kitahara</a>, <a href="https://publications.waset.org/abstracts/search?q=Sosuke%20Mieno"> Sosuke Mieno</a>, <a href="https://publications.waset.org/abstracts/search?q=Takaharu%20Kameoka"> Takaharu Kameoka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this research is to facilitate productivity advances in agriculture. To accomplish this, we developed an automatic three-dimensional (3D) recording system for growth of field crops that consists of a number of inexpensive modules: a very low-cost stereo camera, a couple of ZigBee wireless modules, a Raspberry Pi single-board computer, and a third generation (3G) wireless communication module. Our system uses an inexpensive Web stereo camera in order to keep total costs low. However, inexpensive video cameras record low-resolution images that are very noisy. Accordingly, in order to resolve these problems, we adopted a deep learning method. Based on the results of extended period of time operation test conducted without the use of an external power supply, we found that by using Super-Resolution Convolutional Neural Network method, our system could achieve a balance between the competing goals of low-cost and superior performance. Our experimental results showed the effectiveness of our system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20plant%20data" title="3D plant data">3D plant data</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20recording" title=" automatic recording"> automatic recording</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20camera" title=" stereo camera"> stereo camera</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/54418/3d-plant-growth-measurement-system-using-deep-learning-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54418.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8243</span> LanE-change Path Planning of Autonomous Driving Using Model-Based Optimization, Deep Reinforcement Learning and 5G Vehicle-to-Vehicle Communications </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=William%20Li">William Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lane-change path planning is a crucial and yet complex task in autonomous driving. The traditional path planning approach based on a system of carefully-crafted rules to cover various driving scenarios becomes unwieldy as more and more rules are added to deal with exceptions and corner cases. This paper proposes to divide the entire path planning to two stages. In the first stage the ego vehicle travels longitudinally in the source lane to reach a safe state. In the second stage the ego vehicle makes lateral lane-change maneuver to the target lane. The paper derives the safe state conditions based on lateral lane-change maneuver calculation to ensure collision free in the second stage. To determine the acceleration sequence that minimizes the time to reach a safe state in the first stage, the paper proposes three schemes, namely, kinetic model based optimization, deep reinforcement learning, and 5G vehicle-to-vehicle (V2V) communications. The paper investigates these schemes via simulation. The model-based optimization is sensitive to the model assumptions. The deep reinforcement learning is more flexible in handling scenarios beyond the model assumed by the optimization. The 5G V2V eliminates uncertainty in predicting future behaviors of surrounding vehicles by sharing driving intents and enabling cooperative driving. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lane%20change" title="lane change">lane change</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20planning" title=" path planning"> path planning</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20driving" title=" autonomous driving"> autonomous driving</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title=" deep reinforcement learning"> deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=5G" title=" 5G"> 5G</a>, <a href="https://publications.waset.org/abstracts/search?q=V2V%20communications" title=" V2V communications"> V2V communications</a>, <a href="https://publications.waset.org/abstracts/search?q=connected%20vehicles" title=" connected vehicles"> connected vehicles</a> </p> <a href="https://publications.waset.org/abstracts/118114/lane-change-path-planning-of-autonomous-driving-using-model-based-optimization-deep-reinforcement-learning-and-5g-vehicle-to-vehicle-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118114.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">252</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8242</span> Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aditya%20Karade">Aditya Karade</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharada%20Falane"> Sharada Falane</a>, <a href="https://publications.waset.org/abstracts/search?q=Dhananjay%20Deshmukh"> Dhananjay Deshmukh</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijaykumar%20Mantri"> Vijaykumar Mantri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Brain tumors pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging have shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorizing different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and finetuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20tumour" title="brain tumour">brain tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20image" title=" MRI image"> MRI image</a>, <a href="https://publications.waset.org/abstracts/search?q=detecting%20and%20classifying%20tumour" title=" detecting and classifying tumour"> detecting and classifying tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-trained%20models" title=" pre-trained models"> pre-trained models</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/178879/brain-tumor-detection-and-classification-using-pre-trained-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8241</span> Chassis Level Control Using Proportional Integrated Derivative Control, Fuzzy Logic and Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atakan%20Aral%20Ormanc%C4%B1">Atakan Aral Ormancı</a>, <a href="https://publications.waset.org/abstracts/search?q=Tu%C4%9F%C3%A7e%20Arslanta%C5%9F"> Tuğçe Arslantaş</a>, <a href="https://publications.waset.org/abstracts/search?q=Murat%20%C3%96zc%C3%BC"> Murat Özcü</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents the design and implementation of an experimental chassis-level system for various control applications. Specifically, the height level of the chassis is controlled using proportional integrated derivative, fuzzy logic, and deep learning control methods. Real-time data obtained from height and pressure sensors installed in a 6x2 truck chassis, in combination with pulse-width modulation signal values, are utilized during the tests. A prototype pneumatic system of a 6x2 truck is added to the setup, which enables the Smart Pneumatic Actuators to function as if they were in a real-world setting. To obtain real-time signal data from height sensors, an Arduino Nano is utilized, while a Raspberry Pi processes the data using Matlab/Simulink and provides the correct output signals to control the Smart Pneumatic Actuator in the truck chassis. The objective of this research is to optimize the time it takes for the chassis to level down and up under various loads. To achieve this, proportional integrated derivative control, fuzzy logic control, and deep learning techniques are applied to the system. The results show that the deep learning method is superior in optimizing time for a non-linear system. Fuzzy logic control with a triangular membership function as the rule base achieves better outcomes than proportional integrated derivative control. Traditional proportional integrated derivative control improves the time it takes to level the chassis down and up compared to an uncontrolled system. The findings highlight the superiority of deep learning techniques in optimizing the time for a non-linear system, and the potential of fuzzy logic control. The proposed approach and the experimental results provide a valuable contribution to the field of control, automation, and systems engineering. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automotive" title="automotive">automotive</a>, <a href="https://publications.waset.org/abstracts/search?q=chassis%20level%20control" title=" chassis level control"> chassis level control</a>, <a href="https://publications.waset.org/abstracts/search?q=control%20systems" title=" control systems"> control systems</a>, <a href="https://publications.waset.org/abstracts/search?q=pneumatic%20system%20control" title=" pneumatic system control"> pneumatic system control</a> </p> <a href="https://publications.waset.org/abstracts/164728/chassis-level-control-using-proportional-integrated-derivative-control-fuzzy-logic-and-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164728.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8240</span> Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Bresilla">K. Bresilla</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Manfrini"> L. Manfrini</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Morandi"> B. Morandi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Boini"> A. Boini</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Perulli"> G. Perulli</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20C.%20Grappadelli"> L. C. Grappadelli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image/video processing for fruit in the tree using hard-coded feature extraction algorithms have shown high accuracy during recent years. While accurate, these approaches even with high-end hardware are computationally intensive and too slow for real-time systems. This paper details the use of deep convolution neural networks (CNNs), specifically an algorithm (YOLO - You Only Look Once) with 24+2 convolution layers. Using deep-learning techniques eliminated the need for hard-code specific features for specific fruit shapes, color and/or other attributes. This CNN is trained on more than 5000 images of apple and pear fruits on 960 cores GPU (Graphical Processing Unit). Testing set showed an accuracy of 90%. After this, trained data were transferred to an embedded device (Raspberry Pi gen.3) with camera for more portability. Based on correlation between number of visible fruits or detected fruits on one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. Speed of processing and detection of the whole platform was higher than 40 frames per second. This speed is fast enough for any grasping/harvesting robotic arm or other real-time applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=fruit%20recognition" title=" fruit recognition"> fruit recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=harvesting%20robot" title=" harvesting robot"> harvesting robot</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a> </p> <a href="https://publications.waset.org/abstracts/79886/using-deep-learning-real-time-object-detection-convolution-neural-networks-for-fast-fruit-recognition-in-the-tree" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79886.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8239</span> Modeling and Mapping of Soil Erosion Risk Using Geographic Information Systems, Remote Sensing, and Deep Learning Algorithms: Case of the Oued Mikkes Watershed, Morocco</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=My%20Hachem%20Aouragh">My Hachem Aouragh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hind%20Ragragui"> Hind Ragragui</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdellah%20El-Hmaidi"> Abdellah El-Hmaidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Essahlaoui"> Ali Essahlaoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhadi%20El%20Ouali"> Abdelhadi El Ouali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates soil erosion susceptibility in the Oued Mikkes watershed, located in the Meknes-Fez region of northern Morocco, utilizing advanced techniques such as deep learning algorithms and remote sensing integrated within Geographic Information Systems (GIS). Spanning approximately 1,920 km², the watershed is characterized by a semi-arid Mediterranean climate with irregular rainfall and limited water resources. The waterways within the watershed, especially the Oued Mikkes, are vital for agricultural irrigation and potable water supply. The research assesses the extent of erosion risk upstream of the Sidi Chahed dam while developing a spatial model of soil loss. Several important factors, including topography, land use/land cover, and climate, were analyzed, with data on slope, NDVI, and rainfall erosivity processed using deep learning models (DLNN, CNN, RNN). The results demonstrated excellent predictive performance, with AUC values of 0.92, 0.90, and 0.88 for DLNN, CNN, and RNN, respectively. The resulting susceptibility maps provide critical insights for soil management and conservation strategies, identifying regions at high risk for erosion across 24% of the study area. The most high-risk areas are concentrated on steep slopes, particularly near the Ifrane district and the surrounding mountains, while low-risk areas are located in flatter regions with less rugged topography. The combined use of remote sensing and deep learning offers a powerful tool for accurate erosion risk assessment and resource management in the Mikkes watershed, highlighting the implications of soil erosion on dam siltation and operational efficiency. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=soil%20erosion" title="soil erosion">soil erosion</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Mikkes%20Watershed" title=" Mikkes Watershed"> Mikkes Watershed</a>, <a href="https://publications.waset.org/abstracts/search?q=Morocco" title=" Morocco"> Morocco</a> </p> <a href="https://publications.waset.org/abstracts/193099/modeling-and-mapping-of-soil-erosion-risk-using-geographic-information-systems-remote-sensing-and-deep-learning-algorithms-case-of-the-oued-mikkes-watershed-morocco" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193099.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">17</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8238</span> Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Masood%20Varshosaz">Masood Varshosaz</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamyar%20Hasanpour"> Kamyar Hasanpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, we have seen growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. There are many techniques of such that can help generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is potentially assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of the deep learning models. In this paper, we evaluated the most currently available 2D data augmentation techniques on a standard convolutional network which was trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1-3% better compared to a base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. Thus, we suggest a new method that employs simulated 3D human models to generate new data for training the network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20recognition" title="human recognition">human recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=disaster%20mitigation" title=" disaster mitigation"> disaster mitigation</a> </p> <a href="https://publications.waset.org/abstracts/168645/assessing-performance-of-data-augmentation-techniques-for-a-convolutional-network-trained-for-recognizing-humans-in-drone-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8237</span> Deep Reinforcement Learning Approach for Trading Automation in The Stock Market</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taylan%20Kabbani">Taylan Kabbani</a>, <a href="https://publications.waset.org/abstracts/search?q=Ekrem%20Duman"> Ekrem Duman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design of adaptive systems that take advantage of financial markets while reducing the risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffered from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the financial assets price "prediction" step and the "allocation" step of the portfolio in one unified process to produce fully autonomous systems capable of interacting with its environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions with each time step (dynamically re-allocate investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports the managing of a portfolio with several assets instead of a single one. This work represents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as The Agent Environment as Partially observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and the intelligent decision-making mechanism, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning such as supervised learning and proves its credibility and advantages of strategic decision-making. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20stock%20market" title="the stock market">the stock market</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title=" deep reinforcement learning"> deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=MDP" title=" MDP"> MDP</a>, <a href="https://publications.waset.org/abstracts/search?q=twin%20delayed%20deep%20deterministic%20policy%20gradient" title=" twin delayed deep deterministic policy gradient"> twin delayed deep deterministic policy gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=sentiment%20analysis" title=" sentiment analysis"> sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=technical%20indicators" title=" technical indicators"> technical indicators</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20agent" title=" autonomous agent"> autonomous agent</a> </p> <a href="https://publications.waset.org/abstracts/142828/deep-reinforcement-learning-approach-for-trading-automation-in-the-stock-market" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142828.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8236</span> Deepnic, A Method to Transform Each Variable into Image for Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nguyen%20J.%20M.">Nguyen J. M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucas%20G."> Lucas G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Brunner%20M."> Brunner M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruan%20S."> Ruan S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonioli%20D."> Antonioli D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning based on convolutional neural networks (CNN) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of 2 vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into images into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct an image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tabular%20data" title="tabular data">tabular data</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=perfect%20trees" title=" perfect trees"> perfect trees</a>, <a href="https://publications.waset.org/abstracts/search?q=NICS" title=" NICS"> NICS</a> </p> <a href="https://publications.waset.org/abstracts/152479/deepnic-a-method-to-transform-each-variable-into-image-for-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152479.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8235</span> Deciphering Orangutan Drawing Behavior Using Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Benjamin%20Beltzung">Benjamin Beltzung</a>, <a href="https://publications.waset.org/abstracts/search?q=Marie%20Pel%C3%A9"> Marie Pelé</a>, <a href="https://publications.waset.org/abstracts/search?q=Julien%20P.%20Renoult"> Julien P. Renoult</a>, <a href="https://publications.waset.org/abstracts/search?q=C%C3%A9dric%20Sueur"> Cédric Sueur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To this day, it is not known if drawing is specifically human behavior or if this behavior finds its origins in ancestor species. An interesting window to enlighten this question is to analyze the drawing behavior in genetically close to human species, such as non-human primate species. A good candidate for this approach is the orangutan, who shares 97% of our genes and exhibits multiple human-like behaviors. Focusing on figurative aspects may not be suitable for orangutans’ drawings, which may appear as scribbles but may have meaning. A manual feature selection would lead to an anthropocentric bias, as the features selected by humans may not match with those relevant for orangutans. In the present study, we used deep learning to analyze the drawings of a female orangutan named Molly († in 2011), who has produced 1,299 drawings in her last five years as part of a behavioral enrichment program at the Tama Zoo in Japan. We investigate multiple ways to decipher Molly’s drawings. First, we demonstrate the existence of differences between seasons by training a deep learning model to classify Molly’s drawings according to the seasons. Then, to understand and interpret these seasonal differences, we analyze how the information spreads within the network, from shallow to deep layers, where early layers encode simple local features and deep layers encode more complex and global information. More precisely, we investigate the impact of feature complexity on classification accuracy through features extraction fed to a Support Vector Machine. Last, we leverage style transfer to dissociate features associated with drawing style from those describing the representational content and analyze the relative importance of these two types of features in explaining seasonal variation. Content features were relevant for the classification, showing the presence of meaning in these non-figurative drawings and the ability of deep learning to decipher these differences. The style of the drawings was also relevant, as style features encoded enough information to have a classification better than random. The accuracy of style features was higher for deeper layers, demonstrating and highlighting the variation of style between seasons in Molly’s drawings. Through this study, we demonstrate how deep learning can help at finding meanings in non-figurative drawings and interpret these differences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognition" title="cognition">cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drawing%20behavior" title=" drawing behavior"> drawing behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=interpretability" title=" interpretability"> interpretability</a> </p> <a href="https://publications.waset.org/abstracts/152609/deciphering-orangutan-drawing-behavior-using-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152609.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8234</span> An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saba%20%20Ebrahimi">Saba Ebrahimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Ahmadian"> Saeed Ahmadian</a>, <a href="https://publications.waset.org/abstracts/search?q=Hedie%20%20Ashrafi"> Hedie Ashrafi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS). Both short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. There are some limitations in the existing datasets of TS patients because of missing values of attributes and imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks is proposed based on stacking different linear and non-linear layers to deal with imbalance datasets. The categorical and numerical features are split using different layers with ability to shrink the unnecessary features. Then, after extracting the insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class and cause the network to be trained better comparing the current methods. Finally, the performance and advantages of our proposed model over the existing models are examined for predicting patient survival after thoracic surgery using a real-life clinical data for lung cancer patients. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20models" title=" ensemble models"> ensemble models</a>, <a href="https://publications.waset.org/abstracts/search?q=imbalanced%20classification" title=" imbalanced classification"> imbalanced classification</a>, <a href="https://publications.waset.org/abstracts/search?q=lung%20cancer" title=" lung cancer"> lung cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=TS%20patient%20selection" title=" TS patient selection"> TS patient selection</a> </p> <a href="https://publications.waset.org/abstracts/128394/an-ensemble-deep-learning-architecture-for-imbalanced-classification-of-thoracic-surgery-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8233</span> A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Zhang">Wei Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to the high accuracy and good performance, Convolutional Neural Networks (CNNs) especially have become a research hot spot in the past few years. However, the size of the networks becomes increasingly large scale due to the demands of the practical applications, which poses a significant challenge to construct a high-performance implementation of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and low-power consumption of hardware devices. Therefore, it is particularly critical to choose a moderate computing platform for hardware acceleration of CNNs. This article aimed to survey the recent advance in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of the accelerator based on FPGA under different devices and network models are overviewed, and the versions of Graphic Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we give a discussion on different perspectives of these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research. More helpfully, we give a prospect for future development of the FPGA-based accelerator. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=field%20programmable%20gate%20array" title=" field programmable gate array"> field programmable gate array</a>, <a href="https://publications.waset.org/abstracts/search?q=FPGA" title=" FPGA"> FPGA</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerator" title=" hardware accelerator"> hardware accelerator</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/128017/a-survey-of-field-programmable-gate-array-based-convolutional-neural-network-accelerators" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8232</span> Reinforcement Learning for Self Driving Racing Car Games</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adam%20Beaunoyer">Adam Beaunoyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Cory%20Beaunoyer"> Cory Beaunoyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Elmorsy"> Mohammed Elmorsy</a>, <a href="https://publications.waset.org/abstracts/search?q=Hanan%20Saleh"> Hanan Saleh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. Using open-source code on Github, the environment used in this research is based on the 1995 racing game WipeOut. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining low racing completion time and collision count. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, where it can complete the laps quicker than the single-car model but has a higher collision rate with the track wall. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title="reinforcement learning">reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=soft%20actor-critic" title=" soft actor-critic"> soft actor-critic</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20q-network" title=" deep q-network"> deep q-network</a>, <a href="https://publications.waset.org/abstracts/search?q=self-driving%20cars" title=" self-driving cars"> self-driving cars</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=gaming" title=" gaming"> gaming</a> </p> <a href="https://publications.waset.org/abstracts/185804/reinforcement-learning-for-self-driving-racing-car-games" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185804.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">46</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8231</span> Automatic Detection and Filtering of Negative Emotion-Bearing Contents from Social Media in Amharic Using Sentiment Analysis and Deep Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Derejaw%20Lake%20Melie">Derejaw Lake Melie</a>, <a href="https://publications.waset.org/abstracts/search?q=Alemu%20Kumlachew%20Tegegne"> Alemu Kumlachew Tegegne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The increasing prevalence of social media in Ethiopia has exacerbated societal challenges by fostering the proliferation of negative emotional posts and comments. Illicit use of social media has further exacerbated divisions among the population. Addressing these issues through manual identification and aggregation of emotions from millions of users for swift decision-making poses significant challenges, particularly given the rapid growth of Amharic language usage on social platforms. Consequently, there is a critical need to develop an intelligent system capable of automatically detecting and categorizing negative emotional content into social, religious, and political categories while also filtering out toxic online content. This paper aims to leverage sentiment analysis techniques to achieve automatic detection and filtering of negative emotional content from Amharic social media texts, employing a comparative study of deep learning algorithms. The study utilized a dataset comprising 29,962 comments collected from social media platforms using comment exporter software. Data pre-processing techniques were applied to enhance data quality, followed by the implementation of deep learning methods for training, testing, and evaluation. The results showed that CNN, GRU, LSTM, and Bi-LSTM classification models achieved accuracies of 83%, 50%, 84%, and 86%, respectively. Among these models, Bi-LSTM demonstrated the highest accuracy of 86% in the experiment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=negative%20emotion" title="negative emotion">negative emotion</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media%20filtering%20sentiment%20analysis" title=" social media filtering sentiment analysis"> social media filtering sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning." title=" deep learning."> deep learning.</a> </p> <a href="https://publications.waset.org/abstracts/191945/automatic-detection-and-filtering-of-negative-emotion-bearing-contents-from-social-media-in-amharic-using-sentiment-analysis-and-deep-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191945.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">23</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8230</span> Accuracy Improvement of Traffic Participant Classification Using Millimeter-Wave Radar by Leveraging Simulator Based on Domain Adaptation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tokihiko%20Akita">Tokihiko Akita</a>, <a href="https://publications.waset.org/abstracts/search?q=Seiichi%20Mita"> Seiichi Mita</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A millimeter-wave radar is the most robust against adverse environments, making it an essential environment recognition sensor for automated driving. However, the reflection signal is sparse and unstable, so it is difficult to obtain the high recognition accuracy. Deep learning provides high accuracy even for them in recognition, but requires large scale datasets with ground truth. Specially, it takes a lot of cost to annotate for a millimeter-wave radar. For the solution, utilizing a simulator that can generate an annotated huge dataset is effective. Simulation of the radar is more difficult to match with real world data than camera image, and recognition by deep learning with higher-order features using the simulator causes further deviation. We have challenged to improve the accuracy of traffic participant classification by fusing simulator and real-world data with domain adaptation technique. Experimental results with the domain adaptation network created by us show that classification accuracy can be improved even with a few real-world data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=millimeter-wave%20radar" title="millimeter-wave radar">millimeter-wave radar</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=domain%20adaptation" title=" domain adaptation"> domain adaptation</a> </p> <a href="https://publications.waset.org/abstracts/164634/accuracy-improvement-of-traffic-participant-classification-using-millimeter-wave-radar-by-leveraging-simulator-based-on-domain-adaptation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164634.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=5" rel="prev">‹</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=5">5</a></li> <li class="page-item active"><span class="page-link">6</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=280">280</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=281">281</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning&page=7" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>