
Search results for: convolutional neural networks

Commenced in January 2007 · Frequency: Monthly · Edition: International · Paper Count: 3758

3758. Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images

Authors: Paulo Cesar Pereira Junior, Alexandre Monteiro, Rafael da Luz Ribeiro, Antonio Carlos Sobieranski, Aldo von Wangenheim

Abstract: In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches for the specific precision-agriculture problem of weed mapping in aerial images of sugarcane fields. A systematic literature review was conducted to find which computer vision methods are being used on this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested using the same dataset, and their results were quantitatively and qualitatively analyzed. The obtained results were compared against a ground truth produced by a human expert for validation. The results indicate that the convolutional neural networks achieve better precision and generalize better than the classical models.
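In CNN terms, the weed-mapping task above is a semantic segmentation problem: every pixel of an aerial tile is assigned a class. As a minimal sketch of that formulation (the abstract does not specify the four CNN models used), here is a tiny encoder-decoder in PyTorch; the class set (soil/crop/weed) and the 128x128 tile size are assumptions:

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network for 3-class segmentation
    (e.g. soil / crop / weed) on RGB aerial tiles. Illustrative only;
    the paper's actual architectures are not given in the abstract."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 2, stride=2),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))  # per-pixel class logits

tiles = torch.randn(4, 3, 128, 128)           # batch of aerial tiles
logits = TinyFCN()(tiles)                     # -> (4, 3, 128, 128)
print(logits.shape)
```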
Keywords: convolutional neural networks, deep learning, digital image processing, precision agriculture, semantic segmentation, unmanned aerial vehicles

PDF: https://publications.waset.org/abstracts/112982.pdf · Downloads: 260

3757. Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features

Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi

Abstract: Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning approach that trains a convolutional neural network model to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations, as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation

PDF: https://publications.waset.org/abstracts/61573.pdf · Downloads: 732

3756. Classification of Echo Signals Based on Deep Learning

Authors: Aisulu Tileukulova, Zhexebay Dauren

Abstract: Radar plays an important role because it is widely used in civil and military fields. Target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is low against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, and to train such a network. In this paper, the convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and a 3-layer fully connected perceptron. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. A dataset must be formed in order to train the network to detect a target. We built a confusion matrix for the CNN model to measure its effectiveness. The results showed a test accuracy of 95.7%. Classification of echo signals using a CNN shows high accuracy and significantly speeds up target prediction.
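The abstract pins down the topology (8 convolutional layers, a 3-layer fully connected perceptron, ReLU inside, softmax at the output) but not the input format. A PyTorch sketch under the assumption of 1x64x64 echo maps and pooling after every second convolution:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU()]

class EchoCNN(nn.Module):
    """Sketch of the stated architecture: 8 convolutional layers followed
    by a 3-layer fully connected classifier, ReLU activations and a final
    softmax. The 1x64x64 'echo map' input size is an assumption."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        chans = [1, 16, 16, 32, 32, 64, 64, 128, 128]   # 8 conv layers
        layers = []
        for i in range(8):
            layers += conv_block(chans[i], chans[i + 1])
            if i % 2 == 1:                  # pool after every second conv
                layers.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*layers)          # 64 -> 4 spatially
        self.classifier = nn.Sequential(                # 3-layer perceptron
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
            nn.Softmax(dim=1),              # softmax output, as stated
        )
    def forward(self, x):
        return self.classifier(self.features(x))

print(EchoCNN()(torch.randn(2, 1, 64, 64)).shape)   # -> torch.Size([2, 2])
```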
Keywords: radar, neural network, convolutional neural network, echo signals

PDF: https://publications.waset.org/abstracts/147596.pdf · Downloads: 353

3755. Tumor Detection Using Convolutional Neural Networks (CNN) Based Neural Network

Authors: Vinai K. Singh

Abstract: In neural-network-based learning techniques, there are several models of convolutional networks, and their applicability and appropriateness can only be determined when the methods are deployed on large datasets. Clinical and pathological images of lobular carcinoma are thought to exhibit a large number of random formations and textures, and working with such images is a difficult problem in machine learning. Numerous studies focusing on wet laboratories and their outcomes have been published with fresh commentary on this line of investigation. In this research, we provide a framework that can operate effectively on raw photos of various resolutions while easing the issues caused by the presence of patterns and texturing. The suggested approach produces very good findings that may be used to support decisions in the diagnosis of cancer.
Keywords: lobular carcinoma, convolutional neural networks (CNN), deep learning, histopathological imagery scans

PDF: https://publications.waset.org/abstracts/146403.pdf · Downloads: 136

3754. Taxonomic Classification for Living Organisms Using Convolutional Neural Networks

Authors: Saed Khawaldeh, Mohamed Elsharnouby, Alaa Eddin Alchalabi, Usama Pervaiz, Tajwar Aleef, Vu Hoang Minh

Abstract: Taxonomic classification has a wide range of applications, such as studying the evolutionary history of organisms by comparing species living now with species that lived in the past. This comparison can be made using different kinds of extracted species data, including DNA sequences. In spite of the significant development of science and scientific knowledge over many years, humanity still lacks a thorough comprehension of which specific species the estimated number of organisms that nature harbours all belong to. One method for extracting such information is to use the DNA sequence of a living organism as a marker, and thereby classify it into a taxonomy. The classification of living organisms can be performed with many machine learning techniques, including neural networks (NNs). In this study, DNA sequence classification is performed using convolutional neural networks (CNNs), a special type of NN.
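DNA reads are naturally fed to a CNN as one-hot sequences over the four bases, with 1-D convolutions scanning for short motifs. A sketch of that encoding and a small classifier follows; the architecture, the 100-base read length and the taxon count are illustrative assumptions, since the abstract gives no details:

```python
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a 4 x L one-hot tensor (channels = A,C,G,T)."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).T.float()

class DnaCNN(nn.Module):
    """Illustrative 1-D CNN for taxonomic classification of fixed-length
    DNA reads; the paper's real architecture is not given in the abstract."""
    def __init__(self, n_taxa: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),        # global pooling over positions
            nn.Flatten(),
            nn.Linear(64, n_taxa),
        )
    def forward(self, x):
        return self.net(x)

batch = torch.stack([one_hot("ACGT" * 25)])   # one 100-base read
print(DnaCNN()(batch).shape)                  # -> torch.Size([1, 10])
```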
Keywords: deep networks, convolutional neural networks, taxonomic classification, DNA sequences classification

PDF: https://publications.waset.org/abstracts/65170.pdf · Downloads: 442

3753. Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification

Authors: Bharatendra Rai

Abstract: The sequence of words in text data has long-term dependencies and is known to suffer from vanishing gradient problems when developing deep learning models. Although recurrent networks such as long short-term memory networks help to overcome this problem, achieving high text classification performance remains challenging. Convolutional recurrent networks, which combine the advantages of long short-term memory networks and convolutional neural networks, can improve text classification performance. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task, and fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning.
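A convolutional recurrent text classifier of the kind described combines a 1-D convolution (local n-gram features) with an LSTM (long-range order). A minimal PyTorch sketch, with example values for exactly the kind of hyperparameters the paper tunes (embedding size, filter count, kernel size, hidden units); none of the values are the paper's:

```python
import torch
import torch.nn as nn

class ConvRecurrentClassifier(nn.Module):
    """Convolutional-recurrent text classifier in the spirit of the abstract:
    an embedding layer, a 1-D convolution that extracts local n-gram
    features, and an LSTM that models their long-range order."""
    def __init__(self, vocab=20000, emb=128, conv_ch=64, hidden=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)
    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)    # -> (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)                # final hidden state
        return self.out(h[-1])

print(ConvRecurrentClassifier()(torch.randint(0, 20000, (8, 50))).shape)
```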
Keywords: long short-term memory networks, convolutional recurrent networks, text classification, hyperparameter tuning, Tukey honest significant differences

PDF: https://publications.waset.org/abstracts/169795.pdf · Downloads: 129

3752. Image Classification with Localization Using Convolutional Neural Networks

Authors: Bhuyain Mobarok Hossain

Abstract: Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image within one sliding window. Most detection networks regress a bounding box around the area of interest; in contrast to such architectures, we treat the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the method of predicting an individual category for an image from a set of data points, and it covers any labels assigned across the whole image: an image can be classified as a day or a night shot, or, likewise, images of cars and motorbikes can be automatically placed in their own collections. Deep learning for image classification generally relies on convolutional layers; a network built from them is referred to as a convolutional neural network (CNN).
Keywords: image classification, object detection, localization, particle filter

PDF: https://publications.waset.org/abstracts/139288.pdf · Downloads: 305

3751. Deep Learning Based, End-to-End Metaphor Detection in Greek with Recurrent and Convolutional Neural Networks

Authors: Konstantinos Perifanos, Eirini Florou, Dionysis Goutsos

Abstract: This paper presents and benchmarks a number of end-to-end deep learning based models for metaphor detection in Greek. We combine convolutional neural networks and recurrent neural networks with representation learning to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving the previous state-of-the-art results, which had already reached an accuracy of 0.82. Furthermore, no special preprocessing, feature engineering or linguistic knowledge is used in this work. The methods presented achieve an accuracy of 0.92 and an F-score of 0.92 with convolutional neural networks (CNNs) and bidirectional long short-term memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also achieved with bidirectional gated recurrent units (GRUs) and convolutional recurrent neural networks (CRNNs). The models are trained and evaluated only on the basis of training tuples, the related sentences and their labels. The outcome is a state-of-the-art collection of metaphor detection models, trained on limited labelled resources, which can be extended to other languages and similar tasks.
Keywords: metaphor detection, deep learning, representation learning, embeddings

PDF: https://publications.waset.org/abstracts/115854.pdf · Downloads: 153

3750. Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract: Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex relationships with automatically extracted features. Nowadays, deep neural networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation and image editing. In this work, the facial emotion recognition task is performed with a proposed convolutional neural network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.
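For orientation, here is a compact CNN of the kind such an ablation examines, on FER2013-shaped input (48x48 grayscale, 7 emotion classes), containing the three studied components: pooling, batch normalization and dropout. Layer widths are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class FerCNN(nn.Module):
    """Small facial-emotion-recognition CNN for FER2013-style input,
    showing the components covered by the abstract's ablation study."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                         # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                         # 24 -> 12
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                         # regularization
            nn.Linear(64 * 12 * 12, n_classes),
        )
    def forward(self, x):
        return self.head(self.features(x))

print(FerCNN()(torch.randn(4, 1, 48, 48)).shape)     # -> torch.Size([4, 7])
```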
Keywords: convolutional neural network, deep learning, deep learning based FER, facial emotion recognition

PDF: https://publications.waset.org/abstracts/128197.pdf · Downloads: 274

3749. Traffic Sign Recognition System Using Convolutional Neural Network

Authors: Devineni Vijay Bhaskar, Yendluri Raja

Abstract: We propose a model for traffic sign detection based on convolutional neural networks (CNN). We first convert the original image into a grayscale image with the help of support vector machines, then use convolutional neural networks with fixed and learnable layers for detection and recognition. The fixed layer reduces the number of regions of interest to detect and crops the limits very close to the boundaries of traffic signs. The learnable layers increase the accuracy of detection significantly. In addition, we use bootstrap procedures to improve the accuracy and avoid overfitting. On the German Traffic Sign Detection Benchmark, we obtained competitive results, with an area under the precision-recall curve (AUC) of 99.49% in the group "Risk" and an AUC of 96.62% in the group "Obligatory".
Keywords: convolutional neural network, support vector machine, detection, traffic signs, bootstrap procedures, precision-recall curve

PDF: https://publications.waset.org/abstracts/149896.pdf · Downloads: 122

3748. Neural Style Transfer Using Deep Learning

Authors: Shaik Jilani Basha, Inavolu Avinash, Alla Venu Sai Reddy, Bitragunta Taraka Ramu

Abstract: Neural style transfer is a technique for merging the style of one image into another while retaining the original information: the result has the same "content" as the starting image but the "style" of the image we have chosen. The only change is how the image is rendered, giving it an additional artistic sense. The content image supplies the plan or drawing, while the style image supplies the colors of the drawing or painting used to portray the style. It is a computer vision technique that learns and processes images through deep convolutional neural networks. In our implementation, deep learning models are trained on the training data; whenever a user supplies a content image and a style image, the style is transferred onto the content image and the result is shown as the output.
Keywords: neural networks, computer vision, deep learning, convolutional neural networks

PDF: https://publications.waset.org/abstracts/167224.pdf · Downloads: 95

3747. Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting

Authors: Ying Su, Morgan C. Wang

Abstract: Long-term time series forecasting is an important research area for automated machine learning (AutoML). Currently, forecasting based on either machine learning or statistical learning is usually built by experts and requires significant manual effort, from model construction, feature engineering and hyperparameter tuning to the construction of the time series model itself. Full automation is not possible, since there are too many human interventions. To overcome these limitations, this article proposes to use recurrent neural networks (RNNs), through the memory state of the RNN, to perform long-term time series prediction. We have shown that this proposed approach is better than the traditional Autoregressive Integrated Moving Average (ARIMA) model. In addition, we also found it to be better than other network systems, including fully connected neural networks (FNN), convolutional neural networks (CNN) and non-pooling convolutional neural networks (NPCNN).
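The core idea, forecasting through the memory state of an RNN rather than through a hand-built statistical model, can be sketched in a few lines of PyTorch. The window length, hidden size and 12-step horizon below are assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class RnnForecaster(nn.Module):
    """Minimal recurrent forecaster: an LSTM whose memory state carries
    long-range structure, with a linear head that predicts the next
    `horizon` values from the last hidden state."""
    def __init__(self, hidden: int = 64, horizon: int = 12):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, horizon)
    def forward(self, series):                 # series: (batch, steps, 1)
        _, (h, _) = self.lstm(series)
        return self.head(h[-1])                # (batch, horizon)

window = torch.randn(16, 48, 1)               # 48 past observations
print(RnnForecaster()(window).shape)          # -> torch.Size([16, 12])
```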
Keywords: automated machine learning, autoregressive integrated moving average, neural networks, time series analysis

PDF: https://publications.waset.org/abstracts/173817.pdf · Downloads: 105

3746. Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks

Authors: Chaitanya Chawla, Divya Panwar, Gurneesh Singh Anand, M. P. S Bhatia

Abstract: This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method uses a convolutional layer capable of automatically learning correlations between neighbouring pixels. In its usual form, a convolutional neural network (CNN) learns features based on an image's content rather than the structural features of the image. The proposed layer is particularly designed to subdue an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was assessed on recent natural and computer generated images, and it was concluded that it performs better than the current state-of-the-art methods.
Keywords: image forensics, computer graphics, classification, deep learning, convolutional neural networks

PDF: https://publications.waset.org/abstracts/95266.pdf · Downloads: 336

3745. Forecasting the Temperature at a Weather Station Using Deep Neural Networks

Authors: Debneil Saha Roy

Abstract: Weather forecasting is a complex topic and is well suited to analysis by deep learning approaches. With the wide availability of weather observation data nowadays, these approaches can be utilized to identify immediate comparisons between historical weather forecasts and current observations. This work explores the application of deep learning techniques to weather forecasting in order to accurately predict the weather over a given forecast horizon. Three deep neural networks are used in this study: a multi-layer perceptron (MLP), a long short-term memory network (LSTM) and a combination of a convolutional neural network (CNN) and an LSTM. The predictive performance of these models is compared using two evaluation metrics. The results show that forecasting accuracy increases with the complexity of the deep neural network.
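The third model in this comparison, a CNN combined with an LSTM, typically lets the convolution extract local patterns from the input window before the recurrent layer models their order. A hedged sketch with illustrative sizes (the abstract does not describe the actual layers):

```python
import torch
import torch.nn as nn

class CnnLstmForecaster(nn.Module):
    """Sketch of a CNN+LSTM hybrid: a 1-D convolution extracts local
    patterns from the recent temperature window, an LSTM models their
    temporal order, and a linear head predicts the next value."""
    def __init__(self, conv_ch: int = 32, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(1, conv_ch, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # next-step temperature
    def forward(self, x):                       # x: (batch, steps, 1)
        f = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, ch, steps)
        _, (h, _) = self.lstm(f.transpose(1, 2))
        return self.head(h[-1])

window = torch.randn(8, 72, 1)                  # 72 past hourly readings
print(CnnLstmForecaster()(window).shape)        # -> torch.Size([8, 1])
```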
Keywords: convolutional neural network, deep learning, long short-term memory, multi-layer perceptron

PDF: https://publications.waset.org/abstracts/124787.pdf · Downloads: 177

3744. Amplifying Sine Unit-Convolutional Neural Network: An Efficient Deep Architecture for Image Classification and Feature Visualizations

Authors: Jamshaid Ul Rahman, Faiza Makhdoom, Dianchen Lu

Abstract: Activation functions play a decisive role in determining the capacity of deep neural networks (DNNs), as they enable neural networks to capture the nonlinearities inherent in the data fed to them. Prior research on activation functions focused primarily on monotonic or non-oscillatory functions, until the Growing Cosine Unit (GCU) broke that taboo for a number of applications. In this paper, a convolutional neural network (CNN) model named ASU-CNN is proposed, which utilizes the recently designed activation function ASU across its layers. The effect of this non-monotonic and oscillatory function is inspected through feature-map visualizations from different convolutional layers. The proposed network is optimized with Adam using a fine-tuned learning rate. The network achieved promising results on both training and testing data for the classification of CIFAR-10. The experimental results affirm the computational feasibility and efficacy of the proposed model for tasks in the field of computer vision.
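The abstract does not restate the ASU formula, so the sketch below plugs a stand-in oscillatory activation, x*sin(x) (a sine analogue of the GCU's x*cos(x)), into a CIFAR-10-shaped CNN optimized with Adam, as described; the genuine ASU definition is in the paper itself:

```python
import torch
import torch.nn as nn

class OscillatoryActivation(nn.Module):
    """Stand-in for the paper's ASU: a non-monotonic, oscillatory
    activation. x*sin(x) is used purely for illustration; the exact
    ASU formula is defined in the paper, not here."""
    def forward(self, x):
        return x * torch.sin(x)

def make_cifar_cnn(n_classes: int = 10) -> nn.Sequential:
    """CIFAR-10-shaped CNN (3x32x32 input) with the oscillatory
    activation swapped in wherever ReLU would normally sit."""
    act = OscillatoryActivation
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), act(),
        nn.MaxPool2d(2),                       # 32 -> 16
        nn.Conv2d(32, 64, 3, padding=1), act(),
        nn.MaxPool2d(2),                       # 16 -> 8
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, n_classes),
    )

model = make_cifar_cnn()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the paper
print(model(torch.randn(2, 3, 32, 32)).shape)        # -> torch.Size([2, 10])
```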
Keywords: amplifying sine unit, activation function, convolutional neural networks, oscillatory activation, image classification, CIFAR-10

PDF: https://publications.waset.org/abstracts/169054.pdf · Downloads: 111

3743. Using Deep Learning Neural Networks and Candlestick Chart Representation to Predict Stock Market

Authors: Rosdyana Mangir Irawan Kusuma, Wei-Chun Kao, Ho-Thi Trang, Yu-Yen Ou, Kai-Lung Hua

Abstract: Stock market prediction is still a challenging problem because many factors affect the stock market price, such as company news and performance, industry performance, investor sentiment, social media sentiment and economic factors. This work explores the predictability of the stock market using deep convolutional networks and candlestick charts. The outcome is used to design a decision support framework that traders can use to obtain suggested indications of future stock price direction. We perform this work using various types of neural networks, including a convolutional neural network, a residual network and a visual geometry group (VGG) network. We convert historical stock market data into candlestick charts, which are then fed as input for training a convolutional neural network model. This model helps us to analyze the patterns inside the candlestick charts and predict future movements of the stock market. The effectiveness of our method is evaluated on stock market prediction with promising results: 92.2% and 92.1% accuracy for the Taiwan and Indonesian stock market datasets, respectively.
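The step that makes this approach distinctive is the conversion of historical OHLC (open-high-low-close) data into chart images before any network sees them. A toy NumPy sketch of that windowing and labelling logic; the rasterizer is a deliberately crude stand-in for real candlestick plotting:

```python
import numpy as np

def render_candles(win: np.ndarray, h: int = 64) -> np.ndarray:
    """Toy rasterizer: one column per day, pixels filled between that
    day's low and high. A real pipeline would draw colored candle
    bodies and wicks instead."""
    lo, hi = win[:, 2].min(), win[:, 1].max()
    img = np.zeros((h, len(win)), dtype=np.float32)
    for d, (o, hi_d, lo_d, c) in enumerate(win):
        top = int((hi - hi_d) / (hi - lo + 1e-9) * (h - 1))
        bot = int((hi - lo_d) / (hi - lo + 1e-9) * (h - 1))
        img[top:bot + 1, d] = 1.0
    return img

def make_dataset(ohlc: np.ndarray, window: int = 20):
    """Turn a (days, 4) OHLC series into (chart, label) pairs: each
    sample covers `window` days and is labelled 1 if the close rises
    on the following day."""
    charts, labels = [], []
    for t in range(len(ohlc) - window - 1):
        charts.append(render_candles(ohlc[t:t + window]))
        labels.append(int(ohlc[t + window + 1, 3] > ohlc[t + window, 3]))
    return np.stack(charts), np.array(labels)

# Synthetic OHLC walk, just to exercise the pipeline.
close = 100 + np.cumsum(np.random.randn(200))
spread = np.abs(np.random.randn(200)) + 0.5
ohlc = np.stack([close, close + spread, close - spread, close], axis=1)
X, y = make_dataset(ohlc)
print(X.shape, y.shape)        # -> (179, 64, 20) charts and labels
```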
Keywords: candlestick chart, deep learning, neural network, stock market prediction

PDF: https://publications.waset.org/abstracts/98615.pdf · Downloads: 447

3742. Large Neural Networks Learning from Scratch with Very Few Data and without Explicit Regularization

Authors: Christoph Linse, Thomas Martinetz

Abstract: Recent findings have shown that neural networks generalize even in over-parametrized regimes with zero training error. This is surprising, since it runs completely against traditional machine learning wisdom. Our empirical study fortifies these findings in the domain of fine-grained image classification. We show that very large convolutional neural networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization or pretraining. We train the architectures ResNet018, ResNet101 and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102 and StanfordCars, each with 100 classes or more, perform a comprehensive comparative study and draw implications for the practical application of CNNs. Finally, we show that VGG19, with 140 million weights, learns to distinguish airplanes and motorbikes with up to 95% accuracy using only 20 training samples per class.
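The training recipe implied by this abstract is notable for what it leaves out: no pretrained weights, no augmentation, no explicit regularization. A minimal PyTorch/torchvision sketch of that setup with stubbed-in data (a real run would load the 20-samples-per-class subsets instead):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

# From-scratch setup in the spirit of the abstract: a large network
# (VGG19), no pretraining (weights=None), no augmentation and no
# explicit regularization (no weight decay added). Data is stubbed
# out with random tensors for illustration.
model = vgg19(weights=None, num_classes=2)      # airplanes vs motorbikes
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                 # stand-in for few-shot data
y = torch.randint(0, 2, (8,))
for step in range(2):                           # a couple of toy steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(step, float(loss))
```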
Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets

PDF: https://publications.waset.org/abstracts/154011.pdf · Downloads: 88

3741. Convolutional Neural Networks Architecture Analysis for Image Captioning

Authors: Jun Seung Woo, Shin Dong Ho

Abstract: Image captioning models with attention technology have developed significantly compared to previous models, but they are still unsatisfactory in recognizing images. We perform an extensive search over seven interesting convolutional neural network (CNN) architectures to analyze the behavior of different models for image captioning. We compared the seven CNN architectures, according to batch size, on a public benchmark: the MS-COCO dataset. In our experimental results, DenseNet and InceptionV3 reached about 14% loss and about 160 seconds of training time per epoch, which was the most satisfactory result among the seven CNN architectures after training for 50 epochs on a GPU.
Keywords: deep learning, image captioning, CNN architectures, DenseNet, InceptionV3

PDF: https://publications.waset.org/abstracts/148886.pdf · Downloads: 133

3740. Improvement of Ground Truth Data for Eye Location on Infrared Driver Recordings

Authors: Sorin Valcan, Mihail Gaianu

Abstract: Labeling is a very costly and time-consuming process which aims to generate datasets for training neural networks in several functionalities and projects. For driver monitoring system projects, the need for labeled images has a significant impact on the budget and distribution of effort. This paper presents the modifications made to an algorithm used to generate ground truth data for 2D eye location on infrared images of drivers, in order to improve the quality of the data and the performance of the trained neural networks. The algorithm's restrictions were made tougher, which makes it more accurate but also less constant. The resulting dataset is smaller and shall not be altered by any kind of manual label adjustment before being used in the neural network training process. These changes resulted in much better performance of the trained neural networks.
Keywords: labeling automation, infrared camera, driver monitoring, eye detection, convolutional neural networks

PDF: https://publications.waset.org/abstracts/148969.pdf · Downloads: 117

3739. An Accurate Computer-Aided Diagnosis (CAD) System for Diagnosis of Aortic Enlargement by Using Convolutional Neural Networks

Authors: Mahdi Bazarganigilani

Abstract: Aortic enlargement, also known as an aortic aneurysm, can occur when the walls of the aorta become weak. This disease can become deadly if overlooked and undiagnosed. In this paper, a computer-aided diagnosis (CAD) system is introduced to accurately diagnose aortic enlargement from chest X-ray images. An enhanced convolutional neural network (CNN) was trained by transfer learning using three different areas from the original images: the left lung, the heart and the right lung. The accuracy of the system was then evaluated on 1001 samples using 4-fold cross-validation. A promising accuracy of 90% was achieved in terms of the F-measure. The results showed that using different areas from the original image in the training phase of the CNN could increase the accuracy of predictions. This encourages evaluating the method on a larger dataset, and even on different CAD systems, for further enhancement of the methodology.
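The evaluation protocol (4-fold cross-validation scored by F-measure on 1001 samples) is easy to pin down in code even though the CNN itself is not specified. A sketch using scikit-learn, with random features and a logistic-regression head standing in for the transfer-learned CNN:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression

def evaluate_cad(features, labels, build_model):
    """4-fold cross-validation scored by F-measure, as in the abstract.
    `build_model` returns a fresh classifier each fold; in the paper the
    features would come from crops of the left lung, heart and right lung."""
    scores = []
    kf = KFold(n_splits=4, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(features):
        model = build_model()
        model.fit(features[train_idx], labels[train_idx])
        pred = model.predict(features[test_idx])
        scores.append(f1_score(labels[test_idx], pred))
    return float(np.mean(scores))

# Toy stand-in: random 'CNN features' and a logistic-regression head.
X = np.random.randn(1001, 512)          # 1001 samples, as in the abstract
y = np.random.randint(0, 2, 1001)
print(evaluate_cad(X, y, lambda: LogisticRegression(max_iter=1000)))
```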
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20diagnosis%20systems" title="computer-aided diagnosis systems">computer-aided diagnosis systems</a>, <a href="https://publications.waset.org/abstracts/search?q=aortic%20enlargement" title=" aortic enlargement"> aortic enlargement</a>, <a href="https://publications.waset.org/abstracts/search?q=chest%20X-ray" title=" chest X-ray"> chest X-ray</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/145129/an-accurate-computer-aided-diagnosis-cad-system-for-diagnosis-of-aortic-enlargement-by-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145129.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3738</span> Clothes Identification Using Inception ResNet V2 and MobileNet V2</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subodh%20Chandra%20Shakya">Subodh Chandra Shakya</a>, <a href="https://publications.waset.org/abstracts/search?q=Badal%20Shrestha"> Badal Shrestha</a>, <a href="https://publications.waset.org/abstracts/search?q=Suni%20Thapa"> Suni Thapa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Chauhan"> Ashutosh Chauhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Saugat%20Adhikari"> Saugat Adhikari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To tackle our problem of clothes identification, we used different architectures of Convolutional Neural Networks. Among different architectures, the outcome from Inception ResNet V2 and MobileNet V2 seemed promising. On comparison of the metrices, we observed that the Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. So this paper of ours proposes the cloth identifier using Inception ResNet V2 and also contains the comparison between the outcome of ResNet V2 and MobileNet V2. The document here contains the results and findings of the research that we performed on the DeepFashion Dataset. To improve the dataset, we used different image preprocessing techniques like image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on cloth identification so that we could develop a reliable system that is good enough in identifying the clothes worn by the users. The whole system can be integrated with some kind of recommendation system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inception%20ResNet" title="inception ResNet">inception ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20net" title=" convolutional neural net"> convolutional neural net</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=confusion%20matrix" title=" confusion matrix"> confusion matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20preprocessing" title=" data preprocessing"> data preprocessing</a> </p> <a href="https://publications.waset.org/abstracts/129604/clothes-identification-using-inception-resnet-v2-and-mobilenet-v2" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3737</span> Unsupervised Images Generation Based on Sloan Digital Sky Survey with Deep Convolutional Generative Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guanghua%20Zhang">Guanghua Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fubao%20Wang"> Fubao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Weijun%20Duan"> Weijun Duan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolution neural network (CNN) has attracted more and more attention on recent years. Especially in the field of computer vision and image classification. However, unsupervised learning with CNN has received less attention than supervised learning. In this work, we use a new powerful tool which is deep convolutional generative adversarial networks (DCGANs) to generate images from Sloan Digital Sky Survey. Training by various star and galaxy images, it shows that both the generator and the discriminator are good for unsupervised learning. In this paper, we also took several experiments to choose the best value for hyper-parameters and which could help to stabilize the training process and promise a good quality of the output. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title="convolution neural network">convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=discriminator" title=" discriminator"> discriminator</a>, <a href="https://publications.waset.org/abstracts/search?q=generator" title=" generator"> generator</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a> </p> <a href="https://publications.waset.org/abstracts/89010/unsupervised-images-generation-based-on-sloan-digital-sky-survey-with-deep-convolutional-generative-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89010.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3736</span> Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J%C3%A9r%C3%A9mie%20Ochin">Jérémie Ochin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Soccer analytics are built on two data sources: the frame-by-frame position of each player on the terrain and the sequences of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball-events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reach levels of performances in mean Average Precision (mAP) compatibles with the automation of such task. Nevertheless, to meet their expectation of exhaustiveness in the context of data analytics, such methods must be applied in a regime of high recall – low precision, using low confidence score thresholds. This setting unavoidably leads to the detection of false positives that are the product of the well documented overconfidence behaviour of neural networks and, in this case, their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players’ behaviour, pose, positions and velocity are highly interrelated and locally driven by the player performing a ball-action, it is hypothesized that the addition of information regarding surrounding player’s appearance, positions and velocity in the prediction methods can improve their metrics. Several methods are compared to build a proper representation of the game surrounding a player, from handcrafted features of the local graph, based on domain knowledge, to the use of Graph Neural Networks trained in an end-to-end fashion with existing state-of-the-art 3D convolutional neural networks. It is shown that the inclusion of information regarding surrounding players helps reaching higher metrics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fine-grained%20action%20recognition" title="fine-grained action recognition">fine-grained action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title=" human action recognition"> human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20networks" title=" graph neural networks"> graph neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20action%20recognition" title=" spatio-temporal action recognition"> spatio-temporal action recognition</a> </p> <a href="https://publications.waset.org/abstracts/192167/game-structure-and-spatio-temporal-action-detection-in-soccer-using-graphs-and-3d-convolutional-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">24</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3735</span> Pose Normalization Network for Object Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Shen">Bingquan Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one have limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and ShapeNet airplanes dataset when given only images at limited viewpoint, as compared to a CNN baseline. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20normalization" title=" pose normalization"> pose normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=viewpoint%20invariant" title=" viewpoint invariant"> viewpoint invariant</a> </p> <a href="https://publications.waset.org/abstracts/56852/pose-normalization-network-for-object-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3734</span> Monitor Student Concentration Levels on Online Education Sessions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20K.%20Wijayarathna">M. K. Wijayarathna</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Buddika%20Harshanath"> S. M. Buddika Harshanath</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Monitoring student engagement has become a crucial part of the educational process and a reliable indicator of the capacity to retain information. As online learning classrooms are now more common these days, students' attention levels have become increasingly important, making it more difficult to check each student's concentration level in an online classroom setting. To profile student attention to various gradients of engagement, a study is a plan to conduct using machine learning models. Using a convolutional neural network, the findings and confidence score of the high accuracy model are obtained. In this research, convolutional neural networks are using to help discover essential emotions that are critical in defining various levels of participation. Students' attention levels were shown to be influenced by emotions such as calm, enjoyment, surprise, and fear. An improved virtual learning system was created as a result of these data, which allowed teachers to focus their support and advise on those students who needed it. Student participation has formed as a crucial component of the learning technique and a consistent predictor of a student's capacity to retain material in the classroom. Convolutional neural networks have a plan to implement the platform. As a preliminary step, a video of the pupil would be taken. In the end, researchers used a convolutional neural network utilizing the Keras toolkit to take pictures of the recordings. Two convolutional neural network methods are planned to use to determine the pupils' attention level. Finally, those predicted student attention level results plan to display on the graphical user interface of the System. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HTML5" title="HTML5">HTML5</a>, <a href="https://publications.waset.org/abstracts/search?q=JavaScript" title=" JavaScript"> JavaScript</a>, <a href="https://publications.waset.org/abstracts/search?q=Python%20flask%20framework" title=" Python flask framework"> Python flask framework</a>, <a href="https://publications.waset.org/abstracts/search?q=AI" title=" AI"> AI</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20user" title=" graphical user"> graphical user</a> </p> <a href="https://publications.waset.org/abstracts/153646/monitor-student-concentration-levels-on-online-education-sessions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3733</span> A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Zhang">Wei Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to the high accuracy and good performance, Convolutional Neural Networks (CNNs) especially have become a research hot spot in the past few years. However, the size of the networks becomes increasingly large scale due to the demands of the practical applications, which poses a significant challenge to construct a high-performance implementation of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and low-power consumption of hardware devices. Therefore, it is particularly critical to choose a moderate computing platform for hardware acceleration of CNNs. This article aimed to survey the recent advance in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of the accelerator based on FPGA under different devices and network models are overviewed, and the versions of Graphic Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we give a discussion on different perspectives of these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research. More helpfully, we give a prospect for future development of the FPGA-based accelerator. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=field%20programmable%20gate%20array" title=" field programmable gate array"> field programmable gate array</a>, <a href="https://publications.waset.org/abstracts/search?q=FPGA" title=" FPGA"> FPGA</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerator" title=" hardware accelerator"> hardware accelerator</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/128017/a-survey-of-field-programmable-gate-array-based-convolutional-neural-network-accelerators" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3732</span> Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20Rose%20T.%20Alipo-on">Jacqueline Rose T. Alipo-on</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesca%20Isabelle%20F.%20Escobar"> Francesca Isabelle F. Escobar</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20Joshua%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20Al%20Dahoul"> Nouar Al Dahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human errors. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform an analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform other models in terms of recall and F1 score using a five-fold average score of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heartbeat%20classification" title="heartbeat classification">heartbeat classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiogram%20signals" title=" electrocardiogram signals"> electrocardiogram signals</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a> </p> <a href="https://publications.waset.org/abstracts/162763/electrocardiogram-based-heartbeat-classification-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162763.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3731</span> Cells Detection and Recognition in Bone Marrow Examination with Deep Learning Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiyin%20He">Shiyin He</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Huang"> Zheng Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, deep learning methods are applied in bio-medical field to detect and count different types of cells in an automatic way instead of manual work in medical practice, specifically in bone marrow examination. The process is mainly composed of two steps, detection and recognition. Mask-Region-Convolutional Neural Networks (Mask-RCNN) was used for detection and image segmentation to extract cells and then Convolutional Neural Networks (CNN), as well as Deep Residual Network (ResNet) was used to classify. Result of cell detection network shows high efficiency to meet application requirements. For the cell recognition network, two networks are compared and the final system is fully applicable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell%20detection" title="cell detection">cell detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20recognition" title=" cell recognition"> cell recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Mask-RCNN" title=" Mask-RCNN"> Mask-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a> </p> <a href="https://publications.waset.org/abstracts/98649/cells-detection-and-recognition-in-bone-marrow-examination-with-deep-learning-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3730</span> Physics-Informed Convolutional Neural Networks for Reservoir Simulation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiangxia%20Han">Jiangxia Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Xue"> Liang Xue</a>, <a href="https://publications.waset.org/abstracts/search?q=Keda%20Chen"> Keda Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Despite the significant progress over the last decades in reservoir simulation using numerical discretization, meshing is complex. Moreover, the high degree of freedom of the space-time flow field makes the solution process very time-consuming. Therefore, we present Physics-Informed Convolutional Neural Networks(PICNN) as a hybrid scientific theory and data method for reservoir modeling. Besides labeled data, the model is driven by the scientific theories of the underlying problem, such as governing equations, boundary conditions, and initial conditions. PICNN integrates governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. The PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how this method can be applied in the context of a forward simulation for continuous problems. Furthermore, several complex scenarios are tested, including the existence of data noise, different work schedules, and different good patterns. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=flow%20and%20transport%20in%20porous%20media" title=" flow and transport in porous media"> flow and transport in porous media</a>, <a href="https://publications.waset.org/abstracts/search?q=physics-informed%20neural%20networks" title=" physics-informed neural networks"> physics-informed neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=reservoir%20simulation" title=" reservoir simulation"> reservoir simulation</a> </p> <a href="https://publications.waset.org/abstracts/156803/physics-informed-convolutional-neural-networks-for-reservoir-simulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156803.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3729</span> The Twin Terminal of Pedestrian Trajectory Based on City Intelligent Model (CIM) 4.0</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chen%20Xi">Chen Xi</a>, <a href="https://publications.waset.org/abstracts/search?q=Lao%20Xuerui"> Lao Xuerui</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Junjie"> Li Junjie</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiang%20Yike"> Jiang Yike</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Hanwei"> Wang Hanwei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zeng%20Zihao"> Zeng Zihao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To further promote the development of smart cities, the microscopic "nerve endings" of the City Intelligent Model (CIM) are extended to be more sensitive. In this paper, we develop a pedestrian trajectory twin terminal based on the CIM and CNN technology. It also uses 5G networks, architectural and geoinformatics technologies, convolutional neural networks, combined with deep learning networks for human behaviour recognition models, to provide empirical data such as 'pedestrian flow data and human behavioural characteristics data', and ultimately form spatial performance evaluation criteria and spatial performance warning systems, to make the empirical data accurate and intelligent for prediction and decision making. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urban%20planning" title="urban planning">urban planning</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20governance" title=" urban governance"> urban governance</a>, <a href="https://publications.waset.org/abstracts/search?q=CIM" title=" CIM"> CIM</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/163907/the-twin-terminal-of-pedestrian-trajectory-based-on-city-intelligent-model-cim-40" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163907.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=125">125</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=126">126</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About 
Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", 
cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10