
Search results for: Deep Neural Network

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Deep Neural Network"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6763</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Deep Neural Network</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6763</span> Exploring Deep Neural Network Compression: An Overview</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghorab%20Sara">Ghorab Sara</a>, <a href="https://publications.waset.org/abstracts/search?q=Meziani%20Lila"> Meziani Lila</a>, <a href="https://publications.waset.org/abstracts/search?q=Rubin%20Harvey%20Stuart"> Rubin Harvey Stuart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid growth of deep learning has led to intricate and resource-intensive deep neural networks widely used in computer vision tasks. However, their complexity results in high computational demands and memory usage, hindering real-time application. To address this, research focuses on model compression techniques. The paper provides an overview of recent advancements in compressing neural networks and categorizes the various methods into four main approaches: network pruning, quantization, network decomposition, and knowledge distillation. This paper aims to provide a comprehensive outline of both the advantages and limitations of each method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=model%20compression" title="model compression">model compression</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=pruning" title=" pruning"> pruning</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20distillation" title=" knowledge distillation"> knowledge distillation</a>, <a href="https://publications.waset.org/abstracts/search?q=quantization" title=" quantization"> quantization</a>, <a href="https://publications.waset.org/abstracts/search?q=low-rank%20decomposition" title=" low-rank decomposition"> low-rank decomposition</a> </p> <a href="https://publications.waset.org/abstracts/185803/exploring-deep-neural-network-compression-an-overview" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185803.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">43</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6762</span> Forecasting the Temperature at a Weather Station Using Deep Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Debneil%20Saha%20Roy">Debneil Saha Roy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Weather forecasting is a complex topic and is well suited for analysis by deep learning approaches. With the wide availability of weather observation data nowadays, these approaches can be utilized to identify immediate comparisons between historical weather forecasts and current observations. This work explores the application of deep learning techniques to weather forecasting in order to accurately predict the weather over a given forecast hori­zon. Three deep neural networks are used in this study, namely, Multi-Layer Perceptron (MLP), Long Short Tunn Memory Network (LSTM) and a combination of Convolutional Neural Network (CNN) and LSTM. The predictive performance of these models is compared using two evaluation metrics. The results show that forecasting accuracy increases with an increase in the complexity of deep neural networks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short%20term%20memory" title=" long short term memory"> long short term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20perceptron" title=" multi-layer perceptron"> multi-layer perceptron</a> </p> <a href="https://publications.waset.org/abstracts/124787/forecasting-the-temperature-at-a-weather-station-using-deep-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124787.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6761</span> Facial Emotion Recognition with Convolutional Neural Network Based Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Koray%20U.%20Erbas">Koray U. Erbas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increase, it is possible to represent more complex relationships with automatically extracted features. Nowadays Deep Neural Networks (DNNs) are widely used in Computer Vision problems such as; classification, object detection, segmentation image editing etc. In this work, Facial Emotion Recognition task is performed by proposed Convolutional Neural Network (CNN)-based DNN architecture using FER2013 Dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated and ablation study results for Pooling Layer, Dropout and Batch Normalization are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20based%20FER" title=" deep learning based FER"> deep learning based FER</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a> </p> <a href="https://publications.waset.org/abstracts/128197/facial-emotion-recognition-with-convolutional-neural-network-based-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128197.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6760</span> A Survey of Sentiment Analysis Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pingping%20Lin">Pingping Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Xudong%20Luo"> Xudong Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yifan%20Fan"> Yifan Fan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as significant e-commerce websites, generate a massive amount of comments, which can be used to analyse peoples opinions or emotions. The existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely on heavily sentiment dictionaries or large amounts of labelled data. The third one overcomes these two problems. So, in this paper, we focus on the third one. Specifically, we survey various sentiment analysis methods based on convolutional neural network, recurrent neural network, long short-term memory, deep neural network, deep belief network, and memory network. We compare their futures, advantages, and disadvantages. Also, we point out the main problems of these methods, which may be worthy of careful studies in the future. Finally, we also examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=document%20analysis" title="document analysis">document analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20sentiment%20analysis" title=" multimodal sentiment analysis"> multimodal sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a> </p> <a href="https://publications.waset.org/abstracts/130107/a-survey-of-sentiment-analysis-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">164</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6759</span> Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xulu%20Yao">Xulu Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Moi%20Hoon%20Yap"> Moi Hoon Yap</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanlong%20Zhang"> Yanlong Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a branch of artificial neural network, deep learning is widely used in the field of image recognition, but the lack of its dataset leads to imperfect model learning. By analysing the data scale requirements of deep learning and aiming at the application in GUI generation, it is found that the collection of GUI dataset is a time-consuming and labor-consuming project, which is difficult to meet the needs of current deep learning network. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale datasets to produce a large number of reliable data sets. By combining the cyclic neural network with the generated countermeasure network, the cyclic neural network can learn the sequence relationship and characteristics of data, make the generated countermeasure network generate reasonable data, and then expand the Rico dataset. Relying on the network structure, the characteristics of collected data can be well analysed, and a large number of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GUI" title="GUI">GUI</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/143650/data-augmentation-for-automatic-graphical-user-interface-generation-based-on-generative-adversarial-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6758</span> Integrating Knowledge Distillation of Multiple Strategies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Min%20Jindong">Min Jindong</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Mingxia"> Wang Mingxia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the widespread use of artificial intelligence in life, computer vision, especially deep convolutional neural network models, has developed rapidly. With the increase of the complexity of the real visual target detection task and the improvement of the recognition accuracy, the target detection network model is also very large. The huge deep neural network model is not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress the huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. In this paper, when training and optimizing the deep neural network model for target detection, the knowledge of the soft target output of the teacher network in knowledge distillation, the relationship between the layers of the teacher network and the feature attention map of the hidden layer of the teacher network are transferred to the student network as all knowledge. in the model. At the same time, we also introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the huge difference between the teacher network and the student network. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model. The student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments in this paper using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed new network model achieves substantial improvements in speed and accuracy performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title="object detection">object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20distillation" title=" knowledge distillation"> knowledge distillation</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20network" title=" convolutional network"> convolutional network</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20compression" title=" model compression"> model compression</a> </p> <a href="https://publications.waset.org/abstracts/148652/integrating-knowledge-distillation-of-multiple-strategies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148652.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6757</span> Positive Bias and Length Bias in Deep Neural Networks for Premises Selection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiaqi%20Huang">Jiaqi Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuheng%20Wang"> Yuheng Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Premises selection, the task of selecting a set of axioms for proving a given conjecture, is a major bottleneck in automated theorem proving. An array of deep-learning-based methods has been established for premises selection, but a perfect performance remains challenging. Our study examines the inaccuracy of deep neural networks in premises selection. Through training network models using encoded conjecture and axiom pairs from the Mizar Mathematical Library, two potential biases are found: the network models classify more premises as necessary than unnecessary, referred to as the ‘positive bias’, and the network models perform better in proving conjectures that paired with more axioms, referred to as ‘length bias’. The ‘positive bias’ and ‘length bias’ discovered could inform the limitation of existing deep neural networks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20theorem%20proving" title="automated theorem proving">automated theorem proving</a>, <a href="https://publications.waset.org/abstracts/search?q=premises%20selection" title=" premises selection"> premises selection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=interpreting%20deep%20learning" title=" interpreting deep learning"> interpreting deep learning</a> </p> <a href="https://publications.waset.org/abstracts/108017/positive-bias-and-length-bias-in-deep-neural-networks-for-premises-selection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6756</span> Using Deep Learning Neural Networks and Candlestick Chart Representation to Predict Stock Market</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosdyana%20Mangir%20Irawan%20Kusuma">Rosdyana Mangir Irawan Kusuma</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Chun%20Kao"> Wei-Chun Kao</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Thi%20Trang"> Ho-Thi Trang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Yen%20Ou"> Yu-Yen Ou</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai-Lung%20Hua"> Kai-Lung Hua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stock market prediction is still a challenging problem because there are many factors that affect the stock market price such as company news and performance, industry performance, investor sentiment, social media sentiment, and economic factors. This work explores the predictability in the stock market using deep convolutional network and candlestick charts. The outcome is utilized to design a decision support framework that can be used by traders to provide suggested indications of future stock price direction. We perform this work using various types of neural networks like convolutional neural network, residual network and visual geometry group network. From stock market historical data, we converted it to candlestick charts. Finally, these candlestick charts will be feed as input for training a convolutional neural network model. This convolutional neural network model will help us to analyze the patterns inside the candlestick chart and predict the future movements of the stock market. The effectiveness of our method is evaluated in stock market prediction with promising results; 92.2% and 92.1 % accuracy for Taiwan and Indonesian stock market dataset respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=candlestick%20chart" title="candlestick chart">candlestick chart</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=stock%20market%20prediction" title=" stock market prediction"> stock market prediction</a> </p> <a href="https://publications.waset.org/abstracts/98615/using-deep-learning-neural-networks-and-candlestick-chart-representation-to-predict-stock-market" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98615.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6755</span> Robot Movement Using the Trust Region Policy Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Romisaa%20Ali">Romisaa Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Policy Gradient approach is one of the deep reinforcement learning families that combines deep neural networks (DNN) with reinforcement learning RL to discover the optimum of the control problem through experience gained from the interaction between the robot and its surroundings. In contrast to earlier policy gradient algorithms, which were unable to handle these two types of error because of over-or under-estimation introduced by the deep neural network model, this article will discuss the state-of-the-art SOTA policy gradient technique, trust region policy optimization (TRPO), by applying this method in various environments compared to another policy gradient method, the Proximal Policy Optimization (PPO), to explain their robust optimization, using this SOTA to gather experience data during various training phases after observing the impact of hyper-parameters on neural network performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title="deep neural networks">deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title=" deep reinforcement learning"> deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=proximal%20policy%20optimization" title=" proximal policy optimization"> proximal policy optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=state-of-the-art" title=" state-of-the-art"> state-of-the-art</a>, <a href="https://publications.waset.org/abstracts/search?q=trust%20region%20policy%20optimization" title=" trust region policy optimization"> trust region policy optimization</a> </p> <a href="https://publications.waset.org/abstracts/158075/robot-movement-using-the-trust-region-policy-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158075.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6754</span> Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Qingjian">Li Qingjian</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Ke"> Li Ke</a>, <a href="https://publications.waset.org/abstracts/search?q=He%20Chun"> He Chun</a>, <a href="https://publications.waset.org/abstracts/search?q=Huang%20Yong"> Huang Yong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the method of combining the Pohl Seidman's deep belief network with the self-organizing neural network is proposed to classify the target. This method is mainly aimed at the high nonlinearity of the hyperspectral image, the high sample dimension and the difficulty in designing the classifier. The main feature of original data is extracted by deep belief network. In the process of extracting features, adding known labels samples to fine tune the network, enriching the main characteristics. Then, the extracted feature vectors are classified into the self-organizing neural network. This method can effectively reduce the dimensions of data in the spectrum dimension in the preservation of large amounts of raw data information, to solve the traditional clustering and the long training time when labeled samples less deep learning algorithm for training problems, improve the classification accuracy and robustness. Through the data simulation, the results show that the proposed network structure can get a higher classification precision in the case of a small number of known label samples. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DBN" title="DBN">DBN</a>, <a href="https://publications.waset.org/abstracts/search?q=SOM" title=" SOM"> SOM</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20classification" title=" pattern classification"> pattern classification</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral" title=" hyperspectral"> hyperspectral</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20compression" title=" data compression"> data compression</a> </p> <a href="https://publications.waset.org/abstracts/89759/hyperspectral-data-classification-algorithm-based-on-the-deep-belief-and-self-organizing-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6753</span> Automated Weight Painting: Using Deep Neural Networks to Adjust 3D Mesh Skeletal Weights</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20Gibbs">John Gibbs</a>, <a href="https://publications.waset.org/abstracts/search?q=Benjamin%20Flanders"> Benjamin Flanders</a>, <a href="https://publications.waset.org/abstracts/search?q=Dylan%20Pozorski"> Dylan Pozorski</a>, <a href="https://publications.waset.org/abstracts/search?q=Weixuan%20Liu"> Weixuan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Weight Painting–adjusting the influence a skeletal joint has on a given vertex in a character mesh–is an arduous and time con- suming part of the 3D animation pipeline. This process generally requires a trained technical animator and many hours of work to complete. Our skiNNer plug-in, which works within Autodesk’s Maya 3D animation software, uses Machine Learning and data pro- cessing techniques to create a deep neural network model that can accomplish the weight painting task in seconds rather than hours for bipedal quasi-humanoid character meshes. In order to create a properly trained network, a number of challenges were overcome, including curating an appropriately large data library, managing an arbitrary 3D mesh size, handling arbitrary skeletal architectures, accounting for extreme numeric values (most data points are near 0 or 1 for weight maps), and constructing an appropriate neural network model that can properly capture the high frequency alter- ation between high weight values (near 1.0) and low weight values (near 0.0). The arrived at neural network model is a cross between a traditional CNN, deep residual network, and fully dense network. The resultant network captures the unusually hard-edged features of a weight map matrix, and produces excellent results on many bipedal models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3d%20animation" title="3d animation">3d animation</a>, <a href="https://publications.waset.org/abstracts/search?q=animation" title=" animation"> animation</a>, <a href="https://publications.waset.org/abstracts/search?q=character" title=" character"> character</a>, <a href="https://publications.waset.org/abstracts/search?q=rigging" title=" rigging"> rigging</a>, <a href="https://publications.waset.org/abstracts/search?q=skinning" title="skinning">skinning</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20painting" title=" weight painting"> weight painting</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a> </p> <a href="https://publications.waset.org/abstracts/148327/automated-weight-painting-using-deep-neural-networks-to-adjust-3d-mesh-skeletal-weights" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148327.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6752</span> Deep Neural Network Approach for Navigation of Autonomous Vehicles</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mayank%20Raj">Mayank Raj</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20G.%20Narendra"> V. G. Narendra </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ever since the DARPA challenge on autonomous vehicles in 2005, there has been a lot of buzz about ‘Autonomous Vehicles’ amongst the major tech giants such as Google, Uber, and Tesla. Numerous approaches have been adopted to solve this problem, which can have a long-lasting impact on mankind. In this paper, we have used Deep Learning techniques and TensorFlow framework with the goal of building a neural network model to predict (speed, acceleration, steering angle, and brake) features needed for navigation of autonomous vehicles. The Deep Neural Network has been trained on images and sensor data obtained from the comma.ai dataset. A heatmap was used to check for correlation among the features, and finally, four important features were selected. This was a multivariate regression problem. The final model had five convolutional layers, followed by five dense layers. Finally, the calculated values were tested against the labeled data, where the mean squared error was used as a performance metric. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20vehicles" title="autonomous vehicles">autonomous vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a> </p> <a href="https://publications.waset.org/abstracts/130762/deep-neural-network-approach-for-navigation-of-autonomous-vehicles" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130762.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6751</span> Classification Based on Deep Neural Cellular Automata Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasser%20F.%20Hassan">Yasser F. Hassan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning structure is a branch of machine learning science and greet achievement in research and applications. Cellular neural networks are regarded as array of nonlinear analog processors called cells connected in a way allowing parallel computations. The paper discusses how to use deep learning structure for representing neural cellular automata model. The proposed learning technique in cellular automata model will be examined from structure of deep learning. A deep automata neural cellular system modifies each neuron based on the behavior of the individual and its decision as a result of multi-level deep structure learning. The paper will present the architecture of the model and the results of simulation of approach are given. Results from the implementation enrich deep neural cellular automata system and shed a light on concept formulation of the model and the learning in it. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cellular%20automata" title="cellular automata">cellular automata</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20cellular%20automata" title=" neural cellular automata"> neural cellular automata</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/104722/classification-based-on-deep-neural-cellular-automata-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104722.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6750</span> Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuen-Tai%20Wang">Shuen-Tai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang-An%20Kuo"> Fang-An Kuo</a>, <a href="https://publications.waset.org/abstracts/search?q=Chau-Yi%20Chou"> Chau-Yi Chou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Bin%20Fang"> Yu-Bin Fang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 2016 has become the year of the Artificial Intelligence explosion. AI technologies are getting more and more matured that most world well-known tech giants are making large investment to increase the capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural network to train a machine to learn&nbsp; features directly from data. Deep learning realizes many machine learning applications which expand the field of AI. At the present time, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes or algorithms, but the performance of different frameworks might be different. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that are running training calculation in parallel over multi GPU and multi nodes in our cloud environment. We evaluate the training performance of the frameworks with ResNet-50 convolutional neural network, and we analyze what factors that result in the performance among both distributed frameworks as well. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/110135/performance-evaluation-of-distributed-deep-learning-frameworks-in-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110135.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6749</span> An Event Relationship Extraction Method Incorporating Deep Feedback Recurrent Neural Network and Bidirectional Long Short-Term Memory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yin%20Yuanling">Yin Yuanling</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A Deep Feedback Recurrent Neural Network (DFRNN) and Bidirectional Long Short-Term Memory (BiLSTM) are designed to address the problem of low accuracy of traditional relationship extraction models. This method combines a deep feedback-based recurrent neural network (DFRNN) with a bi-directional long short-term memory (BiLSTM) approach. The method combines DFRNN, which extracts local features of text based on deep feedback recurrent mechanism, BiLSTM, which better extracts global features of text, and Self-Attention, which extracts semantic information. Experiments show that the method achieves an F1 value of 76.69% on the CEC dataset, which is 0.0652 better than the BiLSTM+Self-ATT model, thus optimizing the performance of the deep learning method in the event relationship extraction task. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20relations" title="event relations">event relations</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=DFRNN%20models" title=" DFRNN models"> DFRNN models</a>, <a href="https://publications.waset.org/abstracts/search?q=bi-directional%20long%20and%20short-term%20memory%20networks" title=" bi-directional long and short-term memory networks"> bi-directional long and short-term memory networks</a> </p> <a href="https://publications.waset.org/abstracts/156673/an-event-relationship-extraction-method-incorporating-deep-feedback-recurrent-neural-network-and-bidirectional-long-short-term-memory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156673.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6748</span> A Case Study of Deep Learning for Disease Detection in Crops</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Felipe%20A.%20Guth">Felipe A. Guth</a>, <a href="https://publications.waset.org/abstracts/search?q=Shane%20Ward"> Shane Ward</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20McDonnell"> Kevin McDonnell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the precision agriculture area, one of the main tasks is the automated detection of diseases in crops. Machine Learning algorithms have been studied in recent decades for such tasks in view of their potential for improving economic outcomes that automated disease detection may attain over crop fields. The latest generation of deep learning convolution neural networks has presented significant results in the area of image classification. In this way, this work has tested the implementation of an architecture of deep learning convolution neural network for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the requirements of the algorithm implemented with a deep learning framework. Two test scenarios were deployed. The first scenario implemented a neural network under images extracted from a controlled environment while the second one took images both from the field and the controlled environment. The results evaluated the generalisation capacity of the neural networks in relation to the two types of images presented. Results yielded a general classification accuracy of 59% in scenario 1 and 96% in scenario 2. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=disease%20detection" title=" disease detection"> disease detection</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a> </p> <a href="https://publications.waset.org/abstracts/95339/a-case-study-of-deep-learning-for-disease-detection-in-crops" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95339.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6747</span> Cells Detection and Recognition in Bone Marrow Examination with Deep Learning Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiyin%20He">Shiyin He</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Huang"> Zheng Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, deep learning methods are applied in bio-medical field to detect and count different types of cells in an automatic way instead of manual work in medical practice, specifically in bone marrow examination. The process is mainly composed of two steps, detection and recognition. Mask-Region-Convolutional Neural Networks (Mask-RCNN) was used for detection and image segmentation to extract cells and then Convolutional Neural Networks (CNN), as well as Deep Residual Network (ResNet) was used to classify. Result of cell detection network shows high efficiency to meet application requirements. For the cell recognition network, two networks are compared and the final system is fully applicable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell%20detection" title="cell detection">cell detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20recognition" title=" cell recognition"> cell recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Mask-RCNN" title=" Mask-RCNN"> Mask-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a> </p> <a href="https://publications.waset.org/abstracts/98649/cells-detection-and-recognition-in-bone-marrow-examination-with-deep-learning-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6746</span> A Comparison of Methods for Neural Network Aggregation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20Pomerat">John Pomerat</a>, <a href="https://publications.waset.org/abstracts/search?q=Aviv%20Segev"> Aviv Segev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, deep learning has had many theoretical breakthroughs. For deep learning to be successful in the industry, however, there need to be practical algorithms capable of handling many real-world hiccups preventing the immediate application of a learning algorithm. Although AI promises to revolutionize the healthcare industry, getting access to patient data in order to train learning algorithms has not been easy. One proposed solution to this is data- sharing. In this paper, we propose an alternative protocol, based on multi-party computation, to train deep learning models while maintaining both the privacy and security of training data. We examine three methods of training neural networks in this way: Transfer learning, average ensemble learning, and series network learning. We compare these methods to the equivalent model obtained through data-sharing across two different experiments. Additionally, we address the security concerns of this protocol. While the motivating example is healthcare, our findings regarding multi-party computation of neural network training are purely theoretical and have use-cases outside the domain of healthcare. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20network%20aggregation" title="neural network aggregation">neural network aggregation</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-party%20computation" title=" multi-party computation"> multi-party computation</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20ensemble%20learning" title=" average ensemble learning"> average ensemble learning</a> </p> <a href="https://publications.waset.org/abstracts/128037/a-comparison-of-methods-for-neural-network-aggregation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6745</span> Improved Super-Resolution Using Deep Denoising Convolutional Neural Network </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pawan%20Kumar%20Mishra">Pawan Kumar Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Singh%20Bisht"> Ganesh Singh Bisht</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique that is being used in computer vision to construct high-resolution images from a single low-resolution image. It is used to increase the frequency component, recover the lost details and removing the down sampling and noises that caused by camera during image acquisition process. High-resolution images or videos are desired part of all image processing tasks and its analysis in most of digital imaging application. The target behind super-resolution is to combine non-repetition information inside single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed where multiple images are used as low-resolution images of same scene with different variation in transformation. This is called multi-image super resolution. And another family of methods is single image super-resolution that tries to learn redundancy that presents in image and reconstruction the lost information from a single low-resolution image. Use of deep learning is one of state of art method at present for solving reconstruction high-resolution image. In this research, we proposed Deep Denoising Super Resolution (DDSR) that is a deep neural network for effectively reconstruct the high-resolution image from low-resolution image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=resolution" title="resolution">resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=de-blurring" title=" de-blurring"> de-blurring</a> </p> <a href="https://publications.waset.org/abstracts/78802/improved-super-resolution-using-deep-denoising-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">517</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6744</span> Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and- Excitation Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Kewen">Li Kewen</a>, <a href="https://publications.waset.org/abstracts/search?q=Su%20Zhaoxin"> Su Zhaoxin</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Xingmou"> Wang Xingmou</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhu%20Jian%20Bing"> Zhu Jian Bing </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conventional reservoir prediction methods ar not sufficient to explore the implicit relation between seismic attributes, and thus data utilization is low. In order to improve the predictive classification accuracy of reservoir lithology, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained by using seismic attribute data and lithology data of Shengli oilfield, and the nonlinear mapping relationship between seismic attribute and lithology marker is established. The experimental results show that this method can significantly improve the classification effect of reservoir lithology, and the classification accuracy is close to 70%. This study can effectively predict the lithology of undrilled area and provide support for exploration and development. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=lithology" title=" lithology"> lithology</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction%20of%20reservoir" title=" prediction of reservoir"> prediction of reservoir</a>, <a href="https://publications.waset.org/abstracts/search?q=seismic%20attributes" title=" seismic attributes "> seismic attributes </a> </p> <a href="https://publications.waset.org/abstracts/121343/research-on-reservoir-lithology-prediction-based-on-residual-neural-network-and-squeeze-and-excitation-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121343.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6743</span> Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nkechinyere%20Joy%20Olawuyi">Nkechinyere Joy Olawuyi</a>, <a href="https://publications.waset.org/abstracts/search?q=Babajide%20Samuel%20Afolabi"> Babajide Samuel Afolabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Bola%20Ibitoye"> Bola Ibitoye</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of the formulated model. This was with the view to developing a deep neural network model to automatically classify abnormalities in chest radiographs. In order to achieve the overall purpose of this research, a large set of chest x-ray images were sourced for and collected from the CheXpert dataset, which is an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network. The preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used convolutional neural network architecture to make a decision on whether an abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and Area Under Curve (AUC) score as the parameter. A prototype of the classification model was implemented using Keras Open source deep learning framework in Python Programming Language. The AUC ROC curve of the model was able to classify Atelestasis, Support devices, Pleural effusion, Pneumonia, A normal CXR (no finding), Pneumothorax, and Consolidation. However, Lung opacity and Cardiomegaly had a probability of less than 0.5 and thus were classified as absent. 
Precision, recall, and F1 score values were 0.78; this implies that the number of False Positive and False Negative is the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities present in chest radiographs into present or absent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title="transfer learning">transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=radiograph" title=" radiograph"> radiograph</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-label" title=" multi-label"> multi-label</a> </p> <a href="https://publications.waset.org/abstracts/160741/domain-specific-deep-neural-network-model-for-classification-of-abnormalities-on-chest-radiographs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160741.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6742</span> Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thiago%20Spilborghs%20Bueno%20Meyer">Thiago Spilborghs Bueno Meyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Plinio%20Thomaz%20Aquino%20Junior"> Plinio Thomaz Aquino Junior</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, the explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing the interaction between humans closer to the human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, it is proposed the use of neural networks and comparison of models, such as recurrent neural networks and deep neural networks, in order to carry out the classification of emotions through speech signals to verify the quality of recognition. It is expected to enable the implementation of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-Frequency Cepstral Coefficients, as well as tests with several characteristics of Delta-MFCC, spectral contrast, and the Mel spectrogram. 
To carry out the training, validation and testing of the neural networks, the eNTERFACE’05 database was used, which has 42 speakers from 14 different nationalities speaking the English language. The data from the chosen database are videos that, for use in neural networks, were converted into audios. It was found as a result, a classification of 51,969% of correct answers when using the deep neural network, when the use of the recurrent neural network was verified, with the classification with accuracy equal to 44.09%. The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for the classification, using the classifier with the deep neural network, and in only one case, it is possible to observe a greater accuracy by the recurrent neural network, which occurs in the use of various features and setting 73 for batch size and 100 training epochs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot%20interaction" title=" human-robot interaction"> human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/145908/speech-emotion-recognition-a-dnn-and-lstm-comparison-in-single-and-multiple-feature-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6741</span> Efficient Deep Neural Networks for Real-Time Strawberry Freshness Monitoring: A Transfer Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mst.%20Tuhin%20Akter">Mst. Tuhin Akter</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharun%20Akter%20Khushbu"> Sharun Akter Khushbu</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Shaqib"> S. M. Shaqib</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A real-time system architecture is highly effective for monitoring and detecting various damaged products or fruits that may deteriorate over time or become infected with diseases. Deep learning models have proven to be effective in building such architectures. However, building a deep learning model from scratch is a time-consuming and costly process. A more efficient solution is to utilize deep neural network (DNN) based transfer learning models in the real-time monitoring architecture. This study focuses on using a novel strawberry dataset to develop effective transfer learning models for the proposed real-time monitoring system architecture, specifically for evaluating and detecting strawberry freshness. 
Several state-of-the-art transfer learning models were employed, and the best performing model was found to be Xception, demonstrating higher performance across evaluation metrics such as accuracy, recall, precision, and F1-score. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strawberry%20freshness%20evaluation" title="strawberry freshness evaluation">strawberry freshness evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20augmentation" title=" image augmentation"> image augmentation</a> </p> <a href="https://publications.waset.org/abstracts/177872/efficient-deep-neural-networks-for-real-time-strawberry-freshness-monitoring-a-transfer-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177872.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6740</span> Classification of Echo Signals Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aisulu%20Tileukulova">Aisulu Tileukulova</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhexebay%20Dauren"> Zhexebay Dauren</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radar plays an important role because it is widely used in civil and military fields. Target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is lower against the background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as training a neural network. In this paper, the structure of a convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 layers of a fully connected perceptron. ReLU is used as an activation function in convolutional layers, while the last layer uses softmax. It is necessary to form a data set for training a neural network in order to detect a target. We built a Confusion Matrix of the CNN model to measure the effectiveness of our model. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using CNN shows high accuracy and significantly speeds up the process of predicting the target. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=echo%20signals" title=" echo signals"> echo signals</a> </p> <a href="https://publications.waset.org/abstracts/147596/classification-of-echo-signals-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147596.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6739</span> Neural Network Based Decision Trees Using Machine Learning for Alzheimer&#039;s Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20S.%20Jagadeesh%20Kumar">P. S. Jagadeesh Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Tracy%20Lin%20Huan"> Tracy Lin Huan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Meenakshi%20Sundaram"> S. Meenakshi Sundaram</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Alzheimer’s disease is one of the prevalent kind of ailment, expected for impudent reconciliation or an effectual therapy is to be accredited hitherto. Probable detonation of patients in the upcoming years, and consequently an enormous deal of apprehension in early discovery of the disorder, this will conceivably chaperon to enhanced healing outcomes. Complex impetuosity of the brain is an observant symbolic of the disease and a unique recognition of genetic sign of the disease. Machine learning alongside deep learning and decision tree reinforces the aptitude to absorb characteristics from multi-dimensional data’s and thus simplifies automatic classification of Alzheimer’s disease. Susceptible testing was prophesied and realized in training the prospect of Alzheimer’s disease classification built on machine learning advances. It was shrewd that the decision trees trained with deep neural network fashioned the excellent results parallel to related pattern classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alzheimer%27s%20diagnosis" title="Alzheimer&#039;s diagnosis">Alzheimer&#039;s diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20trees" title=" decision trees"> decision trees</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20classification" title=" pattern classification"> pattern classification</a> </p> <a href="https://publications.waset.org/abstracts/77725/neural-network-based-decision-trees-using-machine-learning-for-alzheimers-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6738</span> Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahma%20Abed">Rahma Abed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahbi%20Bahroun"> Sahbi Bahroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ezzeddine%20Zagrouba"> Ezzeddine Zagrouba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the huge amount of data in videos, extracting the relevant frames became a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, including Gabor, LBP, and HOG. The second step consists in training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to the methods of the state of the art. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keyframe%20extraction" title="keyframe extraction">keyframe extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20quality%20assessment" title=" face quality assessment"> face quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20in%20video%20recognition" title=" face in video recognition"> face in video recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/111347/keyframe-extraction-using-face-quality-assessment-and-convolution-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6737</span> Deep Learning for Recommender System: Principles, Methods and Evaluation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Basiliyos%20Tilahun%20Betru">Basiliyos Tilahun Betru</a>, <a href="https://publications.waset.org/abstracts/search?q=Charles%20Awono%20Onana"> Charles Awono Onana</a>, <a href="https://publications.waset.org/abstracts/search?q=Bernabe%20Batchakui"> Bernabe Batchakui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recommender systems have become increasingly popular in recent years, and are utilized in numerous areas. Nowadays many web services provide several information for users and recommender systems have been developed as critical element of these web applications to predict choice of preference and provide significant recommendations. With the help of the advantage of deep learning in modeling different types of data and due to the dynamic change of user preference, building a deep model can better understand users demand and further improve quality of recommendation. In this paper, deep neural network models for recommender system are evaluated. Most of deep neural network models in recommender system focus on the classical collaborative filtering user-item setting. Deep learning models demonstrated high level features of complex data can be learned instead of using metadata which can significantly improve accuracy of recommendation. Even though deep learning poses a great impact in various areas, applying the model to a recommender system have not been fully exploited and still a lot of improvements can be done both in collaborative and content-based approach while considering different contextual factors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20making" title=" decision making"> decision making</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=recommender%20system" title=" recommender system"> recommender system</a> </p> <a href="https://publications.waset.org/abstracts/74244/deep-learning-for-recommender-system-principles-methods-and-evaluation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74244.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6736</span> Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Celso%20L.%20Fernando">Celso L. Fernando</a>, <a href="https://publications.waset.org/abstracts/search?q=Toshio%20Yoshii"> Toshio Yoshii</a>, <a href="https://publications.waset.org/abstracts/search?q=Takahiro%20Tsubota"> Takahiro Tsubota</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Understanding the causes of a road accident and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as the Poisson and the Logistics regressions have been used to find the association of the traffic environmental factors with the accident occurred; recently, an artificial neural network, ANN, a computational technique that learns from historical data to make a more accurate prediction, has emerged. Although the ability to make accurate predictions, the ANN has difficulty dealing with highly unbalanced attribute patterns distribution in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in the real world data, the minority group is often the group of interest; e.g., in the road traffic accident data, the events of the accident are the group of interest. This study proposes a combination of the k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of the attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make a prediction on a highly unbalanced distributed attribute patterns dataset; however, on an even distributed attribute patterns dataset, the proposed method performs almost like a standard neural network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accident%20risks%20estimation" title="accident risks estimation">accident risks estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=k-mean" title=" k-mean"> k-mean</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20safety" title=" road safety"> road safety</a> </p> <a href="https://publications.waset.org/abstracts/148822/combining-the-deep-neural-network-with-the-k-means-for-traffic-accident-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6735</span> Remote Sensing through Deep Neural Networks for Satellite Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teja%20Sai%20Puligadda">Teja Sai Puligadda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite images in detail can serve an important role in the geographic study. Quantitative and qualitative information provided by the satellite and remote sensing images minimizes the complexity of work and time. Data/images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous, and it expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural and forests are all part of satellite image categorization. One of the biggest challenge data scientists faces while classifying satellite images is finding the best suitable classification algorithms based on the available that could able to classify images with utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning machine algorithms. As, the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision and the ANN algorithm stores information on the entire network (Abhishek Gupta., 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through Deep Neural Networks i.e., ANN and CNN with Deep Sat (SAT-4) Airborne dataset for classifying images. Thus, in this project of classifying satellite images, the algorithms ANN and CNN are implemented, evaluated & compared and the performance is analyzed through evaluation metrics such as Accuracy and Loss. Additionally, the Neural Network algorithm which gives the lowest bias and lowest variance in solving multi-class satellite image classification is analyzed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=loss" title=" loss"> loss</a> </p> <a href="https://publications.waset.org/abstracts/146723/remote-sensing-through-deep-neural-networks-for-satellite-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146723.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6734</span> A Custom Convolutional Neural Network with Hue, Saturation, Value Color for Malaria Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghazala%20Hcini">Ghazala Hcini</a>, <a href="https://publications.waset.org/abstracts/search?q=Imen%20Jdey"> Imen Jdey</a>, <a href="https://publications.waset.org/abstracts/search?q=Hela%20Ltifi"> Hela Ltifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malaria disease should be considered and handled as a potential restorative catastrophe. One of the most challenging tasks in the field of microscopy image processing is due to differences in test design and vulnerability of cell classifications. In this article, we focused on applying deep learning to classify patients by identifying images of infected and uninfected cells. We performed multiple forms, counting a classification approach using the Hue, Saturation, Value (HSV) color space. HSV is used since of its superior ability to speak to image brightness; at long last, for classification, a convolutional neural network (CNN) architecture is created. Clusters of focus were used to deliver the classification. The highlights got to be forbidden, and a few more clamor sorts are included in the information. The suggested method has a precision of 99.79%, a recall value of 99.55%, and provides 99.96% accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transformation" title=" color transformation"> color transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color" title=" HSV color"> HSV color</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20diagnosis" title=" malaria diagnosis"> malaria diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20cells%20images" title=" malaria cells images"> malaria cells images</a> </p> <a href="https://publications.waset.org/abstracts/161232/a-custom-convolutional-neural-network-with-hue-saturation-value-color-for-malaria-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=225">225</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=226">226</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Deep%20Neural%20Network&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div 
class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { 
/*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10