CINXE.COM
Search results for: deep convolution network
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: deep convolution network</title> <meta name="description" content="Search results for: deep convolution network"> <meta name="keywords" content="deep convolution network"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="deep convolution network" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="deep convolution network"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6364</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: deep convolution network</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6364</span> Detecting Manipulated Media Using Deep Capsule Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joseph%20Uzuazomaro%20Oju">Joseph Uzuazomaro Oju</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The ease at which manipulated media can be created, and the increasing difficulty in identifying fake media makes it a great threat. Most of the applications used for the creation of these high-quality fake videos and images are built with deep learning. Hence, the use of deep learning in creating a detection mechanism cannot be overemphasized. Any successful fake media that is being detected before it reached the populace will save people from the self-doubt of either a content is genuine or fake and will ensure the credibility of videos and images. The methodology introduced in this paper approaches the manipulated media detection challenge using a combo of VGG-19 and a deep capsule network. In the case of videos, they are converted into frames, which, in turn, are resized and cropped to the face region. These preprocessed images/videos are fed to the VGG-19 network to extract the latent features. The extracted latent features are inputted into a deep capsule network enhanced with a 3D -convolution dynamic routing agreement. The 3D 鈥揷onvolution dynamic routing agreement algorithm helps to reduce the linkages between capsules networks. Thereby limiting the poor learning shortcoming of multiple capsule network layers. The resultant output from the deep capsule network will indicate a media to be either genuine or fake. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20capsule%20network" title="deep capsule network">deep capsule network</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20routing" title=" dynamic routing"> dynamic routing</a>, <a href="https://publications.waset.org/abstracts/search?q=fake%20media%20detection" title=" fake media detection"> fake media detection</a>, <a href="https://publications.waset.org/abstracts/search?q=manipulated%20media" title=" manipulated media"> manipulated media</a> </p> <a href="https://publications.waset.org/abstracts/123371/detecting-manipulated-media-using-deep-capsule-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123371.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6363</span> Facial Emotion Recognition Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Mishra">Ashutosh Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikhil%20Goyal"> Nikhil Goyal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture. After the convolution process, the pooling is finished. The probabilities for various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of deep learning-based systems, a set of faces. The Kaggle dataset is used to verify the accuracy of a deep learning-based face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques. Despite significant gains in representation precision due to the nonlinearity of profound image representations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title="facial recognition">facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20intelligence" title=" computational intelligence"> computational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a> </p> <a href="https://publications.waset.org/abstracts/139253/facial-emotion-recognition-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6362</span> Satellite Imagery Classification Based on Deep Convolution Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhong%20Ma">Zhong Ma</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhuping%20Wang"> Zhuping Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Congxin%20Liu"> Congxin Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiangzeng%20Liu"> Xiangzeng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolution neural network (DCNN) to classify the satellite imagery. The contributions of this paper are twofold — First, to cope with the large-scale variance in the satellite image, we introduced the inception module, which has multiple filters with different size at the same level, as the building block to build our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on the benchmark database. The results of the proposed hyper-parameters search method show it will guide the search towards better regions of the parameter space. Based on the found hyper-parameters, we built our DCNN models, and evaluated its performance on satellite imagery classification, the results show the classification accuracy of proposed models outperform the state of the art method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery%20classification" title="satellite imagery classification">satellite imagery classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network" title=" deep convolution network"> deep convolution network</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=hyper-parameter%20optimization" title=" hyper-parameter optimization"> hyper-parameter optimization</a> </p> <a href="https://publications.waset.org/abstracts/44963/satellite-imagery-classification-based-on-deep-convolution-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6361</span> Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahma%20Abed">Rahma Abed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahbi%20Bahroun"> Sahbi Bahroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ezzeddine%20Zagrouba"> Ezzeddine Zagrouba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the huge amount of data in videos, extracting the relevant frames became a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, including Gabor, LBP, and HOG. The second step consists in training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to the methods of the state of the art. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keyframe%20extraction" title="keyframe extraction">keyframe extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20quality%20assessment" title=" face quality assessment"> face quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20in%20video%20recognition" title=" face in video recognition"> face in video recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/111347/keyframe-extraction-using-face-quality-assessment-and-convolution-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6360</span> Unsupervised Images Generation Based on Sloan Digital Sky Survey with Deep Convolutional Generative Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guanghua%20Zhang">Guanghua Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fubao%20Wang"> Fubao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Weijun%20Duan"> Weijun Duan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolution neural network (CNN) has attracted more and more attention on recent years. Especially in the field of computer vision and image classification. However, unsupervised learning with CNN has received less attention than supervised learning. In this work, we use a new powerful tool which is deep convolutional generative adversarial networks (DCGANs) to generate images from Sloan Digital Sky Survey. Training by various star and galaxy images, it shows that both the generator and the discriminator are good for unsupervised learning. In this paper, we also took several experiments to choose the best value for hyper-parameters and which could help to stabilize the training process and promise a good quality of the output. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title="convolution neural network">convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=discriminator" title=" discriminator"> discriminator</a>, <a href="https://publications.waset.org/abstracts/search?q=generator" title=" generator"> generator</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a> </p> <a href="https://publications.waset.org/abstracts/89010/unsupervised-images-generation-based-on-sloan-digital-sky-survey-with-deep-convolutional-generative-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89010.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6359</span> Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Halimatu%20S.%20Abdullahi">Halimatu S. Abdullahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ray%20E.%20Sheriff"> Ray E. Sheriff</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Mahieddine"> Fatima Mahieddine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Agriculture is essential to the continuous existence of human life as they directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food with the application of technology to reduce the laborious work and maximize production. Technology can aid/improve agriculture in several ways through pre-planning and post-harvest by the use of computer vision technology through image processing to determine the soil nutrient composition, right amount, right time, right place application of farm input resources like fertilizers, herbicides, water, weed detection, early detection of pest and diseases etc. This is precision agriculture which is thought to be solution required to achieve our goals. There has been significant improvement in the area of image processing and data processing which has being a major challenge. A database of images is collected through remote sensing, analyzed and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetations need to be extracted, classified, segmented and finally fed into the model. Different techniques have been applied to the processes from the use of neural network, support vector machine, fuzzy logic approach and recently, the most effective approach generating excellent results using the deep learning approach of convolution neural network for image classifications. Deep Convolution neural network is used to determine soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded results with an average accuracy of 99.58%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution" title="convolution">convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=validation" title=" validation"> validation</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a> </p> <a href="https://publications.waset.org/abstracts/70564/advances-of-image-processing-in-precision-agriculture-using-deep-learning-convolution-neural-network-for-soil-nutrient-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70564.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6358</span> A Case Study of Deep Learning for Disease Detection in Crops</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Felipe%20A.%20Guth">Felipe A. Guth</a>, <a href="https://publications.waset.org/abstracts/search?q=Shane%20Ward"> Shane Ward</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20McDonnell"> Kevin McDonnell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the precision agriculture area, one of the main tasks is the automated detection of diseases in crops. Machine Learning algorithms have been studied in recent decades for such tasks in view of their potential for improving economic outcomes that automated disease detection may attain over crop fields. The latest generation of deep learning convolution neural networks has presented significant results in the area of image classification. In this way, this work has tested the implementation of an architecture of deep learning convolution neural network for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the requirements of the algorithm implemented with a deep learning framework. Two test scenarios were deployed. The first scenario implemented a neural network under images extracted from a controlled environment while the second one took images both from the field and the controlled environment. The results evaluated the generalisation capacity of the neural networks in relation to the two types of images presented. Results yielded a general classification accuracy of 59% in scenario 1 and 96% in scenario 2. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=disease%20detection" title=" disease detection"> disease detection</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a> </p> <a href="https://publications.waset.org/abstracts/95339/a-case-study-of-deep-learning-for-disease-detection-in-crops" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95339.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6357</span> Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaokai%20Liu">Gaokai Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic defect detection for nanomaterial images is widely required in industrial scenarios. Deep learning approaches are considered as the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, the encoder path with multiple convolution and downsampling operations is applied to the acquisition of shared features. Then two decoder paths both are connected to the last convolution layer of the encoder and supervised by the edge and segmentation labels, respectively, to guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated to the segmentation corresponding part to further tune the segmentation result. Finally, the effectiveness of the proposed method is verified via the experiments on open nanofibrous datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=defect%20detection" title=" defect detection"> defect detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=nanomaterials" title=" nanomaterials"> nanomaterials</a> </p> <a href="https://publications.waset.org/abstracts/133093/defect-detection-for-nanofibrous-images-with-deep-learning-based-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6356</span> Estimating Cyclone Intensity Using INSAT-3D IR Images Based on Convolution Neural Network Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Divvela%20Vishnu%20Sai%20Kumar">Divvela Vishnu Sai Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Deepak%20Arora"> Deepak Arora</a>, <a href="https://publications.waset.org/abstracts/search?q=Sheenu%20Rizvi"> Sheenu Rizvi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forecasting a cyclone through satellite images consists of the estimation of the intensity of the cyclone and predicting it before a cyclone comes. This research work can help people to take safety measures before the cyclone comes. The prediction of the intensity of a cyclone is very important to save lives and minimize the damage caused by cyclones. These cyclones are very costliest natural disasters that cause a lot of damage globally due to a lot of hazards. Authors have proposed five different CNN (Convolutional Neural Network) models that estimate the intensity of cyclones through INSAT-3D IR images. There are a lot of techniques that are used to estimate the intensity; the best model proposed by authors estimates intensity with a root mean squared error (RMSE) of 10.02 kts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=estimating%20cyclone%20intensity" title="estimating cyclone intensity">estimating cyclone intensity</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction%20models" title=" prediction models"> prediction models</a> </p> <a href="https://publications.waset.org/abstracts/163095/estimating-cyclone-intensity-using-insat-3d-ir-images-based-on-convolution-neural-network-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163095.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6355</span> Water Body Detection and Estimation from Landsat Satellite Images Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Devaki">M. Devaki</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20B.%20Jayanthi"> K. B. Jayanthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The identification of water bodies from satellite images has recently received a great deal of attention. Different methods have been developed to distinguish water bodies from various satellite images that vary in terms of time and space. Urban water identification issues body manifests in numerous applications with a great deal of certainty. There has been a sharp rise in the usage of satellite images to map natural resources, including urban water bodies and forests, during the past several years. This is because water and forest resources depend on each other so heavily that ongoing monitoring of both is essential to their sustainable management. The relevant elements from satellite pictures have been chosen using a variety of techniques, including machine learning. Then, a convolution neural network (CNN) architecture is created that can identify a superpixel as either one of two classes, one that includes water or doesn't from input data in a complex metropolitan scene. The deep learning technique, CNN, has advanced tremendously in a variety of visual-related tasks. CNN can improve classification performance by reducing the spectral-spatial regularities of the input data and extracting deep features hierarchically from raw pictures. Calculate the water body using the satellite image's resolution. Experimental results demonstrate that the suggested method outperformed conventional approaches in terms of water extraction accuracy from remote-sensing images, with an average overall accuracy of 97%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=water%20body" title="water body">water body</a>, <a href="https://publications.waset.org/abstracts/search?q=Deep%20learning" title=" Deep learning"> Deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/162827/water-body-detection-and-estimation-from-landsat-satellite-images-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162827.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6354</span> Weed Classification Using a Two-Dimensional Deep Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Ali%20Sarwar">Muhammad Ali Sarwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Farooq"> Muhammad Farooq</a>, <a href="https://publications.waset.org/abstracts/search?q=Nayab%20Hassan"> Nayab Hassan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hammad%20Hassan"> Hammad Hassan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pakistan is highly recognized for its agriculture and is well known for producing substantial amounts of wheat, cotton, and sugarcane. However, some factors contribute to a decline in crop quality and a reduction in overall output. One of the main factors contributing to this decline is the presence of weed and its late detection. This process of detection is manual and demands a detailed inspection to be done by the farmer itself. But by the time detection of weed, the farmer will be able to save its cost and can increase the overall production. The focus of this research is to identify and classify the four main types of weeds (Small-Flowered Cranesbill, Chick Weed, Prickly Acacia, and Black-Grass) that are prevalent in our region鈥檚 major crops. In this work, we implemented three different deep learning techniques: YOLO-v5, Inception-v3, and Deep CNN on the same Dataset, and have concluded that deep convolutions neural network performed better with an accuracy of 97.45% for such classification. In relative to the state of the art, our proposed approach yields 2% better results. We devised the architecture in an efficient way such that it can be used in real-time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20networks" title="deep convolution networks">deep convolution networks</a>, <a href="https://publications.waset.org/abstracts/search?q=Yolo" title=" Yolo"> Yolo</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=agriculture" title=" agriculture"> agriculture</a> </p> <a href="https://publications.waset.org/abstracts/169359/weed-classification-using-a-two-dimensional-deep-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6353</span> Deep Learning Application for Object Image Recognition and Robot Automatic Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiuh-Jer%20Huang">Shiuh-Jer Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen-Zon%20Yan"> Chen-Zon Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20K.%20Huang"> C. K. Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Chien%20Ting"> Chun-Chien Ting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the vision system application in industrial environment for autonomous purposes is required intensely, the image recognition technique becomes an important research topic. Here, deep learning algorithm is employed in image system to recognize the industrial object and integrate with a 7A6 Series Manipulator for object automatic gripping task. PC and Graphic Processing Unit (GPU) are chosen to construct the 3D Vision Recognition System. Depth Camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in Convolution neural network (CNN) structure for object classification and center point prediction. Additionally, image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. It is useful for future intelligent robotic application in industrial 4.0 environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv2" title=" YOLOv2"> YOLOv2</a>, <a href="https://publications.waset.org/abstracts/search?q=7A6%20series%20manipulator" title=" 7A6 series manipulator"> 7A6 series manipulator</a> </p> <a href="https://publications.waset.org/abstracts/110468/deep-learning-application-for-object-image-recognition-and-robot-automatic-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110468.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6352</span> Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ja-Keoung%20Koo">Ja-Keoung Koo</a>, <a href="https://publications.waset.org/abstracts/search?q=Kensuke%20Nakamura"> Kensuke Nakamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyohun%20Kim"> Hyohun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongwha%20Shin"> Dongwha Shin</a>, <a href="https://publications.waset.org/abstracts/search?q=Yeonseok%20Kim"> Yeonseok Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ji-Su%20Ahn"> Ji-Su Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Byung-Woo%20Hong"> Byung-Woo Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, object recognition. The classical visual information processing that ranges from low level tasks to high level ones has been widely developed in the deep learning framework. It is generally considered as a challenging problem to derive visual interpretation from high dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers the connections of which are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network in particular with a large number of convolution layers due to a large number of unknowns to be optimized with respect to the training set that is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of convolution kernels due to the computational expense despite of the recent development of effective parallel processing machinery, which leads to the use of the constantly small size of the convolution kernels throughout the deep CNN architecture. However, it is often desired to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of the convolution kernels are applied at each layer based on the random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. We are allowed to use large number of random filters with the cost of one scalar unknown for each filter. The computational cost in the back-propagation procedure does not increase with the larger size of the filters even though the additional computational cost is required in the computation of convolution in the feed-forward procedure. The use of random kernels with varying sizes allows to effectively analyze image features at multiple scales leading to a better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments where the quantitative comparison of the well-known CNN architectures and our models that simply replace the convolution kernels with the random filters is performed. The experimental results indicate that our model achieves better performance with less number of unknown weights. The proposed algorithm has a high potential in the application of a variety of visual tasks based on the CNN framework. Acknowledgement鈥擳his work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20kernel" title=" random kernel"> random kernel</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20projection" title=" random projection"> random projection</a>, <a href="https://publications.waset.org/abstracts/search?q=dimensionality%20reduction" title=" dimensionality reduction"> dimensionality reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a> </p> <a href="https://publications.waset.org/abstracts/78806/convolutional-neural-network-based-on-random-kernels-for-analyzing-visual-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78806.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6351</span> Performance Comparison of Deep Convolutional Neural Networks for Binary Classification of Fine-Grained Leaf Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kamal%20KC">Kamal KC</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhendong%20Yin"> Zhendong Yin</a>, <a href="https://publications.waset.org/abstracts/search?q=Dasen%20Li"> Dasen Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhilu%20Wu"> Zhilu Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Intra-plant disease classification based on leaf images is a challenging computer vision task due to similarities in texture, color, and shape of leaves with a slight variation of leaf spot; and external environmental changes such as lighting and background noises. Deep convolutional neural network (DCNN) has proven to be an effective tool for binary classification. In this paper, two methods for binary classification of diseased plant leaves using DCNN are presented; model created from scratch and transfer learning. Our main contribution is a thorough evaluation of 4 networks created from scratch and transfer learning of 5 pre-trained models. Training and testing of these models were performed on a plant leaf images dataset belonging to 16 distinct classes, containing a total of 22,265 images from 8 different plants, consisting of a pair of healthy and diseased leaves. We introduce a deep CNN model, Optimized MobileNet. This model with depthwise separable CNN as a building block attained an average test accuracy of 99.77%. We also present a fine-tuning method by introducing the concept of a convolutional block, which is a collection of different deep neural layers. Fine-tuned models proved to be efficient in terms of accuracy and computational cost. Fine-tuned MobileNet achieved an average test accuracy of 99.89% on 8 pairs of [healthy, diseased] leaf ImageSet. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20neural%20network" title="deep convolution neural network">deep convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=depthwise%20separable%20convolution" title=" depthwise separable convolution"> depthwise separable convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=fine-grained%20classification" title=" fine-grained classification"> fine-grained classification</a>, <a href="https://publications.waset.org/abstracts/search?q=MobileNet" title=" MobileNet"> MobileNet</a>, <a href="https://publications.waset.org/abstracts/search?q=plant%20disease" title=" plant disease"> plant disease</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a> </p> <a href="https://publications.waset.org/abstracts/139441/performance-comparison-of-deep-convolutional-neural-networks-for-binary-classification-of-fine-grained-leaf-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">186</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6350</span> Two Concurrent Convolution Neural Networks TC*CNN Model for Face Recognition Using Edge</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Alghamdi">T. Alghamdi</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Alaghband"> G. Alaghband</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we develop a model that couples Two Concurrent Convolution Neural Network with different filters (TC*CNN) for face recognition and compare its performance to an existing sequential CNN (base model). We also test and compare the quality and performance of the models on three datasets with various levels of complexity (easy, moderate, and difficult) and show that for the most complex datasets, edges will produce the most accurate and efficient results. We further show that in such cases while Support Vector Machine (SVM) models are fast, they do not produce accurate results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Convolution%20Neural%20Network" title="Convolution Neural Network">Convolution Neural Network</a>, <a href="https://publications.waset.org/abstracts/search?q=Edges" title=" Edges"> Edges</a>, <a href="https://publications.waset.org/abstracts/search?q=Face%20Recognition" title=" Face Recognition "> Face Recognition </a>, <a href="https://publications.waset.org/abstracts/search?q=Support%20Vector%20Machine." title=" Support Vector Machine. "> Support Vector Machine. </a> </p> <a href="https://publications.waset.org/abstracts/119126/two-concurrent-convolution-neural-networks-tccnn-model-for-face-recognition-using-edge" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6349</span> Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Zubair%20Khan">Muhammad Zubair Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Yugyung%20Lee"> Yugyung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning has recently achieved enormous response in semantic image segmentation. The previously developed U-Net inspired architectures operate with continuous stride and pooling operations, leading to spatial data loss. Also, the methods lack establishing long-term pixels connection to preserve context knowledge and reduce spatial loss in prediction. This article developed encoder-decoder architecture with bi-directional LSTM embedded in long skip-connections and densely connected convolution blocks. The network non-linearly combines the feature maps across encoder-decoder paths for finding dependency and correlation between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applied batch-normalization for reducing internal covariate shift in data distributions. The empirical evidence shows a promising response to our method compared with other semantic segmentation techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=pixels%20connection" title=" pixels connection"> pixels connection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/147965/towards-long-range-pixels-connection-for-context-aware-semantic-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147965.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6348</span> Prediction on Housing Price Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Yu">Li Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chenlu%20Jiao"> Chenlu Jiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongrun%20Xin"> Hongrun Xin</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kaiyang%20Wang"> Kaiyang Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to study the impact of various factors on the housing price, we propose to build different prediction models based on deep learning to determine the existing data of the real estate in order to more accurately predict the housing price or its changing trend in the future. Considering that the factors which affect the housing price vary widely, the proposed prediction models include two categories. The first one is based on multiple characteristic factors of the real estate. We built Convolution Neural Network (CNN) prediction model and Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and logical regression model was implemented to make a comparison between these three models. Another prediction model is time series model. Based on deep learning, we proposed an LSTM-1 model purely regard to time series, then implementing and comparing the LSTM model and the Auto-Regressive and Moving Average (ARMA) model. In this paper, comprehensive study of the second-hand housing price in Beijing has been conducted from three aspects: crawling and analyzing, housing price predicting, and the result comparing. Ultimately the best model program was produced, which is of great significance to evaluation and prediction of the housing price in the real estate industry. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=housing%20prediction" title=" housing prediction"> housing prediction</a> </p> <a href="https://publications.waset.org/abstracts/84747/prediction-on-housing-price-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84747.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6347</span> Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Ayesha%20Fathima">Shaik Ayesha Fathima</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Noor%20Jahan"> Shaik Noor Jahan</a>, <a href="https://publications.waset.org/abstracts/search?q=Duvvada%20Rajeswara%20Rao"> Duvvada Rajeswara Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Earth's environment and its evolution can be seen through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format. The data is then pre-processed using data pre-processing techniques. The processed data is fed into the proposed algorithm and the obtained result is analyzed. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, Deep Labv3, CNN, ANN, Resnet etc. In this project, we are using the DeepLabv3 (Atrous convolution) algorithm for land cover classification. The dataset used is the deep globe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=area%20calculation" title="area calculation">area calculation</a>, <a href="https://publications.waset.org/abstracts/search?q=atrous%20convolution" title=" atrous convolution"> atrous convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20globe%20land%20cover%20classification" title=" deep globe land cover classification"> deep globe land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deepLabv3" title=" deepLabv3"> deepLabv3</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20cover%20classification" title=" land cover classification"> land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=resnet%2050" title=" resnet 50"> resnet 50</a> </p> <a href="https://publications.waset.org/abstracts/147677/classification-of-land-cover-usage-from-satellite-images-using-deep-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6346</span> Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Bresilla">K. Bresilla</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Manfrini"> L. Manfrini</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Morandi"> B. Morandi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Boini"> A. Boini</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Perulli"> G. Perulli</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20C.%20Grappadelli"> L. C. Grappadelli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image/video processing for fruit in the tree using hard-coded feature extraction algorithms have shown high accuracy during recent years. While accurate, these approaches even with high-end hardware are computationally intensive and too slow for real-time systems. This paper details the use of deep convolution neural networks (CNNs), specifically an algorithm (YOLO - You Only Look Once) with 24+2 convolution layers. Using deep-learning techniques eliminated the need for hard-code specific features for specific fruit shapes, color and/or other attributes. This CNN is trained on more than 5000 images of apple and pear fruits on 960 cores GPU (Graphical Processing Unit). Testing set showed an accuracy of 90%. After this, trained data were transferred to an embedded device (Raspberry Pi gen.3) with camera for more portability. Based on correlation between number of visible fruits or detected fruits on one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. Speed of processing and detection of the whole platform was higher than 40 frames per second. This speed is fast enough for any grasping/harvesting robotic arm or other real-time applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=fruit%20recognition" title=" fruit recognition"> fruit recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=harvesting%20robot" title=" harvesting robot"> harvesting robot</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a> </p> <a href="https://publications.waset.org/abstracts/79886/using-deep-learning-real-time-object-detection-convolution-neural-networks-for-fast-fruit-recognition-in-the-tree" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79886.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6345</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult to evaluate musical instrument playing in a video by computer system. Any television or film video clip with music information are rich sources for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural network (CNN) and pass network learned features through recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNN for music and video feature extraction and then fine tune each model. The music network use 2D convolutional network and video network use 3D convolution (C3D). Finally, we concatenate each music and video feature by preserving the time varying features. The long short term memory (LSTM) network is used for long-term dynamic feature characterization and then use late fusion with generalized mean. The proposed network performs better performance to recognize the musical instrument using audio-video multimodal neural network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6344</span> The UAV Feasibility Trajectory Prediction Using Convolution Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adrien%20Marque">Adrien Marque</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Delahaye"> Daniel Delahaye</a>, <a href="https://publications.waset.org/abstracts/search?q=Pierre%20Mar%C3%A9chal"> Pierre Mar茅chal</a>, <a href="https://publications.waset.org/abstracts/search?q=Isabelle%20Berry"> Isabelle Berry</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Wind direction and uncertainty are crucial in aircraft or unmanned aerial vehicle trajectories. By computing wind covariance matrices on each spatial grid point, these spatial grids can be defined as images with symmetric positive definite matrix elements. A data pre-processing step, a specific convolution, a specific max-pooling, and a specific flatten layers are implemented to process such images. Then, the neural network is applied to spatial grids, whose elements are wind covariance matrices, to solve classification problems related to the feasibility of unmanned aerial vehicles based on wind direction and wind uncertainty. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wind%20direction" title="wind direction">wind direction</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty%20level" title=" uncertainty level"> uncertainty level</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=SPD%20matrices" title=" SPD matrices"> SPD matrices</a> </p> <a href="https://publications.waset.org/abstracts/188367/the-uav-feasibility-trajectory-prediction-using-convolution-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">49</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6343</span> Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaeyoung%20Lee">Jaeyoung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perceived performance in driving environments that vary from time to season. The image segmentation method using deep learning, which has recently evolved rapidly, provides high recognition performance in various road environments stably. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrade performance in embedded processor environments equipped simple hardware accelerators. In this paper, a semantic segmentation network, matrix multiplication accelerator network (MMANet), optimized for matrix multiplication accelerator (MMA) on Texas instrument digital signal processors (TI DSP) is proposed to improve the recognition performance of autonomous driving system. The proposed method is designed to maximize the number of layers that can be performed in a limited time to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since MMA is a fixed, it may be more efficient for normal convolution than depthwise separable convolution depending on memory access overhead. Thus, a convolution type is decided according to output stride to increase network depth. In addition, memory access time is minimized by processing operations only in L3 cache. Lastly, reliable contexts are extracted using the extended atrous spatial pyramid pooling (ASPP). The suggested method gets stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high quality contexts using the restored shape without global average pooling paths since the layer uses MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation sets. The proposed network can process an image with 640 x 480 resolution for 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle as 20 frame per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU) which is the highest recognition rate among embedded networks on the Cityscapes validation set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20network" title="edge network">edge network</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20network" title=" embedded network"> embedded network</a>, <a href="https://publications.waset.org/abstracts/search?q=MMA" title=" MMA"> MMA</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20multiplication%20accelerator" title=" matrix multiplication accelerator"> matrix multiplication accelerator</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation%20network" title=" semantic segmentation network"> semantic segmentation network</a> </p> <a href="https://publications.waset.org/abstracts/125967/embedded-semantic-segmentation-network-optimized-for-matrix-multiplication-accelerator" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6342</span> Operator Optimization Based on Hardware Architecture Alignment Requirements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qingqing%20Gai">Qingqing Gai</a>, <a href="https://publications.waset.org/abstracts/search?q=Junxing%20Shen"> Junxing Shen</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Luo"> Yu Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the hardware architecture characteristics, some operators tend to acquire better performance if the input/output tensor dimensions are aligned to a certain minimum granularity, such as convolution and deconvolution commonly used in deep learning. Furthermore, if the requirements are not met, the general strategy is to pad with 0 to satisfy the requirements, potentially leading to the under-utilization of the hardware resources. Therefore, for the convolution and deconvolution whose input and output channels do not meet the minimum granularity alignment, we propose to transfer the W-dimensional data to the C-dimension for computation (W2C) to enable the C-dimension to meet the hardware requirements. This scheme also reduces the number of computations in the W-dimension. Although this scheme substantially increases computation, the operator鈥檚 speed can improve significantly. It achieves remarkable speedups on multiple hardware accelerators, including Nvidia Tensor cores, Qualcomm digital signal processors (DSPs), and Huawei neural processing units (NPUs). All you need to do is modify the network structure and rearrange the operator weights offline without retraining. At the same time, for some operators, such as the Reducemax, we observe that transferring the Cdimensional data to the W-dimension(C2W) and replacing the Reducemax with the Maxpool can accomplish acceleration under certain circumstances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution" title="convolution">convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deconvolution" title=" deconvolution"> deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=W2C" title=" W2C"> W2C</a>, <a href="https://publications.waset.org/abstracts/search?q=C2W" title=" C2W"> C2W</a>, <a href="https://publications.waset.org/abstracts/search?q=alignment" title=" alignment"> alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerator" title=" hardware accelerator"> hardware accelerator</a> </p> <a href="https://publications.waset.org/abstracts/157366/operator-optimization-based-on-hardware-architecture-alignment-requirements" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157366.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6341</span> The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edward%20Holupka">Edward Holupka</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20Rossman"> John Rossman</a>, <a href="https://publications.waset.org/abstracts/search?q=Tye%20Morancy"> Tye Morancy</a>, <a href="https://publications.waset.org/abstracts/search?q=Joseph%20Aronovitz"> Joseph Aronovitz</a>, <a href="https://publications.waset.org/abstracts/search?q=Irving%20Kaplan"> Irving Kaplan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A common modality for the treatment of early stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate to achieve optimal radiation dose coverage to the prostate. These radioactive seeds are positioned inside the prostate using Transrectal ultrasound imaging. Once all of the planned seeds have been implanted, two dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained through out the prostate, beginning at the base of the prostate up to and including the apex. A common deep neural network, called DetectNet was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The results of the training using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network and resulted in an loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and a recall (validation) = 82.29%, where train and validation refers to the training image set and validation refers to the validation training set. On the hardware platform used, the training expended 12.8 seconds per epoch. The network was trained for over 10,000 epochs. In addition, the seed locations as determined by the Deep Neural Network were compared to the seed locations as determined by a commercial software based on a one to three months after implant CT. The Deep Learning approach was within \strikeout off\uuline off\uwave off2.29\uuline default\uwave default mm of the seed locations determined by the commercial software. The Deep Learning approach to the determination of radioactive seed locations is robust, accurate, and fast and well within spatial agreement with the gold standard of CT determined seed coordinates. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prostate" title="prostate">prostate</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=seed%20implant" title=" seed implant"> seed implant</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound" title=" ultrasound"> ultrasound</a> </p> <a href="https://publications.waset.org/abstracts/93735/the-detection-of-implanted-radioactive-seeds-on-ultrasound-images-using-convolution-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93735.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6340</span> A Time-Varying and Non-Stationary Convolution Spectral Mixture Kernel for Gaussian Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kai%20Chen">Kai Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuguang%20Cui"> Shuguang Cui</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Yin"> Feng Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gaussian process (GP) with spectral mixture (SM) kernel demonstrates flexible non-parametric Bayesian learning ability in modeling unknown function. In this work a novel time-varying and non-stationary convolution spectral mixture (TN-CSM) kernel with a significant enhancing of interpretability by using process convolution is introduced. A way decomposing the SM component into an auto-convolution of base SM component and parameterizing it to be input dependent is outlined. Smoothly, performing a convolution between two base SM component yields a novel structure of non-stationary SM component with much better generalized expression and interpretation. The TN-CSM perfectly allows compatibility with the stationary SM kernel in terms of kernel form and spectral base ignored and confused by previous non-stationary kernels. On synthetic and real-world datatsets, experiments show the time-varying characteristics of hyper-parameters in TN-CSM and compare the learning performance of TN-CSM with popular and representative non-stationary GP. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20process" title="Gaussian process">Gaussian process</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20mixture" title=" spectral mixture"> spectral mixture</a>, <a href="https://publications.waset.org/abstracts/search?q=non-stationary" title=" non-stationary"> non-stationary</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution" title=" convolution"> convolution</a> </p> <a href="https://publications.waset.org/abstracts/131675/a-time-varying-and-non-stationary-convolution-spectral-mixture-kernel-for-gaussian-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131675.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6339</span> Automatic Number Plate Recognition System Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Damak">T. Damak</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20Kriaa"> O. Kriaa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Baccar"> A. Baccar</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Ben%20Ayed"> M. A. Ben Ayed</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Masmoudi"> N. Masmoudi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used in the safety, the security, and the commercial aspects. Forethought, several methods and techniques are computing to achieve the better levels in terms of accuracy and real time execution. This paper proposed a computer vision algorithm of Number Plate Localization (NPL) and Characters Segmentation (CS). In addition, it proposed an improved method in Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. In order to identify the number of detected plate after NPL and CS steps, the Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed using four convolution layers, two layers of Maxpooling, and six layers of fully connected. The model was trained by number image database on the Jetson TX2 NVIDIA target. The accuracy result has achieved 95.84%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ANPR" title="ANPR">ANPR</a>, <a href="https://publications.waset.org/abstracts/search?q=CS" title=" CS"> CS</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=NPL" title=" NPL"> NPL</a> </p> <a href="https://publications.waset.org/abstracts/108706/automatic-number-plate-recognition-system-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6338</span> A Survey of Sentiment Analysis Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pingping%20Lin">Pingping Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Xudong%20Luo"> Xudong Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yifan%20Fan"> Yifan Fan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as significant e-commerce websites, generate a massive amount of comments, which can be used to analyse peoples opinions or emotions. The existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely on heavily sentiment dictionaries or large amounts of labelled data. The third one overcomes these two problems. So, in this paper, we focus on the third one. Specifically, we survey various sentiment analysis methods based on convolutional neural network, recurrent neural network, long short-term memory, deep neural network, deep belief network, and memory network. We compare their futures, advantages, and disadvantages. Also, we point out the main problems of these methods, which may be worthy of careful studies in the future. Finally, we also examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=document%20analysis" title="document analysis">document analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20sentiment%20analysis" title=" multimodal sentiment analysis"> multimodal sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a> </p> <a href="https://publications.waset.org/abstracts/130107/a-survey-of-sentiment-analysis-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">164</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6337</span> Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaustav%20Mukherjee">Kaustav Mukherjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we propose a Neural Style Transfer solution whereby we have created a Lightweight Separable Convolution Kernel Based GAN Architecture (SC-GAN) which will very useful for designing filter for Mobile Phone Cameras and also Edge Devices which will convert any image to its 2D ANIMATED COMIC STYLE Movies like HEMAN, SUPERMAN, JUNGLE-BOOK. This will help the 2D animation artist by relieving to create new characters from real life person's images without having to go for endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real life images.This will reduce a huge amount of turn around time to make 2D animated movies and decrease cost in terms of manpower and time. In addition to that being extreme light-weight it can be used as camera filters capable of taking Comic Style Shots using mobile phone camera or edge device cameras like Raspberry Pi 4,NVIDIA Jetson NANO etc. Existing Methods like CartoonGAN with the model size close to 170 MB is too heavy weight for mobile phones and edge devices due to their scarcity in resources. Compared to the current state of the art our proposed method which has a total model size of 31 MB which clearly makes it ideal and ultra-efficient for designing of camera filters on low resource devices like mobile phones, tablets and edge devices running OS or RTOS. .Owing to use of high resolution input and usage of bigger convolution kernel size it produces richer resolution Comic-Style Pictures implementation with 6 times lesser number of parameters and with just 25 extra epoch trained on a dataset of less than 1000 which breaks the myth that all GAN need mammoth amount of data. Our network reduces the density of the Gan architecture by using Depthwise Separable Convolution which does the convolution operation on each of the RGB channels separately then we use a Point-Wise Convolution to bring back the network into required channel number using 1 by 1 kernel.This reduces the number of parameters substantially and makes it extreme light-weight and suitable for mobile phones and edge devices. The architecture mentioned in the present paper make use of Parameterised Batch Normalization Goodfellow etc al. (Deep Learning OPTIMIZATION FOR TRAINING DEEP MODELS page 320) which makes the network to use the advantage of Batch Norm for easier training while maintaining the non-linear feature capture by inducing the learnable parameters <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=comic%20stylisation%20from%20camera%20image%20using%20GAN" title="comic stylisation from camera image using GAN">comic stylisation from camera image using GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=creating%202D%20animated%20movie%20style%20custom%20stickers%20from%20images" title=" creating 2D animated movie style custom stickers from images"> creating 2D animated movie style custom stickers from images</a>, <a href="https://publications.waset.org/abstracts/search?q=depth-wise%20separable%20convolutional%20neural%20network%20for%20light-weight%20GAN%20architecture%20for%20EDGE%20devices" title=" depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices"> depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN%20architecture%20for%202D%20animated%20cartoonizing%20neural%20style" title=" GAN architecture for 2D animated cartoonizing neural style"> GAN architecture for 2D animated cartoonizing neural style</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20style%20transfer%20for%20edge" title=" neural style transfer for edge"> neural style transfer for edge</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20distilation" title=" model distilation"> model distilation</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20loss" title=" perceptual loss"> perceptual loss</a> </p> <a href="https://publications.waset.org/abstracts/127050/lean-comic-gan-lc-gan-a-light-weight-gan-architecture-leveraging-factorized-convolution-and-teacher-forcing-distillation-style-loss-aimed-to-capture-two-dimensional-animated-filtered-still-shots-using-mobile-phone-camera-and-edge-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127050.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6336</span> Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xulu%20Yao">Xulu Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Moi%20Hoon%20Yap"> Moi Hoon Yap</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanlong%20Zhang"> Yanlong Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a branch of artificial neural network, deep learning is widely used in the field of image recognition, but the lack of its dataset leads to imperfect model learning. By analysing the data scale requirements of deep learning and aiming at the application in GUI generation, it is found that the collection of GUI dataset is a time-consuming and labor-consuming project, which is difficult to meet the needs of current deep learning network. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale datasets to produce a large number of reliable data sets. By combining the cyclic neural network with the generated countermeasure network, the cyclic neural network can learn the sequence relationship and characteristics of data, make the generated countermeasure network generate reasonable data, and then expand the Rico dataset. Relying on the network structure, the characteristics of collected data can be well analysed, and a large number of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GUI" title="GUI">GUI</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/143650/data-augmentation-for-automatic-graphical-user-interface-generation-based-on-generative-adversarial-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6335</span> Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayyaz%20Hussain">Ayyaz Hussain</a>, <a href="https://publications.waset.org/abstracts/search?q=Tariq%20Sadad"> Tariq Sadad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer (BC) is one of the widespread ailments among females globally. The early prognosis of BC can decrease the mortality rate. Exact findings of benign tumors can avoid unnecessary biopsies and further treatments of patients under investigation. However, due to variations in images, it is a tough job to isolate cancerous cases from normal and benign ones. The machine learning technique is widely employed in the classification of BC pattern and prognosis. In this research, a deep convolution neural network (DCNN) called AlexNet architecture is employed to get more discriminative features from breast tissues. To achieve higher accuracy, K-nearest neighbor (KNN) classifiers are employed as a substitute for the softmax layer in deep learning. The proposed model is tested on a widely used breast image database called MIAS dataset for experimental purposes and achieved 99% accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title="breast cancer">breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=DCNN" title=" DCNN"> DCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=KNN" title=" KNN"> KNN</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a> </p> <a href="https://publications.waset.org/abstracts/118200/identification-of-breast-anomalies-based-on-deep-convolutional-neural-networks-and-k-nearest-neighbors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118200.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=212">212</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=213">213</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>