Search results for: resNet
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="resNet"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 36</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: resNet</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">36</span> Clothes Identification Using Inception ResNet V2 and MobileNet V2</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subodh%20Chandra%20Shakya">Subodh Chandra Shakya</a>, <a href="https://publications.waset.org/abstracts/search?q=Badal%20Shrestha"> Badal Shrestha</a>, <a href="https://publications.waset.org/abstracts/search?q=Suni%20Thapa"> Suni Thapa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Chauhan"> Ashutosh Chauhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Saugat%20Adhikari"> Saugat Adhikari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To tackle our problem of clothes identification, we used different architectures of Convolutional Neural Networks. Among different architectures, the outcome from Inception ResNet V2 and MobileNet V2 seemed promising. On comparison of the metrices, we observed that the Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. So this paper of ours proposes the cloth identifier using Inception ResNet V2 and also contains the comparison between the outcome of ResNet V2 and MobileNet V2. The document here contains the results and findings of the research that we performed on the DeepFashion Dataset. To improve the dataset, we used different image preprocessing techniques like image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on cloth identification so that we could develop a reliable system that is good enough in identifying the clothes worn by the users. The whole system can be integrated with some kind of recommendation system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inception%20ResNet" title="inception ResNet">inception ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20net" title=" convolutional neural net"> convolutional neural net</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=confusion%20matrix" title=" confusion matrix"> confusion matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20preprocessing" title=" data preprocessing"> data preprocessing</a> </p> <a href="https://publications.waset.org/abstracts/129604/clothes-identification-using-inception-resnet-v2-and-mobilenet-v2" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">35</span> PatchMix: Learning Transferable Semi-Supervised Representation by Predicting Patches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arpit%20Rai">Arpit Rai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we propose PatchMix, a semi-supervised method for pre-training visual representations. PatchMix mixes patches of two images and then solves an auxiliary task of predicting the label of each patch in the mixed image. Our experiments on the CIFAR-10, 100 and the SVHN dataset show that the representations learned by this method encodes useful information for transfer to new tasks and outperform the baseline Residual Network encoders by on CIFAR 10 by 12% on ResNet 101 and 2% on ResNet-56, by 4% on CIFAR-100 on ResNet101 and by 6% on SVHN dataset on the ResNet-101 baseline model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=self-supervised%20learning" title="self-supervised learning">self-supervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=representation%20learning" title=" representation learning"> representation learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=generalization" title=" generalization"> generalization</a> </p> <a href="https://publications.waset.org/abstracts/150013/patchmix-learning-transferable-semi-supervised-representation-by-predicting-patches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150013.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> The Modification of Convolutional Neural Network in Fin Whale Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiahao%20Cui">Jiahao Cui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the past centuries, due to climate change and intense whaling, the global whale population has dramatically declined. Among the various whale species, the fin whale experienced the most drastic drop in number due to its popularity in whaling. Under this background, identifying fin whale calls could be immensely beneficial to the preservation of the species. This paper uses feature extraction to process the input audio signal, then a network based on AlexNet and three networks based on the ResNet model was constructed to classify fin whale calls. A mixture of the DOSITS database and the Watkins database was used during training. The results demonstrate that a modified ResNet network has the best performance considering precision and network complexity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=fin%20whale%20preservation" title=" fin whale preservation"> fin whale preservation</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/155185/the-modification-of-convolutional-neural-network-in-fin-whale-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155185.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Bone Fracture Detection with X-Ray Images Using Mobilenet V3 Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashlesha%20Khanapure">Ashlesha Khanapure</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsh%20Kashyap"> Harsh Kashyap</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhinav%20Anand"> Abhinav Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanjana%20Habib"> Sanjana Habib</a>, <a href="https://publications.waset.org/abstracts/search?q=Anupama%20Bidargaddi"> Anupama Bidargaddi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Technologies that are developing quickly are being developed daily in a variety of disciplines, particularly the medical field. For the purpose of detecting bone fractures in X-ray pictures of different body segments, our work compares the ResNet-50 and MobileNetV3 architectures. It evaluates accuracy and computing efficiency with X-rays of the elbow, hand, and shoulder from the MURA dataset. Through training and validation, the models are evaluated on normal and fractured images. While ResNet-50 showcases superior accuracy in fracture identification, MobileNetV3 showcases superior speed and resource optimization. Despite ResNet-50’s accuracy, MobileNetV3’s swifter inference makes it a viable choice for real-time clinical applications, emphasizing the importance of balancing computational efficiency and accuracy in medical imaging. We created a graphical user interface (GUI) for MobileNet V3 model bone fracture detection. This research underscores MobileNetV3’s potential to streamline bone fracture diagnoses, potentially revolutionizing orthopedic medical procedures and enhancing patient care. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=MobileNet%20V3" title=" MobileNet V3"> MobileNet V3</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a>, <a href="https://publications.waset.org/abstracts/search?q=healthcare" title=" healthcare"> healthcare</a>, <a href="https://publications.waset.org/abstracts/search?q=MURA" title=" MURA"> MURA</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray" title=" X-ray"> X-ray</a>, <a href="https://publications.waset.org/abstracts/search?q=fracture%20detection" title=" fracture detection"> fracture detection</a> </p> <a href="https://publications.waset.org/abstracts/182019/bone-fracture-detection-with-x-ray-images-using-mobilenet-v3-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182019.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Improving Axial-Attention Network via Cross-Channel Weight Sharing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nazmul%20Shahadat">Nazmul Shahadat</a>, <a href="https://publications.waset.org/abstracts/search?q=Anthony%20S.%20Maida"> Anthony S. Maida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, hypercomplex inspired neural networks improved deep CNN architectures due to their ability to share weights across input channels and thus improve cohesiveness of representations within the layers. The work described herein studies the effect of replacing existing layers in an Axial Attention ResNet with their quaternion variants that use cross-channel weight sharing to assess the effect on image classification. We expect the quaternion enhancements to produce improved feature maps with more interlinked representations. We experiment with the stem of the network, the bottleneck layer, and the fully connected backend by replacing them with quaternion versions. These modifications lead to novel architectures which yield improved accuracy performance on the ImageNet300k classification dataset. Our baseline networks for comparison were the original real-valued ResNet, the original quaternion-valued ResNet, and the Axial Attention ResNet. Since improvement was observed regardless of which part of the network was modified, there is a promise that this technique may be generally useful in improving classification accuracy for a large class of networks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=axial%20attention" title="axial attention">axial attention</a>, <a href="https://publications.waset.org/abstracts/search?q=representational%20networks" title=" representational networks"> representational networks</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20sharing" title=" weight sharing"> weight sharing</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-channel%20correlations" title=" cross-channel correlations"> cross-channel correlations</a>, <a href="https://publications.waset.org/abstracts/search?q=quaternion-enhanced%20axial%20attention" title=" quaternion-enhanced axial attention"> quaternion-enhanced axial attention</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20networks" title=" deep networks"> deep networks</a> </p> <a href="https://publications.waset.org/abstracts/164808/improving-axial-attention-network-via-cross-channel-weight-sharing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164808.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> An Auxiliary Technique for Coronary Heart Disease Prediction by Analyzing Electrocardiogram Based on ResNet and Bi-Long Short-Term Memory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Zhang">Yang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20He"> Jian He</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Heart disease is one of the leading causes of death in the world, and coronary heart disease (CHD) is one of the major heart diseases. Electrocardiogram (ECG) is widely used in the detection of heart diseases, but the traditional manual method for CHD prediction by analyzing ECG requires lots of professional knowledge for doctors. This paper introduces sliding window and continuous wavelet transform (CWT) to transform ECG signals into images, and then ResNet and Bi-LSTM are introduced to build the ECG feature extraction network (namely ECGNet). At last, an auxiliary system for coronary heart disease prediction was developed based on modified ResNet18 and Bi-LSTM, and the public ECG dataset of CHD from MIMIC-3 was used to train and test the system. The experimental results show that the accuracy of the method is 83%, and the F1-score is 83%. Compared with the available methods for CHD prediction based on ECG, such as kNN, decision tree, VGGNet, etc., this method not only improves the prediction accuracy but also could avoid the degradation phenomenon of the deep learning network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bi-LSTM" title="Bi-LSTM">Bi-LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=CHD" title=" CHD"> CHD</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=sliding%C2%A0window" title=" sliding window"> sliding window</a> </p> <a href="https://publications.waset.org/abstracts/165165/an-auxiliary-technique-for-coronary-heart-disease-prediction-by-analyzing-electrocardiogram-based-on-resnet-and-bi-long-short-term-memory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165165.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> Attention-Based ResNet for Breast Cancer Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abebe%20Mulugojam%20Negash">Abebe Mulugojam Negash</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongbin%20Yu"> Yongbin Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ekong%20Favour"> Ekong Favour</a>, <a href="https://publications.waset.org/abstracts/search?q=Bekalu%20Nigus%20Dawit"> Bekalu Nigus Dawit</a>, <a href="https://publications.waset.org/abstracts/search?q=Molla%20Woretaw%20Teshome"> Molla Woretaw Teshome</a>, <a href="https://publications.waset.org/abstracts/search?q=Aynalem%20Birtukan%20Yirga"> Aynalem Birtukan Yirga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer remains a significant health concern, necessitating advancements in diagnostic methodologies. Addressing this, our paper confronts the notable challenges in breast cancer classification, particularly the imbalance in datasets and the constraints in the accuracy and interpretability of prevailing deep learning approaches. We proposed an attention-based residual neural network (ResNet), which effectively combines the robust features of ResNet with an advanced attention mechanism. Enhanced through strategic data augmentation and positive weight adjustments, this approach specifically targets the issue of data imbalance. The proposed model is tested on the BreakHis dataset and achieved accuracies of 99.00%, 99.04%, 98.67%, and 98.08% in different magnifications (40X, 100X, 200X, and 400X), respectively. We evaluated the performance by using different evaluation metrics such as precision, recall, and F1-Score and made comparisons with other state-of-the-art methods. Our experiments demonstrate that the proposed model outperforms existing approaches, achieving higher accuracy in breast cancer classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=residual%20neural%20network" title="residual neural network">residual neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20weight" title=" positive weight"> positive weight</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/181531/attention-based-resnet-for-breast-cancer-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> A Comparative Study of Deep Learning Methods for COVID-19 Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> COVID 19 is a pandemic which has resulted in thousands of deaths around the world and a huge impact on the global economy. Testing is a huge issue as the test kits have limited availability and are expensive to manufacture. Using deep learning methods on radiology images in the detection of the coronavirus as these images contain information about the spread of the virus in the lungs is extremely economical and time-saving as it can be used in areas with a lack of testing facilities. This paper focuses on binary classification and multi-class classification of COVID 19 and other diseases such as pneumonia, tuberculosis, etc. Different deep learning methods such as VGG-19, COVID-Net, ResNET+ SVM, Deep CNN, DarkCovidnet, etc., have been used, and their accuracy has been compared using the Chest X-Ray dataset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=radiology" title=" radiology"> radiology</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG-19" title=" VGG-19"> VGG-19</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127887/a-comparative-study-of-deep-learning-methods-for-covid-19-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127887.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20Rose%20T.%20Alipo-on">Jacqueline Rose T. Alipo-on</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesca%20Isabelle%20F.%20Escobar"> Francesca Isabelle F. Escobar</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20Joshua%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20Al%20Dahoul"> Nouar Al Dahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human errors. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform an analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform other models in terms of recall and F1 score using a five-fold average score of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heartbeat%20classification" title="heartbeat classification">heartbeat classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiogram%20signals" title=" electrocardiogram signals"> electrocardiogram signals</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a> </p> <a href="https://publications.waset.org/abstracts/162763/electrocardiogram-based-heartbeat-classification-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162763.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> Cells Detection and Recognition in Bone Marrow Examination with Deep Learning Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiyin%20He">Shiyin He</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Huang"> Zheng Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, deep learning methods are applied in bio-medical field to detect and count different types of cells in an automatic way instead of manual work in medical practice, specifically in bone marrow examination. The process is mainly composed of two steps, detection and recognition. Mask-Region-Convolutional Neural Networks (Mask-RCNN) was used for detection and image segmentation to extract cells and then Convolutional Neural Networks (CNN), as well as Deep Residual Network (ResNet) was used to classify. Result of cell detection network shows high efficiency to meet application requirements. For the cell recognition network, two networks are compared and the final system is fully applicable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell%20detection" title="cell detection">cell detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20recognition" title=" cell recognition"> cell recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Mask-RCNN" title=" Mask-RCNN"> Mask-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a> </p> <a href="https://publications.waset.org/abstracts/98649/cells-detection-and-recognition-in-bone-marrow-examination-with-deep-learning-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sinha">Ankit Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Soham%20Banerjee"> Soham Banerjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratik%20Chattopadhyay"> Pratik Chattopadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising of a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online-hard-negative-mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retail%20stores" title="retail stores">retail stores</a>, <a href="https://publications.waset.org/abstracts/search?q=faster-RCNN" title=" faster-RCNN"> faster-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20localization" title=" object localization"> object localization</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-18" title=" ResNet-18"> ResNet-18</a>, <a href="https://publications.waset.org/abstracts/search?q=triplet%20loss" title=" triplet loss"> triplet loss</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20recognition" title=" product recognition"> product recognition</a> </p> <a href="https://publications.waset.org/abstracts/153836/effective-stacking-of-deep-neural-models-for-automated-object-recognition-in-retail-stores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Image Instance Segmentation Using Modified Mask R-CNN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Avatharam%20Ganivada">Avatharam Ganivada</a>, <a href="https://publications.waset.org/abstracts/search?q=Krishna%20Shah"> Krishna Shah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Mask R-CNN is recently introduced by the team of Facebook AI Research (FAIR), which is mainly concerned with instance segmentation in images. Here, the Mask R-CNN is based on ResNet and feature pyramid network (FPN), where a single dropout method is employed. This paper provides a modified Mask R-CNN by adding multiple dropout methods into the Mask R-CNN. The proposed model has also utilized the concepts of Resnet and FPN to extract stage-wise network feature maps, wherein a top-down network path having lateral connections is used to obtain semantically strong features. The proposed model produces three outputs for each object in the image: class label, bounding box coordinates, and object mask. The performance of the proposed network is evaluated in the segmentation of every instance in images using COCO and cityscape datasets. The proposed model achieves better performance than the state-of-the-networks for the datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=instance%20segmentation" title="instance segmentation">instance segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/147310/image-instance-segmentation-using-modified-mask-r-cnn" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sophia%20Shi">Sophia Shi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. AKI patient mortality rate is high in the ICU and is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine among others, two numeric models using MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNN) were used, VGG and Resnet for numeric data and Resnet for image data, and they were combined into a hybrid model by concatenating feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after running through Softmax and additional code. The hybrid model successfully predicted AKI and the highest AUROC of the model was 0.953, achieving an accuracy of 90% and F1-score of 0.91. This model can be implemented into urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient’s admission to the ICU, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Acute%20kidney%20injury" title="Acute kidney injury">Acute kidney injury</a>, <a href="https://publications.waset.org/abstracts/search?q=Convolutional%20neural%20network" title=" Convolutional neural network"> Convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=Hybrid%20deep%20learning" title=" Hybrid deep learning"> Hybrid deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Patient%20record%20data" title=" Patient record data"> Patient record data</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Ultrasound%20kidney%20images" title=" Ultrasound kidney images"> Ultrasound kidney images</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a> </p> <a href="https://publications.waset.org/abstracts/137226/a-novel-hybrid-deep-learning-architecture-for-predicting-acute-kidney-injury-using-patient-record-data-and-ultrasound-kidney-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137226.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Audio-Visual Recognition Based on Effective Model and Distillation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heng%20Yang">Heng Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Luo"> Tao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yakun%20Zhang"> Yakun Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Wang"> Kai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Qin"> Wei Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Xie"> Liang Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Yan"> Ye Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Erwei%20Yin"> Erwei Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recent years have seen that audio-visual recognition has shown great potential in a strong noise environment. The existing method of audio-visual recognition has explored methods with ResNet and feature fusion. However, on the one hand, ResNet always occupies a large amount of memory resources, restricting the application in engineering. On the other hand, the feature merging also brings some interferences in a high noise environment. In order to solve the problems, we proposed an effective framework with bidirectional distillation. At first, in consideration of the good performance in extracting of features, we chose the light model, Efficientnet as our extractor of spatial features. Secondly, self-distillation was applied to learn more information from raw data. Finally, we proposed a bidirectional distillation in decision-level fusion. In more detail, our experimental results are based on a multi-model dataset from 24 volunteers. 
Keywords: lipreading, audio-visual, EfficientNet, distillation
Procedia: https://publications.waset.org/abstracts/146625/audio-visual-recognition-based-on-effective-model-and-distillation | PDF: https://publications.waset.org/abstracts/146625.pdf | Downloads: 134

22. Deep Learning Approach to Trademark Design Code Identification
Authors: Girish J. Showkatramani, Arthi M. Krishna, Sashi Nareddi, Naresh Nula, Aaron Pepe, Glen Brown, Greg Gabel, Chris Doninger
Abstract: Trademark examination and approval is a complex process involving analysis and review of the design components of marks, such as their visual representation, as well as associated textual data such as the mark's description. Currently, identifying marks with similar visual representations is done manually at the United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts determining the trademark design codes used to catalog the visual designs in a mark. In this study, we explore several methods to automate trademark design code classification. Based on recent successes of convolutional neural networks in image classification, we used several different convolutional neural networks, such as Google's Inception v3, Inception-ResNet-v2, and Xception. The study also looks into other techniques to augment the CNN results, such as pre-processing the images with the Open Source Computer Vision Library (OpenCV). This paper reports the results of the various models trained on a year of annotated trademark images.
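The abstract mentions OpenCV pre-processing before the CNNs; a plausible clean-up pass for line-art trademark images is sketched below. The specific operations are guesses for illustration, not the USPTO pipeline.

```python
import cv2

def preprocess(path, size=299):
    """Illustrative OpenCV clean-up before an Inception-style network (299x299 input)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (3, 3), 0)                       # suppress scan noise
    _, img = cv2.threshold(img, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarize line art
    img = cv2.resize(img, (size, size))
    return cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)                 # 3 channels for the CNN
```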
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=trademark%20design%20code" title="trademark design code">trademark design code</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=trademark%20image%20classification" title=" trademark image classification"> trademark image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=trademark%20image%20search" title=" trademark image search"> trademark image search</a>, <a href="https://publications.waset.org/abstracts/search?q=Inception-ResNet-v2" title=" Inception-ResNet-v2"> Inception-ResNet-v2</a> </p> <a href="https://publications.waset.org/abstracts/85337/deep-learning-approach-to-trademark-design-code-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Ayesha%20Fathima">Shaik Ayesha Fathima</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Noor%20Jahan"> Shaik Noor Jahan</a>, <a href="https://publications.waset.org/abstracts/search?q=Duvvada%20Rajeswara%20Rao"> Duvvada Rajeswara Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Earth's environment and its evolution can be seen through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format. The data is then pre-processed using data pre-processing techniques. The processed data is fed into the proposed algorithm and the obtained result is analyzed. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, Deep Labv3, CNN, ANN, Resnet etc. In this project, we are using the DeepLabv3 (Atrous convolution) algorithm for land cover classification. The dataset used is the deep globe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=area%20calculation" title="area calculation">area calculation</a>, <a href="https://publications.waset.org/abstracts/search?q=atrous%20convolution" title=" atrous convolution"> atrous convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20globe%20land%20cover%20classification" title=" deep globe land cover classification"> deep globe land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deepLabv3" title=" deepLabv3"> deepLabv3</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20cover%20classification" title=" land cover classification"> land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=resnet%2050" title=" resnet 50"> resnet 50</a> </p> <a href="https://publications.waset.org/abstracts/147677/classification-of-land-cover-usage-from-satellite-images-using-deep-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Zhao">Jie Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng%20Su"> Meng Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image recognition, as one of the most critical technologies in computer vision, works to help machine-like robotics understand a scene, that is, if deployed appropriately, will trigger the revolution in remote sensing and industry automation. With the developments of AI technologies, there are many prevailing and sophisticated neural networks as technologies developed for image recognition. However, computer vision platforms as hardware, supporting neural networks for image recognition, as crucial as the neural network technologies, need to be more congruently addressed as the research subjects. In contrast, different computer vision platforms are deterministic to leverage the performance of different neural networks for recognition. In this paper, three different computer vision platforms – Jetson Nano(with 4GB), a standalone laptop(with RTX 3000s, using CUDA), and Google Colab (web-based, using GPU) are explored and four prominent neural network architectures (including AlexNet, VGG(16/19), GoogleNet, and ResNet(18/34/50)), are investigated. In the context of pairwise usage between different computer vision platforms and distinctive neural networks, with the merits of recognition accuracy and time efficiency, the performances are evaluated. In the case study using public imageNets, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alexNet" title="alexNet">alexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a>, <a href="https://publications.waset.org/abstracts/search?q=googleNet" title=" googleNet"> googleNet</a>, <a href="https://publications.waset.org/abstracts/search?q=resNet" title=" resNet"> resNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Jetson%20nano" title=" Jetson nano"> Jetson nano</a>, <a href="https://publications.waset.org/abstracts/search?q=CUDA" title=" CUDA"> CUDA</a>, <a href="https://publications.waset.org/abstracts/search?q=COCO-NET" title=" COCO-NET"> COCO-NET</a>, <a href="https://publications.waset.org/abstracts/search?q=cifar10" title=" cifar10"> cifar10</a>, <a href="https://publications.waset.org/abstracts/search?q=imageNet%20large%20scale%20visual%20recognition%20challenge%20%28ILSVRC%29" title=" imageNet large scale visual recognition challenge (ILSVRC)"> imageNet large scale visual recognition challenge (ILSVRC)</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20colab" title=" google colab"> google colab</a> </p> <a href="https://publications.waset.org/abstracts/176759/an-evaluation-of-neural-network-efficacies-for-image-recognition-on-edge-ai-computer-vision-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Virender%20Singh">Virender Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathew%20Rees"> Mathew Rees</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Hampton"> Simon Hampton</a>, <a href="https://publications.waset.org/abstracts/search?q=Sivaram%20Annadurai"> Sivaram Annadurai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better compared to CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. ViT model achieved top accuracy of 83.3% for classifying plants at the genus level. 
For classifying plants at the species level, ViT models also perform better compared to the CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike in identifying plants more quickly and with improved accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=plant%20identification" title="plant identification">plant identification</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/162359/plant-identification-using-convolution-neural-network-and-vision-transformer-based-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gizem%20Eser%20Erdek">Gizem Eser Erdek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the prediction of the remaining life of industrial cutting tools used in production processes with deep learning methods. As cutting tools near the end of their life, they damage the raw material they are processing. The aim is therefore to predict the remaining life of a cutting tool based on the damage it causes to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. Photos were labeled in 5 classes according to hole quality, transforming the problem into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set. A hybrid model combining convolutional neural networks and support vector machines was also used for comparison; a sketch of this variant appears below. When all models are compared, the model based on convolutional neural networks gives the best results, with a 74% accuracy rate. In preliminary studies, the data set was arranged to include only the best and worst classes, and the study gave ~93% accuracy when the binary classification model was applied. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. Experiments have shown that deep learning methods can serve as an alternative for cutting tool life estimation. 
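<p class="card-text">As a rough sketch of the hybrid variant mentioned above, the following combines a pretrained CNN feature extractor with a support vector machine; the backbone choice, feature dimension, and the variable names X_train and y_train are assumptions for illustration, not the paper's exact configuration.</p> <pre><code class="language-python">
import torch
import torchvision
from sklearn.svm import SVC

# Pretrained CNN backbone used as a fixed feature extractor (assumed choice).
backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()  # expose the 512-d pooled features
backbone.eval()

@torch.no_grad()
def extract(images):
    """images: float tensor of shape (N, 3, 224, 224)."""
    return backbone(images).numpy()

# X_train, y_train, X_test are hypothetical tensors of hole photos and
# their five quality labels; dataset loading is not shown.
# clf = SVC(kernel="rbf").fit(extract(X_train), y_train)
# predictions = clf.predict(extract(X_test))
</code></pre>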
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=remaining%20life%20of%20industrial%20cutting%20tools" title=" remaining life of industrial cutting tools"> remaining life of industrial cutting tools</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=VggNet" title=" VggNet"> VggNet</a> </p> <a href="https://publications.waset.org/abstracts/166428/prediction-of-remaining-life-of-industrial-cutting-tools-with-deep-learning-assisted-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166428.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hongmei%20Wang">Hongmei Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziyun%20Xiang"> Ziyun Xiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20liu"> Ying liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Yu"> Li Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongsheng%20Yue"> Dongsheng Yue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was performed on chest computed tomography (CT) scans of COVID-19 and influenza to describe their characteristics. The predominant features of COVID-19 pneumonia were ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: lesions lie mainly in the dorsal periphery of the lung, favoring the lower lobes, and are often close to the pleura. Other features include grid-like shadows within ground-glass lesions, thickened diseased vessels, air bronchogram signs, and halo signs. Severe disease involves both lungs entirely, showing white-lung signs with visible air bronchograms, and a small amount of pleural effusion may be present in both chest cavities. 
At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT for influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchus signs, mosaic lung perfusion, etc. The lesions are mostly confluent, prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: For COVID-19 and influenza, the two major infectious diseases currently circulating in the world, the chest CT scans of patients with the two diseases are classified and diagnosed using deep learning algorithms. The residual network is proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural network (CNN) image models, as it solved the difficulty of training deep CNN models. Many visual tasks achieve excellent results by fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai, built on PyTorch, packages best practices for deep learning and provides effective ways to handle diagnostic problems. Based on the one-cycle approach of the Fastai library, the classification diagnosis of lung CT for the two infectious diseases is realized, and a higher recognition rate is obtained; a sketch of this recipe appears below. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT. 
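<p class="card-text">The transfer-learning recipe described here can be sketched with the fastai v2 API roughly as follows; the folder layout with one subdirectory per class, the validation split, image size, and epoch count are all assumptions, not the authors' published code.</p> <pre><code class="language-python">
from fastai.vision.all import ImageDataLoaders, vision_learner, accuracy, Resize
from torchvision.models import resnet50

# "ct_scans/" with one subfolder per class (e.g. covid19/, influenza/)
# is an assumed layout.
dls = ImageDataLoaders.from_folder("ct_scans", valid_pct=0.2,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet50, metrics=accuracy)  # pretrained ResNet
learn.fit_one_cycle(5, 3e-3)  # the one-cycle policy mentioned above
</code></pre>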
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title="COVID-19">COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=Fastai" title=" Fastai"> Fastai</a>, <a href="https://publications.waset.org/abstracts/search?q=influenza" title=" influenza"> influenza</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20network" title=" transfer network"> transfer network</a> </p> <a href="https://publications.waset.org/abstracts/125142/deep-learning-in-chest-computed-tomography-to-differentiate-covid-19-from-influenza" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcos%20Silva%20Tavares">Marcos Silva Tavares</a>, <a href="https://publications.waset.org/abstracts/search?q=Jamile%20Raquel%20Regazzo"> Jamile Raquel Regazzo</a>, <a href="https://publications.waset.org/abstracts/search?q=Edson%20Jos%C3%A9%20de%20Souza%20Sardinha"> Edson José de Souza Sardinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Murilo%20Mesquita%20Baesso"> Murilo Mesquita Baesso</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient in plants, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants received 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, generating an image bank composed of 4 folders representing the four classes, labeled as T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab© R2022b software was used for the implementation and performance analysis of the model. Efficiency was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ. 
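<p class="card-text">The study itself was implemented in Matlab; purely as an illustration, an equivalent four-class ResNet-50 fine-tuning setup looks roughly like this in PyTorch, where the loader name and hyperparameters are assumptions.</p> <pre><code class="language-python">
import torch
import torchvision

# Four output classes correspond to the nitrogen doses T1..T4.
model = torchvision.models.resnet50(weights="DEFAULT")
model.fc = torch.nn.Linear(model.fc.in_features, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed LR
loss_fn = torch.nn.CrossEntropyLoss()

# train_loader over the 224x224 leaf-image blocks is assumed.
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss_fn(model(images), labels).backward()
#     optimizer.step()
</code></pre>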
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=residual%20network%2050" title=" residual network 50"> residual network 50</a>, <a href="https://publications.waset.org/abstracts/search?q=nutritional%20status" title=" nutritional status"> nutritional status</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a> </p> <a href="https://publications.waset.org/abstracts/193011/classification-of-foliar-nitrogen-in-common-bean-phaseolus-vulgaris-l-using-deep-learning-models-and-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193011.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">19</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Gene Names Identity Recognition Using Siamese Network for Biomedical Publications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Micheal%20Olaolu%20Arowolo">Micheal Olaolu Arowolo</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Azam"> Muhammad Azam</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20He"> Fei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Mihail%20Popescu"> Mihail Popescu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Xu"> Dong Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the quantity of biological articles rises, so does the number of biological route figures. Each route figure shows gene names and relationships. Annotating pathway diagrams manually is time-consuming. Advanced image understanding models could speed up curation, but they must be more precise. There is rich information in biological pathway figures. The first step to performing image understanding of these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene name recognition, but they are not optimized for literature mining data. This study devised a method to recognize an image bounding box of gene name as a photo using deep Siamese neural network models to outperform the existing methods using ResNet, DenseNet and Inception architectures, the results obtained about 84% accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20pathway" title="biological pathway">biological pathway</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20identification" title=" gene identification"> gene identification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/160725/gene-names-identity-recognition-using-siamese-network-for-biomedical-publications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and- Excitation Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Kewen">Li Kewen</a>, <a href="https://publications.waset.org/abstracts/search?q=Su%20Zhaoxin"> Su Zhaoxin</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Xingmou"> Wang Xingmou</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhu%20Jian%20Bing"> Zhu Jian Bing </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conventional reservoir prediction methods ar not sufficient to explore the implicit relation between seismic attributes, and thus data utilization is low. In order to improve the predictive classification accuracy of reservoir lithology, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained by using seismic attribute data and lithology data of Shengli oilfield, and the nonlinear mapping relationship between seismic attribute and lithology marker is established. The experimental results show that this method can significantly improve the classification effect of reservoir lithology, and the classification accuracy is close to 70%. This study can effectively predict the lithology of undrilled area and provide support for exploration and development. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=lithology" title=" lithology"> lithology</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction%20of%20reservoir" title=" prediction of reservoir"> prediction of reservoir</a>, <a href="https://publications.waset.org/abstracts/search?q=seismic%20attributes" title=" seismic attributes "> seismic attributes </a> </p> <a href="https://publications.waset.org/abstracts/121343/research-on-reservoir-lithology-prediction-based-on-residual-neural-network-and-squeeze-and-excitation-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121343.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> SEM Image Classification Using CNN Architectures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G%C3%BCzi%CC%87n%20Ti%CC%87rke%C5%9F">Güzi̇n Ti̇rkeş</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%96zge%20Teki%CC%87n"> Özge Teki̇n</a>, <a href="https://publications.waset.org/abstracts/search?q=Kerem%20Kurtulu%C5%9F"> Kerem Kurtuluş</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Yekta%20Yurtseven"> Y. Yekta Yurtseven</a>, <a href="https://publications.waset.org/abstracts/search?q=Murat%20Baran"> Murat Baran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A scanning electron microscope (SEM) is a type of electron microscope mainly used in nanoscience and nanotechnology areas. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE - 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm at 80% and 20%, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy in the results, the Inception ResNet-V2 model was used in view of the Fine-Tuning approach. By using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results since it contains other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the train data set, hence increasing accuracy by up to 96.5%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=scanning%20electron%20microscope" title=" scanning electron microscope"> scanning electron microscope</a> </p> <a href="https://publications.waset.org/abstracts/160332/sem-image-classification-using-cnn-architectures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Robust ResNets for Chemically Reacting Flows</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Randy%20Price">Randy Price</a>, <a href="https://publications.waset.org/abstracts/search?q=Harbir%20Antil"> Harbir Antil</a>, <a href="https://publications.waset.org/abstracts/search?q=Rainald%20L%C3%B6hner"> Rainald Löhner</a>, <a href="https://publications.waset.org/abstracts/search?q=Fumiya%20Togashi"> Fumiya Togashi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Chemically reacting flows are common in engineering applications such as hypersonic flow, combustion, explosions, manufacturing process, and environmental assessments. The number of reactions in combustion simulations can exceed 100, making a large number of flow and combustion problems beyond the capabilities of current supercomputers. Motivated by this, deep neural networks (DNNs) will be introduced with the goal of eventually replacing the existing chemistry software packages with DNNs. The DNNs used in this paper are motivated by the Residual Neural Network (ResNet) architecture. In the continuum limit, ResNets become an optimization problem constrained by an ODE. Such a feature allows the use of ODE control techniques to enhance the DNNs. In this work, DNNs are constructed, which update the species un at the nᵗʰ timestep to uⁿ⁺¹ at the n+1ᵗʰ timestep. Parallel DNNs are trained for each species, taking in uⁿ as input and outputting one component of uⁿ⁺¹. These DNNs are applied to multiple species and reactions common in chemically reacting flows such as H₂-O₂ reactions. Experimental results show that the DNNs are able to accurately replicate the dynamics in various situations and in the presence of errors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chemical%20reacting%20flows" title="chemical reacting flows">chemical reacting flows</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20fluid%20dynamics" title=" computational fluid dynamics"> computational fluid dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=ODEs" title=" ODEs"> ODEs</a>, <a href="https://publications.waset.org/abstracts/search?q=residual%20neural%20networks" title=" residual neural networks"> residual neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNets" title=" ResNets"> ResNets</a> </p> <a href="https://publications.waset.org/abstracts/152971/robust-resnets-for-chemically-reacting-flows" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152971.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Megha%20Gupta">Megha Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nupur%20Prakash"> Nupur Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Identification of plant diseases has been performed using machine learning and deep learning models on the datasets containing images of healthy and diseased plant leaves. The current study carries out an evaluation of some of the deep learning models based on convolutional neural network (CNN) architectures for identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of PlantVillage dataset, available on Kaggle platform, containing 87,900 images has been used. The dataset contained images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study presented in this paper are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the high degree of accuracy achieved using these models. The highest test accuracy and F1-score of 99.59% and 0.996, respectively, were achieved by using GoogLeNet with Mini-batch momentum based gradient descent learning algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=comparative%20analysis" title="comparative analysis">comparative analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=plant%20disease%20identification" title=" plant disease identification"> plant disease identification</a> </p> <a href="https://publications.waset.org/abstracts/138543/comparison-of-deep-convolutional-neural-networks-models-for-plant-disease-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138543.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yong%20Zhao">Yong Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20He"> Jian He</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Zhang"> Cheng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cardiovascular diseases caused by hypertension are extremely threatening to human health, and early diagnosis of hypertension can save a large number of lives. Traditional hypertension detection methods require special equipment and are difficult to detect continuous blood pressure changes. In this regard, this paper first analyzes the principle of heart rate variability (HRV) and introduces sliding window and power spectral density (PSD) to analyze the time domain features and frequency domain features of HRV, and secondly, designs an HRV-based hypertension prediction network by combining Resnet, attention mechanism, and multilayer perceptron, which extracts the frequency domain through the improved ResNet18 features through a modified ResNet18, its fusion with time-domain features through an attention mechanism, and the auxiliary prediction of hypertension through a multilayer perceptron. Finally, the network was trained and tested using the publicly available SHAREE dataset on PhysioNet, and the test results showed that this network achieved 92.06% prediction accuracy for hypertension and outperformed K Near Neighbor(KNN), Bayes, Logistic, and traditional Convolutional Neural Network(CNN) models in prediction performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20rate%20variability" title=" heart rate variability"> heart rate variability</a>, <a href="https://publications.waset.org/abstracts/search?q=hypertension" title=" hypertension"> hypertension</a>, <a href="https://publications.waset.org/abstracts/search?q=residual%20networks" title=" residual networks"> residual networks</a> </p> <a href="https://publications.waset.org/abstracts/165227/assisted-prediction-of-hypertension-based-on-heart-rate-variability-and-improved-residual-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165227.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuen-Tai%20Wang">Shuen-Tai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang-An%20Kuo"> Fang-An Kuo</a>, <a href="https://publications.waset.org/abstracts/search?q=Chau-Yi%20Chou"> Chau-Yi Chou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu-Bin%20Fang"> Yu-Bin Fang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 2016 has become the year of the Artificial Intelligence explosion. AI technologies are getting more and more matured that most world well-known tech giants are making large investment to increase the capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural network to train a machine to learn features directly from data. Deep learning realizes many machine learning applications which expand the field of AI. At the present time, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes or algorithms, but the performance of different frameworks might be different. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that are running training calculation in parallel over multi GPU and multi nodes in our cloud environment. We evaluate the training performance of the frameworks with ResNet-50 convolutional neural network, and we analyze what factors that result in the performance among both distributed frameworks as well. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/110135/performance-evaluation-of-distributed-deep-learning-frameworks-in-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110135.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arian%20Hosseini">Arian Hosseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmudul%20Hasan"> Mahmudul Hasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20classification" title="deep classification">deep classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20moderation" title=" content moderation"> content moderation</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=explosion%20detection" title=" explosion detection"> explosion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/183644/faster-lighter-more-accurate-a-deep-learning-ensemble-for-content-moderation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marrone%20Silverio%20Melo%20Dantas%20Pedro%20Henrique%20Dreyer">Marrone Silverio Melo Dantas Pedro Henrique Dreyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabriel%20Fonseca%20Reis%20de%20Souza"> Gabriel Fonseca Reis de Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Bezerra"> Daniel Bezerra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Souza"> Ricardo Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Lins"> Silvia Lins</a>, <a href="https://publications.waset.org/abstracts/search?q=Judith%20Kelner"> Judith Kelner</a>, <a href="https://publications.waset.org/abstracts/search?q=Djamel%20Fawzi%20Hadj%20Sadok"> Djamel Fawzi Hadj Sadok</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing these with a manually annotated dataset, as well as the efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained for the projection dataset an F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-Score of 0.861, an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures. We adopted metrics accuracy and F1-Score, for VGG, DenseNet, MobileNet, Inception, and ResNet. 
The VGG architecture outperformed the others on both the projection and tracking datasets, reaching an accuracy and F1-Score of 0.997 and 0.993, respectively, on the projection dataset and an accuracy of 0.991 and an F1-Score of 0.981 on the tracking dataset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RJ45" title="RJ45">RJ45</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20annotation" title=" automatic annotation"> automatic annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20projection" title=" 3D projection"> 3D projection</a> </p> <a href="https://publications.waset.org/abstracts/130540/video-object-segmentation-for-automatic-image-annotation-of-ethernet-connectors-with-environment-mapping-and-3d-projection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> </div> </main> <footer> <div class="container text-center"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> </body> </html>
href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>