Search results for: inception ResNet

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="inception ResNet"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 196</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: inception ResNet</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">196</span> Clothes Identification Using Inception ResNet V2 and MobileNet V2</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subodh%20Chandra%20Shakya">Subodh Chandra Shakya</a>, <a href="https://publications.waset.org/abstracts/search?q=Badal%20Shrestha"> Badal Shrestha</a>, <a href="https://publications.waset.org/abstracts/search?q=Suni%20Thapa"> Suni Thapa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Chauhan"> Ashutosh Chauhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Saugat%20Adhikari"> Saugat Adhikari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To tackle our problem of clothes identification, we used different architectures of Convolutional Neural Networks. Among different architectures, the outcome from Inception ResNet V2 and MobileNet V2 seemed promising. On comparison of the metrices, we observed that the Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. So this paper of ours proposes the cloth identifier using Inception ResNet V2 and also contains the comparison between the outcome of ResNet V2 and MobileNet V2. The document here contains the results and findings of the research that we performed on the DeepFashion Dataset. To improve the dataset, we used different image preprocessing techniques like image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on cloth identification so that we could develop a reliable system that is good enough in identifying the clothes worn by the users. The whole system can be integrated with some kind of recommendation system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inception%20ResNet" title="inception ResNet">inception ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20net" title=" convolutional neural net"> convolutional neural net</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=confusion%20matrix" title=" confusion matrix"> confusion matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20preprocessing" title=" data preprocessing"> data preprocessing</a> </p> <a href="https://publications.waset.org/abstracts/129604/clothes-identification-using-inception-resnet-v2-and-mobilenet-v2" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">195</span> Deep Learning Approach to Trademark Design Code Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Girish%20J.%20Showkatramani">Girish J. Showkatramani</a>, <a href="https://publications.waset.org/abstracts/search?q=Arthi%20M.%20Krishna"> Arthi M. Krishna</a>, <a href="https://publications.waset.org/abstracts/search?q=Sashi%20Nareddi"> Sashi Nareddi</a>, <a href="https://publications.waset.org/abstracts/search?q=Naresh%20Nula"> Naresh Nula</a>, <a href="https://publications.waset.org/abstracts/search?q=Aaron%20Pepe"> Aaron Pepe</a>, <a href="https://publications.waset.org/abstracts/search?q=Glen%20Brown"> Glen Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=Greg%20Gabel"> Greg Gabel</a>, <a href="https://publications.waset.org/abstracts/search?q=Chris%20Doninger"> Chris Doninger</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trademark examination and approval is a complex process that involves analysis and review of the design components of the marks such as the visual representation as well as the textual data associated with marks such as marks' description. Currently, the process of identifying marks with similar visual representation is done manually in United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts determining the trademark design codes used to catalog the visual design codes in the mark. In this study, we explore several methods to automate trademark design code classification. Based on recent successes of convolutional neural networks in image classification, we have used several different convolutional neural networks such as Google’s Inception v3, Inception-ResNet-v2, and Xception net. The study also looks into other techniques to augment the results from CNNs such as using Open Source Computer Vision Library (OpenCV) to pre-process the images. This paper reports the results of the various models trained on year of annotated trademark images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=trademark%20design%20code" title="trademark design code">trademark design code</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=trademark%20image%20classification" title=" trademark image classification"> trademark image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=trademark%20image%20search" title=" trademark image search"> trademark image search</a>, <a href="https://publications.waset.org/abstracts/search?q=Inception-ResNet-v2" title=" Inception-ResNet-v2"> Inception-ResNet-v2</a> </p> <a href="https://publications.waset.org/abstracts/85337/deep-learning-approach-to-trademark-design-code-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">194</span> PatchMix: Learning Transferable Semi-Supervised Representation by Predicting Patches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arpit%20Rai">Arpit Rai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we propose PatchMix, a semi-supervised method for pre-training visual representations. PatchMix mixes patches of two images and then solves an auxiliary task of predicting the label of each patch in the mixed image. Our experiments on the CIFAR-10, 100 and the SVHN dataset show that the representations learned by this method encodes useful information for transfer to new tasks and outperform the baseline Residual Network encoders by on CIFAR 10 by 12% on ResNet 101 and 2% on ResNet-56, by 4% on CIFAR-100 on ResNet101 and by 6% on SVHN dataset on the ResNet-101 baseline model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=self-supervised%20learning" title="self-supervised learning">self-supervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=representation%20learning" title=" representation learning"> representation learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=generalization" title=" generalization"> generalization</a> </p> <a href="https://publications.waset.org/abstracts/150013/patchmix-learning-transferable-semi-supervised-representation-by-predicting-patches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150013.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">193</span> The Modification of Convolutional Neural Network in Fin Whale Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiahao%20Cui">Jiahao Cui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the past centuries, due to climate change and intense whaling, the global whale population has dramatically declined. Among the various whale species, the fin whale experienced the most drastic drop in number due to its popularity in whaling. Under this background, identifying fin whale calls could be immensely beneficial to the preservation of the species. This paper uses feature extraction to process the input audio signal, then a network based on AlexNet and three networks based on the ResNet model was constructed to classify fin whale calls. A mixture of the DOSITS database and the Watkins database was used during training. The results demonstrate that a modified ResNet network has the best performance considering precision and network complexity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=fin%20whale%20preservation" title=" fin whale preservation"> fin whale preservation</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/155185/the-modification-of-convolutional-neural-network-in-fin-whale-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155185.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">192</span> Bone Fracture Detection with X-Ray Images Using Mobilenet V3 Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashlesha%20Khanapure">Ashlesha Khanapure</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsh%20Kashyap"> Harsh Kashyap</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhinav%20Anand"> Abhinav Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanjana%20Habib"> Sanjana Habib</a>, <a href="https://publications.waset.org/abstracts/search?q=Anupama%20Bidargaddi"> Anupama Bidargaddi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Technologies that are developing quickly are being developed daily in a variety of disciplines, particularly the medical field. For the purpose of detecting bone fractures in X-ray pictures of different body segments, our work compares the ResNet-50 and MobileNetV3 architectures. It evaluates accuracy and computing efficiency with X-rays of the elbow, hand, and shoulder from the MURA dataset. Through training and validation, the models are evaluated on normal and fractured images. While ResNet-50 showcases superior accuracy in fracture identification, MobileNetV3 showcases superior speed and resource optimization. Despite ResNet-50’s accuracy, MobileNetV3’s swifter inference makes it a viable choice for real-time clinical applications, emphasizing the importance of balancing computational efficiency and accuracy in medical imaging. We created a graphical user interface (GUI) for MobileNet V3 model bone fracture detection. This research underscores MobileNetV3’s potential to streamline bone fracture diagnoses, potentially revolutionizing orthopedic medical procedures and enhancing patient care. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=MobileNet%20V3" title=" MobileNet V3"> MobileNet V3</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a>, <a href="https://publications.waset.org/abstracts/search?q=healthcare" title=" healthcare"> healthcare</a>, <a href="https://publications.waset.org/abstracts/search?q=MURA" title=" MURA"> MURA</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray" title=" X-ray"> X-ray</a>, <a href="https://publications.waset.org/abstracts/search?q=fracture%20detection" title=" fracture detection"> fracture detection</a> </p> <a href="https://publications.waset.org/abstracts/182019/bone-fracture-detection-with-x-ray-images-using-mobilenet-v3-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182019.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">191</span> Improving Axial-Attention Network via Cross-Channel Weight Sharing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nazmul%20Shahadat">Nazmul Shahadat</a>, <a href="https://publications.waset.org/abstracts/search?q=Anthony%20S.%20Maida"> Anthony S. Maida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, hypercomplex inspired neural networks improved deep CNN architectures due to their ability to share weights across input channels and thus improve cohesiveness of representations within the layers. The work described herein studies the effect of replacing existing layers in an Axial Attention ResNet with their quaternion variants that use cross-channel weight sharing to assess the effect on image classification. We expect the quaternion enhancements to produce improved feature maps with more interlinked representations. We experiment with the stem of the network, the bottleneck layer, and the fully connected backend by replacing them with quaternion versions. These modifications lead to novel architectures which yield improved accuracy performance on the ImageNet300k classification dataset. Our baseline networks for comparison were the original real-valued ResNet, the original quaternion-valued ResNet, and the Axial Attention ResNet. Since improvement was observed regardless of which part of the network was modified, there is a promise that this technique may be generally useful in improving classification accuracy for a large class of networks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=axial%20attention" title="axial attention">axial attention</a>, <a href="https://publications.waset.org/abstracts/search?q=representational%20networks" title=" representational networks"> representational networks</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20sharing" title=" weight sharing"> weight sharing</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-channel%20correlations" title=" cross-channel correlations"> cross-channel correlations</a>, <a href="https://publications.waset.org/abstracts/search?q=quaternion-enhanced%20axial%20attention" title=" quaternion-enhanced axial attention"> quaternion-enhanced axial attention</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20networks" title=" deep networks"> deep networks</a> </p> <a href="https://publications.waset.org/abstracts/164808/improving-axial-attention-network-via-cross-channel-weight-sharing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164808.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">190</span> An Auxiliary Technique for Coronary Heart Disease Prediction by Analyzing Electrocardiogram Based on ResNet and Bi-Long Short-Term Memory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Zhang">Yang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20He"> Jian He</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Heart disease is one of the leading causes of death in the world, and coronary heart disease (CHD) is one of the major heart diseases. Electrocardiogram (ECG) is widely used in the detection of heart diseases, but the traditional manual method for CHD prediction by analyzing ECG requires lots of professional knowledge for doctors. This paper introduces sliding window and continuous wavelet transform (CWT) to transform ECG signals into images, and then ResNet and Bi-LSTM are introduced to build the ECG feature extraction network (namely ECGNet). At last, an auxiliary system for coronary heart disease prediction was developed based on modified ResNet18 and Bi-LSTM, and the public ECG dataset of CHD from MIMIC-3 was used to train and test the system. The experimental results show that the accuracy of the method is 83%, and the F1-score is 83%. Compared with the available methods for CHD prediction based on ECG, such as kNN, decision tree, VGGNet, etc., this method not only improves the prediction accuracy but also could avoid the degradation phenomenon of the deep learning network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bi-LSTM" title="Bi-LSTM">Bi-LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=CHD" title=" CHD"> CHD</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=sliding%C2%A0window" title=" sliding window"> sliding window</a> </p> <a href="https://publications.waset.org/abstracts/165165/an-auxiliary-technique-for-coronary-heart-disease-prediction-by-analyzing-electrocardiogram-based-on-resnet-and-bi-long-short-term-memory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165165.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">189</span> Attention-Based ResNet for Breast Cancer Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abebe%20Mulugojam%20Negash">Abebe Mulugojam Negash</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongbin%20Yu"> Yongbin Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ekong%20Favour"> Ekong Favour</a>, <a href="https://publications.waset.org/abstracts/search?q=Bekalu%20Nigus%20Dawit"> Bekalu Nigus Dawit</a>, <a href="https://publications.waset.org/abstracts/search?q=Molla%20Woretaw%20Teshome"> Molla Woretaw Teshome</a>, <a href="https://publications.waset.org/abstracts/search?q=Aynalem%20Birtukan%20Yirga"> Aynalem Birtukan Yirga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer remains a significant health concern, necessitating advancements in diagnostic methodologies. Addressing this, our paper confronts the notable challenges in breast cancer classification, particularly the imbalance in datasets and the constraints in the accuracy and interpretability of prevailing deep learning approaches. We proposed an attention-based residual neural network (ResNet), which effectively combines the robust features of ResNet with an advanced attention mechanism. Enhanced through strategic data augmentation and positive weight adjustments, this approach specifically targets the issue of data imbalance. The proposed model is tested on the BreakHis dataset and achieved accuracies of 99.00%, 99.04%, 98.67%, and 98.08% in different magnifications (40X, 100X, 200X, and 400X), respectively. We evaluated the performance by using different evaluation metrics such as precision, recall, and F1-Score and made comparisons with other state-of-the-art methods. Our experiments demonstrate that the proposed model outperforms existing approaches, achieving higher accuracy in breast cancer classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=residual%20neural%20network" title="residual neural network">residual neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20weight" title=" positive weight"> positive weight</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/181531/attention-based-resnet-for-breast-cancer-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">188</span> A Comparative Study of Deep Learning Methods for COVID-19 Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> COVID 19 is a pandemic which has resulted in thousands of deaths around the world and a huge impact on the global economy. Testing is a huge issue as the test kits have limited availability and are expensive to manufacture. Using deep learning methods on radiology images in the detection of the coronavirus as these images contain information about the spread of the virus in the lungs is extremely economical and time-saving as it can be used in areas with a lack of testing facilities. This paper focuses on binary classification and multi-class classification of COVID 19 and other diseases such as pneumonia, tuberculosis, etc. Different deep learning methods such as VGG-19, COVID-Net, ResNET+ SVM, Deep CNN, DarkCovidnet, etc., have been used, and their accuracy has been compared using the Chest X-Ray dataset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=radiology" title=" radiology"> radiology</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG-19" title=" VGG-19"> VGG-19</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127887/a-comparative-study-of-deep-learning-methods-for-covid-19-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127887.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">187</span> Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20Rose%20T.%20Alipo-on">Jacqueline Rose T. Alipo-on</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesca%20Isabelle%20F.%20Escobar"> Francesca Isabelle F. Escobar</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20Joshua%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20Al%20Dahoul"> Nouar Al Dahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human errors. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform an analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform other models in terms of recall and F1 score using a five-fold average score of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heartbeat%20classification" title="heartbeat classification">heartbeat classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiogram%20signals" title=" electrocardiogram signals"> electrocardiogram signals</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a> </p> <a href="https://publications.waset.org/abstracts/162763/electrocardiogram-based-heartbeat-classification-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162763.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">186</span> Gene Names Identity Recognition Using Siamese Network for Biomedical Publications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Micheal%20Olaolu%20Arowolo">Micheal Olaolu Arowolo</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Azam"> Muhammad Azam</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20He"> Fei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Mihail%20Popescu"> Mihail Popescu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Xu"> Dong Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the quantity of biological articles rises, so does the number of biological route figures. Each route figure shows gene names and relationships. Annotating pathway diagrams manually is time-consuming. Advanced image understanding models could speed up curation, but they must be more precise. There is rich information in biological pathway figures. The first step to performing image understanding of these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene name recognition, but they are not optimized for literature mining data. This study devised a method to recognize an image bounding box of gene name as a photo using deep Siamese neural network models to outperform the existing methods using ResNet, DenseNet and Inception architectures, the results obtained about 84% accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20pathway" title="biological pathway">biological pathway</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20identification" title=" gene identification"> gene identification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/160725/gene-names-identity-recognition-using-siamese-network-for-biomedical-publications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">185</span> Cells Detection and Recognition in Bone Marrow Examination with Deep Learning Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiyin%20He">Shiyin He</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Huang"> Zheng Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, deep learning methods are applied in bio-medical field to detect and count different types of cells in an automatic way instead of manual work in medical practice, specifically in bone marrow examination. The process is mainly composed of two steps, detection and recognition. Mask-Region-Convolutional Neural Networks (Mask-RCNN) was used for detection and image segmentation to extract cells and then Convolutional Neural Networks (CNN), as well as Deep Residual Network (ResNet) was used to classify. Result of cell detection network shows high efficiency to meet application requirements. For the cell recognition network, two networks are compared and the final system is fully applicable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell%20detection" title="cell detection">cell detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20recognition" title=" cell recognition"> cell recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Mask-RCNN" title=" Mask-RCNN"> Mask-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a> </p> <a href="https://publications.waset.org/abstracts/98649/cells-detection-and-recognition-in-bone-marrow-examination-with-deep-learning-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98649.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">184</span> Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sinha">Ankit Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Soham%20Banerjee"> Soham Banerjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratik%20Chattopadhyay"> Pratik Chattopadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising of a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online-hard-negative-mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retail%20stores" title="retail stores">retail stores</a>, <a href="https://publications.waset.org/abstracts/search?q=faster-RCNN" title=" faster-RCNN"> faster-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20localization" title=" object localization"> object localization</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-18" title=" ResNet-18"> ResNet-18</a>, <a href="https://publications.waset.org/abstracts/search?q=triplet%20loss" title=" triplet loss"> triplet loss</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20recognition" title=" product recognition"> product recognition</a> </p> <a href="https://publications.waset.org/abstracts/153836/effective-stacking-of-deep-neural-models-for-automated-object-recognition-in-retail-stores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">183</span> Image Instance Segmentation Using Modified Mask R-CNN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Avatharam%20Ganivada">Avatharam Ganivada</a>, <a href="https://publications.waset.org/abstracts/search?q=Krishna%20Shah"> Krishna Shah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Mask R-CNN is recently introduced by the team of Facebook AI Research (FAIR), which is mainly concerned with instance segmentation in images. Here, the Mask R-CNN is based on ResNet and feature pyramid network (FPN), where a single dropout method is employed. This paper provides a modified Mask R-CNN by adding multiple dropout methods into the Mask R-CNN. The proposed model has also utilized the concepts of Resnet and FPN to extract stage-wise network feature maps, wherein a top-down network path having lateral connections is used to obtain semantically strong features. The proposed model produces three outputs for each object in the image: class label, bounding box coordinates, and object mask. The performance of the proposed network is evaluated in the segmentation of every instance in images using COCO and cityscape datasets. The proposed model achieves better performance than the state-of-the-networks for the datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=instance%20segmentation" title="instance segmentation">instance segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/147310/image-instance-segmentation-using-modified-mask-r-cnn" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">182</span> SEM Image Classification Using CNN Architectures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G%C3%BCzi%CC%87n%20Ti%CC%87rke%C5%9F">Güzi̇n Ti̇rkeş</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%96zge%20Teki%CC%87n"> Özge Teki̇n</a>, <a href="https://publications.waset.org/abstracts/search?q=Kerem%20Kurtulu%C5%9F"> Kerem Kurtuluş</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Yekta%20Yurtseven"> Y. Yekta Yurtseven</a>, <a href="https://publications.waset.org/abstracts/search?q=Murat%20Baran"> Murat Baran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A scanning electron microscope (SEM) is a type of electron microscope mainly used in nanoscience and nanotechnology areas. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE - 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm at 80% and 20%, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy in the results, the Inception ResNet-V2 model was used in view of the Fine-Tuning approach. By using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results since it contains other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the train data set, hence increasing accuracy by up to 96.5%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=scanning%20electron%20microscope" title=" scanning electron microscope"> scanning electron microscope</a> </p> <a href="https://publications.waset.org/abstracts/160332/sem-image-classification-using-cnn-architectures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">181</span> A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sophia%20Shi">Sophia Shi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. AKI patient mortality rate is high in the ICU and is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine among others, two numeric models using MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNN) were used, VGG and Resnet for numeric data and Resnet for image data, and they were combined into a hybrid model by concatenating feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after running through Softmax and additional code. The hybrid model successfully predicted AKI and the highest AUROC of the model was 0.953, achieving an accuracy of 90% and F1-score of 0.91. This model can be implemented into urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient’s admission to the ICU, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Acute%20kidney%20injury" title="Acute kidney injury">Acute kidney injury</a>, <a href="https://publications.waset.org/abstracts/search?q=Convolutional%20neural%20network" title=" Convolutional neural network"> Convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=Hybrid%20deep%20learning" title=" Hybrid deep learning"> Hybrid deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Patient%20record%20data" title=" Patient record data"> Patient record data</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Ultrasound%20kidney%20images" title=" Ultrasound kidney images"> Ultrasound kidney images</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a> </p> <a href="https://publications.waset.org/abstracts/137226/a-novel-hybrid-deep-learning-architecture-for-predicting-acute-kidney-injury-using-patient-record-data-and-ultrasound-kidney-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137226.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">180</span> Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marrone%20Silverio%20Melo%20Dantas%20Pedro%20Henrique%20Dreyer">Marrone Silverio Melo Dantas Pedro Henrique Dreyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabriel%20Fonseca%20Reis%20de%20Souza"> Gabriel Fonseca Reis de Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Bezerra"> Daniel Bezerra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Souza"> Ricardo Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Lins"> Silvia Lins</a>, <a href="https://publications.waset.org/abstracts/search?q=Judith%20Kelner"> Judith Kelner</a>, <a href="https://publications.waset.org/abstracts/search?q=Djamel%20Fawzi%20Hadj%20Sadok"> Djamel Fawzi Hadj Sadok</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing these with a manually annotated dataset, as well as the efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained for the projection dataset an F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. 
Concerning the tracking dataset, we achieved an F1-score of 0.861 and an accuracy of 0.932, while mAP reached 0.894. To evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures, adopting accuracy and F1-score as metrics for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others on both the projection and tracking datasets: it reached an accuracy and F1-score of 0.997 and 0.993 on the projection dataset, and an accuracy of 0.991 and an F1-score of 0.981 on the tracking dataset.
Keywords: RJ45, automatic annotation, object tracking, 3D projection
Procedia: https://publications.waset.org/abstracts/130540/video-object-segmentation-for-automatic-image-annotation-of-ethernet-connectors-with-environment-mapping-and-3d-projection | PDF: https://publications.waset.org/abstracts/130540.pdf | Downloads: 167

179. Audio-Visual Recognition Based on Effective Model and Distillation
Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin
Abstract: Recent years have shown that audio-visual recognition has great potential in strongly noisy environments. Existing audio-visual recognition methods have explored ResNet-based models and feature fusion. However, on the one hand, ResNet occupies a large amount of memory, restricting its use in engineering applications; on the other hand, feature merging introduces interference in high-noise environments. To solve these problems, we propose an effective framework with bidirectional distillation. First, considering its good performance in feature extraction, we chose the lightweight EfficientNet model as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose bidirectional distillation in decision-level fusion.
In more detail, our experimental results are based on a multimodal dataset collected from 24 volunteers. Eventually, the lipreading accuracy of our framework increased by 2.3% compared with existing systems, and our framework improved audio-visual fusion in a high-noise environment compared with an audio-only recognition system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lipreading" title="lipreading">lipreading</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title=" audio-visual"> audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=Efficientnet" title=" Efficientnet"> Efficientnet</a>, <a href="https://publications.waset.org/abstracts/search?q=distillation" title=" distillation"> distillation</a> </p> <a href="https://publications.waset.org/abstracts/146625/audio-visual-recognition-based-on-effective-model-and-distillation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">178</span> Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Ayesha%20Fathima">Shaik Ayesha Fathima</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Noor%20Jahan"> Shaik Noor Jahan</a>, <a href="https://publications.waset.org/abstracts/search?q=Duvvada%20Rajeswara%20Rao"> Duvvada Rajeswara Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Earth's environment and its evolution can be observed through satellite images in near real-time. Remote sensing data from satellite imagery provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing it with standard data pre-processing techniques, feeding the processed data into the proposed algorithm, and analysing the obtained result. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, and ResNet. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification, with the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments. 
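<p class="card-text">As an illustration of the segmentation approach described above (a minimal sketch, not the authors' implementation), the following assumes a PyTorch/torchvision environment; the class count and tile size are placeholders:</p> <pre><code># Minimal sketch: DeepLabv3 (atrous convolution) for multi-class land cover segmentation.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 7  # hypothetical number of land cover classes

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

tile = torch.randn(1, 3, 512, 512)  # dummy 3-channel satellite tile

with torch.no_grad():
    out = model(tile)["out"]   # shape: (1, NUM_CLASSES, 512, 512)
    mask = out.argmax(dim=1)   # per-pixel land cover class index

print(mask.shape)  # torch.Size([1, 512, 512])
</code></pre>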
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=area%20calculation" title="area calculation">area calculation</a>, <a href="https://publications.waset.org/abstracts/search?q=atrous%20convolution" title=" atrous convolution"> atrous convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20globe%20land%20cover%20classification" title=" deep globe land cover classification"> deep globe land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deepLabv3" title=" deepLabv3"> deepLabv3</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20cover%20classification" title=" land cover classification"> land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=resnet%2050" title=" resnet 50"> resnet 50</a> </p> <a href="https://publications.waset.org/abstracts/147677/classification-of-land-cover-usage-from-satellite-images-using-deep-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">177</span> SEMCPRA-Sar-Esembled Model for Climate Prediction in Remote Area</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kamalpreet%20Kaur">Kamalpreet Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Renu%20Dhir"> Renu Dhir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Climate prediction is an essential component of climate research, which helps evaluate possible effects on economies, communities, and ecosystems. Climate prediction involves short-term weather prediction, seasonal prediction, and long-term climate change prediction. Climate prediction can use the information gathered from satellites, ground-based stations, and ocean buoys, among other sources. The paper's four architectures, such as ResNet50, VGG19, Inception-v3, and Xception, have been combined using an ensemble approach for overall performance and robustness. An ensemble of different models makes a prediction, and the majority vote determines the final prediction. The various architectures such as ResNet50, VGG19, Inception-v3, and Xception efficiently classify the dataset RSI-CB256, which contains satellite images into cloudy and non-cloudy. The generated ensembled S-E model (Sar-ensembled model) provides an accuracy of 99.25%. 
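<p class="card-text">A minimal sketch of the majority-vote step described above, assuming each of the four classifiers has already produced a cloudy/non-cloudy label per image (the per-model predictions below are placeholders, not the paper's results):</p> <pre><code># Minimal sketch: hard-voting ensemble over four image classifiers.
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions has shape (n_models, n_images); returns one label per image."""
    votes_for_one = predictions.sum(axis=0)
    # Predict 1 (cloudy) when more than half of the models vote 1.
    return (votes_for_one * 2 > predictions.shape[0]).astype(int)

# Hypothetical 0/1 outputs of ResNet50, VGG19, Inception-v3 and Xception on 6 tiles.
preds = np.array([
    [1, 0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
])
print(majority_vote(preds))  # [1 0 1 1 0 0]
</code></pre>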
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=climate" title="climate">climate</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction" title=" prediction"> prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/178864/semcpra-sar-esembled-model-for-climate-prediction-in-remote-area" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">176</span> An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Zhao">Jie Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng%20Su"> Meng Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image recognition, one of the most critical technologies in computer vision, helps robotic systems understand a scene and, if deployed appropriately, will drive advances in remote sensing and industrial automation. With the development of AI technologies, many sophisticated neural networks are available for image recognition. However, the computer vision platforms, that is, the hardware that runs these neural networks, are just as crucial as the network architectures themselves and deserve equal attention as research subjects, since different platforms determine how well different neural networks can perform. In this paper, three computer vision platforms, a Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU), are explored, and four prominent neural network architectures (AlexNet, VGG (16/19), GoogLeNet, and ResNet (18/34/50)) are investigated. For each pairing of platform and network, performance is evaluated in terms of recognition accuracy and time efficiency. In a case study using public ImageNet data, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints. 
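<p class="card-text">A minimal sketch of the kind of pairwise measurement described above, timing several torchvision classifiers on whatever device is available (CPU, a CUDA laptop GPU, or a Jetson-class board); the batch size and iteration counts are arbitrary placeholders rather than the study's protocol:</p> <pre><code># Minimal sketch: compare inference latency of ImageNet-style classifiers on the local device.
import time
import torch
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
candidates = {
    "AlexNet": models.alexnet(weights=None),
    "VGG16": models.vgg16(weights=None),
    "GoogLeNet": models.googlenet(weights=None),
    "ResNet50": models.resnet50(weights=None),
}

batch = torch.randn(8, 3, 224, 224, device=device)  # dummy image batch

for name, net in candidates.items():
    net = net.to(device).eval()
    with torch.no_grad():
        for _ in range(3):          # warm-up passes
            net(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(20):
            net(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed_ms = (time.perf_counter() - start) / 20 * 1000
    print(f"{name:10s} {elapsed_ms:8.1f} ms per batch")
</code></pre>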
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alexNet" title="alexNet">alexNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a>, <a href="https://publications.waset.org/abstracts/search?q=googleNet" title=" googleNet"> googleNet</a>, <a href="https://publications.waset.org/abstracts/search?q=resNet" title=" resNet"> resNet</a>, <a href="https://publications.waset.org/abstracts/search?q=Jetson%20nano" title=" Jetson nano"> Jetson nano</a>, <a href="https://publications.waset.org/abstracts/search?q=CUDA" title=" CUDA"> CUDA</a>, <a href="https://publications.waset.org/abstracts/search?q=COCO-NET" title=" COCO-NET"> COCO-NET</a>, <a href="https://publications.waset.org/abstracts/search?q=cifar10" title=" cifar10"> cifar10</a>, <a href="https://publications.waset.org/abstracts/search?q=imageNet%20large%20scale%20visual%20recognition%20challenge%20%28ILSVRC%29" title=" imageNet large scale visual recognition challenge (ILSVRC)"> imageNet large scale visual recognition challenge (ILSVRC)</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20colab" title=" google colab"> google colab</a> </p> <a href="https://publications.waset.org/abstracts/176759/an-evaluation-of-neural-network-efficacies-for-image-recognition-on-edge-ai-computer-vision-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">175</span> Utilization of Schnerr-Sauer Cavitation Model for Simulation of Cavitation Inception and Super Cavitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammadreza%20Nezamirad">Mohammadreza Nezamirad</a>, <a href="https://publications.waset.org/abstracts/search?q=Azadeh%20Yazdi"> Azadeh Yazdi</a>, <a href="https://publications.waset.org/abstracts/search?q=Sepideh%20Amirahmadian"> Sepideh Amirahmadian</a>, <a href="https://publications.waset.org/abstracts/search?q=Nasim%20Sabetpour"> Nasim Sabetpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirmasoud%20Hamedi"> Amirmasoud Hamedi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, the Reynolds-Stress-Navier-Stokes framework is utilized to investigate the flow inside the diesel injector nozzle. The flow is assumed to be multiphase as the formation of vapor by pressure drop is visualized. For pressure and velocity linkage, the coupled algorithm is used. Since the cavitation phenomenon inherently is unsteady, the quasi-steady approach is utilized for saving time and resources in the current study. Schnerr-Sauer cavitation model is used, which was capable of predicting flow behavior both at the initial and final steps of the cavitation process. Two different turbulent models were used in this study to clarify which one is more capable in predicting cavitation inception and super-cavitation. It was found that K-ε was more compatible with the Shnerr-Sauer cavitation model; therefore, the mentioned model is used for the rest of this study. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CFD" title="CFD">CFD</a>, <a href="https://publications.waset.org/abstracts/search?q=RANS" title=" RANS"> RANS</a>, <a href="https://publications.waset.org/abstracts/search?q=cavitation" title=" cavitation"> cavitation</a>, <a href="https://publications.waset.org/abstracts/search?q=fuel" title=" fuel"> fuel</a>, <a href="https://publications.waset.org/abstracts/search?q=injector" title=" injector"> injector</a> </p> <a href="https://publications.waset.org/abstracts/138110/utilization-of-schnerr-sauer-cavitation-model-for-simulation-of-cavitation-inception-and-super-cavitation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138110.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">174</span> Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Virender%20Singh">Virender Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathew%20Rees"> Mathew Rees</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Hampton"> Simon Hampton</a>, <a href="https://publications.waset.org/abstracts/search?q=Sivaram%20Annadurai"> Sivaram Annadurai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species make correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models also perform better than ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike in identifying plants more quickly and with improved accuracy. 
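<p class="card-text">A minimal sketch of fine-tuning a ViT classifier with a simple augmentation pipeline, in the spirit of the comparison above; it assumes the timm library and a folder-per-genus image layout, and the path, class count, and augmentation choices are placeholders rather than the authors' exact configuration:</p> <pre><code># Minimal sketch: fine-tune a pretrained Vision Transformer for genus-level plant classification.
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_GENERA = 100  # hypothetical number of genus classes

train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("data/plants/train", transform=train_tfms)  # placeholder path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_GENERA).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:   # one epoch shown; real training runs many epochs
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
</code></pre>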
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=plant%20identification" title="plant identification">plant identification</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/162359/plant-identification-using-convolution-neural-network-and-vision-transformer-based-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">173</span> Experimental Study on Flooding Phenomena in a Three-Phase Direct Contact Heat Exchanger for the Utilisation in Solar Pond Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hameed%20B.%20Mahood">Hameed B. Mahood</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Sh.%20Baqir"> Ali Sh. Baqir</a>, <a href="https://publications.waset.org/abstracts/search?q=Alasdair%20N.%20Campbell"> Alasdair N. Campbell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Experiments to study the limitation of flooding inception of three-phase direct contact condenser have been carried out in a counter-current small diameter vertical condenser. The total column height was 70 cm and 4 cm diameter. Only 48 cm has been used as an active three-phase direct contact condenser height. Vapour pentane with three different initial temperatures (40, 43.5 and 47.5 °C) and water with a constant temperature (19 °C) have been used as a dispersed phase and a continuous phase respectively. Five different continuous phase mass flow rate and four different dispersed phase mass flow rate have been tested throughout the experiments. Dimensionless correlation based on the previous common flooding correlation is proposed to calculate the up flow flooding inception of the three-phase direct contact condenser. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Three-phase%20heat%20exchanger" title="Three-phase heat exchanger">Three-phase heat exchanger</a>, <a href="https://publications.waset.org/abstracts/search?q=condenser" title=" condenser"> condenser</a>, <a href="https://publications.waset.org/abstracts/search?q=solar%20energy" title=" solar energy"> solar energy</a>, <a href="https://publications.waset.org/abstracts/search?q=flooding%20phenomena" title=" flooding phenomena"> flooding phenomena</a> </p> <a href="https://publications.waset.org/abstracts/57093/experimental-study-on-flooding-phenomena-in-a-three-phase-direct-contact-heat-exchanger-for-the-utilisation-in-solar-pond-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">172</span> Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gizem%20Eser%20Erdek">Gizem Eser Erdek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of a cutting tool decreases, it damages the raw material it is processing. The aim of this study is to predict the remaining life of the cutting tool based on the damage it causes to the raw material. For this purpose, hole photos were collected from the hole-drilling machine over 8 months. The photos were labeled into 5 classes according to hole quality, transforming the task into a classification problem. Using the prepared dataset, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the dataset. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, the convolutional neural network model was found to give successful results, with a 74% accuracy rate. In a preliminary study, the dataset was reduced to only the best and worst classes, and the binary classification model reached roughly 93% accuracy. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material, demonstrating that deep learning is a viable alternative for cutting tool life estimation. 
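<p class="card-text">A minimal sketch of the hybrid CNN plus support vector machine baseline mentioned above, assuming hole photos arranged in one folder per quality class; the backbone, paths, and hyperparameters are illustrative assumptions, not the study's configuration:</p> <pre><code># Minimal sketch: a pretrained CNN extracts features from hole photos,
# and an SVM classifies them into 5 hole-quality classes.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.svm import SVC

tfms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
ds = datasets.ImageFolder("data/holes/train", transform=tfms)  # placeholder path, 5 class folders
dl = DataLoader(ds, batch_size=32, shuffle=False)

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()   # drop the classification head, keep 512-d features
backbone.eval()

feats, labels = [], []
with torch.no_grad():
    for x, y in dl:
        feats.append(backbone(x).numpy())
        labels.append(y.numpy())
feats, labels = np.concatenate(feats), np.concatenate(labels)

svm = SVC(kernel="rbf", C=1.0)
svm.fit(feats, labels)
print("training accuracy:", svm.score(feats, labels))
</code></pre>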
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=remaining%20life%20of%20industrial%20cutting%20tools" title=" remaining life of industrial cutting tools"> remaining life of industrial cutting tools</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=VggNet" title=" VggNet"> VggNet</a> </p> <a href="https://publications.waset.org/abstracts/166428/prediction-of-remaining-life-of-industrial-cutting-tools-with-deep-learning-assisted-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166428.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">171</span> Study of the Influence of Nozzle Length and Jet Angles on the Air Entrainment by Plunging Water Jets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Luis%20Mu%C3%B1oz-Cobo%20Gonz%C3%A1lez">José Luis Muñoz-Cobo González</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergio%20Chiva%20Vicent"> Sergio Chiva Vicent</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Harby%20Mohamed"> Khaled Harby Mohamed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When a vertical liquid jet plunges into a liquid surface, after passing through a surrounding gas phase, it entrains a large amount of gas bubbles into the receiving pool, and it forms a large submerged two-phase region with a considerable interfacial area. At the intersection of the plunging jet and the liquid surface, free-surface instabilities are developed, and gas entrainment may be observed. If the jet impact velocity exceeds an inception velocity that is a function of the plunging flow conditions, the gas entrainment takes place. The general goal of this work is to study the effect of nozzle parameters (length-to-diameter ratio (lN/dN), jet angle (α) with the free water surface) and the jet operating conditions (initial jet diameters dN, initial jet velocity VN, and jet length x1) on the flow characteristics such as: inception velocity of the gas entrainment Ve, bubble penetration depth Hp, gas entrainment rate, Qa, centerline jet velocity Vc, and the axial jet velocity distribution Vx below the free water surface in a plunging liquid jet system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inclined%20plunging%20water%20jets" title="inclined plunging water jets">inclined plunging water jets</a>, <a href="https://publications.waset.org/abstracts/search?q=entrainment" title=" entrainment"> entrainment</a>, <a href="https://publications.waset.org/abstracts/search?q=two%20phase%20flow" title=" two phase flow"> two phase flow</a>, <a href="https://publications.waset.org/abstracts/search?q=nozzle%20length" title=" nozzle length"> nozzle length</a> </p> <a href="https://publications.waset.org/abstracts/15058/study-of-the-influence-of-nozzle-length-and-jet-angles-on-the-air-entrainment-by-plunging-water-jets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">468</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">170</span> Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Varun%20Agarwal">Varun Agarwal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and ameliorate breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages -region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classifications, probabilistic mapping of tumor localizations, and further processing for whole WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2 - an effective CNN classifier that uses residual connections to enhance feature representation, adding convolved outputs in the inception unit to the proceeding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder -primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers- and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. 
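<p class="card-text">A minimal sketch of a single convolutional denoising autoencoder of the kind stacked in the pipeline above, built from convolution, leaky ReLU, and batch normalization layers; the tile size and channel widths are illustrative assumptions rather than the paper's configuration:</p> <pre><code># Minimal sketch: one convolutional denoising autoencoder (CDAE) stage that learns
# to reconstruct a clean image tile from a noise-corrupted copy.
import torch
import torch.nn as nn

class ConvDenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),   # 256 -> 128
            nn.BatchNorm2d(32),
            nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 128 -> 64
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 64 -> 128
            nn.BatchNorm2d(32),
            nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 128 -> 256
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvDenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(8, 3, 256, 256)                         # placeholder image tiles in [0, 1]
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)

recon = model(noisy)
loss = nn.functional.mse_loss(recon, clean)                # denoising objective
loss.backward()
opt.step()
</code></pre>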
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation, the residual connections to the inception units, and the input denoising algorithm together enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title="breast cancer">breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=metastasis%20mapping" title=" metastasis mapping"> metastasis mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=whole%20slide%20images" title=" whole slide images"> whole slide images</a> </p> <a href="https://publications.waset.org/abstracts/133783/breast-cancer-metastasis-detection-and-localization-through-transfer-learning-convolutional-neural-network-classification-based-on-convolutional-denoising-autoencoder-stack" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133783.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">169</span> Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hongmei%20Wang">Hongmei Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziyun%20Xiang"> Ziyun Xiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20liu"> Ying liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Yu"> Li Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongsheng%20Yue"> Dongsheng Yue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was applied to chest computed tomography (CT) scans of COVID-19 and influenza patients to describe and distinguish their characteristics. The predominant features of COVID-19 pneumonia were ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some coexist with solid lesions. Lesion distribution: lesions are concentrated on the dorsal side of the lung periphery, mainly in the lower lobes, and are often close to the pleura. Other features are grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchogram signs, and halo signs. Severe disease involves both lungs in their entirety, showing white-lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in both chest cavities. 
At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT in influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly fused and most prominent near the hila of the two lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: For the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients with either disease are classified and diagnosed using deep learning algorithms. The residual network was proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNNs), as it solves the problem of training very deep CNN models, and many visual tasks obtain excellent results by fine-tuning ResNet. Here, the pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training from scratch. Fastai is based on PyTorch and packages best practices for deep learning, helping to find a good way to handle diagnosis tasks. Based on Fastai's one-cycle training approach, the classification and diagnosis of lung CT for the two infectious diseases is realized, and a higher recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT. 
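<p class="card-text">A minimal sketch of the transfer-learning setup described in the Method section, assuming a fastai v2 environment and CT slices arranged in covid/ and influenza/ folders; the path, image size, and epoch count are placeholders:</p> <pre><code># Minimal sketch: ImageNet-pretrained ResNet fine-tuned with fastai's one-cycle policy
# to separate COVID-19 from influenza chest CT slices.
from fastai.vision.all import (ImageDataLoaders, Resize, accuracy, resnet50,
                               vision_learner)

# Assumed folder layout: ct_data/covid/*.png and ct_data/influenza/*.png
dls = ImageDataLoaders.from_folder(
    "ct_data", valid_pct=0.2, seed=42, item_tfms=Resize(224), bs=16
)

learn = vision_learner(dls, resnet50, metrics=accuracy)  # pretrained ResNet as feature extractor
learn.fine_tune(5)  # fine_tune wraps fit_one_cycle: the head first, then the unfrozen body

preds, targets = learn.get_preds()
print("validation accuracy:", accuracy(preds, targets).item())
</code></pre>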
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title="COVID-19">COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=Fastai" title=" Fastai"> Fastai</a>, <a href="https://publications.waset.org/abstracts/search?q=influenza" title=" influenza"> influenza</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20network" title=" transfer network"> transfer network</a> </p> <a href="https://publications.waset.org/abstracts/125142/deep-learning-in-chest-computed-tomography-to-differentiate-covid-19-from-influenza" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">168</span> A Data-Mining Model for Protection of FACTS-Based Transmission Line</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashok%20Kalagura">Ashok Kalagura</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a data-mining model for fault-zone identification of a flexible AC transmission systems (FACTS)-based transmission line, including a thyristor-controlled series compensator (TCSC) and a unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness of the ensemble of decision trees stacked inside the random forests model, it provides an effective decision for fault-zone identification. Half-cycle post-fault current and voltage samples from the fault inception are used as an input vector, with a target output of ‘1’ for a fault after the TCSC/UPFC and ‘0’ for a fault before the TCSC/UPFC, for fault-zone identification. The algorithm is tested on simulated fault data with wide variations in the operating parameters of the power system network, including a noisy environment, providing a reliability measure of 99% with a faster response time (3/4th of a cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines. 
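<p class="card-text">A minimal sketch of the ensemble-decision-tree step described above, assuming the half-cycle post-fault current and voltage samples have already been assembled into a feature matrix; the sample counts and labels below are synthetic placeholders, not the paper's simulation data:</p> <pre><code># Minimal sketch: random forest for fault-zone identification from post-fault samples.
# Each row is one fault case; the target is 1 for a fault after the TCSC/UPFC
# and 0 for a fault before it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases, n_features = 2000, 40                 # hypothetical feature vector length
X = rng.normal(size=(n_cases, n_features))     # placeholder for simulated fault records
y = rng.integers(0, 2, size=n_cases)           # placeholder fault-zone labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, rf.predict(X_te)))
</code></pre>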
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distance%20relaying" title="distance relaying">distance relaying</a>, <a href="https://publications.waset.org/abstracts/search?q=fault-zone%20identification" title=" fault-zone identification"> fault-zone identification</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forests" title=" random forests"> random forests</a>, <a href="https://publications.waset.org/abstracts/search?q=RFs" title=" RFs"> RFs</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=thyristor-controlled%20series%20compensator" title=" thyristor-controlled series compensator"> thyristor-controlled series compensator</a>, <a href="https://publications.waset.org/abstracts/search?q=TCSC" title=" TCSC"> TCSC</a>, <a href="https://publications.waset.org/abstracts/search?q=unified%20power-%EF%AC%82ow%20controller" title=" unified power-flow controller"> unified power-flow controller</a>, <a href="https://publications.waset.org/abstracts/search?q=UPFC" title=" UPFC "> UPFC </a> </p> <a href="https://publications.waset.org/abstracts/32579/a-data-mining-model-for-protection-of-facts-based-transmission-line" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">167</span> Application of Computational Flow Dynamics (CFD) Analysis for Surge Inception and Propagation for Low Head Hydropower Projects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Mohsin%20Munir">M. Mohsin Munir</a>, <a href="https://publications.waset.org/abstracts/search?q=Taimoor%20Ahmad"> Taimoor Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Javed%20Munir"> Javed Munir</a>, <a href="https://publications.waset.org/abstracts/search?q=Usman%20Rashid"> Usman Rashid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Determination of maximum elevation of a flowing fluid due to sudden rejection of load in a hydropower facility is of great interest to hydraulic engineers to ensure safety of the hydraulic structures. Several mathematical models exist that employ one-dimensional modeling for the determination of surge but none of these perfectly simulate real-time circumstances. The paper envisages investigation of surge inception and propagation for a Low Head Hydropower project using Computational Fluid Dynamics (CFD) analysis on FLOW-3D software package. The fluid dynamic model utilizes its analysis for surge by employing Reynolds’ Averaged Navier-Stokes Equations (RANSE). The CFD model is designed for a case study at Taunsa hydropower Project in Pakistan. Various scenarios have run through the model keeping in view upstream boundary conditions. The prototype results were then compared with the results of physical model testing for the same scenarios. 
The results of the numerical model showed close agreement with the physical model testing and offer insight into phenomena that are not apparent in the physical model; the approach can be adopted in the future for similar low-head projects, limiting the delays and costs incurred in physical model testing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=surge" title="surge">surge</a>, <a href="https://publications.waset.org/abstracts/search?q=FLOW-3D" title=" FLOW-3D"> FLOW-3D</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20model" title=" numerical model"> numerical model</a>, <a href="https://publications.waset.org/abstracts/search?q=Taunsa" title=" Taunsa"> Taunsa</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSE" title=" RANSE"> RANSE</a> </p> <a href="https://publications.waset.org/abstracts/36198/application-of-computational-flow-dynamics-cfd-analysis-for-surge-inception-and-propagation-for-low-head-hydropower-projects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">361</span> </span> </div> </div> </div> </main>
</body>
</html>