Search results for: deep learning - VGG16 - EfficientNet - CNN - ensemble - dermoscopic images - melanoma
class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 10527</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10527</span> Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaira%20L.%20Kee">Shaira L. Kee</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Aaron%20G.%20Sy"> Michael Aaron G. Sy</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20%20Joshua%20%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20AlDahoul"> Nouar AlDahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common type of cancer in Caucasians. The alarming increase in Skin Cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine Learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy rate of detection and decrease possible human errors. Several studies have shown the diagnostic performance of computer algorithms outperformed dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. 

[10526] A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images
Authors: Firas Gerges, Frank Y. Shih
Abstract: Malignant melanoma, known simply as melanoma, is a type of skin cancer that appears as a mole on the skin. It is critical to detect this cancer at an early stage because it can spread across the body and may lead to the patient's death. When detected early, melanoma is curable. In this paper, we propose a deep learning model (a convolutional neural network) to automatically classify skin lesion images as malignant or benign. Images underwent pre-processing steps to diminish the effect of the normal skin region on the model. The proposed model showed a significant improvement over previous work, achieving an accuracy of 97%.
Keywords: deep learning, skin cancer, image processing, melanoma
https://publications.waset.org/abstracts/134720/a-convolutional-deep-neural-network-approach-for-skin-cancer-detection-using-skin-lesion-images
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20cancer" title=" skin cancer"> skin cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=melanoma" title=" melanoma"> melanoma</a> </p> <a href="https://publications.waset.org/abstracts/134720/a-convolutional-deep-neural-network-approach-for-skin-cancer-detection-using-skin-lesion-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134720.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10525</span> A Comprehensive Study of Camouflaged Object Detection Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalak%20Bin%20Khair">Khalak Bin Khair</a>, <a href="https://publications.waset.org/abstracts/search?q=Saqib%20Jahir"> Saqib Jahir</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ibrahim"> Mohammed Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahad%20Bin"> Fahad Bin</a>, <a href="https://publications.waset.org/abstracts/search?q=Debajyoti%20Karmaker"> Debajyoti Karmaker</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection is a computer technology that deals with searching through digital images and videos for occurrences of semantic elements of a particular class. It is associated with image processing and computer vision. On top of object detection, we detect camouflage objects within an image using Deep Learning techniques. Deep learning may be a subset of machine learning that's essentially a three-layer neural network Over 6500 images that possess camouflage properties are gathered from various internet sources and divided into 4 categories to compare the result. Those images are labeled and then trained and tested using vgg16 architecture on the jupyter notebook using the TensorFlow platform. The architecture is further customized using Transfer Learning. Methods for transferring information from one or more of these source tasks to increase learning in a related target task are created through transfer learning. The purpose of this transfer of learning methodologies is to aid in the evolution of machine learning to the point where it is as efficient as human learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=TensorFlow" title=" TensorFlow"> TensorFlow</a>, <a href="https://publications.waset.org/abstracts/search?q=camouflage" title=" camouflage"> camouflage</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architecture" title=" architecture"> architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a> </p> <a href="https://publications.waset.org/abstracts/152633/a-comprehensive-study-of-camouflaged-object-detection-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10524</span> Artificial Intelligence in Melanoma Prognosis: A Narrative Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shohreh%20Ghasemi">Shohreh Ghasemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=melanoma" title=" melanoma"> melanoma</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=prognosis%20prediction" title=" prognosis prediction"> prognosis prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=personalized%20medicine" title=" personalized medicine"> personalized medicine</a> </p> <a href="https://publications.waset.org/abstracts/171134/artificial-intelligence-in-melanoma-prognosis-a-narrative-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10523</span> Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aditya%20Karade">Aditya Karade</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharada%20Falane"> Sharada Falane</a>, <a href="https://publications.waset.org/abstracts/search?q=Dhananjay%20Deshmukh"> Dhananjay Deshmukh</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijaykumar%20Mantri"> Vijaykumar Mantri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Brain tumors pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging have shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorizing different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and finetuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20tumour" title="brain tumour">brain tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20image" title=" MRI image"> MRI image</a>, <a href="https://publications.waset.org/abstracts/search?q=detecting%20and%20classifying%20tumour" title=" detecting and classifying tumour"> detecting and classifying tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-trained%20models" title=" pre-trained models"> pre-trained models</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/178879/brain-tumor-detection-and-classification-using-pre-trained-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10522</span> Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arian%20Hosseini">Arian Hosseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmudul%20Hasan"> Mahmudul Hasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20classification" title="deep classification">deep classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20moderation" title=" content moderation"> content moderation</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=explosion%20detection" title=" explosion detection"> explosion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20processing" title=" video processing"> video processing</a> </p> <a href="https://publications.waset.org/abstracts/183644/faster-lighter-more-accurate-a-deep-learning-ensemble-for-content-moderation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">55</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10521</span> Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khairani%20Binti%20Supyan">Khairani Binti Supyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatimah%20Khalid"> Fatimah Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Mas%20Rina%20Mustaffa"> Mas Rina Mustaffa</a>, <a href="https://publications.waset.org/abstracts/search?q=Azreen%20Bin%20Azman"> Azreen Bin Azman</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirul%20Azuani%20Romle"> Amirul Azuani Romle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from various factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning the pre-trained DNNs model. This paper explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset consisted of 6 perennial plant species of 13481 images under various environmental conditions that were used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. 

[10520] Dermoscopy Compliance: Improving Melanoma Detection Pathways Through Quality Improvement
Authors: Max Butler
Abstract: Melanoma accounts for 80% of skin cancer-related deaths globally. The poor prognosis and increasing incidence of melanoma impose a significant burden on global healthcare systems. Early detection, precise diagnosis, and preventative strategies are critical to improving patient outcomes. Dermoscopy is the gold standard for specialist assessment of pigmented skin lesions, as it can differentiate between benign and malignant growths with greater accuracy than visual inspection. In the United Kingdom, guidelines from the National Institute for Health and Care Excellence (NICE) state that dermoscopy should be used in all specialist assessments of pigmented skin lesions. Compliance with this guideline is low, resulting in missed and delayed melanoma diagnoses. To address this problem, a quality improvement project was initiated at Buckinghamshire Healthcare Trust (BHT) within the plastic surgery department. The target group was trainee and consultant plastic surgeons conducting outpatient skin cancer clinics. Analysis of clinic documentation over a one-month period found that only 62% (38/61) of patients referred with pigmented skin lesions were examined using dermoscopy. To increase dermoscopy rates, teaching was delivered to the department highlighting national guidelines and the evidence base for dermoscopic examination. In addition, clinic paperwork was redesigned to include a text box for dermoscopic examination. Re-auditing after the intervention found a significant increase in dermoscopy rates (52/61, p = 0.014). In conclusion, a quality improvement project combining targeted teaching and documentation templates successfully increased dermoscopy rates. This is a promising step toward improving early melanoma detection and patient outcomes.
Keywords: melanoma, dermoscopy, plastic surgery, quality improvement
https://publications.waset.org/abstracts/170872/dermoscopy-compliance-improving-melanoma-detection-pathways-through-quality-improvement
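
The audit numbers can be checked directly. The abstract does not name the test behind p = 0.014, so the sketch below simply runs two standard tests on the 2x2 table (38/61 before vs 52/61 after); the exact p-value will depend on which test the authors used.

```python
from scipy.stats import chi2_contingency, fisher_exact

table = [[38, 61 - 38],   # before intervention: dermoscopy used / not used
         [52, 61 - 52]]   # after intervention

chi2, p_chi2, dof, _ = chi2_contingency(table)  # Yates-corrected by default
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-squared p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```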

[10519] Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases
Authors: Bandhan Dey, Muhsina Bintoon Yiasha, Gulam Sulaman Choudhury
Abstract: Chest disease is one of the most problematic ailments of everyday life, and many chest diseases are known. Diagnosing them correctly plays a vital role in the treatment process. Many methods have been developed explicitly for different chest diseases, but the most common diagnostic approach is the X-ray. In this paper, we propose a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. We used transfer learning for better accuracy and a faster training phase, and considered the performance of three architectures: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures on public digital chest X-ray datasets with six classes (COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). In the six-class experiments, VGG16 outperformed the other proposed models with an accuracy of 95%.
Keywords: deep learning, image classification, X-ray images, TensorFlow, Keras, chest diseases, convolutional neural networks, multi-classification
https://publications.waset.org/abstracts/158065/multi-classification-deep-learning-model-for-diagnosing-different-chest-diseases
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20images" title=" X-ray images"> X-ray images</a>, <a href="https://publications.waset.org/abstracts/search?q=Tensorflow" title=" Tensorflow"> Tensorflow</a>, <a href="https://publications.waset.org/abstracts/search?q=Keras" title=" Keras"> Keras</a>, <a href="https://publications.waset.org/abstracts/search?q=chest%20diseases" title=" chest diseases"> chest diseases</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-classification" title=" multi-classification"> multi-classification</a> </p> <a href="https://publications.waset.org/abstracts/158065/multi-classification-deep-learning-model-for-diagnosing-different-chest-diseases" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158065.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10518</span> Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edgardo%20V.%20Gubatanga%20Jr.">Edgardo V. Gubatanga Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Joshua%20Salvacion"> Mark Joshua Salvacion</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause absence of aerial photographs. These leave areas having terrain information but lacking aerial photographs. Intensity images can be derived from LiDAR data but they are only grayscale images. A deep learning model is developed to create a complex function in a form of a deep neural network relating the pixel values of LiDAR-derived intensity images and true-color images. This complex function can then be used to predict the true-color images of a certain area using intensity images from LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world. They are only intended to look realistic so that they can be used as base maps. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10517</span> A Comparative Study of Deep Learning Methods for COVID-19 Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> COVID 19 is a pandemic which has resulted in thousands of deaths around the world and a huge impact on the global economy. Testing is a huge issue as the test kits have limited availability and are expensive to manufacture. Using deep learning methods on radiology images in the detection of the coronavirus as these images contain information about the spread of the virus in the lungs is extremely economical and time-saving as it can be used in areas with a lack of testing facilities. This paper focuses on binary classification and multi-class classification of COVID 19 and other diseases such as pneumonia, tuberculosis, etc. Different deep learning methods such as VGG-19, COVID-Net, ResNET+ SVM, Deep CNN, DarkCovidnet, etc., have been used, and their accuracy has been compared using the Chest X-Ray dataset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=radiology" title=" radiology"> radiology</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet" title=" ResNet"> ResNet</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG-19" title=" VGG-19"> VGG-19</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127887/a-comparative-study-of-deep-learning-methods-for-covid-19-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127887.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10516</span> Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nouha%20Khediri">Nouha Khediri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ben%20Ammar"> Mohammed Ben Ammar</a>, <a href="https://publications.waset.org/abstracts/search?q=Monji%20Kherallah"> Monji Kherallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, facial emotion recognition (FER) has become increasingly essential to understand the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNN and VGG16. First, the data is pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolutions layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. Based on the above contents, this paper reviews the works of facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/150291/deep-learning-based-approach-to-facial-emotion-recognition-through-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150291.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10515</span> A Comparison of Methods for Neural Network Aggregation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20Pomerat">John Pomerat</a>, <a href="https://publications.waset.org/abstracts/search?q=Aviv%20Segev"> Aviv Segev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, deep learning has had many theoretical breakthroughs. For deep learning to be successful in the industry, however, there need to be practical algorithms capable of handling many real-world hiccups preventing the immediate application of a learning algorithm. Although AI promises to revolutionize the healthcare industry, getting access to patient data in order to train learning algorithms has not been easy. One proposed solution to this is data- sharing. In this paper, we propose an alternative protocol, based on multi-party computation, to train deep learning models while maintaining both the privacy and security of training data. We examine three methods of training neural networks in this way: Transfer learning, average ensemble learning, and series network learning. We compare these methods to the equivalent model obtained through data-sharing across two different experiments. Additionally, we address the security concerns of this protocol. While the motivating example is healthcare, our findings regarding multi-party computation of neural network training are purely theoretical and have use-cases outside the domain of healthcare. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20network%20aggregation" title="neural network aggregation">neural network aggregation</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-party%20computation" title=" multi-party computation"> multi-party computation</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20ensemble%20learning" title=" average ensemble learning"> average ensemble learning</a> </p> <a href="https://publications.waset.org/abstracts/128037/a-comparison-of-methods-for-neural-network-aggregation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10514</span> FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Madushani%20Rodrigo">Madushani Rodrigo</a>, <a href="https://publications.waset.org/abstracts/search?q=Banuka%20Athuraliya"> Banuka Athuraliya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming old ways of caring for health. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. When interpreting X-ray images, it relies on the expertise and experience of medical professionals in the identification process. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research has revealed that the optimal approach needs to address the stated problem and employ appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture. 

[10513] Ensemble of Deep CNN Architecture for Classifying the Source and Quality of Teff Cereal
Authors: Belayneh Matebie, Michael Melese
Abstract: The study focuses on addressing the challenges of classifying and ensuring the quality of Eragrostis teff, a small, round grain that is the smallest of the cereal grains. Traditional classification methods are challenging because of its small size and the similarity of its environmental characteristics. To overcome this, the study employs a machine learning approach to develop a source and quality classification system for teff cereal. Data were collected from various production areas in the Amhara region, considering two quality levels (high and low) across eight classes. A total of 5,920 images were collected, 740 for each class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, were applied to preprocess the data, and a convolutional neural network (CNN) was used to extract relevant features and reduce dimensionality. The dataset was split 80/20 into training and testing sets. Different classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, were employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. An ensemble of FVGG16, FINCV3, and QSCTC using a max-voting approach outperformed the individual algorithms.
Keywords: teff, ensemble learning, max-voting, CNN, SVM, RF
https://publications.waset.org/abstracts/186043/ensemble-of-deep-cnn-architecture-for-classifying-the-source-and-quality-of-teff-cereal
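
Max-voting itself is a short operation over the hard labels of the three chosen models. In this sketch the prediction arrays are invented placeholders, and the eight teff classes are encoded 0 through 7.

```python
import numpy as np

pred_fvgg16 = np.array([0, 3, 5, 1, 7])   # class labels from each model
pred_fincv3 = np.array([0, 3, 4, 1, 7])
pred_qsctc  = np.array([2, 3, 5, 1, 6])

stacked = np.stack([pred_fvgg16, pred_fincv3, pred_qsctc])  # (models, samples)
majority = np.array([np.bincount(col, minlength=8).argmax()  # 8 teff classes
                     for col in stacked.T])
print(majority)                            # -> [0 3 5 1 7]
```

Ties fall to the lowest class index under argmax; with three voters a tie only occurs when all three disagree.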

[10512] Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods
Authors: Ali Berkan Ural
Abstract: This paper presents the analysis, evaluation, and pre-diagnosis of early-stage breast-based diagnostic problems (breast cancer, nodules, or lumps) by a computer-aided diagnosis (CAD) system using mammogram radiological images. According to the statistics, the time factor is crucial for discovering the disease in the patient (especially in women) as early and as fast as possible. In the study, a new algorithm is developed using advanced image processing and deep learning methods to detect and classify the problem at an early stage with more accuracy. The system first applies image processing methods (image acquisition, noise removal, region-growing segmentation, morphological operations, breast border extraction, advanced segmentation, and obtaining regions of interest (ROIs)) to segment the area of interest in the breast, and then analyzes the obtained regions for cancer detection and lumps in order to diagnose the disease. After segmentation, using spectrogram images, five deep learning methods (CNN-based AlexNet, ResNet50, VGG16, DenseNet, and Xception) are applied to classify the breast-based problems.
Keywords: computer aided diagnosis, breast cancer, region growing, segmentation, deep learning
https://publications.waset.org/abstracts/155700/computer-aided-analysis-of-breast-based-diagnostic-problems-from-mammograms-using-image-processing-and-deep-learning-methods
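
Of the preprocessing steps listed, region-growing segmentation is the most algorithmic. A minimal 4-connected implementation, with an arbitrary seed and tolerance, looks like this (illustrative, not the paper's code):

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tolerance=10):
    """Return a boolean mask of pixels connected to `seed` within tolerance."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    base = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(float(image[y, x]) - base) > tolerance:
            continue                              # outside intensity band
        mask[y, x] = True
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

demo = np.array([[10, 12, 50], [11, 13, 52], [55, 54, 53]], dtype=np.uint8)
print(region_grow(demo, seed=(0, 0), tolerance=5))
```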
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20aided%20diagnosis" title="computer aided diagnosis">computer aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20growing" title=" region growing"> region growing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/155700/computer-aided-analysis-of-breast-based-diagnostic-problems-from-mammograms-using-image-processing-and-deep-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">96</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10511</span> Development of Web-Based Iceberg Detection Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Kavya%20Sri">A. Kavya Sri</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Sai%20Vineela"> K. Sai Vineela</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Vanitha"> R. Vanitha</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Rohith"> S. Rohith</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Large pieces of ice that break from the glaciers are known as icebergs. The threat that icebergs pose to navigation, production of offshore oil and gas services, and underwater pipelines makes their detection crucial. In this project, an automated iceberg tracking method using deep learning techniques and satellite images of icebergs is to be developed. With a temporal resolution of 12 days and a spatial resolution of 20 m, Sentinel-1 (SAR) images can be used to track iceberg drift over the Southern Ocean. In contrast to multispectral images, SAR images are used for analysis in meteorological conditions. This project develops a web-based graphical user interface to detect and track icebergs using sentinel-1 images. To track the movement of the icebergs by using temporal images based on their latitude and longitude values and by comparing the center and area of all detected icebergs. Testing the accuracy is done by precision and recall measures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar%20%28SAR%29" title="synthetic aperture radar (SAR)">synthetic aperture radar (SAR)</a>, <a href="https://publications.waset.org/abstracts/search?q=icebergs" title=" icebergs"> icebergs</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20resolution" title=" spatial resolution"> spatial resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20resolution" title=" temporal resolution"> temporal resolution</a> </p> <a href="https://publications.waset.org/abstracts/162740/development-of-web-based-iceberg-detection-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10510</span> An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saba%20%20Ebrahimi">Saba Ebrahimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Ahmadian"> Saeed Ahmadian</a>, <a href="https://publications.waset.org/abstracts/search?q=Hedie%20%20Ashrafi"> Hedie Ashrafi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS). Both short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. There are some limitations in the existing datasets of TS patients because of missing values of attributes and imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks is proposed based on stacking different linear and non-linear layers to deal with imbalance datasets. The categorical and numerical features are split using different layers with ability to shrink the unnecessary features. Then, after extracting the insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class and cause the network to be trained better comparing the current methods. Finally, the performance and advantages of our proposed model over the existing models are examined for predicting patient survival after thoracic surgery using a real-life clinical data for lung cancer patients. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20models" title=" ensemble models"> ensemble models</a>, <a href="https://publications.waset.org/abstracts/search?q=imbalanced%20classification" title=" imbalanced classification"> imbalanced classification</a>, <a href="https://publications.waset.org/abstracts/search?q=lung%20cancer" title=" lung cancer"> lung cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=TS%20patient%20selection" title=" TS patient selection"> TS patient selection</a> </p> <a href="https://publications.waset.org/abstracts/128394/an-ensemble-deep-learning-architecture-for-imbalanced-classification-of-thoracic-surgery-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10509</span> Metastatic Polypoid Nodular Melanoma Management During The COVID-19 Pandemic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Bradu">Stefan Bradu</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Siegel"> Daniel Siegel</a>, <a href="https://publications.waset.org/abstracts/search?q=Jameson%20Loyal"> Jameson Loyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrea%20Leaf"> Andrea Leaf</a>, <a href="https://publications.waset.org/abstracts/search?q=Alana%20Kurtti"> Alana Kurtti</a>, <a href="https://publications.waset.org/abstracts/search?q=Usha%20Alapati"> Usha Alapati</a>, <a href="https://publications.waset.org/abstracts/search?q=Jared%20Jagdeo"> Jared Jagdeo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Compared with all other variants of nodular melanoma, patients with polypoid nodular melanoma have the lowest 5-year survival rate. The pathophysiology and management of polypoid melanoma are scarcely reported in the literature. Although surgical excision is the cornerstone of melanoma management, treatment of polypoid melanoma is complicated by several negative prognostic factors, including early metastasis. This report demonstrates the successful treatment of a rapidly developing red nodular polypoid melanoma with metastasis using surgery and adjuvant nivolumab in a SARS-CoV-2-positive patient who delayed seeking care due to the COVID-19 pandemic. In addition to detailing the successful treatment approach, the immunosuppressive effects of SARS-2-CoV and its possible contribution to the rapid progression of polypoid melanoma are discussed. This case highlights the complex challenges of melanoma diagnosis and management during the COVID-19 pandemic. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=covid-19" title="covid-19">covid-19</a>, <a href="https://publications.waset.org/abstracts/search?q=dermatology" title=" dermatology"> dermatology</a>, <a href="https://publications.waset.org/abstracts/search?q=immunotherapy" title=" immunotherapy"> immunotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=melanoma" title=" melanoma"> melanoma</a>, <a href="https://publications.waset.org/abstracts/search?q=nivolumab" title=" nivolumab"> nivolumab</a> </p> <a href="https://publications.waset.org/abstracts/140542/metastatic-polypoid-nodular-melanoma-management-during-the-covid-19-pandemic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140542.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10508</span> Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sam%20Khozama">Sam Khozama</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20M.%20Mayya"> Ali M. Mayya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer is one of the most common types in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Big cancer data needs a very powerful tool to analyze and extract predictions. Machine learning and deep learning are two of the most efficient tools for predicting cancer based on textual data. In this study, we developed a fusion model of two machine learning and deep learning models. To obtain the final prediction, Long-Short Term Memory (LSTM) and ensemble learning with hyper parameters optimization are used, and score-level fusion is used. Experiments are done on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved the performance by 3.3% compared to the individual models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer%20prediction" title=" cancer prediction"> cancer prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/155602/breast-cancer-prediction-using-score-level-fusion-of-machine-learning-and-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10507</span> Reinforcement Learning for Classification of Low-Resolution Satellite Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadija%20Bouzaachane">Khadija Bouzaachane</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Mahdi%20El%20Guarmah"> El Mahdi El Guarmah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The classification of low-resolution satellite images has been a worthwhile and fertile field that attracts plenty of researchers due to its importance in monitoring geographical areas. It could be used for several purposes such as disaster management, military surveillance, agricultural monitoring. The main objective of this work is to classify efficiently and accurately low-resolution satellite images by using novel technics of deep learning and reinforcement learning. The images include roads, residential areas, industrial areas, rivers, sea lakes, and vegetation. To achieve that goal, we carried out experiments on the sentinel-2 images considering both high accuracy and efficiency classification. Our proposed model achieved a 91% accuracy on the testing dataset besides a good classification for land cover. Focus on the parameter precision; we have obtained 93% for the river, 92% for residential, 97% for residential, 96% for the forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% highway and 100% for Sea Lake. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/141097/reinforcement-learning-for-classification-of-low-resolution-satellite-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">213</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10506</span> Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Megha%20Gupta">Megha Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nupur%20Prakash"> Nupur Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Identification of plant diseases has been performed using machine learning and deep learning models on the datasets containing images of healthy and diseased plant leaves. The current study carries out an evaluation of some of the deep learning models based on convolutional neural network (CNN) architectures for identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of PlantVillage dataset, available on Kaggle platform, containing 87,900 images has been used. The dataset contained images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study presented in this paper are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the high degree of accuracy achieved using these models. The highest test accuracy and F1-score of 99.59% and 0.996, respectively, were achieved by using GoogLeNet with Mini-batch momentum based gradient descent learning algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=comparative%20analysis" title="comparative analysis">comparative analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=plant%20disease%20identification" title=" plant disease identification"> plant disease identification</a> </p> <a href="https://publications.waset.org/abstracts/138543/comparison-of-deep-convolutional-neural-networks-models-for-plant-disease-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138543.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">199</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10505</span> Immunoliposomes Conjugated with CD133 Antibody for Targeting Melanoma Cancer Stem Cells</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chuan%20Yin">Chuan Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cancer stem cells (CSCs) represent a subpopulation of cancer cells that possess the characteristics associated with normal stem cells. CD133 is a phenotype of melanoma CSCs responsible for melanoma metastasis and drug resistance. Although adriamycin (ADR) is commonly used drug in melanoma therapy, but it is ineffective in the treatment of melanoma CSCs. In this study, we constructed CD133 antibody conjugated ADR immunoliposomes (ADR-Lip-CD133) to target CD133+ melanoma CSCs. The results showed that the immunoliposomes possessed a small particle size (~150 nm), high drug encapsulation efficiency (~90%). After 72 hr treatment on the WM266-4 melanoma tumorspheres, the IC50 values of the drug formulated in ADR-Lip-CD133, ADR-Lip (ADR liposomes) and ADR are found to be 24.42, 57.13 and 59.98 ng/ml respectively, suggesting that ADR-Lip-CD133 was more effective than ADR-Lip and ADR. Significantly, ADR-Lip-CD133 could almost completely abolish the tumorigenic ability of WM266-4 tumorspheres in vivo, and showed the best therapeutic effect in WM266-4 melanoma xenograft mice. It is noteworthy that ADR-Lip-CD133 could selectively kill CD133+ melanoma CSCs of WM266-4 cells both in vitro and in vivo. ADR-Lip-CD133 represent a potential approach in targeting and killing CD133+ melanoma CSCs. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cancer%20stem%20cells" title="cancer stem cells">cancer stem cells</a>, <a href="https://publications.waset.org/abstracts/search?q=melanoma" title=" melanoma"> melanoma</a>, <a href="https://publications.waset.org/abstracts/search?q=immunoliposomes" title=" immunoliposomes"> immunoliposomes</a>, <a href="https://publications.waset.org/abstracts/search?q=CD133" title=" CD133"> CD133</a> </p> <a href="https://publications.waset.org/abstracts/32389/immunoliposomes-conjugated-with-cd133-antibody-for-targeting-melanoma-cancer-stem-cells" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10504</span> A Case Study of Deep Learning for Disease Detection in Crops</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Felipe%20A.%20Guth">Felipe A. Guth</a>, <a href="https://publications.waset.org/abstracts/search?q=Shane%20Ward"> Shane Ward</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20McDonnell"> Kevin McDonnell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the precision agriculture area, one of the main tasks is the automated detection of diseases in crops. Machine Learning algorithms have been studied in recent decades for such tasks in view of their potential for improving economic outcomes that automated disease detection may attain over crop fields. The latest generation of deep learning convolution neural networks has presented significant results in the area of image classification. In this way, this work has tested the implementation of an architecture of deep learning convolution neural network for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the requirements of the algorithm implemented with a deep learning framework. Two test scenarios were deployed. The first scenario implemented a neural network under images extracted from a controlled environment while the second one took images both from the field and the controlled environment. The results evaluated the generalisation capacity of the neural networks in relation to the two types of images presented. Results yielded a general classification accuracy of 59% in scenario 1 and 96% in scenario 2. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=disease%20detection" title=" disease detection"> disease detection</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a> </p> <a href="https://publications.waset.org/abstracts/95339/a-case-study-of-deep-learning-for-disease-detection-in-crops" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95339.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10503</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/abstracts/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method accounts for a convolutional layer capable of automatically learning correlation between neighbouring pixels. In the current form, Convolutional Neural Network (CNN) will learn features based on an image's content instead of the structural features of the image. The layer is particularly designed to subdue an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The paper was assessed on latest natural and computer generated images, and it was concluded that it performs better than the current state of the art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title="image forensics">image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/95266/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">337</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10502</span> Detecting Manipulated Media Using Deep Capsule Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joseph%20Uzuazomaro%20Oju">Joseph Uzuazomaro Oju</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The ease at which manipulated media can be created, and the increasing difficulty in identifying fake media makes it a great threat. Most of the applications used for the creation of these high-quality fake videos and images are built with deep learning. Hence, the use of deep learning in creating a detection mechanism cannot be overemphasized. Any successful fake media that is being detected before it reached the populace will save people from the self-doubt of either a content is genuine or fake and will ensure the credibility of videos and images. The methodology introduced in this paper approaches the manipulated media detection challenge using a combo of VGG-19 and a deep capsule network. In the case of videos, they are converted into frames, which, in turn, are resized and cropped to the face region. These preprocessed images/videos are fed to the VGG-19 network to extract the latent features. The extracted latent features are inputted into a deep capsule network enhanced with a 3D -convolution dynamic routing agreement. The 3D –convolution dynamic routing agreement algorithm helps to reduce the linkages between capsules networks. Thereby limiting the poor learning shortcoming of multiple capsule network layers. The resultant output from the deep capsule network will indicate a media to be either genuine or fake. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20capsule%20network" title="deep capsule network">deep capsule network</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20routing" title=" dynamic routing"> dynamic routing</a>, <a href="https://publications.waset.org/abstracts/search?q=fake%20media%20detection" title=" fake media detection"> fake media detection</a>, <a href="https://publications.waset.org/abstracts/search?q=manipulated%20media" title=" manipulated media"> manipulated media</a> </p> <a href="https://publications.waset.org/abstracts/123371/detecting-manipulated-media-using-deep-capsule-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123371.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10501</span> COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Uma%20Maheshwari%20V.">Uma Maheshwari V.</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajanikanth%20Aluvalu"> Rajanikanth Aluvalu</a>, <a href="https://publications.waset.org/abstracts/search?q=Kumar%20Gautam"> Kumar Gautam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The COVID-19 disease is a highly contagious viral infection with major worldwide health implications. The global economy suffers as a result of COVID. The spread of this pandemic disease can be slowed if positive patients are found early. COVID-19 disease prediction is beneficial for identifying patients' health problems that are at risk for COVID. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in solving the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training photographs and 88% on X-ray test images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20CNN" title="deep CNN">deep CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID%E2%80%9319%20analysis" title=" COVID–19 analysis"> COVID–19 analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20map" title=" feature map"> feature map</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a> </p> <a href="https://publications.waset.org/abstracts/162054/covid-19-analysis-with-deep-learning-model-using-chest-x-rays-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162054.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10500</span> Machine Learning Predictive Models for Hydroponic Systems: A Case Study Nutrient Film Technique and Deep Flow Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kritiyaporn%20Kunsook">Kritiyaporn Kunsook</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning algorithms (MLAs) such us artificial neural networks (ANNs), decision tree, support vector machines (SVMs), Naïve Bayes, and ensemble classifier by voting are powerful data driven methods that are relatively less widely used in the mapping of technique of system, and thus have not been comparatively evaluated together thoroughly in this field. The performances of a series of MLAs, ANNs, decision tree, SVMs, Naïve Bayes, and ensemble classifier by voting in technique of hydroponic systems prospectively modeling are compared based on the accuracy of each model. Classification of hydroponic systems only covers the test samples from vegetables grown with Nutrient film technique (NFT) and Deep flow technique (DFT). The feature, which are the characteristics of vegetables compose harvesting height width, temperature, require light and color. The results indicate that the classification performance of the ANNs is 98%, decision tree is 98%, SVMs is 97.33%, Naïve Bayes is 96.67%, and ensemble classifier by voting is 98.96% algorithm respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title="artificial neural networks">artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20tree" title=" decision tree"> decision tree</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=na%C3%AFve%20Bayes" title=" naïve Bayes"> naïve Bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20classifier%20by%20voting" title=" ensemble classifier by voting"> ensemble classifier by voting</a> </p> <a href="https://publications.waset.org/abstracts/91070/machine-learning-predictive-models-for-hydroponic-systems-a-case-study-nutrient-film-technique-and-deep-flow-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91070.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">372</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10499</span> A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Revoti%20Prasad%20Bora">Revoti Prasad Bora</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikita%20Katyal"> Nikita Katyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabh%20Yadav"> Saurabh Yadav</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pre-processing plays an important role in various image processing applications. Most of the time due to the similar nature of images, a particular pre-processing or a set of pre-processing steps are sufficient to produce the desired results. However, in the education domain, there is a wide variety of images in various aspects like images with line-based diagrams, chemical formulas, mathematical equations, etc. Hence a single pre-processing or a set of pre-processing steps may not yield good results. Therefore, a Deep Learning based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier to detect hidden patterns in the images and predicts the relevant pre-processing technique needed for the image. This approach experimented for an image similarity matching problem but it can be adapted to other use cases too. Experimental results showed significant improvement in average similarity ranking with the proposed method as opposed to static pre-processing techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-processing" title=" pre-processing"> pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20data%20mining" title=" educational data mining"> educational data mining</a> </p> <a href="https://publications.waset.org/abstracts/148397/a-deep-learning-based-approach-for-dynamically-selecting-pre-processing-technique-for-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148397.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">164</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10498</span> Using Deep Learning in Lyme Disease Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teja%20Koduru">Teja Koduru</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks including Tensorflow and Keras to create deep convolutional neural networks (DCNN) to detect images of acute Lyme Disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lyme" title="Lyme">Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=untreated%20Lyme" title=" untreated Lyme"> untreated Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=erythema%20migrans%20rash" title=" erythema migrans rash"> erythema migrans rash</a>, <a href="https://publications.waset.org/abstracts/search?q=EM%20rash" title=" EM rash"> EM rash</a> </p> <a href="https://publications.waset.org/abstracts/135383/using-deep-learning-in-lyme-disease-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135383.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=350">350</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=351">351</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 
2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>