CINXE.COM

Search results for: ResNet-50

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: ResNet-50</title> <meta name="description" content="Search results for: ResNet-50"> <meta name="keywords" content="ResNet-50"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="ResNet-50" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="ResNet-50"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 22</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: ResNet-50</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ulrich%20Wake">Ulrich Wake</a>, <a href="https://publications.waset.org/abstracts/search?q=Eniman%20Syamsuddin"> Eniman Syamsuddin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The model for camera model identificaiton is trained using pretrained model ResNet43 and ResNet50. The dataset consists of 500 photos of each phone. Dataset is divided into 1280 photos for training, 320 photos for validation and 400 photos for testing. The model is trained using One Cycle Policy Method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epoch using regularization such as drop out and early stopping. The result is 90% accuracy for validation set and above 85% for Test-Time Augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model’s weights <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=%E2%80%8B%20One%20Cycle%20Policy" title="​ One Cycle Policy">​ One Cycle Policy</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet34" title=" ResNet34"> ResNet34</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=Test-Time%20Agumentation" title=" Test-Time Agumentation"> Test-Time Agumentation</a> </p> <a href="https://publications.waset.org/abstracts/124445/camera-model-identification-for-mi-pad-4-oppo-a37f-samsung-m20-and-oppo-f9" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124445.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">208</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Deep Learning based Image Classifiers for Detection of CSSVD in Cacao Plants</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atuhurra%20Jesse">Atuhurra Jesse</a>, <a href="https://publications.waset.org/abstracts/search?q=N%27guessan%20Yves-Roland%20Douha"> N&#039;guessan Yves-Roland Douha</a>, <a href="https://publications.waset.org/abstracts/search?q=Pabitra%20Lenka"> Pabitra Lenka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection of diseases within plants has attracted a lot of attention from computer vision enthusiasts. Despite the progress made to detect diseases in many plants, there remains a research gap to train image classifiers to detect the cacao swollen shoot virus disease or CSSVD for short, pertinent to cacao plants. This gap has mainly been due to the unavailability of high quality labeled training data. Moreover, institutions have been hesitant to share their data related to CSSVD. To fill these gaps, image classifiers to detect CSSVD-infected cacao plants are presented in this study. The classifiers are based on VGG16, ResNet50 and Vision Transformer (ViT). The image classifiers are evaluated on a recently released and publicly accessible KaraAgroAI Cocoa dataset. The best performing image classifier, based on ResNet50, achieves 95.39\% precision, 93.75\% recall, 94.34\% F1-score and 94\% accuracy on only 20 epochs. There is a +9.75\% improvement in recall when compared to previous works. These results indicate that the image classifiers learn to identify cacao plants infected with CSSVD. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CSSVD" title="CSSVD">CSSVD</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=KaraAgroAI%20cocoa%20dataset" title=" KaraAgroAI cocoa dataset"> KaraAgroAI cocoa dataset</a> </p> <a href="https://publications.waset.org/abstracts/169653/deep-learning-based-image-classifiers-for-detection-of-cssvd-in-cacao-plants" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Integrating Wound Location Data with Deep Learning for Improved Wound Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mouli%20Banga">Mouli Banga</a>, <a href="https://publications.waset.org/abstracts/search?q=Chaya%20Ravindra"> Chaya Ravindra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Wound classification is a crucial step in wound diagnosis. An effective classifier can aid wound specialists in identifying wound types with reduced financial and time investments, facilitating the determination of optimal treatment procedures. This study presents a deep neural network-based classifier that leverages wound images and their corresponding locations to categorize wounds into various classes, such as diabetic, pressure, surgical, and venous ulcers. By incorporating a developed body map, the process of tagging wound locations is significantly enhanced, providing healthcare specialists with a more efficient tool for wound analysis. We conducted a comparative analysis between two prominent convolutional neural network models, ResNet50 and MobileNetV2, utilizing a dataset of 730 images. Our findings reveal that the RestNet50 outperforms MovileNetV2, achieving an accuracy of approximately 90%, compared to MobileNetV2’s 83%. This disparity highlights the superior capability of ResNet50 in the context of this dataset. The results underscore the potential of integrating deep learning with spatial data to improve the precision and efficiency of wound diagnosis, ultimately contributing to better patient outcomes and reducing healthcare costs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wound%20classification" title="wound classification">wound classification</a>, <a href="https://publications.waset.org/abstracts/search?q=MobileNetV2" title=" MobileNetV2"> MobileNetV2</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodel" title=" multimodel"> multimodel</a> </p> <a href="https://publications.waset.org/abstracts/188971/integrating-wound-location-data-with-deep-learning-for-improved-wound-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188971.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malik%20Muhammad%20Arslan">Malik Muhammad Arslan</a>, <a href="https://publications.waset.org/abstracts/search?q=Muneeb%20Ullah"> Muneeb Ullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Dai%20Shihan"> Dai Shihan</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniyal%20Haider"> Daniyal Haider</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaodong%20Yang"> Xiaodong Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of CXR normal and abnormal images, this study introduces a distinctive technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of this Idiosyncratic DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-Measure of 0.963. These results indicate the promising potential for the early detection of diseases in CXR images using this Inimitable approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=Resnet50" title=" Resnet50"> Resnet50</a> </p> <a href="https://publications.waset.org/abstracts/176306/improving-chest-x-ray-disease-detection-with-enhanced-data-augmentation-using-novel-approach-of-diverse-conditional-wasserstein-generative-adversarial-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176306.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> SEMCPRA-Sar-Esembled Model for Climate Prediction in Remote Area</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kamalpreet%20Kaur">Kamalpreet Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Renu%20Dhir"> Renu Dhir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Climate prediction is an essential component of climate research, which helps evaluate possible effects on economies, communities, and ecosystems. Climate prediction involves short-term weather prediction, seasonal prediction, and long-term climate change prediction. Climate prediction can use the information gathered from satellites, ground-based stations, and ocean buoys, among other sources. The paper's four architectures, such as ResNet50, VGG19, Inception-v3, and Xception, have been combined using an ensemble approach for overall performance and robustness. An ensemble of different models makes a prediction, and the majority vote determines the final prediction. The various architectures such as ResNet50, VGG19, Inception-v3, and Xception efficiently classify the dataset RSI-CB256, which contains satellite images into cloudy and non-cloudy. The generated ensembled S-E model (Sar-ensembled model) provides an accuracy of 99.25%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=climate" title="climate">climate</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction" title=" prediction"> prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/178864/semcpra-sar-esembled-model-for-climate-prediction-in-remote-area" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khairani%20Binti%20Supyan">Khairani Binti Supyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatimah%20Khalid"> Fatimah Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Mas%20Rina%20Mustaffa"> Mas Rina Mustaffa</a>, <a href="https://publications.waset.org/abstracts/search?q=Azreen%20Bin%20Azman"> Azreen Bin Azman</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirul%20Azuani%20Romle"> Amirul Azuani Romle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from various factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning the pre-trained DNNs model. This paper explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset consisted of 6 perennial plant species of 13481 images under various environmental conditions that were used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. Despite ResNet50's superior performance, both VGG16 and InceptionV3 achieved commendable accuracy of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=perennial%20plants" title="perennial plants">perennial plants</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fine-tuning" title=" fine-tuning"> fine-tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=InceptionV3" title=" InceptionV3"> InceptionV3</a> </p> <a href="https://publications.waset.org/abstracts/182850/optimizing-perennial-plants-image-classification-by-fine-tuning-deep-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182850.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Utilizing Federated Learning for Accurate Prediction of COVID-19 from CT Scan Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinil%20Patel">Jinil Patel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarthak%20Patel"> Sarthak Patel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarthak%20Thakkar"> Sarthak Thakkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Deepti%20Saraswat"> Deepti Saraswat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, the COVID-19 outbreak has spread across the world, leading the World Health Organization to classify it as a global pandemic. To save the patient’s life, the COVID-19 symptoms have to be identified. But using an AI (Artificial Intelligence) model to identify COVID-19 symptoms within the allotted time was challenging. The RT-PCR test was found to be inadequate in determining the COVID status of a patient. To determine if the patient has COVID-19 or not, a Computed Tomography Scan (CT scan) of patient is a better alternative. It will be challenging to compile and store all the data from various hospitals on the server, though. Federated learning, therefore, aids in resolving this problem. Certain deep learning models help to classify Covid-19. This paper will have detailed work of certain deep learning models like VGG19, ResNet50, MobileNEtv2, and Deep Learning Aggregation (DLA) along with maintaining privacy with encryption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=federated%20learning" title="federated learning">federated learning</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=CT-scan" title=" CT-scan"> CT-scan</a>, <a href="https://publications.waset.org/abstracts/search?q=homomorphic%20encryption" title=" homomorphic encryption"> homomorphic encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG-19" title=" VGG-19"> VGG-19</a>, <a href="https://publications.waset.org/abstracts/search?q=MobileNetv2" title=" MobileNetv2"> MobileNetv2</a>, <a href="https://publications.waset.org/abstracts/search?q=DLA" title=" DLA"> DLA</a> </p> <a href="https://publications.waset.org/abstracts/163979/utilizing-federated-learning-for-accurate-prediction-of-covid-19-from-ct-scan-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163979.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Drone Classification Using Classification Methods Using Conventional Model With Embedded Audio-Visual Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hrishi%20Rakshit">Hrishi Rakshit</a>, <a href="https://publications.waset.org/abstracts/search?q=Pooneh%20Bagheri%20Zadeh"> Pooneh Bagheri Zadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the performance of drone classification methods using conventional DCNN with different hyperparameters, when additional drone audio data is embedded in the dataset for training and further classification. In this paper, first a custom dataset is created using different images of drones from University of South California (USC) datasets and Leeds Beckett university datasets with embedded drone audio signal. The three well-known DCNN architectures namely, Resnet50, Darknet53 and Shufflenet are employed over the created dataset tuning their hyperparameters such as, learning rates, maximum epochs, Mini Batch size with different optimizers. Precision-Recall curves and F1 Scores-Threshold curves are used to evaluate the performance of the named classification algorithms. Experimental results show that Resnet50 has the highest efficiency compared to other DCNN methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drone%20classifications" title="drone classifications">drone classifications</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20convolutional%20neural%20network" title=" deep convolutional neural network"> deep convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperparameters" title=" hyperparameters"> hyperparameters</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20audio%20signal" title=" drone audio signal"> drone audio signal</a> </p> <a href="https://publications.waset.org/abstracts/172929/drone-classification-using-classification-methods-using-conventional-model-with-embedded-audio-visual-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172929.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> Heuristic of Style Transfer for Real-Time Detection or Classification of Weather Conditions from Camera Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Ouattara">Hamed Ouattara</a>, <a href="https://publications.waset.org/abstracts/search?q=Pierre%20Duthon"> Pierre Duthon</a>, <a href="https://publications.waset.org/abstracts/search?q=Fr%C3%A9d%C3%A9ric%20Bernardin"> Frédéric Bernardin</a>, <a href="https://publications.waset.org/abstracts/search?q=Omar%20Ait%20Aider"> Omar Ait Aider</a>, <a href="https://publications.waset.org/abstracts/search?q=Pascal%20Salmane"> Pascal Salmane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, we present three neural network architectures for real-time classification of weather conditions (sunny, rainy, snowy, foggy) from images. Inspired by recent advances in style transfer, two of these architectures -Truncated ResNet50 and Truncated ResNet50 with Gram Matrix and Attention- surpass the state of the art and demonstrate re-markable generalization capability on several public databases, including Kaggle (2000 images), Kaggle 850 images, MWI (1996 images) [1], and Image2Weather [2]. Although developed for weather detection, these architectures are also suitable for other appearance-based classification tasks, such as animal species recognition, texture classification, disease detection in medical images, and industrial defect identification. We illustrate these applications in the section “Applications of Our Models to Other Tasks” with the “SIIM-ISIC Melanoma Classification Challenge 2020” [3]. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=weather%20simulation" title="weather simulation">weather simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=weather%20measurement" title=" weather measurement"> weather measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=weather%20classification" title=" weather classification"> weather classification</a>, <a href="https://publications.waset.org/abstracts/search?q=weather%20detection" title=" weather detection"> weather detection</a>, <a href="https://publications.waset.org/abstracts/search?q=style%20transfer" title=" style transfer"> style transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=Pix2Pix" title=" Pix2Pix"> Pix2Pix</a>, <a href="https://publications.waset.org/abstracts/search?q=CycleGAN" title=" CycleGAN"> CycleGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=CUT" title=" CUT"> CUT</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20style%20transfer" title=" neural style transfer"> neural style transfer</a> </p> <a href="https://publications.waset.org/abstracts/194615/heuristic-of-style-transfer-for-real-time-detection-or-classification-of-weather-conditions-from-camera-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194615.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">0</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aditya%20Karade">Aditya Karade</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharada%20Falane"> Sharada Falane</a>, <a href="https://publications.waset.org/abstracts/search?q=Dhananjay%20Deshmukh"> Dhananjay Deshmukh</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijaykumar%20Mantri"> Vijaykumar Mantri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Brain tumors pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging have shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorizing different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and finetuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20tumour" title="brain tumour">brain tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20image" title=" MRI image"> MRI image</a>, <a href="https://publications.waset.org/abstracts/search?q=detecting%20and%20classifying%20tumour" title=" detecting and classifying tumour"> detecting and classifying tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-trained%20models" title=" pre-trained models"> pre-trained models</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/178879/brain-tumor-detection-and-classification-using-pre-trained-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Virender%20Singh">Virender Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathew%20Rees"> Mathew Rees</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Hampton"> Simon Hampton</a>, <a href="https://publications.waset.org/abstracts/search?q=Sivaram%20Annadurai"> Sivaram Annadurai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better compared to CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. ViT model achieved top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models perform better compared to CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals and the general public alike in identifying plants quicker and with improved accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=plant%20identification" title="plant identification">plant identification</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/162359/plant-identification-using-convolution-neural-network-and-vision-transformer-based-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Budhvin%20T.%20Withana">Budhvin T. Withana</a>, <a href="https://publications.waset.org/abstracts/search?q=Sulochana%20Rupasinghe"> Sulochana Rupasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The educational system faces a significant concern with regards to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The Resnet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=risk%20detection%20system" title=" risk detection system"> risk detection system</a>, <a href="https://publications.waset.org/abstracts/search?q=dyslexia" title=" dyslexia"> dyslexia</a>, <a href="https://publications.waset.org/abstracts/search?q=dysgraphia" title=" dysgraphia"> dysgraphia</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20disabilities" title=" learning disabilities"> learning disabilities</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20science" title=" data science"> data science</a> </p> <a href="https://publications.waset.org/abstracts/167336/neural-network-based-risk-detection-for-dyslexia-and-dysgraphia-in-sinhala-language-speaking-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Budhvin%20T.%20Withana">Budhvin T. Withana</a>, <a href="https://publications.waset.org/abstracts/search?q=Sulochana%20Rupasinghe"> Sulochana Rupasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. The traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making it difficult to cover a wide range of risk detection and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilized various deep learning techniques for detecting risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16 and YOLOv8 were integrated to detect the handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed for the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=risk%20detection%20system" title=" risk detection system"> risk detection system</a>, <a href="https://publications.waset.org/abstracts/search?q=Dyslexia" title=" Dyslexia"> Dyslexia</a>, <a href="https://publications.waset.org/abstracts/search?q=Dysgraphia" title=" Dysgraphia"> Dysgraphia</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20disabilities" title=" learning disabilities"> learning disabilities</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20science" title=" data science"> data science</a> </p> <a href="https://publications.waset.org/abstracts/167325/neural-network-based-risk-detection-for-dyslexia-and-dysgraphia-in-sinhala-language-speaking-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167325.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Preparation of Papers - Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Skyler%20Kim">Skyler Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradient vanishing or exploding. By comparing VGG19 and ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented in the conference. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acute%20lymphoblastic%20leukemia" title="acute lymphoblastic leukemia">acute lymphoblastic leukemia</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20model" title=" hybrid model"> hybrid model</a>, <a href="https://publications.waset.org/abstracts/search?q=leukemia%20diagnostic%20system" title=" leukemia diagnostic system"> leukemia diagnostic system</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/145566/preparation-of-papers-developing-a-leukemia-diagnostic-system-based-on-hybrid-deep-learning-architectures-in-actual-clinical-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image format</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Fallahpoor">Maryam Fallahpoor</a>, <a href="https://publications.waset.org/abstracts/search?q=Biswajeet%20Pradhan"> Biswajeet Pradhan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DensNet169, and DensNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIFTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DensNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19%20detection" title=" COVID-19 detection"> COVID-19 detection</a>, <a href="https://publications.waset.org/abstracts/search?q=NIFTI%20format" title=" NIFTI format"> NIFTI format</a>, <a href="https://publications.waset.org/abstracts/search?q=DICOM%20format" title=" DICOM format"> DICOM format</a> </p> <a href="https://publications.waset.org/abstracts/174800/deep-learning-based-classification-of-3d-ct-scans-with-real-clinical-data-impact-of-image-format" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174800.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Berkan%20Ural">Ali Berkan Ural</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the analysis, evaluation, and pre-diagnosis of early stage breast based diagnostic problems (breast cancer, nodulesorlumps) by Computer Aided Diagnosing (CAD) system from mammogram radiological images. According to the statistics, the time factor is crucial to discover the disease in the patient (especially in women) as possible as early and fast. In the study, a new algorithm is developed using advanced image processing and deep learning method to detect and classify the problem at earlystagewithmoreaccuracy. This system first works with image processing methods (Image acquisition, Noiseremoval, Region Growing Segmentation, Morphological Operations, Breast BorderExtraction, Advanced Segmentation, ObtainingRegion Of Interests (ROIs), etc.) and segments the area of interest of the breast and then analyzes these partly obtained area for cancer detection/lumps in order to diagnosis the disease. After segmentation, with using the Spectrogramimages, 5 different deep learning based methods (specified Convolutional Neural Network (CNN) basedAlexNet, ResNet50, VGG16, DenseNet, Xception) are applied to classify the breast based problems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20aided%20diagnosis" title="computer aided diagnosis">computer aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20growing" title=" region growing"> region growing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/155700/computer-aided-analysis-of-breast-based-diagnostic-problems-from-mammograms-using-image-processing-and-deep-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Using Computer Vision to Detect and Localize Fractures in Wrist X-ray Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20Paul%20Q.%20Tomas">John Paul Q. Tomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Wilson%20L.%20de%20los%20Reyes"> Mark Wilson L. de los Reyes</a>, <a href="https://publications.waset.org/abstracts/search?q=Kirsten%20Joyce%20P.%20Vasquez"> Kirsten Joyce P. Vasquez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most frequent type of fracture is a wrist fracture, which often makes it difficult for medical professionals to find and locate. In this study, fractures in wrist x-ray pictures were located and identified using deep learning and computer vision. The researchers used image filtering, masking, morphological operations, and data augmentation for the image preprocessing and trained the RetinaNet and Faster R-CNN models with ResNet50 backbones and Adam optimizers separately for each image filtering technique and projection. The RetinaNet model with Anisotropic Diffusion Smoothing filter trained with 50 epochs has obtained the greatest accuracy of 99.14%, precision of 100%, sensitivity/recall of 98.41%, specificity of 100%, and an IoU score of 56.44% for the Posteroanterior projection utilizing augmented data. For the Lateral projection using augmented data, the RetinaNet model with an Anisotropic Diffusion filter trained with 50 epochs has produced the highest accuracy of 98.40%, precision of 98.36%, sensitivity/recall of 98.36%, specificity of 98.43%, and an IoU score of 58.69%. When comparing the test results of the different individual projections, models, and image filtering techniques, the Anisotropic Diffusion filter trained with 50 epochs has produced the best classification and regression scores for both projections. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Artificial%20Intelligence" title="Artificial Intelligence">Artificial Intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=Computer%20Vision" title=" Computer Vision"> Computer Vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Wrist%20Fracture" title=" Wrist Fracture"> Wrist Fracture</a>, <a href="https://publications.waset.org/abstracts/search?q=Deep%20Learning" title=" Deep Learning"> Deep Learning</a> </p> <a href="https://publications.waset.org/abstracts/162916/using-computer-vision-to-detect-and-localize-fractures-in-wrist-x-ray-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mst%20Shapna%20Akter">Mst Shapna Akter</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossain%20Shahriar"> Hossain Shahriar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security. Neural networks are mostly known for dealing with image datasets. While training with the images, several fundamental mathematical operations are carried out in the Neural Network. The operation includes a number of algebraic and mathematical functions, including derivative, convolution, and matrix inversion and transposition. Such operations require higher processing power than is typically needed for computer usage. Central Processing Unit (CPU) is not appropriate for a large image size of the dataset as it is built with serial processing. While Graphics Processing Unit (GPU) has parallel processing capabilities and, therefore, has higher speed. This paper uses advanced Neural Network techniques such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism disease using face images of an autistic and non-autistic child was used to compare performance during testing. We used evaluation matrices such as Accuracy, F1 score, Precision, Recall, and Execution time. It has been observed that GPU runs faster than the CPU in all tests performed. Moreover, the performance of the Neural Network models in terms of accuracy increases on GPU compared to CPU. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autism%20disease" title="autism disease">autism disease</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=CPU" title=" CPU"> CPU</a>, <a href="https://publications.waset.org/abstracts/search?q=GPU" title=" GPU"> GPU</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a> </p> <a href="https://publications.waset.org/abstracts/160218/autism-disease-detection-using-transfer-learning-techniques-performance-comparison-between-central-processing-unit-vs-graphics-processing-unit-functions-for-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaira%20L.%20Kee">Shaira L. Kee</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Aaron%20G.%20Sy"> Michael Aaron G. Sy</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20%20Joshua%20%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20AlDahoul"> Nouar AlDahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common type of cancer in Caucasians. The alarming increase in Skin Cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine Learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy rate of detection and decrease possible human errors. Several studies have shown the diagnostic performance of computer algorithms outperformed dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), Residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma" title="deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma">deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma</a> </p> <a href="https://publications.waset.org/abstracts/162765/melanoma-and-non-melanoma-skin-lesion-classification-using-a-deep-learning-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Massimo%20Miccoli">Massimo Miccoli</a>, <a href="https://publications.waset.org/abstracts/search?q=Luca%20Marangoni"> Luca Marangoni</a>, <a href="https://publications.waset.org/abstracts/search?q=Alberto%20Aniello%20Scaringi"> Alberto Aniello Scaringi</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Marceddu"> Alessandro Marceddu</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Amicone"> Alessandro Amicone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples; learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torch vision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adversarial%20attacks" title="adversarial attacks">adversarial attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=malicious%20images%20detector" title=" malicious images detector"> malicious images detector</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20classifier" title=" binary classifier"> binary classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20transformer%20autoencoder" title=" multimodal transformer autoencoder"> multimodal transformer autoencoder</a> </p> <a href="https://publications.waset.org/abstracts/174687/resisting-adversarial-assaults-a-model-agnostic-autoencoder-solution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174687.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hritwik%20Ghosh">Hritwik Ghosh</a>, <a href="https://publications.waset.org/abstracts/search?q=Irfan%20Sadiq%20Rahat"> Irfan Sadiq Rahat</a>, <a href="https://publications.waset.org/abstracts/search?q=Sachi%20Nandan%20Mohanty"> Sachi Nandan Mohanty</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20V.%20R.%20Ravindra"> J. V. R. Ravindra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20cancer" title=" skin cancer"> skin cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=dermatology" title=" dermatology"> dermatology</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=healthcare%20technology" title=" healthcare technology"> healthcare technology</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer%20detection" title=" cancer detection"> cancer detection</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20imaging" title=" medical imaging"> medical imaging</a> </p> <a href="https://publications.waset.org/abstracts/173583/a-study-on-the-application-of-machine-learning-and-deep-learning-techniques-for-skin-cancer-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Optimized Deep Learning-Based Facial Emotion Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Erick%20C.%20Valverde">Erick C. Valverde</a>, <a href="https://publications.waset.org/abstracts/search?q=Wansu%20Lim"> Wansu Lim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial emotion recognition (FER) system has been recently developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facility to diagnose mental health illnesses (e.g., depression and stress) as well as better human social interactions with smart technologies. The FER system involves two steps: 1) face detection task and 2) facial emotion recognition task. It classifies the human expression in various categories such as angry, disgust, fear, happy, sad, surprise, and neutral. This system requires intensive research to address issues with human diversity, various unique human expressions, and variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early stage of FER systems used simple supervised classification task algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems have issues with low accuracy due to its inefficiency to extract significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), are proposed. These methods can find more complex features in the human face by means of the deeper connections within its architectures. However, the inference speed and computational costs of a DL-based FER system is often disregarded in exchange for higher accuracy results. To cope with this drawback, an optimized DL-based FER system is proposed in this study.An extreme version of Inception V3, known as Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to enable lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system.Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques to the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient and real-time FER system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=network%20optimization%20methods" title=" network optimization methods"> network optimization methods</a> </p> <a href="https://publications.waset.org/abstracts/147341/optimized-deep-learning-based-facial-emotion-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10