Search results for: VGG16

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="VGG16"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 18</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: VGG16</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> A Comprehensive Study of Camouflaged Object Detection Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalak%20Bin%20Khair">Khalak Bin Khair</a>, <a href="https://publications.waset.org/abstracts/search?q=Saqib%20Jahir"> Saqib Jahir</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ibrahim"> Mohammed Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahad%20Bin"> Fahad Bin</a>, <a href="https://publications.waset.org/abstracts/search?q=Debajyoti%20Karmaker"> Debajyoti Karmaker</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection is a computer technology that deals with searching through digital images and videos for occurrences of semantic elements of a particular class. It is associated with image processing and computer vision. On top of object detection, we detect camouflage objects within an image using Deep Learning techniques. Deep learning may be a subset of machine learning that's essentially a three-layer neural network Over 6500 images that possess camouflage properties are gathered from various internet sources and divided into 4 categories to compare the result. Those images are labeled and then trained and tested using vgg16 architecture on the jupyter notebook using the TensorFlow platform. The architecture is further customized using Transfer Learning. Methods for transferring information from one or more of these source tasks to increase learning in a related target task are created through transfer learning. The purpose of this transfer of learning methodologies is to aid in the evolution of machine learning to the point where it is as efficient as human learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=TensorFlow" title=" TensorFlow"> TensorFlow</a>, <a href="https://publications.waset.org/abstracts/search?q=camouflage" title=" camouflage"> camouflage</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architecture" title=" architecture"> architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a> </p> <a href="https://publications.waset.org/abstracts/152633/a-comprehensive-study-of-camouflaged-object-detection-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Recognition of Gene Names from Gene Pathway Figures Using Siamese Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Azam">Muhammad Azam</a>, <a href="https://publications.waset.org/abstracts/search?q=Micheal%20Olaolu%20Arowolo"> Micheal Olaolu Arowolo</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20He"> Fei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Mihail%20Popescu"> Mihail Popescu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Xu"> Dong Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of biological papers is growing quickly, which means that the number of biological pathway figures in those papers is also increasing quickly. Each pathway figure shows extensive biological information, like the names of genes and how the genes are related. However, manually annotating pathway figures takes a lot of time and work. Even though using advanced image understanding models could speed up the process of curation, these models still need to be made more accurate. To improve gene name recognition from pathway figures, we applied a Siamese network to map image segments to a library of pictures containing known genes in a similar way to person recognition from photos in many photo applications. We used a triple loss function and a triplet spatial pyramid pooling network by combining the triplet convolution neural network and the spatial pyramid pooling (TSPP-Net). We compared VGG19 and VGG16 as the Siamese network model. VGG16 achieved better performance with an accuracy of 93%, which is much higher than OCR results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20pathway" title="biological pathway">biological pathway</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20understanding" title=" image understanding"> image understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20name%20recognition" title=" gene name recognition"> gene name recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a> </p> <a href="https://publications.waset.org/abstracts/160723/recognition-of-gene-names-from-gene-pathway-figures-using-siamese-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160723.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">291</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khairani%20Binti%20Supyan">Khairani Binti Supyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatimah%20Khalid"> Fatimah Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Mas%20Rina%20Mustaffa"> Mas Rina Mustaffa</a>, <a href="https://publications.waset.org/abstracts/search?q=Azreen%20Bin%20Azman"> Azreen Bin Azman</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirul%20Azuani%20Romle"> Amirul Azuani Romle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from various factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning the pre-trained DNNs model. This paper explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset consisted of 6 perennial plant species of 13481 images under various environmental conditions that were used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. 
Despite ResNet50's superior performance, VGG16 and InceptionV3 also achieved commendable accuracies of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures and offer insights into strategies for optimizing model performance in perennial plant image classification.
Keywords: perennial plants, image classification, deep neural networks, fine-tuning, transfer learning, VGG16, ResNet50, InceptionV3
Procedia: https://publications.waset.org/abstracts/182850/optimizing-perennial-plants-image-classification-by-fine-tuning-deep-neural-networks | PDF: https://publications.waset.org/abstracts/182850.pdf | Downloads: 64

15. Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract: Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise for accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans covering different types of brain tumours, including meningioma, pituitary, and glioma, as well as a no-tumour class. The study comprehensively evaluates these models' accuracy and effectiveness in classifying brain tumour images, employing data preprocessing, augmentation, and fine-tuning to optimize performance. Among the evaluated models, ResNet50 emerges as the top performer with an accuracy of 98.86%.
Following closely is Xception, with a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. At the other end of the spectrum, VGG16 trails with the lowest accuracy, at 89.02%.
Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
Procedia: https://publications.waset.org/abstracts/178879/brain-tumor-detection-and-classification-using-pre-trained-deep-learning-models | PDF: https://publications.waset.org/abstracts/178879.pdf | Downloads: 74

14. Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract: Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security, and are best known for dealing with image datasets. While training on images, a neural network carries out a number of fundamental algebraic and mathematical operations, including derivatives, convolutions, and matrix inversion and transposition. Such operations require far more processing power than typical computer usage. A Central Processing Unit (CPU), built around serial processing, is not well suited to datasets with large images, whereas a Graphics Processing Unit (GPU) offers parallel processing and therefore higher speed. This paper uses neural network techniques such as VGG16, ResNet50, DenseNet, InceptionV3, Xception, MobileNet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism from face images of autistic and non-autistic children was used to compare performance during testing, with evaluation metrics including accuracy, F1 score, precision, recall, and execution time.
We observed that the GPU ran faster than the CPU in all tests performed, and model accuracy also improved on the GPU compared to the CPU.
Keywords: autism disease, neural network, CPU, GPU, transfer learning
Procedia: https://publications.waset.org/abstracts/160218/autism-disease-detection-using-transfer-learning-techniques-performance-comparison-between-central-processing-unit-vs-graphics-processing-unit-functions-for-neural-networks | PDF: https://publications.waset.org/abstracts/160218.pdf | Downloads: 118

13. Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model
Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract: Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common cancers in Caucasians. The alarming increase in skin cancer cases shows an urgent need for research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis can dramatically increase the accuracy of skin lesion diagnosis and decrease possible human error, and several studies have shown computer algorithms whose diagnostic performance outperformed dermatologists. However, existing methods still need improvement to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted on International Skin Imaging Collaboration (ISIC) image samples, a dataset of 3,297 dermoscopic images with benign and malignant categories. The results show improved performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machines (SVM), ResNet50, EfficientNetB0, EfficientNetB4, and VGG16.
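The paper's exact ensembling scheme is not spelled out in this abstract; one common way to ensemble several fine-tuned CNNs for a benign/malignant decision is soft voting over their predicted probabilities, sketched below with hypothetical model files and test arrays:

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical: previously fine-tuned binary classifiers (single sigmoid output) saved to disk.
member_paths = ["vgg16_lesions.keras", "effnetb0_lesions.keras", "resnet50_lesions.keras"]
members = [tf.keras.models.load_model(p) for p in member_paths]

def ensemble_predict(images):
    """Soft voting: average each member's predicted malignancy probability."""
    probs = np.stack([m.predict(images, verbose=0).ravel() for m in members])
    return probs.mean(axis=0)

# x_test: (N, H, W, 3) image batch; y_test: 0 = benign, 1 = malignant (assumed encoding).
# avg = ensemble_predict(x_test)
# y_pred = (avg >= 0.5).astype(int)
# print(accuracy_score(y_test, y_pred), f1_score(y_test, y_pred))
```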
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20-%20VGG16%20-%20efficientNet%20-%20CNN%20%E2%80%93%20ensemble%20%E2%80%93%0D%0Adermoscopic%20images%20-%20%20melanoma" title="deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma">deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma</a> </p> <a href="https://publications.waset.org/abstracts/162765/melanoma-and-non-melanoma-skin-lesion-classification-using-a-deep-learning-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Budhvin%20T.%20Withana">Budhvin T. Withana</a>, <a href="https://publications.waset.org/abstracts/search?q=Sulochana%20Rupasinghe"> Sulochana Rupasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The educational system faces a significant concern with regards to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The Resnet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=risk%20detection%20system" title=" risk detection system"> risk detection system</a>, <a href="https://publications.waset.org/abstracts/search?q=dyslexia" title=" dyslexia"> dyslexia</a>, <a href="https://publications.waset.org/abstracts/search?q=dysgraphia" title=" dysgraphia"> dysgraphia</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20disabilities" title=" learning disabilities"> learning disabilities</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20science" title=" data science"> data science</a> </p> <a href="https://publications.waset.org/abstracts/167336/neural-network-based-risk-detection-for-dyslexia-and-dysgraphia-in-sinhala-language-speaking-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Budhvin%20T.%20Withana">Budhvin T. Withana</a>, <a href="https://publications.waset.org/abstracts/search?q=Sulochana%20Rupasinghe"> Sulochana Rupasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. The traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making it difficult to cover a wide range of risk detection and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilized various deep learning techniques for detecting risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16 and YOLOv8 were integrated to detect the handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed for the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. 
These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of dyslexia and dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
Procedia: https://publications.waset.org/abstracts/167325/neural-network-based-risk-detection-for-dyslexia-and-dysgraphia-in-sinhala-language-speaking-children | PDF: https://publications.waset.org/abstracts/167325.pdf | Downloads: 114

10. American Sign Language Recognition System
Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba
Abstract: The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system that leverages convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The proposed architecture integrates two networks, VGG16 for precise spatial feature extraction and a vision transformer for contextual understanding of sign language gestures. The system processes live input, extracts critical features through these networks, and combines them to improve gesture recognition accuracy, capturing both detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios.
Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as operating in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
Keywords: sign language, computer vision, vision transformer, VGG16, CNN
Procedia: https://publications.waset.org/abstracts/186514/american-sign-language-recognition-system | PDF: https://publications.waset.org/abstracts/186514.pdf | Downloads: 43

9. FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract: In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, focusing on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures promptly can result in malunion or non-union conditions, so accurately identifying fracture locations and types is necessary to ensure proper treatment and support the bone healing process. Interpreting X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential misdiagnoses. A proper approach is therefore needed to localize and classify fractures accurately and in real time. The research revealed that such an approach requires suitable radiographic image processing techniques and object detection algorithms that can localize and classify all fracture types with high precision and in a timely manner.
To overcome the challenge of misidentified fractures, distinct models for fracture localization and classification were implemented, together with radiographic image enhancement and preprocessing techniques to mitigate the limitations of low-quality images. A classification ensemble model was implemented using ResNet18 and VGG16, and in parallel a fracture segmentation model was implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can localize exact fracture locations and identify the fracture type among 12 different fracture patterns: avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system also generates a confidence score indicating the degree of confidence in the predicted result. The U-Net-based segmentation model achieved an accuracy of 99.94%, demonstrating its precision in identifying fracture locations, while the ResNet18/VGG16 classification ensemble achieved an accuracy of 81.0% in categorizing the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising machine learning application in sports medicine, demonstrating the potential to improve fracture detection processes and contributing to diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
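The enhanced U-Net used for segmentation is not detailed in the abstract; as a generic reference point, a compact U-Net-style encoder-decoder for binary fracture masks could be sketched in Keras as follows (depth, filter counts, and input size are assumptions, not the FracXpert configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def small_unet(input_shape=(256, 256, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = conv_block(p2, 128)
    # Decoder with skip connections
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel fracture probability
    return tf.keras.Model(inputs, outputs)

model = small_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(xray_images, fracture_masks, ...)
```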
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiclass%20classification" title="multiclass classification">multiclass classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet18" title=" ResNet18"> ResNet18</a>, <a href="https://publications.waset.org/abstracts/search?q=U-Net" title=" U-Net"> U-Net</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG16" title=" VGG16"> VGG16</a> </p> <a href="https://publications.waset.org/abstracts/184317/fracxpert-ensemble-machine-learning-approach-for-localization-and-classification-of-bone-fractures-in-cricket-athletes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184317.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nouha%20Khediri">Nouha Khediri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ben%20Ammar"> Mohammed Ben Ammar</a>, <a href="https://publications.waset.org/abstracts/search?q=Monji%20Kherallah"> Monji Kherallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, facial emotion recognition (FER) has become increasingly essential to understand the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNN and VGG16. First, the data is pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolutions layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. Based on the above contents, this paper reviews the works of facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/150291/deep-learning-based-approach-to-facial-emotion-recognition-through-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150291.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bandhan%20Dey">Bandhan Dey</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhsina%20Bintoon%20Yiasha"> Muhsina Bintoon Yiasha</a>, <a href="https://publications.waset.org/abstracts/search?q=Gulam%20Sulaman%20Choudhury"> Gulam Sulaman Choudhury</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Chest disease is one of the most problematic ailments in our regular life. There are many known chest diseases out there. Diagnosing them correctly plays a vital role in the process of treatment. There are many methods available explicitly developed for different chest diseases. But the most common approach for diagnosing these diseases is through X-ray. In this paper, we proposed a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. In the present work, we used the transfer learning method for better accuracy and fast training phase. The performance of three architectures is considered: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures using public digital chest x-ray datasets with six classes (i.e., COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). The experiments are conducted on six-classification, and we found that VGG16 outperforms other proposed models with an accuracy of 95%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20images" title=" X-ray images"> X-ray images</a>, <a href="https://publications.waset.org/abstracts/search?q=Tensorflow" title=" Tensorflow"> Tensorflow</a>, <a href="https://publications.waset.org/abstracts/search?q=Keras" title=" Keras"> Keras</a>, <a href="https://publications.waset.org/abstracts/search?q=chest%20diseases" title=" chest diseases"> chest diseases</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-classification" title=" multi-classification"> multi-classification</a> </p> <a href="https://publications.waset.org/abstracts/158065/multi-classification-deep-learning-model-for-diagnosing-different-chest-diseases" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158065.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Deep Learning based Image Classifiers for Detection of CSSVD in Cacao Plants</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atuhurra%20Jesse">Atuhurra Jesse</a>, <a href="https://publications.waset.org/abstracts/search?q=N%27guessan%20Yves-Roland%20Douha"> N&#039;guessan Yves-Roland Douha</a>, <a href="https://publications.waset.org/abstracts/search?q=Pabitra%20Lenka"> Pabitra Lenka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The detection of diseases within plants has attracted a lot of attention from computer vision enthusiasts. Despite the progress made to detect diseases in many plants, there remains a research gap to train image classifiers to detect the cacao swollen shoot virus disease or CSSVD for short, pertinent to cacao plants. This gap has mainly been due to the unavailability of high quality labeled training data. Moreover, institutions have been hesitant to share their data related to CSSVD. To fill these gaps, image classifiers to detect CSSVD-infected cacao plants are presented in this study. The classifiers are based on VGG16, ResNet50 and Vision Transformer (ViT). The image classifiers are evaluated on a recently released and publicly accessible KaraAgroAI Cocoa dataset. The best performing image classifier, based on ResNet50, achieves 95.39\% precision, 93.75\% recall, 94.34\% F1-score and 94\% accuracy on only 20 epochs. There is a +9.75\% improvement in recall when compared to previous works. These results indicate that the image classifiers learn to identify cacao plants infected with CSSVD. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CSSVD" title="CSSVD">CSSVD</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=KaraAgroAI%20cocoa%20dataset" title=" KaraAgroAI cocoa dataset"> KaraAgroAI cocoa dataset</a> </p> <a href="https://publications.waset.org/abstracts/169653/deep-learning-based-image-classifiers-for-detection-of-cssvd-in-cacao-plants" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Apoorva%20Vinod">Apoorva Vinod</a>, <a href="https://publications.waset.org/abstracts/search?q=Archana%20Mathur"> Archana Mathur</a>, <a href="https://publications.waset.org/abstracts/search?q=Snehanshu%20Saha"> Snehanshu Saha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior always raised curiosity, whether used in combination or singly. The popular AFs –Sigmoid, ReLU, and Tanh–have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance is never studied and analyzed deeply when used in combination. In this manuscript, we experiment with multi-layered NN architecture (both on shallow and deep architectures; Convolutional NN and VGG16) and investigate how well the network responds to using two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) used alternately against a traditional, single (Sigmoid-Sigmoid, Tanh-Tanh, ReLUReLU) combination. Our results show that using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 Non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also accomplished convergence twice as fast. We present a case study to investigate the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we theoretically showed that a composition of two or more AFs satisfies Universal Approximation Theorem (UAT). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=activation%20function" title="activation function">activation function</a>, <a href="https://publications.waset.org/abstracts/search?q=universal%20approximation%20function" title=" universal approximation function"> universal approximation function</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a> </p> <a href="https://publications.waset.org/abstracts/160024/an-empirical-study-on-switching-activation-functions-in-shallow-and-deep-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160024.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Berkan%20Ural">Ali Berkan Ural</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the analysis, evaluation, and pre-diagnosis of early stage breast based diagnostic problems (breast cancer, nodulesorlumps) by Computer Aided Diagnosing (CAD) system from mammogram radiological images. According to the statistics, the time factor is crucial to discover the disease in the patient (especially in women) as possible as early and fast. In the study, a new algorithm is developed using advanced image processing and deep learning method to detect and classify the problem at earlystagewithmoreaccuracy. This system first works with image processing methods (Image acquisition, Noiseremoval, Region Growing Segmentation, Morphological Operations, Breast BorderExtraction, Advanced Segmentation, ObtainingRegion Of Interests (ROIs), etc.) and segments the area of interest of the breast and then analyzes these partly obtained area for cancer detection/lumps in order to diagnosis the disease. After segmentation, with using the Spectrogramimages, 5 different deep learning based methods (specified Convolutional Neural Network (CNN) basedAlexNet, ResNet50, VGG16, DenseNet, Xception) are applied to classify the breast based problems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20aided%20diagnosis" title="computer aided diagnosis">computer aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20growing" title=" region growing"> region growing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/155700/computer-aided-analysis-of-breast-based-diagnostic-problems-from-mammograms-using-image-processing-and-deep-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Quality Analysis of Vegetables Through Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdul%20Khalique%20Baloch">Abdul Khalique Baloch</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Okatan"> Ali Okatan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The quality analysis of food and vegetable from image is hot topic now a day, where researchers make them better then pervious findings through different technique and methods. In this research we have review the literature, and find gape from them, and suggest better proposed approach, design the algorithm, developed a software to measure the quality from images, where accuracy of image show better results, and compare the results with Perouse work done so for. The Application we uses an open-source dataset and python language with tensor flow lite framework. In this research we focus to sort food and vegetable from image, in the images, the application can sorts and make them grading after process the images, it could create less errors them human base sorting errors by manual grading. Digital pictures datasets were created. The collected images arranged by classes. The classification accuracy of the system was about 94%. As fruits and vegetables play main role in day-to-day life, the quality of fruits and vegetables is necessary in evaluating agricultural produce, the customer always buy good quality fruits and vegetables. This document is about quality detection of fruit and vegetables using images. Most of customers suffering due to unhealthy foods and vegetables by suppliers, so there is no proper quality measurement level followed by hotel managements. it have developed software to measure the quality of the fruits and vegetables by using images, it will tell you how is your fruits and vegetables are fresh or rotten. Some algorithms reviewed in this thesis including digital images, ResNet, VGG16, CNN and Transfer Learning grading feature extraction. This application used an open source dataset of images and language used python, and designs a framework of system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=rotten%20fruit%20detection" title=" rotten fruit detection"> rotten fruit detection</a>, <a href="https://publications.waset.org/abstracts/search?q=fruits%20quality%20criteria" title=" fruits quality criteria"> fruits quality criteria</a>, <a href="https://publications.waset.org/abstracts/search?q=vegetables%20quality%20criteria" title=" vegetables quality criteria"> vegetables quality criteria</a> </p> <a href="https://publications.waset.org/abstracts/168045/quality-analysis-of-vegetables-through-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168045.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hritwik%20Ghosh">Hritwik Ghosh</a>, <a href="https://publications.waset.org/abstracts/search?q=Irfan%20Sadiq%20Rahat"> Irfan Sadiq Rahat</a>, <a href="https://publications.waset.org/abstracts/search?q=Sachi%20Nandan%20Mohanty"> Sachi Nandan Mohanty</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20V.%20R.%20Ravindra"> J. V. R. Ravindra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. 
Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20cancer" title=" skin cancer"> skin cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=dermatology" title=" dermatology"> dermatology</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=healthcare%20technology" title=" healthcare technology"> healthcare technology</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer%20detection" title=" cancer detection"> cancer detection</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20imaging" title=" medical imaging"> medical imaging</a> </p> <a href="https://publications.waset.org/abstracts/173583/a-study-on-the-application-of-machine-learning-and-deep-learning-techniques-for-skin-cancer-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Deep Learning for Image Correction in Sparse-View Computed Tomography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shubham%20Gogri">Shubham Gogri</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucia%20Florescu"> Lucia Florescu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem— the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. 
A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier Loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated based on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title="generative adversarial networks">generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20view%20computed%20tomography" title=" sparse view computed tomography"> sparse view computed tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=CT%20image%20correction" title=" CT image correction"> CT image correction</a>, <a href="https://publications.waset.org/abstracts/search?q=Mir-Net" title=" Mir-Net"> Mir-Net</a> </p> <a href="https://publications.waset.org/abstracts/172152/deep-learning-for-image-correction-in-sparse-view-computed-tomography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172152.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div>
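<p>The perceptual-loss component described in the sparse-view CT abstract above can be sketched as follows, assuming grayscale CT slices scaled to [0, 1]; the chosen VGG16 layer, Charbonnier epsilon, and loss weighting are illustrative assumptions, and the adversarial (Wasserstein) term of the full GAN objective is omitted here:</p>
<pre><code># Hedged sketch of a VGG16-based perceptual loss of the kind described above:
# feature maps from an ImageNet-pretrained VGG16 are compared between the
# GAN-corrected image and the ground-truth CT slice. The chosen layer, the
# Charbonnier epsilon, and the perceptual weight are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

vgg = VGG16(weights="imagenet", include_top=False)
feature_extractor = tf.keras.Model(
    inputs=vgg.input, outputs=vgg.get_layer("block3_conv3").output)
feature_extractor.trainable = False

def charbonnier_loss(y_true, y_pred, eps=1e-3):
    # Smooth L1-like pixel loss that tolerates outliers.
    return tf.reduce_mean(tf.sqrt(tf.square(y_true - y_pred) + eps * eps))

def perceptual_loss(y_true, y_pred):
    # Grayscale CT slices are tiled to three channels for VGG16.
    y_true_rgb = preprocess_input(tf.image.grayscale_to_rgb(y_true) * 255.0)
    y_pred_rgb = preprocess_input(tf.image.grayscale_to_rgb(y_pred) * 255.0)
    f_true = feature_extractor(y_true_rgb)
    f_pred = feature_extractor(y_pred_rgb)
    return tf.reduce_mean(tf.square(f_true - f_pred))

def generator_loss(y_true, y_pred, w_percep=0.1):
    # Combined pixel-level (Charbonnier) and feature-level (perceptual) terms
    # with an assumed weighting.
    return charbonnier_loss(y_true, y_pred) + w_percep * perceptual_loss(y_true, y_pred)
</code></pre>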
Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
