Search results for: data augmentation

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="data augmentation"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 25214</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: data augmentation</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25214</span> Robust Barcode Detection with Synthetic-to-Real Data Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoyan%20Dai">Xiaoyan Dai</a>, <a href="https://publications.waset.org/abstracts/search?q=Hsieh%20Yisan"> Hsieh Yisan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment barcodes themselves; we then augment images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations with our original data show that this approach achieves state-of-the-art performance in various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=barcode%20detection" title="barcode detection">barcode detection</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image-based%20processing" title=" image-based processing"> image-based processing</a> </p> <a href="https://publications.waset.org/abstracts/153243/robust-barcode-detection-with-synthetic-to-real-data-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25213</span> Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Masood%20Varshosaz">Masood Varshosaz</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamyar%20Hasanpour"> Kamyar Hasanpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, we have seen growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. There are many techniques of such that can help generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is potentially assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of the deep learning models. In this paper, we evaluated the most currently available 2D data augmentation techniques on a standard convolutional network which was trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1-3% better compared to a base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. Thus, we suggest a new method that employs simulated 3D human models to generate new data for training the network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20recognition" title="human recognition">human recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=disaster%20mitigation" title=" disaster mitigation"> disaster mitigation</a> </p> <a href="https://publications.waset.org/abstracts/168645/assessing-performance-of-data-augmentation-techniques-for-a-convolutional-network-trained-for-recognizing-humans-in-drone-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25212</span> A Mutually Exclusive Task Generation Method Based on Data Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haojie%20Wang">Haojie Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xun%20Li"> Xun Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Yin"> Rui Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to solve the memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by corresponding one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data will produce a large number of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method, that only extracts part of the data to generate the mutex task. The experiments show that the method of generating mutually exclusive tasks can effectively solve the memorization overfitting in the meta-learning MAML algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title="data augmentation">data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=mutex%20task%20generation" title=" mutex task generation"> mutex task generation</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-learning" title=" meta-learning"> meta-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification." 
title=" text classification."> text classification.</a> </p> <a href="https://publications.waset.org/abstracts/173913/a-mutually-exclusive-task-generation-method-based-on-data-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25211</span> A Mutually Exclusive Task Generation Method Based on Data Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haojie%20Wang">Haojie Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xun%20Li"> Xun Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Yin"> Rui Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to solve the memorization overfitting in the model-agnostic meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by corresponding one feature of the data to multiple labels so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data will produce a large number of invalid data and, in the worst case, lead to an exponential growth of computation, this paper also proposes a key data extraction method that only extract part of the data to generate the mutex task. The experiments show that the method of generating mutually exclusive tasks can effectively solve the memorization overfitting in the meta-learning MAML algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mutex%20task%20generation" title="mutex task generation">mutex task generation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-learning" title=" meta-learning"> meta-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification." 
title=" text classification."> text classification.</a> </p> <a href="https://publications.waset.org/abstracts/168341/a-mutually-exclusive-task-generation-method-based-on-data-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25210</span> Biostimulation and Muscular Ergogenic Effect of Ozone Therapy on Buttock Augmentation: A Case Report and Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ferreira%20R.">Ferreira R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Rocha%20K."> Rocha K.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ozone therapy is indicated for improving skin aesthetics, bio-stimulating and ergogenic effect. This paper aims to carry out a case report that demonstrates the positive results of ozone therapy in buttock augmentation. The application showed positive results for skin bio stimulating, neocollagenesis, adipogenesis, and ergogenic muscle effect in the reported case, demonstrating to be a viable clinical technique. Buttock augmentation with ozone therapy is a promising aesthetic therapeutic modality with fast and safe results as an aesthetic therapeutic option for buttock augmentation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bio-stimulating%20effect" title="bio-stimulating effect">bio-stimulating effect</a>, <a href="https://publications.waset.org/abstracts/search?q=ozone%20therapy" title=" ozone therapy"> ozone therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=muscular%20ergogenic" title=" muscular ergogenic"> muscular ergogenic</a>, <a href="https://publications.waset.org/abstracts/search?q=buttock%20augmentation" title=" buttock augmentation"> buttock augmentation</a> </p> <a href="https://publications.waset.org/abstracts/157131/biostimulation-and-muscular-ergogenic-effect-of-ozone-therapy-on-buttock-augmentation-a-case-report-and-literature-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157131.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25209</span> Mosaic Augmentation: Insights and Limitations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olivia%20A.%20Kjorlien">Olivia A. Kjorlien</a>, <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Asghari"> Maryam Asghari</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Alizadeh-Shabdiz"> Farshid Alizadeh-Shabdiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this paper is to investigate the impact of mosaic augmentation on the performance of object detection solutions. 
Keywords: accuracy, false positives, mosaic augmentation, object detection, YOLOv4, YOLOv4-Tiny
Procedia: https://publications.waset.org/abstracts/162634/mosaic-augmentation-insights-and-limitations | PDF: https://publications.waset.org/abstracts/162634.pdf | Downloads: 127

25208. Medical Image Augmentation Using Spatial Transformations for Convolutional Neural Network
Authors: Trupti Chavan, Ramachandra Guda, Kameshwar Rao
Abstract: The lack of data is a painful problem in medical image analysis with convolutional neural networks (CNNs). This work uses various spatial transformation techniques to address the medical image augmentation issue for knee detection and localization using an enhanced single shot detector (SSD) network. Spatial transforms such as negative, histogram equalization, power law, sharpening, averaging, and Gaussian blurring help generate more samples, serve as pre-processing methods, and highlight the features of interest. The experimentation is done on the OpenKnee dataset, a collection of knee images from openly available online sources. The enhanced SSD is used to detect and localize the knee joint in a given X-ray image; it is a modified version of the well-known SSD network that reduces the number of prediction boxes at the output side, and it consists of a classification network (VGGNet) and an auxiliary detection network. The performance is measured in mean average precision (mAP), and 99.96% mAP is achieved using the proposed enhanced SSD with spatial transformations. The localization boundary with spatial augmentation is also comparatively more refined and closer to the ground truth, giving better detection and localization of the knee joint.
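The transforms listed in this abstract are straightforward to reproduce; a compact OpenCV/numpy sketch for 8-bit grayscale radiographs (parameter values are illustrative, not the paper's):

```python
# Sketch of the transforms named in the abstract, for 8-bit grayscale arrays.
# Parameters are illustrative, not taken from the paper.
import numpy as np
import cv2

def negative(img):              # invert intensities
    return 255 - img

def hist_equalize(img):         # spread out the intensity histogram
    return cv2.equalizeHist(img)

def power_law(img, gamma=0.7):  # gamma correction
    norm = img.astype(np.float32) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

def sharpen(img):               # unsharp-style 3x3 kernel
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(img, -1, kernel)

def average_blur(img, k=5):
    return cv2.blur(img, (k, k))

def gaussian_blur(img, k=5):
    return cv2.GaussianBlur(img, (k, k), 0)
```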
Keywords: data augmentation, enhanced SSD, knee detection and localization, medical image analysis, OpenKnee, spatial transformations
Procedia: https://publications.waset.org/abstracts/122628/medical-image-augmentation-using-spatial-transformations-for-convolutional-neural-network | PDF: https://publications.waset.org/abstracts/122628.pdf | Downloads: 154

25207. Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images
Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou
Abstract: This study proposes a generative adversarial network (GAN) framework that performs synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. The GAN framework then uses adversarial learning to train a feature generator for the minority class through a minimax game with a discriminator. The feature generator produces features for the minority class from arbitrary latent distributions, balancing the data between the majority and minority classes. Additionally, a data cleaning technique, the Tomek link, is employed to remove the conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for training. The experiments evaluate the proposed method on two medical image analysis tasks: mass classification on mammograms and cancer metastasis classification on histopathological images. The results suggest that the proposed method obtains performance superior or comparable to the state-of-the-art counterparts, improving accuracy by more than 1.5 percentage points over all of them.
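A schematic of feature-space augmentation as described: a generator maps latent noise to minority-class feature vectors while a discriminator tries to distinguish them from real extracted features. Sizes and architectures below are placeholders, not the paper's:

```python
# Schematic feature-space GAN: the generator outputs minority-class *feature
# vectors* (not images); dimensions and architectures are placeholders.
import torch
import torch.nn as nn

LATENT, FEAT = 64, 256

G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_feats: torch.Tensor):
    """real_feats: (batch, FEAT) features extracted from minority-class images."""
    z = torch.randn(real_feats.size(0), LATENT)
    fake = G(z)
    # discriminator: push real features toward 1, generated ones toward 0
    d_loss = bce(D(real_feats), torch.ones(len(real_feats), 1)) + \
             bce(D(fake.detach()), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```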
Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning
Procedia: https://publications.waset.org/abstracts/114272/deep-feature-augmentation-with-generative-adversarial-networks-for-class-imbalance-learning-in-medical-images | PDF: https://publications.waset.org/abstracts/114272.pdf | Downloads: 127

25206. Gluteal Augmentation: A Historical Perspective on Society's Fascination with Buttock Size
Authors: Shane R. Jackson
Abstract: Gluteal augmentation with fat grafting, commonly referred to as the Brazilian Butt Lift, is the fastest-growing cosmetic surgical procedure, despite the risks and controversy that surround it. While many commentators attribute this rise in popularity to current societal trends towards public sharing of private life, the fascination with buttock size is in fact a much older human trait. Searching beyond the medical literature into historical sources, from ancient civilisations, through the Renaissance and Victorian eras, to the 'Instagram generation' of the present day, this paper examines the differences, and similarities, in society's ideal buttock shape and size. The ways in which these various cultures have altered their appearance to achieve this ideal are also examined, in light of the broader historical context. A deeper understanding of the historical, cultural and psychosocial factors that influence a patient's desire for buttock augmentation allows the clinician to formulate a well-rounded surgical plan.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmentation" title="augmentation">augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=Brazilian%20butt%20lift" title=" Brazilian butt lift"> Brazilian butt lift</a>, <a href="https://publications.waset.org/abstracts/search?q=buttock" title=" buttock"> buttock</a>, <a href="https://publications.waset.org/abstracts/search?q=fat%20graft" title=" fat graft"> fat graft</a>, <a href="https://publications.waset.org/abstracts/search?q=gluteal" title=" gluteal"> gluteal</a> </p> <a href="https://publications.waset.org/abstracts/124658/gluteal-augmentation-a-historical-perspective-on-societys-fascination-with-buttock-size" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124658.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">197</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25205</span> Review of Speech Recognition Research on Low-Resource Languages</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=XuKe%20Cao">XuKe Cao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reviews the current state of research on low-resource languages in the field of speech recognition, focusing on the challenges faced by low-resource language speech recognition, including the scarcity of data resources, the lack of linguistic resources, and the diversity of dialects and accents. The article reviews recent progress in low-resource language speech recognition, including techniques such as data augmentation, end to-end models, transfer learning, and multi-task learning. Based on the challenges currently faced, the paper also provides an outlook on future research directions. Through these studies, it is expected that the performance of speech recognition for low resource languages can be improved, promoting the widespread application and adoption of related technologies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low-resource%20languages" title="low-resource languages">low-resource languages</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation%20techniques" title=" data augmentation techniques"> data augmentation techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=NLP" title=" NLP"> NLP</a> </p> <a href="https://publications.waset.org/abstracts/193863/review-of-speech-recognition-research-on-low-resource-languages" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25204</span> Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xulu%20Yao">Xulu Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Moi%20Hoon%20Yap"> Moi Hoon Yap</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanlong%20Zhang"> Yanlong Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a branch of artificial neural network, deep learning is widely used in the field of image recognition, but the lack of its dataset leads to imperfect model learning. By analysing the data scale requirements of deep learning and aiming at the application in GUI generation, it is found that the collection of GUI dataset is a time-consuming and labor-consuming project, which is difficult to meet the needs of current deep learning network. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale datasets to produce a large number of reliable data sets. By combining the cyclic neural network with the generated countermeasure network, the cyclic neural network can learn the sequence relationship and characteristics of data, make the generated countermeasure network generate reasonable data, and then expand the Rico dataset. Relying on the network structure, the characteristics of collected data can be well analysed, and a large number of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GUI" title="GUI">GUI</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/143650/data-augmentation-for-automatic-graphical-user-interface-generation-based-on-generative-adversarial-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25203</span> Peripheral Facial Nerve Palsy after Lip Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sana%20Ilyas">Sana Ilyas</a>, <a href="https://publications.waset.org/abstracts/search?q=Kishalaya%20Mukherjee"> Kishalaya Mukherjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Shetty"> Suresh Shetty</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lip Augmentation has become more common in recent years. Patients do not expect to experience facial palsy after having lip augmentation. This poster will present the findings of such a presentation and will discuss the possible pathophysiology and management. (This poster has been published as a paper in the dental update, June 2022) Aim: The aim of the study was to explore the link between facial nerve palsy and lip fillers, to explore the literature surrounding facial nerve palsy, and to discuss the case of a patient who presented with facial nerve palsy with seemingly unknown cause. Methodology: There was a thorough assessment of the current literature surrounding the topic. This included published papers in journals through PubMed database searches and printed books on the topic. A case presentation was discussed in detail of a patient presenting with peripheral facial nerve palsy and associating it with lip augmentation that she had a day prior. Results and Conclusion: Even though the pathophysiology may not be clear for this presentation, it is important to highlight uncommon presentations or complications that may occur after treatment. This can help with understanding and managing similar cases, should they arise.It is also important to differentiate cause and association in order to make an accurate diagnosis. This may be difficult if there is little scientific literature. Therefore, further research can help to improve the understanding of the pathophysiology of similar presentations. This poster has been published as a paper in dental update, June 2022, and therefore shares a similar conclusiom. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20palsy" title="facial palsy">facial palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=lip%20augmentation" title=" lip augmentation"> lip augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=causation%20and%20correlation" title=" causation and correlation"> causation and correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=dental%20cosmetics" title=" dental cosmetics"> dental cosmetics</a> </p> <a href="https://publications.waset.org/abstracts/158439/peripheral-facial-nerve-palsy-after-lip-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25202</span> COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shahad%20Nagoor">Shahad Nagoor</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucy%20Hederman"> Lucy Hederman</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20Koidl"> Kevin Koidl</a>, <a href="https://publications.waset.org/abstracts/search?q=Annalina%20Caputo"> Annalina Caputo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, and other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) techniques and machine learning techniques applied to clinician notes can assist in doctors’ decision-making in Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of notes that were not previously seen by clinicalBERT, and Bio_Discharge_Summary_BERT. The model, which was based on clinicalBERT achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96 ). Second, we perform data augmentation using clinical contextual word embedding that is based on a pre-trained clinical model to balance the samples in each class in the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92 ). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BERT%20fine-tuning" title="BERT fine-tuning">BERT fine-tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20sentiment" title=" clinical sentiment"> clinical sentiment</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/156058/covid-icu-bert-a-fine-tuned-language-model-for-covid-19-intensive-care-unit-clinical-notes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">206</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25201</span> The Implication of Augmentation Cystoplasty with Mitrofanoff Channel on Reproduction Age Group and Outcome of Pregnancy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amal%20A.%20Qedrah">Amal A. Qedrah</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofia%20A.%20Malik"> Sofia A. Malik</a>, <a href="https://publications.waset.org/abstracts/search?q=Madiha%20Akbar"> Madiha Akbar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this article is to share a rare clinical case of pregnancy and surgical delivery in a patient who has undergone augmentation cystoplasty with mitrofanoff channel in the past. Methods: This case report is about a woman who conceived naturally at the age of 27, previously underwent augmentation cystoplasty at the age of 10 years with mitrofanoff procedure using self-clean intermittent catheterization. Furthermore, this pregnancy was complicated by the presence of preeclampsia diagnosed at term and PROM. Following the failure of induction for intrapartum preeclampsia, the patient delivered a healthy baby via low transverse cesarean section at 38 weeks done at Latifa Hospital, Dubai. Conclusion: The procedure is done at a pediatric or young age, after which most patients reach reproductive age. There is no contraindication to pregnancy vaginally or surgically; however, this case was complicated by preeclampsia, due to which this patient was taken for a cesarean section. It is advisable to consult a urologist frequently along with taking regular bacteriological urine samples and blood samples with renal ultrasonography for the evaluation of the kidney. Antibacterial treatment or prophylaxis should be used during pregnancy if necessary and intermittent self-catherization is mostly performed routinely. It is also important to have a urologist on standby during the surgery in order to avoid and/or fix any complications that might come forth. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmentation%20cystoplasty" title="augmentation cystoplasty">augmentation cystoplasty</a>, <a href="https://publications.waset.org/abstracts/search?q=cesarean%20section" title=" cesarean section"> cesarean section</a>, <a href="https://publications.waset.org/abstracts/search?q=delivery" title=" delivery"> delivery</a>, <a href="https://publications.waset.org/abstracts/search?q=mitrofanoff%20channel" title=" mitrofanoff channel"> mitrofanoff channel</a> </p> <a href="https://publications.waset.org/abstracts/130983/the-implication-of-augmentation-cystoplasty-with-mitrofanoff-channel-on-reproduction-age-group-and-outcome-of-pregnancy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130983.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25200</span> Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simiao%20Ren">Simiao Ren</a>, <a href="https://publications.waset.org/abstracts/search?q=En%20Wei"> En Wei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to domain shift caused by the visual gap (potentially due to lighting, indoor scene change, pool floor color etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost for labeling each new swimming pool is too expensive for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve the model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only require background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title=" YOLOv8"> YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=swimming%20pool" title=" swimming pool"> swimming pool</a>, <a href="https://publications.waset.org/abstracts/search?q=drowning" title=" drowning"> drowning</a>, <a href="https://publications.waset.org/abstracts/search?q=domain%20adaptation" title=" domain adaptation"> domain adaptation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=GP-GAN" title=" GP-GAN"> GP-GAN</a> </p> <a href="https://publications.waset.org/abstracts/163443/domain-adaptation-save-lives-drowning-detection-in-swimming-pool-scene-based-on-yolov8-improved-by-gaussian-poisson-generative-adversarial-network-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25199</span> Self-Inflating Soft Tissue Expander Outcome for Alveolar Ridge Augmentation a Randomized Controlled Clinical and Histological Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alaa%20T.%20Ali">Alaa T. Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevine%20H.%20Kheir%20El%20Din"> Nevine H. Kheir El Din</a>, <a href="https://publications.waset.org/abstracts/search?q=Ehab%20S.%20Abdelhamid"> Ehab S. Abdelhamid</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20E.%20Amr"> Ahmed E. Amr</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: Severe alveolar bone resorption is usually associated with a deficient amount of soft tissues. soft tissue expansion is introduced to provide an adequate amount of soft tissue over the grafted area. This study aimed to assess the efficacy of sub-periosteal self-inflating osmotic tissue expanders used as preparatory surgery before horizontal alveolar ridge augmentation using autogenous onlay block bone graft. Methods: A prospective randomized controlled clinical trial was performed. Sixteen partially edentulous patients demanding horizontal bone augmentation in the anterior maxilla were randomly assigned to horizontal ridge augmentation with autogenous bone block grafts harvested from the mandibular symphysis. For the test group, soft tissue expanders were placed sub-periosteally before horizontal ridge augmentation. Impressions were taken before and after STE, and the cast models were optically scanned and superimposed to be used for volumetric analysis. 
Horizontal ridge augmentation was carried out after STE was complete. For the control group, a periosteal releasing incision was performed during the bone augmentation procedure. Implants were placed in both groups at re-entry surgery after six months, and a core biopsy was taken. Histomorphometric assessment of newly formed bone surface area, mature collagen area fraction, osteoblast count, and blood vessel count was performed, and the change in alveolar ridge width was evaluated with a bone caliper and CBCT. Results: The soft tissue expander successfully provided a surplus amount of soft tissue in 5 out of 8 patients in the test group. Complications during the expansion period were perforation through the oral mucosa in two patients and infection in one patient. The mean soft tissue volume gain was 393.9 ± 322 mm³ after 6 months. The mean horizontal bone gains for the test and control groups were 3.14 mm and 3.69 mm, respectively. Conclusion: STE with a sub-periosteal approach is an applicable method for gaining additional soft tissue and reducing bone block graft exposure and wound dehiscence.
Keywords: soft tissue expander, ridge augmentation, block graft, symphysis bone block
Procedia: https://publications.waset.org/abstracts/149988/self-inflating-soft-tissue-expander-outcome-for-alveolar-ridge-augmentation-a-randomized-controlled-clinical-and-histological-study | PDF: https://publications.waset.org/abstracts/149988.pdf | Downloads: 125

25198. Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints
Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu
Abstract: Attributed graph clustering uses graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding the understanding and analysis of complex systems. Contrastive learning can effectively exploit meaningful implicit relationships between data for clustering, but existing contrastive attributed-graph-clustering methods have two drawbacks: (1) complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift; (2) the selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. This paper therefore proposes self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). First, Siamese multilayer perceptron (MLP) encoders are employed to generate two views separately, avoiding complex data augmentation. Second, a neighborhood contrastive loss is introduced to constrain node representations using the local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, a clustering-oriented contrastive loss is applied to fully utilize the clustering information in the global semantics for discriminative node representations, regarding the cluster centers from the two views as negative samples to fully leverage effective clustering information from different views. Comparative clustering results against existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
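A generic sketch of the two-view setup described above: Siamese MLP encoders produce two views without data augmentation (dropout alone differentiates the forward passes), and an InfoNCE-style loss pulls matching nodes together. This is a generic formulation, not the paper's exact dual losses:

```python
# Generic two-view contrastive sketch: a Siamese MLP encoder produces two
# views of the same node features; an InfoNCE-style loss pulls matching rows
# together. A generic formulation, not the exact AGC-DCL objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Dropout(0.2),            # dropout makes two passes yield two distinct views
    nn.Linear(256, 64),
)

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """z1, z2: (n_nodes, dim) embeddings of the same nodes from two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                # (n, n) scaled cosine similarities
    targets = torch.arange(z1.size(0))     # row i matches column i
    return F.cross_entropy(sim, targets)

x = torch.randn(32, 128)                   # node attribute matrix
loss = info_nce(encoder(x), encoder(x))    # two stochastic forward passes
```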
Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning
Procedia: https://publications.waset.org/abstracts/185262/self-supervised-attributed-graph-clustering-with-dual-contrastive-loss-constraints | PDF: https://publications.waset.org/abstracts/185262.pdf | Downloads: 53

25197. Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier
Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu
Abstract: Melanoma remains a public health crisis, with incidence rates increasing rapidly over the past decades. The use of artificial intelligence (AI) to improve diagnostic accuracy and decrease misdiagnosis continues to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter over darker skin tones, have increasingly been recognized as a problem that limits the accuracy of convolutional neural network (CNN) models: CNNs are prone to biased output due to biases in the data used to train them. Our aim in this study was to optimize convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma.
We hypothesized that a training algorithm based on data augmentation, generating new training samples from the original ones, would optimize the diagnostic accuracy of a CNN classifier and reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing, resulting in modified input data from which the model could learn subtle racial features; optimal selection of the momentum and batch-size hyperparameters further increased model accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
Keywords: bias, augmentation, melanoma, convolutional neural network
Procedia: https://publications.waset.org/abstracts/147487/bias-prevention-in-automated-diagnosis-of-melanoma-augmentation-of-a-convolutional-neural-network-classifier | PDF: https://publications.waset.org/abstracts/147487.pdf | Downloads: 210

25196. Image Recognition and Anomaly Detection Powered by GANs: A Systematic Review
Authors: Agastya Pratap Singh
Abstract: Generative adversarial networks (GANs) have emerged as powerful tools in image recognition and anomaly detection due to their ability to model complex data distributions and generate realistic images. This systematic review explores recent advancements and applications of GANs in both tasks. Various GAN architectures, such as DCGAN, CycleGAN, and StyleGAN, have been tailored to improve accuracy, robustness, and efficiency in visual data analysis. In image recognition, GANs have been used to enhance data augmentation, improve classification models, and generate high-quality synthetic images; in anomaly detection, they have proven effective at identifying rare and subtle abnormalities across domains including medical imaging, cybersecurity, and industrial inspection. The review also highlights the challenges and limitations of GAN-based methods, such as training instability and mode collapse, and suggests future research directions to overcome these issues. Through this review, we aim to provide researchers with a comprehensive understanding of the capabilities and potential of GANs in transforming image recognition and anomaly detection practices.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title="generative adversarial networks">generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DCGAN" title=" DCGAN"> DCGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=CycleGAN" title=" CycleGAN"> CycleGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=StyleGAN" title=" StyleGAN"> StyleGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/192413/image-recognition-and-anomaly-detection-powered-by-gans-a-systematic-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192413.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">20</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25195</span> Optimizing Pediatric Pneumonia Diagnosis with Lightweight MobileNetV2 and VAE-GAN Techniques in Chest X-Ray Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shriya%20Shukla">Shriya Shukla</a>, <a href="https://publications.waset.org/abstracts/search?q=Lachin%20Fernando"> Lachin Fernando</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pneumonia, a leading cause of mortality in young children globally, presents significant diagnostic challenges, particularly in resource-limited settings. This study presents an approach to diagnosing pediatric pneumonia using Chest X-Ray (CXR) images, employing a lightweight MobileNetV2 model enhanced with synthetic data augmentation. Addressing the challenge of dataset scarcity and imbalance, the study used a Variational Autoencoder-Generative Adversarial Network (VAE-GAN) to generate synthetic CXR images, improving the representation of normal cases in the pediatric dataset. This approach not only addresses the issues of data imbalance and scarcity prevalent in medical imaging but also provides a more accessible and reliable diagnostic tool for early pneumonia detection. The augmented data improved the model’s accuracy and generalization, achieving an overall accuracy of 95% in pneumonia detection. These findings highlight the efficacy of the MobileNetV2 model, offering a computationally efficient yet robust solution well-suited for resource-constrained environments such as mobile health applications. This study demonstrates the potential of synthetic data augmentation in enhancing medical image analysis for critical conditions like pediatric pneumonia. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pneumonia" title="pneumonia">pneumonia</a>, <a href="https://publications.waset.org/abstracts/search?q=MobileNetV2" title=" MobileNetV2"> MobileNetV2</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=VAE" title=" VAE"> VAE</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/181598/optimizing-pediatric-pneumonia-diagnosis-with-lightweight-mobilenetv2-and-vae-gan-techniques-in-chest-x-ray-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25194</span> Content-Aware Image Augmentation for Medical Imaging Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Filip%20Rusak">Filip Rusak</a>, <a href="https://publications.waset.org/abstracts/search?q=Yulia%20Arzhaeva"> Yulia Arzhaeva</a>, <a href="https://publications.waset.org/abstracts/search?q=Dadong%20Wang"> Dadong Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high quality and labeled training image datasets. The training images may come from different sources and be acquired from different radiography machines produced by different manufacturers, digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets defer in size, some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiography, based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset, to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset and normalized our digital CXRs for testing. This work resulted in the unified off-the-shelf CXR dataset composed of radiographs included in both, Montgomery and JSRT datasets. 
The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in lung fields, local structures, and the global visual effect. The proposed method can be used to augment training and testing image data sets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in medical imaging applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20diagnosis" title="computer-aided diagnosis">computer-aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20augmentation" title=" image augmentation"> image augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=lung%20segmentation" title=" lung segmentation"> lung segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20imaging" title=" medical imaging"> medical imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=seam%20carving" title=" seam carving"> seam carving</a> </p> <a href="https://publications.waset.org/abstracts/98354/content-aware-image-augmentation-for-medical-imaging-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98354.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">222</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25193</span> A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wi%20Sung%20Yoo">Wi Sung Yoo</a>, <a href="https://publications.waset.org/abstracts/search?q=Seung%20Woo%20Lee"> Seung Woo Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Youn%20Kyoung%20Hur"> Youn Kyoung Hur</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung%20Hwan%20Kim"> Sung Hwan Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Large-scale construction projects are continuously increasing in number, emphasizing the need for tools to monitor and evaluate project success. At the industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites, and in building systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and deriving improvements. In this study, we propose Key Performance Indicators (KPIs) to enable performance evaluation that reflects the increased diversity of construction sites and the unstructured data generated, and present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed across six areas (Time, Cost, Quality, Safety, Environment, Productivity) and 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and were completed successfully, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. 
In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded into a performance evaluation system that considers the scale and type of a construction project. The KPIs are also expected to serve as comprehensive indicators for the construction industry and as basic data for tracking competitiveness at the national level and establishing policies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=key%20performance%20indicator" title="key performance indicator">key performance indicator</a>, <a href="https://publications.waset.org/abstracts/search?q=performance%20measurement" title=" performance measurement"> performance measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=structured%20and%20unstructured%20data" title=" structured and unstructured data"> structured and unstructured data</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/187387/a-proposal-of-advanced-key-performance-indicators-for-assessing-six-performances-of-construction-projects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187387.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">42</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25192</span> Attention-Based ResNet for Breast Cancer Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abebe%20Mulugojam%20Negash">Abebe Mulugojam Negash</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongbin%20Yu"> Yongbin Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ekong%20Favour"> Ekong Favour</a>, <a href="https://publications.waset.org/abstracts/search?q=Bekalu%20Nigus%20Dawit"> Bekalu Nigus Dawit</a>, <a href="https://publications.waset.org/abstracts/search?q=Molla%20Woretaw%20Teshome"> Molla Woretaw Teshome</a>, <a href="https://publications.waset.org/abstracts/search?q=Aynalem%20Birtukan%20Yirga"> Aynalem Birtukan Yirga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer remains a significant health concern, necessitating advancements in diagnostic methodologies. Addressing this, our paper confronts the notable challenges in breast cancer classification, particularly the imbalance in datasets and the constraints on the accuracy and interpretability of prevailing deep learning approaches. We propose an attention-based residual neural network (ResNet), which effectively combines the robust features of ResNet with an advanced attention mechanism. Enhanced through strategic data augmentation and positive weight adjustments, this approach specifically targets the issue of data imbalance. The proposed model was tested on the BreakHis dataset and achieved accuracies of 99.00%, 99.04%, 98.67%, and 98.08% at different magnifications (40X, 100X, 200X, and 400X), respectively. 
We evaluated performance using different evaluation metrics, such as precision, recall, and F1-score, and made comparisons with other state-of-the-art methods. Our experiments demonstrate that the proposed model outperforms existing approaches, achieving higher accuracy in breast cancer classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=residual%20neural%20network" title="residual neural network">residual neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20weight" title=" positive weight"> positive weight</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/181531/attention-based-resnet-for-breast-cancer-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25191</span> Novel Formal Verification Based Coverage Augmentation Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Surinder%20Sood">Surinder Sood</a>, <a href="https://publications.waset.org/abstracts/search?q=Debajyoti%20Mukherjee"> Debajyoti Mukherjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Formal verification techniques have become widely popular in pre-silicon verification as an alternative to constrained-random simulation-based techniques. This paper proposes a novel formal verification-based coverage augmentation technique for verifying complex RTL functionality faster. The proposed approach relies on augmenting the coverage analysis from simulation with that from formal verification. Besides this, the functional qualification framework not only helps in improving coverage at a faster pace but also aids in maturing and qualifying the formal verification infrastructure. The proposed technique has helped to achieve faster verification sign-off, resulting in faster time-to-market. The design picked had a complex control and data path and had many configurable options to meet multiple specification needs. 
The flow is generic and tool-independent, making it much easier to leverage across projects and designs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=COI%20%28cone%20of%20influence%29" title="COI (cone of influence)">COI (cone of influence)</a>, <a href="https://publications.waset.org/abstracts/search?q=coverage" title=" coverage"> coverage</a>, <a href="https://publications.waset.org/abstracts/search?q=formal%20verification" title=" formal verification"> formal verification</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20injection" title=" fault injection"> fault injection</a> </p> <a href="https://publications.waset.org/abstracts/159250/novel-formal-verification-based-coverage-augmentation-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159250.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25190</span> Infrastructure Project Management and Implementation: A Case Study Of the Mokolo-Crocodile Water Augmentation Project in South Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elkington%20Sibusiso%20Mnguni">Elkington Sibusiso Mnguni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Mokolo-Crocodile Water Augmentation Project (MCWAP) is located in the Limpopo Province in the north-western part of South Africa. Its purpose is to increase the water supply by 30 million cubic meters per year to meet current and future demand from users, including power stations, mining houses, and the local municipality in the Lephalale area. This paper documents the planning and implementation aspects of the MCWAP infrastructure project. The study will add to the body of knowledge with respect to bulk water infrastructure development in water-scarce regions. The method used to gather and collate relevant data and information was a desktop study. The key finding was that the project was successfully completed in 2015 using conventional project management and construction methods. The project is currently being operated and maintained by the National Department of Water and Sanitation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=construction" title="construction">construction</a>, <a href="https://publications.waset.org/abstracts/search?q=contract%20management" title=" contract management"> contract management</a>, <a href="https://publications.waset.org/abstracts/search?q=infrastructure%20project" title=" infrastructure project"> infrastructure project</a>, <a href="https://publications.waset.org/abstracts/search?q=project%20management" title=" project management"> project management</a> </p> <a href="https://publications.waset.org/abstracts/139785/infrastructure-project-management-and-implementation-a-case-study-of-the-mokolo-crocodile-water-augmentation-project-in-south-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139785.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25189</span> Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ulrich%20Wake">Ulrich Wake</a>, <a href="https://publications.waset.org/abstracts/search?q=Eniman%20Syamsuddin"> Eniman Syamsuddin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The model for camera model identificaiton is trained using pretrained model ResNet43 and ResNet50. The dataset consists of 500 photos of each phone. Dataset is divided into 1280 photos for training, 320 photos for validation and 400 photos for testing. The model is trained using One Cycle Policy Method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epoch using regularization such as drop out and early stopping. The result is 90% accuracy for validation set and above 85% for Test-Time Augmentation using ResNet50. 
Every model is also fine-tuned by slightly updating the pretrained model’s weights. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=%E2%80%8B%20One%20Cycle%20Policy" title="One Cycle Policy">One Cycle Policy</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet34" title=" ResNet34"> ResNet34</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet50" title=" ResNet50"> ResNet50</a>, <a href="https://publications.waset.org/abstracts/search?q=Test-Time%20Agumentation" title=" Test-Time Augmentation"> Test-Time Augmentation</a> </p> <a href="https://publications.waset.org/abstracts/124445/camera-model-identification-for-mi-pad-4-oppo-a37f-samsung-m20-and-oppo-f9" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124445.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">208</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25188</span> Aerodynamic Bicycle Torque Augmentation with a Wells Turbine in Wheels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tsuyoshi%20Yamazaki">Tsuyoshi Yamazaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Etsuo%20Morishita"> Etsuo Morishita</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cyclists often ride through a crosswind and sometimes experience adverse pressure. We came to the idea that a Wells turbine can be used as a power augmentation device in the crosswind, something like the sails of a yacht. A Wells turbine always rotates in the same direction irrespective of the incoming flow direction, and it is used for small-scale power generation in the ocean, where waves create an oscillating flow. We incorporate the turbine into the wheel of a bike. A commercial device integrates strain gauges in the crank of a bike and transmits the force and torque applied to the pedal of the bike as an e-mail to the rider&rsquo;s mobile phone. We can analyze the unsteady data in a spreadsheet sent from the crank sensor. We run the bike with the crank sensor on rollers at the exit of a low-speed wind tunnel and analyze the effect of the crosswind on the wheel with a Wells turbine. We also test the aerodynamic characteristics of the turbine separately. Although the power gain depends on the flow direction, an increase of several watts might be possible with a Wells turbine incorporated into a bike wheel. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerodynamics" title="aerodynamics">aerodynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=Wells%20turbine" title=" Wells turbine"> Wells turbine</a>, <a href="https://publications.waset.org/abstracts/search?q=bicycle" title=" bicycle"> bicycle</a>, <a href="https://publications.waset.org/abstracts/search?q=wind%20engineering" title=" wind engineering"> wind engineering</a> </p> <a href="https://publications.waset.org/abstracts/84277/aerodynamic-bicycle-torque-augmentation-with-a-wells-turbine-in-wheels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84277.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">180</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25187</span> Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sinha">Ankit Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Soham%20Banerjee"> Soham Banerjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratik%20Chattopadhyay"> Pratik Chattopadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising of a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online-hard-negative-mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retail%20stores" title="retail stores">retail stores</a>, <a href="https://publications.waset.org/abstracts/search?q=faster-RCNN" title=" faster-RCNN"> faster-RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20localization" title=" object localization"> object localization</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-18" title=" ResNet-18"> ResNet-18</a>, <a href="https://publications.waset.org/abstracts/search?q=triplet%20loss" title=" triplet loss"> triplet loss</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20recognition" title=" product recognition"> product recognition</a> </p> <a href="https://publications.waset.org/abstracts/153836/effective-stacking-of-deep-neural-models-for-automated-object-recognition-in-retail-stores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25186</span> Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aditya%20Karade">Aditya Karade</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharada%20Falane"> Sharada Falane</a>, <a href="https://publications.waset.org/abstracts/search?q=Dhananjay%20Deshmukh"> Dhananjay Deshmukh</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijaykumar%20Mantri"> Vijaykumar Mantri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Brain tumors pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging have shown promise in accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorizing different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and finetuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20tumour" title="brain tumour">brain tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20image" title=" MRI image"> MRI image</a>, <a href="https://publications.waset.org/abstracts/search?q=detecting%20and%20classifying%20tumour" title=" detecting and classifying tumour"> detecting and classifying tumour</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-trained%20models" title=" pre-trained models"> pre-trained models</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/178879/brain-tumor-detection-and-classification-using-pre-trained-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25185</span> Defect Classification of Hydrogen Fuel Pressure Vessels using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongju%20Kim">Dongju Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Youngjoo%20Suh"> Youngjoo Suh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Kim"> Hyojin Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Gyeongyeong%20Kim"> Gyeongyeong Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Acoustic Emission Testing (AET) is widely used to test the structural integrity of an operational hydrogen storage container, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model to identify patterns in acoustic emission (AE) signal data that can be used to classify defects instead. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks, using AE sensor waveforms, is proposed. As hydrogen storage tanks are commonly constructed using carbon fiber reinforced polymer composite (CFRP), a defect classification dataset is collected through a tensile test on a specimen of CFRP with an AE sensor attached. The performance of the classification model, using one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20emission%20testing" title="acoustic emission testing">acoustic emission testing</a>, <a href="https://publications.waset.org/abstracts/search?q=carbon%20fiber%20reinforced%20polymer%20composite" title=" carbon fiber reinforced polymer composite"> carbon fiber reinforced polymer composite</a>, <a href="https://publications.waset.org/abstracts/search?q=one-dimensional%20convolutional%20neural%20network" title=" one-dimensional convolutional neural network"> one-dimensional convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=smote%20data%20augmentation" title=" smote data augmentation"> smote data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/150903/defect-classification-of-hydrogen-fuel-pressure-vessels-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=840">840</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=841">841</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=data%20augmentation&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a 
target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> 
</body> </html>
