CINXE.COM
Search results for: conditional generative adversarial net
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: conditional generative adversarial net</title> <meta name="description" content="Search results for: conditional generative adversarial net"> <meta name="keywords" content="conditional generative adversarial net"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="conditional generative adversarial net" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="conditional generative adversarial net"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 445</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: conditional generative adversarial net</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">445</span> Generative AI: A Comparison of Conditional Tabular Generative Adversarial Networks and Conditional Tabular Generative Adversarial Networks with Gaussian Copula in Generating Synthetic Data with Synthetic Data Vault</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lakshmi%20Prayaga">Lakshmi Prayaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Chandra%20Prayaga.%20Aaron%20Wade"> Chandra Prayaga. Aaron Wade</a>, <a href="https://publications.waset.org/abstracts/search?q=Gopi%20Shankar%20Mallu"> Gopi Shankar Mallu</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsha%20Satya%20Pola"> Harsha Satya Pola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Synthetic data generated by Generative Adversarial Networks and Autoencoders is becoming more common to combat the problem of insufficient data for research purposes. However, generating synthetic data is a tedious task requiring extensive mathematical and programming background. Open-source platforms such as the Synthetic Data Vault (SDV) and Mostly AI have offered a platform that is user-friendly and accessible to non-technical professionals to generate synthetic data to augment existing data for further analysis. The SDV also provides for additions to the generic GAN, such as the Gaussian copula. We present the results from two synthetic data sets (CTGAN data and CTGAN with Gaussian Copula) generated by the SDV and report the findings. The results indicate that the ROC and AUC curves for the data generated by adding the layer of Gaussian copula are much higher than the data generated by the CTGAN. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data%20generation" title="synthetic data generation">synthetic data generation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=conditional%20tabular%20GAN" title=" conditional tabular GAN"> conditional tabular GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20copula" title=" Gaussian copula"> Gaussian copula</a> </p> <a href="https://publications.waset.org/abstracts/183000/generative-ai-a-comparison-of-conditional-tabular-generative-adversarial-networks-and-conditional-tabular-generative-adversarial-networks-with-gaussian-copula-in-generating-synthetic-data-with-synthetic-data-vault" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183000.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">444</span> Time Series Simulation by Conditional Generative Adversarial Net</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rao%20Fu">Rao Fu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jie%20Chen"> Jie Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shutian%20Zeng"> Shutian Zeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Yiping%20Zhuang"> Yiping Zhuang</a>, <a href="https://publications.waset.org/abstracts/search?q=Agus%20Sudjianto"> Agus Sudjianto</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net" title="conditional generative adversarial net">conditional generative adversarial net</a>, <a href="https://publications.waset.org/abstracts/search?q=market%20and%20credit%20risk%20management" title=" market and credit risk management"> market and credit risk management</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series" title=" time series"> time series</a> </p> <a href="https://publications.waset.org/abstracts/123535/time-series-simulation-by-conditional-generative-adversarial-net" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">443</span> A Deep Reinforcement Learning-Based Secure Framework against Adversarial Attacks in Power System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arshia%20Aflaki">Arshia Aflaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Hadis%20Karimipour"> Hadis Karimipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anik%20Islam"> Anik Islam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative Adversarial Attacks (GAAs) threaten critical sectors, ranging from fingerprint recognition to industrial control systems. Existing Deep Learning (DL) algorithms are not robust enough against this kind of cyber-attack. As one of the most critical industries in the world, the power grid is not an exception. In this study, a Deep Reinforcement Learning-based (DRL) framework assisting the DL model to improve the robustness of the model against generative adversarial attacks is proposed. Real-world smart grid stability data, as an IIoT dataset, test our method and improves the classification accuracy of a deep learning model from around 57 percent to 96 percent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20attack" title="generative adversarial attack">generative adversarial attack</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title=" deep reinforcement learning"> deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=IIoT" title=" IIoT"> IIoT</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20system" title=" power system"> power system</a> </p> <a href="https://publications.waset.org/abstracts/188908/a-deep-reinforcement-learning-based-secure-framework-against-adversarial-attacks-in-power-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">37</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">442</span> Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Shuqi">Zhang Shuqi</a>, <a href="https://publications.waset.org/abstracts/search?q=Liu%20Dan"> Liu Dan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For the problems in intelligent network anomaly traffic detection models, such as low detection accuracy caused by the lack of training samples, poor effect with small sample attack detection, a classification model enhancement method, F-ACGAN(Flow Auxiliary Classifier Generative Adversarial Network) which introduces generative adversarial network and adversarial training, is proposed to solve these problems. Generating adversarial data with category labels could enhance the training effect and improve classification accuracy and model robustness. FACGAN consists of three steps: feature preprocess, which includes data type conversion, dimensionality reduction and normalization, etc.; A generative adversarial network model with feature learning ability is designed, and the sample generation effect of the model is improved through adversarial iterations between generator and discriminator. The adversarial disturbance factor of the gradient direction of the classification model is added to improve the diversity and antagonism of generated data and to promote the model to learn from adversarial classification features. The experiment of constructing a classification model with the UNSW-NB15 dataset shows that with the enhancement of FACGAN on the basic model, the classification accuracy has improved by 8.09%, and the score of F1 has improved by 6.94%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20imbalance" title="data imbalance">data imbalance</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=ACGAN" title=" ACGAN"> ACGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=adversarial%20training" title=" adversarial training"> adversarial training</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/156929/enhancement-method-of-network-traffic-anomaly-detection-model-based-on-adversarial-training-with-category-tags" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156929.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">441</span> Deep Reinforcement Learning and Generative Adversarial Networks Approach to Thwart Intrusions and Adversarial Attacks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fabrice%20Setephin%20Atedjio">Fabrice Setephin Atedjio</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Pierre%20Lienou"> Jean-Pierre Lienou</a>, <a href="https://publications.waset.org/abstracts/search?q=Frederica%20F.%20Nelson"> Frederica F. Nelson</a>, <a href="https://publications.waset.org/abstracts/search?q=Sachin%20S.%20Shetty"> Sachin S. Shetty</a>, <a href="https://publications.waset.org/abstracts/search?q=Charles%20A.%20Kamhoua"> Charles A. Kamhoua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malicious users exploit vulnerabilities in computer systems, significantly disrupting their performance and revealing the inadequacies of existing protective solutions. Even machine learning-based approaches, designed to ensure reliability, can be compromised by adversarial attacks that undermine their robustness. This paper addresses two critical aspects of enhancing model reliability. First, we focus on improving model performance and robustness against adversarial threats. To achieve this, we propose a strategy by harnessing deep reinforcement learning. Second, we introduce an approach leveraging generative adversarial networks to counter adversarial attacks effectively. Our results demonstrate substantial improvements over previous works in the literature, with classifiers exhibiting enhanced accuracy in classification tasks, even in the presence of adversarial perturbations. These findings underscore the efficacy of the proposed model in mitigating intrusions and adversarial attacks within the machine-learning landscape. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=reliability" title=" reliability"> reliability</a>, <a href="https://publications.waset.org/abstracts/search?q=adversarial%20attacks" title=" adversarial attacks"> adversarial attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-reinforcement%20learning" title=" deep-reinforcement learning"> deep-reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=robustness" title=" robustness"> robustness</a> </p> <a href="https://publications.waset.org/abstracts/194008/deep-reinforcement-learning-and-generative-adversarial-networks-approach-to-thwart-intrusions-and-adversarial-attacks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194008.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">9</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">440</span> AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Monahan">Thomas Monahan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Gorius"> Nicolas Gorius</a>, <a href="https://publications.waset.org/abstracts/search?q=Thanh%20Nguyen"> Thanh Nguyen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Exoplanet atmospheric parameters retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods, therefore, compromise model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the previous model’s speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ArcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20computing" title=" edge computing"> edge computing</a>, <a href="https://publications.waset.org/abstracts/search?q=atmospheric%20parameters%20retrieval" title=" atmospheric parameters retrieval"> atmospheric parameters retrieval</a> </p> <a href="https://publications.waset.org/abstracts/143382/aiml-atmospheric-parameters-retrieval-using-the-atmospheric-retrievals-conditional-generative-adversarial-network-arcgan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">439</span> A Generative Adversarial Framework for Bounding Confounded Causal Effects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yaowei%20Hu">Yaowei Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongkai%20Wu"> Yongkai Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lu%20Zhang"> Lu Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xintao%20Wu"> Xintao Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Causal inference from observational data is receiving wide applications in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, with an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method does not make any assumption about the data generating process and the type of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=average%20causal%20effect" title="average causal effect">average causal effect</a>, <a href="https://publications.waset.org/abstracts/search?q=hidden%20confounding" title=" hidden confounding"> hidden confounding</a>, <a href="https://publications.waset.org/abstracts/search?q=bound%20estimation" title=" bound estimation"> bound estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20learning" title=" generative adversarial learning"> generative adversarial learning</a> </p> <a href="https://publications.waset.org/abstracts/127808/a-generative-adversarial-framework-for-bounding-confounded-causal-effects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127808.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">438</span> Use of Generative Adversarial Networks (GANs) in Neuroimaging and Clinical Neuroscience Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Niloufar%20Yadgari">Niloufar Yadgari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> GANs are a potent form of deep learning models that have found success in various fields. They are part of the larger group of generative techniques, which aim to produce authentic data using a probabilistic model that learns distributions from actual samples. In clinical settings, GANs have demonstrated improved abilities in capturing spatially intricate, nonlinear, and possibly subtle disease impacts in contrast to conventional generative techniques. This review critically evaluates the current research on how GANs are being used in imaging studies of different neurological conditions like Alzheimer's disease, brain tumors, aging of the brain, and multiple sclerosis. We offer a clear explanation of different GAN techniques for each use case in neuroimaging and delve into the key hurdles, unanswered queries, and potential advancements in utilizing GANs in this field. Our goal is to connect advanced deep learning techniques with neurology studies, showcasing how GANs can assist in clinical decision-making and enhance our comprehension of the structural and functional aspects of brain disorders. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GAN" title="GAN">GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=pathology" title=" pathology"> pathology</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=neuro%20imaging" title=" neuro imaging"> neuro imaging</a> </p> <a href="https://publications.waset.org/abstracts/188651/use-of-generative-adversarial-networks-gans-in-neuroimaging-and-clinical-neuroscience-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">33</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">437</span> Generating Swarm Satellite Data Using Long Short-Term Memory and Generative Adversarial Networks for the Detection of Seismic Precursors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yaxin%20Bi">Yaxin Bi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate prediction and understanding of the evolution mechanisms of earthquakes remain challenging in the fields of geology, geophysics, and seismology. This study leverages Long Short-Term Memory (LSTM) networks and Generative Adversarial Networks (GANs), a generative model tailored to time-series data, for generating synthetic time series data based on Swarm satellite data, which will be used for detecting seismic anomalies. LSTMs demonstrated commendable predictive performance in generating synthetic data across multiple countries. In contrast, the GAN models struggled to generate synthetic data, often producing non-informative values, although they were able to capture the data distribution of the time series. These findings highlight both the promise and challenges associated with applying deep learning techniques to generate synthetic data, underscoring the potential of deep learning in generating synthetic electromagnetic satellite data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LSTM" title="LSTM">LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=earthquake" title=" earthquake"> earthquake</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data" title=" synthetic data"> synthetic data</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20AI" title=" generative AI"> generative AI</a>, <a href="https://publications.waset.org/abstracts/search?q=seismic%20precursors" title=" seismic precursors"> seismic precursors</a> </p> <a href="https://publications.waset.org/abstracts/187478/generating-swarm-satellite-data-using-long-short-term-memory-and-generative-adversarial-networks-for-the-detection-of-seismic-precursors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187478.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">436</span> Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Alqahtani">Hamed Alqahtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Manolya%20Kavakli-Thorne"> Manolya Kavakli-Thorne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, the conventional approaches for pose-independent representation lack in providing quality results in recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with state of the art in the field prove that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disentanglement" title="disentanglement">disentanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/108319/adversarial-disentanglement-using-latent-classifier-for-pose-independent-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">435</span> Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simiao%20Ren">Simiao Ren</a>, <a href="https://publications.waset.org/abstracts/search?q=En%20Wei"> En Wei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to domain shift caused by the visual gap (potentially due to lighting, indoor scene change, pool floor color etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost for labeling each new swimming pool is too expensive for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve the model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only require background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title=" YOLOv8"> YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=swimming%20pool" title=" swimming pool"> swimming pool</a>, <a href="https://publications.waset.org/abstracts/search?q=drowning" title=" drowning"> drowning</a>, <a href="https://publications.waset.org/abstracts/search?q=domain%20adaptation" title=" domain adaptation"> domain adaptation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=GP-GAN" title=" GP-GAN"> GP-GAN</a> </p> <a href="https://publications.waset.org/abstracts/163443/domain-adaptation-save-lives-drowning-detection-in-swimming-pool-scene-based-on-yolov8-improved-by-gaussian-poisson-generative-adversarial-network-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">434</span> Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rongbo%20Shen">Rongbo Shen</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianhua%20Yao"> Jianhua Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Kezhou%20Yan"> Kezhou Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuan%20Tian"> Kuan Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jiang"> Cheng Jiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ke%20Zhou"> Ke Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study proposes a generative adversarial networks (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. Then the GAN framework incorporates adversarial learning to train a feature generator for the minority class through playing a minimax game with a discriminator. The feature generator then generates features for minority class from arbitrary latent distributions to balance the data between the majority class and the minority class. Additionally, a data cleaning technique, i.e., Tomek link, is employed to clean up undesirable conflicting features introduced from the feature augmentation and thus establish well-defined class clusters for the training. The experiment section evaluates the proposed method on two medical image analysis tasks, i.e., mass classification on mammogram and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance over the state-of-the-art counterparts. Compared to all counterparts, our proposed method improves more than 1.5 percentage of accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=class%20imbalance" title="class imbalance">class imbalance</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20sampling" title=" synthetic sampling"> synthetic sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20augmentation" title=" feature augmentation"> feature augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20cleaning" title=" data cleaning"> data cleaning</a> </p> <a href="https://publications.waset.org/abstracts/114272/deep-feature-augmentation-with-generative-adversarial-networks-for-class-imbalance-learning-in-medical-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114272.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">433</span> Semi-Supervised Outlier Detection Using a Generative and Adversary Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jindong%20Gu">Jindong Gu</a>, <a href="https://publications.waset.org/abstracts/search?q=Matthias%20Schubert"> Matthias Schubert</a>, <a href="https://publications.waset.org/abstracts/search?q=Volker%20Tresp"> Volker Tresp</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=one-class%20classification" title="one-class classification">one-class classification</a>, <a href="https://publications.waset.org/abstracts/search?q=outlier%20detection" title=" outlier detection"> outlier detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversary%20networks" title=" generative adversary networks"> generative adversary networks</a>, <a href="https://publications.waset.org/abstracts/search?q=semi-supervised%20learning" title=" semi-supervised learning"> semi-supervised learning</a> </p> <a href="https://publications.waset.org/abstracts/99065/semi-supervised-outlier-detection-using-a-generative-and-adversary-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99065.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">151</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">432</span> Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malik%20Muhammad%20Arslan">Malik Muhammad Arslan</a>, <a href="https://publications.waset.org/abstracts/search?q=Muneeb%20Ullah"> Muneeb Ullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Dai%20Shihan"> Dai Shihan</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniyal%20Haider"> Daniyal Haider</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaodong%20Yang"> Xiaodong Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of CXR normal and abnormal images, this study introduces a distinctive technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of this Idiosyncratic DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-Measure of 0.963. These results indicate the promising potential for the early detection of diseases in CXR images using this Inimitable approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=Resnet50" title=" Resnet50"> Resnet50</a> </p> <a href="https://publications.waset.org/abstracts/176306/improving-chest-x-ray-disease-detection-with-enhanced-data-augmentation-using-novel-approach-of-diverse-conditional-wasserstein-generative-adversarial-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176306.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">431</span> Turbulent Channel Flow Synthesis using Generative Adversarial Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Lyne">John M. Lyne</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Andrea%20Scott"> K. Andrea Scott</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In fluid dynamics, direct numerical simulations (DNS) of turbulent flows require large amounts of nodes to appropriately resolve all scales of energy transfer. Due to the size of these databases, sharing these datasets amongst the academic community is a challenge. Recent work has been done to investigate the use of super-resolution to enable database sharing, where a low-resolution flow field is super-resolved to high resolutions using a neural network. Recently, Generative Adversarial Networks (GAN) have grown in popularity with impressive results in the generation of faces, landscapes, and more. This work investigates the generation of unique high-resolution channel flow velocity fields from a low-dimensional latent space using a GAN. The training objective of the GAN is to generate samples in which the distribution of the generated samplesis ideally indistinguishable from the distribution of the training data. In this study, the network is trained using samples drawn from a statistically stationary channel flow at a Reynolds number of 560. Results show that the turbulent statistics and energy spectra of the generated flow fields are within reasonable agreement with those of the DNS data, demonstrating that GANscan produce the intricate multi-scale phenomena of turbulence. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computational%20fluid%20dynamics" title="computational fluid dynamics">computational fluid dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20flow" title=" channel flow"> channel flow</a>, <a href="https://publications.waset.org/abstracts/search?q=turbulence" title=" turbulence"> turbulence</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a> </p> <a href="https://publications.waset.org/abstracts/141594/turbulent-channel-flow-synthesis-using-generative-adversarial-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141594.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">206</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">430</span> Classification of Generative Adversarial Network Generated Multivariate Time Series Data Featuring Transformer-Based Deep Learning Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thrivikraman%20Aswathi">Thrivikraman Aswathi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Advaith"> S. Advaith</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As there can be cases where the use of real data is somehow limited, such as when it is hard to get access to a large volume of real data, we need to go for synthetic data generation. This produces high-quality synthetic data while maintaining the statistical properties of a specific dataset. In the present work, a generative adversarial network (GAN) is trained to produce multivariate time series (MTS) data since the MTS is now being gathered more often in various real-world systems. Furthermore, the GAN-generated MTS data is fed into a transformer-based deep learning architecture that carries out the data categorization into predefined classes. Further, the model is evaluated across various distinct domains by generating corresponding MTS data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GAN" title="GAN">GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multivariate%20time%20series" title=" multivariate time series"> multivariate time series</a> </p> <a href="https://publications.waset.org/abstracts/157460/classification-of-generative-adversarial-network-generated-multivariate-time-series-data-featuring-transformer-based-deep-learning-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157460.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">429</span> Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haoqi%20Gao">Haoqi Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Koichi%20Ogawara"> Koichi Ogawara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal vascular segmentation of color fundus is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening of fundus diseases has great value for clinical medical diagnosis. The traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new Generative Adversarial Network based on CycleGAN for retinal fundus images. This method can generate not only synthetic fundus images but also generate corresponding segmentation masks, which has certain application value and challenge in computer vision and computer graphics. In the results, we evaluate our proposed method from both quantitative and qualitative. For generated segmented images, our method achieves dice coefficient of 0.81 and PR of 0.89 on DRIVE dataset. For generated synthetic fundus images, we use ”Toy Experiment” to verify the state-of-the-art performance of our method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20vascular%20segmentations" title="retinal vascular segmentations">retinal vascular segmentations</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20ad-versarial%20network" title=" generative ad-versarial network"> generative ad-versarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclegan" title=" cyclegan"> cyclegan</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a> </p> <a href="https://publications.waset.org/abstracts/110591/generative-adversarial-network-for-bidirectional-mappings-between-retinal-fundus-images-and-vessel-segmented-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">428</span> Learning Traffic Anomalies from Generative Models on Real-Time Observations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fotis%20I.%20Giasemis">Fotis I. Giasemis</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandros%20Sopasakis"> Alexandros Sopasakis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study focuses on detecting traffic anomalies using generative models applied to real-time observations. By integrating a Graph Neural Network with an attention-based mechanism within the Spatiotemporal Generative Adversarial Network framework, we enhance the capture of both spatial and temporal dependencies in traffic data. Leveraging minute-by-minute observations from cameras distributed across Gothenburg, our approach provides a more detailed and precise anomaly detection system, effectively capturing the complex topology and dynamics of urban traffic networks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic" title="traffic">traffic</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=GNN" title=" GNN"> GNN</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a> </p> <a href="https://publications.waset.org/abstracts/193544/learning-traffic-anomalies-from-generative-models-on-real-time-observations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">8</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">427</span> DISGAN: Efficient Generative Adversarial Network-Based Method for Cyber-Intrusion Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hongyu%20Chen">Hongyu Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Jiang"> Li Jiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ubiquitous anomalies endanger the security of our system con- stantly. They may bring irreversible damages to the system and cause leakage of privacy. Thus, it is of vital importance to promptly detect these anomalies. Traditional supervised methods such as Decision Trees and Support Vector Machine (SVM) are used to classify normality and abnormality. However, in some case, the abnormal status are largely rarer than normal status, which leads to decision bias of these methods. Generative adversarial network (GAN) has been proposed to handle the case. With its strong generative ability, it only needs to learn the distribution of normal status, and identify the abnormal status through the gap between it and the learned distribution. Nevertheless, existing GAN-based models are not suitable to process data with discrete values, leading to immense degradation of detection performance. To cope with the discrete features, in this paper, we propose an efficient GAN-based model with specifically-designed loss function. Experiment results show that our model outperforms state-of-the-art models on discrete dataset and remarkably reduce the overhead. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GAN" title="GAN">GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20feature" title=" discrete feature"> discrete feature</a>, <a href="https://publications.waset.org/abstracts/search?q=Wasserstein%20distance" title=" Wasserstein distance"> Wasserstein distance</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20intermediate%20layers" title=" multiple intermediate layers"> multiple intermediate layers</a> </p> <a href="https://publications.waset.org/abstracts/113103/disgan-efficient-generative-adversarial-network-based-method-for-cyber-intrusion-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/113103.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">426</span> MULTI-FLGANs: Multi-Distributed Adversarial Networks for Non-Independent and Identically Distributed Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Akash%20Amalan">Akash Amalan</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Wang"> Rui Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanqi%20Qiao"> Yanqi Qiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Emmanouil%20Panaousis"> Emmanouil Panaousis</a>, <a href="https://publications.waset.org/abstracts/search?q=Kaitai%20Liang"> Kaitai Liang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Federated learning is an emerging concept in the domain of distributed machine learning. This concept has enabled General Adversarial Networks (GANs) to benefit from the rich distributed training data while preserving privacy. However, in a non-IID setting, current federated GAN architectures are unstable, struggling to learn the distinct features, and vulnerable to mode collapse. In this paper, we propose an architecture MULTI-FLGAN to solve the problem of low-quality images, mode collapse, and instability for non-IID datasets. Our results show that MULTI-FLGAN is four times as stable and performant (i.e., high inception score) on average over 20 clients compared to baseline FLGAN. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=federated%20learning" title="federated learning">federated learning</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=inference%20attack" title=" inference attack"> inference attack</a>, <a href="https://publications.waset.org/abstracts/search?q=non-IID%20data%20distribution" title=" non-IID data distribution"> non-IID data distribution</a> </p> <a href="https://publications.waset.org/abstracts/152877/multi-flgans-multi-distributed-adversarial-networks-for-non-independent-and-identically-distributed-distribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152877.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">425</span> Next-Gen Solutions: How Generative AI Will Reshape Businesses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishwarya%20Rai">Aishwarya Rai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study explores the transformative influence of generative AI on startups, businesses, and industries. We will explore how large businesses can benefit in the area of customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. Zooming into the retail and CPG industry, industry findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating economic potential, we acknowledge challenges like external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20AI" title="generative AI">generative AI</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title=" digital transformation"> digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=LLM" title=" LLM"> LLM</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=startups" title=" startups"> startups</a>, <a href="https://publications.waset.org/abstracts/search?q=businesses" title=" businesses"> businesses</a> </p> <a href="https://publications.waset.org/abstracts/179625/next-gen-solutions-how-generative-ai-will-reshape-businesses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">424</span> Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hussain%20Sajid">Hussain Sajid</a>, <a href="https://publications.waset.org/abstracts/search?q=Jung-Hun%20Shin"> Jung-Hun Shin</a>, <a href="https://publications.waset.org/abstracts/search?q=Kum-Won%20Cho"> Kum-Won Cho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image super-resolution is the most common computer vision problem with many important applications. Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, high memory requirements of GAN-based SR (mainly generators) lead to performance degradation and increased energy consumption, making it difficult to implement it onto resource-constricted devices. To relieve such a problem, In this paper, we introduce an optimized and highly efficient architecture for SR-GAN (generator) model by utilizing model compression techniques such as Knowledge Distillation and pruning, which work together to reduce the storage requirement of the model also increase in their performance. Our method begins with distilling the knowledge from a large pre-trained model to a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge Distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single-image%20super-resolution" title="single-image super-resolution">single-image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20distillation" title=" knowledge distillation"> knowledge distillation</a>, <a href="https://publications.waset.org/abstracts/search?q=pruning" title=" pruning"> pruning</a> </p> <a href="https://publications.waset.org/abstracts/162396/optimizing-super-resolution-generative-adversarial-networks-for-resource-efficient-single-image-super-resolution-via-knowledge-distillation-and-weight-pruning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162396.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">96</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">423</span> Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Karthick">P. Karthick</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Mahesh"> K. Mahesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video has become an increasingly significant component of our digital everyday contact. With the advancement of greater contents and shows of the resolution, its significant volume poses serious obstacles to the objective of receiving, distributing, compressing, and revealing video content of high quality. In this paper, we propose the primary beginning to complete a deep video compression model that jointly upgrades all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, repeating the single image instead of the duplicate images by recognizing and detecting minute changes using generative adversarial network (GAN) and recorded with long short-term memory (LSTM). Instead of the complete image, the small changes generated using GAN are substituted, which helps in frame level compression. Pixel wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied for each and every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, duration, frames per second (FPS), and quality results demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than 50% in size when compared with the original video. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title="video compression">video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=K-means%20clustering" title=" K-means clustering"> K-means clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20visualization" title=" pixel visualization"> pixel visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=stochastic%20gradient%20descent" title=" stochastic gradient descent"> stochastic gradient descent</a>, <a href="https://publications.waset.org/abstracts/search?q=frame%20per%20second%20extraction" title=" frame per second extraction"> frame per second extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20channel%20extraction" title=" RGB channel extraction"> RGB channel extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=self-detection%20and%20deciding%20system" title=" self-detection and deciding system"> self-detection and deciding system</a> </p> <a href="https://publications.waset.org/abstracts/138827/efficient-video-compression-technique-using-convolutional-neural-networks-and-generative-adversarial-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138827.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">422</span> A Deep Learning Based Method for Faster 3D Structural Topology Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arya%20Prakash%20Padhi">Arya Prakash Padhi</a>, <a href="https://publications.waset.org/abstracts/search?q=Anupam%20Chakrabarti"> Anupam Chakrabarti</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajib%20Chowdhury"> Rajib Chowdhury</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Topology or layout optimization often gives better performing economic structures and is very helpful in the conceptual design phase. But traditionally it is being done in finite element-based optimization schemes which, although gives a good result, is very time-consuming especially in 3D structures. Among other alternatives machine learning, especially deep learning-based methods, have a very good potential in resolving this computational issue. Here convolutional neural network (3D-CNN) based variational auto encoder (VAE) is trained using a dataset generated from commercially available topology optimization code ABAQUS Tosca using solid isotropic material with penalization (SIMP) method for compliance minimization. The encoded data in latent space is then fed to a 3D generative adversarial network (3D-GAN) to generate the outcome in 64x64x64 size. Here the network consists of 3D volumetric CNN with rectified linear unit (ReLU) activation in between and sigmoid activation in the end. The proposed network is seen to provide almost optimal results with significantly reduced computational time, as there is no iteration involved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20generative%20adversarial%20network" title="3D generative adversarial network">3D generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20topology%20optimization" title=" structural topology optimization"> structural topology optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=variational%20auto%20encoder" title=" variational auto encoder"> variational auto encoder</a> </p> <a href="https://publications.waset.org/abstracts/110331/a-deep-learning-based-method-for-faster-3d-structural-topology-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">421</span> Reviewing Image Recognition and Anomaly Detection Methods Utilizing GANs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Agastya%20Pratap%20Singh">Agastya Pratap Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This review paper examines the emerging applications of generative adversarial networks (GANs) in the fields of image recognition and anomaly detection. With the rapid growth of digital image data, the need for efficient and accurate methodologies to identify and classify images has become increasingly critical. GANs, known for their ability to generate realistic data, have gained significant attention for their potential to enhance traditional image recognition systems and improve anomaly detection performance. The paper systematically analyzes various GAN architectures and their modifications tailored for image recognition tasks, highlighting their strengths and limitations. Additionally, it delves into the effectiveness of GANs in detecting anomalies in diverse datasets, including medical imaging, industrial inspection, and surveillance. The review also discusses the challenges faced in training GANs, such as mode collapse and stability issues, and presents recent advancements aimed at overcoming these obstacles. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title="generative adversarial networks">generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data%20generation" title=" synthetic data generation"> synthetic data generation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20evaluation" title=" model evaluation"> model evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20applications" title=" machine learning applications"> machine learning applications</a> </p> <a href="https://publications.waset.org/abstracts/192253/reviewing-image-recognition-and-anomaly-detection-methods-utilizing-gans" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">420</span> Improving Fingerprinting-Based Localization System Using Generative AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Getaneh%20Berie%20Tarekegn">Getaneh Berie Tarekegn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is by using a global navigation satellite system (GNSS). Due to nonline-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas.This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we presented a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=location-aware%20services" title="location-aware services">location-aware services</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction%20technique" title=" feature extraction technique"> feature extraction technique</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/184493/improving-fingerprinting-based-localization-system-using-generative-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">60</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">419</span> Image Recognition and Anomaly Detection Powered by GANs: A Systematic Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Agastya%20Pratap%20Singh">Agastya Pratap Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative Adversarial Networks (GANs) have emerged as powerful tools in the fields of image recognition and anomaly detection due to their ability to model complex data distributions and generate realistic images. This systematic review explores recent advancements and applications of GANs in both image recognition and anomaly detection tasks. We discuss various GAN architectures, such as DCGAN, CycleGAN, and StyleGAN, which have been tailored to improve accuracy, robustness, and efficiency in visual data analysis. In image recognition, GANs have been used to enhance data augmentation, improve classification models, and generate high-quality synthetic images. In anomaly detection, GANs have proven effective in identifying rare and subtle abnormalities across various domains, including medical imaging, cybersecurity, and industrial inspection. The review also highlights the challenges and limitations associated with GAN-based methods, such as instability during training and mode collapse, and suggests future research directions to overcome these issues. Through this review, we aim to provide researchers with a comprehensive understanding of the capabilities and potential of GANs in transforming image recognition and anomaly detection practices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title="generative adversarial networks">generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DCGAN" title=" DCGAN"> DCGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=CycleGAN" title=" CycleGAN"> CycleGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=StyleGAN" title=" StyleGAN"> StyleGAN</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a> </p> <a href="https://publications.waset.org/abstracts/192413/image-recognition-and-anomaly-detection-powered-by-gans-a-systematic-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192413.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">20</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">418</span> Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20Rose%20T.%20Alipo-on">Jacqueline Rose T. Alipo-on</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesca%20Isabelle%20F.%20Escobar"> Francesca Isabelle F. Escobar</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20Joshua%20T.%20Tan"> Myles Joshua T. Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20Al%20Dahoul"> Nouar Al Dahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human errors. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform an analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform other models in terms of recall and F1 score using a five-fold average score of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heartbeat%20classification" title="heartbeat classification">heartbeat classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiogram%20signals" title=" electrocardiogram signals"> electrocardiogram signals</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a> </p> <a href="https://publications.waset.org/abstracts/162763/electrocardiogram-based-heartbeat-classification-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162763.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">417</span> Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Getaneh%20Berie%20Tarekegn">Getaneh Berie Tarekegn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is by using a global navigation satellite system (GNSS). Due to nonline-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas.This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we presented a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=location-aware%20services" title="location-aware services">location-aware services</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction%20technique" title=" feature extraction technique"> feature extraction technique</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/174097/improving-fingerprinting-based-localization-system-using-generative-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">416</span> GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Getaneh%20Berie%20Tarekegn">Getaneh Berie Tarekegn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is by using a global navigation satellite system (GNSS). Due to nonline-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas.This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we presented a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, numerical results proved that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=location-aware%20services" title="location-aware services">location-aware services</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction%20technique" title=" feature extraction technique"> feature extraction technique</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/174095/gailoc-improving-fingerprinting-based-localization-system-using-generative-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174095.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=14">14</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=15">15</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>