<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: neural generative attention</title> <meta name="description" content="Search results for: neural generative attention"> <meta name="keywords" content="neural generative attention"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="neural generative attention" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> 
</div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="neural generative attention"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6011</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: neural generative attention</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6011</span> Unsupervised Images Generation Based on Sloan Digital Sky Survey with Deep Convolutional Generative Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guanghua%20Zhang">Guanghua Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fubao%20Wang"> Fubao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Weijun%20Duan"> Weijun Duan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional neural networks (CNNs) have attracted increasing attention in recent years, especially in the fields of computer vision and image classification. 
However, unsupervised learning with CNNs has received less attention than supervised learning. In this work, we use a powerful tool, deep convolutional generative adversarial networks (DCGANs), to generate images from the Sloan Digital Sky Survey. Trained on various star and galaxy images, both the generator and the discriminator perform well under unsupervised learning. We also conducted several experiments to choose the best hyper-parameter values, which helped stabilize the training process and ensure good output quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title="convolution neural network">convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=discriminator" title=" discriminator"> discriminator</a>, <a href="https://publications.waset.org/abstracts/search?q=generator" title=" generator"> generator</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20learning" title=" unsupervised learning"> unsupervised learning</a> </p> <a href="https://publications.waset.org/abstracts/89010/unsupervised-images-generation-based-on-sloan-digital-sky-survey-with-deep-convolutional-generative-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89010.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6010</span> Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Sinarwati%20Mohamad%20Suhaili">Sinarwati Mohamad Suhaili</a>, <a href="https://publications.waset.org/abstracts/search?q=Naomie%20Salim"> Naomie Salim</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Nazim%20Jambli"> Mohamad Nazim Jambli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are playing an increasingly important role in automated customer service. These models, which are able to recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the focus of the model during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions—dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter — in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k=3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k=3). These results emphasize the crucial influence of selecting an appropriate attention-scoring function in improving the performance of seq2seq models for chatbots. 
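The four attention-scoring functions compared above can be sketched in a few lines. This is a minimal pure-Python illustration on toy 2-dimensional vectors; in particular, the exact form of the tanh-extended multiplicative score (written here as tanh(s·(W h))) is an assumption for illustration, not the paper's definition.

```python
import math

def dot_score(s, h):
    """Dot-product attention score: s . h."""
    return sum(si * hi for si, hi in zip(s, h))

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def general_score(s, h, W):
    """Multiplicative/general score: s . (W h)."""
    return dot_score(s, matvec(W, h))

def additive_score(s, h, W1, W2, v):
    """Additive (Bahdanau-style) score: v . tanh(W1 s + W2 h)."""
    z = [a + b for a, b in zip(matvec(W1, s), matvec(W2, h))]
    return dot_score(v, [math.tanh(zi) for zi in z])

def tanh_general_score(s, h, W):
    """One plausible form of the tanh-extended multiplicative score: tanh(s . (W h))."""
    return math.tanh(general_score(s, h, W))

# Toy 2-d decoder state s, encoder state h, identity weight matrix.
s, h = [1.0, 0.0], [0.6, 0.8]
I = [[1.0, 0.0], [0.0, 1.0]]
print(dot_score(s, h))              # 0.6
print(general_score(s, h, I))       # 0.6 (identity W reduces to the dot score)
print(tanh_general_score(s, h, I))  # tanh(0.6)
```

In a full seq2seq decoder, these scores would be computed against every encoder state and normalized with a softmax to produce the attention weights.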
In particular, the model integrating tanh activation proves a promising approach to improving chatbot quality in the customer support context. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention%20weight" title="attention weight">attention weight</a>, <a href="https://publications.waset.org/abstracts/search?q=chatbot" title=" chatbot"> chatbot</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder-decoder" title=" encoder-decoder"> encoder-decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention" title=" neural generative attention"> neural generative attention</a>, <a href="https://publications.waset.org/abstracts/search?q=score%20function" title=" score function"> score function</a>, <a href="https://publications.waset.org/abstracts/search?q=sequence-to-sequence" title=" sequence-to-sequence"> sequence-to-sequence</a> </p> <a href="https://publications.waset.org/abstracts/176622/evaluating-generative-neural-attention-weights-based-chatbot-on-customer-support-twitter-dataset" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6009</span> Learning Traffic Anomalies from Generative Models on Real-Time Observations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fotis%20I.%20Giasemis">Fotis I. 
Giasemis</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandros%20Sopasakis"> Alexandros Sopasakis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study focuses on detecting traffic anomalies using generative models applied to real-time observations. By integrating a Graph Neural Network with an attention-based mechanism within the Spatiotemporal Generative Adversarial Network framework, we enhance the capture of both spatial and temporal dependencies in traffic data. Leveraging minute-by-minute observations from cameras distributed across Gothenburg, our approach provides a more detailed and precise anomaly detection system, effectively capturing the complex topology and dynamics of urban traffic networks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic" title="traffic">traffic</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=GNN" title=" GNN"> GNN</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a> </p> <a href="https://publications.waset.org/abstracts/193544/learning-traffic-anomalies-from-generative-models-on-real-time-observations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">8</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6008</span> Generative AI in Higher Education: Pedagogical and Ethical Guidelines for Implementation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Judit%20Vilarmau">Judit Vilarmau</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative AI is emerging rapidly and transforming higher education in many ways, posing new challenges and disrupting traditional models and methods. The studies and authors reviewed remark on its impact on ethics, curricula, and pedagogical methods. Students increasingly use generative AI for study, as a virtual tutor, and as a resource for generating work and completing assignments, so it is crucial for educators to ensure that students use generative AI ethically. Generative AI also offers relevant benefits for educators, helping them personalize learning experiences and promote self-regulation. Educators must explore tools like ChatGPT to innovate without neglecting an ethical and pedagogical perspective. Eighteen studies were systematically reviewed, and the findings provide implementation guidelines with pedagogical and ethical considerations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ethics" title="ethics">ethics</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20artificial%20intelligence" title=" generative artificial intelligence"> generative artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=guidelines" title=" guidelines"> guidelines</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a>, <a href="https://publications.waset.org/abstracts/search?q=pedagogy" title=" pedagogy"> pedagogy</a> </p> <a href="https://publications.waset.org/abstracts/179093/generative-ai-in-higher-education-pedagogical-and-ethical-guidelines-for-implementation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6007</span> Time Series Simulation by Conditional Generative Adversarial Net</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rao%20Fu">Rao Fu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jie%20Chen"> Jie Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shutian%20Zeng"> Shutian Zeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Yiping%20Zhuang"> Yiping Zhuang</a>, <a href="https://publications.waset.org/abstracts/search?q=Agus%20Sudjianto"> Agus Sudjianto</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data 
analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN can learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It can also generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion of the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real-data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper. 
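As the abstract notes, generated scenarios feed directly into VaR and ES calculations. A minimal sketch of that final step, using a historical-simulation-style estimator over illustrative, made-up P&L scenarios rather than actual CGAN output:

```python
def var_es(pnl, alpha=0.95):
    """Estimate VaR and ES at confidence level alpha from simulated P&L scenarios.

    VaR is the loss threshold exceeded with probability about 1 - alpha;
    ES is the average loss at or beyond that threshold.
    """
    n = len(pnl)
    losses = sorted(-x for x in pnl)          # convert P&L to losses, ascending
    k = min(int(round(alpha * n)), n - 1)     # index of the alpha-quantile loss
    var = losses[k]
    tail = losses[k:]                         # losses at or beyond VaR
    es = sum(tail) / len(tail)
    return var, es

# 100 illustrative scenarios: a repeating P&L pattern from -10 to +10.
scenarios = [float((i % 21) - 10) for i in range(100)]
var, es = var_es(scenarios, alpha=0.95)
print(var, es)
```

In the paper's setting, `scenarios` would instead be draws from the trained CGAN conditioned on current market information.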
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conditional%20generative%20adversarial%20net" title="conditional generative adversarial net">conditional generative adversarial net</a>, <a href="https://publications.waset.org/abstracts/search?q=market%20and%20credit%20risk%20management" title=" market and credit risk management"> market and credit risk management</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series" title=" time series"> time series</a> </p> <a href="https://publications.waset.org/abstracts/123535/time-series-simulation-by-conditional-generative-adversarial-net" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6006</span> Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20Rose%20T.%20Alipo-on">Jacqueline Rose T. Alipo-on</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesca%20Isabelle%20F.%20Escobar"> Francesca Isabelle F. Escobar</a>, <a href="https://publications.waset.org/abstracts/search?q=Myles%20Joshua%20T.%20Tan"> Myles Joshua T. 
Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hezerul%20Abdul%20Karim"> Hezerul Abdul Karim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nouar%20Al%20Dahoul"> Nouar Al Dahoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With advances in computing, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five heartbeat classes. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced by generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, achieved the highest average precision of 98.93%. 
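The recall, precision, and F1 comparisons reported above rest on the standard one-vs-rest definitions, which can be sketched as follows (the labels below are made up for illustration and are not the paper's data):

```python
def precision_recall_f1(y_true, y_pred, cls):
    """One-vs-rest precision, recall, and F1 for a single heartbeat class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with two of the five heartbeat classes (N = normal, V = ventricular).
y_true = ["N", "N", "V", "V", "N"]
y_pred = ["N", "V", "V", "V", "N"]
p, r, f = precision_recall_f1(y_true, y_pred, "V")
print(p, r, f)  # 2/3, 1.0, 0.8
```

The paper's five-fold averages are these per-class quantities computed on each fold and averaged.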
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heartbeat%20classification" title="heartbeat classification">heartbeat classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiogram%20signals" title=" electrocardiogram signals"> electrocardiogram signals</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=ResNet-50" title=" ResNet-50"> ResNet-50</a> </p> <a href="https://publications.waset.org/abstracts/162763/electrocardiogram-based-heartbeat-classification-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162763.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6005</span> DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anika%20Chebrolu">Anika Chebrolu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mono-targeted drugs can be of limited efficacy against complex diseases. 
Recently, multi-target drug design has emerged as a promising tool against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in vitro trials. 
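The generator-predictor feedback loop described above can be caricatured with a toy example. The "predictor" below is a hypothetical stand-in (simple token counting, not a Graph Attention Network), the "ligands" are strings over a four-letter alphabet, and the update rule is a basic reinforcement heuristic rather than DeepLig's actual training procedure:

```python
import random

random.seed(0)
ALPHABET = "ABCD"

def predict_affinity(mol, target):
    """Hypothetical stand-in for the predictive network: score a candidate
    by the frequency of the token the target 'prefers'."""
    return mol.count(target) / len(mol)

def average_affinity(mol, targets):
    """Average predicted affinity across all targets, as in multi-target scoring."""
    return sum(predict_affinity(mol, t) for t in targets) / len(targets)

def sample(weights, length=8):
    """Generator: sample a candidate string from the current token distribution."""
    return "".join(random.choices(ALPHABET, weights=weights, k=length))

weights = [1.0, 1.0, 1.0, 1.0]  # uniform generator to start
targets = ["A", "B"]            # two toy "protein targets"

for step in range(200):
    candidates = [sample(weights) for _ in range(8)]
    best = max(candidates, key=lambda m: average_affinity(m, targets))
    # Feedback step: reinforce the tokens of the best-scoring candidate.
    for ch in best:
        weights[ALPHABET.index(ch)] += 0.05

probs = [w / sum(weights) for w in weights]
```

After the loop, the generator's probability mass concentrates on the tokens both targets reward, mirroring (very loosely) how feedback from the predictive network steers generation toward high average binding affinity.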
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drug%20design" title="drug design">drug design</a>, <a href="https://publications.waset.org/abstracts/search?q=multitargeticity" title=" multitargeticity"> multitargeticity</a>, <a href="https://publications.waset.org/abstracts/search?q=de-novo" title=" de-novo"> de-novo</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a> </p> <a href="https://publications.waset.org/abstracts/171394/deeplig-a-de-novo-computational-drug-design-approach-to-generate-multi-targeted-drugs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">97</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6004</span> A Grounded Theory of Educational Leadership Development Using Generative Dialogue</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elizabeth%20Hartney">Elizabeth Hartney</a>, <a href="https://publications.waset.org/abstracts/search?q=Keith%20Borkowsky"> Keith Borkowsky</a>, <a href="https://publications.waset.org/abstracts/search?q=Jo%20Axe"> Jo Axe</a>, <a href="https://publications.waset.org/abstracts/search?q=Doug%20Hamilton"> Doug Hamilton</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this research is to develop a grounded theory of educational leadership development, using an approach to initiating and maintaining professional growth in school principals and vice principals termed generative dialogue. 
The research was conducted in a relatively affluent, urban school district in Western Canada. Generative dialogue interviews were conducted by a team of consultants, and anonymous data in the form of handwritten notes were voluntarily submitted to the research team. The data were transcribed and analyzed using grounded theory. The results indicate that a key focus of educational leadership development is navigating relationships within the school setting and that the generative dialogue process helps principals and vice principals explore how they might do this. Applicability and limitations of the study are addressed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20dialogue" title="generative dialogue">generative dialogue</a>, <a href="https://publications.waset.org/abstracts/search?q=school%20principals" title=" school principals"> school principals</a>, <a href="https://publications.waset.org/abstracts/search?q=grounded%20theory" title=" grounded theory"> grounded theory</a>, <a href="https://publications.waset.org/abstracts/search?q=leadership%20development" title=" leadership development"> leadership development</a> </p> <a href="https://publications.waset.org/abstracts/92456/a-grounded-theory-of-educational-leadership-development-using-generative-dialogue" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92456.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6003</span> Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=P.%20Karthick">P. Karthick</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Mahesh"> K. Mahesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video has become an increasingly significant part of our everyday digital lives. With richer content and ever-higher resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video content. In this paper, we propose a first step toward a complete deep video compression model that jointly optimizes all video compression components. The method splits the video into frames, compares the images using convolutional neural networks (CNNs) to remove duplicates, and replaces duplicate images with a single repeated image by recognizing and detecting minute changes using a generative adversarial network (GAN), with the sequence recorded by a long short-term memory (LSTM) network. Instead of the complete image, only the small changes generated using the GAN are substituted, which enables frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec, converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and a deviation of more than 50% in size compared with the original video. 
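The duplicate-removal step described above can be sketched with a deliberately simplified stand-in: the paper compares frames with a CNN, whereas the sketch below uses a mean absolute pixel-difference threshold purely to illustrate how representing a run of near-identical frames by its first frame yields frame-level compression:

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two equal-sized grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def drop_duplicates(frames, threshold=1.0):
    """Keep a frame only if it differs enough from the last kept frame.

    Simplified stand-in for the paper's CNN-based duplicate detection: a run of
    near-duplicate frames is represented by its first frame.
    """
    kept = [frames[0]]
    for frame in frames[1:]:
        if mean_abs_diff(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept

# Four tiny 4-"pixel" frames: frames 0-2 are near-identical, frame 3 differs.
frames = [
    [10, 10, 10, 10],
    [10, 10, 11, 10],        # negligible change -> treated as a duplicate
    [10, 10, 10, 10],
    [200, 200, 200, 200],    # large change -> kept
]
kept = drop_duplicates(frames, threshold=1.0)
print(len(kept))  # 2
```

In the full pipeline, the dropped frames would be reconstructed from the kept frame plus the small GAN-generated changes.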
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title="video compression">video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=K-means%20clustering" title=" K-means clustering"> K-means clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20visualization" title=" pixel visualization"> pixel visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=stochastic%20gradient%20descent" title=" stochastic gradient descent"> stochastic gradient descent</a>, <a href="https://publications.waset.org/abstracts/search?q=frame%20per%20second%20extraction" title=" frame per second extraction"> frame per second extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20channel%20extraction" title=" RGB channel extraction"> RGB channel extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=self-detection%20and%20deciding%20system" title=" self-detection and deciding system"> self-detection and deciding system</a> </p> <a href="https://publications.waset.org/abstracts/138827/efficient-video-compression-technique-using-convolutional-neural-networks-and-generative-adversarial-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138827.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right 
rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6002</span> Graph Neural Networks and Rotary Position Embedding for Voice Activity Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=YingWei%20Tan">YingWei Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=XueFeng%20Ding"> XueFeng Ding</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Attention-based voice activity detection models have gained significant attention in recent years due to their fast training speed and ability to capture a wide contextual range. The inclusion of multi-head attention and position embedding in the architecture is crucial. Having multiple attention heads allows for differential focus on different parts of the sequence, while position embedding provides guidance for modeling dependencies between elements at various positions in the input sequence. In this work, we propose an approach that considers each head as a node, enabling the application of graph neural networks (GNN) to identify correlations among the different nodes. In addition, we adopt rotary position embedding (RoPE), which encodes absolute positional information into the input sequence via a rotation matrix and naturally incorporates explicit relative position information into the self-attention module. We evaluate the effectiveness of our method on a synthetic dataset, and the results demonstrate its superiority over the baseline CRNN in noisy, low signal-to-noise-ratio scenarios, while also exhibiting robustness across different noise types.
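The rotation-matrix encoding just described can be sketched in a few lines of plain Python. This is the standard RoPE formulation rather than the authors' implementation, and the query/key vectors are toy values; the check at the end illustrates why RoPE yields relative position information: the attention score depends only on the offset between positions.

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotary position embedding: rotate each (even, odd) coordinate
    pair by an angle that grows with the token position."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = [1.0, 0.5, -0.3, 0.8]
k = [0.2, -0.7, 0.4, 0.1]

# Scores depend only on the relative offset m - n, not absolute positions:
s1 = dot(rope(q, 5), rope(k, 3))   # offset 2
s2 = dot(rope(q, 9), rope(k, 7))   # offset 2 again
assert abs(s1 - s2) < 1e-9
```

At position 0 the rotation is the identity, so the embedding leaves the vector untouched; this is what makes RoPE compose cleanly with a self-attention module.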
In summary, our proposed framework effectively combines the strengths of CNNs and RNNs (LSTM) and further enhances detection performance through the integration of graph neural networks and rotary position embedding. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20activity%20detection" title="voice activity detection">voice activity detection</a>, <a href="https://publications.waset.org/abstracts/search?q=CRNN" title=" CRNN"> CRNN</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20networks" title=" graph neural networks"> graph neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=rotary%20position%20embedding" title=" rotary position embedding"> rotary position embedding</a> </p> <a href="https://publications.waset.org/abstracts/179624/graph-neural-networks-and-rotary-position-embedding-for-voice-activity-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179624.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6001</span> A Generative Adversarial Framework for Bounding Confounded Causal Effects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yaowei%20Hu">Yaowei Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongkai%20Wu"> Yongkai Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lu%20Zhang"> Lu Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xintao%20Wu"> Xintao Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Causal inference from observational data is finding wide
applications in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, with an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method does not make any assumption about the data generating process and the type of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=average%20causal%20effect" title="average causal effect">average causal effect</a>, <a href="https://publications.waset.org/abstracts/search?q=hidden%20confounding" title=" hidden confounding"> hidden confounding</a>, <a href="https://publications.waset.org/abstracts/search?q=bound%20estimation" title=" bound estimation"> bound estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20learning" title=" generative adversarial learning"> generative adversarial learning</a> </p> <a href="https://publications.waset.org/abstracts/127808/a-generative-adversarial-framework-for-bounding-confounded-causal-effects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127808.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads 
<span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6000</span> Next-Gen Solutions: How Generative AI Will Reshape Businesses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishwarya%20Rai">Aishwarya Rai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study explores the transformative influence of generative AI on startups, businesses, and industries. We will explore how large businesses can benefit in the area of customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. Zooming into the retail and CPG industry, industry findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating economic potential, we acknowledge challenges like external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. 
This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20AI" title="generative AI">generative AI</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title=" digital transformation"> digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=LLM" title=" LLM"> LLM</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=startups" title=" startups"> startups</a>, <a href="https://publications.waset.org/abstracts/search?q=businesses" title=" businesses"> businesses</a> </p> <a href="https://publications.waset.org/abstracts/179625/next-gen-solutions-how-generative-ai-will-reshape-businesses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5999</span> Generative AI: A Comparison of Conditional Tabular Generative Adversarial Networks and Conditional Tabular Generative Adversarial Networks with Gaussian Copula in Generating Synthetic Data with Synthetic Data Vault</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lakshmi%20Prayaga">Lakshmi Prayaga</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Chandra%20Prayaga.%20Aaron%20Wade"> Chandra Prayaga. Aaron Wade</a>, <a href="https://publications.waset.org/abstracts/search?q=Gopi%20Shankar%20Mallu"> Gopi Shankar Mallu</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsha%20Satya%20Pola"> Harsha Satya Pola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Synthetic data generated by Generative Adversarial Networks and Autoencoders is becoming more common to combat the problem of insufficient data for research purposes. However, generating synthetic data is a tedious task requiring an extensive mathematical and programming background. Open-source platforms such as the Synthetic Data Vault (SDV) and Mostly AI offer user-friendly tools, accessible to non-technical professionals, for generating synthetic data to augment existing data for further analysis. The SDV also provides additions to the generic GAN, such as the Gaussian copula. We present the results from two synthetic data sets (CTGAN data and CTGAN with Gaussian Copula) generated by the SDV and report the findings. The results indicate that the ROC curves and AUC values for the data generated by adding the Gaussian copula layer are much higher than those for the data generated by the CTGAN alone.
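For readers unfamiliar with the copula step, the core idea can be sketched for a single column using only the Python standard library. This is an illustration of the rank-to-normal transform, not SDV's actual API; the exponential toy column and sample sizes are invented for the example.

```python
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()

# Toy "real" column with a skewed, non-Gaussian marginal distribution.
real = sorted(random.expovariate(1.0) for _ in range(1000))

def to_normal_scores(sample):
    """Copula step: map each value to its empirical CDF rank, then
    through the inverse normal CDF, giving Gaussian marginals."""
    n = len(sample)
    return [nd.inv_cdf((r + 0.5) / n) for r in range(n)]

def from_normal_score(z, sample):
    """Invert: normal score -> uniform -> empirical quantile of the data."""
    u = nd.cdf(z)
    return sample[min(int(u * len(sample)), len(sample) - 1)]

scores = to_normal_scores(real)
# Model the latent space as standard normal, then sample synthetic values
# that follow the original skewed marginal shape.
synthetic = [from_normal_score(random.gauss(0.0, 1.0), real) for _ in range(5)]
print(synthetic)
```

In a multi-column setting the Gaussian copula additionally models the correlation matrix of the normal scores, which is the layer the SDV stacks on top of the CTGAN in the comparison above.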
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data%20generation" title="synthetic data generation">synthetic data generation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=conditional%20tabular%20GAN" title=" conditional tabular GAN"> conditional tabular GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20copula" title=" Gaussian copula"> Gaussian copula</a> </p> <a href="https://publications.waset.org/abstracts/183000/generative-ai-a-comparison-of-conditional-tabular-generative-adversarial-networks-and-conditional-tabular-generative-adversarial-networks-with-gaussian-copula-in-generating-synthetic-data-with-synthetic-data-vault" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183000.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5998</span> Monitor Student Concentration Levels on Online Education Sessions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20K.%20Wijayarathna">M. K. Wijayarathna</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Buddika%20Harshanath"> S. M. Buddika Harshanath</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Monitoring student engagement has become a crucial part of the educational process and a reliable indicator of the capacity to retain information. 
As online classrooms have become commonplace, monitoring each student's concentration level in a virtual setting has grown both more important and more difficult. To profile student attention across various gradients of engagement, a study is planned using machine learning models. A convolutional neural network yields the findings and the confidence score of the high-accuracy model. In this research, convolutional neural networks are used to discover the essential emotions that are critical in defining various levels of participation. Students' attention levels were shown to be influenced by emotions such as calm, enjoyment, surprise, and fear. These data informed an improved virtual learning system that allows teachers to focus their support and advice on the students who need it. Student participation has emerged as a crucial component of the learning process and a consistent predictor of a student's capacity to retain material. The platform is planned to be implemented with convolutional neural networks. As a preliminary step, a video of the pupil is recorded; frames are then extracted from the recordings and processed with a convolutional neural network built with the Keras toolkit. Two convolutional neural network methods are planned to determine the pupils' attention level, and the predicted attention levels are displayed on the system's graphical user interface.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HTML5" title="HTML5">HTML5</a>, <a href="https://publications.waset.org/abstracts/search?q=JavaScript" title=" JavaScript"> JavaScript</a>, <a href="https://publications.waset.org/abstracts/search?q=Python%20flask%20framework" title=" Python flask framework"> Python flask framework</a>, <a href="https://publications.waset.org/abstracts/search?q=AI" title=" AI"> AI</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20user" title=" graphical user"> graphical user</a> </p> <a href="https://publications.waset.org/abstracts/153646/monitor-student-concentration-levels-on-online-education-sessions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5997</span> Explainable Graph Attention Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20Pham">David Pham</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongfeng%20Zhang"> Yongfeng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs such as Graph Neural Networks (GNN) on various data mining and machine learning tasks. 
However, most of the deep learning models on graphs cannot easily explain their predictions and are thus often labelled as “black boxes.” For example, Graph Attention Network (GAT) is a frequently used GNN architecture, which adopts an attention mechanism to carefully select the neighborhood nodes for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not and how the selected neighbors contribute to the final classification result. In this paper, we present a graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. We use a single model to target both the accuracy and explainability of problem spaces and show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability. 
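The neighborhood attention that XGAT inspects can be illustrated with a toy scalar-feature sketch. The scoring function here is a simplified stand-in (a weighted sum of the two node features), not GAT's learned LeakyReLU attention over concatenated embeddings; the graph and coefficients are invented for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def gat_layer(h, adj, a_self=0.1, a_neigh=1.0):
    """Simplified GAT-style aggregation on scalar node features:
    score each neighbor, softmax the scores into attention weights,
    and aggregate neighbors by those weights."""
    out, attn = {}, {}
    for i, neighbors in adj.items():
        scores = [a_self * h[i] + a_neigh * h[j] for j in neighbors]
        alpha = softmax(scores)
        attn[i] = alpha          # kept around: this is the explainability hook
        out[i] = sum(w * h[j] for w, j in zip(alpha, neighbors))
    return out, attn

h = {0: 1.0, 1: 0.5, 2: -0.2, 3: 2.0}
adj = {0: [1, 2, 3]}
out, attn = gat_layer(h, adj)
# The stored weights show which neighbor dominated the aggregation —
# the quantity an explainable model must justify.
assert abs(sum(attn[0]) - 1.0) < 1e-9
```

Exposing and constraining these per-neighbor weights, rather than discarding them after message passing, is the kind of unified selection strategy the abstract describes.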
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=explainable%20AI" title="explainable AI">explainable AI</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20attention%20network" title=" graph attention network"> graph attention network</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20network" title=" graph neural network"> graph neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=node%20classification" title=" node classification"> node classification</a> </p> <a href="https://publications.waset.org/abstracts/156796/explainable-graph-attention-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156796.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">199</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5996</span> A Deep Reinforcement Learning-Based Secure Framework against Adversarial Attacks in Power System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arshia%20Aflaki">Arshia Aflaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Hadis%20Karimipour"> Hadis Karimipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anik%20Islam"> Anik Islam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative Adversarial Attacks (GAAs) threaten critical sectors, ranging from fingerprint recognition to industrial control systems. Existing Deep Learning (DL) algorithms are not robust enough against this kind of cyber-attack. As one of the most critical industries in the world, the power grid is not an exception. 
In this study, a Deep Reinforcement Learning (DRL) based framework is proposed that assists the DL model in improving its robustness against generative adversarial attacks. Real-world smart grid stability data, used as an IIoT dataset, serve to test our method, which improves the classification accuracy of a deep learning model from around 57 percent to 96 percent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20attack" title="generative adversarial attack">generative adversarial attack</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title=" deep reinforcement learning"> deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=IIoT" title=" IIoT"> IIoT</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20system" title=" power system"> power system</a> </p> <a href="https://publications.waset.org/abstracts/188908/a-deep-reinforcement-learning-based-secure-framework-against-adversarial-attacks-in-power-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">37</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5995</span> Revolutionizing Gaming Setup Design: Utilizing Generative and Iterative Methods to Prop and Environment Design, Transforming the Landscape of Game Development Through
Automation and Innovation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20Malik">Rashmi Malik</a>, <a href="https://publications.waset.org/abstracts/search?q=Videep%20Mishra"> Videep Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative design has become a transformative, efficient approach to producing multiple iterations of any design project. The conventional way of modeling game elements is time-consuming and requires skilled artists: a 3D modeling tool such as 3ds Max or Blender is traditionally used to create the game library, with each asset taking its own modeling time. This study focuses on using generative design tools to increase efficiency at the prop- and environment-generation stage of game development. This involves procedural level generation and customized, regulated or randomized asset generation. The paper presents a system design approach using generative tools such as Grasshopper (visual scripting) and other scripting tools to automate the modeling of the game library. A single script can generate multiple products, creating a system that lets designers/artists customize props and environments. The main goal is to measure the efficacy of the automated system in creating a wide variety of game elements, further reducing the need for manual content creation and integrating it into the workflow of AAA and Indie Games.
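The "single script, many assets" idea can be sketched in plain Python rather than Grasshopper's visual scripting. The prop schema, parameter ranges, and naming scheme below are invented for illustration, not taken from the paper's system.

```python
import random

def generate_props(base, n, seed=None):
    """Procedurally derive n prop variants from one base definition:
    the same script yields a whole family of regulated-random assets."""
    rng = random.Random(seed)
    variants = []
    for i in range(n):
        v = dict(base)
        # Regulated randomness: scale jitters within bounds, rotation snaps.
        v["scale"] = round(base["scale"] * rng.uniform(0.8, 1.25), 2)
        v["rotation"] = rng.choice([0, 90, 180, 270])
        v["name"] = f"{base['name']}_{i:03d}"
        variants.append(v)
    return variants

crate = {"name": "crate", "scale": 1.0, "rotation": 0}
props = generate_props(crate, 5, seed=42)
assert len(props) == 5 and all(p["name"].startswith("crate_") for p in props)
```

In a production pipeline the variant dictionaries would drive the actual geometry generation; the point of the sketch is that one parametric definition replaces five hand-modeled assets.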
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iterative%20game%20design" title="iterative game design">iterative game design</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20design" title=" generative design"> generative design</a>, <a href="https://publications.waset.org/abstracts/search?q=gaming%20asset%20automation" title=" gaming asset automation"> gaming asset automation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20game%20design" title=" generative game design"> generative game design</a> </p> <a href="https://publications.waset.org/abstracts/173936/revolutionizing-gaming-setup-design-utilizing-generative-and-iterative-methods-to-prop-and-environment-design-transforming-the-landscape-of-game-development-through-automation-and-innovation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173936.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5994</span> Turbulent Channel Flow Synthesis using Generative Adversarial Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Lyne">John M. Lyne</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Andrea%20Scott"> K. Andrea Scott</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In fluid dynamics, direct numerical simulations (DNS) of turbulent flows require a large number of nodes to appropriately resolve all scales of energy transfer. Due to the size of these databases, sharing these datasets amongst the academic community is a challenge.
Recent work has been done to investigate the use of super-resolution to enable database sharing, where a low-resolution flow field is super-resolved to high resolutions using a neural network. Recently, Generative Adversarial Networks (GANs) have grown in popularity with impressive results in the generation of faces, landscapes, and more. This work investigates the generation of unique high-resolution channel flow velocity fields from a low-dimensional latent space using a GAN. The training objective of the GAN is to generate samples in which the distribution of the generated samples is ideally indistinguishable from the distribution of the training data. In this study, the network is trained using samples drawn from a statistically stationary channel flow at a Reynolds number of 560. Results show that the turbulent statistics and energy spectra of the generated flow fields are within reasonable agreement with those of the DNS data, demonstrating that GANs can produce the intricate multi-scale phenomena of turbulence.
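The training objective described here — making the generated distribution indistinguishable from the training distribution — is the standard GAN minimax game. A minimal sketch of the two losses, using the standard formulation rather than the authors' exact implementation:

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator objective: ascend E[log D(x)] + E[log(1 - D(G(z)))],
    written here as a quantity to minimize."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: maximize log D(G(z))."""
    return -math.log(d_fake)

# At the theoretical optimum the generated and data distributions match,
# the discriminator outputs 1/2 everywhere, and d_loss equals 2*log(2).
eq = d_loss(0.5, 0.5)
assert abs(eq - 2 * math.log(2)) < 1e-12
```

When this equilibrium is approached during training, samples from the low-dimensional latent space inherit the statistics of the DNS training data, which is why the generated fields can reproduce turbulent spectra.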
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computational%20fluid%20dynamics" title="computational fluid dynamics">computational fluid dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20flow" title=" channel flow"> channel flow</a>, <a href="https://publications.waset.org/abstracts/search?q=turbulence" title=" turbulence"> turbulence</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a> </p> <a href="https://publications.waset.org/abstracts/141594/turbulent-channel-flow-synthesis-using-generative-adversarial-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141594.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">206</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5993</span> Neural Rendering Applied to Confocal Microscopy Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Li">Daniel Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a novel application of neural rendering methods to confocal microscopy. Neural rendering and implicit neural representations have developed at a remarkable pace, and are prevalent in modern 3D computer vision literature. However, they have not yet been applied to optical microscopy, an important imaging field where 3D volume information may be heavily sought after. In this paper, we employ neural rendering on confocal microscopy focus stack data and share the results. 
We highlight the benefits and potential of adding neural rendering to the toolkit of microscopy image processing techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20rendering" title="neural rendering">neural rendering</a>, <a href="https://publications.waset.org/abstracts/search?q=implicit%20neural%20representations" title=" implicit neural representations"> implicit neural representations</a>, <a href="https://publications.waset.org/abstracts/search?q=confocal%20microscopy" title=" confocal microscopy"> confocal microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20processing" title=" medical image processing"> medical image processing</a> </p> <a href="https://publications.waset.org/abstracts/153909/neural-rendering-applied-to-confocal-microscopy-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">658</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5992</span> Detection of Atrial Fibrillation Using Wearables via Attentional Two-Stream Heterogeneous Networks </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huawei%20Bai">Huawei Bai</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianguo%20Yao"> Jianguo Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Fellow"> Fellow</a>, <a href="https://publications.waset.org/abstracts/search?q=IEEE"> IEEE</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Atrial fibrillation (AF) is the most common form of heart arrhythmia and is closely associated with mortality and 
morbidity in heart failure, stroke, and coronary artery disease. The development of single-spot optical sensors enables widespread photoplethysmography (PPG) screening, especially for AF, since it represents a more convenient and noninvasive approach. To our knowledge, most existing studies based on public and unbalanced datasets can barely handle the multiple noise sources in the real world and also lack interpretability. In this paper, we construct a large-scale PPG dataset using measurements collected from PPG wrist-watch devices worn by volunteers and propose an attention-based two-stream heterogeneous neural network (TSHNN). The first stream is a hybrid neural network consisting of a three-layer one-dimensional convolutional neural network (1D-CNN) and a two-layer attention-based bidirectional long short-term memory (Bi-LSTM) network to learn representations from temporally sampled signals. The second stream extracts latent representations from the PPG time-frequency spectrogram using a five-layer CNN. The outputs from both streams are fed into a fusion layer for the final outcome. Visualization of the learned attention weights demonstrates the effectiveness of the attention mechanism against noise. The experimental results show that the TSHNN outperforms all the competitive baseline approaches and, with 98.09% accuracy, achieves state-of-the-art performance.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PPG%20wearables" title="PPG wearables">PPG wearables</a>, <a href="https://publications.waset.org/abstracts/search?q=atrial%20fibrillation" title=" atrial fibrillation"> atrial fibrillation</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title=" feature fusion"> feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=hyber%20network" title=" hyber network"> hyber network</a> </p> <a href="https://publications.waset.org/abstracts/113139/detection-of-atrial-fibrillation-using-wearables-via-attentional-two-stream-heterogeneous-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/113139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5991</span> Attention Multiple Instance Learning for Cancer Tissue Classification in Digital Histopathology Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Afaf%20Alharbi">Afaf Alharbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Qianni%20Zhang"> Qianni Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The identification of malignant tissue in histopathological slides holds significant importance in both clinical settings and pathology research. This paper introduces a methodology aimed at automatically categorizing cancerous tissue through the utilization of a multiple-instance learning framework. 
This framework is specifically developed to acquire knowledge of the Bernoulli distribution of the bag label probability by employing neural networks. Furthermore, we put forward a neural network based permutation-invariant aggregation operator, equivalent to attention mechanisms, which is applied to the multi-instance learning network. Through empirical evaluation of an openly available colon cancer histopathology dataset, we provide evidence that our approach surpasses various conventional deep learning methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention%20multiple%20instance%20learning" title="attention multiple instance learning">attention multiple instance learning</a>, <a href="https://publications.waset.org/abstracts/search?q=MIL%20and%20transfer%20learning" title=" MIL and transfer learning"> MIL and transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=histopathological%20slides" title=" histopathological slides"> histopathological slides</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer%20tissue%20classification" title=" cancer tissue classification"> cancer tissue classification</a> </p> <a href="https://publications.waset.org/abstracts/167708/attention-multiple-instance-learning-for-cancer-tissue-classification-in-digital-histopathology-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167708.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5990</span> The Impact of Generative AI Illustrations on Aesthetic Symbol Consumption among Consumers: A Case Study of Japanese Anime Style</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Han-Yu%20Cheng">Han-Yu Cheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to explore the impact of AI-generated illustration works on the aesthetic symbol consumption of consumers in Taiwan. The advancement of artificial intelligence drawing has lowered the barriers to entry, enabling more individuals to easily enter the field of illustration. Using Japanese anime style as an example, with the development of Generative Artificial Intelligence (Generative AI), an increasing number of illustration works are being generated by machines, sparking discussions about aesthetics and art consumption. Through surveys and the analysis of consumer perspectives, this research investigates how this influences consumers' aesthetic experiences and the resulting changes in the traditional art market and among creators. The study reveals that among consumers in Taiwan, particularly those interested in Japanese anime style, there is a pronounced interest and curiosity surrounding the emergence of Generative AI. This curiosity is particularly notable among individuals interested in this style but lacking the technical skills required for creating such artworks. These works, rooted in elements of Japanese anime style, find ready acceptance among enthusiasts of this style due to their stylistic alignment. Consequently, they have garnered a substantial following. Furthermore, with the reduction in entry barriers, more individuals interested in this style but lacking traditional drawing skills have been able to participate in producing such works. 
Against the backdrop of ongoing debates about artistic value since the advent of artificial intelligence (AI), Generative AI-generated illustration works, while not entirely displacing traditional art, to a certain extent, fulfill the aesthetic demands of this consumer group, providing a similar or analogous aesthetic consumption experience. Additionally, this research underscores the advantages and limitations of Generative AI-generated illustration works within this consumption environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=generative%20AI" title="generative AI">generative AI</a>, <a href="https://publications.waset.org/abstracts/search?q=anime%20aesthetics" title=" anime aesthetics"> anime aesthetics</a>, <a href="https://publications.waset.org/abstracts/search?q=Japanese%20anime%20illustration" title=" Japanese anime illustration"> Japanese anime illustration</a>, <a href="https://publications.waset.org/abstracts/search?q=art%20consumption" title=" art consumption"> art consumption</a> </p> <a href="https://publications.waset.org/abstracts/173744/the-impact-of-generative-ai-illustrations-on-aesthetic-symbol-consumption-among-consumers-a-case-study-of-japanese-anime-style" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5989</span> Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Genady%20Grabarnik">Genady Grabarnik</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Serge%20Yaskolko"> Serge Yaskolko</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology that promises to automate areas that were previously very difficult to automate. The paper describes the introduction of generative AI into introductory computer and data science courses and analyzes its effect. Generative AI is incorporated into the educational process in two ways. For instructors, we create prompt templates for generating tasks and for grading students' work, including feedback on submitted assignments. For students, we introduce basic prompt engineering, which is then used to generate test cases from problem descriptions, to generate code snippets for single-block programming tasks, and to partition average-complexity programs into such blocks. The courses are run using Large Language Models, and feedback from instructors and students, along with course outcomes, is collected. The analysis shows a statistically significant positive effect and a preference for this approach among both groups of stakeholders.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=introductory%20computer%20and%20data%20science%20education" title="introductory computer and data science education">introductory computer and data science education</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20AI" title=" generative AI"> generative AI</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20language%20models" title=" large language models"> large language models</a>, <a href="https://publications.waset.org/abstracts/search?q=application%20of%20LLMS%20to%20computer%20and%20data%20science%20education" title=" application of LLMS to computer and data science education"> application of LLMS to computer and data science education</a> </p> <a href="https://publications.waset.org/abstracts/175778/improving-student-programming-skills-in-introductory-computer-and-data-science-courses-using-generative-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175778.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">58</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5988</span> Use of Generative Adversarial Networks (GANs) in Neuroimaging and Clinical Neuroscience Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Niloufar%20Yadgari">Niloufar Yadgari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> GANs are a potent form of deep learning models that have found success in various fields. 
They are part of the larger group of generative techniques, which aim to produce authentic data using a probabilistic model that learns distributions from actual samples. In clinical settings, GANs have demonstrated improved abilities in capturing spatially intricate, nonlinear, and possibly subtle disease impacts in contrast to conventional generative techniques. This review critically evaluates the current research on how GANs are being used in imaging studies of different neurological conditions like Alzheimer's disease, brain tumors, aging of the brain, and multiple sclerosis. We offer a clear explanation of different GAN techniques for each use case in neuroimaging and delve into the key hurdles, unanswered queries, and potential advancements in utilizing GANs in this field. Our goal is to connect advanced deep learning techniques with neurology studies, showcasing how GANs can assist in clinical decision-making and enhance our comprehension of the structural and functional aspects of brain disorders. 
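As background for the GAN techniques this review surveys, the following sketch shows the standard adversarial objectives in their non-saturating form. The discriminator outputs and function names are illustrative simplifications, not a neuroimaging model.

```python
import math

# Minimal sketch of the standard (non-saturating) GAN objectives.
# d_real / d_fake are the discriminator's sigmoid outputs on real and
# generated samples; this is illustrative only.

def discriminator_loss(d_real, d_fake, eps=1e-12):
    # Binary cross-entropy: push D(real) -> 1 and D(fake) -> 0.
    return (-sum(math.log(p + eps) for p in d_real) / len(d_real)
            - sum(math.log(1.0 - p + eps) for p in d_fake) / len(d_fake))

def generator_loss(d_fake, eps=1e-12):
    # Non-saturating form: the generator tries to push D(fake) -> 1.
    return -sum(math.log(p + eps) for p in d_fake) / len(d_fake)

# A confident discriminator yields a low D loss and a high G loss.
d_loss = discriminator_loss([0.9, 0.95], [0.1, 0.05])
g_loss = generator_loss([0.1, 0.05])
```

Training alternates minimizing these two losses, which is what lets GANs capture the intricate, nonlinear structure the abstract describes.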
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GAN" title="GAN">GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=pathology" title=" pathology"> pathology</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=neuro%20imaging" title=" neuro imaging"> neuro imaging</a> </p> <a href="https://publications.waset.org/abstracts/188651/use-of-generative-adversarial-networks-gans-in-neuroimaging-and-clinical-neuroscience-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">33</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5987</span> Application of Neural Petri Net to Electric Control System Fault Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadiq%20J.%20Abou-Loukh">Sadiq J. Abou-Loukh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present work deals with the implementation of Petri nets, which possess excellent modeling ability, to establish a fault diagnosis model. Fault diagnosis of control systems has received considerable attention in recent decades. A formalism for representing neural networks based on Petri nets is presented, and a Neural Petri Net (NPN) reasoning model is investigated and developed for the fault diagnosis of an electric control system.
The proposed NPN is easy to construct and highly efficient, and it describes fault status within the system more clearly than traditional testing methods. The proposed system is tested, and the simulation results are given. The implementation demonstrates the advantages of the NPN method and can serve as a guide for various online applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=petri%20net" title="petri net">petri net</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20petri%20net" title=" neural petri net"> neural petri net</a>, <a href="https://publications.waset.org/abstracts/search?q=electric%20control%20system" title=" electric control system"> electric control system</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20diagnosis" title=" fault diagnosis"> fault diagnosis</a> </p> <a href="https://publications.waset.org/abstracts/16653/application-of-neural-petri-net-to-electric-control-system-fault-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5986</span> Generating Swarm Satellite Data Using Long Short-Term Memory and Generative Adversarial Networks for the Detection of Seismic Precursors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yaxin%20Bi">Yaxin Bi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate prediction and understanding of the evolution mechanisms of earthquakes remain challenging in the fields of geology,
geophysics, and seismology. This study leverages Long Short-Term Memory (LSTM) networks and Generative Adversarial Networks (GANs), a generative model tailored to time-series data, for generating synthetic time series data based on Swarm satellite data, which will be used for detecting seismic anomalies. LSTMs demonstrated commendable predictive performance in generating synthetic data across multiple countries. In contrast, the GAN models struggled to generate synthetic data, often producing non-informative values, although they were able to capture the data distribution of the time series. These findings highlight both the promise and challenges associated with applying deep learning techniques to generate synthetic data, underscoring the potential of deep learning in generating synthetic electromagnetic satellite data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LSTM" title="LSTM">LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=earthquake" title=" earthquake"> earthquake</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data" title=" synthetic data"> synthetic data</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20AI" title=" generative AI"> generative AI</a>, <a href="https://publications.waset.org/abstracts/search?q=seismic%20precursors" title=" seismic precursors"> seismic precursors</a> </p> <a href="https://publications.waset.org/abstracts/187478/generating-swarm-satellite-data-using-long-short-term-memory-and-generative-adversarial-networks-for-the-detection-of-seismic-precursors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187478.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge 
badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5985</span> A Survey of Response Generation of Dialogue Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yifan%20Fan">Yifan Fan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xudong%20Luo"> Xudong Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Pingping%20Lin"> Pingping Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An essential task in the field of artificial intelligence is to allow computers to interact with people through natural language. Therefore, research on virtual assistants and dialogue systems has received widespread attention from industry and academia. Response generation plays a crucial role in dialogue systems, so to push this topic forward, this paper surveys various methods for response generation. We sort these methods into three categories. The first includes finite-state-machine methods, framework methods, and instance methods. The second contains full-text indexing methods, ontology methods, vast-knowledge-base methods, and some others. The third covers retrieval methods and generative methods. We also discuss some hybrid methods based on knowledge and deep learning. We compare their advantages and disadvantages and point out ways in which these studies can be improved further. Our discussion covers studies published in leading conferences such as IJCAI and AAAI in recent years.
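To illustrate the retrieval category of response-generation methods mentioned in this survey, here is a toy sketch that selects a stored reply by token overlap. Real systems use learned rankers and much richer similarity measures; all names here are hypothetical.

```python
# Toy illustration of retrieval-based response generation: pick the stored
# reply whose query has the highest token overlap (Jaccard similarity) with
# the user's message. This only shows the retrieve-then-respond flow.

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve_response(message: str, pairs: list) -> str:
    """pairs is a list of (stored_query, stored_reply) tuples."""
    query, reply = max(pairs, key=lambda p: jaccard(message, p[0]))
    return reply

pairs = [
    ("what time do you open", "We open at 9am."),
    ("where are you located", "We are on Main Street."),
]
print(retrieve_response("when do you open", pairs))  # "We open at 9am."
```

Generative methods, by contrast, would synthesize the reply token by token instead of selecting it from a fixed set.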
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=generative" title=" generative"> generative</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge" title=" knowledge"> knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=response%20generation" title=" response generation"> response generation</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval" title=" retrieval"> retrieval</a> </p> <a href="https://publications.waset.org/abstracts/128195/a-survey-of-response-generation-of-dialogue-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128195.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5984</span> Local Boundary Analysis for Generative Theory of Tonal Music: From the Aspect of Classic Music Melody Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Po-Chun%20Wang">Po-Chun Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan-Ru%20Lai"> Yan-Ru Lai</a>, <a href="https://publications.waset.org/abstracts/search?q=Sophia%20I.%20C.%20Lin"> Sophia I. C. Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Alvin%20W.%20Y.%20Su"> Alvin W. Y. Su</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Generative Theory of Tonal Music (GTTM) provides systematic approaches to recognizing local boundaries of music. 
The rules have been implemented in some automated melody segmentation algorithms. Besides, there are also deep learning methods that apply GTTM features to boundary detection tasks. However, these studies can face constraints such as a lack of label data or inconsistent labels. The GTTM database is currently the most widely used dataset of its kind, including manually labeled GTTM rules and local boundaries. Even so, we found some problems with these labels: they sometimes conflict with GTTM rules, and since the data were labeled at different times by multiple musicians, the labels are not always consistent in scope. Therefore, in this paper, we examine this database with musicians from the perspective of classical music and relabel the scores. The relabeled database - GTTM Database v2.0 - will be released for academic research use. Although experimental and statistical results show that the relabeled database is more consistent, the improvement in boundary detection is not substantial. It seems that clues beyond the GTTM rules will be needed for boundary detection in the future.
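As an illustration of what local boundary detection on a melody involves, the sketch below marks a boundary at a locally maximal inter-onset interval, loosely in the spirit of GTTM's grouping preference rules (e.g., GPR 2, which favors boundaries after relatively long gaps). This is not the paper's algorithm or the relabeling procedure.

```python
# Illustrative local-boundary rule: a boundary is placed after note i when
# the gap following it is strictly larger than both neighboring gaps
# (a locally maximal inter-onset interval).

def local_boundaries(onsets: list) -> list:
    """onsets: note onset times in beats, ascending. Returns note indices
    after which a local boundary is detected."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return [i for i in range(1, len(gaps) - 1)
            if gaps[i] > gaps[i - 1] and gaps[i] > gaps[i + 1]]

# Eighth notes, then a long gap after the 4th note, then eighth notes again.
onsets = [0.0, 0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
print(local_boundaries(onsets))  # [3]: boundary after the long gap
```

Full GTTM segmentation combines several such preference rules (pitch leaps, dynamics, articulation) rather than inter-onset intervals alone.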
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dataset" title="dataset">dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=GTTM" title=" GTTM"> GTTM</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20boundary" title=" local boundary"> local boundary</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/156472/local-boundary-analysis-for-generative-theory-of-tonal-music-from-the-aspect-of-classic-music-melody-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5983</span> Influence of the Refractory Period on Neural Networks Based on the Recognition of Neural Signatures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Luis%20Carrillo-Medina">José Luis Carrillo-Medina</a>, <a href="https://publications.waset.org/abstracts/search?q=Roberto%20Latorre"> Roberto Latorre</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Experimental evidence has revealed that different living neural systems can sign their output signals with some specific neural signature. Although experimental and modeling results suggest that neural signatures can have an important role in the activity of neural networks in order to identify the source of the information or to contextualize a message, the functional meaning of these neural fingerprints is still unclear. 
The existence of cellular mechanisms to identify the origin of individual neural signals can be a powerful information processing strategy for the nervous system. We have recently built different models to study the ability of a neural network to process information based on the emission and recognition of specific neural fingerprints. In this paper, we further analyze the features that can influence the information-processing ability of this kind of network. In particular, we focus on the role that the duration of each neuron's refractory period after emitting a signed message can play in the network's collective dynamics. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20signature" title="neural signature">neural signature</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20fingerprint" title=" neural fingerprint"> neural fingerprint</a>, <a href="https://publications.waset.org/abstracts/search?q=processing%20based%20on%20signal%20identification" title=" processing based on signal identification"> processing based on signal identification</a>, <a href="https://publications.waset.org/abstracts/search?q=self-organizing%20neural%20network" title=" self-organizing neural network"> self-organizing neural network</a> </p> <a href="https://publications.waset.org/abstracts/20408/influence-of-the-refractory-period-on-neural-networks-based-on-the-recognition-of-neural-signatures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5982</span> Multi-Stream Graph Attention Network for Recommendation with Knowledge
Graph</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhifei%20Hu">Zhifei Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Xia"> Feng Xia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, graph neural networks have been widely used in knowledge graph recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way to extract information. In order to better surface useful entity information in the knowledge graph for the current recommendation task, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from the perspectives of both users and items. Specifically, we use an attention mechanism from the user's perspective to distill the domain node information of the predicted item in the knowledge graph, enhancing the user's information about items and generating the feature representation of the predicted item. Since a user's historical click items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and the similarity between the item to be predicted and entities, aggregates the neighborhood entity information of the user's historical click items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with state-of-the-art models, our proposed model better captures entity information in the knowledge graph, which demonstrates the validity and accuracy of the model.
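To make the attention aggregation described in this abstract concrete, here is a minimal sketch that scores an item's neighboring entities against a user vector, softmaxes the scores, and returns the weighted sum as an enhanced representation. The toy vectors and dot-product scoring are simplified stand-ins for MSGAT's learned components.

```python
import math

# Minimal graph-attention aggregation: weight each neighboring entity
# embedding by its softmaxed dot-product score against a user vector,
# then return the weighted sum.

def attend(user: list, neighbors: list) -> list:
    scores = [sum(u * e for u, e in zip(user, n)) for n in neighbors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(neighbors[0])
    return [sum(w * n[d] for w, n in zip(weights, neighbors))
            for d in range(dim)]

user = [1.0, 0.0]
neighbors = [[1.0, 0.0], [0.0, 1.0]]  # the first neighbor matches the user
rep = attend(user, neighbors)          # weighted toward the matching entity
```

In MSGAT, multiple such streams (user-side and item-side) are computed and combined; here a single stream suffices to show the mechanism.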
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=graph%20attention%20network" title="graph attention network">graph attention network</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20graph" title=" knowledge graph"> knowledge graph</a>, <a href="https://publications.waset.org/abstracts/search?q=recommendation" title=" recommendation"> recommendation</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20propagation" title=" information propagation"> information propagation</a> </p> <a href="https://publications.waset.org/abstracts/150710/multi-stream-graph-attention-network-for-recommendation-with-knowledge-graph" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150710.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=6">6</a></li> <li class="page-item"><a 
class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=200">200</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=201">201</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=neural%20generative%20attention&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 
2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
