<html><head><title>Search EUDL</title><link rel="icon" href="/images/favicon.ico"><link rel="stylesheet" type="text/css" href="/css/screen.css"><link rel="stylesheet" href="/css/zenburn.css"><meta http-equiv="Content-Type" content="charset=utf-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><meta name="Description" content="Search the thousands of conference proceedings and academic journal articles housed in the European Union Digital Library"><script type="text/javascript" src="https://services.eai.eu//load-signup-form/EAI"></script><script type="text/javascript" src="https://services.eai.eu//ujs/forms/signup/sso-client.js"></script><script type="text/javascript">if (!window.EUDL){ window.EUDL={} };EUDL.cas_url="https://account.eai.eu/cas";EUDL.profile_url="https://account.eai.eu";if(window.SSO){SSO.set_mode('eai')};</script><script type="text/javascript" src="/js/jquery.js"></script><script type="text/javascript" src="/js/jquery.cookie.js"></script><script type="text/javascript" src="/js/sso.js"></script><script type="text/javascript" src="/js/jscal2.js"></script><script type="text/javascript" src="/js/lang/en.js"></script><script type="text/javascript" src="/js/jquery.colorbox-min.js"></script><script type="text/javascript" src="/js/eudl.js"></script><script type="text/javascript" src="/js/content.js"></script><link rel="stylesheet" type="text/css" href="/css/jscal/jscal2.css"><link rel="stylesheet" type="text/css" href="/css/jscal/eudl/eudl.css"><link rel="stylesheet" type="text/css" href="/css/colorbox.css"></head><body><div id="eudl-page-head"><div id="eudl-page-header"><section id="user-area"><div><nav id="right-nav"><a href="/about">About</a> | <a href="/contact">Contact Us</a> | <a class="register" href="https://account.eai.eu/register?service=http%3A%2F%2Feudl.eu%2Fcontent">Register</a> | <a class="login" href="https://account.eai.eu/cas/login?service=http%3A%2F%2Feudl.eu%2Fcontent">Login</a></nav></div></section></div></div><div 
id="eudl-page"><header><section id="topbar-ads"><div><a href="https://eudl.eu/"><img class="eudl-logo-top" src="https://eudl.eu/images/eudl-logo.png"></a><a href="https://eai.eu/eai-community/?mtm_campaign=community_membership&amp;mtm_kwd=eudl_community&amp;mtm_source=eudl&amp;mtm_medium=eudl_banner"><img class="eudl-ads-top" src="https://eudl.eu/images/upbanner.png"></a></div></section><section id="menu"><nav><a href="/proceedings" class=""><span>Proceedings</span><span class="icon"></span></a><a href="/series" class=""><span>Series</span><span class="icon"></span></a><a href="/journals" class=""><span>Journals</span><span class="icon"></span></a><a href="/content" class="current"><span>Search</span><span class="icon"></span></a><a href="http://eai.eu/">EAI</a></nav></section></header><div id="eaientran"></div><section id="content"><section id="content-list"><form class="search-form" id="article_search" action="/contents" method="get"><section id="articles-search" class="search"><div class="metasearch"><select id="metadata-field" name="metadata"><option name="select">Select</option><option name="title">Title</option><option name="abstract">Abstract</option><option name="author_name">Author Names</option><option name="keywords">Keywords</option><option name="doi">DOI</option><option name="all">All</option></select></div><div class="field"><input id="search-field" name="q" placeholder="search terms here…" size="30" type="text" value=""><input type="submit" id="search-submit"><div class="summary"></div></div><div class="order">Ordered by <a href="/content?order_title=asc" class="filter "><span>title</span><span class="icon"></span></a> or <a href="/content?order_year=asc" class="filter current desc"><span>year</span> <span class="icon"></span></a></div></section><section id="articles-filters"><section id="search-filters"></section></section><section id="articles-results" class="search-results"><ul class="results-list"><li class="result-item article-light 
first"><h3><a href="/doi/10.4108/eai.12-3-2025.1150559">Front Matter</a></h3><dl class="metadata"><dd class="value">Editorial in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Mustafa Istanbullu, Anil Fernando, Marwan Omar</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p></p></div><div class="full"></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354638">DSR-Net: Dynamic Star Map Denoising Algorithm Based on Deep Reinforcement Learning</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Yifan Zhao, Shiji Song, Shaochen Jiang</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>As astronomical observation technology continues to progress, obtaining high-quality star maps provides us with valuable opportunities to explore the universe. However, the acquired star maps are often affected by various random noises, including speckle noise, Poisson noise, impulse noise, therma…</p></div><div class="full"><p>As astronomical observation technology continues to progress, obtaining high-quality star maps provides us with valuable opportunities to explore the universe. However, the acquired star maps are often affected by various random noises, including speckle noise, Poisson noise, impulse noise, thermal noise, Reynolds noise, and Gaussian noise. These noises degrade the image quality and limit the efficiency of scientific research. 
Traditional denoising methods are often limited in their effectiveness when faced with such complex noise and lack the ability to model the temporal features of dynamic star maps, making it difficult to handle the sparsity and complex background of dynamic star maps. Therefore, this paper introduces DSR-Net, a deep reinforcement learning-based dynamic star map denoising algorithm. The algorithm combines Convolutional Gated Recurrent Units (ConvGRU) and Region-based Reward Convolution (Rrc) modules, enabling it to capture the dynamic changes of star maps, effectively remove noise, and preserve important details in the star maps. Experimental results show that DSR-Net outperforms traditional denoising methods on multiple real dynamic star map datasets, providing an effective solution for improving the quality of star maps.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354630">Research on Improvement of Environment Perception Algorithm for Autonomous Driving Vehicles Based on YOLOv5</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Yuxi Yang</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>Autonomous driving relies heavily on vehicle object detection, and YOLOv5s is presently one of the best algorithms for this purpose. However, in extreme environments such as severe weather, cars have poor perception of the environment, and their ability to detect dynamic targets is greatly affected…</p></div><div class="full"><p>Autonomous driving relies heavily on vehicle object detection, and YOLOv5s is presently one of the best algorithms for this purpose. 
However, in extreme environments such as severe weather, cars have poor perception of the environment, and their ability to detect dynamic targets is greatly affected, resulting in low accuracy and poor robustness of the YOLOv5 object detection algorithm in pedestrian and vehicle detection. This article proposes an improved YOLOv5s algorithm. First, a selective attention mechanism (SimAM) module is used to weight the output of the convolutional layer, allowing the network to quickly capture regions of interest and suppress irrelevant information; at the same time, lightweight GSConv convolution replaces conventional convolution to compensate for semantic information loss and reduce model complexity. Second, a shallow detection layer is added, extending the original algorithm's three-scale detection to four scales and enhancing the learning ability for small-scale targets. Finally, SIoU loss is used as the bounding-box regression loss function to achieve more accurate localization of the predicted boxes. 
The improved YOLOv5s algorithm was tested on the CARLA simulation dataset, and simulation results showed that the average detection accuracy of the improved model reached 96.67%, which improved the detection accuracy for complex scenes.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354632">Semantic Segmentation-Based Enhancement of Visual SLAM Loop Closure Detection in Dynamic Indoor Environments</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Lu Wang, Chao Hu, Xiaoxia Lu</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>Current visual SLAM loop closure detection algorithms encounter significant challenges in dynamic environments, where moving objects such as pedestrians lead to inconsistencies in feature points, compromising map accuracy. This study proposes a novel visual SLAM loop closure detection algorithm lev…</p></div><div class="full"><p>Current visual SLAM loop closure detection algorithms encounter significant challenges in dynamic environments, where moving objects such as pedestrians lead to inconsistencies in feature points, compromising map accuracy. This study proposes a novel visual SLAM loop closure detection algorithm leveraging semantic segmentation, specifically designed for complex indoor dynamic scenarios. The proposed approach introduces the Bottleneck with Squeeze and Excitation Block (BnSEBlock) to improve the U-Net++ semantic segmentation model by incorporating residual connections, dilated convolutions, and an adaptive attention mechanism. 
Dynamic weights are assigned to semantic information based on motion intensity and centroid coordinates, which are derived through adaptive HDBSCAN clustering. Loop closure is identified by assessing the similarity between keyframes and candidate frames using these weighted parameters. Experimental evaluations on publicly available datasets demonstrate that the enhanced U-Net++ model achieves a Mean Intersection over Union (MIoU) of 76.9% and reduces the loss to 0.172. In comparison, the traditional bag-of-words-based approach yields a maximum similarity of 0.273 for loop images. The proposed algorithm shows a 61.57% improvement in localization accuracy within dynamic indoor environments.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354623">Optimization Strategy for Car Following and Lane Changing Models of CAV in Mixed Traffic Environments</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Wanyue Li, Haowen Cui, Liming Chen, Qing Zhan</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>A mixed traffic environment is an environment where different types of agents, for instance, Connected Autonomous Vehicles, Human Driven Vehicles, and pedestrians, share the same traffic space. In reality, such a mixed traffic environment is the most common for Connected Autonomous Vehicles, so it is p…</p></div><div class="full"><p>A mixed traffic environment is an environment where different types of agents, for instance, Connected Autonomous Vehicles, Human Driven Vehicles, and pedestrians, share the same traffic space. 
In reality, such a mixed traffic environment is the most common for Connected Autonomous Vehicles, so it is practical to study the trade-off between the safety and efficiency of Connected Autonomous Vehicles. The paper proposes an optimization strategy for car-following and lane-changing models of Connected Autonomous Vehicles in mixed-traffic environments. In this study, real-time data (e.g., acceleration, position, signal status, etc.) from CARLA's inbuilt sensors are utilised to dynamically adapt the vehicle's decision-making logic. Compared to existing offline optimisation methods, it can better adapt to the uncertainty in real road environments. In order to check the validity, we use CARLA to set up a simulation environment and evaluate the behavior of autonomous vehicles. Furthermore, we collect data through multiple sensors, such as acceleration sensors, to accurately measure vehicle status. Ultimately, we gather the data from the sensors and analyze it with mathematical methods. Through this experiment, we find that the lane change strategy avoids unnecessary lane changes and shows strong adaptability.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354596">Research on Demagnetization Fault Diagnosis of Permanent Magnet Linear Synchronous Motor Based on SqueezeNet Neural Network</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Tianye Guo</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>This paper studies the demagnetization fault diagnosis method of Permanent Magnet Linear Synchronous Motor based on SqueezeNet neural network. 
A new demagnetization fault signal acquisition method is proposed to adapt to the spatial topological structure constraints of the double-stator coreless mo…</p></div><div class="full"><p>This paper studies the demagnetization fault diagnosis method of Permanent Magnet Linear Synchronous Motor based on SqueezeNet neural network. A new demagnetization fault signal acquisition method is proposed to adapt to the spatial topological structure constraints of the double-stator coreless motor, and to obtain effective demagnetization fault signals without invasive measurement, so as to improve the accuracy of the fault signal source. At the same time, a simple linear motor demagnetization fault diagnosis device is designed. The one-dimensional demagnetization fault signal is converted into a two-dimensional image through the Recurrence Plot, and fault feature information is effectively extracted. In addition, this paper innovatively uses the lightweight SqueezeNet model for training. After continuous adjustment of the SqueezeNet network model, it can efficiently complete the classification of permanent magnet linear synchronous motor demagnetization faults.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354627">GAN-Based Architecture for Low-dose Computed Tomography Imaging Denoising </a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Yunuo Wang, Ningning Yang, Jialin Li</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>Generative Adversarial Networks (GANs) have surfaced as a revolutionary element within the domain of low-dose computed tomography (LDCT) imaging, providing an advanced resolution to the 
enduring issue of reconciling radiation exposure with image quality. This comprehensive review synthesizes the ra…</p></div><div class="full"><p>Generative Adversarial Networks (GANs) have surfaced as a revolutionary element within the domain of low-dose computed tomography (LDCT) imaging, providing an advanced resolution to the enduring issue of reconciling radiation exposure with image quality. This comprehensive review synthesizes the rapid advancements in GAN-based LDCT denoising techniques, examining the evolution from foundational architectures to state-of-the-art models incorporating advanced features such as anatomical priors, perceptual loss functions, and innovative regularization strategies. We critically analyze various GAN architectures, including conditional GANs (cGANs), CycleGANs, and Super-Resolution GANs (SRGANs), elucidating their unique strengths and limitations in the context of LDCT denoising. The evaluation provides both qualitative and quantitative results related to the improvements in performance in benchmark and clinical datasets with metrics such as PSNR, SSIM, and LPIPS. After highlighting the positive results, we discuss some of the challenges preventing a wider clinical use, including the interpretability of the images generated by GANs, synthetic artifacts, and the need for clinically relevant metrics. 
The review concludes by highlighting the essential significance of GAN-based methodologies in the progression of precision medicine via tailored LDCT denoising models, underlining the transformative possibilities presented by artificial intelligence within contemporary radiological practice.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354628">Research on Several Neural Network Structure for Automatic Modulation Recognition</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Yidong Xu</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>With the rapid development of communication technology, Automatic Modulation Recognition (AMR) based on Deep learning (DL) performs well relying on its unique advantages. However, due to the wide variety of neural networks, it is important to compare and analyze their performance and applicability …</p></div><div class="full"><p>With the rapid development of communication technology, Automatic Modulation Recognition (AMR) based on Deep learning (DL) performs well relying on its unique advantages. However, due to the wide variety of neural networks, it is important to compare and analyze their performance and applicability under specific conditions. In this paper, we select convolutional neural network (CNN) and Residual networks (Resnet), and continuously deepen the depth of the residual network to explore the influence of the accumulation of residual blocks. 
After simulating and analyzing the recognition effects of different network structures under -12 to 30 dB signal-to-noise ratio (SNR) conditions, the experimental results show that under the experimental conditions set up in this paper, the recognition rate of Resnet is about 4.8% higher than that of CNN on average when SNR is higher than 0 dB. After adding one and two residual blocks and fine-tuning the models, the recognition rates of both improved networks exceed 90% when SNR is higher than 10 dB.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354633">A Study of Web Code Generation Based on ChatGPT</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Zhan Shu, Zijie Dong</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>With the rise of large language models (LLMs) such as ChatGPT in the field of code generation, these models have demonstrated impressive abilities in understanding code semantics and implementing complex functionalities, especially showing potential in web development scenarios. Developing web appl…</p></div><div class="full"><p>With the rise of large language models (LLMs) such as ChatGPT in the field of code generation, these models have demonstrated impressive abilities in understanding code semantics and implementing complex functionalities, especially showing potential in web development scenarios. Developing web applications is a critical task widely used in interactive software systems across various fields. 
However, current automated web code generation still has limitations, often failing to cover complete front-end and back-end functionalities or achieve complex interactive logic. Based on this, this paper takes ChatGPT-4o as an example, constructing a comprehensive student management system to systematically analyze and evaluate its performance and applicability in generating front-end and back-end code. First, the paper outlines the system’s requirements analysis and module design. Then, it thoroughly documents the entire process of generating front-end and back-end code based on ChatGPT-4o. Through this process, the paper examines ChatGPT-4o’s performance in terms of code generation efficiency, functionality accuracy, and the level of human intervention required, analyzing its strengths and limitations with experimental data. The experimental results indicate that large models like ChatGPT significantly simplify code generation and accelerate the development process, yet still require human optimization when handling complex logic and interaction design.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li><li class="result-item article-light first"><h3><a href="/doi/10.4108/eai.21-11-2024.2354634">A Model for Abnormal Detection of In-Vehicle CAN Messages Based on Hyperparameter Optimized CNN</a></h3><dl class="metadata"><dd class="value">Research Article in Proceedings of the 2nd International Conference on Machine Learning and Automation, CONF-MLA 2024, November 21, 2024, Adana, Turkey</dd><br><dt class="title">Authors: </dt><dd class="value">Xiaoyu Zhou</dd><br><dt class="title">Abstract: </dt><dd class="value abstract"><div class="shortened"><p>Effectively identifying and defending against cyberattacks through intelligent means has become an important research direction for ensuring the safety of intelligent connected vehicles. 
The paper constructs a novel intrusion detection system framework using CNN, knowledge transfer and model ensemb…</p></div><div class="full"><p>Effectively identifying and defending against cyberattacks through intelligent means has become an important research direction for ensuring the safety of intelligent connected vehicles. The paper constructs a novel intrusion detection system framework using CNN, knowledge transfer and model ensemble methods, along with hyperparameter tuning strategies. First, a data transformation model is established to convert CAN message information into images, retaining the key information and features from the original messages while providing good visualization effects and compatibility, thereby facilitating the identification of different network attack patterns. Second, the intrusion detection framework is built by combining CNN, knowledge transfer and model ensemble methods with hyperparameter tuning strategies, and can effectively detect various attack features targeting in-vehicle networks. 
Finally, the effectiveness of the framework is verified using benchmark datasets, and the detection rate data is analyzed alongside other cutting-edge frameworks, showing that this approach delivers outstanding performance and is feasible for practical application.</p></div> <span class="expander more"><a class="trigger">more »</a></span></dd></dl></li></ul><div class="pagination"><div class="results-per-page">Page size: <span class="per-page-choice"><a class="current" href="/content?articles_per_page=10">10</a></span><span class="per-page-choice"><a class="" href="/content?articles_per_page=25">25</a></span><span class="per-page-choice"><a class="" href="/content?articles_per_page=50">50</a></span></div><div class="pages"><ul class="pages-list"><li class="page"><a href="/content?articles_page=1" class="current">1</a></li><li class="page"><a href="/content?articles_page=2" class="">2</a></li><li class="page"><a href="/content?articles_page=3" class="">3</a></li><li class="page"><a href="/content?articles_page=4" class="">4</a></li><li class="page"><a href="/content?articles_page=5" class="">5</a></li><li class="page"><a href="/content?articles_page=6" class="">6</a></li><li class="page"><a href="/content?articles_page=7" class="">7</a></li><li class="page"><a href="/content?articles_page=8" class="">8</a></li><li class="page"><a href="/content?articles_page=9" class="">9</a></li><li class="page"><a href="/content?articles_page=10" class="">10</a></li><li class="page"><a href="/content?articles_page=11" class="">11</a></li><li class="page">…</li><li class="page"><a href="/content?articles_page=2">Next</a></li><li class="page"><a href="/content?articles_page=4597">Last</a></li></ul></div></div></section></form></section></section><div class="clear"></div><footer><div class="links"><a href="https://www.ebsco.com/" target="_blank"><img class="logo ebsco-logo" src="/images/ebsco.png" alt="EBSCO"></a><a href="https://www.proquest.com/" target="_blank"><img class="logo 
proquest-logo" src="/images/proquest.png" alt="ProQuest"></a><a href="https://dblp.uni-trier.de/db/journals/publ/icst.html" target="_blank"><img class="logo dblp-logo" src="/images/dblp.png" alt="DBLP"></a><a href="https://doaj.org/search?source=%7B%22query%22%3A%7B%22filtered%22%3A%7B%22filter%22%3A%7B%22bool%22%3A%7B%22must%22%3A%5B%7B%22term%22%3A%7B%22index.publisher.exact%22%3A%22European%20Alliance%20for%20Innovation%20(EAI)%22%7D%7D%5D%7D%7D%2C%22query%22%3A%7B%22query_string%22%3A%7B%22query%22%3A%22european%20alliance%20for%20innovation%22%2C%22default_operator%22%3A%22AND%22%2C%22default_field%22%3A%22index.publisher%22%7D%7D%7D%7D%7Dj" target="_blank"><img class="logo doaj-logo" src="/images/doaj.jpg" alt="DOAJ"></a><a href="https://www.portico.org/publishers/eai/" target="_blank"><img class="logo portico-logo" src="/images/portico.png" alt="Portico"></a><a href="http://eai.eu/" target="_blank"><img class="logo eai-logo" src="/images/eai.png"></a></div></footer></div><div class="footer-container"><div class="footer-width"><div class="footer-column logo-column"><a href="https://eai.eu/"><img src="https://eudl.eu/images/logo_new-1-1.png" alt="EAI Logo"></a></div><div class="footer-column"><h4>About EAI</h4><ul><li><a href="https://eai.eu/who-we-are/">Who We Are</a></li><li><a href="https://eai.eu/leadership/">Leadership</a></li><li><a href="https://eai.eu/research-areas/">Research Areas</a></li><li><a href="https://eai.eu/partners/">Partners</a></li><li><a href="https://eai.eu/media-center/">Media Center</a></li></ul></div><div class="footer-column"><h4>Community</h4><ul><li><a href="https://eai.eu/eai-community/">Membership</a></li><li><a href="https://eai.eu/conferences/">Conference</a></li><li><a href="https://eai.eu/recognition/">Recognition</a></li><li><a href="https://eai.eu/corporate-sponsorship">Sponsor Us</a></li></ul></div><div class="footer-column"><h4>Publish with EAI</h4><ul><li><a href="https://eai.eu/publishing">Publishing</a></li><li><a 
href="https://eai.eu/journals/">Journals</a></li><li><a href="https://eai.eu/proceedings/">Proceedings</a></li><li><a href="https://eai.eu/books/">Books</a></li><li><a href="https://eudl.eu/">EUDL</a></li></ul></div></div></div><script type="text/javascript" src="https://eudl.eu/js/gacode.js"></script><script src="/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script></body></html>