<!DOCTYPE html> <html> <head> <!--Import Google Icon Font--> <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet"> <link href="https://fonts.googleapis.com/css?family=Roboto+Condensed" rel="stylesheet"> <!--Import materialize.css--> <link type="text/css" rel="stylesheet" href="css/materialize.min.css" media="screen,projection" /> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous"> <link type="text/css" rel="stylesheet" href="css/main.css" /> <meta charset="UTF-8"> <!--Let browser know website is optimized for mobile--> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>14th International Conference on Computer Science, Engineering and Applications (ICCSEA 2024)</title> <link rel="icon" type="image/ico" href="img/logo.ico"> </head> <body> <!-- Header --> <header class="main-header"> <nav class="transparent"> <div class="container"> <div class="nav-wrapper"> <a href="#" class="brand-logo">ICCSEA</a> <a href="#" data-activates="mobile-nav" class="button-collapse"> <i class="fa fa-bars"></i> </a> <ul class="right hide-on-med-and-down"> <li> <a href="index">HOME</a> </li> <li> <a href="papersubmission">PAPER SUBMISSION</a> </li> <li> <a href="committee">PROGRAM COMMITTEE</a> </li> <li> <a class="active-link" href="#">ACCEPTED PAPERS</a> </li> <li> <a href="contact">CONTACT US</a> </li> <li> <a href="venue">VENUE</a> </li> </ul> <ul class="side-nav grey darken-1 white-text" id="mobile-nav"> <h4 class="center">ICCSEA 2024</h4> <li> <div class="divider"></div> </li> <li> <a href="index"> <i class="fa fa-home white-text"></i>Home </a> </li> <li> <a href="papersubmission"> <i class="fa fa-user white-text"></i>Paper Submission </a> </li> <li> <a href="committee"> <i class="fa fa-user white-text"></i>Program Committee </a> </li> <li> <a class="active-link" href="papers"> <i class="fa fa-newspaper white-text"></i>Accepted Papers </a> </li> <li> <a href="contact"> <i class="fa fa-phone white-text"></i>Contact Us </a> </li> <li> <a href="venue"> <i class="fa fa-phone white-text"></i>Venue </a> </li> <li> <div class="divider"></div> </li> <li> <a href="/submission/index.php" target="_blank" class="btn grey waves-effect waves-light">Paper Submission</a> </li> </ul> </div> </div> </nav> <!-- Showcase --> <div class="showcase container"> <div class="row"> <div class="col s12 m10 offset-m1 center grey-text text-darken-3"> <h5>Welcome to ICCSEA 2024</h5> <h2>14<sup>th</sup> International Conference on<br> Computer Science, Engineering and<br> Applications (ICCSEA 2024)</h2> <p>November 16 ~ 17, 2024, Zurich, Switzerland</p> <br> <br> </div> </div> </div> </header> <section class="section section-icons "> <div class="container"> <div class="row"> <div class="col s12 m12"> <div class="card-panel grey darken-2 z-depth-3 white-text center"> <i class="fa fa-paper-plane fa-3x"></i> <h5>Accepted Papers</h5> </div> </div> <div class="col s12 m12"> <div class="card-panel white z-depth-3 "> <!-- start of nlai --> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Naseela Pervez<sup>1</sup> and Alexander J.
Titus<sup>1,2,3</sup>, <sup>1</sup>Information Sciences Institute, University of Southern California, <sup>2</sup>Iovine and Young Academy, University of Southern California, <sup>3</sup>In Vivo Group </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Large language models (LLMs) are increasingly utilized to assist in scientific and academic writing, helping authors enhance the coherence of their articles. Previous studies have highlighted stereotypes and biases present in LLM outputs, emphasizing the need to evaluate these models for their alignment with human narrative styles and potential gender biases. In this study, we assess the alignment of three prominent LLMs—Claude 3 Opus, Mistral AI Large, and Gemini 1.5 Flash—by analyzing their performance on benchmark text-generation tasks for scientific abstracts. We employ the Linguistic Inquiry and Word Count (LIWC) framework to extract lexical, psychological, and social features from the generated texts. Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases. This research highlights the importance of developing LLMs that maintain a diversity of writing styles to promote inclusivity in academic discourse. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Large Language Models (LLMs), Text Generation, Gender Bias, Linguistic Inquiry and Word Count (LIWC), Computational Linguistics. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Hyperparameter Optimization for Search Relevance in E-commerce</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Manuel Dalcastagné and Giuseppe Di Fabbrizio, VUI, Inc., Boston, USA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The configuration of retrieval and ranking strategies in search engines is traditionally done manually by search experts in a time-consuming and often irreproducible process. A typical use case is field boosting in keyword-based search, where the weights of different fields are tuned in an endless trial-and-error process to obtain what seems to be the best possible results on a small set of manually picked user queries that do not always generalize as expected. Hyperparameter optimization (HPO) methods can be employed to automatically tune search engines and solve these problems. To the best of our knowledge, there has been little work in the research community regarding the application of HPO to search relevance in e-commerce. This study demonstrates the effectiveness of HPO techniques for search relevance in e-commerce and provides insights into the impact of field boosting, retrieval query structure, and query understanding on relevance. Differential evolution (DE) optimization achieves up to 13% improvement in terms of NDCG@10 over baseline search configurations on a publicly available dataset. We also provide guidelines on the application of HPO to search relevance in e-commerce, addressing the characteristics of search spaces, the multifidelity of objective functions, and the use of more than one metric for multi-objective optimization. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Hyperparameter optimization, differential evolution, e-commerce search relevance optimization. </p> <br>
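<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the kind of setup the abstract describes: tuning per-field boost weights with SciPy's differential evolution against an NDCG@10 objective. The field names and the stubbed relevance function are illustrative assumptions, not the authors' code.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: tune field boost weights with differential evolution.
# ndcg_at_10() stands in for running judged queries through a search engine.
import numpy as np
from scipy.optimize import differential_evolution

FIELDS = ["title", "brand", "description"]      # hypothetical boostable fields

def ndcg_at_10(weights):
    # Stub: a smooth toy surface peaking at an arbitrary "good" configuration.
    w = np.asarray(weights)
    return float(np.exp(-np.sum((w - np.array([2.0, 1.0, 0.5])) ** 2)))

# differential_evolution minimizes, so negate the relevance metric.
result = differential_evolution(
    lambda w: -ndcg_at_10(w),
    bounds=[(0.0, 10.0)] * len(FIELDS),         # one boost weight per field
    maxiter=50, popsize=15, seed=42,
)
print(dict(zip(FIELDS, result.x)), "NDCG@10:", -result.fun)
</pre>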
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Scalable Query Understanding for E-commerce: An Ensemble Architecture With Graph-based Optimization</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Manuel Dalcastagné and Giuseppe Di Fabbrizio, VUI, Inc., Boston, USA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Query understanding is a critical component of e-commerce platforms, enabling accurate interpretation of users’ intents and efficient retrieval of relevant products. This paper presents a study on scalable query understanding techniques applied to a real use case in the e-commerce grocery domain. We propose a novel architecture that combines deep learning models with traditional ML models; this ensemble approach aims to capture the nuances of user queries and provide robust performance across various query types and categories. We conduct experiments on real-life datasets and demonstrate the effectiveness of our proposed solution in terms of accuracy and scalability. An optimized graph-based architecture using Ray enables efficient processing of high-volume traffic. The experimental results highlight the benefits of combining diverse models. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Query classification, query understanding, distributed and scalable machine learning. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Identifying Students at Risk From Online Clickstream Data Using Machine Learning</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Hadeel Alhabdan<sup>1</sup> and Ala Alluhaidan<sup>2</sup>, <sup>1</sup>College of Computing and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, <sup>2</sup>Department of Information Systems, College of Computing and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This study examines the use of four machine learning methods to identify students at risk, based on online clickstream data from 60 courses and the students’ grades in those courses. To identify students at risk of failing, the study classified students with grades of “F” or “D” as at-risk, while students with grades of “A,” “B,” or “C” were classified as safe. Logistic regression, decision tree, neural network and random forest models were used, with each model subjected to eightfold cross-validation. The decision tree model had the lowest performance across all four metrics, followed by the logistic regression model, while the neural network model showed marginally superior accuracy, sensitivity, and F1 score compared to the random forest model. The four machine learning models were found to be reliable in identifying at-risk students based on the provided online clickstream data. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Decision tree, Logistic regression, Neural networks, Online clickstream data, Random Forest. </p> <br>
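<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the evaluation loop the abstract describes: the four classifiers under eightfold cross-validation. The random feature matrix and labels are placeholders for the clickstream data, not the study's dataset.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: benchmark four classifiers with 8-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 20))                  # toy clickstream feature matrix
y = rng.integers(0, 2, 500)                # 1 = at-risk (grade D/F), 0 = safe

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(),
    "forest": RandomForestClassifier(),
    "mlp": MLPClassifier(max_iter=500),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=8, scoring="f1")
    print(f"{name}: F1 = {scores.mean():.3f} +/- {scores.std():.3f}")
</pre>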
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Comparative Analysis on Brain Tumor Classification using Transfer Learning</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Hanan AlJuaid and Noorah Al-Sultan, Department of Computer Science, Princess Noura University, Riyadh, KSA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Brain tumor classification is paramount in accurate diagnosis and treatment planning, with significant implications for patient outcomes. This research project focuses on the classification of brain tumors using deep learning techniques, specifically transfer learning in Convolutional Neural Networks (CNNs). The dataset used in this study is obtained from National Guard Hospital. The motivation for this study arises from the challenges associated with accurate brain tumor classification and the potential advantages offered by modern deep learning models. Transfer learning is employed to leverage the knowledge and pre-trained weights of existing CNN models trained on large-scale datasets. This approach enables efficient and accurate classification of brain tumor images. The performance of different pre-trained CNN models, fine-tuned specifically for brain tumor classification, is compared through experimentation and evaluation. The effectiveness and reliability of these models are assessed using key performance metrics such as accuracy, precision, and recall. The objective of this research is to identify the most accurate and robust model for brain tumor classification. The selected models for evaluation are VGG16, ResNet50, InceptionV3, and Xception. The accuracy results of these models are reported as 91.47%, 86.80%, 82.67%, and 82.13%, respectively. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Convolutional Neural Network (CNN), Transfer Learning, Brain Tumor. </p> <br>
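<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal transfer-learning sketch in the spirit of the comparison above: a frozen ImageNet VGG16 backbone with a new classification head. The directory name and the number of tumor classes are assumptions, not the study's data.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: fine-tune a pre-trained VGG16 for tumor classification.
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                       # reuse pre-trained features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g., 4 tumor classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "mri/train", image_size=(224, 224))
# model.fit(train_ds, epochs=10)
</pre>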
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Smishing Detection Application Using AI</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Hanan Alossimi, Noura Alotaibi, Alhnouf Alsubaie and Hanan Aljuaid, Department of Computer Science, Princess Nourah bint Abdulrahman University, Riyadh, KSA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Due to the rapid advancement and widespread integration of technology into various facets of our lives, including work, entertainment, communication, and finance, significant transformations have occurred. These changes have brought about a paradigm shift in the way we perceive and interact with the world. There is a high risk that attackers will expose users’ sensitive information, as they try new methods day after day to get what they want. Nowadays, the easiest way to gain access to or obtain sensitive information about users is to send phishing messages via SMS, so a phishing detection system is essential to keep everything safe: responding to a phishing message or accessing a URL can cause great harm to a person. The general aim of this project is to build a smishing detection application for Arabic SMS messages by building a model capable of accurately classifying text. We achieved this goal with our application, Etiqa, by selecting an effective hybrid CNN-LSTM deep learning model [1], which has proven effective in classifying Arabic SMS messages, achieving an accuracy of 98%. The dataset was collected and processed using natural language analysis tools, especially for the Arabic language. The algorithm was developed using Python, and a simple, easy-to-use interface was built with the Dart programming language on the Flutter framework for Android users. Finally, the interface was integrated with our model using FastAPI. In future work, we aim to develop and expand the effectiveness of the system. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Smishing, Fraud, SMS, Detection, Artificial Intelligence. </p> <br>
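<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of a hybrid CNN-LSTM text classifier of the kind the abstract describes; the vocabulary size, sequence length, and tokenization step are illustrative assumptions, not the authors' implementation.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: CNN-LSTM binary classifier for tokenized SMS messages.
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 20000, 60              # assumed tokenizer settings

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # local n-gram features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                          # sequential context
    tf.keras.layers.Dense(1, activation="sigmoid"),    # smishing vs. legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
</pre>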
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Systematic Overview of Machine Learning Applied for Propaganda Social Impact Research</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Darius Plikynas, Institute of Data Science and Digital Technologies, Department of Mathematics and Informatics, Vilnius University, Vilnius, Lithuania </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The proliferation of fake news, propaganda, and disinformation (FNPD) in the era of generative AI and information warfare poses significant challenges to societal cohesion and democratic processes. This systematic review examines recent advances in machine learning (ML) techniques for detecting and assessing the social impact of FNPD. Employing the PRISMA framework, we analyze promising ML/DL methodologies and hybrid approaches in combating the spread of conspiracy theories, echo chambers, and filter bubbles that contribute to social polarization and radicalization. Our findings highlight the potential of AI-driven solutions in identifying malicious social media accounts, organized troll networks, and bot activities that target specific demographics and manipulate public discourse. We also explore future research directions for developing more robust FNPD detection systems and mitigating the fragmentation of social networks of trust and cooperation. This review provides valuable insights for researchers and policymakers addressing the complex challenges of information integrity in the digital age. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Machine Learning, Deep learning, Propaganda and Disinformation, Social Impact Analysis, PRISMA Systematic Review. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Survey of Evaluating Question-Answering Techniques in the Era of Large Language Models (LLMs)</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Khaled N. Al Muteb, Bader K. Alshemaimri and Jassir A. Altheyabi, College of Computer and Information Science, King Saud University, Riyadh, Kingdom of Saudi Arabia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Large language models (LLMs) are gaining increasing popularity in both academia and industry due to their exceptional performance in various applications. As LLMs continue to play a crucial role in research and everyday use, their evaluation becomes increasingly important, not only at the task level but also at the societal level for a better understanding of their potential risks. In recent years, significant efforts have been dedicated to examining LLMs from different perspectives. This article presents a comprehensive review of the evaluation methods for LLMs, with a specific focus on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide a comprehensive overview of the evaluation tasks, including general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other domains. Secondly, we delve into the evaluation methods and benchmarks, which serve as critical components in assessing the performance of LLMs, addressing the questions of "where" and "how". We then summarize the instances of success and failure of LLMs in different tasks. Finally, we shed light on several important aspects that need to be considered in the evaluation process of LLMs. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Large language models, Question Answering, LLMs Evaluation, Knowledge base question answering, Open-domain question answering. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Error Analysis and Cognitive Biases in Named Entity Recognition (NER): A Comparative Study of English and Turkish News Articles</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Tolga Sahin, Department of Language Sciences, Ca’ Foscari University Venice, Venice, Italy </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This study investigates the performance of Named Entity Recognition (NER) tools in identifying named entities through a comparative method within English and Turkish news articles. It aims to examine potential biases in recognition accuracy in both languages and to connect these results to cognitive biases in human language processing. Using spaCy, the first 50 lines of Turkish and English newspapers are analyzed. The analysis reveals that the NER tool achieved a high accuracy of 93.55% in English, with 87 correctly identified entities out of 93, while achieving 29.11% accuracy in Turkish, with 23 entities out of 70 correctly identified. Clearly, the tool exhibited a higher rate of misclassifications and missed entities in Turkish, suggesting a bias against non-Western names and underlining the challenges of recognizing culturally specific entities. The results raise questions about the implications of NER biases in AI applications and their parallels with cognitive biases in humans. Such similarities suggest that human recognition of names across different cultures parallels that of the artificial/machine mind. The results also point to the need for improved training data and methodologies to enhance NER performance in underrepresented languages, and contribute to the ongoing discourse on ethical AI and inclusive language. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Named Entity Recognition (NER), Cognitive Bias, Error Analysis, Multilingual NLP, NLP. </p> <br>
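<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the error-analysis setup the NER study describes: run spaCy over a sample sentence and compare the extracted entities against hand-labelled gold spans. The sentence and gold annotations are placeholders, not the study's news data.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: compare spaCy NER output against gold annotations.
import spacy

nlp = spacy.load("en_core_web_sm")   # a Turkish pipeline would be swapped in
text = "Angela Merkel met Recep Tayyip Erdogan in Ankara on Monday."
gold = {("Angela Merkel", "PERSON"), ("Recep Tayyip Erdogan", "PERSON"),
        ("Ankara", "GPE"), ("Monday", "DATE")}

pred = {(ent.text, ent.label_) for ent in nlp(text).ents}
correct = gold & pred
print(f"accuracy: {len(correct)}/{len(gold)} = {len(correct) / len(gold):.2%}")
print("missed:", gold - pred, "spurious:", pred - gold)
</pre>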
<!-- end of nlai --> <!-- start of ibcom -->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Distributed Blockchain-based Firmware Update Architecture for IoT Environments</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Jesús Rugarc<sup>1</sup>, Santiago Figueroa-Lorenzo<sup>2,3</sup>, Saioa Arrizabalaga<sup>2,3</sup>, and Nasibeh Mohammadzadeh<sup>2</sup>, <sup>1</sup>University of the Basque Country UPV/EHU, Donostia / San Sebastián, 20018, Spain, <sup>2</sup>CEIT-Basque Research and Technology Alliance (BRTA), Donostia / San Sebastián, 20018, Spain, <sup>3</sup>School of Engineering, University of Navarra, Tecnun, Donostia / San Sebastián, 20018, Spain </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The Internet of Things (IoT) is one of the most rapidly expanding fields of technology. IoT devices often have limited capabilities when it comes to security, and have been shown to have vulnerabilities that are often exploited by malicious agents. To fix those vulnerabilities, firmware updates are often needed. The update process, however, can also be vulnerable. A secure update mechanism is needed to create a more secure IoT environment. This paper proposes a secure distributed IoT firmware update solution using Hyperledger Fabric Blockchain and IPFS, based on RFC 9019 and previously proposed frameworks, contributing a robust manifest format and defining authentication and verification procedures. More importantly, we provide a public implementation on which performance tests were made, demonstrating the promising feasibility of using distributed ledger technologies for this problem. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">IoT, Hyperledger Fabric Blockchain, Security, Distributed solution, Firmware update. </p> <br>
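<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the verification step a manifest-based update flow performs before installing firmware: checking the image digest and size against a manifest. The manifest layout here is an illustrative assumption, not the RFC 9019 or paper format.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: verify a firmware image against a (toy) manifest.
import hashlib, json

def verify_firmware(image: bytes, manifest_json: str) -> bool:
    manifest = json.loads(manifest_json)
    digest = hashlib.sha256(image).hexdigest()
    # In a real flow the manifest itself would carry a verified signature.
    return digest == manifest["sha256"] and len(image) == manifest["size"]

image = b"\x00" * 1024                                # stand-in firmware blob
manifest = json.dumps({"sha256": hashlib.sha256(image).hexdigest(),
                       "size": len(image), "version": "1.2.0"})
print(verify_firmware(image, manifest))               # True
</pre>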
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Clustering Solidity Smart Contracts by Similarity</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Ansumana F Jadama and Aditya Dilip Thakur, Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This paper addresses the challenging task of clustering source code files within Ethereum smart contracts. The intricate structure of these files, encompassing contracts, interfaces, and libraries, presents significant challenges in identifying syntactic similarities. Our methodology employs a detailed analysis of structural, behavioral, and contextual characteristics, integrating both syntactic and semantic features. The objective is to effectively cluster source code files, thereby facilitating a deeper understanding and systematic categorization of smart contracts. This comprehensive approach aims to enhance insights into the architectural patterns and functionalities of blockchain applications, supporting improved governance and management of these systems. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Smart Contracts, Blockchain, Source Code Clustering, Syntactic Similarity, Semantic Features. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Three Variations of Heads or Tails Game for Bitcoin</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Cyril Grunspan<sup>1</sup>, Ricardo Perez-Marco<sup>2</sup>, <sup>1</sup>Léonard de Vinci Pôle Univ., Finance Lab, Paris, France, <sup>2</sup>CNRS, IMJ-PRG, Université Paris Cité, Paris, France </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">We present three very simple variants of the classic Heads or Tails game using chips, each of which contributes to our understanding of the Bitcoin protocol. The first variant addresses the issue of temporary Bitcoin forks, which occur when two miners discover blocks simultaneously. We determine the threshold at which an honest but temporarily “Byzantine” miner persists in mining on their fork to save their orphaned blocks. The second variant of the Heads or Tails game is biased in favor of the player and helps to explain why the difficulty adjustment formula of Nakamoto’s consensus is vulnerable to attacks. We derive directly and in a simple way, without relying on a Markov decision solver as was the case until now, the threshold beyond which a miner without connectivity finds it advantageous to adopt a deviant mining strategy on Bitcoin. The third variant of the Heads or Tails game is unbiased and demonstrates that this issue in the Difficulty Adjustment formula can be fully rectified. Our results are in agreement with the existing literature, which we clarify both qualitatively and quantitatively using very simple models and scripts that are easy to implement. </p> <br>
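<p style="color:black;text-align:justify;font-size:14px;"><i>A toy gambler's-ruin reading of the first coin-flip variant above: a miner with hashrate share p keeps mining on its own fork and "wins" if it reaches a lead of k blocks before the rest of the network does. The lead threshold and win condition are illustrative assumptions, not the paper's model.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative Monte Carlo: probability that a fork race is won by the miner.
import random

def fork_win_probability(p, k=2, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        lead = 0                  # fork length minus main-chain length
        while abs(lead) < k:      # race ends when either side is k blocks ahead
            lead += 1 if rng.random() < p else -1
        wins += lead == k
    return wins / trials

for p in (0.3, 0.4, 0.5):
    print(p, fork_win_probability(p))
</pre>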
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Cross-chain Protocol Based on Main-subchain Architecture</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Feng Zhang<sup>1</sup>, Le Yu<sup>1</sup>, Rong Wang<sup>2</sup> and Wei-Tek Tsai<sup>3</sup>, <sup>1</sup>China Mobile Information Security Management and Operation Center, Beijing, China, <sup>2</sup>Guangzhou Institute of Software, Guangzhou 510006, China, <sup>3</sup>College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">We present three very simple variants of the classic Heads or Tails game using chips, each of which contributes to our understanding of the Bitcoin protocol. The first variant addresses the issue of temporary Bitcoin forks, which occur when two miners discover blocks simultaneously. We determine the threshold at which an honest but temporarily “Byzantine” miner persists in mining on their fork to save their orphaned blocks. The second variant of the Heads or Tails game is biased in favor of the player and helps to explain why the difficulty adjustment formula of Nakamoto’s consensus is vulnerable to attacks. We derive directly and in a simple way, without relying on a Markov decision solver as was the case until now, the threshold beyond which a miner without connectivity finds it advantageous to adopt a deviant mining strategy on Bitcoin. The third variant of the Heads or Tails game is unbiased and demonstrates that this issue in the Difficulty Adjustment formula can be fully rectified. Our results are in agreement with the existing literature, which we clarify both qualitatively and quantitatively using very simple models and scripts that are easy to implement. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Cross-chain protocols, main-subchain architecture, relay chain technology, sharded blockchain, cross-chain transactions. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Blockchain Adoption in Data Spaces With an EDC-HFB Interface</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Yasiru Witharanage<sup>1,2</sup>, Santiago Figueroa-Lorenzo<sup>1,2,3</sup>, and Saioa Arrizabalaga<sup>1,2,3</sup>, <sup>1</sup>CEIT-Basque Research and Technology Alliance (BRTA), Manuel Lardizabal 15, Donostia / San Sebastian, 20018, Basque Country, Spain, <sup>2</sup>Universidad de Navarra, Tecnun, Manuel Lardizabal 13, Donostia / San Sebastian, 20018, Basque Country, Spain, <sup>3</sup>Institute of Data Science and Artificial Intelligence (DATAI), Universidad de Navarra, Edificio Ismael Sánchez Bella, Campus Universitario, 31009 Pamplona, Spain </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Data is a fundamental asset for organizations. Data spaces emerge as distributed structures that promote secure and reliable data sharing. The International Data Space (IDS) protocol is currently one of the main standards in the data space environment. The growing evolution of data spaces implies the emergence of challenges associated with aspects such as digital sovereignty, decentralization, veracity, security and privacy protection. Distributed Ledger Technologies (DLTs) are emerging as information structures that can provide solutions to these challenges. This paper proposes the migration of trust entities in the IDS architecture, such as the Clearing House, to a Hyperledger Fabric Blockchain infrastructure as a solution mechanism to the above challenges. It also proposes the creation of an Eclipse Dataspace Components to Hyperledger Fabric Blockchain interface (EDC-HFB) that guarantees the interaction between an EDC Connector and the blockchain.
</p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Blockchain, Data spaces, EDC, HFB. </p> <br> <!-- End of ibcom --> <!-- start of iccsea -->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Development of a Co-design Architecture (Hardware/Software) for Real-time Video Encryption Based on Chaos</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">SID Hichem<sup>1</sup> and AZZAZ Mohamed Salah<sup>1</sup>, SADOUDI Said<sup>2</sup>, <sup>1</sup>Electronic and Digital Systems Laboratory, EMP, Algiers, Algeria, <sup>2</sup>Telecommunications Laboratory, EMP, Algiers, Algeria </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The article presents a novel Co-design Architecture (Hardware/Software) for Real-Time Video Encryption based on Chaos. It features an auto-switched Hybrid Chaotic Key Generator integrated into a flow-symmetric cryptosystem for encrypting video streams. Using the Genesys-2 FPGA platform and Pmod CAM-OV7670 camera, the system ensures synchronized key parameters for accurate decryption. The architecture addresses key-availability challenges while balancing performance and hardware resources with a high level of security for the real-time video stream. Experimental results demonstrate its efficacy for efficient embedded ciphering communication systems, especially for real-time video streams. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Video, DSP, Chaos, Key Generator, RNG, Cryptography, NIST, Xilinx, Vivado, FPGA, Embedded system, Genesys 2, VHDL, real-time, Vernam OTP, symmetric flow, synchronisation. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Influence of Background Color on 6D Pose Tracking Accuracy</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Andreas Hubert<sup>1</sup>, Konrad Doll<sup>1</sup>, and Bernhard Sick<sup>2</sup>, <sup>1</sup>University of Applied Sciences Aschaffenburg, Germany, <sup>2</sup>University of Kassel, Germany </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Fast 6D pose tracking is a critical component in numerous applications ranging from robotics to augmented reality. A notable method for addressing this challenge involves simulating the last known pose and comparing it with the current one, a process central to the SE(3)-TrackNet approach, which is known for its reliability. Traditionally, this method employs a uniform black background for the simulated input. This study challenges that standard practice by demonstrating that the choice of background color can significantly influence the accuracy of 6D pose estimation. Through a series of experiments, we provide results showing that background color is a critical factor in the effectiveness of the SE(3)-TrackNet approach. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">machine learning, computer vision, deep learning, 6D pose estimation, data generation, simulated data.
</p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Security Assurance and Repudiation Threats</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Srinivas Rao Doddi<sup>1</sup> and Akshay Krishna Kotamraju<sup>2</sup>, <sup>1</sup>Department of Information Technology, University of Los Angeles, Los Angeles, California, USA, <sup>2</sup>Founder, Non-profit Think Cosmos, Saratoga, California, USA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Social engineering attacks pose a serious threat to individuals and to various entities, both financial and non-financial. This paper presents a converged security framework for a comprehensive mechanism of prevention and detection controls. It also explores different types of social media attributes, leveraging data mining tactics. The paper also discusses associated limitations and challenges, recommends security best practices, and proposes an integrated framework that allows various parties from fraud, cyber, and physical security to collaborate. Additionally, through social media mining, the proposed framework unearths scam-related information in a protected manner, preserving security and privacy. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Security, Assurance, Authentication, Information, Policy. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Optimizing Social Welfare in Electricity Markets: A Comparative Study of Evolutionary Algorithms — GA, NSGA-II, DE and MILP Branch and Cut</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Ali Abbasi<sup>1, 2</sup>, Jean Gomes<sup>1</sup>, Filipe Alves<sup>1</sup>, Pedro Carvalho<sup>1, 2</sup>, João Luis Sobral<sup>2</sup>, and Ricardo Rodrigues<sup>1</sup>, <sup>1</sup>DTx — Digital Transformation CoLAB, University of Minho, 4800-058 Guimarães, Portugal, <sup>2</sup>University of Minho, 4704-553 Braga, Portugal </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This work addresses the problem of maximizing social welfare in electricity markets by utilizing advanced optimization techniques to enhance both operational and economic efficiency. It explores the application of evolutionary algorithms (EAs), specifically Genetic Algorithms (GA), Differential Evolution (DE), and Non-dominated Sorting Genetic Algorithm II (NSGA-II), benchmarking their performance against exact solutions from the Branch and Cut method. A comprehensive hyperparameter optimization (HPO) was conducted using a Tree-structured Parzen Estimator (TPE), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), and Random Search to fine-tune each algorithm’s performance parameters. The study compares the exploration and exploitation capabilities of TPE and CMA-ES with Random Search in the context of HPO for GA, NSGA-II, and DE. This systematic approach highlights the relative strengths and weaknesses of different EAs in complex market scenarios, offering insights into optimal configurations for achieving the best social welfare outcomes in electricity markets. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Electricity Market, Social Welfare, Evolutionary Algorithms, Hyperparameter Optimization. </p> <br>
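<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the underlying market-clearing problem the metaheuristics above are benchmarked on: maximize social welfare (buyer value minus seller cost) subject to supply-demand balance, here solved exactly as a linear program. The bid and offer data are made up for illustration.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: social-welfare maximization as a tiny linear program.
import numpy as np
from scipy.optimize import linprog

buy_price, buy_qty = np.array([50, 40, 30]), np.array([100, 80, 60])   # bids
sell_price, sell_qty = np.array([20, 35, 45]), np.array([90, 90, 90])  # offers

# Variables: accepted buy quantities, then accepted sell quantities.
c = np.concatenate([-buy_price, sell_price])        # linprog minimizes -welfare
A_eq = np.array([[1] * 3 + [-1] * 3])               # total bought == total sold
bounds = [(0, q) for q in np.concatenate([buy_qty, sell_qty])]

res = linprog(c, A_eq=A_eq, b_eq=[0], bounds=bounds)
print("social welfare:", -res.fun)
</pre>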
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>The Gambit of De-dollarization: Unveiling New Currency Frontiers Through NLP</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Vineeth Kumar Reddy Anumula and Niskhep A Kulli, Sacred Heart University, CT 06825, USA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In light of heightened geopolitical and economic volatility, conversation around de-dollarization and the rise of alternative currencies has intensified, sparking widespread public debate. This article builds on an analysis of 6,000 tweets retrieved from platform X, utilizing advanced natural language processing (NLP) techniques—sentiment analysis, tweet classification using BERT (Bidirectional Encoder Representations from Transformers), named entity recognition (NER), and Latent Dirichlet Allocation (LDA) modeling—to delve into these critical discussions. This study uncovers key entities and other emerging financial technologies, revealing a complex and evolving narrative. The findings underscore the critical role of social media as a barometer for global economic trends, particularly in light of ongoing debates surrounding currency alternatives. With geopolitical tensions mounting, the discourse on financial sovereignty, cryptocurrencies, and national economic strategies is becoming increasingly polarized. Sentiment analysis reveals stark contrasts in public opinion, while LDA modeling uncovers dominant themes driving the conversation. This research is especially timely, as the growing intensity of discussions on currency dominance and financial security demands a more nuanced understanding. By offering a real-time analysis of these debates, this paper provides essential insights for policymakers, economists, and academics. As the global financial landscape shifts, our findings serve as a crucial layer in the academic discourse, revealing how technology, public opinion, and geopolitics intertwine to shape the future of global economies. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Natural language processing, Sentiment analysis, Entity recognition, Latent Dirichlet Allocation (LDA), De-Dollarization. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Lossless Compression of Volumetric Images using Online Linear Regression Optimized by Gradient Descent</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Andrea Suárez Segarra, Knowledge Transfer Unit, Centre de Recerca Matemàtica, Barcelona, Spain </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This work investigates adaptive prediction strategies for compressing 3D volumetric image data. The framework employs online linear regression, inspired by machine learning techniques, to dynamically adjust pixel-wise predictions based on surrounding contexts. The optimization of weight vectors is achieved through gradient descent, facilitating efficient learning from residual errors during the encoding process. A static predictor that utilizes a pre-trained weight vector is derived from this framework. Additionally, non-linear adaptive prediction is explored through context clustering, classifying pixels into foreground and background based on histogram analysis. Results demonstrate that the online linear regression approach consistently outperforms traditional static predictors, as well as the FFV1 and ZIP compression algorithms. With entropy encoding via Golomb and Huffman coding, the framework achieves competitive compression ratios. These findings highlight the potential of online adaptive methods for volumetric image compression and point to promising directions for future research, particularly in advanced clustering techniques. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Lossless compression, volumetric images, online linear regression, clustering, predictive encoding, entropy. </p> <br>
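<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the online linear-regression predictor the compression abstract describes: weights over a causal context are updated by gradient descent (LMS) on each residual, so the model adapts as the data is encoded. The context size, learning rate, and 1D test signal are illustrative assumptions.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: online LMS prediction producing residuals for encoding.
import numpy as np

def online_predict(signal, ctx=4, lr=1e-3):
    w = np.zeros(ctx)
    residuals = []
    for i in range(ctx, len(signal)):
        context = signal[i - ctx:i]
        pred = w @ context
        err = signal[i] - pred              # residual to be entropy-coded
        w += lr * err * context             # gradient-descent (LMS) update
        residuals.append(err)
    return np.array(residuals)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.01 * rng.normal(size=500)
res = online_predict(signal)
# Lower residual variance means fewer bits after entropy coding.
print("residual variance:", res.var(), "signal variance:", signal.var())
</pre>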
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Fuzzy Logic Linguistic Variables Number and Bounds Optimization (Levels 2 and 3)</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Ramesh Chandra Bagadi<sup>1</sup>, K. V. G. D. Balaji<sup>2</sup> and Rudra Pratap Das<sup>3</sup>, <sup>1</sup>Founder & Owner, Ramesh Bagadi Consulting LLC, {R042752}, Madison, Wisconsin 53715, United States of America, <sup>2</sup>Director, RGUKT Srikakulam Campus, Rajiv Gandhi University of Knowledge Technologies (RGUKT), Srikakulam, Andhra Pradesh, 532402, India, <sup>3</sup>Managing Director, Nabakoty Electronics, Bhubaneswar, 751007, Odisha, India. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In this research investigation, the authors present a Model of Optimization of Fuzzy Logic Analysis based on the Linguistic Variables Number and their Bounds Optimization, supplemented with an OCTAVE Computer Program Code illustrating the Model's genericness at Levels 2 and 3. It can be generally noted that in Fuzzy Logic Analysis, the Fuzzy Logic Output is only as good as the sanctity of the Fuzzy Logic Linguistic Variables Bounds and the Number of Fuzzy Linguistic Variables considered. This problem can be averted when we consider a holistic analysis of all possible Fuzzy Logic Linguistic Variables Bounds cases, constrained by heuristic colloquial sense without affecting the optimization scheme, for each number of Linguistic Variables considered among all possible numbers of Linguistic Variables. Such optimization analysis is mathematically validated with regard to the Fuzzy Logic Linguistic Variables by noting that the Fuzzy Logic Final Output Answer is best for the case of Linguistic Variable Bounds which has the Minimum Sum of Squares of Deviations with respect to the answers derived from all other possible cases of the Fuzzy Linguistic Variable Bounds, considered holistically.
Such Optimization is validated with respect to the Number of Fuzzy Linguistic Variables by noting that, for the Optimal Number of Fuzzy Linguistic Variables, the Sum of Squares of Deviations of the Fuzzy Output answers obtained from the best Bounds for each Number of Fuzzy Linguistic Variables considered lies at the knee point of an Elbow-type Plot, drawn by plotting those Sums of Squares of Deviations of the Fuzzy Output answers of the best Fuzzy Linguistic Variable Bounds against the Number of Fuzzy Linguistic Variables considered for the analysis. This model is of profound significance, as it helps one construct Optimal Telescopes as well as Optimal Fuzzy Logic based Fuzzy Governors and Controllers. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Fuzzy Logic, Fuzzy Linguistic Variables, Fuzzy Linguistic Variables Bounds, Fuzzy Linguistic Variables Bounds Optimization. </p> <br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Arabic Sign Language Detection: a Computer Vision Approach</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Shaima Alotaibi, Ghadah Alalyani, Leen Alghamdi, Joud Altowerqi, Ryouf Alghamdi, Department of Computer Science and Artificial Intelligence, College of Science and Computer Engineering, University of Jeddah, Jeddah, Saudi Arabia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Sign language is a vital communication method that uses manual gestures and movements, primarily utilized by individuals who are deaf or hard of hearing. It also supports communication for those who face challenges with spoken language due to various disabilities or conditions. To bridge the communication gap between the deaf community and others, we have developed an Application for Sign Language Recognition. This application serves as an autonomous communication facilitator, eliminating the need for human translators. Our approach involves training a model to accurately recognize hand gestures made by users, which are then translated into letters using advanced machine learning algorithms and Convolutional Neural Network (CNN) models. The translated letters are displayed on the app’s screen, enabling seamless and immediate communication. Designed for ease of use and flexibility, the app allows users to engage fully in social, educational, and professional environments. The dataset powering this innovation includes 54,049 images of Arabic Sign Language (ArSL), featuring 32 standard Arabic signs and alphabets, performed by over 40 participants. Our CNN model processes this data to analyse patterns in hand movements and extract meaningful information from the gestures. The translations are then rendered as readable letters within the app, making communication accessible and straightforward. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Convolutional Neural Network (CNN), Real-Time, Computer Vision, Arabic Sign Language (ArSL) detection, MobileNetV2.
</p> <br> <!-- End of iccsea --> <!-- start of gridcom -->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Emulating a Computing Grid in a Local Environment for Feature Evaluation</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Jananga Kalawana<sup>1</sup>, Malith Dilshan<sup>1</sup>, Kaveesha Dinamidu<sup>1</sup>, Kalana Wijethunga<sup>1, 2</sup>, Maksim Stortvedt<sup>2</sup>, Indika Perera<sup>1</sup>, <sup>1</sup>Department of Computer Engineering, University of Moratuwa, Bandaranayake Mawatha, 10400, Moratuwa, Sri Lanka, <sup>2</sup>CERN, Esplanade des Particules 1, 1217, Meyrin, Switzerland </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The necessity for complex calculations in high-energy physics and large-scale data analysis has led to the development of computing grids, such as the ALICE computing grid at CERN. These grids outperform traditional supercomputers but present challenges in directly evaluating new features, as changes can disrupt production operations and require comprehensive assessments, entailing significant time investments across all components. This paper proposes a solution to this challenge by introducing a novel approach for emulating a computing grid within a local environment. This emulation, resembling a mini clone of the original computing grid, encompasses its essential components and functionalities. Local environments provide controlled settings for emulating grid components, enabling researchers to evaluate system features without impacting production environments. This investigation contributes to the evolving field of computing grids and distributed systems, offering insights into the emulation of a computing grid in a local environment for feature evaluation. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Computing Grid, Feature Evaluation, Grid Replica, Distributed Computing. </p> <br> <!-- End of gridcom --> <!-- start of sppr -->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Semantic Textual Similarity in Kazakh: Dataset Development and Comparative Model Analysis</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Mamyr Altaibek, Sharipbay Altynbek, Razakhova Bibigul, Zulhazhav Altanbek, Kazakhstan Academy of Artificial Intelligence, Astana, Kazakhstan </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Semantic textual similarity assesses the degree of shared meaning between two textual entities. This research advanced the field by translating the STS-b evaluation dataset into Kazakh using the Google API, thereby facilitating studies in a new linguistic context. We employed various pre-trained models including BERT, SBERT, RoBERTa, and Language-agnostic BERT Sentence Embedding (LaBSE) to generate sentence embeddings. The experimental framework also integrated a Kazakh-translated SNLI dataset. Model effectiveness was quantified through Pearson and Spearman correlation coefficients, comparing predicted similarity scores against the gold standard labels. The most effective results emerged from an initial fine-tuning of the BERT model on the Kazakh-translated SNLI dataset, followed by subsequent refinements utilizing the STSb-kk dataset with contrastive learning techniques. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Semantic Textual Similarity, STSb Dataset, Natural Language Inference, Kazakh language. </p> <br>
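<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the evaluation loop the STS study describes: embed sentence pairs, score cosine similarity, and correlate with gold labels. The model name follows the LaBSE mention above; the English pairs and gold scores are stand-ins, since STSb-kk examples are not shown here.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: correlate embedding similarity with gold STS labels.
from sentence_transformers import SentenceTransformer, util
from scipy.stats import spearmanr

model = SentenceTransformer("sentence-transformers/LaBSE")
pairs = [("A man is playing a guitar.", "A person plays a guitar."),
         ("A dog runs in the field.", "A cat sleeps on a sofa."),
         ("Children are dancing.", "Kids dance together.")]
gold = [4.8, 0.5, 4.5]                    # 0-5 STS-style similarity labels

emb1 = model.encode([a for a, _ in pairs])
emb2 = model.encode([b for _, b in pairs])
pred = [float(util.cos_sim(e1, e2)) for e1, e2 in zip(emb1, emb2)]
print("Spearman:", spearmanr(gold, pred).correlation)
</pre>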
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Comparison Between CNN and GNN Pipelines for Analysing the Brain in Development</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Antoine Bourlier<sup>1, 2</sup>, Elodie Chaillou<sup>1</sup>, and Jean-Yves Ramel<sup>2</sup>, <sup>1</sup>LIFAT, 37000 Tours, France, <sup>2</sup>INRAE, CNRS, Université de Tours, 37380 Nouzilly, France </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In this study, we present a novel pipeline designed for the analysis and comparison of non-conventional animal models, such as pigs and sheep, without relying on neuroanatomical priors. This innovative approach combines histogram-based segmentation with graph neural networks (GNNs) to overcome the limitations of traditional methods. Conventional tools often depend on predefined anatomical atlases and are typically limited in their ability to adapt to the unique characteristics of developing brains or non-conventional animal models. By generating regions of interest directly from MR images and constructing a graph representation of the brain, our method eliminates biases associated with predefined templates and avoids the black-box issues inherent in convolutional neural networks (CNNs). Our results show that the GNN-based pipeline is significantly more efficient in terms of execution time compared to CNNs, while maintaining reasonable accuracy. However, the GNN approach yields slightly lower performance in brain age prediction. Despite this, GNNs offer notable advantages, including improved interpretability and the ability to model complex relational structures within brain data. This flexibility allows for a more nuanced analysis of brain morphology and function. Future research will focus on refining graph construction techniques, incorporating edge features, and exploring various GNN architectures to enhance the pipeline’s performance. Overall, our approach provides a promising solution for unbiased, adaptable, and interpretable analysis of brain MRIs, particularly for developing brains and non-conventional animal models. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Graph, machine learning, MRI, segmentation. </p> <br>
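<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of a graph-based brain-age regressor of the kind the pipeline above compares against CNNs, using PyTorch Geometric: each node is a segmented region with a feature vector, pooled into one prediction per brain. All sizes and features are toy assumptions.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: two-layer GCN with mean pooling for graph regression.
import torch
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.data import Data

class BrainGNN(torch.nn.Module):
    def __init__(self, in_dim=8, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)       # predicted brain age

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.head(global_mean_pool(h, batch))  # one output per graph

g = Data(x=torch.randn(10, 8),                        # 10 regions, 8 features
         edge_index=torch.randint(0, 10, (2, 30)))    # toy region adjacency
batch = torch.zeros(10, dtype=torch.long)             # all nodes in one graph
print(BrainGNN()(g.x, g.edge_index, batch))
</pre>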
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Scalable Action Mining Modified Hybrid Method using Threshold Rho with Meta Actions and Information Granules for Enhanced User Emotions in Education and Business Domain</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Angelina Tzacheva, University of North Carolina Charlotte, United States of America </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Action Rules are rule-based systems that extract actionable patterns hidden in large volumes of data. In the technological world of big data, massive amounts of data are generated and collected every day by organizations in major domains such as education, business, medicine, social media, and the Internet of Things (IoT). Mining this data can provide meaningful insights into how to improve user experience in multiple domains. Users need recommendations on actions they can undertake to increase their profit or accomplish their goals; these recommendations are provided by actionable patterns. For example: how to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. Traditional Action Rule extraction models, which analyze the data in a non-distributed fashion, do not perform well when dealing with larger datasets. In this work we concentrate on a vertical data-splitting strategy that uses information granules to partition the data logically rather than randomly, and we generate meta actions after the vertical split. Information granules form basic entities in the world of Granular Computing (GrC); they represent meaningful smaller units derived from a larger, complex information system. We introduce a Modified Hybrid Action Rule method with Partition Threshold Rho, which combines both of these frameworks and generates the complete set of Action Rules, further improving computational performance on large datasets. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Emotion Detection, Meta Action, Information granules. </p> <br> <!-- End of sppr --> <!-- start of mlds-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Theoretical Approach on Assessing the Accuracy of the Shortest Path Non-optimal Algorithm for 2-dimensional Grids With Obstacles</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Chenghao Mo, Oyster River High School, Durham, NH, United States </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In many applications such as urban navigation and robotics, finding the shortest path in a 2D grid is crucial but computationally expensive using traditional optimal algorithms like Floyd-Warshall or Dijkstra. These traditional algorithms guarantee finding the shortest path at the cost of time complexity, leading to time-consuming computation, particularly for large-scale grids. Non-optimal algorithms that trade accuracy for speed have emerged to address the issue. However, the impact of grid obstacle density on the accuracy of these algorithms has not been well understood. This paper presents a theoretical framework for evaluating the accuracy of two non-optimal algorithms. By integrating theoretical analysis with extensive experimental data, this paper demonstrates how obstacle density influences algorithm performance, and proposes a methodology to select the best non-optimal algorithm based on the grid obstacle density. The theoretical framework has practical implications for applications requiring rapid path finding in complex environments. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Shortest Path Algorithms, Non-Optimal Algorithms, 2D Grids with Obstacles, Greedy Algorithm, Heuristic DFS, Algorithm Accuracy. </p> <br>
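<p style="color:black;text-align:justify;font-size:14px;"><i>A minimal sketch of the kind of accuracy evaluation the paper describes: compare a fast greedy best-first search against optimal BFS on a random obstacle grid and report the ratio of path lengths as an accuracy proxy. Grid size and obstacle density are arbitrary choices, not the paper's settings.</i></p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow:auto;font-size:13px;">
# Illustrative sketch: optimal BFS vs. greedy best-first on a random grid.
import heapq, random
from collections import deque

def neighbors(p, grid):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and not grid[nx][ny]:
            yield nx, ny

def bfs(grid, s, t):                  # optimal shortest path (unit costs)
    q, dist = deque([s]), {s: 0}
    while q:
        p = q.popleft()
        if p == t:
            return dist[p]
        for n in neighbors(p, grid):
            if n not in dist:
                dist[n] = dist[p] + 1
                q.append(n)
    return None

def greedy(grid, s, t):               # non-optimal: expand by heuristic only
    pq, dist = [(0, s)], {s: 0}
    while pq:
        _, p = heapq.heappop(pq)
        if p == t:
            return dist[p]
        for n in neighbors(p, grid):
            if n not in dist:
                dist[n] = dist[p] + 1
                h = abs(n[0] - t[0]) + abs(n[1] - t[1])  # Manhattan distance
                heapq.heappush(pq, (h, n))
    return None

random.seed(0)
grid = [[random.random() < 0.25 for _ in range(30)] for _ in range(30)]
grid[0][0] = grid[29][29] = False
opt, approx = bfs(grid, (0, 0), (29, 29)), greedy(grid, (0, 0), (29, 29))
if opt:
    print("accuracy (optimal / found):", opt / approx)
</pre>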
<br> <!-- End of sppr --> <!-- start of mlds--> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Theoretical Approach to Assessing the Accuracy of Non-optimal Shortest Path Algorithms for 2-Dimensional Grids with Obstacles</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Chenghao Mo, Oyster River High School, Durham, NH, United States </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In many applications, such as urban navigation and robotics, finding the shortest path in a 2D grid is crucial but computationally expensive with traditional optimal algorithms like Floyd-Warshall or Dijkstra. These algorithms guarantee finding the shortest path at the cost of high time complexity, leading to time-consuming computation, particularly for large-scale grids. Non-optimal algorithms that trade accuracy for speed have emerged to address this issue. However, the impact of grid obstacle density on the accuracy of these algorithms has not been well understood. This paper presents a theoretical framework for evaluating the accuracy of two non-optimal algorithms. By integrating theoretical analysis with extensive experimental data, it demonstrates how obstacle density influences algorithm performance and proposes a methodology for selecting the best non-optimal algorithm based on the grid's obstacle density. The framework has practical implications for applications requiring rapid pathfinding in complex environments. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Shortest Path Algorithms, Non-Optimal Algorithms, 2D Grids with Obstacles, Greedy Algorithm, Heuristic DFS, Algorithm Accuracy. </p>
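<p style="color:black;text-align:justify;">A small self-contained experiment in the spirit of this setup is sketched below, assuming BFS as the optimal baseline (all moves cost 1) and a greedy best-first walk as a stand-in non-optimal algorithm; the grid size, densities, and accuracy ratio (optimal length divided by found length) are illustrative choices, not the paper's exact protocol.</p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow-x:auto;font-size:13px;"><code># Sketch: compare an optimal baseline (BFS, unit-cost moves) against a
# greedy best-first search on random grids at several obstacle densities.
import heapq
import random
from collections import deque

MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def make_grid(n, density, rng):
    # 1 = obstacle, 0 = free cell
    return [[1 if density > rng.random() else 0 for _ in range(n)] for _ in range(n)]

def neighbors(n, r, c):
    for dr, dc in MOVES:
        if (r + dr) in range(n) and (c + dc) in range(n):
            yield r + dr, c + dc

def bfs_len(grid):
    """Optimal path length from top-left to bottom-right, or None."""
    n = len(grid)
    if grid[0][0] or grid[n - 1][n - 1]:
        return None
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        r, c = queue.popleft()
        if (r, c) == (n - 1, n - 1):
            return dist[(r, c)]
        for nb in neighbors(n, r, c):
            if not grid[nb[0]][nb[1]] and nb not in dist:
                dist[nb] = dist[(r, c)] + 1
                queue.append(nb)
    return None

def greedy_len(grid):
    """Non-optimal greedy best-first search ordered by Manhattan distance."""
    n = len(grid)
    seen = {(0, 0)}
    heap = [(2 * (n - 1), 0, (0, 0))]
    while heap:
        _, steps, (r, c) = heapq.heappop(heap)
        if (r, c) == (n - 1, n - 1):
            return steps
        for nb in neighbors(n, r, c):
            if not grid[nb[0]][nb[1]] and nb not in seen:
                seen.add(nb)
                h = (n - 1 - nb[0]) + (n - 1 - nb[1])
                heapq.heappush(heap, (h, steps + 1, nb))
    return None

rng = random.Random(0)
for density in (0.1, 0.2, 0.3):
    ratios = []
    for _ in range(200):
        grid = make_grid(20, density, rng)
        opt, approx = bfs_len(grid), greedy_len(grid)
        if opt is not None and approx is not None:
            ratios.append(opt / approx)
    if ratios:
        print(f"density {density}: mean accuracy {sum(ratios) / len(ratios):.3f}"
              f" over {len(ratios)} solvable grids")
</code></pre> <p style="color:black;text-align:justify;">The accuracy ratio is at most 1 and tends to drop as density grows, which is exactly the density-dependence the framework is meant to characterize.</p>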
<br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Optimization of Solar Energy Integration in Smart Grid Solutions</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Shad Hasib Talukder, Rafi Abrar Kabir, Nazmus Sakib Rayhan, Shamima Sultana, Jachi Sangma, and Md. Motaharul Islam, Department of Computer Science and Engineering, United International University, Dhaka, Bangladesh </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In recent years, many homes have installed solar panels to harness renewable energy, but without proper storage systems this can lead to financial losses and inefficiencies. Our research offers a solution using smart grids, smart meters, and fuzzy rule-based algorithms to enhance solar energy efficiency and minimize these losses. By collecting data from IoT sensors, the system predicts financial impacts and recommends energy-saving practices. A real-time monitoring framework, connected to a central database, supports decision-making and prevents future issues. With a focus on reducing financial losses through predictive analytics, our system provides a more comprehensive approach than existing solutions, leading to better energy management, cost savings, and sustainability. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Smart Grid, Smart Meter, Fuzzy Rule-based Expert System, Centralized Database, Solar Panel. </p> <br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>JChaosIndex: Measuring and Benchmarking Dispersion in Randomized Data</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Jui Keskar, Metropolitan School, Frankfurt, Germany </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Randomization of data is an ongoing need for various business reasons, such as the design of clinical trials [2] or the training of AI models [3]. To control the level of randomization, it is important to measure the level of randomness, i.e. the unpredictability and dispersion, in the "randomized" data vis-à-vis the original data. Permutation entropy is an established technique for measuring the unpredictability and complexity of time series [4]. To measure dispersion in randomized data, a "Neighbour-displacement-delta" (NDD) based technique is proposed. JChaosIndex, a measure of dispersion, considers the displacement of each data element as well as the relative displacements of its neighbours. The JChaosIndex measurement technique can easily be included in a programming-language library, database methods, or any algorithm. Importantly, the technique is domain-agnostic, as it works purely on the indexes of the data records and not on the actual data. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Measure of Randomness, Data Dispersion, JChaosIndex, Permutation Entropy, Neighbour Displacement Delta. </p>
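<p style="color:black;text-align:justify;">The two ingredients named in the abstract can be sketched in a few lines: permutation entropy in its standard form, and an index-displacement dispersion score. The latter is only an illustrative guess at the NDD idea, since the paper's exact formula is not given here.</p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow-x:auto;font-size:13px;"><code>import math
import random
from collections import Counter

def permutation_entropy(series, order=3):
    """Normalized permutation entropy in [0, 1] (standard definition)."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

def displacement_dispersion(original, shuffled):
    """Mean absolute index displacement, scaled to [0, 1]; an illustrative
    stand-in for the NDD/JChaosIndex idea, not the paper's formula.
    Works purely on indexes of (unique) records, not on their values."""
    pos = {v: i for i, v in enumerate(shuffled)}
    n = len(original)
    return sum(abs(i - pos[v]) for i, v in enumerate(original)) / (n * n / 2)

random.seed(1)
data = list(range(100))
mixed = data[:]
random.shuffle(mixed)
print(round(permutation_entropy(mixed), 3))            # near 1.0 for a good shuffle
print(round(displacement_dispersion(data, mixed), 3))  # 0 = untouched, about 1 = reversed
</code></pre>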
<br> <!-- End of mlds --> <!-- Start of ubic --> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Productivity in Construction (Using the Example of the German Construction Sector in Comparison with Other National Construction Sectors)</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Leif Laszig and Matthias Bahr, Department of Civil Engineering, Hochschule Biberach, Karlstr. 9-11, 88400 Biberach, Germany </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">A downward trend in the productivity growth rate of the construction sectors of Western industrial countries has been observed since the 1970s. The general causes of this productivity slowdown are mainly seen in low investment (in capital equipment), deficient business organization and qualification, current technologies with only limited growth potential, and the limited innovation capacity of companies. Further possible explanations range from methodological measurement errors to demographic and structural changes and regulation. Productivity growth is mainly driven by continuous improvement in the quality of input factors, for example through training and technological expertise, and by the acceleration of product and process innovations. The construction industry is perceived as technically underdeveloped, willing to forego high technology, and innovating little; it is also considered to face high barriers to innovation. Innovations in the construction industry thus differ from those in other industry and service sectors in that they are strongly process-oriented, incremental, and often designed to solve a specific short-term problem. To enable targeted measures for increasing productivity growth, the question arises as to the concrete causes, effects, and causal mechanisms, as well as the intervening influences, behind the observable decline in productivity. This article deals with the relationship between the indicator of operational productivity and several external factors that are assumed to have explanatory power for the development of productivity. In addition to internal factors, external factors such as structural and demographic change, the regulatory framework (laws, directives, guidelines, etc.), and the integration of the customer (or client) into the service provision process also have an impact on productivity. Such intervening variables influence the causal mechanism and thereby mediate the dependent variable, productivity. They must therefore be taken into account, even if they are not the actual object of investigation. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Productivity, value-added chain, innovation, influences on productivity. </p> <br> <!-- End of ubic --> <!-- Start of scai --> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Unsupervised Learning of Shape Segment Point Distribution Models with the EM Algorithm</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Abdullah A. AlShaher, Department of Computer Science and Information Systems, College of Business Studies, Public Authority for Applied Education and Training, Kuwait </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This paper demonstrates how 2D handwritten shapes can be classified by analyzing shape structure. The underlying framework is a one-layer architecture in which shapes are segmented into a series of connected segments. Each segment is represented by a set of uniformly distributed landmarks along the skeleton of the character and is then modelled with a segment point distribution model (SPDM). We capture shape variations by learning a Gaussian mixture of segment point distribution models with a two-step Expectation-Maximization (EM) algorithm. The approach is tested on a set of handwritten Arabic characters. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Handwritten Arabic characters, Shape analysis, Point distribution models, Machine learning, Expectation-Maximization Algorithm. </p> <br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Road Infrastructure Defect Detection using Computer Vision</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Norah A. AlSubaie, Ghada N. AlMutairi, Ghayda A. AlMalki and Sarah A. AlRumaih, Department of Computer Sciences, Princess Nourah University, Riyadh, Saudi Arabia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This research introduces "Jaddah," an AI-based system for the automated detection of road infrastructure defects using computer vision and machine learning techniques. The project overcomes significant limitations of traditional road inspection methods, which are often slow, labour-intensive, and prone to human error. Jaddah provides a mobile application that detects and classifies road defects, such as cracks and potholes, in real time. Model training is enhanced by a comprehensive dataset of high-resolution images. The implementation of the YOLOv8-seg model enables precise defect localization and segmentation, achieving high accuracy in identifying and categorizing road anomalies. Performance metrics indicate robust results, ensuring reliable defect detection and contributing to improved infrastructure maintenance. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Road Defect Detection, Road Infrastructure, Computer Vision, Machine Learning, Image Processing. </p>
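<p style="color:black;text-align:justify;">For readers unfamiliar with the model family, a minimal usage sketch of the YOLOv8 segmentation variant via the ultralytics package is shown below. The dataset config <code>road_defects.yaml</code> and the image name are hypothetical stand-ins; this illustrates the general train-then-predict pipeline, not the Jaddah team's actual configuration.</p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow-x:auto;font-size:13px;"><code># Sketch: fine-tune and run a YOLOv8 segmentation model (ultralytics).
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                 # pretrained segmentation checkpoint

# Fine-tune on a defect dataset; "road_defects.yaml" is a hypothetical
# config that would list images and classes such as cracks and potholes.
model.train(data="road_defects.yaml", epochs=50, imgsz=640)

# Run inference on a road image; each result carries boxes and masks.
results = model.predict("road_scene.jpg", conf=0.25)
for r in results:
    print(r.boxes.cls, r.masks is not None)    # predicted class ids; masks present
</code></pre>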
<br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Enhancing Sound Processing in Children with Autism using Technology</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Sana Alsubaie, Fatimah Alasmari, Daad Alsikhan and Reema Alsheddi, Department of Computer Sciences, Princess Nourah University, Riyadh, Saudi Arabia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This project aims to develop an interactive application that helps autistic children recognize and process environmental sounds. Children with Autism Spectrum Disorder (ASD) often struggle with sound identification, leading to communication challenges. The application offers a platform where children can match sounds with images, improving their sound recognition skills. It also includes a specialist consultation feature that lets parents track their child's progress and receive guidance. A key component of the project is a wearable bracelet designed for children with autism, which captures and identifies environmental sounds in real time. These sounds are sent to the application and stored in the "Recordings" interface, allowing the child to revisit and reinforce their learning. Together, the application and bracelet provide a comprehensive solution to support the auditory development of children with ASD. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Autism, Sound Processing, Specialist Consultation, Learning App, ASD, Sound Recognition, Environmental Sounds. </p> <br> <!-- End of scai --> <!-- Start of semit --> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>An Empirical Study of Prompt-based Non-functional Requirements Classification</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Xia Li, Kennesaw State University, United States of America </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In modern software development, non-functional requirements (NFRs) are essential to satisfying users' needs. Distinguishing different categories of NFRs is tedious, error-prone, and time-consuming due to the complexity of software systems. In this paper, we conduct a comprehensive study evaluating the performance of prompt-based non-functional requirements classification, designing various handcrafted templates and soft templates for a pre-trained language model (i.e., BERT). Our experimental results show that handcrafted templates can achieve the best effectiveness (e.g., 83.52% in terms of F1 score) but with unstable performance across templates; the performance becomes stable once soft templates are concatenated with the handcrafted templates. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Non-functional requirements classification, prompt-based learning, pre-trained models. </p>
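<p style="color:black;text-align:justify;">A toy sketch of what a handcrafted prompt template for NFR classification could look like with a masked language model is given below. The template wording and the label words are invented for illustration and are not taken from the paper.</p> <pre style="color:black;background:#f5f5f5;padding:12px;overflow-x:auto;font-size:13px;"><code># Sketch: prompt-based classification with a masked LM (transformers).
# The template and label-word mapping are illustrative inventions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

template = "This requirement is mainly about [MASK]: {req}"
req = "The system shall respond to any user query within two seconds."

# Hypothetical label words mapping mask predictions to NFR categories.
label_words = {"performance": "Performance", "security": "Security",
               "usability": "Usability", "reliability": "Reliability"}

scores = {}
for word, category in label_words.items():
    out = fill(template.format(req=req), targets=[word])
    scores[category] = out[0]["score"]

print(max(scores, key=scores.get), scores)   # highest-scoring category wins
</code></pre>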
<br> <!-- End of semit --> </div> </div> </div> </div> </section> <!-- Section: Scope --> <!-- Section: Footer --> <footer class="page-footer grey lighten-1"> <div class="container"> <div class="row"> <div class="col s12 m6"> <h5 class="grey-text lighten-3"> <font color="#FFF">Contact Us</font> </h5> <a href="mailto:iccsea@iccsea2024.org" style="color:#000">iccsea@iccsea2024.org</a> </div> </div> </div> <div class="footer-copyright grey darken-2"> <div class="container center"> Copyright © ICCSEA 2024 </div> </div> </footer> <!--Import jQuery before materialize.js--> <script type="text/javascript" src="https://code.jquery.com/jquery-3.2.1.min.js"></script> <script type="text/javascript" src="js/materialize.min.js"></script> <script> $(document).ready(function() { // Custom JS & jQuery here $('.button-collapse').sideNav(); }); </script> </body> </html>