<!DOCTYPE html> <html> <head> <!--Import Google Icon Font--> <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet"> <link href="https://fonts.googleapis.com/css?family=Roboto+Condensed" rel="stylesheet"> <!--Import materialize.css--> <link type="text/css" rel="stylesheet" href="css/materialize.min.css" media="screen,projection" /> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous"> <link type="text/css" rel="stylesheet" href="css/main.css" /> <meta charset="UTF-8"> <!--Let browser know website is optimized for mobile--> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>11th International Conference on Computer Science and Engineering (CSEN 2024)</title> <link rel="icon" type="image/ico" href="img/logo.ico"> </head> <body> <!-- Header --> <header class="main-header"> <nav class="transparent"> <div class="container"> <div class="nav-wrapper"> <a href="#" class="brand-logo">CSEN</a> <a href="#" data-activates="mobile-nav" class="button-collapse"> <i class="fa fa-bars"></i> </a> <ul class="right hide-on-med-and-down"> <li> <a href="index">HOME</a> </li> <li> <a href="papersubmission">PAPER SUBMISSION</a> </li> <li> <a href="committee">PROGRAM COMMITTEE</a> </li> <li> <a class="active-link" href="#">ACCEPTED PAPERS</a> </li> <li> <a href="contact">CONTACT US</a> </li> <li> <a href="venue">VENUE</a> </li> </ul> <ul class="side-nav grey darken-1 white-text" id="mobile-nav"> <h4 class="center">CSEN 2024</h4> <li> <div class="divider"></div> </li> <li> <a href="index"> <i class="fa fa-home white-text"></i>Home </a> </li> <li> <a href="papersubmission"> <i class="fa fa-user white-text"></i>Paper Submission </a> </li> <li> <a href="committee"> <i class="fa fa-user white-text"></i>Program Committee </a> </li> <li> <a class="active-link" href="papers"> <i class="fa 
fa-newspaper white-text"></i>Accepted Papers </a> </li> <li> <a href="contact"> <i class="fa fa-phone white-text"></i>Contact Us </a> </li> <li> <a href="venue"> <i class="fa fa-phone white-text"></i>Venue </a> </li> <li> <div class="divider"></div> </li> <li> <a href="/submission/index.php" target="_blank" class="btn grey waves-effect waves-light">Paper Submission</a> </li> </ul> </div> </div> </nav> <!-- Showcase --> <div class="showcase container"> <div class="row"> <div class="col s12 m10 offset-m1 center grey-text text-darken-3"> <h5>Welcome to CSEN 2024</h5> <h2>11<sup>th</sup> International Conference on Computer Science and Engineering (CSEN 2024)</h2> <p>December 21 ~ 22, 2024, Sydney, Australia</p> <br> <br> </div> </div> </div> </header> <section class="section section-icons "> <div class="container"> <div class="row"> <div class="col s12 m12"> <div class="card-panel grey darken-2 z-depth-3 white-text center"> <i class="fa fa-paper-plane fa-3x"></i> <h5>Accepted Papers</h5> </div> </div> <div class="col s12 m12"> <div class="card-panel white z-depth-3 "> <!---start DSML---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Email Performance Predictions Without Campaign History</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Sourabh Khot<sup>1</sup>, Venkata Duvvuri<sup>1</sup>, Heejae Roh<sup>1</sup>, and Anish Mangipudi<sup>2</sup>, <sup>1</sup>College of Professional Studies, Northeastern University, <sup>2</sup>Langley High School, McLean, Virginia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Email remains a vital marketing tool in 2024. Email marketing involves sending commercial emails to a targeted audience. It currently produces a significant ROI (return on investment) in the marketing sector [1]. 
This research paper presents a comprehensive study on predicting email open rates, focusing specifically on the influence of subject lines. The open-rate prediction algorithm SLk relies on the semantic features of subject lines, utilizing a seed dataset of 4,500 anonymized subject lines from diverse business sectors. The algorithm integrates data preprocessing, tokenization, and a custom-built repository of power words and negative words to enhance prediction accuracy. In our experiments, the actual open-rate margin of error tracked close to the allowed input error, giving confidence that SLk can be used directionally to optimize subject-line performance without prior history. The findings suggest that precise manipulation of subject line features can significantly improve the efficacy of email campaigns. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Email Marketing, Open Rate Prediction, Subject Line Analysis, Machine Learning, Natural Language Processing </p> <br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Improved Productivity with AI Models for SQL Tasks: A Case Study</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Thanh Vu, Sara Keretna, Richi Nayak and Thiru, Telstra Group Limited and Queensland University of Technology, Australia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This study investigates the practical deployment of AI-based Text-to-SQL (T2S) models on a real-world telecommunication dataset, aiming to enhance employee productivity. Our experiment addresses the unique challenges in telecommunication datasets not explored in previous works using annotated datasets. 
Leveraging advanced retrieval-augmented generation (RAG) models like Vanna AI and LlamaIndex, we benchmark their performance on synthetic datasets such as SPIDER and BIRD with different LLM backbones and subsequently compare the best-performing model to human performance on our proprietary dataset. We propose the Productivity Gain Index (PGI) to quantify the dual aspects of productivity improvement—time efficiency and accuracy—by comparing AI performance with human analysts across various SQL tasks. Results indicate significant productivity gains, with AI-based tools demonstrating superior query processing and accuracy performance. This prominent gap signals the potential of AI-based tool applications in the actual company domain for improved productivity. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Text-to-SQL, Large Language Models, Productivity Gain Index, Retrieval-Augmented Generation, Artificial Intelligence Evaluation. </p> <br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Federated Learning With Differential Privacy Based on Summary Statistics</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Peng Zhang<sup>1</sup> and Pingqing Liu<sup>2</sup>, <sup>1</sup>Faculty of Science, Kunming University of Science and Technology, Kunming, China, <sup>2</sup>School of Management and Economics, Kunming University of Science and Technology, Kunming, China </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">In data analytics, privacy preservation is receiving more and more attention; privacy concerns result in the formation of "data silos". Federated learning can accomplish integrated data analysis while protecting data privacy, and it is currently an effective way to break the "data silo" dilemma. 
In this paper, we build a federated learning framework based on differential privacy. First, for each local dataset, the summary statistics of the parameter estimates and the maximum L2 norm of the coefficient vector of the polynomial function used to approximate the individual log-likelihood function are computed and transmitted to the trust center. Second, at the trust center, Gaussian noise is added to the coefficients of the polynomial function that approximates the full log-likelihood function, the parameter estimates under privacy are obtained from the noisy objective function, and the estimator satisfies (ε, δ)-DP. In addition, theoretical results are provided for the privacy guarantees and statistical utility of the proposed method. Finally, we verify the utility of the method using numerical simulations and apply our method in the study of salary impact factors. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Differential Privacy, Federated Learning, Gauss Function Mechanism, Summary Statistics. </p> <br> <!--- Ends DSML----> <!---start bibc---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Blockchain-based Demand-Supply Matching System for IoT Device Data Distribution</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Kenta Kawai, Wu Yuxiao, Yutaka Matubara, and Hiroaki Takada, Graduate School of Informatics, Nagoya University, Aichi 464-8601, Japan </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The boom in IoT devices has attracted significant interest in data integration platforms that enable seamless utilization and control of sensor data across various applications. However, most existing platforms have a centralized structure, aggregating data on specific companies' servers. 
This centralization raises privacy concerns and imposes limitations on data sharing with third parties. To address these challenges, this paper proposes a decentralized demand-supply matching system for IoT device data distribution using blockchain technology. The paper details the requirements for the entire matching system, including both users and IoT devices, and introduces a system concept alongside a practical implementation. Evaluation experiments conducted on a prototype system demonstrate the feasibility and effectiveness of the proposed approach. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Blockchain, Data Marketplace, Demand-Supply Matching, IoT Data. </p> <br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Scalable Consensus for Blockchain Networks</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Vivek Ramji, Stony Brook University, New York, USA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">This paper presents a novel scalable consensus algorithm designed for blockchain networks, aimed at improving transaction throughput and reducing latency in distributed systems. The proposed algorithm leverages a hierarchical structure of nodes, where consensus is achieved through a multi-layered approach that balances workload across the network. By utilizing dynamic node selection and adaptive communication protocols, the algorithm ensures robustness against network partitions and Byzantine failures. Experimental results demonstrate significant improvements in scalability, with the algorithm achieving high transaction throughput even under varying network conditions. The proposed approach provides a viable solution for enhancing the efficiency of blockchain networks in real-world applications. 
</p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Distributed System, Consensus Algorithm, Fault Tolerance, Blockchain Consensus. </p> <br> <!--- Ends bibc----> <!---start nlp---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Relational Representation Augmented Graph Attention Network for Knowledge Graph Completion</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">E. Aili<sup>1, 2</sup>, H. Yilahun<sup>1, 2</sup>, S. Imam<sup>1, 3</sup>, and A. Hamdulla<sup>1, 2</sup>, <sup>1</sup>School of Computer Science and Technology, Xinjiang University, Urumqi 830017, China, <sup>2</sup>Xinjiang Key Laboratory of Multilingual Information Technology, Urumqi 830017, China, <sup>3</sup>School of National Security Studies, Xinjiang University, Urumqi 830017, China </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Knowledge Graph Completion (KGC) is a popular topic in knowledge graph construction and related applications, aiming to complete the structure of the knowledge graph by predicting missing entities or relations and mining unknown facts in the knowledge graph. In the KGC task, graph neural network (GNN)-based methods have achieved remarkable results due to their advantage of effectively capturing complex relations among entities and generating more accurate and rich entity representations by aggregating information from neighboring nodes. These methods mainly focus on the representation of entities, while the representation of relations is obtained using simple dimensional transformations or initial embeddings. This treatment ignores the diversity and complex semantics of relations, and restricts the efficiency of the model in utilizing relational information in the reasoning process. 
In this work, we propose the Relational Representation Augmented Graph Attention Network (RRA-GAT), which effectively identifies and weights the neighboring relations that actually contribute to the target relation, filtering out irrelevant information through an attention function based on the information and spatial domains. Furthermore, we capture complex patterns and features in the relational embedding by means of a feed-forward network consisting of a series of linear transformations and nonlinear activation functions. Experiments demonstrate the strong performance of RRA-GAT on the link prediction task on the standard datasets FB15k-237 and WN18RR (e.g., improving the MRR metric on WN18RR by 7.8%). </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Knowledge Graph Completion, Knowledge Graph Embedding, Graph Neural Networks. </p> <br> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Chinese Military Named Entity Recognition Based on Adversarial Training and Deep Multi-granularity Dilated Convolutions</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Qiuyan Ji<sup>1, 2</sup>, H. Yilahun<sup>1, 2</sup>, S. Imam<sup>1, 3</sup>, and A. Hamdulla<sup>1, 2</sup>, <sup>1</sup>School of Computer Science and Technology, Xinjiang University, Urumqi 830017, China, <sup>2</sup>Xinjiang Key Laboratory of Multilingual Information Technology, Urumqi 830017, China, <sup>3</sup>School of National Security Studies, Xinjiang University, Urumqi 830017, China </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Named entity recognition (NER) in the military domain is crucial for information extraction and knowledge graph construction. However, military NER faces challenges such as fuzzy entity boundaries and lack of public corpora. 
These problems make existing NER methods ineffective when dealing with short texts and social media content. To address these challenges, we construct a military news dataset containing 11,892 Chinese military news sentences, with a total of 69,569 named entities annotated. In addition, we propose a Robust Dilated-W squared NER (RDWS) model based on adversarial training and deep multi-granularity dilated convolution. The model first uses Bert-base-Chinese to extract character-level features, and then applies the fast gradient method (FGM) for adversarial training. Contextual features are captured by the BiLSTM layer, and these features are further processed using deep multi-granularity dilated convolution layers to better capture complex inter-lexical interactions. Experimental results show that the proposed method performs well on multiple datasets. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Named Entity Recognition, Adversarial Training, Chinese Military News, Convolution. 
</p> <br> <!--- Ends nlp----> <!---start coraj---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Survey Paper Exploring IT Outsourcing Models and Market Trends</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Merita Bakiji, Faculty of Contemporary Sciences and Technologies, South East European University, Tetovo, North Macedonia </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Following the great boom in global business and rapid technological developments, IT Outsourcing arose from organizations' attempts to reduce operational costs and increase efficiency through external expertise. This study explores the current models of IT Outsourcing, detailing their sustainability and suitability in different market environments. It pursues this goal through a comprehensive summary of existing literature, articles and studies on IT Outsourcing, industry reports, consultancy reports, and technological trends and their impacts on the market. The study also analyzes the IT Outsourcing industry map in the Republic of North Macedonia, revealing the local IT Outsourcing market and its trends. By synthesizing existing research and data, this paper presents a valuable resource for decision makers in IT Outsourcing, providing practical recommendations that can serve organizations constantly trying to adapt to rapidly changing market conditions. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">IT Outsourcing, Artificial Intelligence, Market Trends, North Macedonia. 
</p> <br> <!--- Ends coraj----> <!---start ncwc---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Wireless Computing: A Mathematical Approach</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Arun Kumar Singh, School of MCS, PNG University of Technology, Lae, Papua New Guinea </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The wireless interface is a cornerstone of new-generation communication systems and is widely applied in different domains, including IoT, mobile devices and sensor networks. This research paper studies wireless computing from a mathematical perspective; particular areas of discussion include signal propagation, wireless channel characterization, system capacity and error control. We investigate basic wireless communication and develop mathematical models, theories and equations to analyze the nature of wireless systems and their uses in networking and optimization. Wireless computing has become one of the most important aspects of communication in the present world, where data transfer across different networks is possible without any physical connections. Wireless computing systems draw on aspects of signal processing, network optimization and information theory. In this paper, we discuss the mathematical models used for wireless communication channels: propagation models, path loss equations and interference management. Further, the paper underscores key applications of graph theory and queuing theory in designing realistic algorithms to organize network resources, with the general aim of scaling up wireless networks. 
The paper also details advanced topics such as error-correcting codes, modulation schemes and cryptographic methods needed for secure communication in wireless computing environments. Hence, this research seeks to provide a mathematical approach to the design, analysis and optimization of wireless systems, which we hope will help in the development of next-generation wireless technologies including 5G and IoT. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Wireless Computing, Wireless Communication, Signal Propagation, Network Capacity, Error Correction, Mathematical Modeling. </p> <br> <!--- Ends ncwc----> <!---start csen---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Opioid Crisis and Data Analytics: Preventing Overdoses Through Predictive Models</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Vedamurthy Gejjegondanahalli Yogeshappa, Manager/Automation Architect, Leading Health Management Company, Dallas, United States </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">The opioid crisis remains widespread in its effects on the population and contributes to thousands of overdose deaths each year. Even with concerted efforts by governments and healthcare systems, deaths resulting from opioids continue to present a very difficult problem. One promising solution is the deployment of data analytics to prevent overdose incidents before they happen. This journal article introduces a new concept in the healthcare and law enforcement areas for finding high-risk people and areas. 
It also discusses how the application of algorithms such as machine learning and natural language processing, among others, helps identify abusive patterns, prescription anomalies, and the socioeconomic risks that come with prescriptions. The article describes the expected advantages of real-time monitoring, data aggregation from various sources, including EHRs, PDMPs, and social media, and the development of per-geography and demographic methods and models. The research also addresses ethical aspects of using data, as well as privacy issues and the probability of bias in a predictive model, insisting on reporting all the methods used and on frequent checks to avoid possible misapplications. Additionally, it assesses the involvement of healthcare providers, data science, and policy in addressing the opioid crisis. In this paper, several advanced machine learning techniques, including decision trees and random forests as well as more complex deep learning algorithms, show how the identification of effective early interventions, which are often hard to design, can help reduce overdoses and enhance patient outcomes [18]. As with any analytical approach to a particular problem, there are strengths and weaknesses in applying data analytics to the opioid crisis. Machine learning algorithms have been shown to be highly accurate at predicting those who may become opioid users; however, their implementation in practice entails embedding models into current healthcare frameworks, stakeholder coordination, and addressing ethical issues. The conclusion insists on further research in predictive analytics for opioid overdose, as well as on the legal regulation of patient rights. </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Opioid Crisis, Data Analytics, Predictive Models, Machine Learning, Healthcare Data, Public Health. 
</p> <br> <!--- Ends csen----> <!---start aria---> <h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>The Power of Artificial Intelligence in Project Management: A Review and Evaluation Study</b></h6> <p style="color:black;text-align:justify;font-size: 15px;">Heidrich Vicci, College of Business, Florida International University, USA </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6> <p style="color:black;text-align:justify;">Examining Artificial Intelligence (AI) models can provide clear guidance for project management practice, even in areas they may not have been conceived for. AI affords virtuous circles, as symptom detection may afford novel datasets, diagnostic feedback for ML model building, and advocacy for the value and function of AI analysis of the diagnostic classifications. AI variables could also have direct predictive value, as they are proposed to share some mechanism with the outcome, and AI has the potential to detect novel mechanisms. Finally, AI might be used to detect how context effects change the nature of the effects of other variables, and to select custom actions within nomothetic guidelines (Sarkar et al., 2022; Wang et al., 2023; Yathiraju, 2022). </p> <h6 style="color:black;font-family:classic wide,sans-serif;"><b>Keywords</b></h6> <p style="color:black;text-align:justify;">Artificial Intelligence (AI), AI models, Project Management (PM). 
</p> <br> <!--- Ends aria----> </div> </div> </div> </div> </section> <!-- Section: Scope --> <!-- Section: Footer --> <footer class="page-footer grey lighten-1"> <div class="container"> <div class="row"> <div class="col s12 m6"> <h5 class="grey-text lighten-3"> <font color="#FFF">Contact Us</font> </h5> <a href="mailto:csen@csen2024.org" style="color:#000">csen@csen2024.org</a> </div> </div> </div> <div class="footer-copyright grey darken-2"> <div class="container center"> Copyright &copy; CSEN 2024 </div> </div> </footer> <!--Import jQuery before materialize.js--> <script type="text/javascript" src="https://code.jquery.com/jquery-3.2.1.min.js"></script> <script type="text/javascript" src="js/materialize.min.js"></script> <script> $(document).ready(function() { // Custom JS & jQuery here $('.button-collapse').sideNav(); }); </script> </body> </html>
