<!DOCTYPE html> <html> <head> <!--Import Google Icon Font--> <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet"> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css" integrity="sha384-DNOHZ68U8hZfKXOrtjWvjxusGo9WQnrNx2sqG0tfsghAvtVlRW3tvkXWZh58N9jp" crossorigin="anonymous"> <link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet"> <!--Import materialize.css--> <link type="text/css" rel="stylesheet" href="css/materialize.min.css" media="screen,projection" /> <link type="text/css" rel="stylesheet" href="css/main.css" /> <!--Let browser know website is optimized for mobile--> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>::Accepted Papers :: 14th International Conference on Advances in Computing and Information Technology (ACITY 2024) </title> <link rel="icon" type="image/png" href="img/logo.png"> </head> <body> <!-- Responsive NavBar --> <div class="navbar-fixed"> <nav class="cyan lighten-2 z-depth-5"> <div class="container"> <div class="nav-wrapper"> <ul> <li id="b-logo"> <img id="brand-logo" class="hide-on-med-and-down" src="img/logo.png" height="65" width="80"> </li> </ul> <a class="brand-logo" href="index">ACITY</a> <a data-activates="side-nav" class="button-collapse show-on-small left"> <i class="material-icons">menu</i> </a> <ul class="right hide-on-med-and-down"> <li> <a href="index">Home</a> </li> <li> <a href="papersubmission">Paper Submission</a> </li> <li> <a href="committee">Program Committee</a> </li> <li class="active"> <a href="papers">Accepted Papers</a> </li> <li> <a href="venue">Venue</a> </li> <li> <a href="contact">Contact</a> </li> </ul> </div> </div> </nav> </div> <!-- SIDE NAVBAR --> <ul class="side-nav" id="side-nav"> <li> <div class="user-view arc"> <a href=""> <i id="cl" class="material-icons cyan-text text-lighten-2 right">close</i> </a> <a href=""> <img class="circle" src="img/logo.png"> </a> <h4 class="grey-text">ACITY</h4> </div> </li> <li> <a href="index">Home <i class="material-icons">home</i> </a> </li> <li> <a href="papersubmission">Paper Submission <i class="fas fa-paper-plane "></i> </a> </li> <li> <a href="committee">Program Committee <i class="fas fa-users"></i> </a> </li> <li class="active"> <a href="papers">Accepted Papers <i class="fas fa-calendar-alt"></i> </a> </li> <li> <a href="venue">Venue <i class="fas fa-map"></i> </a> </li> <li> <a href="contact">Contact <i class="fas fa-phone"></i> </a> </li> </ul> <!-- Section: Slider --> <section class="section-slider slider"> <div class="fixed-action-btn" id="scrollTop"> <a class="btn btn-small btn-floating waves-effect waves-light blue lighten-1 pulse" onclick="topFunction()"> <i class="material-icons">keyboard_arrow_up</i> </a> </div> <ul class="slides"> <li> <img src="img/sc-img1.jpeg" alt=""> <div class="hide-on-med-and-up caption center-align pd"> <h5>14<sup>th</sup> International Conference on Advances in Computing and Information <br>Technology (ACITY 2024) </h5> <h5 class="abx cyan">November 23 ~ 24, 2024, London, United Kingdom</h5> </div> <div class="hide-on-small-only caption center-align pc"> <h3>14<sup>th</sup> International Conference on Advances in Computing and Information<br> Technology (ACITY 2024) </h3> <h5 class="abx ">November 23 ~ 24, 2024, London, United Kingdom</h5> <br> </div> </li> <li> <img src="img/sc-img2.jpeg" alt=""> <div class="hide-on-med-and-up caption left-align pd"> <h5>14<sup>th</sup> International Conference on Advances in 
Computing and Information<br> Technology (ACITY 2024) </h5> <h5 class="abx cyan">November 23 ~ 24, 2024, London, United Kingdom</h5> </div> <div class="hide-on-small-only caption left-align pc"> <h3>14<sup>th</sup> International Conference on Advances in Computing and Information <br>Technology (ACITY 2024) </h3> <h5 class="abx ">November 23 ~ 24, 2024, London, United Kingdom</h5> <br> </div> </li>
<li> <img src="img/sc-img3.jpg" alt=""> <div class="hide-on-med-and-up caption right-align pd"> <h5>14<sup>th</sup> International Conference on Advances in Computing and Information <br>Technology (ACITY 2024) </h5> <h5 class="abx cyan">November 23 ~ 24, 2024, London, United Kingdom</h5> </div> <div class="hide-on-small-only caption right-align pc"> <h3>14<sup>th</sup> International Conference on Advances in Computing and Information<br> Technology (ACITY 2024) </h3> <h5 class="abx ">November 23 ~ 24, 2024, London, United Kingdom</h5> <br> </div> </li> </ul> </section>
<!-- Main Section - Left -->
<section class="section-main"> <div class="container"> <div class="row"> <div class="col s12 m12"> <div class="card-content"> <h5 class="cyan-text center text-darken-1">Accepted Papers</h5> </div> <div class="card z-depth-2"> <div class="card-content">
<!--Start iote-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Integrating HW/SW Functionality for Flexible Wireless Radio</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Alexander Strachan and Nigel Topham, School of Informatics, University of Edinburgh, Edinburgh, Scotland, EH8 9AB</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Current methods of implementing wireless radio typically take one of two forms: either dedicated fixed-function hardware, or pure Software Defined Radio (SDR). Fixed-function hardware is efficient, but being specific to each radio standard it lacks flexibility, whereas Software Defined Radio is highly flexible but requires powerful processors to meet real-time performance constraints. This paper presents a hybrid hardware/software approach that aims to combine the flexibility of SDR with the efficiency of dedicated hardware solutions. We evaluate this approach by simulating five variants of the IEEE 802.15.4 protocol, commonly known as Zigbee, and demonstrate the range of performance and power consumption characteristics for different accelerator and software configurations. Across the spectrum of configurations we see that power consumption varies from 8% to 38% of a dedicated hardware implementation, and we show how the hybrid approach allows a new modulation standard to be retrofitted to an existing design with only a modest increase in power consumption.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Wireless Radio, Digital Signal Processing, Embedded Systems, Computer Architecture, Accelerators.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Sensory Technology in ELEAM: Innovating in Comprehensive Care and Fall Detection in Older Adults</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Mauricio Figueroa Colarte, School of Informatics and Telecommunications, Fundación Instituto Profesional DUOC UC, Viña del Mar, Chile</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">In the Chilean context, Long Stay Establishments for the Elderly (ELEAM) face significant challenges in comprehensive care and the prevention of falls, critical incidents for this population. This project, called "ELEAM@TIC", explores the incorporation of sensor-based technology as an innovative strategy to address these problems. Through a multidisciplinary approach, the research team, led by Mauricio Figueroa Colarte, evaluated the effectiveness and acceptability of different types of sensors strategically placed on users. Preliminary results indicate that technical aspects must be improved for a notable improvement in early risk detection and response to fall incidents, suggesting significant potential to improve the quality of life of older adults in ELEAM. This project lays the foundation for future research and development in the field of inclusive technology and comprehensive care for the elderly.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Fall Detection, Wearables, Sensors, Older Adults, Inclusive Technology.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>The Intersection of IoT, Industry 4.0, and Trading: Revolutionizing Financial Markets</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Prof. Salahddine KRIT, Lab.SIV/FSA, Department of Computer Science, FPO, Ibnou Zohr University, Agadir, Morocco</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">The rapid development of Internet of Things (IoT) and Industry 4.0 technologies has revolutionized industries globally, transforming not only manufacturing and logistics but also financial markets. These technologies are creating new possibilities for data-driven trading strategies, offering unprecedented real-time insights that enable smarter, more efficient trading decisions. This article delves into the intersection of IoT, Industry 4.0, and trading, examining how these technologies are reshaping commodity markets, supply chains, algorithmic trading, and risk management. While the potential benefits are immense, challenges such as data overload, security risks, and technological fragmentation must be addressed. This paper provides a comprehensive overview of how IoT and Industry 4.0 are transforming the landscape of modern trading.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Internet of Things (IoT), Industry 4.0, algorithmic trading, real-time data, high-frequency trading, predictive analytics, blockchain, decentralized finance (DeFi), commodities trading, risk management, cybersecurity.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Design and Implementation of a Real-time Rate-based Task Scheduler for Real-time Operating Systems: A Case Study With VxWorks</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Tom Springer<sup>1</sup> and Peiyi Zhao<sup>2</sup>, <sup>1</sup>Fowler School of Engineering, Chapman University, Orange, CA, USA, <sup>2</sup>Fowler School of Engineering, Chapman University, CA, USA</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">This paper details the implementation of a rate-based task scheduler in the VxWorks real-time operating system, intended to enhance resource allocation for distributed real-time systems such as IoT and embedded devices. Rate-based scheduling dynamically adjusts task execution rates based on system demand, providing a flexible and efficient approach to meeting real-time constraints. The scheduler was integrated into VxWorks and evaluated using the Cheddar scheduling analysis tool and the VxWorks VxSim simulator. Initial results demonstrate improved deadline adherence and resource management under varying loads compared to traditional schedulers. Future work includes porting the scheduler to single-board computers to assess its performance on resource-constrained IoT hardware and extending it to support resource sharing between tasks to address real-time coordination challenges. This research emphasizes the potential of rate-based scheduling for IoT applications, offering a scalable solution for managing the complexity of distributed, real-time environments in future embedded systems.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Real-Time Systems, Networked Embedded Systems, Real-Time Operating Systems, Internet of Things Applications.</p>
<br>
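<p style="color:black;text-align:justify;">For readers unfamiliar with rate-based scheduling, the short Python sketch below shows the core idea in miniature: each task declares a requested execution rate, and the dispatcher always runs the task whose next release is earliest, so CPU time is handed out in proportion to the requested rates. The task names, rates, and selection rule are illustrative assumptions only and are not taken from the paper or from VxWorks.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Minimal, self-contained sketch of a rate-based dispatch loop (illustrative only).
import heapq

class Task:
    def __init__(self, name, rate_hz, work):
        self.name = name
        self.period = 1.0 / rate_hz   # seconds between releases at the requested rate
        self.next_release = 0.0       # virtual time of the next job release
        self.work = work              # callable representing one job

def dispatch(tasks, n_jobs=50):
    """Run n_jobs jobs, always picking the task whose next release is earliest,
    so each task executes in proportion to its requested rate."""
    heap = [(t.next_release, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    for _ in range(n_jobs):
        release, i, task = heapq.heappop(heap)
        task.work()
        heapq.heappush(heap, (release + task.period, i, task))

dispatch([Task("sensor", rate_hz=10, work=lambda: None),
          Task("logger", rate_hz=2,  work=lambda: None)])
</code></pre>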
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Health Monitoring and Water Consumption Tracking System Based on Smart Cup Using Artificial Intelligence and Internet of Things</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Haowen Yu<sup>1</sup>, Soroush Mirzaee<sup>2</sup>, <sup>1</sup>Skyline High School, 1122 228th Ave SE, Sammamish, WA 98075, <sup>2</sup>Computer Science Department, California State Polytechnic University, Pomona, CA 91768</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Dehydration poses a significant global health risk, impacting a diverse population across various age groups [1]. Despite numerous efforts to promote hydration, traditional methods often rely on manual logging in mobile applications, a process prone to user neglect and inaccuracy. SmartFlask, an innovative hydration-monitoring device, seeks to address these limitations by offering a fully automated solution that seamlessly integrates into users' daily routines [2]. Equipped with a time-of-flight (ToF) sensor, SmartFlask tracks water consumption without requiring user input, while an AI algorithm recommends personalized daily water intake based on individual factors such as height and weight [3]. By eliminating the need for manual tracking and providing tailored hydration goals, SmartFlask offers a comprehensive approach to improving hydration habits. This paper details the development, functionality, and potential health benefits of SmartFlask, demonstrating its promise as an effective tool for addressing dehydration on a global scale.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Dehydration, ToF, AI algorithm.</p>
<br>
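<p style="color:black;text-align:justify;">As a rough illustration of how a lid-mounted time-of-flight reading can be turned into a consumption estimate, the Python sketch below assumes a cylindrical bottle; the dimensions, sensor placement, and formula are assumptions for illustration, not SmartFlask's published design.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Illustrative only: converting a ToF distance reading into water volume
# for an assumed cylindrical bottle measured from the lid.
import math

BOTTLE_HEIGHT_MM = 200.0      # assumed inner height of the flask
BOTTLE_RADIUS_MM = 35.0       # assumed inner radius of the flask
AREA_MM2 = math.pi * BOTTLE_RADIUS_MM ** 2

def volume_ml(tof_distance_mm):
    """ToF sensor at the lid measures distance to the water surface;
    the remaining water column is the bottle height minus that distance."""
    water_height = max(0.0, BOTTLE_HEIGHT_MM - tof_distance_mm)
    return AREA_MM2 * water_height / 1000.0   # 1 ml = 1000 mm^3

def consumed_ml(before_mm, after_mm):
    """Water drunk between two readings (ignores refills, when volume rises)."""
    return max(0.0, volume_ml(before_mm) - volume_ml(after_mm))

print(consumed_ml(before_mm=60.0, after_mm=110.0))   # about 192 ml
</code></pre>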
<!--End iote--> <!--Start acity-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Integrated Mortality Package to Construct Life Tables by Indirect Techniques</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Salih Hamza Abuelyamen, Retired from the Central Bureau of Statistics in Sudan, Association of Retired Staff from the Central Bureau of Statistics, Sudan, Private Researcher</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Because of memory lapses and social and other factors, direct questions on mortality status in demographic and health surveys or population censuses do not reveal accurate and complete information. Hence demographers apply indirect questions at the data collection stage, and indirect techniques to estimate the values of mortality indicators from these data. One of the best-known methods in this respect is the Brass Combined Method, which constructs life tables by combining child and adult survival data. Producing this information from surveys or censuses takes a great deal of time, and the calculations involve sophisticated equations that use auxiliary information from different sources. This paper presents an integrated computer package that executes all stages of this job, from questionnaire design through data entry, data editing and data processing to the calculation of child and adult mortality indicators and the construction of life tables by this method. It is also designed to accept raw data from different statistical censuses and surveys that include the required information.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Life table, Mortality, Adult, Child, Data entry.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Block Withholding Resilience</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Cyril Grunspan and Ricardo Perez-Marco</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">It has been known for some time that the Nakamoto consensus as implemented in the Bitcoin protocol is not totally aligned with the individual interests of the participants. More precisely, it has been shown that block withholding mining strategies can exploit the difficulty adjustment algorithm of the protocol and obtain an unfair advantage. However, we show that a modification of the difficulty adjustment formula taking into account orphan blocks makes honest mining the only optimal strategy. Surprisingly, this is still true when orphan blocks are rewarded with an amount smaller than the official block reward. This gives an incentive to signal orphan blocks. The results are independent of the connectivity of the attacker.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Bitcoin, blockchain, proof-of-work, selfish mining, martingale.</p>
<br>
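<p style="color:black;text-align:justify;">The following Python sketch illustrates, in a deliberately simplified form, the kind of modification the abstract describes: a retargeting rule that counts signalled orphan blocks alongside main-chain blocks, so withheld or competing blocks no longer make the network appear slower. The constants and the linear update rule are assumptions for illustration, not the authors' formula.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Simplified, Bitcoin-style retargeting sketch (illustrative; not the paper's formula).
TARGET_SECONDS_PER_BLOCK = 600

def retarget(old_difficulty, mainchain_blocks, orphan_blocks, elapsed_seconds,
             count_orphans=True):
    # The variant counts signalled orphans as evidence of hash rate; the
    # classical rule ignores them, which block withholding can exploit.
    produced = mainchain_blocks + (orphan_blocks if count_orphans else 0)
    observed_rate = produced / elapsed_seconds
    target_rate = 1.0 / TARGET_SECONDS_PER_BLOCK
    return old_difficulty * observed_rate / target_rate

# 2016 main-chain blocks plus 30 signalled orphans mined in exactly two weeks:
print(retarget(1.0, 2016, 30, elapsed_seconds=2016 * 600))                       # about 1.0149
print(retarget(1.0, 2016, 30, elapsed_seconds=2016 * 600, count_orphans=False))  # 1.0
</code></pre>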
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>An Immersive Music Theory Education and Practicing System using Artificial Intelligence and Machine Learning</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Cyrus Chung<sup>1</sup>, Mirna Shabo<sup>2</sup>, <sup>1</sup>Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, <sup>2</sup>Computer Science Department, California State Polytechnic University, Pomona, CA 91768</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">This paper discusses the development of a piano roll application designed to make learning music theory more accessible and engaging for beginners [12]. The application introduces an interactive, real-time interface where users can place, adjust, and listen to musical notes, providing immediate feedback. Important components of the system include the Piano Roll Manager, which allows smooth note manipulation, the Save System, which ensures efficient storage and retrieval of compositions, and the AspectRatioController, which maintains consistent visual quality across different screen sizes [4]. In comparison to traditional methods of teaching music theory, which often rely on static lessons or videos, this application provides an engaging, hands-on experience. It offers users the ability to experiment creatively with musical elements while learning theory in an intuitive way. The experiment showed that users found the interface easy to use and the overall experience satisfying [5]. This demonstrates that our application is an effective tool for teaching introductory music theory to a broad audience.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Music theory education, Interactive learning, Piano roll interface, Real-time feedback.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Helpful and Convenient Mobile Application to Assist High School Students to Find Opportunities for Volunteering using AI and Database Authentication</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Bowen Yao<sup>1</sup>, Tyler Boulom<sup>2</sup>, <sup>1</sup>Crean Lutheran High School, 12500 Sand Canyon Avenue, Irvine, CA 92618, <sup>2</sup>Computer Science Department, California State Polytechnic University, Pomona, CA 91768</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">This paper explores the development of a volunteer platform designed to make it easier for high school students to find, join, and benefit from local volunteer opportunities. The platform includes AI-based lesson generation for skill preparation and an optimized search and filter system [9]. We address the challenges of volunteer tracking, data accuracy, and community needs by integrating tools like Firebase for user authentication and storage. Experiments on AI lesson accuracy and search functionality revealed key areas for improvement, which would refine user experience and engagement. This project ultimately fosters a supportive environment where students can grow their skills, connect with peers, and contribute positively to their communities.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">AI, Firebase, High School Student Volunteering Opportunities, Flutter & Dart.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Smart Campus Community Mobile Platform for ECA (Extracurricular Activity) Management and Volunteer Coordination using Artificial Intelligence and Machine Learning</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Kailu Yang<sup>1</sup>, Yu Cao<sup>2</sup>, <sup>1</sup>Shenzhen College of International Education, <sup>2</sup>Computer Science Department, California State Polytechnic University, USA</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">There is a common difficulty in finding opportunities to participate in volunteer activities at my school [1]. The app aims to tackle this problem by providing a platform for signing up for volunteer activities. The app comprises three important systems: authentication, notification, and PDF summary generation. The app uses Firebase as the backend database to store all data and provides an authentication and notification system that makes use of providers to listen for changes in the data [2]. The app’s PDF summary generation system utilizes the ‘reportlab’ module. Because of the complexity of the structure, the user interface has to be designed well so that it is organized and easy to use. After the completion of the app, a survey containing various questions on finding and signing up for volunteer events was sent to 10 students to evaluate the effectiveness of the app in facilitating sign-ups for volunteer activities [15]. The results proved very positive and, most importantly, showed that the app made finding activities more convenient. The app saves precious time for students looking for activities and allows them to compare various activities in one place and choose the best fit.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Volunteer activity platform, Student engagement, Firebase database integration, PDF summary generation.</p>
<br>
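<p style="color:black;text-align:justify;">Since the abstract names the ‘reportlab’ module for its PDF summaries, here is a minimal, generic sketch of that step in Python; the function, field names, and layout are assumptions for illustration and are not the app's actual code.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Minimal PDF summary sketch using the reportlab module (illustrative only).
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def write_summary(path, student, activities):
    pdf = canvas.Canvas(path, pagesize=A4)
    width, height = A4
    pdf.setFont("Helvetica-Bold", 16)
    pdf.drawString(50, height - 50, f"Volunteer summary for {student}")
    pdf.setFont("Helvetica", 11)
    y = height - 90
    for name, hours in activities:
        pdf.drawString(50, y, f"- {name}: {hours} h")
        y -= 18                      # move down one line per activity
    pdf.save()

write_summary("summary.pdf", "Alex", [("Beach cleanup", 3), ("Food bank", 5)])
</code></pre>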
<!--End acity--> <!--Start nlpta-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>LoRE: Logit-Ranked Retriever Ensemble for Enhancing Open-Domain Question Answering</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Saikrishna Sanniboina, Shiv Trivedi and Sreenidhi Vijayaraghavan, University of Illinois at Urbana-Champaign, USA</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Retrieval-based question answering systems often suffer from positional bias, leading to suboptimal answer generation. We propose LoRE (Logit-Ranked Retriever Ensemble), a novel approach that improves answer accuracy and relevance by mitigating positional bias. LoRE employs an ensemble of diverse retrievers, such as BM25 and sentence transformers with FAISS indexing. A key innovation is a logit-based answer ranking algorithm that combines the logit scores from a large language model (LLM) with the retrieval ranks of the passages. Experimental results on NarrativeQA and SQuAD demonstrate that LoRE significantly outperforms existing retrieval-based methods in terms of exact match and F1 scores. On SQuAD, LoRE achieves 14.5%, 22.83%, and 14.95% improvements over the baselines for ROUGE-L, EM, and F1, respectively. Qualitatively, LoRE generates more relevant and accurate answers, especially for complex queries.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Open-Domain Question Answering, Positional Bias, Sentence Transformers, Answer Ranking, Retrieval-Augmented Generation.</p>
<br>
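<p style="color:black;text-align:justify;">As a schematic illustration of the kind of fusion described above, the Python sketch below blends an LLM's logit score with a passage's ranks across an ensemble of retrievers; the reciprocal-rank formula, the weighting, and all values are assumptions for illustration, not LoRE's published algorithm.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Schematic fusion of retrieval ranks with an LLM score (illustrative only).
def fused_score(ranks, llm_logit, alpha=0.7, k=60):
    """ranks: this passage's ranks across the retrievers (1 is best).
    llm_logit: score the language model assigns to its answer from the passage."""
    rrf = sum(1.0 / (k + r) for r in ranks)   # reciprocal-rank fusion over retrievers
    return alpha * llm_logit + (1 - alpha) * rrf

candidates = {
    "p1": {"ranks": [1, 4], "llm_logit": 0.62},
    "p2": {"ranks": [3, 1], "llm_logit": 0.80},
}
best = max(candidates, key=lambda p: fused_score(**candidates[p]))
print(best)   # "p2": a strong LLM logit outweighs a slightly worse retrieval rank
</code></pre>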
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Assessing ESG Compliance and Impact: A Zero-shot Learning Approach to Analyzing Fortune 500 Companies’ Sustainability Reports</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Armaan Agrawal, Princeton Day School, Princeton, NJ, USA</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">In the evolving landscape of sustainable investing, environmental, social, and governance (ESG) metrics are crucial for evaluating companies beyond financial performance. Recognizing the growing importance of ESG to stakeholders, companies release annual sustainability reports outlining their ESG goals and progress. This paper analyzes how Fortune 500 companies integrate ESG considerations into their operations and reporting. We extract the text from the sustainability reports, separate it into sentences, classify the sentences into nineteen ESG subcategories using a zero-shot learning model, and compare the determined ESG focuses to actual data to evaluate the authenticity and effectiveness of these reports. This examination unveils the current state of ESG compliance among leading corporations and provides insights into the challenges and successes of implementing sustainable practices. More importantly, this research aims to facilitate the process of analyzing lengthy and complex sustainability reports by offering a scalable and flexible approach through the use of zero-shot learning. By streamlining the analysis of these reports, this research contributes to a better understanding of corporate ESG efforts and their impact on a sustainable future.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">ESG, NLP, Sustainability, Zero-Shot Learning.</p>
<br>
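<p style="color:black;text-align:justify;">The zero-shot sentence classification step described above can be sketched with the Hugging Face zero-shot pipeline as follows; the model choice and the abbreviated label list are assumptions for illustration (the paper itself uses nineteen ESG subcategories).</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Zero-shot classification of a sustainability-report sentence (illustrative only).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["greenhouse gas emissions", "employee diversity", "board governance",
          "water management", "community engagement"]   # subset for illustration

sentence = "We reduced Scope 1 and Scope 2 emissions by 18% against our 2019 baseline."
result = classifier(sentence, candidate_labels=labels, multi_label=True)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
</code></pre>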
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Multimodal Emotion Recognition in Text Using Advanced NLP and Deep Learning Techniques</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Lucas G. M. de Castro, Adriana L. Damian, and Celso B. Carvalho, Federal University of Amazonas, Brazil</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">This study focuses on developing a multimodal emotion recognition system for analyzing text, audio, and video data. We propose an advanced approach that integrates natural language processing and deep learning techniques, utilizing hierarchical attention mechanisms and cross-modal transformers to improve emotion detection accuracy. Our system achieved notable performance metrics, including a 90.8% accuracy and an 89.5% F1-score, surpassing existing state-of-the-art methods. These results demonstrate the system’s effectiveness in accurately identifying emotions and its potential application in enhancing human-computer interaction and sentiment analysis tools.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Multimodal Emotion Recognition, Natural Language Processing (NLP), Sentiment Analysis, Deep Learning, Hierarchical Attention Mechanisms, Audio-Visual Data Analysis.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Smart Stock Value Prediction Mobile Platform with Social Media Sentiment Analysis using Machine Learning and Natural Language Processing</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Jinmo Yan<sup>1</sup>, Garret Washburn<sup>2</sup>, <sup>1</sup>University of Pennsylvania, <sup>2</sup>Computer Science Department, California State Polytechnic University, USA</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">This paper presents an intelligent system that combines social media sentiment analysis with historical stock data to predict stock price movements [1]. Built as a mobile application using Flutter, the system integrates a Flask web server and a fine-tuned OpenAI model to process social sentiment and make real-time stock predictions [2]. Through experiments, we tested the system's ability to correlate sentiment with stock price fluctuations and volatility. The results showed that while sentiment analysis enhances prediction accuracy, it also introduces volatility. Key challenges include handling noisy social media data and over-reliance on sentiment. Future improvements involve refining the sentiment analysis model and incorporating additional market factors [3]. This system provides investors with a more dynamic tool to assess market trends based on real-time sentiment.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Stock Prediction, Social Media Sentiment, AI Fine-Tuning, Flutter Mobile Application, Market Volatility.</p>
<br>
<!--End nlpta--> <!--Start aiaa-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>An Efficient Sampling Framework for Graph Convolutional Network Training</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Abderaouf GACEM, Mohammed HADDAD, and Hamida SEBA, Univ Lyon, UCBL, CNRS, INSA Lyon, LIRIS, UMR5205</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Graph Convolutional Networks (GCNs) have recently gained significant attention due to the success of Convolutional Neural Networks in image and language processing, as well as the prevalence of data that can be represented as graphs. However, GCNs are limited by the size of the graphs they can handle and by the oversmoothing problem, which can be caused by the depth or the large receptive field of these networks. Various approaches have been proposed to address these limitations. One promising approach involves considering the minibatch training paradigm and extending it to graph-structured data by extracting subgraphs and using them as batches. Unlike the entries in a dataset of images, which are independent of one another, the essence of a graph lies in its topology, hence the dependency between its nodes. Consequently, the strategy of selecting subgraphs to form minibatches is a challenging task with a significant impact on the training process results. In this work, we propose a general framework for generating minibatches in an effective way that ensures minimal loss of node interdependence information, preserves the original graph properties, and diversifies the samples for the GCN to improve generalization. We test our training process on real-world datasets with several well-known GCN models and demonstrate the improved results compared to existing methods.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Graph Convolutional Networks, Graph Sampling, Minibatch Training.</p>
<br>
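<p style="color:black;text-align:justify;">To make the subgraph-as-minibatch idea concrete, the toy Python sketch below extracts an induced subgraph by expanding a few randomly chosen seed nodes; this plain neighbour expansion is only an illustration of minibatch sampling in general and is not the sampling framework proposed in the paper.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Toy subgraph minibatching for GCN training (illustrative only).
import random

def sample_subgraph(adj, seed_nodes, hops=2):
    """adj: dict mapping each node to a list of neighbours. Returns the node set
    obtained by expanding the seeds `hops` times; the induced edges form one batch."""
    nodes = set(seed_nodes)
    frontier = set(seed_nodes)
    for _ in range(hops):
        frontier = {n for u in frontier for n in adj[u]} - nodes
        nodes |= frontier
    edges = [(u, v) for u in nodes for v in adj[u] if v in nodes]
    return nodes, edges

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
seeds = random.sample(list(adj), 2)          # pick minibatch seeds at random
print(sample_subgraph(adj, seeds, hops=1))
</code></pre>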
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Real-time Indoor Air Quality Awareness: Lumigen's Integration of Visualization and Sensing Technologies</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Jiasheng Wang<sup>1</sup>, Yu Sun<sup>2</sup>, <sup>1</sup>Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, <sup>2</sup>Computer Science Department, California State Polytechnic University, Pomona, CA 91768</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Lumigen is an innovative air quality monitoring system designed to enhance indoor environmental awareness using real-time data visualization [1]. The system combines an air quality sensor connected to a Raspberry Pi with a set of Philips Hue lights that change color based on detected air quality levels [2]. This setup provides immediate visual feedback, alerting users to air quality changes without requiring them to check a separate device. Users can interact with Lumigen through a mobile app that facilitates real-time monitoring, historical data analysis, and customization of air quality alerts and light settings [3]. Experimental evaluations demonstrate that Lumigen effectively detects and responds to variations in air quality, with a rapid response time and high accuracy. Unlike other solutions that may require separate displays or offer limited data insights, Lumigen seamlessly integrates into everyday life, providing both visual and data-driven cues about air quality. Future developments could enhance its portability, integrate automated responses with air purifiers, and offer advanced data analytics to further empower users to manage their indoor environments proactively [4].</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Indoor Air Quality, Real-Time Data Visualization, Environmental Sensing, Smart Home Automation.</p>
<br>
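<p style="color:black;text-align:justify;">The visual-feedback loop described above can be pictured with a few lines of Python that map an air-quality reading onto a Philips Hue colour through the bridge's local REST API; the bridge address, API key, light id, and PM2.5 thresholds below are placeholders and assumptions, not values from the paper.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Map an air-quality reading to a Hue light colour (illustrative only).
import bisect
import requests

BRIDGE = "http://192.168.1.10/api/YOUR-HUE-API-KEY"   # placeholder Hue bridge + key
LIGHT_ID = 1
THRESHOLDS = [12, 35]                  # assumed PM2.5 boundaries: good / moderate / poor
HUES = [25500, 12750, 0]               # Hue colour angles: green, yellow, red

def show_air_quality(pm25):
    hue = HUES[bisect.bisect(THRESHOLDS, pm25)]        # pick the colour band
    state = {"on": True, "hue": hue, "sat": 254, "bri": 200}
    requests.put(f"{BRIDGE}/lights/{LIGHT_ID}/state", json=state, timeout=5)

show_air_quality(pm25=8.5)             # clean air: light turns green
</code></pre>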
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Is a Nipple Worse Than a Children's Massacre? Examining Gender and Content Biases in ChatGPT-4o</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Roberto Balestri, Dipartimento delle Arti, Università di Bologna, Italy</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">This study investigates ChatGPT-4o's multimodal content generation, highlighting significant disparities in its treatment of sexual content and nudity versus violent and drug-related themes. Detailed analysis reveals that ChatGPT-4o consistently censors sexual content and nudity, while showing leniency towards violence and drug use. Moreover, a pronounced gender bias emerges, with female-specific content facing stricter regulation compared to male-specific content. This disparity likely stems from media scrutiny and public backlash over past AI controversies, prompting tech companies to impose stringent guidelines on sensitive issues to protect their reputations. Our findings emphasize the urgent need for AI systems to uphold genuine ethical standards and accountability, transcending mere political correctness. This research contributes to the understanding of biases in AI-driven language and multimodal models, calling for more balanced and ethical content moderation practices.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Generative AI, ChatGPT-4o, Biases, Ethics, LLM.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>An Intelligent Mobile Application to Identify Factors Influencing Adolescent Mental Health Variability using Artificial Intelligence</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Richard Feng<sup>1</sup>, Soroush Mirzaee<sup>2</sup>, <sup>1</sup>Margaret's Episcopal School, 31641 La Novia Ave, San Juan Capistrano, CA 92675, <sup>2</sup>Computer Science Department, California State Polytechnic University, Pomona, CA 91768</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Artificial intelligence has shown promise in diagnosing mental illness in young children, a challenging task given the rise in teenagers struggling with mental health. We focus on the capabilities of machine learning and natural language processing models to accurately recognize activities that affect mental health in pre-teens and adolescents, an important step towards improving symptoms of depression and anxiety. We achieved an accuracy of 86.7% for determining sentiment from child journal entries with LSTM and BERT, and an MSE of 94.6 for predicting future mental health outcomes with neural networks. We develop an innovative solution that incorporates these models inside a mobile application as a scalable framework for data collection to track shifts in overall user wellbeing.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Artificial Intelligence, Sentiment analysis, Machine learning, Mental health.</p>
<br>
<!--End aiaa--> <!--Start dppr-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>A Transition Towards Virtual Representations of Visual Scenes</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Américo Pereira<sup>1, 2</sup>, Pedro Carvalho<sup>1, 3</sup>, and Luís Côrte-Real<sup>1, 2</sup>, <sup>1</sup>Centre for Telecommunications and Multimedia, INESC TEC, Porto, Portugal, <sup>2</sup>Faculty of Engineering, University of Porto, Porto, Portugal, <sup>3</sup>Polytechnic of Porto, School of Engineering, Porto, Portugal</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Visual scene understanding is a fundamental task in computer vision that aims to extract meaningful information from visual data. It traditionally involves disjoint and specialized algorithms for different tasks that are tailored for specific application scenarios. This can be cumbersome when designing complex systems that include processing of visual and semantic data extracted from visual scenes, which is even more noticeable nowadays with the influx of applications for virtual or augmented reality. When designing a system that employs automatic visual scene understanding to enable a precise and semantically coherent description of the underlying scene, which can be used to fuel a visualization component with 3D virtual synthesis, the lack of flexibility and unified frameworks becomes more prominent. To alleviate this issue and its inherent problems, we propose an architecture that addresses the challenges of visual scene understanding and description towards a 3D virtual synthesis that enables an adaptable, unified and coherent solution. Furthermore, we expose how our proposition can be of use in multiple application areas. Additionally, we present a proof-of-concept system that employs our architecture to further prove its usability in practice.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Visual Scene Understanding, Scene Understanding, 3D Reconstruction, Semantic Compression.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>An Easy-to-use Mobile Application for Classifying Intracranial Hemorrhages in User-Submitted MRI Brain Scans</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Yixuan Chu<sup>1</sup>, Jonathan Sahagun<sup>2</sup>, <sup>1</sup>Beijing National Day School, No.66 Yuquan Lu, Haidian District, Beijing, China, <sup>2</sup>Computer Science Department, California State Polytechnic University, Pomona</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">As MRI imaging technology advances, the utilization of this technology ultimately remains in the hands of the doctors who give the analysis to the patient. While this is comforting to some, the fact remains that the human element in making decisions to pursue treatment may be faulty. However, a second opinion, especially that of an AI model trained to identify these hemorrhages, would go a long way in reaffirming or second-guessing a medical doctor's analysis. The purpose of this research paper is to outline the development process and experimentation of the MindScan Pro mobile application, an app designed to provide AI imaging analysis of MRI brain scan images [2]. To construct this app, a few key technologies were used, namely the Flutter framework for the app structure, a DenseNet AI model trained specifically on MRI scans, and a Python Flask server used to host the model backend. As described in the paper, multiple experiments were performed to test the accuracy and reliability of the AI model backend server; both experiments found positive results, showing that the AI model was accurate and consistent in its response times. The MindScan Pro mobile application is a solution that will undoubtedly become popular, as it provides a quick and easy way for doctors and patients alike to double-check hemorrhage diagnoses and provides peace of mind.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">AI, MRI, Intracranial Hemorrhage, Classification, Mobile Application.</p>
<br>
<!--End dppr--> <!--Start CNDC-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Wireless Communications</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Nikitha Merilena Jonnada, PhD in Information Technology (Information Security Emphasis), University of the Cumberlands, Williamsburg, Kentucky, USA</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">In this paper, the author discusses the rise of wireless communications, whether they are secure and safe, the future of the wireless industry, wireless communication security, protection methods and techniques that can help organizations establish secure wireless connections with their employees, and other factors that are important to learn and note when manufacturing, selling, or using wireless networks and wireless communication systems.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Wireless, Network, Security, Hackers, VPN, IP address.</p>
<br>
<!--End CNDC--> <!--Start dsa-->
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Scalable Action Mining Modified Hybrid Method using Threshold Rho with Meta Actions and Information Granules for Enhanced User Emotions in the Education and Business Domains</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Angelina Tzacheva, University of North Carolina Charlotte, United States of America</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Action Rules are rule-based systems that extract actionable patterns hidden in large volumes of data. Huge amounts of data are generated every day by the education sector, the business field, the medical domain, and social media. In the technological world of big data, massive amounts of data are collected by organizations, including in major domains like finance, medicine, social media, and the Internet of Things (IoT). Mining these data can provide many meaningful insights on how to improve user experience in multiple domains. Users need recommendations on actions they can undertake to increase their profit or accomplish their goals; these recommendations are provided by actionable patterns. For example: how to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. Traditional Action Rule extraction models, which analyze the data in a non-distributed fashion, do not perform well when dealing with larger datasets. In this work we concentrate on a vertical data splitting strategy that uses information granules, making the data partitioning more logical instead of splitting the data randomly, and generating meta actions after the vertical split. Information granules are the basic entities in the world of Granular Computing (GrC), representing meaningful smaller units derived from a larger, complex information system. We introduce a Modified Hybrid Action Rule method with Partition Threshold Rho. The Modified Hybrid Action Rule mining approach combines both of these frameworks and generates a complete set of Action Rules, which further improves computational performance on large datasets.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Emotion Detection, Meta Actions, Information Granules.</p>
<br>
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Exploring Soft Skill Indicators in Multiple Mini Interviews (MMI) Scenario Responses</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Ryan Huynh, University of Surrey, United Kingdom</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Multiple mini-interviews (MMIs) are a widely used and validated interview method for eliciting soft skills. By using multiple, separate, and timed interviews in which each has a distinct scenario, MMIs purportedly reduce possibilities such as a biased individual dictating results, although potentially inconsistent scoring by interviewers may still impact fairness. However, MMIs overall can be seen as challenging to run due to the number of interviewers and assessments required. In this paper, we discuss the progress in automatically, and consistently, extracting soft skills from transcriptions of MMI responses to support such assessment. While previous research has focused on extracting soft skills from job postings and written responses, to the best of our knowledge there is no other published research on soft skill extraction from MMI responses. We begin by annotating the data to ensure the presence of soft skills, then evaluate the effectiveness of combining word embeddings with classifiers to identify soft skill indicators. The most promising result, F1-Score = 0.79, compares favourably to previous literature on extracting soft skills from other datasets and encourages further exploration.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Soft Skills Extraction, Multiple Mini Interviews, MMI, Word2Vec, BERT.</p>
<br>
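<p style="color:black;text-align:justify;">The "word embeddings plus classifier" setup described above can be sketched as follows, here with a sentence-transformer encoder and logistic regression; the model name, the toy responses, and the single "teamwork" label are assumptions for illustration only, not the paper's data or pipeline.</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Embeddings + classifier for soft-skill indicators (illustrative only).
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

responses = ["I coordinated the group and made sure everyone was heard.",
             "I memorised the periodic table for the test.",
             "We split the tasks and I checked in with my teammates daily.",
             "I prefer to finish assignments on my own."]
has_teamwork = [1, 0, 1, 0]            # annotated soft-skill indicator

clf = LogisticRegression().fit(encoder.encode(responses), has_teamwork)
new = ["I organised weekly stand-ups so the whole team stayed aligned."]
print(clf.predict(encoder.encode(new)))   # expected: [1]
</code></pre>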
<h6 style="color:black;font-family:classic wide,sans-serif;font-size:20px"><b>Uncovering Greenwashing: A Study on Corporate Sustainability Reports and Public Sentiment</b></h6>
<p style="color:black;text-align:justify;font-size: 15px;">Rania Mokni, Technical University of Berlin, Germany</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>ABSTRACT</b></h6>
<p style="color:black;text-align:justify;">Greenwashing is an increasingly widespread problem in society. It involves deceptive practices where firms provide misleading information about their sustainability efforts in corporate communications and sustainability reports (SRs). Due to deceptive strategies and the knowledge disparity between companies and the public, it is difficult to detect. Companies can present either genuine or misleading statements about their efforts in their SRs. The problem of greenwashing arises when there is a lack of consistency between what companies publicly declare about their environmental initiatives and their actual environmental practices. Negative public opinion might be an incentive for companies to obscure certain statements in an attempt to counteract negative headlines. In this study, we investigate the link between a company's public image and its SR. The aim is to identify whether firms with negative reputations are more prone to greenwashing and use their sustainability reports to enhance their image. Analysing the top 60 companies from 17 different industries in Germany, we explore the connection between linguistic text features in their SRs and the sentiment towards them in the media. For each of the 60 companies, the reports from 2020 and 2021 were collected, resulting in a total of 120 reports. We use the Linguistic Inquiry and Word Count (LIWC) software for a linguistic analysis of content, conduct a sentiment analysis, and examine diversity features regarding uniqueness and repetitiveness in language. To determine whether public sentiment in the news influences the linguistic characteristics of the SRs issued in the corresponding year, we conduct year-wise significance tests, the t-test and the Mann-Whitney U test, to compare the linguistic features. The paper reveals a lack of correlation between the public perception of companies and the linguistic features in their sustainability reports. A single significant difference was found, indicating that companies with negative public sentiment use negations more frequently than companies with predominantly positive public sentiment, suggesting possible greenwashing. This research shows that while detecting signs of greenwashing is possible, it remains a challenging task.</p>
<h6 style="color:black;font-family:classic wide,sans-serif;"><b>KEYWORDS</b></h6>
<p style="color:black;text-align:justify">Greenwashing, Sentiment Analysis, Sustainability Reports, Natural Language Processing, Public Perception.</p>
<br>
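<p style="color:black;text-align:justify;">The year-wise significance testing described above boils down to comparing a linguistic feature between two groups of reports; the Python sketch below shows that step with SciPy, using invented negation-rate values purely for illustration (they are not the study's data).</p>
<pre style="color:black;background-color:#f5f5f5;padding:12px;overflow-x:auto;"><code># Comparing a linguistic feature between sentiment groups (illustrative values only).
from scipy.stats import ttest_ind, mannwhitneyu

negative_sentiment = [4.1, 3.8, 5.0, 4.6, 4.9, 3.9]   # negation rate, negative-press firms
positive_sentiment = [3.1, 2.9, 3.6, 3.3, 2.7, 3.4]   # negation rate, positive-press firms

t_stat, t_p = ttest_ind(negative_sentiment, positive_sentiment, equal_var=False)
u_stat, u_p = mannwhitneyu(negative_sentiment, positive_sentiment,
                           alternative="two-sided")

print(f"Welch t-test: p = {t_p:.4f}")
print(f"Mann-Whitney U: p = {u_p:.4f}")
</code></pre>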
<!--End dsa--> </div> </div> </div> </div> </div> </section>
<div class="fixed-action-btn"> <a id="menu" class="btn btn-floating cyan lighten-2 waves-effect waves-light pulse" onmouseover="$('.tap-target').tapTarget('open')"> <i class="material-icons white-text">menu</i> </a> </div>
<div class="tap-target-wrapper right-align"> <div class="tap-target cyan" data-activates="menu"> <div class="tap-target-content white-text"> <h5>Reach Us</h5> <br> <i class="material-icons right">email</i>acity@acity2024.org <br> <br> <br> <i class="material-icons right">email</i>acityconff@yahoo.com <br> <br> </div> </div> <div class="tap-target-wave "> <a class="btn-floating cyan tap-target-origin waves-effect waves-light" onmousewheel="$('.tap-target').tapTarget('close')"> <i class="material-icons cyan">close</i> </a> </div> </div>
<!-- Dummy Div-->
<div id="txtcnt"></div>
<!-- Section: Footer -->
<footer class="page-footer cyan lighten-3"> <div class="container"> <div class="row"> <div class="footer-m col m3 s12 offset-m2"> <ul> <li> <a class="white-text" href="contact">Contact</a> </li> <li> <a style="color: #e6dbdb;" href="mailto:acity@acity2024.org"><b>acity@acity2024.org</b></a> </li> </ul> </div> <div class="social col m4 offset-m3 s12"> <ul> <li> <a class="blue-text text-darken-4" href="https://www.facebook.com/AIRCCPC" target="_blank"> <i class="fab fa-facebook"> </i> </a> </li> <li> <a class="cyan-text " href="https://twitter.com/AIRCCFP" target="_blank"> <i class="fab fa-twitter"></i> </a> </li> <li> <a class="red-text text-darken-4" href="https://youtu.be/bk3V2rkKc_c" target="_blank"> <i class="fab fa-youtube"></i> </a> </li> </ul> </div> </div> </div> <div class="footer-copyright grey darken-2"> <div class="container center-align"> <span class="white-text"> All Rights Reserved © ACITY 2024 </span> </div> </div> </footer>
<!--Import jQuery before materialize.js-->
<script type="text/javascript" src="https://code.jquery.com/jquery-3.2.1.min.js"></script> <script type="text/javascript" src="js/materialize.min.js"></script> <script src="js/scrolltop.js"></script> <script src="js/main.jquery.js"></script> </body> </html>